Interoperability of open-source tools: the emergence of interfaces

Katie Gamanji
6 min read · Mar 8, 2020


In recent years, Kubernetes has been the nucleus of container orchestration frameworks. Numerous tools have been developed to extend Kubernetes and enhance its features. Over time, tools with similar functionality ended up with fundamentally different implementations and practices for integrating with Kubernetes components, so the emergence of shared standards and a set of best practices became imperative.

This blog post will focus on the evolution of interfaces within the Kubernetes landscape, covering networking, storage, service mesh and cluster provisioning. Emphasis will also be placed on why the interoperability of open-source tools is pivotal in modern infrastructure.

The Theoretical Past

Cloud Native Landscape

Kubernetes is widely known for its adaptability and flexibility in running containerized workloads with pre-defined technical requirements. One of its goals is to efficiently supply the ecosystem crucial for application execution while reducing its footprint in the cluster. To reach this state of the art, complex challenges needed solving, such as support for different hardware and networking systems. Hence, plugins for container runtimes and network interfaces were developed to smooth the transition between VM-native workloads and containerized applications.

Interface adoption dates back to the early stages of Kubernetes' evolution. For example, the Container Runtime Interface (CRI) was introduced as an alpha release in Kubernetes v1.5, while the Container Network Interface (CNI) was announced as a CNCF-hosted project in 2017. These projects define a solid set of guidelines to enhance the portability and connectivity of containerized applications.

CRI

In the beginning, Docker was the default supported container runtime, which created tight coupling between kubelet and Docker releases. At the time, Docker and rkt logic were deeply ingrained within the kubelet source code. To increase the velocity of change and allow new container runtimes to be integrated, these components had to be segregated.

The Container Runtime Interface (CRI) was developed to provide an abstraction layer over the integration of container runtimes, with Docker becoming just one of the supported runtime plugins. This was an important step for Kubernetes extensibility and its acclimatization to the rapidly growing container runtime space. Currently, widely used container runtimes include Docker, containerd, CRI-O, gVisor and AWS Firecracker.
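To make this concrete, switching runtimes is now largely a configuration concern rather than a code change. The sketch below, assuming a kubeadm-bootstrapped node and a default containerd socket path (both assumptions that vary per installation), points the kubelet at containerd's CRI endpoint instead of the built-in Docker integration:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # Path of the CRI socket the kubelet should talk to (installation-specific).
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    # Use a remote CRI runtime rather than the legacy in-tree Docker support.
    container-runtime: remote
    container-runtime-endpoint: unix:///run/containerd/containerd.sock
```

The same manifest shape works for any CRI-compliant runtime; only the socket path changes.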

CNI

Approaching networking in a Kubernetes environment is a rather challenging task. At its core, Kubernetes enables the execution of workloads on a group of computing resources while preserving the connectivity and reachability of those workloads. As a result, the Kubernetes networking model is strongly prescriptive, revolving around the principle that “every pod is allocated a unique IP address”.

This core principle dismisses the difficult task of dynamic port allocation. At the same time, it brings to light new challenges: specifically, how to create a shared flat network and ensure that containers can still reach other workloads and physical machines.

Out of the box, this networking model is provided by kubenet, a basic network plugin for Kubernetes. However, kubenet does not support the configuration of advanced features and has a plethora of technical limitations, even when used alongside a cloud provider (e.g. for AWS there is a limit of 50 EC2 instances per cluster).

Taking the above into consideration, and based on a CoreOS proposal, the Container Network Interface (CNI) was adopted under the CNCF umbrella. The CNI project aims to establish a set of instructions for developing plugins that “configure network interfaces within Linux containers”. It gravitates towards a solution for the reachability and connectivity of containers, ensuring IP assignment for each pod and the clean-up of allocated resources when the pod is deleted. Popular CNI solutions include Flannel, Calico, Weave Net and Cilium, providing support for overlay networks, routing protocols, or both.
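To illustrate the contract, a CNI plugin is selected and parameterized through a JSON configuration file placed on each node (conventionally under /etc/cni/net.d/), and the runtime invokes the named plugin binary with this configuration whenever a pod is created or deleted. Below is a minimal sketch using the reference bridge and host-local IPAM plugins; the network name, bridge name and subnet are illustrative assumptions:

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

Here, type selects the plugin binary to execute, while the ipam section delegates the per-pod IP assignment described above to the host-local allocator.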

The Innovation Wave


The runtime and network specifications pioneered the use of interfaces within the Kubernetes landscape. Increasingly, the community concerns itself with the extensibility of Kubernetes and the advocacy of pluggable components, rather than promoting opinionated technical decisions and settling on a subset of products.

Within the next innovation wave, solid baselines were established for the implementation of storage, service mesh and cluster provisioning tools. These are described in more detail in the following sections.

CSI

As of Kubernetes v1.13, the Container Storage Interface (CSI) was promoted to GA, gaining stability and significant interest from the community. Prior to CSI, volume plugins were embedded within the Kubernetes codebase. As a result, integrating new third-party storage systems was complex and highly dependent on the Kubernetes release cycle. The CSI project was introduced to address these issues in an out-of-tree manner, promoting a pluggable interface for volume providers and the independent development of storage systems.

CSI drivers expose container workloads to a variety of block and file storage systems across container orchestrator ecosystems. At the moment, more than 60 volume drivers are actively maintained, including StorageOS, OpenEBS, Ceph RBD and many more.
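From an operator's perspective, a CSI driver surfaces in the cluster as a named provisioner referenced by a StorageClass. Below is a minimal sketch, assuming the Ceph RBD CSI driver is installed; the class name, cluster ID and pool are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-fast
# The driver name registered by the CSI plugin at deploy time.
provisioner: rbd.csi.ceph.com
parameters:
  # Driver-specific parameters; both values below are illustrative assumptions.
  clusterID: ceph-cluster-1
  pool: kubernetes
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Any PersistentVolumeClaim that references storageClassName: rbd-fast is then provisioned dynamically through the driver, with no in-tree Kubernetes code involved.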

SMI

In May 2019, at KubeCon EU in Barcelona, Microsoft announced the development of the Service Mesh Interface (SMI) as a solution to democratize service mesh on Kubernetes. SMI defines a set of declarative APIs and specifications that facilitate the implementation of various service mesh providers.

At its core, SMI covers three areas: traffic policy, telemetry and traffic management. These focus on the configuration of:

  • traffic structure on a per-protocol basis and fine-grained access policies across services
  • common traffic metrics to be captured and exposed
  • traffic administration on a percentage basis across services (e.g. to facilitate canary rollouts)

At this stage, SMI has three service mesh implementations: HashiCorp’s Consul, Linkerd and Istio. Additionally, SMI can be used by invoking its APIs directly.
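As an example of the traffic management API, the sketch below uses a TrafficSplit resource to shift 10% of requests towards a canary release. The service names are illustrative assumptions, with two deployed versions standing behind a root website service:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: website-canary
spec:
  # The root service that clients address.
  service: website
  # Weighted distribution across the backing services.
  backends:
  - service: website-v1
    weight: 90
  - service: website-v2
    weight: 10
```

The participating service mesh watches these resources and reconfigures its data plane accordingly, so the same manifest is portable across SMI-compliant meshes.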

ClusterAPI

Over time, multiple tools emerged within the ecosystem, providing bootstrap capabilities for Kubernetes clusters hosted on various infrastructure providers (e.g. AWS, GCP, Azure, OpenStack). kubeadm, Tectonic Installer, kops and Kubespray are just a few of the tools widely used within the community. However, it is difficult to find a common denominator across these tools when it comes to supported cloud providers.

As a result, ClusterAPI was announced in April 2019, providing a set of libraries for cluster creation, configuration, management and deletion. It aims to expose a unified and sustainable interface for cluster initialization on-prem and on supported cloud providers. ClusterAPI is currently in its v1alpha2 release and integrates with 12 major infrastructure providers.
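With ClusterAPI, a cluster itself becomes a declarative Kubernetes resource. A minimal sketch follows, assuming the v1alpha2 API and the AWS infrastructure provider; the names and CIDR block are illustrative:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # Provider-specific details are delegated to a separate infrastructure resource.
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: demo-cluster
```

Controllers reconcile these objects against the target provider, so creating, upgrading and deleting clusters follows the same declarative workflow as any other Kubernetes resource.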

A Perspective on the Future

Impact of the emergence of interfaces

Within its six years of existence, Kubernetes has transformed and redefined its identity in multiple stages. In a recurring manner, the out-of-tree approach has been used to migrate independent, stable components out of the core, further extending the orchestrator framework's canvas. At the same time, the footprint of the core binaries is shrinking, shaping the Kubernetes identity as we will know it in the future.

Remaining non-opinionated about the adoption of specific technologies and about how its primitive resources are distributed has been a main axis of Kubernetes' evolution. Additionally, the proliferation of solutions from multiple vendors played an instrumental role in the emergence of interfaces and served as the engine for further development and innovation.

For existing and new vendors, the advocacy of interfaces remains a driver for rapid innovation and healthy competition. It is no longer necessary to implement fragmented, per-product integration techniques; the focus is placed solely on the development of new features, delivering value to consumers with maximum efficiency.

From the end-user perspective, composability within container framework systems translates into a higher velocity of change with minor technical compromises. Likewise, it is more straightforward to benchmark available tools and interoperate between them, advancing a well-grounded, consumer-first strategy.

The adoption of interfaces across various domains was pivotal in the Kubernetes evolution cycle. Reducing implementation fragmentation through standardization started the technical metamorphosis that further empowered end-users and their infrastructure.

Interoperability of open-source tools is one of the key characteristics of Kubernetes development, as Kubernetes progressively becomes the base container orchestration framework that anchors extensibility and innovation.

Written by Katie Gamanji

Sailing open-source tooling and supporting the community as a Senior Kubernetes Field Engineer @Apple
