The Building Blocks of DX: K8s Evolution from CLI to GitOps

In recent years, Kubernetes has become the default container orchestration framework, setting the standard for application deployment in distributed architectures. Wider adoption of the tool diversified its end-user base, and a consistent DX for cluster interaction became essential for Kubernetes. The community channeled herculean efforts into enhancing the developer experience by extending the cluster CLI, building portals, and crafting highly responsive UIs.

This blog post will focus on the cluster DX chronicles, showcasing tools that contributed to the wider adoption of Kubernetes. An emphasis will be placed on the cluster CLI and how it can be extended using kubectl plugins and wrappers. This will be followed by an introduction to widespread cluster state managers, covering mechanisms such as GitOps, ClickOps, and even SheetOps.

Cluster CLI

Out of the box, Kubernetes provides a multi-utility tool to query and manage the cluster state. This is the cluster CLI, known as kubectl and governed by SIG CLI. Kubectl offers a wide range of operations (40+ verbs) and more than 70 flags for quick and tailored application provisioning.

The cluster CLI bridges the local terminal to a running cluster by using a kubeconfig file. This file contains the details of the API server endpoint and essential authentication data (e.g. certificates or tokens). Typically, kubectl will refer to the config file in the ~/.kube directory; however, the user can set the context by using the --kubeconfig flag or by setting the KUBECONFIG environment variable.
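For illustration, the three lookup mechanisms can be exercised as below (the staging kubeconfig path is a hypothetical placeholder):

```
# Default: kubectl reads the config file from ~/.kube
$ kubectl config current-context

# One-off override via flag
$ kubectl --kubeconfig=/tmp/staging.kubeconfig get nodes

# Session-wide override via environment variable
$ export KUBECONFIG=/tmp/staging.kubeconfig
$ kubectl config get-contexts
```

The flag takes precedence over the environment variable, which in turn takes precedence over the default path.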

Imperative vs declarative resource management

Additionally, kubectl is universally known for covering both declarative and imperative management techniques. With imperative object configuration, the user operates on live resources in the cluster, gravitating towards 1-command application deployment. While the imperative approach is easy and straightforward to assimilate, it is intended for development phases only, as the configuration source is not stored and no audit trail is available.

On the other hand, the declarative approach operates on configuration files stored locally and provides a mechanism for change retention. In contrast with 1-command deploys, the declarative approach is better suited for production environments, as it requires a thorough understanding of an object's nested configuration.
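As a minimal sketch of the declarative flow, assume a locally stored manifest (the file name and contents are illustrative); the file can then be versioned in git for change retention:

```
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

$ kubectl apply -f deployment.yaml
```

Subsequent edits to the file are rolled out with the same kubectl apply command, which diffs the desired state against the live objects.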

DX Enhancement

Kubectl provides a great platform to explore the cluster assets, and in real-world scenarios engineers with different levels of expertise and exposure to Kubernetes will interact with it. While it is the native mechanism to query the cluster, it is commonly, and wrongly, assumed that all engineers are fluent in operating the cluster CLI. At this stage, a clear path was outlined for the community: focus on extending and enriching the cluster DX, making it inclusive to engineers of all levels.

The next sections will highlight the techniques to enhance the cluster DX by using kubectl plugins and wrappers.

As mentioned, kubectl is a highly versatile tool that enables the deployment of an application via 1 command, e.g.:

kubectl run pod --image=nginx --restart=Never \
  --requests=cpu=0.5,memory=128Mi \
  --labels=foo=bar \
  --serviceaccount=test-sa

Troubleshooting and debugging an application can result in equally fierce and lengthy commands. Fortunately, kubectl has a built-in process for writing custom actions, aka kubectl plugins, which conceptualize the building blocks of the cluster CLI.

Generally, kubectl plugins are standalone executables written in any language. To be discovered and integrated with the CLI, the plugins should comply with the following requirements:

  • the binary name should be prefixed with kubectl-
  • the binary should be present in one of the directories listed in the $PATH environment variable

For more details on naming conventions, refer to the official documentation.

Let’s take a closer look at how a typical plugin is constructed. In the following example, the plugin action is a simple echo command (note the binary path):

$ cat /usr/local/bin/kubectl-hello-cloudnative
#!/bin/bash
echo "Hello Cloud Native eParty!"

To invoke the plugin, the command below can be used:

$ kubectl hello cloudnative
Hello Cloud Native eParty!

Easy, wasn’t it?
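Putting the two discovery requirements together, the example above can be sketched end to end as a script (the temporary directory is used purely for illustration):

```shell
#!/bin/bash
set -e

# A directory for plugins, added to $PATH so kubectl can discover them
mkdir -p /tmp/kubectl-plugins
export PATH="/tmp/kubectl-plugins:$PATH"

# The kubectl- prefix makes the binary discoverable; the remaining
# dashes map to subcommand words, i.e. `kubectl hello cloudnative`
cat > /tmp/kubectl-plugins/kubectl-hello-cloudnative <<'EOF'
#!/bin/bash
echo "Hello Cloud Native eParty!"
EOF
chmod +x /tmp/kubectl-plugins/kubectl-hello-cloudnative

# The plugin is a plain executable, so it can also be run directly
kubectl-hello-cloudnative
```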

Over time, the community has discerned patterns and reusable kubectl plugins. A collection of open-source add-ons has been curated, indexed, and distributed using Krew. In a nutshell, Krew is a plugin manager that resides under the SIG CLI umbrella and offers a range of more than 90 custom commands.
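Assuming Krew itself is already installed, discovering and installing a plugin is a short session; ctx, a popular context switcher, is used here as an example:

```
$ kubectl krew search ctx
$ kubectl krew install ctx
$ kubectl ctx
```

Once installed, the plugin behaves like any other kubectl subcommand.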

Overall, the introduction of plugins facilitated the construction of a tailored cluster DX, with a widespread practice of combining kubectl aliases and plugins. While existing kubectl actions cannot be overwritten, a wide spectrum of pre-built extensions is just one Krew install away.

So far, kubectl has been outlined as a utility tool with an abundance of flags for highly customized deployments. This formed the perfect basis to further optimize cluster visualization. Henceforth, multiple portals were built to wrap the kubectl get commands, providing a graphical representation of the cluster state and refining the troubleshooting and debugging paths.

Some of the widely used tools for illustrating the Kubernetes state are:

  • Octant — open-source developer-centric web interface
  • k9s — terminal based UI to query Kubernetes clusters
  • Lens — Kubernetes IDE that can be installed as a standalone application
  • Spekt8 — view-only portal for visualising the networking topology of the deployed applications

ApplicationOps

At this stage, kubectl building blocks were used to extend and visualize the user journey within a cluster. The next organic step in the DX optimization pipeline is the abstraction of operations for application deployment.

In the current ecosystem, this translates into the introduction of configuration managers such as Helm or Kustomize, and application operation mechanisms such as ClickOps, GitOps, and the newly emerged SheetOps (imagine the opportunities!).

Helm and Kustomize are the industry leaders in handling the abstraction of application-level configuration. In essence, these tools have fundamentally different implementation strategies, but both provide a solution for templating Kubernetes resources and per-environment segregation of input parameters (Helm via values.yaml files and Kustomize via overlays).

The output from the configuration managers is a collection of YAML manifests, ready to be deployed to the clusters. Instead of handling the manifest propagation manually via the command line, a variety of tools were built to handle ApplicationOps, e.g. ClickOps, GitOps, and SheetOps.
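As an illustrative sketch (the chart, release, and directory names are hypothetical), both tools can render their templated input down to plain YAML, or deploy per environment directly:

```
# Helm: one chart, a values file per environment
$ helm template my-app ./my-app-chart -f values-production.yaml
$ helm install my-app ./my-app-chart -f values-production.yaml

# Kustomize: a shared base plus per-environment overlays
$ kustomize build overlays/production
$ kubectl apply -k overlays/production
```

The rendered manifests are what the ApplicationOps mechanisms below ultimately propagate to the cluster.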

In the next paragraphs, the ApplicationOps techniques will be described in more detail.

ClickOps reduces application deployment to a collection of clicks through a myriad of menu settings, and it is often supplied as a service by the public cloud providers. While it equips users with a powerful DX, ClickOps is tightly coupled with the implemented abstraction layer.

A balanced deployment experience is composed of a reasonable number of parameters to configure and a thorough understanding of how to reach and debug the application. However, in a ClickOps-driven ecosystem, actions such as reusing configuration and performing rollbacks are challenging to implement.

On the other hand, GitOps usage has lately rocketed in the end-user community. GitOps is an application delivery mechanism that uses git repositories as the representation of the desired application state. This conveys that the delta between the IDE and the cluster deployment is just one PR away. GitOps is associated with automatic reconciliation of data, meaning that it is a pull-based system, always watching for new commits.

Popular implementations of GitOps are showcased by Flux (CNCF sandbox project) and ArgoCD (CNCF incubation project).
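For a flavour of the model, an ArgoCD Application resource points the cluster at a git repository and keeps it reconciled; the repository URL, path, and namespace below are placeholders:

```
$ cat application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}
```

With the automated sync policy, every commit to the tracked branch is pulled and applied without a manual kubectl step.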

And lastly, the newest addition to the ApplicationOps circle is SheetOps, which is dedicated to the mission of replacing YAML with spreadsheets. As expected, this technique aims to control the cluster state using Google spreadsheets, and at the moment it only administers the replica count for an application. However, the project is looking for sponsorship to extend its functionality.

If SheetOps is a concept close to your heart, now is the time to act!

Conclusion

Throughout the Kubernetes evolution, the community engine was fuelled by initiatives to further enrich and simplify the user journey in a cluster. Fundamentally, this was achievable because the Kubernetes APIs provide a declarative configuration schema for the system, making the implementation of ClickOps, GitOps, and even SheetOps possible.

The cluster interaction chronicles are constantly expanding, so far covering kubectl plugins, UIs that showcase the operational state of the cluster, and ApplicationOps techniques. However, the journey is far from complete, as all of these mechanisms are just the building blocks of future deployment primitives.

Sailing open-source tooling and supporting the community as an Ecosystem Advocate @CNCF
