Kubernetes and YAML: A DevOps Primer

 

Kubernetes makes DevOps easier

One of the core principles of DevOps is that the team responsible for writing code owns that code through its entire lifecycle: development, QA, deployment, operations. The team writes and tests the code. The team does whatever is necessary (manually or with scripts) to deploy that code to test, then staging, and finally production. The team sets up monitoring for their code, handles pager alerts in the middle of the night, triages and debugs problems, and folds solutions into the roadmap and codebase.

The challenge with DevOps is that each and every team needs to have expertise across the entire application lifecycle. Developers aren’t necessarily well-versed in operations or security. Operations and security aren’t developers. And rarely do you have enough operations/security people to assign one to every development team. Instead, you need everyone to learn something new, and you need to embrace tools that embody and automate whatever DevOps tasks they can.

Kubernetes does just that. It grew out of Google's experience running containers at scale for years. It embodies best practices and accelerates every phase of the application lifecycle, and in doing so meets the goal of DevOps: accelerated software delivery. No wonder so many developers and organizations across the world are embracing Kubernetes.

Kubernetes' secret sauce: Desired State (a.k.a. YAML)

The secret sauce to much of the acceleration that Kubernetes provides comes from the fact that developers/operators do something now that they didn’t do in the old world: they write YAML that describes what their code expects, how it should be configured, where it should be deployed, and how it should be run. In the old world, that’s information that would be written in a wiki or docs or just known by the people deploying the software. But Kubernetes requires you to write down that information in a form machines can understand so that the machine can do what it needs to.

For example, imagine that you're a developer who has built a shopping cart application. After you've written your code and packaged it into a container image, all you need to do to run your code is give Kubernetes the following YAML file. It tells Kubernetes to run two replicas of version 1.7.9 of the shoppingcart container, each responding to requests on port 443.

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shoppingcart-deployment
spec:
  replicas: 2 # tells the Deployment to run 2 pods matching the template
  selector:
    matchLabels:
      app: shoppingcart # apps/v1 requires a selector matching the pod template's labels
  template:
    metadata:
      labels:
        app: shoppingcart
    spec:
      containers:
      - name: shoppingcart
        image: shoppingcart:1.7.9
        ports:
        - containerPort: 443


Now let’s walk through each step of the application lifecycle and see how Kubernetes makes your life easier as a developer.  

 

Development

One of the age-old problems in software development is "it works on my machine." A developer writes some code and everything seems to be working fine, but when another developer runs it, or it gets deployed to production, something breaks. Kubernetes addresses this problem with containers: a way of packaging code together with its dependencies so that its behavior is reproducible. Containerized code runs the same way on every machine. This doesn't have anything to do with Kubernetes' secret sauce (YAML), but it is what makes YAML practical.

 

Containerized Code for Kubernetes

 

QA

Thorough testing of code can be challenging because it means running application components that were written by different people, with different configuration options, interacting with third-party systems. With Kubernetes, you configure each component of an application in a YAML file with a well-defined format. Developers often check those YAML files into source control, just like their code. Just as containers make the code itself reproducible, YAML makes the configuration of the code, and how components are integrated, reproducible. So now the application as a whole is reproducible, which accelerates QA because reproducibility is one of its cornerstones.
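For example, a component's runtime configuration can live in a ConfigMap that is checked into source control alongside the code. The sketch below is illustrative; the name and keys are assumptions, not part of the shopping cart example above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: shoppingcart-config # hypothetical name
data:
  CURRENCY: USD               # illustrative setting, not from the original example
  CART_TIMEOUT_SECONDS: "300" # illustrative setting

A pod can then consume these values as environment variables (for example, via envFrom), so the exact configuration exercised in QA is the same one reviewed in source control.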

 

Deployment

Deploying an application in the old days meant assembling all the siloed tools your organization invested in across compute, networking, and storage to meet the demands of your application. Maybe you chose Dell + VMware for compute, Cisco ACI for networking, and EMC for storage; you reconfigured all of those to run your new application, to use the appropriate storage, and to route traffic correctly. In the new world, Kubernetes provides a unified interface for all of that. There are times when you need to understand the configuration options for, say, a load balancer, but you never need to worry about the configuration file format, where that file is stored, or the like. Deploying an application with Kubernetes amounts to simply telling Kubernetes: here are the YAML files describing what my application needs; make it so.

For example, the Ingress resource below configures the network to route requests for hooli.com/shop to your application.

 

 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: hooli.com
    http:
      paths:
      - path: /shop
        pathType: Prefix
        backend:
          service:
            name: cart
            port:
              number: 443
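The Ingress above sends traffic to a Service named cart. A minimal sketch of such a Service, assuming the shopping cart pods carry the label app: shoppingcart:

apiVersion: v1
kind: Service
metadata:
  name: cart
spec:
  selector:
    app: shoppingcart # assumption: the Deployment's pod template uses this label
  ports:
  - port: 443         # port the Ingress targets
    targetPort: 443   # port the container listens on

The Service gives the shopping cart pods a stable name and virtual IP, so the Ingress never needs to know which pods, or which servers, are serving requests.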


Operations

One of the keys to operating an application is monitoring: making sure that every application component is healthy. Kubernetes does this automatically for you. In fact, Kubernetes will not only monitor the application, it will restart unhealthy application components. The key to this is the YAML files. The YAML files tell Kubernetes how to detect that an application component has started correctly and whether the component is still running correctly. Those YAML files also tell Kubernetes how many replicas of each component should be running. Kubernetes uses all of that information to automate basic monitoring and restarts so you don't need to. If a server dies and takes down several application components with it, Kubernetes will spin up new copies of those components on different servers. If an application component suddenly stops working, Kubernetes will restart it automatically, and it will not route traffic to the component until it reports that it is ready again.

For example, the following YAML snippet tells Kubernetes that to check whether this application component is healthy, it should make an HTTP request to /healthz.

   livenessProbe:
     httpGet:
       path: /healthz
       port: 8080
     initialDelaySeconds: 3
     periodSeconds: 3
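
A liveness probe tells Kubernetes when to restart a container; a separate readiness probe tells Kubernetes when the container is ready to receive traffic. A sketch, assuming the same endpoint also reflects readiness:

   readinessProbe:
     httpGet:
       path: /healthz # assumption: the same endpoint reports readiness
       port: 8080
     initialDelaySeconds: 3
     periodSeconds: 3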


Make YAML a first-class citizen

Obviously, Kubernetes doesn't automate the entire application development lifecycle (no software system can), but it solves a number of difficult, core problems that every DevOps team faces. And the more of those core problems the platform solves, the faster teams can deliver quality applications to customers.

The key enabler for much of Kubernetes' magic is the YAML files that tell Kubernetes what the application needs and that make the application's configuration and deployment reproducible. So write those YAML files to take advantage of the goodness Kubernetes delivers, and treat them just like you treat your code: they're the crown jewels of your application.
