Top trends from the CNCF survey & what they mean for enterprises

The results are in! The Cloud Native Computing Foundation (CNCF) recently released its seventh annual survey, showing that cloud-native technologies have become mainstream, and that deployments are maturing and increasing in size. This cloud-native shift means developers can more easily build complex applications, and organizations can deploy and manage those applications more quickly and with more automation than ever before.

Don’t have time to read the whole thing? We’re here for you. TL;DR: there’s no longer any question as to the viability of containerized apps, or the impact that Kubernetes will have across enterprises. We picked three key stats that make this pretty clear:


Containers are now the norm 

In just two years, production use of containers has jumped from 23% to 84%. Clearly, enterprises are well past experimenting and have placed their trust in containers. By extension, this means they have placed their trust in the open source projects that define and enable this new app development ecosystem.

This is a fundamental shift in thinking. Developers and DevOps teams have already made the shift, but the rest of the enterprise needs to catch up.


Increase in Kubernetes use

Kubernetes is now the de facto standard for container orchestration and management, with 78% of respondents now using Kubernetes in production, a huge jump from 58% last year. Again, note that this is in production, not just experimentation. 

At this time next year, we expect to see near-100% adoption of Kubernetes, as it has effectively eliminated competition in the space. That said, we will continue to see various “flavors” of Kubernetes (Amazon EKS, Google GKE, Microsoft AKS, Rancher, D2iQ, etc.) compete for mindshare, as enterprises try to find the right balance of flexibility, speed and security that works for their particular teams, mandates and goals.

The survey also points to an increase in the number of clusters per production deployment. On its own, this stat doesn’t necessarily mean deployments are larger, but in combination with the survey’s other findings it suggests that production deployments are growing both larger and potentially more complex, again underscoring the maturity of Kubernetes in the enterprise.


Service mesh gains traction

As production complexity grows, so too does the difficulty of maintaining consistency, managing risk and minimizing error. The need to control and automate what can happen within clusters has led more organizations to evaluate service mesh. And while service mesh is still a relatively new technology, nearly half of respondents said they were already evaluating various service mesh solutions.

Again, this further indicates that Kubernetes deployments are growing larger and more complex, and that developers are looking for a layer of policy-based control to define what should and should not communicate within an application.


Enterprises must embrace a containerized world 

It’s clear from the report that core technologies such as Kubernetes have been proven and are well adopted across enterprises. This recent alignment around a single orchestration platform has paved the way for a huge shift, with the development community consolidating its efforts on a common toolset and the best practices around it.

Now it’s time to take the next step. In order to make the best use of automation across the stack, developers and platform engineers are looking for ways to codify best practices into the infrastructure itself. Again, this is a huge shift. 

We have never before had the ability to implement security policy within apps themselves. Security has always lived around, near, or adjacent to the code. But this new containerized world has arrived, as these survey results prove, and it gives us a chance to improve on the old, flawed security model. When infrastructure, compute, network and storage are all software-defined, or delivered “as code,” security can be delivered “as code” too. This means implementing security policy within apps, between app containers/services, within and around Kubernetes, and even on the cloud platforms themselves. Cloud-native allows us (and requires us) to rethink how we implement and deploy security policy.
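To make “security as code” concrete, here is a minimal, illustrative sketch written in Rego, the policy language of the Open Policy Agent (OPA). Deployed as a Kubernetes admission control policy, it would deny any Pod that requests a privileged container. The package name and message text are our own illustration, not a prescribed standard:

```rego
package kubernetes.admission

# Deny admission of any Pod that asks for a privileged container.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    container.securityContext.privileged == true
    msg := sprintf("privileged container %q is not allowed", [container.name])
}
```

Because this rule is just code, it can be versioned, reviewed and tested alongside the application itself, and enforced automatically every time a workload is submitted to the cluster.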

The results speak for themselves: the change isn’t coming, it has arrived. Developers and DevOps teams have paved the way to a new, more automated set of tools that speed delivery and optimize resources. It’s time for the rest of the enterprise team to follow suit, with policy-as-code guardrails to eliminate operational, security and compliance risk when it comes to cloud-native apps.


Ready to make the shift to policy-as-code? See how in our Kubernetes Security Via Admission Control whitepaper!
