Networking has long been the holdout in enterprise aspirations toward high-performance, multicloud or hybrid architectures. While such architectures were once aspirational marketing buzzwords, they are today’s enterprise reality. Now, with the launch of Cilium Mesh, enterprises get “a new universal networking layer to connect workloads and machines across cloud, on-prem and edge.” Consisting of a Kubernetes networking component, a multi-cluster connectivity plane and a transit gateway, Cilium Mesh helps enterprises bridge their on-premises networking assets into a cloud-native world.
It sounds cool, and it is cool, but reaching this point was anything but simple. It also remains complex for enterprises hoping to bridge their existing infrastructure to more modern approaches.
We sometimes take cloud-native architectures for granted because we fail to appreciate the complex requirements they place on the infrastructure layer. For example, infrastructure software must now be capable of running equally well in public or private cloud infrastructure. It must be highly scalable to match the agility of containers and CI/CD. It must be highly secure because it often runs outside of company premises. And it must still meet the traditional enterprise networking requirements in terms of interoperability, observability and security, all while generally being open source and somewhat community-driven.
Oh, and to be relevant to enterprises, all this cloud-native goodness must translate back into the legacy-infrastructure “badness” that enterprises have been running for years. This is what Cilium Mesh does for the networking layer, and it’s what Thomas Graf, the co-founder and chief technology officer of Isovalent, the creator of Cilium, took time to explain.
On the road to cloud native
Cilium and Kubernetes emerged at roughly the same time, with Cilium quickly earning its place as the default networking abstraction for the major cloud service provider offerings (e.g., Azure Kubernetes Service and Amazon EKS Anywhere). Not that everyone knowingly runs Cilium; many get it as a hidden bonus of using a cloud’s managed services. How much a company knows about its Cilium use has much to do with where it is in its cloud journey, according to Graf.
In the initial stage of a Kubernetes journey, it is often only an application team that uses Kubernetes as they build an initial version of the application. We see heavy use of managed services in this phase and very limited requirements on the network aside from the need to expose the application publicly via an Ingress or API gateway. Graf noted: “These initial use cases are solved really well by managed services and cloud offerings, which have accelerated the path to developing services massively. Small application teams can run and even scale services fairly easily in the beginning.”
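To make the early-stage picture concrete, exposing an application publicly via an Ingress typically amounts to a short manifest like the sketch below. The host, service name and port are illustrative, not from the article:

```yaml
# Minimal Ingress exposing a service publicly.
# Hostname and service names are hypothetical placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com        # assumed public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # assumed backing Service
                port:
                  number: 80
```

This is the kind of limited networking requirement that managed cloud offerings handle well, as Graf notes.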
With more experience and greater adoption of Kubernetes, however, this changes, and sometimes dramatically.
Larger enterprise Kubernetes users, Graf highlighted, bring typical enterprise requirements such as micro-segmentation, encryption and SIEM integration. While “these requirements haven’t changed much” over the years, he stressed, “their implementation must be completely different today.” How? Well, for starters, their implementation can no longer disrupt the application development workflow. Application teams are no longer interested in filing tickets to scale infrastructure, open firewall ports and request IP address blocks. In other words, he summarized, “The platform team is tasked to tick off all the enterprise requirements without disrupting and undoing the gains that have been made on agility and developer efficiency.”
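As a rough illustration of what ticket-free micro-segmentation looks like with Cilium, a policy can be declared alongside the application rather than requested from a firewall team. The sketch below uses Cilium’s CiliumNetworkPolicy resource; the labels are hypothetical:

```yaml
# Illustrative identity-based policy: only workloads labeled
# role=frontend may connect to workloads labeled role=backend.
# Enforcement is by workload identity (labels), not IP addresses.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      role: backend          # assumed label on the protected workloads
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend   # assumed label on permitted callers
```

Because the policy lives in version control and is applied through the same GitOps pipeline as the application, the platform team can meet the segmentation requirement without reintroducing ticket-driven workflows.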
Additionally, the platform that is built must be cloud agnostic, working equally well in public and private clouds. The latest requirements even demand integrating existing servers and virtual machines into the mix without slowing down the highly agile processes built on CI/CD and GitOps principles. It’s non-trivial; with Cilium Mesh, however, it’s very doable.
This shift will change networking more than SDN
With Cilium Mesh, the project has unified some specific types of hybrid and multicloud networking concerns like cluster connectivity, service mesh and now legacy environments. Now that Kubernetes has become a standard platform, Graf suggested, it has established a set of principles that must find their way into a company’s existing infrastructure. In other words, as Graf continued, “Existing networks with fleets of VMs or servers must be able to be connected to the new north star of infrastructure principles: Kubernetes.”
This is where things get interesting, and it’s where Cilium Mesh becomes critical.
“With Cilium Mesh, we are bringing all of Cilium — including all the APIs built on top of Kubernetes — to the world outside of Kubernetes,” Graf declared. Instead of running on Kubernetes worker nodes, Cilium runs on VMs and servers in the form of transit gateways, load-balancers and egress gateways to connect existing networks together with new cloud-native principles including identity-based, zero-trust security enforcement, fully distributed control planes and modern observability with Prometheus and Grafana.
Importantly, Cilium Mesh is equally appealing to Kubernetes platform teams and more traditional NetOps teams. The Kubernetes-native approach gives platform teams the necessary confidence to assume additional responsibility for managing non-Kubernetes infrastructure, while the use of well-known building blocks like transit gateways and Border Gateway Protocol (essentially the postal service for the internet) gives the NetOps team a clear yet incremental path to a Kubernetes world.
This is a big deal for enterprises struggling to make sense of multicloud, which includes just about everyone. True, the concept of multicloud has been discussed for a long time, but it’s only now that we’re getting beyond the hype (i.e., the ability to deploy simultaneously into multiple public clouds to optimize costs) to the messy reality of enterprise IT (i.e., different teams use different tools for a host of different reasons). The main struggle, Graf pointed out, “is less about how to connect all the public cloud providers together [and rather] how to get to a unified architecture to connect existing on-prem infrastructure with each public cloud offering while maintaining uniform security and observability layers.”
This shift to Kubernetes-style principles powering the network layer has a range of benefits. Chief among them: significantly smaller teams operating and providing infrastructure more effectively, while offering platforms that allow enterprises to adopt modern development practices and remain competitive. It’s a big deal, and one that promises to change networking even more completely than software-defined networking once did.
Disclosure: I work for MongoDB, but the views expressed herein are mine.