Some light Kubernetes skepticism

Five of you beautiful maniacs responded to my reader survey saying you were interested in Kubernetes content. Then, VMware published something I've been wanting to talk about for a while: why they backed off from migrating Cloud Foundry from BOSH/Diego and onto Kubernetes. So this week, you're getting some fresh, hot Cloud Opinion.

Old man yells at cloud

A recap, for those of you just tuning in to the container wars: Cloud Foundry and Kubernetes are both systems for running large numbers of applications automatically. If you've ever used Heroku, you've used a system like this.

Cloud Foundry, specifically, is designed to let large, regulated companies run their own internal Heroku, so that their developers can continuously push applications without IT tickets, or audits that care about things like "where the server is plugged in" and "which version of Linux the VM under the container is running." Cloud Foundry separates developers from those concerns.

Like Kubernetes, it provides a container runtime, but it also has a developer workflow API, a container build system, routers, log egress, authentication, and a bunch of other bits and pieces that make it a complete system for running large numbers of applications in regulated environments.

Kubernetes is a lot more limited – but a lot more flexible, too. On its own, it "just" runs processes. It doesn't manage their logs, build their images, provide routing, or offer any of the other services Cloud Foundry provides to its apps. This means that getting even a simple application with a single database up and running is much harder than pushing that same app to Cloud Foundry or Heroku, but it also means you can run a lot of things on Kubernetes that you can't run on Cloud Foundry at all.
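
To make that concrete, here's a rough sketch, using the official `kubernetes` Python client, of roughly the minimum it takes to get a single web app running on bare Kubernetes. The app name, image, registry, and namespace are all made up, and this still leaves out the image build, external routing, logs, and the database; on Cloud Foundry or Heroku the equivalent is, more or less, one push command.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes you already have a cluster and credentials

# Describe how the app should run: how many copies, which image, which port.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        # You build, tag, and host this image yourself.
                        image="registry.example.com/my-app:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# Give the pods a stable in-cluster address. External routing (an Ingress,
# a LoadBalancer, or a mesh) is yet another object on top of this.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1ServiceSpec(
        selector={"app": "my-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```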

It's fairly common to see people describe Kubernetes, these days, as a "platform as a service toolkit." Which has given rise to the belief that the obvious thing to do would be to take all the services that Cloud Foundry provides and run them on top of Kubernetes – to get the best of both worlds. This would let the Cloud Foundry project stop maintaining BOSH, Diego, and its routing layer, and focus on the things that make Cloud Foundry distinct from Kubernetes.

There have been at least three separate attempts to do some version of that over the past few years. None of them is usable in production. This has struck some people as mysterious.

VMware, last week, reiterated their commitment to Cloud Foundry on BOSH. They describe it as "the best destination for mission critical apps." They had this to say about their version of Cloud Foundry on Kubernetes:

We also didn’t believe it would meet our standards for scalability, speed, security, and stability, nor would it deliver the kind of developer experience our customers have grown accustomed to. So, we pivoted.

The thing that surprises me is that some of those scalability, security, and stability constraints are problems with Kubernetes itself. I had kind of assumed that the reason they'd been quiet about what exactly they were up to with the CF-on-Kubernetes project was that they didn't want to acknowledge that Kubernetes isn't appropriate, right now, for certain workloads. Apparently I underestimated them!

The big problem for any project that's trying to deliver Cloud Foundry-like outcomes on Kubernetes, for existing Cloud Foundry customers, is that Cloud Foundry on BOSH is right there. It works really well. I don't like to update my phone's operating system if I can help it. The kinds of organizations that have been successful with Cloud Foundry have approximately the same feelings about updating the infrastructure running their mission-critical apps.

I can also see why organizations would choose Cloud Foundry over Kubernetes for new workloads, even today. Kubernetes has some architectural features that make me really skeptical of its ability to work well in Cloud Foundry's home turf: the enterprise data center, especially ones with limited access to the internet.

That limited access to the internet is key. When I first started on Cloud Foundry and saw how much work we had to do to make the system deployable, operable, and stable in customer data centers, I was confused. Why weren't these customers just using Heroku? Or AWS? Why were they going to all of this trouble with vCenter? Why weren't we selling them Cloud Foundry-as-a-Service?

The answer to that question is complex. Part of the answer is that there are software environments that can't be connected to the internet, or can only be connected in a very limited way. The organization may have strict requirements about where it can store customer data. They may have chosen to protect it from attackers by sharply or entirely limiting its direct connection to the internet.

The software may be running on a submarine.

These environments are challenging ones for distributed systems. A lot of modern software development practices assume access to difficult-to-run services provided by external experts. The need to perform well without those services has deeply shaped Cloud Foundry's design.

For example: Kubernetes commits you to some fancy shit in your routing layer. Cloud Foundry's primary routing system, by contrast, is very boring. It has a set of routers, each deployed on its own dedicated VM. Applications running on Diego register their locations with those routers. Traffic comes in through the public internet, gets load balanced to a router, and then that router passes it on to the correct VM.
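
The whole model fits in a toy sketch. This isn't gorouter's actual code, and the hostnames and addresses are made up; it's just an illustration of the shape of the thing: a table of hostnames to backends, kept fresh by periodic registrations, consulted once per request.

```python
import random
import time

class Router:
    """A deliberately boring routing table: hostname -> known backends."""

    def __init__(self, route_ttl_seconds=120):
        self.routes = {}          # "myapp.example.com" -> {("10.0.1.5", 61001): last_seen}
        self.route_ttl = route_ttl_seconds

    def register(self, host, backend):
        # An app instance announcing "I serve this hostname at this address."
        self.routes.setdefault(host, {})[backend] = time.time()

    def lookup(self, host):
        # Pick one backend that has registered recently enough to trust.
        backends = self.routes.get(host, {})
        fresh = [b for b, seen in backends.items() if time.time() - seen < self.route_ttl]
        if not fresh:
            raise LookupError(f"no route for {host}")
        return random.choice(fresh)

router = Router()
router.register("myapp.example.com", ("10.0.1.5", 61001))
print(router.lookup("myapp.example.com"))   # ('10.0.1.5', 61001)
```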

Kubernetes, meanwhile, in practice pushes you toward some kind of service mesh, and frankly, I am glad that I have never had a problem that sounded like it needed a service mesh to solve. Those things are some complicated nonsense. Lots of places to look whenever something goes wrong, lots of shenanigans your proxies can get up to when they're passing around your requests. These are not the characteristics of a big, stable, supportable system.

Some of this is immaturity, and possibly problems that are specific to Istio. Projects like Linkerd seem a lot simpler, and more likely to be successful in the conditions I'm describing.

A deeper source of my skepticism is that Kubernetes has this fancy pub/sub architecture built around a shared stream of cluster state changes. That's what makes it so extensible: components can watch for and act on whatever events they want, and the event producer doesn't have to know anything about the consumer, or vice versa. Cloud Foundry also used to have a fancy pub/sub architecture based on a distributed message queue, but ripped it out and replaced it with a SQL database.
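
On the Kubernetes side, that pattern looks something like the sketch below: a component subscribes to a shared stream of changes and reacts to the ones it cares about, and nothing producing those changes knows the loop exists. It assumes a reachable cluster and the official `kubernetes` Python client; what it prints is just an illustration.

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Subscribe to every pod change in the cluster and react to the ones we care
# about. The scheduler, the kubelet, or the person who deleted the pod has no
# idea this loop is listening -- that's the extensibility, and the opacity.
w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces, timeout_seconds=60):
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```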

The remnant of that old message-bus-based architecture is still the source of one of Cloud Foundry's most serious failure modes. Remember how I mentioned that applications register their locations with the routers? They do that over NATS. If the NATS cluster stops communicating with itself, which can happen for a variety of mysterious NATS reasons, the routers will decide all their routes are stale and dump all their route records. Goodbye, application availability.
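
Continuing the toy Router from the earlier sketch: registrations arrive as periodic messages over the bus, and a pruning loop drops anything it hasn't heard from recently. If NATS goes quiet, every route looks stale at once, even though every app instance is perfectly healthy. Again, this is a sketch of the failure mode, not gorouter's real pruning code.

```python
import time

def prune_stale_routes(router, now=None):
    """Drop any backend we haven't heard a registration from within the TTL."""
    now = now or time.time()
    for host, backends in list(router.routes.items()):
        for backend, last_seen in list(backends.items()):
            if now - last_seen > router.route_ttl:
                del backends[backend]        # no heartbeat over the bus: assume it's gone
        if not backends:
            del router.routes[host]          # goodbye, application availability

# If the message bus stops delivering register messages for longer than the
# TTL, this empties the whole table, and the routers have nowhere to send traffic.
```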

Cloud Foundry is, on some level, "three CRUD apps in a trench coat," and I love it for that. Like Kubernetes, it relies heavily on reconciliation loops for failure recovery, but it uses a cascade of HTTP requests for its first attempt at most operations. It can usually tell you when something deep within the container runtime caused your application push to fail.

Platforms built out of controllers listening to events are, kind of unavoidably, these big, disconnected Rube Goldberg machines. They can tell you that your client call succeeded, but the thing that responded to the client call has no idea what's listening to it, never mind which of those things you care about. It's up to you to know what event to poll for to find out whether what you were trying to do worked. Cloud Foundry's Diego is much less flexible and extensible, but it can at least give you the dignity of a 500 error when something goes wrong.
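
Here's a sketch of what that "know what to poll for" step looks like, using the official `kubernetes` Python client; the deployment name and namespace are hypothetical. The create call returned happily long ago, and finding out what actually happened is on you.

```python
from kubernetes import client, config

config.load_kube_config()

# Step 1: ask the Deployment how it thinks it's doing.
deployment = client.AppsV1Api().read_namespaced_deployment("my-app", "default")
for cond in deployment.status.conditions or []:
    print(cond.type, cond.status, cond.reason)     # Available? Progressing? ReplicaFailure?

# Step 2: the interesting failures (image pulls, scheduling) are reported as
# events on *other* objects, so fish through the namespace's event stream for
# anything that looks related to your app.
events = client.CoreV1Api().list_namespaced_event("default")
for e in events.items:
    if "my-app" in (e.involved_object.name or ""):
        print(e.involved_object.kind, e.reason, e.message)   # e.g. ImagePullBackOff
```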

This doesn't necessarily matter for software-as-a-service operated by a team that's on the same Slack instance as the developers who built it, and who have direct access to the system's telemetry. It matters a lot when getting the system's logs requires you to comment on a Zendesk ticket, wait for support to pass that request on to the customer, wait for the customer to upload the logs, and then go through the whole loop again when it turns out the customer didn't upload the right logs.

So I'm pleased to see VMware "succeeding with success" and sticking with Cloud Foundry on BOSH for what that system is really good at: mission-critical apps in large, relatively regulated organizations. I also look forward to seeing whether they can prove me wrong about delivering Cloud Foundry-like outcomes on Kubernetes.

But my bet? Banks are still running COBOL. I expect to see BOSH, and Cloud Foundry, in production for a very long time.


Filthy Commerce!

I've been playing around with Shopify, Printful, and their respective APIs, and as part of that I now have a proper web store, where you can, if you wish, buy terrible shirts, questionable mugs, and troubling stationery. You can also find my photo zine there.

Making this stuff is fun, so expect to see more of it over the next few weeks. I'm also open to requests, so if you've ever wanted some hyper-specific print-on-demand apparel but don't want to fuss with the website yourself, write in and I'll see what I can do. I do encourage everyone to give it a try yourselves, though. Making computers make you real physical objects is great.


Programming Note

I'm traveling for the next two weeks. I have several posts mostly finished, but you may see some delays.


Jobs

"Rockets are distributed systems where part of it (hopefully) goes to space." If you too would like to (hopefully) send parts of your distributed systems to space, Astra is hiring software engineers, among other roles.

Jobs from previous issues