Why you shouldn't use Terraform to manage Kubernetes workloads

Terraform's "state" is deceptively hard to manage, introduces security risks, and is redundant with Kubernetes internal state management.


Me: Nat

This newsletter: Simpler Machines, a weekly letter about living better, with computers!

This week: Moving, getting my head around the Spring '83 spec (more on that soon, I hope), async streams with Elixir, naps.

Also – later this summer I'll be co-facilitating a 10-week cohort-based coaching program with colleague-and-mentor Elisabeth Hendrickson. More details in her LinkedIn post. We're looking to start a cohort in late July. If your organization has a set of teams that are struggling with cross-team interdependency and gridlock, get in touch with Elisabeth, or e-mail me. Maybe we can help you get unstuck.


Earlier this week Hacker News answered the question: “What is your Kubernetes nightmare?” If you’re from the pre-Kubernetes container world, you’ll find it either entertaining or infuriating (or both! why not both?). The complexity-for-complexity’s sake, the foot-gun-laden setup processes, the lack of clear developer abstractions, the poor handling of the actually hard problems in platforms (like storage, packaging, services, and especially networking), the bad breaking change policies – it's all there.

That said, a handful of folks gave some advice on the thread that I found really concerning, and I want to take a minute to clear that up here. Many folks (correctly) pointed out that for deployments of any level of complexity you are likely to want a tool to manage your YAML. Some of those folks said that tool could or should be Terraform.

At this, I made a tiny, mostly involuntary screeching noise, because Terraform – in both my personal experience as an engineer trying to figure out Terraform, and in observing other engineers working with it – is a pretty bad choice for managing Kubernetes resources. I don't blame folks for recommending it, though, because the reasons why are a little bit subtle. They're not going to be obvious if you're just trying to get something working for the first time.


The short version of my advice here is: don't use Terraform to manage your Kubernetes workloads. Use Helm, Kustomize, Carvel, or even plain un-templated manifest files checked into version control instead.

The short version of why is that Terraform's "state" is deceptively hard to manage, introduces security risks, and is redundant with Kubernetes' own internal state management. The way Terraform handles state is necessary for the other things it's designed to manage, but for Kubernetes specifically you have good options that don't introduce these problems. You can work around the state problems, but the workarounds add a lot of complexity.

Terraform is also somewhat hard to grok, especially Terraform modules. I'm not going to go into detail about that right now (maybe in another post? send me an e-mail 💌 if this is relevant to your interests), but there's some socio-technical heavy lifting you'll need to do if you want people to put the effort into grokking it. This is very hard – in general, as an infrastructure or platform developer, you should expect most application developers to copy-paste your code but not to invest deeply in the Terraform skillset, unless you've got a lot of management support for that investment.


Like I said, though, the reasons for this recommendation are a little bit subtle and abstract. So I'm going to do my best to walk you through them, to the best of my understanding.

Now. I like Terraform! When you need it, you need it. (I also hear good things about Pulumi.) It’s great for what it’s for: Managing the primitives that your IaaS provider ("infrastructure-as-a-service" – AWS, Azure, etc.) gives you. VMs, service instances, load balancers and other network configuration. You may want to use it to set up your clusters. But you shouldn’t use it to manage the workloads on those clusters, because of the way that Kubernetes and Terraform respectively handle state.
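To make that concrete, here's a minimal sketch of the kind of thing Terraform is genuinely good at (an IaaS primitive, not a workload). The resource names, region, and AMI ID below are all placeholders:

```bash
# Write a tiny Terraform config that manages an IaaS primitive.
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "worker" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.small"

  tags = {
    Name = "example-worker"
  }
}
EOF

terraform init
terraform apply
```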

In infrastructure configuration, state is the enemy. It is a little demon that needs to be carefully corralled into a single location where you can keep an eye on it. Ideally there is a gun nearby, in case it makes a noise you don't recognize.

Unfortunately for infrastructure engineers everywhere, there are three sets of state that you can’t get out of managing: what you mean to be in your infrastructure, what you most recently told the system should be there, and what is actually there.

What you “mean” to be there is called “configuration.” This might be Terraform files, Kubernetes YAMLs, or templates for YAMLs. Configuration should be checked in to version control, so you can record why the humans managing it made the decisions they did, and understand how and why it has changed over time. It should probably be deployed from version control, by a machine, when you push. You can do this with regular ol’ raw Kubernetes deployment YAMLs and kubectl. (Pronounced "koob-ect-el," btw.)
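Here's a sketch of what "plain YAML, applied by a machine" looks like in practice. The names and image are placeholders, and the apply step is whatever your CI pipeline runs on push:

```bash
# A plain, un-templated Deployment manifest, checked into version control.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25
          ports:
            - containerPort: 80
EOF

# The step your CI system runs when you push to the main branch:
kubectl apply -f deployment.yaml
```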

Most infrastructure configuration ends up needing to be templated. This is probably worth a whole essay on its own (again, send me an e-mail 💌 if this is relevant to your interests) but basically, you need templating because you’re probably going to want to configure a lot of things in your environment in similar ways, for various reasons. One of the things that will vary across those sets of similar things is likely to be secrets. There will probably be places in your template that are like {{secret_key_for_prod}} and then whatever system renders those templates into the literal YAML will fetch those secrets from a secret store like Vault. This is going to be important later.
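A rough sketch of that rendering step, with made-up template syntax, paths, and names (I'm using envsubst and Vault's KV store here purely for illustration):

```bash
# A hypothetical template with a secret placeholder in it.
cat > app-secret.yaml.tmpl <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: app-db
stringData:
  password: "${SECRET_KEY_FOR_PROD}"
EOF

# The rendering system fetches the real value from a secret store
# (the Vault path here is a placeholder) ...
export SECRET_KEY_FOR_PROD="$(vault kv get -field=password secret/prod/app-db)"

# ... and produces the literal YAML that actually gets applied.
envsubst < app-secret.yaml.tmpl > app-secret.yaml
```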

What “is” in your infrastructure is, likewise, relatively straightforward for both Kubernetes and IaaS primitives. When you ask your IaaS’s CLI what’s there, what does it say? What do you see clicking around in the console? What does kubectl print out? And, of course, the gold standard: When you curl an-address-that-should-have-stuff-at-it, what responds? When you ssh onto that box, or into that container, what bits are there on the disk?
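A few of the read-only ways you might answer that question in practice; all the names, paths, and URLs below are placeholders:

```bash
# What does the cluster say is running?
kubectl get deployments,pods,services -n default

# Details and recent events for one workload:
kubectl describe deployment hello -n default

# Does the thing actually respond?
curl -fsS https://an-address-that-should-have-stuff-at-it.example.com/healthz

# What bits are actually on the disk inside a container?
kubectl exec -it deploy/hello -- ls /usr/share/nginx/html
```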

It’s the third category, “what you last told the system should be there” where things get tricky.

One of the nice things about Kubernetes is that it has a reasonably good system for storing this. When you give it YAML with kubectl or whatever, it digests it and stores it in etcd. Etcd keeps a running log of any changes it makes to itself, so anything that’s being managed by Kubernetes can listen to that log to see when it should make changes. If they miss that message, crash, get deleted, or otherwise diverge from the state they’re supposed to be in, they can also periodically check the latest desired state in etcd.
Kubernetes is also self-contained and monolithic. If it’s supposed to be in the cluster, it’s in that etcd key-value store. If it’s not, it’s not. So Kubernetes can safely nuke anything it finds that doesn’t match a record it knows about.
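You can watch that loop work for yourself, using the placeholder deployment from earlier:

```bash
# Hand the cluster some desired state; the API server stores it in etcd.
kubectl apply -f deployment.yaml

# Read back the cluster's stored copy of what you last told it:
kubectl get deployment hello -o yaml

# Delete a pod out from under the deployment; the controller notices
# the divergence from the stored spec and recreates it.
kubectl delete pod -l app=hello
kubectl get pods -l app=hello --watch
```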

Terraform, unfortunately, has a much more hostile environment to deal with: The IaaS.

There are a bunch of problems that Terraform’s state solution has to deal with, and the team has gone into lovely detail in a page on their docs called “Purpose of Terraform State.” But for the purposes of this problem Terraform has two really key constraints:

  1. Terraform can’t assume that anything that it finds in the IaaS is under its own management, so it needs a way to mark objects as “Terraformed.”
  2. Terraform can’t use an IaaS feature (like tags) to handle the problem because they’re all slightly different. So that marker needs to live outside the objects that are being managed.


The iron logic of these two facts leads – painfully, inexorably – to the Terraform State file.

If you haven’t already given several years of your life over to understanding Terraform, one of the most important things to know about it is this: When you run terraform apply on a system, it’s going to generate a file. And if you want to run terraform plan or terraform apply again, you must have the latest version of that file.
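Concretely, assuming the placeholder config from earlier (or really any config at all):

```bash
terraform init
terraform plan -out=tfplan
terraform apply tfplan

# The run leaves this behind. Lose it, and Terraform no longer knows
# which real resources it created.
ls -l terraform.tfstate

# The resources Terraform currently believes it manages:
terraform state list
```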

(Slight digression: By default this is a local file, but Terraform has a concept of "backends" which let you store your state in other places. I'm going to continue to refer to it as a file for simplicity, and also because it is funny.)
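If you're curious what a non-local backend looks like, here's a sketch. The bucket, key, region, and table names are all placeholders, and the dynamodb_table line is for the locking that comes up in a moment:

```bash
# A remote backend: state lives in S3 instead of a local file,
# with a DynamoDB table used for locking.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "my-team-terraform-state"
    key            = "prod/cluster/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}
EOF

# Re-initializing offers to migrate existing local state into the backend.
terraform init
```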

So, first, now you have the problem of managing that file. You can’t just store it locally – you must at least back it up – and you probably need to store it somewhere you can share it with your team as well. And if multiple people are using it, you actually kind of can’t store it in version control, because the first thing Terraform does when it starts a terraform apply is check whether that file has a “state lock.” If there’s a state lock, someone else is modifying that environment and Terraform won’t mess with it. Exactly how the lock is stored depends on the backend – the local backend uses “system APIs” and the S3 backend uses a DynamoDB table (!) – but the upshot is that a state file checked into version control can’t carry a working state lock, so nothing stops two people from applying to the same environment at once.

Shenanigan risk levels: Critical.

It gets worse than this, though. That file contains the record of the actual things that Terraform made – their IaaS-assigned IDs and so on – and that it expects to see the next time you work with that environment. But because Terraform works by first generating a diff between what is and what should be, and then taking actions to correct that diff, this needs to be a complete description of those resources. And this is where using Terraform for real starts getting nasty.

Remember how I mentioned that secrets in templates were going to be important earlier? This is where they’re important. It doesn’t matter how you’ve templated your Terraform files to keep the secrets themselves somewhere safe. Terraform is going to resolve those secrets and write them into the state file in plaintext. And then you’re going to have to save that file if you ever want to do anything with the environment it describes ever again!
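You can see this for yourself in any state that contains a sensitive value (the attribute name is a placeholder, and jq is assumed to be installed):

```bash
# The resolved secret value sits in the state file as ordinary JSON.
grep -n '"password"' terraform.tfstate

# Or inspect the state the way Terraform itself reports it:
terraform show -json | jq '.values.root_module.resources[].values'
```

In practice, you end up having to treat the whole state file as a secret.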

You can get around the mutex problem by using a remote backend like S3, but that S3 bucket will still have secrets in it, and anyone with the bucket access needed to run terraform apply has access to every secret used by every resource in that state file.

This means you need to be very careful about who has access to Terraform state. Ideally, no human should have access to it during normal operations. You can accomplish this with something like Atlantis or Spacelift, but now you’re running special automation tools. You also need to make sure your developers understand why you’re doing this and justify to your management why you need to pay for them or spend time setting them up and maintaining them.

In contrast – because it knows everything that it’s managing and is running a self-contained little world – Kubernetes gets to do something rather clever with secrets. You can set up a secret store, like Vault, and put in key-value pairs like reference_to_prod_database_admin_password: prod_database_admin_password. Then Kubernetes stores the reference to that secret anywhere it needs to describe an entity that uses it, and only fetches the actual value at the point that it’s actually used – when it’s literally writing out the file mount that contains the secret for a pod, for instance.
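Here's a sketch of the in-cluster half of that pattern, using a plain Kubernetes Secret rather than Vault itself (how the Secret gets populated, by hand or synced from Vault by something like an external secrets operator, is a separate question; all the names are placeholders):

```bash
# The workload's spec stores only a reference to the secret, by name.
# The value gets resolved when the kubelet actually mounts it into the pod.
cat > app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: nginx:1.25
          volumeMounts:
            - name: db-credentials
              mountPath: /var/run/secrets/db
              readOnly: true
      volumes:
        - name: db-credentials
          secret:
            secretName: prod-database-admin   # a reference, not the value
EOF

kubectl apply -f app.yaml
```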


So, to recap:

Terraform is super powerful. When you need it, you need it.

But to manage Kubernetes deployments you don’t need it. You can get the job done just as well with Helm, Kustomize, Carvel – or even just plain un-templated manifest files checked into version control. Using Terraform to manage Kubernetes resources adds a bunch of problems with state file management and security that you don’t need.


Questions? Comments? Found something technically wrong in this post? Have a great argument about why Terraform is great for managing Kubernetes actually – or something you can do to make it great?

Let me know! Reply to this e-mail directly, or, if you're reading this online, drop me a line at nat @ this website. And, as always, thanks for reading.