Why Elixir?

Some reasons I'm enjoying using Elixir, and why I think it's especially good for prototyping


Hey there! There are a few more new folks than usual this week – thanks to everyone who's been sharing recent issues – welcome.

I'm Nat Bennett, and I make software. You're reading Simpler Machines, my weekly newsletter about making software.

I've been writing a lot about Pivotal recently and have more on that in the queue, but I wanted to take a quick break this week from writing about how things were and write a bit about how things are.

Something that's been a big part of my technical life over the last year is learning Elixir, but I haven't written much about it yet – only two posts even mention the language. So I thought I'd take a minute to write about why I've been spending the time to learn it, and what I like about the language.

I picked it up originally because the people who were writing it and talking about it online seemed like they were having a lot of fun. I wanted to do something technical just for the pure joy of it – get back in touch with why I like building things with computers so much.

And man, is it fun to write. I love functional programming but in a very morlock-y way – my favorite functional programming language is Bash – and Elixir really fits the way my brain wants to decompose problems. A lot of Elixir programs end up being composed out of pipelines, and it has some really nice syntax for them.
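A minimal sketch of what that pipeline style looks like (the module and function names here are made up for illustration) – the `|>` operator feeds each result in as the first argument of the next function, so a transformation reads top to bottom:

```elixir
defmodule Cleanup do
  # A hypothetical pipeline: trim, lowercase, and collapse whitespace.
  def normalize(text) do
    text
    |> String.trim()
    |> String.downcase()
    |> String.split(~r/\s+/)
    |> Enum.join(" ")
  end
end

Cleanup.normalize("  Hello   WORLD  ")
# => "hello world"
```

Each step is an ordinary function; the pipeline is just syntax for nested calls, which makes it easy to add, remove, or reorder steps while prototyping.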

It also has succinct error checking, because of the way it combines assignment and expressions. You'll often see Elixir code that looks something like this:

{:ok, data} = rustle_up_some_data(query)

When rustle_up_some_data returns a tuple that starts with the :ok atom, the program just continues as normal and starts processing the other value in the tuple. When rustle_up_some_data returns something like {:error, nil}, though, the process just stops – the pattern {:ok, data} can't match {:error, nil}, so the match raises an error.

You can do more explicit error handling if you want to pass the :error back up to the caller, but for situations where you just want to retry, crashing and letting the caller (or a supervisor) recover is often all you need to do.
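Here's a sketch of the two styles side by side. rustle_up_some_data is the hypothetical function from above, stubbed out so the shapes are visible:

```elixir
defmodule Demo do
  # Stubs standing in for a real data source.
  def rustle_up_some_data(:good), do: {:ok, [1, 2, 3]}
  def rustle_up_some_data(:bad), do: {:error, :not_found}

  # "Let it crash": an {:error, _} return raises a MatchError here.
  def strict(query) do
    {:ok, data} = rustle_up_some_data(query)
    Enum.sum(data)
  end

  # Explicit handling: pass the error back up to the caller.
  def lenient(query) do
    case rustle_up_some_data(query) do
      {:ok, data} -> {:ok, Enum.sum(data)}
      {:error, reason} -> {:error, reason}
    end
  end
end
```

The strict version is shorter and is the right default when the caller can't do anything useful with the error anyway; the case version is for when the error is part of the function's contract.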

This all just really fits how my brain works. I've never really liked writing for loops – one of the reasons Ruby was my first programming language is that when I encountered .each, I instantly grokked a bunch of concepts that had been hard for me to grasp in other languages.

Elixir's also given me a deeper appreciation for what functional programming can really enable in terms of operability. The BEAM can do some borderline magical stuff operationally. I don't understand how it works well enough to give a detailed description, but my understanding is that these capabilities are deeply linked to the fact that all data in the BEAM is immutable: if you want to change something, you have to copy it. Learning about the BEAM has been fun for a bunch of reasons, but getting more into the "why" behind functional-paradigm concepts has been a big one.

My favorite thing about the language is probably the tooling. Out of the box, a new Elixir project comes with a nice build tool and task runner, Mix. It also comes with a default test runner, ExUnit, that has the right defaults for me and uses describe blocks to organize tests. So any given Elixir codebase tends to be structured in a standardized way, with scripts for the tasks I think ought to have scripts, and that standardized structure tends to be pretty comfortable and familiar. Partly because of the tooling, the community generally has good defaults around documentation and testing.
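For a sense of what those describe blocks look like, here's a minimal ExUnit sketch (the module under test is made up; normally this would live in a test/ directory and be run via mix test rather than started by hand):

```elixir
ExUnit.start(autorun: false)

defmodule Temperature do
  def to_fahrenheit(celsius), do: celsius * 9 / 5 + 32
end

defmodule TemperatureTest do
  use ExUnit.Case

  # describe groups tests by the function they exercise.
  describe "to_fahrenheit/1" do
    test "converts the freezing point" do
      assert Temperature.to_fahrenheit(0) == 32.0
    end
  end
end

results = ExUnit.run()
```

The "function/arity" naming convention in describe blocks is common in the community, so you can usually find the tests for a function by grepping for its name.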

There's Phoenix, of course – a Rails-inspired web application framework with standard solutions for a lot of common application tasks. Folks tend to really like its database wrapper, Ecto, especially people who have gotten bitten badly by ActiveRecord and other "classic" ORMs – Ecto makes database queries easier, but it hides a lot fewer of the SQL details from you.

And I just started playing around with Livebook and wish I had started using it much sooner. It's sort of a REPL, but a REPL where you can save steps, rerun them when you make changes, and annotate them with Markdown and other media. It seems really powerful for taking notes while spiking, and for maintaining scripts that need to be documented.

Elixir's also got really, really good observability defaults. It's inherently well-suited to the technical problems with telemetry – running a separate process that the main process passes messages to is just, like, what Elixir does, man – and the ecosystem does some really clever stuff with that capability – Phoenix applications come with a nice little dashboard view with all the logs and metrics out of the box.

The best part, though, is the Telemetry package. It's the only package that I have ever seen take the right approach to generating metrics. When you're writing code you don't interact with metrics directly. Instead:

Metrics are aggregations of Telemetry events with a specific name, providing a view of the system's behaviour over time.

I don't say this lightly but: This is objectively correct.

Metrics are aggregates, but this tends not to be how developers model them, especially if they're not regularly elbows-deep in their metrics dashboard. Developers – at least in my experience – tend to think about the data their system is emitting in terms of events. I've made this mistake myself – used a counter to model an event by emitting a "1" for success and a "0" for failure, and then gotten deeply confused when sometimes that counter showed "0.5" after the event got ingested by our metrics system.

Having developers instrument their code with events, and then calculate the aggregations of those events and emit metrics, is a really sensible default that I hope more metrics packages adopt.
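A toy illustration of the idea (this is not the Telemetry API, just the shape of the approach): code emits events, and the metric is computed as an aggregation over them – so the counter-showing-0.5 confusion can't happen, because nobody is emitting 1s and 0s directly:

```elixir
defmodule Aggregate do
  # Each event is a plain map; a metric is a function over a list of them.
  def success_rate(events) do
    total = length(events)
    ok = Enum.count(events, &(&1.status == :ok))
    ok / total
  end
end

events = [
  %{name: :request_handled, status: :ok},
  %{name: :request_handled, status: :error},
  %{name: :request_handled, status: :ok},
  %{name: :request_handled, status: :ok}
]

Aggregate.success_rate(events)
# => 0.75
```

The real Telemetry ecosystem does this with handlers attached to named events, but the division of labor is the same: instrumentation emits facts, and aggregation into metrics happens elsewhere.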

Then there's the BEAM. Oh man, is there the BEAM.

I'm a distributed systems nerd and I love thinking about how complex systems fail so I enjoy learning about the BEAM and agree with a lot of the philosophy behind it, but I'm not going to write about that today – we're already 1000 words in and I have Friday night to get to.

Instead I want to talk about this table from Elixir in Action, and the implications it has for application development, especially early in an application's lifecycle, or when you're building and running a lot of similar applications.

| Technical requirement | Server A | Server B |
| --- | --- | --- |
| HTTP server | Nginx and Phusion Passenger | Erlang |
| Request processing | Ruby on Rails | Erlang |
| Long-running requests | Go | Erlang |
| Server-wide state | Redis | Erlang |
| Persistable data | Redis and MongoDB | Erlang |
| Background jobs | Cron, Bash scripts, and Ruby | Erlang |
| Service crash recovery | Upstart | Erlang |

In a lot of cases, the services in column A will be better than the services in column B in some specific way that matters for a particular application. Using a BEAM language allows you to delay the decision about how to solve a particular problem, because the BEAM gives you a "good enough" solution to get started.

This is really attractive to me as a prototyping toolkit – I can get an app up and running that, say, shares state across client sessions, without having to deploy, configure, or otherwise think about what service I'm going to use for holding that cache. By learning how to debug and operate Elixir and its concurrency model – OTP – I get access to a production-usable version of a bunch of specialized capabilities that I would otherwise need dedicated software for.
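The "share state across client sessions without a cache service" case is a one-liner with tools from Elixir's standard library. Here's a sketch using an Agent (the names are made up; in a real app this would sit in a supervision tree rather than being started by hand):

```elixir
# Server-wide state without Redis: an Agent holding a shared map.
{:ok, _pid} = Agent.start_link(fn -> %{} end, name: :session_cache)

# Any process in the app can read and write it by name.
Agent.update(:session_cache, &Map.put(&1, "user:42", %{theme: "dark"}))
value = Agent.get(:session_cache, &Map.get(&1, "user:42"))
# => %{theme: "dark"}
```

If the app later outgrows this, swapping in Redis is a localized change – but for a prototype, "good enough" state that survives across requests costs nothing to set up.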

It's also attractive as someone who is slowly building up a small fleet of disparate applications. Groups of apps are easier to operate the more similar they are, and using Elixir lets me do a ton of different things with apps that are essentially identical operationally. (I know of at least one consultancy that's exploring a model where they build and run client applications, and they primarily use Elixir. I don't know for sure that this is why, but it's why I'd use Elixir in such a situation.)

I'm generally pretty reluctant to add new data services to a stack, if I can help it. This feels like it makes me a bit weird these days – it seems like every system has multiple databases, an ETL pipeline, some kind of queue – but every time I add a new piece of tech I'm thinking, "Ugh, this is going to be such a pain to debug the first few times something goes wrong." Especially if it's doing anything distributed or stateful.

What's bringing you joy in technology these days?

- Nat