Immutability is the idea that something cannot be changed after it is created. Change adds complexity and, over time, uncertainty. In software development, immutability can make applications easier to run, test, and understand.
“Much of what makes application development difficult is tracking mutation and maintaining state.” (Immutable.js)
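To make that concrete, here is a minimal sketch using Immutable.js, the library quoted above (the map keys and values are just illustrative): instead of modifying a collection in place, every “update” returns a new one.

```typescript
import { Map } from "immutable";

// An immutable map: "updates" never modify it, they return a new map.
const config = Map({ region: "us-east-1", instances: 5 });

// set() leaves `config` untouched and hands back a new collection.
const scaled = config.set("instances", 10);

console.log(config.get("instances")); // 5  -- the original never changed
console.log(scaled.get("instances")); // 10 -- the change lives in a new value
```

There is nothing to track because `config` can never drift; any change is a new value. The rest of this post applies the same idea to servers and deployments.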
The difficulty of managing change stretches beyond the application itself. The servers we deploy to are often set up to be configured and reconfigured over and over, and tools like Chef and Ansible are designed to make that easier. In theory, when things go wrong we can just trash the machine and provision a new one (that’s immutable-ish, eh?).
Synchronization hell
The problem isn’t the tooling itself but the pattern in which it’s commonly used. After we set up groups of servers, we apply configuration to them over and over until a problem comes up. If we’re lucky, we can find the one machine that got out of sync for whatever reason, then trash and replace it. Maybe we even commit the cardinal sin of automation and take a quick peek at the ill member just to see if we can nurse it back to health. Modifications work at first, but after [x] changes and scaling to [y] servers, something inevitably goes wrong, and we’re left scratching our heads wondering what happened.
We fell into a trap because of the order of operations. We followed create > provision, and when the last step is provision it’s easy to just repeat that step over and over. If we flip it around to provision > create, the only way to introduce change is to create, and that’s the core idea behind immutability.

One way to accomplish immutability with servers is to use machine images. Tools like Packer allow us to provision an image before any server is ever created. Once an image is provisioned, it’s done and final. We create all new instances from the new image and delete the old ones. When it’s time to make another change, we provision a new image and repeat (a sketch of what this looks like follows below). We never have to worry about the one machine that didn’t get some environment variable set, for example; every machine in the group was created from the same image, every single time. If you’re running multiple apps in multiple environments, it may seem like a lot of work (or even impractical) to recreate the entire set of servers every time you want to make a change. With traditional deployment strategies I would agree, but lucky for us this is another area where Docker completely changed the game.
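Here is a rough sketch of that provision > create flow as a Packer HCL2 template. The AWS builder, region, base AMI, and package names are my own illustrative assumptions, not a prescription:

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
  }
}

# Each build stamps a brand-new, uniquely named image; nothing is mutated in place.
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "web" {
  ami_name      = "web-${local.timestamp}" # a new image for every change
  instance_type = "t3.micro"
  region        = "us-east-1"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
  ssh_username = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.web"]

  # All provisioning happens here, before any server exists.
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}
```

Once `packer build` finishes, the image is final; instances launched from it are replaced, never reconfigured.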
Immutable deployments
Docker containers are immutable by nature and allow us to package an application with everything it needs to run: code, runtime, configuration, even the OS. When it’s time to ship a new version, we build a new image and roll out all new instances in place of the old ones. Sound familiar? No more reaching into 20 servers and hoping your deployment scripts run successfully. Whether you’re running 5 or 500 instances, it’s all the same container. Thanks to solid orchestration tools like Kubernetes, the hard work of rolling out new versions is handled for us. Every change (code, config, or otherwise) produces a new container version.
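As a sketch (the image names, tags, and replica counts are illustrative), the packaging side is a Dockerfile that bakes in everything the app needs:

```dockerfile
# Everything the app needs ships inside the image: code, runtime, dependencies.
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

and the rollout side is a Kubernetes Deployment that pins each release to a uniquely tagged image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate # Kubernetes gradually swaps old pods for new ones
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2 # a new tag for every release
```

Shipping a new version means building `web:1.4.3`, bumping the tag, and applying the manifest; Kubernetes rolls the new pods out and the old ones away.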
Immutable infrastructure
If we’re packaging our apps as containers, we still need servers to run them on. But because the apps carry their own self-contained runtimes, the machines we build can be much more generic. Instead of building servers tailored to specific apps, which may require multiple groups configured differently, we build servers that are capable of running containers, and that’s it. Following the pattern above, we can use Packer to provision worker nodes for [insert scheduler here] and scale easily to any size we need. Because most of the configuration burden shifts to the container, maintaining the underlying resources becomes much simpler. When you do need to update the servers (which happens dramatically less often than before), you provision a new image and roll out new instances in place of the old ones.
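To illustrate how generic those nodes can be, here is a hypothetical build block in the style of the earlier Packer sketch (assuming a `source.amazon-ebs.worker` source defined the same way, and Docker as the runtime); the only thing it installs is a container runtime:

```hcl
# A generic worker-node image: no app code, no app config, just a container runtime.
build {
  sources = ["source.amazon-ebs.worker"]

  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y docker.io",
      "sudo systemctl enable docker",
    ]
  }
}
```

The same image backs every node in the group, whatever apps end up scheduled onto it.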
Reduce the config nightmare and gain consistency in deployments :)