You’ve almost certainly heard the buzz around Docker and containers by now. Recently I’ve been discussing container deployments and infrastructure with a lot of people and I thought it might be useful to collect some of those thoughts.
Containers are here to stay. The hype is not just hype: they’ve been battle-tested and proven efficient and reliable at every scale.
From development to CI to production, Docker has changed the game. We’ll look at the advantages of containers at every stage and at what end-to-end deployment looks like using Kubernetes for container management.
Better dev environment
Docker provides a flexible and consistent development environment. You can use Compose to quickly bootstrap everything an application needs, including the runtime, a database, a cache, and so on. Source code can be mounted into the container from your local machine, letting you develop locally while running the app in a controlled environment.
An example docker-compose.yml:
version: '2'
services:
  app:
    image: node:latest
    command: node index.js 3000
    working_dir: /var/www
    ports:
      - 3000:3000
    volumes:
      - ./:/var/www
  database:
    image: mysql:5.7
    ports:
      - 33060:3306
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: mydb
  cache:
    image: redis:latest
    ports:
      - 6379:6379
In this example we set up a Node development environment that mounts the current directory into the container at /var/www. We added MySQL for a database and Redis to use as a cache.
Compose makes it easy and fast to build out an entire development environment, and after the initial images are downloaded it takes seconds to spin up and down. No more clunky, slow VMs to manage!
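The day-to-day workflow against the compose file above is just a couple of commands:

# start the whole environment in the background
docker-compose up -d

# follow the app’s logs while developing
docker-compose logs -f app

# tear everything down when you’re done
docker-compose down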
Consistent and predictable
When it’s time to ship, we build a new container image. A Dockerfile tells Docker how to build your container. You can extend a base image from Docker Hub for the runtime you need:
FROM node:6.10
ADD ./app /var/www
WORKDIR /var/www
CMD node index.js
or use a more generic image and customize it as needed:
FROM debian:jessie
ADD ./app /var/www
RUN apt-get update && ...
Just run docker build and docker push to build the new image and push it up to a registry.
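Concretely, that flow looks something like this (the registry and image name here are placeholders):

# build an image from the Dockerfile in the current directory
docker build -t registry.example.com/my-app:1.0.0 .

# push it up to the registry
docker push registry.example.com/my-app:1.0.0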
One of the biggest advantages of Docker is this imaging process. Containers are packaged and built with everything they need to run. Compared with traditional methods of configuring a VM first and then deploying, this allows for much more consistent and predictable deployments.
Dev-prod parity
It can take quite a bit of effort to mimic an entire production setup locally. With Docker it’s trivial: using tools like Docker Compose and Minikube (for Kubernetes) you can re-create the entire topology of your stack.
Docker also makes it easier to debug versions that are already deployed. The app is completely packaged as a container, so you can simply pull down that container version and inspect the exact environment (including the OS) that’s currently running.
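For example (again with a placeholder image name):

# pull the exact image version that’s running in production
docker pull registry.example.com/my-app:1.0.0

# open a shell inside it to inspect the exact environment
docker run --rm -it registry.example.com/my-app:1.0.0 /bin/sh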
Flexible CI
If you’re building multiple apps (or dare I even say microservices), building out CI and CD pipelines for different environments can be tedious and time consuming. With Docker most of that is abstracted away, allowing for a more consistent CI process that leverages specific containers for different runtimes. Adding test dependencies for integration testing is just as easy as it was for the local environment above: you can run end-to-end tests against real database and cache services. The entire testing process becomes extremely flexible, and leveraging containers for CI can improve both the speed and the confidence of a development team.
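For instance, the same compose file from earlier can back an integration test run in CI (assuming the app defines an npm test script):

# start the backing services, then run the test suite in a
# throwaway app container wired up to them
docker-compose up -d database cache
docker-compose run --rm app npm test

# clean up all of the service containers afterwards
docker-compose down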
Rapid deployments, automagically HA
When I first started looking at Docker I remember having a conversation with one of my colleagues about it. At that time it was easy to build and work with containers and images, but there wasn’t any reliable way to manage deployments in the wild. He told me something along the lines of:
the first platform that lets someone say “here’s a container, this is what it needs and how many I want, go!” is going to be big.
That platform is Kubernetes. It has emerged as one of the most powerful and community-approved solutions for running containers in production. With Kubernetes, deployments can be described as code:
An abbreviated example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: web
          image: my/service-nginx:1.0.1
          ports:
            - containerPort: 80
        - name: app
          image: my/service-node:1.0.1
          ports:
            - containerPort: 3000
In this abbreviated example we create a Deployment resource that specifies a service made up of two containers. We specify that we want 10 replicas, and when we’re ready we just run kubectl create -f deployment-file.yml. Kubernetes will inspect the cluster as a whole and decide where to run the pods (a pod is just one or more containers that make up an app).
If our K8s cluster has nodes in multiple availability zones, containers will automatically be spread out across those zones to increase stability in the case of failure. After we create the deployment, we can create a Service on top that references the containers to add DNS and load balancing automatically.
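A minimal Service for the deployment above might look like this (it selects pods by the app: my-service label from the deployment’s template):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
    - port: 80
      targetPort: 80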
Deploying apps with Kube is as easy as defining what it looks like to run the container and pressing go! In addition to the basic container config, pods can be configured with persistent volumes (backed by your cloud provider), resource requests for CPU/memory, liveness/readiness checks, and more.
Scale in seconds
In addition to autoscaling, you can scale up or down in seconds using kubectl scale. Specify the deployment, how many replicas you want, and bam :)
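Against the deployment from the example above:

# scale my-service from 10 replicas to 20
kubectl scale deployment my-service --replicas=20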
Higher efficiency
Kubernetes will try to pack containers across the available nodes, utilizing resources as efficiently as possible. Instead of provisioning groups of machines that may not be fully utilized all of the time, containers can be scheduled together across a smaller number of machines, keeping costs down.
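The scheduler bin-packs based on the resource requests you declare per container, so it pays to set them. A sketch of what that looks like inside a pod spec (the numbers here are illustrative):

containers:
  - name: app
    image: my/service-node:1.0.1
    resources:
      requests:    # what the scheduler reserves on a node
        cpu: 100m
        memory: 128Mi
      limits:      # hard caps enforced at runtime
        cpu: 500m
        memory: 256Mi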
Self healing at every level
Controllers in Kubernetes are responsible for keeping pods up and running. If a pod goes down, the controller will spin up a new one, always maintaining the specified number of replicas.
Health checks can be configured for containers as well. In Kubernetes, health checks are either “liveness probes” or “readiness probes”. If a liveness probe fails, the container is considered dead and will be deleted and recreated. A failing readiness probe, on the other hand, marks the container as temporarily unavailable and diverts traffic away from it (at the Service level) until it recovers. This kind of flexibility is one of the reasons Kube has become so popular.
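Probes are declared per container. For example, HTTP probes for the web container from earlier might look like this (the /healthz and /ready endpoints are assumptions about the app):

containers:
  - name: web
    image: my/service-nginx:1.0.1
    livenessProbe:
      httpGet:
        path: /healthz   # assumed health endpoint; restart the container if it fails
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /ready     # assumed readiness endpoint; pull the pod out of rotation if it fails
        port: 80
      periodSeconds: 5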
In the event of an entire node failing, Kubernetes will recognize the unhealthy node and reschedule all of its pods onto healthy ones. Paired with cloud-provider health checks and autoscaling, the entire cluster, from infrastructure to application deployments, can become a giant self-healing machine.
A platform for all the things
With a Kubernetes cluster at your disposal, deploying services and apps in-house becomes super easy. The wide variety of images available for Docker makes it possible to deploy things like Jenkins, GitLab, Metabase, and Prometheus just as easily as your own apps.
No cloud lock-in
Using Kubernetes allows you to design your infrastructure once and run it anywhere. You can run Kubernetes on GCP, AWS, Azure, or any other cloud provider, as well as on-premise with OpenStack, VMware, etc. Federation even allows you to run on-premise and multi-cloud clusters at the same time and interact with them as one giant coordinated supercomputer!
Infrastructure as code
Last but certainly not least, the combination of Docker and Kubernetes allows you to maintain every aspect of your stack as code. Dockerfiles describing how to build containers and Kubernetes definitions for everything from deployments to network config can be tracked in source control and collaborated on by teams.