Get started with Kubernetes

Kyle Ferguson

Kubernetes is an open source container orchestration platform that came out of Google and builds on lessons learned from their internal system, Borg. It has quickly become one of the best ways to run containers in production, and this post goes through the concepts and resources you need to get started.

Why containers?

This talk from Kelsey Hightower is a great introduction to the advantages of containers and Kubernetes. If you've never seen one of Kelsey's talks before, he's a fantastic presenter and I definitely recommend checking it out.

Create a cluster

Kubernetes is made up of master and worker nodes. Master nodes run core components such as the API server, controller manager, and scheduler. Workers run the kubelet (node agent) and power the applications we deploy with Kubernetes. All of the components that make up K8s are documented here.

Minikube provides a way to tinker with K8s locally if you're interested in seeing how container deployments work. You can do just about everything with Minikube that you can with a full K8s cluster, so it's a good place to start.
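
As a rough sketch (assuming minikube and kubectl are already installed), spinning up and poking at a local cluster looks something like this:

    # Start a local single-node cluster
    minikube start

    # Confirm the node is registered and the API server is reachable
    kubectl get nodes
    kubectl cluster-info

    # The core components mentioned above run in the kube-system namespace
    kubectl get pods -n kube-system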

When you're ready to get started with a live setup, one of the easiest ways to go is Google Cloud's GKE service. Container Engine is a managed Kubernetes offering, and you can create a small cluster for a reasonable price (~$50/mo for a couple of small servers). Recent changes to the platform also make it easier than ever to scale up with multiple node pools split across multiple zones.
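
For reference, a small GKE cluster can be created with the gcloud CLI along these lines; the cluster name, zone, and machine type below are placeholders, and exact machine types and pricing will vary:

    # Create a small two-node cluster (assumes the gcloud SDK is installed
    # and a project is configured; names here are placeholders)
    gcloud container clusters create starter-cluster \
        --zone us-central1-a \
        --num-nodes 2 \
        --machine-type e2-small

    # Fetch credentials so kubectl talks to the new cluster
    gcloud container clusters get-credentials starter-cluster --zone us-central1-a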

There are actually many ways to create a K8s cluster, and the right one depends on your needs. These docs describe the different options available, from custom deployments to turn-key third-party solutions.

Deploying applications

The unit of deployment in Kubernetes is a pod. Pods are one or more containers that share storage and networking, and they should be thought of as a single instance of an application. Pods are one of the big advantages of using Kubernetes, since some applications require more than one container to run. They also allow support containers, such as monitoring or logging agents, to run side by side with each instance.
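
As a quick illustration (the name and image here are placeholders), the simplest way to see a pod is to create a single-container one directly; multi-container pods with sidecar containers are defined in a pod manifest rather than from flags:

    # Create a single-container pod directly; support containers (sidecars)
    # would be added alongside it in a pod manifest
    kubectl run hello --image=nginx:1.25 --port=80

    # Every pod gets its own IP address, shared by all containers in the pod
    kubectl get pod hello -o wide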

Pods are ephemeral; when they die, they're gone. To better manage pods, they should be created using deployments. Deployments are a K8s resource that describes how pods should be created, how many replicas should always be running, and other important configuration. Creating pods this way means a replacement is created whenever a pod goes down or fails its health checks. Deployments also provide abstractions for version management and rollbacks.
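
As a minimal sketch (the deployment name and image are placeholders, not anything from a real setup), creating and managing a deployment from the command line looks roughly like this:

    # Create a deployment that keeps three replicas of an nginx pod running
    kubectl create deployment web --image=nginx:1.25 --replicas=3

    # The deployment labels its pods app=web; if one is deleted or fails
    # health checks, a replacement is created to maintain the replica count
    kubectl get pods -l app=web

    # Roll out a new image version, then roll back if something goes wrong
    # (the container created above is named nginx, after its image)
    kubectl set image deployment/web nginx=nginx:1.26
    kubectl rollout status deployment/web
    kubectl rollout undo deployment/web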

After applications are running, you'll need to route traffic to the available pods. That's where services come in. A service routes traffic to specific pods using labels: any pod with the labels specified in the service's selector will receive that service's traffic. This means that as pods come and go, the service watches and updates as necessary, always sending traffic to the correct location.
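
Continuing the same sketch, exposing the deployment above creates a service whose label selector matches the deployment's pods:

    # Create a service that selects pods carrying the deployment's app=web label
    kubectl expose deployment web --port=80 --target-port=80

    # The service tracks matching pods through its endpoints;
    # these update automatically as pods come and go
    kubectl get service web
    kubectl get endpoints web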

This is just a general overview to get you started, of course. The Kubernetes User Guide provides detailed explanations of these concepts and more, and Udacity also offers a free, in-depth course on deploying microservices with Kubernetes.
