Continuous integration and deployment with GitHub, CircleCI, and Kubernetes on Azure

Deploying an application is traditionally the most challenging part of the software delivery process. No two machines are the same, the person who usually does the deployments is on vacation, and the risk of disrupting production is ever-looming. Without proper automation and safety checks, deployment can be a daunting process.

In the modern, containerized world of applications, deployments can be more easily automated, with more safety checks, and with far fewer variables than before. Rather than deploying to a multitude of machines, each with their own configurations and dependencies, we can simply deploy our self-contained application as a container, and be confident it will run.

To achieve this, processes known as continuous integration and continuous delivery (often abbreviated as CI and CD, respectively) are frequently used to automate the process of building, testing, and sometimes even deploying applications.

Continuous delivery implies that every change made to the code will be — after tests have succeeded — immediately deployable. Taken one step further, continuous deployment ensures that every change is immediately deployed.

This guide outlines the process of hosting your code on GitHub, building and testing it using CircleCI, and eventually deploying a Docker image to a Kubernetes cluster, either fully or semi-automatically.

Obligatory note: the author of this article is in no way affiliated with any of the technology providers mentioned in this guide.

The goal of this guide is to establish a baseline CI/CD implementation that works well for small teams, while still allowing you to scale as your product grows. By the end of this guide, you will have:
  • Created a source code repository
  • Set up automated builds and testing for your project
  • Set up automated Docker builds for your project
  • Connected to your Kubernetes cluster
  • Set up automated deployments using Keel for your project

Or you’ll get your money back. We promise.

To follow along, you will need:
  • GitHub Account
  • Docker Container Registry*
  • Kubernetes cluster*
  • Basic understanding of Docker, Kubernetes and YAML

*In this guide, we’ll be using Azure for both the container registry, and as our Kubernetes provider. If you don’t have these already, you can get $200 of free credit to try out Azure.


You can use any existing GitHub repository, or you can grab the example application we’ll be using for this guide over at:

You can use either a public or a private repository. If you are using a private repository, we will set up authentication with CircleCI later on.

The repository plays a key role in our integration process: it is where the actual work is stored. Every single time a pull request is merged, we’ll be running our integration and deployment processes.

Because of this, GitHub also serves as a change and audit log. Instability can be traced back to an individual commit, and rolling back becomes as simple as using git revert or git reset to the last known working commit, rebuilding, and rolling out a stable release.
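That rollback flow, sketched as shell commands (the commit hashes and branch name are placeholders):

```shell
# Revert the offending commit with a new commit (safe on shared branches)
git revert <bad-commit-sha>
git push origin master

# Or: reset hard to the last known working commit (rewrites history)
git reset --hard <last-good-commit-sha>
git push --force-with-lease origin master
```

Either way, the push triggers a fresh integration run, so the stable release goes through the same build and test pipeline as any other change.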


CircleCI provides fully-fledged continuous integration as a service. While for the purpose of this article we will only be using CircleCI to build and test our software, you can use it for a lot more:

CircleCI supports your application from build to deployment

You can get started with CircleCI for free. All you need is the GitHub account you already have.

After you have logged in, find the Add Project button on the left hand side of the dashboard. Once you are there, look for the project you are configuring continuous integration for, and hit the Set Up Project button.

On the project configuration page, you will see a handful of technologies and languages supported out of the box:

If the platform you’re building for is included, you can try selecting it in this screen and have CircleCI set up the configuration for you.

Since we use C# in our example code, and CircleCI doesn’t support it by default, we are going to have to roll our own configuration file.

For configuration, CircleCI expects a config.yml to exist in /.circleci/config.yml relative to your repository’s root. There are a lot of things you can configure for your project, but for now we’ll keep it simple.

To build our example ASP.NET Core application, we use a configuration file along the lines of:
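A minimal sketch of such a configuration, assuming a single build job (adapt the image and commands to your stack):

```yaml
version: 2
jobs:
  build:
    docker:
      # The build image; pick one matching your technology stack
      - image: microsoft/dotnet:2.2-sdk
    steps:
      # Pull the source code from GitHub
      - checkout
      - run:
          name: build
          command: dotnet build
```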

Couple of things to note here. First off, the image we are using is microsoft/dotnet:2.2-sdk. This depends on the technology stack you are using. For instance, if you are building a React application, you will want to use a Node image here instead.

The checkout step is mandatory — it retrieves your source files from the project’s source control. We named the first step build, and in it we run the dotnet command to build our solution. Depending on whether our project compiles, the build will either succeed or fail.

Under normal circumstances, you would also include your unit tests as a separate step to run in your config.yml. Because our example application is quite barren, we don’t really have a whole lot to test. If we did, the complete config.yml would look something along the lines of:
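A sketch of that extended configuration, with a separate test step added (this assumes your solution contains a test project that dotnet test can discover):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: microsoft/dotnet:2.2-sdk
    steps:
      - checkout
      - run:
          name: build
          command: dotnet build
      # Run the unit tests; a failing test fails the build
      - run:
          name: test
          command: dotnet test
```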

Once you have set this up and your config.yml exists in your repository on GitHub, hit the “Start Building” button, and wait for the status to turn green:

If it didn’t go green, you may need to fix your configuration file. There is extensive documentation available on the CircleCI 2.0 format.

Now that we’ve successfully built and tested our project, it’s time to build a Docker image. Before we can docker build our image, we’ll need to create a Dockerfile. This file contains all the instructions required for the Docker daemon to build an isolated image containing our application, as well as any runtime dependencies.

The content of the Dockerfile will vary hugely depending on the technology you are using, and your preferences. You should look up examples for your specific application’s stack. There is documentation available for NodeJS applications, as well as this GitHub repository containing tested Docker builds for various technology stacks.
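For reference, a multi-stage Dockerfile for an ASP.NET Core application might look roughly like this (the assembly name ExampleApp.dll is a placeholder; substitute your own project's output):

```dockerfile
# Build stage: use the full SDK image to compile and publish
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the ASP.NET Core runtime, keeping the image small
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "ExampleApp.dll"]
```

The two-stage approach keeps the SDK and intermediate build output out of the final image, which only needs the runtime and the published application.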

Once you have your Docker build working, we can add a step to the CircleCI integration process that builds our Docker image as part of the build, and pushes it to our registry. This assumes that you have an available registry, such as Azure Container Registry, Docker Hub, or similar.

We will extend our config.yml to include this step. As a new job, add the following:
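A sketch of such a job, using the environment variables explained further down (the variable names are conventions, not requirements):

```yaml
  docker-image:
    docker:
      - image: docker:stable
    steps:
      - checkout
      # Provision a remote environment in which docker commands can run
      - setup_remote_docker
      - run:
          name: build-and-push
          command: |
            docker build -t $DOCKER_REGISTRY/$IMAGE:$TAG .
            echo $DOCKER_REGISTRY_PASSWORD | docker login $DOCKER_REGISTRY -u $DOCKER_REGISTRY_USER --password-stdin
            docker push $DOCKER_REGISTRY/$IMAGE:$TAG
```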

Make sure docker-image sits at the same indentation level as build.

You’ll notice a bunch of $variables being used in the configuration file. We strongly encourage you not to include any credentials in your Git repository, so we opt to use CircleCI’s variables here. Of course, we will need to assign these variables.

This can be achieved through a context, or by simply specifying environment variables for your build. The former is preferred if you will be using the variables across projects (for instance, registry credentials); the latter if your variables are specific to this project.

I think the variable names should be mostly self-explanatory, but in case they aren’t, here is a small rundown:

  • $IMAGE — the name of the resulting image. This is what we’ll use to deploy the image later on
  • $TAG — the tag, e.g. dev, or latest, or cuddly-octo-sniffle (thanks GitHub, knew I could count on you for a short and memorable name)
  • $DOCKER_REGISTRY — the URI of your registry
  • $DOCKER_REGISTRY_USER — the user that will be used for pushing the image to the registry
  • $DOCKER_REGISTRY_PASSWORD — the password for the user you specified. Note: some registries provide an admin account with temporary credentials or access keys. These are excellent for this purpose, as you don’t end up using a specific user’s credentials, but a service account’s instead

Of course, any of these variable names can be changed to your liking.

After you have added this, all that’s left is getting your build to succeed. If everything went well, the image and tag you specified should show up in your registry, and is ready to deploy.

At this point, you have successfully implemented continuous delivery. All that’s left is setting up the actual deployments to your cluster.


For this section, you’ll need a running Kubernetes cluster. If you opted to go with Azure, you can set up a cluster in the control panel fairly easily. Otherwise, obtain a cluster from a provider of your choice.

All commands assume a Linux operating system without any tools installed. It also assumes a Kubernetes cluster as provisioned by Azure. If you are using a different provider, the connection scenario may vary.

Because this section contains a lot of system management, it tends to turn into a command line fiesta. I tried to keep it to a minimum, but there’s no two ways about it unfortunately.

Kubernetes can be managed through a command line utility named kubectl. It can list, inspect, and manipulate virtually any resource on the cluster.

Even though we will not be using kubectl directly in this guide, the Azure CLI uses it under the hood. On Linux, installing it is fairly simple:
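One common way to install kubectl on Linux, assuming curl is available, is to fetch the latest stable binary directly:

```shell
# Download the latest stable kubectl release for linux/amd64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install it into the system path with the correct permissions
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```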

If you are using a different operating system, the documentation contains instructions specific to your OS.

Because Azure manages access to the cluster, we will need to install the Azure CLI on our system. Below are the instructions for Debian and Ubuntu, instructions for macOS and Windows are available too.

First, add the Azure repo to the apt sources list:
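At the time of writing, the commands documented by Microsoft look like this:

```shell
# Determine the distribution codename (e.g. bionic, buster)
AZ_REPO=$(lsb_release -cs)

# Register the Azure CLI package repository
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
    sudo tee /etc/apt/sources.list.d/azure-cli.list
```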

If you are getting a “command : not found” error, try running the commands line-by-line.

Then, get the Microsoft signing key:

curl -sL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

And finally, install the CLI using apt-get:

sudo apt-get update 
sudo apt-get install apt-transport-https azure-cli

Next, we’ll need to log in to Azure from our command line. Run az login. You’ll receive a link and a code that you can use to authenticate your command line. If successful, you should see a small JSON representation of your Azure user.

Now that we’re successfully logged in, we can obtain the credentials for our Azure cluster. Make sure you replace the placeholders with your resource group name and the name of your cluster:

az aks get-credentials --resource-group <insert resource group name> --name <insert kubernetes cluster name>

Finally, you can establish a proxy connection to your cluster using the command:

az aks browse --resource-group kensten-k8s-dev --name kensten-dev

By default, the Kubernetes Dashboard is running on all Azure provisioned clusters and should now appear in a browser window.

We could use kubectl to deploy to our cluster. While this is fine for development and testing workloads, I would strongly advise against it for production workloads: we want to limit human interaction with the production deployment process to an absolute minimum, so as to reduce the risk of costly mistakes.

There are various tools that help you automate your deployments. Keel is one of the simplest and smallest. Size matters.

We’ll be deploying Keel without Tiller and without RBAC for the purpose of this guide. The official Keel GitHub repository provides ample resources should you want to install Keel differently.

We’ll be using a slightly modified version of the deployment-norbac.yaml file in the official repository, with the environment variables for Google Cloud Engine, AWS, and the various notification options omitted:
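A trimmed-down sketch of what that keel.yaml contains (the image tag and port are assumptions; check the official repository for the current manifests):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: keel
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keel
  namespace: keel
  labels:
    name: keel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keel
  template:
    metadata:
      labels:
        app: keel
    spec:
      containers:
        - name: keel
          image: keelhq/keel:latest
          imagePullPolicy: Always
          command: ["/bin/keel"]
          env:
            # The namespace Keel itself runs in
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - containerPort: 9300
```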

You can either provision these resources using kubectl, or alternatively from the “Create” functionality in the top right corner of the dashboard.

In case you are using kubectl, run kubectl create -f keel.yaml to deploy Keel to your cluster.

After you have deployed these resources, wait for the Keel pod to wake up and make sure all related resources are operational:

Keel in action
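If you prefer the command line over the dashboard, a quick way to verify that the pod is running:

```shell
# List any Keel-related pods, regardless of namespace
kubectl get pods --all-namespaces | grep keel
```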

Note: you could also deploy Keel through Helm, but the above method is slightly simpler. If your cluster is running Tiller, you can grab the chart for Keel from the Charts directory in the GitHub repository and simply call helm upgrade:

helm upgrade --install keel --namespace keel keel/

You should see a message saying the release was successful, and the resources should be created in the keel namespace.

Keel relies on your resource configuration to determine when to deploy or update a resource.

For instance, if we create a deployment for a simple nginx:latest image, we could add the following labels and annotations to indicate when Keel should update the deployment:

metadata:
  name: "nginx"
  labels:
    keel.sh/policy: force
    keel.sh/trigger: poll
  annotations:
    keel.sh/pollSchedule: "@every 10m"

This configuration polls the Docker registry every 10 minutes, and checks for updates to the image. Of course, there are more options available. See the Keel documentation for all possible combinations.

By default, it is advised to version your images using semantic versioning, e.g. my-app:0.1 or my-app:1.1.0. When Keel detects that you follow semantic versioning, it will use that as its primary means to determine whether a pod should be updated or not. If you choose not to follow semantic versioning, for instance in my-app:dev, Keel will compute the SHA digest of your image instead.

If we consider a continuous deployment scenario, it would make sense to have Keel always deploy the latest tag of our images. Should we prefer a scenario where perhaps manual approvals or testing is involved, it would most likely make more sense to follow semantic versioning instead.

The above configuration represents a basic deployment that is kept up to date by Keel whenever a newer image with the latest tag is released. While functional, it is advised you tailor it to fit your specific needs.


You should now have a fully functional continuous integration and delivery (or deployment) setup.

Your code is automatically built, tested, and turned into a Docker image as it hits the repository. Once successfully tested, it is either directly deployable to your Kubernetes cluster, or Keel will deploy it for you automatically, depending on your configuration.

It is worth noting that Keel can assist you in deploying your application far better than the setup we have running here. For instance, you can set it up to require approvals before a new version is deployed. I recommend checking out the documentation and the available features.

If you are running into problems in any of the steps, do not hesitate to ask and I will be glad to help out where I can.

Thank you for reading.
