
Kubernetes Migration

Level - Advanced. Read time - under 10 minutes.


How I deployed a workload to a Kubernetes cluster on Digital Ocean using Terraform and Helm -> https://github.com/blairg/fellrace-finder-server/tree/kubernetes_ready

What did I do?

I've dabbled with Kubernetes a little commercially and have been learning and playing with it for well over a year now. My employer also generously gifted me attendance at the KubeCon conference in Copenhagen last year, which was excellent. So, with all the knowledge I've gained and a desire to put it into practice, I had a crack at migrating an API I currently run on Heroku. It powers this application, which is a results service for the fell running community.

The application I attempted to migrate can be found here. It is a Node.js TypeScript Express application. A requirement for migrating to the world of Kubernetes is that your application has to be containerised. As I'm very comfortable with Docker and have used it for years, I was already deploying to Heroku with Docker. The next step was to establish the following: -

  • Cloud vendor
  • Cluster creation tooling
  • Package manager

Cloud Vendor


This is an exciting time to be alive. We have powerhouses like AWS, Azure and Google Cloud. Then we have what I'd describe as second-tier but very credible offerings such as Alibaba, Oracle, IBM or Huawei. My personal favourite is AWS, probably through reputation and familiarity. It seems to have everything you need and I know my way around it better than the others. But this post is not about AWS or my adoption of it on this project. Read on for what I chose.

Digital Ocean


In May this year at KubeCon/CloudNativeCon, Digital Ocean announced that their managed Kubernetes offering had progressed to general availability. To read more, and to receive a better sales pitch than I can offer, head here. As I wanted to use a managed Kubernetes offering and I'm already a Digital Ocean customer (running this blog on one of their droplets), I thought why not give their offering a look in.

Cluster Creation


To initialise my cluster I could easily have used the Digital Ocean UI and clicked 'create new cluster', which works perfectly well and is fine for a quick experiment and play. But I wanted to put my existing knowledge of Terraform into practice and do IaC (Infrastructure as Code). To my delight, Digital Ocean have APIs for all their services and they also have a Terraform provider. To see how to create a Digital Ocean Kubernetes cluster with Terraform, look here.

All I had to do to create a cluster was to define the following HCL: -

# Configure the DigitalOcean Provider
provider "digitalocean" {
  token = "${var.do_token}"
}

# Create the Kubernetes Cluster
resource "digitalocean_kubernetes_cluster" "bgtest-cluster" {
  name    = "bgtest-cluster"
  region  = "lon1"
  version = "1.13.2-do.0"

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    node_count = 1
  }
}

A variables file was also required for the Digital Ocean token. Get your token here: -

variable "do_token" {}

Then, to create the cluster, I had to run the following Terraform commands: -

terraform init && \
terraform plan -out=tfplan -input=false \
  -var "do_token=<MY_TOKEN>" && \
terraform apply "tfplan"
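The Helm deployment later on needs the new cluster's connection details. One way to hand them over is to expose them as Terraform outputs; here is a sketch along those lines, assuming the kube_config attributes exposed by the Digital Ocean provider (certificates come back base64 encoded, hence the base64decode calls): -

# Expose the cluster's connection details for the Helm and Kubernetes providers
output "cluster_host" {
  value = "${digitalocean_kubernetes_cluster.bgtest-cluster.kube_config.0.host}"
}

output "cluster_client_certificate" {
  value = "${base64decode(digitalocean_kubernetes_cluster.bgtest-cluster.kube_config.0.client_certificate)}"
}

output "cluster_client_key" {
  value = "${base64decode(digitalocean_kubernetes_cluster.bgtest-cluster.kube_config.0.client_key)}"
}

output "cluster_ca_certificate" {
  value = "${base64decode(digitalocean_kubernetes_cluster.bgtest-cluster.kube_config.0.cluster_ca_certificate)}"
}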


Package Manager

What good is a cluster running no workloads? Not very useful, in all honesty. The next step was to choose a method to package up my application for deployment to the cluster. As you can probably gather from the above logo, I opted for Helm. My reasoning for choosing Helm was that it seemed the most popular option and I had prior experience with it. I didn't explore the alternatives, as Helm feels very suitable to me and I was able to achieve what I desired: to deploy my application into the Kubernetes cluster and apply environment variables whilst deploying.

In hindsight, I'm not sure I'd advocate this method for deploying Helm charts to a Kubernetes cluster, but I rolled with using Terraform. I struggled to get the right recipe to begin with; all my problems stemmed from not having the correct RBAC in place.


In order to deploy my application with Helm, I had to do some preliminary work: -

  • Installed the Helm CLI on my Mac using the official guide
  • Created a Helm chart within my repo by running the following command: helm create fellrace-finder-server
  • Edited the values.yaml to set things like image.repository (my image lives in public Docker Hub for simplicity), ingress settings, resource limits (very important), the liveness and readiness probes and some environment variables. For a detailed explanation head here

Once I had created the Helm chart within my repo, the next step was to deploy it into the cluster. Rather than running a helm install ... command, I opted for using Terraform. I wouldn't recommend this, but if you do go this way, here is the implementation.

In my repo here you will find the Helm deployment of my chart. First, I had to create the Helm server-side agent, Tiller, to communicate with the cluster, as per below: -

provider "helm" {
  install_tiller  = true
  service_account = "tiller"
  namespace       = "kube-system"

  kubernetes {
    host                   = "${var.cluster_host}"
    client_certificate     = "${var.cluster_client_certificate}"
    client_key             = "${var.cluster_client_key}"
    cluster_ca_certificate = "${var.cluster_ca_certificate}"
    config_context         = "${var.cluster_config}"
  }
}

provider "kubernetes" {
  host                   = "${var.cluster_host}"
  client_certificate     = "${var.cluster_client_certificate}"
  client_key             = "${var.cluster_client_key}"
  cluster_ca_certificate = "${var.cluster_ca_certificate}"

  load_config_file = false
}

resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  # api_group has to be empty because of a bug:
  # https://github.com/terraform-providers/terraform-provider-kubernetes/issues/204
  subject {
    api_group = ""
    kind      = "ServiceAccount"
    name      = "tiller"
    namespace = "kube-system"
  }
}

After Tiller was installed I was able to deploy my application into the cluster: -

# Install App with Helm
resource "helm_release" "fellrace_finder_server" {
  name       = "fellrace-finder-server"
  chart      = "fellrace-finder-server"
  repository = "https://raw.githubusercontent.com/blairg/fellrace-finder-helm/master/"
  values     = ["${file("../k8s/fellrace-finder-server/values.yaml")}"]
  version    = "0.1.0"
  timeout    = 600

  set {
    name  = "image.command"
    value = "start"
  }

  set {
    name  = "image.repository"
    value = "blairguk/fellrace-finder-server"
  }

  set {
    name  = "image.tag"
    value = "latest"
  }

  set {
    name  = "service.type"
    value = "LoadBalancer"
  }

  set {
    name  = "environment.mongo_url"
    value = "${var.mongo_url}"
  }
}
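
The var.* references above need matching declarations, just like do_token did earlier. A minimal variables file for this part might look like the following (the variable names are taken straight from the references above; leaving them without defaults, so values are supplied at plan time, is my own choice): -

# Declarations for the variables referenced by the providers and release above
variable "cluster_host" {}
variable "cluster_client_certificate" {}
variable "cluster_client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_config" {}
variable "mongo_url" {}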

One thing I omitted from the preliminary Helm work was that I created an additional repository to house my packaged Helm chart. You can see the package reference here. To package my Helm chart I ran the following commands and pushed the results to the aforementioned repository: -

  • helm package fellrace-finder-server
  • helm repo index fellrace-finder-server/ --url https://blairg.github.io/fellrace-finder-helm/

The End

Enough waffling. If anyone has any questions or suggestions, please leave them below. This is a basic example using Kubernetes, Digital Ocean, Terraform and Helm. Hopefully you can get some value from what I've shared for your own endeavours. Happy Kuberneting!