How to Perform Canary Deployments with Istio

Introduction

Canary deployments are a way to introduce new versions of a service gradually. An update is rolled out in stages to a small percentage of users, allowing developers to see how it performs before making it available to everyone.

Kubernetes can perform canary deployments natively. The downside of that approach is that limiting traffic to the canary must be done manually, by adjusting the replica ratio between the two versions. A service mesh such as the open-source Istio streamlines the process by decoupling traffic distribution from replica counts.
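To illustrate the coupling described above, here is a sketch of a purely native canary setup (all names and images are hypothetical): both Deployments back the same Service selector, so approximating a 90/10 traffic split requires running nine stable replicas for every canary replica.

```yaml
# Native Kubernetes canary sketch: traffic split is tied to replica counts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable        # hypothetical name
spec:
  replicas: 9               # nine stable replicas...
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
      - name: myapp
        image: example/myapp:stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1               # ...for every one canary replica = ~10% of traffic
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
      - name: myapp
        image: example/myapp:canary
```

With Istio, as shown later in this tutorial, the same split is expressed as a routing rule, so each version can keep whatever replica count its load requires.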

In this tutorial, you will learn how to deploy the canary version of an app in an Istio-enabled cluster and set up Istio to control traffic routing.

Prerequisites

- A Kubernetes cluster with Istio installed
- The kubectl command-line tool configured to communicate with the cluster
- Docker installed and a Docker Hub account

Step 1: Build the Docker Image and Container for the Canary Build

To start deploying the canary build of your app, first create a Docker image containing the version you want to deploy. Go to the directory containing the files necessary for building the image. The example uses an app called test-canary, stored in a directory of the same name:

cd test-canary
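The build step below assumes a Dockerfile exists in this directory. If you do not have one yet, a minimal sketch for a Node.js app listening on port 8080 might look like the following (the file names and base image are illustrative, not taken from the example app):

```dockerfile
# Minimal illustrative Dockerfile for a Node.js app listening on port 8080
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```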

Use the docker build command to build the Docker image. Follow the command with your Docker Hub username and the image name:

docker build -t [dockerhub-username]/test-canary .

The output confirms the successful image creation:

Building a docker image for the canary version of the application

Use the docker images command to see the list of your images:

docker images
Using the docker images command to confirm the successful creation of the image

Next, use the docker run command to create a container from the image you built. Give the container a name and choose a port for access:

docker run --name [name] -p [port]:8080 -d [dockerhub-username]/test-canary

If the operation is successful, the system outputs the full ID of the newly created container:

Building the container with the created image using the docker run command

Use the docker ps command to check the running containers:

docker ps
Confirming the success of the container creation using the docker ps command

Now access the app in a browser via the port you assigned to the container:

http://localhost:[port]

The browser displays the content of the app:

Testing the canary app container in Firefox by navigating to the assigned port number

After you confirm that the app is functioning, stop the container with the docker stop command. Add the container ID to the command, which you can copy from the first column of the docker ps output:

docker stop [container-id]
Stopping the container after making sure it worked properly, using the docker stop command

Finally, to push the image to your Docker Hub account, log in to Docker Hub using the command line:

docker login -u [dockerhub-username]

The system asks for the password. Type in the password and press Enter:

Logging in to Docker Hub using the command line

Now push the image with docker push:

docker push [dockerhub-username]/test-canary
Pushing the application image to Docker Hub using the docker push command

Step 2: Modify the App Deployment

To add the canary deployment to your general app deployment, open the file containing the service and deployment specifications in a text editor.

The example uses an application manifest called app-manifest.yaml:

nano app-manifest.yaml

The manifest should look similar to this:

apiVersion: v1
kind: Service
metadata:
  name: nodejs
  labels:
    app: nodejs
spec:
  selector:
    app: nodejs
  ports:
  - name: http
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
        version: v1
    spec:
      containers:
      - name: nodejs
        image: markopnap/test-prod
        ports:
        - containerPort: 8080

The example manifest above describes the production version of a Node.js app, whose container image is stored at markopnap/test-prod. To include the canary version of the application, start by editing the Deployment section of the file, adding -v1 to the name of the deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-v1

Now append another Deployment section to the end of the file, this time with specifications for the canary build:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-v2
  labels:
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
        version: v2
    spec:
      containers:
      - name: nodejs
        image: markopnap/test-canary
        ports:
        - containerPort: 8080

Note: Remember to replace the example Docker Hub image locations with your own.

Once you finish editing the file, save it and update the cluster configuration using kubectl apply:

kubectl apply -f app-manifest.yaml

Step 3: Configure Istio Virtual Service

Create a new YAML file to store the Istio configuration. The example uses a file called istio.yaml, but you can give it any name:

nano istio.yaml

If you previously used Istio for the deployment of the app's production version, the file already exists and should look similar to this:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nodejs-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nodejs
spec:
  hosts:
  - "*"
  gateways:
  - nodejs-gateway
  http:
  - route:
    - destination:
        host: nodejs

The file has two sections, defining the Gateway and VirtualService objects, respectively. To introduce both versions of the application and set the routing rule for their distribution to users, modify the http section at the bottom. The section must contain two destinations with different subsets and weights:

http:
  - route:
    - destination:
        host: nodejs
        subset: v1
      weight: 90
    - destination:
        host: nodejs
        subset: v2
      weight: 10

The weight parameter tells Istio what percentage of the traffic should be routed to a specific destination. In the example above, 90 percent of the traffic goes to the production version, while 10 percent is directed to the canary build.
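To build intuition for what the weights mean, the following stand-alone bash sketch (a simulation for illustration, not actual Istio code) routes 1,000 simulated requests, sending each one to v1 with 90 percent probability:

```shell
#!/usr/bin/env bash
# Simulation of 90/10 weighted routing: each request is routed
# independently, so the split holds on average, not per-request.
v1_count=0
v2_count=0
for _ in $(seq 1 1000); do
  # Pick a number 0-99; route to v1 when it falls below the weight (90).
  if [ $((RANDOM % 100)) -lt 90 ]; then
    v1_count=$((v1_count + 1))
  else
    v2_count=$((v2_count + 1))
  fi
done
echo "v1: $v1_count requests, v2: $v2_count requests"
```

Over many requests the observed split converges to the configured weights, but over a handful of page refreshes, as in Step 4, the ratio can deviate noticeably.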

After you edit the Virtual Service section, append the following lines to the end of the file to create a Destination Rule:

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nodejs
spec:
  host: nodejs
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

The DestinationRule defines the two subsets of the service, matched by the version label, which allows the VirtualService to route incoming traffic to the specified version of the application.

Save the file and use kubectl apply to activate it:

kubectl apply -f istio.yaml

Step 4: Test the Canary Deployment

The rules applied to the Istio configuration in the previous step now route traffic to your production and canary deployments. To test this, access the application using the external IP of the istio-ingressgateway service, which Istio exposes as a load balancer.

Note: If you are using minikube, emulate the load balancer by opening another terminal window and issuing the minikube tunnel command. Otherwise, the external IP field in the next step will always show as pending.

Look for the istio-ingressgateway service in the list of services available in the istio-system namespace. Use kubectl get to list the services:

kubectl get svc -n istio-system
Checking the external IP address of the istio ingressgateway

Copy the istio-ingressgateway external IP address into your browser's address bar:

http://[ingressgateway_ip]

The browser will likely show the production version of the application. Hit the Refresh button multiple times to simulate some traffic:

Checking the Istio configuration by browsing to the ingressgateway IP in Firefox

After a few refreshes, you should see the canary version of the app:

The canary page appearing after a couple of refreshes at the ingressgateway IP in Firefox
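You can also script this check instead of refreshing manually. Assuming each version of the app returns a distinguishing string, capture a batch of responses and tally them. In the sketch below the capture is hard-coded sample data standing in for real curl output, since a live gateway IP is needed for the actual requests:

```shell
# In a live cluster you would populate responses.txt with something like:
#   for _ in $(seq 1 10); do curl -s http://[ingressgateway_ip]/ >> responses.txt; done
# Here we use hard-coded sample data so the tally step runs stand-alone.
printf '%s\n' v1 v1 v2 v1 v1 v1 v1 v2 v1 v1 > responses.txt

# Count how many responses each version served
sort responses.txt | uniq -c

v1_hits=$(grep -c '^v1$' responses.txt)
v2_hits=$(grep -c '^v2$' responses.txt)
echo "v1 served $v1_hits of 10 requests; v2 served $v2_hits"
```

With a larger sample, the tallied counts should approach the 90/10 weights configured in the VirtualService.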

If you have the Grafana add-on installed, check the incoming request stats to see the routing percentage for each deployment. In Grafana, go to Home:

Going to home dashboard in Grafana

In the Dashboards section, select Istio, and then click Istio Service Dashboard:

Looking for Istio Service Dashboard in the Dashboards section by selecting Istio

In the dashboard, find the Service field and select the service corresponding to your application. In this example, the service is called nodejs.default.svc.cluster.local. Once you choose the service, go to the Service Workloads section:

Selecting the service nodejs.default.svc.cluster.local and going to Service Workloads

Select the graph titled Incoming Requests By Destination Workload and Response Code. The graph shows the traffic you generated by refreshing the page. In this example, it is evident that Istio served the nodejs-v1 version of the app more frequently than the canary nodejs-v2 version.

Loading the graph titled Incoming Requests By Destination Workload and Response Code

Conclusion

By following this tutorial, you learned how to set up Istio to route traffic to multiple versions of one app automatically. The article also provided instructions on setting up routing rules for canary app deployments.

Marko Aleksic
Marko Aleksić is a Technical Writer at phoenixNAP. His innate curiosity regarding all things IT, combined with over a decade long background in writing, teaching and working in IT-related fields, led him to technical writing, where he has an opportunity to employ his skills and make technology less daunting to everyone.