Managing microservice traffic using Istio

Duration: 30 minutes

Explore how to manage microservice traffic using Istio.

What you’ll learn

You will learn how to deploy an application to a Kubernetes cluster and enable Istio on it. You will also learn how to configure Istio to shift traffic to implement blue-green deployments for microservices.

What is Istio?

Istio is a service mesh, meaning that it’s a platform for managing how microservices interact with each other and the outside world. Istio consists of a control plane and sidecars that are injected into application pods. The sidecars contain the Envoy proxy, which intercepts and controls all the HTTP and TCP traffic to and from your container.

Istio runs on top of Kubernetes, which is the focus of this guide, but you can also use it with other environments such as Docker Compose. Istio has many features, such as traffic shifting, request routing, access control, and distributed tracing, but this guide focuses on traffic shifting.

Why Istio?

Istio provides a collection of features that allows you to manage several aspects of your services. One example is Istio’s routing features. You can route HTTP requests based on several factors such as HTTP headers or cookies. Another use case for Istio is telemetry, which you can use to enable distributed tracing. Distributed tracing allows you to visualize how HTTP requests travel between different services in your cluster by using a tool such as Jaeger. Additionally, as part of its collection of security features, Istio allows you to enable mutual TLS between pods in your cluster. Enabling TLS between pods secures communication between microservices internally.
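As an illustration of the routing features, the following sketch shows what header-based routing looks like in a VirtualService. The service and subset names here are hypothetical, not part of this guide's application:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews-service
  http:
  # Requests that carry this header value go to the v2 subset
  - match:
    - headers:
        end-user:
          exact: tester
    route:
    - destination:
        host: reviews-service
        subset: v2
  # All other requests fall through to v1
  - route:
    - destination:
        host: reviews-service
        subset: v1
```

Match rules are evaluated in order, so the catch-all route is listed last.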

Blue-green deployments are a method of deploying your applications such that you have two nearly identical environments: one acts as a staging environment and the other as a production environment. This setup allows you to switch traffic from staging to production once a new version of your application has been verified to work. You’ll use Istio to implement blue-green deployments. Istio's traffic shifting feature allows you to allocate a percentage of traffic to particular versions of services. You can use this feature to shift 100 percent of live traffic to blue deployments and 100 percent of test traffic to green deployments, and then swap the deployments that each traffic stream points to as necessary to perform blue-green deployments.

The microservice you’ll deploy is called system. It responds with your current system’s JVM properties and it returns the app version in the response header. You will increment the version number when you update the application. With this number, you can determine which version of the microservice is running in your production or test environments.

What are blue-green deployments?

Blue-green deployments are a way of deploying your applications such that you have two environments where your application runs. In this scenario, you will have a production environment and a test environment. At any point in time, the blue deployment can accept production traffic and the green deployment can accept test traffic, or vice versa. When you want to deploy a new version of your application, you deploy to the color that is acting as your test environment. After the new version is verified on the test environment, the traffic is shifted over. Thus, your live traffic is now being handled by what used to be the test site.

Additional prerequisites

Before you begin, you need a containerization software for building containers. Kubernetes supports various container runtimes. You will use Docker in this guide. For Docker installation instructions, refer to the official Docker documentation.

If you use Docker Desktop, a local Kubernetes environment is pre-installed and enabled. If you do not see the Kubernetes tab, upgrade to the latest version of Docker Desktop.

Complete the setup for your operating system:

After you complete the Docker setup instructions for your operating system, ensure that Kubernetes (not Swarm) is selected as the orchestrator in Docker Preferences.


Alternatively, you can use Minikube as a single-node Kubernetes cluster that runs locally in a virtual machine. Make sure you have kubectl installed. If you need to install kubectl, see the kubectl installation instructions. For Minikube installation instructions, see the Minikube documentation.

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-istio-intro.git
cd guide-istio-intro

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Before you begin, make sure you have all the necessary prerequisites.

Starting and preparing your cluster for deployment

Start your Kubernetes cluster.

If you are using Docker Desktop, start your Docker Desktop environment.

Check your settings to ensure that you have an adequate amount of memory allocated to your Docker Desktop environment. 8GB is recommended, but 4GB is adequate if you don’t have enough RAM.

Ensure that Kubernetes is running on Docker Desktop and that the context is set to docker-desktop.

If you are using Minikube, run the following command from a command-line session:

minikube start --memory=8192 --cpus=4

The memory flag allocates 8GB of memory to your Minikube cluster. If you don’t have enough RAM, then 4GB should be adequate.

Next, validate that you have a healthy Kubernetes environment by running the following command from the active command-line session.

kubectl get nodes

This command should return a Ready status for the node.

If you are using Docker Desktop, no further steps are needed.

If you are using Minikube, run the following command to configure the Docker CLI to use Minikube’s Docker daemon. After you run this command, you can interact with Minikube’s Docker daemon and build new images directly to it from your host machine:

eval $(minikube docker-env)

Deploying Istio

Install Istio by following the instructions in the official Istio Getting started documentation.

Run the following command to verify that the istioctl path was set successfully:

istioctl version

The output will be similar to the following example:

no running Istio pods in "istio-system"
1.20.3

Run the following command to configure the Istio profile on Kubernetes:

istioctl install --set profile=demo

The following output appears when the installation is complete:

✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete

Verify that Istio was successfully deployed by running the following command:

kubectl get deployments -n istio-system

All the values in the AVAILABLE column will have a value of 1 after the deployment is complete.

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
istio-egressgateway      1/1     1            1           2m48s
istio-ingressgateway     1/1     1            1           2m48s
istiod                   1/1     1            1           2m48s

Ensure that the Istio deployments are all available before you continue. The deployments might take a few minutes to become available. If the deployments aren’t available after a few minutes, then increase the amount of memory available to your Kubernetes cluster. On Docker Desktop, you can increase the memory from your Docker preferences. On Minikube, you can increase the memory by using the --memory flag.

Finally, create the istio-injection label and set its value to enabled:

kubectl label namespace default istio-injection=enabled

Adding this label enables automatic Istio sidecar injection. Automatic injection means that sidecars are automatically injected into your pods when you deploy your application.
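If you ever need to exclude a specific pod from automatic injection while keeping the namespace label, Istio honors a per-pod opt-out set in the pod template. The deployment below is a hypothetical sketch, not part of this guide's application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        # Skip sidecar injection for this pod only
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: legacy-container
        image: legacy-app:1.0
```

The rest of the namespace keeps automatic injection; only pods created from this template run without a sidecar.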

Deploying version 1 of the system microservice

Navigate to the guide-istio-intro/start directory and run the following command to build the application locally.

mvn clean package

Next, run the following docker build command to build the container image for your application:

docker build -t system:1.0-SNAPSHOT .

The command builds a Docker image for the system microservice. The -t flag in the docker build command allows the Docker image to be labeled (tagged) in the name[:tag] format. The tag for an image describes the specific image version. If the optional [:tag] tag is not specified, the latest tag is created by default. You can verify that this image was created by running the following command:

docker images

You’ll see an image called system:1.0-SNAPSHOT listed in a table similar to the following output.

REPOSITORY                     TAG                              IMAGE ID        CREATED          SIZE
system                         1.0-SNAPSHOT                     8856039f4c42    9 minutes ago    745MB
istio/proxyv2                  1.20.3                           7a3aaffcf645    3 weeks ago      347MB
istio/pilot                    1.20.3                           4974b5b22dcc    3 weeks ago      261MB
icr.io/appcafe/open-liberty    kernel-slim-java11-openj9-ubi    d6ef646493e1    8 days ago       729MB

Deploy the system microservice to the Kubernetes cluster by running the following command:

kubectl apply -f system.yaml

You can see that your resources are created:

gateway.networking.istio.io/sys-app-gateway created
service/system-service created
deployment.apps/system-deployment-blue created
deployment.apps/system-deployment-green created
destinationrule.networking.istio.io/system-destination-rule created

system.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sys-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
    - "test.example.com"
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
  labels:
    app: system
spec:
  ports:
  - port: 9090
    name: http
  selector:
    app: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
      version: blue
  template:
    metadata:
      labels:
        app: system
        version: blue
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment-green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
      version: green
  template:
    metadata:
      labels:
        app: system
        version: green
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9090
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: system-destination-rule
spec:
  host: system-service
  subsets:
  - name: blue
    labels:
      version: blue
  - name: green
    labels:
      version: green

View the system.yaml file. It contains two deployments, a service, a gateway, and a destination rule. One deployment is labeled blue and the other is labeled green, and the service points to both of them. The Istio gateway is the entry point for HTTP requests to the cluster. A destination rule applies policies after routing occurs; here, it defines the service subsets that can be routed to individually.

traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: system-service
        subset: blue
      weight: 100
    - destination:
        port:
          number: 9090
        host: system-service
        subset: green
      weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-test-virtual-service
spec:
  hosts:
  - "test.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: system-service
        subset: blue
      weight: 0
    - destination:
        port:
          number: 9090
        host: system-service
        subset: green
      weight: 100

View the traffic.yaml file. It contains two virtual services. A virtual service defines how requests are routed to your applications. In the virtual services, you can configure the weight, which controls the proportion of traffic that goes to each deployment. In this case, each weight is either 100 or 0, which determines which deployment is live.
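The weights are not limited to 100 and 0. The same mechanism supports gradual canary shifts; for example, the following route fragment, a sketch based on the services defined in this guide, would send 10 percent of live traffic to the green subset:

```yaml
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: system-service
        subset: blue
      weight: 90    # 90 percent of requests stay on blue
    - destination:
        port:
          number: 9090
        host: system-service
        subset: green
      weight: 10    # 10 percent of requests try green
```

The weights in a route must add up to 100.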

Deploy the resources defined in the traffic.yaml file.

kubectl apply -f traffic.yaml

You can see that the virtual services have been created.

virtualservice.networking.istio.io/system-virtual-service created
virtualservice.networking.istio.io/system-test-virtual-service created

You can check that all of the deployments are available by running the following command.

kubectl get deployments

The command produces a list of deployments for your microservices that is similar to the following output.

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
system-deployment-blue     1/1     1            1           1m
system-deployment-green    1/1     1            1           1m

After all the deployments are available, you will make a request to version 1 of the deployed application. As defined in the system.yaml file, the gateway expects the host to be example.com. However, requests to example.com won’t be routed to the appropriate IP address. To ensure that the gateway routes your requests appropriately, set the Host header to example.com. For instance, you can set the Host header with the -H option of the curl command.

Make a request to the service by running the following curl command. If you are using Docker Desktop:

curl -H "Host:example.com" -I http://localhost/system/properties

If you are using Minikube, determine the ingress port and use the Minikube IP instead:

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
curl -H "Host:example.com" -I http://`minikube ip`:$INGRESS_PORT/system/properties

If the curl command is unavailable, use Postman, which enables you to make requests using a graphical interface. To make a request with Postman, enter the request URL into the URL bar, switch to the Headers tab, add a header with the key Host and the value example.com, and click the Send button.

You’ll see a header called x-app-version along with the corresponding version.

x-app-version: 1.0-SNAPSHOT

Deploying version 2 of the system microservice

Replace the SystemResource class.
src/main/java/io/openliberty/guides/system/SystemResource.java

SystemResource.java

/*******************************************************************************
 * Copyright (c) 2017, 2022 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License 2.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-2.0/
 *
 * SPDX-License-Identifier: EPL-2.0
 *******************************************************************************/
package io.openliberty.guides.system;

// CDI
import jakarta.enterprise.context.RequestScoped;
// JAX-RS
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

@RequestScoped
@Path("/properties")
public class SystemResource {

  public static String appVersion = "2.0-SNAPSHOT";

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Response getProperties() {
    return Response.ok(System.getProperties())
      .header("X-Pod-Name", System.getenv("HOSTNAME"))
      .header("X-App-Version", appVersion)
      .build();
  }
}

The system microservice responds with the version that is set in the SystemResource.java file, and the tag for the Docker image also depends on that version. Make sure that the appVersion field of the microservice is updated to 2.0-SNAPSHOT.

Use Maven to repackage your microservice:

mvn clean package

Next, build the new version of the container image as 2.0-SNAPSHOT:

docker build -t system:2.0-SNAPSHOT .

Deploy the new image to the green deployment.

kubectl set image deployment/system-deployment-green system-container=system:2.0-SNAPSHOT

You will work with two environments. One of the environments is a test site that is located at test.example.com. The other environment is your production environment that is located at example.com. To begin with, the production environment is tied to the blue deployment and the test environment is tied to the green deployment.

Test the updated microservice by making requests to the test site. The x-app-version header now has a value of 2.0-SNAPSHOT on the test site and is still 1.0-SNAPSHOT on the live site.

Make a request to the test site by running the following curl command. If you are using Docker Desktop:

curl -H "Host:test.example.com" -I http://localhost/system/properties

If you are using Minikube, use the Minikube IP and ingress port instead:

curl -H "Host:test.example.com" -I http://`minikube ip`:$INGRESS_PORT/system/properties

If the curl command is unavailable, use Postman.

You’ll see the new version in the x-app-version response header.

x-app-version: 2.0-SNAPSHOT

Update the traffic.yaml file in the start directory.

traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: system-service
        subset: blue
      weight: 0
    - destination:
        port:
          number: 9090
        host: system-service
        subset: green
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-test-virtual-service
spec:
  hosts:
  - "test.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: system-service
        subset: blue
      weight: 100
    - destination:
        port:
          number: 9090
        host: system-service
        subset: green
      weight: 0

After you see that the microservice is working on the test site, modify the weights in the traffic.yaml file to shift 100 percent of the example.com traffic to the green deployment, and 100 percent of the test.example.com traffic to the blue deployment.

Deploy the updated traffic.yaml file.

kubectl apply -f traffic.yaml

Ensure that the live traffic is now being routed to version 2 of the microservice.

Make a request to the service by running the following curl command. If you are using Docker Desktop:

curl -H "Host:example.com" -I http://localhost/system/properties

If you are using Minikube, use the Minikube IP and ingress port instead:

curl -H "Host:example.com" -I http://`minikube ip`:$INGRESS_PORT/system/properties

If the curl command is unavailable, use Postman.

You’ll see the new version in the x-app-version response header.

x-app-version: 2.0-SNAPSHOT

Testing microservices that are running on Kubernetes

Next, you will create a test to verify that the correct version of your microservice is running.

Create the SystemEndpointIT class.
src/test/java/it/io/openliberty/guides/system/SystemEndpointIT.java

SystemEndpointIT.java

/*******************************************************************************
 * Copyright (c) 2019, 2022 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License 2.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-2.0/
 *
 * SPDX-License-Identifier: EPL-2.0
 *******************************************************************************/
package it.io.openliberty.guides.system;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.client.WebTarget;
import jakarta.ws.rs.core.Response;

import io.openliberty.guides.system.SystemResource;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.TestMethodOrder;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.AfterEach;

@TestMethodOrder(OrderAnnotation.class)
public class SystemEndpointIT {

    private static String clusterUrl;

    private Client client;
    private Response response;

    @BeforeAll
    public static void oneTimeSetup() {
        // Allows for overriding the "Host" http header
        System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

        String clusterIp = System.getProperty("cluster.ip");
        String nodePort = System.getProperty("port");

        clusterUrl = "http://" + clusterIp + ":" + nodePort + "/system/properties/";
    }

    @BeforeEach
    public void setup() {
        response = null;
        client = ClientBuilder.newBuilder()
                    .hostnameVerifier(new HostnameVerifier() {
                        public boolean verify(String hostname, SSLSession session) {
                            return true;
                        }
                    })
                    .build();
    }

    @AfterEach
    public void teardown() {
        client.close();
    }

    @Test
    @Order(1)
    public void testPodNameNotNull() {
        response = this.getResponse(clusterUrl);
        this.assertResponse(clusterUrl, response);
        String greeting = response.getHeaderString("X-Pod-Name");

        String message = "Container name should not be null but it was. "
            + "The service is probably not running inside a container";

        assertNotNull(greeting, message);
    }

    @Test
    @Order(2)
    public void testAppVersion() {
        response = this.getResponse(clusterUrl);

        String expectedVersion = SystemResource.appVersion;
        String actualVersion = response.getHeaderString("X-App-Version");

        assertEquals(expectedVersion, actualVersion);
    }

    @Test
    @Order(3)
    public void testGetProperties() {
        Client client = ClientBuilder.newClient();

        WebTarget target = client.target(clusterUrl);
        Response response = target
            .request()
            .header("Host", System.getProperty("host-header"))
            .get();

        assertEquals(200, response.getStatus(),
            "Incorrect response code from " + clusterUrl);

        response.close();
    }

    private Response getResponse(String url) {
        return client
            .target(url)
            .request()
            .header("Host", System.getProperty("host-header"))
            .get();
    }

    private void assertResponse(String url, Response response) {
        assertEquals(200, response.getStatus(),
            "Incorrect response code from " + url);
    }

}

The testAppVersion() test case verifies that the correct version number is returned in the response headers.

Run the following commands to compile and start the tests. If you are using Docker Desktop:

mvn test-compile
mvn failsafe:integration-test

If you are using Minikube, pass the cluster IP and ingress port to the tests:

mvn test-compile
mvn failsafe:integration-test -Dcluster.ip=`minikube ip` -Dport=$INGRESS_PORT

The cluster.ip and port parameters refer to the IP address and port of the Istio ingress gateway.

If the tests pass, then you should see output similar to the following example:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.503 s - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

Tearing down your environment

To clean up, tear down all the resources that you deployed.

Delete your resources from the cluster:

kubectl delete -f system.yaml
kubectl delete -f traffic.yaml

Delete the istio-injection label from the default namespace. The hyphen immediately after the label name indicates that the label should be deleted.

kubectl label namespace default istio-injection-

Delete all Istio resources from the cluster:

istioctl uninstall --purge

If you are using Docker Desktop, nothing more needs to be done.

If you are using Minikube, perform the following steps to return your environment to a clean state.

  1. Point the Docker daemon back to your local machine:

    eval $(minikube docker-env -u)

  2. Stop and delete your Minikube cluster:

    minikube stop
    minikube delete

Great work! You’re done!

You have deployed a microservice that runs on Open Liberty to a Kubernetes cluster and used Istio to implement a blue-green deployment scheme.

Guide Attribution

Managing microservice traffic using Istio by Open Liberty is licensed under CC BY-ND 4.0
