Contents
- What you’ll learn
- Additional prerequisites
- Getting started
- Starting and preparing your cluster for deployment
- Installing the Operator
- Deploying the system microservice to Kubernetes
- Accessing the microservice
- Specifying optional parameters
- Tearing down the environment
- Great work! You’re done!
- Related Links
- Guide Attribution
Deploying a microservice to Kubernetes using Open Liberty Operator
Explore how to deploy a microservice to Kubernetes using Open Liberty Operator.
What you’ll learn
You will learn how to deploy a cloud-native application with a microservice to Kubernetes using the Open Liberty Operator.
Kubernetes is a container orchestration system. It streamlines the DevOps process by providing an intuitive development pipeline. It also provides integration with multiple tools to make the deployment and management of cloud applications easier. You can learn more about Kubernetes by checking out the Deploying microservices to Kubernetes guide.
Kubernetes operators provide an easy way to automate the management and updating of applications by abstracting away some of the details of cloud application management. To learn more about operators, check out this Operators tech topic article.
The application in this guide consists of one microservice, system. The system microservice returns the JVM system properties of its host.
You will deploy the system microservice by using the Open Liberty Operator. The Open Liberty Operator packages, deploys, and manages Open Liberty applications on Kubernetes-based clusters. The Operator watches Open Liberty resources and creates various Kubernetes resources, including Deployments, Services, and Routes, depending on the configuration. It then continuously compares the current state of those resources with the desired state of the application deployment and reconciles them when necessary.
Additional prerequisites
You must run this guide in a Linux environment. You will also use Docker. For Docker installation instructions, see the official Docker documentation.
You will use Minikube as a single-node Kubernetes cluster that runs locally in your Linux environment. Make sure you have kubectl installed. If you need to install kubectl, see the kubectl installation instructions. For Minikube installation instructions, see the Minikube documentation.
Getting started
The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:
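git clone https://github.com/openliberty/guide-openliberty-operator-intro.git
cd guide-openliberty-operator-intro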
The start directory contains the starting project that you will build upon.
The finish directory contains the finished project that you will build.
Before you begin, make sure you have all the necessary prerequisites.
Starting and preparing your cluster for deployment
Start your Kubernetes cluster.
Run the following command from a command-line session:
minikube start
If you run Minikube as a root user, you can append the --force option to the previous command.
Next, validate that you have a healthy Kubernetes environment by running the following command from the active command-line session.
kubectl get nodes
If your environment is healthy, this command returns a Ready status for the minikube node.
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 3h48m v1.26.1
Run the following command to configure the Docker CLI to use Minikube’s Docker daemon. After you run this command, you can interact with Minikube’s Docker daemon and build new images directly to it from your host machine:
eval $(minikube docker-env)
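If you want to confirm that the Docker CLI is now talking to Minikube's daemon, listing the images is a quick check; you should see the images that Minikube bundles (such as the Kubernetes control-plane images) rather than only the images from your host:
docker images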
Installing the Operator
Before you can deploy your microservice, you must install the cert-manager and the Open Liberty Operator. For more information, see the installation instructions.
First, install the cert-manager to your Kubernetes cluster by running the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml
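Installing the cert-manager can take a minute or two. If you want to verify that it is ready before you continue, check its pods; the manifest installs everything into the cert-manager namespace by default, and all pods should eventually report a Running status:
kubectl get pods -n cert-manager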
Next, install Custom Resource Definitions (CRDs) for the Open Liberty Operator by running the following command:
kubectl apply --server-side -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/1.2.1/kubectl/openliberty-app-crd.yaml
Custom Resources extend the Kubernetes API and enhance its functionality.
Set environment variables that define the namespace where the Operator is installed and the namespaces that it watches by running the following commands:
OPERATOR_NAMESPACE=default
WATCH_NAMESPACE='""'
Next, run the following commands to install cluster-level role-based access:
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/1.2.1/kubectl/openliberty-app-rbac-watch-all.yaml \
| sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/" \
| kubectl apply -f -
Finally, run the following commands to install the Operator:
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/1.2.1/kubectl/openliberty-app-operator.yaml \
| sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${WATCH_NAMESPACE}/" \
| kubectl apply -n ${OPERATOR_NAMESPACE} -f -
To check that the Open Liberty Operator has been installed successfully, run the following command to view all the supported API resources that are available through the Open Liberty Operator:
kubectl api-resources --api-group=apps.openliberty.io
Look for the following output, which shows the custom resource definitions (CRDs) that can be used by the Open Liberty Operator:
NAME SHORTNAMES APIVERSION NAMESPACED KIND
openlibertyapplications olapp,olapps apps.openliberty.io/v1 true OpenLibertyApplication
openlibertydumps oldump,oldumps apps.openliberty.io/v1 true OpenLibertyDump
openlibertytraces oltrace,oltraces apps.openliberty.io/v1 true OpenLibertyTrace
Each CRD defines a kind of object that can be used, which is specified in the previous example by the KIND value. The SHORTNAME value specifies alternative names that you can substitute in the configuration to refer to an object kind. For example, you can refer to the OpenLibertyApplication object kind by one of its specified shortnames, such as olapps.
The openlibertyapplications CRD defines a set of configurations for deploying an Open Liberty-based application, including the application image, number of instances, and storage settings. The Open Liberty Operator watches for changes to instances of the OpenLibertyApplication object kind and creates Kubernetes resources that are based on the configuration that is defined in the CRD.
Deploying the system microservice to Kubernetes
To deploy the system microservice, you must first package the microservice and then build a runnable container image of the packaged microservice.
Packaging the microservice
Ensure that you are in the start directory and run the following command to package the system microservice:
./mvnw clean package
Building the image
Run the docker build command to build the container image for your application:
docker build -t system:1.0-SNAPSHOT system/.
The -t flag in the docker build command allows the Docker image to be labeled (tagged) in the name[:tag] format. The tag for an image describes the specific image version. If the optional [:tag] tag is not specified, the latest tag is created by default.
Normally, you must push your image to a container registry before you can deploy it on any cloud environment. You can skip this step because you are working with a local Kubernetes environment.
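For reference, pushing to an external registry typically means tagging the image with the registry path and then pushing that tag. The my-namespace account in the following commands is only a placeholder:
docker tag system:1.0-SNAPSHOT my-namespace/system:1.0-SNAPSHOT
docker push my-namespace/system:1.0-SNAPSHOT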
Now you’re ready to deploy the image.
Deploying the image
You can configure the specifics of the Open Liberty Operator-controlled deployment with a YAML configuration file.
Create the deploy.yaml configuration file in the start directory.
The deploy.yaml file is configured to deploy one OpenLibertyApplication resource, system, which is controlled by the Open Liberty Operator.
The applicationImage parameter defines what container image is deployed as part of the OpenLibertyApplication CRD. This parameter follows the <image-name>[:tag] format. The parameter can also point to an image hosted on an external registry, such as Docker Hub. The system microservice is configured to use the image created from the earlier build.
The env parameter is used to specify environment variables that are passed to the container at runtime.
Additionally, the microservice includes the service and expose parameters. The service.port parameter specifies which port is exposed by the container, allowing the microservice to be accessed from outside the container. To access the microservice from outside of the cluster, it must be exposed by setting the expose parameter to true. After you expose the microservice, the Operator automatically creates and configures routes for external access to your microservice.
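Putting these parameters together, the deploy.yaml for this step looks roughly like the following sketch. Treat it as an illustration of the parameters just described; the env values shown here match the extended listing later in this guide, and the finished project in the finish directory contains the authoritative file.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: system
  labels:
    name: system
spec:
  applicationImage: system:1.0-SNAPSHOT
  service:
    port: 9443
  expose: true
  env:
    - name: WLP_LOGGING_MESSAGE_FORMAT
      value: "json"
    - name: WLP_LOGGING_MESSAGE_SOURCE
      value: "message,trace,accessLog,ffdc,audit"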
Run the following command to deploy the system microservice with the previously explained configuration:
kubectl apply -f deploy.yaml
Next, run the following command to view your newly created OpenLibertyApplications resources:
kubectl get OpenLibertyApplications
You can also replace OpenLibertyApplications with the shortname olapps.
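For example, the following command returns the same resources as the previous one:
kubectl get olapps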
Look for output that is similar to the following example:
NAME IMAGE EXPOSED RECONCILED AGE
system system:1.0-SNAPSHOT true True 10s
A RECONCILED value of True indicates that the Operator successfully processed the OpenLibertyApplication instance. Run the following command to view details of your microservice:
kubectl describe olapps/system
This example shows part of the olapps/system output:
Name: system
Namespace: default
Labels: app.kubernetes.io/part-of=system
name=system
Annotations: <none>
API Version: apps.openliberty.io/v1
Kind: OpenLibertyApplication
...
Accessing the microservice
To access the exposed system microservice, the service must be port-forwarded. Run the following command to set up port forwarding to access the system service:
kubectl port-forward svc/system 9443
Visit the microservice at https://localhost:9443/system/properties.
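If you prefer the command line, you can also query the endpoint with curl. The -k flag skips certificate verification because the certificate that the service presents is typically not trusted by your local machine:
curl -k https://localhost:9443/system/properties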
When you’re done trying out the microservice, press CTRL+C in the command line session where you ran the kubectl port-forward command to stop the port forwarding.
Run the following command to remove the deployed system microservice:
kubectl delete -f deploy.yaml
Specifying optional parameters
You can also use the Open Liberty Operator to set optional parameters in your application deployment by specifying them in your deploy.yaml file. For example, you can configure the Kubernetes liveness, readiness and startup probes. Visit the Open Liberty Operator user guide to find all of the supported optional parameters.
To configure the Kubernetes liveness, readiness and startup probes by using the Open Liberty Operator, specify the probes in your deploy.yaml file. The startup probe verifies whether the deployed application is fully initialized before the liveness probe takes over. Then, the liveness probe determines whether the application is running and the readiness probe determines whether the application is ready to process requests. For more information about application health checks, see the Checking the health of microservices on Kubernetes guide.
Replace the deploy.yaml configuration file.
deploy.yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: system
  labels:
    name: system
spec:
  applicationImage: system:1.0-SNAPSHOT
  service:
    port: 9443
  expose: true
  route:
    pathType: ImplementationSpecific
  env:
    - name: WLP_LOGGING_MESSAGE_FORMAT
      value: "json"
    - name: WLP_LOGGING_MESSAGE_SOURCE
      value: "message,trace,accessLog,ffdc,audit"
  probes:
    startup:
      failureThreshold: 12
      httpGet:
        path: /health/started
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 2
      timeoutSeconds: 10
    liveness:
      failureThreshold: 12
      httpGet:
        path: /health/live
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 2
      timeoutSeconds: 10
    readiness:
      failureThreshold: 12
      httpGet:
        path: /health/ready
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 2
      timeoutSeconds: 10
The health check endpoints /health/started, /health/live and /health/ready are already created for you.
Run the following command to deploy the system microservice with the new configuration:
kubectl apply -f deploy.yaml
Run the following command to check the status of the pods:
kubectl describe pods
Look for the following output to confirm that the health checks are successfully applied and working:
Liveness: http-get https://:9443/health/live delay=30s timeout=10s period=2s #success=1 #failure=12
Readiness: http-get https://:9443/health/ready delay=30s timeout=10s period=2s #success=1 #failure=12
Startup: http-get https://:9443/health/started delay=30s timeout=10s period=2s #success=1 #failure=12
Run the following command to set up port forwarding to access the system service:
kubectl port-forward svc/system 9443
Visit the microservice at https://localhost:9443/system/properties.
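While the port forwarding is still active, you can also spot-check the health endpoints that the probes use. For example, the readiness endpoint should report an UP status once the application is ready:
curl -k https://localhost:9443/health/ready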
When you’re done trying out the microservice, press CTRL+C in the command line session where you ran the kubectl port-forward command to stop the port forwarding.
Tearing down the environment
When you no longer need your deployed microservice, you can delete all resources by running the following command:
kubectl delete -f deploy.yaml
To uninstall the Open Liberty Operator and the cert-manager, run the following commands:
OPERATOR_NAMESPACE=default
WATCH_NAMESPACE='""'
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/1.2.1/kubectl/openliberty-app-operator.yaml \
| sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${WATCH_NAMESPACE}/" \
| kubectl delete -n ${OPERATOR_NAMESPACE} -f -
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/1.2.1/kubectl/openliberty-app-rbac-watch-all.yaml \
| sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/" \
| kubectl delete -f -
kubectl delete -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/1.2.1/kubectl/openliberty-app-crd.yaml
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml
Perform the following steps to return your environment to a clean state.
- Point the Docker daemon back to your local machine:
  eval $(minikube docker-env -u)
- Stop your Minikube cluster:
  minikube stop
- Delete your cluster:
  minikube delete
Great work! You’re done!
You just deployed a microservice running in Open Liberty to Kubernetes and configured the Kubernetes liveness, readiness and startup probes by using the Open Liberty Operator.
Related Links
Guide Attribution
Deploying a microservice to Kubernetes using Open Liberty Operator by Open Liberty is licensed under CC BY-ND 4.0