Running WSO2 Products on Kubernetes
Please note that the following article has ‘expired’ in terms of accuracy when it comes to the artifacts used and the way things are done. WSO2 has made many improvements on top of the configurations mentioned below, and how to manipulate those artifacts may have changed since.
It’s 2016. Kubernetes needs no introduction. Neither does WSO2, so let’s get to the point. Let’s run WSO2 Identity Server on Kubernetes!
You’ll need a basic understanding of the following repositories.
- WSO2 Kubernetes Artifacts
- WSO2 Puppet Modules
The Docker Images
We need to build the WSO2 IS Docker image first. For this we can take the long route of configuring the IS instance manually and then creating the Docker image with that pack, or we can save some time and let Puppet do the work. The Dockerfiles in the WSO2 Kubernetes Artifacts repository make use of WSO2 Puppet Modules to configure the server inside the Docker image.
WSO2 Puppet Modules
Navigate to where you checked out WSO2 Puppet Modules and build it (mvn clean install) to get the latest WSO2 Puppet Modules distribution inside the target folder. Alternatively, you can get the latest released distribution from the releases page of the GitHub repository.
Now unzip the distribution to a place you prefer (let’s call this <PUPPET_HOME> hereafter). It’s targeted to be unzipped directly into a Puppet Master folder (/etc/puppet/), so the structure of the decompressed folder looks similar to that of the inside of the Puppet Master folder.
WSO2 Puppet Modules makes heavy use of Hiera to separate data and templates from the actual Puppet logic that configures the server. Therefore, the only modifications needed are to the Hiera YAML files and, optionally, the templates.
Let’s first change the clustering-related data in Hiera. For this, an understanding of how clustering works for WSO2 products on Kubernetes is needed.
The Kubernetes Membership Scheme for Carbon makes use of the Kubernetes API to look up the IP addresses of the Pods that are already up for a given Kubernetes Service. For example, provided that the Kubernetes Service for WSO2 IS is wso2is, the Kubernetes Membership Scheme will make an API call to the Kubernetes API Server to find out the IP addresses of the Pods that are running. It will then update the Hazelcast instance with this list of IPs and connect to those members. When a new member starts, the process repeats, and the existing members get notified of its existence via Hazelcast. This membership scheme is pluggable into Hazelcast starting from Carbon 4.4.1.
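As a rough illustration of that lookup, here is what the scheme’s query boils down to. This is a sketch, not the scheme’s actual code: the canned JSON response and the grep-based parsing are purely illustrative; in a real cluster the call would go to the API Server’s Endpoints resource for the Service.

```shell
# Against a live cluster, the lookup is effectively:
#   curl http://172.17.8.101:8080/api/v1/namespaces/default/endpoints/wso2is
# Here we use a canned (hypothetical) response to show the extraction step.
response='{"subsets":[{"addresses":[{"ip":"10.244.1.5"},{"ip":"10.244.2.7"}]}]}'

# Pull out the Pod IPs that the Hazelcast instance will be pointed at
pod_ips=$(echo "$response" | grep -o '"ip":"[0-9.]*"' | cut -d'"' -f4)
echo "$pod_ips"
```

Each Pod IP then becomes a candidate Hazelcast member for the cluster.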
With this understanding, let’s make the changes required to enable the Kubernetes Membership Scheme in WSO2 IS.
- Navigate to default.yaml and open it with your text editor.
- Comment out the wka-related data and add the Kubernetes Membership Scheme data. The resulting section will look something like the following.
```yaml
wso2::clustering :
  enabled : true
  #local_member_host : 127.0.0.1
  #local_member_port : 4000
  membership_scheme : kubernetes
  #wka :
  #  members :
  #    -
  #      hostname : localhost
  #      port : 4000
  #    -
  #      hostname : localhost
  #      port : 4001
  #multicast :
  #  domain : wso2.carbon.domain
  k8:
    k8_master: http://172.17.8.101:8080
    k8_namespace: default
    k8_services: wso2is
```
http://172.17.8.101:8080 is the Kubernetes API Server address. Furthermore, note that the value for k8_services reflects the Kubernetes Service name we are going to use later.
- We also need to add the Kubernetes Membership Scheme distribution to the <WSO2_SERVER_HOME>/repository/components/lib folder, along with its dependencies. So let’s first build the Kubernetes Membership Scheme. Navigate to where you checked out the WSO2 Kubernetes Artifacts repository and into the common/kubernetes-membership-scheme folder inside it. Build the Kubernetes Membership Scheme by running mvn clean install. Copy the resulting JAR file to the <PUPPET_HOME>/modules/wso2is/files/configs/repository/components/lib folder. Furthermore, copy its dependencies (the Jackson JARs listed in the file_list entry below) to the same place as well.
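Sketched as shell, the copy step looks roughly like this. The paths are the ones from above; the JAR here is a stand-in file, since in the real flow it comes out of the mvn build’s target folder.

```shell
# Assumes PUPPET_HOME points at the unzipped Puppet distribution from earlier;
# fall back to a temp dir here purely for illustration.
PUPPET_HOME=${PUPPET_HOME:-$(mktemp -d)}
LIB_DIR="$PUPPET_HOME/modules/wso2is/files/configs/repository/components/lib"
mkdir -p "$LIB_DIR"

# In the real flow this JAR is produced by `mvn clean install` inside
# common/kubernetes-membership-scheme; we create a stand-in file here.
JAR=kubernetes-membership-scheme-1.0.0-SNAPSHOT.jar
touch "/tmp/$JAR"
cp "/tmp/$JAR" "$LIB_DIR/"

ls "$LIB_DIR"
```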
- Now let’s specify these files inside the default.yaml file, so that Puppet copies them to their respective places. Add the following entries to the wso2::file_list section.

```yaml
wso2::file_list :
  - repository/components/lib/jackson-annotations-2.5.4.jar
  - repository/components/lib/jackson-core-2.5.4.jar
  - repository/components/lib/jackson-databind-2.5.4.jar
  - repository/components/lib/kubernetes-membership-scheme-1.0.0-SNAPSHOT.jar
```
Copying the Packs
- Download WSO2 IS 5.1.0 and copy it to
- Download JDK 1.7.0_80 and copy it to
Building the Docker image
Navigate to where you checked out WSO2 Kubernetes Artifacts repository. We will be working inside this directory now.
Docker images for WSO2 products make use of a base image called wso2/k8s-base, which has to be built (or pulled from Docker Hub) before building the product images.
- List the Docker images on your machine (docker images). If the list doesn’t contain the wso2/k8s-base Docker image, you have to build it first.
- Navigate to the common/docker/base-image folder and start the Docker image build.
- Wait until the Docker build process completes, then verify by listing the Docker images (docker images) to check that wso2/k8s-base now appears.
WSO2 IS Image
Navigate to the wso2is/docker/ folder. Inside you will see the Dockerfile and some Bash scripts that make your life so much easier when it comes to building and rebuilding Docker images for test purposes.
The build.sh builder script looks for the PUPPET_HOME environment variable, so before running it, let’s point PUPPET_HOME to our Puppet home. Then run build.sh, providing the Docker image version to be built and the WSO2 Carbon profiles that should be built for this product. For WSO2 IS there is only one Carbon profile, the default profile, so our commands will look something like the following.
```shell
export PUPPET_HOME=~/temp/puppet
./build.sh 1.0.0 'default'
```
Use sudo when executing build.sh if your Docker daemon needs privileged access. Here, the place where we unzipped our WSO2 Puppet distribution (and modified the Hiera data accordingly) is ~/temp/puppet, we want our Docker image tagged as version 1.0.0, and we only need to build the Docker image for the default Carbon profile of WSO2 IS. Specifying default is optional.
This will build the Docker image, configuring WSO2 IS using the PUPPET_HOME folder and including the necessary ENTRYPOINT scripts. List your Docker images afterwards (docker images) and you will see something similar to the following.
```
REPOSITORY      TAG      IMAGE ID       CREATED        VIRTUAL SIZE
wso2/is-5.1.0   1.0.0    c8ab0b692142   19 hours ago   1.45 GB
wso2/k8s-base   1.0.0    2216147d6c98   22 hours ago   310.6 MB
```
Next we deploy our Docker images on Kubernetes.
It would greatly help if you already have a Kubernetes cluster deployed somewhere nearby. However, it’s safe to assume you’re reading this just to try out this workflow and don’t have a Kubernetes Cluster. In that case, there are several easy options you can choose from.
Kubernetes Vagrant Setup
Kubernetes ships with its own Vagrantfile, which can make use of several virtualization providers to quickly create a Kubernetes Cluster. You will be able to use VirtualBox as the provider and spawn a new Kubernetes cluster with one or more Nodes (previously Minions). However, my personal experience with this has not been pleasant, because of the time it takes for the nodes to provision (SaltStack is used to provision the Fedora-based nodes) and the issues it had when recreating the Cluster.
CoreOS Kubernetes Vagrant Setup
This is a similar setup to the above, but with several differences. First off, it uses CoreOS boxes for the Master and Node VMs. Second, it’s really easy to destroy and recreate a cluster, in case you feel like Stalin. I keep the following short run script to start the cluster.
```shell
#!/bin/bash
export NODES=1
export USE_KUBE_UI=true
vagrant up
```
This starts a Kubernetes Cluster with one Master and one Node VM, with IPs 172.17.8.101 and 172.17.8.102 respectively.
Any Other Options?
Well, I could copy-paste the Kubernetes documentation here, or you can simply go there and read about the other options you have, which tend to demand a little bit of commitment. So if you’re afraid of that, better stick to the Vagrant setups above.
WSO2 IS Cluster
We built the Docker images, and now we have a Kubernetes Cluster. The next logical step is to go ahead and deploy the Docker image on top of the Kubernetes Cluster. To do that we need to do the following.
- Either upload the WSO2 IS Docker image to an accessible Docker registry, or load it directly on the Nodes (if you created a Vagrant setup for Kubernetes, the easier option is to save the WSO2 IS Docker image to a tar file, scp it to the Node/s, and load the tar into the local Docker daemon)
- Deploy a Replication Controller for WSO2 IS Docker image, with a replica count.
- Deploy a Kubernetes Service for the WSO2 IS Pods
Load Docker image
Let’s load our Docker image onto the Node/s. You can run the save.sh file inside the wso2is/docker/ folder, which will save the Docker image to the ~/docker/images/ folder as a .tar file. Or you can simply call docker save and create the .tar file yourself.
```shell
docker save wso2/is-5.1.0:1.0.0 > wso2is-5.1.0-1.0.0.tar

# insecure_private_key is the key to use to ssh inside the Vagrant boxes, 172.17.8.102 is the Node's IP
scp -i ~/.vagrant.d/insecure_private_key wso2is-5.1.0-1.0.0.tar core@172.17.8.102:.

# ssh to the node and load the Docker image
vagrant ssh node-01
docker load < wso2is-5.1.0-1.0.0.tar
docker images # to verify the image was loaded successfully
```
A Replication Controller makes sure that a specified number of Pods is always present in the Cluster. We specify the Docker image to use, the number of replicas to maintain, and the labels that should be applied to the Pods. You can find the Replication Controller for WSO2 IS in wso2is/kubernetes/wso2is-controller.yaml. It looks something like the following.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wso2is
  labels:
    name: wso2is
spec:
  replicas: 1
  selector:
    name: wso2is
  template:
    metadata:
      labels:
        name: wso2is
    spec:
      containers:
      - name: wso2is
        image: wso2/is-5.1.0:1.0.0
        ports:
        - containerPort: 9763
          protocol: "TCP"
        - containerPort: 9443
          protocol: "TCP"
        - containerPort: 8000
          protocol: "TCP"
        - containerPort: 10500
          protocol: "TCP"
```
Here, we have specified wso2/is-5.1.0:1.0.0 as the image to use. If you built your image under a different name, change this value. Also, we have set the number of replicas to just one.
Let’s deploy the Replication Controller. (If you used the Vagrant setup, you can directly use the deploy.sh script included alongside the Replication Controller in the same folder. It also deploys the Service artifact and waits for the WSO2 IS server to come up, so for the purpose of understanding the process, let’s deploy the artifacts manually and separately.)
```shell
kubectl create -f wso2is-controller.yaml
```
If you get an error like the following, it means that kubectl cannot find the Kubernetes API Server to communicate with, so you have to point kubectl at the API Server.
```shell
kubectl create -f wso2is-controller.yaml
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused

export KUBERNETES_MASTER=http://172.17.8.101:8080 # If your Kubernetes Master IP and Port are different, change this accordingly
```
On the other hand, if your system simply doesn’t have kubectl installed, you first need to install it.
If everything went right, Kubernetes will spawn a Pod with a WSO2 IS container inside it on one of the Nodes. You can get the list of deployed Pods by issuing kubectl get pods.
To expose the WSO2 IS container from Kubernetes, we need to define a Service which maps the operational ports of the WSO2 IS container to ports on the Nodes. For this, we need to specify a selector for the Pods that should be served through the Service, and the port mapping. You can find the following Service definition in wso2is/kubernetes/wso2is-service.yaml.
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: wso2is
  name: wso2is
spec:
  type: NodePort
  sessionAffinity: ClientIP
  # ports that this service should serve on
  ports:
  - name: 'servlet-http'
    port: 9763
    targetPort: 9763
    nodePort: 32001
  - name: 'servlet-https'
    port: 9443
    targetPort: 9443
    nodePort: 32002
  - name: 'kdc-server'
    port: 8000
    targetPort: 8000
    nodePort: 32003
  - name: 'thrift-entitlement'
    port: 10500
    targetPort: 10500
    nodePort: 32004
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: wso2is
```
In this Service we have exposed port 9443 of the WSO2 IS container through port 32002 on the Node. Since the type of the Service is NodePort, port 32002 on all of the Nodes will be mapped to port 9443 of the container. Another interesting thing to note is that the Service name is wso2is, which is the same name we provided for k8_services when we configured the Kubernetes Membership Scheme earlier.
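One detail worth knowing when editing these nodePort values: they must fall within the API Server’s service-node-port-range, which defaults to 30000-32767, and ports outside it are rejected at Service creation time. A quick sanity check for the four ports used above:

```shell
# Default Kubernetes service-node-port-range is 30000-32767
result=ok
for p in 32001 32002 32003 32004; do
  if [ "$p" -lt 30000 ] || [ "$p" -gt 32767 ]; then
    result="out-of-range: $p"
  fi
done
echo "$result"
```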
Let’s deploy this Service.
```shell
kubectl create -f wso2is-service.yaml
kubectl get svc
```
Accessing WSO2 IS
Now we have a WSO2 IS container Cluster on Kubernetes. How are we going to access it? Simple. We just access any Node on port 32002 to reach the Carbon console. For example, in the Vagrant setup, we can access the Carbon console by going to https://172.17.8.102:32002/carbon. You can read more about NodePort-type Services to understand what is happening here.
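Putting that together, the console URL is just any Node’s IP plus the servlet-https nodePort, and a small polling loop (a sketch, similar in spirit to what deploy.sh does; the retry count and sleep interval are assumptions) can wait for the server to come up:

```shell
NODE_IP=172.17.8.102   # any Node's IP; this one is from the Vagrant setup above
NODE_PORT=32002        # nodePort mapped to the container's 9443
MGT_URL="https://$NODE_IP:$NODE_PORT/carbon"
echo "$MGT_URL"

# Poll the console URL until WSO2 IS answers, or give up after $2 attempts.
# -k skips certificate validation, since WSO2 IS ships a self-signed cert.
wait_for_server() {
  url=$1; retries=${2:-30}; i=0
  until curl -k -s -o /dev/null "$url"; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1
    sleep 5
  done
  return 0
}
# Usage: wait_for_server "$MGT_URL" 60
```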
Originally published at chamilad.github.io on February 9, 2016.
Written on February 9, 2016 by chamila de alwis.