Deployment with Kubernetes
This tutorial explains how to deploy our demo site Luna (see the quick start guide) to the cloud using Kubernetes. We use Google Kubernetes Engine (GKE), but the setup is not limited to Google Cloud Platform (GCP) and works with any Kubernetes cluster, such as Amazon EKS or Azure Kubernetes Service (AKS).
The WebSight Helm chart bootstraps the CMS, which supports two types of storage backends: Document Storage (MongoDB) and Segment Storage (TAR). In this tutorial, we use MongoDB.
Before you start
Make sure you have:
- your GCP environment set up
- the Google Kubernetes Engine API enabled
- `kubectl` installed (if you use Docker Desktop, `kubectl` should already be installed)
- `helm` installed
Step 1: Kubernetes cluster configuration
- Create a Kubernetes cluster consisting of a single node pool with the following configuration:
    - number of nodes: 3
    - zone: any zone of your choice, e.g. `europe-west1-b` to place the cluster in Belgium
    - machine type: `e2-standard-2` (2 vCPUs, 8 GB memory)
  Running the cluster creation command may take a couple of minutes to finish (a sketch is shown below). It will also configure `kubectl` to use the cluster.
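A minimal sketch of the cluster creation command, using the `gcloud` CLI with the settings above; the cluster name `websight-demo` is an assumed placeholder:

```shell
# Assumed placeholder cluster name; pick any zone and adjust as needed.
gcloud container clusters create websight-demo \
  --num-nodes=3 \
  --zone=europe-west1-b \
  --machine-type=e2-standard-2
```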
Step 2: Install prerequisites
- Install an NGINX Ingress Controller (a sketch of this command is shown after this list).
- Create the `cms` namespace (a sketch of this command is shown after this list).
- Install MongoDB in the `cms` namespace using Helm:

  ```shell
  helm install mongodb oci://registry-1.docker.io/bitnamicharts/mongodb --version 14.3.0 \
    --set auth.enabled=true \
    --set auth.rootUser="mongoadmin" \
    --set auth.rootPassword="mongoadmin" \
    --set architecture="replicaset" \
    -n cms
  ```

  At the end of the installation, you should see the following message with MongoDB connection details:

  ```
  MongoDB can be accessed on the following DNS name(s) and ports from within your cluster:

      mongodb-0.mongodb-headless.cms.svc.cluster.local:27017
      mongodb-1.mongodb-headless.cms.svc.cluster.local:27017
  ```

  You will use the connection details in the next section.
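A minimal sketch of the two commands referenced above, assuming the community ingress-nginx Helm chart; the release name and the `ingress-nginx` namespace are assumptions rather than requirements:

```shell
# Install the NGINX Ingress Controller from the community Helm chart.
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Create the namespace that will hold the CMS and MongoDB workloads.
kubectl create namespace cms
```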
MongoDB: For simplicity, in this step we use the Bitnami MongoDB Helm chart with its default configuration for the replicaset architecture (2 data-bearing members and 1 arbiter).
Step 3: Deploy CMS
- Create a `my-websight-cms` directory and download the following files into it:
    - the example `values.yaml` with the configuration for the Luna demo site
    - the Nginx configuration for the CMS Proxy
- Find the external IP address of the Ingress Controller in your cluster (`YOUR_CLUSTER_IP`).
- Update the `proxy` configuration in `values.yaml` with the IP address from the previous step.
- Deploy the Nginx proxy configuration as a ConfigMap (a consolidated sketch of this and the previous preparation steps is shown after this list).
- Deploy the CMS using Helm and the configuration from `values.yaml` (replace `<YOUR_CLUSTER_IP>` with the IP address from the 2nd step):

  ```shell
  helm upgrade --install websight-cms websight-cms \
    --repo https://websight-io.github.io/charts \
    --set cms.persistence.mode=mongo \
    --set cms.persistence.mongo.hosts='mongodb-0.mongodb-headless.cms.svc.cluster.local:27017\,mongodb-1.mongodb-headless.cms.svc.cluster.local:27017' \
    --set cms.livenessProbe.initialDelaySeconds=120 \
    --set cms.ingress.enabled=true \
    --set cms.ingress.host=cms.<YOUR_CLUSTER_IP>.nip.io \
    --namespace cms \
    -f values.yaml --wait
  ```

  It may take one to two minutes to finish. At the end of the installation, you should see a confirmation message from Helm.
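A consolidated sketch of the preparation steps above. It assumes the ingress-nginx names from the Step 2 sketch, uses hypothetical placeholder URLs for the downloads, and assumes the downloaded files contain a `<YOUR_CLUSTER_IP>` placeholder; adjust every placeholder to your setup:

```shell
# Create the working directory and download the two files
# (the URLs are hypothetical placeholders; use the links listed above).
mkdir my-websight-cms && cd my-websight-cms
curl -LO <URL_TO_EXAMPLE_VALUES_YAML>
curl -LO <URL_TO_NGINX_PROXY_CONFIG>

# Find the external IP of the Ingress Controller (assumes the ingress-nginx
# release and namespace names used in the Step 2 sketch).
kubectl get service ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Substitute the IP into the configuration (assumes the downloaded files use a
# <YOUR_CLUSTER_IP> placeholder; 203.0.113.10 is an example IP, use your own).
sed -i 's/<YOUR_CLUSTER_IP>/203.0.113.10/g' values.yaml

# Publish the Nginx proxy configuration as a ConfigMap (the ConfigMap and file
# names are assumptions; match them to what values.yaml expects).
kubectl create configmap websight-cms-nginx-config --from-file=nginx.conf -n cms
```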
Step 4: Verification
- Check the Kubernetes Workloads Dashboard to verify that all pods are running (a command-line alternative is sketched after this list).
- Open the WebSight CMS admin panel at http://cms.<YOUR_CLUSTER_IP>.nip.io/ (SSL is not covered in this guide). Use the credentials `wsadmin` / `wsadmin` to log in.
- Publish some Luna pages (see the Publish demo site guide for help).
- Open http://luna.<YOUR_CLUSTER_IP>.nip.io/ to see the demo page.
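As a command-line alternative to the GKE Workloads Dashboard, a quick check (assuming the `cms` namespace used throughout this tutorial) could look like this:

```shell
# All pods in the cms namespace should eventually report STATUS Running.
kubectl get pods -n cms
```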
Cleanup
- When finished, you can delete your Kubernetes cluster along with all workloads (a sketch of the command is shown below).
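A minimal sketch, assuming the placeholder cluster name and zone from the Step 1 sketch:

```shell
# Deletes the cluster and everything running on it; this cannot be undone.
gcloud container clusters delete websight-demo --zone europe-west1-b
```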