Fusion 5.4

Deploy Fusion 5 on Other Kubernetes Platforms

The setup_f5_k8s.sh script in the fusion-cloud-native repository provides deployment support for any Kubernetes platform, including on-premise, private cloud, public cloud, and hybrid platforms.

This script is used by the setup_f5_gke.sh, setup_f5_eks.sh, and setup_f5_aks.sh scripts, which provide additional platform-specific support for Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS).

This topic explains how to deploy a Fusion cluster in Kubernetes using the setup_f5_k8s.sh script in the fusion-cloud-native repository.

If you’re deploying on-premises or using a localized repository, you’ll need to use a private repository for Docker images.

Prerequisites

This section covers prerequisites and background knowledge needed to help you understand the structure of this document and how the Fusion installation process works with Kubernetes.

Release Name and Namespace

Before installing Fusion, you need to choose a Kubernetes namespace to install Fusion into. Think of a K8s namespace as a virtual cluster within a physical cluster. You can install multiple instances of Fusion in the same cluster in separate namespaces. However, please do not install more than one Fusion release in the same namespace.

NOTE: All Fusion services must run in the same namespace, i.e. you should not try to split a Fusion cluster across multiple namespaces.

Use a short name for the namespace, containing only letters, digits, or dashes (no dots or underscores). The setup scripts in this repo use the namespace for the Helm release name by default.
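
For example, a minimal sketch that records a hypothetical namespace name and the matching Helm release name as shell variables for use in later commands (substitute your own name):

export NAMESPACE=fusion-ns      # hypothetical namespace: letters, digits, and dashes only
export RELEASE=${NAMESPACE}     # the setup scripts default the release name to the namespace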

Install Helm

Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster. Helm is required to install Fusion on any Kubernetes platform. On macOS, you can do:

brew install kubernetes-helm

If you already have helm installed, make sure you’re using the latest version:

brew upgrade kubernetes-helm

For other operating systems, refer to the Helm installation docs: https://helm.sh/docs/using_helm/

The Fusion Helm chart requires a Helm version greater than 3.0.0; check your Helm version by running helm version --short.
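
For example:

helm version --short

Any reported v3.x version above 3.0.0 satisfies the requirement.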

Helm User Permissions

If Fusion must be installed by a user with minimal permissions instead of an admin user, the role and cluster role that must be assigned to that user within the namespace where you wish to install Fusion are documented in the install-roles directory.

When working with Kubernetes on the command-line, it’s useful to create a shell alias for kubectl, e.g.:
alias k=kubectl

To use these roles in a cluster, as an admin user, first create the namespace that you wish to install Fusion into:

k create namespace fusion-namespace

Apply the role.yaml and cluster-role.yaml files to that namespace:

k apply -f cluster-role.yaml
k config set-context --current --namespace=fusion-namespace
k apply -f role.yaml

Then create the rolebinding and clusterrolebinding for the install user:

k create --namespace fusion-namespace rolebinding fusion-install-rolebinding --role fusion-installer --user <install_user>
k create clusterrolebinding fusion-install-rolebinding --clusterrole fusion-installer --user <install_user>

You will then be able to run the helm install command as the <install_user>.
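
As a minimal sketch (assuming the Lucidworks chart repo has been added and a custom values YAML file has been created, both covered below), the install command run as the <install_user> would look similar to:

helm install <release> lucidworks/fusion --namespace fusion-namespace --values <custom-values>.yaml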

Clone fusion-cloud-native from GitHub

You should clone this repo from GitHub as you’ll need to run the scripts on your local workstation:

git clone https://github.com/lucidworks/fusion-cloud-native.git

You should get into the habit of pulling this repo for the latest changes before performing any maintenance operations on your Fusion cluster to ensure you have the latest updates to the scripts.

cd fusion-cloud-native
git pull

Cloning the GitHub repo is preferred so that you can pull in updates to the scripts, but if you are not a git user, then you can download the project: https://github.com/lucidworks/fusion-cloud-native/archive/master.zip. Once downloaded, extract the zip and cd into the fusion-cloud-native-master directory.

Deployment

If you’re not running on a managed K8s platform like GKE, AKS, or EKS, you can use Helm to install the Fusion chart to an existing Kubernetes cluster.

Fusion version 5.5 now includes support for the Rancher Kubernetes Engine (RKE) platform. Before deploying Fusion to RKE, you must download and install the RKE software. After configuring your cluster, you can proceed with the Helm v3 installation.

You must have a working cluster configured before performing the Helm v3 installation.
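
For example, a quick sanity check that your kubeconfig points at the intended cluster and that its nodes are reachable:

kubectl config current-context
kubectl get nodes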

Use Helm v3 to Install Fusion

You should upgrade to the latest version of Helm v3 for working with Fusion. If you need to keep Helm v2 for other clusters, ensure Helm v3 is ahead of Helm v2 in your working shell’s PATH before proceeding.

Customize Fusion Chart Settings

Fusion aims to be well-configured out-of-the-box, but you can customize any of the built-in settings using a custom values YAML file. If you use one of our setup scripts, such as setup_f5_gke.sh, it will create a custom values YAML file for you the first time you run it, using customize_fusion_values.yaml.example as a template.

If you’re working with Helm directly and not using one of our setup scripts, then run the customize_fusion_values.sh script to create a custom values YAML file from our customize_fusion_values.yaml.example template as a starting point:

./customize_fusion_values.sh -c <cluster> -n <namespace> \
  --provider <provider> --num-solr 1 --node-pool "<node_pool>"

Pass --help for usage details.

In this example:

  • <provider> is the K8s platform you’re running on, such as gke

  • <cluster> is the name of your cluster

  • <namespace> is the K8s namespace where you plan to install Fusion

The --node-pool option specifies the node selector label for determining which nodes to run Fusion pods. You can pass "{}" to let Kubernetes decide which nodes to schedule pods on.
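
For example, a hypothetical invocation for a GKE cluster named my-cluster and a namespace named fusion-ns, letting Kubernetes decide where to schedule pods:

./customize_fusion_values.sh -c my-cluster -n fusion-ns \
  --provider gke --num-solr 1 --node-pool "{}"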

This file is referred to as ${MY_VALUES} in the commands below. Replace the filename with the correct filename for your environment. Keep this file handy, as you’ll need it to customize Fusion settings and upgrade to a newer version.

Review the settings in the custom values YAML file to ensure the defaults are appropriate for your environment, including the number of Solr and Zookeeper replicas.

Add the Lucidworks Helm repo:

helm repo add lucidworks https://charts.lucidworks.com

The customize_fusion_values.sh script creates an upgrade script to install/upgrade Fusion into Kubernetes using Helm. Look in the directory where you ran customize_fusion_values.sh for a script named like: <provider>_<cluster>_<namespace>_upgrade_fusion.sh. Run this script to install Fusion.
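
For example, with the hypothetical provider, cluster, and namespace used above, the generated script would be run as:

./gke_my-cluster_fusion-ns_upgrade_fusion.sh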

RedHat OpenShift

You can deploy Fusion in an existing OpenShift cluster. The cluster should be created using the OpenShift Infrastructure Provider. A Red Hat Customer Portal account is required. OpenShift Online services are not supported.

The easiest way to install on OpenShift is to run the setup_f5_k8s.sh script for your existing cluster; use the --help option to see script usage. For instance, the following command will install Fusion 5 into the specified namespace (-n) and OpenShift cluster (-c):

./setup_f5_k8s.sh -c <CLUSTER> -n <NAMESPACE> --provider oc

Tip: kubectl should work with your OpenShift cluster (see: https://docs.openshift.com/container-platform/4.1/cli_reference/usage-oc-kubectl.html), and Lucidworks recommends installing the latest kubectl for your workstation instead of using oc for installing Fusion 5. However, if you do not have kubectl installed, you’ll need to update the upgrade script created by setup_f5_k8s.sh to use oc instead of kubectl (search and replace on the bash script using a text editor, or use the one-liner sketched below).
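
For example, a one-liner sketch that performs the replacement and keeps a .bak backup of the original (adjust the filename to match the script generated for your cluster):

sed -i.bak 's/kubectl/oc/g' <provider>_<cluster>_<namespace>_upgrade_fusion.sh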

When you’re ready to deploy Fusion to a production-like environment, refer to the Planning section of the Survival Guide.

Lucidworks recommends using Helm v3, but in case Tiller is required for Helm v2, the cluster security needs to be relaxed to allow images to run with different UIDs:

oc adm policy add-scc-to-group anyuid system:authenticated

Verifying the Fusion Installation

In this section, we provide some tips on how to verify the Fusion installation.

Check if the Fusion Admin UI is available at https://<fusion-host>:6764/admin/.
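
If you have not yet exposed an external address for the Fusion API gateway, a temporary port-forward is one way to reach the Admin UI; a minimal sketch, assuming the gateway is exposed through a service named proxy (verify the service name with kubectl get svc):

kubectl port-forward svc/proxy 6764:6764

Then direct your browser to https://localhost:6764/admin/.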

Let’s review some useful kubectl commands.

Enhance the K8s Command-line Experience

The kubectl commands below are useful for improving your command-line experience with Kubernetes:

Useful kubectl commands

Set the namespace for kubectl if not using the default:

kubectl config set-context --current --namespace=<NAMESPACE>

This saves you from having to pass -n with every command.

Get a list of running pods: k get pods

Get logs for a pod using a label: k logs -l app.kubernetes.io/component=query-pipeline

Get pod deployment spec and details: k get pods <pod_id> -o yaml

Get details about a pod’s events: k describe po <pod_id>

Port forward to a specific pod: k port-forward <pod_id> 8983:8983

SSH into a pod: k exec -it <pod_id> -- /bin/bash

CPU/Memory usage report for pods: k top pods

Forcefully kill a pod: k delete po <pod_id> --force --grace-period 0

Scale up (or down) a deployment: k scale deployment.v1.apps/<id> --replicas=N

Get a list of pod versions: k get po -o jsonpath='{..image}' | tr -s '[[:space:]]' '\n' | sort | uniq

Check Fusion Pods and Services

Once the install script completes, you can check that all pods and services are available using:

kubectl get pods

If all goes well, you should see a list of pods similar to:

NAME                                                        READY   STATUS    RESTARTS   AGE
seldon-controller-manager-6675874894-qxwrv                  1/1     Running   0          8m45s
f5-admin-ui-74d794f4f8-m5jms                                1/1     Running   0          8m45s
f5-ambassador-fd6b9b5dc-7ghf6                               1/1     Running   0          8m43s
f5-api-gateway-6b9998b9c-tmchk                              1/1     Running   0          8m45s
f5-auth-ui-7565564b4c-rdc74                                 1/1     Running   0          8m42s
f5-classic-rest-service-0                                   1/1     Running   3          8m44s
f5-devops-ui-77bb867ffb-fbzxd                               1/1     Running   0          8m42s
f5-fusion-admin-78b8f8fc7f-4d7l8                            1/1     Running   0          8m42s
f5-fusion-indexing-599c8d448-xzsvm                          1/1     Running   0          8m44s
f5-insights-665fd9f6fc-g5psw                                1/1     Running   0          8m43s
f5-job-launcher-84dd4c5c96-p8528                            1/1     Running   0          8m44s
f5-job-rest-server-6d44d964b8-xtnxw                         1/1     Running   0          8m45s
f5-logstash-0                                               1/1     Running   0          8m45s
f5-ml-model-service-6987dc94c9-9ppp8                        2/2     Running   1          8m45s
f5-monitoring-grafana-5d499dbb58-pzw72                      1/1     Running   0          10m
f5-monitoring-prometheus-kube-state-metrics-54d6678dv9h7h   1/1     Running   0          10m
f5-monitoring-prometheus-pushgateway-7d65c65b85-vwrwf       1/1     Running   0          10m
f5-monitoring-prometheus-server-0                           2/2     Running   0          10m
f5-pm-ui-86cbc5bb65-nd2n8                                   1/1     Running   0          8m44s
f5-pulsar-bookkeeper-0                                      1/1     Running   0          8m45s
f5-pulsar-broker-b56cc776f-56msx                            1/1     Running   0          8m45s
f5-query-pipeline-5d75d7d5f4-l2mdf                          1/1     Running   0          8m43s
f5-connectors-7bb6cfc65f-7wfs2                              1/1     Running   0          8m42s
f5-connectors-backend-987fdc648-dldwv                       1/1     Running   0          8m45s
f5-rules-ui-6b9d55b78f-9hzzj                                1/1     Running   0          8m43s
f5-solr-0                                                   1/1     Running   0          8m44s
f5-solr-exporter-c4687c785-jsm7x                            1/1     Running   0          8m45s
f5-ui-6cdbcc68c6-rj9cq                                      1/1     Running   0          8m45s
f5-webapps-6d6bb9bfd-hm4qx                                  1/1     Running   0          8m45s
f5-workflow-controller-7b66679fb7-sjbvp                     1/1     Running   0          8m44s
f5-zookeeper-0                                              1/1     Running   0          8m45s

The number of pods per deployment / statefulset will vary based on your cluster size and the replicaCount settings in your custom values YAML file. Also, don’t worry if you see that some pods have been restarted; that just means they were too slow to come up and Kubernetes killed and restarted them. You do want to see at least one pod running for every service. If a pod is not running after waiting a sufficient amount of time, use kubectl logs <pod_id> to see the logs for that pod; to see the logs for a previous instance of a pod, use kubectl logs <pod_id> -p. You can also look at the actions Kubernetes performed on the pod using kubectl describe po <pod_id>.

To see a list of Fusion services, do:

kubectl get svc

For an overview of the various Fusion 5 microservices, see: https://doc.lucidworks.com/fusion/5.3/149/fusion-microservices

Once you’re ready to build a Fusion cluster for production, please see the Fusion 5 Survival Guide in this repo.

Upgrading with Zero Downtime

One of the most powerful features provided by Kubernetes and a cloud-native microservices architecture is the ability to do a rolling update on a live cluster. Fusion 5 allows customers to upgrade from Fusion 5.x.y to a later 5.x.z version on a live cluster with zero downtime or disruption of service.

When Kubernetes performs a rolling update on an individual microservice, there will be a mix of old and new services in the cluster concurrently (only briefly in most cases), and requests from other services will be routed to both versions. Consequently, Lucidworks ensures that changes to our services do not break the API interfaces exposed to other services within the same 5.x line of releases. We also ensure stored configuration remains compatible within the same 5.x release line.

Lucidworks releases minor updates to individual services frequently, so our customers can pull in those upgrades using Helm at their discretion.

To upgrade your cluster at any time, use the --upgrade option with our setup scripts in this repo.

The scripts in this repo automatically pull in the latest chart updates from our Helm repository and deploy any updates needed by doing a diff of your current installation and the latest release from Lucidworks. To see what would be upgraded, you can pass the --dry-run option to the script.
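
For example, a sketch of previewing and then applying an upgrade with setup_f5_k8s.sh (substitute your own cluster, namespace, and provider):

./setup_f5_k8s.sh -c <cluster> -n <namespace> --provider <provider> --upgrade --dry-run
./setup_f5_k8s.sh -c <cluster> -n <namespace> --provider <provider> --upgrade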

Grafana Dashboards

Get the initial Grafana password from a K8s secret by doing:

kubectl get secret --namespace "${NAMESPACE}" ${RELEASE}-monitoring-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

With Grafana, you can either set up a temporary port-forward to a Grafana pod or expose Grafana on an external IP using a K8s LoadBalancer. To define a LoadBalancer, do the following (replace ${RELEASE} with your Helm release label):

kubectl expose deployment ${RELEASE}-monitoring-grafana --type=LoadBalancer --name=grafana --port=3000 --target-port=3000

You can use kubectl get services --namespace <namespace> to determine when the load balancer is set up and to get its IP address. Direct your browser to http://<GrafanaIP>:3000 and enter the username admin@localhost and the password that was returned in the previous step.

This will log you into the application. It is recommended that you create another administrative user with a stronger password.
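
If you prefer not to expose Grafana on an external IP, a temporary port-forward is an alternative; a minimal sketch (then direct your browser to http://localhost:3000 and log in the same way):

kubectl port-forward --namespace "${NAMESPACE}" deployment/${RELEASE}-monitoring-grafana 3000:3000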

The dashboards and data source are set up for you in Grafana. Navigate to Dashboards > Manage to view the available dashboards.