Deploy to Google GKE

Deploy MOSTLY AI to a Google GKE cluster

💡

This page will be updated soon with deployment details about MOSTLY AI v200. Stay tuned!

You can run MOSTLY AI in a Google Kubernetes Engine (GKE) cluster. If you follow the process provided below, you will be able to configure, deploy, and run MOSTLY AI.

Prerequisites

Before you start, make sure that you have:

  • a Google Cloud account with permissions to create projects and GKE clusters
  • the gcloud CLI installed and authenticated on your local system
  • kubectl installed on your local system
  • helm installed on your local system

Pre-deployment

Before you deploy MOSTLY AI to a GKE cluster, you need to complete the necessary pre-deployment tasks.

Task 1: Create a GCP project

To create a GKE cluster, you must first have a project in Google Cloud Platform.

The project is the logical container for your GKE cluster. Your monthly Google Cloud Platform bill accumulates based on the usage of services in the project.

MOSTLY AI recommends that you create a dedicated project for your MOSTLY AI application.

Steps

  1. Open the Google Cloud Console.
  2. Click CREATE OR SELECT A PROJECT.
  3. Click NEW PROJECT.
  4. On the New Project page, configure the new project.
    1. For Project name, enter a name for the project.
    2. (Optional) For Organization, select your organization or select a different billing account.
    3. (Optional) For Location, select your organization again or click BROWSE to select a different location.
    4. Click CREATE.
  5. From the success notification, click SELECT PROJECT.
  6. After the project opens, select Kubernetes Engine from the sidebar.
  7. On the Product details page for the Kubernetes Engine API, click ENABLE.

Result

Wait until the Kubernetes Engine API is enabled. This can take 30 to 60 seconds to complete.

Kubernetes Engine opens on the Clusters page, from where you can create a new Kubernetes cluster (also referred to as a GKE cluster).
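
If you prefer the command line, a roughly equivalent gcloud sketch for this task is shown below. The project ID is a placeholder, and you may still need to link a billing account in the console.

# Create and select a dedicated project for MOSTLY AI (replace <project-id> with your own ID)
gcloud projects create <project-id> --name="MOSTLY AI"
gcloud config set project <project-id>
# Enable the Kubernetes Engine API in the selected project
gcloud services enable container.googleapis.com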

Task 2: Create a GKE cluster

After you have a GCP project, you can now create a new Google Kubernetes Engine (GKE) cluster.

When you create the cluster, you define the number of nodes in the cluster and the type of nodes in the cluster. The type of nodes defines the compute resources that your cluster will have.

For your initial configuration, MOSTLY AI recommends that you use a single-node cluster with the node type e2-standard-16 which provides:

  • 16 vCPUs
  • 8 cores
  • 64GB memory

Steps

  1. From the Clusters page, click Create.
  2. In the Create an Autopilot cluster wizard (which starts by default), click SWITCH TO STANDARD CLUSTER in the upper right.
  3. In the confirmation dialog box, click SWITCH TO STANDARD CLUSTER.
  4. Configure the number of nodes in the cluster.
    1. From the sidebar under NODE POOLS, select the default-pool.
    2. Under Size, set the Number of nodes to 1.

    You can adjust to a higher number of nodes if you find that you need more nodes to process more synthetic data workloads.

  5. Configure the node type and disk size recommended by MOSTLY AI for your synthetic data workloads.
    1. From the sidebar on the left, expand default-pool and select Nodes.
    2. Under Machine configuration, select a node type that can power your MOSTLY AI synthetic data workloads.
    3. For Machine type, select the e2-standard-16 type with 16 virtual CPUs, 8 cores, and 64GB memory.
    4. For Boot disk size, define at least 200GB.
  6. Enable the Filestore CSI driver.
    1. From the sidebar on the left under CLUSTER, select Features.
    2. Under Other, enable the checkbox for Filestore CSI Driver.
  7. Leave the remaining configurations at their defaults.
  8. Click CREATE at the bottom.

Result

The cluster creation starts and can take several minutes to complete.
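
If you prefer to create the cluster from the command line, the following gcloud sketch creates a comparable single-node standard cluster. The cluster name and zone are placeholders; adjust them to your environment.

# Create a single-node standard cluster with the recommended machine type,
# a 200GB boot disk, and the Filestore CSI driver add-on enabled
gcloud container clusters create <cluster-name> \
  --zone <zone-name> \
  --machine-type e2-standard-16 \
  --num-nodes 1 \
  --disk-size 200 \
  --addons GcpFilestoreCsiDriver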

Task 3: Connect to the GKE cluster

With the GKE cluster created, you can now use the command line to connect to the cluster.

Make sure that you have the gcloud CLI and kubectl installed, as listed in the Prerequisites.

Steps

  1. In Kubernetes Engine, select Clusters from the sidebar.
  2. Hover over the cluster and click the Actions three-dot button.
  3. Select Connect from the drop-down menu.
  4. In the Connect to the cluster modal window, copy the command to configure your cluster access.
  5. Open a terminal or a command-line application, and paste and run the command.

    For example, the following is a template command where you need to replace the values for the <cluster-name>, <zone-name>, and <project-identifier> arguments with your relevant values.

    When you copy the command in step 4, the values will be pre-filled for you.
    gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-identifier>
    If you connected successfully, you will see the following output:
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for <cluster-name>.

Result

The GKE cluster connection is now configured on your local system.

What's next

You can now use kubectl to communicate with your GKE cluster and helm to install and manage packages.
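
To confirm that both tools work against the new cluster, you can run a quick check. This is only a sketch; the exact output depends on your cluster.

# List the cluster nodes to confirm that kubectl points at the new GKE cluster
kubectl get nodes
# Confirm that helm is installed locally
helm version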

Task 4: Install NGINX

Install the NGINX ingress controller to assign a LoadBalancer IP address to your cluster. You can then assign an FQDN to the IP address and make the MOSTLY AI web application accessible to users in your organization.

Steps

  1. Add the helm chart repository for the Ingress NGINX Controller.
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    You should see the following result:
    "ingress-nginx" has been added to your repositories
  2. Install the Ingress NGINX Controller.
    helm install nginx-ingress ingress-nginx/ingress-nginx

Result

The Ingress NGINX Controller is now installed for your GKE cluster. You might see a result similar to the output below.

Note the line that indicates that the LoadBalancer IP can take a few minutes to become available, as well as the suggested command to check when the IP is assigned.

NAME: nginx-ingress
LAST DEPLOYED: Thu Sep 28 01:35:44 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

What's next

To check whether the cluster ingress IP address has been assigned, run the following command:

kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller

If your IP is assigned, you will see it under the EXTERNAL-IP column. The EXTERNAL-IP value below is masked on purpose.

NAME                                     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE     SELECTOR
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.48.12.255   34.140.40.***   80:31623/TCP,443:32105/TCP   4m41s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

You can now use your ingress IP and assign it to the domain name that you intend to use for your MOSTLY AI cluster.
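
If you manage the domain in Cloud DNS, you can create the record from the command line. The sketch below assumes an existing managed zone; the zone name, domain, and IP are placeholders, and any other DNS provider works just as well.

# Point your MOSTLY AI domain at the ingress EXTERNAL-IP
gcloud dns record-sets create mostlyai-cluster.com. \
  --zone <managed-zone-name> \
  --type A \
  --ttl 300 \
  --rrdatas <EXTERNAL-IP>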

Task 5: Enable the Cloud Filestore API

Steps

  1. Go to https://console.cloud.google.com/marketplace/product/google/file.googleapis.com.
  2. Select the project for which you want to enable the Cloud Filestore API.
  3. Click Enable.

Result

After the Cloud Filestore API is enabled, the Manage button becomes available, which you can use to disable the API if that becomes necessary at a later stage.
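
You can also enable the API from the command line. This is a sketch that assumes the gcloud CLI is set to the same project:

gcloud services enable file.googleapis.com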


Deployment

You can deploy MOSTLY AI from the Google Cloud Marketplace or manually with the MOSTLY AI Helm chart.

Deploy from the Google Cloud Marketplace

MOSTLY AI is available in the Google Cloud Marketplace and you can use the offering to deploy it to your GKE cluster.

Steps

  1. Go to https://console.cloud.google.com/marketplace/browse?q=mostly%20ai.
  2. Select MOSTLY AI Synthetic Data Platform BYOL.
  3. Click GET STARTED.
  4. Select a Google Cloud project for MOSTLY AI, select the Terms and agreements checkbox, and click AGREE.
  5. Click DEPLOY in the prompt.
  6. On the Deploy MOSTLY AI page, click OR SELECT AN EXISTING CLUSTER.
  7. For Existing Kubernetes Cluster, select your GKE cluster.
  8. For Namespace, select Create a namespace, and for New namespace name set mostly-ai.
    💡

    We recommend that you use a separate namespace called mostly-ai to aid deployment and eventual cleanup.

  9. For App instance name, set the GKE app name.
  10. For The domain name, set a fully qualified domain name (FQDN) for the MOSTLY AI app.
  11. Click DEPLOY.

Result

Google Cloud redirects your browser to the Applications tab of your GKE cluster and shows the deployment progress.


When the deployment completes, the Application details page lists all components and their status.


Deploy manually with MOSTLY AI Helm chart

If you wish to deploy MOSTLY AI manually with the Helm chart, you need to obtain the Helm chart from your MOSTLY AI Customer Experience Manager. You can then configure your deployment in the values.yaml file and use the helm command to start the deployment process.

The values.yaml file is part of the MOSTLY AI Helm chart. The Helm chart includes configuration files that define the resources and configurations needed to deploy MOSTLY AI.
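
If you received the Helm chart as a packaged archive, you can unpack it and review the default values before editing. This is only a sketch; the archive and directory names are placeholders.

# Unpack the chart archive and print its default values
tar -xzf <mostly-ai-helm-chart>.tgz
helm show values ./<mostly-ai-helm-chart>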

Steps

  1. Open the values.yaml file in a text editor.
  2. Define the domain name (FQDN) for your MOSTLY AI application.
    1. For domain, define the domain name from which you want to reach the MOSTLY AI application.
      values.yaml
      name: mostlyai
       
      domain: mostlyai-cluster.com
       
      ...
    2. For ingress.fqdn, define the domain name again.
      values.yaml
      ...
      ingress:
          fqdn: mostlyai-cluster.com
          ...
    3. For the line nginx.ingress.kubernetes.io/cors-allow-origin, edit it to reflect your domain as shown below.
      values.yaml
      ...
      nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
      nginx.ingress.kubernetes.io/cors-allow-origin: https://*.mostlyai-cluster.com
      nginx.ingress.kubernetes.io/enable-cors: "true"
      ...
  3. Define the secret to the MOSTLY AI image repository on the docker_secret line.
    💡

    You can obtain the secret for the MOSTLY AI image repository from your MOSTLY AI Customer Experience Engineer.

    If you plan to use an internal image repository, see Configure an internal image repository.

    values.yaml
    ...
    domain: mostlyai-cluster.com
     
    # required to be updated with provided by Mostly AI json encoded with base64 docker config
    docker_secret: eyJhdXR****
    ...
  4. For the platform line, define k8s. This corresponds to the use of GKE as a platform.
    values.yaml
    ...
    platform: k8s
    ...
  5. Define the storage class for each service that requires one.
    values.yaml
    ...
    PSQL:
    ...
        name: mostly-psql
    ...
        pvc:
            name: mostly-db
            size: 50Gi
            storageClassName: standard-rwo
    ...
    COORDINATOR:
        pvc:
            name: mostly-data
            size: 50Gi
            accessMode: ReadWriteOnce
            storageClassName: standard-rwo
    ...
    APP:
        name: mostly-app
    ...
  6. Verify the affinity.nodeAffinity lines and set the label mostly_app=yes on the application node and the label mostly_worker=yes on all worker nodes. If you deploy onto a single node in a cluster, set both labels on that node.
    1. View all nodes.
      kubectl get nodes
    2. Apply the mostly_app=yes label to the node that will run the MOSTLY AI application services.
      kubectl label nodes APP_NODE mostly_app=yes
    3. Apply the mostly_worker=yes label to the nodes that will run AI tasks and generate synthetic datasets.
      kubectl label nodes WORKER_NODE mostly_worker=yes
    4. Make sure to leave the lines below as indicated.
      values.yaml
      ...
      PSQL:
          affinity:
              nodeAffinity:
                  requiredDuringSchedulingIgnoredDuringExecution:
                      nodeSelectorTerms:
                          - matchExpressions:
                                - key: mostly_app
                                  operator: In
                                  values:
                                      - 'yes'
      ...
      KEYCLOAK:
          affinity:
              ...
      ...
      COORDINATOR:
          affinity:
              ...
      ...
      DATA:
          affinity:
              ...
      ...
      APP:
          affinity:
              ...
      ...
      UI:
          affinity:
              ...
      ...
      agent:
      ...
          affinity:
              nodeAffinity:
                  requiredDuringSchedulingIgnoredDuringExecution:
                      nodeSelectorTerms:
                          - matchExpressions:
                                - key: mostly_worker
                                  operator: In
                                  values:
                                      - 'yes'
      ...
      engine:
      ...
          affinity:
              ...
  7. For each service in the values.yaml file, you can set its security context so that the service can securely access mostly-data. The lines below are available for each service (total of 9) and you can use them to set the security context based on your internal security policies.
    values.yaml
    ...
    name: mostly-psql
    ...
    name: mostly-keycloak
    ...
    name: mostly-coordinator
    ...
    name: mostly-data
    ...
    name: mostly-app
    ...
    name: mostly-ui
    ...
    ### Agent settings
    agent:
    ...
    ### Engine settings
    engine:
    ...
        podSecurityContext:
            fsGroup: 1001
     
        securityContext:
            runAsNonRoot: true
            seccompProfile:
                type: RuntimeDefault
            supplementalGroups:
                - 26
                - 70
  8. Save the file.
  9. In a terminal or command-line application, change directory to where your MOSTLY AI Helm chart is located.
  10. Run the helm upgrade command with the options defined below.
    helm upgrade --install mostly-ai ./ --values values.yaml --namespace mostly-ai --create-namespace --debug

Result

The MOSTLY AI applications and services pods are queued for start up in your GKE cluster.

What's next

You can track the startup progress in Kubernetes Engine on the Workloads tab. While the pods are starting, you might see the status column showing errors, such as Does not have minimum availability.

Give the pods at least 10 minutes to start properly and establish the necessary connections between services.
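
If you prefer the command line, you can also follow the startup with kubectl. This assumes you deployed into the mostly-ai namespace:

kubectl get pods --namespace mostly-ai --watch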

After you see that all pods have a successful startup, you can continue with the post-deployment tasks.

Post-deployment

With the MOSTLY AI pods running, you can now log in to your MOSTLY AI deployment for the first time.

Log in to your MOSTLY AI deployment

Log in for the first time to your MOSTLY AI deployment to set a new password for the superadmin user.

Prerequisites

Contact MOSTLY AI to obtain the superadmin credentials, as you need them to log in for the first time.

Steps

  1. Open your FQDN in your browser.
    Step result: The Sign in page for your MOSTLY AI deployment opens.
  2. Enter the superadmin credentials and click Sign in.
  3. Provide a new password and click Change password.

Result

Your superadmin password is now changed and you can use it to log in again to your MOSTLY AI deployment.