
Kubernetes Installation Guide

TruEra Model Intelligence Platform Containers

The process for obtaining and deploying the AI Quality Platform containers may differ based on the specifics of your Kubernetes environment. TruEra will share its Azure container registry details separately. In production, customers typically automate the download and deployment of TruEra's containers with their CI/CD pipeline. This guide is meant to be a high-level overview of the configuration settings involved in installing TruEra. If needed, familiarize yourself with key Kubernetes concepts before proceeding.


The TruEra system comprises a number of microservices that together provide the functionality to ingest models and data, and an interface for users to extract intelligence about their models. Each microservice runs as a set of pods in a cluster and can be configured through Helm charts or Kubernetes YAML files, which are designed to be deployed to any Kubernetes installation. The next section describes the configuration options.

Kubernetes Deployment

TruEra will provide materialized Kubernetes YAML files with customizable sections left empty to be filled in collaboration with the customers' IT/DevOps teams. Some YAML files (CRD, RBAC) might need to be provisioned by an account with cluster admin privileges.

Before installation

Identify a Kubernetes (K8s) cluster with the required amount of CPU cores, RAM, and disk. Before getting started with the installation, create a namespace within K8s for the TruEra application to run in. Next, set up storage: the TruEra application requires five (5) volume mounts, each of type ReadWriteMany:

  1. Metadata database: 10 GB.
  2. Log storage: 10 GB.
  3. Backup share (production environments only): 30 GB.
  4. TruEra repository share (data): 50 GB or more, depending on the amount of data.
  5. JDBC drivers: 5 GB.
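
As a sketch of this setup, the namespace and one of the ReadWriteMany volume claims could be created as follows. The namespace name, storage class, and claim name here are placeholders, not TruEra-mandated values; actual volume provisioning depends on your cluster's storage backend.

```shell
# Create a dedicated namespace for the TruEra application.
kubectl create namespace truera

# Example PersistentVolumeClaim for the repository share (50 GB).
# "nfs-client" is a placeholder storage class; use one in your cluster
# that supports the ReadWriteMany access mode.
kubectl apply -n truera -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: truera-repository-share
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 50Gi
EOF
```

The same pattern applies to the other four volumes, with the sizes listed above.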

Download containers

TruEra will provide access to a container repository along with authentication details. At this point, you will typically copy the TruEra images into your internal container repository.

List of docker image names: aiq, artifactrepo, backup-service, dataservice, envoy, frontend-node, grafana, kong, kong-ingress-controller, metadata-repo, model-runner-coordinator, model-runner-java, model-runner-python, mongo, monitoring, prometheus, rbac

Sample workflow for manual copying:

docker login -u <username> -p <password> <truera_repo_url>
# For each image listed above, pull the image from the TruEra registry,
# re-tag it, and push it to your internal registry.
docker pull <truera_repo_url>/dev/<image_name>:<tag_name>
docker tag <truera_repo_url>/dev/<image_name>:<tag_name> <internal_registry_with_subpath>/<image_name>:<tag_name>
docker push <internal_registry_with_subpath>/<image_name>:<tag_name>
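
The per-image steps above can be scripted as a loop over the image list. The registry URLs, subpath, and tag below are placeholders to be filled in with the values TruEra shares:

```shell
#!/usr/bin/env bash
set -euo pipefail

TRUERA_REPO="<truera_repo_url>"
INTERNAL_REPO="<internal_registry_with_subpath>"
TAG="<tag_name>"

# The docker image names listed above.
IMAGES="aiq artifactrepo backup-service dataservice envoy frontend-node \
grafana kong kong-ingress-controller metadata-repo model-runner-coordinator \
model-runner-java model-runner-python mongo monitoring prometheus rbac"

for image in $IMAGES; do
  docker pull "${TRUERA_REPO}/dev/${image}:${TAG}"
  docker tag  "${TRUERA_REPO}/dev/${image}:${TAG}" "${INTERNAL_REPO}/${image}:${TAG}"
  docker push "${INTERNAL_REPO}/${image}:${TAG}"
done
```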

At this point, all images should be copied to your internal repository.

Configure Kubernetes resources

As described above, TruEra will provide Helm charts or materialized Kubernetes YAML files, which need to be configured to the customer's requirements. The following lists contain the required and optional configurations.

Required configuration:

  • Kubernetes namespace for deployment: The namespace the TruEra deployment runs in.
  • Persistent volumes for various storage requirements: This allows you to set up persistent storage used by the TruEra services.
  • Docker image repository and image pull secrets: Secrets that allow Kubernetes to pull images for deployment.
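
For example, an image pull secret for the internal registry can be created with kubectl and then referenced from the deployment manifests. The secret name, namespace, and credentials below are placeholders:

```shell
# Create a docker-registry secret so Kubernetes can pull the TruEra images.
kubectl create secret docker-registry truera-pull-secret \
  --docker-server=<internal_registry_with_subpath> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n <truera_namespace>
```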

Optional configuration:

  • Taints and tolerations if the cluster is being shared: These specify scheduling constraints in Kubernetes for a multi-tenant deployment.
  • Service account details: Configuration for the Kubernetes service accounts. The list should include the accounts used to configure and/or run the deployment.
  • SSL certificates: Certificates used for all communications outside the TruEra services.
  • Service endpoint (API gateway): The URL of the main endpoint for the TruEra deployment.
  • OpenID Connect application configuration: For configuring SSO with your internal SSO provider, we support OpenID Connect. You can configure the required OIDC parameters, including the discovery URL, client ID, and secret.
  • CPU and memory resources for pods: This allows you to configure the resources allocated to the different TruEra services. Estimates will be provided by TruEra depending on deployment size.
  • Pip and conda configuration if custom repositories are required: If the public pip and conda services cannot be accessed, TruEra requires access to internal pip and conda repositories in order to run Python models with the correct environments.
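
When internal mirrors are required, the pip and conda configuration might look like the following sketch. The mirror URLs and ConfigMap name are hypothetical, and the exact mechanism for supplying these files to the model-runner pods is deployment-specific; confirm it with TruEra.

```shell
# Hypothetical pip configuration pointing at an internal mirror.
cat > pip.conf <<'EOF'
[global]
index-url = https://pypi.internal.example.com/simple
trusted-host = pypi.internal.example.com
EOF

# Hypothetical conda configuration pointing at an internal channel.
cat > .condarc <<'EOF'
channels:
  - https://conda.internal.example.com/main
EOF

# One way to expose the files to the cluster is as a ConfigMap:
kubectl create configmap pip-conda-config \
  --from-file=pip.conf --from-file=.condarc \
  -n <truera_namespace>
```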

Installation instructions

Once you have access to all of the containers, you can install TruEra using Helm, or you can apply the provided YAML configuration files to the namespace to get the TruEra services running.

Using Helm (Preferred)

The Helm installation process uses a TruEra Helm chart, which the TruEra team will provide to the customer.

The TruEra Helm chart contains a default values.yaml file with the most common configuration options for all of the TruEra Kubernetes resources. However, since every installation has unique options, including different storage and TruEra requirements, an override customer-values.yaml file will be provided to the customer.

The typical Helm command to install TruEra is as follows:

helm install <release_name> <path/to/helm/chart> \
    --namespace <release_namespace> \
    --set dataservice.secret.enabled=true \
    --set dataservice.secret.value=$(openssl rand -base64 512 | tr -d '\n') \
    --set tokenservice.secret.enabled=true \
    --set tokenservice.secret.value=$(openssl rand -base64 512 | tr -d '\n') \
    --set ingress.basic_auth.enabled=true \
    -f <path>/customer-values.yaml
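
Later configuration changes can be rolled out against the same release with helm upgrade; for example, reusing the values set at install time while applying an updated customer-values.yaml (release name, chart path, and namespace as in the install command above):

```shell
helm upgrade <release_name> <path/to/helm/chart> \
    --namespace <release_namespace> \
    --reuse-values \
    -f <path>/customer-values.yaml
```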

Next Steps

Once installed, you can use kubectl get pods to verify that all services are running. It may take up to 15 minutes for all services to come up.
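
For example, to watch the rollout and block until every pod reports Ready (the namespace is a placeholder; adjust the timeout to your environment):

```shell
# List the TruEra pods and their current status.
kubectl get pods -n <truera_namespace>

# Block until every pod in the namespace is Ready, waiting up to 15 minutes.
kubectl wait --for=condition=Ready pods --all \
  -n <truera_namespace> --timeout=15m
```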

Other Kubernetes Flavors or Distributions


OpenShift users typically use the oc CLI instead of kubectl, with some differences. For example, in the installation instructions you can replace kubectl with oc, and use oc project in place of kubectl's namespace handling.

Post Installation

  • Configure defaultAdmins in truera/templates/rbac/configmap.yaml
    • This is initially left empty, but must be populated with the Kong consumer IDs of the required admins,
      e.g. defaultAdmins: ["XXXX-XXXX-XXXX"]
  • Configure a route to allow external network requests to reach the cluster.
    • Route endpoint: kong pod, proxy port (80/443).
    • This allows browsers and various API clients to connect to TruEra.
    • Note: The specific endpoint value may depend on what load balancer your infrastructure provides.
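
Before the external route is in place, connectivity can be sanity-checked by port-forwarding to the Kong proxy. The service name kong-proxy and the port mapping below are assumptions; confirm the actual service name with kubectl get svc in the TruEra namespace.

```shell
# Forward a local port to the Kong proxy service inside the cluster.
kubectl port-forward -n <truera_namespace> svc/kong-proxy 8443:443 &

# In another shell, check that the gateway answers.
curl -k https://localhost:8443/
```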