Deployment Guide
This guide covers deploying RootCause.ai in your Kubernetes environment. The platform is deployed via Helm charts, with all components running inside your infrastructure. All of the Helm values you may need to set are documented in the customisation guide.
Deployment Process
1. Install Dependencies
Install the required infrastructure services: MongoDB, PostgreSQL, Redis, RabbitMQ, Temporal, FusionAuth, and LiteLLM.
2. Install Platform
Install the RootCause.ai platform components: the web UI, data service API, data fusion API, and ML jobs processor.
3. Configure
Set up secrets, environment variables, SSO, and integrations.
4. Scale
Configure replica counts, resource limits, and autoscaling for production workloads.
5. Upgrade
Procedures for upgrading to new versions with zero downtime.
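Steps 3–5 typically translate into a Helm values file plus upgrade commands. The values keys, release names, and chart coordinates below are illustrative assumptions, not the charts' actual schema:

```shell
# Step 3: configuration is usually supplied as a values file.
# Every key below is a placeholder -- consult the customisation guide
# for the real value names.
cat > rootcause-values.yaml <<'EOF'
dataService:
  replicaCount: 3          # Step 4: scale out for production workloads
  resources:
    limits: {cpu: "2", memory: 4Gi}
secrets:
  existingSecret: rootcause-secrets
EOF

# Step 5: a zero-downtime upgrade is a rolling Helm upgrade;
# --atomic rolls back automatically if the release fails.
helm upgrade data-service rootcause/data-service \
  -n rootcause -f rootcause-values.yaml --atomic --wait
```

The same values file can be passed to the initial `helm install`, so configure-then-upgrade uses one source of truth.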
Architecture Overview
RootCause.ai consists of:
Platform Components:
platform – Web UI and backend-for-frontend
data-service – Core API for data processing and orchestration
ml-jobs – Headless compute for causal discovery and simulations
data-fusion – LDAP layer for data access
Dependencies:
MongoDB – Document database for application data
PostgreSQL – Relational database for Temporal, FusionAuth, LiteLLM
Redis – Caching and session storage
RabbitMQ – Message queue for async processing
Temporal – Durable workflow execution
FusionAuth – Identity and access management
LiteLLM – LLM API proxy and management
Prerequisites
Kubernetes cluster v1.30.0+
Helm 3.0+
kubectl configured for your cluster
3–4 nodes with 8 vCPU and 64 GB RAM each
SSD-backed persistent volumes
Image pull access to the GitLab registry
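A pre-flight script can check these prerequisites before installing. This is a sketch: the `version_ge` helper relies on `sort -V`, and the cluster-specific checks are shown as comments because they need a live cluster:

```shell
#!/usr/bin/env bash
# Pre-flight check sketch for the prerequisites above.

version_ge() {
  # True when $1 >= $2 in version ordering (GNU sort -V).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

K8S_MIN="1.30.0"   # Kubernetes floor from this guide
HELM_MIN="3.0.0"   # Helm floor from this guide

# Against a live cluster you would extract the running versions
# (e.g. from `kubectl version -o json` and `helm version`) and gate on them:
#   version_ge "$k8s_ver"  "$K8S_MIN" || exit 1
#   version_ge "$helm_ver" "$HELM_MIN" || exit 1
version_ge "1.30.2" "$K8S_MIN" && echo "Kubernetes version OK"
```

Checking node count, CPU/RAM capacity, and SSD-backed storage classes would follow the same pattern with `kubectl get nodes` and `kubectl get storageclass`.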
Quick Start
See the individual guides for detailed instructions.
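A minimal quick start might look like the following. The chart repository URL, chart names, and namespace are assumptions for illustration, not the real coordinates:

```shell
# Placeholders throughout -- substitute your actual registry and chart names.
helm repo add rootcause https://charts.example.com/rootcause
helm repo update

# Dependencies first (MongoDB, PostgreSQL, Redis, RabbitMQ, Temporal,
# FusionAuth, LiteLLM), then the platform components:
helm install platform     rootcause/platform     -n rootcause --create-namespace
helm install data-service rootcause/data-service -n rootcause
helm install data-fusion  rootcause/data-fusion  -n rootcause
helm install ml-jobs      rootcause/ml-jobs      -n rootcause

kubectl get pods -n rootcause   # verify everything reaches Running
```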