Self-Hosted System Requirements
RootCause.ai is deployed via a single Helm chart into your Kubernetes environment. All components run within your infrastructure. If Kubernetes is not available, a bootstrap script is provided.
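Installation follows the standard Helm workflow. The repository URL, chart name, and namespace below are placeholders for illustration, not the actual published values; substitute the ones supplied with your deployment package:

```shell
# Placeholder repo URL and chart name -- use the values supplied
# with your RootCause.ai deployment package.
helm repo add rootcause https://charts.rootcause.example
helm repo update

# Install the platform into its own namespace.
helm install rootcause rootcause/rootcause-platform \
  --namespace rootcause --create-namespace \
  --values values.yaml

# Confirm the three workloads listed below come up.
kubectl get pods --namespace rootcause
```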
What’s Installed:
perceptura-platform: UI and backend-for-frontend (Next.js + API)
data-service: Core backend for data processing, orchestration, and LLM integration (FastAPI)
ml-jobs: Headless compute engine for batch and algorithmic workloads
Dependencies (bring your own or deploy via our charts):
MongoDB (8.0+)
PostgreSQL (16.0+)
Redis (7.2.4+)
RabbitMQ (3.12.2+)
Temporal (1.30.0+)
MinIO or other S3-compatible storage
FusionAuth (optional)
LiteLLM for LLM integration
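The "bring your own" option typically means disabling the bundled dependency charts and pointing the platform at existing endpoints through Helm values. The key names below are assumptions sketching the shape of such an override file; consult the chart's documented values schema for the real keys:

```shell
# Write a hypothetical values override pointing at external dependencies.
# Key names and hosts are illustrative assumptions, not the chart's
# actual schema.
cat > values-external.yaml <<'EOF'
mongodb:
  enabled: false            # do not deploy the bundled chart
  uri: mongodb://mongo.internal:27017/rootcause
postgresql:
  enabled: false
  host: postgres.internal
  port: 5432
redis:
  enabled: false
  host: redis.internal
rabbitmq:
  enabled: false
  host: rabbitmq.internal
temporal:
  enabled: false
  address: temporal.internal:7233
s3:
  endpoint: https://minio.internal:9000
  bucket: rootcause
EOF
```

The file would then be passed to the install with `--values values-external.yaml`.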
Infrastructure Requirements:
Kubernetes v1.30.0+, amd64 nodes
3–4 nodes, each with 8 vCPU, 64GB RAM
SSD-backed persistent volumes
Optional: GPU (min 48GB VRAM) if running local LLMs
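For capacity planning, the sizing above translates into the following aggregate figures at the 4-node high end (a back-of-envelope only; it ignores per-node system and kubelet reservations):

```shell
# Back-of-envelope aggregate capacity for the recommended cluster.
NODES=4
VCPU_PER_NODE=8
RAM_GB_PER_NODE=64

echo "total vCPU: $((NODES * VCPU_PER_NODE))"
echo "total RAM:  $((NODES * RAM_GB_PER_NODE)) GB"
```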
LLM Integration:
RootCause.ai integrates with your existing LLM infrastructure whenever possible (e.g., OpenAI, Azure OpenAI, Claude, or a private model).
If none is available, RootCause.ai can optionally deploy a local model on GPU nodes, exposed via an OpenAI-compatible API.
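Whichever path is used, the application sees a single OpenAI-compatible endpoint, so a connectivity smoke test looks the same in both cases. The base URL, model name, and API key below are placeholders; `/v1/chat/completions` is the standard route that OpenAI-compatible servers (including LiteLLM) expose:

```shell
# Smoke-test an OpenAI-compatible endpoint.
export LLM_BASE_URL="https://llm.internal"   # placeholder
export LLM_API_KEY="sk-..."                  # placeholder

curl -s "$LLM_BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}]
      }'
```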
Customer Responsibilities:
Provision Kubernetes cluster with SSD-backed storage
Provide API keys/config for external services (databases, LLMs, etc.)
Set up TLS certificates and DNS for user access
Ensure outbound internet for telemetry, registry, and external LLMs
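For the TLS item above, production deployments should use certificates from your CA or an issuer such as cert-manager. For a quick staging setup, a self-signed certificate can be generated and loaded as a Kubernetes TLS secret; the hostname, namespace, and secret name below are placeholders:

```shell
# Self-signed certificate for a staging hostname (placeholder CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=rootcause.example.com" \
  -keyout tls.key -out tls.crt

# Load it where the ingress can reference it (placeholder names;
# use cert-manager or CA-issued certificates in production).
kubectl create secret tls rootcause-tls \
  --namespace rootcause --cert=tls.crt --key=tls.key
```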
RootCause.ai Provides:
Complete Helm chart
Infrastructure guidance and architecture docs
Deployment support via Slack, email, and scheduled check-ins