RootCause Assistant (BETA)
The RootCause Assistant lets you interact with your data, Digital Twins, and analyses through natural language. Ask questions, run simulations, and explore causal relationships—all through conversation.
Unlike typical AI chat tools, the Assistant doesn't try to reason about your data directly. It orchestrates RootCause.ai's analytical engines—causal discovery, counterfactual simulation, data queries—and reports what those engines compute. The AI handles the conversation; the math comes from purpose-built algorithms.
BETA Feature: The RootCause Assistant is in beta. Capabilities are expanding, and some responses may require verification.
(SCREENSHOT: RootCause Assistant chat panel open, showing a conversation with data visualizations)
How the Assistant Works
The Assistant is an orchestration layer, not an analysis engine.
When you ask a question, the Assistant translates your intent into precise tool calls: searching Digital Twins, executing data queries, running causal simulations. It then reports what the engines discovered—it doesn't generate answers from patterns in training data.
Example:
"What happens to churn if we offer free tech support?"
The Assistant:
Identifies the relevant Digital Twin
Invokes the causal simulation engine
Runs the counterfactual against your model
Returns the computed result with the causal pathway quantified
The numbers aren't generated by the AI—they're computed by specialized algorithms and returned with full traceability.
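The dispatch pattern above can be sketched in a few lines. This is a minimal illustration, assuming a simple keyword-based intent classifier; every function and engine name here is hypothetical, not RootCause.ai's actual API:

```python
# Minimal sketch of the orchestration pattern: the conversational layer
# classifies intent and dispatches to a purpose-built engine, then returns
# that engine's computed result verbatim instead of generating numbers itself.
# All function and engine names are hypothetical, for illustration only.

def classify_intent(question: str) -> str:
    """Toy intent classifier; a production assistant would use an LLM."""
    q = question.lower()
    if "what happens" in q or "what if" in q:
        return "counterfactual"
    if "driving" in q or "drives" in q:
        return "explanation"
    return "data_query"

def run_counterfactual(question: str) -> dict:
    """Stand-in for the causal simulation engine (not the real API)."""
    return {"result": "churn -2.1%", "evidence_id": "SIM-0001"}

def run_data_query(question: str) -> dict:
    """Stand-in for the data query engine (not the real API)."""
    return {"result": "<table>", "evidence_id": "QRY-0001"}

ENGINES = {"counterfactual": run_counterfactual, "data_query": run_data_query}

def answer(question: str) -> dict:
    intent = classify_intent(question)
    engine = ENGINES.get(intent, run_data_query)  # dispatch, don't improvise
    return engine(question)  # computed result with an evidence reference

print(answer("What happens to churn if we offer free tech support?"))
```

The key design point is the dispatch step: the AI's job ends at choosing the engine; the numbers come back from the engine along with an evidence reference.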
Why This Matters:
Causal discovery and counterfactual reasoning require algorithms that took years to develop. An LLM can't improvise these computations. By separating the conversational layer (AI) from the analytical layer (causal engines), results are derived from your actual data, not pattern-matched from what a "good" answer might look like.
(SCREENSHOT: Diagram showing the orchestration flow from user question to tool calls to computed results)
Evidence and Traceability
Every result from the Assistant carries an evidence trail. This isn't just a citation—it's full traceability back to source data.
Evidence IDs
When the Assistant returns a number or finding, it includes an evidence reference. Click it to see:
The SQL query or computation that produced the result
The source data that was used
The Digital Twin version and configuration
The simulation parameters (for causal analyses)
Example:
You: What's the predicted impact of a 10% price increase on revenue?
Assistant: Based on the @Revenue_Twin model, a 10% price increase is predicted to reduce revenue by 4.2% (95% CI: 3.1% - 5.3%). [Evidence: SIM-2847]
Click [Evidence: SIM-2847] to see the simulation configuration, input parameters, causal pathways, and source data view.
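Conceptually, an evidence record bundles everything listed above into one reproducible unit. A sketch of what such a record might contain, assuming an illustrative schema (the field names and values are hypothetical, not the actual evidence format):

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceRecord:
    """Illustrative shape of an evidence record (hypothetical schema)."""
    evidence_id: str   # e.g. "SIM-2847"
    computation: str   # the SQL query or simulation that produced the result
    source_data: str   # the source data that was used
    twin_version: str  # Digital Twin version and configuration
    parameters: dict = field(default_factory=dict)  # simulation parameters

record = EvidenceRecord(
    evidence_id="SIM-2847",
    computation="counterfactual: price * 1.10 on Revenue_Twin",
    source_data="Sales_Data snapshot",
    twin_version="Revenue_Twin v3",
    parameters={"intervention": {"price": "+10%"}, "ci": 0.95},
)
print(record.evidence_id)
```

Because the record holds both the inputs and the configuration, re-running the analysis with identical parameters is a matter of replaying the record.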
Why Evidence Matters:
Auditability – Every number can be verified and explained
Reproducibility – Re-run any analysis with the same parameters
Trust – See exactly how results were derived, not just what they are
Debugging – When results seem wrong, trace back to understand why
(SCREENSHOT: Evidence panel showing SQL query, source data, and computation details)
Accessing the Assistant
Click the chat icon in the bottom-right corner of the screen. The Assistant panel slides open, ready for your first question.
The Assistant is available throughout the platform—whether you're viewing a Data View, exploring a Digital Twin, or writing a report.
(SCREENSHOT: Chat icon in bottom-right corner, highlighted)
What You Can Do
Ask Questions About Your Data
Get answers backed by actual queries:
"What's the average order value by region?"
"Show me the distribution of customer tenure"
"How many records are in the sales dataset?"
The Assistant executes queries against your data and returns results as tables, charts, or summary statistics—each with evidence you can verify.
(SCREENSHOT: Conversation showing a data question and resulting chart response with evidence link)
Run Causal Simulations
Invoke the causal engine through conversation:
"What happens if we increase price by 10%?"
"Find the optimal marketing spend to maximize conversions"
"What's driving customer churn?"
"What would we need to change to reduce churn by 20%?"
The Assistant calls the appropriate simulation (intervention, optimization, explanation, counterfactual), runs it against your Digital Twin, and presents computed results with full traceability.
(SCREENSHOT: Conversation showing a simulation request, results, and evidence reference)
Navigate the Platform
Get to where you need to be:
"Show me the customer 360 data view"
"Open the revenue digital twin"
"Take me to last month's board report"
Get Help
Learn the platform through conversation:
"How do I upload a file?"
"What's the difference between intervention and counterfactual?"
"Explain what the causal graph is showing me"
(SCREENSHOT: Conversation showing a help question and step-by-step response)
Context Mentions
Use @ to reference specific objects in your workspace:
@Sales_Data – Reference a dataset
@Customer_360 – Reference a Data View
@Revenue_Twin – Reference a Digital Twin
@Q4_Report – Reference a report
Example:
"Run an intervention on @Revenue_Twin to see what happens if we increase marketing_spend by 20%"
Context mentions eliminate ambiguity about which object to use for a query or simulation.
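Mention resolution can be pictured as simple token extraction from the message. A toy sketch, not the platform's actual parser:

```python
import re

def extract_mentions(message: str) -> list[str]:
    """Pull @Object_Name tokens out of a chat message (toy parser)."""
    return re.findall(r"@([A-Za-z0-9_]+)", message)

msg = ("Run an intervention on @Revenue_Twin to see what happens "
       "if we increase marketing_spend by 20%")
print(extract_mentions(msg))  # ['Revenue_Twin']
```

Each extracted name maps directly to one workspace object, which is why mentioned requests never need disambiguation.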
(SCREENSHOT: Context mention autocomplete dropdown showing available objects)
Rich Responses
The Assistant returns structured results, not just text:
Data Previews – Sample rows from datasets or Data Views
Charts – Bar charts, line graphs, histograms, scatter plots
KPI Cards – Key metrics displayed prominently
Causal Graphs – Relationship visualizations from Digital Twins
Simulation Results – Intervention comparisons, optimization outcomes
Tables – Structured data displays with sorting and filtering
Each widget includes evidence links. Click through to see the underlying data, queries, or simulation configuration.
(SCREENSHOT: Rich response showing chart widget with evidence link)
Chat Sessions
Session History
Conversations are saved. Access previous sessions from the dropdown at the top of the chat panel.
New Session
Click "New Chat" to start fresh. This clears the conversation context.
Context Persistence
Within a session, the Assistant remembers what you've discussed:
"Show me revenue by region" → [Chart displayed]
"Now filter that to just Q4" → [Updated chart]
Across sessions, the Assistant remembers your workspace structure but not previous conversation details.
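The session-scoped memory model can be sketched as follows, assuming a hypothetical session object (illustrative only):

```python
# Toy sketch of session-scoped memory: messages accumulate within a session,
# so follow-ups like "filter that" can resolve against earlier turns, while
# a new session starts with empty conversation context.
class ChatSession:
    def __init__(self):
        self.history: list[str] = []   # cleared when you click "New Chat"

    def say(self, message: str) -> int:
        self.history.append(message)
        return len(self.history)       # turns of context now available

session = ChatSession()
session.say("Show me revenue by region")
session.say("Now filter that to just Q4")  # "that" resolves via history
fresh = ChatSession()                      # New Chat: no prior context
```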
(SCREENSHOT: Session selector dropdown showing saved conversations)
Page Context
The Assistant knows where you are in the platform:
On a Data View page: It can answer questions about that Data View
On a Digital Twin page: It can run simulations on that twin
On a Report page: It can help add content
A context badge shows what the Assistant is using for the current conversation.
(SCREENSHOT: Context badge showing current context)
Tips for Best Results
Be Specific
✓ "What's the average revenue per customer in Q4 2024 from @Sales_Data?"
✗ "Tell me about revenue"
Use Context Mentions
Reference specific objects with @ to avoid ambiguity.
Ask One Thing at a Time
Break complex requests into steps:
"Show me the top customers by revenue" → "Which of those churned?" → "What drove their churn?"
Check the Evidence
For important decisions, click through to the evidence. Verify the query was correct, the data was appropriate, and the computation makes sense.
Example Conversations
Exploring Data:
You: What columns are in @Customer_Data?
Assistant: Here are the columns in Customer_Data: [displays column list with types]
You: Show me the distribution of customer_tenure
Assistant: [displays histogram] [Evidence: QRY-1842]
You: What's the correlation between tenure and revenue?
Assistant: The correlation is 0.34. [displays scatter plot] [Evidence: QRY-1843]
Running Simulations:
You: I want to understand what drives churn in @Churn_Model
Assistant: I'll run an explanation analysis on the "churned" outcome.
Assistant: [displays causal drivers with contribution weights] The top drivers are: customer_support_calls (32%), contract_length (24%), monthly_charges (18%). [Evidence: SIM-3291]
You: What would it take to reduce churn by 10%?
Assistant: Running best action analysis... [displays recommendations] [Evidence: SIM-3292]
(SCREENSHOT: Simulation conversation showing explanation results with evidence)
Limitations
As a beta feature, the Assistant has boundaries:
Scope
The AI handles conversation and orchestration; analysis is done by RootCause engines
Some advanced features may require the full platform UI
Very complex multi-step workflows may need manual intervention
Context
Very long conversations may lose earlier context
The Assistant doesn't remember previous sessions in detail
Data Currency
Recent changes to your data may take a short time before the Assistant reflects them
Privacy and Security
Conversations may be processed by AI services to generate responses
The Assistant accesses your workspace data only with your existing permissions
Handle sensitive data according to your organization's policies
Conversation logs may be retained for product improvement
Feedback
Help us improve the Assistant:
Report Issues: If something doesn't work as expected, let your administrator know
Suggest Features: What would make the Assistant more useful?
Share Examples: Particularly helpful (or unhelpful) interactions help us learn
(SCREENSHOT: Feedback button in the Assistant panel)