Simulations

Simulations are where causal understanding becomes actionable. You've discovered the causal structure of your data—now you can use it to answer the questions that matter: What should we do? What will happen if we change course? Why did that outcome occur?

Unlike traditional analytics that can only describe what happened, simulations let you explore what could happen. They leverage your Digital Twin's causal model to predict the effects of actions you haven't taken yet, find optimal strategies you haven't tried, and explain outcomes you didn't expect.

This is the payoff for all the work that came before: connecting data, building ontology, and discovering causes. Simulations turn that foundation into decisions.

For the technical background, see Digital Twin & Simulations.

(SCREENSHOT: Simulation tab showing simulation type selector and configuration panel)


Choosing the Right Simulation

RootCause.ai offers several simulation types, each designed for different questions:

| Type | Question It Answers | Example |
| --- | --- | --- |
| Intervention | "What happens if we do X?" | What if we raise prices 10%? |
| Optimization | "What's the best action to achieve Y?" | How do we maximize revenue given budget constraints? |
| Best Action | "What's the smallest change to reach target Z?" | What's the minimum discount to convert this customer? |
| Explanation | "Why did outcome W happen?" | What's driving the increase in churn? |
| Prediction | "What will happen to this specific case?" | Will this customer churn? |
| Forecast | "What will happen over time?" | What will revenue look like next quarter? |

Let's explore each one.


Intervention

Interventions test "what-if" scenarios. You specify changes to one or more variables, and the simulation shows how those changes propagate through the causal graph to affect outcomes.

When to Use:

  • Strategic planning: "What if we enter a new market?"

  • Policy decisions: "What if we change our return policy?"

  • Resource allocation: "What if we shift budget from channel A to channel B?"

Example Scenario:

You want to understand the impact of increasing marketing spend by 20%. An intervention simulation shows not just the direct effect on awareness, but the downstream effects on leads, conversions, and ultimately revenue—accounting for all the causal paths in your model.
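To make the propagation concrete, here is a minimal sketch: a toy linear causal chain (spend to awareness to leads to conversions to revenue, with made-up coefficients) and an intervention that scales spend by 20%. The variables and numbers are illustrative assumptions, not RootCause.ai internals.

```python
# A toy linear causal chain. Coefficients are illustrative assumptions.

def simulate(spend):
    """Propagate marketing spend through the chain to revenue."""
    awareness = 2.0 * spend          # spend -> awareness
    leads = 0.5 * awareness          # awareness -> leads
    conversions = 0.1 * leads        # leads -> conversions
    revenue = 500.0 * conversions    # conversions -> revenue
    return revenue

baseline = simulate(100.0)           # the world without the change
intervened = simulate(100.0 * 1.2)   # do(spend = spend * 1.2)
effect = intervened - baseline       # marginal effect of the intervention
```

The real model accounts for every causal path at once; the point here is only that an intervention changes one variable and lets the graph carry the change downstream.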

(SCREENSHOT: Intervention configuration showing variable selector, value input, and condition options)

How to Run:

  1. Go to the Simulate tab

  2. Select Intervention as the simulation type

  3. Add interventions:

    • Choose the variable to change

    • Set the new value (fixed value, percentage change, or segment-specific)

    • Optionally add conditions ("only for premium customers")

  4. Define metrics to measure (SQL queries that calculate KPIs)

  5. Click Run Simulation

Reading Results:

  • Baseline vs. Intervention: Side-by-side comparison showing what would happen without the change versus with it

  • Confidence Intervals: Uncertainty ranges around the predicted effects

  • Effect Breakdown: How the effect flows through different causal paths

(SCREENSHOT: Intervention results showing baseline vs. intervention comparison with confidence intervals)


Optimization

Optimization goes beyond asking "what if" to asking "what's best." Given objectives and constraints, RootCause.ai automatically searches for the combination of inputs that maximizes (or minimizes) your goals.

When to Use:

  • Resource allocation: "How should we split marketing budget across channels?"

  • Pricing strategy: "What prices maximize revenue while maintaining market share?"

  • Operational efficiency: "What settings minimize cost while meeting quality standards?"

Example Scenario:

You want to maximize customer lifetime value while keeping acquisition cost under $50. Optimization searches across combinations of marketing channels, offer types, and targeting criteria to find the strategy that hits your target.
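The search idea can be sketched with a toy grid search: two spend levers, an invented response model, and the $50 acquisition-cost cap from the example above. All formulas are illustrative assumptions; a production optimizer searches far larger spaces far more efficiently.

```python
# Constrained optimization by brute-force grid search over a toy model.
import itertools

def outcomes(email_spend, ads_spend):
    """Toy model: per-customer lifetime value and acquisition cost."""
    ltv = 120.0 + 0.3 * ads_spend + 0.1 * email_spend
    cac = 20.0 + 0.4 * (email_spend + ads_spend)  # rising cost of extra spend
    return ltv, cac

best = None
for email, ads in itertools.product(range(0, 101, 10), repeat=2):
    ltv, cac = outcomes(email, ads)
    if cac <= 50.0 and (best is None or ltv > best[0]):
        best = (ltv, email, ads)
# best now holds the highest-LTV allocation that respects the CAC cap
```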

(SCREENSHOT: Optimization configuration showing objectives, decision variables, and constraints)

How to Run:

  1. Select Optimization as the simulation type

  2. Define Objectives:

    • Variable to optimize (e.g., revenue)

    • Direction (maximize or minimize)

    • Measurement (SQL query)

  3. Specify Decision Variables (what the optimizer can change)

  4. Set Constraints (limits that must be respected)

  5. Click Run Simulation

Reading Results:

  • Optimal Values: Recommended settings for each decision variable

  • Expected Outcome: Predicted result at the optimum

  • Trade-off Analysis: If you have multiple objectives, how they balance

(SCREENSHOT: Optimization results showing optimal values and expected outcomes)


Best Action (Counterfactual)

Best Action answers a specific question: "What's the minimum change needed to achieve a desired outcome?" It's counterfactual reasoning—what would have to be different for reality to be different?

When to Use:

  • Customer conversion: "What's the smallest discount to convert this hesitant customer?"

  • Failure prevention: "What could we have changed to prevent this defect?"

  • Goal attainment: "What's the easiest path to hitting our quarterly target?"

Example Scenario:

A high-value customer is predicted to churn. Best Action analyzes their specific situation and recommends the minimum intervention—perhaps a 15% discount, or a service upgrade—that would flip the prediction.
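The underlying idea, finding the smallest change that flips a prediction, can be sketched with a toy churn model. The scoring function, the 0.5 threshold, and the 5% discount step are all illustrative assumptions.

```python
# Counterfactual "best action" search on a toy churn score.

def churn_score(monthly_fee, discount_pct, support_tickets):
    """Toy churn risk in [0, 1]: higher fee and more tickets raise risk."""
    risk = 0.01 * monthly_fee * (1 - discount_pct / 100) + 0.05 * support_tickets
    return min(max(risk, 0.0), 1.0)

def min_discount_to_retain(fee, tickets, threshold=0.5, step=5):
    """Try increasing discounts until predicted churn drops below threshold."""
    for discount in range(0, 101, step):
        if churn_score(fee, discount, tickets) < threshold:
            return discount
    return None  # no discount alone flips the prediction

needed = min_discount_to_retain(fee=60.0, tickets=2)
```

The real feature searches over many variables at once and respects constraints on what can be changed; the logic is the same minimum-change search.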

(SCREENSHOT: Best Action configuration showing sample record input and target outcome selection)

How to Run:

  1. Select Best Action as the simulation type

  2. Provide Sample Records (specific cases to analyze—customers, transactions, etc.)

  3. Set Targets (the outcome you want to achieve)

  4. Configure Constraints (what can and can't be changed)

  5. Set Max Changes (limit complexity of recommendations)

  6. Click Run Simulation

Reading Results:

  • Recommended Changes: Specific modifications for each sample

  • Expected Outcome: Predicted result if changes are applied

  • Confidence: How certain the model is about the recommendation

(SCREENSHOT: Best Action results showing recommended changes and success probability)


Explanation

Explanation helps you understand why things happen. Instead of predicting forward from causes to effects, it reasons backward from effects to causes.

When to Use:

  • Root cause analysis: "Why did churn increase last quarter?"

  • Driver identification: "What factors most influence customer satisfaction?"

  • Impact assessment: "What are all the downstream effects of this variable?"

Three Modes:

  • Directional: "How does A affect B?" – Traces the specific causal path between two variables

  • Discovery: "What influences B?" – Finds all causes of a specific outcome

  • Impact: "What does A affect?" – Finds all effects of a specific variable

(SCREENSHOT: Explanation mode selector with Directional, Discovery, and Impact options)

Example Scenario:

Sales dropped unexpectedly. Explanation in Discovery mode identifies that the drop traces back to a supplier change that affected product availability, which affected customer satisfaction, which affected repeat purchases.
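Discovery mode is, at its core, a backward walk over the causal graph. A minimal sketch, using an edge list based on the scenario above (the graph is an illustrative assumption):

```python
# Discovery mode as backward graph traversal: collect every upstream
# driver of an outcome. Edge list is an illustrative assumption.

EDGES = {  # cause -> effects
    "supplier_change": ["product_availability"],
    "product_availability": ["customer_satisfaction"],
    "customer_satisfaction": ["repeat_purchases"],
    "repeat_purchases": ["sales"],
    "pricing": ["sales"],
}

def discover_causes(outcome):
    """Return all variables with a causal path into `outcome`."""
    parents = {}
    for cause, effects in EDGES.items():
        for effect in effects:
            parents.setdefault(effect, []).append(cause)
    found, frontier = set(), [outcome]
    while frontier:
        node = frontier.pop()
        for cause in parents.get(node, []):
            if cause not in found:
                found.add(cause)
                frontier.append(cause)
    return found
```

Impact mode is the same traversal run forward along `EDGES`; Directional mode keeps only the paths connecting one chosen source to one chosen target.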

How to Run:

  1. Select Explanation as the simulation type

  2. Choose the mode

  3. Select source and/or target variables (depending on mode)

  4. Optionally add segment filters

  5. Click Run Simulation

Reading Results:

  • Causal Paths: Chains of causation with contribution weights

  • Effect Sizes: How much each driver contributes

  • Segmented Views: How explanations differ across customer segments or time periods

(SCREENSHOT: Explanation results showing causal paths with contribution weights)


Prediction

Prediction generates forecasts for specific cases with uncertainty estimates. Given inputs, what's the likely outcome?

When to Use:

  • Risk assessment: "Will this customer churn?"

  • Lead scoring: "How likely is this prospect to convert?"

  • Outcome estimation: "What revenue can we expect from this deal?"

Example Scenario:

You have a new customer with specific attributes. Prediction estimates their likely lifetime value, along with uncertainty bounds showing the range of plausible outcomes.
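The idea of a point prediction with uncertainty bounds can be sketched by sampling a toy lifetime-value model many times. The value model and its noise terms are illustrative assumptions.

```python
# Prediction with uncertainty: sample a toy LTV model, then report a
# point estimate and a 90% interval. All parameters are invented.
import random
import statistics

def sample_ltv(plan_price, rng):
    """One draw of lifetime value: tenure and upsell vary per draw."""
    tenure_months = max(1.0, rng.gauss(24.0, 6.0))  # uncertain tenure
    upsell = rng.gauss(5.0, 2.0)                    # uncertain extra spend
    return tenure_months * (plan_price + upsell)

rng = random.Random(42)                             # seeded for reproducibility
draws = sorted(sample_ltv(30.0, rng) for _ in range(5000))
point = statistics.median(draws)                    # most likely outcome
lo = draws[int(0.05 * len(draws))]                  # lower 90% bound
hi = draws[int(0.95 * len(draws))]                  # upper 90% bound
```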

How to Run:

  1. Select Prediction as the simulation type

  2. Enter Input Data (values for known variables)

  3. Select Target Variables to predict

  4. Click Run Simulation

Reading Results:

  • Predicted Values: Most likely outcome for each target

  • Confidence Intervals: Range showing uncertainty

  • Distribution: Full probability distribution of outcomes

(SCREENSHOT: Prediction results showing predicted value with confidence interval visualization)


Forecast (Temporal Twins)

Forecast, available only for temporal Digital Twins, projects variables forward in time. It uses causal relationships plus temporal patterns to predict how variables will evolve.

When to Use:

  • Planning: "What will demand look like next quarter?"

  • Trend analysis: "How will this metric evolve over 12 months?"

  • Scenario planning: "What's the range of possible futures?"

Example Scenario:

You want to project revenue for the next six months. Forecast generates a time series prediction with uncertainty bands that widen as you look further into the future.
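Why the bands widen can be sketched with a toy random-walk forecast: simulate many possible paths and take percentiles at each month. The drift and volatility numbers are illustrative assumptions.

```python
# Forecast bands from many simulated paths of a toy random walk.
import random

def simulate_paths(start, months, n_paths, seed=7):
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        value, path = start, []
        for _ in range(months):
            value += rng.gauss(2.0, 5.0)  # monthly drift plus noise
            path.append(value)
        paths.append(path)
    return paths

paths = simulate_paths(start=100.0, months=6, n_paths=2000)
bands = []
for month in range(6):
    col = sorted(p[month] for p in paths)
    bands.append((col[100], col[1000], col[1900]))  # ~5%, median, ~95%

widths = [hi - lo for lo, _, hi in bands]
# for a random walk, band width grows roughly with the square root of
# the horizon, which is why uncertainty fans out over time
```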

(SCREENSHOT: Forecast configuration showing target variables, horizon, and confidence level settings)

How to Run:

  1. Select Forecast as the simulation type

  2. Select Target Variables to forecast

  3. Set Forecast Horizon (number of time periods ahead)

  4. Optionally set a Starting Point

  5. Set Confidence Level for uncertainty bands

  6. Click Run Simulation

Reading Results:

  • Time Series: Projected values over time

  • Confidence Bands: Widening uncertainty as you look further ahead

  • Trend Line: Central forecast trajectory

(SCREENSHOT: Forecast results showing time series chart with confidence bands)


Temporal Intervention (Temporal Twins)

For temporal twins, you can script interventions that happen at specific times and see effects unfold over time.

When to Use:

  • Campaign planning: "What if we run a promotion in July?"

  • Phased rollout: "How does a gradual price increase affect revenue over time?"

  • Policy analysis: "What's the long-term impact of this change?"

Example Scenario:

You're planning a marketing campaign for Q3. Temporal intervention shows not just the immediate lift, but how the effect builds over time and eventually decays.
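A toy sketch of that build-and-decay shape: a campaign runs in months 3 to 5, its lift grows while active and halves each month afterwards, and the cumulative impact is the gap between the two curves summed over time. All shapes and numbers are illustrative assumptions.

```python
# Temporal intervention: baseline vs. campaign revenue over 12 months.

def revenue(month, campaign=False):
    base = 100.0 + 2.0 * month            # gentle baseline trend
    if not campaign:
        return base
    if 3 <= month <= 5:
        lift = 10.0 * (month - 2)         # lift builds while campaign runs
    elif month > 5:
        lift = 30.0 * 0.5 ** (month - 5)  # lift halves each month after
    else:
        lift = 0.0
    return base + lift

months = range(1, 13)
baseline = [revenue(m) for m in months]
intervened = [revenue(m, campaign=True) for m in months]
cumulative_impact = sum(i - b for i, b in zip(intervened, baseline))
```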

How to Run:

  1. Select Intervention (for temporal twins, this becomes temporal intervention)

  2. Define interventions with time ranges (when does each change start and end?)

  3. Set metrics with temporal measurement

  4. Click Run Simulation

Reading Results:

  • Timeline: Baseline vs. intervention over time

  • Effect Evolution: How the impact builds, peaks, and decays

  • Cumulative Impact: Total effect across the time period

(SCREENSHOT: Temporal intervention results showing timeline with baseline and intervention curves)


Natural Language Queries

Don't want to click through configuration screens? For Intervention and Optimization, you can describe what you want in plain language:

"What happens if we increase price by 10% for our premium tier?"

"Find the marketing mix that maximizes conversions with a $100K budget"

Type your query in the text field, and RootCause.ai translates it into a structured simulation specification. Review the interpretation, adjust if needed, and run.
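For illustration only, here is the kind of structured specification the first query above might translate into. The field names below are invented for this sketch and are not RootCause.ai's actual format.

```python
# Purely hypothetical: a natural-language query and the sort of
# structured intervention spec it could be translated into.

query = "What happens if we increase price by 10% for our premium tier?"

spec = {
    "type": "intervention",
    "interventions": [
        {"variable": "price", "change": {"percent": 10}},
    ],
    "conditions": [
        {"variable": "tier", "equals": "premium"},
    ],
}
```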

(SCREENSHOT: Natural language input field with generated simulation specification below)


Running Across Multiple Versions

As your Digital Twin evolves, you may want to compare how different model versions affect your conclusions.

  1. Select multiple versions in the Versions dropdown

  2. Run the simulation

  3. Results show side-by-side comparison across versions

This is valuable for model validation ("do results stay consistent?") and understanding model evolution ("did that refinement change our conclusions?").

(SCREENSHOT: Version selector with multiple versions selected and comparison results)


Interpreting Results

Confidence Intervals

Every result comes with uncertainty estimates. Wider intervals mean more uncertainty—the model is less sure. Narrow intervals indicate higher confidence. Consider these ranges when making decisions.

Monte Carlo Sampling

Behind the scenes, simulations run thousands of scenarios with different random samples from the model's distributions. This produces full distributions of outcomes, not just point estimates. You see the range of what could happen, not just a single prediction.
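A minimal sketch of the idea: draw uncertain inputs many times, compute the outcome for each draw, and read the point estimate and interval off the resulting distribution. The toy revenue model and its parameter uncertainty are illustrative assumptions.

```python
# Monte Carlo: a distribution of outcomes instead of a point estimate.
import random

rng = random.Random(0)
outcomes = []
for _ in range(10_000):
    conversion_rate = rng.gauss(0.04, 0.01)  # uncertain model parameter
    visitors = rng.gauss(50_000, 5_000)      # uncertain input
    outcomes.append(max(conversion_rate, 0.0) * max(visitors, 0.0) * 80.0)

outcomes.sort()
point = outcomes[len(outcomes) // 2]  # median outcome
lo, hi = outcomes[500], outcomes[9500]  # 90% interval from the samples
```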

Baseline Comparison

Intervention and optimization results always compare against a baseline (the world without your change). This shows the marginal effect of your action—what you'd gain or lose by acting.


Best Practices

Start Simple

Test one intervention at a time before combining multiple changes. Complex scenarios are harder to interpret and validate.

Use Realistic Values

Extreme interventions ("what if we increase price 1000%?") push the model outside its training distribution. Results for extreme scenarios are less reliable. Stay within realistic ranges.

Consider Constraints

The "optimal" solution that ignores real-world constraints isn't useful. Account for budget limits, operational constraints, and strategic boundaries in your simulation design.

Validate with Experts

Simulation results should make sense to domain experts. Share findings before acting—expert sanity checks catch model errors that metrics miss.

Document Scenarios

When you run an important simulation, save it. Note what you tested, why, and what you concluded. This creates an audit trail and enables reproducibility.


Next Steps

Simulations generate insights; communicating them is the next step:

  • Create a Report to share your simulation findings with stakeholders

  • Use the RootCause Assistant to explore scenarios through conversation
