# Simulation Tab

The Simulation tab is where Digital Twins become actionable. Once you have discovered causal relationships and validated model quality, you can ask "what if?" and get answers backed by causal evidence.

This tab provides access to all simulation types: interventions, optimizations, best actions (counterfactuals), explanations, predictions, and forecasts. Each serves a different analytical purpose, but all of them leverage the same underlying causal model.

For detailed documentation of each simulation type, see the [Simulations guide](https://docs.rootcause.ai/user-guide/digital-twin/simulations).

(SCREENSHOT: Simulation tab overview showing simulation type selector and recent runs)

***

### Simulation Types Overview

| Type             | Question It Answers                     | Best For                                        |
| ---------------- | --------------------------------------- | ----------------------------------------------- |
| **Intervention** | "What happens if we change X?"          | Testing policy changes, what-if scenarios       |
| **Optimization** | "What settings achieve goal Y?"         | Finding optimal strategies, resource allocation |
| **Best Action**  | "What's the minimum change to reach Z?" | Counterfactual reasoning, targeted actions      |
| **Explanation**  | "Why did outcome W happen?"             | Root cause analysis, driver identification      |
| **Prediction**   | "What will happen to this case?"        | Risk assessment, lead scoring                   |
| **Forecast**     | "What will happen over time?"           | Trend projection, planning (temporal only)      |

(SCREENSHOT: Simulation type selector cards with icons)

***

### Creating a New Simulation

1. Click **New Simulation**
2. Select the simulation type
3. Configure parameters (varies by type)
4. Click **Run**

Each type has a dedicated configuration form. See below for type-specific details.

(SCREENSHOT: New simulation dialog with type selection)

***

### Intervention Simulations

Test what happens when you change one or more variables.

**Configuration:**

1. **Select variables to intervene on** – Which variables are you changing?
2. **Set intervention values** – What are the new values?
3. **Define conditions** (optional) – Apply only to certain segments
4. **Define metrics** – SQL queries to measure outcomes

**Example:**

> "What happens if we increase marketing\_spend by 20%?"
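In configuration terms, the example above might look something like the sketch below. The field names and the SQL metric are illustrative only, not RootCause.ai's actual configuration schema:

```python
# Hypothetical intervention spec -- field names and the SQL metric are
# illustrative, not RootCause.ai's actual schema.
baseline_spend = 50_000                          # assumed current value

intervention = {
    "variables": {"marketing_spend": baseline_spend * 1.20},  # +20%
    "conditions": {"segment": "all"},            # optional: restrict scope
    "metrics": {
        # Outcomes are measured with SQL queries you define:
        "total_conversions": "SELECT SUM(conversions) FROM outcomes",
    },
}

print(intervention["variables"]["marketing_spend"])
```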

**Results:**

* Baseline vs. intervention comparison
* Effect on each metric
* Confidence intervals
* Monte Carlo distributions

(SCREENSHOT: Intervention configuration form and results)

***

### Optimization Simulations

Find the best variable settings to achieve objectives.

**Configuration:**

1. **Define objectives** – What to maximize or minimize
2. **Select decision variables** – What can the optimizer change?
3. **Set constraints** – Limits that must be respected

**Example:**

> "Find the marketing mix that maximizes conversions with a $100K budget"
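To build intuition for what the optimizer does with the example above, here is a toy stand-in: a grid search over the channel split that maximizes a made-up diminishing-returns response, subject to the $100K budget. The response curve is illustrative, not a real model:

```python
import math

# Toy optimizer sketch: grid-search the mix of two channels that
# maximizes a made-up diminishing-returns response under a budget cap.
def conversions(search_spend, social_spend):
    return 40 * math.sqrt(search_spend) + 25 * math.sqrt(social_spend)

BUDGET = 100_000
candidates = ((s, BUDGET - s) for s in range(0, BUDGET + 1, 1_000))
best = max(candidates, key=lambda mix: conversions(*mix))

print("optimal mix:", best, "expected conversions:", round(conversions(*best)))
```

The real optimizer handles many variables and constraint types at once; the point here is only that it searches for the feasible settings with the best expected outcome.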

**Results:**

* Optimal values for each decision variable
* Expected outcome at optimum
* Constraint satisfaction status

(SCREENSHOT: Optimization configuration and results showing optimal settings)

***

### Best Action (Counterfactual)

Find the minimum changes needed to achieve a desired outcome.

**Configuration:**

1. **Provide sample records** – Specific cases to analyze
2. **Set target outcome** – What result do you want?
3. **Configure constraints** – What can/can't change?
4. **Set max changes** – Limit recommendation complexity

**Example:**

> "What's the smallest discount needed to convert this hesitant customer?"
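Conceptually, the search behind the example above looks like the following toy sketch: scan discount levels for the smallest one that pushes a made-up conversion score over the target. The scoring rule is invented for illustration, not the actual causal model:

```python
# Toy counterfactual search: find the smallest discount (in 1% steps)
# that meets the target. The scoring rule below is made up.
def conversion_probability(discount_pct):
    return min(1.0, 0.35 + 0.03 * discount_pct)

TARGET = 0.60
minimal_discount = next(
    d for d in range(0, 31) if conversion_probability(d) >= TARGET
)
print(f"smallest sufficient discount: {minimal_discount}%")
```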

**Results:**

* Recommended changes for each sample
* Expected success probability
* Comparison of before/after

(SCREENSHOT: Best Action configuration with sample records and recommendations)

***

### Explanation Simulations

Understand causal drivers behind outcomes.

**Three Modes:**

* **Directional** – How does A affect B specifically?
* **Discovery** – What influences outcome B?
* **Impact** – What does variable A affect?
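One way to keep the modes straight is by which variables you must supply. The mapping below is a conceptual sketch, not API syntax:

```python
# Conceptual sketch only -- not RootCause.ai syntax. The three modes
# differ in which end(s) of the causal question you pin down:
EXPLANATION_MODES = {
    "directional": {"source": "A", "target": "B"},   # how does A affect B?
    "discovery":   {"source": None, "target": "B"},  # what influences B?
    "impact":      {"source": "A", "target": None},  # what does A affect?
}

for mode, variables in EXPLANATION_MODES.items():
    needed = [k for k, v in variables.items() if v is not None]
    print(mode, "requires:", ", ".join(needed))
```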

**Configuration:**

1. Select mode
2. Choose source and/or target variables
3. Optionally filter by segment

**Results:**

* Causal paths with contribution weights
* Effect sizes for each driver
* Segmented breakdown if applicable

(SCREENSHOT: Explanation results showing causal driver breakdown)

***

### Prediction Simulations

Generate predictions for specific cases.

**Configuration:**

1. **Enter input values** – Known variable values
2. **Select targets** – What to predict

**Results:**

* Predicted value for each target
* Confidence intervals
* Full probability distribution

(SCREENSHOT: Prediction input form and results with confidence intervals)

***

### Forecast Simulations (Temporal Only)

Project variables forward in time.

**Configuration:**

1. **Select target variables** – What to forecast
2. **Set forecast horizon** – How far ahead
3. **Choose starting point** – When to begin forecast
4. **Set confidence level** – Width of uncertainty bands

**Results:**

* Time series projections
* Confidence bands
* Trend visualization
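The shape of these results can be sketched with a toy drift model: project the recent trend forward and widen the uncertainty band with the horizon. The history values and the simple model are made up; RootCause.ai's actual forecasting is more sophisticated:

```python
import statistics

# Toy forecast sketch: linear drift plus a ~95% band that widens with
# the horizon. History and model are illustrative only.
history = [100, 104, 107, 111, 115, 118]         # past observations
diffs = [b - a for a, b in zip(history, history[1:])]
step = statistics.mean(diffs)                    # average per-period change
sigma = statistics.stdev(diffs)                  # volatility of changes

horizon = 4
forecast = []
for h in range(1, horizon + 1):
    point = history[-1] + step * h
    band = 1.96 * sigma * h ** 0.5               # band widens with h
    forecast.append((round(point, 1), round(point - band, 1), round(point + band, 1)))

for h, (point, lo, hi) in enumerate(forecast, start=1):
    print(f"t+{h}: {point} [{lo}, {hi}]")
```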

(SCREENSHOT: Forecast results showing time series with confidence bands)

***

### Natural Language Input

For Intervention and Optimization simulations, you can describe scenarios in plain language:

> "What happens if we raise prices by 10% for premium customers?"

RootCause.ai translates this into a structured simulation. Review the generated configuration and adjust if needed.

(SCREENSHOT: Natural language input field with generated configuration below)

***

### Simulation Runs History

The tab maintains a history of all simulation runs:

**Runs List**

Shows all past simulations with:

* Type
* Configuration summary
* Run date
* Status

**Viewing Past Results**

Click any run to see its full results without re-running.

**Comparing Runs**

Select multiple runs to compare results side-by-side.

(SCREENSHOT: Runs list showing simulation history with status indicators)

***

### Running Across Versions

Test simulations on multiple Digital Twin versions:

1. Select versions from the dropdown
2. Run the simulation
3. Results show side-by-side by version

Useful for:

* Validating that model changes don't alter conclusions
* Comparing predictions across model iterations
* Testing model stability

(SCREENSHOT: Version selector for cross-version simulation)

***

### Interpreting Results

**Confidence Intervals**

All results include uncertainty estimates:

* Narrow intervals = high confidence
* Wide intervals = more uncertainty
* Consider intervals when making decisions

**Monte Carlo Sampling**

Simulations run thousands of scenarios internally:

* Results are distributions, not point estimates
* You see the range of what could happen
* Captures model uncertainty
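The idea can be illustrated with a toy example: draw many scenario outcomes, then report a mean plus a percentile interval instead of a single number. The outcome model here (a normal distribution around 1,200) is made up for illustration:

```python
import random
import statistics

# Toy Monte Carlo sketch: many simulated outcomes summarized as a mean
# and a 95% percentile interval. The distribution is illustrative only.
random.seed(0)                                   # reproducible example
samples = sorted(random.gauss(1200, 150) for _ in range(10_000))

mean = statistics.mean(samples)
lo = samples[int(0.025 * len(samples))]          # 2.5th percentile
hi = samples[int(0.975 * len(samples))]          # 97.5th percentile
print(f"expected outcome ~ {mean:.0f}, 95% interval ~ [{lo:.0f}, {hi:.0f}]")
```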

**Baseline Comparison**

Intervention and optimization results compare against baseline:

* Shows marginal effect of your action
* Helps quantify the value of intervention
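The arithmetic behind the comparison is simple; the numbers below are made up:

```python
# Baseline comparison sketch: the reported effect is the intervention
# outcome minus the baseline outcome. Values are illustrative.
baseline_conversions = 1_200       # expected outcome with no change
intervention_conversions = 1_380   # expected outcome under the intervention

marginal_effect = intervention_conversions - baseline_conversions
lift_pct = 100 * marginal_effect / baseline_conversions
print(f"marginal effect: +{marginal_effect} conversions ({lift_pct:.1f}% lift)")
```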

***

### Best Practices

**Start Simple**

Test single interventions before combining multiple changes.

**Use Realistic Values**

Stay within your data's observed range; interventions that extrapolate far beyond it may produce unreliable results.

**Verify with Domain Experts**

Share results before acting. Do they make business sense?

**Document Important Simulations**

Save or export results for audit trail and reproducibility.

***

### Next Steps

* Share findings in [Reports](https://gitlab.com/perceptura/gitbooks-docs/-/blob/main/user-guide/digital-twin/reports.md)
* Use [RootCause Assistant](https://gitlab.com/perceptura/gitbooks-docs/-/blob/main/user-guide/digital-twin/rootcause-assistant.md) to explore via conversation
* Refine model in [Config Tab](https://docs.rootcause.ai/user-guide/digital-twin/tabs/config-tab) if results suggest issues
