Model Comparison

Causal models evolve. You might train a Digital Twin on January data, then retrain on February data. Or experiment with different configurations—omitting certain variables, adding domain knowledge, changing temporal settings.

Model Comparison lets you put two Digital Twin versions side by side and understand exactly what changed. Not just "the graph looks different," but precisely which relationships appeared, disappeared, or reversed direction—and how the underlying probability distributions shifted.

This is essential for model governance. Before promoting a new model version to production, you need to understand its differences from the current one. Model Comparison provides the evidence for that decision.

(SCREENSHOT: Model comparison page showing two graphs side by side with highlighted differences)


Accessing Model Comparison

Navigate to Intelligence → Compare Models from the workspace sidebar.

Alternatively, when viewing a Digital Twin, you can initiate comparison from the version selector by choosing two versions to compare.

(SCREENSHOT: Intelligence sidebar showing Compare Models option)


Selecting Models

The comparison tool lets you compare any two Digital Twin versions—even from different Digital Twins.

Selecting Model 1

  1. Choose a Digital Twin from the dropdown

  2. Select a version of that twin

Selecting Model 2

  1. Choose a Digital Twin (can be the same or different)

  2. Select a version

Common comparison scenarios:

  • Same twin, different versions: Track how the model evolved over retraining

  • Same data, different configurations: Compare the effect of constraints or excluded variables

  • Different time periods: See how causal relationships changed between datasets

(SCREENSHOT: Model selection panel with two dropdowns for twin and version selection)


View Modes

Separate View

Shows both causal graphs in a split panel—Model 1 above, Model 2 below. Each graph is fully interactive: you can zoom, pan, select nodes, and explore relationships independently.

The divider between panels is draggable. Adjust the split to focus on the model that needs more attention.

Diff View

Merges both graphs into a single visualization with color-coded differences:

Color    Meaning
Blue     Nodes/edges unique to Model 1
Purple   Nodes/edges unique to Model 2
Gray     Nodes/edges present in both models
Orange   Edges with reversed direction between models

Diff view is powerful for quickly spotting structural changes. If the graphs are mostly similar, differences pop out immediately.

(SCREENSHOT: Diff view showing merged graph with color-coded differences)
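The logic behind these four categories can be sketched with plain edge sets. This is an illustrative sketch only: the edge lists and variable names are made up, and the product computes the diff internally.

```python
def diff_edges(model1_edges, model2_edges):
    """Classify directed edges into the four Diff View categories."""
    e1, e2 = set(model1_edges), set(model2_edges)
    common = e1 & e2                                        # gray
    reversed_ = {(a, b) for (a, b) in e1
                 if (b, a) in e2 and (a, b) not in e2}      # orange
    only_1 = e1 - e2 - reversed_                            # blue
    only_2 = e2 - e1 - {(b, a) for (a, b) in reversed_}     # purple
    return {"common": common, "reversed": reversed_,
            "unique_to_1": only_1, "unique_to_2": only_2}

# Hypothetical causal graphs for two model versions
v1 = {("price", "demand"), ("ads", "demand"), ("season", "price")}
v2 = {("price", "demand"), ("demand", "ads"), ("promo", "demand")}
print(diff_edges(v1, v2))
```

Note that a reversed edge is deliberately excluded from both "unique" groups: it is the same pair of variables, so reporting it twice would overstate the structural change.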


Comparison Sidebar

The sidebar provides detailed analysis tools organized in tabs:

Model Selection

Configure which models you're comparing. Shows metadata for each selected version: name, creation date, version number, training status.

Variable Mapping

When comparing models with different variable sets, this tab shows how variables align between them. Useful when comparing twins built on different (but overlapping) Data Views.
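Conceptually, variable mapping is a set intersection, optionally preceded by a rename step for variables that represent the same quantity under different names. A minimal sketch (all variable names and the alias map are hypothetical):

```python
twin1_vars = {"revenue", "ad_spend", "region", "churn"}
twin2_vars = {"revenue", "marketing_spend", "region", "tenure"}

# Manual aliases for variables that mean the same thing under
# different names (an assumption made for this example).
aliases = {"marketing_spend": "ad_spend"}

normalized_2 = {aliases.get(v, v) for v in twin2_vars}
comparable = twin1_vars & normalized_2   # variables you can compare
only_in_1 = twin1_vars - normalized_2    # no counterpart in Model 2
only_in_2 = normalized_2 - twin1_vars    # no counterpart in Model 1
print(comparable, only_in_1, only_in_2)
```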

Causal Diffs

Lists all causal relationships grouped by:

  • Common: Relationships present in both models (same direction)

  • Unique to Model 1: Relationships only in the first model

  • Unique to Model 2: Relationships only in the second model

Each relationship shows:

  • Source → Target

  • Relationship strength

Summary statistics show counts for each group at a glance.

(SCREENSHOT: Causal Diffs tab showing common, unique-to-1, and unique-to-2 relationship lists)

Comparison Graphs

Additional visualizations for comparing model structure:

  • Adjacency matrix comparisons

  • Degree distribution comparisons

  • Centrality metrics side by side

Parameter Comparison

Deep dive into how specific relationships differ between models. Not just "does this edge exist?" but "how does the conditional probability distribution differ?"

(SCREENSHOT: Parameter Comparison tab showing distribution charts for a selected relationship)


Parameter Comparison

This is where Model Comparison gets powerful. Beyond structural differences, you can examine how the learned parameters differ.

Selecting What to Compare

Choose either:

  • Variable: Compare the marginal distribution P(X) between models

  • Relationship: Compare the conditional distribution P(Y|X) between models
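For a discrete variable, comparing marginals amounts to comparing two probability vectors. One common summary is total variation distance, sketched below with hypothetical numbers (a real comparison would use the distributions learned by each twin):

```python
import numpy as np

# Hypothetical marginal P(churn_risk) from each model version,
# over the categories low / medium / high.
p_model1 = np.array([0.70, 0.20, 0.10])
p_model2 = np.array([0.55, 0.25, 0.20])

# Total variation distance: half the L1 distance between the vectors.
# 0 means identical distributions; 1 means completely disjoint.
tv = 0.5 * np.abs(p_model1 - p_model2).sum()
print(f"TV distance: {tv:.3f}")
```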

Common Variables/Relationships

The tool shows only variables and relationships that exist in both models—these are the meaningful comparisons. If a variable exists in only one model, there's nothing to compare.

View Modes

  • Split: Shows Model 1's distribution above Model 2's

  • Overlay: Combines both distributions on one chart for direct comparison

Direction Changes

When a relationship exists in both models but with opposite direction (A → B in Model 1, B → A in Model 2), the tool detects this and shows a warning. It automatically adjusts axes so you can still compare the functional relationship.

(SCREENSHOT: Split view showing probability distributions for the same variable in two models)


Interpreting Differences

Structural Changes

  • New edges: The model discovered a relationship that wasn't present before. This could indicate new patterns in the data or a different causal structure in the updated dataset.

  • Removed edges: A previously discovered relationship is no longer supported. This may indicate data drift, or a spurious correlation that didn't replicate.

  • Direction changes: The model now believes causality flows the opposite way. This is significant—investigate why.

Parameter Changes

Even when structure is identical, parameters can differ:

  • Distribution shifts: The shape of P(X) changed—maybe the variable's behavior shifted

  • Relationship strength changes: The conditional influence became stronger or weaker

  • Functional form changes: The nature of the relationship changed (linear to non-linear, monotonic to non-monotonic)

Questions to Ask

  1. Are structural changes expected given the data differences?

  2. Do direction changes make domain sense?

  3. Are parameter shifts within expected variance or statistically significant?

  4. Would these changes affect downstream decisions?


Cross-Twin Comparison

Comparing versions within the same Digital Twin tracks evolution over time. But comparing across different twins answers different questions:

Same Data, Different Configuration

"What happens if I exclude this variable?" "How does adding this constraint change the graph?"

Build two twins from the same Data View with different settings, then compare.

Different Segments

Build twins for different customer segments, regions, or time periods. Comparison reveals how causal mechanisms differ across populations.

A/B Test Analysis

If you ran an experiment with treatment and control groups, build twins for each group. Comparison shows how the treatment changed causal relationships.


Model Governance Workflow

A typical governance workflow using Model Comparison:

  1. Baseline Model: Your current production Digital Twin (v1)

  2. Candidate Model: Retrained on new data (v2)

  3. Compare: Open Model Comparison, select v1 and v2

  4. Review Structure: Check Causal Diffs for unexpected changes

  5. Review Parameters: For key relationships, verify parameter stability

  6. Document: Export comparison summary for governance records

  7. Decide: Promote v2 to production or investigate anomalies

(SCREENSHOT: Comparison summary with structural and parameter change counts)


Tips

Start with Diff View

Diff view gives the fastest overview. If graphs are similar, you'll see mostly gray. If there are major changes, colors pop immediately.

Focus on Key Relationships

You don't need to review every edge. Focus on relationships that matter for your use case—the ones that drive decisions.

Check Both Structure AND Parameters

Two models can have identical structure but very different parameters. Don't stop at the graph comparison.

Document Your Findings

Model comparison reveals insights about your data and domain. Document what you learn—it's valuable for future model development.


For understanding model quality metrics before comparison, see Evaluation Tab.

For understanding the causal graph structure being compared, see Relationships Tab.
