The simulation uses an empirically grounded model: output emerges from per-task speedups (not a single global multiplier), skill-biased effectiveness, and validation overhead.
See Mathematical Model → and Animation Concept →
| Task | AI Speedup | Source |
|---|---|---|
| Boilerplate code | 1.55x | Peng et al. RCT: 55.8% faster on simple tasks |
| Writing drafts | 1.40x | Noy & Zhang: +40% for below-median performers |
| Debugging | 1.30x | Estimate (no RCT data) |
| Custom code | 1.00x | METR: 0% for experienced devs |
| Validation | 0.90x | Modeled: hallucination-checking overhead (net slowdown) |
AI helps junior researchers far more than senior ones: low-skill researchers get the full per-task speedup, while experienced researchers see roughly 0% gain.
Instead of the 3-5x often claimed, the model yields an overall output factor of roughly 1.5x, a more honest figure grounded in published studies.
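One way to combine per-task speedups into a single output factor is a time-weighted harmonic mean, with skill interpolating each speedup toward 1.0. The sketch below illustrates this under assumed task time shares and a linear skill rule; these are not the simulation's actual parameters, only an illustration of the mechanics:

```python
# Hypothetical sketch: combine per-task speedups into one output factor.
# TIME_SHARE and the skill-interpolation rule are illustrative assumptions.

TASK_SPEEDUPS = {          # per-task factors from the table above
    "boilerplate": 1.55,
    "drafts": 1.40,
    "debugging": 1.30,
    "custom_code": 1.00,
    "validation": 0.90,
}

TIME_SHARE = {             # assumed fraction of total time per task
    "boilerplate": 0.15,
    "drafts": 0.15,
    "debugging": 0.20,
    "custom_code": 0.30,
    "validation": 0.20,
}

def effective_speedup(base: float, skill: float) -> float:
    """Interpolate toward 1.0 as skill rises (skill in [0, 1]):
    juniors (skill=0) get the full speedup, experts (skill=1) get none."""
    return 1.0 + (base - 1.0) * (1.0 - skill)

def output_factor(skill: float) -> float:
    """Speedups apply to time spent, so total time is the sum of
    share / speedup per task; the output factor is its reciprocal."""
    total_time = sum(
        share / effective_speedup(TASK_SPEEDUPS[task], skill)
        for task, share in TIME_SHARE.items()
    )
    return 1.0 / total_time

print(output_factor(0.0))  # junior researcher: > 1.0
print(output_factor(1.0))  # expert researcher: ~1.0, per METR
```

Because the mean is harmonic rather than arithmetic, the slowest tasks (custom code, validation) dominate, which is why the aggregate factor stays well below the headline per-task numbers.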
The animation is a stochastic Petri net: nodes represent research states, events fire probabilistically, tokens carry confidence that increases as work is validated. See Animation Concept → for full details.
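The Petri-net mechanics described above can be sketched in a few lines. The place names, firing probabilities, and confidence gains here are illustrative assumptions (the actual states and rates are in Animation Concept →); the sketch only shows the core loop: enabled transitions fire probabilistically, moving tokens whose confidence rises as work is validated:

```python
# Minimal stochastic Petri net sketch; states, rates, and the
# confidence-update rule are illustrative assumptions.
import random

random.seed(42)  # reproducible run

# Transitions: (from_place, to_place, firing_probability, confidence_gain)
TRANSITIONS = [
    ("idea", "draft", 0.6, 0.0),
    ("draft", "validated", 0.4, 0.3),    # validation raises confidence
    ("validated", "published", 0.5, 0.2),
]

def step(tokens):
    """One tick: each transition with a token in its input place fires
    with its probability, moving one token and raising its confidence."""
    for src, dst, prob, gain in TRANSITIONS:
        movable = [t for t in tokens if t["place"] == src]
        if movable and random.random() < prob:
            token = movable[0]
            token["place"] = dst
            token["confidence"] = min(1.0, token["confidence"] + gain)
    return tokens

tokens = [{"place": "idea", "confidence": 0.1}]
for _ in range(20):
    step(tokens)
print(tokens[0])  # token has advanced; confidence never exceeds 1.0
```

In the real animation each firing would drive a visual event; this loop is only the state-update skeleton behind it.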
Vendor studies claim 10x-55x speedup from AI. But independent research tells a different story:
| Study | Finding | Confidence |
|---|---|---|
| METR 2025 | 0% speedup for experienced devs | HIGH |
| Peng et al. 2023 | 55.8% faster on simple tasks only | HIGH |
| Noy & Zhang 2023 | +40% for junior, ~0% for skilled | HIGH |