---
name: analyzing-systems-through-prediction-failures
description: Analyzes complex systems by identifying where mental models break down and using those failures to reveal hidden structures. Use when encountering systems that resist conventional analysis or when existing models produce unexpected results.
---
Analyzing Systems Through Prediction Failures
- Run your current model forward: Make specific predictions about system behavior
- Notice breakage points: Where does reality diverge from expectation?
- Interrogate the failure: What does this breakage reveal about hidden structure?
Example: Predicting a company will scale linearly → observing sudden plateau → discovering hidden coordination costs that emerge at specific team sizes.
Phase 1: Model Forward-Running
- Articulate your current mental model explicitly
- Generate specific, testable predictions
- Run the model across different scenarios/timeframes
Phase 2: Failure Detection
- Compare predictions with observed outcomes
- Identify points of divergence (not just wrong predictions, but unexpected patterns)
- Map the topology of failures (isolated vs systemic)
Phase 3: Structural Revelation
- Ask: What must be true for this failure to occur?
- Look for the minimal generative mechanism that explains multiple failure modes
- Seek structural analogies in other domains
Phase 4: Model Recalibration
- Integrate discovered mechanisms into refined model
- Test new model against original failures
- Hold paradoxes without forcing resolution
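The four phases above can be sketched as a small loop. This is a minimal illustration, not a prescribed implementation: the `Prediction` and `Failure` records, the scenario names, and the linear-scaling example are all hypothetical, chosen to mirror the team-size example from the overview.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """A specific, testable claim the current model makes (Phase 1)."""
    scenario: str
    expected: float

@dataclass
class Failure:
    """A divergence between prediction and observation (Phase 2)."""
    scenario: str
    expected: float
    observed: float

    @property
    def error(self) -> float:
        return self.observed - self.expected

def detect_failures(predictions, observations, tolerance=0.1):
    """Phase 2: compare predictions with observed outcomes.

    A divergence beyond the relative tolerance is recorded as a
    Failure rather than dismissed as noise.
    """
    failures = []
    for p in predictions:
        obs = observations[p.scenario]
        if abs(obs - p.expected) > tolerance * max(abs(p.expected), 1e-9):
            failures.append(Failure(p.scenario, p.expected, obs))
    return failures

# Phase 1: run the model forward under a linear-scaling assumption
team_sizes = [5, 10, 20, 40]
predictions = [Prediction(f"team_{n}", expected=n * 10.0) for n in team_sizes]

# Observed output plateaus at larger sizes -- a Phase 3 candidate for
# hidden coordination costs (illustrative numbers only)
observations = {"team_5": 50.0, "team_10": 95.0, "team_20": 130.0, "team_40": 140.0}

failures = detect_failures(predictions, observations)

# Phase 3 begins by asking what single mechanism generates ALL of these errors
for f in failures:
    print(f"{f.scenario}: expected {f.expected}, observed {f.observed}")
```

Note that the loop surfaces the *pattern* of divergence (errors grow super-linearly with team size), which is the input to Phase 3, not the answer itself.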
Example 1:
- Input: Social media platform experiencing unexpected user dropoff despite positive engagement metrics
- Process: Model predicted sustained growth → observed cliff-like decline → revealed hidden attention saturation dynamics → found analogies in ecosystem carrying capacity
- Output: Recalibrated model incorporating attention as a finite resource with non-linear exhaustion patterns
Example 2:
- Input: AI system performing well on benchmarks but failing in deployment
- Process: Model predicted smooth transfer → observed capability gaps → discovered that the training distribution matched the evaluation context but not the deployment context → found analogy to biological fitness landscapes
- Output: Framework for predicting deployment failures based on distributional geometry
Best Practices
- Embrace illegibility: Don't force premature clarity on genuinely complex phenomena
- Hunt minimal kernels: Look for the smallest set of mechanisms that generate the most observations
- Cross-domain pattern matching: Actively seek structural analogies in unrelated fields
- Document failure modes: Keep a catalog of how your models break—patterns emerge
- Resist summary thinking: The goal is cognitive recalibration, not information compression
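One way to keep the catalog the practices above recommend is a small structured log that can be tallied for recurring patterns. The `FailureRecord` fields and the sample entries are illustrative assumptions, not part of the skill itself:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureRecord:
    """One entry in a running catalog of model breakdowns."""
    system: str      # what was being modeled
    assumption: str  # which modeling assumption broke
    pattern: str     # shape of the divergence (plateau, cliff, oscillation, ...)

# Hypothetical catalog entries for illustration
catalog = [
    FailureRecord("org scaling", "linear throughput", "plateau"),
    FailureRecord("social platform", "sustained growth", "cliff"),
    FailureRecord("ML deployment", "benchmark transfer", "cliff"),
    FailureRecord("API latency", "proportional load response", "plateau"),
]

# Patterns emerge once failures are tallied rather than explained away
pattern_counts = Counter(r.pattern for r in catalog)
print(pattern_counts.most_common())
```

Tallying by divergence shape rather than by system is a deliberate choice here: it is what lets structurally similar failures from unrelated domains land in the same bucket.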
Common Pitfalls
- Explaining away failures: Treating breakdowns as noise rather than signal about hidden structure
- Premature resolution: Forcing paradoxes into consistent frameworks before understanding their generative role
- Surface-level analogies: Mapping superficial similarities instead of deep structural patterns
- Model worship: Defending existing models instead of using failures as upgrade opportunities
- Linear thinking: Assuming system responses scale proportionally with inputs