What is the source of “Can experts reliably predict the outcome, or is the outcome only coherent in hindsight?”, is it useful given that there is no stopping rule, and can this principle be used to enlighten all conditions of complexity?

**The Source of the Concept**

The specific question of whether experts can reliably predict an outcome or if the outcome is only coherent in hindsight originates from Dave Snowden and his Cynefin framework[1].

Snowden uses this principle to draw a sharp boundary between two distinct domains:

• The Complicated Domain (“Known Unknowns”): This is the realm of traditional systems engineering where cause-and-effect relationships exist and can be discovered in advance[1]. Here, you can rely on experts to analyze the system and reliably predict the outcome[1].

• The Complex Domain (“Unknown Unknowns”): In a Complex Adaptive System (CAS), there is no linear link between cause and effect[2]. The system features “Retrospective Coherence,” meaning the relationship between cause and effect only makes logical sense when looking backward in hindsight[3]. Because order emerges from the bottom up through the interactions of independent agents, exact outcomes cannot be predicted by experts in advance[1][2].

**Is it useful since there is no stopping rule?**

Yes, this principle is profoundly useful, and its utility is actually magnified by the absence of a “stopping rule.”

The concept of a “stopping rule” belongs to Horst Rittel, who defined “wicked problems” as complex social realities that have no definitive formulation, no true-or-false solutions (only better or worse ones), and no clear point at which the problem is permanently “solved” (no stopping rule)[4].

Because there is no stopping rule, traditional management—which relies on an expert predicting an end-state and engineering a step-by-step plan to reach it—will inevitably fail[5][6]. Snowden’s principle is useful because it provides a methodology for acting when final resolution and predictive planning are impossible:

• Abandoning the “Idealist Fallacy”: Instead of trying to design a perfect future state, Snowden advises managing the “evolutionary potential of the present”[7].

• Probe, Sense, Respond: Since experts cannot predict the outcome of a wicked problem, you must act first to understand the system. You formulate multiple hypotheses and launch small, parallel “safe-to-fail” probes[8].

• Amplify and Dampen: You monitor the environment’s response to these probes. If a beneficial pattern emerges, you amplify it; if a negative pattern emerges, you dampen it[8]. This allows you to navigate a continuous, never-ending environment safely without needing a stopping rule.
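The probe–amplify–dampen loop above can be sketched as a toy simulation. This is purely illustrative: `run_probe`, the starting budgets, and the shutdown threshold are hypothetical stand-ins, not part of Snowden's method.

```python
import random

def run_probe(rng):
    """Toy stand-in for a safe-to-fail experiment: returns a signal in
    [-1, 1], where positive means a beneficial pattern emerged."""
    return rng.uniform(-1, 1)

def probe_sense_respond(n_probes=5, rounds=3, seed=42):
    rng = random.Random(seed)
    budgets = [1.0] * n_probes  # launch parallel probes with equal, small budgets
    for _ in range(rounds):
        for i in range(n_probes):
            if budgets[i] == 0.0:
                continue                 # probe was already shut down
            signal = run_probe(rng)      # sense the system's response
            budgets[i] *= 1 + signal     # amplify if beneficial, dampen if not
            if budgets[i] < 0.25:
                budgets[i] = 0.0         # failing probe is cheap to abandon
    return budgets

print(probe_sense_respond())
```

Because each probe is small and disposable, the loop can run indefinitely, which is exactly why no stopping rule is needed.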

**Can this principle be used to enlighten all conditions of complexity?**

Yes, the principle that predictive, linear causality breaks down in complex systems is the unifying ontological thread that connects virtually every complexity scientist in this collection. It can be used to enlighten all conditions of complexity because it establishes the absolute limits of human knowledge:

• Paul Cilliers (General Complexity): Cilliers confirms this by mathematically defining complex systems as “incompressible”[9]. Because any model we build must leave out certain variables, and because complex systems are non-linear, those excluded minor variables will multiply into massive, unpredictable effects[10]. Therefore, we are fundamentally incapable of calculating the exact future state of a complex system[10].

• Alicia Juarrero (Complex Dynamical Systems): She notes that complex systems “carry their history on their backs” (path dependence) and are exquisitely sensitive to initial conditions[11][12]. At a bifurcation point, a microscopic, random fluctuation determines the entire future macro-structure, making precise Newtonian prediction physically impossible[11][13].
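The sensitivity to initial conditions that Juarrero describes can be made concrete with a standard toy system, the logistic map (chosen here for illustration; it is not from the source texts). Two starting states that differ by one part in a billion soon produce completely different trajectories:

```python
def logistic(x, r=4.0):
    # The logistic map x -> r*x*(1-x); at r=4 it behaves chaotically.
    return r * x * (1 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 50)
b = trajectory(0.3 + 1e-9, 50)  # a microscopic fluctuation in the start state
print(max(abs(x - y) for x, y in zip(a, b)))  # the two futures diverge widely
```

The perturbation roughly doubles each step, so even a perfect model of the rules cannot deliver precise long-range prediction unless the initial state is known with impossible exactness.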

• Nassim Nicholas Taleb (Antifragility): Taleb applies this to economics and risk, stating that in complex environments (“Extremistan”), causal links are invisible and predicting rare, high-impact events (Black Swans) is impossible[14][15]. He advises abandoning prediction entirely and instead focusing on building systems that benefit from unpredictable shocks[15][16].
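Taleb's prescription can be read in terms of convexity (Jensen's inequality): a convex payoff gains more from favorable swings than it loses from equal unfavorable ones, so volatility raises its average value. A toy numeric sketch, where the quadratic payoff is an assumption of this example rather than Taleb's own model:

```python
def convex_payoff(x):
    return x ** 2  # illustrative convex payoff: up-moves gain more than down-moves lose

def avg(values):
    return sum(values) / len(values)

calm = [10, 10, 10]      # zero volatility, mean 10
volatile = [0, 10, 20]   # high volatility, same mean 10

calm_avg = avg([convex_payoff(x) for x in calm])          # 100.0
volatile_avg = avg([convex_payoff(x) for x in volatile])  # ~166.7
print(calm_avg, volatile_avg)
```

The system with the convex payoff does better under shocks despite having the same average input, which is the arithmetic core of "benefiting from unpredictable shocks."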

• Robert Rosen (Relational Biology): Rosen proves mathematically that complex living systems possess “non-simulable” or non-computable models[17]. Because they contain closed loops of efficient causation (impredicativities) and adaptively change their own essence over time, any formal predictive model built today will eventually fail tomorrow[17][18].

**A Systems Thinking Nuance**

While complexity science universally accepts this unpredictability, classical cybernetic systems thinkers (like W. Ross Ashby and Stafford Beer) add an important practical caveat. They agree that you cannot predict the exact internal mechanics of a complex system (which they call the “Black Box” or a “transcomputational” problem)[19][20]. However, they argue that you can still achieve control over the system without predicting its exact outcomes. By using Variety Engineering (Ashby’s Law), you can build attenuators to filter out unpredictable environmental noise and amplifiers to boost your organization’s flexibility, allowing the system to self-organize and survive the unpredictable future[21].
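Ashby's Law of Requisite Variety has a simple counting form: a regulator with R distinct responses facing D equally likely disturbances cannot shrink the set of possible outcomes below D/R (rounded up). A minimal sketch, with illustrative numbers:

```python
import math

def min_outcome_variety(disturbances, responses):
    """Counting form of Ashby's Law: the best a regulator with R responses
    can do against D disturbances is ceil(D / R) surviving outcomes."""
    return math.ceil(disturbances / responses)

print(min_outcome_variety(12, 3))   # 4: an under-resourced regulator leaves residual variety
print(min_outcome_variety(12, 12))  # 1: full regulation requires matching variety
```

This is why attenuators (which reduce D) and amplifiers (which increase R) let an organization absorb an environment it cannot predict.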