To drill into the core of this debate (whether “intelligence” resides in individual outliers, per Snowden, or in the framing of the task, per Noble), you can pose a general question that probes the ontological source of a system’s “vision.”

Suggested General Question

“To what extent is the detection of ‘weak signals’ (or outliers) a property of individual sensory capability, or an emergent artifact of the ‘net’—the specific boundary judgments, task constraints, and ‘station points’—chosen by the observer to frame the inquiry?”

Why this question reveals the “Truth” and “Importance” of the discussion:

1. It forces a choice between “Attributes” and “Relationships”

Dave Snowden’s “17% outliers” logic implies that certain individuals possess a specific attribute (resistance to inattentional blindness) that makes them better “sensors”[1][2].

Denis Noble and John Flach would argue that this view is a “Data Hypothesis” error [user-provided context]. Based on the sources, they would suggest that intelligence is not “inside” the agent but is an emergent property of the coupling between the agent and the environment[3][4].

Drill-down insight: If the 17% is an artifact of the “net,” then managing a system by simply hiring “better people” (individual attributes) is a category error. You must instead redesign the task (the relationship)[5][6].

2. It exposes the “Epistemic Cut” (The Observer’s Choice)

W. Ross Ashby and Harold Nelson emphasize that a “system” is not a real thing in the world but a list of variables selected by an observer[7][8].

• If the “17%” is established by the task, then the observer’s choice of what to measure (the “net”) is what actually “creates” the outliers. As Paul Cilliers notes, every model is a strategic choice that excludes certain information; what we call an “outlier” may simply be the “otherness” that our current frame is designed to ignore[9][10].
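A minimal sketch, with entirely hypothetical data and an assumed two-sigma cutoff, makes this concrete: the same population yields different “outliers” depending on which variable the observer’s “net” is built to measure.

```python
# Hypothetical data: 200 agents described by two independent measures.
import numpy as np

rng = np.random.default_rng(42)
n = 200
speed = rng.normal(50.0, 5.0, n)       # agents framed by task speed
accuracy = rng.normal(0.90, 0.02, n)   # the same agents, framed by accuracy

def outliers(values, k=2.0):
    """Flag points more than k standard deviations from the sample mean."""
    z = (values - values.mean()) / values.std()
    return set(np.flatnonzero(np.abs(z) > k))

by_speed = outliers(speed)        # net 1: the task is "be fast"
by_accuracy = outliers(accuracy)  # net 2: the task is "be accurate"
print("Flagged by the speed net:   ", sorted(by_speed))
print("Flagged by the accuracy net:", sorted(by_accuracy))
print("Flagged by both nets:       ", sorted(by_speed & by_accuracy))
```

Nothing about any individual agent changes between the two printouts; only the observer’s boundary judgment does, and the two nets flag largely disjoint sets of “outliers.”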

3. It highlights the risk of “Fragilista” Thinking

Nassim Taleb warns against relying on “expert” models that lack Skin in the Game[11][12].

• If you trust the “17% sensors” as a fixed metric, you are treating an Ecological problem (messy and unpredictable) as a Ludic one (a game with fixed rules)[13][14]. Noble’s “net” analogy reminds us that in Extremistan (complex systems), the “net” we use today may be completely wrong for the “fish” that appear tomorrow[15][16].
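A companion sketch, with distributions assumed purely for illustration, shows the Extremistan risk: a “net” whose mesh is sized on thin-tailed data flags a radically different share of fat-tailed data.

```python
# Hypothetical comparison: a cutoff tuned on Gaussian ("Mediocristan")
# samples is reused, unchanged, on Pareto ("Extremistan") samples.
import numpy as np

rng = np.random.default_rng(0)
mediocristan = rng.normal(0.0, 1.0, 100_000)  # thin tails
extremistan = rng.pareto(1.5, 100_000)        # fat tails

# Size the net so it catches exactly the top 1% of yesterday's fish.
cutoff = np.quantile(mediocristan, 0.99)
print(f"Cutoff fitted to Gaussian data: {cutoff:.2f}")
print(f"Share of Gaussian data flagged: {(mediocristan > cutoff).mean():.1%}")
print(f"Share of Pareto data flagged:   {(extremistan > cutoff).mean():.1%}")
```

The mesh that caught 1% of yesterday’s fish catches roughly 16% of tomorrow’s; the net, not the fish, determines what registers as a weak signal.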

4. It tests for “Semantic Closure”

Howard Pattee and David Abel would ask whether your “net” (the symbolic rules) is capable of harnessing the “laws” (the material dynamics) of the system[17][18].

• If the “17%” is merely an accident of how a task was described, the system lacks Semantic Closure—the rules and the reality are not in a functional loop, and the system is merely “drifting” without true autonomous identity[19][20].

Practical Application of the Question

When you put this question to the sources, probe specifically:

Snowden: Does he allow the “17%” to change if the signification process (the metadata structure) is altered?[21]

Noble: How do boundary conditions (the net) specifically constrain the “mush” of lower-level data into ordered “catches”?[22][23]

James Wilk: Is the “17%” just another mid-level abstraction that we should “filter” out to reach a “video description” of the task?[24]

Does this question provide the right “mesh size” for your next level of inquiry, or should we narrow it down to a specific organizational “task” you are currently observing?