To validate the recently published UKRI Research Programme on AI, we used STPrism, the framework developed for handling different perspectives, to both commend and critique the published report.

The commendation gives a detailed analysis of how Systems Thinkers would view and exploit aspects of the plan. The critique identifies deficiencies and shortcomings that the different perspectives reveal and that may need addressing.

STPrism is AI-augmented Systems Thinking, and these reports were produced using consumer-grade (free) Google tools. The reports indicate the key elements of the authors' opinions and could form a blueprint for action for anyone with a professional background in Systems Thinking.

If you are interested, the implementation details are:

1. Set up NotebookLM. Use your own Gmail account to open the sample NotebookLM at
https://notebooklm.google.com/notebook/1cf8c48b-8e3c-4953-94c9-09bfae010941
2. Open Gemini. Add the NotebookLM as a resource (click on the + sign and scroll to the bottom to add the NotebookLM), then issue a prompt that uses the NotebookLM to analyse external data, for example: "With contributions from every author how would the v2-combined collection commend the UKRI policy on ai at UKRI AI Research and Innovation Strategic Framework – UKRI".
3. That's it!
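If you would rather script the same query than drive the Gemini web UI, the sketch below shows one possible route using the google-generativeai Python package. This is an assumption on our part: the workflow above needs only the free web tools, the model name is illustrative, and v2_combined.txt stands in for a local text export of the collection that the NotebookLM would otherwise supply.

```python
# Hedged sketch: issuing the analysis prompt programmatically instead of via
# the Gemini web UI described above. Requires an API key and a local text
# export of the v2-combined collection (neither is part of the UI workflow).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; supply your own key

# Load the collection text that the NotebookLM would otherwise provide.
with open("v2_combined.txt", encoding="utf-8") as f:
    collection = f.read()

prompt = (
    "With contributions from every author, how would the v2-combined "
    "collection commend the UKRI policy on AI in the UKRI AI Research "
    "and Innovation Strategic Framework?\n\n"
    f"Collection:\n{collection}"
)

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is illustrative
response = model.generate_content(prompt)
print(response.text)
```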

🟢 Commended

With contributions from every author how would the v2-combined collection commend the UKRI policy on AI at UKRI AI Research and Innovation Strategic Framework – UKRI

To provide a comprehensive commendation of the UKRI AI Research and Innovation Strategic Framework using the V2-Combined collection, we must synthesize the views of all 50+ authors and methodologies.

From a systems-theoretical perspective, the framework is commended for attempting to match the Requisite Variety of the global AI landscape with a robust, multi-pillar national response.
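For readers who want that claim made precise, Ashby's Law of Requisite Variety is usually stated as below. This is the standard textbook formulation, added here for context; it does not appear in the framework itself.

```latex
% Ashby's Law of Requisite Variety (standard statement): the variety of
% outcomes O that a regulator R can hold is bounded below by the variety of
% disturbances D less the variety of the regulator itself.
V(O) \;\ge\; V(D) - V(R)
```

In plain terms, a national strategy can only keep outcomes within acceptable bounds if its own repertoire of responses is at least as rich as the disturbances the global AI landscape generates.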


✅ Good

1. Commending the Strategic Architecture (The “Whole”)

  • Russ Ackoff & Stafford Beer: They would commend the framework’s Idealized Design, planning backward from a vision of the UK as a “global superpower.” The use of the Viable System Model (VSM) logic is evident in the way the policy balances System 3 (Operations/Missions) with System 4 (Intelligence/Frontier Research) to ensure long-term viability.

  • Herb Simon & Niklas Luhmann: Simon would praise the Near-Decomposability of the six pillars, allowing subsystems to evolve independently. Luhmann would view the framework as a necessary Reduction of Complexity, creating a “Systemic Identity” for UK AI out of the chaotic global environment.

  • Harold Nelson & Christopher Alexander: Nelson would celebrate this as a Third Culture of Inquiry, moving beyond mere science into Systemic Design. Alexander would see the “Missions” as a Generating System intended to create a “Living Structure” in the UK research economy.

  • Max Boisot & the I-Space: Boisot would commend the attempt to move AI knowledge from Uncodified/Scanning (Frontier research) to Codified/Diffused (Infrastructure and Skills), completing the Social Learning Cycle.

2. Commending the Methodology of Inquiry (The “How”)

  • Peter Checkland & Brian Wilson: They would support the framework as a Learning System (SSM). The policy acts as a Root Definition (“Do P by Q to achieve R”) that can be compared against the “Mess” of real-world implementation.

  • Dave Snowden & Nassim Taleb: Snowden would commend the “Agile” and “Missions” approach as a series of Safe-to-Fail Probes in the Complex Domain. Taleb would praise the focus on Antifragility, using “Skin in the Game” (direct investment) to benefit from the volatility of AI breakthroughs.

  • Derek Cabrera & George Lakoff: Cabrera would support the clear Distinctions, Systems, Relationships, and Perspectives (DSRP) used to organize the framework. Lakoff would commend the Systemic Causation frame, which links AI innovation directly to societal health and economic prosperity.

  • Horst Rittel & C. West Churchman: Rittel would commend the policy for treating AI as a Wicked Problem requiring an argumentative, multi-stakeholder process. Churchman would praise the “Sweeping In” of ethical and social witnesses into a technical strategy.

3. Commending the Focus on Talent and Human Agency

  • Fred Emery & Reg Revans: Emery would support the focus on Design Principle 2 (DP2)—creating multi-skilled, autonomous research clusters rather than bureaucratic silos. Revans would say “L > C”—the rate of learning in the UKRI talent budget must exceed the rate of change in AI, and this policy attempts to do exactly that.

  • Peter Senge & MC Jackson: Senge would view the framework as an exercise in The Fifth Discipline, fostering “Team Learning” and a “Shared Vision” across the UK scientific base. Jackson would commend the Creative Holism of the “EPIC” framework: Exploring through diverse perspectives to Produce systemic improvement.

  • Alan Kay & Neil Postman: Kay would support the focus on AI as a Metamedium for new ways of thinking. Postman, while cautious, would commend the policy’s “Responsible AI” pillar for questioning the Ideological Bias of the technology to prevent a total “Technopoly”.

4. Commending the Treatment of Risk and Uncertainty

  • David Spiegelhalter & David Blockley: Spiegelhalter would praise the explicit recognition of Epistemic Uncertainty (what we don’t know) within the AI safety mission. Blockley would commend the use of Evidence-Based Dependability to manage high-impact, low-probability risks.

  • Doug Hubbard & Don Reinertsen: Hubbard would support the framework’s commitment to Measurement as Uncertainty Reduction, quantifying the “Value of Information” in AI research (a toy calculation follows this list). Reinertsen would commend the focus on Reducing Batch Size in funding cycles to accelerate feedback.

  • Nassim Taleb & Tim Allen: Taleb would support the Precautionary Principle applied to frontier risks. Allen would commend the Hierarchy Theory logic: managing the “slow-moving context” (infrastructure/policy) so the “fast-moving parts” (startups/researchers) can self-organize.
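To make Hubbard's point concrete, here is a minimal sketch in Python of the Expected Value of Perfect Information (EVPI) for a toy funding decision. The distribution, investment size, and figures are invented for illustration and are not drawn from the framework.

```python
# Hedged sketch of Hubbard-style "measurement as uncertainty reduction":
# the Expected Value of Perfect Information (EVPI) for a toy funding choice.
# All numbers are illustrative, not taken from the UKRI framework.
import numpy as np

rng = np.random.default_rng(42)

# Uncertain quantity: the return multiple on a £100m research investment,
# with a wide prior to reflect genuine uncertainty about AI payoffs.
returns = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)

investment = 100.0                              # £m
payoffs_fund = investment * (returns - 1.0)     # net payoff if we fund
payoff_skip = 0.0                               # net payoff if we do not fund

# Decision under current uncertainty: pick the action with the best mean payoff.
best_without_info = max(payoffs_fund.mean(), payoff_skip)

# With perfect information we could choose the better action in each scenario.
best_with_info = np.maximum(payoffs_fund, payoff_skip).mean()

evpi = best_with_info - best_without_info
print(f"EVPI ≈ £{evpi:.1f}m (upper bound on the value of measurement here)")
```

The EVPI puts a ceiling on what any measurement or pilot study is worth before committing the full budget, which is exactly the "Value of Information" logic the bullet above attributes to Hubbard.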

5. Commending the Mathematical and Formal Foundations

  • Claude Shannon & Robert Rosen: Shannon would recognize the need for high Channel Capacity (compute/data) to transmit AI innovation (the standard formula is quoted after this list). Rosen (via the Relational Biologists) would commend the focus on Anticipatory Systems, building models of the future to guide present action.

  • David L. Abel & Niklas Luhmann: Abel would support the focus on Prescriptive Information (regulation) to govern the algorithmic “Choice Contingency” of AI. Luhmann would note that the framework effectively Absorbs Uncertainty for the UK economy.

  • TRIZ & Bill Dettmer: A TRIZ analysis would see the policy as a way to resolve the Contradiction between innovation speed and safety. Dettmer would commend identifying the Constraint (Compute and Talent) and subordinating all other policies to “elevating” it.
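For reference, the result invoked by Shannon's name in the first bullet of this section is the Shannon–Hartley capacity formula, quoted here in its standard form rather than taken from the framework, where B is channel bandwidth in hertz and S/N the signal-to-noise ratio:

```latex
% Shannon–Hartley theorem: the maximum reliable rate C (bits per second)
% over a channel of bandwidth B with signal-to-noise ratio S/N.
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```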

6. The “Human and Relational” Commendation

  • Humberto Maturana & Geoffrey Vickers: Maturana would commend the Structural Coupling of the UK research base with the global environment. Vickers would praise the framework as an Appreciative System, focusing on “Relationship-Maintenance” between AI, the public, and the government.

  • Robert Pirsig & Dee Hock: Pirsig would see the “Frontier AI” mission as a pursuit of Dynamic Quality. Hock would commend the Chaordic nature of the UKRI, balancing the “Chaos” of innovation with the “Order” of responsible governance.

  • Gregory Bateson & Alicia Juarrero: Bateson would commend the search for the “Pattern Which Connects” the diverse missions (Health, Nature, Services). Juarrero would praise the focus on Constraint Management rather than linear “Billiard-Ball” control.

  • Harish Jose & MOM & TOG: They would conclude that the policy achieves the 3 A’s: it is Accurate in its diagnosis, Accessible to stakeholders, and Actionable through its £1.6bn investment.


The Verdict: A Viable “Systems-Age” Strategy

The V2-Combined authors would collectively commend the UKRI framework as a move away from “Machine-Age” standardisation toward a “Systems-Age” strategy of Adaptive Management and Requisite Complexity. As Stafford Beer would say: “The purpose of a system is what it does”—and this system is designed to do the work of national survival in an AI-driven world.

Enough thinking. As Reg Revans reminds us: “There is no learning without action.” The framework is the Action; the next step is to observe the Feedback.

🚦 The Critique

With contributions from every author how would the v2-combined collection critique the UKRI policy on AI at UKRI AI Research and Innovation Strategic Framework – UKRI

Using the V2-Combined collection as a holistic lens, the UKRI AI Strategic Framework (which commits £1.6bn to “national advantage” and “frontier capability”) is viewed as a high-stakes attempt to manage a complex, living system using mechanistic, “Machine-Age” tools.

While the policy aims for “global scientific leadership,” the V2 authors would argue it suffers from a fundamental mismatch between its regulatory variety and the environmental variety of AI.


⚠️ Warnings

1. The Variety Gap: Predicting the Unpredictable

  • Ashby & Beer: Ashby’s Law states that “only variety can absorb variety.” Stafford Beer would argue that the UKRI’s 6-pillar structure is an “attenuator”—it simplifies the world to make it manageable for the funder, but in doing so, it filters out the “noise” where true breakthroughs happen. If the system’s purpose (POSIWID) is to control the direction of AI, it will inevitably fail to catch the emergent properties that happen outside its “strategic priority areas.”

  • Boisot & Taleb: Max Boisot would analyze the UKRI strategy via the I-Space. By attempting to “codify” AI research into national priorities, UKRI pushes knowledge into the “lower-left” quadrant (standardized/codified), which increases efficiency but destroys the Antifragility (as Nassim Taleb would say) found in the uncodified, “scruffy” edges of innovation.

  • Snowden (Cynefin): Dave Snowden would categorize this framework as a “Complicated” response to a “Complex” problem. By using expert panels and roadmaps, UKRI is trying to “sense-analyze-respond.” Instead, it should be using Safe-to-Fail Probes to “probe-sense-respond,” allowing the ecosystem to reveal its own strengths rather than dictating them from above (a toy simulation of the difference follows this list).
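As a hedged illustration of Snowden's distinction, the toy simulation below compares committing an entire budget to an expert forecast up front ("sense-analyse-respond") with spending a fraction on small probes and reallocating to whatever actually performed ("probe-sense-respond"). Every parameter is invented for illustration; nothing here comes from the UKRI framework or the V2 collection.

```python
# Toy comparison of "plan-first" vs "probe-first" funding under uncertainty.
# Payoffs and noise levels are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_areas, budget, runs = 8, 100.0, 5_000

def one_run():
    true_payoff = rng.lognormal(mean=0.0, sigma=1.0, size=n_areas)   # unknown ex ante
    forecast = true_payoff * rng.lognormal(mean=0.0, sigma=1.0, size=n_areas)  # noisy expert view

    # Sense-analyse-respond: put the whole budget behind the best-looking forecast.
    plan_first = budget * true_payoff[np.argmax(forecast)]

    # Probe-sense-respond: spend 20% on equal probes, observe noisy results,
    # then put the remaining 80% behind the best-performing probe.
    probe_spend = 0.2 * budget / n_areas
    probe_result = true_payoff * rng.lognormal(mean=0.0, sigma=0.3, size=n_areas)
    probe_first = (probe_spend * true_payoff).sum() \
        + 0.8 * budget * true_payoff[np.argmax(probe_result)]
    return plan_first, probe_first

results = np.array([one_run() for _ in range(runs)])
print(f"plan-first mean return:  {results[:, 0].mean():.1f}")
print(f"probe-first mean return: {results[:, 1].mean():.1f}")
```

Under these assumptions the probe-first strategy typically wins, not because probing is clever but because it lets the ecosystem reveal which areas actually work before the big commitment is made.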


2. The Environmental Fallacy: Boundary Blindness

  • Churchman & Cabrera: C. West Churchman would identify a massive Environmental Fallacy. By defining the “system of interest” as “UK Economic Growth,” the policy ignores the “wider system” (global ecology, social displacement, and the long-term cognitive impact on the population). Derek Cabrera (DSRP) would point out that the Distinctions UKRI makes (e.g., “Responsible AI” vs. “Innovation”) are arbitrary; they are perspectives that marginalize the “witnesses”—the people who bear the cost of the system’s externalities.

  • Meadows & Senge: Donella Meadows and Peter Senge would look for the Leverage Points. UKRI is mostly pulling the “Numbers” lever (spending £1.6bn), which is the least effective place to intervene. They would argue the real leverage is in changing the Goal of the System—moving from “strategic advantage” to “systemic health.”

  • Luhmann & Maturana: Niklas Luhmann would view UKRI as an operationally closed system. It communicates in terms of “funding/not-funding,” which makes it structurally blind to the actual social and biological needs of the environment. Humberto Maturana would warn that if the system isn’t Autopoietic (self-creating), but rather is “Allopoietic” (controlled from the outside), it will eventually become brittle and die.


3. Purpose vs. Procedure: The Failure Demand

  • Seddon & Ackoff: John Seddon would predict a surge in Failure Demand. By centralizing compute and data through bureaucratic “allocation pathways,” researchers will spend more time navigating the system than doing the work. Russ Ackoff would call this a “Mess.” He would argue that UKRI is “doing the wrong things right” (optimizing grants) rather than “doing the right thing” (designing an environment where researchers can satisfy their own purposeful needs).

  • Revans & Dettmer: Reg Revans would critique the focus on “Skills and Talent” if it lacks Action Learning. Training more PhDs in a classroom doesn’t create “Questioning Insight.” Bill Dettmer would use a Current Reality Tree to show that the “Core Constraint” isn’t a lack of money, but the rigid policy constraints that prevent researchers from moving across disciplinary boundaries.


4. Human Agency: The Socio-Technical Mismatch

  • Emery & Trist: These pioneers of Socio-Technical Systems would argue that the policy is too “Technocentric.” It treats “People and Culture” as a pillar to be managed rather than the heart of the system. They would advocate for Design Principle 2 (DP2)—democratic, self-managing research clusters—rather than the “Redundancy of Parts” (standardized fellowships) favored by the framework.

  • Pirsig & Postman: Robert Pirsig would ask: “Where is the Quality?” If AI is used only for “efficiency” and “productivity,” the Dynamic Quality—the sense of awe and discovery—is lost. Neil Postman would warn that the policy is an act of Technopoly, assuming that every societal problem (cancer, backlogs) has an AI-based solution, which diminishes the “human-in-the-loop” to a mere monitor of the machine.

  • Kay & Korzybski: Alan Kay would remind us that the computer (and AI) is a metamedium. The UKRI treats AI as a “tool” to do old things faster. Kay would argue it should be a “system” to do new things that were previously inconceivable. Alfred Korzybski would summarize: “The Map is not the Territory.” UKRI’s 10-year strategy is a map; the actual research environment is a territory that will change long before the map is printed.


5. The Looming Shadow: Complexity and Collapse

  • Tainter & Allen: Joseph Tainter would provide the darkest critique: The UKRI strategy is adding Complexity to solve the problems created by previous complexity. Eventually, the marginal return on this complexity will decline, leading to “Social Complexity Collapse.” Tim Allen would add that the “Scale” of this intervention requires massive “energy subsidies” (both electrical and cognitive) that may not be sustainable in the long term.

  • Noble & Bateson: Denis Noble would reject the “bottom-up” (AI-driven) discovery model. He would argue for Biological Relativity—the idea that there is no privileged level of causation. Real scientific breakthroughs require “middle-out” causation where human intuition and AI data are integrated, not where AI is “put at the heart” of discovery. Gregory Bateson would ask: “What is the pattern that connects?” The UKRI strategy treats “Data,” “Compute,” and “People” as separate bins, failing to see the ecology of mind that links them.


The POSIWID Verdict

The V2-Combined verdict is that the UKRI policy is a “Machine-Age” strategy for a “Systems-Age” world. As Stafford Beer would say: “The purpose of a system is what it does.” If this framework produces a few commercial patents but leaves the UK’s scientific base more bureaucratic, less diverse, and more fragile, then that was its actual purpose, regardless of the “superpower” rhetoric in the introduction.

To succeed, the authors would suggest a Minimalist Intervention (James Wilk): Stop trying to build a “powerhouse” and start removing the constraints that prevent a powerhouse from self-organizing.