Getting Started

The site has three components:

The Challenge this Site Tackles

Our opening stance is that perspective is critical. There is no single ‘truth’; different people focus on different aspects of a situation, and the skill lies in including these viewpoints: a ‘management’ of perspectives that does not require agreement or consensus but takes a coherent view across the different aspects and viewpoints.

The technology behind the site is explained in About - Architecture, which identifies the main components: Perlite or Quartz to host the site, Obsidian to structure and edit the content, and NotebookLM to analyse the content. The content comes from a private collection of documents and papers by different authors whom Roger James considers thought leaders or primary thinkers on the topic.

As the About - Architecture link explains, this is a ‘hand-cranked’ version of an agentic AI system in which the different perspectives are deliberately managed distinctly, essentially by providing a separate niche for each set of ideas. Only in the late stages of analysis are the differing ideas compared and contrasted. This gives protection against the merging, dilution or swamping of concepts that happens mathematically inside LLMs. It is a paradigm for understanding what is going on, and it draws from HOCUS, which emphasised the importance of first understanding the principles of operation before handing over to the computer.

What is the Expectation of AI?

Genius or Grunt?

There is so much written on the AI revolution, with lofty ambitions and expectations of what AI can and will do; there is also a great deal of cynicism and criticism of this enthusiasm, describing it as a boom heading for disappointment and failure. Perhaps we should allow that both schools of thought are correct.

My standard question is to ask “does AI help with the Grunt work or the Genius work?”. At Vanguard we used to point out that “when we put the man on the moon (genius)” we failed to appreciate that “everyone would be watching at home on a colour TV (grunt)” or “cooking a snack on a non-stick pan”. The point is that technological advances are never the sole preserve of lofty, noble ambitions but are equally transformative in the humble mundane everyday.

So it is with AI. One of my colleagues, Cathy, was an early adopter of AI. Cathy is remarkably effective in her work as a data analyst: give her a messy problem and she will solve it, either with some technology or by brute force of diligent effort. Early on, Cathy mentioned she had a data extraction problem and had used AI to extract and structure a sub-set of a messy spreadsheet: no grand application of cognition or artificial consciousness, rather the mundane recognition of a complex pattern.

The approach that is STPrism is more Grunt work than Genius. The ‘hard yards’ of hacking through each author’s bibliography to extract a consistent, focused digest (the QSets) is Grunt work performed excellently. Any lofty ambition of cognition is provided by the analyst in the design of questions and the quality of the ‘Excel-like’ explorations. There are some aha moments (🤯Rogers WOW List) where the AI produces the unexpected, but largely this is the contribution of the human, not the technology, although the two are inseparable and work in harmony (Michael Schrage’s point about Intelligent Choice Architectures augmenting human intelligence).

In essence, the AI infrastructure here ONLY presents a powerful and uniform application programming interface (Semantic Normalisation and Content Levelling) to the underlying books and papers. It is an extremely powerful and useful way of presenting the content in a structure which integrates search (finding the relevant pieces of information) and normalises vocabulary (so the language of the search terms is equivalent and standardised across every set of documents).
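The vocabulary-normalisation idea can be made concrete with a minimal sketch. All names here (`CANONICAL_TERMS`, `normalise_query`) are hypothetical illustrations, not part of the actual system: the point is only that synonymous search terms collapse to one canonical vocabulary before any collection is queried.

```python
# Hypothetical synonym table: each phrase maps to its canonical term, so the
# same question means the same thing across every set of documents.
CANONICAL_TERMS = {
    "observer": "observer",
    "viewpoint": "observer",
    "point of view": "observer",
    "complexity": "complexity",
    "complex adaptive system": "complexity",
}

def normalise_query(query: str) -> str:
    """Rewrite a free-text query so every synonym collapses to its canonical term."""
    result = query.lower()
    # Replace longer phrases first so "point of view" is matched as a whole.
    for phrase in sorted(CANONICAL_TERMS, key=len, reverse=True):
        result = result.replace(phrase, CANONICAL_TERMS[phrase])
    return result

print(normalise_query("What is the role of the Viewpoint?"))
# → what is the role of the observer?
```

A real implementation would use embedding similarity rather than a lookup table, but the effect is the same: search terms become equivalent and standardised before they reach any niche.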

The difficult ‘human’ elements, such as conceptualising or hypothesising on the content, remain the purview of the analyst in their choice of prompts, and are expressed in the formulation of the queries posed to each set of documents separately and only integrated as the last stage in the process. It is a conceptual architecture similar in principle to CYC, with an end result similar in operation to SQL: the questions to the LLM equate to the SQL logic of joins and so on, and the end result is combinations which match business perspectives.
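The SQL comparison can be sketched literally. Using Python’s built-in sqlite3 module and an entirely hypothetical schema (the table, column names, and answer texts below are illustrative, not drawn from the real system), each niche’s answers stay in their own rows, and the ‘join’ happens only at the final, integrating stage.

```python
import sqlite3

# Hypothetical schema: one row per (niche, question, answer). Per-niche
# answers never mix until the final join below.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE answers (niche TEXT, question TEXT, answer TEXT)")
con.executemany(
    "INSERT INTO answers VALUES (?, ?, ?)",
    [
        ("Beer", "role of observer", "The observer defines the system's purpose."),
        ("Wilk", "role of observer", "Observation is itself an intervention."),
        ("Juarrero", "role of observer", "Context constrains what can be observed."),
    ],
)

# The 'compare and contrast' stage as a self-join: pair every niche's answer
# with every other niche's answer to the same question.
rows = con.execute(
    """
    SELECT a.niche, b.niche, a.answer, b.answer
    FROM answers AS a JOIN answers AS b
      ON a.question = b.question AND a.niche < b.niche
    """
).fetchall()

for a_niche, b_niche, a_ans, b_ans in rows:
    print(f"{a_niche} vs {b_niche}: {a_ans} | {b_ans}")
```

The join condition `a.niche < b.niche` yields each pairing once, which is the SQL analogue of comparing and contrasting distinct perspectives rather than pooling them.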

The Schema


This structure draws on Niklas Luhmann’s concept of “Functional Differentiation” (separating systems into niches to handle complexity)[1], James Ladyman’s use of “Isomorphisms” (standardized structures that apply across disciplines)[2], and C. West Churchman’s concept of “Sweeping In” (integrating diverse perspectives)[3].

```mermaid
graph BT

    %% TOP LEVEL: INTEGRATION
    subgraph Top_Level [Top_Level:Integration+Analysis]
        direction TB
        Integration(Comparison & Synthesis:<br/>'Compare & Contrast' / Analysis):::final
        FinalView[Coherent View / Synthesis<br/>NOT Consensus]:::final
    end

    %% MIDDLE LEVEL: STANDARDISATION
    %% The Standard Query Input
    StdQuery[Standard Query Input]:::query

    %% The Output Row (Results)
    subgraph Result_Layer [Distinct Perspectives]
        direction LR
        ResA[Result A:<br/>Nuanced POV]:::result
        ResB[Result B:<br/>Nuanced POV]:::result
        ResC[Result C:<br/>Nuanced POV]:::result
    end

    %% Separate processing of each niche
    ProcA[Process A]:::proc
    ProcB[Process B]:::proc
    ProcC[Process C]:::proc

    %% LOWER LEVEL: NICHES
    subgraph Lower_Level [Lower_Level:Niches+Data_Containers]
        direction LR
        NicheA[(Niche A:<br/>Stafford Beer<br/>Collection)]:::container
        NicheB[(Niche B:<br/>James Wilk<br/>Collection)]:::container
        NicheC[(Niche C:<br/>Alicia Juarrero<br/>Collection)]:::container
    end

    %% Data flowing up to processing
    NicheA --> ProcA
    NicheB --> ProcB
    NicheC --> ProcC

    %% The standard query applied to each niche separately
    StdQuery --> ProcA
    StdQuery --> ProcB
    StdQuery --> ProcC

    %% Processing producing a distinct result per niche
    ProcA --> ResA
    ProcB --> ResB
    ProcC --> ResC

    %% Results flowing to Integration
    ResA --> Integration
    ResB --> Integration
    ResC --> Integration

    %% Integration to Final View
    Integration --> FinalView
```

Key Architectural Concepts Visualized

1. Lower Level: Isolated Niches

• Vertical Alignment: This layer is the foundation. It contains specific “slices” of the document collection, separated by author (e.g., Stafford Beer vs. James Wilk)[1].

• Purpose: These are kept distinct to prevent “destructive mixing and dilution.” If these were pooled together, the “carpet bombing” of terms like “complexity” by one author (e.g., in Cynefin) would obscure the “weaker more nuanced signals” from others (e.g., Vickers)[2][3].

2. Middle Level: Standardization Layer

• The Interface: This layer functions as a “powerful and uniform application programming interface”[4].

• Standard Query: A single question (e.g., “What is the role of the observer?”) acts as a standardized input[5].

• Separate Processing: The system interrogates each niche separately using Google’s NotebookLM technology. This ensures each niche provides an answer “FROM THEIR OWN PERSPECTIVE,” preserving their unique definitions and principles[6].

3. Top Level: Integration and Analysis

• Synthesis: This is the only layer where the streams merge. The system compares and contrasts the results from the middle layer, rather than the raw data[7].

• Goal: The aim is a “coherent view” rather than a statistical “average” or “consensus,” which is viewed as dangerous and driven by groupthink[2][8].
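The three layers above can be sketched as a short pipeline. Everything here is a hypothetical stand-in: `ask_niche` represents a NotebookLM query against one isolated collection, and the function names and placeholder answers are illustrative only.

```python
# Sketch of the three-layer flow. All names are hypothetical; ask_niche
# stands in for a real NotebookLM query against one niche's documents.

NICHES = ["Stafford Beer", "James Wilk", "Alicia Juarrero"]

def ask_niche(niche: str, question: str) -> str:
    # Placeholder: a real system would query only this niche's documents.
    return f"[{niche}'s answer, from their own perspective, to: {question}]"

def run_standard_query(question: str) -> dict[str, str]:
    # Middle level: the SAME question is put to each niche SEPARATELY,
    # so no niche's vocabulary can swamp another's.
    return {niche: ask_niche(niche, question) for niche in NICHES}

def integrate(results: dict[str, str]) -> str:
    # Top level: only here do the streams meet, as a compare-and-contrast
    # across labelled perspectives rather than an average of them.
    return "\n".join(f"{niche}: {answer}" for niche, answer in results.items())

coherent_view = integrate(run_standard_query("What is the role of the observer?"))
print(coherent_view)
```

Note that `integrate` receives the per-niche *results*, never the raw documents, mirroring the point that synthesis happens over the middle layer’s outputs, not over pooled data.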

