Getting Started
The site has three components:
• The Voices (Voices)
• Their Views on Complexity (What-is-complexity)
• Summary & Conclusions (What is Complexity)
The Challenge this Site Tackles
Our opening stance is that perspective is critical. There is no single "truth": different people focus on different aspects of the situation, and the skill lies in including these viewpoints. The result is a "management" which does not require agreement or consensus but takes a coherent view across the different aspects and viewpoints.
The technology behind the site is explained in About - Architecture, which identifies the main components: Perlite or Quartz to host the site, Obsidian to structure and edit the content, and NotebookLM to analyse the content. The content comes from a private collection of documents and papers by different authors considered by Roger James to be thought leaders or primary thinkers in the topic.
As the About - Architecture link explains, this is a "hand-cranked" version of an agentic AI system in which the different perspectives are deliberately managed distinctly, essentially by providing a separate niche for each set of ideas. Only in the late stages of analysis are the differing ideas compared and contrasted. This gives protection against the merging, dilution or swamping of concepts that happens mathematically in LLMs. It is a paradigm for understanding what is going on, and it draws from HOCUS, which emphasised the importance of first understanding the principles of operation before handing over to the computer.
What is the Expectation of AI?
Genius or Grunt?
There is a great deal written on the AI revolution, with lofty ambitions and expectations of what AI can and will do. There is also a great deal of cynicism and criticism of this enthusiasm, describing it as a boom headed for disappointment and failure. Perhaps we should allow that both schools of thought are correct?
My standard question is to ask "does AI help with the Grunt work or the Genius work?". At Vanguard we used to point out that when we "put the man on the moon" (genius) we failed to appreciate that "everyone would be watching at home on a colour TV" (grunt) or "cooking a snack on a non-stick pan". The point is that technological advances are never the sole preserve of lofty, noble ambitions but are equally transformative in the humble, mundane everyday.
So it is with AI. One of my colleagues, Cathy, was an early adopter of AI. Cathy is remarkably effective in her work as a data analyst: give Cathy a messy problem and she will solve it, either with some technology or by brute force of diligent effort. Early on, Cathy mentioned she had a data extraction problem and had used AI to extract and structure a sub-set of a messy spreadsheet: no grand application of cognition or artificial consciousness, rather the mundane recognition of a complex pattern.
The approach embodied in STPrism is more Grunt work than Genius. The "hard yards" of hacking through each author's bibliography to extract a consistent, focused digest (the QSets) is Grunt work performed excellently. Any lofty ambition of cognition is provided by the analyst in the design of questions and the quality of the "Excel-like" explorations. There are some aha moments (🤯 Roger's WOW List) where the AI produces the unexpected, but largely this is the contribution of the human, not the technology, although the two are inseparable and working in harmony (Michael Schrage's point about Intelligent Choice Architectures augmenting human intelligence).
In essence the AI infrastructure here ONLY presents a powerful and uniform application programming interface (Semantic Normalisation and Content Levelling) for the underlying books and papers. It is an extremely powerful and useful way of presenting the content in a structure which integrates search (finding the relevant pieces of information) with vocabulary normalisation (so the language of the search terms is equivalent and standardised across every set of documents).
The difficult "human" elements, such as conceptualising or hypothesising on the content, remain the purview of the analyst: they are expressed in the choice of prompts and the formulation of the queries posed to each set of documents separately, and integrated only as the last stage in the process. It is a conceptual architecture similar in principle to CYC, with an end result similar in operation to SQL, where the questions to the LLM equate to the SQL logic of joins and the end results are combinations which match business perspectives.
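The query pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not the site's actual tooling: the function names, niche labels and placeholder answers are all hypothetical, standing in for the NotebookLM interrogation step.

```python
# A minimal sketch of the pattern described above: the SAME standard
# question is put to each niche (author collection) separately, and the
# distinct answers are brought together only at the final stage.

def ask_niche(niche_name: str, question: str) -> str:
    """Stand-in for interrogating one isolated document collection.

    In the real workflow this step uses NotebookLM; here it simply
    returns a labelled placeholder answer for illustration.
    """
    return f"[{niche_name}'s perspective on: {question}]"

def integrate(answers: dict[str, str]) -> str:
    """Final 'compare and contrast' stage: combine distinct answers
    into one coherent view without averaging them into a consensus."""
    return "\n".join(f"{name}: {answer}" for name, answer in answers.items())

niches = ["Stafford Beer", "James Wilk", "Alicia Juarrero"]
standard_query = "What is the role of the observer?"

# Each niche is queried in isolation; documents are never pooled.
answers = {name: ask_niche(name, standard_query) for name in niches}

# Integration happens only as the last step in the process.
print(integrate(answers))
```

The key design point is visible in the code: nothing touches more than one niche until `integrate`, mirroring the SQL analogy where separate tables are combined only by an explicit join.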
The Schema
This structure draws on Niklas Luhmann's concept of "Functional Differentiation" (separating systems into niches to handle complexity)[1], James Ladyman's use of "Isomorphisms" (standardised structures that apply across disciplines)[2], and C. West Churchman's concept of "Sweeping In" (integrating diverse perspectives)[3].
```mermaid
graph BT
    %% TOP LEVEL: INTEGRATION
    subgraph Top_Level [Top Level: Integration + Analysis]
        direction TB
        Integration(Comparison & Synthesis:<br/>'Compare & Contrast' / Analysis):::final
        FinalView[Coherent View / Synthesis<br/>NOT Consensus]:::final
    end

    %% MIDDLE LEVEL: Standard Query input and Output Row (Results)
    StdQuery[The Standard Query Input]
    subgraph Result_Layer [Distinct Perspectives]
        direction LR
        ResA[Result A:<br/>Nuanced POV]:::result
        ResB[Result B:<br/>Nuanced POV]:::result
        ResC[Result C:<br/>Nuanced POV]:::result
    end

    %% LOWER LEVEL: NICHES
    subgraph Lower_Level [Lower Level: Niches + Data Containers]
        direction LR
        NicheA[(Niche A:<br/>Stafford Beer<br/>Collection)]:::container
        NicheB[(Niche B:<br/>James Wilk<br/>Collection)]:::container
        NicheC[(Niche C:<br/>Alicia Juarrero<br/>Collection)]:::container
    end

    %% Data flowing up to processing
    NicheA --> ProcA
    NicheB --> ProcB
    NicheC --> ProcC
    StdQuery --> ProcA
    StdQuery --> ProcC

    %% Results flowing to Integration
    ResA --> Integration
    ResB --> Integration
    ResC --> Integration

    %% Integration to Final View
    Integration --> FinalView
```

Key Architectural Concepts Visualized
1. Lower Level: Isolated Niches
• Vertical Alignment: This layer is the foundation. It contains specific "slices" of the document collection, separated by author (e.g., Stafford Beer vs. James Wilk)[1].
• Purpose: These are kept distinct to prevent "destructive mixing and dilution." If these were pooled together, the "carpet bombing" of terms like "complexity" by one author (e.g., in Cynefin) would obscure the "weaker, more nuanced signals" from others (e.g., Vickers)[2][3].
2. Middle Level: Standardization Layer
• The Interface: This layer functions as a "powerful and uniform application programming interface"[4].
• Standard Query: A single question (e.g., "What is the role of the observer?") acts as a standardized input[5].
• Separate Processing: The system interrogates each niche separately using Google's NotebookLM technology. This ensures each niche provides an answer "FROM THEIR OWN PERSPECTIVE," preserving their unique definitions and principles[6].
3. Top Level: Integration and Analysis
• Synthesis: This is the only layer where the streams merge. The system compares and contrasts the results from the middle layer, rather than the raw data[7].
• Goal: The aim is a "coherent view" rather than a statistical "average" or "consensus," which is viewed as dangerous and driven by groupthink[2][8].
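The dilution problem that the isolated niches guard against can be illustrated with a toy term-frequency example. The authors, word counts and numbers below are entirely hypothetical; the point is only that a pooled statistic is dominated by the heaviest user of a term, while per-niche statistics preserve each author's own signal.

```python
# Toy illustration (hypothetical numbers) of "carpet bombing" dilution.
# Author A uses the word "complexity" constantly; Author B uses it
# rarely and precisely. Pooling the corpora produces one figure that
# misrepresents both; separate niches keep each signal visible.

corpora = {
    "Author A": ["complexity"] * 90 + ["order"] * 10,       # heavy user
    "Author B": ["constraint"] * 95 + ["complexity"] * 5,   # nuanced user
}

# Per-niche frequency of "complexity", computed in isolation.
niche_freq = {
    author: words.count("complexity") / len(words)
    for author, words in corpora.items()
}

# Pooled frequency across all documents mixed together.
total_hits = sum(words.count("complexity") for words in corpora.values())
total_words = sum(len(words) for words in corpora.values())
pooled_freq = total_hits / total_words

print(f"pooled: {pooled_freq:.2f}")        # one blended number
for author, freq in niche_freq.items():
    print(f"{author}: {freq:.2f}")         # distinct signals preserved
```

Here the pooled figure (0.475) sits near neither author's actual usage (0.90 and 0.05), which is the mathematical merging the architecture deliberately avoids by comparing results, not raw data.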
References
[1] About - Architecture.md
[2] About - Architecture.md
[3] About - Architecture.md
[4] Welcome.md
[5] About - Architecture.md
[6] About - Architecture.md
[7] About - Architecture.md
[8] Welcome.md
