Exported: 30/01/2026, 20:33:08
Based on the provided sources, the “Risk-case-study” by Professor Michael Mainelli, which advocates for “Environmental Consistency Confidence” and “Predictive Key Risk Indicators,” can be interpreted through Peter Checkland’s ideas as a classic example of “hard” systems thinking applied to a human problem situation. However, the case study also contains elements that validate Checkland’s critiques regarding the complexity of human affairs.
Here is an interpretation of the risk case study through the lens of Checkland’s concepts:
1. Hard vs. Soft Systems Thinking
Checkland distinguishes between “hard” systems thinking (where the world is assumed to be systemic and problems are well-defined searches for efficient means to achieve predefined ends) and “soft” systems thinking (where the world is complex and problematic, and it is the process of inquiry itself that is systemic)[1],[2].
• The “Hard” Stance: Mainelli’s approach explicitly aligns with the “hard” tradition. He argues for the “application of the scientific paradigm” and “scientific management,” asserting that the core process should be a “statistical engine room”[3],[4],[5]. He treats the organization as a “large black box” where outputs can be predicted from inputs using multivariate statistics[6].
• Checkland’s Critique: Checkland would classify this as an attempt to apply the methods of natural science to social phenomena[7],[8]. Checkland argues that while natural laws (like gravity) are invariant, human affairs are characterized by autonomous human beings who can act capriciously or create new meanings, rendering such “laws” or rigid predictions problematic[7],[8].
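The “black box” stance Mainelli takes can be made concrete. The sketch below is a hypothetical illustration, not Mainelli’s actual model: it treats the organization as a black box and fits a multivariate linear regression predicting losses (outputs) from environmental factors (inputs). All variable names and the synthetic data are assumptions for illustration.

```python
# Hypothetical sketch of the "black box" stance: predict operational
# losses (output) from day-to-day environmental factors (inputs) via
# multivariate least-squares regression. Data and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic environmental factors observed over 200 days (rows):
# columns might stand for deal volume, staff turnover, IT downtime.
X = rng.normal(size=(200, 3))
true_coef = np.array([2.0, 0.5, 3.0])          # hidden "law" of the box
losses = X @ true_coef + rng.normal(scale=0.1, size=200)

# The "statistical engine room": fit intercept + weights by least squares.
coef, *_ = np.linalg.lstsq(np.c_[np.ones(200), X], losses, rcond=None)
intercept, weights = coef[0], coef[1:]

# Predict tomorrow's loss from tomorrow's environmental inputs.
x_new = np.array([1.0, 0.0, 2.0])
predicted_loss = intercept + x_new @ weights
```

Checkland’s critique applies precisely here: the fit only holds while the people inside the box keep behaving as the historical data assumed.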
2. The Failure of “Engineering” Human Behavior
A pivotal moment in Mainelli’s case study occurs when he describes a European investment bank using his statistical method. He notes that the IT department “unilaterally changed the KRI to ‘unplanned’ IT downtime,” which skewed the predicted losses[9].
• Checkland’s Interpretation: This incident perfectly illustrates Checkland’s argument that human systems cannot be “engineered” like physical systems because human beings attribute meaning to their world and act upon those meanings[7],[10]. The IT department’s action was a political act (changing a definition to look better) that invalidated the “scientific” model. Checkland would analyze this using Analysis Three (Political Analysis), which examines how power is expressed and defended (e.g., by controlling information definitions)[11].
3. Data, Capta, and Information
Mainelli distinguishes between general “Risk Indicators” (RIs) and “Key Risk Indicators” (KRIs), stating that an indicator is only “Key” if it has predictive capability[12].
• Checkland’s Framework: This aligns with Checkland’s distinction between Data (facts), Capta (facts we select to pay attention to), and Information (capta enriched with meaning)[13].
◦ Data: The “strong database of day-to-day environmental factors” Mainelli describes[14].
◦ Capta: The selection of specific metrics like “deal volatility” or “IT downtime”[15].
◦ Information: Mainelli’s KRIs become “information” only when they serve the specific purpose of prediction[12].
• The “Served” System: Checkland argues that an information system must serve a purposeful activity[16],[17]. Mainelli adheres to this by insisting that KRIs must serve the specific purposeful activity of “predicting incidents and losses”[18].
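Mainelli’s RI-versus-KRI distinction is operational: an indicator is “Key” only if it predicts. A minimal, hypothetical filter for this test is sketched below; the indicator names, the lag structure, and the correlation threshold are all illustrative assumptions, not part of the source.

```python
# Hypothetical filter turning candidate risk indicators (capta) into
# Key Risk Indicators: keep only those whose lagged values correlate
# with next-period losses. Data, names, and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 300
indicators = {
    "it_downtime": rng.normal(size=n),
    "deal_volatility": rng.normal(size=n),
    "coffee_consumption": rng.normal(size=n),   # noise: an RI, not a KRI
}
# Losses depend, with a one-step lag, on downtime and volatility only.
signal = 3.0 * indicators["it_downtime"] + 1.5 * indicators["deal_volatility"]
losses = np.roll(signal, 1) + rng.normal(scale=0.5, size=n)

def is_key(series, losses, threshold=0.3):
    # "Key" only if the lagged indicator predicts the loss series.
    r = np.corrcoef(series[:-1], losses[1:])[0, 1]
    return abs(r) > threshold

kris = [name for name, s in indicators.items() if is_key(s, losses)]
```

Only the genuinely predictive indicators survive the filter; in Checkland’s vocabulary, the selection of which series to compute at all is the capta decision, and the surviving KRIs become information once tied to the purpose of prediction.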
4. Appreciative Systems vs. Goal Seeking
Mainelli rejects simple “control structures” and “shared values” in favor of “Environmental Consistency,” asking if the organization’s inputs match its outputs[4],[14].
• Vickers’ Influence: Checkland draws heavily on Geoffrey Vickers’ concept of Appreciative Systems, which posits that social life is about “relationship maintaining” rather than linear “goal seeking”[19],[20].
• Interpretation: Checkland might view Mainelli’s “Environmental Consistency” not merely as a statistical tool, but as a mechanism for regulatory activity. It is an attempt to maintain a desired relationship between the organization and its environment (stability) rather than just achieving a one-off goal[21],[22]. The “confidence” Mainelli speaks of is a measure of how well this relationship is being maintained.
5. Methodology as Learning
Mainelli suggests that his approach is a “dynamic process” requiring a team to be “constantly cycling through an iterative refinement process” because “Today’s KRI should be tomorrow’s has-been”[23],[24].
• SSM Alignment: Here, Mainelli moves closer to Checkland’s Soft Systems Methodology (SSM). Checkland views SSM as a “never-ending process of learning” because human situations constantly change[1]. Checkland would approve of the idea that the model must change as the business improves (learning), contrasting this with “hard” methods that often search for a permanent “optimal” solution[25].
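The “tomorrow’s has-been” point can be illustrated with a hypothetical refinement cycle: periodically re-test each KRI’s predictive power on a rolling window and retire those whose correlation with losses has decayed. The decay scenario, window size, and threshold below are invented for illustration.

```python
# Hypothetical iterative refinement cycle: re-test a KRI's predictive
# power on rolling windows and retire it once its correlation with
# losses decays. Scenario, window, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 400
downtime = rng.normal(size=n)
# The business fixes its IT problems halfway through: downtime stops
# driving losses after day 200, so the KRI becomes a "has-been".
effect = np.where(np.arange(n) < 200, 3.0, 0.0)
losses = effect * downtime + rng.normal(scale=0.5, size=n)

def still_key(indicator, losses, start, window=100, threshold=0.3):
    sl = slice(start, start + window)
    r = np.corrcoef(indicator[sl], losses[sl])[0, 1]
    return abs(r) > threshold

early = still_key(downtime, losses, start=0)    # predictive period
late = still_key(downtime, losses, start=300)   # after the fix
```

In SSM terms, the model is not a once-for-all optimum but an artefact of an ongoing learning cycle: as the situation changes, yesterday’s model is revised or retired.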
Summary of the Checkland Interpretation
Checkland would likely view the Risk Case Study as a “Hard” Systems Engineering intervention operating within a complex “Soft” problem situation.
• Strengths: It provides a structure for “finding out” about the logic of the situation[26].
• Weaknesses: It risks failure by assuming the organization is a machine (“black box”) that obeys statistical laws, ignoring the Weltanschauung (worldview) of the participants[27],[28]. As evidenced by the IT department example, the “scientific” model was subverted by the “soft” factors (politics and human meaning) that Checkland’s methodology is specifically designed to address[9],[29].
References
[1] Checkland 1985 - From Optimising to Learning a development of systems thinking for the 1990s.pdf
[2] Comparing the systems approaches of checkland and blockley.pdf
[3] Risk-case-study.md
[4] Risk-case-study.md
[5] Risk-case-study.md
[6] Risk-case-study.md
[7] Checkland - From Optimising to Learning a development of systems thinking for the 1990s.pdf
[8] [Book] Checkland, Holwell - 1998 - Information, systems, and information systems - Wiley.pdf
[9] Risk-case-study.md
[10] [Book] Checkland, Holwell - 1998 - Information, systems, and information systems - Wiley.pdf
[11] Checkland - Soft Systems Methodology A Thirty Year Retrospective.pdf
[12] Risk-case-study.md
[13] [Book] Checkland - Learning for Action.pdf
[14] Risk-case-study.md
[15] Risk-case-study.md
[16] [Book] Checkland, Holwell - 1998 - Information, systems, and information systems - Wiley.pdf
[17] [Book] Checkland, Holwell - 1998 - Information, systems, and information systems - Wiley.pdf
[18] Risk-case-study.md
[19] Checkland - From Optimising to Learning a development of systems thinking for the 1990s.pdf
[20] [Book] Checkland - Critical Issues in Systems Theory and Practice.pdf
[21] Checkland - Soft Systems Methodology A Thirty Year Retrospective.pdf
[22] Checkland - Soft Systems Methodology A Thirty Year Retrospective.pdf
[23] Risk-case-study.md
[24] Risk-case-study.md
[25] Checkland 1985 - From Optimising to Learning a development of systems thinking for the 1990s.pdf
[26] Checkland 1983 - OR and the Systems Movement mappings and conflicts.pdf
[27] Checkland 1988 - Churchmans Anatomy of System Teleology revisited.pdf
[28] Checkland 1997 - Reflecting on SSM the link between Root Definitions and Conceptual Models.pdf
[29] [Book] Checkland - 1981 - Systems thinking, systems practice - J. Wiley.pdf
