Research

My research addresses a single organising question:

Why do governance architectures designed to manage risk systematically reproduce the conditions for failure — even in organisations that are formally compliant, well-resourced, and operating in good faith?

The answer is not a single mechanism but a causal sequence. Governance frameworks embed cognitive errors at every stage of the decision cycle: in what they choose to measure, in how signals travel through reporting hierarchies, in the assumptions designers make about who will use their transparency mechanisms, and in how organisations respond when scale overwhelms capacity. Each stage compounds the one before it. A compliance framework that measures the wrong thing generates false assurance; that assurance is reinforced as risk signals are softened on their way up through reporting layers; the filtered picture reaches designers who assume cooperative use in adversarial environments; and organisations that scale without rethinking their coordination architecture absorb all three failures simultaneously.

The empirical base is cross-sectional analysis of 171 NHS trusts and Freedom of Information data across health, policing, fire and local government. A parallel strand extends the same structural logic to infrastructure investment, where forecast-dependent governance architectures exhibit analogous failures at longer timescales.


I. What gets measured: compliance as false signal

The sequence begins with measurement. Compliance frameworks assess static attainment — whether an organisation has met a standard at a point in time — rather than adaptive capacity: whether it can respond to threats that did not exist when the standard was written. An organisation maintaining unchanged 2020-era controls and one continuously experimenting with new defences receive identical ratings. The metric cannot tell them apart.

Empirical analysis of 171 NHS trusts shows that the largest, best-resourced organisations do not outperform smaller ones on mandatory security assessments: the highest-resourced quartile achieved marginally lower compliance rates than the lowest-resourced quartile, a difference statistically indistinguishable from zero (p=0.88, Cohen's d=0.02). This is not a resource problem. It is evidence that compliance metrics are structurally disconnected from the capability they claim to represent. The false confidence this creates is the entry point for every subsequent failure in the chain.
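For readers who want the shape of that comparison, the sketch below reproduces its form on synthetic data; the sample sizes, distributions, and variable names are illustrative assumptions, not the NHS dataset or the study's own code.

```python
# Illustrative only: synthetic compliance rates for two resource quartiles,
# not the NHS trust data. Shows the form of the reported comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
highest_quartile = rng.normal(0.72, 0.10, 43)   # hypothetical compliance rates
lowest_quartile = rng.normal(0.72, 0.10, 43)    # same distribution: no true effect

t_stat, p_value = stats.ttest_ind(highest_quartile, lowest_quartile, equal_var=False)

# Cohen's d from the pooled standard deviation
pooled_sd = np.sqrt((highest_quartile.var(ddof=1) + lowest_quartile.var(ddof=1)) / 2)
d = (highest_quartile.mean() - lowest_quartile.mean()) / pooled_sd

# A large p-value and a near-zero d mean the quartiles are statistically indistinguishable.
print(f"p = {p_value:.2f}, Cohen's d = {d:.2f}")
```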


II. What reaches the board: signal suppression in reporting hierarchies

False confidence from compliance metrics would be less dangerous if boards received unfiltered operational intelligence alongside it. They do not. Operational urgency is translated, softened, and reframed as it moves upward through institutional structures. By the time a risk reaches the board, it has often been converted into assurance. The board receives confirmation that governance is working at precisely the moment it most needs to know that it is not.

This mechanism — cue validity collapse — operates through the same institutional communication norms that make organisations appear professional: hedged language, diplomatic indirection, deference to rank. The communication register that sustains external legitimacy systematically degrades the signal clarity required for internal escalation. The same cognitive biases that distort measurement at stage one — anchoring on prior assessments, overconfidence in process — are amplified by the social dynamics of hierarchical reporting.
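A deliberately crude way to picture the attenuation, offered as an illustrative toy model of my own rather than a formalism from the research: if each reporting layer retains only a fraction of the urgency it receives, severity compresses geometrically before it reaches the board. The retention factor below is an assumption chosen for illustration.

```python
# Toy illustration of cue validity collapse: each reporting layer softens a
# risk signal by a fixed retention factor. The 0.7 factor is an assumption.
def urgency_at_board(raw_urgency: float, layers: int, retention: float = 0.7) -> float:
    """Urgency that survives after passing through a number of reporting layers."""
    return raw_urgency * retention ** layers

raw = 0.9   # an operationally severe risk
for layers in range(5):
    print(f"{layers} reporting layers -> board sees urgency {urgency_at_board(raw, layers):.2f}")
# With four layers, a 0.9 signal arrives at roughly 0.22: read as assurance, not alarm.
```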


III. What designers assume: cognitive blind spots in governance architecture

The measurement and reporting failures are not accidental — they are designed in. Governance designers optimise for the cooperative user: the board member seeking assurance, the citizen seeking to understand a decision, the regulator seeking compliance. They are structurally blind to the adversarial dynamics their own designs create.

Transparency mechanisms — explainability requirements, disclosure obligations, open formula publication — provide accountability interfaces that simultaneously serve as reverse-engineering interfaces. The same overconfidence that inflates compliance metrics leads designers to assume that transparency will produce accountability rather than gaming. This pattern appears in welfare algorithm design, cybersecurity disclosure frameworks, and AI governance. The failure is not attitudinal — it is a predictable consequence of designing under a cooperative-use assumption in adversarial environments.
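The dual-use point can be made concrete with a toy example. Once a scoring formula and its decision threshold are published, a motivated subject can search the inputs for the cheapest change that flips the outcome. The formula, weights, field names and threshold below are invented for illustration; they do not describe any real welfare algorithm or disclosure regime.

```python
# Invented example of transparency doubling as a reverse-engineering interface.
WEIGHTS = {"income": -0.5, "dependants": 0.3, "tenure_years": 0.2}   # published formula
THRESHOLD = 0.4                                                      # published decision threshold

def score(applicant: dict) -> float:
    return sum(WEIGHTS[field] * applicant[field] for field in WEIGHTS)

def cheapest_gaming_move(applicant: dict, step: float = 1.0) -> str:
    """With the formula public, find a single-field nudge that flips an ineligible case."""
    for field in WEIGHTS:
        adjusted = dict(applicant, **{field: applicant[field] + step})
        if score(applicant) < THRESHOLD <= score(adjusted):
            return f"increase {field} by {step}"
    return "no single-field adjustment is enough"

applicant = {"income": 2.0, "dependants": 2, "tenure_years": 3}      # scores 0.2: below threshold
print(cheapest_gaming_move(applicant))                               # 'increase dependants by 1.0'
```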


IV. What scale does to capacity: coordination costs and the resilience ceiling

Organisations that absorb all three preceding failures — false metrics, suppressed signals, blind-spot-laden design — face a fourth compounding problem. Coordination costs grow faster than defensive capability as organisations scale. When skilled personnel are structurally unavailable regardless of budget, the optimisation problem changes: the question is not how to reach a compliance target but how to improve at the fastest sustainable rate given fixed capacity.
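A back-of-the-envelope illustration of why coordination costs outpace capability (my numbers, not the programme's): pairwise coordination channels grow quadratically with headcount while defensive capability grows at best linearly, so overhead per person rises as the organisation scales.

```python
# Illustrative arithmetic only: pairwise coordination channels versus headcount.
def coordination_channels(n_staff: int) -> int:
    return n_staff * (n_staff - 1) // 2

for n in (5, 20, 80, 320):
    channels = coordination_channels(n)
    print(f"{n:>4} staff -> {channels:>6} channels ({channels / n:.1f} per person)")
```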

This is where the programme's empirical finding — that resource scale does not predict compliance — connects to a prescriptive framework. Flow-Constrained Risk Management applies Theory of Constraints logic to security improvement sequencing, replacing static targets with velocity-based metrics. An organisation at 50% maturity improving at 5% quarterly demonstrates a superior security trajectory to one stagnating at 80% with zero adaptation capability. The same coordination-cost logic applies beyond cybersecurity: initiative overload in government, capability gaps in public services, and funding architectures that mistake input scale for output quality all exhibit the same structural pattern.
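The trajectory claim in that paragraph can be checked with a few lines of arithmetic. The sketch below illustrates the velocity logic rather than reproducing any artefact of Flow-Constrained Risk Management itself, and it shows the result under two readings of "5% quarterly": as percentage points and as compound growth.

```python
# Illustrative: a 50% maturity organisation improving each quarter versus a
# static 80% score, under two readings of "5% quarterly improvement".
def quarters_to_reach(target: float, start: float, step=None, rate=None) -> int:
    maturity, quarters = start, 0
    while maturity < target:
        maturity = maturity + step if step is not None else maturity * (1 + rate)
        quarters += 1
    return quarters

print(quarters_to_reach(0.80, 0.50, step=0.05))   # five percentage points per quarter
print(quarters_to_reach(0.80, 0.50, rate=0.05))   # 5% compound growth per quarter
# Either way the improving organisation passes the static one within ten quarters,
# which is what a velocity-based metric rewards and a static target cannot see.
```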


V. The same logic at longer timescales: infrastructure investment under deep uncertainty

The four-stage failure chain operates in cybersecurity and health governance at timescales of months to years. A parallel strand of this programme examines where the same mechanisms operate at timescales of decades: government technology and infrastructure investment.

Forecast-dependent capital commitment architectures exhibit each stage of the failure chain. Cost–benefit appraisals anchor on point estimates that function as compliance artefacts rather than decision inputs (stage I). Optimism bias is compounded by reporting structures that convert uncertainty into confidence as business cases move through approval layers (stage II). Treasury and sponsoring departments design appraisal frameworks for cooperative use — honest forecasters — while the political economy of megaprojects rewards strategic misrepresentation (stage III). And the coordination costs of multi-decade programmes overwhelm the adaptive capacity of the institutions managing them (stage IV).

The Annual Portfolio Model proposes an alternative: annually revised expenditure envelopes, staged commitments with explicit exit options, and fast-and-frugal prioritisation rules robust to forecast error.
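As a hedged sketch of what a staged, annually revised commitment rule could look like, the code below implements one plausible fast-and-frugal review: fund only next-year tranches, rank by realised rather than forecast value, and exercise the exit option when cost growth breaches a tolerance. The fields, thresholds and ranking rule are my illustrative assumptions, not a specification of the Annual Portfolio Model.

```python
# Illustrative sketch only: staged commitments inside an annual envelope.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    next_tranche: float        # cost of next year's stage only, not the whole programme
    delivered_to_date: float   # benefit already realised, not forecast
    cost_growth: float         # observed cost growth so far, as a ratio of baseline

def annual_review(projects: list[Project], envelope: float, max_growth: float = 1.3) -> list[str]:
    """Fund next-year tranches by realised value per pound; exit heavy slippers."""
    survivors = [p for p in projects if p.cost_growth <= max_growth]      # exercise exit options
    ranked = sorted(survivors, key=lambda p: p.delivered_to_date / p.next_tranche, reverse=True)
    funded, remaining = [], envelope
    for p in ranked:
        if p.next_tranche <= remaining:
            funded.append(p.name)
            remaining -= p.next_tranche
    return funded

portfolio = [
    Project("rail upgrade", next_tranche=40.0, delivered_to_date=10.0, cost_growth=1.6),  # exited
    Project("grid sensors", next_tranche=15.0, delivered_to_date=12.0, cost_growth=1.1),
    Project("records system", next_tranche=25.0, delivered_to_date=22.0, cost_growth=1.2),
]
print(annual_review(portfolio, envelope=45.0))   # ['records system', 'grid sensors']
```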

VI. What adoption decisions ignore: overconfidence bias and the long-term consequences of AI

Each of the preceding stages concerns governance architectures that are already in place. An earlier failure comes before any of them. At the moment of strategic AI adoption, organisations, public and private, systematically underestimate the long-term human and social consequences of their decisions. The mechanism is not negligence but a predictable product of overconfidence bias operating at the decision-architecture level: AI deployment is framed as a bounded efficiency problem, transition costs are treated as manageable and temporary, and downstream consequences are rendered invisible by the cost-benefit lens in use.

Three manifestations of this pattern are visible in the current wave of AI adoption. The first is social: corporations capture efficiency gains from AI-driven redundancies while transferring the psychological and communal costs, the loss of purpose and of socially recognised function, to individuals and the state; income replacement cannot restore these losses, which the historical evidence from deindustrialisation shows persist for decades. The second is organisational: firms deploying AI at scale while reducing the engineering capacity needed to review it create accountability gaps in which senior staff formally own systems they cannot independently interrogate. The third is generational: automating foundational analytical work eliminates the developmental pathway through which organisations produce leaders capable of challenging the systems they inherit.


Collaboration

I am open to collaboration on research and teaching in governance, risk, and AI accountability — particularly with practitioners, policy institutions, and academic colleagues working at the intersection of these fields. If you are interested in joint work, guest teaching, or advisory engagement, please get in touch.


Foundational papers

Two further working papers provide the empirical and theoretical foundations that underpin the programme as a whole.