Article 9: AI in the Loop, Not in the Chair


A deliberate simulation to test whether AI-assisted decision making can be captured inside a governed operational record — and what that means in practice.

Shaun Flynn · April 2026 · Altomi Pty Ltd

Simulation — Not a Live Commercial Run

What follows describes a deliberate simulation conducted by the owner-operator to test a specific capability — whether AI-assisted decision making can be captured inside the UOA governance chain in a way that is attributed, timestamped and permanently recorded.

The batch was real. The system was live.
But the scenario itself was constructed as a test of the system, not executed as part of normal commercial production.

That distinction matters.

Where Article 8 Left Off

Article 8 described Silent Drift — where operational systems move away from their approved state without clear record or attribution.

The structural response discussed was a layered approach:

  • UOA / Multiverse — defines and records what is admissible

  • CARE — enforces admissibility at execution (conceptual)

  • A2SPA — verifies that what executes matches what was approved

That framework addresses execution and legitimacy.
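The three layers can be pictured with a toy sketch. Nothing below reflects the actual UOA, CARE, or A2SPA implementations; the admissible-action set, the function names, and the enforcement logic are assumptions made purely to illustrate the division of responsibility:

```python
# Illustrative only: a toy model of the three layers described above.
# UOA defines what is admissible; CARE enforces it at execution;
# A2SPA verifies the executed record against what was approved.

# UOA / Multiverse: the defined, recorded set of admissible actions (hypothetical)
ADMISSIBLE_ACTIONS = {"add_input", "record_entry", "complete_checklist"}

def execute(action: str, log: list[str]) -> bool:
    """CARE (conceptual): enforce admissibility at the point of execution."""
    if action not in ADMISSIBLE_ACTIONS:
        return False          # inadmissible actions never execute
    log.append(action)        # admissible actions execute and are recorded
    return True

def verify(log: list[str]) -> bool:
    """A2SPA: confirm that everything executed matches what was approved."""
    return all(action in ADMISSIBLE_ACTIONS for action in log)
```

In this toy version, an inadmissible action is rejected before it can touch the record, so verification over the log always passes; the point is the separation of defining, enforcing, and verifying, not the mechanics.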

But a more practical question sits underneath:

When a decision point arises, how can external input — such as AI — be captured properly inside the governed record?

The Simulation — What Was Being Tested

A simulation was run using Multiverse (Batch 12344321 — mango pulp).

The objective was simple:

To test whether an AI-assisted interaction could be captured, attributed, and preserved within the operational record in the same way as any other event.

During a GMP checklist step relating to dosage confirmation, the owner-operator:

  • asked an AI a relevant question

  • captured the full exchange

  • recorded it directly into the Multiverse runsheet

The simulation was designed to test three things:

  • Whether the system accepts AI interaction as a valid runsheet entry

  • Whether the entry is immutable and attributed

  • Whether the record is usable for QA review or audit

Runsheet Record (Simulation Extract)

Runsheet · Batch 12344321 · Mango Pulp · Simulation
IMMUTABLE · APPEND-ONLY

Date & Time        | Category    | Entry
30 Dec 2025 00:14  | General     | Production started
31 Dec 2025 11:48  | General     | GMP Checklist started: AIDS TO MANUFACTURE DOSAGES
31 Dec 2025 11:48  | General     | SIMULATION — AI interaction captured: full question and response recorded. Advisory only.
03 Apr 2026 11:17  | Input Added | 110.00 kg mango added to production batch

In this simulation, the AI interaction was recorded under a General runsheet category. This was a deliberate choice — not a limitation of the system, but a reflection that no formal classification for AI-assisted events currently exists.

The system guarantees that the interaction is captured and preserved. What it does not yet guarantee is that similar interactions will be classified consistently without a defined structure.
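To make those guarantees concrete, here is a minimal, hypothetical sketch of an append-only, hash-chained runsheet in Python. This is not Multiverse's implementation; the class names, fields, and chaining scheme are assumptions chosen to illustrate how timestamping, attribution, and immutability can be enforced by structure rather than policy:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry cannot be edited after creation
class RunsheetEntry:
    """One immutable runsheet entry: timestamped, attributed, hash-chained."""
    timestamp: str
    category: str
    entry: str
    author: str
    prev_hash: str  # digest of the preceding entry, forming an append-only chain

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class Runsheet:
    """Append-only log: entries can be added, never edited or removed."""
    def __init__(self, batch: str):
        self.batch = batch
        self._entries: list[RunsheetEntry] = []

    def append(self, category: str, entry: str, author: str) -> RunsheetEntry:
        prev = self._entries[-1].digest() if self._entries else "GENESIS"
        rec = RunsheetEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            category=category,
            entry=entry,
            author=author,
            prev_hash=prev,
        )
        self._entries.append(rec)
        return rec

    def verify(self) -> bool:
        """Confirm no entry has been altered after the fact."""
        prev = "GENESIS"
        for rec in self._entries:
            if rec.prev_hash != prev:
                return False
            prev = rec.digest()
        return True
```

Because each entry carries the digest of the one before it, altering any historical entry breaks the chain and is detectable on review — which is the property the simulation relied on.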

What the Simulation Demonstrated

The system behaved as expected:

  • The AI interaction was accepted as a standard runsheet entry

  • It was timestamped and attributed

  • It became part of the immutable operational record

From a QA or audit perspective, the result is clear:

  • The interaction is visible

  • The context is preserved

  • The decision pathway is explicit

Nothing is inferred. Nothing is hidden.

Why the Simulation Was Run

This was not about introducing AI into production operations.

It was about answering a more fundamental question:

If AI is used to inform a decision, can that interaction be captured inside the governed record?

The simulation confirms that it can.

Where This Fits in Practice

The simulation demonstrates a capability — not a prescribed workflow.

It shows that Multiverse can:

  • capture AI-assisted input

  • attribute it to a decision-maker

  • preserve it as part of the operational evidence base

Where and how that capability should be used is a separate question.

In practice, AI-assisted input is more naturally aligned with:

  • recipe development

  • QA review

  • investigation and corrective actions

  • audit preparation

These are environments where decisions are already:

  • deliberate

  • reviewed

  • attributable

What This Is Not Claiming

This simulation does not suggest:

  • AI should be embedded into live production decision loops

  • AI replaces authority structures

  • AI interaction at the point of execution is appropriate

It demonstrates only this:

If AI contributes to a decision, the system can capture that contribution properly.

What Comes Next

The next step is not technical feasibility — that is already demonstrated.

The next step is structure:

  • defining when AI interactions should be captured

  • standardising how they are recorded

  • introducing a formal “AI event” within the runsheet

This moves AI from informal influence to governed input.
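As a thought experiment, a formal "AI event" entry might look like the following sketch. Every field name here is hypothetical; as the article notes, no such classification exists yet, and this is one possible shape rather than a proposal:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AIEvent:
    """Hypothetical structure for a formal 'AI event' runsheet category.

    Field names are illustrative only; no defined standard exists yet.
    """
    question: str        # full prompt as asked by the operator
    response: str        # full AI response, verbatim
    model: str           # which model or tool produced the response
    decision_maker: str  # the attributed human who acted on it
    advisory_only: bool  # the AI informs; the human decides

    def to_runsheet_entry(self) -> str:
        """Serialise to one runsheet line with a consistent prefix."""
        return "AI EVENT: " + json.dumps(asdict(self), sort_keys=True)
```

A fixed prefix and a defined field set are what turn today's free-text General entries into entries that QA can query and compare consistently across batches.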

The Broader Implication

AI is already being used informally across many environments.

The real question is not whether it is used.

It is whether its influence is:

  • visible and attributable, or

  • external and unrecorded

If unrecorded, it introduces a new form of drift — not in the system state, but in the decision process itself.

If captured, it becomes part of the evidence base.

Key Takeaway

This simulation proves a simple point:

AI can be brought inside the governed operational record —
not as an authority, but as an attributed input.

The architecture already supports it.

Defining how it should be used is the next step.
