Proving Legitimacy When It Becomes Real
ARTICLE 7
Shaun Flynn · April 2026 · Altomi Pty Ltd
Bridge from Article 6
In Article 6, we explored the layered governance model:
Multiverse/UOA — human accountability layer, governing decisions at sequential gates with full visibility and immutable records.
CARE — machine admissibility layer, ensuring actions cannot execute unless admissibility is proven at the exact moment of execution.
A2SPA — execution verification layer, cryptographically confirming that the payload executing is identical to what was approved.
Article 6 established that these layers are complementary, addressing different failure modes across human and machine domains. Article 7 extends this discussion to the practical challenge of proving legitimacy at the moment an action becomes real, and to the implications of placing AI inside that stack.
Where This Starts
A recent comment in a conversation about AI governance highlighted the gap between authority and actionable legitimacy:
"Not just making authority explicit — but ensuring that legitimacy can still be proven when an action becomes real."
— Nick Vejle, April 2026
Assigning roles, documenting approvals, and running checklists makes authority explicit. Many systems claim this as governance. But proving legitimacy at the moment an irreversible action occurs — a product moves to inventory, a valve fires, or a transaction commits — is a far higher standard. The record must be complete, immutable, and verifiable at that moment, not reconstructed afterward.
This is what operational governance at human pace must achieve, and it is difficult to build. Most operational systems have not done so.
What Multiverse Actually Does
Multiverse captures operational reality as it happens:
Every input is logged with identifier, supplier, quantity, timestamp, and attributed user.
GMP checklists are completed step by step, with each QA decision recorded as a discrete, timestamped, attributed action.
State transitions from production through QA clearance to inventory require the prior state to exist as a recorded condition.
When a product reaches inventory, its complete history travels with it — attached to the item record. Legitimacy can be proven directly from the item record.
We have tested this across multiple domains — agricultural trials, music recording, food manufacturing, logistics — under the same governance model. Live production batches held at inventory gates for months demonstrate that Multiverse enforces the gate: no auto-proceed, no degraded path, no silent progression. Attempted audit deletions become permanent entries.
This is what proving legitimacy at human pace looks like.
What Article 6 Does Not Solve
As Jonathan Capriola pointed out:
"Good piece. But it still doesn't answer what proves the payload that executes is the one that was approved. That's where this breaks."
Multiverse ensures governance at human pace, but it does not cryptographically verify that downstream instructions — PLCs, agents, or automated processes — are identical to what was approved. Payloads could be mutated, replayed, or spoofed between approval and execution.
At human speed, this is manageable: a human exercises authority at each gate, sees the current state, and the decision is recorded. At machine speed, this gap is a critical vulnerability.
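The mutation, replay, and spoofing risks above can be made concrete with a short sketch. This illustrates the kind of check that payload verification implies; it is not the A2SPA protocol, and every name here is illustrative. A MAC over the exact payload bytes plus a one-time nonce is recorded at approval; at execution, the payload must reproduce that MAC, and a nonce can be consumed only once.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # stand-in for a managed signing key

def approve(payload: bytes) -> dict:
    """Record an approval bound to these exact payload bytes."""
    nonce = os.urandom(16).hex()  # one-time value, defeats replay
    mac = hmac.new(SECRET, nonce.encode() + payload, hashlib.sha256).hexdigest()
    return {"nonce": nonce, "mac": mac}

_used_nonces: set[str] = set()

def verify_at_execution(payload: bytes, approval: dict) -> bool:
    """At the moment of execution, prove this payload is the approved one."""
    if approval["nonce"] in _used_nonces:
        return False  # replayed approval
    expected = hmac.new(SECRET, approval["nonce"].encode() + payload,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, approval["mac"]):
        return False  # payload mutated or spoofed since approval
    _used_nonces.add(approval["nonce"])
    return True
```

Even a one-byte change to the payload between approval and execution fails the check, and re-presenting an already-consumed approval fails the nonce check: the two failure modes identified above, closed at the boundary.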
What the Conversation Produced
Discussions with Nick Vejle (CARE) and Jonathan Capriola (A2SPA) clarified the governance stack required:
Multiverse/UOA: Human accountability layer. Records approvals, timestamps, and attribution. Ensures decisions at the boundary are visible and permanent.
CARE: Machine admissibility layer. Resolves admissibility at the exact moment a transition would bind to reality. If conditions do not hold, execution halts.
A2SPA: Payload verification layer. Cryptographically confirms that what executes is exactly what was approved.
These layers are not competing. They address distinct failure modes across human and machine domains, forming a coherent governed stack. Article 6 provides a deeper technical breakdown of these layers.
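How the three layers compose can be sketched minimally. The function names are illustrative, not product APIs: the human layer contributes a recorded approval digest, a CARE-style check resolves admissibility at the instant of binding, and an A2SPA-style check confirms the executing payload matches what was approved. Only if all three hold does the action bind to reality.

```python
import hashlib
from typing import Callable

def governed_execute(
    payload: bytes,
    approved_digest: str,                  # recorded by the human layer
    admissible_now: Callable[[], bool],    # CARE-style check at this instant
    execute: Callable[[bytes], None],
) -> bool:
    """Bind an action to reality only if every layer passes at execution time."""
    # A2SPA-style: the executing payload must be the approved payload.
    if hashlib.sha256(payload).hexdigest() != approved_digest:
        return False
    # CARE-style: admissibility resolved at the exact moment of binding.
    if not admissible_now():
        return False
    execute(payload)
    return True
```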
Where Physical Systems Fit
PLCs controlling continuous-flow processes — filling tanks, regulating conveyors, or controlling temperature — continuously evaluate conditions and commit irreversible actions only when parameters are valid at execution.
The principle — separating flow from commitment and enforcing conditions at the boundary — predates AI and software. What PLCs lack is governed condition provenance: the record of who set the setpoints, under what authority, and when.
Multiverse establishes that provenance. CARE ensures admissibility at execution. A2SPA verifies payload integrity. Together, they define what a complete governed stack would look like in both human and machine environments.
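"Governed condition provenance" can be sketched as follows, with all names hypothetical: a setpoint is not just a pair of limits but a record of who set them, under what authority, and when, and a PLC-style commit checks the live value against that governed setpoint at execution.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Setpoint:
    """Limits plus provenance: who, under what authority, and when."""
    name: str
    low: float
    high: float
    set_by: str        # who
    authority: str     # under what authority
    set_at: str        # when (UTC)

def governed_setpoint(name: str, low: float, high: float,
                      set_by: str, authority: str) -> Setpoint:
    return Setpoint(name, low, high, set_by, authority,
                    datetime.now(timezone.utc).isoformat())

def commit_if_valid(live_value: float, sp: Setpoint) -> bool:
    # The irreversible action commits only if the parameter is valid
    # at execution, against a setpoint whose provenance is recorded.
    return sp.low <= live_value <= sp.high
```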
AI in the Loop — Value vs Risk
Even with this stack, AI cannot operate as a loose cannon. Example:
A costing sheet labeled Project Fowler ($35k for 1 unit) was misread by an AI, which concluded from the name that it was a high-risk chicken project.
The AI's judgment rested on misinterpreted context rather than on anchored operational reality.
This highlights a critical point: AI must be anchored to proven operational reality. Otherwise, at machine speed, errors propagate irreversibly. Users deserve full awareness of the risk-versus-value trade-off before introducing AI into operational systems.
What This Article Is Not Claiming
Multiverse operates at human pace. It does not perform cryptographic payload verification.
CARE is under development and not yet validated in production at machine speed.
A2SPA provides execution verification but requires integration with human and machine layers.
Multiverse provides a verified, immutable human layer — necessary for AI systems to operate defensibly. It is the foundation, not the complete answer, for governance at machine speed.
Why This Matters Now
AI is being integrated into regulated operational environments faster than governance can adapt. Without structured, immutable operational records:
AI decisions may rely on incomplete or reconstructed reality.
Accountability and traceability are compromised.
Risk of regulatory, safety, or operational failure increases.
UOA and Multiverse provide the operational foundation. CARE and A2SPA are the next steps to extend governance into machine-speed AI environments.
Key Takeaway: AI governance must anchor to proven operational reality, not inferred authority or after-the-fact reconstruction. Multiverse defines that reality at human pace; CARE and A2SPA aim to extend it to machine speed, completing the governance stack.