Altomi Journal — Article 8
Silent Drift — Where Operational Reality Breaks Before the Boundary
Shaun Flynn · April 2026 · Altomi Pty Ltd
Where This Starts
In previous articles, we established that governance must resolve at the execution boundary.
Authority must be explicit.
Admissibility must hold at the moment of execution.
What executes must match what was approved.
But there is a more practical problem that exists before any of that:
operational reality does not stay still.
The Reality Inside Operational Systems
In most production environments, systems do not fail because they were designed incorrectly.
They fail because they drift.
Operators are measured on output.
They are expected to keep production moving.
They see the system working, and they make small adjustments to maintain flow:
a setpoint is nudged
a timer is shortened
a tolerance is widened
a sequence is adjusted
These changes are rarely malicious. They are almost always made with good intent.
But they are also rarely:
formally approved
structurally recorded
attributed in a way that survives the moment
The system continues to run.
The output appears correct.
And so the new state becomes accepted as normal.
This is not a people problem.
It is a system design problem under real-world pressure.
Operators learn the system by observing it.
If the system permits unrecorded change, that behaviour becomes the playbook — regardless of policy.
The Problem — Silent State Mutation
What has actually happened is simple:
the system has moved away from its approved state without record
There is no clear answer to:
who changed what
when it changed
why it changed
under what authority it changed
The system is no longer operating under its defined conditions.
But nothing has forced that fact to surface.
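Those four questions define the shape of the record that silent drift never produces. As a concrete illustration only (Multiverse's internal record format is not shown in this article, and every field name and value below is an assumption for the example), a governed change would carry something like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ChangeRecord:
    """The minimum a governed change must capture to answer the four questions above."""
    who: str           # the operator or system identity that made the change
    what: str          # the parameter touched, e.g. a setpoint tag
    old_value: float   # state before the change
    new_value: float   # state after the change
    when: datetime     # when it changed
    why: str           # the stated reason, even if brief
    authority: str     # the approval reference under which it changed

# A silent nudge produces none of this; a governed change produces exactly this.
record = ChangeRecord(
    who="operator-17",
    what="zone2.setpoint_c",
    old_value=72.0,
    new_value=74.5,
    when=datetime.now(timezone.utc),
    why="maintain flow during upstream slowdown",
    authority="approval-2026-0147",
)
print(record)
```

A nudged setpoint that bypasses this structure leaves the four questions unanswerable. One captured through it answers all of them by construction.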
Why This Only Appears When Things Go Wrong
Because the system still “works.”
Until it doesn’t.
A customer complaint surfaces.
A quality issue emerges.
A batch fails.
At that point, everyone starts looking:
maintenance checks the machine
QA checks the records
management looks for cause
And there is nothing definitive to anchor to.
The investigation becomes reconstruction.
The answers become assumptions.
The PLC — The Cleanest Execution Boundary
A PLC is the most literal execution boundary in any system.
When it fires:
a valve opens
a motor starts
temperature changes
physical reality is altered
There is no undo.
There is no rollback.
And yet, in most environments, the PLC enforces physical execution limits but does not automatically enforce the approved governance state.
Small adjustments can drift silently, leaving the system's recorded state out of sync with reality.
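A minimal sketch of that gap, with hypothetical tag names, limits and values: the nudged value passes the only check the PLC applies, yet no longer matches what was approved.

```python
# Illustrative only: tag names, limits and values are assumptions for the example.
PHYSICAL_LIMITS = {"zone2.setpoint_c": (40.0, 90.0)}  # what the PLC will actually enforce
APPROVED_STATE = {"zone2.setpoint_c": 72.0}           # what governance approved

def within_physical_limits(tag: str, value: float) -> bool:
    low, high = PHYSICAL_LIMITS[tag]
    return low <= value <= high

def matches_approved_state(tag: str, value: float) -> bool:
    return value == APPROVED_STATE[tag]

nudged = 74.5  # a small adjustment made to keep production moving
print(within_physical_limits("zone2.setpoint_c", nudged))  # True: the PLC is satisfied
print(matches_approved_state("zone2.setpoint_c", nudged))  # False: the approved state is not
```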
The Missing Link — Binding Approval to Execution
In a direct exchange, Jonathan Capriola put this clearly:
A PLC is the cleanest example of the execution boundary.
Once it fires, reality changes. No rollback. No audit can fix it.
Logs don’t control execution. They explain it after.
A2SPA is designed to enforce that at the moment an action commits and alters reality, the payload matches exactly what was approved. If there is any deviation, execution is stopped.
This ensures that the missing control is addressed — not through better logs or audits, but by preventing execution unless the approved state is strictly respected.
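A2SPA's actual mechanism is not detailed in this article, so the sketch below illustrates only the principle Jonathan describes, using a hash comparison over a canonicalised payload. The function names and payload fields are assumptions for the example, not A2SPA's API.

```python
import hashlib
import json

def payload_digest(payload: dict) -> str:
    """Canonicalise and hash a payload so 'exactly what was approved' becomes checkable."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def fire(payload: dict) -> None:
    """Stand-in for the irreversible act: valve opens, motor starts, reality changes."""
    print("executing:", payload)

def commit(payload: dict, approved_digest: str) -> None:
    """Refuse to execute unless the payload is exactly what was approved."""
    if payload_digest(payload) != approved_digest:
        raise PermissionError("payload deviates from approved state; execution stopped")
    fire(payload)

approved = {"valve": "V-102", "action": "open", "duration_s": 30}
digest = payload_digest(approved)

commit(dict(approved), digest)  # matches the approval: executes
try:
    commit({**approved, "duration_s": 45}, digest)  # deviates: blocked before reality changes
except PermissionError as err:
    print(err)
```

The point is where the check sits: not in a log written after the fact, but in the path the action must pass through before reality changes.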
Even so, the reality on the floor is that operators observe, adapt, and push for throughput. Setpoints are nudged, timers adjusted, tolerances widened — often with no malicious intent, but under commercial and cultural pressure. QA sign-off in Multiverse is deliberately structured to counter this: it forces explicit approval and allows the QA to record a lack of confidence where it exists. They can push back, refuse to sign, or document reservations, and the system preserves that decision. Someone still has to sign off, but they can do so with a caveat recorded in the Notes field.
Here is a real-world example of how a QA clearance is captured in Multiverse:
Field          Value
Date           4/3/2026 4:59 AM
Sign-off by
Notes
Sign Off       QA Clearance
This snippet illustrates the principle: the record is timestamped, attributed, and verifiable, and it documents the decision in context. Even under commercial pressure, this creates an immutable link between what was approved and what is ready for execution. The QA has visibility, authority, and accountability — and the system enforces that nothing proceeds without that verified clearance.
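To make the shape of that record concrete, here is a minimal sketch built from the fields visible in the snippet (Date, Sign-off by, Notes, and the clearance decision). The types and values are illustrative assumptions, not Multiverse's schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: the record cannot be mutated once captured
class QaClearance:
    date: datetime     # when the clearance was signed
    sign_off_by: str   # who carries accountability for it
    notes: str         # caveats or reservations, preserved verbatim
    signed: bool       # the explicit decision: cleared or refused

clearance = QaClearance(
    date=datetime(2026, 3, 4, 4, 59),  # "4/3/2026 4:59 AM" from the snippet, day/month assumed
    sign_off_by="qa-lead-07",
    notes="Cleared with caveat: zone 2 setpoint adjusted during shift, review pending.",
    signed=True,
)
print(clearance)
```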
Connecting the Layers to Reality
This is where the stack becomes concrete:
UOA / Multiverse
Defines and records what is admissible.
Captures conditions, authority, and provenance before execution.
CARE (emerging)
Represents the admissibility model at execution — resolving whether conditions truly hold at the moment of commitment.
Conceptually, it closes the gap between defined state and execution readiness, but is still developing.
A2SPA
Enforces that what executes is exactly what was approved.
No mutation. No replay. No drift at execution.
Together, these layers address two distinct problems:
drift away from approved state
mismatch between approved and executed reality
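As a rough sketch of how those responsibilities divide, with every name invented for illustration rather than taken from any of the three systems:

```python
from datetime import datetime, timezone

# Every name here is invented for illustration; it marks where a responsibility
# sits in the flow, not a real API of UOA/Multiverse, CARE, or A2SPA.

ADMISSIBLE: dict = {}

def record_admissibility(action_id: str, payload: dict, authority: str) -> None:
    """UOA / Multiverse: define and record what is admissible before execution."""
    ADMISSIBLE[action_id] = {
        "payload": payload,
        "authority": authority,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def conditions_hold(action_id: str) -> bool:
    """CARE (emerging): resolve whether admissibility still holds at the moment of commitment."""
    return action_id in ADMISSIBLE  # placeholder for the real condition check

def execute(action_id: str, payload: dict) -> None:
    """A2SPA: no mutation, no replay, no drift at execution."""
    if not conditions_hold(action_id):
        raise PermissionError("conditions do not hold at commitment")
    if payload != ADMISSIBLE[action_id]["payload"]:
        raise PermissionError("payload does not match the approved state")
    print("executing", action_id, payload)

record_admissibility("open-v102", {"valve": "V-102", "action": "open"}, "approval-2026-0147")
execute("open-v102", {"valve": "V-102", "action": "open"})
```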
What This Changes
Without this structure:
systems drift silently
records do not reflect reality
failures cannot be traced cleanly
With this structure:
drift cannot remain invisible
any deviation must resolve explicitly
execution cannot occur on unverified state
accountability is preserved at the moment it matters
What This Is Not Claiming
This does not eliminate human behaviour.
Operators will still optimise for output.
What it does is change the system so that:
optimisation cannot occur without visibility, attribution, and consequence
Why This Matters
Most discussions around AI governance focus on the future.
Agentic systems.
Machine-speed decision making.
Autonomous execution.
But the failure described here already exists — today — in physical systems.
AI does not create this problem.
It amplifies it.
If operational systems cannot maintain a provable link between:
what was approved
and what actually executed
then introducing AI into that environment increases risk, not capability.
Key Takeaway
Operational systems do not fail only at the execution boundary.
They fail long before it — through silent, ungoverned drift.
Governance must do two things:
define and record what is admissible
ensure that only that state can execute
Without both, legitimacy cannot be proven when reality changes.
Even when admissibility is defined and recorded, and execution is governed at the boundary, a gap can still exist.
If the execution layer itself can be altered — even briefly — the system can diverge from its approved state without record.
The process completes. The records hold.
But the link between what was approved and what physically occurred is no longer provable.