Worked examples demonstrating Declared Intent Records in practice.
These worked examples demonstrate how a Declared Intent Record (DIR) binds authority, policy, execution, and audit around a shared Intent_ID — much like a flight plan is declared once, then interpreted by multiple authorities as action unfolds.
(Finance + Health + Public Services)
Consider a public authority deploying an AI assistant that helps a person apply for public support. To do this, the system touches financial data, health-related information, and government systems.
What this layer is doing
This is the moment where someone must stop implying what they mean and explicitly commit, digitally, to what they are about to do.
This is analogous to filing a flight plan: intent is declared once, before execution, and fixed so that everything that follows can reference it.
Nothing runs yet.
No data is touched.
This is about making intent visible and fixed.
// The Declared Intent Record (DIR)
Intent_ID: DIR-AI-2026-001
WHAT assess
ON support_eligibility
FOR individual_applicant
HOW assistive
LIMIT no_automated_decisions, explainable, human_review
OUTPUT recommendation
//
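To make the idea concrete, here is a minimal sketch of a DIR as an immutable value object. The class name and field layout are illustrative assumptions, not a standard; they simply mirror the record above.

```python
from dataclasses import dataclass

# Illustrative sketch only: a DIR as a frozen (immutable) record.
# Field names mirror the declared record above; "for" is a Python
# keyword, so the field is spelled "for_".
@dataclass(frozen=True)
class DeclaredIntentRecord:
    intent_id: str
    what: str          # the action being committed to
    on: str            # the object or resource acted upon
    for_: str          # the beneficiary of the action
    how: str           # the mode of operation
    limit: tuple       # constraints fixed at declaration time
    output: str        # the only permitted kind of result

dir_ai = DeclaredIntentRecord(
    intent_id="DIR-AI-2026-001",
    what="assess",
    on="support_eligibility",
    for_="individual_applicant",
    how="assistive",
    limit=("no_automated_decisions", "explainable", "human_review"),
    output="recommendation",
)

# frozen=True means any later assignment to a field raises an error,
# so the declared intent cannot be silently edited after authorisation.
```

Because the record is frozen, every later layer can hold a reference to it without worrying that its meaning has drifted.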
What is recorded
Why this layer exists
So that later no one has to ask what this system was supposed to be doing.
The answer is already fixed.
What this layer is doing
This layer answers a very narrow question: is this actor authorised to pursue this declared intent?
It does not judge whether the idea is good or safe — only whether the right people are authorised to try.
What happens here
Different authorities look at the same intent — for example, a data protection authority, an AI governance body, a financial regulator, and internal public-sector oversight.
Each authority evaluates the same declared intent independently — just as airspace, airport, and national regulators assess a single flight plan under their own rules.
Each authority gives approval (or refusal) for this specific intent, not approval in general.
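One way to picture this is each authority recording an independent decision keyed to the same Intent_ID. This is a hedged sketch; the authority names and the shape of the approval ledger are assumptions for illustration.

```python
# Illustrative sketch: approvals are recorded per intent, per authority.
approvals = {}

def record_decision(intent_id, authority, approved, scope_note):
    # Each decision is scoped to one declared intent, never "in general".
    approvals.setdefault(intent_id, []).append(
        {"authority": authority, "approved": approved, "scope": scope_note}
    )

record_decision("DIR-AI-2026-001", "data_protection_authority", True,
                "lawful basis confirmed for this declared purpose only")
record_decision("DIR-AI-2026-001", "public_sector_oversight", True,
                "assistive use approved; no delegation of decisions")

def is_authorised(intent_id):
    # Authorisation holds only if at least one authority has been
    # consulted and every consulted authority approved this intent.
    decisions = approvals.get(intent_id, [])
    return bool(decisions) and all(d["approved"] for d in decisions)
```

The point of the sketch: refusal by any single authority blocks this intent, and approval of a different intent counts for nothing here.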
What is recorded
Why this layer exists
To stop “we were allowed to run the system” being confused with:
“We were allowed to run this kind of system for this purpose.”
What this layer is doing
This layer applies rules and constraints to the declared intent.
It does not decide whether the intent is allowed to exist — that has already been established by authority.
Instead, it asks: under what constraints may this intent be executed?
What happens here
Different policy regimes examine the same declared intent and apply their own rules:
Health policy sees FOR = individual_applicant
→ requires safeguards and explanations
Financial policy sees OUTPUT = recommendation
→ prohibits automated decisions
Public-sector policy sees HOW = assistive
→ requires human oversight
Each policy body constrains how the intent may be executed, without changing what was declared.
This mirrors aviation policy: once a flight plan is accepted, different airspace authorities, safety regulators, and airports apply their own operational rules — altitude limits, routing constraints, noise restrictions — without rewriting the flight plan itself.
What is recorded
Why this layer exists
So constraints are applied before execution, and compliance is evaluated against declared rules rather than reconstructed after harm.
What this layer is doing
Only now does the system actually run.
This layer is about doing the thing that was declared, and nothing else.
Execution is constrained by the declared intent in the same way a flight is constrained by its filed plan — permitted to operate, but not to improvise purpose.
What happens here
The AI system assesses support eligibility for the individual applicant and produces a recommendation for human review.
It does not make automated decisions, act beyond the declared purpose, or produce outputs other than the declared recommendation.
Every action taken is tagged with Intent_ID.
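Tagging every action can be sketched as an append-only log in which each entry carries the Intent_ID it was executed under. The log structure here is an assumption for illustration.

```python
# Illustrative sketch: an append-only execution log where every entry
# names the declared intent it ran under.
execution_log = []

def act(intent_id, action, detail):
    entry = {"intent_id": intent_id, "action": action, "detail": detail}
    execution_log.append(entry)
    return entry

act("DIR-AI-2026-001", "assess", "support_eligibility checked")
act("DIR-AI-2026-001", "output", "recommendation drafted for human review")

# Any entry missing the declared Intent_ID is immediately suspect:
# it is behaviour with no commitment behind it.
```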
What is recorded
Why this layer exists
So behaviour can be checked against promises, not assumptions.
What this layer is doing
This layer asks: did what happened match what was declared?
It does not reinterpret intent.
It compares reality to the record.
Audit does not ask what the system claims it was trying to do; it checks what happened against what was declared — the same way aviation incidents are evaluated against the original flight plan.
What happens here
An auditor retrieves everything linked to Intent_ID: the declaration itself, the authority approvals, the policy constraints, and the tagged execution logs.
Why this layer exists
So accountability is factual, not narrative.
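The audit comparison can be sketched as a pure function from the declared record and the execution log to a list of findings. The record and log shapes below are illustrative assumptions.

```python
# Illustrative sketch: audit compares recorded behaviour to the declared
# record. It does not reinterpret intent; it checks facts against it.
record = {
    "intent_id": "DIR-AI-2026-001",
    "what": "assess",
    "output": "recommendation",
}

run_log = [
    {"intent_id": "DIR-AI-2026-001", "action": "assess"},
    {"intent_id": "DIR-AI-2026-001", "action": "recommendation"},
]

def audit(record, run_log):
    findings = []
    for entry in run_log:
        if entry["intent_id"] != record["intent_id"]:
            # Behaviour not tagged to this declaration.
            findings.append(("untagged_action", entry))
        if entry["action"] not in (record["what"], record["output"]):
            # Behaviour outside the declared action and output.
            findings.append(("undeclared_action", entry))
    return findings  # empty list: behaviour matched the declaration
```

An empty findings list is a factual statement, not a narrative: everything that ran was declared, and everything declared is what ran.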
Before an AI system is deployed, intent is typically expressed through prompts, system instructions, design specifications, model cards, risk assessments, or governance documentation. These artefacts already communicate what a system is meant to do.
However, these artefacts do not function as a Declared Intent Record in the sense proposed here.
Prompts and system instructions are inherently mutable. They are routinely edited, tuned, overwritten, or replaced during development and operation. While this flexibility is essential for system improvement, it means there is often no single, fixed record of what the system was committed to doing at the moment it was authorised to run.
Design documents, model cards, and governance statements may be more stable, but they are not operationally binding. They describe intent in prose, are not consistently referenced at runtime, and are rarely linked directly to execution logs, policy decisions, or audit trails. As a result, intent must be inferred after the fact from behaviour, configuration, or outcomes.
Most importantly, these artefacts are not designed to operate across domains. An AI system may simultaneously implicate data protection law, financial regulation, healthcare governance, and public-sector accountability. Each domain interprets intent independently, often using different documents, assumptions, or versions of the system description.
The Declared Intent Record differs in three critical ways:
Immutability
The DIR fixes intent at the point of authorisation. Once execution occurs, the declared intent cannot be altered. Changes in purpose require a new declaration, preserving a clear history of commitments and preventing intent from being retroactively rewritten to fit outcomes.
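The "new declaration, never modification" rule can be sketched as a supersession chain: each new record names its predecessor, so the full history of commitments stays intact. The identifiers and record shape are illustrative assumptions.

```python
# Illustrative sketch: changing purpose creates a new record that names
# its predecessor; the original declaration is never edited.
history = []

def declare(intent_id, purpose, supersedes=None):
    record = {"intent_id": intent_id, "purpose": purpose,
              "supersedes": supersedes}
    history.append(record)
    return record

declare("DIR-AI-2026-001", "assess support_eligibility")
# A later change of purpose -> a new declaration, old one untouched.
declare("DIR-AI-2026-002", "assess support_eligibility and appeals",
        supersedes="DIR-AI-2026-001")

def lineage(intent_id):
    # Walk the supersession chain back to the original commitment.
    by_id = {r["intent_id"]: r for r in history}
    chain = []
    while intent_id:
        chain.append(intent_id)
        intent_id = by_id[intent_id]["supersedes"]
    return chain
```

The chain is what prevents intent from being retroactively rewritten: the old commitment remains on record even after it is superseded.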
Standardised structure
The DIR expresses intent in a minimal, structured, domain-agnostic form. This avoids reliance on narrative descriptions and allows intent to be interpreted consistently by different systems and governance bodies.
Cross-domain binding
The DIR is explicitly designed to be referenced by authority, policy, execution, and audit layers across domains. The same declared intent can be evaluated by data protection authorities, AI governance bodies, financial regulators, and internal oversight teams, each applying their own rules but referencing the same commitment.
In this sense, the Declared Intent Layer does not replace prompts, specifications, or model documentation. It formalises the moment of commitment that these artefacts imply, and makes that commitment durable, inspectable, and binding across systems and time.
Prompts and documentation describe intent, but they are mutable, narrative, and domain-local; a Declared Intent Record fixes intent immutably and binds it across policy, execution, and audit.
(Same structure, different domain)
A homeowner builds a house on a plot of land.
Before construction begins, a homeowner must declare exactly what they intend to build. In the physical world, this declaration is typically expressed through building plans, planning applications, and design documents. These artefacts already capture intent in a practical sense.
However, these artefacts do not function as a Declared Intent Record in the sense proposed here.
Building plans are not immutable. They are routinely revised, amended, and superseded as construction progresses. While this flexibility is operationally necessary, it means that there is often no single, fixed record of what was committed to at the moment permission was granted. Intent can shift without a clear historical anchor.
More importantly, building plans are not designed to act as a binding reference across systems or domains. Even when plans are given identifiers and linked to planning approvals or inspections, these links are local and non-standardised. They do not propagate cleanly into other systems that may have a legitimate interest in the declared intent, such as financial institutions, insurers, or regulatory bodies operating outside the planning domain.
As a result, different actors may rely on different versions of the “plan,” interpret intent differently, or reconstruct intent retrospectively based on outcomes rather than commitments.
The Declared Intent Record differs in three critical ways:
Immutability
The DIR fixes intent at a specific moment in time. Changes to intent require the creation of a new record, rather than the modification of the original. This preserves a clear, auditable history of commitments.
Standardised structure
The DIR expresses intent in a consistent, domain-agnostic form. This allows intent to be referenced and interpreted across systems without translation or reinterpretation.
Cross-domain binding
The DIR is explicitly designed to be referenced by authority, policy, execution, and audit layers across domains. The same declared intent can be evaluated by planning authorities, building inspectors, financial institutions, insurers, and auditors, each applying their own rules but referencing the same commitment.
In this sense, the Declared Intent Layer does not replace building plans. It formalises the moment of commitment that plans imply, and makes that commitment durable, inspectable, and reusable beyond the boundaries of a single system or sector.
What this layer is doing
The homeowner must declare exactly what they intend to build.
// Declared Intent Record
Intent_ID: DIR-BUILD-2026-014
WHAT build
ON plot_12_elm_street
FOR residential_use
HOW single_family_dwelling
LIMIT max_2_storeys, footprint<=120m2
OUTPUT completed_house
//
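Because the record uses the same fixed keyword structure in every domain, one tiny parser can read a building DIR and an AI DIR alike. The grammar assumed here (an `Intent_ID:` header followed by `KEY value` lines) simply mirrors the records shown in these examples.

```python
# Illustrative sketch: parse the standardised DIR text form into fields.
# The same parser works for any domain's record, which is the point of
# the standardised structure.
def parse_dir(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.strip().partition(" ")
        fields[key.rstrip(":")] = value.strip()
    return fields

build_dir = parse_dir("""
Intent_ID: DIR-BUILD-2026-014
WHAT build
ON plot_12_elm_street
FOR residential_use
HOW single_family_dwelling
LIMIT max_2_storeys, footprint<=120m2
OUTPUT completed_house
""")
```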
Why this layer exists
So later no one argues about what was “meant”.
What this layer is doing
Checks whether the homeowner is allowed to build this house on this land.
What is recorded
What this layer is doing
This layer applies rules and constraints to the declared intent.
It does not decide whether the intent is allowed to exist — that has already been established by authority.
Instead, it asks:
What happens here
Different policy regimes examine the same declared intent and apply their own rules:
Planning policy sees FOR = residential_use
→ requires compliance with local zoning rules
Building-safety policy sees HOW = single_family_dwelling
→ requires structural and fire-safety standards
Insurance policy sees LIMIT = max_2_storeys, footprint<=120m2
→ sets cover conditions within the declared limits
Each policy body constrains how the intent may be executed, without changing what was declared.
This mirrors aviation policy: once a flight plan is accepted, different airspace authorities, safety regulators, and airports apply their own operational rules — altitude limits, routing constraints, noise restrictions — without rewriting the flight plan itself.
What is recorded
Why this layer exists
So constraints are applied before execution, and compliance is evaluated against declared rules rather than reconstructed after harm.
What this layer is doing
The house is built.
Inspectors check work against the declared intent, not against guesses.
What is recorded
What this layer is doing
Confirms the finished house matches the approved plan.
Why this works
Because the plan (intent) was explicit and fixed from the start.
As with flight plans and declared intent in digital systems, the key is that commitment is fixed before action, and everything else refers back to it.
Across both examples, intent is declared once, fixed at the point of authorisation, and then referenced, unchanged, by authority, policy, execution, and audit.