Regulatory Alignment

How the Declared Intent Layer supports risk‑based AI regulation, including the EU AI Act.

The Declared Intent Layer is not a compliance framework or a legal instrument. It is a technical governance primitive that aligns closely with how modern AI regulation — including the EU AI Act — already works.

1. Regulation Already Depends on Intent

Risk-based AI regulation hinges on a simple but powerful concept: intended purpose. Under the EU AI Act, an AI system’s obligations, risk classification, and liability exposure depend on what the system is declared to be used for, in which context, and under what conditions.

In practice, this makes intent load-bearing. Yet today, intent is usually expressed implicitly — scattered across documentation, policies, prompts, contracts, and post-hoc explanations.

When harm occurs, regulators are often forced to reconstruct intent after the fact.

2. The Intent Gap in Modern AI Systems

This dependency on intended purpose applies to all AI systems — not just general-purpose ones.

In practice, however, intent is rarely fixed or operational at the moment a system acts. Even for narrowly scoped systems, intent is typically described in narrative artefacts: specifications, policies, model cards, risk assessments, contracts, and internal governance documents.

There is often no single, authoritative record that captures what the system was explicitly committed to doing at the point of authorisation or execution. As a result, intent is inferred from behaviour, configuration, or outcomes — or argued about after an incident.

This creates a structural gap: regulation is asked to govern intent, but intent itself is not treated as a first-class, technical object. It does not exist in a durable, inspectable form that can be consistently referenced by authority checks, policy application, execution logs, and audit.

3. What the Declared Intent Layer Adds

The Declared Intent Layer supplies the missing technical object: a Declared Intent Record (DIR), fixed at the moment of authorisation.

A DIR explicitly states:

  • what action is intended
  • on what object
  • for what purpose
  • under what constraints
  • with what expected output

This declaration is immutable once execution begins, and is referenced consistently by authority, policy, execution, and audit layers.

The Declared Intent Layer does not decide legality or enforce policy. It makes intent legible and stable so that regulation can operate on something concrete.
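
To make this concrete, the sketch below shows one hypothetical way a DIR could be represented in code. The class name `DeclaredIntentRecord`, the field names, and the content-hash scheme are assumptions made for illustration; the Declared Intent Layer does not prescribe any particular encoding.

```python
# Hypothetical sketch of a Declared Intent Record (DIR).
# Field names and the hashing scheme are illustrative, not part of any standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)  # frozen: fields cannot be changed once the record exists
class DeclaredIntentRecord:
    action: str                   # what action is intended
    target: str                   # on what object
    purpose: str                  # for what purpose
    constraints: tuple[str, ...]  # under what constraints
    expected_output: str          # with what expected output
    declared_by: str              # the actor committing to this intent
    declared_at: str              # timestamp fixed at the moment of authorisation

    def record_id(self) -> str:
        """Content hash that authority, policy, execution, and audit layers
        can all cite, so they provably reference the same declared intent."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


# Example: a deployer declares intent before execution begins.
dir_record = DeclaredIntentRecord(
    action="summarise",
    target="customer-complaint tickets",
    purpose="triage support workload",
    constraints=("read-only access", "no personal data leaves the EU"),
    expected_output="one-paragraph summary per ticket",
    declared_by="deployer:acme-support-ops",
    declared_at=datetime.now(timezone.utc).isoformat(),
)
print(dir_record.record_id())  # the stable reference used by downstream layers
```

Freezing the record and hashing its contents is one simple way to give every downstream layer the same stable object to point at; any equivalent mechanism would serve.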

4. Mapping to the EU AI Act

The Declared Intent Layer aligns cleanly with the EU AI Act’s structure:

  • Intended purpose → Declared Intent Record
  • Risk classification → Interpretation of declared intent
  • Provider vs deployer responsibility → Who declares intent
  • Post-market monitoring → Audit against immutable intent

When intent is declared explicitly and early, regulators can evaluate compliance without reconstructing narratives after harm has occurred.
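
Continuing the hypothetical sketch above, post-market monitoring could then be expressed as a mechanical check of execution logs against the frozen record. The `audit_execution` function and the log format are illustrative assumptions, not provisions of the Act or of the Layer.

```python
# Hypothetical audit check, reusing DeclaredIntentRecord and dir_record from the
# earlier sketch: does a logged execution cite the intent that was actually frozen?
def audit_execution(log_entry: dict, declared: DeclaredIntentRecord) -> bool:
    """True if the log entry references the immutable DIR; any mismatch means
    the system acted outside (or without) a declared intent and needs review."""
    return log_entry.get("intent_id") == declared.record_id()


# A post-market monitor can replay logs against the original declaration.
log_entry = {"intent_id": dir_record.record_id(), "action": "summarise"}
print(audit_execution(log_entry, dir_record))  # True: execution matches declared intent
```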

5. Accountability Without Over-Prescription

Crucially, the Declared Intent Layer does not standardise ethics, policy, or outcomes. It standardises only the structure of intent.

Different regulators can apply different rules to the same declared intent, just as aviation authorities apply different airspace rules to the same flight plan.
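
As a hypothetical illustration of that point, two jurisdictions could run entirely different rules over the same record from the sketch above. Both policy functions below are invustrative inventions, not real regulatory logic.

```python
# Two illustrative, jurisdiction-specific rule sets applied to one shared DIR.
def eu_rules(declared: DeclaredIntentRecord) -> str:
    # invented check: certain declared purposes trigger a stricter review path
    return "high-risk review" if "credit" in declared.purpose else "permitted"


def sector_rules(declared: DeclaredIntentRecord) -> str:
    # invented check: a sector regulator may care about different fields entirely
    return "permitted" if "read-only access" in declared.constraints else "needs approval"


print(eu_rules(dir_record), sector_rules(dir_record))  # same record, independent judgements
```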

This preserves regulatory sovereignty while enabling interoperability.

6. Why Reusable AI Systems Make This Gap Impossible to Ignore

Some AI systems are built to do one specific job.

For example:

  • a system that checks whether a loan application meets fixed criteria
  • a system that flags unusual transactions for review
  • a system that routes customer service requests

In these cases, the system’s purpose is narrow and mostly fixed. Even if intent is poorly recorded, it can often be inferred from the system’s design.

Many modern AI systems are different. They are built to be reused across many tasks. A single system might help draft emails, summarise documents, analyse patterns in data, or support decision-making in different domains.

The technology stays the same. What changes is how people use it. This is what is meant by general-purpose or reusable AI.

A helpful way to think about it is as a tool rather than a machine. A hammer has no single purpose on its own — its purpose depends on who is using it, and for what. The same is true of reusable AI systems.

Because of this, intent is no longer defined only at design time. It is defined when the system is deployed, by a specific actor, in a specific context, for a specific outcome.

If intent is not explicitly declared at the moment of use:

  • the same system can quietly move into high-risk situations
  • responsibility can shift without anyone noticing
  • regulators must infer purpose after the fact

Reusable systems do not create a different governance problem. They make the existing one visible. The Declared Intent Layer ensures that whenever intent is defined — whether once or many times — it is declared explicitly, fixed before execution, and available for oversight.

7. Beyond the EU

Although the EU AI Act provides a clear illustration, the same problem appears globally. OECD principles, national AI frameworks, and sector-specific rules all rely — implicitly or explicitly — on understanding what a system was meant to do.

A global, technical standard for declared intent offers a common reference point across jurisdictions without requiring harmonised law.

Conclusion

The Declared Intent Layer does not replace regulation. It strengthens it by giving regulators, deployers, and auditors a stable object to reason about.

As AI systems become more autonomous, adaptable, and cross-domain, making intent explicit before execution is not optional. It is a prerequisite for credible, scalable governance.