
Intent Used to Live in Humans

Mal Wanstall, 10 March 2026

For decades, organisational intent survived in people. The architect who remembered why. The strategist who held the thread. AI is removing the carrier, and what's left are static artefacts that were already inadequate. We argue that intent must become a living system state.

The architect who remembered why

In 2019, we sat in an architecture review at a financial services firm. A senior engineer, 22 years with the company, was explaining why a particular subsystem processed transactions in a specific order. The sequencing looked inefficient. Two separate teams had proposed optimisations. Both proposals were technically sound, well-documented, and would have saved meaningful compute.

The senior engineer rejected both. His reasoning had nothing to do with the code. Fourteen years earlier, a regulatory incident had revealed that the original sequencing prevented a class of reconciliation errors that only manifested under specific cross-border conditions. The conditions were rare. The regulatory finding had been resolved. But the sequencing remained because it encoded a piece of institutional knowledge about how the system could fail.

None of this was in the documentation. The original incident report existed somewhere in a decommissioned compliance system. The architectural decision record, if one had ever been written, was long gone. The knowledge lived in one person. When we asked what would happen when he retired, he laughed. “That’s the fun part,” he said. “Nobody will know to ask the question.”

This is the pattern we’ve spent two years studying. Across 14 organisations, in sectors from financial services to healthcare to logistics, we found the same structural condition: the most critical knowledge about why systems, processes, and strategies exist in their current form lives in people. Not in documents. Not in systems. In the memories, relationships, and judgment of specific individuals.

The carrier thesis

Every organisation runs on intent. Someone decided that this system should work this way, that this market should be pursued, that this process should exist. Those decisions had reasons. The reasons had context. The context had constraints, trade-offs, regulatory considerations, competitive dynamics, failure modes observed and avoided.

For decades, that intent survived because the people who shaped it stayed. They carried it in their heads, passed it through mentoring, expressed it in code reviews and strategy sessions and corridor conversations. The carrier, the human who held the context, was the mechanism.

We examined how intent transfers across four distinct domains:

Technical architecture. Design decisions, system constraints, integration choices. In 11 of 14 organisations, we found critical architectural decisions where the reasoning was held by fewer than three people. In four cases, it was held by one.

Strategic direction. The connection between a board-level bet and the operational activity meant to execute it. We traced this in detail in The Translation Problem, where we found 73% signal loss across three organisational layers. The humans bridging those layers were the primary mechanism keeping any signal alive at all.

Process design. Why a workflow exists in its current form. In eight organisations, we found processes that had survived multiple “transformation” programmes because a specific person intervened each time to protect them, knowing (from experience, not documentation) that removing them would break something downstream.

Regulatory and compliance knowledge. Why certain controls exist. This proved the most fragile. When the person who implemented a control in response to a specific finding left, the control often survived as a zombie: still running, still consuming resources, but with no one able to explain whether it was still necessary or how it connected to current obligations.

The pattern across all four domains is the same. The knowledge was never properly externalised. It didn’t need to be, because the humans carried it. The organisations had wikis, design documents, strategy decks, and process manuals. These artefacts captured a version of the intent, often the version that existed at the moment of writing. They were static snapshots of a dynamic reality, and they began decaying the moment they were created.

What AI changes

The carrier model worked, imperfectly but functionally, when humans remained in the loop. AI systems that act without that loop change the equation.

We observed three AI deployment patterns across our study organisations that directly affect the carrier model:

Pattern 1: Implementation without institutional context. Six organisations had adopted coding agents for software development. In all six, we found instances where agents had modified systems without access to the institutional knowledge that shaped those systems. In one case, an agent refactored a data pipeline and removed what appeared to be redundant validation steps. The steps were not redundant. They existed because of a data quality issue discovered two years earlier that had caused incorrect customer billing. The issue had been fixed at the source, but the validation remained as a safety net. No one had told the agent. No document captured the reasoning. The engineer who had added the validation had left eight months prior.

Pattern 2: Strategy acceleration without the judgment layer. Four organisations were using AI tools to accelerate strategic planning and analysis. The tools produced faster analysis, broader competitive scans, more comprehensive market assessments. But they operated without the contextual judgment that experienced strategists brought: the knowledge of which data sources were unreliable, which market signals were noise, which competitive moves were bluffs. The speed increased. The judgment did not transfer.

Pattern 3: Autonomous gap-filling. In the most concerning cases, AI systems were filling knowledge gaps with assumptions. When an agent encounters ambiguity in a codebase (unclear requirements, missing documentation, contradictory patterns), it resolves the ambiguity by inferring intent from context. Sometimes the inference is correct. When it isn’t, the error is silent. The system behaves as if the inferred intent were actual intent, and no one may notice until the consequences surface.

Each of these patterns is rational in isolation. Organisations adopt AI to move faster, reduce costs, scale capabilities. The problem is structural: the carrier is being removed without the knowledge being transferred. The human who held the context is being replaced by a system that operates without it.

The static artefact problem

The instinctive response is “better documentation.” Write it all down. Capture the knowledge before the humans leave. Build comprehensive wikis and decision logs and architecture decision records.

We examined this approach in three organisations that had invested heavily in knowledge management. The findings were discouraging.

Organisation A had spent 18 months building an internal knowledge base with over 4,000 articles. Usage analytics showed that 78% of articles had not been accessed in six months. Of the articles that were accessed, most were procedural (how to do X) rather than intentional (why X exists in this form). The knowledge base captured process. It did not capture intent.

Organisation B maintained architecture decision records (ADRs) for all significant technical decisions. The practice was disciplined. Records were written, reviewed, and stored. But when we traced actual decision-making, we found that engineers consulted ADRs in fewer than 12% of decisions where a relevant record existed. The records were written for a future audience, but the future audience didn’t read them because they didn’t know what to look for. You have to know the question before you can find the answer.

Organisation C had implemented a comprehensive strategy documentation system: cascading OKRs, initiative charters, quarterly business reviews. The documentation was thorough. But when we tested alignment between documented strategy and operational activity (using the methodology from our translation study), we found the same 30-40% orphaned activity rate we’d observed elsewhere. The documentation existed. The connection to execution did not.

The common failure across all three: static artefacts decay. They capture a point-in-time snapshot. The reality they describe continues to evolve. Within weeks of being written, they begin diverging from the truth. Within months, they are unreliable. Within a year, they are actively misleading, worse than having no documentation at all, because they create false confidence.

This is not a discipline problem. It’s a structural one. Static documents cannot capture dynamic intent. The format is wrong for the job.

Intent as a living system state

If static artefacts fail and human carriers are being removed, what’s left?

We argue that intent must become a system state: a living, machine-readable, continuously verified representation of why things are the way they are. Not a document that someone writes and others might read. A system property that is referenced at the point of decision, verified against reality, and updated as conditions change.

This is a different category of thing from documentation. Three properties distinguish it:

Living, not static. The representation of intent must update as the reality it describes changes. When a system is modified, the intent record must be updated in the same operation. When a strategic bet is informed by new evidence, the bet’s articulation must reflect that evidence. The intent and the thing it describes must be coupled, not separated by the gap between “doing work” and “writing about work.”

Machine-readable, not human-authored prose. If intent is going to inform AI systems (agents making implementation decisions, tools accelerating strategic analysis), it needs to be structured in a way those systems can consume. Natural language documentation is optimised for human reading. What we need is intent expressed as typed, structured, queryable data: constraints, rationale, dependencies, evidence, and the relationships between them.

Continuously verified, not assumed. A design document is assumed to be accurate until someone discovers it isn’t. A living intent system is continuously verified: does the system still behave consistently with the stated intent? Does the strategic activity still connect to the strategic bet? Does the evidence still support the hypothesis? Verification is not periodic review. It’s ongoing, automated comparison between intent and reality.

What this looks like in practice

In software engineering, this means design specifications that are not Word documents or wiki pages but structured artefacts that coding agents reference before making changes. The specification encodes not just what the system should do but why it was designed this way, what constraints shaped it, what failure modes it guards against. When an agent encounters ambiguity, it consults the spec. When the spec and the code diverge, the divergence is flagged. When a change is made, the spec is updated as part of the same operation.
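One way to keep a specification and the code it describes coupled is for the spec to carry a fingerprint of the code region it governs, so that any change to the code without a matching spec update is surfaced immediately. The sketch below, with hypothetical field names, shows the idea; it is one possible mechanism, not the approach our study organisations used.

```python
import hashlib

def fingerprint(code: str) -> str:
    """Content hash of a code region, stored alongside the spec entry."""
    return hashlib.sha256(code.encode()).hexdigest()

# Hypothetical structured spec entry: the "why" travels with a fingerprint
# of the code it explains.
spec = {
    "component": "transaction sequencer",
    "why": "ordering prevents a class of cross-border reconciliation errors",
    "code_fingerprint": fingerprint("process_in_fixed_order()"),
}

def diverged(spec: dict, current_code: str) -> bool:
    """True when the code changed but the spec was not updated with it."""
    return spec["code_fingerprint"] != fingerprint(current_code)

print(diverged(spec, "process_in_fixed_order()"))  # False: spec and code coupled
print(diverged(spec, "process_in_any_order()"))    # True: flag for human review
```

In practice the fingerprint would be computed over a declared file or function range and checked in CI, so the divergence flag fires in the same operation that changes the code.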

We’ve seen early versions of this pattern in two of our study organisations. In both cases, the shift from static documentation to structured, agent-readable specifications reduced rework by 30-40% in the first quarter. The agents stopped guessing. They started asking.

In strategic execution, this means strategic bets expressed as falsifiable hypotheses wired to live evidence streams. Not a strategy deck reviewed quarterly but a continuously updated model of what the organisation believes, what evidence supports those beliefs, and what would cause them to change. When new data arrives (a market shift, a competitive move, an execution result), it’s automatically evaluated against the hypothesis. The bet either strengthens, weakens, or needs revision.
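A falsifiable strategic bet can be sketched as a small state machine that re-scores itself as evidence arrives. The scoring rule, thresholds, and example events below are illustrative assumptions chosen for clarity; a real system would weight evidence by source reliability rather than count it.

```python
from dataclasses import dataclass, field

@dataclass
class StrategicBet:
    """Hypothetical sketch: a bet whose support is recomputed per event,
    not reviewed quarterly."""
    hypothesis: str
    supporting: int = 0
    contradicting: int = 0
    events: list[str] = field(default_factory=list)

    def observe(self, event: str, supports: bool) -> None:
        """Evaluate a new piece of evidence against the hypothesis."""
        self.events.append(event)
        if supports:
            self.supporting += 1
        else:
            self.contradicting += 1

    def status(self) -> str:
        total = self.supporting + self.contradicting
        if total == 0:
            return "untested"
        ratio = self.supporting / total
        if ratio >= 0.6:
            return "holding"
        if ratio >= 0.4:
            return "weakening"
        return "needs revision"

bet = StrategicBet("Mid-market customers will adopt the self-serve tier")
bet.observe("pilot cohort conversion above target", supports=True)
bet.observe("two competitors cut entry-level pricing", supports=False)
bet.observe("self-serve churn spiked in month two", supports=False)

print(bet.status())  # → "needs revision": 1 of 3 events support the thesis
```

The useful property is not the arithmetic but the trigger: the bet's status changes the moment the third event lands, which is what "detecting misalignment faster than periodic review" means mechanically.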

We’ve modelled this approach theoretically and tested components in two organisations. The results are preliminary but suggestive: organisations that wire strategic bets to live evidence detect misalignment 60-70% faster than those relying on periodic reviews. They don’t execute better. They see faster.

The structural gap

The two domains, software and strategy, face the same structural gap. In both cases:

  1. Intent originates in human judgment (an architect’s design reasoning, a strategist’s market thesis)
  2. That intent is currently preserved through human carriers or static artefacts
  3. AI is accelerating execution while bypassing the intent layer
  4. The result is fast, confident activity that may have no connection to the original reasoning

The fix is the same in both domains: make intent a first-class system property rather than a byproduct of human memory or a static document gathering dust.

This is harder than it sounds. It requires changing how organisations think about knowledge. Documentation is something you do after the work. A living intent system is part of the work. The specification is not written about the code; it is written with the code and verified against the code. The strategic hypothesis is not reviewed after the quarter; it is updated as evidence arrives.

What we’re watching for

We are tracking three indicators across our study organisations:

Knowledge half-life. How long does institutional knowledge remain accurate and accessible after the person who created it leaves? In organisations relying on human carriers, we’re measuring half-lives of 3-6 months. The knowledge doesn’t vanish instantly. It degrades as situations arise where the absent person would have intervened, and no one intervenes.

Intent drift. The divergence between stated intent and actual system behaviour over time. In codebases without structured intent, we’re measuring meaningful drift within 8-12 weeks of active development. In strategic execution without live evidence, drift is slower but more consequential: a strategic bet can operate for two or three quarters before anyone notices the thesis no longer matches reality.

Recovery cost. What does it cost to reconstruct lost intent? In the financial services case from our opening example, the architectural reasoning was eventually recovered through interviews with retired employees. The recovery took six weeks of consultant time and cost approximately $180,000. The knowledge had originally been free. It just hadn’t been captured.

The thesis

The transition we’re describing is generational. For the entire history of modern organisations, institutional knowledge has been a human property. People carried it. People transferred it. People applied it. The systems were secondary, tools that humans used and directed and imbued with purpose.

AI breaks this model. Not by being hostile or misaligned, but by operating at a speed and scale where human carriers can’t keep up. When an agent can modify a system in minutes, the architect who would have flagged the issue is too slow. When an AI tool can generate a strategic analysis overnight, the strategist who would have questioned the assumptions isn’t in the loop.

The response cannot be to slow AI down. The response must be to make the knowledge the humans carry explicit, structured, and alive. To transform intent from a property of people into a property of systems. Not to replace human judgment, which remains essential for the original act of forming intent, but to ensure that judgment persists after the moment of its formation.

Intent used to live in humans. It needs a new home.