The Accountable Committer: Orchestrating Intent in an Agent-Native World

2026-04-27 | 7 min | 1216 words | Jonas

TL;DR: The Doctrine of Meaningful Accountability

In the transition to an agent-native reality, the traditional maps of organisational design—specifically those defined by Team Topologies—must be radically redrawn. We are shifting from a paradigm of managing human cognitive load to one of orchestrating “Agentic Intent.” However, this acceleration brings a profound risk: the erosion of human responsibility. By adopting a principle inspired by Linus Torvalds—that AI may author, but a human must remain the accountable committer—we can ground this new architecture in reality. The role of the architect evolves from a librarian of syntax to a curator of constraints, ensuring that the “Architect Elevator” maintains a clear, high-fidelity link between strategic intent in the penthouse and automated execution in the engine room.

Views subject to change - see my disclaimer.

Beyond the Syntax: Orchestrating Intent in an Agent-Native World

For more than two decades, I have lived within the “Sense-and-Respond” cycle. It has been the steady heartbeat of my career, from the early days of manual system development to my current focus on leading complex architectural transitions. My journey has taken me from the heat and noise of the “engine room” to the quiet, strategic vistas of the “penthouse.” Through it all, the most persistent challenge has never been the technology itself, but the friction of translation. We spend an inordinate amount of our professional lives translating business intent into technical syntax, a process that is notoriously lossy, slow, and mentally exhausting.

Today, we stand at a threshold where that translation layer is being automated. The emergence of agent-native architectures—systems where autonomous AI agents handle the bulk of implementation and operational tasks—promises to liberate us from the “dark matter” of software development. But as an adaptive pragmatist, I see the danger in this promise. If we automate the execution without redefining the accountability, we are not building a more efficient organisation; we are building a more sophisticated “black box” that no one truly understands or controls.

To navigate this, we must revisit the human side of architecture. We must look at how our teams are structured, how our “Architect Elevator” moves, and most importantly, how we maintain the human thread of responsibility in a world where silicon handles the syntax.

The Cognitive Load Paradox

In my writing, I frequently return to the concept of cognitive load. It is the silent regulator of innovation. Team Topologies gave us a brilliant framework for managing this load by defining team boundaries that align with human cognitive limits. By creating Stream-aligned teams and supporting them with Platform teams, we sought to create “fast flow”—a state where developers could sense a market need and respond to it without being crushed by the weight of the entire system.

The initial promise of AI agents is that they will eliminate this cognitive load. If an agent can write the boilerplate, provision the infrastructure, and wire up the APIs, the human burden should, in theory, vanish. However, as anyone who has wrestled with a complex system knows, complexity is rarely destroyed; it is merely relocated.

When agents begin to populate the engine room, the cognitive load shifts from creation to curation. It is often more mentally taxing to review and validate five hundred lines of agent-generated code than it is to write fifty lines of one’s own. We risk falling into the “Rubber-Stamping Trap,” where humans, overwhelmed by the sheer volume of automated output, begin to approve changes they don’t fully comprehend. This is the point where the “Sense-and-Respond” cycle breaks. You cannot truly respond to a signal that has been obscured by a thousand automated “micro-decisions.”

The Torvalds Doctrine: The Anchor of Accountability

To prevent this drift into opacity, we must look to the governance of one of the most successful technical projects in history: the Linux kernel. Linus Torvalds established a rule that is perfectly suited for our current architectural crossroads: AI can author code, but a human must be the accountable committer.

This is not merely a rule for version control; it is a fundamental architectural constraint. Responsibility is the one variable in an organisation that cannot—and should not—be automated. An AI agent does not care if a production system fails at three o’clock on a Sunday morning. It does not understand the ethical implications of a data breach or the long-term cost of technical debt.

Applying this doctrine to our organisational design means that while an agent can function as a high-performance member of a Stream-aligned team, the human remains the “Owner of Intent.” The human does not necessarily audit every line of syntax, but they must be the one to verify that the agent’s output meets the defined constraints. This introduces what I call “Verified Friction.” It is an intentional slowing of the cycle to ensure that the response actually matches the signal.
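To make the idea concrete, here is a minimal sketch of such a gate in Rust. It is purely illustrative: the trailer names (Agent-Authored, Accountable-Committer) and the Commit shape are my own assumptions, not a reference to any real tooling or to the kernel's actual process. The point is only that an agent-authored change cannot pass without a named human taking ownership of it.

```rust
// Illustrative sketch: a merge gate enforcing the "accountable committer"
// rule. Trailer names are hypothetical, not an existing standard.

struct Commit {
    author: String,
    // Trailer lines such as ("Agent-Authored", "true") or
    // ("Accountable-Committer", "jonas@example.com"), parsed elsewhere.
    trailers: Vec<(String, String)>,
}

impl Commit {
    fn trailer(&self, key: &str) -> Option<&str> {
        self.trailers
            .iter()
            .find(|(k, _)| k.eq_ignore_ascii_case(key))
            .map(|(_, v)| v.as_str())
    }

    /// An agent may author, but the change is only mergeable once a human
    /// has explicitly taken accountability for it.
    fn has_accountable_committer(&self) -> bool {
        let agent_authored = self
            .trailer("Agent-Authored")
            .map(|v| v.eq_ignore_ascii_case("true"))
            .unwrap_or(false);

        if !agent_authored {
            return true; // a human wrote it; they already own it
        }
        // Agent-authored: require a named human to sign for the change.
        self.trailer("Accountable-Committer")
            .map(|v| !v.trim().is_empty())
            .unwrap_or(false)
    }
}

fn main() {
    let commit = Commit {
        author: "agent/refactor-bot".into(),
        trailers: vec![
            ("Agent-Authored".into(), "true".into()),
            ("Accountable-Committer".into(), "jonas@example.com".into()),
        ],
    };
    assert!(commit.has_accountable_committer());
    println!("merge allowed: {} is covered by a human sign-off", commit.author);
}
```

The check is trivial by design. Verified Friction is not about sophisticated tooling; it is about making the moment of human ownership explicit and machine-checkable, so it cannot quietly disappear under volume.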

The Architect Elevator in High-Frequency

One of the most critical aspects of my role is the “Architect Elevator”—the ability to travel between the strategic penthouse and the technical engine room. In an agent-native organisation, the penthouse strategy and the engine room implementation can, for the first time, be perfectly synchronised.

Because agents can map code back to business requirements instantly, we gain a level of observability that was previously impossible. We can “sense” the state of our strategy through the actual code being written. However, this only works if the elevator remains transparent. The architect’s job is to ensure that the “translation” done by agents is traceable. We must design our systems so that a business goal in the penthouse can be tracked through the agents that executed it, all the way to the specific “human commit” that authorised it.
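As a sketch of what that traceable chain could look like in data terms, consider the record below. The field names are my own, purely illustrative assumptions: each deployed change carries the business goal it serves, the agent run that translated it, and the human commit that authorised it, so the chain can be walked from the penthouse down to the engine room.

```rust
// Illustrative data model for penthouse-to-engine-room traceability.
// Field and type names are assumptions made for the sake of the sketch.

struct TraceRecord {
    business_goal: String,     // the intent stated in the penthouse
    agent_run_id: String,      // the agent execution that translated it
    human_commit: String,      // the commit hash a human authorised
    accountable_owner: String, // the person who hit "commit"
}

/// Walk the chain top-down: given a business goal, list every agent run,
/// human commit, and accountable owner acting in its name.
fn commits_for_goal<'a>(
    records: &'a [TraceRecord],
    goal: &str,
) -> Vec<(&'a str, &'a str, &'a str)> {
    records
        .iter()
        .filter(|r| r.business_goal == goal)
        .map(|r| {
            (
                r.agent_run_id.as_str(),
                r.human_commit.as_str(),
                r.accountable_owner.as_str(),
            )
        })
        .collect()
}

fn main() {
    let records = vec![TraceRecord {
        business_goal: "reduce checkout latency".into(),
        agent_run_id: "agent-run-0042".into(),
        human_commit: "a1b2c3d".into(),
        accountable_owner: "jonas@example.com".into(),
    }];
    for (run, commit, owner) in commits_for_goal(&records, "reduce checkout latency") {
        println!("goal -> {run} -> commit {commit}, authorised by {owner}");
    }
}
```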

This is where my interest in Rust provides a compelling analogy. Rust’s strictness around memory safety and ownership forces you to handle complexity at compile-time. Our organisational design must be equally explicit. We need “compile-time” checks on our agentic interactions to ensure they don’t violate our strategic intent.
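To stretch the analogy into actual Rust, here is a hypothetical typestate sketch, not a real framework: the compiler simply refuses to accept agent output that no human has verified, the organisational equivalent of a borrow-checker error.

```rust
// Illustrative typestate sketch: agent output cannot reach "commit"
// without passing through a human verification step. All names are
// hypothetical; the point is the compile-time shape of the rule.

use std::marker::PhantomData;

struct Unverified;
struct Verified;

struct AgentOutput<State> {
    diff: String,
    _state: PhantomData<State>,
}

impl AgentOutput<Unverified> {
    fn new(diff: impl Into<String>) -> Self {
        AgentOutput { diff: diff.into(), _state: PhantomData }
    }

    /// The only way to obtain a Verified value is for a named human to
    /// review the output against the defined constraints.
    fn verify(self, reviewer: &str) -> AgentOutput<Verified> {
        println!("{reviewer} takes accountability for this change");
        AgentOutput { diff: self.diff, _state: PhantomData }
    }
}

/// Commit only accepts verified output; passing an unverified value is a
/// type error, caught long before anything reaches production.
fn commit(output: AgentOutput<Verified>) {
    println!("committing {} bytes of verified change", output.diff.len());
}

fn main() {
    let draft = AgentOutput::<Unverified>::new("fn handler() { /* ... */ }");
    // commit(draft);          // would not compile: wrong typestate
    let reviewed = draft.verify("jonas");
    commit(reviewed);
}
```

The design choice matters more than the mechanism: the rule lives in the structure of the system, not in a reviewer's memory, which is exactly what we want our organisational "compile-time" checks to do.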

The Path to Sustainability

Ultimately, this vision is about sustainability. We have spent the last two decades burning out our best talent on the “dark matter” of IT—the repetitive, soul-crushing toil of manual translation. By delegating execution to agents while retaining human accountability, we can finally return to the “Flow State” of true architecture.

We are moving toward a world where we no longer wrestle with syntax, but with logic and ethics. This is the true promise of the agent-native era: it allows us to spend our limited cognitive capacity on the things that actually matter. It allows us to be more human, not less.

As an adaptive pragmatist, I don’t see AI as a replacement for human wisdom, but as a high-precision instrument that requires a skilled hand to steer. The “Sense-and-Respond” cycle is accelerating, and the noise is increasing. But as long as we hold to the rule that a human must always be the one to hit “commit,” we ensure that our technology remains a tool for human ends. We are still the ones in the elevator. We are still the ones who must sense the world and decide how to respond.

The future of software architecture is not a cold, automated grid; it is a vibrant, human-led collaboration between our intent and the machine’s efficiency. And that is a future I am ready to commit to.