The traditional sociotechnical model, defined by Team Topologies, focuses on managing human cognitive load to ensure a steady “flow” of value. However, as we transition into an “Agent-Native” era, the architecture of our organisations must shift. AI agents are no longer just tools; they are becoming pseudo-members of the team. This requires a redefinition of team boundaries, where Platform Teams evolve into Context Providers and Stream-aligned Teams become Orchestrators of Intent. The “Architect Elevator” must now travel deeper into the engine room to manage the integration of non-human intelligence while ensuring the “Penthouse” strategy remains grounded in the reality of automated execution. The goal is a “Sense-and-Respond” organisation that scales not through more headcount, but through the seamless orchestration of human creativity and agentic efficiency.
Views subject to change - see my disclaimer.
For over two decades, the discipline of software architecture has been engaged in a slow, deliberate migration. We have moved from the monolithic “Cathedrals” of the early 2000s (where a single, rigid structure dictated the behaviour of thousands) to the “Bazaars” of microservices and agile squads. Yet, the most significant bottleneck in this evolution has never been the technology itself. It has always been the human mind.
The core premise of modern organisational theory, most notably articulated in Skelton and Pais’s Team Topologies, is that software architecture and organisational structure are two sides of the same coin. If you wish to change the code, you must first change the communication paths of the people writing it. This “sociotechnical” approach has served us well in the era of Cloud and DevOps. But a new variable has entered the equation: the autonomous AI agent.
When the “engine room” of an organisation begins to be populated by entities that can write code, debug systems, and make low-level tactical decisions at a speed no human can match, the old maps of team interaction begin to tear. We are entering the era of the Agent-Native Architecture, and it requires a fundamental rethink of what it means to lead, to design, and to respond.
To understand where we are going, we must first acknowledge the gravity of cognitive load. In the “Sense-and-Respond” cycle, the ability of a team to respond to feedback is directly proportional to the amount of mental space they have available. If a team is drowning in “dark matter” (legacy code, poorly defined APIs, and constant context-switching) they cannot sense the market’s signals, let alone respond to them.
Team Topologies introduced a beautiful taxonomy to solve this: Stream-aligned teams (focused on a continuous flow of work), Platform teams (reducing the cognitive load of stream-aligned teams by providing internal services), Enabling teams (bridging knowledge gaps), and Complicated Subsystem teams (handling the maths or physics that require PhD-level focus).
In a pre-AI world, the goal was to keep these teams small, because human communication overhead grows quadratically with every new member added (with n people there are n(n-1)/2 possible pairwise channels). But an AI agent does not attend stand-ups. It does not suffer from “Social Loafing.” It does not need a performance review in the traditional sense. It does, however, consume context.
The architectural challenge of the next five years is not just about integrating LLMs into our products; it is about integrating them into our organisational structures. We are moving from a world of Human-to-Human interfaces to Human-to-Agent collaboration, and finally to Agent-to-Agent autonomous flow.
How do the four fundamental team types evolve when agents become first-class citizens?
In the traditional model, the stream-aligned team spends a significant portion of its time on “toil” (the repetitive plumbing required to move a feature from a whiteboard to production). In an agent-native organisation, the human members of a stream-aligned team shift their focus upwards. They become curators of intent.
Instead of writing every line of a React component or a Python microservice, the team provides the high-level constraints, the “Definition of Done,” and the strategic “Why.” The “How” is increasingly delegated to a swarm of specialised agents. This creates a fascinating paradox: the team’s cognitive load regarding syntax decreases, but their cognitive load regarding system integration and ethics increases. They are no longer just builders; they are governors of a micro-ecosystem.
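One way to make “curation of intent” concrete is a machine-readable brief: the humans fix the goal, the constraints, and the Definition of Done, while the “How” is left open to the agents. The sketch below is a hypothetical schema, not an established format; the field names and the example brief are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class IntentBrief:
    """A human-authored brief that a swarm of agents executes against."""
    goal: str                                            # the strategic "Why"
    constraints: list[str] = field(default_factory=list) # non-negotiables
    definition_of_done: list[str] = field(default_factory=list)

    def is_done(self, passed_checks: set[str]) -> bool:
        # The brief, not the agent, decides when the work is finished.
        return set(self.definition_of_done) <= passed_checks

# Hypothetical example: humans state outcome and limits, agents choose tactics.
brief = IntentBrief(
    goal="Reduce checkout latency below 300 ms at p95",
    constraints=["no schema changes", "stay within EU data residency"],
    definition_of_done=["load test passes", "error budget intact"],
)
```

Note that `is_done` belongs to the brief rather than the agent: acceptance remains a human-owned artefact even when execution is fully delegated.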
The Platform team has always been the unsung hero of the “Architect Elevator.” Their job is to make the right way the easy way. In the era of AI agents, the “Platform” is no longer just a Kubernetes cluster or a CI/CD pipeline. It becomes a Context Engine.
Agents are only as good as the data and constraints they are given. A platform team in 2026 is responsible for building the “Knowledge Graphs” and “Authorisation Fabrics” that allow agents to operate safely. If an agent is tasked with optimising a database, the Platform Team provides the guardrails (the “Golden Path”) that prevent the agent from accidentally deleting production data in a fit of over-zealous optimisation. The platform becomes the interface through which silicon understands the organisation’s rules.
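A minimal sketch of such a guardrail, under assumed conventions (a propose/approve flow and a `sandbox_` schema prefix, both hypothetical): the platform inspects an agent’s proposed SQL before anything touches production, and destructive statements pass only when every target table lives in the sandbox.

```python
import re

# Hypothetical platform-level guardrail: destructive statements are
# rejected unless every table they touch is in the sandbox schema.
FORBIDDEN = re.compile(r"\b(DROP|TRUNCATE|DELETE)\b", re.IGNORECASE)

def approve(sql: str, *, sandbox_prefix: str = "sandbox_") -> bool:
    """Return True only if the agent's proposed statement is safe to run."""
    if not FORBIDDEN.search(sql):
        return True  # reads and optimisations pass through
    targets = re.findall(r"(?:FROM|TABLE)\s+(\w+)", sql, re.IGNORECASE)
    # Refuse if we cannot identify the targets, or any target is outside
    # the sandbox: the "Golden Path" fails closed, not open.
    return bool(targets) and all(t.startswith(sandbox_prefix) for t in targets)
```

Real authorisation fabrics would sit at the connection layer rather than parse SQL with regexes, but the shape is the point: the platform, not the agent, holds the veto.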
Enabling teams traditionally help other teams adopt new technologies. As we transition, enabling teams will likely focus on “Agentic Literacy.” They will be the ones who understand how to fine-tune a model for a specific domain or how to debug a “hallucination” in a critical business workflow. They bridge the gap between the raw capabilities of a Foundation Model and the specific, idiosyncratic needs of a Complicated Subsystem team.
There will always be areas of the architecture, like high-frequency trading algorithms, advanced cryptography, or complex physical simulations, where “generalist” agents fail. These teams will likely remain the most human-centric, but they will use agents as “exoskeletons.” The agents will handle the verification and formal proofs, while the humans focus on the breakthrough conceptual leaps that machines still struggle to initiate.
I often reference Gregor Hohpe’s “Architect Elevator,” the idea that a great architect must be able to travel from the “Penthouse” (business strategy) to the “Engine Room” (technical implementation) and back again.
In an agent-heavy organisation, the elevator moves faster, but the floors are changing. The engine room is becoming increasingly automated, which risks creating a “Black Box” effect. If the people in the penthouse can no longer understand how the work is being done because it is being handled by a thousand disparate agents, the “Sense-and-Respond” loop breaks. You cannot respond to what you cannot comprehend.
The architect’s new role is to ensure Observability of Intent. We need to be able to trace a high-level business goal down through the agents that executed it, ensuring that the “emergent behaviour” of the system still aligns with the corporate strategy. The architect becomes the “Chief Context Officer,” ensuring that the signals coming from the engine room are translated into meaningful insights for the penthouse, even when those signals are generated by non-human actors.
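“Observability of Intent” can be sketched as a trace that links every low-level agent action to the business goal it serves, so the penthouse can always ask “who did what, and why?” The class and the `GOAL-17` identifier below are illustrative assumptions, not a real tracing API.

```python
from collections import defaultdict

class IntentTrace:
    """Links low-level agent actions to the business goal they serve,
    keeping emergent engine-room activity auditable from the penthouse."""

    def __init__(self) -> None:
        self._actions: defaultdict[str, list] = defaultdict(list)

    def record(self, intent_id: str, agent: str, action: str) -> None:
        self._actions[intent_id].append((agent, action))

    def explain(self, intent_id: str) -> list[str]:
        # Answers the penthouse question: which agents acted on this goal?
        return [f"{agent}: {action}" for agent, action in self._actions[intent_id]]

# Two non-human actors contributing to the same strategic goal.
trace = IntentTrace()
trace.record("GOAL-17", "schema-agent", "added covering index on orders")
trace.record("GOAL-17", "cache-agent", "raised TTL on product lookups")
```

The key design choice is that the intent identifier is mandatory at write time: an agent action with no traceable “Why” simply cannot be recorded.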
One of the most profound shifts in this new architecture is the concept of “Digital Labour” and its impact on human morale and ownership. If an agent writes 80% of the code, who “owns” the service? If the service fails at 2 AM, does the human on-call feel the same sense of responsibility for code they didn’t technically write?
This is the human side of architecture that cannot be ignored. We must design our team topologies to prevent Human Alienation. If we turn our developers into mere “prompt engineers” or “output reviewers,” we strip away the craft and the “Flow State” that makes software engineering a fulfilling profession.
The successful organisation of the future will use agents to remove the drudgery, not the design. We must protect the “Sense-and-Respond” cycle by ensuring that humans remain “in the loop” for the sensing of nuance and the responding with empathy. Agents can optimise a supply chain, but they cannot (yet) understand the political ramifications of a plant closure or the subtle shift in a brand’s cultural relevance.
I prefer systems that are Resilient (can recover from failure) rather than just Robust (can resist failure). In an agent-native architecture, we must embrace unpredictability. Agents, especially those based on probabilistic models, will behave in ways we didn’t explicitly program.
This means our “Team Topologies” must be even more decoupled. We need “Circuit Breakers” not just in our code, but in our organisational processes. If an autonomous agent in a Stream-aligned team starts making suboptimal decisions, the impact must be contained. This is where the “Platform” comes back into play, acting as a sandbox for innovation where the cost of failure is minimised.
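An organisational circuit breaker can borrow its shape directly from the software pattern: after a run of bad decisions, the agent’s autonomy is suspended and decisions route to a human until the circuit is reset. The class below is a hedged sketch of that containment logic, with an assumed threshold, not a production governance system.

```python
class AgentCircuitBreaker:
    """Contains the blast radius of a misbehaving agent: after repeated
    bad decisions, trip the breaker and escalate to a human."""

    def __init__(self, threshold: int = 3) -> None:
        self.threshold = threshold
        self.failures = 0
        self.open = False  # open circuit = agent autonomy paused

    def report(self, decision_ok: bool) -> None:
        if decision_ok:
            self.failures = 0  # healthy decisions reset the count
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # trip: route future decisions to a human

    def may_act(self) -> bool:
        return not self.open
```

As in code, the value is not the mechanism itself but the decoupling it enforces: one agent tripping its breaker never cascades into the rest of the stream-aligned team’s flow.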
To prepare for this shift, we must stop thinking of AI as a “tool” like a better IDE or a faster compiler. We must start thinking of AI as a Structural Catalyst.
My journey from the engine room to the penthouse has always been guided by a simple truth: technology exists to serve human ends. As we stand on the edge of the Agent-Native era, that truth becomes more important than ever.
The architecture of the future is not a cold, sterile grid of autonomous bots. It is a vibrant, adaptive, and deeply human system where AI agents act as the “connective tissue,” allowing us to sense the world more clearly and respond to it more gracefully. By applying the principles of Team Topologies and the “Architect Elevator” to this new reality, we can build organisations that are not just more efficient, but more creative, more resilient, and ultimately, more human.
The “Sense-and-Respond” cycle is about to get a lot faster. The question for the architects of today is: are your teams structured to keep up, or will the “engine room” leave the “penthouse” behind? The answer lies in the sociotechnical design we choose to build today.