
In our previous article, we argued that UI in the classical sense is losing its central role. Not because interfaces disappear, but because systems are starting to take on the work that users used to do manually. The natural next question is not what the new interface looks like. It’s how these systems should behave.
Designing for a System That Acts
Most digital products today are still built around a simple assumption: nothing happens until the user triggers it. Even the most complex tools rely on the same model. The user selects, filters, confirms, and navigates, while the system responds. AI agents break that model.
Systems have been acting on behalf of users for years — recommendation engines, automated alerts, trigger-based workflows. What agents change is the depth and personalisation of that action. A recommendation engine could suggest the next film; an agent can plan a weekend trip and book the flights. And with that depth comes a shift in how the relationship feels: users begin to perceive the system not as a faceless algorithm, but as something working specifically for them.
Agents don’t wait for instructions in the same way. They interpret goals, decide what needs to be done, and execute across multiple steps. That might involve querying data, comparing options, applying rules, or coordinating between services — often before the user sees anything.
Once you’re designing for that kind of system, the interface stops being the core design surface. The core becomes behaviour.
The Shift Becomes Obvious in Real Products
This shift becomes tangible as soon as you work on products where decision-making is the main source of value. In AMS AI, a procurement platform in the healthcare supply chain, the initial instinct was to organise complexity through the interface. Procurement teams needed visibility into vendors, pricing, compliance, and risk, so the solution was to structure that into dashboards, filters, and tables.
But exposing more data didn’t reduce complexity — it moved the burden onto the user. Introducing AI into the workflow changed that dynamic. Instead of helping users navigate information, the system began analysing suppliers, identifying trade-offs, and generating recommendations. At that point, the design problem shifted.
It was no longer about making information easier to browse. It was about deciding how the system should act on that information:
- Should it recommend one option or several?
- Should it explain its reasoning or keep it concise?
- Should it automatically exclude risky choices?
Importantly, these are not one-time decisions. The answers evolve as trust develops. Early on, the system may present multiple options with full reasoning. Over time, as users observe reliable outcomes, they grant the system more autonomy — much like approving auto-accept in a code editor after seeing it make the right call repeatedly.
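The trust curve described above can be sketched as a small policy object. This is a minimal, hypothetical illustration — the class name, thresholds, and autonomy labels are all invented for this sketch, not taken from AMS AI or any real product:

```python
from dataclasses import dataclass


@dataclass
class TrustTracker:
    """Maps the user's acceptance history to an autonomy level.

    Early on, the agent shows multiple options with full reasoning;
    as accepted outcomes accumulate, it earns more autonomy.
    Thresholds are illustrative placeholders a team would tune.
    """
    accepted: int = 0
    total: int = 0

    def record(self, was_accepted: bool) -> None:
        """Record whether the user accepted the agent's last outcome."""
        self.total += 1
        if was_accepted:
            self.accepted += 1

    def autonomy_level(self) -> str:
        # Too little history: stay conservative and explain everything.
        if self.total < 10:
            return "present_options_with_reasoning"
        rate = self.accepted / self.total
        if rate >= 0.95:
            return "auto_accept"          # the "auto-accept in a code editor" stage
        if rate >= 0.80:
            return "single_recommendation"
        return "present_options_with_reasoning"
```

The point of the sketch is that autonomy is a state the system moves through, not a setting chosen once at design time.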
These questions shape the experience far more than any interface structure. And they are not UI decisions.
From Interaction to Delegation
As systems become capable of acting, interaction itself starts to change. Instead of guiding users through a sequence of steps, products increasingly allow users to express intent and let the system do the work. What used to require multiple screens and manual actions can now begin with a single request.
This shift is often described in terms of AI agents. Jakob Nielsen frames it as a move toward systems that act on behalf of users, reducing the need for direct interaction with interfaces.
In practice, this doesn’t remove interaction — it reframes it. Users are no longer executing tasks step by step. They are delegating outcomes.
Where UX Moves
It’s important to clarify what is actually changing here. This shift is not about the disappearance of user experience. It’s about where that experience is designed.
In traditional products, UX is largely expressed through the interface — through flows, screens, and interactions.
In agent-driven systems, UX is designed at the level of processes the agent executes — not the steps the user follows. And the interaction model inverts: instead of walking through a sequence of steps toward a result, the user sees the result first and modifies it. The user shifts from executor of a process to editor of an outcome.
"The interface doesn’t disappear. It stops being the primary place where experience is shaped."
Designing Behaviour Instead of Screens
Once you accept that systems can act on behalf of users, the design problem changes quite fundamentally. You’re no longer deciding how people move through an interface. You’re deciding how the system behaves when it moves on its own.
That shift sounds subtle, but in practice, it reframes the entire process. Instead of focusing on flows and layouts, the work moves toward defining how the system interprets intent, how confident it needs to be before acting, and when it should step back and involve the user.
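One way to make "how confident it needs to be before acting" concrete is a simple gating function. This is a sketch under assumed thresholds — the numbers and mode names are hypothetical, not from any shipped system:

```python
def decide_mode(confidence: float,
                act_threshold: float = 0.9,
                suggest_threshold: float = 0.6) -> str:
    """Map the agent's confidence in a planned action to a behaviour.

    High confidence: act autonomously.
    Moderate confidence: propose and wait for confirmation.
    Low confidence: step back and involve the user.
    """
    if confidence >= act_threshold:
        return "act_autonomously"
    if confidence >= suggest_threshold:
        return "suggest_and_confirm"
    return "ask_user_for_input"
```

Deciding where those thresholds sit — and how they vary by task risk — is exactly the kind of design work that now precedes any screen.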
In AMS, this became clear very quickly. The challenge was not how to present supplier data, but how far the system should go in making decisions.
- Should it present a single recommendation or multiple options?
- How much of its reasoning should it expose?
- At what point should it require human confirmation?
These decisions define the experience long before anything is drawn on a screen. The interface becomes a reflection of system behaviour, not the place where that behaviour is defined.
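Those three questions can be captured as a behaviour policy that exists before any interface does. The following is a hypothetical sketch — field names and defaults are invented for illustration:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentBehaviourPolicy:
    """A behaviour contract defined ahead of any screen design.

    Each field answers one of the questions above; values shown
    are placeholders, not recommendations.
    """
    max_recommendations: int = 3        # 1 = single recommendation, >1 = options
    expose_reasoning: str = "summary"   # "none" | "summary" | "full"
    confirm_above_risk: float = 0.5     # risk score beyond which a human signs off


# A cautious early-stage configuration: several options, full reasoning,
# human confirmation for anything non-trivial.
early_policy = AgentBehaviourPolicy(
    max_recommendations=3,
    expose_reasoning="full",
    confirm_above_risk=0.2,
)
```

Framing the decisions this way makes them reviewable and versionable, the same way a flow diagram once was.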
The medium of interaction shifts accordingly. Where users once navigated through buttons and filters, they now engage through conversation. Designers have always asked “what happens next?” — but the answer is no longer about placing the right element in the right spot. It’s about how the agent conducts the dialogue.
Autonomy Changes the Role of the User
As soon as the system starts acting, the user’s role changes as well. In traditional software, users operate the system step by step. They trigger actions, move through flows and assemble outcomes themselves.
In agent-driven systems, that dynamic shifts. Users stop operating the product and start supervising it. They don’t need to see every step. They need to understand the outcome, evaluate whether it makes sense, and decide whether to accept or override it. This introduces a different kind of interaction model — one built around trust rather than control.
And this is where many AI products struggle today. Some automate too aggressively and lose user confidence. Others expose too much of the underlying process and recreate the same complexity they were meant to remove. Designing for agents means navigating that balance deliberately.
The Interface Becomes a System of Accountability
When a system acts on behalf of the user, the role of the interface shifts in a way that is easy to underestimate. It no longer exists primarily to let users perform actions. That part of the work is increasingly handled by the system itself. Instead, the interface becomes the place where those actions are made visible, understandable, and open to challenge.
"In practice, this means the interface is less about guiding users through a sequence of steps and more about helping them answer a different set of questions: what exactly did the system do, why did it do it this way, and should I trust the outcome?"
This is a fundamentally different design goal. In classical UX, clarity comes from structure. You guide the user through a flow and reduce ambiguity step by step. In agent-driven systems, clarity comes from explanation. The system has already acted, and the user needs to understand its reasoning well enough to accept or override it.
This requires making system behaviour legible without exposing all of its internal complexity. Too little visibility and the system feels opaque. Too much and you recreate the complexity the system was meant to remove.
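One practical pattern for this balance is layered explanation: the same action trace rendered at different levels of detail. The function and trace structure below are assumptions for the sake of illustration:

```python
def explain(action_trace: list[dict], level: str = "summary") -> str:
    """Render an agent's action trace at a chosen level of detail.

    "summary" keeps behaviour legible at a glance; "full" exposes
    every step, with its reason, for users who want to challenge
    the outcome. The trace format (step/result/reason) is a
    hypothetical convention.
    """
    if level == "summary":
        return f"{len(action_trace)} steps; outcome: {action_trace[-1]['result']}"
    lines = [f"{s['step']}: {s['result']} (because {s['reason']})"
             for s in action_trace]
    return "\n".join(lines)
```

The design question then becomes which layer is the default — too little visibility feels opaque, too much recreates the complexity the system was meant to remove.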
Design, at this point, becomes less about interfaces and more about accountability.
What This Means for Product Teams
Once systems start acting, the centre of product work shifts. In traditional products, most decisions are expressed through the interface — how flows are structured, how actions are triggered and how information is organised. That’s where product quality is shaped.
In agent-driven systems, the critical decisions move deeper. They define how the system interprets intent, what signals it prioritises, what actions it is allowed to take and how it behaves under uncertainty. These decisions are not visible in the interface, but they determine whether the product works.
This changes how teams approach product development.
Instead of prescribing the happy path, teams define what the system cannot do. The agent is intelligent enough to figure out how to reach the goal; the team’s role is to set the guardrails. Think of it like managing a competent employee: you don’t dictate each step. You define the task, the scope, and the red lines — and trust them to find the best route.
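Guardrails of this kind can be expressed as a validation step over proposed actions rather than a prescribed path. The specific rules below — a spend ceiling and a blocked-vendor list — are hypothetical examples, not real constraints from any product:

```python
# Red lines the agent may not cross; values are illustrative only.
GUARDRAILS = {
    "max_order_value": 50_000,
    "blocked_vendors": {"vendor_x"},
}


def within_guardrails(action: dict) -> bool:
    """Check a proposed action against the red lines.

    The team never dictates how the agent reached this action;
    it only verifies the action stays inside the allowed scope.
    """
    if action.get("vendor") in GUARDRAILS["blocked_vendors"]:
        return False
    if action.get("value", 0) > GUARDRAILS["max_order_value"]:
        return False
    return True
```

The inversion matters: instead of enumerating every valid route, the team enumerates only what is forbidden, and trusts the agent with the rest.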
Product quality becomes a function of system behaviour. The interface becomes the layer where that behaviour is communicated.
What Comes After UI
If UI is no longer the primary layer of interaction, the next step is not replacing it with a new visual paradigm. What replaces it is a shift in where product logic lives.
Across AI-native systems, interaction is moving away from direct manipulation and toward delegation. Users express intent, systems act on it and the interface becomes the layer where those actions are surfaced, explained and, if necessary, corrected.
In that model, the quality of a product is no longer defined by how efficiently users move through screens. It is defined by how well the system interprets intent, how reliably it acts and how clearly it communicates its reasoning.
This is where agents stop being a feature and become the core product layer. The interface doesn’t disappear. But it is no longer where the product happens. It is where the product explains itself.

A Product Strategist with over 13 years of experience in marketing, product strategy, and branding. His love for analytics, funnels, and a structured approach ensures that the digital products we craft aren't just functional—they impress.


