
For years, UX design has been built on a stable foundation: the user is in control. They decide what to do, move through a flow, and complete actions step by step. The system responds to those actions, but it does not initiate them. Even as products became smarter, more adaptive, and increasingly automated, the overall structure remained the same. Interaction sat at the centre of the experience. That assumption is now starting to break.
OpenClaw and the shift
Tools like OpenClaw make this shift difficult to ignore. Instead of guiding users through interfaces, these systems operate on their behalf. They can read emails, generate responses, trigger actions across tools, organise information, and carry tasks through to completion without requiring constant step-by-step input from the user.
At first glance, this looks like a natural continuation of software evolution: fewer clicks, faster workflows, more intelligent automation. But underneath, something more fundamental has changed. The user is no longer the one doing the work.
And this is not a niche shift limited to experimental AI tools. According to Gartner, by 2028, at least 15% of day-to-day work decisions will be made autonomously by AI agents — a major departure from the interaction-driven software model most digital products are still built around.
When interaction is no longer the main event
Traditional UX is built around interaction loops. A user takes an action, the system responds, and the design challenge is to make that exchange as clear, efficient, and predictable as possible. Everything from navigation patterns to onboarding flows exists to support this structure. That model works well as long as the system remains reactive.
Agent-based systems introduce a different dynamic. But the shift is not simply that a single request can trigger many operations behind the scenes — a single “Place Order” button has always done that. The real change is structural: the interaction loop itself has been redesigned.
In traditional products, reaching a result requires completing a fragmented sequence of steps. The user moves through ten screens to get an outcome. In agent-based systems, the user sees a result after the first step. That result may not be perfect. But instead of retracing a workflow, the user provides feedback, the system adjusts, and a new result appears. The loop becomes: intent → result → feedback → better result.
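The loop described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not an implementation of any real agent: `run_agent`, `Draft`, and `delegate` are hypothetical names, and the agent itself is a stub that simply echoes the feedback it has received.

```python
# Sketch of the agentic loop: intent -> result -> feedback -> better result.
# The agent is a stub; all names here are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    revision: int

def run_agent(intent: str, feedback: list[str]) -> Draft:
    """Stand-in for a real agent call: produces a result that
    incorporates whatever feedback has been gathered so far."""
    notes = "; ".join(feedback) if feedback else "no feedback yet"
    return Draft(text=f"Result for '{intent}' ({notes})", revision=len(feedback))

def delegate(intent: str, reviewer) -> Draft:
    """The user states intent once, then only reviews and steers.
    `reviewer` returns a feedback string, or None to accept."""
    feedback: list[str] = []
    result = run_agent(intent, feedback)
    while (note := reviewer(result)) is not None:
        feedback.append(note)          # steer the agent, don't re-drive a workflow
        result = run_agent(intent, feedback)
    return result
```

Note what is absent from the sketch: there is no sequence of screens for the user to traverse. The only user-facing surface is the review step, which is exactly the structural change the loop describes.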
Interaction is no longer the mechanism through which the experience unfolds. It is where it begins — and where the user returns to steer.
According to Microsoft’s 2025 Work Trend Index, 75% of knowledge workers already use AI at work, while companies increasingly expect AI systems to coordinate and execute operational tasks rather than simply assist with them.
OpenClaw exposes the gap
OpenClaw is not important because it is the most advanced agent system on the market. It is important because it exposes a mismatch between how modern systems behave and how we still design interfaces around them.
Under the surface, OpenClaw operates less like a traditional application and more like a persistent operational layer. It has access to tools, files, communication systems, and external environments. It can chain actions together, maintain context between tasks, and execute requests with minimal direct supervision.
From the interface, however, most of this complexity disappears. A request goes in. A result comes out.
The interaction feels deceptively simple because the interface presenting it is simple. But the behaviour underneath is not. Between the request and the outcome, the system may have interpreted intent, prioritised actions, applied assumptions, accessed multiple tools, and made decisions the user never explicitly reviewed.
Traditional UX succeeds by abstracting complexity away from the user. It simplifies systems, reduces cognitive load, and hides operational details in order to make products feel intuitive.
That abstraction becomes much harder to sustain once systems start acting independently.
When UX changes its mandate
One of the core promises of UX has always been clarity. Good interfaces make systems feel understandable and manageable. Users may not know exactly how a product works internally, but they feel confident using it because the interaction model gives them a sense of control.
"With agentic systems, the design challenge is no longer about sustaining that sense of control through interaction flows. It is about building trust in a system that already acts."
Traditional UX was built from the ground up: designers mapped out how to help users execute their tasks, step by step. The system was a tool, and UX made it usable. Agent-based systems invert this relationship. A capable system already exists — a black box that can interpret, decide, and act. The designer’s job is to create clarity around what that black box does, so that users can trust it enough to delegate.
These are fundamentally different design tasks. Old UX helped humans solve their problems through interfaces. New UX helps humans give instructions to agents and trust the outcomes they produce.
Chat works remarkably well for expressing intent because natural language is flexible and low-friction. But conversation is not the same thing as execution.
As systems become more capable, chat starts collapsing under the weight of behaviour it was never designed to represent. It cannot reliably expose system state, ongoing operations, or the decision-making logic behind actions.
Researchers at Anthropic have pointed to this challenge: capability is scaling faster than oversight.
When workflows change their audience
One of the less obvious consequences of agent-based systems is that they transform the role of the workflow itself.
Traditional digital products are built around predefined paths. A user moves through a structured sequence of actions: open a page, fill out a form, confirm a decision, and move to the next step. UX design has historically been about refining these paths — making them faster, clearer, and easier to complete.
Agentic systems do not eliminate these structures. Instead, they redirect them. Workflows that once guided users now guide agents. Skills, prompt chains, and structured operational flows have become increasingly popular precisely because agents need guardrails. Without them, systems hallucinate, lose context, and produce unreliable results.
Agents have goals, and they can explore different paths to achieve them. But that flexibility is exactly why structured workflows matter more, not less. The difference is that the workflow’s audience has changed: it no longer walks a human through ten steps — it keeps an agent on track so it delivers the right result on the first pass.
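A workflow whose audience is an agent can be pictured as a fixed sequence of named steps, each paired with a validity check that rejects off-track output. The following is a toy sketch of that idea, with string transforms standing in for agent calls; every name and check here is illustrative.

```python
# A workflow as a guardrail for an agent: each step runs, is validated,
# and is retried if its output drifts off track. Names are illustrative.

from typing import Callable

# (step name, execute function, validity check)
Step = tuple[str, Callable[[str], str], Callable[[str], bool]]

def run_workflow(task: str, steps: list[Step], max_retries: int = 2) -> str:
    state = task
    for name, execute, is_valid in steps:
        for _attempt in range(max_retries + 1):
            candidate = execute(state)
            if is_valid(candidate):    # guardrail: only valid output advances
                state = candidate
                break
        else:
            raise RuntimeError(f"step '{name}' failed validation")
    return state

# Toy steps standing in for agent calls.
steps: list[Step] = [
    ("normalise", lambda s: s.upper(), lambda r: r.isupper()),
    ("summarise", lambda s: s[:12],    lambda r: len(r) <= 12),
]
```

The structure is the same as a classic user flow, but the checkpoints that once guided a person through screens now verify an agent's intermediate output before it propagates.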
This shift was especially visible during work on Happy. Traditional interaction patterns quickly started feeling restrictive because the system needed to respond dynamically to emotional context, behaviour, and changing user intent rather than guide people through predefined sequences.
But flexibility without structure creates a new design problem. Without guardrails, predictability drops. Without predictability, it becomes harder to trust how a system will behave under uncertainty.
When behaviour becomes the product
This shift becomes easier to understand when looking at how AI-native products are already evolving in practice.
In projects like Norvana, the goal was never simply to create a more visually sophisticated dashboard. Early concepts explored exactly that — organising health data into cleaner interfaces, visual summaries, and easier-to-read systems. But those interfaces alone could not answer the deeper question users actually cared about: what should happen next?
As the product evolved, the focus shifted away from displaying information and towards interpreting it. The system began to guide decisions, generate recommendations, and respond dynamically to changing user context. The value no longer came from visualising data more effectively. It came from translating complexity into meaningful action.
This is not entirely new. Backend logic, algorithms, and databases have always been central to product value. Behavioural mechanics and gamification have always been about shaping user actions. What has changed is the degree of autonomy: the system no longer just processes data or nudges behaviour through interface patterns. It interprets, decides, and acts on its own.
Across products like this, the interface gradually stops being the primary source of differentiation. System behaviour — how the product reasons, adapts, and acts — becomes the core of what users experience.
Why UX is no longer enough — from UX to AX
UX was designed to optimise interaction: clarity, usability, efficiency, navigation, and flow. These principles remain important, but they are no longer sufficient once systems begin acting independently.
"The core challenge is no longer whether users can complete a task. It is whether the system completes the right task in the right way under the right conditions."
That is not an interface problem. It is a behavioural one.
If UX describes how users interact with systems, then systems that act on behalf of users require a broader model. Action Experience — AX — begins to describe that shift.
But to understand why AX is different, it helps to ask: hasn’t automation always acted on behalf of users? Tools like Zapier, Make, and IFTTT have been triggering autonomous action chains for years. Nobody called that “AX.”
The difference is not in the autonomy itself. It is in the interaction model. Zapier connects predefined triggers to predefined actions. The user programs a rule, and the system follows it. There is no interpretation, no judgment, no adaptation.
AI agents introduce something closer to collaboration. In the emerging model, users do not press buttons or configure rules. They give tasks to agents, review results, provide feedback, and teach agents to improve over time. It is closer to working with a junior colleague: delegation, review, teaching, and gradual trust-building.
This reframes the design challenge entirely. UX was about making buttons and flows intuitive. AX is about designing delegation: how users give instructions that agents understand, how results are presented for quick evaluation, how feedback loops help agents learn, and how trust is built and maintained over repeated interactions.
In UX, success is often measured through friction reduction and usability metrics.
In AX, success depends on whether the system behaves appropriately over time: whether it interprets intent correctly, whether it acts reliably across different contexts, whether users can understand and intervene when necessary, and whether autonomy remains bounded and predictable.
Unlike traditional UX failures, which often create frustration or confusion, failures in AX create consequences. The system does not simply make an interface harder to use. It makes the wrong move.
Designing for action
As this transition continues, the centre of design work inevitably shifts. Less time is spent refining static flows and polishing interfaces. More time is spent defining behaviours, permissions, constraints, and operational boundaries.
What should the system be allowed to do autonomously? When should it ask for confirmation? How should it behave when the context is incomplete or ambiguous? What actions should remain irreversible, and which should be interruptible?
These are not purely visual or interaction-level decisions. They are structural decisions about how systems operate in the real world.
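The questions above can be made concrete as an explicit autonomy policy: for each action, the agent may proceed on its own, must ask first, or is blocked outright. This is a minimal sketch of the idea, assuming hypothetical action names; it is not drawn from any particular product.

```python
# An explicit autonomy policy: per action, act / ask / never.
# Action names are hypothetical examples, not a real product's API.

from enum import Enum

class Autonomy(Enum):
    ALLOW = "allow"      # act without asking: reversible, low stakes
    CONFIRM = "confirm"  # pause and ask the user first
    FORBID = "forbid"    # never perform autonomously: irreversible

POLICY = {
    "draft_reply":  Autonomy.ALLOW,    # easy to undo before anyone sees it
    "send_email":   Autonomy.CONFIRM,  # visible to others once sent
    "delete_files": Autonomy.FORBID,   # irreversible
}

def gate(action: str) -> Autonomy:
    """Unknown actions default to the safest option."""
    return POLICY.get(action, Autonomy.FORBID)
```

The design choice worth noting is the default: anything the policy does not name is forbidden, so the system's autonomy stays bounded even as its capabilities grow.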
This became particularly relevant in our work on AMS AI, where the product needed to automate procurement logic across complex operational flows. The challenge was not only designing an interface that users could navigate, but defining how the system itself should behave under different business conditions, priorities, and edge cases.
Without clear boundaries, even highly capable systems become difficult to trust. Not because they lack intelligence, but because they lack sufficiently defined behavioural constraints.
When mistakes become actions
In traditional software, most mistakes remain contained within the interface. A user clicks the wrong button, misunderstands a flow, or abandons a process midway through. These failures create friction, but they rarely continue operating after the interaction ends. The user remains the one executing the action, which means they also remain the final checkpoint before consequences occur.
Agent-based systems change that dynamic entirely. When systems act autonomously, mistakes no longer stay inside the interface. An incorrect assumption can trigger a chain of actions across tools, workflows, and environments before the user even notices something is wrong.
This is already visible in systems like OpenClaw, where relatively simple prompts can lead to unintended behaviour when autonomy is not properly constrained. As these systems become more capable, failures become less visible but more consequential.
Conclusion
UX was built for a world where interaction was the primary mechanism of control.
Users moved through defined flows, made decisions step by step, and systems responded accordingly. Design succeeded by making those interactions clearer, faster, and easier to navigate.
Agentic systems operate differently. As software begins acting on behalf of users, the interaction model shifts from step-by-step execution to delegation: giving instructions, reviewing results, providing feedback, and building trust over repeated interactions.
This does not make UX irrelevant. But it does expose its limits.
Designing systems that act requires more than interaction design alone. It requires designing how autonomy is guided, how trust is built, and how users can steer effectively once the workflow is no longer theirs to walk through, but theirs to direct.

A Product Strategist with over 13 years of experience in marketing, product strategy, and branding. His love for analytics, funnels, and a structured approach ensures that the digital products we craft aren't just functional—they impress.

