Execution Notes

Why adding AI tools does not fix execution

AI can increase speed inside a system. It cannot repair a system that is structurally unclear.

In the past two years, I have watched dozens of organisations implement AI tools with high hopes and disappointing results. The pattern is remarkably consistent. The leadership team identifies an area where AI could help—customer support, content creation, code generation, research. They select a tool, negotiate a contract, train the team. Usage begins. And then, six months later, they take stock and realise that almost nothing has changed. The tool is being used, sometimes heavily, but the outcomes that matter have not improved. Costs may have increased. Frustration definitely has.

The conventional explanation for this failure is that the tools are not ready, or that the teams are not adopting them properly, or that more training is needed. This is almost always wrong. The real reason AI tools fail to deliver value is that they have been inserted into organisations with broken operating systems, where the problem was never a lack of speed or capacity but a lack of clarity.

The seduction of tooling

There is something deeply appealing about the idea that better tools can solve organisational problems. Tools are concrete. They can be purchased, implemented, measured. They offer a clear narrative of progress: we did not have this capability before, and now we do. In a world of ambiguous challenges and contested solutions, tools provide certainty.

This appeal is particularly strong with AI because of its novelty and promise. The fear of falling behind is real. Competitors are adopting these tools. The technology is advancing rapidly. Waiting feels like a strategic risk. So organisations move quickly, sometimes too quickly, to get AI into their operations.

But this focus on tools obscures a more fundamental truth: execution problems are rarely tool problems. They are system problems. They arise from unclear ownership, from ambiguous decision rights, from workflows that were designed for a different era and never updated. Adding AI to a broken system does not fix the system. It accelerates it. It makes the brokenness happen faster.

The automation trap

Consider a common use case: using AI to draft customer support responses. On the surface, this seems like a clear win. The AI can generate responses faster than humans, freeing up the support team to handle more tickets. But look closer at the organisation that is implementing this tool, and you often find a more complex picture.

The support team is already unclear about what responses are appropriate in different situations. The escalation criteria are fuzzy—when should a ticket be escalated to engineering, when should it be handled by support, when should it be passed to sales? The knowledge base that support agents are supposed to draw from is out of date and poorly organised. Quality standards are inconsistent across the team.

In this environment, adding an AI tool that drafts responses does not solve these problems. It amplifies them. Now the team can generate more responses, faster, but those responses are based on the same unclear guidelines and inconsistent standards. The volume of work increases, but the quality does not. The tool creates the appearance of efficiency while actually degrading the customer experience.

This is what I call the automation trap: the belief that automating a process will improve it, without recognising that automation magnifies the characteristics of the process it automates. If the process is clear, efficient, and well-designed, automation makes it faster and cheaper. If the process is unclear, inefficient, or poorly designed, automation makes it faster at being bad.

Where AI actually helps

This is not to say that AI has no value. Used correctly, it can be transformative. But the conditions for it to create value are specific and often underestimated.

AI works best when the workflow is already well-defined, when the criteria for success are clear, when the human operators know exactly what they are trying to achieve and can recognise when the AI has achieved it. In these contexts, AI becomes a genuine multiplier—it lets skilled people work faster, or it enables operations at scale that would not be possible with human labour alone.

AI is also valuable as a diagnostic tool. When you attempt to implement AI and it fails to deliver value, that failure often reveals problems in the underlying system that were previously hidden. The attempt forces clarity about processes that have never been documented, about decision criteria that have never been articulated, about ownership that has never been assigned. The failure of AI can be the catalyst for the kind of systemic redesign that actually solves execution problems.

The sequence that works

If tools alone cannot fix execution, what can? The answer is a deliberate, sequenced approach that treats tool implementation as the final step, not the first.

Step one: Define the workflow. Before introducing any tool, map the workflow that the tool is supposed to support. Who does what, in what order, with what inputs, producing what outputs? Where are the decision points? What are the quality gates? This mapping often reveals problems that have been hidden by the familiarity of routine.

Step two: Clarify ownership and criteria. For every step in the workflow, identify who is accountable for it and what criteria define successful completion. If these cannot be stated clearly, the workflow is not ready for automation. The time spent on this clarification is never wasted—it pays dividends regardless of whether AI is ultimately introduced.

Step three: Redesign for clarity. Use the clarity gained from steps one and two to redesign the workflow. Remove steps that add no value. Combine steps that are redundant. Clarify decision rights. Update documentation. The goal is a workflow that is so clear and well-designed that it functions well even without AI.

Step four: Introduce the tool. Only after the workflow has been redesigned should AI be introduced. Now the tool is amplifying a system that works, rather than accelerating a system that does not. The conditions for success have been created deliberately, not assumed.

The real leverage

The organisations that get the most value from AI are not the ones that move fastest to implement it. They are the ones that use the prospect of AI as a forcing function to clean up their operating systems. They recognise that AI is not a substitute for clarity, but a reward for it. The work they do to prepare for AI—documenting workflows, clarifying ownership, establishing criteria—would have improved their execution even if AI had never been introduced.

This is the deeper lesson. The question is not whether to adopt AI. The question is whether your organisation is clear enough to benefit from it. The organisations that will thrive in an AI-enabled world are not those with the most sophisticated tools. They are those with the clearest operating systems, the most explicit decision rights, the most effective workflows. AI amplifies capability; what matters is which capability you are amplifying.

AI does not fix broken execution. Clarity does.