The deployment decisions are moving faster than the purpose conversations. In most organisations right now, AI tools are live, budgets are committed, and teams are actively using outputs. Few have written down what the initiative is meant to achieve, for whom, and at what cost to the people inside it.
Every data point on enterprise AI failure points to the same cause. MIT’s NANDA initiative found that 95% of generative AI pilot programmes fail to produce measurable financial impact. The failures stem from poor workflow integration and misaligned organisational incentives. The model did not fail. The thinking upstream of it did.
The question most organisations are not asking is not “can AI do this?” It is “what does this AI deployment produce, and for whom?”
Productivity is what happens when effort is applied efficiently toward an objective. AI can make an organisation enormously productive at things that do not matter, at things that cause harm, and at things that were never worth optimising in the first place. The efficiency is real. The direction may be entirely wrong.
Organisations reporting significant financial returns from AI are twice as likely to have redesigned end-to-end workflows before selecting modelling techniques. The insight here is not about methodology but about sequence. The organisations getting value from AI thought about what they were trying to produce before they chose how to produce it. Most organisations inverted this. They selected a tool, licensed a platform, or deployed a model, and then attempted to find a use case that justified the investment.
Only 15% of employees say their workplace has communicated a clear AI strategy. When strategy is absent at the workforce level, the problem definition at the project level is almost certainly degraded. Teams are optimising for outputs that were never formally agreed upon. The AI is working. Nobody is certain what it is working toward.
The ethics conversation around AI has become a compliance conversation. Organisations have policies. Some have governance frameworks. Most are focused on what AI must not do: must not discriminate, must not expose personal data, must not generate harmful content. These are necessary constraints. But a constraint is not a design principle.
The prior question, the one that determines whether an AI deployment is genuinely responsible before a single governance checkbox is ticked, is: who is affected by this, and in what direction?
71% of employees trust their employers to act ethically as they develop AI. They trust their employers more than universities, large technology companies, and technology start-ups. This is a significant and fragile thing. It is not trust earned by past behaviour. It is trust extended in advance, based on the belief that the organisation deploying AI on their behalf has thought carefully about what it is doing to them.
Half or more of global C-suite leaders worry that ethical use and data privacy issues are holding back employee AI adoption. The irony is that the thing holding back adoption is not ethics but the absence of a clear answer to the question employees are implicitly asking: is this for me, or is this being done to me?
The employee whose workflow is being automated. The customer whose interaction is now handled by a model. The colleague whose role is being redefined by a copilot that was selected without their input. Each of these is a design decision with consequences for real people. Most AI deployments treat these as edge cases to be managed after go-live rather than questions to be answered before it.
The organisations building durable value from AI share three structural characteristics that have nothing to do with the sophistication of their models.
They defined the outcome in human terms before the technical terms. Not “increase automation” or “reduce headcount in this function.” Instead: what does this make better for the person doing this work, and how will we know? The measurable business case follows from this.
They treated the people inside the system as stakeholders in the design. Employees need to understand why AI is being introduced, how it affects their roles, and what new opportunities it creates. The people closest to the workflow being changed hold the knowledge that determines whether the AI deployment will actually function in practice. Organisations that consulted them built better systems. Organisations that informed them after the fact built expensive ones.
They asked what success at scale produces. A pilot demonstrates feasibility. It does not reveal second-order effects. What happens to the quality of human judgement when AI handles routine decisions over time? What happens to institutional knowledge when it is no longer practised? What happens to the customer relationship when personalisation replaces genuine understanding? These are not hypothetical concerns. They are questions that purposeful AI deployment addresses before the rollout, not during the post-mortem.
The organisations that will look back on this period as having built something durable are the ones that paused long enough, before the procurement decision, before the pilot, before the board presentation on AI strategy, to answer a question that is easier to defer than to resolve: what is this deployment for?
Not in the sense of which business function it will touch. In the sense of what it produces for the people inside it, and whether that is worth producing.
AI does not have values. It executes objectives. The values of any deployment are entirely determined by the quality of thinking that happened before the first tool was selected. In most organisations, that thinking has been the shortest part of the process.
It should be the longest.
Sources: MIT NANDA Initiative 2025 · McKinsey AI in the Workplace Report 2025 · S&P Global Market Intelligence 2025 · BCG AI Value Survey 2025 · Gallup Workplace AI Strategy Survey 2024 · RAND Corporation AI Implementation Analysis 2025
If you’re dealing with comparable constraints, we’re open to a conversation.