Eighty percent of AI projects fail to deliver their intended business value, and, according to an MIT study in 2025, 95% of generative AI pilots generated zero measurable return.
When initiatives stall, the instinct is to blame the model. That is the wrong diagnosis, and an expensive one.
The models are rarely the issue; the environment around them is. And unlike model capability, that environment is entirely within an organization's control.
Chief Data Officer, Hippo.
AI models are evaluated under controlled conditions, including clean inputs, defined objectives, and stable variables. Enterprise environments are the opposite.
They are layered systems built over years, often decades, and each has its own data structures, update cycles, and embedded business logic. The model doesn’t arrive pre-adapted to that complexity.
Deliberately accounting for how data flows, how systems interact, and how outputs will be consumed is the implementation work that most organizations underestimate. That gap is where the majority of AI value is lost.
Why enterprise AI actually breaks
In my experience, AI failures consistently fall into one of three categories. Each is solvable, and the companies getting this right are already proving it.
1. Systems are not prepared
Most enterprise AI failures have nothing to do with the AI itself. The real problem is that most business systems are messy behind the scenes and weren't built to work together in the first place. They run on a mix of old software, disconnected cloud and on-site systems, clunky integrations, third-party tools, and years of workarounds layered on top of one another.
The result is inconsistent data, with the same information labeled or stored differently depending on where it sits, and staff fixing errors and gaps by hand to keep things moving. All of this complexity must be smoothed out before AI can work properly.
Overcoming this particular problem means homing in on integration: enforcing schema contracts at ingestion, using change-data-capture pipelines to maintain a consistent state, and separating model inference from operational workflows so failures don't cascade. Inputs should also be standardized before they reach the model, and outputs designed so that they integrate directly into downstream processes.
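A schema contract at ingestion can be as simple as a validation step that normalizes raw records and quarantines anything malformed before it reaches the model. The sketch below is illustrative only: the field names (`claim_id`, `loss_date`, `amount`) are assumptions for the example, not taken from any specific system.

```python
# Minimal sketch of a schema contract enforced at ingestion.
# Field names are hypothetical examples, not from any real system.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ClaimRecord:
    claim_id: str
    loss_date: date
    amount: float


def ingest(raw: dict) -> ClaimRecord:
    """Validate and normalize a raw record before it reaches the model.

    Rejecting bad input here keeps failures out of downstream
    operational workflows instead of letting them cascade.
    """
    try:
        return ClaimRecord(
            claim_id=str(raw["claim_id"]).strip(),
            loss_date=date.fromisoformat(raw["loss_date"]),
            amount=float(raw["amount"]),
        )
    except (KeyError, ValueError, TypeError) as exc:
        # Quarantine instead of propagating malformed data downstream.
        raise ValueError(f"schema contract violation: {exc}") from exc
```

In practice this boundary usually lives in the ingestion pipeline itself, so that every downstream consumer, model or otherwise, sees the same standardized shape.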
You can already see this in insurance claims, for example, where successful deployments sit alongside core systems, ingesting structured and unstructured data across first notice of loss, document processing, and adjudication workflows. The result is faster, more consistent decisions and higher throughput without proportional headcount growth. The model is the visible component and the integration is where value is created.
2. The data isn’t ready
There is a persistent and damaging myth that AI will fix bad data. It will not. If anything, it does the opposite, exposing bad data faster and more widely than any technology before it.
For too long, organizations have relied on experienced people to compensate for data that was incomplete, inconsistently formatted, or poorly contextualized. A skilled analyst knows which field to trust when two systems disagree, while a claims adjuster knows a particular source runs three days late.
But leaning on this human safety net has papered over structural data problems that were never properly solved. When AI arrives, it exposes everything.
With no institutional memory, no tolerance for ambiguity, and no ability to compensate for context it was never given, it simply processes the data, propagates it downstream, and bakes it into decisions, at scale and at speed.
The AI deployments generating real, measurable business value share one common denominator: their data was ready before the model was. This is not about generic data quality efforts, but rather, building data as infrastructure that includes canonical data models with persistent identifiers for core entities such as customers, policies, and claims.
It means data infrastructure with consistency between training and inference, and embedded lineage so outputs can be traced and validated. It also means governance that operates as part of the system, with definitions, access controls, and quality checks enforced continuously.
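One piece of that infrastructure, persistent identifiers with embedded lineage, can be sketched in a few lines. The system names and record shapes below are assumptions for illustration: the point is that every system derives the same stable ID for the same entity, and every record carries a trace back to its source.

```python
# Illustrative sketch: canonical identifiers with embedded lineage.
# System names ("crm", "billing") and record shapes are hypothetical.
import hashlib


def canonical_id(entity_type: str, natural_key: str) -> str:
    """Derive a stable, persistent identifier from an entity's natural key,
    so "customer 42 in the CRM" and "cust_42 in billing" resolve to one ID."""
    digest = hashlib.sha256(f"{entity_type}:{natural_key}".encode()).hexdigest()
    return f"{entity_type}-{digest[:12]}"


def with_lineage(record: dict, source_system: str) -> dict:
    """Attach lineage so any downstream output can be traced and validated."""
    return {**record, "_lineage": {"source": source_system,
                                   "canonical_id": record["canonical_id"]}}


# Two systems referring to the same customer now agree on identity:
crm = with_lineage(
    {"canonical_id": canonical_id("customer", "42"), "name": "Ada"}, "crm")
billing = with_lineage(
    {"canonical_id": canonical_id("customer", "42"), "balance": 120.0}, "billing")
assert crm["canonical_id"] == billing["canonical_id"]
```

Real deployments use master-data-management or entity-resolution tooling for this, but the invariant is the same: one persistent ID per core entity, with lineage attached at the point of ingestion rather than reconstructed after the fact.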
3. Organizations are not structured to support deployment
AI deployment demands more than software engineering. Software engineers are essential, but AI systems behave differently from traditional software.
They are sensitive to data quality, subject to drift over time as real-world conditions shift away from their training distribution, and require model selection judgment that lives in a different discipline entirely.
Data scientists bring exactly that: the expertise to assess whether a general-purpose LLM is the right tool or whether a more targeted or purpose-built agentic approach fits better, to design evaluations grounded in production reality rather than benchmarks, and to catch performance degradation before it becomes a business problem.
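Drift monitoring of this kind does not have to start sophisticated. A minimal sketch, assuming a single numeric feature and illustrative thresholds, is simply to compare a production window against the training baseline and flag standardized shifts for review:

```python
# Hedged sketch of drift detection: compare a live feature window
# against its training baseline. Thresholds here are illustrative,
# not recommendations for any particular system.
from statistics import mean, stdev


def drift_score(baseline: list[float], live: list[float]) -> float:
    """Standardized shift in a feature's mean. Large values suggest
    real-world conditions have moved away from the training distribution."""
    spread = stdev(baseline) or 1.0  # guard against a zero-variance baseline
    return abs(mean(live) - mean(baseline)) / spread


baseline = [10.0, 11.0, 9.5, 10.5, 10.0]   # training-time distribution
stable = [10.2, 9.8, 10.1, 10.4, 9.9]      # production window, no drift
shifted = [15.0, 16.0, 14.5, 15.5, 15.2]   # production window, drifted

assert drift_score(baseline, stable) < 1.0   # within normal variation
assert drift_score(baseline, shifted) > 3.0  # flag for human review
```

Production systems typically use richer tests (population stability index, KS tests, per-segment monitoring), but even a crude check like this catches degradation before it becomes a business problem, which is exactly the judgment call that belongs to data scientists rather than application engineers.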
Organizations that staff AI programs as software projects will keep hitting a ceiling that no model upgrade will solve.
AI is a distinct operational discipline, and no single team should carry the weight alone. Data engineers ensure pipeline reliability; data scientists design evaluation frameworks and monitor performance; platform teams manage deployment and observability; and governance teams take care of compliance and traceability.
Deloitte’s 2026 State of AI in the Enterprise research identifies the skills gap as the single biggest barrier to AI integration, with most companies responding through training rather than structural change. What actually shifts outcomes is treating team composition as a strategic decision made with the same deliberateness as model selection.
The real differentiator
The companies getting real value from AI usually have one thing in common – they all did the hard operational work first.
Their data is organized, their systems talk to each other, and their AI outputs plug into actual day-to-day workflows instead of sitting in a dashboard nobody uses. And when something breaks or drifts off course, there’s a process for catching it.
That’s the gap a lot of businesses are missing. Buying access to a powerful model is the easy part. Getting it to work reliably inside a real company – with messy data, disconnected systems, and teams working in silos – is where things fall apart.
Ultimately, AI is not a magic layer that can be dropped on top of broken foundations. The foundations must be fixed first, and that means investing in cleaner data, better infrastructure, and systems that can actually support automation at scale.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit

