
Why most AI projects fail & what you can do about it

Written by Magdalena Zawarska | April 11, 2025

The original article was published in October 2024 and updated in April 2025 with the latest research and developments.

Imagine this: it’s 2023, and AI is suddenly everywhere. From the release of ChatGPT to the constant chatter about automation, companies are scrambling to get on board. The promise? A revolution. Faster workflows, fewer errors, and innovation on a scale we’ve never seen before.

But fast forward to 2025, and reality has become even harsher. What was once a rush to implement AI has turned into a widespread struggle. The numbers still tell a sobering story: 42% of businesses are now scrapping most of their AI initiatives, up dramatically from 17% just six months ago.

So, what happened? Why are even more companies falling short of AI success than we reported last October? More importantly, how can you make sure that your AI projects don't just survive but thrive in this increasingly challenging landscape?

Recent research highlights that leadership misunderstanding and data quality challenges remain the dominant root causes of AI project failures. Across industries, these challenges are compounded by limitations in infrastructure, unrealistic expectations of AI, and a focus on technology rather than on clear business objectives.

The truth behind failed AI projects

Let’s be real - getting AI to work well is tough. What seems impressive in a demo often crumbles when it's time to go live. In fact, according to a recent Gartner report, at least 30% of generative AI projects will be abandoned after the proof-of-concept stage by the end of 2025. By other estimates, more than 80% of AI projects fail. This is twice the already-high failure rate in corporate IT projects.

A significant portion of failures is leadership-driven: many projects collapse because business leaders misunderstand objectives or set unrealistic expectations. But that’s not all. Other factors, like high costs, low accuracy, and resistance to adoption, also play critical roles.


  • High costs
    AI is expensive - there’s no getting around that. From maintaining infrastructure to updating models, the costs can spiral quickly, leaving companies questioning whether the return on investment is worth it. Gartner estimates that building or fine-tuning a custom generative AI model can cost between $5 million and $20 million, plus $8,000 to $21,000 per user per year. Using a generative AI API instead might cost up to $200,000 upfront and an additional $550 per user per year (see the rough cost comparison after this list).
  • Low accuracy 
    Building a prototype that works at 75% accuracy is relatively straightforward, but businesses need AI that performs at 90% or higher. That last 15%? Closing the gap means cutting the error rate from 25% to 10% - eliminating well over half of the remaining mistakes - which is significantly harder and takes far more work than expected. It's the difference between an AI that occasionally assists and one that consistently delivers value.
  • Low adoption
    Even if AI works, getting teams to embrace it is a different story. People are creatures of habit, and AI can feel like an unwelcome disruption. That's why, despite the hype, the percentage of employees consistently using AI tools like Copilot is low. Resistance to change, lack of understanding, and fear of job displacement all contribute to low adoption rates.

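To put Gartner's figures in perspective, here is a minimal back-of-envelope sketch comparing rough first-year costs of a custom model against an API-based approach. The cost figures come from the ranges quoted above (low-end estimates for the custom model, the quoted API figures); the team sizes are hypothetical.

```python
# Back-of-envelope comparison of first-year generative AI costs,
# using the Gartner ranges quoted above. All figures are rough estimates.

def custom_model_cost(users: int, build_cost: float = 5_000_000,
                      per_user_per_year: float = 8_000) -> float:
    """Low-end estimate: building/fine-tuning a custom model plus per-user run costs."""
    return build_cost + users * per_user_per_year

def api_based_cost(users: int, upfront: float = 200_000,
                   per_user_per_year: float = 550) -> float:
    """Estimate for consuming a generative AI API instead."""
    return upfront + users * per_user_per_year

if __name__ == "__main__":
    for users in (100, 1_000, 5_000):
        print(f"{users:>5} users | custom: ${custom_model_cost(users):>12,.0f} "
              f"| API: ${api_based_cost(users):>12,.0f}")
```

Even this crude calculation shows why ROI questions dominate: for small user bases the upfront build cost dwarfs everything else, and per-user costs keep compounding every year.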

These hurdles mean that AI projects often end in disappointment. Instead of celebrating, many teams are left explaining why their solutions aren’t delivering the expected results.


The AI reality check: only 7% of people are truly proficient

Here’s a number that might surprise you: just 7% of the workforce can be classified as truly proficient in using AI to make a real difference. These people save 30% of their time, unlock AI’s potential, and drive productivity. That leaves 93% of people either experimenting or not using AI at all. This gap is a significant factor in why AI projects fail. AI tools alone won't automatically lead to success. You need the right talent and training to make AI initiatives work. Without it, AI remains just another buzzword.

Beyond the obvious challenges

Beyond the obvious hurdles, there are underlying issues that sabotage AI projects:

  • Data quality and availability

AI thrives on clean, relevant, and well-structured datasets, but this is where many projects fall short. Poor data quality is a recurring issue, with up to 80% of an AI project’s time spent on cleaning and preparing datasets. Even organisations with extensive historical data often discover that their datasets lack the depth or structure required for AI applications. Data imbalance can skew predictions and reduce model accuracy for use cases like fraud detection or medical diagnostics. Meanwhile, data engineering teams, critical to building reliable data pipelines, are often undervalued and underfunded, further delaying project timelines.
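
To make the imbalance point concrete, here is a minimal sketch of the kind of sanity check a data engineering team might run before training. The column name, thresholds, and fraud example are hypothetical placeholders, not a complete data-quality framework.

```python
# Minimal data-quality sanity check before training: flag missing values
# and severe class imbalance. The "label" column name and the thresholds
# are hypothetical placeholders for your own dataset.
import pandas as pd

def basic_data_audit(df: pd.DataFrame, label_col: str = "label",
                     min_class_share: float = 0.01) -> list[str]:
    issues = []
    # Columns with many missing values usually need cleaning or exclusion.
    missing = df.isna().mean()
    for col, share in missing.items():
        if share > 0.05:
            issues.append(f"Column '{col}' has {share:.0%} missing values")
    # Classes that barely appear (e.g. rare fraud cases) can simply be ignored by a model.
    class_share = df[label_col].value_counts(normalize=True)
    rare = class_share[class_share < min_class_share]
    for cls, share in rare.items():
        issues.append(f"Class '{cls}' makes up only {share:.2%} of rows")
    return issues
```

Running checks like this early surfaces the data problems that would otherwise consume most of the project's time once modelling has already started.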

  • Lack of clear objectives

What's the problem you're trying to solve with AI? Vague goals like "improve efficiency" aren't enough. Successful AI projects have specific, measurable objectives. Without a clear target, even the most advanced AI can miss the mark.

  • Over-reliance on technology over people

AI is powerful, but it's not a magic solution. Companies often focus on technology and neglect the human element - the people who use, manage, and are affected by AI systems. Ignoring organisational culture, change management, and user training leads to resistance and failure.

  • Misaligned leadership

Many projects falter because leaders fail to align AI objectives with business goals. For example, they may request an AI system to optimise pricing without clarifying whether the goal is to maximise sales volume or profit margins, resulting in misaligned outputs. Additionally, leaders often underestimate the time and complexity required to train and deploy effective AI models, expecting results in weeks instead of months. Unrealistic expectations about what AI can achieve only compound the problem, leading to disappointment when models fail to deliver consistent or deterministic outcomes.

  • Lack of collaboration between teams

Data science teams frequently work in isolation from the rest of the organisation, creating a critical disconnect that dooms many AI projects. This siloed approach typically manifests in three damaging ways: First, data scientists develop sophisticated models without sufficient input from business stakeholders, resulting in technically impressive but commercially irrelevant solutions. Second, IT departments aren't involved early enough in the development process, creating technical debt when AI systems need to be integrated with legacy infrastructure. Finally, end-users - who determine adoption success - are often consulted only after significant development work has occurred, leading to resistance when the solution doesn't match their workflow needs.

  • Talent shortages

Building effective data science teams remains costly and time-consuming due to the continued shortage of AI expertise despite the growing demand. As of early 2025, the talent gap in AI has actually widened, with organisations competing fiercely for a limited pool of qualified professionals. Top AI specialists command salaries exceeding $300,000 in major markets, putting them out of reach for many companies, particularly mid-sized businesses.

The shortage extends beyond just data scientists to include machine learning engineers, MLOps specialists, and AI ethics experts - roles that have become essential as AI deployments grow more complex. This talent crunch forces many companies to choose between three unappealing options: paying premium rates for experienced hires, working with less qualified personnel, or delaying critical AI initiatives indefinitely. 

[NEW] Recent real-world AI failures

Recent high-profile failures illustrate these challenges:

Apple Intelligence

Rolled out in late 2024 as part of iOS 18, Apple's AI-generated notification summaries quickly generated controversy when they produced misleading news alerts for users. In a particularly troubling incident, a summary attributed to BBC News falsely claimed that Luigi Mangione, the suspect arrested over the killing of UnitedHealthcare's CEO, had shot himself. The hallucination spread rapidly across social media before Apple acknowledged the error, and in January 2025 the company temporarily disabled AI summaries for news and entertainment apps while it worked on making AI-generated content more clearly labelled. The episode damaged consumer trust in what had been a highly anticipated release and highlighted the dangers of deploying generative AI without sufficient guardrails and fact-checking mechanisms.

Air Canada Chatbot

The airline's customer service chatbot created significant legal and public relations problems when it incorrectly told a customer he could apply for a bereavement fare refund retroactively, contradicting the company's actual policy. When the customer tried to claim the refund, Air Canada initially refused to honour it, arguing that it could not be held responsible for the chatbot's advice. In February 2024, however, British Columbia's Civil Resolution Tribunal ruled that Air Canada must honour the refund its chatbot had promised, establishing an important precedent for corporate liability over AI-generated information. The case demonstrated how AI systems operating without clear business objectives and proper oversight can create significant financial and reputational damage.

Amazon Alexa

During the 2024 U.S. presidential election campaign, users noticed that Amazon's virtual assistant appeared to favour Kamala Harris in its responses to political questions, reportedly offering reasons to vote for her while declining the equivalent request about her opponent. The bias, apparently unintentional, stemmed from how the underlying model had been trained on internet data that carried its own biases. Amazon issued an emergency software update in September 2024. Still, the incident fuelled ongoing discussions about AI neutrality in politically sensitive contexts and raised questions about how companies should approach political content in consumer AI products. The case illustrated how data quality issues, particularly bias in training data, can undermine trust in AI systems.

How to succeed where others fail

Don’t just throw technology at the problem and hope for the best. AI success requires more than just good software - it needs the right talent, tools, and approach. Here’s what you can do to avoid the common pitfalls.

1. Build a custom AI stack

Off-the-shelf AI models might look good in demos, but they often fall short in real-world applications. Tailoring AI solutions to your specific industry - whether that’s finance, healthcare, or another field - delivers deeper insights and better accuracy. For instance, an AI model trained on general language data might not perform well in the medical domain without specialised training.

2. Implement guardrails for control

AI can sometimes feel unpredictable. Implementing guardrails helps ensure that AI stays aligned with your business needs, providing control over outputs without constant developer intervention. This includes setting ethical guidelines, compliance checks, and validation processes to prevent undesired outcomes. With the increasingly complex regulatory landscape of 2025, these guardrails have become even more essential to manage risk effectively.
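
As a minimal illustration of what a guardrail can look like in code, the sketch below validates a generated customer-facing answer against simple business rules before it reaches the user. The rules, patterns, and fallback messages are hypothetical examples, not a full guardrail framework.

```python
# A minimal output guardrail: check a generated customer-facing answer
# against simple policy rules before it is shown to the user.
# The rules and the fallback messages are hypothetical examples.
import re

BANNED_PROMISES = [r"\bguaranteed refund\b", r"\bfree upgrade\b"]
MAX_LENGTH = 800

def apply_guardrails(generated_text: str) -> tuple[bool, str]:
    """Return (approved, text). Reject answers that break simple policy rules."""
    if len(generated_text) > MAX_LENGTH:
        return False, "Answer too long - route to a human agent."
    for pattern in BANNED_PROMISES:
        if re.search(pattern, generated_text, flags=re.IGNORECASE):
            return False, "Answer makes a promise policy does not allow - route to a human agent."
    return True, generated_text

# Usage: approved, safe_text = apply_guardrails(llm_response)
```

In practice, guardrails layer several such checks - policy rules, compliance filters, and output validation - so that failures are caught automatically rather than discovered by developers after the fact.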

3. Invest in skilled teams

A major reason AI projects fail is the lack of the right talent. Bringing in AI specialists, whether it’s data scientists, engineers, or project managers, is crucial to moving from prototype to production. But don't stop there. Upskill your existing workforce. Provide training and resources to help your team understand and leverage AI effectively.

4. Choose developer-friendly tools

Select tools that streamline the integration process, helping your team easily deploy and manage AI solutions. Consider solutions that are agnostic towards large language models (LLMs) or AI providers, allowing you the flexibility to choose or switch between different technologies as needed. This approach prevents vendor lock-in and ensures you can evaluate new AI models as they become available.
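
One common way to keep that flexibility is to code against a small, provider-agnostic interface and wrap each vendor behind an adapter. The sketch below illustrates the idea; the class and method names are hypothetical, and the adapters return stubbed responses rather than calling a real SDK.

```python
# A minimal provider-agnostic abstraction: the application codes against
# one small interface, and concrete adapters wrap each vendor's SDK.
# Class and method names here are hypothetical, not a real SDK.
from abc import ABC, abstractmethod

class TextGenerator(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class HostedLLMAdapter(TextGenerator):
    def generate(self, prompt: str) -> str:
        # A real implementation would call the vendor SDK here;
        # a stub keeps the sketch self-contained and runnable.
        return f"[hosted model] response to: {prompt}"

class LocalModelAdapter(TextGenerator):
    def generate(self, prompt: str) -> str:
        return f"[local model] response to: {prompt}"

def answer_question(llm: TextGenerator, question: str) -> str:
    # Business logic depends only on the interface, so switching providers
    # means swapping an adapter, not rewriting application code.
    return llm.generate(question)
```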

5. Seamless integration

AI should complement existing systems, not complicate them. Developer-friendly tools and streamlined processes help make integration smoother, ensuring that AI deployments are effective and efficient. APIs, microservices architecture, and modular design can facilitate this integration.
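
As a simple illustration of API-first integration, here is a minimal sketch (using FastAPI as one possible choice) that exposes a prediction behind a small HTTP endpoint, so existing systems call a stable contract instead of touching the model directly. The endpoint path, payload fields, and pricing formula are hypothetical.

```python
# Minimal sketch: expose an AI prediction behind a small HTTP API so that
# existing systems integrate via a stable contract rather than calling the
# model directly. Endpoint path and payload fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PriceRequest(BaseModel):
    product_id: str
    quantity: int

class PriceResponse(BaseModel):
    predicted_price: float

@app.post("/predict-price", response_model=PriceResponse)
def predict_price(req: PriceRequest) -> PriceResponse:
    # Placeholder for a real model call; a fixed formula keeps the sketch runnable.
    predicted = 9.99 * req.quantity
    return PriceResponse(predicted_price=predicted)

# Run with: uvicorn main:app --reload
```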


A shift in management & investment

As AI becomes more embedded in business operations, it's not just the technology that needs to evolve - your approach to product development and team management has to change, too. The old ways of working in silos won't cut it anymore.

  • Embrace cross-functional teams
    Truly cross-functional teams bring together domain expertise and AI proficiency. When data scientists, engineers, and business analysts collaborate, they can bridge the gap between technical capabilities and business needs.
  • Cultivate an AI-driven culture
    Foster an environment where experimentation is encouraged, and failure is seen as a learning opportunity. Leadership buy-in is essential. Leaders should champion AI initiatives and model the behaviours they wish to see in their teams.
  • Invest in talent development
    It's not enough to hire a few costly AI experts. You need to upskill your existing workforce, giving them the training and confidence to join the small group of truly AI-proficient employees. This investment pays off in increased innovation and a more agile organisation.

To illustrate how these principles come together in practice at Pwrteams, have a look at how we helped a fintech company build an AI-powered price prediction engine that benefits businesses across industries.

[NEW] The evolving regulatory landscape

The regulatory environment around AI shifted significantly in early 2025. With the change in U.S. administration, there's an expectation of lighter federal oversight of AI, potentially rolling back parts of the previous administration's Executive Order on AI. However, individual states like Colorado and Tennessee have enacted their own laws governing AI usage and deepfakes.

This patchwork of regulations creates new compliance challenges for companies implementing AI solutions, requiring them to navigate different requirements across jurisdictions. Building compliance into your AI strategy from the beginning is no longer optional - it's essential for avoiding legal complications that could derail your projects.

Ready to thrive with AI?

AI is here to stay, but succeeding with it isn’t automatic. It takes the right strategy, the right technology, and, most importantly, the right team. With Pwrteams, you get all three. We’re here to help your AI projects not just survive - but thrive.

Let’s make AI work for you. Ready to take your next step? Get in touch with us today and find out how we can help your business succeed with AI.