The AI Backlash Is Here
What CTOs Need to Understand Before It’s Too Late
In April 2025, Duolingo’s CEO Luis von Ahn did something that most tech executives do every day—he talked about AI. He posted an internal memo on LinkedIn announcing that Duolingo would become an “AI-first” company, phasing out contractors whose work AI could handle and only allowing teams to hire if they couldn’t automate more of their work. It was the kind of announcement we’ve seen dozens of times over the past two years; the kind that usually gets a few likes from other executives and disappears into the feed.
This one didn’t disappear. It exploded.
Within weeks, Duolingo’s social media accounts were flooded with rage. The company lost over 400,000 TikTok followers. Comments on every post—even ones featuring a baby owl plushie asking for a cookie—turned into angry manifestos about AI replacing humans. The backlash was so severe that the company wiped its social media accounts entirely. Daily active user growth dropped to 40% year-over-year, the low end of expectations, down from 60% the previous year. Von Ahn later admitted on an earnings call that “I said some stuff about AI, and I didn’t give enough context.”
Here’s the thing: von Ahn wasn’t saying anything that dozens of other CEOs hadn’t already said. Klarna had announced similar plans. Shopify’s CEO told employees that AI was now a baseline expectation. So why did Duolingo become the lightning rod? And more importantly, what does this tell us about where we’re headed in 2026?
I’ve been watching this space closely, and I think the Duolingo incident isn’t an outlier—it’s a canary in the coal mine. In 2026, we’re entering what I’d call the AI Reckoning, a period where the gap between AI hype and AI reality is creating real consequences for companies that aren’t paying attention. As a CTO, you’re likely feeling pressure from all sides: executives want AI everywhere, employees are skeptical or outright hostile, customers are increasingly vocal about their preferences, and the ROI numbers aren’t adding up the way the vendors promised.
Let’s take a look at what’s actually happening and, more importantly, what you can do about it.
The Numbers Don’t Lie: Enterprise AI Is Struggling
If 2023 was the year of AI’s awakening and 2024 the year of frantic adoption, then 2025 will be remembered as the year of the reckoning. The data is stark, and honestly, it’s worse than most people realize.
A July 2025 MIT study found that 95% of enterprise AI pilot programs deliver no measurable impact on the P&L. Let that sink in for a second. Companies are pouring billions into AI initiatives, and nineteen out of twenty of those investments are producing nothing. Only 5% of custom enterprise AI solutions even make it to production, let alone deliver value.
This isn’t just MIT being pessimistic. IBM’s research found that less than half of IT leaders said their AI projects were profitable in 2024, with 14% actually recording losses. And here’s the number that should worry every CTO: AI project abandonment rates jumped from 17% in 2024 to 42% in 2025. That’s not a gradual shift in sentiment—it’s a rapid reassessment of whether AI is worth the trouble.
Gartner has officially placed Generative AI in the “Trough of Disillusionment” on their 2025 Hype Cycle. For those unfamiliar with Gartner’s framework, this is the phase where the original excitement wears off and early adopters start reporting performance issues and low ROI. It’s where reality catches up with hype. Gartner also predicts that through 2025, at least 30% of GenAI projects will be abandoned after the proof-of-concept stage due to unclear business value, poor data quality, or escalating costs.
The irony? Most companies are investing in the wrong places. MIT’s research found that more than half of GenAI budgets flow into sales and marketing tools, yet the biggest ROI comes from back-office automation—eliminating business process outsourcing, cutting external agency costs, streamlining operations. We’re chasing the flashy use cases while ignoring the ones that actually work.
There’s also a build-versus-buy problem. Companies that purchase AI tools from specialized vendors and build partnerships succeed about 67% of the time. Internal builds? They succeed only a third as often. Yet many organizations, especially in regulated industries, continue pouring resources into building proprietary solutions that are statistically likely to fail.
Customers Are Voting With Their Wallets
While enterprises struggle with ROI, something equally important is happening on the consumer side: people are getting tired of AI, and they’re not being quiet about it.
A HubSpot and SurveyMonkey survey released in August 2025 found that only 25% of consumers say they like or love AI in customer service. More than half—53%—actively dislike or hate it. A separate Gartner survey of nearly 6,000 customers found that 64% would prefer companies not use AI for customer service at all, and 88% have “major concerns” about the technology.
This isn’t just abstract survey data. It’s showing up in real business decisions. McDonald’s pulled an AI-generated Christmas ad after just three days following widespread backlash; viewers called it “creepy,” “poorly edited,” and “inauthentic.” Coca-Cola’s AI-reimagined holiday truck campaign triggered similar negative reactions. Google pulled an Olympics ad featuring AI after viewers criticized a father for using AI to help his daughter write a letter. The pattern is clear: when consumers can tell AI was involved, many of them don’t like it.
The cultural shift is so pronounced that Merriam-Webster chose “slop” as their 2025 word of the year—a term for the AI-generated content flooding social media and marketing that, as the editors put it, “oozes into everything like slime, sludge, and muck.”
What’s particularly interesting is the emergence of what I’d call the “Human Made” movement. Companies are starting to use the absence of AI as a selling point. iHeartMedia rolled out a “guaranteed human” tagline, promising users they won’t use AI-generated personalities or play AI-generated music. Their own research found that 90% of listeners—even those who use AI tools themselves—want their media created by humans. Apple TV’s hit series “Pluribus” from Vince Gilligan included “This show was made by humans” in the credits. Dove pledged to never use AI-generated women in advertising.
Here’s the insight that Duolingo missed, and that many companies are still missing: consumers don’t care about your cost savings. When von Ahn announced the AI-first strategy, there was nothing in the messaging about how it would improve the user experience or help people learn languages better. It was about efficiency and scale. As one analyst put it, “They basically cut the entire customer out of the messaging.”
Research from Washington State University makes this concrete: products and services described using AI terminology are consistently less popular. When participants were shown otherwise identical descriptions of a television, the one labeled “AI-powered” performed worse than the one labeled “new technology.” The word “AI” itself has become a turnoff.
That being said, I don’t think this means consumers are anti-technology. They’re anti-thoughtlessness. They’re pushing back against AI that’s deployed to cut costs rather than create value, AI that makes their experiences worse in the name of efficiency, AI that treats them as problems to be automated rather than people to be served.
Your Team Is Already Resisting
If you think the resistance is only coming from outside your organization, I have bad news: your employees are pushing back too, and the data suggests the problem is worse than most leaders realize.
A Kyndryl report found that 45% of CEOs say most of their employees are resistant or even openly hostile to AI. That’s not passive skepticism—that’s active opposition. A Cloud Security Alliance report puts an even finer point on it: up to 70% of change initiatives, including AI adoption, fail due to employee pushback or inadequate management support.
Why the resistance? The reasons are both emotional and practical.
On the emotional side, fear is the dominant factor. An EY survey found that 75% of employees worry AI could eliminate jobs, with 65% fearing for their own roles specifically. This isn’t paranoia; it’s a reasonable response to headlines about layoffs and CEO statements like “AI can do the work of 700 customer service agents.” When employees hear about AI, they hear about replacement, not augmentation.
On the practical side, the training gap is staggering. A survey of go-to-market professionals found that 62% cite lack of education and training as the primary barrier to AI adoption—and 68% received zero AI training from their employers. The Yooz 2025 Workplace Tech Resistance Report found that 48% of employees believe better training would significantly improve adoption outcomes. Meanwhile, 70% of leaders admit their workforce isn’t ready to leverage AI tools effectively.
There’s also the phenomenon of “shadow AI”—employees using personal AI tools for work tasks without company approval. MIT’s research found that while only 40% of companies have official LLM subscriptions, 90% of workers surveyed reported using personal AI tools daily for job tasks. Seventy-eight percent of professionals using AI at work bring their own tools. This creates a governance nightmare, but it also reveals something important: employees aren’t anti-AI. They’re anti-bad-AI-implementation. When given access to tools that actually help them, they adopt enthusiastically. When forced to use clunky corporate tools with inadequate training, they resist.
The generational divide is real but often misunderstood. The Yooz survey found that 55% of Millennials are excited to try new workplace tools, compared to just 22% of Baby Boomers. But here’s the nuance: nearly 1 in 4 Gen Z employees have refused to use a new workplace tool at least once. Younger workers may be more comfortable with technology, but they’re also more willing to push back when it doesn’t work.
Why This Is Happening: The Common Thread
If you look at all three of these trends—enterprise ROI failure, consumer backlash, employee resistance—there’s a common thread running through them. It’s not that AI doesn’t work. It’s that AI is being deployed thoughtlessly.
More often than not, the companies struggling with AI adoption share the same characteristics:
They’re treating AI as a cost-cutting measure, not a value-creation tool. The Duolingo announcement was fundamentally about reducing contractor costs and limiting headcount. The McDonald’s ad was about producing content faster and cheaper. The chatbots that consumers hate are deployed to reduce customer service headcount, not to improve customer experience. When AI is framed as a way to do less with less, people notice—and they resent it.
They’re deploying AI without clear use cases or success metrics. MIT’s research found that many enterprises pursue AI without a well-defined business case or KPIs tied to business goals. One Fortune 1000 executive captured the problem perfectly: “If I buy a tool to help my team work faster, how do I quantify that impact? How do I justify it to my CEO when it won’t directly move revenue or decrease measurable costs?”
They’re ignoring the human element. Only 14% of companies have aligned their workforce strategies with their AI investments. That means 86% of organizations are deploying AI without thinking about training, change management, or how their employees will actually use these tools. As one report put it, AI adoption is as much about change management as it is about technology.
They’re communicating poorly—or not at all. The Duolingo backlash wasn’t just about the policy; it was about the messaging. Von Ahn announced cost savings without explaining customer benefits. He talked about replacing contractors without addressing what this meant for quality. He used terms like “AI-first” that sounded like “people-last” to everyone listening.
In my opinion, the AI backlash isn’t really about AI at all. It’s about trust. Consumers don’t trust that AI will serve their interests. Employees don’t trust that AI won’t replace them. And frankly, the ROI data suggests that perhaps we shouldn’t trust AI vendors’ promises either.
Questions Worth Asking
So what’s the path forward? I’m not going to give you a checklist—you’ve seen enough of those. Instead, here are the questions I think technical leaders should be sitting with as they navigate this landscape.
Are You Solving a Problem or Chasing a Technology?
The most successful AI implementations I’ve observed start with a clear business problem, not with “we need to use AI.” But here’s the harder question: how many of your current AI initiatives can you trace back to a specific, measurable problem that someone was actually trying to solve? And how many started because a vendor gave a compelling demo, or because the board asked what your AI strategy was?
MIT’s research found that back-office automation delivers the highest ROI—not because it’s exciting, but because the problems are well-defined. There’s something uncomfortable in that finding. It suggests that the flashy use cases we’re drawn to—the ones that make for good conference talks and press releases—might be exactly the wrong places to invest.
What Are You Actually Optimizing For?
When you strip away the language of “transformation” and “innovation,” what is your AI strategy actually optimizing for? Cost reduction? Headcount efficiency? Customer experience? Employee productivity?
These aren’t the same thing, and the answer matters more than most companies acknowledge. The Duolingo backlash happened because the stated optimization—cost savings through contractor reduction—was misaligned with what customers and employees cared about. If your AI strategy is fundamentally about doing less with less, people will notice. The question is whether you’re being honest with yourself about that trade-off.
Who Bears the Risk of Failure?
When an AI implementation fails—and the 95% pilot failure rate suggests most will—who absorbs that failure? Is it the customer who can’t get their problem solved? The employee who looks incompetent because the tool they were told to use doesn’t work? The team that staked their credibility on a project that went nowhere?
The companies I see navigating AI successfully have thought carefully about this question. They’ve built clear escalation paths. They’ve created space for human override. They’ve designed systems where AI failure doesn’t cascade into human failure. But more importantly, they’ve internalized that deploying AI isn’t just a technology decision—it’s a decision about who you’re willing to let down when things go wrong.
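To make that concrete, here’s a minimal sketch (in Python, since the pattern is language-agnostic) of what a confidence-gated escalation path can look like. Every name in it (`SupportDesk`, `route_query`, the 0.75 threshold) is an illustrative assumption, not any particular company’s system:

```python
from dataclasses import dataclass, field

# Illustrative cutoff, not an industry standard; calibrate per use case.
CONFIDENCE_THRESHOLD = 0.75

@dataclass
class AIResult:
    answer: str
    confidence: float  # assumed to come from the model or a separate verifier

@dataclass
class SupportDesk:
    human_queue: list[str] = field(default_factory=list)

    def ai_answer(self, query: str) -> AIResult:
        # Toy stand-in for a real model call.
        canned = {"reset password": "Use the 'Forgot password' link on the sign-in page."}
        for key, answer in canned.items():
            if key in query.lower():
                return AIResult(answer, confidence=0.92)
        return AIResult("I'm not sure.", confidence=0.20)

    def route_query(self, query: str, wants_human: bool = False) -> str:
        """Answer with AI only when confident; otherwise hand off to a person.

        The design goal: an AI failure becomes a routing decision,
        not a dead end the customer or employee has to absorb.
        """
        if not wants_human:
            result = self.ai_answer(query)
            if result.confidence >= CONFIDENCE_THRESHOLD:
                # Disclose AI involvement instead of passing it off as human.
                return f"[AI-assisted] {result.answer}"
        # Escalation path: low confidence or an explicit request for a human.
        self.human_queue.append(query)
        return "Connecting you with a human agent."

desk = SupportDesk()
print(desk.route_query("How do I reset password?"))          # AI answers, disclosed
print(desk.route_query("My account was hacked!"))            # low confidence -> human
print(desk.route_query("Refund please", wants_human=True))   # user override
```

The mechanics are deliberately simple; the point is the shape. Confidence gating, an explicit human override, and disclosure of AI involvement are all design decisions you can make before a pilot ships rather than after the backlash.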
What Would Genuine Transparency Look Like?
Most companies say they’re transparent about their AI use. Few actually are. What would it look like to tell your employees, honestly, what AI means for their roles over the next three years? What would it look like to tell your customers, clearly, when they’re interacting with AI versus a human?
The research suggests that consumers don’t hate AI—they hate feeling deceived. They hate chatbots that pretend to be human. They hate marketing that looks authentic but isn’t. The backlash isn’t against the technology; it’s against the lack of honesty about it. If you’re not willing to be transparent about your AI use, that’s worth examining. What are you afraid the transparency would reveal?
Are You Investing in Adoption or Just Deployment?
There’s a telling statistic in the research: 68% of employees received zero AI training from their employers. Meanwhile, 90% of workers use personal AI tools for job tasks—tools they found and learned on their own. The gap between those numbers reveals something important: employees aren’t resistant to AI. They’re resistant to bad implementations forced on them without support.
Deployment is buying the tool. Adoption is everything that happens after. The companies treating AI as a procurement decision are failing. The ones treating it as a change management challenge—investing in training, creating space for experimentation, building feedback loops—are the ones seeing results. Where is your organization on that spectrum, honestly?
What’s the Counterfactual?
Here’s a question that doesn’t get asked enough: what happens if you don’t deploy AI in a particular use case? Not “what opportunities do you miss,” but genuinely, what’s the downside of waiting?
The pressure to move fast on AI is intense—from boards, from competitors, from vendors. But the data suggests that moving fast without clear use cases, adequate training, and honest communication is worse than not moving at all. Failed AI implementations don’t just waste money; they burn trust, create cynicism, and make future initiatives harder. Sometimes the best AI strategy is patience.
The Road Ahead
I believe the AI backlash isn’t going away in 2026. If anything, it’s going to intensify. Consumer expectations will continue to rise. Employees will become more vocal about their concerns. And the ROI pressure will only increase as boards and executives demand returns on the massive investments made over the past two years.
That being said, this isn’t a reason to abandon AI; it’s a reason to be smarter about it. The companies that will thrive in 2026 and beyond are the ones that treat AI as a tool to augment human capabilities rather than replace them, that invest in training and change management rather than just technology, that communicate transparently about their AI strategies, and that measure success based on real business outcomes rather than hype.
The Duolingo story has a postscript worth noting. Despite the backlash, the company’s financials remained strong. Revenue projections exceeded $1 billion, and the stock surged nearly 30% after earnings. On the surface, you could argue the strategy worked.
But I’d push back on that interpretation. What Duolingo demonstrated is that you can trade long-term customer trust and brand reputation for short-term revenue and growth—and the bill won’t come due immediately. The user growth slowdown was real. The brand damage was measurable. The goodwill burned with their most engaged community members doesn’t show up on a quarterly earnings call, but it accumulates. We’ve seen this story before with companies that optimize for metrics over relationships; the consequences tend to be lagging indicators, not leading ones.
As a CTO, you have a choice. You can push forward with AI initiatives that prioritize cost savings over value creation, that ignore employee concerns, that treat customers as problems to be automated, and that let you claim, vaguely, that you’re “AI-first.” Or you can take a different path—one that’s harder in the short term but far more sustainable in the long run.
The backlash is here. The question is: what are you going to do about it?