When Leadership Talks AI Without Comms, Everyone Loses

In late April, Duolingo made headlines – not for its quirky language lessons, but for the language used by CEO Luis von Ahn. He announced an “AI-first” shift, positioning it as the nucleus of Duolingo’s business strategy. The intention was clear: innovate, lead the conversation and redefine education technology.

What followed was far from the reception von Ahn had hoped for.

The criticism focused not only on the use of AI, but on the tone, timing and framing of the news. In particular, von Ahn’s publicly available companywide memo stating the company would “gradually stop using contractors to do work AI can handle” was seen as dismissive of the human cost of that transformation.

In the weeks that followed, Duolingo faced reputational challenges that are increasingly common when major business decisions are made without a thorough evaluation of communications strategy. It’s become a timely case study in how even well-intentioned innovations can falter when communications are not treated as a strategic business function.

The gap between strategy and messaging

At its core, Duolingo’s shift to AI reflects an undeniable and broad business trend. Organizations are rapidly adopting generative AI and automation to increase efficiency, reduce costs and improve scalability. While leadership almost always declares these moves necessary to remain competitive, they are not neutral.

When business transformation impacts people—particularly the very people who build it—how leadership communicates matters as much as what is being communicated. In Duolingo’s case, comments from von Ahn emphasizing experimentation and efficiency, combined with previous AI-induced job reductions, raised concerns about whether the company fully considered the human element of its AI strategy.

Those concerns were further compounded by von Ahn’s comments less than two weeks later, in which he said AI might be better suited than human teachers for educating children, an assertion that not-so-subtly suggests he envisions AI as a replacement for flesh-and-blood educators. While childcare services and specialized learning environments might still need human educators under such a vision, von Ahn’s remarks demonstrate a disregard for the complexity and nuance required to become a qualified teacher of future doctors, lawyers and engineers.

The absence of a clear, empathetic narrative invited public skepticism. It also created room for assumptions, misinterpretations and reputational risk, all of which will undoubtedly fall to von Ahn’s communications and risk teams to clean up. And although von Ahn recently tried to clarify his blunder by stating he “does not see AI as replacing what our employees do,” the damage has been done.

What went wrong: A communications perspective

Beyond the substance of the announcement, the problem lies in the breakdown between leadership and communications teams. When executives bypass or sideline communications teams in framing sensitive and complex topics like AI adoption or workforce changes, they not only jeopardize public perception but also expose the organization to avoidable reputational and operational risks.

This raises a significant question: How involved should communications teams be on these issues? Here’s what can happen when communications teams’ counsel is seriously considered or implemented:

  • Message discipline is strengthened across leadership: Major strategy pivots, especially those involving significantly disruptive transformations, demand carefully coordinated messaging at every level. When communications teams help shape the narrative early, they can coach executives on tone, timing and terminology, and even on what to avoid saying, to ensure the company speaks with a unified voice.
  • Brand voice stays intact: A well-crafted message reflects the company’s values, not just a single executive’s view. Communications teams help leaders articulate bold visions without losing sight of empathy, humanity or business culture nuances.
  • The “why” remains visible: Change, good change, is easier to understand when stakeholders know the true intentions behind it. Strategic communication ensures bold moves are framed in the right context—how they will benefit users, support employees and position the company for long-term growth.

In Duolingo’s case, this proactive approach might have framed the shift to AI as a long-term value add, paired with investments in talent and partnerships with educators. Instead, it was communicated as a pure efficiency gain and a race to be first, to the detriment of human workers.

Lessons for every business leader

The Duolingo episode offers several takeaways for executives considering similar transformations:

  • Innovation is not a substitute for communication: Regardless of how forward-thinking the strategy is, it must be explained in a way that reflects empathy, clarity and foresight.
  • AI announcements require specialized messaging strategies: These are not routine product updates. Anything related to AI adoption must be treated with the same rigor and care as earnings reports, regulatory disclosures or acquisitions.
  • Internal stakeholders are your first audience: If employees feel blindsided, undervalued or expendable, the external message will most certainly fall flat.
  • Reputation is cumulative: Every comment from a CEO builds—or erodes—brand credibility. Once trust is lost, it’s difficult to get it back.

AI is here to stay, and it’s changing the way we operate. But it should also change the way we communicate. The pace of innovation must be matched by the discipline of communications strategy. Otherwise, companies not only risk internal friction and external scrutiny, but also long-term damage to their most valuable asset: trust.

AI: How to Avoid Becoming a Cautionary Tale

AI will cure what ails you.

That seems to be the mantra of the 2020s. If you have a problem, it appears the solution is to implement artificial intelligence. However, AI is not a cure-all. While AI can be an incredibly powerful tool, it isn’t perfect and there are cautionary tales to consider as countless organizations incorporate AI.

Glitch in the System

Any adult functioning in the digital world knows technology sometimes fails to live up to its promise. AI is not immune to being glitchy, especially when humans fall short in their quality control roles, many of which are still evolving along with the tech. Examples of AI snafus abound:

  • Less than two years ago, Reuters reported on a U.S. District Judge who sanctioned two New York attorneys when their ChatGPT-built brief included six fake case citations.
  • Last spring, Google was pilloried by users and media alike when its then-new AI capabilities rollout resulted in a cascade of false information—including telling users to eat glue and rocks.
  • And last summer, Fast Company produced a cringe-worthy list of brands whose AI-driven marketing efforts ranged from total failures to deeply offensive missteps, including household names like Toys “R” Us, McDonald’s and Sports Illustrated.

Reliance Risk

The great risk of AI is becoming overly reliant on it. Reliant on its promise. Reliant on its ease of application. Reliant on its accuracy.

Large language models, or LLMs—the engines that drive most generative AI tools—train on massive content libraries. As a result, AI is prone to repeating, in whole or in part, both the words and style of some of the content on which its LLM trains. These AI tools aren’t designed to violate copyright laws. Rather, they are working with what they know, and what they know is existing, written—and often copyrighted—content. The intent is to mimic human creativity with enhanced, faster output. The risk, of course, is not only plagiarism, but also inaccuracies due to AI hallucinations, as well as content that, frankly, often falls short of being truly creative or distinctive.

Both the quality and legality of AI-generated content will be adjudicated in the court of public opinion, as well as in courts of law, for the foreseeable future. Meanwhile, humans are working to catch up. Plagiarism software is continually being stood up and refined to catch the errant bot-writer. Publishers and others are setting policies for how they will handle contributed AI-generated content. And the legal industry is, most likely, viewing AI as the next asbestos as everyone considers its implications.

Practical Realities

Learning to live with, and employ, AI is an evolving process. What business and nonprofit leaders must consider now regarding their use or incorporation of AI is this:

  • Brands and business leaders trying to position themselves as thought leaders will fail—possibly in very public ways—if they cede their expertise to the expedience and perceived accuracy of AI where content is concerned.
  • Leveraging AI as a starting point in the creative process can create efficiencies. Relying on AI to drive that process is simply lazy.
  • From courts to publishers, as well as clients and consumers, much of the early AI-driven content we are seeing runs the gamut from the unacceptable to the merely unpalatable, with limited exceptions.
  • Developing policies around how and where to apply AI in your organization is essential to avoid being left behind.
  • Closed AI—essentially a non-publicly accessible AI model—is the only practical approach to AI implementation for many businesses to protect sensitive company and/or client data.
  • A detailed dive into whether and how your organization’s errors and omissions liability insurance addresses claims arising from your use of AI is most definitely warranted.
  • AI can be a remarkable improvement to one’s operational efficiency and even client engagement, but only if thoughtful guardrails are in place with humans overseeing the work and conducting frequent quality and accuracy checks.

Without question, AI is and will continue to shape the future of business. Guiding that process with high ethical standards, transparency and rigorous human oversight is required if non- and for-profit organizations are to maintain the trust and confidence of those they serve.

Fallen Arches: McDonald’s AI Failure Is a Caution for Business Leaders

Not lovin’ it. That’s the takeaway from McDonald’s recent abandonment of AI for its drive-thru ordering. The fast-food chain’s decision to end its AI experiment speaks to a larger trend: AI is not yet ready to solve a host of problems for business.

Artificial intelligence offers the promise of a new and more efficient business environment … just not quite yet.

McDonald’s hoped AI-driven drive-thru ordering would make ordering more accurate and efficient. However, the tech proved no match for humans in the wild. Background noise, the nuances of human communication and, I imagine, some of the hallucinations AI technology is famous for combined to generate customer-frustrating errors, including one infamous order for more than $250 worth of Chicken McNuggets. While the fast-food chain says it learned from the experiment and has plans for future AI implementations, the reality is the Golden Arches sees AI as a future-state tool rather than a current operational solution.

Other industries are finding the same.

In an interview with Insurance Journal last November, INSTANDA CEO Tim Hardcastle discussed the challenges of AI transparency, saying the full transformational impact of AI in insurance remained a few years away.

What frustrates consumers — and many business leaders — about AI is really a perception problem. While companies boast about the promise of AI, the truth is we are in a state of ongoing beta testing. Even Google, the de facto leader in online search, is feeling its way through as end users find significant inaccuracies and false answers to certain queries in its AI search tool.

Where does this leave businesses and the race to AI implementation?

We have been here before. In the late 2000s, businesses raced to adopt social media. “We have to be there” was the mantra, while the reasons for being on these platforms were somewhat opaque. We saw a similar approach during the rise of voice search and voice recognition. And I believe we are in a similar place today with AI.

Absent a new AI tool to promote, some business leaders perceive they are falling behind. However, aside from some common and long-standing applications, AI is currently a solution in search of problems.

Don’t misunderstand me. I think AI will eventually change how business is done, radically in fact. Just not yet. We haven’t worked out the bugs. The guardrails aren’t in place. And we haven’t fully mapped the real, day-to-day challenges AI might address, although that has begun.

The perception problem extends to consumers. AI is seen as our flying car, and by God, it’s here and we want it to work.

Neither the technology industry nor others have messaged appropriately on AI. They haven’t told us this is one big beta test. They haven’t cautioned us to expect errors. Sure, the media calls out egregious examples, but the businesses incorporating AI could also be more transparent. We haven’t set expectations appropriately; we talk about the transformative power of AI, and consumers assume we mean now, not in the future.

When the problem is perception, you have to change people’s perceptions.

Business leaders — from fast-food chains and insurance providers to the financial services sector and big box retailers — would benefit tremendously from better AI messaging. Talk about what AI can mean for your company as well as your customers, but caution that this is a learning process. Survey your consumers. Offer research. Invite consumers to help you test your new AI tools.

I’m confident a majority of consumers would get it, and many would be willing to be part of this great, new digital industrial revolution experiment. But we must call it what it is: an experiment. We must move consumer perceptions of AI from a current silver bullet to a potential future game-changer.

There’s precedent for this: the Human Genome Project. The public conversation around this 10+ year effort was about possibility, potential and promise, not a current-state solution to contemporary problems. The messaging, from the researchers, the media and governments, was clear, which set the expectations — and the perceptions — of the public.

We don’t have an AI problem. We have a perception problem, and we have the tools to address it. What we need is better messaging to meet the moment.