Why CMOs Shouldn’t Trust the AI Confidence Boom
Originally published on CMSWire on May 16, 2025
87% of marketers are confident in the accuracy of AI content. The reality? More than half of it has significant flaws.
The Gist
- AI confidence gap. Most marketers trust AI output, but research shows much of it is flawed or inaccurate.
- Smart use matters. AI works best in low-risk areas. Avoid using it where your brand is on the line.
- Keep humans in. Human oversight is essential when AI touches anything public-facing or strategic.
Marketers have always been early movers on the technology adoption curve. Over the years, we’ve had to dive headfirst into automation, social media, SEO, QR codes, chatbots and adtech, among other technologies. So it’s no surprise that marketers are embracing AI faster than most.
According to a recent survey, 63% of marketers are using generative AI, and 79% plan to expand their adoption in 2025. Another survey found that 85% of marketers are using AI tools for content creation. Both figures are well above the 37% of U.S. workers overall who say they use AI in their jobs.
Our willingness to adopt is a good thing. It keeps us in step with the markets we need to reach and one step ahead of the competition. But AI is a technology unlike any other we have encountered. It comes with bigger risks, bigger rewards and more unknowns, which means we need to bring a different kind of thought and intentionality to the ways in which we apply it to the marketing mission.
Quantifying AI Risks in Marketing
Data suggests that many marketers lack an understanding of the limitations of AI, even as they forge ahead with new AI initiatives.
To start, 87% of marketers are confident in the accuracy of AI content. But that confidence is misplaced. More than half (51%) of AI-generated content has “significant issues of some form,” and a whopping 91% has at least “some issues.” At best, these errors erode brand trust and marketing effectiveness. At worst, they can lead to costly lawsuits, like the one involving Air Canada’s AI chatbot.
Accuracy isn’t the only risk associated with AI content. The recent fiasco with Meta’s AI chatbots shows how quickly AI can veer into off-putting weirdness, and a meta-analysis of more than 2,000 marketing campaigns found that human-generated content outperformed generative AI on both engagement and conversion rates.
3 Ways to Mitigate AI Risks
The key to balancing AI’s risks and rewards is to understand it, apply it intentionally rather than opportunistically, and monitor it closely.
Understand AI
As computational scientist and entrepreneur Stephen Wolfram said, ChatGPT is “just adding one word at a time.” It predicts the most likely next word given the words that came before it, which results in middle-of-the-road output that draws on existing patterns. It can’t give voice to uniquely human emotions and experiences, and it can’t understand the intricacies of your company’s values, mission and voice. By understanding those limitations, you can work within them to apply AI where it will produce the greatest value (and do the least harm).
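For readers who want to see what “adding one word at a time” means in practice, here is a minimal sketch in Python. Everything in it, the phrases, the probabilities and the most_likely_next_word helper, is invented for illustration; it is not how ChatGPT is built, only a toy version of the same next-word-prediction step.

```python
from typing import Optional

# Hypothetical next-word probabilities, invented for illustration.
# A real model performs this same step over a learned vocabulary of
# tens of thousands of tokens and the full preceding context.
NEXT_WORD_PROBS = {
    "our brand is": {"trusted": 0.40, "innovative": 0.35, "affordable": 0.25},
    "our brand is trusted": {"by": 0.60, "and": 0.40},
    "our brand is trusted by": {"customers": 0.70, "experts": 0.30},
}

def most_likely_next_word(phrase: str) -> Optional[str]:
    """Return the highest-probability next word, or None when the table runs out."""
    candidates = NEXT_WORD_PROBS.get(phrase)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

phrase = "our brand is"
while (word := most_likely_next_word(phrase)) is not None:
    phrase = f"{phrase} {word}"
    print(phrase)
# Output:
#   our brand is trusted
#   our brand is trusted by
#   our brand is trusted by customers
```

The point of the toy: at no step does the process consult your brand guidelines or check a fact. It simply extends the text with the statistically likely continuation, which is why unreviewed output trends toward the generic and can drift from your company’s voice.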
Apply AI intentionally
It’s hard to find a marketing tool that isn’t AI-enabled. Ninety-three percent of marketers report that new AI features were added to their tech stack last year. Access to AI is not the problem; knowing where to apply it is. The purpose alignment model developed by Niel Nickolaisen, an IT thought leader and author, is a helpful reference framework.
Given AI’s current limitations, you may want to avoid applying AI to any marketing activity that is “differentiating” (mission-critical and a market differentiator). This would include customer experience, revenue generation and market positioning, among others. Instead, start by limiting AI to low-risk, low-priority areas. For example, use it to draft internal policies, procedures and job descriptions to free up more time and resources for differentiating activities.
Monitor AI closely
Human oversight is critical to all martech activities, but with a black-box technology like generative AI, it’s even more important to keep a human in the loop. This is especially true when AI supports customer-facing outputs and experiences. Human marketers bring the context, nuance, empathy, experience and accountability that AI can’t replicate, and those qualities are what safeguard brand fidelity, market relevance, messaging alignment and factual accuracy.