AI Adoption Is Surging in Advertising, but Is the Industry Prepared for Responsible AI?
AI is now a regular part of marketing and advertising: more than half of marketers already use GenAI for creative content and audience targeting, and nearly all plan to expand AI use next year, especially for content development and audience engagement. But while adoption is accelerating, safeguards are not. Over 70% of marketers have encountered an AI-related incident in their advertising efforts, such as hallucinations, bias, or off-brand content, yet fewer than 35% plan to increase investment in AI governance or brand integrity oversight over the next 12 months.
The Research Landscape
This research, conducted in partnership with Aymara, surveyed 125 advertising industry executives in the U.S. using the IAB Insights Engine platform powered by Attest. The data paints a striking picture: AI adoption, and with it AI-related risk, is outpacing safeguards. Industry leaders are raising the alarm.
AI Is Everywhere in Marketing, and Still Growing
AI is now part of the marketing toolkit across the board. Over half of marketers already use it for creative content, audience targeting, and customer support, and nearly as many apply it to predictive analytics. Usage is set to grow: 58% plan to increase AI use for creative generation in the next year, with further expansion planned for chatbots, targeting, and forecasting. AI isn't just a trend; it's quickly becoming core to how marketing gets done.
Concerns with AI Are High
Marketers are well aware of the risks of AI-generated advertising. Top concerns include misinformation and deepfakes, loss of creative control, and brand integrity risks from offensive or harmful outputs. Many also worry about consumer trust: 37% fear audiences will distrust ads made with AI. Other concerns include bias and fairness, regulatory compliance, and the challenge of monitoring AI content at scale. Some also flagged the threat of adversarial prompts, such as jailbreaks that trick models into unsafe behavior.
The takeaway: AI can pose serious ethical and quality risks, and marketers know these issues can damage trust and brand reputation. That's why over 60% support labeling AI-generated ads, with only 15% opposed, a strong signal that the industry sees transparency as a trust safeguard.
AI-Related Incidents Are Already Occurring
These aren’t future risks. AI-related issues are already affecting advertising campaigns. In the research, 70% of marketers reported at least one AI incident. Common problems included hallucinated outputs, biased or inappropriate content, and off-brand or offensive material. The consequences were significant: 40% had to pause or pull ads, over a third dealt with brand damage or PR issues, and nearly 30% had to conduct internal audits. Only 6% said the impact was minimal.
Patchy Safeguards and a False Sense of Security
Despite growing risks, AI oversight remains inconsistent. Most teams rely on human review and brand integrity checklists, which are important but basic steps. More advanced practices, such as consulting external AI ethics experts, running red-team testing, and using automated evaluation tools, are far less common. Alarmingly, 10% of respondents either do nothing or aren't sure how they manage AI risks.
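To make "automated evaluation tools" concrete, here is a minimal sketch, in Python, of the kind of pre-launch screen such a tool might run: checking generated ad copy against a brand blocklist and flagging unsubstantiated-claim language for human review. The term lists and patterns are hypothetical placeholders rather than a vetted policy, and a screen like this complements, not replaces, human review and red-team testing.

```python
import re

# Hypothetical example values; a real deployment would load these from
# the brand's integrity guidelines rather than hard-coding them.
BANNED_TERMS = {"guaranteed results", "miracle", "risk-free"}

# Claim language that typically needs substantiation before an ad runs.
# "#1" is matched without a leading word boundary since "#" is not a word character.
CLAIM_PATTERN = re.compile(
    r"\b(?:best|number one|clinically proven)\b|#1\b", re.IGNORECASE
)

def evaluate_ad_copy(copy: str) -> list[str]:
    """Return a list of flags for one piece of generated ad copy.

    An empty list means the copy passed this (deliberately simple) screen;
    flagged copy is routed to human review, never auto-rejected or auto-approved.
    """
    flags = []
    lowered = copy.lower()
    for term in BANNED_TERMS:
        if term in lowered:
            flags.append(f"banned term: {term!r}")
    for match in CLAIM_PATTERN.finditer(copy):
        flags.append(f"unsubstantiated claim marker: {match.group(0)!r}")
    return flags

if __name__ == "__main__":
    drafts = [
        "Our new blend is crafted for busy mornings.",
        "Guaranteed results with the #1 clinically proven formula!",
    ]
    for draft in drafts:
        flags = evaluate_ad_copy(draft)
        status = "flag for review" if flags else "pass to human review"
        print(f"{status}: {draft}")
        for flag in flags:
            print(f"  - {flag}")
```

Even a basic check like this, run on every draft before launch, illustrates how automated evaluation can catch some of the off-brand material respondents reported, at far lower cost than pulling a live campaign.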
Yet confidence remains high. Nearly 90% say they feel prepared to catch AI issues before launch. This may reflect trust in existing workflows, but given that 70% have already had incidents, it also suggests a false sense of security.
Industry Calls for Standards, Tools, and Transparency
Marketers are calling for stronger AI governance. When asked what’s needed to keep AI in advertising safe and effective, top priorities included regular AI audits for bias and integrity, transparency in AI decision-making, data privacy protections, and IP safeguards for AI-created content. In short, marketers want tools, policies, and standards to close real governance gaps. Only 6% believe current safeguards are enough.
Accountability and Leadership: Who’s Minding the AI?
One major challenge in AI governance is ownership. When asked who leads these efforts, responses varied, with the majority citing executive leadership or a dedicated AI task force. Without structured ownership, however, risks can fall through the cracks. As companies scale GenAI, it's critical to define who is responsible, whether that is a chief AI officer or a dedicated team.
Third-Party Support for Governance
While most companies currently manage AI governance in-house, there’s strong interest in external support. When asked if they’d consider a third-party solution to evaluate risks like hallucinations, bias, or off-brand content, over 90% said yes. Many see outside expertise as a valuable safety net.
No Time to Waste: A Call to Action on Responsible AI
This survey shows an industry moving fast on AI while still building the guardrails as it goes. Advertisers are excited about AI's potential for content, targeting, and engagement, but many have already seen the risks firsthand: misinformation, bias, and off-brand content that damage trust and waste budget.
Marketers are sending a strong message: they want help in the form of better standards, stronger tools, and expert support to use AI responsibly. The data is clear. AI is undeniably transforming advertising, but incidents are already happening, current safeguards aren't keeping pace, and marketers need better solutions. This isn't a future problem to solve; it's a present reality demanding immediate action to unlock AI's full potential.
With a few practical steps, responsible AI is not only possible; it can be the norm.