AI in Advertising: Preparing for Responsible Use Amid Rapid Adoption

AI Adoption Is Surging in Advertising, but Is the Industry Prepared for Responsible AI?

AI is now a regular part of marketing and advertising: more than half of marketers already use GenAI for creative content and audience targeting, and nearly all plan to expand AI use next year, especially for content development and audience engagement. Adoption is accelerating, but safeguards are not: over 70% of marketers have encountered an AI-related incident in their advertising efforts, such as hallucinations, bias, or off-brand content, yet fewer than 35% plan to increase investment in AI governance or brand integrity oversight over the next 12 months.

The Research Landscape

This research, conducted in partnership with Aymara, surveyed 125 advertising industry executives in the U.S. using the IAB Insights Engine platform powered by Attest. The data paints a striking picture: both AI adoption and the AI-related challenges that come with it are outpacing safeguards. Industry leaders are raising the alarm.

AI Is Everywhere in Marketing, and Still Growing

AI is now part of the marketing toolkit across the board. Over half of marketers are already using it for creative content, audience targeting, and customer support, with nearly as many applying it to predictive analytics. Usage is set to grow: 58% plan to increase AI use for creative generation in the next year, along with expanded use in chatbots, targeting, and forecasting. AI isn’t just a trend; it’s quickly becoming core to how marketing gets done.

Concerns with AI Are High

Marketers are well aware of the risks that come with AI-generated advertising. Top concerns include misinformation and deepfakes, loss of creative control, and brand integrity risks from offensive or harmful outputs. Many also worry about consumer trust, with 37% fearing audiences will distrust ads made by AI. Other concerns include bias and fairness, regulatory compliance, and the challenge of monitoring AI content at scale. Some flagged the threat of adversarial prompts, such as jailbreaks that trick models into unsafe behavior.

The takeaway: AI can pose serious ethical and quality risks, and marketers know these issues can damage trust and brand reputation. That’s why over 60% support labeling AI-generated ads, with only 15% opposed, signaling a strong push for transparency as a trust safeguard.
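
What might such labeling look like in practice? The survey does not prescribe a mechanism, but a minimal sketch in Python, with hypothetical field names, shows how an AI-disclosure label could travel with the creative metadata rather than being bolted on at publish time:

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class AdCreative:
    """Hypothetical ad-creative record carrying an AI-disclosure label."""
    headline: str
    body: str
    ai_generated: bool = False        # was GenAI used to produce this asset?
    ai_disclosure: str | None = None  # label shown to the audience, if any


def label_if_ai_generated(ad: AdCreative) -> AdCreative:
    """Attach a human-readable disclosure when the creative is AI-generated."""
    if ad.ai_generated and not ad.ai_disclosure:
        ad.ai_disclosure = "This ad was created with the help of AI."
    return ad


ad = AdCreative(
    headline="Fall Sale: 20% Off Everything",
    body="Generated ad copy goes here.",
    ai_generated=True,
)
print(json.dumps(asdict(label_if_ai_generated(ad)), indent=2))
```

Keeping the disclosure in the asset’s own metadata makes it auditable at every step of the workflow, not just at the moment the ad is served.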

AI-related Incidents Are Already Occurring

These aren’t future risks. AI-related issues are already affecting advertising campaigns. In the research, 70% of marketers reported at least one AI incident. Common problems included hallucinated outputs, biased or inappropriate content, and off-brand or offensive material. The consequences were significant: 40% had to pause or pull ads, over a third dealt with brand damage or PR issues, and nearly 30% had to conduct internal audits. Only 6% said the impact was minimal.

Patchy Safeguards and a False Sense of Security

Despite growing risks, AI oversight remains inconsistent. Most teams rely on human review and brand integrity checklists, which are important but basic steps. More advanced practices, such as consulting external AI ethics experts, running red-team testing, and using automated evaluation tools, are far less common. Alarmingly, 10% of respondents either do nothing or aren’t sure how they manage AI risks.
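
To make the gap concrete, here is a minimal sketch of the kind of automated evaluation pass named above: a cheap pre-filter that screens generated ad copy against simple brand-integrity rules before human review. The rule lists and function names are illustrative assumptions, not anything the survey defines:

```python
import re

# Illustrative rule lists; a real deployment would maintain these per brand.
BANNED_PHRASES = ["guaranteed cure", "risk-free", "100% effective"]
REQUIRED_DISCLAIMERS = ["Terms apply"]


def evaluate_ad_copy(copy_text: str) -> list[str]:
    """Return a list of flags; an empty list means this pass found no issues."""
    flags = []
    lowered = copy_text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            flags.append(f"banned phrase: {phrase!r}")
    for disclaimer in REQUIRED_DISCLAIMERS:
        if disclaimer.lower() not in lowered:
            flags.append(f"missing disclaimer: {disclaimer!r}")
    # Flag unverifiable superlatives, a common source of compliance trouble.
    if re.search(r"\b(best|cheapest) in the world\b", lowered):
        flags.append("unverifiable superlative claim")
    return flags


for issue in evaluate_ad_copy("Our risk-free plan is 100% effective."):
    print("FLAG:", issue)
```

A pass like this is deliberately simple: it catches obvious violations cheaply so human reviewers can focus on the judgment calls that checklists can’t encode.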

Yet confidence remains high. Nearly 90% say they feel prepared to catch AI issues before launch. This may reflect trust in existing workflows, but given that 70% have already had incidents, it also suggests a false sense of security.

Industry Calls for Standards, Tools, and Transparency

Marketers are calling for stronger AI governance. When asked what’s needed to keep AI in advertising safe and effective, top priorities included regular AI audits for bias and integrity, transparency in AI decision-making, data privacy protections, and IP safeguards for AI-created content. In short, marketers want tools, policies, and standards to close real governance gaps. Only 6% believe current safeguards are enough.
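
As one illustration of what a “regular AI audit” for bias could involve, the sketch below runs a text generator across audience segments and logs the output for side-by-side comparison. The generate_ad function is a stand-in, since the survey names no tooling; in practice it would call whatever model the team uses:

```python
from itertools import product


def generate_ad(product_name: str, audience: str) -> str:
    """Stand-in for a real GenAI call; swap in your model of choice."""
    return f"Introducing {product_name}, made for {audience}."


PRODUCTS = ["a savings account", "a fitness app"]
AUDIENCES = ["young professionals", "retirees", "parents", "students"]


def audit_pass() -> list[tuple[str, str, str]]:
    """Generate copy for every product/audience pair and log it.

    A reviewer or downstream classifier then checks whether tone, claims,
    or offers differ across segments in ways that are unfair or off-brand.
    """
    return [
        (product_name, audience, generate_ad(product_name, audience))
        for product_name, audience in product(PRODUCTS, AUDIENCES)
    ]


for row in audit_pass():
    print(row)
```

Run on a schedule rather than once, a harness like this turns bias checking from a one-off project into the recurring audit respondents asked for.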

Accountability and Leadership: Who’s Minding the AI?

One major challenge in AI governance is ownership. When asked who leads these efforts, respondents gave varied answers, with the majority pointing to executive leadership or a dedicated AI task force. Without structured ownership, however, risks can fall through the cracks. As companies scale GenAI, it’s critical to define who is responsible, whether that is a chief AI officer or a dedicated team.

Third-Party Support for Governance

While most companies currently manage AI governance in-house, there’s strong interest in external support. When asked if they’d consider a third-party solution to evaluate risks like hallucinations, bias, or off-brand content, over 90% said yes. Many see outside expertise as a valuable safety net.

No Time to Waste: A Call to Action on Responsible AI

This survey shows an industry moving fast on AI while still building the guardrails as it goes. Advertisers are excited about AI’s potential for content, targeting, and engagement. However, many have already seen the risks firsthand: misinformation, bias, and off-brand content that damage trust and waste budget.

Marketers are sending a strong message: they want help in the form of better standards, stronger tools, and expert support to use AI responsibly. The data is clear: AI is undeniably transforming advertising, but incidents are already happening, current safeguards aren’t keeping pace, and marketers need better solutions. This isn’t a future problem to solve; it is a present reality demanding immediate action to unlock AI’s full potential.

With a few practical steps, responsible AI is not only possible; it can become the norm.
