Responsible Generative AI Practices in Advertising

IPA Publishes Guide for the Responsible Use of Generative AI in Advertising

The IPA has introduced a voluntary guide aimed at providing UK advertising practitioners with practical recommendations for the responsible deployment of generative AI. This initiative helps organizations navigate the opportunities of AI while mitigating potential risks. Developed collaboratively by an expert working group, the guide builds on the IPA/ISBA Principles for the use of generative AI in advertising, published in 2023.

Key Principles for Responsible Use

The guide outlines eight principles for responsible use, translating them into clear, actionable steps:

  • Ensuring Transparency: Practitioners should disclose AI-generated or AI-altered advertising content using a risk-based approach that prioritizes consumer safety.
  • Responsible Use of Data: Compliance with data protection laws is crucial when using personal data for generative AI applications, ensuring respect for individuals’ privacy rights.
  • Preventing Bias and Ensuring Fairness: Design, deploy, and monitor generative AI systems to ensure fair treatment of all individuals and groups, preventing discrimination.
  • Human Oversight and Accountability: Implement appropriate human oversight before publishing AI-generated advertising content, proportional to potential consumer harm (see the sketch after this list).
  • Promoting Societal Wellbeing: Avoid creating harmful or misleading content with generative AI, and leverage AI to enhance consumer protection.
  • Driving Brand Safety: Assess and mitigate brand reputation risks from AI-generated content and placements, ensuring alignment with brand values.
  • Promoting Environmental Stewardship: Consider environmental implications when selecting generative AI tools, favoring energy-efficient options.
  • Continuous Monitoring and Evaluation: Implement ongoing monitoring of AI systems to detect compliance gaps and performance issues.
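By way of illustration, the sketch below shows one way a team might operationalize the transparency and human-oversight principles in a publishing workflow. It is a minimal, hypothetical Python example: the AdAsset fields, risk tiers, disclosure wording, and review routes are assumptions made for illustration and are not prescribed by the guide.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AdAsset:
    """A single piece of advertising content awaiting publication (illustrative)."""
    asset_id: str
    ai_generated: bool               # was generative AI used to create or alter it?
    audience_sensitivity: RiskLevel  # e.g. content aimed at vulnerable audiences => HIGH
    makes_product_claims: bool       # factual claims carry higher consumer-harm risk
    disclosure_label: str | None = None
    review_route: str | None = None


def assess_risk(asset: AdAsset) -> RiskLevel:
    """Rough risk-based triage: more potential for consumer harm => higher tier."""
    if not asset.ai_generated:
        return RiskLevel.LOW
    if asset.audience_sensitivity is RiskLevel.HIGH or asset.makes_product_claims:
        return RiskLevel.HIGH
    return RiskLevel.MEDIUM


def route_for_review(asset: AdAsset) -> AdAsset:
    """Apply disclosure and human oversight proportional to the assessed risk."""
    risk = assess_risk(asset)
    if risk is RiskLevel.LOW:
        asset.review_route = "auto-approve"
        return asset
    # Transparency: AI-generated or AI-altered content gets a disclosure label.
    asset.disclosure_label = "Created with generative AI"
    # Human oversight proportional to potential consumer harm.
    asset.review_route = (
        "single human reviewer" if risk is RiskLevel.MEDIUM
        else "senior sign-off plus legal review"
    )
    return asset


if __name__ == "__main__":
    demo = AdAsset(
        asset_id="campaign-042/hero-image",
        ai_generated=True,
        audience_sensitivity=RiskLevel.MEDIUM,
        makes_product_claims=True,
    )
    routed = route_for_review(demo)
    print(routed.review_route)      # -> senior sign-off plus legal review
    print(routed.disclosure_label)  # -> Created with generative AI
```

In practice, the risk factors, disclosure wording, and review routes would be defined by each organization's own governance policy and kept under the continuous monitoring the guide recommends.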

Comments from Industry Leaders

Richard Lindsay, IPA Director of Legal & Public Affairs, emphasized the guide’s balance between innovation and responsibility, noting: “The guide balances innovation with clear expectations around responsibility, transparency, and oversight.”

An SME version of the guide has also been published, focusing on the principles most relevant to small businesses, to make implementation easier.

Rt Hon Ian Murray MP, Minister for the Creative Industries, expressed support for the guide, stating, “This timely industry-led guide supports the Government’s ambitions to ensure advertising remains trusted while maximizing AI’s opportunities.”

Stephen Woodford, CEO of the Advertising Association, reinforced the importance of the guide in maintaining public trust in advertising, stating, “All advertising must be ‘legal, decent, honest and truthful’ as we embrace AI’s benefits.”

Conclusion

The Best Practice Guide aims to provide clarity on responsible generative AI use, addressing risks such as bias and privacy concerns. It also encourages environmental stewardship by promoting energy-efficient AI choices. All advertising and marketing practitioners are invited to adopt these voluntary principles in their businesses, with feedback welcomed to ensure the guide remains relevant and effective.

Regular reviews are planned to align the guide with developments in technology, regulation, and industry practices.
