Poynter and Hacks/Hackers Unite to Foster Ethical AI in Journalism

As the rapid adoption of artificial intelligence (AI) reshapes the media landscape, the need for ethical guidelines becomes increasingly critical. The collaboration between Poynter and Hacks/Hackers aims to address this challenge, ensuring that AI technologies are implemented responsibly within journalism.

The Rise of AI in Journalism

Since the launch of ChatGPT in 2022, the news industry has faced a series of AI blunders that have eroded audience trust. Poynter is taking proactive steps to help journalists and media organizations navigate this new landscape. Through a combination of practical AI training, ethical guidance, and media literacy programs, Poynter is dedicated to promoting transparent and trustworthy AI adoption.

Establishing Ethical Guidelines

Poynter has previously hosted summits addressing the intersection of AI, ethics, and journalism. These events have facilitated the establishment of guidelines designed to encourage responsible innovation while minimizing risks to newsrooms’ reputations. In 2026, Poynter will partner with Hacks/Hackers to integrate AI ethics and literacy into various journalism events throughout the year.

Yearlong Conversations

According to Alex Mahadevan, director of MediaWise at Poynter, “AI isn’t standing still, and neither can our approach to ethics and AI literacy.” The partnership allows for continuous dialogue on these critical topics, adapting to the fast-paced changes in technology.

Workshops and Events

Poynter will design and deliver workshops, seminars, and public conversations on AI ethics at Hacks/Hackers events, including the AI x Journalism Summit scheduled for May 2026 in Baltimore. The summit aims to draw 300 participants, up from 200 the previous year.

Importance of Ethics in AI

Burt Herman, co-founder of Hacks/Hackers, emphasizes the urgency of maintaining ethical standards amidst significant AI investments: “Society can’t afford to lose sight of ethics when using AI.” This partnership seeks to ensure that journalists are equipped to address pressing issues such as transparency, bias, and accuracy in news reporting.

Comprehensive Training Initiatives

Poynter’s efforts include several initiatives aimed at enhancing AI literacy among journalists. These include:

  • AI courses specifically designed for journalists and content creators.
  • An AI ethics starter kit for newsrooms.
  • The Talking About AI Newsroom Toolkit, developed with The Associated Press and Microsoft.
  • The alt+Ignite AI literacy initiative, which has trained thousands globally.

A Shared Commitment to Trust and Accountability

The partnership between Poynter and Hacks/Hackers underscores a mutual commitment to uphold public trust, maintain editorial standards, and ensure democratic accountability in AI adoption. Paul Cheung, strategic adviser to Hacks/Hackers, states that the partnership aims to integrate ethics into daily newsroom decisions, fostering a culture of accountability.

Conclusion

As AI technology continues to transform journalism, the collaboration between Poynter and Hacks/Hackers represents a vital step toward ensuring ethical standards are not only established but also actively practiced. The future of journalism lies in its ability to adapt to technological advancements while maintaining the trust of its audience.
