Empowering Journalists: Ethical AI Training for a Safer Media Landscape

Safer Media Initiative’s Training Workshop on Ethical AI Use

The Safer Media Initiative hosted a specialized training workshop for journalists in Lagos State on March 18, 2026, focused on the ethical use of artificial intelligence (AI) and data protection. The workshop aimed to equip participants with the technical skills and ethical frameworks needed to integrate AI effectively into their editorial workflows.

Significant Interest in Journalism and Technology

In his opening remarks, the Executive Director of the Safer Media Initiative, Peter Ioter, noted the overwhelming interest in the intersection of journalism and technology, evidenced by the high volume of applications for the workshop. Ioter emphasized that the training was not merely about adopting new gadgets but about understanding how technological shifts are disrupting the fundamental processes of sourcing, processing, and distributing news.

The Role of AI in Media

Ioter highlighted that AI is a key force reshaping the media landscape, requiring even traditional outlets to adapt in order to remain relevant. He pointed to the opportunities AI offers for gathering and verifying news, while also stressing the heightened expectations of responsibility in its use. Findings from a recent survey under the “M-Project”, supported by UNESCO’s International Programme for the Development of Communication, revealed a significant knowledge gap among journalists: 95 percent use AI tools weekly, yet 85 percent are familiar only with ChatGPT, and just 10 percent reported receiving formal training from their newsrooms.

Addressing Job Displacement Concerns

During the workshop, Ioter addressed the widespread fear of job displacement due to AI, asserting that while AI will not eliminate journalism, those who fail to adopt the technology risk being replaced by those who do. He advocated for using AI to strengthen accuracy and public safety instead of distorting the truth.

Expert Insights on Safe AI Use

Titilope Fadare Oparinde, founder of Generative AI Journalism, led sessions on the effective and safe use of AI tools. Oparinde echoed the sentiment that the future of journalism lies in the hands of those willing to learn. She noted that while AI is already embedded in newsroom workflows through transcription, audio cleaning, and content summarization, speed must never replace human verification.

Data Privacy and Transparency

Oparinde issued a stern warning regarding data privacy, advising journalists never to upload sensitive materials such as confidential transcripts or leaked documents into public AI tools, as this data can be used to train future systems. She also cautioned against “AI hallucinations”, where systems produce fabricated quotes or statistics. To combat this, she advocated for a culture of transparency, including the labeling of AI-generated images and maintaining human oversight for all final outputs.

Conclusion and Certification

The workshop concluded with a practical overview of various AI tools and their specific strengths for newsroom work. Upon completion, participants will receive digital certificates of participation, recognizing their commitment to responsible innovation in the digital age.
