Ethics in AI Adoption: Avoiding the Trap of Ethics Washing

AI Governance and the Risk of Ethics Washing

At the recent annual conference of the Institute of Chartered Secretaries and Administrators of Nigeria (ICSAN) in Lagos, stakeholders gathered to discuss the profound impact of artificial intelligence (AI) systems on governance. Themed “Reimagining Governance: Navigating the Artificial Intelligence Revolution for Excellence,” the conference underscored the need for robust governance mechanisms as AI technologies continue to evolve.

Expert Insights on AI Governance

One notable speaker, Olajide Olugbade, an AI governance expert with extensive experience in Nigeria’s corporate governance landscape, offered critical insights into building effective governance frameworks. Drawing on his background as a governance, risk, and compliance consultant at PricewaterhouseCoopers (PwC) Nigeria, Olugbade has advised numerous organizations on maintaining robust corporate governance systems, particularly amid digital transformation.

Olugbade currently serves as an Ethics and Policy Specialist on a $65 million federal government project in Georgia, USA, aimed at integrating AI into advanced manufacturing. In that role, he develops AI governance frameworks to support the responsible deployment of AI systems.

The Importance of Accountability in AI Adoption

During his address at the ICSAN event, Olugbade stressed the need for effective accountability mechanisms in AI adoption. He warned organizations against ethics washing, the practice of publicly claiming to embrace ethical governance by establishing ethics offices that lack real authority or enforcement capability. He stated:

“Organizations should have effective accountability mechanisms to ensure AI adoption is done responsibly.”

Understanding Ethics Washing

Olugbade elaborated on the concept of ethics washing, citing a recent publication in which he and his colleagues observed that many organizations engage in the practice: they create ethics offices largely for show, and those offices fail to fulfill their intended purpose of promoting responsible innovation within the organization.

To counter this problem, Olugbade recommends that organizations:

  • Designate corporate governance mechanisms specifically for AI governance.
  • Ensure these mechanisms possess the authority and organizational power to respond effectively to uncertainties.
  • Ensure these mechanisms maintain a central role within the organization’s operational framework.

Expanding Influence in AI Governance

Olugbade’s expertise is recognized at the highest levels of AI governance, policy, and ethics. Through his work at the RAND Corporation, he has contributed to evaluating barriers to AI and machine learning capabilities for a US government agency. As a member of the United Nations Network of Experts on AI, he has also provided consultancy services to the AI Advisory Body of the UN Office of Digital and Emerging Technologies, focusing on global AI governance.

His ongoing involvement in AI policy development extends to Uganda, where he collaborates with a policy advisory group on crafting AI policies that address local needs while adhering to international governance standards.

For organizations looking to navigate the complexities of AI governance, Olugbade’s insights serve as a vital reminder of the importance of authenticity and accountability in the ethical deployment of AI technologies.
