AI Governance Framework: Ensuring Responsible Deployment for a Safer Future

Stakeholders Advocate AI Governance Framework

On Wednesday, stakeholders gathered to advocate for the development of an Artificial Intelligence (AI) governance framework aimed at ensuring responsible deployment. The call to action was made during the 17th annual conference of the Information Systems Audit and Control Association (ISACA) held in Abuja.

The three-day conference focused on the theme: “AI and Digital Trust: A Global Perspective on Opportunities, Threats and Future Strategies.” Mr. Emmanuel Omoke, President of ISACA’s Abuja Chapter, emphasized that AI is now an integral part of our lives, stating, “AI has come to stay, you cannot run away from it. The question is, how can we turn it into opportunity?”

Omoke highlighted the need to address the potential risks associated with AI, including its capability to alter genetics, and argued that a collaborative approach among stakeholders is essential to ensure its responsible use.

Institutional Efforts in AI Utilization

Mr. Tayo Koleosho, Chief of Staff to the Executive Chairman of the Federal Inland Revenue Service (FIRS), discussed an electronic pricing program aimed at improving tax compliance while reducing audit burdens on businesses. This initiative seeks to harness and aggregate transaction data across the country, facilitating easier tax reporting for companies. He noted, “One of the major things is the ability to make sure that the data this intelligence are depending on are clean and accurate.”

Koleosho cautioned that if AI is fed inaccurate data, it can produce “hallucinations”, false insights generated by the system. He also stressed the importance of data privacy, since much of the information involved is personal and commercially sensitive to the companies.

The Need for Governance Frameworks

Mr. Hanniel Jafaru, Executive Director of Ham Tech Career (HTC) Academy, pointed to the alarming statistic that only 17 of Africa’s 54 countries have adopted national AI strategies, and that none has established a governance framework to regulate ethical AI use. He advocated defining acceptable practices through an AI governance framework that can manage risks such as digital propaganda and deepfakes.

“Countries globally are talking about AI framework; they have moved from having a strategy to having a framework,” Jafaru stated, emphasizing that such frameworks are crucial for determining the outputs of AI.

Collaboration Against Cyber Threats

Mrs. Sushila Nair, CEO of Cybernetic LLC, called for enhanced collaboration among cybersecurity professionals to protect critical infrastructure from global cyber threats. She argued that technology, while a driver of business and economic growth, also exposes organizations to risks from cybercriminals and non-state actors.

Nair remarked, “Looking across the world, you will see that wars are no longer fought with bombs and guns; we are now using technology.” She underscored the importance of learning from past security breaches globally to safeguard infrastructure and ensure public safety.

In conclusion, the discussions at the conference underscored a pressing need for a cohesive approach to AI governance, focused on ethical use, data integrity, and cybersecurity, to harness AI’s potential while mitigating its risks.
