Beyond Hallucinations: How to Mitigate Gen AI’s Key Risks

Generative Artificial Intelligence (Gen AI) is revolutionizing the business landscape, building upon years of progress in data and AI adoption. Its potential to drive competitive advantage and fuel growth is undeniable. However, capitalizing on its benefits requires organizations to fully understand and mitigate its unique risks, particularly in managing data and evaluating organizational readiness.

Using Gen AI safely requires organizations to understand not only the risks and quality of the data underpinning each implementation, often cited as the biggest challenge, but also how to manage that data effectively. To deploy Gen AI safely and effectively, businesses must address risks in four key areas.

1. The Human Element

Unlike traditional AI, where development and deployment were largely confined to specialist teams, Gen AI reaches across functions and business units. This widespread use raises the risk of employees misinterpreting or over-relying on Gen AI outputs. Without proper understanding, teams may treat the results as infallible, particularly in decision-critical contexts, which can lead to financial or reputational damage to the organization.

2. Data Security and Quality

Managing data security and data quality is a critical challenge when using Gen AI. While it is straightforward for organizations to develop policies that prevent confidential or personally identifiable information (PII) from being fed to a Gen AI model, technical enforcement of these rules is far more complex. The primary reason is the proliferation of consumer solutions with multimodal capabilities, which increases the risk of employees inadvertently exposing confidential data to third-party providers.
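One technical control worth sketching is a pre-submission filter that redacts obvious PII patterns before a prompt ever leaves the organization for an external provider. The patterns and placeholder format below are illustrative assumptions, not a complete PII catalogue, and a production control would combine such filtering with provider-side agreements and logging.

```python
import re

# Illustrative pre-filter: redact common PII patterns before a prompt
# is sent to a third-party Gen AI provider. These regexes are examples
# only; real deployments need a far richer PII detection layer.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace each detected PII span with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

In practice such a filter would sit in a gateway or proxy in front of the provider's API, so that the policy is enforced technically rather than left to individual users.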

Furthermore, the popular adoption of Retrieval Augmented Generation (RAG) architectures could create vulnerabilities if the data sources are not adequately secured. Mismanagement of these aspects not only opens the door to regulatory breaches; it also risks unintentional data exposure, both internally and externally.
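One common mitigation for RAG data exposure is to filter retrieved passages against the requesting user's entitlements before they reach the model's context window. The sketch below assumes a hypothetical document model in which each indexed chunk carries an access label inherited from its source system; the labels, groups, and helper names are illustrative.

```python
from dataclasses import dataclass

# Hypothetical RAG chunk: each passage indexed for retrieval carries
# an access label inherited from its source system.
@dataclass
class Chunk:
    text: str
    access_label: str  # e.g. "public", "internal", "restricted"

# Illustrative mapping of user groups to the labels they may see.
ENTITLEMENTS = {
    "staff": {"public", "internal"},
    "contractor": {"public"},
}

def filter_context(chunks: list[Chunk], user_group: str) -> list[Chunk]:
    """Drop retrieved chunks the user is not entitled to see, so the
    model cannot leak them back in its generated answer."""
    allowed = ENTITLEMENTS.get(user_group, {"public"})
    return [c for c in chunks if c.access_label in allowed]
```

Filtering before generation, rather than trying to censor the model's output afterwards, keeps sensitive passages out of the prompt entirely.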

3. Expanding Technology Footprint

To utilize Gen AI, many organizations must expand their technology stack, whether on-premises or in the cloud. This rapid scaling introduces operational risks, including integration gaps between new tools and existing systems, as well as greater complexity across the technology footprint. Beyond data disclosure risks, special attention must be paid to the risks of integrating third-party tools and to API security.

4. The Nature of the Technology

Gen AI models operate probabilistically rather than deterministically, which introduces another layer of complexity. Each model is pre-trained with particular capabilities in mind, and determining whether it is fit for a given purpose demands careful analysis.

A rigorous benchmarking process is essential. Businesses must evaluate each model’s intended application, limitations, and safeguards to ensure compatibility with their operational requirements and ethical standards. This process not only mitigates risk but also ensures the technology is used responsibly and effectively.
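As a minimal sketch of such benchmarking, candidate models can be scored against a small evaluation set of prompt/expected-answer pairs and shortlisted against a fitness threshold. The stand-in model callables, the exact-match metric, and the threshold below are assumptions for illustration; real evaluations would use richer metrics and cover safety and ethical criteria as well as accuracy.

```python
# Minimal benchmarking sketch: score candidate models on an evaluation
# set and keep only those meeting a fitness threshold. Each "model" is
# a stand-in callable; in practice it would wrap a provider API.
def exact_match_score(model, eval_set) -> float:
    """Fraction of prompts answered exactly as expected."""
    hits = sum(
        1 for prompt, expected in eval_set
        if model(prompt).strip().lower() == expected.lower()
    )
    return hits / len(eval_set)

def shortlist(models: dict, eval_set, threshold: float = 0.8) -> dict:
    """Return the models (with scores) whose accuracy meets the threshold."""
    scores = {name: exact_match_score(fn, eval_set) for name, fn in models.items()}
    return {name: s for name, s in scores.items() if s >= threshold}
```

Running the same harness across every candidate makes the comparison repeatable, which is the point of benchmarking over ad hoc trials.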

Balancing Innovation and Risk

Despite these risks, avoiding Gen AI altogether is not the solution. The technology offers unparalleled opportunities to boost efficiency and innovation, but its rapid development also brings evolving threats. How can organizations new to Gen AI approach its deployment wisely?

1. Adapt Existing Risk Frameworks

Most organizations already have processes in place for managing technology risks. The challenge lies in tailoring these frameworks to accommodate Gen AI. For limited-scale deployment, a modest expansion of the existing technology risk management approach may suffice. However, broader Gen AI adoption may require dedicated AI-specific steering committees to address strategy and the risks specific to AI's use in the organization.

2. Establish Ethical Guidelines

Clear ethical guidelines should govern the use of Gen AI, including predefined risk categories and prohibited use cases that fall outside the organization's risk appetite. This guidance gives business functions clarity as they pursue innovation and helps risk and audit functions set control expectations. Transparency and trust are foundational as AI's role grows; this means understanding regulatory and compliance obligations, uplifting governance processes, bringing together cross-functional stakeholders, and assigning responsibility for mitigating risks.

3. Phase Governance Using a Risk-Based Approach

Organizations can introduce Gen AI incrementally by applying governance proportionate to the risk at each stage of an idea's maturity. For prototypes in low-risk scenarios (e.g., minimal financial investment or data sensitivity), oversight can be lighter. As prototypes scale toward deployment, more comprehensive assessments, including cybersecurity evaluations and risk analyses, should be conducted to reinforce defenses.
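The proportional-governance idea can be expressed as a simple tiering rule. The thresholds, tier names, and sensitivity labels below are illustrative assumptions, not prescribed values; each organization would calibrate them to its own risk appetite.

```python
# Illustrative risk tiering: map an initiative's investment and data
# sensitivity to a governance tier. All thresholds are assumptions.
def governance_tier(investment_usd: float, data_sensitivity: str) -> str:
    """data_sensitivity: one of "public", "internal", "confidential"."""
    if data_sensitivity == "confidential" or investment_usd >= 500_000:
        return "full-review"      # cybersecurity evaluation + risk analysis
    if data_sensitivity == "internal" or investment_usd >= 50_000:
        return "standard-review"  # scaled oversight as the prototype matures
    return "light-touch"          # low-risk prototype, lighter oversight
```

Codifying the rule, even this crudely, makes the governance decision consistent and auditable rather than ad hoc.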

Gen AI: What Next?

Deploying Gen AI should not be radically different from implementing standard software tools. Much like other technologies, it carries risks that businesses must carefully evaluate and mitigate. The upcoming ISO/IEC 42005 standard on AI system impact assessment offers useful guidance on evaluating the potential impact of AI on the organization and its stakeholders.

Furthermore, organizations must decide the degree of human oversight required in each Gen AI use case. The Model AI Governance Framework provides a useful structure by categorizing oversight into three levels: human-in-the-loop, human-over-the-loop, and human-out-of-the-loop. Choosing among them is a matter of balance: high-impact outcomes warrant closer human oversight even at the cost of faster straight-through processing. This decision should be made by cross-functional teams that assess risks and recommend controls.
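A routing rule over the three oversight levels named above can be sketched as follows. The impact scale, the reversibility flag, and the cut-offs are assumptions for illustration, not part of the framework itself.

```python
# Hypothetical routing of a Gen AI decision to an oversight mode, using
# the three levels from the Model AI Governance Framework. The impact
# scale and cut-offs below are illustrative assumptions.
def oversight_mode(impact: str, reversible: bool) -> str:
    """impact: "low", "medium", or "high"."""
    if impact == "high":
        return "human-in-the-loop"    # a person approves each outcome
    if impact == "medium" or not reversible:
        return "human-over-the-loop"  # a person monitors and can intervene
    return "human-out-of-the-loop"    # straight-through processing
```

The trade-off is explicit here: only low-impact, reversible outcomes flow straight through without a human in the path.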

Looking ahead, the emergence of Agentic AI has the potential to transform operations even further. Agentic AI, when embedded in businesses, has the ability to mature beyond content generation to include reasoning and decision-making. This demands heightened governance to manage its influence on business processes, including ensuring resilience in multi-agent environments and equipping organizations to investigate and respond to incidents effectively.

As with today’s Gen AI, the key to success lies in a consistent, risk-based approach to deployment combined with robust cybersecurity. By balancing innovation with caution, organizations can harness Gen AI’s potential while minimizing exposure to its risks.
