Beyond Hallucinations: How to Mitigate Gen AI’s Key Risks

Generative Artificial Intelligence (Gen AI) is revolutionizing the business landscape, building upon years of progress in data and AI adoption. Its potential to drive competitive advantage and fuel growth is undeniable. However, capitalizing on its benefits requires organizations to fully understand and mitigate its unique risks, particularly in managing data and evaluating organizational readiness.

Using Gen AI safely requires an understanding not only of the risks and quality of the organizational data underpinning each implementation, widely regarded as the biggest challenge, but also of how to manage that data effectively. To deploy Gen AI safely and effectively, businesses must address risks in four key areas.

1. The Human Element

Unlike traditional AI, where development and deployment were largely confined to specialist teams, Gen AI reaches across functions and business units. This widespread use raises the risk of employees misinterpreting or over-relying on Gen AI outputs. Without proper understanding, teams may trust the results as infallible, particularly in decision-critical contexts. This could lead to financial or reputational damage to the organization.

2. Data Security and Quality

Managing data security and data quality is a critical challenge when using Gen AI. While it is straightforward for organizations to write policies that prohibit feeding confidential or personally identifiable information (PII) to a Gen AI model, technical enforcement of those rules is far more complex. The primary reason is the proliferation of consumer solutions with multimodal capabilities, which increases the risk of employees inadvertently exposing confidential data to third-party providers.
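
As a sketch of why technical enforcement is harder than policy writing, the snippet below shows a minimal regex-based redaction pass applied before a prompt leaves the organization. The patterns and the `redact_pii` helper are illustrative assumptions, not a recommended control; real deployments typically use dedicated PII-detection tooling rather than hand-written patterns.

```python
import re

# Illustrative patterns only; production systems rely on dedicated
# PII-detection tooling rather than hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact_pii(prompt: str) -> str:
    """Replace likely PII with placeholders before the prompt
    leaves the organization's boundary."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com or +44 7700 900123."))
# Contact [EMAIL] or [PHONE].
```

Even this simple pass shows the gap between policy and enforcement: every new channel (chat, file upload, image input) needs its own interception point.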

Furthermore, the popular adoption of Retrieval Augmented Generation (RAG) architectures could create vulnerabilities if the data sources are not adequately secured. Mismanagement of these aspects not only opens the door to regulatory breaches; it also risks unintentional data exposure, both internally and externally.
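
One common mitigation is to enforce entitlements inside the retrieval step itself, so restricted documents never reach the prompt sent to the model. Below is a minimal sketch assuming a hypothetical in-memory store; the `Document` and `retrieve` names are illustrative, not a specific vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)

def retrieve(query: str, store: list, user_roles: set, k: int = 3) -> list:
    """Return the top-k documents the requesting user may see.
    Filtering happens before ranking, so restricted content can
    never leak into the model's context window."""
    visible = [d for d in store if d.allowed_roles & user_roles]
    query_words = set(query.lower().split())
    # Toy relevance score: count of words shared with the query.
    visible.sort(key=lambda d: len(query_words & set(d.text.lower().split())),
                 reverse=True)
    return visible[:k]
```

The design point is that access control belongs in the retriever, not in post-hoc filtering of the model's answer, because a generated answer can paraphrase restricted content in ways a filter will miss.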

3. Expanding Technology Footprint

To utilize Gen AI, many organizations must expand their technology stack, whether on-premises or in the cloud. This rapid scaling introduces operational risks, including integration gaps between new tools and existing systems, as well as increased technological footprint complexity. Besides data disclosure risks, it is essential to pay special attention to the risks associated with integrating third-party tools and ensuring API security.

4. The Nature of the Technology

Gen AI models, all of which operate probabilistically rather than deterministically, introduce another layer of complexity. These models are pre-trained by their providers, and determining whether a given model is fit for an organization's specific purpose demands careful analysis.

A rigorous benchmarking process is essential. Businesses must evaluate each model’s intended application, limitations, and safeguards to ensure compatibility with their operational requirements and ethical standards. This process not only mitigates risk but also ensures the technology is used responsibly and effectively.
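
Such benchmarking can start as simply as scoring candidate models against a task-specific test set. The sketch below assumes each model is exposed as a callable taking a prompt and returning an answer; the names are illustrative, and in practice each callable would wrap a provider's API.

```python
def benchmark(models: dict, test_cases: list) -> dict:
    """Score each candidate model on task-specific test cases.
    `models` maps a name to a callable prompt -> answer.
    Pass rates are only a starting point; limitations and
    safeguards also need qualitative review."""
    scores = {}
    for name, model in models.items():
        passed = sum(1 for prompt, expected in test_cases
                     if expected.lower() in model(prompt).lower())
        scores[name] = passed / len(test_cases)
    return scores
```

Because the models are probabilistic, a real evaluation would run each case multiple times and track the variance, not just a single pass rate.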

Balancing Innovation and Risk

Despite these risks, avoiding Gen AI altogether is not the solution. The technology offers unparalleled opportunities to boost efficiency and innovation, but its rapid development also brings evolving threats. How can organizations new to Gen AI approach its deployment wisely?

1. Adapt Existing Risk Frameworks

Most organizations already have processes in place for managing technology risks. The challenge lies in tailoring these frameworks to accommodate Gen AI. For limited-scale deployment, a modest expansion of their technology risk management approach may suffice. However, broader Gen AI adoption might require establishing dedicated AI-specific steering committees to address strategy and risks specific to AI’s usage in the organization.

2. Establish Ethical Guidelines

Clear ethical guidelines should govern the use of Gen AI, including prohibited use cases that fall outside the organization's risk appetite and predefined risk categories. This guidance provides clarity for business functions pursuing innovation and helps risk and audit functions set control expectations. Transparency and trust become foundational as AI's role grows; achieving them involves understanding regulatory and compliance obligations, strengthening governance processes, bringing together cross-functional stakeholders, and assigning responsibility for mitigating risks.

3. Phase Governance Using a Risk-Based Approach

Organizations can introduce Gen AI incrementally by applying governance proportional to both the risk level and the maturity of the initiative. For prototypes in low-risk scenarios (e.g., minimal financial investment or data sensitivity), oversight can be lighter. As prototypes scale toward deployment, more comprehensive assessments, including cybersecurity evaluations and risk analyses, should be conducted to reinforce defenses.
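
The idea of governance proportional to risk can be sketched as a simple tiering function. The thresholds and control names below are illustrative assumptions for discussion, not a prescriptive framework.

```python
def required_controls(financial_exposure: float,
                      handles_sensitive_data: bool,
                      customer_facing: bool) -> list:
    """Map simple risk factors to a governance tier.
    Thresholds and control names are illustrative only."""
    score = ((financial_exposure > 100_000)
             + handles_sensitive_data
             + customer_facing)
    if score == 0:
        return ["lightweight design review"]
    if score == 1:
        return ["design review", "data-handling assessment"]
    return ["design review", "data-handling assessment",
            "cybersecurity evaluation", "steering-committee sign-off"]
```

A low-stakes prototype clears a single review, while a customer-facing system handling sensitive data triggers the full assessment path described above.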

Gen AI: What Next?

Deploying Gen AI should not be radically different from implementing standard software tools. Much like other technologies, it carries risks that businesses must carefully evaluate and mitigate. The upcoming ISO/IEC 42005 standard on AI system impact assessment offers useful guidance on evaluating the potential impact of AI on the organization and its stakeholders.

Furthermore, organizations must decide the degree of human oversight required in Gen AI use cases. The Model AI Governance Framework provides a useful structure by categorizing oversight into three levels: human-in-the-loop, human-out-of-the-loop, and human-over-the-loop. Determining which to use is a matter of balance: outcomes with a major impact may warrant more involved human oversight, even if that rules out faster straight-through decision-making. This decision should be made by cross-functional teams that assess risks and recommend controls.
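
The three oversight levels can be illustrated with a simple routing function. The thresholds here are assumptions for illustration; in practice a cross-functional review would set them.

```python
def oversight_mode(impact: str, model_confidence: float) -> str:
    """Route a Gen AI decision to one of the three oversight
    levels; thresholds are illustrative assumptions."""
    if impact == "high":
        return "human-in-the-loop"      # a human approves before action
    if impact == "medium" or model_confidence < 0.8:
        return "human-over-the-loop"    # a human monitors, can intervene
    return "human-out-of-the-loop"      # straight-through processing
```

The trade-off the article describes is visible in the first branch: high-impact outcomes always route to a human, at the cost of straight-through speed.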

Looking ahead, the emergence of Agentic AI has the potential to transform operations even further. Agentic AI, when embedded in businesses, has the ability to mature beyond content generation to include reasoning and decision-making. This demands heightened governance to manage its influence on business processes, including ensuring resilience in multi-agent environments and equipping organizations to investigate and respond to incidents effectively.

As with today’s Gen AI, the key to success lies in a consistent, risk-based approach to deployment combined with robust cybersecurity. By balancing innovation with caution, organizations can harness Gen AI’s potential while minimizing exposure to its risks.
