Mitigating the Risks of Generative AI

Beyond Hallucinations: How to Mitigate Gen AI’s Key Risks

Generative Artificial Intelligence (Gen AI) is revolutionizing the business landscape, building upon years of progress in data and AI adoption. Its potential to drive competitive advantage and fuel growth is undeniable. However, capitalizing on its benefits requires organizations to fully understand and mitigate its unique risks, particularly in managing data and evaluating organizational readiness.

Using Gen AI safely requires understanding not only the risks and quality of the organizational data behind each implementation, often the biggest challenge, but also how to manage that data effectively. To deploy Gen AI safely and effectively, businesses must address risks in four key areas.

1. The Human Element

Unlike traditional AI, where development and deployment were largely confined to specialist teams, Gen AI reaches across functions and business units. This widespread use raises the risk of employees misinterpreting or over-relying on Gen AI outputs. Without proper understanding, teams may trust the results as infallible, particularly in decision-critical contexts. This could lead to financial or reputational damage to the organization.

2. Data Security and Quality

Managing data security and data quality is a critical challenge when using Gen AI. While it is straightforward for organizations to write policies that prohibit feeding confidential or personally identifiable information (PII) to a Gen AI model, technical enforcement of these rules is far more complex. The primary reason is the proliferation of consumer solutions with multimodal capabilities, which increases the risk of employees inadvertently exposing confidential data to third-party providers.

Furthermore, the popular adoption of Retrieval Augmented Generation (RAG) architectures could create vulnerabilities if the data sources are not adequately secured. Mismanagement of these aspects not only opens the door to regulatory breaches; it also risks unintentional data exposure, both internally and externally.
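Securing a RAG data source largely means enforcing access control before retrieved passages ever reach the model. The sketch below assumes a hypothetical numeric clearance model and a toy substring retriever; real pipelines would use vector search and an organization's actual entitlement system.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    sensitivity: int  # hypothetical scale: 0 = public, 1 = internal, 2 = confidential

def retrieve_for_user(query: str, corpus: list[Document],
                      user_clearance: int) -> list[str]:
    """Return only passages the caller is cleared to see, enforcing
    access control *before* retrieval results are added to the prompt."""
    matches = [d for d in corpus if query.lower() in d.text.lower()]
    return [d.text for d in matches if d.sensitivity <= user_clearance]

corpus = [
    Document("Pricing policy: public tiers", 0),
    Document("Pricing policy: confidential margins", 2),
]
print(retrieve_for_user("pricing", corpus, user_clearance=1))
# -> ['Pricing policy: public tiers']
```

Filtering at retrieval time, rather than trusting the model to withhold sensitive context, closes the most common RAG exposure path.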

3. Expanding Technology Footprint

To utilize Gen AI, many organizations must expand their technology stack, whether on-premises or in the cloud. This rapid scaling introduces operational risks, including integration gaps between new tools and existing systems and greater complexity across the technology footprint. Beyond data disclosure, particular attention should be paid to the risks of integrating third-party tools and to API security.

4. The Nature of the Technology

Gen AI models operate probabilistically rather than deterministically, which introduces another layer of complexity. Most are pre-trained by third parties for general purposes, and determining whether a given model is fit for a specific use demands careful analysis.

A rigorous benchmarking process is essential. Businesses must evaluate each model’s intended application, limitations, and safeguards to ensure compatibility with their operational requirements and ethical standards. This process not only mitigates risk but also ensures the technology is used responsibly and effectively.
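A benchmarking process can be as simple as scoring candidate models against a held-out set of task examples. In this sketch, `call_model` is a hypothetical stand-in for whatever inference API the organization uses; it returns canned answers so the example runs offline.

```python
def call_model(model_name: str, prompt: str) -> str:
    # Placeholder for a real inference call; canned answers keep the sketch runnable.
    canned = {"model-a": "4", "model-b": "5"}
    return canned[model_name]

def benchmark(models: list[str], examples: list[tuple[str, str]]) -> dict:
    """Fraction of examples each candidate model answers exactly right."""
    results = {}
    for m in models:
        correct = sum(call_model(m, q) == expected for q, expected in examples)
        results[m] = correct / len(examples)
    return results

examples = [("What is 2 + 2?", "4")]
print(benchmark(["model-a", "model-b"], examples))
# -> {'model-a': 1.0, 'model-b': 0.0}
```

Real evaluations would add task-representative examples, safety probes, and fuzzier scoring than exact match, but the structure, a fixed test set applied uniformly to every candidate, is the essential part.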

Balancing Innovation and Risk

Despite these risks, avoiding Gen AI altogether is not the solution. The technology offers unparalleled opportunities to boost efficiency and innovation, but its rapid development also brings evolving threats. How can organizations new to Gen AI approach its deployment wisely?

1. Adapt Existing Risk Frameworks

Most organizations already have processes in place for managing technology risks. The challenge lies in tailoring these frameworks to accommodate Gen AI. For limited-scale deployment, a modest expansion of their technology risk management approach may suffice. However, broader Gen AI adoption might require establishing dedicated AI-specific steering committees to address strategy and risks specific to AI’s usage in the organization.

2. Establish Ethical Guidelines

Clear ethical guidelines should govern the use of Gen AI, including predefined risk categories and prohibited use cases that fall outside the organization's risk appetite. This guidance provides clarity for business functions pursuing innovation and helps risk and audit functions establish control expectations. Transparency and trust are foundational as AI's role expands. This involves understanding regulatory and compliance obligations, uplifting governance processes, bringing together cross-functional stakeholders, and assigning responsibility for mitigating risks.

3. Phase Governance Using a Risk-Based Approach

Organizations can introduce Gen AI incrementally by applying governance proportionate to both the risk level and the maturity of the initiative. For prototypes in low-risk scenarios (e.g., minimal financial investment or data sensitivity), oversight can be lighter. As prototypes scale toward deployment, more comprehensive assessments, including cybersecurity evaluations and risk analyses, should be conducted to reinforce defenses.
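Risk-proportionate governance can be made concrete as a simple tiering rule. The scoring below is a toy illustration with placeholder thresholds that a risk committee would calibrate to its own appetite.

```python
def governance_tier(financial_investment: float, data_sensitivity: str,
                    customer_facing: bool) -> str:
    """Map an initiative's risk factors to a governance tier.
    Weights and thresholds are illustrative placeholders."""
    score = 0
    score += 2 if financial_investment > 100_000 else 0
    score += {"public": 0, "internal": 1, "confidential": 2}[data_sensitivity]
    score += 1 if customer_facing else 0
    if score >= 4:
        return "full assessment"      # cybersecurity review, risk analysis
    if score >= 2:
        return "standard review"
    return "lightweight oversight"

print(governance_tier(5_000, "internal", customer_facing=False))
# -> lightweight oversight
print(governance_tier(250_000, "confidential", customer_facing=True))
# -> full assessment
```

Encoding the tiers this way keeps triage consistent across business units and makes the criteria auditable.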

Gen AI: What Next?

Deploying Gen AI should not be radically different from implementing standard software tools. Much like other technologies, it carries risks that businesses must carefully evaluate and mitigate. The upcoming ISO/IEC 42005 standard for AI system impact assessment offers useful guidance on evaluating the potential impact of AI on the organization and its stakeholders.

Furthermore, organizations must decide the degree of human oversight required in Gen AI use cases. The Model AI Governance Framework provides a useful structure by categorizing oversight into three levels: human-in-the-loop, human-over-the-loop, and human-out-of-the-loop. Determining which to use is a matter of balance: decisions with major impact may warrant closer human oversight, even if that rules out faster straight-through processing. This decision should be made by cross-functional teams that assess risks and recommend controls.
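The three oversight levels can be wired into a decision-routing rule. The mapping below is a toy example: the impact and reversibility criteria are assumptions standing in for the outcome of a cross-functional risk review.

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human-in-the-loop"          # human approves each decision
    OVER_THE_LOOP = "human-over-the-loop"      # human monitors, can intervene
    OUT_OF_THE_LOOP = "human-out-of-the-loop"  # fully automated

def required_oversight(impact: str, reversible: bool) -> Oversight:
    """Illustrative mapping; real criteria come from risk review."""
    if impact == "high":
        return Oversight.IN_THE_LOOP
    if impact == "medium" or not reversible:
        return Oversight.OVER_THE_LOOP
    return Oversight.OUT_OF_THE_LOOP

print(required_oversight("high", reversible=True).value)
# -> human-in-the-loop
```

Routing every use case through a rule like this forces the oversight question to be answered explicitly rather than defaulting to full automation.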

Looking ahead, the emergence of Agentic AI has the potential to transform operations even further. Embedded in businesses, Agentic AI extends beyond content generation to reasoning and decision-making. This demands heightened governance to manage its influence on business processes, including ensuring resilience in multi-agent environments and equipping organizations to investigate and respond to incidents effectively.

As with today’s Gen AI, the key to success lies in a consistent, risk-based approach to deployment combined with robust cybersecurity. By balancing innovation with caution, organizations can harness Gen AI’s potential while minimizing exposure to its risks.
