Mitigating the Risks of Generative AI

Beyond Hallucinations: How to Mitigate Gen AI’s Key Risks

Generative Artificial Intelligence (Gen AI) is revolutionizing the business landscape, building upon years of progress in data and AI adoption. Its potential to drive competitive advantage and fuel growth is undeniable. However, capitalizing on its benefits requires organizations to fully understand and mitigate its unique risks, particularly in managing data and evaluating organizational readiness.

Using Gen AI safely requires an understanding not only of the risks and the quality of the organizational data underpinning its implementation, frequently cited as the biggest challenge, but also of how to manage that data effectively. To deploy Gen AI safely and effectively, businesses must address risks in four key areas.

1. The Human Element

Unlike traditional AI, where development and deployment were largely confined to specialist teams, Gen AI reaches across functions and business units. This widespread use raises the risk of employees misinterpreting or over-relying on Gen AI outputs. Without proper understanding, teams may treat the results as infallible, particularly in decision-critical contexts, which can lead to financial or reputational damage to the organization.

2. Data Security and Quality

Managing data security and data quality is a critical challenge when using Gen AI. While it is straightforward for organizations to write policies that prohibit feeding confidential or personally identifiable information (PII) into a Gen AI model, technical enforcement of those policies is far more complex. The primary reason is the proliferation of consumer solutions with multimodal capabilities, which increases the risk of employees inadvertently exposing confidential data to third-party providers.
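
To make this concrete, the sketch below shows one way such a policy could be backed by a technical control: prompts are screened for common PII patterns before they leave the organization. The patterns and the `safe_submit` helper are hypothetical and purely illustrative; a production control would rely on a dedicated data loss prevention or classification service rather than hand-written regular expressions.

```python
import re

# Hypothetical, illustrative patterns; real deployments would use a dedicated
# DLP / PII-classification service rather than hand-written regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "national_id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. a US SSN-style format
}


def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace suspected PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()}_REDACTED]", prompt)
    return prompt, findings


def safe_submit(prompt: str) -> str:
    """Screen a prompt before it is sent to an external Gen AI provider."""
    cleaned, findings = redact_pii(prompt)
    if findings:
        # Log for the risk function; block or redact depending on policy.
        print(f"PII detected and redacted: {findings}")
    return cleaned  # pass `cleaned`, not the raw prompt, to the provider's API


if __name__ == "__main__":
    print(safe_submit("Summarise the complaint from jane.doe@example.com, SSN 123-45-6789."))
```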

Furthermore, the popular adoption of Retrieval Augmented Generation (RAG) architectures could create vulnerabilities if the data sources are not adequately secured. Mismanagement of these aspects not only opens the door to regulatory breaches; it also risks unintentional data exposure, both internally and externally.
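
One common mitigation is to enforce the user's existing entitlements at retrieval time, so that only documents the requester may see are placed into the model's context. The sketch below assumes a hypothetical search callable (standing in for a vector-store client) and simple group-based permissions; it is a minimal illustration of the pattern, not a complete access-control design.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    text: str
    allowed_groups: set[str] = field(default_factory=set)


def retrieve_for_user(query: str, user_groups: set[str], search) -> str:
    """Build RAG context from documents the requesting user is entitled to see.

    `search` is a hypothetical callable (e.g. a vector-store client) that
    returns candidate Documents ranked by relevance to `query`.
    """
    candidates = search(query, top_k=10)
    permitted = [d for d in candidates if d.allowed_groups & user_groups]
    # Only permitted text is injected into the prompt sent to the model.
    return "\n\n".join(d.text for d in permitted[:3])


if __name__ == "__main__":
    docs = [
        Document("Q3 board pack: confidential revenue forecast.", {"finance-exec"}),
        Document("Public product FAQ.", {"all-staff"}),
    ]
    fake_search = lambda query, top_k: docs  # stand-in for a real vector search
    print(retrieve_for_user("What is in the product FAQ?", {"all-staff"}, fake_search))
```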

3. Expanding Technology Footprint

To utilize Gen AI, many organizations must expand their technology stack, whether on-premises or in the cloud. This rapid scaling introduces operational risks, including integration gaps between new tools and existing systems, as well as greater complexity across the technology footprint. Beyond the risk of data disclosure, particular attention must be paid to the risks of integrating third-party tools and to API security.
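
As a simple illustration of that last point, outbound calls to third-party Gen AI services can be funnelled through a wrapper that enforces an approved-provider list, authentication, and timeouts. The host list, URL, and environment variable below are hypothetical; this is a sketch of the control, not a substitute for a managed API gateway.

```python
import os
from urllib.parse import urlparse

import requests

# Hypothetical allow-list of approved Gen AI providers, maintained by the
# platform or risk team rather than individual developers.
APPROVED_HOSTS = {"api.example-genai.com"}


def call_approved_provider(url: str, payload: dict) -> dict:
    """Call a third-party Gen AI API only if it is on the approved list."""
    host = urlparse(url).hostname
    if host not in APPROVED_HOSTS:
        raise PermissionError(f"{host} is not an approved Gen AI provider")

    response = requests.post(
        url,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['GENAI_API_KEY']}"},
        timeout=30,  # avoid integrations that hang indefinitely
    )
    response.raise_for_status()
    return response.json()
```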

4. The Nature of the Technology

Gen AI models operate probabilistically rather than deterministically, which introduces another layer of complexity. They arrive pre-trained by their providers, and determining whether a given model is fit for a specific purpose demands careful analysis.

A rigorous benchmarking process is essential. Businesses must evaluate each model’s intended application, limitations, and safeguards to ensure compatibility with their operational requirements and ethical standards. This process not only mitigates risk but also ensures the technology is used responsibly and effectively.
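
A minimal evaluation harness might look like the sketch below, which scores a candidate model against a small in-house test set. The `generate` callable and the test cases are hypothetical placeholders; real benchmarking would add task-specific metrics, safety and bias checks, and a statistically meaningful sample.

```python
def exact_match_score(model_name: str, generate, test_cases: list[dict]) -> float:
    """Fraction of test prompts where the model's answer matches the expected one.

    `generate` is a hypothetical callable wrapping the candidate model's API;
    `test_cases` is an in-house set of {"prompt": ..., "expected": ...} items.
    """
    hits = 0
    for case in test_cases:
        answer = generate(model_name, case["prompt"]).strip().lower()
        if answer == case["expected"].strip().lower():
            hits += 1
    return hits / len(test_cases)


if __name__ == "__main__":
    cases = [
        {"prompt": "What currency is used in Japan?", "expected": "yen"},
        {"prompt": "2 + 2 =", "expected": "4"},
    ]
    fake_generate = lambda model, prompt: "yen" if "Japan" in prompt else "4"
    print(f"Exact match: {exact_match_score('candidate-model', fake_generate, cases):.0%}")
```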

Balancing Innovation and Risk

Despite these risks, avoiding Gen AI altogether is not the solution. The technology offers unparalleled opportunities to boost efficiency and innovation, but its rapid development also brings evolving threats. How can organizations new to Gen AI approach its deployment wisely?

1. Adapt Existing Risk Frameworks

Most organizations already have processes in place for managing technology risks. The challenge lies in tailoring these frameworks to accommodate Gen AI. For limited-scale deployment, a modest expansion of their technology risk management approach may suffice. However, broader Gen AI adoption might require establishing dedicated AI-specific steering committees to address strategy and risks specific to AI’s usage in the organization.

2. Establish Ethical Guidelines

Clear ethical guidelines should govern the use of Gen AI, including prohibited use cases that fall outside the organization's risk appetite and predefined risk categories. This guidance gives business functions pursuing innovation clarity and helps risk and audit functions set control expectations. Transparency and trust are foundational as AI's role expands. Establishing them involves understanding regulatory and compliance obligations, uplifting governance processes, bringing together cross-functional stakeholders, and assigning responsibility for mitigating risks.

3. Phase Governance Using a Risk-Based Approach

Organizations can introduce Gen AI incrementally by applying governance proportional to the risk level and the maturity of the initiative. For prototypes in low-risk scenarios (e.g., minimal financial investment or data sensitivity), oversight can be lighter. As prototypes scale toward deployment, more comprehensive assessments, including cybersecurity evaluations and risk analyses, should be conducted to reinforce defenses.
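
One way to make the proportionality rule repeatable is to encode it as a simple tiering function that maps an initiative's characteristics to a required level of oversight. The thresholds, inputs, and tier names below are hypothetical; the actual tiers and controls would come from the organization's own risk framework.

```python
from enum import Enum


class RiskTier(Enum):
    LOW = "lightweight review"
    MEDIUM = "architecture and privacy review"
    HIGH = "full cybersecurity and risk assessment"


def classify_use_case(budget_usd: float, handles_sensitive_data: bool,
                      customer_facing: bool) -> RiskTier:
    """Map a Gen AI initiative to a governance tier (illustrative thresholds)."""
    if handles_sensitive_data or customer_facing:
        return RiskTier.HIGH
    if budget_usd > 50_000:
        return RiskTier.MEDIUM
    return RiskTier.LOW


if __name__ == "__main__":
    tier = classify_use_case(budget_usd=10_000, handles_sensitive_data=False,
                             customer_facing=False)
    print(f"Required oversight: {tier.value}")
```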

Gen AI: What Next?

Deploying Gen AI should not be radically different from implementing standard software tools. Much like other technologies, it carries risks that businesses must carefully evaluate and mitigate. The upcoming ISO/IEC 42005 standard on AI system impact assessment offers useful guidance on evaluating the potential impact of AI on the organization and its stakeholders.

Furthermore, organizations must decide the degree of human oversight required in Gen AI use cases. The Model AI Governance Framework provides a useful structure by categorizing oversight into three levels: human-in-the-loop, human-over-the-loop, and human-out-of-the-loop. Determining which to use is a matter of balance: outcomes with a major impact may warrant closer human oversight even if that means forgoing faster straight-through processing. This decision should be made by cross-functional teams that assess risks and recommend controls.
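
As a sketch of how such a policy might be operationalized, the routing logic below holds high-impact outputs for approval (human-in-the-loop), flags medium-impact ones for periodic review (human-over-the-loop), and lets low-impact ones pass straight through (human-out-of-the-loop). The impact scoring and thresholds are hypothetical and would be set by the cross-functional team.

```python
def route_decision(impact_score: float) -> str:
    """Decide the level of human oversight for a Gen AI output.

    `impact_score` is a hypothetical 0-1 rating produced upstream,
    e.g. from business rules on transaction value or customer impact.
    """
    if impact_score >= 0.7:
        return "human-in-the-loop: hold for approval before acting"
    if impact_score >= 0.3:
        return "human-over-the-loop: act now, flag for periodic review"
    return "human-out-of-the-loop: straight-through processing"


if __name__ == "__main__":
    print(route_decision(impact_score=0.1))  # e.g. approve a small refund
    print(route_decision(impact_score=0.9))  # e.g. decline a mortgage application
```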

Looking ahead, the emergence of Agentic AI has the potential to transform operations even further. Agentic AI, when embedded in businesses, has the ability to mature beyond content generation to include reasoning and decision-making. This demands heightened governance to manage its influence on business processes, including ensuring resilience in multi-agent environments and equipping organizations to investigate and respond to incidents effectively.

As with today’s Gen AI, the key to success lies in a consistent, risk-based approach to deployment combined with robust cybersecurity. By balancing innovation with caution, organizations can harness Gen AI’s potential while minimizing exposure to its risks.
