Trustible Partners with RAIC to Enhance AI Incident Governance

Trustible Leads Inaugural Sponsor Cohort for the AI Incident Database

ARLINGTON, Va., Jan. 26, 2026 /PRNewswire/ — Trustible, a leading provider of AI governance software for enterprises, has announced a significant partnership with the Responsible AI Collaborative (RAIC), the independent nonprofit behind the AI Incident Database (AIID). Trustible is spearheading RAIC’s inaugural cohort of corporate sponsors, aiming to integrate AIID incident data directly into its platform and collaborate on research regarding AI risk and real-world governance practices.

Why This Partnership Matters

Effective AI governance requires a comprehensive understanding of how AI systems fail in real-world scenarios. The AIID serves as the definitive public record of AI-related incidents, boasting over 5,000 incident reports collected over eight years, utilized by central banks, intergovernmental organizations, researchers, and practitioners globally.

Through this collaboration, Trustible customers will gain access to AIID incident reports directly within the Trustible platform. This functionality allows organizations to proactively map their internal AI inventories to incidents from the database. Users will receive customized alerts whenever new incidents are reported for relevant use cases, models, or vendors tracked in Trustible’s AI inventory. This capability enables organizations to stay ahead of emerging AI risks and understand potential mitigation strategies in near real time, building trust and confidence in their AI deployments.

Key Statements

“We’ve long valued RAIC’s work maintaining this resource,” stated Andrew Gamino-Cheong, CTO and Co-Founder of Trustible. “Our risk and mitigation taxonomies already draw heavily on AIID data. This partnership strengthens that connection, and we’re committed to supporting RAIC’s independence, not shaping it. Their credibility exists because they’ve kept editorial control in-house, and that’s exactly how it should stay.”

Sean McGregor, founder of AIID, emphasized, “The AIID was created so companies like Trustible can motivate AI governance decisions from demonstrated risks. Trustible’s ability to link recommendations to clear statements of what companies are working to prevent supplies an answer to the all-important question of ‘why spend money on AI governance?’”

Partnership Components

The Trustible and AIID partnership focuses on three primary areas:

  • Platform Integration: Authorized use of AIID content within Trustible’s AI governance platform.
  • Education & Thought Leadership: Support for RAIC’s operations and continued development of the database, including opportunities to educate the business and academic communities on the latest AI risks and mitigation strategies.
  • Joint Research: Collaborative work on incident analysis, emerging AI risks, and governance best practices, with findings published publicly.

Commitments to Independence

This partnership is designed to bolster RAIC’s operations, ensuring the AIID’s work continues without interruption or influence from business partners. Trustible will not participate in RAIC’s editorial decisions or influence how incidents are logged and evaluated. These decisions remain the purview of RAIC’s editorial team, in accordance with its publicly available methodology.

For Trustible customers, all platform and organizational data will remain confidential. No customer information stored in the Trustible platform will be shared with any third party without explicit written consent.

About Trustible

Trustible is where AI governance gets done. The company aids regulated enterprises in managing AI risk, complying with regulations, and accelerating the safe and responsible adoption of AI through its industry-leading governance platform. Trustible has raised $7.69M in funding, supported by leading investors. As AI governance becomes a strategic priority for global enterprises, Trustible is setting the standard for safe, ethical, and scalable AI adoption.

About the Responsible AI Collaborative

The Responsible AI Collaborative (RAIC) is an independent nonprofit that manages the AI Incident Database (AIID), the most widely used public repository of real-world AI harms. Over eight years, the AIID has grown to over 5,000 curated incident reports and has informed the development of national and intergovernmental AI standards.
