Trustible Leads Inaugural Sponsor Cohort for the AI Incident Database
ARLINGTON, Va., Jan. 26, 2026 /PRNewswire/ — Trustible, a leading provider of AI governance software for enterprises, has announced a significant partnership with the Responsible AI Collaborative (RAIC), the independent nonprofit behind the AI Incident Database (AIID). Trustible is spearheading RAIC’s inaugural cohort of corporate sponsors, aiming to integrate AIID incident data directly into its platform and collaborate on research regarding AI risk and real-world governance practices.
Why This Partnership Matters
Effective AI governance requires a comprehensive understanding of how AI systems fail in real-world scenarios. The AIID serves as the definitive public record of AI-related incidents, with more than 5,000 incident reports collected over eight years and used by central banks, intergovernmental organizations, researchers, and practitioners globally.
Through this collaboration, Trustible customers will gain access to AIID incident reports directly within the Trustible platform. Organizations will be able to proactively map their internal AI inventories to incidents from the database, and will receive customized alerts whenever new incidents are reported for relevant use cases, models, or vendors tracked in Trustible’s AI inventory. This capability enables organizations to stay ahead of emerging AI risks and understand potential mitigation strategies in near real time, building trust and confidence in their AI deployments.
Key Statements
“We’ve long valued the RAIC’s work maintaining this resource,” stated Andrew Gamino-Cheong, CTO and Co-Founder of Trustible. “Our risk and mitigation taxonomies already draw heavily on AIID data. This partnership strengthens that connection, and we’re committed to supporting RAIC’s independence, not shaping it. Their credibility exists because they’ve kept editorial control in-house, and that’s exactly how it should stay.”
Sean McGregor, founder of AIID, emphasized, “The AIID was created so companies like Trustible can motivate AI governance decisions from demonstrated risks. Trustible’s ability to link recommendations to clear statements of what companies are working to prevent supplies an answer to the all-important question of ‘why spend money on AI governance?’”
Partnership Components
The Trustible and AIID partnership focuses on three primary areas:
- Platform Integration: Authorized use of AIID content within Trustible’s AI governance platform.
- Education & Thought Leadership: Support for RAIC’s operations and continued development of the database, including opportunities to educate the business and academic communities on the latest AI risks and mitigation strategies.
- Joint Research: Collaborative work on incident analysis, emerging AI risks, and governance best practices, with findings published publicly.
Commitments to Independence
This partnership is designed to bolster RAIC’s operations, ensuring that the AIID’s work continues without interruption or influence from business partners. Trustible will not participate in RAIC’s editorial decisions or influence how incidents are logged and evaluated. These decisions will remain the sole purview of RAIC’s editorial team, adhering to its publicly available methodology.
For Trustible customers, all platform and organizational data will remain confidential. No customer information stored in the Trustible platform will be shared with any third party without explicit written consent.
About Trustible
Trustible is where AI governance gets done. The company helps regulated enterprises manage AI risk, comply with regulations, and accelerate the safe and responsible adoption of AI through its industry-leading governance platform. Trustible has raised $7.69M in funding from leading investors. As AI governance becomes a strategic priority for global enterprises, Trustible is setting the standard for safe, ethical, and scalable AI adoption.
About the Responsible AI Collaborative
The Responsible AI Collaborative (RAIC) is an independent nonprofit that manages the AI Incident Database (AIID), the most widely used public repository of real-world AI harms. Over eight years, the AIID has grown to over 5,000 curated incident reports and has informed the development of national and intergovernmental AI standards.