Building Responsible AI: A Comprehensive Risk Assessment Toolkit

Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment

The rapid growth of Artificial Intelligence (AI) has underscored the urgent need for responsible AI practices. Despite increasing interest, a comprehensive AI risk assessment toolkit remains lacking. This study introduces the Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives. By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks, aligns with emerging regulations like the EU AI Act, and enhances overall AI governance.

A key benefit of the RAI Question Bank is its systematic approach to linking lower-level risk questions to higher-level ones and related themes, preventing siloed assessments and ensuring a cohesive evaluation process. Case studies illustrate the practical application of the RAI Question Bank in assessing AI projects, from evaluating risk factors to informing decision-making processes.

1. Introduction

Since the emergence of ChatGPT and other large language models, AI has surged in popularity. Fueled by remarkable advancements, companies across all industries are rapidly adopting, using, and developing AI systems to enhance their businesses. While this rapid adoption has driven significant market growth in the AI industry and generated excitement about its potential, it has also raised concerns about the responsible development and application of AI, including hallucination, the generation of harmful content, and user overreliance.

A recent report found that while many companies view AI as a promising technology and actively pursue AI opportunities, only 10% of those surveyed have publicly announced their Responsible AI (RAI) policies. This suggests that many companies remain uncertain about their RAI maturity. Moreover, there have been numerous incidents involving AI across various sectors, raising concerns in areas such as privacy, bias, and safety.

2. Background and Literature Review

The discourse on RAI has gained significant traction in both industry and academia, underscoring the critical need for effective AI risk management to foster RAI practices. Despite the proliferation of studies and frameworks on responsible and safe AI, many remain abstract, lacking concrete measures for risk assessment and management. Our previous mapping study systematically analyzed 16 AI risk assessment and management frameworks worldwide to gain insights into current practices in managing AI risks.

Key trends and areas for improvement in AI risk assessment practices worldwide have been identified, informing the design and development of our question bank. The continued growth in the number of AI risk assessment frameworks reflects increasing global concern and a growing recognition that RAI approaches are needed to assess and mitigate these risks.

3. Methodology

This study was conducted in five dedicated phases from 2022 to 2024. A systematic mapping study was performed to understand the state-of-the-art in AI risk assessment and select reference frameworks for developing the RAI question bank. The frameworks selected were scrutinized, and AI risk questions were synthesized to develop a comprehensive and holistic question bank for AI risk assessment.

The evaluation of our proposed question bank was carried out in two phases of case studies involving eight AI projects and the ESG-AI framework development project. The case studies provided valuable insights and feedback for refining the question bank, ensuring its relevance and effectiveness in real-world applications.

4. RAI Question Bank Overview

The RAI Question Bank is a set of risk questions designed to support the ethical development of AI systems. It addresses the needs of various users, including C-level executives, senior managers, developers, and others. The necessity for such a question bank is evident, given the complex and multifaceted nature of AI risks that can arise from various sources, including biased data, algorithmic errors, unintended consequences, and more.

Organized into three levels (Level 1-3), the RAI Question Bank ensures a comprehensive evaluation by linking high-level principles to detailed, technical questions. This structure promotes a thorough and consistent process for responsible AI development.
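The linked-level structure described above can be sketched in code. The following is a minimal illustration, not the actual implementation: the question IDs, texts, and the `Question`/`flatten` names are hypothetical, chosen only to show how a Level 1 principle question can cascade down to Level 2 theme and Level 3 technical questions so that no assessment is siloed.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-level question hierarchy: Level 1 holds
# high-level principle questions, each linked to Level 2 theme questions,
# which in turn link to detailed Level 3 technical questions.

@dataclass
class Question:
    qid: str
    text: str
    level: int                      # 1 = principle, 2 = theme, 3 = technical
    children: list["Question"] = field(default_factory=list)

def flatten(q: Question) -> list[Question]:
    """Walk a principle question down through its linked sub-questions."""
    out = [q]
    for child in q.children:
        out.extend(flatten(child))
    return out

# Example (invented) chain for the fairness principle.
fairness = Question(
    "F1", "Is the AI system designed to treat all user groups fairly?", 1,
    children=[
        Question(
            "F1.1", "Has the training data been checked for representation bias?", 2,
            children=[
                Question(
                    "F1.1.1",
                    "Which subgroup-level fairness metrics are computed and monitored?",
                    3,
                ),
            ],
        ),
    ],
)
```

Answering the Level 3 question then feeds evidence back up the chain, so the high-level fairness assessment rests on concrete technical checks rather than a standalone judgment.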

5. Case Studies

Two case studies were conducted to evaluate the application of the RAI Question Bank. The first case study focused on AI projects within a national research organization in Australia, assessing the ethical risks associated with each project. The second case study involved the integration of Environment, Social and Governance (ESG) factors into AI frameworks for investors. Feedback from stakeholders during these case studies proved invaluable for refining the question bank.

6. Compliance with Regulations

With the emergence of regulations such as the EU AI Act, companies face growing concerns about compliance with current and upcoming regulations, especially for high-risk AI systems. The RAI Question Bank can be utilized to examine RAI practices against legal requirements, creating a comprehensive compliance checklist.

This three-step approach includes identifying legal requirements, mapping them to the RAI Question Bank, and scoring the AI system against these mapped questions, enabling organizations to assess their compliance with corresponding requirements.
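The three-step workflow above can be sketched as follows. This is a minimal illustration under assumed inputs: the requirement labels, question IDs, scores, and the `compliance_report` helper are all hypothetical, standing in for an organization's actual mapping of legal requirements to question-bank entries.

```python
# Hypothetical sketch of the three-step compliance workflow:
# (1) identify legal requirements, (2) map each requirement to question IDs
# in the question bank, (3) score the AI system against the mapped questions.

# Steps 1 and 2: requirements mapped to (invented) question IDs.
requirement_map = {
    "EU AI Act Art. 9 (risk management)": ["RM1", "RM2"],
    "EU AI Act Art. 13 (transparency)":   ["T1"],
}

# Step 3: per-question assessment scores in [0, 1].
answers = {"RM1": 1.0, "RM2": 0.5, "T1": 1.0}

def compliance_report(req_map, scores):
    """Average the mapped question scores for each legal requirement."""
    return {
        req: sum(scores[q] for q in qids) / len(qids)
        for req, qids in req_map.items()
    }

report = compliance_report(requirement_map, answers)
```

Averaging is just one possible aggregation; an organization might instead require every mapped question to pass, or weight questions by severity.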

7. Conclusion

The RAI Question Bank represents a significant step forward in the structured risk assessment of AI systems. It offers organizations a practical resource for thorough, consistent evaluation, supporting the responsible development and use of AI systems that are not only innovative but also aligned with ethical principles and societal values.
