Co-creating Responsible and Ethical AI with Healthcare Stakeholders

Embedding ethical principles into AI development requires an integrated, multi-stakeholder approach involving not only developers but also business users and customers.

During a recent panel discussion titled "Beyond Compliance: Responsible and Ethical AI in Singapore's Public Healthcare," held on September 16, participants emphasized the importance of starting from the user's perspective to build effective and safe AI solutions. The panel brought together a diverse group of experts, including directors and senior data scientists from healthcare institutions and technology firms.

Understanding User Perspectives

Panelists highlighted that technology should not be the starting point. Instead, the focus should be on understanding who is impacted by AI—be it clinicians or patients. This user-centric approach is essential for developing solutions that are both ethical and practical.

Implementing Ethical Principles

Public healthcare institutions can embed responsible and ethical AI principles into their daily workflows using specific tools such as threat modeling and the data ethical card practice. While frameworks like the Model AI Governance Framework for Generative AI serve as a helpful starting point, these practices ensure that responsible AI is integrated into the product team’s thought process.

Addressing Bias and Challenges

Bias in AI systems is a complex issue that requires a holistic approach. This includes everything from data preparation and cleaning to model building and ongoing monitoring. The panel acknowledged that achieving a perfect AI model is not always feasible; rather, institutions should determine how much error is acceptable for specific use cases and contexts.
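The panel's point that institutions should decide how much error is acceptable for a given use case can be made concrete with a small sketch. Everything below is a hypothetical illustration, not anything described by the panelists: the data, the subgroup labels, and the 10% tolerance are all invented for the example, and a real deployment would use proper evaluation tooling and clinically justified thresholds.

```python
# Hypothetical sketch: checking a model's error rate against a
# use-case-specific tolerance, both overall and per patient subgroup,
# so that an acceptable average does not hide a biased subgroup.
# All data and thresholds here are illustrative assumptions.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the labels."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels)

def within_tolerance(predictions, labels, groups, max_error):
    """Return (ok, report): ok is False if the overall error or any
    subgroup's error exceeds the acceptable threshold."""
    report = {"overall": error_rate(predictions, labels)}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        report[g] = error_rate([predictions[i] for i in idx],
                               [labels[i] for i in idx])
    ok = all(e <= max_error for e in report.values())
    return ok, report

# Illustrative data: a low-stakes triage aid might tolerate 10% error,
# while a diagnostic use case would demand a far stricter threshold.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ok, report = within_tolerance(preds, labels, groups, max_error=0.10)
```

The design choice worth noting is the per-subgroup check: as the panel observed, bias has to be monitored end to end, and a single aggregate metric can mask unacceptable error concentrated in one group of patients.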

Responsible AI practices should be regarded as a cross-functional requirement, affecting all aspects of a project rather than just a single team. Transparency and clear communication with the public regarding data usage are vital for building trust, especially in the public sector.

Adopting an Ecosystem Approach

In Singapore, a whole-of-government approach is being employed to roll out its national population health movement, Healthier SG. The Health Promotion Board (HPB) recognizes that a single app with individual user data may not suffice in promoting meaningful health changes, as health is influenced by various factors, including social environments.

As AI systems expand to integrate data from diverse sources, a comprehensive ecosystem approach is essential for governance. Clear ethical boundaries must be established to prevent misuse, ensuring that AI acts as a supportive tool rather than a replacement for human expertise.

Conclusion

The development of responsible and ethical AI in healthcare is not merely a technical challenge but a multifaceted endeavor that requires collaboration among various stakeholders. By embedding ethical principles into workflows and maintaining transparency with users, healthcare institutions can leverage AI to enhance service delivery while safeguarding public trust.
