Co-creating Responsible and Ethical AI with Healthcare Stakeholders

Embedding ethical principles into AI development requires an integrated, multi-stakeholder approach involving not only developers but also business users and customers.

During a recent panel discussion titled Beyond Compliance: Responsible and Ethical AI in Singapore’s Public Healthcare, held on September 16, participants emphasized the importance of starting from the user’s perspective when building effective and safe AI solutions. The panel brought together a diverse group of experts, including directors and senior data scientists from healthcare institutions and technology firms.

Understanding User Perspectives

Panelists highlighted that technology should not be the starting point. Instead, the focus should be on understanding who is impacted by AI—be it clinicians or patients. This user-centric approach is essential for developing solutions that are both ethical and practical.

Implementing Ethical Principles

Public healthcare institutions can embed responsible and ethical AI principles into their daily workflows using practical tools such as threat modeling and data ethics cards. While frameworks like the Model AI Governance Framework for Generative AI are a helpful starting point, these hands-on practices ensure that responsible AI is built into the product team’s thought process.

Addressing Bias and Challenges

Bias in AI systems is a complex issue that requires a holistic approach. This includes everything from data preparation and cleaning to model building and ongoing monitoring. The panel acknowledged that achieving a perfect AI model is not always feasible; rather, institutions should determine how much error is acceptable for specific use cases and contexts.
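The idea of deciding how much error is acceptable per use case can be made concrete with a minimal sketch. The use cases, error rates, and thresholds below are purely illustrative assumptions, not figures from the panel: the point is simply that tolerance is set by context, with higher-stakes clinical uses held to stricter limits than low-stakes operational ones.

```python
# Hypothetical sketch: checking a model's observed error rate against
# per-use-case tolerances. All names and numbers here are illustrative.

ACCEPTABLE_ERROR = {
    "appointment_no_show_prediction": 0.15,  # low-stakes operational use
    "triage_priority_suggestion": 0.05,      # higher-stakes clinical support
}

def within_tolerance(use_case: str, observed_error: float) -> bool:
    """Return True if the observed error rate is acceptable for this use case."""
    return observed_error <= ACCEPTABLE_ERROR[use_case]

# The same observed error can pass in one context and fail in another.
print(within_tolerance("appointment_no_show_prediction", 0.08))  # True
print(within_tolerance("triage_priority_suggestion", 0.08))      # False
```

In practice the thresholds themselves would be set by clinicians and governance teams, not engineers alone, which is exactly the cross-functional point the panel makes below.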

Responsible AI practices should be regarded as a cross-functional requirement, affecting all aspects of a project rather than just a single team. Transparency and clear communication with the public regarding data usage are vital for building trust, especially in the public sector.

Adopting an Ecosystem Approach

In Singapore, a whole-of-government approach is being employed to roll out its national population health movement, Healthier SG. The Health Promotion Board (HPB) recognizes that a single app with individual user data may not suffice in promoting meaningful health changes, as health is influenced by various factors, including social environments.

As AI systems expand to integrate data from diverse sources, a comprehensive ecosystem approach is essential for governance. Clear ethical boundaries must be established to prevent misuse, ensuring that AI acts as a supportive tool rather than a replacement for human expertise.

Conclusion

The development of responsible and ethical AI in healthcare is not merely a technical challenge but a multifaceted endeavor that requires collaboration among various stakeholders. By embedding ethical principles into workflows and maintaining transparency with users, healthcare institutions can leverage AI to enhance service delivery while safeguarding public trust.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...