Category: Artificial Intelligence in Healthcare

Global Standards for AI in Healthcare: A WHO Initiative

The World Health Organization (WHO) has launched a global initiative to establish a unified governance framework for artificial intelligence (AI) in healthcare, focusing on safety, ethics, and accessibility. This initiative aims to support low- and middle-income countries in effectively integrating AI into their health systems while addressing ethical concerns and regulatory challenges.

Shaping Responsible AI Governance in Healthcare

The AI regulatory landscape has shifted significantly, with the US and UK adopting more pro-innovation approaches and the EU likewise adjusting its regulatory focus. This evolving environment presents both opportunities and challenges for ensuring safety and public trust in health-related AI technologies.

Regulating AI in Life Sciences: The UK’s Evolving Framework

The UK is developing new regulations to better govern software and AI in medical devices, with the Medicines and Healthcare products Regulatory Agency (MHRA) focusing on principles that ensure safety and efficacy. As innovation accelerates, the MHRA has launched programmes such as the AI Airlock pilot to streamline the path for AI-enabled medical devices to reach the market.

Will AI Soon Have the Power to Prescribe Medication?

A new bill, the Healthy Technology Act of 2025, would grant artificial intelligence (AI) systems prescribing privileges if authorized by state law and approved by the FDA. While some experts see potential for AI in healthcare, they stress that more research is needed before AI can autonomously prescribe medications.

AI Act: A Crucial Shift for Healthcare Compliance

The EU AI Act is now partially in effect, and hospitals and vendors must achieve full compliance by August 2026 or face substantial fines. Experts emphasize the need for transparency and education in deploying AI tools to ensure patient safety and effective healthcare outcomes.

AI Accountability in Healthcare: Rethinking Safety and Ethics

The paper examines the challenges of moral accountability and safety assurance posed by AI-based clinical tools in healthcare. Because these systems' decision-making processes are opaque, it argues that our understanding of accountability needs updating and suggests involving AI developers in the assessment of patient harm.
