Will AI Soon Have the Power to Prescribe Medication?

This Bill Could Make It Legal for AI to Prescribe Medicine

A new legislative proposal, the Healthy Technology Act of 2025, would grant artificial intelligence (AI) the authority to prescribe medications if it wins approval from Congress. The bill would amend the Federal Food, Drug, and Cosmetic Act so that AI and machine learning technologies could qualify as practitioners licensed to prescribe drugs, provided they are authorized by the state involved and approved by the US Food and Drug Administration (FDA).

Current Landscape of AI in Healthcare

Many physicians are optimistic about AI's role in enhancing healthcare delivery. Today, AI is used to streamline processes such as clinical documentation and to assist in decision-making. However, experts caution that extensive research is still required before AI can autonomously generate prescriptions.

Dr. Ravi B. Parikh, an associate professor at Emory University, pointed out that the legislation references a type of AI technology that is not yet available. The bill was introduced to the US House of Representatives on January 7 and is currently under review by the House Committee on Energy and Commerce.

AI’s Limitations in Prescription Writing

As of now, AI does not possess the capability to independently write prescriptions. Researchers are working on developing AI tools that assist physicians in making informed prescribing decisions. For instance, predictive tools analyze a patient’s electronic health records to assess the likelihood of treatment efficacy.

Additionally, AI is being developed to create digital twins of patients, enabling simulations to determine the most effective medication options. Despite these advancements, there remains a significant gap between AI’s current capabilities and the level of trust required for it to take over prescribing duties.

Concerns Regarding AI Prescribers

Experts have expressed concerns about the accuracy and reliability of AI in clinical settings. Past incidents, such as an AI scribe incorrectly diagnosing a patient based on incomplete data, highlight the potential risks associated with AI-enabled prescriptions. Dr. Matthew DeCamp emphasized that AI’s performance in real-time clinical environments remains unproven, and it cannot replicate the nuanced decision-making that human physicians engage in when prescribing medications.

Furthermore, issues surrounding bias in AI training data could lead to skewed recommendations, particularly if the data used to train these systems is incomplete or unrepresentative of diverse patient populations.

Regulatory Challenges

The proposed act raises critical questions about how AI prescribers would be regulated. Under the Healthy Technology Act, AI prescribing tools would need both state authorization and FDA approval, yet existing regulatory standards may not be adequate for the complexities of AI technologies.

Dr. Parikh cautioned that the regulatory landscape for AI devices is currently less stringent than for traditional medications, which could lead to premature adoption of AI in prescribing without sufficient evidence of its efficacy.

Future Prospects of AI in Prescribing

To become law, the Healthy Technology Act of 2025 must pass both chambers of Congress. Similar bills introduced in previous sessions stalled in committee, leaving the measure's prospects uncertain.

Should the bill pass, there is speculation about how AI could eventually play a role in prescribing protocols, perhaps by assisting in low-risk scenarios while ensuring a human physician remains involved in the decision-making process.

In conclusion, while AI's potential to enhance healthcare is significant, allowing AI to prescribe medications would demand careful attention to regulatory frameworks, ethical standards, and ongoing oversight to protect patient safety and ensure treatment efficacy.
