Texas Implements Groundbreaking AI Regulations in Healthcare

Texas Enacts Comprehensive AI Governance Laws with Sector-Specific Healthcare Provisions

Texas has taken a significant step in regulating artificial intelligence (AI) with the passage of House Bill (HB) 149 and Senate Bill (SB) 1188. Signed into law on June 22, 2025, and effective January 1, 2026, HB 149 – formally titled the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) – establishes a broad framework for the responsible use of AI across the public sector, with more limited requirements levied upon the private sector, including healthcare providers.

The law is designed to promote transparency and responsible deployment of AI, particularly in contexts where automated systems are used to make decisions that materially affect individuals. Additionally, SB 1188, which was signed into law on June 20, 2025, and becomes effective September 1, 2025, introduces specific requirements for healthcare providers using AI in diagnostic contexts, while also prohibiting the physical offshoring of electronic medical records.

TRAIGA: A Statewide Framework for Responsible AI

TRAIGA places limitations on how Texas state agencies, as well as developers and deployers of AI systems, may use these technologies. These limitations extend to the healthcare industry. Notably, TRAIGA requires healthcare providers to disclose to patients (or their personal representatives) any use of AI systems in the diagnosis or treatment of those patients.

In clinical settings, this disclosure must be made before or at the time of interaction, except in emergencies, when it must be provided as soon as reasonably possible. This requirement is intended to ensure that patients are aware of when AI is involved in their care so they can make informed decisions accordingly, such as whether to seek care from a different provider.

In addition to the disclosure requirement, TRAIGA includes provisions that prohibit the use of AI with the specific intent to discriminate against individuals based on protected characteristics. However, the law clarifies that a disparate impact alone is not sufficient to establish discriminatory intent – a distinction that may shape how bias in healthcare algorithms is evaluated.

The statute also addresses the use of biometric data in AI systems, though these restrictions apply only to governmental entities. Specifically, it bars government agencies from using AI to identify individuals through biometric data without consent, where such use would infringe on their constitutional or statutory rights. Notably, biometric data used for healthcare treatment, payment, or operations under the Health Insurance Portability and Accountability Act (HIPAA) is excluded from this definition.

Beyond these substantive provisions, TRAIGA imposes governance obligations on organizations that develop or deploy AI systems. Healthcare providers should review internal policies and procedures to assess and mitigate risks, maintain documentation, and ensure human oversight in AI-assisted decision-making. Enforcement authority rests with the Texas attorney general, who is empowered to investigate violations and impose civil penalties.

Texas SB 1188: AI in Healthcare and Data Localization

SB 1188 introduces targeted obligations for healthcare providers using AI. Specifically, the law states that licensed practitioners may use AI to support diagnosis and treatment planning, provided the following requirements are satisfied:

  1. The provider acts within the scope of their licensure, regardless of their use of AI.
  2. The use of AI is not otherwise prohibited by law.
  3. The provider reviews all AI-generated records in accordance with standards set by the Texas Medical Board.

Thus, this bill essentially requires a provider to review any AI-generated records or recommendations and make the ultimate medical decision in accordance with the provider’s scope of practice. In addition, SB 1188 imposes a strict data localization mandate, prohibiting the physical offshoring of electronic medical records. This requirement applies not only to records stored directly by healthcare providers but also to those maintained by third-party vendors or cloud service providers.

Covered entities must ensure that such records are accessible only to individuals whose job responsibilities require access for treatment, payment, or healthcare operations, and must implement reasonable administrative, physical, and technical safeguards to protect the confidentiality, integrity, and availability of patient data.

Looking Ahead

Together, TRAIGA and SB 1188 reflect Texas’ growing role in shaping state-level AI regulation, particularly in the healthcare sector. These laws demonstrate a deliberate effort to balance technological advancement with patient and consumer protections. As these requirements take effect, businesses and healthcare providers operating in Texas should begin reviewing their AI systems, patient policies, and data handling practices to ensure compliance.

As the use of AI in healthcare continues to evolve, healthcare businesses and providers should confirm that each intended use of AI complies with these laws both at the time of implementation and on an ongoing basis.
