Coming AI Regulations Have IT Leaders Worried About Hefty Compliance Fines
As organizations increasingly deploy generative AI, more than 70% of IT leaders are concerned about their ability to meet upcoming regulatory requirements. A recent Gartner survey ranks regulatory compliance among the top three challenges these leaders face.
The Compliance Landscape
Fewer than 25% of IT leaders are very confident in their organizations’ ability to manage critical security and governance issues, particularly regulatory compliance, when using generative AI. The concern is compounded by an anticipated patchwork of regulations that could conflict with one another.
Lydia Clougherty Jones, a senior director analyst at Gartner, says the growing number of legal nuances poses a significant challenge, particularly for global organizations. The frameworks announced by different countries vary widely, creating confusion and compliance difficulties.
Projected Legal Implications
Gartner forecasts that AI regulatory violations will drive a 30% increase in legal disputes for tech companies by 2028. By mid-2026, new categories of illegal AI-informed decision-making are expected to cost AI vendors and users more than $10 billion in remediation.
Initial Regulatory Efforts
Government initiatives to regulate AI are still in their early stages. The EU AI Act, which entered into force in August 2024, represents one of the first major legislative efforts to govern AI usage.
In the United States, Congress has mostly taken a hands-off approach, but several states have enacted their own AI regulations. For instance, the 2024 Colorado AI Act mandates that organizations using AI maintain risk management programs and conduct impact assessments to protect consumers from algorithmic discrimination.
Additionally, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 2026, requires government entities to notify individuals when they interact with AI and prohibits manipulative AI practices.
California’s Transparency in Frontier Artificial Intelligence Act, signed into law by Governor Gavin Newsom, requires large AI developers to disclose their compliance with national standards and report critical safety incidents within 15 days. Non-compliance can result in fines of up to $1 million per violation.
Implications for CIOs
As state and global regulations evolve, CIOs are understandably apprehensive about compliance while deploying AI technologies. Dion Hinchcliffe, a vice president at Futurum Equities, notes that CIOs are closely monitoring the accuracy and trustworthiness of AI data.
Many CIOs worry that existing regulatory and governance compliance solutions will not keep pace with the rapidly changing landscape of regulations and AI functionality. Hinchcliffe emphasizes that the probabilistic nature of AI complicates compliance: the same input can yield different outputs, making consistent, auditable governance difficult to guarantee.
Challenges for Health IT Leaders
Tina Joros, chairwoman of the Electronic Health Record Association AI Task Force, expresses concerns that the fragmented regulatory landscape could exacerbate the digital divide between large health systems and smaller counterparts struggling to adopt AI.
James Thomas, chief AI officer at ContractPodAi, underscores the operational headaches caused by regulatory inconsistencies. Definitions of key terms such as transparency and accountability vary across jurisdictions, complicating compliance for global enterprises.
Recommendations for Governance
To navigate these challenges, experts recommend that organizations adopt comprehensive governance controls as they deploy AI technologies. Many currently contend with fragmented deployments, driven by individual employees adopting personal productivity tools outside central IT oversight.
Gartner also advises focusing on training AI models to self-correct, implementing rigorous use-case review procedures, and deploying content moderation techniques to ensure compliance and manage risks effectively.
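To make the content-moderation recommendation concrete, here is a minimal Python sketch of a moderation gate wrapped around a generative model call. Everything in it is illustrative: the blocked-topic list, the `violates_policy` screen, and the `call_model` callable are hypothetical placeholders, not any vendor's actual API.

```python
# Illustrative content-moderation gate around a generative AI call.
# BLOCKED_TOPICS and call_model are hypothetical placeholders, not a real API.

BLOCKED_TOPICS = {"medical diagnosis", "credit decision"}  # example high-risk uses

def violates_policy(text: str) -> bool:
    """Naive keyword screen; production systems would use a trained moderation model."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def moderated_generate(prompt: str, call_model) -> str:
    """Screen both the request and the response before anything reaches a user."""
    if violates_policy(prompt):
        return "Request declined: outside approved AI use cases."
    output = call_model(prompt)
    if violates_policy(output):
        return "Response withheld pending human review."
    return output
```

A real deployment would swap the keyword screen for a dedicated moderation classifier or vendor service, but the control flow, checking inputs and outputs on both sides of the model call, is the essence of the technique.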
In high-risk scenarios, organizations may benefit from engaging external auditors to validate AI outputs, enabling a robust defense of the data and models behind them.
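To make outputs auditable in the first place, each AI decision needs to be recorded with enough context to reproduce and verify it later. Below is a minimal sketch, assuming a simple JSON-lines audit trail; the field names and helper functions are illustrative, not any standard or specific product.

```python
import hashlib
import json
import time

def audit_record(model_id: str, prompt: str, output: str) -> dict:
    """Tie an AI output to the model version and input that produced it."""
    return {
        "timestamp": time.time(),
        "model_id": model_id,  # e.g., model name plus version string
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output": output,
    }

def log_decision(path: str, record: dict) -> None:
    """Append one JSON line per decision; auditors can later verify the hashes."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Hashing the prompt and output lets an external auditor confirm that logged records were not altered after the fact, without the log having to retain sensitive prompt text.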