AI Regulatory Violations to Drive Surge in Tech Legal Disputes by 2028

According to a recent Gartner survey, AI regulatory violations are expected to drive a 30% increase in legal disputes for tech companies by 2028. The survey of 360 IT leaders involved in deploying generative AI (GenAI) tools highlighted critical challenges within the industry.

Concerns Over Regulatory Compliance

The survey revealed that over 70% of IT leaders identified regulatory compliance as one of the top three challenges to the widespread deployment of GenAI-powered productivity assistants. Alarmingly, only 23% of respondents expressed high confidence in their organization's ability to manage the security and governance aspects of GenAI tools.

The Impact of Global Regulations

Lydia Clougherty Jones, a senior director analyst at Gartner, noted that global AI regulations vary significantly across countries, reflecting each nation’s approach to aligning AI leadership, innovation, and risk mitigation. This disparity creates inconsistent compliance obligations, complicating the alignment of AI investments with tangible enterprise value, while exposing companies to potential liabilities.

Geopolitical Climate Influencing GenAI Strategies

The survey also indicated that the geopolitical climate is increasingly influencing GenAI strategies, with 57% of non-US IT leaders acknowledging that it moderately impacts their deployment strategies. Notably, 19% reported a significant impact. Despite this, nearly 60% of respondents indicated an inability or unwillingness to adopt non-US alternatives for GenAI tools.

Sentiment Towards AI Sovereignty

A recent webinar poll conducted by Gartner showed that 40% of respondents viewed their organization’s sentiment towards AI sovereignty positively, seeing it as an opportunity. Conversely, 36% maintained a neutral stance, adopting a “wait and see” approach. Furthermore, 66% of respondents indicated proactive engagement in response to sovereign AI strategies, while 52% reported making strategic adjustments as a direct result of these insights.

Recommendations for IT Leaders

Gartner emphasized the need for IT leaders to enhance the moderation of AI outputs. This can be achieved by training models for self-correction, establishing rigorous use-case review procedures to assess risks, and implementing control testing for AI-generated communications. Additionally, organizations are urged to increase model testing and sandboxing by forming cross-disciplinary teams that include decision engineers, data scientists, and legal experts to develop pre-testing protocols and validate model outputs against undesired conversational results.
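To make the "control testing" recommendation concrete, here is a minimal sketch of a pre-release check for AI-generated communications. The rule names and patterns below are hypothetical placeholders, not anything prescribed by Gartner; in practice a cross-disciplinary review team would define the undesired-result rules for its own use cases.

```python
import re

# Hypothetical undesired-result rules a review team might define.
# These placeholder patterns flag unapproved guarantees, claims of
# legal advice, and a simple PII format (US-style SSNs).
UNDESIRED_PATTERNS = {
    "unapproved_guarantee": re.compile(r"\bguarantee[ds]?\b", re.IGNORECASE),
    "legal_advice": re.compile(r"\bthis is legal advice\b", re.IGNORECASE),
    "pii_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def control_test(message: str) -> list[str]:
    """Return the names of every undesired-result rule the message violates."""
    return [name for name, pattern in UNDESIRED_PATTERNS.items()
            if pattern.search(message)]

def approve_for_release(message: str) -> bool:
    """Release a generated message only if it triggers no rules."""
    return not control_test(message)
```

For example, `approve_for_release("We guarantee full compliance.")` would be rejected by the `unapproved_guarantee` rule, while a neutral acknowledgment would pass. A production version would layer on model-based classifiers and human escalation rather than relying on regular expressions alone.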

As the landscape of AI continues to evolve, the importance of understanding and addressing regulatory challenges cannot be overstated. Tech firms must navigate these complexities to mitigate risks and harness the full potential of generative AI technologies.
