Virginia’s New Law Sets Standards for AI in Customer Support

Virginia is on the verge of enacting a significant law regulating the use of high-risk artificial intelligence (AI) systems in customer support. The legislation, known as the High-Risk Artificial Intelligence Developer and Deployer Act, introduces compliance requirements for businesses whose AI systems affect consumers in Virginia.

Passed by the Virginia General Assembly, the law is pending the governor’s signature and is set to take effect on July 1, 2026. Organizations that fail to comply with its provisions risk substantial fines, potentially reaching $10,000 per affected customer.

Definition of High-Risk AI

The act targets AI systems that autonomously make, or are a substantial factor in making, consequential consumer decisions. Notable high-risk applications include:

  • Automating decisions regarding customer eligibility for products or services.
  • Generating personalized financial offers and recommendations.
  • Determining access to premium services or customer tiers.
  • Resolving disputes and processing customer claims automatically.
  • Influencing credit approvals and financing options.

This law aims to enhance accountability and transparency in AI deployment within customer interactions.

Compliance Requirements for Developers and Deployers

Under this legislation, entities involved in the development or deployment of AI-driven customer experience systems are categorized as developers and deployers, respectively. Each classification comes with a set of responsibilities:

  • Developers must take reasonable steps to prevent discrimination, disclose the system’s purpose and limitations, provide documentation for bias monitoring, and update disclosures within 90 days of major changes.
  • Deployers must establish a risk management policy for their AI tools, conduct impact assessments before deployment, and inform customers when AI is involved in decision-making. They must also explain adverse decisions and retain documentation for at least three years (see the record-keeping sketch after this list).
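
To make the deployer obligations concrete, here is a minimal record-keeping sketch in Python. The `AIDecisionRecord` structure and its field names are hypothetical illustrations rather than terms defined in the act; the three-year retention window reflects the statutory minimum described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# The act requires documentation to be kept for at least three years.
RETENTION_PERIOD = timedelta(days=3 * 365)

@dataclass
class AIDecisionRecord:
    """Hypothetical audit record for one AI-influenced consumer decision."""
    customer_id: str
    decision: str        # e.g., "premium_tier_denied"
    ai_involved: bool    # drives the customer-facing disclosure requirement
    adverse: bool        # adverse decisions must be explained to the customer
    explanation: str     # plain-language reason provided to the customer
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def may_be_purged(self, now: datetime) -> bool:
        # A record may only be deleted once the retention window has passed.
        return now - self.decided_at > RETENTION_PERIOD
```

In practice, records like these would feed both the pre-deployment impact assessments and the adverse-decision explanations the law requires.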

Generative AI Regulations

The law also sets out specific guidelines for generative AI (GenAI). It mandates detectable markers or other identification methods for AI-generated synthetic content, including audio, video, and images, in customer experience applications. This requirement applies to:

  • AI-generated product demonstrations.
  • Virtual try-ons.
  • AI-voiced customer service interactions.
  • Personalized marketing efforts.

However, exceptions are made for creative works and artistic expressions, allowing their use in marketing and branded content.
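
The act calls for detectable markers but does not prescribe a particular format, so the sketch below shows one possible approach under that assumption: attaching a machine-readable disclosure label to each synthetic asset’s metadata. `SyntheticAsset`, `mark_as_synthetic`, and the label value are illustrative names, not requirements drawn from the law.

```python
from dataclasses import dataclass

# Assumed label scheme; the statute does not specify a marking format.
AI_CONTENT_LABEL = "ai-generated"

@dataclass
class SyntheticAsset:
    media_type: str   # "audio", "video", or "image"
    payload: bytes
    metadata: dict

def mark_as_synthetic(asset: SyntheticAsset) -> SyntheticAsset:
    """Attach a machine-readable disclosure marker before the asset is served."""
    asset.metadata["content-label"] = AI_CONTENT_LABEL
    return asset

def is_marked(asset: SyntheticAsset) -> bool:
    """Check that a customer-facing asset carries the disclosure marker."""
    return asset.metadata.get("content-label") == AI_CONTENT_LABEL
```

A gate like `is_marked` could sit at the end of a publishing pipeline so that unmarked synthetic content never reaches a customer-facing channel.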

Exemptions and Penalties

The legislation outlines several scenarios in which AI use may be exempt from the regulations, including:

  • Anti-fraud technologies (excluding those utilizing facial recognition).
  • Cybersecurity tools for customer data protection.
  • Healthcare scenarios involving HIPAA-covered entities.
  • Financial institutions adhering to equivalent federal standards.

For non-exempt entities that violate the law, non-willful violations may incur fines of up to $1,000 per violation, while willful violations can result in fines of up to $10,000 per violation. Because each affected customer counts as a separate violation, the potential financial exposure scales quickly, as the worked example below illustrates.
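
A short worked example of that arithmetic, using the statutory caps above:

```python
# Statutory maximums per violation, as described in the act.
NON_WILLFUL_CAP = 1_000
WILLFUL_CAP = 10_000

def max_exposure(affected_customers: int, willful: bool) -> int:
    """Each affected customer counts as a separate violation."""
    per_violation = WILLFUL_CAP if willful else NON_WILLFUL_CAP
    return affected_customers * per_violation

# A willful violation affecting 500 customers could cost up to $5,000,000.
print(max_exposure(500, willful=True))  # 5000000
```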

A Broader Context of AI Regulation

Virginia is not alone in its pursuit of AI regulations. Colorado was the first state in the U.S. to enact comprehensive consumer protection regulations focused on fair AI use. Other states, including California, Illinois, Minnesota, and Utah, are also working on similar legislation to govern AI applications.

Internationally, the European Union’s AI Act is now in force and may evolve over the next few years to include rights such as the right to speak with a human in customer service interactions.

Conclusion

As AI technology continues to evolve, the regulatory landscape surrounding its use will become increasingly complex. Customer service and experience professionals must stay informed about the latest laws and regulations to maximize the potential of their AI systems while avoiding legal and financial repercussions.
