If You or Your Clients Are Using AI, Here Is What You Should Know
The laws and regulations around data privacy and cybersecurity are ever-changing, and with the rapidly growing popularity of artificial intelligence (AI), we are starting to see more AI-specific regulatory action at both the federal and state levels. Lawmakers at both levels are driven by the desire to protect consumers and intellectual property rights. With five states recently enacting regulations around the use of AI, and more likely on the horizon, it is important to know what your business can do to avoid violating the law.
Where Is the Law Regulating AI Today?
As with data privacy, there is no comprehensive federal law in the U.S. governing the use of AI. While an executive order had been in place to fund research into AI and into how to use it safely and regulate it properly, it was rescinded in early 2025. Currently, a handful of executive orders are in place, and they revolve around three main objectives of the White House: accelerating AI innovation; building American AI infrastructure; and leading in international AI diplomacy and security.
Current Federal AI Regulations
- The Take It Down Act criminalizes the non-consensual posting of intimate images, including deepfakes. It is enforced by the FTC and requires platforms to remove such content within 48 hours of notification from a victim.
- Executive Order Promoting the Export of the American AI Technology Stack tasks the Secretary of Commerce with establishing and implementing an AI export program by October 21, 2025.
- The Accelerating Federal Permitting of Data Center Infrastructure Executive Order provides the Secretary of Commerce with the ability to fund data center projects, including the infrastructure needed to power those data centers.
New State AI Laws
The following states have passed laws specifically regulating the use of AI. Most of the laws require notifying consumers when AI is being used and restrict its use in making decisions where bias (even if unintentional) could influence the result.
- The Utah Artificial Intelligence Policy Act was amended in May 2025 to require that when a consumer is interacting with an AI system, they receive notice throughout the interaction that they are dealing with AI and not a human.
- The Maine Chatbot Disclosure Act requires that consumers be notified when they are not engaging with a human.
- The Texas Responsible Artificial Intelligence Governance Act, signed into law in June 2025, prohibits government entities from using AI for certain purposes, such as assigning a social score.
- Arkansas recently enacted a law to clarify ownership of AI-generated content, stating that the person who supplies the information used to train a model owns the resulting model.
- The Colorado Artificial Intelligence Act will go into effect February 1, 2026, covering a variety of issues and requiring AI use to be disclosed to consumers.
Considerations If You Use AI
First and foremost, evaluate how you use AI, and if you use generative AI, consider what you do with the information generated. Understanding the information you have, where it comes from, where it is stored, and how you use it is extremely important for general data privacy and security.
If you own or manage a business, it is crucial to establish a policy around the use of AI. Consider whether you will allow AI tools on company-provided devices and whether employees must disclose when part of their work was developed using AI.
With the focus in some states on restrictions and controls on developers and deployers of AI systems, consider how this impacts your contract negotiations with service providers. Ensuring compliance with applicable laws is essential.
Be aware of the potential for AI hallucinations: fabricated facts that can lead to misinformation. Users must fact-check and cite-check AI output to protect their reputation.
Engaging with a professional who knows the law early on can help you manage and mitigate risk associated with AI usage.