Thoughts on AI and the Law
It’s no secret: Artificial Intelligence has captured America’s attention, and many industries are exploring how to leverage AI to cut costs and increase efficiencies.
Broadcast media are no exception. Recent industry analysis suggests that the radio industry has been an early adopter of AI and automation, and that its use of AI will likely continue to grow in the years ahead.
The Future of AI in Broadcasting
In a recent interview, WETA Chief Engineer William Harrison shared his belief that 2026 is truly going to be the year of AI in broadcasting. He noted that broadcasters have already been experimenting with AI for programming choices and even for AI DJs. He envisions that there could be more uses of AI for both stations and their talent.
Similarly, John Garziglia observed in a recent article that AI was a consistent theme at the Consumer Electronics Show 2026, an indication that AI will have a major impact on the industry within the next few years.
When implemented effectively, AI can be a useful tool for radio broadcasters to streamline their operations, reduce expenses, and stay competitive against streaming services, podcasts, and digital platforms. Many stations have already begun adopting AI-powered tools for tasks like voice tracking, playlist curation, and production editing — areas that traditionally required significant human time and expertise.
As these tools continue to evolve, the future of radio will “involve fewer human voices and more algorithms,” as one author put it.
Industry Concerns
While AI can be a useful tool, broadcasters must be aware of the dangers AI presents for the industry. Given that we are in an important midterm election year, the issues are all the more significant. During the last presidential election cycle, we saw firsthand just how easily AI can spread misinformation.
Reports highlighted incidents such as the 2024 robocall impersonation of President Joe Biden that misled voters in New Hampshire, and there are already indications that foreign entities are using new AI tools to sow division in the U.S. and undermine America’s image.
Broadcasters must be prepared to address AI disinformation so that it does not erode their credibility, reliability, and the public’s trust.
Compliance and Regulations
Broadcasters should ensure they comply with industry standards and regulations when using AI on the content side. SAG-AFTRA, which represents broadcast journalists and media professionals, has stated that every individual has an inalienable right to their name, voice, and likeness, and that any use of an individual’s name, voice, or likeness requires the individual’s consent and just compensation.
Additionally, SAG-AFTRA has adopted a platform providing that any recreated or synthetic performance must be paid on scale with an in-person performance.
Legislative Landscape
AI is an increasing area of focus for both Republican and Democratic lawmakers. In 2025, all 50 states, along with U.S. territories, introduced AI legislation, and 38 states adopted roughly 100 laws. While not all AI laws will impact broadcasters, many will, and broadcasters must stay alert for new state requirements.
For example, New York passed legislation that requires any advertisement produced with AI to disclose whether it includes AI-generated performers and mandates consent from heirs or executors before an individual’s name, image, or likeness may be used for commercial purposes after their death.
California has also adopted laws prohibiting the distribution of deceptive AI-generated or manipulated content in advertisements and requiring disclosures for altered electoral advertisements.
Federal Attention
AI has been a topic not only at the state level but federally as well. Several AI bills have been introduced in the U.S. Congress, including the No Fakes Act of 2025, which would protect individuals against unauthorized computer-generated representations that are readily identifiable as their likeness.
Though no federal AI laws have been enacted yet, this remains a hot topic among legislators.
Executive Orders and National Security
Concerned about state-by-state legislation potentially impeding AI development, President Trump issued Executive Order 14365 on December 11. The order aims to promote U.S. leadership in AI, asserting that doing so will enhance national and economic security.
The order establishes an AI Litigation Task Force to evaluate and challenge state AI laws inconsistent with a national framework for AI. It also tasks federal officials with creating a uniform federal policy framework for AI that preempts conflicting state laws.
Guidelines for Broadcasters
Every station considering AI in its broadcasts should adopt a policy that conforms to SAG-AFTRA principles of full disclosure and equal pay while complying with state disclosure requirements. For pre-recorded programming or advertising, broadcasters might require programmers to certify whether AI technology was used in the material’s creation and ensure that proper disclosure accompanies the material, especially political content.