Oregon’s AI Regulations: Adapting Old Laws for New Technologies

The Oregon Attorney General’s Office, through the state’s Department of Justice, has issued guidance on how existing state laws apply to businesses’ use of Artificial Intelligence (AI). Released late last year, the guidance emphasizes that companies need to understand how their use of AI may already be regulated under current legislation.

Key Laws Affecting AI Usage

The guidance highlights several Oregon state laws that may apply to a company’s use of AI, including:

  • Oregon Consumer Privacy Act – Oregon’s comprehensive privacy law, which mandates transparency regarding the use of personal information.
  • Unlawful Trade Practices Act – Prohibits deceptive practices in commerce.
  • Oregon Equality Act – Addresses discrimination based on protected characteristics.
  • Oregon Consumer Information Protection Act – Focuses on data security and the handling of personal information.

Key Takeaways from the Guidance

The guidance outlines several considerations companies should keep in mind when implementing AI technologies:

  • Notice: Companies must disclose how they use personal information with AI tools. Failure to do so could be viewed as a violation of Oregon’s privacy laws, particularly if there are known issues with AI tools that could mislead users.
  • Choice: Under the Consumer Privacy Act, consent is required before processing sensitive information. Companies must provide consumers with the ability to withdraw consent and opt out of AI profiling for significant decisions.
  • Transparency: Organizations must be clear about when users are interacting with AI tools and avoid misleading claims regarding the capabilities of AI. For instance, using AI-generated voices in robocalls without disclosing the caller’s identity could lead to legal issues.
  • Bias: Any AI application that discriminates based on race, gender, or other protected characteristics is in violation of the Equality Act. Companies are urged to ensure their AI solutions do not perpetuate bias.
  • Security: Organizations must comply with data security laws when using AI tools that incorporate personal information, including maintaining “reasonable safeguards” to protect such data.

Oregon’s approach is a reminder that businesses must remain vigilant about their existing legal obligations while leveraging new technologies. As AI continues to evolve, understanding the intersection of technology and law will be essential for organizations seeking to innovate responsibly.
