Berkeley Unveils AI Usage Framework

Berkeley City Council Passes Guidelines for Future Regulations on AI Usage

The Berkeley City Council has taken a significant step towards the future regulation of artificial intelligence (AI) by passing two resolutions during its regular meeting on March 10. These resolutions aim to create a structured framework for AI usage within the city.

The Berkeley Rule

The first resolution, known as “The Berkeley Rule,” is a comprehensive set of 10 guidelines crafted by District 3 Councilmember Ben Bartlett. These guidelines are designed to outline the primary purposes that AI should serve within the city:

  • Put residents first
  • Modernize city services
  • Empower the community
  • Ensure transparency and accountability
  • Standardize operations
  • Certify ethical use
  • Protect and prepare our workforce
  • Defend civil liberties
  • Promote social advancement and accessibility
  • Catalyze civic wealth

According to Bartlett, the initiative to create “The Berkeley Rule” stemmed from discussions with the city manager, who sought guidance on how to implement AI effectively after it had been used in a scattershot fashion. Bartlett stated, “He’s going to create administrative regulations right around it.”

Framework for AI Policy Development

The guidelines outlined in “The Berkeley Rule” will serve as a foundational framework to guide city staff in regulating and incorporating AI systems into government operations. Bartlett emphasized the importance of the guidelines, noting, “The administrative regulations are going to tell the city how (to use AI), but ‘The Berkeley Rule’ tells us why.” He advocates a human-centered approach that reflects the city’s moral evolution.

Principles for AI Policy

The second resolution, crafted by District 5 Councilmember Shoshana O’Keefe, directs the city manager to keep a set of essential principles in mind when developing the AI policy. This initiative complements “The Berkeley Rule” and includes:

  • Creating safeguards against AI systems introducing bias
  • Protecting data privacy
  • Ensuring compliance with cybersecurity standards
  • Maintaining human oversight and accountability
  • Exploring opportunities for AI integration in operations management
  • Fostering cross-departmental collaboration of AI knowledge

District 4 Councilmember Igor Tregub, who co-authored “The Berkeley Rule,” highlighted that both items were developed in close coordination with the city manager. He stated, “(The item) directs (the city manager) in the writing of such a policy to keep certain principles in mind.”

Conclusion

As the city of Berkeley embarks on this journey to regulate AI usage, “The Berkeley Rule” and the accompanying principles set a precedent for responsible and ethical AI integration. This structured approach aims to enhance city services while ensuring that the needs and rights of residents remain a top priority.
