Billings Launches First AI Security Policy to Enhance Public Trust

Billings, Mont., Implements Its First AI Security Policy

(TNS) — In a move toward more modern governance, Billings has taken significant steps to refine its approach to technology, particularly artificial intelligence (AI). During a recent city council meeting, council member Andrew Lindley proposed establishing a Technology Advisory Commission. The commission would comprise industry experts providing guidance on technological issues including cybersecurity policies, citizen interaction technologies, and AI governance within the city administration.

In a proactive response, city officials have already implemented their first AI security policy, overseen by City Administrator Chris Kukulski and IT Director Jeff Sprock. This policy, part of an IT handbook update, recognizes the increasing significance of AI in municipal operations and addresses essential topics such as privacy, transparency, and ethical usage.

Everyday Usage and Data Security

According to city officials, staff members are permitted to use AI for routine tasks such as web searches and email drafting. Kukulski noted that staff already use tools like Claude and ChatGPT, likening them to standard search engines.

While acknowledging the benefits of AI, Kukulski emphasized the need to double-check AI-generated content. "You have to read and reread and double-check anything that comes out of it because sometimes it hallucinates and brings in inaccuracies," he said.

Importantly, the security policy strictly prohibits uploading sensitive personal information, such as Personally Identifiable Information (PII) and Criminal Justice Information Services (CJIS) data, into any AI models. Sprock remarked, "We've been very vocal in saying you can't put PII into AI."

Furthermore, the city monitors the types of AI models utilized by staff, ensuring that they comply with industry-standard data security principles and undergo appropriate risk management assessments prior to implementation. Kukulski highlighted concerns regarding data storage locations, stating, “The big concern with AI is you don’t know where the data necessarily is being stored.”

Maintaining Public Trust Through Transparency

Transparency regarding AI usage is a significant public concern, especially for institutions wielding substantial influence over public life. The city’s policy commits to openly disclosing AI usage and instructs employees to explain AI decisions to the public, particularly when they affect critical areas like law enforcement and resource allocation.

To further enhance transparency, a reporting system for AI misuse has been established, with penalties ranging from suspension of IT privileges to termination for severe violations. This ensures accountability in the application of AI technologies.

An Ongoing Process

The rapidly evolving nature of the AI industry poses continuous challenges for regulation. Despite this, Sprock and his team remain dedicated to staying ahead of the technological curve, already considering updates to the city’s AI security policy, even though it was implemented less than a year ago. “As quickly as it’s moving, you could have adopted something 90 days ago, and new information could already be out,” he noted.

A full copy of the city's AI Security Policy is available in its IT Policy Handbook on the city's official website, reflecting Billings' commitment to responsible and secure AI integration in public administration.
