AI Policies in K-12: A Local Approach to a Growing Challenge

When It Comes to Developing Policies on AI in K-12, Schools Are Largely on Their Own

Generative artificial intelligence is rapidly reshaping education in unprecedented ways. Weighing its potential benefits against its risks, K-12 schools are actively trying to adapt how they approach teaching and learning.

However, as schools seek to navigate the age of generative AI, a significant challenge arises: schools are operating in a policy vacuum. While several states offer guidance on AI, only a few require local schools to form specific policies, despite the increasing use of generative AI by teachers, students, and school leaders.

Survey Insights on AI Policy Formation

As part of ongoing research into AI and education policy, a survey conducted in late 2025 with members of the National Association of State Boards of Education revealed the complexities of how education policy is formed. This process involves dynamic interactions across national, state, and local levels rather than being dictated by a single source.

Despite the absence of strict rules regarding AI usage in schools, education policymakers have identified several ethical concerns, including:

  • Student safety
  • Data privacy
  • Negative impacts on student learning

Concerns have also been raised about industry influence, particularly the fear that schools may later be charged for tools that are currently available for free. Deepfake technology has also become a topic of concern, as one administrator noted the potential dangers of students creating deepfakes to disrupt school activities.

The Dominance of Local Action

Although generative AI chatbots have been widely available for more than three years, many states are still in the early stages of addressing the technology, and most have yet to implement official policies. Local decisions primarily shape the landscape, with each school district responsible for developing its own plans.

Responses to the survey indicated a significant degree of local influence in policy implementation, regardless of state guidance. For example, one respondent noted, “We are a ‘local control’ state, so some school districts have banned generative AI.” Others mentioned that their state has a basic requirement for districts to adopt local policies about AI.

This local decision-making mirrors the dynamics seen in previous waves of technology adoption in K-12 schools. However, a lack of evidence on how AI will affect learners and teachers adds to the challenges in formulating effective policies.

States as Guiding Lights

State policies can provide essential guidance by prioritizing ethics, equity, and safety, making them adaptable to changing needs. A coherent state policy can address key questions regarding acceptable student use of AI and ensure consistent standards across districts.

Currently, the development and use of AI policies vary significantly based on available resources. Data from a RAND-led panel indicated that educators in higher-poverty schools are about half as likely as their counterparts in lower-poverty schools to receive AI guidance. Higher-poverty schools are also less likely to use AI tools.

Grounding Discussions in Human Values

Policymakers have emphasized the importance of involving families in discussions about AI in education. One respondent highlighted that the role of families is often overlooked: “What is the role that families play in all this? This is something that is constantly missing from the conversation.”

Integrating New Technology into Education

According to a Gallup Poll conducted on Feb. 24, 2025, 60% of teachers report using some form of AI in their work. The survey also revealed instances of "shadow use" of AI, in which teachers and staff adopt generative AI tools without explicit approval from school or district IT departments.

Some states, like Indiana, are encouraging schools to apply for grants to pilot AI-powered platforms, provided the vendors are state-approved. Prioritization is given to proposals that focus on supporting students or professional development for educators.

In California, for instance, an eighth-grade language arts teacher participated in a pilot program using AI tools to generate feedback on student writing. While the teacher praised the time-saving benefits, she also noted instances of bias in the tools, which sparked discussions about algorithmic bias in education.

Core Principles for AI Use in Education

Survey respondents emphasized the necessity of ethical principles in guiding AI usage in educational settings. This includes ensuring that both students and teachers learn about the limitations and opportunities of generative AI, and how to leverage these tools effectively and ethically.

Despite the confusion surrounding AI and a fragmented policy landscape, policymakers recognize the importance of engaging communities and families in co-creating a path forward. As one policymaker succinctly stated, “Knowing the horse has already left the barn, where on the spectrum do you want to be between AI-human collaboration vs. outright ban?”
