Lawmaker Eyes Bill to Codify NIST AI Center
A top House lawmaker is developing legislation to codify the National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI). The initiative comes as lawmakers and the Trump administration deliberate the federal government’s role in overseeing AI technology.
The Great American AI Act
Rep. Jay Obernolte (R-Calif.), chairman of the House Science, Space and Technology Committee’s research and technology subcommittee, announced a forthcoming bill dubbed the “Great American AI Act.” During a recent hearing, Obernolte emphasized that the bill aims to formalize CAISI’s role in advancing AI evaluation and standard setting.
Obernolte stated, “The work it does in doing AI model evaluation is essential in creating a regulatory toolbox for our sectoral regulators, so everyone doesn’t have to reinvent the wheel.”
Background on CAISI
The Biden administration initially established an AI Safety Institute at NIST, but last summer the Trump administration rebranded the center to focus on standards and innovation. Last September, CAISI released an evaluation of the Chinese “DeepSeek” AI models, finding they lagged U.S. models on cost, security, and performance. More recently, CAISI issued a request for information on securing AI agent systems.
Despite the rebranding, Obernolte pointed out that CAISI’s functions have largely remained consistent. He argued that codifying the center would provide stability, stating, “It’s unhealthy for us to have every successive administration spin up a brand new agency that, essentially, is doing something with a long-term mission that needs continuity.”
Expert Opinions
During the hearing, Michael Kratsios, the director of the White House Office of Science and Technology Policy, affirmed that CAISI is a “very important part of the larger AI agenda.” He mentioned that it is crucial to reframe the center’s work around innovation and standards rather than safety, saying, “The great standards that are put out by CAISI and by NIST are the ones that, ultimately, will empower the proliferation of this technology across many industries.”
Setting Standards for AI
Kratsios later remarked that the NIST center would play a vital role in setting standards for the metrology of AI model evaluation. He noted, “You want to have trust in them so that when everyday Americans are using, whether it be medical models or anything else, they are comfortable with the fact that it has been tested and evaluated.”
New Legislation: The READ AI Act
Obernolte and Rep. Sarah McBride (D-Del.) have introduced the “READ AI Act,” which directs NIST to develop guidelines for how AI models should be evaluated, including standardized documentation. When asked about the bill, Kratsios deemed it worthy of consideration, but emphasized that any efforts should not focus solely on frontier AI model evaluation.
“The reality is that the most implementation that’s going to happen across industry is going to happen through fine-tuned models for specific use cases,” Kratsios explained, highlighting the importance of creating a framework and standards for evaluating models tailored to sectors like finance and health.
Conclusion and Future Directions
Discussions surrounding the role of the NIST center come amid a larger debate over the federal government’s involvement in setting AI standards. In a December executive order, President Donald Trump called for legislative recommendations to create a national framework that would preempt state AI laws.
During the hearing, Kratsios indicated a desire to collaborate with Congress on viable solutions, ensuring that any proposed legislation would not preempt lawful state actions regarding child safety protections, AI compute and data infrastructure, and state government procurement and use of AI.