NIST’s AI Standards Set to Inspire New Federal Regulations

As Congress debates federal regulation of artificial intelligence (AI), attention is turning to the National Institute of Standards and Technology (NIST). Experts suggest the guidelines the agency has developed over the past two years could serve as a valuable model.
Bhavin Shah, founder and CEO of AI company Moveworks, praised NIST’s AI standards during a House Oversight Committee hearing on June 5. “When it comes to AI and protecting data, we’ve looked at the new standards from NIST, and they’re actually pretty good,” he stated.
Shah highlighted NIST’s AI Risk Management Framework, composed of voluntary guidelines released in 2023. Crafted through collaboration among public and private sector specialists, the framework aims to enhance the safety and user trust of AI models. Its broad, flexible design allows organizations of varying sizes to adopt these recommendations.
In 2024, NIST followed with a similar framework tailored specifically to generative AI models, the advanced systems that can create new content such as text and images.
Patrice Williams-Lindo, CEO of Career Nomad, emphasized the significance of NIST’s framework. “It’s where technical rigor meets trust,” she explained. With no federal regulations currently in place, industry players often look to NIST for common standards, which lets companies self-govern responsibly while awaiting congressional action.
The NIST framework assesses the entire lifecycle of an AI product, employing a “map, measure and manage” strategy, according to Anthony Habayeb, cofounder and CEO of Monitaur. The process involves identifying risks in AI models, measuring the system’s performance against them, and managing those risks effectively.
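To make that cycle concrete, here is a minimal sketch in Python. It is an illustration only, not code from NIST or any company quoted here; every class, function name, and threshold in it is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str                # a mapped risk, e.g. "biased hiring recommendations"
    metric: float = 0.0      # measured value, e.g. an error-rate gap between groups
    threshold: float = 0.1   # tolerance the organization sets for that metric
    mitigations: list = field(default_factory=list)

def map_risks() -> list:
    # Map: identify where the AI system could cause harm.
    return [Risk("disparate error rates in face matching"),
            Risk("training-data drift after deployment")]

def measure(risk: Risk, value: float) -> None:
    # Measure: attach a quantitative performance figure to each mapped risk.
    risk.metric = value

def manage(risks: list) -> None:
    # Manage: flag any risk whose measurement exceeds tolerance and record a response.
    for risk in risks:
        if risk.metric > risk.threshold:
            risk.mitigations.append("retrain, re-test, or restrict use")

risks = map_risks()
measure(risks[0], 0.15)  # hypothetical measurements, invented for the example
measure(risks[1], 0.05)
manage(risks)
for r in risks:
    print(r.name, "->", r.mitigations or "within tolerance")
```

The point of the loop is that each mapped risk carries its own measurement and tolerance, so the "manage" step is a mechanical comparison rather than a one-time judgment call.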
NIST’s contributions often shape national laws and policies. After the September 11 attacks, the agency investigated the technical causes of the World Trade Center collapses, work that led to critical changes in building codes and public safety standards. In the early 2010s, as cloud services gained traction, NIST’s security standards helped form the basis of the FedRAMP program, which standardizes security assessments of cloud services used by federal agencies.
Williams-Lindo remarked, “NIST is often the soft law before the hard law hits.” The agency sits within the U.S. Department of Commerce, a position that Ylli Bajraktari, president and CEO of the Special Competitive Studies Project, believes will help it guide companies toward sound AI policies.
NIST’s clear, nonpartisan framework has drawn bipartisan praise at congressional hearings. Some legislators argue, however, that voluntary guidelines are insufficient, and they advocate federal AI laws aimed at consumer protection, pointing to discrimination that AI technologies have already produced in areas such as facial recognition and hiring.
Democratic Rep. Kathy Castor voiced concerns over Republican efforts to block state-level regulations. “What the heck is Congress doing?” she questioned, emphasizing the need for protective measures against potential harms from AI algorithms.
While NIST’s framework is a foundational reference, the agency lacks the regulatory authority needed to keep watch over AI’s rapid evolution. Williams-Lindo indicated that federal leadership is crucial to close gaps in measuring algorithmic harm and protecting vulnerable communities.
Despite these challenges, some experts believe companies will naturally adopt NIST standards. Habayeb argued that the advantages of adhering to safety guidelines extend beyond regulatory compliance. “If you skip certain testing or data validation, you’ll end up with a suboptimal system,” he cautioned, highlighting the business implications of disregarding safety protocols.
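As a hypothetical illustration of the kind of basic data validation Habayeb alludes to, the short Python sketch below screens training records before they reach a model. The field names and labels are invented for the example.

```python
def validate_record(record: dict) -> list:
    """Return a list of problems found in one training record."""
    problems = []
    if not record.get("text"):
        problems.append("missing input text")
    if record.get("label") not in {"approve", "reject"}:
        problems.append(f"unexpected label: {record.get('label')!r}")
    return problems

# Two made-up records: one clean, one that would slip through unchecked.
records = [
    {"text": "resume body ...", "label": "approve"},
    {"text": "", "label": "maybe"},
]
for i, rec in enumerate(records):
    for problem in validate_record(rec):
        print(f"record {i}: {problem}")
```

Even a check this simple catches the empty inputs and mislabeled records that, left unvalidated, quietly produce the suboptimal systems Habayeb warns about.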