Congress Tackles AI: Balancing Rewards and Security Risks in Government Use

AI experts told the House Oversight Committee that AI offers enormous promise for federal workers, but also raises serious privacy and security concerns. (Photo by Jennifer Shutt/States Newsroom)

As Congress scrutinizes the integration of artificial intelligence (AI) in government, the House Oversight Committee is examining both the applications and procurement of AI tools. Members are particularly concerned about privacy implications associated with unregulated technology.

During a hearing on June 5, expert witnesses testified that, while the federal government is already using AI in some functions, outdated technology infrastructure and data handling practices hinder broader adoption. They cautioned that inefficient procurement processes risk allowing other nations to outpace the U.S. in the global AI landscape.

Bhavin Shah, CEO of Moveworks, emphasized the urgency of making AI readily accessible to federal employees, noting that achieving FedRAMP authorization took his company three and a half years and $8.5 million. “This is a prohibitive barrier for smaller AI innovators, the companies that often develop the most cutting-edge solutions,” Shah said.

Federal agencies such as the Department of Health and Human Services and the Department of Veterans Affairs are already deploying AI to improve medical research and analyze health data, even as broader adoption remains slow. Still, the U.S. risks falling behind nations such as China if bureaucratic inefficiencies and a lack of AI proficiency in the federal workforce go unaddressed, according to Ylli Bajraktari, president of the Special Competitive Studies Project.

Bajraktari proposed creating a White House council dedicated to AI, boosting non-defense AI R&D funding to $32 billion, and overhauling procurement processes to facilitate quicker integration into government systems. He believes these steps are vital for enhancing the country’s competitive edge in technology.

Conversely, Bruce Schneier, a Harvard Kennedy School fellow, warned that hasty AI adoption poses significant security risks. He cited concerns regarding Elon Musk’s recent tenure leading the Department of Government Efficiency (DOGE), where AI was reportedly used to make decisions affecting federal workers. Musk allegedly accessed sensitive data from various federal agencies, raising alarms among lawmakers.

“You all need to assume that our adversaries have copies of all the data DOGE has exfiltrated,” Schneier cautioned, warning about potential national security threats stemming from improper data handling.

Musk’s actions fueled a divisive debate among lawmakers, with Rep. Stephen Lynch denouncing the impact of Musk’s leadership on public service integrity. Testimony from Linda Miller, chief growth officer at TrackLight, echoed concerns about the challenges of adopting private sector innovations within government frameworks.

Miller advocated for focusing AI applications on automating routine tasks rather than attempting sweeping changes to legacy systems. She proposed testing AI systems in controlled “regulatory sandboxes” to ensure they are safe and scalable before wider deployment.

“Wholesale changes to legacy IT systems and the federal acquisition system will take years,” she said. “However, we can create innovative laboratories where AI projects can operate in proof of concept regulatory sandboxes.”