State Lawmakers Tackle AI’s Perils and Potential as November Elections Loom
Amid the stark white walls of a conference room, a speaker engaged hundreds of state lawmakers and policy influencers with a pressing question: Does artificial intelligence pose a threat to upcoming state elections?
The response was clear-cut: in a live poll, 80% agreed that AI is a threat, and nearly 90% said their state laws were insufficient to combat it. The topic featured prominently in sessions at the National Conference of State Legislatures’ annual meeting in Louisville.
“It’s the topic du jour,” said Kentucky Senator Whitney Westerfield, kicking off a panel on AI. Discussions about AI’s impact are underway in every state legislature.
Some experts and lawmakers hailed AI’s potential benefits in fields like health care and education. However, others were alarmed by its capability to interfere with the democratic process ahead of the November elections. Lawmakers shared legislative proposals aimed at mitigating these risks.
This election cycle is the first since generative AI, which can create new images, audio, and video, has become widespread. Concerns have been raised about deepfakes—realistic but fake videos that could misrepresent candidates.
Kentucky Senator Amanda Mays Bledsoe emphasized the need for voter awareness: “We need to do something to make sure the voters understand what they’re doing.” Bledsoe co-sponsored a bipartisan bill to limit deepfake use in elections, allowing candidates targeted by such media to take legal action. Although the bill passed the Senate, it stalled in the House.
Rhode Island Senator Dawn Euer voiced worries about AI exacerbating disinformation, especially on social media. “Election propaganda has always existed,” said Euer, “but now we have advanced tools to do it.”
Similarly, Connecticut Senator James Maroney noted that while deepfake concerns about elections are legitimate, most deepfakes target women with non-consensual intimate images. Maroney sponsored a bill to regulate AI and criminalize deepfake porn and false political messaging. It passed the state Senate but faced opposition from Democratic Governor Ned Lamont, who argued it could harm the state’s tech industry.
Maroney acknowledged the complexities but noted technological benefits like AI-driven constituent communication and multilingual messaging.
In Louisville, New Hampshire Secretary of State David Scanlan mentioned that AI could streamline election administration. However, earlier this year, New Hampshire voters were targeted by AI-generated robocalls mimicking President Joe Biden’s voice to discourage voting, leading to legal action against the organizer.
Scanlan highlighted that misinformation and extreme tactics are not new but have become more complex with AI.
Federal Cybersecurity and Infrastructure Security Agency adviser Cait Conley commended New Hampshire’s swift response as a model for other states.
Kentucky Secretary of State Michael Adams acknowledged AI’s potential impact, especially in swing states, but felt it’s not yet widespread enough to be a top concern.
Amid a lack of federal action, many states are expanding their regulatory frameworks for AI. At least 40 states, Puerto Rico, the Virgin Islands, and Washington, D.C., considered AI-related legislation this year, according to NCSL.
Colorado recently enacted nation-leading AI regulations, with Sen. Robert Rodriguez drawing on European standards in an effort to align rules globally. The law will be reviewed by a task force before taking effect in 2026.
In Texas, Rep. Giovanni Capriglione, co-chair of a state AI advisory council, anticipates much AI-related legislation in the next session. “AI is unquestionably being used to spread disinformation and misinformation,” he warned, noting increased usage as elections approach.