Arizona Officials Steadfast Against AI Deepfake Menace in Upcoming Elections

AI deepfakes could wreak havoc in elections, but Arizona officials are determined to avoid that

The rapid advancement of artificial intelligence has opened unprecedented possibilities in politics, along with new dangers: convincing deepfakes and the rampant spread of disinformation.

“We should consider AI to be a significant risk to American democracy,” warned David Harris, a professor at the University of California, Berkeley, who specializes in AI, misinformation, and democracy.

With AI tools becoming more accessible, creating realistic yet fraudulent video, audio, and photos of politicians is now cheaper and easier than ever.

Arizona Agenda created a deepfake of Kari Lake, the former GOP nominee for Arizona governor, to alert voters to the deceptive nature of such technology as she seeks a U.S. Senate seat.

At the Republican National Convention, experts from Microsoft sounded alarms on the potential influence of deepfakes on elections and discussed possible mitigation strategies.

The audience at a corresponding workshop correctly identified AI-generated images four times out of six, a result Microsoft executive Ashley O’Rourke called “a great score.”

“2024 is going to be the first cycle where deepfakes play a more essential role,” she added.

Ginny Badanes, who leads Microsoft’s Democracy Forward program, emphasized the importance of consumer literacy in combating misinformation and deepfakes. “Being skeptical and seeking multiple sources to verify what we see or hear is crucial,” she stated.

Microsoft plans a similar educational workshop at the Democratic National Convention in Chicago next month, as part of a coordinated effort to enhance AI literacy among voters.

Last December, Arizona Secretary of State Adrian Fontes collaborated with election and tech experts to prepare local officials for potential AI-fueled disruptions. One simulated exercise had officials responding to audio instructions to keep polling stations open despite uncertainty about whether the audio was authentic.

“We need to be literate, concerned, and prepared,” urged Toshi Hoo, director of the Emerging Media Lab at the Institute for the Future, who co-created the training scenario.

Noah Praetz, president of The Elections Group, which advises on election security, noted that generative AI can heighten existing election vulnerabilities. He collaborated on the Arizona exercise, stressing the importance of current procedures in preventing election interference.

To better understand AI and its capabilities, Hoo encouraged citizens to engage with AI tools. “Awareness of the new capabilities is vital as they are evolving rapidly,” he said.

In response to these challenges, Fontes established an Artificial Intelligence and Election Security Advisory Committee, including experts like Hoo, Harris, and Praetz. The committee aims to protect elections from AI disruptions while exploring AI’s potential utility in this sector.

In February, Microsoft and 19 other tech firms signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. Badanes described this as a significant proactive measure, a contrast to the defensive stances tech companies have taken in the past.

The signatories committed to developing technologies to mitigate the risks of deceptive AI content, including robust efforts to detect and neutralize such content on their platforms.

Last year, the White House secured voluntary pledges from seven leading AI companies to ensure the responsible development of AI, including embedding digital watermarks to verify content authenticity.

However, Harris argued that regulations are essential to mandate effective protections against AI’s potential misuse in undermining democracy.

In May, Arizona Governor Katie Hobbs signed a law allowing candidates and citizens to sue over unauthorized digital impersonations. Yet comprehensive federal regulation of AI remains absent.

Hoo expressed the need for caution and preparedness but warned against overstating the risks posed by deepfakes, which could lead to widespread skepticism and mistrust.