By Noah Diedrich, Community News Service
Editor’s note: The Community News Service is a program in which University of Vermont students work with professional editors to provide content for local news outlets at no cost.
Can Vermont legislators distinguish an AI-generated portrait from a real one? That was the question facing the Senate government operations committee last week as members viewed images from a New York Times quiz designed to test just that.
As each face flicked by, the senators took turns guessing whether it was made by artificial intelligence.
In five attempts, they got only one right.
The Feb. 4 committee meeting was convened to hear testimony on S.23, a bill that would require political campaigns in Vermont to disclose uses of “synthetic media” — an image, video or audio recording that creates a realistic yet false representation of another candidate. Failing to do so would come with a fine based on the severity of the violation.
The bill, introduced by Sen. Ruth Hardy, D-Addison, seeks to regulate deepfakes — AI-generated media that fabricates what a person said or did, with the intent of deceiving viewers.
The threat of AI in elections has long been a topic of conversation among state election officials across the country, said Vermont Secretary of State Sarah Copeland Hanzas. For her, S.23 is a “first logical step” in AI regulation for Vermont.
“We’re really in uncharted territory in terms of the newness of this technology,” she said. “We don’t have any court precedents saying, ‘This is how you can limit this type of speech,’ or, ‘This is how you can’t limit this kind of speech.’ So disclosure seems to be the safest way to go.”
Ilana Beller, a lobbyist for national consumer advocacy group Public Citizen, testified last Tuesday in support of the new bill. She had brought the quiz to the committee to demonstrate how readily deepfakes can fool viewers.
“Whether you’re talking about audio deepfakes, images, videos — the technology has gotten to a really good place in terms of being effective at tricking people,” Beller said. “We’ve reached a place where pretty much anyone on the internet can create a deepfake within a couple of minutes, and it costs like five bucks.”
Beller said the quality of the technology is rapidly improving and that deepfake use increased in recent election cycles around the world, including in the U.S., India, Turkey and Slovakia.
The impetus for S.23 was a robocall that attempted to mislead New Hampshire voters during the 2024 presidential primary by playing an AI-generated recording of former President Joe Biden, Hardy said.
Phone messages mimicking the voice of the then-president told Granite State Democrats to save their vote for the general election in November, spreading the false notion that they had only the one vote to cast for both contests.
Beyond sowing confusion during election cycles, AI could be detrimental to public trust in the long run, Beller said in her testimony.
“If a large percentage of the content or information that’s being circulated is realistic-looking video or images that are fraudulent, then it will serve to erode the trust of the general public,” she said.
Versions of S.23 have been introduced in 49 state legislatures, and 21 states have passed them with broad bipartisan support. Vermont’s version has tripartisan support, Hardy said.
“One of the things that’s great about this issue is I don’t think it’s a partisan issue,” Copeland Hanzas said. “It’s really just to make sure that elections are honest and accurate and fair.”
Like many of its sister bills, S.23 requires disclosure of synthetic media rather than an outright ban, out of caution about running afoul of the First Amendment. Asked whether Vermont should ban AI use in elections or merely require disclosure, Copeland Hanzas said she favors the latter.
“We have not demonstrated a high enough bar of potential damage to justify a ban,” she said. “It is likely there would be a lawsuit if we were to attempt to ban the use of AI.”
But the use of AI in elections raises unsettled questions of free speech.
“Deciding where along the spectrum of acceptable free speech, versus something that is dangerous or damaging and should be restricted, is just completely uncharted here in the AI realm,” the secretary of state said. “It was never possible to make such a convincing fabrication of what another person might say.”
Despite the risks deepfakes pose to election integrity, Copeland Hanzas said AI may help level the playing field in certain contests.
“It helps a candidate who maybe doesn’t have staff or doesn’t have the funds to hire a bunch of people to help them write ad copy,” she said. “They could, in theory, use AI to form the basis of their campaign materials.”