Convincing AI deepfakes of politicians are getting easier, report warns
A simple text prompt and a sample of a major politician’s voice are all it takes to create convincing deepfake audio with artificial intelligence.
That could spread damaging misinformation during elections due to “insufficient” or “nonexistent” safety measures, according to a new study released Friday.
Researchers at the Center for Countering Digital Hate tested six of the most popular online AI voice-cloning tools to determine how easy it is to create misleading audio of politicians like U.S. President Joe Biden, former president and current Republican candidate Donald Trump, British Prime Minister Rishi Sunak and others.
Five of the six tools tested failed to prevent the generation of those voice clips, most of which were deemed convincing. Researchers were also able to get around safety features on some platforms by simply using a different, less restrictive AI voice-cloning tool.
The generated clips included fake audio of Biden, Trump, Sunak and other political figures falsely urging people not to vote because of bomb threats at polling stations, claiming that votes were being counted twice, admitting to lying, or discussing medical issues.
“We’ve shown that these platforms can be abused all too easily,” said Imran Ahmed, CEO of the Center for Countering Digital Hate, in an interview with Global News.
LISTEN BELOW: An AI-generated clone of Trump’s voice created by Speechify appears to portray Trump admitting to lying in order to get elected. (Audio provided by the Center for Countering Digital Hate)
The report comes ahead of consequential elections this year in the United Kingdom — which goes to the polls in just over a month — the United States and several other democracies around the world.
Canada is due to have a federal