Deepfake of principal's voice is the latest case of AI being used for harm
The most recent criminal case involving artificial intelligence emerged last week from a Maryland high school, where police say a principal was framed as racist by a fake recording of his voice.
The case is yet another reason why everyone — not just politicians and celebrities — should be concerned about this increasingly powerful deepfake technology, experts say.
“Everybody is vulnerable to attack, and anyone can do the attacking,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.
Here's what to know about some of the latest uses of AI to cause harm:
AI HAS BECOME VERY ACCESSIBLE
Manipulating recorded sounds and images isn't new. But the ease with which someone can alter information is a recent phenomenon. So is its ability to spread quickly on social media.
The fake audio clip that impersonated the principal is an example of a subset of artificial intelligence known as generative AI. It can create hyper-realistic new images, videos and audio clips. It has become cheaper and easier to use in recent years, lowering the barrier for anyone with an internet connection.
“Particularly over the last year, anybody — and I really mean anybody — can go to an online service,” said Farid, the Berkeley professor. “And either for free or for a few bucks a month, they can upload 30 seconds of someone's voice.”
Those seconds can come from a voicemail, social media post or surreptitious recording, Farid said. Machine learning algorithms capture what a person sounds like. And the cloned speech is then generated from words typed on a keyboard.
The technology will only get more powerful and easier to use, including for video manipulation, he said.
WHAT