PolitMaster.com is a comprehensive online platform providing insightful coverage of the political arena: International Relations, Domestic Policies, Economic Developments, Electoral Processes, and Legislative Updates. With expert analysis, live updates, and in-depth features, we bring you closer to the heart of politics. Exclusive interviews, up-to-date photos, and video content, alongside breaking news, keep you informed around the clock. Stay engaged with the world of politics 24/7.

Contacts

  • Owner: SNOWLAND s.r.o.
  • Registration certificate 06691200
  • Na okraji 381/41, Veleslavín, 162 00 Praha 6
  • Czech Republic

The Issue Of Sexually Explicit Deepfakes Is Far Larger Than Taylor Swift

While the concern around generative AI has so far mainly focused on the potential for misinformation as we head into the U.S. general election, the possible displacement of workers, and the disruption of the U.S. education system, there is another real and present danger — the use of AI to create non-consensual deepfake pornography.

Last month, fake, sexually explicit photos of Taylor Swift were circulated on X, the platform formerly known as Twitter, and allowed to stay on there for several hours before they were ultimately taken down. One of the posts on X garnered over 45 million views, according to The Verge. X later blocked search results for Swift’s name altogether in what the company’s head of business operations described as a “temporary action” for safety reasons.

Swift is far from the only person to be targeted, but her case is yet another reminder of how easy and cheap it has become for bad actors to exploit advances in generative AI to create fake pornographic content without consent, while victims have few legal options.

Even the White House weighed in on the incident, calling on Congress to legislate, and urging social media companies to do more to prevent people from taking advantage of their platforms.

The term “deepfakes” refers to synthetic media, including photos, video and audio, that have been manipulated through the use of AI tools to show someone doing something they never actually did.

The word itself was coined in 2017 by a Reddit user whose profile name was “Deepfake” and who posted fake pornography clips on the platform using face-swapping technology.

A 2019 report by Sensity AI, a company formerly known as Deeptrace, found that non-consensual pornography accounted for 96% of deepfake videos online.

Read more on huffpost.com