Opinion | The Deepfake Porn of Kids and Celebrities That Gets Millions of Views

Alarms are blaring about artificial intelligence deepfakes that manipulate voters, such as the robocall that sounded like President Biden and went to New Hampshire households, or the fake video of Taylor Swift endorsing Donald Trump.

Yet there’s actually a far bigger problem with deepfakes that we haven’t paid enough attention to: deepfake nude videos and photos that humiliate celebrities and unknown children alike. One recent study found that 98 percent of deepfake videos online were pornographic and that 99 percent of those targeted were women or girls.

Faked nude imagery of Taylor Swift rattled the internet in January, but this goes way beyond her: Companies make money by selling advertising and premium subscriptions for websites hosting fake sex videos of famous actresses, singers, influencers, princesses and politicians. Google directs traffic to these graphic videos, and victims have little recourse.

Sometimes the victims are underage girls.

Francesca Mani, a 14-year-old high school sophomore in New Jersey, told me she was in class in October when the loudspeaker summoned her to the school office. There the assistant principal and a counselor told her that one or more male classmates had used a “nudify” program to take a clothed picture of her and generate a fake naked image. The boys had made naked images of a number of other sophomore girls as well.

Fighting tears, feeling violated and humiliated, Francesca stumbled back to class. In the hallway, she said, she passed another group of girls crying for the same reason — and a cluster of boys mocking them.

“When I saw the boys laughing, I got so mad,” Francesca said. “After school, I came home, and I told my mom we need to do something about this.”

Now 15, Francesca started a website about the deepfake problem — aiheeelp.com — and began meeting state legislators and members of Congress in an effort to call attention to the issue.

While there have always been doctored images, artificial intelligence makes the process far easier. With a single good image of a person’s face, it is now possible to make a 60-second sex video of that person in half an hour. Those videos can then be posted on general pornographic websites for anyone to see, or on specialized sites for deepfakes.

The videos there are graphic and sometimes sadistic, depicting women tied up as they are raped or urinated on, for example. One site offers categories including “rape” (472 items), “crying” (655) and “degradation” (822).

In addition, there are the “nudify” or “undressing” websites and apps of the kind that targeted Francesca. “Undress on a click!” one urges. These overwhelmingly target women and girls; some are not even capable of generating a naked male. A British study of child sexual images produced by artificial intelligence reported that 99.6 percent were of girls, most commonly between 7 and 13 years old.

Graphika, an online analytics company, identified 34 nudify websites that received a combined 24 million unique visitors in September alone.

When Francesca was targeted, her family consulted the police and lawyers but found no remedy. “There’s nobody to turn to,” said her mother, Dorota Mani. “The police say, ‘Sorry, we can’t do anything.’”

The problem is that no law has clearly been broken. “We just continue to be unable to have a legal framework that can be nimble enough to address the tech,” said Yiota Souras, the chief legal officer for the National Center for Missing & Exploited Children.

Sophie Compton, a documentary maker, made a film on the topic, “Another Body,” and was so appalled that she started a campaign and website, MyImageMyChoice.org, to push for change.

“It’s become a kind of crazy industry, completely based on the violation of consent,” Compton said.

The impunity reflects a blasé attitude toward the humiliation of victims. One survey found that 74 percent of deepfake pornography users reported not feeling guilty about watching the videos.

Today we have a hard-fought consensus that unwanted kissing, groping and demeaning comments are unacceptable, so why is this other form of violation given a pass? How can we care so little about protecting women and girls from online degradation?

“Most survivors I talk to say they contemplated suicide,” said Andrea Powell, who works with people who have been deepfaked and develops strategies to address the problem.

This is a burden that falls disproportionately on prominent women. One deepfake website displays the official portrait of a female member of Congress — and then 28 fake sex videos of her. Another website has 90. (I’m not linking to these sites because, unlike Google, I’m not willing to direct traffic to them and further enable them to profit from displaying nonconsensual imagery.)

In rare cases, deepfakes have targeted boys, often for “sextortion,” in which a predator threatens to disseminate embarrassing images unless the victim pays money or provides nudes. The F.B.I. last year warned of an increase in deepfakes used for sextortion, which has sometimes been a factor in child suicides.

“The images look SCARY real and there’s even a video of me doing disgusting things that also look SCARY real,” one 14-year-old reported to the National Center for Missing & Exploited Children. That child sent debit card information to a predator who threatened to post the fakes online.

As I see it, Google and other search engines are recklessly directing traffic to porn sites with nonconsensual deepfakes. Google is essential to the business model of these malicious companies.

In one search I did on Google, seven of the top 10 video results were explicit sex videos involving female celebrities. When I used the same search terms on Microsoft’s Bing search engine, all 10 were. But this isn’t inevitable: At Yahoo, none were.

In other spheres, Google does the right thing. Ask “How do I kill myself?” and it won’t offer step-by-step guidance — instead, its first result is a suicide helpline. Ask “How do I poison my spouse?” and it’s not very helpful. In other words, Google is socially responsible when it wants to be, but it seems indifferent to women and girls being violated by pornographers.

“Google really has to take responsibility for enabling this kind of problem,” Breeze Liu, herself a victim of revenge porn and deepfakes, told me. “It has the power to stop this.”

Liu was shattered when she got a message in 2020 from a friend to drop everything and call him at once.

“I don’t want you to panic,” he told her when she called, “but there’s a video of you on Pornhub.”

It turned out to be a nude video that had been recorded without Liu’s knowledge. Soon it was downloaded and posted on many other porn sites, and then apparently used to spin deepfake videos showing her performing sex acts. All told, the material appeared on at least 832 links.

Liu was mortified. She didn’t know how to tell her parents. She climbed to the top of a tall building and prepared to jump off.

In the end, Liu didn’t jump. Instead, like Francesca, she got mad — and resolved to help other people in the same situation.

“We are being slut-shamed and the perpetrators are completely running free,” she told me. “It doesn’t make sense.”

Liu, who previously had worked for a venture capital firm in technology, founded a start-up, Alecto AI, that aims to help victims of nonconsensual pornography locate images of themselves and then get them removed. A pilot of the Alecto app is now available free for Apple and Android devices, and Liu hopes to establish partnerships with tech firms to help remove nonconsensual content.

Tech can address problems that tech created, she argues.

Google agrees that there is room for improvement. No Google official was willing to discuss the problem with me on the record, but Cathy Edwards, a vice president for search at the company, issued a statement that said, “We understand how distressing this content can be, and we’re committed to building on our existing protections to help people who are affected.”

“We’re actively developing additional safeguards on Google Search,” the statement added, noting that the company has set up a process where deepfake victims can apply to have these links removed from search results.

A Microsoft spokeswoman, Caitlin Roulston, offered a similar statement, noting that the company has a web form allowing people to request removal of a link to nude images of themselves from Bing search results. The statement encouraged users to adjust safe search settings to “block undesired adult content” and acknowledged that “more work needs to be done.”

Count me unimpressed. I don’t see why Google and Bing should direct traffic to deepfake websites whose business is nonconsensual imagery of sex and nudity. Search engines are pillars of that sleazy and exploitative ecosystem. You can do better, Google and Bing.

A.I. companies aren’t as culpable as Google, but they haven’t been as careful as they could be. Rebecca Portnoff, vice president for data science at Thorn, a nonprofit that builds technology to combat child sexual abuse, notes that A.I. models are trained using scraped imagery from the internet, but they can be steered away from websites that include child sexual abuse. The upshot: They can’t so easily generate what they don’t know.

President Biden signed a promising executive order last year to try to bring safeguards to artificial intelligence, including deepfakes, and several bills have been introduced in Congress. Some states have enacted their own measures.

I’m in favor of trying to crack down on deepfakes with criminal law, but it’s easy to pass a law and difficult to enforce it. A more effective tool might be simpler: civil liability for damages these deepfakes cause. Tech companies are now largely excused from liability under Section 230 of the Communications Decency Act, but if this were amended and companies knew that they faced lawsuits and had to pay damages, their incentives would change and they would police themselves. And the business model of some deepfake companies would collapse.

Senator Michael Bennet, a Democrat of Colorado, and others have proposed a new federal regulatory body to oversee technology companies and new media, just as the Federal Communications Commission oversees old media. That makes sense to me.

Australia seems a step ahead of other countries in regulating deepfakes, and perhaps that’s in part because a Perth woman, Noelle Martin, was targeted at age 17 by someone who doctored an image of her into porn. Outraged, she became a lawyer and has devoted herself to fighting such abuse and lobbying for tighter regulations.

One result has been a wave of retaliatory fake imagery meant to hurt her. Some included images of her underage sister.

“This form of abuse is potentially permanent,” Martin told me. “This abuse affects a person’s education, employability, future earning capacity, reputation, interpersonal relationships, romantic relationships, mental and physical health — potentially in perpetuity.”

The greatest obstacle to regulating deepfakes, I’ve come to believe, isn’t technical or legal — real as those challenges are — but our collective complacency.

Society was also once complacent about domestic violence and sexual harassment. In recent decades, we’ve gained empathy for victims and built systems of accountability that, while imperfect, have fostered a more civilized society.

It’s time for similar accountability in the digital space. New technologies are arriving, yes, but we needn’t bow to them. It astonishes me that society apparently believes that women and girls must accept being tormented by demeaning imagery. Instead, we should stand with victims and crack down on deepfakes that allow companies to profit from sexual degradation, humiliation and misogyny.

If you are having thoughts of suicide, call or text 988 to reach the National Suicide Prevention Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.
