Deepfake porn could be a growing problem amid AI race

Apr 16, 2023, 8:24 AM


Australian Noelle Martin poses for a photo Thursday, March 9, 2023, in New York. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. (AP Photo/Andres Kudacki)
Credit: ASSOCIATED PRESS

NEW YORK (AP) — Artificial intelligence imaging can be used to design advertising campaigns.

But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.

Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.

Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.

The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.

“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. To this day, Martin says she doesn’t know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.

Horrified, Martin contacted different websites over a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, but she soon found them back up again.

“You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.

Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.

But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.

In the meantime, some AI models say they’re already curbing access to explicit images.

OpenAI says it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. Those changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.

Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”

Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.

TikTok said last month that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.

Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.

Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women and the most targeted individuals were western actresses, followed by South Korean K-pop singers.

The same app removed by Google and Apple had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company’s policy restricts both AI-generated and non-AI adult content and it has restricted the app’s page from advertising on its platforms.

In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves on the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.

“When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

“We have not … been able to formulate a direct response yet to it,” Portnoy said.
