
Facebook has a massive revenge porn problem.

The social media giant had to assess nearly 54,000 potential cases of revenge porn and “sextortion” in a single month, according to a document leaked to the Guardian.

As a result of investigations into pornographic images shared without consent, sometimes called nonconsensual porn, Facebook disabled more than 14,000 accounts, the Guardian report said. Of those cases, 33 involved children.

Facebook’s community standards prohibit sharing what it calls “nonconsensual intimate imagery,” but policing the growing problem is difficult for the tech company, which largely relies on users to report revenge porn and other posts of sexual violence.

“Sexual policy is the one where moderators make the most mistakes,” an unnamed source told the Guardian. “It is very complex.”

A study of U.S. revenge porn victims found that 93 percent reported significant emotional distress, and 82 percent reported significant impairment in social, occupational or other important areas of their lives.

Facebook has faced public and legal pressure over the past year to strengthen its response to nonconsensual porn, following two high-profile scandals.

In March, the U.S. Marine Corps launched an investigation into a secret Facebook group where Marines and veterans were sharing explicit photos of female service members. Facebook shut down the group after it was made aware of its existence.

Last fall, after a nude image of a 14-year-old girl was shared dozens of times across the platform, a judge in Ireland declined to dismiss a suit claiming the site didn't do enough to stop the picture from being repeatedly reposted.

Here’s how Facebook plans to crack down on revenge porn

Last month, Facebook unveiled how it plans to curb the sharing of nonconsensual porn on walls, in private or secret groups, and on Messenger.

The documents obtained by the Guardian outlined how site moderators should evaluate sexually charged content, as some sexual references are permitted for both text and images.

The social media site allows “moderate displays of sexuality, open-mouthed kissing, clothed simulated sex and pixelated sexual activity” involving adults.

“We allow general expressions of desire, but we don’t allow sexually explicit detail,” a spokesperson told the Guardian.

These guidelines were developed to give moderators better direction on what constitutes allowable and unallowable sexual references.

The social media giant has also developed new guidelines on how to deal with what it calls “intimate images” in order to “help build a safe community.”

When users see an intimate image on Facebook that appears to have been shared without permission, they can click the “Report” button. Specially trained representatives will then review the image and remove it if it is deemed in violation of community standards.

Using photo-matching technology, Facebook then blocks the image from being shared on any walls, groups, in Messenger or on Instagram.
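Facebook hasn't published the details of its photo-matching technology, but a standard technique for recognizing re-uploads of a known image is perceptual hashing. The toy sketch below (a "difference hash" over a small, made-up grayscale pixel grid; all values are illustrative, not from Facebook) shows the core idea: near-identical images produce fingerprints that differ in only a few bits, so a slightly edited re-upload can still be matched.

```python
def dhash_bits(pixels):
    """Difference hash: for each pixel, record whether it is brighter
    than its right-hand neighbor. `pixels` is a row-major grid of
    grayscale values (real systems downscale the image first, e.g. to
    9x8 pixels); the resulting bit tuple acts as a fingerprint."""
    return tuple(
        1 if row[i] > row[i + 1] else 0
        for row in pixels
        for i in range(len(row) - 1)
    )

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Two downscaled "images": the second has one pixel darkened,
# as might happen after recompression or a minor edit.
img_a = [[10, 20, 30], [30, 20, 10]]
img_b = [[10, 20, 15], [30, 20, 10]]

fp_a = dhash_bits(img_a)
fp_b = dhash_bits(img_b)
print(hamming(fp_a, fp_b))  # prints 1 -- a small distance suggests a re-upload
```

In a real pipeline the comparison is a Hamming-distance threshold rather than exact equality, which is what lets the system block a banned image even after cropping, resizing, or recompression changes its raw bytes.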

“We're focused on building a community that keeps people safe. That means building technology and A.I. tools to prevent harm,” Facebook founder Mark Zuckerberg wrote in a post the day the new technology was rolled out. “Today, we're rolling out new tools to prevent 'revenge porn' from being shared.”