Meta Must Do More to Stop AI-Generated Explicit Images


Meta must do more to address AI-generated explicit images after it fell short in its response to non-consensual, nude deepfakes of two female public figures on its platforms, according to a report.

In April, Meta’s semi-independent observer body, the Oversight Board, announced that it would undertake an investigation into the company’s handling of deepfake pornography.

The investigation stemmed from two specific instances in which a deepfake nude image of a public figure from India, as well as a more graphic image of a public figure from the U.S., were posted on Meta’s platforms.

Neither Meta nor the Oversight Board named the female victims of the deepfakes.

In a report published on Thursday after a three-month investigation into the incidents, the Oversight Board found that both images violated Meta’s rule prohibiting “derogatory sexualized photoshop” images — which is part of its Bullying and Harassment policy.

“Removing both posts was in line with Meta’s human rights responsibilities,” the report reads.

The deepfake pornographic image of the Indian public figure was twice reported to Meta. However, the company did not remove the image from Instagram until the Oversight Board took up the case.

In the case of the image of the American public figure posted to Facebook — which was generated by AI and depicted her as nude and being groped — Meta immediately removed the picture, which had previously been added to a matching bank that automatically detects rule-breaking images.

“Meta determined that its original decision to leave the content on Instagram was in error and the company removed the post for violating the Bullying and Harassment Community Standard,” the Oversight Board says in its report.

“Later, after the Board began its deliberations, Meta disabled the account that posted the content.”

Not Just Photoshop

The report suggests that Meta is not consistently enforcing its rules against non-consensual sexual imagery, even as advancements in AI technology have made this form of harassment increasingly common. The Oversight Board called on Meta to update its policies and make their language clearer to users.

In its report, the Oversight Board — a quasi-independent entity made up of experts in areas such as freedom of expression and human rights — laid out recommendations for how Meta could improve its efforts to combat sexualized deepfakes.

Currently, Meta’s policies on explicit images generated by AI stem from the “derogatory sexualized photoshop” rule in its Bullying and Harassment section. The Board urged Meta to replace the word “photoshop” with a more general term that covers other photo-manipulation techniques, such as AI.

Additionally, Meta prohibits nonconsensual imagery only if it is “non-commercial or produced in a private setting.” The Board suggested that this condition should not be required for removing or banning images that are AI-generated or manipulated without consent.

The report also pointed to Meta’s continued difficulties moderating content in non-Western and non-English-speaking countries.

In response to the Board’s observations, Meta said that it will review the recommendations.

 
Image credits: Header photo licensed via Depositphotos.


