The Oversight Board at Meta has confirmed that it will be looking at two cases involving AI-generated sexually explicit images.

AI-generated images have proved both pervasive and nefarious. In recent months, popular musicians like Taylor Swift have had AI-generated sexually explicit material surface en masse online, leaving social media platforms to decide how to handle the uploading and distribution of such content.

While Meta does have policies on the distribution of sexually explicit content across its various platforms, its rules around AI-generated imagery are now being scrutinised by its Oversight Board.

The Board is specifically looking at two cases, one from an incident in the United States and the other from India. It also explained that it is examining how AI-generated explicit images of public figures are handled.

“These cases concern two content decisions made by Meta, one on Instagram and one on Facebook, which the Oversight Board intends to address together. For each case, the Board will decide whether the content should be allowed on Instagram or Facebook,” it explained in a statement.

For obvious reasons, the Board is not detailing which public figures were used in the creation of the images, but did note that, “The first case involves an AI-generated image of a nude woman posted on Instagram. The image has been created using artificial intelligence (AI) to resemble a public figure from India.”

“The second case concerns an image posted to a Facebook group for AI creations. It features an AI-generated image of a nude woman with a man groping her breast. The image has been created with AI to resemble an American public figure, who is also named in the caption. The majority of users who reacted have accounts in the United States,” it added regarding the other.

Along with assessing whether Meta's policies are effective enough (doubtful, given that this content could be posted, shared, and subsequently surface on the platform), the Oversight Board also wants to see how well those policies are enforced in parts of the world outside the US.

These are among the elements the Board will examine as it calls for public comment on the two cases.

In particular it wants public feedback to determine:

“The nature and gravity of harms posed by deepfake pornography including how those harms affect women, especially women who are public figures.

Contextual information about the use and prevalence of deepfake pornography globally, including in the United States and India.

Strategies for how Meta can address deepfake pornography on its platforms, including the policies and enforcement processes that may be most effective.

Meta’s enforcement of its “derogatory sexualised photoshop or drawings” rule in the Bullying and Harassment policy, including the use of Media Matching Service Banks.

The challenges of relying on automated systems that automatically close appeals in 48 hours if no review has taken place.”

Whether any lasting or enforceable changes will be made, should the Oversight Board see a need to refine the current policies, remains to be seen. What is clear is that AI-generated images are creating yet another headache for social media platforms, as well as a potential PR nightmare for public figures.

To submit your comments on the matter anonymously, head here.

[Image – Photo by Bastian Riccardi on Unsplash]
The post Meta’s Oversight Board looking at AI-generated explicit images appeared first on Hypertext.
