Meta’s semi-independent policy body, the Oversight Board, has turned its attention to how the company’s social platforms handle explicit images generated by artificial intelligence. On Tuesday, the board announced investigations into two separate cases, one involving Instagram in India and one involving Facebook in the U.S., over how the platforms handled AI-generated images of public figures after Meta’s systems failed to detect and respond to the explicit content.

In both cases, the platforms have since taken down the media. The board did not name the individuals targeted by the AI images “to avoid gender-based harassment,” according to an email the board sent to TechCrunch.

The board takes up cases concerning Meta’s moderation decisions; users must first submit an appeal to Meta before approaching the Oversight Board. The board will publish its full findings and conclusions at a later date.

The cases

Describing the first case, the board said a user reported an AI-generated nude image of an Indian public figure on Instagram as pornography. The image was posted by an account dedicated to AI-generated images of Indian women, and most of the users who reacted to it were based in India.

Meta failed to remove the image after the initial report, and the ticket was closed automatically after 48 hours without the company reviewing it further. When the original complainant appealed the decision, that report was also closed automatically, again without any oversight from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user ultimately appealed to the board. Only at that point did the company act, removing the image for violating its community standards on bullying and harassment.

The second case relates to Facebook, where a user posted an explicit AI-generated image resembling a U.S. public figure in a group dedicated to AI creations. In this instance, the social network took the image down, as another user had posted it earlier and Meta had added it to a Media Matching Service bank under the “Derogatory Photoshop or Drawing” category.

When TechCrunch asked why the board selected cases in which the company successfully removed the explicit AI-generated images, the board said the chosen cases were “symbolic of broader issues across Meta’s platforms.” It added that the cases help it assess the global effectiveness of Meta’s policies and enforcement practices on the topic.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than in others. By taking one case from the U.S. and one from India, we want to look at whether Meta is protecting all women globally in a fair way,” Oversight Board co-chair Helle Thorning-Schmidt said in a statement.

“The board believes it is important to explore whether Meta’s policies and enforcement practices effectively address this issue.”

Deepfake porn and online gender-based violence

In recent years, some (but not all) generative AI tools have expanded to allow users to generate pornographic content. As TechCrunch previously reported, groups like Unstable Diffusion are trying to monetize AI porn, with murky ethical lines and bias in the underlying data.

Deepfakes have also become a cause for concern in regions like India. Last year, a BBC report noted that the number of deepfake videos of Indian actresses has surged recently. Data suggests that women are disproportionately the targets of deepfake videos.

Earlier this year, India’s deputy IT minister, Rajeev Chandrasekhar, expressed dissatisfaction with tech companies’ approach to combating deepfakes.

“If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said at a press conference at the time.

While India has considered incorporating specific rules on deepfakes into law, nothing is set in stone yet.

Although the country’s laws provide for reporting online gender-based violence, experts note that the process can be tedious and that victims often have little support. In a study published last year, Indian advocacy group IT for Change argued that Indian courts need robust processes to address online gender-based violence rather than trivializing these cases.

Currently, only a handful of laws around the world target the production and distribution of pornography generated with AI tools. A few U.S. states have laws against deepfakes, and this week the U.K. introduced a law criminalizing the creation of AI-generated sexually explicit imagery.

Meta’s response and next steps

In response to the oversight board’s case, Meta said it had removed both items. However, the social media company did not address the fact that it failed to remove the content on Instagram after initial reports from users or how long the content remained on the platform.

Meta says it uses a combination of artificial intelligence and human review to detect sexually suggestive content. The social media giant said it doesn’t recommend such content in places like Instagram Explore or Reels recommendations.

The Oversight Board has sought public comments on the matter, with a deadline of April 30. The call for comments covers the harms of deepfake pornography, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls in Meta’s approach to detecting AI-generated explicit imagery.

The board will investigate the cases and public comments and post its decision on its website in a few weeks.

These cases show that large platforms are still grappling with older moderation processes at a time when AI-powered tools let users create and distribute many types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, and are also working to improve detection of such imagery. However, bad actors continue to find ways around these detection systems and post problematic content on social platforms.
