Meta has announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. Starting next month, the company will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes, also known as synthetic media. Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.
The move could see the social networking giant labeling more potentially misleading content, which matters in a year of major elections around the world. For deepfakes, however, Meta will only apply labels where the content in question has “industry standard AI image indicators,” or where the uploader has disclosed that it is AI-generated.
AI-generated content that falls outside those bounds may go unlabeled.
The policy change is also likely to lead to more AI-generated content and manipulated media remaining on Meta’s platforms, since the company is shifting toward an approach focused on “providing transparency and additional context” as, in its words, “the better way to address this content” (rather than removing manipulated media, given the associated risks to free speech). So for media that is AI-generated or otherwise manipulated on Meta platforms like Facebook and Instagram, the revised strategy looks like one of more labels and fewer takedowns come summer.
Meta said it will stop removing content solely on the basis of its current manipulated video policy in July, adding in a blog post published on Friday: “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.”
The change in approach may be a response to Meta’s growing legal obligations around content moderation and systemic risk, such as the European Union’s Digital Services Act. Since August last year, the pan-EU law has applied a set of rules to Meta’s two major social networks that require the company to walk a fine line between removing illegal content, mitigating systemic risks and protecting free speech. The EU is also applying extra pressure on platforms ahead of European Parliament elections this June, including urging tech giants to watermark deepfakes where technically feasible.
Meta likely also has the November U.S. presidential election in mind, as that high-profile political event raises the stakes around misleading content in the country.
Oversight Board Criticism
Meta’s Oversight Board, which the tech giant funds but permits to run at arm’s length, reviews a small subset of its content moderation decisions and can also make policy recommendations. Meta is not bound to accept the Board’s recommendations, but in this instance it has agreed to revise its approach.
Monika Bickert, Meta’s vice president of content policy, said in a blog post on Friday that the company is revising its policies on AI-generated content and manipulated media based on the Board’s feedback. “We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” she wrote.
Back in February, the Oversight Board urged Meta to reconsider its approach to AI-generated content. It issued that content review decision in relation to a doctored video of President Biden, which had been edited to imply that a platonic kiss he gave his granddaughter was sexually motivated.
While the Board agreed with Meta’s decision to leave the specific content up, it attacked the company’s policy on manipulated media as “incoherent,” pointing out, for example, that the policy only applies to video created with AI, letting other fake content (such as more basic doctored video or audio) off the hook.
Meta appears to have taken the critical feedback on board.
“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” Bickert wrote. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”
“The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a ‘less restrictive’ approach to manipulated media like labels with context.”
Earlier this year, Meta announced it was working with others in the industry to develop common technical standards for identifying AI content, including video and audio. It is now leaning on that effort to expand its labeling of synthetic media.
“Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” Bickert wrote, noting that the company already applies “Imagined with AI” labels to photorealistic images created using its own Meta AI feature.
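To make the detection piece concrete: one widely used industry-shared signal is provenance metadata embedded in the file itself, such as the IPTC digital source type value that many generator tools write into an image’s XMP packet. The following is a minimal sketch of that idea, not Meta’s actual pipeline; it assumes a locally readable image file and simply scans the raw bytes for the standard marker rather than parsing the metadata properly:

```python
# Minimal sketch: look for the IPTC "trainedAlgorithmicMedia" digital source
# type that many AI image generators embed in a file's XMP metadata packet.
# This illustrates where the signal lives; it is not Meta's detector, and a
# production system would parse XMP (and C2PA manifests) properly.
import sys

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"  # IPTC digital source type value

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-source marker."""
    with open(path, "rb") as f:
        return AI_SOURCE_MARKER in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "AI signal found" if looks_ai_generated(image_path) else "no AI signal"
        print(f"{image_path}: {verdict}")
```

A byte scan like this also shows why signal-based detection is easy to defeat: stripping metadata removes the marker entirely, which is exactly the gap that self-disclosure by uploaders is meant to cover.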
Bickert said the expanded policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling.”
“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” she wrote. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”
Meta said it will not remove manipulated content, whether AI-based or otherwise doctored, unless it violates other policies (such as those covering voter interference, bullying and harassment, violence and incitement, or other Community Standards issues). Instead, as noted above, it may add “informational labels and context” in certain scenarios of high public interest.
Meta’s blog post highlights a network of nearly 100 independent fact-checkers the company says it engages with to help identify risks related to manipulated content.
These external entities will continue to review false and misleading AI-generated content, per Meta. When they rate content as “False or Altered,” Meta says it will respond by applying algorithm changes that reduce the content’s reach, meaning the content will appear lower in the Feed so fewer people see it. Meta will also apply an overlay label with additional information for those eyeballs that do land on it.
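Pieced together, the enforcement flow described above reduces to a small decision tree. The sketch below is a paraphrase of the policy as stated in Meta’s post; all the names are illustrative inventions for this article, not Meta’s code:

```python
# Hypothetical paraphrase of the enforcement flow described above; field and
# function names are illustrative and none of this is Meta's actual code.
from dataclasses import dataclass

@dataclass
class Content:
    violates_other_policies: bool   # e.g. voter interference, harassment
    has_ai_signal: bool             # industry-shared AI indicators detected
    self_disclosed_ai: bool         # uploader flagged it as AI-generated
    high_deception_risk: bool       # could materially deceive the public
    fact_check_rating: str | None   # e.g. "False or Altered", or None

def moderate(item: Content) -> list[str]:
    if item.violates_other_policies:
        return ["remove"]  # Community Standards violations still come down
    actions: list[str] = []
    if item.has_ai_signal or item.self_disclosed_ai:
        actions.append("label: Made with AI")
    if item.high_deception_risk:
        actions.append("label: more prominent, with added context")
    if item.fact_check_rating == "False or Altered":
        actions.append("downrank in Feed")  # reduced reach
        actions.append("overlay: fact-check information")
    return actions or ["leave up, unlabeled"]
```

The ordering matters: removal for other policy violations is checked first, so labeling, context, and downranking only ever apply to content Meta has decided to leave up.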
As generative AI tools flourish and synthetic content proliferates, those third-party fact-checkers look set to face a growing workload. And thanks to this policy shift, it looks like more of that content will remain on Meta’s platforms.