- Meta is expanding its content policy on AI-generated media to cover audio, video, and images
- Meta’s previous policy was very narrow, covering only AI-generated videos. Based on the Oversight Board’s recommendations, a new set of rules has been introduced
- The new policy takes effect in May 2024, and the old one will be withdrawn in July 2024
On Friday, just ahead of the US elections, Meta published a blog post announcing changes to its policies on AI-generated and altered content.
To help counter deepfakes, the company said it will label a wider range of content, including video, audio, and images. In most cases, a post will simply get a “Made With AI” label. However, if it poses a greater risk of misleading users, additional context may also be provided.
Meta’s current policy only applies to videos created with AI tools. But with the rise of AI in content creation, it was high time the company adopted a broader policy.
The new policy takes effect in May 2024, and the old policy (the one that applied to videos alone) will be withdrawn in July.
“This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.” (Meta blog post)
The change was a necessary one, prompted by Meta’s Oversight Board, which had called the company’s existing rules on manipulated media “incoherent”.
The criticism followed a viral video of President Joe Biden that had been edited to make it look like he was behaving inappropriately with his adult granddaughter.
However, Meta’s flawed rule allowed the video to stay on the platform: the policy only removed videos that were AI-generated and made someone appear to say words they never said. The Biden clip was conventionally edited and depicted actions rather than speech, so it slipped through on both counts.
Problems With This Decision
Although this might seem like a great move at first glance, the decision has some significant drawbacks. For a piece of content to get a label, either the uploader must disclose that it is AI-generated before posting, or Meta must detect “industry standard AI image indicators” embedded in the file.
This means that any AI content that falls into neither category will remain unlabeled.
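To give a sense of what that detection step involves: the indicators Meta refers to are generally understood to include provenance metadata, such as the IPTC Digital Source Type value “trainedAlgorithmicMedia” that some image generators embed in exported files. The Python sketch below is a deliberate simplification, not Meta’s actual pipeline (which it has not published); real detectors parse XMP/C2PA metadata properly rather than scanning raw bytes, but even this crude version illustrates both the idea and its weakness.

```python
# Illustrative sketch only: scan a file's raw bytes for the IPTC
# AI-provenance term "trainedAlgorithmicMedia". This is NOT Meta's
# method; it just demonstrates metadata-based AI labeling in miniature.

import sys

# IPTC Digital Source Type vocabulary term for fully AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Naively check whether the file carries the AI-provenance marker.

    A production detector would parse the embedded XMP/C2PA metadata;
    a plain byte search is enough here to show why this labeling is
    fragile: re-encoding or screenshotting the image strips the marker.
    """
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "AI marker found" if looks_ai_generated(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```

Because the check depends entirely on metadata surviving intact, content that never carried the marker, or had it stripped along the way, stays unlabeled, which is exactly the loophole described above.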
The new policy also means that more fake or AI-generated content, some of it potentially harmful, will be allowed to stay on the platform. Instead of removing manipulated content outright, Meta is choosing to label it and let users weigh the risk.
How effective this approach will be in curbing the spread of AI-driven misinformation in the long run remains an open question.
The Reason Behind This Decision
Although the policy change is problematic, Meta is not the villain here. If anything, it’s a victim of circumstance.
Meta is trying to curb the misuse of AI and the spread of misinformation, but the legal demands of the European Union’s Digital Services Act and the feedback from its Oversight Board have made matters tricky.
Meta doesn’t want manipulated content on the platform, but at the same time, it wants to protect people’s freedom of speech and expression. In a situation like this, adding labels seems like the best way out, even if it’s not a foolproof solution.
On the brighter side, Meta has promised that it will continue to remove content that violates its platform policies, regardless of whether it was created by AI or humans. This includes content that glorifies bullying, harassment, or violence, as well as content that interferes with voting.
Meta also works with a network of nearly 100 independent fact-checkers who review flagged posts for false information. If a post is rated false or altered, it’s pushed down the feed so that fewer users see it.
Last but not least, ads containing debunked information are rejected outright. And since January, advertisers have had to disclose when they digitally create or alter an ad, especially one that depicts a political or social issue.