

Meta is under intense scrutiny after an AI-generated video of an alleged military strike surpassed 700,000 views on Facebook without being labeled as fake. This highlights a dangerous gap in the platform's ability to police synthetic media during active conflicts.
As experts warn of ‘soft wars’ being fought through digital deception, the Oversight Board has now officially overturned Meta’s decision to leave the content unlabeled. It has demanded an urgent overhaul of how the tech giant identifies high-risk AI content.
The issue began with the circulation of a high-quality video of a massive explosion in a major city. Many people believed the footage was real, and it was shared widely online. However, the video showed clear signs of AI generation that Meta’s systems missed.
Despite these telltale flaws, many influential accounts reshared the video, believing it depicted a real news event. Meta originally defended leaving the post unlabeled, arguing it did not cause “imminent physical harm.” The Oversight Board rejected that reasoning in its 10 March 2026 ruling, stating that “Meta’s current labeling mechanisms are neither robust nor comprehensive enough.”
Meta’s failure stems from the inconsistent use of provenance watermarks: embedded metadata that acts like a digital ID card, telling a computer whether a video is authentic or AI-generated.
The Coalition for Content Provenance and Authenticity (C2PA) has created a standard for embedding these credentials; however, the Oversight Board found that Meta is “inconsistently implementing” these rules, even for content made with Meta’s own AI tools. This creates a massive loophole through which misinformation can spread.
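C2PA credentials are embedded in a media file as structured metadata, which means a platform can detect their presence without any AI analysis at all. The sketch below is a deliberately naive illustration, not a real C2PA validator: it simply scans a file’s raw bytes for the ASCII marker `c2pa`, which appears in C2PA manifest labels, using only Python’s standard library. A production system would parse the full manifest and verify its cryptographic signatures.

```python
from pathlib import Path

def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Naive heuristic: report whether the raw bytes of a media file
    contain the ASCII string 'c2pa', which appears in C2PA manifest
    labels. This proves nothing cryptographically; it only shows that
    provenance metadata is ordinary data a platform could scan for."""
    marker = b"c2pa"
    tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            # keep an overlap so a marker split across chunks is found
            if marker in tail + chunk:
                return True
            tail = chunk[-(len(marker) - 1):]

# Illustrative files: one with an embedded manifest label, one without.
Path("with_manifest.bin").write_bytes(b"\x00\x01c2pa.manifest\x02")
Path("plain.bin").write_bytes(b"\x00\x01no credentials here\x02")
print(has_c2pa_marker("with_manifest.bin"))  # True
print(has_c2pa_marker("plain.bin"))          # False
```

The point of the sketch is that detection of declared provenance is cheap and deterministic; the hard cases the Board flagged arise when the metadata has been stripped or was never attached.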
Another major issue is the ‘labeling gap.’ Currently, Meta relies on users to self-disclose and label their own AI-generated posts. If a user does not, Meta’s scanners often fail to catch the video, especially if it has been edited slightly. The organization WITNESS told the Board that “highly realistic AI-generated content is now shaping public understanding” before fact-checkers can respond.
To keep synthetic media from eroding public trust, platforms need to apply labels automatically rather than waiting for user reports. The Board’s ruling suggests Meta should create a ‘High Risk’ tag that slows the spread of suspicious videos until they are verified. What we can believe online now depends on whether social media companies care more about clicks or the truth.