One viral video claims to show a Hamas fighter shooting down an Israeli helicopter — but it’s a clip from the video game Arma 3. A video purporting to show an Israeli woman being attacked in Gaza was filmed in 2015 in Guatemala. An unverified voice message circulating on WhatsApp, along with the note “forwarded many times,” says a military official has instructed Israelis to stock up on cash, fuel, and groceries. Fake accounts posing as a BBC journalist and the Jerusalem Post newspaper spread false information widely before being suspended by X (formerly known as Twitter).
In the wake of Hamas’s surprise attack on Israel and the escalation into war over the weekend, social media platforms and messaging apps are awash in viral rumors, misleading images and videos, and outright falsehoods, making it hard for people in Israel, Gaza, and around the world to find reliable information about the conflict.
Many online videos are being taken out of context or mischaracterized — a frequent occurrence in breaking news situations where interest is high but verified information is hard to come by.
“Once we saw the events happening, the war started, there was a void of information. No one knew nothing. And [into] this vacuum of information entered all kinds of interest groups, fear, confusion, and conspiracies,” said Achiya Schatz, executive director of FakeReporter, an Israeli watchdog group that tracks misinformation.
Misleading posts born of fear and confusion are being amplified within a broader online information ecosystem inundated with graphic, violent footage posted by Hamas, Israeli military forces, and supporters aligned with both sides.
“The violent content that is being pushed out across a range of different social media platforms as well as encrypted messaging apps is being used essentially to gloat, celebrate attacks, as well as … to insinuate war crimes,” said Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a nonprofit that studies extremism.
“This is propaganda 101. You flood the gap, especially in those early hours, with content that suggests a certain narrative, whether it’s the strength of one faction over another, whether it’s the strength of one state over another, and try to get ahead of the curve,” he said.
The fog of war and accompanying surge in unverified information online is fodder for state actors — including those backed by Iran and Russia — and other groups eager to take advantage of the chaos to fuel division, spread propaganda, attack enemies and sow further confusion.
“All of these actors of course will be squarely focused on the war and how they can twist perception of the war to benefit their objectives,” said Emerson Brooking, resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab.
False and misleading claims are also being used to advance political agendas in the U.S. A fake memo purporting to show the White House announcing $8 billion in military aid to Israel spread on Facebook, showed up high in Google search results, and was boosted on X by accounts that paid $8 for “verified” checkmarks. In some cases, the fake memo was paired with allegations that the Biden administration funneled $6 billion to Hamas via Iran, which the White House says is false.
X has emerged as a particular locus for bogus claims and mischaracterized videos and images, as owner Elon Musk has removed many guardrails against the spread of false and misleading narratives.
After cutting much of X’s trust and safety staff, Musk has said the site will rely more heavily on user-generated fact checks to address falsehoods. But it’s unclear how much impact those fact checks have.
One video posted on Sunday by the co-chair of a group that calls itself Republicans Overseas Israel shows a man playing with a baby. The caption claims it depicts a “Hamas terrorist with kidnapped Jewish baby girl in Gaza.”
But many users quickly pointed out that the video was originally posted on TikTok back in August and bears no indication that it depicts a kidnapped child and a terrorist. The X post has been labeled with a user-generated fact check saying as much, but it has been viewed a million times and remains on the platform despite replies urging the poster to delete or correct it.
Musk has also added to the confusion on his platform, recommending that “for following the war in real-time,” users follow two accounts that have posted spurious claims in the past — including promoting a false report of an explosion at the Pentagon in May that sparked a brief dip in the stock market.
Both accounts carry “verified” checkmarks, meaning they’ve paid for X’s subscription service. That means their posts get boosted on the platform and they are eligible to earn advertising money.
As a result, accounts are able to “buy this veneer of legitimacy and credibility,” and have a “direct profit incentive” to maximize views of their posts, even if they don’t have new information to share, Brooking said.
“I’ve noticed that some of these accounts, then they editorialize more frequently, they interject their own opinions, or they may suggest things that are not necessarily based in even the data that they’re sharing,” he said. “In this sort of fast-moving conflict situation where people are making real and impactful decisions based off what they’re seeing on the platform, the consequences are deeply harmful.”