Over the past two years, Israel has suffered a crisis of information: a relentless daily assault of slander and de-legitimization, with international isolation as the result. For Israel’s enemies in the social media age, information is a weapon. Why fire rockets when you can fire lies and achieve your goals in a different way?
Israelis may look at the rest of the world, aghast at how easily these lies are believed, but the issue runs deeper than mere gullibility. A sophisticated disinformation campaign was crafted to exploit how the brain functions.
We rely on shortcuts: We trust what feels familiar, accept claims that align with our existing beliefs, and judge credibility based on social cues (“everyone’s sharing it”) rather than evidence. Malicious actors such as authoritarian states and extremist groups expertly manipulate these shortcuts.
Confirmation bias
Begin with confirmation bias. If a story flatters our worldview, we scrutinize it less. Motivated reasoning goes further: We use intelligence not to find the truth but to defend the tribe. That is why falsehoods framed as “us versus them” spread so easily; believing becomes a sign of loyalty, and doubting becomes treason.
Then comes emotion. Fear, anger, and moral disgust hijack attention and reduce our capacity for rational thought. A sensational accusation spreads faster because outrage is highly shareable. Social media algorithms are designed to reward content that inflames, and malign actors know this and exploit it.
Repetition completes the task. The mind often mistakes familiarity for accuracy. Hearing a false claim enough times, across different platforms and from various accounts, can make it seem true. The claim does not even need to be consistent; sheer frequency and speed overwhelm critical thinking.
In 2026, we can add the power of visuals to the mix. Everyone now owns a phone with a camera and a network connection. A photo or video, even one that is irrelevant or taken out of context, can make a claim seem “real,” and malicious actors now have the means to flood the Internet with selectively edited footage that supports their campaign. Deepfakes raise the stakes further: They can fabricate evidence outright, while allowing guilty parties to dismiss genuine footage as fake.
A narrative battle
The past few years have demonstrated these dynamics in action. The narrative battle over the Gaza conflict revealed how, amid the crisis, competing claims become separate realities before investigators can verify the facts.
Both Hamas’s campaign in Gaza and Russia’s disinformation campaign surrounding Ukraine illustrate the long-term strategy: a high-volume “firehose” of claims, repeatedly broadcast across channels to sow doubt, polarize audiences, and erode trust.
This is a cognitive issue with political consequences. Repeated exposure to disinformation can distort memory, reinforce false beliefs, and leave behind “echoes” that persist even after correction. Worse, overload can lead to cynicism. When everything feels disputed, people abandon the pursuit of truth altogether. That surrender is a tactical victory for any propagandist.
Information ‘Iron Dome’
So what would a serious response look like? How do we build an “Iron Dome” against disinformation?
Firstly, inoculation. Just as vaccines train the immune system, “prebunking” trains the mind. Teach people the common disinformation techniques before they encounter them, and how to spot emotional language, scapegoats, false dilemmas, fake experts, and doctored visuals. When you can name the trick, you are less likely to fall for it.
Secondly, transform media literacy into bias literacy. “Check the URL” is not sufficient. People need to practice recognizing their own psychological triggers by asking: “Am I sharing this because it is true, or because it is satisfying?” They also need habits such as lateral reading, especially during breaking news: opening new tabs, cross-checking claims, and seeking independent corroboration before reacting or sharing.
Thirdly, redesign the attention economy. When algorithms reward outrage, society will drown in outrage. Platforms should introduce friction to rapid resharing, downrank known falsehoods and coordinated networks, and provide timely context from credible sources. Provenance and authenticity signals for images and videos should become as standard as spam filters. Regulators should demand transparency on political adverts and state-linked outlets without sliding into outright censorship.
Finally, focus on trust and empathy. Corrections land best when delivered by trusted messengers such as community leaders, local journalists, educators, and creators who can speak in a shared language without contempt. Disinformation feeds on social fracture; rebuilding civic trust is part of the cure.
In the social media age, truth will never be effortless again, but it can still win if we stop treating disinformation as a nuisance and start treating it as a psychological assault on the public mind.■
Andrew Fox is a retired British Army officer and research fellow at the Henry Jackson Society.