Rather than flip on the TV when major newsworthy events happen, like Hamas’ attack on Israel on Oct. 7 and the subsequent retaliation by Israeli forces in Gaza, we open up social media to get up-to-the-minute information. However, while television is still bound to regulations that require a modicum of truthful content, social media is a battleground of facts, lies, and deception, where governments, journalists, law enforcement, and activists compete on an uneven playing field.
It is a massive understatement to use the term “fog of war” to describe what is happening in discussions of Hamas and Israel on social media. It’s a torrent of true horror, violent pronouncements, sadness, and disinformation. Some have capitalized on this moment to inflame tensions or gain clout by posting video game clips or older images of war recontextualized. Many governments, including the U.S., were shocked that Israeli intelligence failed to see the land, sea, and air attack coming. Israel is known for its controversial cyber defense and spyware used to tap into journalists’ and adversaries’ networks. How could this have happened?
It may come as a surprise to some that we are involved in an information war playing out across all social media platforms every day. But it’s one thing to see disinformation, and it’s another to be an active (or unwitting) participant in battle.
Read More: How Israel-Hamas War Misinformation Is Spreading Online
Unlike individuals, states conduct warfare operations using the DIME model: diplomacy, information, military, and economics. Most states do everything they can to inflict pain and confusion on their enemies before deploying the military. In fact, attacks on vectors of information are a well-worn tactic of war, and information channels are usually the first target when the charge begins. Telecom data and communications networks are routinely monitored by governments, which is why the open data policies of the web are so concerning to many advocates of privacy and human rights.
With the worldwide adoption of social media, more governments are getting involved in low-grade information warfare through the use of cyber troops. According to a study by the Oxford Internet Institute in 2020, cyber troops are “government or political party actors tasked with manipulating public opinion online.” The Oxford research group was able to identify 81 countries with active cyber troop operations utilizing many different strategies to spread false information, including spending millions on online advertising. Importantly, this situation is vastly different from utilizing hacking or other forms of cyber warfare to directly attack opponents or infrastructure. Cyber troops typically utilize social media and the internet as it is designed, while employing social engineering techniques like impersonation, bots, and growth hacking.
Data on cyber troops is still limited because researchers rely heavily on takedown reports by social media companies. But the Oxford researchers were able to identify that, in 2020, Palestine was a target of information operations from Iran on Facebook and Israel was a target of Iran on Twitter, which indicates that disinformation campaigns know no borders. Researchers also noted that Israel developed high-capacity cyber troop operations internally, using tactics like botnets and human accounts to spread pro-government and anti-opposition messaging and to suppress anti-Israel narratives. The content Israeli cyber troops produced or engaged with included disinformation campaigns, trolling, amplification of favored narratives, and data-driven strategies to manipulate public opinion on social media.
Of course, there is no match for the cyber troops deployed by the U.S. government and the ancillary corporations hired to smear political opponents, foreign governments, and anyone who gets in the way. Even companies like Facebook have employed PR firms to use social media to trash the reputations of competitors. It’s open warfare—and you’ve likely participated.
As for who runs influence operations online, researchers found evidence of a blurry boundary between government operatives and private firms contracted to conduct media manipulation campaigns online. This situation suggests that contemporary cyber operations are best characterized as fourth generation warfare, which blurs the lines between civilians and combatants.
It has also called into question the validity of the checks that platforms have built to separate fact from fiction. For instance, a graphic video of the war posted by Donald Trump Jr.—footage he claimed came from a “source within Israel”—was flagged as fake through X’s Community Notes fact-checking feature. The problem, though, was that the video was real. This would not be the first time we have seen fact-checkers spread disinformation; pro-Russian accounts did something similar in 2022.
Time and time again, we have seen social media used to shape public opinion, defame opponents, and leak government documents using tactics that involve deception by creating fake engagement, using search engine optimization, cloaked and imposter accounts, as well as cultural interventions through meme wars. Now more than ever we need politicians to verify what they are saying and arm themselves with facts. Even President Biden was fact-checked on his claim to have seen images of beheaded babies, when he had only read news reports.
Today, as we witness more and more attacks across Israel and Palestine, influential people—politicians, business people, athletes, celebrities, journalists, and folks just like you and me—are engaged in fourth generation warfare, using networks of information as a weapon. The networks are key here: engagement is what carries bits of information—viral videos, hashtags, or memes—across vast distances.
If we have all been drafted into this war, here are some questions that information scientist and professor Amelia Acker and I developed to gauge whether an online post might be disinformation. Ask yourself: Is it a promoted post or ad? Promotion is a cheap shortcut to massive audiences and to going viral. Is there authentic engagement on the post, or do all of the replies seem strange or unrelated? If you suspect the account is an imposter, conduct a reverse image search of the profile picture and account banner, and check whether the Wayback Machine has snapshots of the account from prior months or years. Lastly, to spot spam, view the attached media (pictures, videos, links), look for duplicates, and see whether the account engages in spam posting, for example, replying to lots of posts with innocuous comments.
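The checklist above can be sketched as a simple scoring function. This is a minimal, hypothetical illustration: the field names (`is_promoted`, `replies`, `profile_image_reused`, and so on) are assumptions for the example, not part of any real platform API, and the thresholds are arbitrary.

```python
def disinformation_signals(post: dict) -> list:
    """Return the checklist signals a post trips; more signals warrant more scrutiny.

    `post` is a hypothetical dictionary of observations a reader might
    collect manually, not data from a real platform API.
    """
    signals = []
    # Is it a promoted post or ad? Promotion buys a shortcut to huge audiences.
    if post.get("is_promoted"):
        signals.append("promoted post or ad")
    # Does the engagement look authentic, or are all replies unrelated?
    replies = post.get("replies", [])
    if replies and all(r.get("unrelated") for r in replies):
        signals.append("no authentic engagement")
    # Did a reverse image search show the profile picture used elsewhere?
    if post.get("profile_image_reused"):
        signals.append("possible imposter account")
    # Is the attached media a duplicate of media posted elsewhere?
    if post.get("duplicate_media"):
        signals.append("duplicate attached media")
    # Does the account spam innocuous replies across many posts?
    if post.get("spam_reply_count", 0) > 20:  # arbitrary example threshold
        signals.append("spam posting pattern")
    return signals
```

No single signal proves a post is disinformation; the point, as with the manual checklist, is that several signals together should make you slow down before sharing.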
While my hope is for peace, we all must bear witness to these atrocities. In times of war, truth needs an advocate.