U.S. Military Sends Scouting Party Into the Twitterverse

The first warriors fought on the ground. Then, someone hollowed out a log and naval warfare began. Aircraft came next, followed by space—and now, cyberspace. So it should come as no surprise that the exploding corner of cyberspace—social media—is the next battleground.

The fog of war now includes rolling clouds of Tweets, Facebook posts and Instagram photos that the Pentagon wants to filter, track and exploit. Enveloping the globe, from friends and foes alike, the torrent of data can serve as an early-warning system of trouble brewing—or a leading indicator of imminent action by a potential troublemaker.

That’s why the Defense Advanced Research Projects Agency (DARPA) has spent three years and $35 million plumbing pixels as part of its Social Media in Strategic Communication (SMISC) program. Makes sense that DARPA’s in charge: the agency basically invented the Internet. “Events of strategic as well as tactical importance to our Armed Forces are increasingly taking place in social media space,” DARPA says. “We must, therefore, be aware of these events as they are happening and be in a position to defend ourselves within that space against adverse outcomes.”

Britain’s Guardian newspaper suggested Tuesday that the program might be connected to papers leaked by NSA whistleblower Edward Snowden showing “that US and British intelligence agencies have been deeply engaged in planning ways to covertly use social media for purposes of propaganda and deception.”

But Peter W. Singer, a strategist with the independent New America Foundation, sees it more as Defense Department due diligence. It appears to be “a fairly transparent effort, all done in the open, following academic research standards, aiming to understand critical changes in the social, and therefore, emerging battlefield, environment,” Singer says of DARPA’s efforts. “I am not deeply troubled by this—indeed, I would be troubled if we weren’t doing this kind of research to better understand the changing world around us.”

DARPA says researchers have to take steps to ensure that “no personally identifiable information for U.S. participants was collected, stored or created in contravention to federal privacy laws, regulations or DoD policies.” It issued a statement Wednesday declaring it was not involved in the recent Cornell University study of Facebook users, and that the work it has funded “has focused on public Twitter streams visible and accessible to everybody.”

The program’s aims, according to DARPA:

  • Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
  • Recognize persuasion campaign structures and influence operations across social media sites and communities.
  • Identify participants and intent, and measure effects of persuasion campaigns.
  • Counter messaging of detected adversary influence operations.
The goal is to win without firing a shot. The agency cited, without elaboration, an incident that it said occurred solely on social media as an example of what it wants to do:

    Rumors about the location of a certain individual began to spread in social media space and calls for storming the rumored location reached a fever pitch. By chance, responsible authorities were monitoring the social media, detected the crisis building, sent out effective messaging to dispel the rumors and averted a physical attack on the rumored location. This was one of the first incidents where a crisis was (1) formed (2) observed and understood in a timely fashion and (3) diffused by timely action, entirely within the social media space.
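
In practice, that kind of early warning can start with something as simple as watching for a sudden spike in mentions of a keyword in a public stream. The Python sketch below is purely illustrative (the feed, the keyword and the threshold are invented, not anything DARPA has published), but it shows the basic shape of the idea: bucket timestamped posts into hourly windows and flag any window whose mention count jumps well above the recent baseline.

    from collections import Counter
    from datetime import datetime

    # Hypothetical feed of (timestamp, text) pairs standing in for a public Twitter stream.
    FEED = [
        ("2014-07-08 09:12", "quiet morning downtown"),
        ("2014-07-08 10:05", "rumor: he is hiding at the old depot"),
        ("2014-07-08 11:41", "storm the depot tonight"),
        ("2014-07-08 11:44", "everyone meet at the depot"),
        ("2014-07-08 11:50", "the depot rumor is spreading fast"),
    ]

    def hourly_mentions(feed, keyword):
        """Count how often a keyword appears in each hourly window."""
        counts = Counter()
        for stamp, text in feed:
            if keyword in text.lower():
                hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").strftime("%Y-%m-%d %H:00")
                counts[hour] += 1
        return counts

    def flag_spikes(counts, factor=3.0):
        """Flag windows whose count is at least `factor` times the average of the other windows."""
        flagged = []
        for hour, n in counts.items():
            others = [v for h, v in counts.items() if h != hour]
            baseline = sum(others) / len(others) if others else 0.0
            if n >= max(baseline, 1.0) * factor:
                flagged.append((hour, n))
        return flagged

    counts = hourly_mentions(FEED, "depot")
    print("Possible crisis windows:", flag_spikes(counts))

A real system would work at a vastly larger scale and with far more sophisticated models, but the loop DARPA describes, observe, detect and respond in time, begins with exactly this kind of counting.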

DARPA’s lengthy research roster (at least those papers that are publicly available; there’s no link to IBM’s Early Warning Signals of System Change from Expert Communication Networks, for example) doesn’t detail anything about waging war. It’s all about tapping into those who use social media, figuring out who their leaders are, and perhaps swaying their thinking. Academics and computer scientists, working for major universities and outfits like SentiMetrix (which says its “sentiment engine has been proven to work in predicting election outcomes, conflicts, and stock price fluctuations”), have written more than 100 papers on a wide range of topics:

Cues to Deception in Social Media Communications

    Well-crafted deceptive messaging is difficult to detect, a difficulty compounded by the fact that people are generally naïve believers of information that they receive. Through studying modern forms of communication, as that found in social media, we can, though, begin to develop an understanding of how users’ expectations lead them to detect deception and how deception strategies are exhibited through linguistic cues.

The Language that Gets People to Give: Phrases that Predict Success on Kickstarter

    Crowdfunding sites like Kickstarter—where entrepreneurs and artists look to the internet for funding—have quickly risen to prominence. However, we know very little about the factors driving the “crowd” to take projects to their funding goal. In this paper we explore the factors which lead to successfully funding a crowdfunding project. We study a corpus of 45K crowdfunded projects, analyzing 9M phrases and 59 other variables commonly present on crowdfunding sites. The language used in the project has surprising predictive power—accounting for 58.56% of the variance around successful funding.
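
What a “predictive” phrase means in practice can be sketched in a few lines of Python. The toy projects and phrases below are invented, and a simple difference in funding rates stands in for the paper’s far more elaborate model over 45K projects and 9M phrases:

    # Invented toy corpus of (project blurb, funded?) pairs; the study itself
    # analyzed roughly 45K Kickstarter projects and 9M candidate phrases.
    PROJECTS = [
        ("backers also receive two signed copies", True),
        ("you will also receive a thank-you card", True),
        ("also receive early access to the game", True),
        ("even a dollar helps us keep going", False),
        ("we need your help to reach the goal", False),
        ("please give even a dollar if you can", False),
    ]

    def funding_lift(projects, phrase):
        """Difference in funding rate between projects that contain the phrase and those that do not."""
        def rate(group):
            return sum(group) / len(group) if group else 0.0
        with_phrase = [funded for text, funded in projects if phrase in text]
        without_phrase = [funded for text, funded in projects if phrase not in text]
        return rate(with_phrase) - rate(without_phrase)

    for phrase in ("also receive", "even a dollar"):
        print(f"{phrase!r}: lift = {funding_lift(PROJECTS, phrase):+.2f}")

A phrase with a large positive or negative lift is, in this crude stand-in, “predictive” of a campaign’s fate.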

Understanding Individuals’ Personal Values from Social Media Word Use

    The theory of values posits that each person has a set of values, or desirable and trans-situational goals, that motivate their actions. The Basic Human Values, a motivational construct that captures people’s values, have been shown to influence a wide range of human behaviors. In this work, we analyze people’s values and their word use on Reddit, an online social news sharing community. Through conducting surveys and analyzing text contributions of 799 Reddit users, we identify and interpret categories of words that are indicative of user’s value orientations.
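
Analyses like this one generally map words onto categories and count category use per author. The Python sketch below shows only that shape; the two value categories and their word lists are made up for illustration, not the validated instruments the researchers used:

    from collections import Counter
    import re

    # Made-up word lists for two value categories; real studies rely on validated
    # lexicons and surveys rather than hand-picked words like these.
    VALUE_LEXICON = {
        "tradition": {"family", "faith", "custom", "respect"},
        "stimulation": {"adventure", "new", "exciting", "travel"},
    }

    def value_profile(comments):
        """Count value-category word use across one user's comments."""
        profile = Counter()
        for comment in comments:
            for word in re.findall(r"[a-z']+", comment.lower()):
                for category, words in VALUE_LEXICON.items():
                    if word in words:
                        profile[category] += 1
        return profile

    user_comments = [
        "Spent the weekend with family, as is our custom.",
        "Looking for an exciting new place to travel next year.",
    ]
    print(value_profile(user_comments))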

The Digital Evolution of Occupy Wall Street

    We examine the temporal evolution of digital communication activity relating to the American anti-capitalist movement Occupy Wall Street. Using a high-volume sample from the microblogging site Twitter, we investigate changes in Occupy participant engagement, interests, and social connectivity over a fifteen month period…the Occupy movement tended to elicit participation from a set of highly interconnected users with pre-existing interests in domestic politics and foreign social movements. These users, while highly vocal in the months immediately following the birth of the movement, appear to have lost interest in Occupy related communication over the remainder of the study period.

A Computational Approach to Politeness with Application to Social Factors

    We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite.
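
The mechanics behind a finding like that are easy to sketch: score each request for politeness, then see how the scores line up with who later wins an election. The marker lists and messages below are invented stand-ins; the actual study scores requests with a classifier trained on annotated Wikipedia and Stack Exchange data rather than by keyword matching:

    # Invented politeness markers for illustration only.
    POLITE_MARKERS = ("please", "would you", "could you", "thanks", "by the way")
    IMPOLITE_MARKERS = ("you have to", "do it now", "obviously", "fix this")

    def politeness_score(message):
        """Crude score: each polite marker adds a point, each impolite marker subtracts one."""
        text = message.lower()
        plus = sum(text.count(marker) for marker in POLITE_MARKERS)
        minus = sum(text.count(marker) for marker in IMPOLITE_MARKERS)
        return plus - minus

    requests = [
        "Could you take a look at this edit when you get a chance? Thanks!",
        "Fix this section now, it is obviously wrong.",
    ]
    for request in requests:
        print(politeness_score(request), "->", request)

Tracking such scores per editor over time is what lets researchers observe the rise, and later the drop, in politeness that the abstract describes.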

If it seems difficult to discern a pattern here, that’s because the agency engages in basic research. It only builds the tools that others will use to build the next war (or disinformation) machine. There’s no telling which of these reports—if any—contains a glimmer of military utility. The only way to find out is to continue such research until it yields a breakthrough, or until the Pentagon goes broke.
