Boston researchers have spent years helping government agencies plan and simulate operations to troll potential terrorists. Despite controversial practices and lackluster results, the covert operations continue.
Since the 2016 election, social media meddling and influence peddling have become regular subjects of media coverage, especially amid the ongoing probes into President Trump’s foreign dealings.
Scoops typically break internationally and have widespread impact. So it may surprise Bostonians that many key developments in America’s online psychological warfare toolkit—internet-era equivalents of the “black ops” campaigns waged during World War II to disorient enemy troops with deceptive propaganda and other unconventional tactics—have been cultivated in their own backyard, much of that work within the past decade.
Many of these programs are dreamed up by academics, on college campuses. If you look in local headlines, though, there is one well-known example in which the sort of covert concepts in question manifested in real life, and with fatal consequences.
Talking on his cell phone from the parking lot of a CVS in Roslindale in June 2015, Usaamah Rahim, a 26-year-old Boston resident, had just informed his brother that he wouldn’t ever see him again.
Moments later, two men approached and ordered him to put his hands in the air.
“Do I know you?”
According to authorities, before cops on the scene could respond to the agitated Rahim’s question, their suspect unsheathed a foot-long, Rambo-style knife he had recently purchased on Amazon, prompting two officers to draw guns and tell Rahim to drop his weapon.
“You drop yours!”
Rahim replied before police fired three shots, killing him.
Prior to this confrontation, authorities say, Rahim had plotted to murder police officers in a suicide-by-cop scenario. They would know, since they were watching him. Rahim’s shooters, an FBI agent and a BPD officer in plainclothes, were members of the department’s Joint Terrorism Task Force, or JTTF. Though he may not have recognized them, Rahim may have had a rough idea about their interest. Two years earlier, he complained to friends on Facebook about the surveillance:
Damn FBI calling my phone! … He wanted to meet up with me and ‘Talk.’ HA! I said about WHAT? He said ‘Sir, we have some allegations regarding you …’ I said ‘REALLY?’ What ALLEGATIONS? He said ‘Well sir, that’s what I wanted to meet up with you about. I came by your house a few times, but kept missing you.’
“He was someone we were watching for quite some time,” then-Boston Police Department Commissioner William Evans told reporters following Rahim’s death.
On the day he died, Rahim was reportedly hoping to kill “boys in blue” in the name of the Islamic State extremist group (aka ISIS or ISIL).
Just a few years earlier, though, Rahim and his nephew David Wright—who was himself recently convicted of crimes including plotting terrorist killings connected to the events that led to his uncle’s death—wouldn’t have raised red flags.
After graduating from Brookline High School in the mid-aughts, Rahim was preaching to friends about Scandinavian death metal—not jihad. Wright was collecting Pokémon cards. Yet in time Rahim became preoccupied with extremist internet propaganda and began chatting with like-minded Islamists in the online community Paltalk. As Massachusetts Congressman William Keating, who sits on the House Homeland Security Committee, put it, “There is a ‘terrorism gone viral’ side” to the Rahim case. In another description, an FBI special agent and member of the JTTF cited the Rahim case as an example of how “in today’s day and age, you could be radicalized by sitting at your computer.”
Though not everyone agrees on the definition of “lone wolf,” some criminologists say America has seen an increase in such solo terrorist attacks in recent years. As the popular logic goes, the rise of the ISIS “caliphate” in Iraq and Syria poured gas on the trend.
Yet there are other potential factors in play—say, for example, government agents trolling people who appear to sympathize with violent radical Islamist teachings—that are actually part of larger efforts executed in the name of thwarting terrorism.
The idea that US government agents and federal contractors would collude to create a fake internet sounds like a crazy conspiracy theory. That they would then use cyberspace simulations to test methods for flooding the actual internet with fake social media accounts sounds even nuttier. But around the same time foreign meddling in US elections became a regular news topic in 2016, two area companies were involved in such a simulation, with the goal of pacifying potential jihadists as part of an anti-terrorism effort.
Further details of such ventures have since emerged. And while civil liberties advocates question the strategies behind them, these companies continue working closely with the US military.
The local ties have deep roots, too. Follow them back a decade, and you’ll arrive at a paper, by two Harvard professors, that features strikingly similar language to descriptions of the aforementioned 2016 simulation. Its title: “Conspiracy Theories.”
In April 2016, two Boston-area companies, Charles River Analytics and National Security Innovations, in coordination with the Department of Homeland Security, US Army Special Operations Command, and several other organizations, participated in an exercise simulating online “military information support operations” (MISO)—also known as psychological operations or psychological warfare—against ISIS, its supporters, and would-be supporters.
Both companies have scored national security contracts for over a decade and largely brand themselves around their semiclandestine work. Cambridge-based CRA’s website, for example, lists products with code names like Minotaur, Crisis, and Spider. One of CRA’s products, Persona, is described as an “easy-to-use graphical tool that allows trainers to rapidly develop and execute sophisticated, interactive behaviors for virtual characters.” (Despite the company’s involvement with the 2016 simulation, in response to an inquiry for this article, a CRA executive noted that their Persona tool was not used in creating the personas for that program.)
At NSI, meanwhile, according to his company bio, founder Robert Popp spent four years at the Defense Advanced Research Projects Agency (DARPA), including a stint as deputy director of its early post-9/11 era Information Awareness Office (IAO). The IAO was defunded in 2003, at least in part due to the public’s perception of its controversial logo, which featured an “all-seeing eye” atop a pyramid watching the globe, punctuated by a Latin phrase translating to “knowledge is power.” Before that, the IAO briefly housed a “Total Information Awareness” mass surveillance program, which has since been revealed to have been reincarnated at the National Security Agency (NSA).
According to publicly documented results, the 2016 test run was carried out “on a synchronous, virtual, and distributed platform called ICONSnet, designed and managed by the University of Maryland.” And it complemented work done in the same realm in late 2015, when researchers fashioned fake “personas” to troll a simulated social media environment for (simulated) potential ISIS sympathizers and to talk them down from supporting Islamist extremism.
The participating parties subsequently published an analysis, “Counter Da’esh Influence Operations: Cognitive Space Narrative Simulation Highlights.” In it, a principal software engineer at Charles River Analytics notes: “While this white paper represents a collaboration between academic and operational communities, most academic research in narrative science is not as easily transferred to MISO operators and made relevant to their needs.”
Authors of the simulation paper found inconclusive evidence that lessons could be carried over to the World Wide Web. Still, government stakeholders apparently saw promise in the trials—and by extension, in a previous study that seemingly influenced the 2016 white paper in language and substance.
In 2008, Harvard Law School professors Cass Sunstein and Adrian Vermeule argued for online “cognitive infiltration”—a term they coined—of “extremist groups.” Specifically, they targeted “conspiracy theorists,” particularly those with a “crippled epistemology.” (ISIS, which did not exist in its contemporary form or influence a significant number of American terrorists at the time, was not explicitly acknowledged.)
“Government agents (and their allies) might enter chat rooms, online social networks, or even real-space groups and attempt to undermine percolating conspiracy theories by raising doubts about their factual premises, causal logic or implications for political action,” Sunstein and Vermeule wrote a decade ago. (A reworked version of the article appeared as a chapter in Sunstein’s 2014 book, Conspiracy Theories and Other Dangerous Ideas. Vermeule is not credited in the book version; otherwise, the most noticeable change is terminological, substituting “foreign chat rooms” for the simpler “chat rooms.”)
While one track of this scheme would have government agents debunking conspiracy theories “openly,” another calls for their participation in the “cognitive infiltration” effort, “anonymously or even with false identities.”
Pulitzer Prize-winning journalist Glenn Greenwald calls the Sunstein and Vermeule scheme “spine-chilling,” noting that the government defines such ops as “covert propaganda.” Unlike truthful, or “white,” propaganda, “black” propaganda, to use terminology developed during World War II, is secretive and designed to deceive.
Sunstein categorically dismisses conspiracy theories as too silly to take seriously. At the same time, he seems to believe some information is dangerous enough that it needs to be fought with disinformation—all while acknowledging theories that have turned out to be factual, and embarrassing to the government.
“In the 1950s and 1960s, the CIA did, in fact, administer drugs such as LSD under Project MKULTRA in an effort to investigate the possibility of ‘mind control,’” Sunstein notes in his book. “Operation Northwoods, a rumored plan by the US Department of Defense to simulate acts of terrorism and to blame them on Cuba, really was proposed by high-level officials (though the plan never went into effect).”
Sunstein shrugs off such anomalies. In his original 2008 paper, he writes that his focus is “on false conspiracy theories, not true ones.”
In both versions of his “conspiracy theories” discussion, Sunstein writes that “cognitive infiltration” means something other than “1960s-style infiltration with a view to surveillance and collecting information, possibly for use in future prosecutions.” Nonetheless, critics in the world of academia that Sunstein inhabits have compared his plan to FBI efforts to undermine protest movements of the 1960s, known as COINTELPRO.
Those knocks notwithstanding, Sunstein has found a receptive audience. In 2009, he was appointed head of the White House’s Office of Information and Regulatory Affairs under President Barack Obama, a position he held through 2012. Until recently, Sunstein was also a member of the “Defense Innovation Advisory Board,” an exclusive group boasting several tech bigs with an interest in Pentagon policy.
In 2011, US Army intelligence was studying social media “swarming,” mostly in the context of the Arab Spring revolutionary movement that eventually toppled governments in Egypt, Tunisia, and Libya. Social media has “provided a means for individuals and small groups to more effectively synchronize actions, even in the absence of an authoritative leader,” one lieutenant colonel wrote in a report. He also suggested “making every soldier a messenger,” all geared toward helping the US military “[dominate] the narrative.”
The same year, the US State Department created a comparable “Center for Strategic Counterterrorism Communications” (CSCC), and reports of a US Central Command program called Operation Earnest Voice emerged. Focused on targets in Iraq, Afghanistan, Pakistan, and elsewhere in the Middle East, Earnest Voice reportedly aimed to spread pro-American messages using “sock puppet” social media accounts. In the wake of recent revelations about Russian trolls, it’s worth noting that the Guardian, among others, observed back then that such bold activities “could also encourage other governments, private companies and non-government organisations to do the same.”
As Greenwald, a constitutional attorney as well as a reporter, along with other critics of sock-puppetry have noted, the sort of covert propaganda Sunstein has proposed could be “illegal under long-standing statutes prohibiting government ‘propaganda’ within the U.S., aimed at American citizens.” Such restrictions were loosened in 2012, though, when Congress repealed a major domestic propaganda ban.
By 2013, before ISIS became known to most Americans, the State Department had launched its first English-language online propaganda campaign targeting the group, crossing a boundary it previously observed “in part to avoid running afoul of rules barring the State Department from attempts to influence American citizens,” according to the Washington Post. “But officials also cited another concern: venturing into English would expose the center’s efforts to more scrutiny in Washington.”
The CSCC’s efforts have received little scrutiny—especially relative to the media coverage of propaganda spread by other countries. One reason for the lack of attention may be that, since roughly late 2015, US efforts along these lines have been “in disarray,” according to Will McCants, an ISIS expert at the Brookings Institution and former adviser to the State Department. Furthermore, the CSCC has been replaced by a so-called Global Engagement Center (GEC) where morale is reportedly low, and analysts have been quitting the “anti-propaganda” team.
Despite the rocky track record of ISIS fishing, both in simulated training and real implementation, these operations have been bolstered by the Trump administration. After signaling that he would reject further congressionally approved funding for the GEC, former Secretary of State Rex Tillerson later approved about $60 million. Less than a year after the Pentagon’s late 2016 announcement that it sought a social media mimicry tool to “emulate the look, feel and the key features of Facebook, Twitter, Snapchat, Instagram and Tumblr,” a new internet simulator that also includes “real and fake news” prompts was being “incorporated into training and wargames by military units all the way up to combatant commands,” according to national security news site Defense One.
For his part, Sunstein continues pushing theories from his conspiracy paper and still teaches at Harvard. His ideas, at first co-opted by the Pentagon and State Department, have trickled down to city law enforcement units. Kade Crockford, director of the Technology for Liberty Project at the American Civil Liberties Union of Massachusetts, notes that in the past, the BPD’s Boston Regional Intelligence Center (BRIC) has focused on Black Lives Matter protesters, among others, for social media monitoring. In one case, reporting officers labeled a variety of harmless Arabic expressions as “Islamic Extremist Terminology,” a designation that could get somebody marked for surveillance.
“We are not privy to all the details about how the Boston Police Department used that system, or frankly uses other social media monitoring systems today,” Crockford added. According to Crockford, a public backlash spurred a victory for privacy rights in 2017, when the BPD ditched plans for a sophisticated new $1.4 million suite of surveillance and investigative tools that would have included the capability to manage multiple social media accounts at once for undercover operations.
These would have been similar to state-sponsored troll operations that have targeted potential terrorist recruits, only for use in day-to-day police work.
As state and city cops wage war on petty criminals online, the nation’s top law enforcers are hunting bigger game. In one subversive incarnation, according to reports that were made public in 2016, the Federal Bureau of Investigation is allowed, in certain circumstances, to impersonate journalists. Agents also reportedly “swarm” the web in ways “so pervasive that the bureau sometimes finds itself investigating its own people.”
“You would be in a forum, and you’re like, ‘This person’s way out there,’” an unnamed FBI official told the Intercept in 2017, “and we’ve gone and opened up a case, and sometimes that was a local police department, or a friendly foreign service. There are still instances of that, and deconfliction is still necessary.”
“You should not have to wonder when you’re on Twitter having a political debate whether the person you’re arguing with is a sock puppet for the FBI,” Crockford said. “Unfortunately we cannot be sure that that’s not happening right now.”
“In an environment devoid of trust, the population teams [simulation roleplayers] often rejected [US Government] messaging as lacking a credible voice,” an NSI analyst wrote in a document describing the 2016 drill. “A surprising number of population segments were open to USG’s counter-Da’esh messaging in principle, but wanted to engage in a deeper conversation about how to effect change.”
Early in 2017, an Associated Press investigation exposed the activities of a highly dysfunctional Defense Department online psychological warfare program aimed at potential ISIS recruits—remarkably similar to the kind described in the 2016 report that involved Boston-area companies. Like the simulated operation described in that report, the “critical national security program” in the AP’s description, called WebOps, deploys “fictitious identities” on social media.
According to the investigation, debacles included reported drinking on the job, the falsifying of reports to show illusory progress, and cronyism in the awarding of a half-billion-dollar contract. “[WebOps] is so beset with incompetence, cronyism and flawed data,” the AP reported, “that multiple people with direct knowledge of the program say it’s having little impact.”
Alex Marthews, national chairman of Massachusetts-based privacy advocacy group Restore the Fourth, describes WebOps as an “excellent example” of “security grifting,” in which companies move to quickly profit from a new, little-understood, and often largely imagined threat.
“Everybody, except the citizen and taxpayer, wins,” Marthews explained in an email. “The security grifting company gets seven-figure contracts. The government agency gets to say that it’s doing something to thwart [currently fashionable counterterrorism threat], and their budget goes up in the next round.”
In the wake of such revelations and failures, and with headlines about Russian social media shenanigans still trending daily, it seems that some of the minds behind “Counter Da’esh Influence Operations” and similar studies are rethinking their positions. In a document published in August 2017, this one co-edited by the same NSI analyst who wrote the earlier report’s executive summary, psychological warfare planners strike a decidedly different, almost remorseful tone, emphasizing the need to shift from attempting to “control” adversaries toward simply “influencing” them.
“We must establish an evaluation methodology before relying on any new tool too [sic] completely that, in trying to create new strengths, we do not instead create false-insights that leave us exposed to massive new risks,” the document notes.
That report was published more than half a year after the AP exposé on WebOps. The August 2017 paper mentions that program by name, but significantly downplays its apparent failures, noting only that US Central Command needs help to “make better plans and preparations.”
As for the blueprint for those plans, NSI did not respond to a request for comment. Sunstein didn’t either, though the Harvard prof recently gave an interview to the New Yorker. For a piece titled “How a Liberal Scholar of Conspiracy Theories Became the Subject of a Right-Wing Conspiracy Theory,” he told the interviewer:
“I think it’s my job to put ideas out there.
“If that comes with the risk that someone is gonna do something horrible with it, well, that’s life.”
This article was produced in collaboration with the Boston Institute for Nonprofit Journalism. To see more reporting like this please consider making a contribution at givetobinj.org.