Tech Giants Team Up to Take Terrorists, Extremist Propaganda Offline

Social media has revolutionized how people connect and interact. But the darker side of this revolution is that it has made it easier for terrorist ideology to take root around the world, a dilemma governments and tech giants are struggling to grapple with as they balance the privacy rights of billions with the need to protect lives.

Over the last year, companies such as Facebook, Apple and Twitter have gone from worshipped innovation wunderkinds to targets of anger, suspicion and fear as their technology continues to infiltrate and influence our daily lives. Facebook, for example, is scrambling to fend off accusations that its site was used by the alt-right, Russia and other groups to disseminate fake news in a bid to tilt the 2016 presidential election. Allegations of sexual harassment and misogyny have upended the corporate culture at the ride-sharing app Uber and revealed a toxic underbelly in Silicon Valley. And policymakers in Washington are taking a hard look at whether companies such as Amazon represent a new generation of rapacious monopolies that need to be broken up.

Before the current wave of trouble for tech, however, the world woke up to the dangers of social media when the Islamic State, also known as ISIS, burst onto the scene in 2014, seizing large tracts of territory in Iraq and Syria and using the internet to recruit thousands of jihadists to its murderous cause.

Over the last year, the terrorist group has steadily lost land, fighters and money in the face of a punishing U.S.-led military offensive (also see “Obama’s Strategy to Defeat Terrorist Group Lives on Under Trump” in the September 2017 issue). But governments and corporations are still playing catch-up in the digital battlefield, where the Islamic State and other extremist groups thrive in the vast, borderless, unregulated realm of the world wide web.

In an effort to remove terrorist and extremist content from their platforms, Facebook, YouTube, Twitter and Microsoft are working to create a forum where they can discuss tactics and challenges, and share information.

The Global Internet Forum to Counter Terrorism, announced in late June, “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups, academics, governments and supra-national bodies” such as the European Union and the U.N., according to an announcement from the companies.

Although the exact details of the forum’s structure and start date are still being ironed out, the plan is for the companies to work with counterterrorism experts, the U.N. Security Council Counter-Terrorism Executive Directorate, the ICT4Peace Initiative, smaller companies and organizations such as the Anti-Defamation League and the Center for Strategic and International Studies (CSIS) to come up with and share best practices on challenges such as balancing free speech with hate speech.

“I think that one of the main things that they’re going to try to do through that engagement is number one, identify the best practices and the knowledge that we have outside of the companies in terms of the best ways to counter terrorist use of the internet and social media platforms,” said Shannon Green, director and senior fellow of the CSIS Human Rights Initiative. “Two, get feedback on some of the ideas, policies and prescriptions that they have under development. And three, especially for me and my space, I think one of the key things is going to be to help the companies … balance their desire to make sure terrorists can’t exploit their platforms with the need to make sure that those platforms remain open and a place where users can express themselves freely.”

It’s a daunting undertaking. Facebook alone has over 2 billion monthly users. Policing such a widely used platform is akin to a game of whack-a-mole. As soon as one posting or user is taken down, another pops up. And once a video goes viral, there is no taking back the outrage, regardless of whether the images have been thoroughly debunked.

Overt government efforts to counter terrorist propaganda, such as a widely panned State Department campaign to undermine the Islamic State’s narrative online, have largely fallen flat. Mistrust of the U.S. and governments in general, especially in the Middle East, runs deep, and oftentimes bureaucracies simply aren’t as agile or effective as terrorists in this slick new form of messaging warfare.

So Facebook and other tech platforms have come under intense pressure to crack down on terrorist propaganda themselves. Facebook has turned to algorithms, artificial intelligence, an army of human moderators and other users to flag suspicious content. But this vetting still represents a drop in the ocean.
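By way of illustration, here is a minimal, hypothetical sketch of how such a layered pipeline could route content, with an automated score and user reports feeding into the decision and borderline cases deferred to human moderators. Facebook’s actual models, signals and thresholds are proprietary; every name and number below is a placeholder.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    user_reports: int  # how many users have flagged the post

def model_score(text: str) -> float:
    """Stand-in for a trained classifier's confidence that a post is
    extremist (0.0 = benign, 1.0 = certain). The phrases below are toy
    placeholders, not real model features."""
    suspicious_phrases = ("recruit for", "join our fight")
    hits = sum(phrase in text.lower() for phrase in suspicious_phrases)
    return min(1.0, 0.5 * hits)

def triage(post: Post) -> str:
    """Auto-remove only at very high confidence; queue borderline or
    heavily reported posts for human moderators, who make the final call."""
    score = model_score(post.text)
    if score >= 0.95:
        return "remove"
    if score >= 0.5 or post.user_reports >= 3:
        return "human_review"
    return "keep"

print(triage(Post("vacation photos from the coast", user_reports=0)))  # keep
```

The point of the sketch is the division of labor: automation narrows the haystack, while the subjective judgment calls described below stay with people.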

Moderators, for instance, face the grueling task of sifting through reams of disturbing content and instantaneously judging what to remove. It is an emotionally taxing and sometimes-dangerous job. It’s also a subjective balancing act, with Facebook and other sites occasionally accused of going too far and censoring legitimate content.

Officials from Facebook and Google admit there are limitations — both in manpower and technology — to spotting terrorist proselytizing. They point out that AI technology has not yet evolved to the point where it can automatically remove suspicious content.

“We are making significant progress, but removing all of this content within a few hours — or indeed stopping it from appearing on the internet in the first place — poses an enormous technological and scientific challenge,” Kent Walker, general counsel for Google, said in a statement on behalf of the Global Internet Forum to Counter Terrorism ahead of a meeting on the sidelines of the U.N. General Assembly last month.

“There is no silver bullet when it comes to finding and removing this content, but we’re getting much better,” Walker said.

One way the Global Internet Forum hopes to address this problem is by creating a database of unique digital fingerprints known as “hashes” for videos and images that promote terrorism or violence. So when one firm flags and removes a piece of content, the other companies could use the hash to identify and take down the same content on their platforms.
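As a rough sketch of how such hash-sharing could work in principle: one company computes a fingerprint of a removed file and publishes it to the shared database, and the others compare uploads against it. The forum’s actual fingerprinting scheme is not public; production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding and cropping, whereas this toy example uses an exact SHA-256 digest.

```python
import hashlib

# Hashes contributed by all member companies (in practice, a shared service).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ("hash") of a video or image file."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """Called when one company removes a piece of terrorist content:
    the file's hash is added to the database shared across platforms."""
    shared_hash_db.add(fingerprint(content))

def matches_known_content(content: bytes) -> bool:
    """Called by another company on upload: an exact match means the
    identical file was already flagged elsewhere and can be taken down."""
    return fingerprint(content) in shared_hash_db

# Company A flags a file; company B later detects the identical upload.
video = b"<bytes of a removed propaganda video>"
flag_content(video)
print(matches_known_content(video))  # True
```

An exact hash catches only byte-identical copies, which is why real deployments rely on perceptual hashing instead; the sharing mechanism, however, works the same way.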

Still, terrorists are adept at adapting, circumventing digital barriers and exploiting the broad reach of social media to lure impressionable recruits and inspire lone-wolf attacks from afar and on the cheap. 

Green admits that it’s tricky to determine exactly how terrorists and extremists are using the internet.

“We don’t have that good of a sense, to be honest, right now,” she said. “When ISIS was gaining a lot of momentum and they were using social media in a way that other terrorist organizations had never dreamt of, there was a lot of analysis of what their messages were, how they were using social media platforms to get out those messages, who was doing the transmission of messages.”

Now that tech companies have curtailed that usage, the groups have migrated away from transparent platforms and turned to the dark web, encrypted platforms such as Telegram and coded words, making it harder to get a full picture, she explained.

That has led some security experts to warn that it may be more dangerous to kick terrorists off the internet, because law enforcement authorities are often tipped off to plots by social media postings and exchanges. The deeper underground terrorists go, the harder it is to find them, they argue.

As Elizabeth Bodine-Baron of Rand Corp. argued in a commentary late last year, while a social media site such as Twitter serves up Islamic State propaganda, “it also gives the broader public a unique window into the social networks of extremist supporters and allows researchers to study the impact of extremist messaging.”

Another challenge for tech companies is distinguishing free speech from hate speech. The companies usually draw the line at speech that aims to radicalize or recruit people, or to incite acts of violence, but those distinctions are hardly clear-cut. For example, “if you’re too blunt about it, you could take down content that is newsworthy analysis of the way that a terrorist group is trying to spread its propaganda,” Green pointed out.

What’s more, the companies are based in the United States but have a global presence, which means that governments all over the world have a tremendous stake in what’s posted on the platforms. Tech companies must walk a fine line here, too, between respecting foreign laws and abetting censorship. Most governments, such as those in Europe, genuinely want to stop terrorism, but “other governments will see that as an opportunity to curtail free speech by other entities that they don’t like — for example, opposition groups or human rights activists or journalists that might be trying to hold the government accountable,” Green said. “The entire premise of their platform is to facilitate communication and they don’t want to be censors.”

In June, Facebook explained how it uses artificial intelligence, human expertise and hackathons to counter terrorists’ use of its platform. “We want Facebook to be a hostile place for terrorists,” the company wrote.

Twitter rules state that violent threats and hateful conduct count as abusive behavior and could result in an account being temporarily locked or permanently suspended. The company says it removed nearly 1 million accounts between August 2015 and June 2017 for promoting terrorism. Google, which owns YouTube, announced new plans in June to curb extremist videos on its platform, adding analysis tools to help identify and remove videos depicting gory acts of violence, threats and hateful content. And last year, Microsoft amended its terms of use to prohibit terrorist content, defining it as “material posted by or in support of organizations included on the Consolidated U.N. Security Council Sanctions List.”

Clearly, fighting terrorist and extremist content isn’t a new battle for these companies, Green said, but the forum is a way they can join forces to tackle a common dilemma.

“There’s been plenty of informal, ad hoc opportunities for the social media companies to talk to each other and talk to government, but I think they’re looking at it as an opportunity to formalize those conversations and to take it to the next level,” she said. “I’m also hoping that it creates a regular channel for communication between people who are researching and practicing in this space, because … it has been a bit closed off, and I think we would gain from getting a better sense from the social media companies about what they’re doing to address the threat of terrorists on their platforms, and hopefully they would gain a lot by having the best and most up-to-date analysis coming from outside of the companies.”


About the Author

Stephanie Kanowitz is a contributing writer for The Washington Diplomat.