Brian Fishman manages Facebook’s global counterterrorism policy. He is the author of The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory (Yale University Press, 2016) and previously served as the director of research at the Combating Terrorism Center at West Point. Fishman maintains affiliations with the CTC, New America, Stanford University’s Center for International Security and Cooperation, and UC Berkeley.

CTC: There’s long been concern that extremist content posted and shared on social media is helping to fuel terrorism. As the social media company with the largest user base in the world, what is Facebook doing to counter terrorism? 

Fishman: The bottom line is that there is no place for terrorism on Facebook—for terrorist actors themselves, terrorist groups, or supporters. This is a long-standing Facebook policy.a Our work countering terrorism now is more vital than ever because of the success ISIS [the Islamic State] has had in distributing their message via social media. But our basic policy framework is very clear: There should be no praise, support, or representation of terrorism. We use a pretty standard academic definition of terrorism that is predicated on behavior. It is not bound by ideology or the specific political intent of a group.

The sheer size and diversity of our user base—we have 2 billion users a month speaking more than 100 languages—does create significant challenges, but it also creates opportunities. We’re striving to make it very easy for our community to report things on Facebook that they think shouldn’t be there.

We currently have more than 4,500 people working in community operations teams around the world reviewing all types of content flagged by users for potential terrorism signals, and we announced several months ago that we are expanding these teams by 3,000.

Every one of those reports gets assessed, regardless of what it was reported for, to see whether there is anything that looks like it might have a nexus with terrorism. If the initial review suggests that there might be a connection, then that report is sent to a team of specialists who will dig deeper to understand if that nexus exists. And if there is support of some kind or someone representing themselves as a terrorist or another indication that they are, then we will remove the content or account from the platform.

CTC: Earlier this year, a U.K. parliamentary report on online hate and extremism asserted “the biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe.”1 For her part, British Prime Minister Theresa May stated after the June London Bridge terrorist attack, “We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet and the big companies that provide internet-based services provide.”2 Is the industry as a whole doing too little to combat terrorism content? 

Fishman: There was once a time where, I think, companies were trying to wrap their heads around what was happening on their platforms. And so there was a learning period. Facebook’s policy on this is really clear. Terrorists are not allowed to be on Facebook. So I don’t think the suggestion that technology companies must be compelled to care is helpful at this stage. From my vantage point, it’s clear technology companies across the industry are treating the problem of terrorist content online seriously. Now we need to work constructively across industry and with external partners to figure out how to do that job better.

CTC: You’re an alumnus of the Combating Terrorism Center who has long studied and written about terrorism. What’s the transition been like to your current role at Facebook? 

Fishman: It’s tremendously gratifying to take my experience at a center of academic expertise and the engagement that I had with cadets and folks in government and translate it to a Facebook environment. I work within a wider product policy team whose job it is to set policy for Facebook broadly, including community standards. We’ve broken out a dedicated team on counterterrorism that I lead and are growing that team with some really talented people.

I think that the biggest point of learning for me is figuring out how to scale an operation to enforce guidelines consistently and effectively. And in my experience, until you’ve had to manage the scale that Facebook operates at, even when somebody gives you some of the numbers, you still have to learn to wrap your head around it and understand what that means in terms of language coverage, cultural knowledge, having the right people to be able to do the right things. That’s something that I think you can’t fully prepare yourself for. You need to get in the trenches and do it.


Brian Fishman

CTC: Given the sheer volume of material constantly being posted on social media by extremist actors, what are some of the strategies you are using to remove such material? 

Fishman: I mentioned reports from the community earlier, but we are increasingly using automated techniques to find this stuff. We’re trying to enable computers to do what they’re good at: look at lots of material very quickly, give us a high-level overview. We’ve also recently started to use artificial intelligence [AI]. But we still think human beings are critical because computers are not very good yet at understanding nuanced context when it comes to terrorism. For example, there are instances in which people are putting up a piece of ISIS propaganda, but they’re condemning ISIS. You’ve seen this in CVE [countering violent extremism] types of context. We want to allow that counter speech. We want to allow people to play off of the horrible things that an ISIS or al-Qa`ida or whoever it is is distributing and criticize that stuff and reveal its inconsistencies and the hypocrisy that is inherent in so much of it. But we don’t want people to be able to share that same image or that same video and use it to try to recruit. Context is everything, so we really need human beings to help us understand and make those decisions.

All this means that the roughly 150 people at Facebook whose primary job is dealing with terrorism are vital to our counterterrorism efforts. Between them, they have previous experience as academic experts, prosecutors, law enforcement agents, and engineers, and they speak nearly 30 languages.

Our strategy is to blend the processing power of computers with the nuanced understanding provided by humans. In certain black-and-white cases, we can automatically block things entirely from reaching Facebook. For gray areas, we’re also beginning to use AI. In these cases, the content hits the platform but is routed immediately for human beings to take a look at and removed if necessary.

Using AI to be able to route that particular piece of content to the right reviewer very quickly is a challenge. I think when people think about what AI means, they tend to think it’s, “Well, we want to invent a magic button that gets rid of terrorist content.” But in reality, when you’re operationalizing something like this, you actually want to use sophisticated tools at a lot of different points in the process to make it as efficient as possible and to improve your speed and your accuracy in making a good decision. We’re trying to improve the processes all the time. Some of the changes we’re making are big, and some of them are small. Some of them, we slap our foreheads and say “why weren’t we doing that last week?” And some of them are really insightful and come from some really brilliant engineers who are focused on these issues. When you’re trying to build out an operation to get at this stuff at scale, it’s not just a question of one algorithm. Instead, it’s a question of how you use AI and the computer to facilitate good decisions at each step of the way.
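To make the division of labor Fishman describes more concrete, the sketch below shows one way such a blended pipeline could be structured: near-certain matches are blocked before they reach the platform, gray-area items are routed to a human reviewer with the right language coverage, and everything else passes. The thresholds, the scoring function, and all names are illustrative assumptions, not Facebook’s actual system.

```python
# Illustrative sketch of a blended review pipeline: clear-cut matches are blocked
# automatically, gray-area content is routed to a human reviewer, the rest passes.
# Thresholds, names, and the scoring function are assumptions, not Facebook's system.

from dataclasses import dataclass

BLOCK_THRESHOLD = 0.98   # near-certain match against known terrorist propaganda
REVIEW_THRESHOLD = 0.60  # uncertain; needs human judgment

@dataclass
class Post:
    post_id: str
    language: str
    text: str

def propaganda_score(post: Post) -> float:
    """Placeholder for a combined hash-match/classifier score in [0, 1]."""
    return 0.0  # a real system would compute this from many signals

def route(post: Post) -> str:
    score = propaganda_score(post)
    if score >= BLOCK_THRESHOLD:
        return "block"                    # never reaches the platform
    if score >= REVIEW_THRESHOLD:
        return f"review:{post.language}"  # queue for a reviewer with the right language skills
    return "allow"
```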

CTC: In what ways are you using artificial intelligence to remove terrorist content? 

Fishman: One of the key ones we’ve had a lot of success with lately is photo and video matching. ISIS and al-Qa`ida, in particular, have very formal processes for developing and releasing propaganda. We do everything we can to understand that flow of propaganda so we can quickly put those images and videos into databases. When people upload photos or videos released by terrorist entities, we can match against those databases. There is always room to improve this matching technology, but it doesn’t have to be an exact match for the computers to find them. In some cases, this prevents propaganda material from ever hitting Facebook. In other cases, it allows us to route the posts that share this material to the right reviewer very, very quickly.
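Fishman notes the match does not have to be exact. One common way to achieve near-duplicate matching, offered here only as an illustration rather than as Facebook’s actual method, is perceptual hashing compared by Hamming distance, as in this minimal Python sketch with made-up hash values.

```python
# Illustrative near-duplicate matching against a database of known propaganda images,
# using perceptual hashes compared by Hamming distance. Hash values and the threshold
# are made up; this is not Facebook's actual matching technology.

KNOWN_HASHES = {0b1011011001011100}  # hypothetical 16-bit perceptual hashes

def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_propaganda(upload_hash: int, max_distance: int = 3) -> bool:
    """True if the upload's hash is within max_distance bits of any known hash."""
    return any(hamming(upload_hash, h) <= max_distance for h in KNOWN_HASHES)

# A re-encoded or slightly cropped copy may differ by a few bits but still match.
print(matches_known_propaganda(0b1011011001011110))  # True (1 bit apart)
```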

There are all sorts of complications to implementing this, but overall the technique is effective. This was evidenced by a recent VOX-Pol study, which found Facebook was not in the top 10 platforms ISIS-supporting Twitter accounts were out-linking to.3 The reason it isn’t in this top 10 is that if you put a piece of formal propaganda on Facebook, maybe there will be a gap in our enforcement for a short time period, but we’ll get to it pretty quickly. Facebook is not a good repository for that kind of material for these guys anymore, and they know it.

There are times when we literally could not be faster. There are certainly times when we are not perfect. We make mistakes. Sometimes we find gaps in process. Sometimes things stick around longer than they should because we’ve had an operational breakdown. We can get faster and achieve better operational consistency at the scale we want. It’s hard, and there are no easy technical fixes. We’re really trying to be frank about the challenges we run into.

It’s much more difficult, for example, to use computers to identify text advocating for terrorism. We’re in the early stages of using AI to develop text-based signals that content could be terrorist propaganda. We do this by analyzing posts we’ve already taken down and putting this information into an algorithm that is learning how to detect such posts. The machine learning algorithm works on a feedback loop, which makes it better over time. We’re decent at finding content that supports terrorism, but not good enough yet where we would trust the computer to make a decision. We trust the computer to reasonably accurately identify things that we want a human being to take a look at, but we don’t yet trust the computer to make a decision in those cases. Not being an engineer, I hesitate to speculate about whether we’ll get to the point where the computer is making decisions, but this stuff is really exciting to do—true machine learning. You’re trying to find a symbiotic relationship between a skill set of human beings and algorithms that can provide you a leg up on solving these problems.
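A minimal sketch of the kind of text-based flagging Fishman outlines follows, assuming a simple bag-of-words classifier trained on previously removed posts. The model choice, threshold, and data are placeholders; the point is that the output routes content to a human rather than making the final decision.

```python
# Minimal sketch of text-based flagging: a classifier trained on posts already removed
# for supporting terrorism suggests candidates, and a human reviewer makes the call.
# Model, features, threshold, and data are illustrative placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed data: previously removed posts (label 1) and benign posts (label 0).
texts = ["example of removed propaganda text", "example of an ordinary benign post"]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(post_text: str, threshold: float = 0.7) -> bool:
    """Flag, don't decide: above the threshold, a human reviewer takes a look."""
    return model.predict_proba([post_text])[0][1] >= threshold

# Feedback loop: reviewer decisions get appended to the training data and the model
# is periodically retrained, which is what makes it better over time.
```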

One other thing we use AI for is to identify clusters of pages, posts, groups, or profiles with terrorist content. We know from terrorism studies that terrorists tend to concentrate in clusters, and it’s no different online. We use algorithms to “fan out” to identify these for possible removal by looking at accounts that are friends with a high number of accounts disabled for terrorism or accounts that share a high number of attributes with a disabled account.
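A rough illustration of such a “fan out” over a friendship graph is sketched below, with hypothetical data and an assumed threshold: starting from accounts already disabled for terrorism, flag accounts whose friends include an unusually high share of disabled accounts so that a human can review them.

```python
# Rough illustration of "fanning out" over a friendship graph: starting from accounts
# already disabled for terrorism, flag accounts with an unusually high share of
# disabled friends for possible review. Graph data and threshold are hypothetical.

friends = {
    "acct_a": {"disabled_1", "disabled_2", "acct_b"},
    "acct_b": {"acct_a", "acct_c"},
}
disabled = {"disabled_1", "disabled_2"}

def candidates_for_review(friends, disabled, min_share=0.5):
    flagged = set()
    for account, friend_set in friends.items():
        if account in disabled or not friend_set:
            continue
        share = len(friend_set & disabled) / len(friend_set)
        if share >= min_share:
            flagged.add(account)  # acct_a: 2 of 3 friends were disabled for terrorism
    return flagged

print(candidates_for_review(friends, disabled))  # {'acct_a'}
```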

CTC: In June 2016, an Islamic State-inspired terrorist broadcast on Facebook Live from the scene of his crime after killing a police officer and his partner in Magnanville, northwest of Paris,4 raising concern that an actual attack might one day be broadcast live on the internet. What kind of mechanisms do you have to stop this?

Fishman: It’s a scenario we certainly worry about. We have extensive procedures in place to make sure live broadcasts do not violate our terms of service, including specialized enforcement and review teams monitoring Facebook Live. Algorithms, again, play a role in identifying concerning video, but we also work to make sure our operations team has appropriate tooling. All this allows us to keep tabs on Facebook Live and content that is going viral. None of it is perfect, so we will continue to work to improve.

CTC: How do you stop terrorists suspended from Facebook from just opening new accounts? 

Fishman: When we identify somebody that has supported terrorism in the past or if we believe they are a terrorist, they are not allowed on Facebook. And if we can verify that it’s the same individual, we will kick them off, even if they created a new, fake account that doesn’t actually post terrorist content. We’ve gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, we’ve been able to greatly reduce the time period that terrorist recidivist accounts are on Facebook. This work is never finished because it’s adversarial. We’re constantly identifying new ways that terrorist actors try to circumvent our systems—and we update our tactics accordingly.
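One plausible way to implement that kind of recidivism check, sketched here with hypothetical attributes and a made-up similarity threshold, is to compare a new account’s signup signals against those of accounts previously disabled for terrorism.

```python
# Hypothetical sketch of recidivist detection: compare a new account's signup signals
# against accounts previously disabled for terrorism. Attribute names and the
# similarity threshold are assumptions for illustration only.

def attribute_overlap(new_acct: dict, disabled_acct: dict) -> float:
    """Fraction of shared attribute keys whose values match."""
    keys = set(new_acct) & set(disabled_acct)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if new_acct[k] == disabled_acct[k])
    return matches / len(keys)

disabled_accounts = [
    {"device_id": "dev-123", "phone_hash": "ph-9", "name_norm": "abu example"},
]
new_account = {"device_id": "dev-123", "phone_hash": "ph-9", "name_norm": "abu exampl3"}

if any(attribute_overlap(new_account, d) >= 0.6 for d in disabled_accounts):
    print("escalate: likely recidivist account")  # two of three attributes match
```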

CTC: When real-deal threats, which go beyond rhetoric, pop up on Facebook, what kinds of mechanisms are in place to identify those and to alert authorities as quickly as possible?

Fishman: When we see something anywhere around the world that looks like a real-world threat, we make sure that we alert the authorities. That obviously doesn’t happen all that often. It certainly isn’t as common as the kind of propaganda that we see. But when it does happen, we take it to authorities as quickly as possible.

CTC: Let’s talk about the debate over encryption. The Facebook-owned messaging application WhatsApp uses end-to-end encryption. In recent months, there have been calls by politicians to introduce so-called backdoors into apps using such encryption, including after a suicide bombing at a music festival in Bavaria in July 2016 by an extremist who authorities said was communicating via WhatsApp with a suspected Islamic State handler based overseas.5 Aaron Brantly, a cyber policy fellow at the U.S. Army Cyber Institute and non-resident fellow at the Combating Terrorism Center, recently argued in this publication that despite the fact that terrorists were using encrypted messaging apps to plan attacks, introducing backdoors was a “worse than futile exercise” because it would compromise the security of the general public’s communications and do little to stop terrorists using encryption, given that the code behind it is already in the public domain.6 As a counterterrorism specialist now working at a technology company, how do you see this issue? 

Fishman: I think Brantly got a lot right in that article. Here’s the issue: you can’t create a backdoor into WhatsApp without creating a backdoor into every WhatsApp account in the world. And so you’d be creating an extreme vulnerability, and in doing so, you wouldn’t actually limit the ability of terrorists to use encryption. You would be driving them to platforms like Telegram and Kik. And as Brantly pointed out, the technology that is used in encryption is open source, so nefarious actors can create their own encrypted messaging platforms. We saw that a decade ago with Mujahideen Secrets, the software tool created by al-Qa`ida. That was jihadis re-skinning open-source encryption software in an effort to try to create conduits for, from their perspective, secure communications. The bottom line is we think that pushes for a backdoor are likely to undermine secure communication and create significant risks without actually providing a lot of benefit.

CTC: Last month, British Home Secretary Amber Rudd said companies offering encryption apps should give up more metadata about messages being sent by their services.7 What kind of information was she referring to?

Fishman: Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages on, say, WhatsApp, but we do respond quickly to appropriate and legal law enforcement requests. We believe that actually puts authorities in a better position than in a situation where this type of technology runs off to mom-and-pop apps scattered all over the globe.

CTC: When it comes to what can be shared, are you talking about metadata?

Fishman: There is some limited data that’s available, and WhatsApp is working to help law enforcement understand how it responds to their requests, especially in emergency situations.

CTC: Given terrorists can migrate from platform to platform, what is being done at the industry level in terms of cooperation to take down extremist content? 

Fishman: At the industry level, we are working with a range of partners to share hashes—that is to say, unique digital fingerprints—of the most egregious terrorist videos and pictures. In practice, this currently focuses on content related to ISIS and al-Qa`ida.

Initially, when we began this 10 months ago,8 it was Facebook, Microsoft, Twitter, and YouTube, but those partnerships have now been expanded to include Snap and JustPaste.it as well as a range of other companies. This is something that’s moving; it’s working now. There are certainly improvements we can make—both process improvements and improvements to some of the technology that we’re utilizing—but it’s a start. When we send something to a hash-sharing database, it provides other companies the opportunity to take down the content if it violates their own community standards.
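Conceptually, the hash-sharing arrangement works like the sketch below: one participant contributes a fingerprint of content it removed, and another checks new uploads against the shared database and applies its own community standards. The hashing scheme and function names here are assumptions for illustration; the consortium’s actual fingerprinting method is not described in the interview.

```python
# Conceptual sketch of an industry hash-sharing database: one company contributes a
# fingerprint of removed content; another checks uploads against it and applies its
# own community standards. Hashing scheme and API are illustrative assumptions.

import hashlib

shared_db = set()  # stands in for the cross-company hash-sharing database

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """Company A removes a piece of content and shares its fingerprint."""
    shared_db.add(fingerprint(content))

def check_upload(content: bytes) -> bool:
    """Company B checks an upload; a hit triggers review under its own policies."""
    return fingerprint(content) in shared_db

contribute(b"bytes of a removed propaganda video")
print(check_upload(b"bytes of a removed propaganda video"))  # True
```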

In conjunction with that same group of companies, we also launched the Global Internet Forum to Counter Terrorism (GIFCT),9 which serves as a mechanism for information-sharing, technical cooperation like hash-sharing, and shared research initiatives. We launched that in June. This is all new, but we are excited about the direction we’re headed.

CTC: How has Facebook sought to empower voices taking on terrorist content? 

Fishman: As a company, we’re supporting voices around the world who are challenging those who preach hatred and extremism. We support several counter-speech programs.10 The broadest is something called “Peer to Peer,” or P2P Facebook Global Digital Challenge. EdVenture Partners is an organization we support and advise that develops a curriculum that is implemented by universities around the globe. The curriculum is focused on how you build a social media campaign, how you identify and think about hatred and extremism. Using that knowledge, students actually develop their own messaging campaigns. Some of them are very sophisticated; some of them are not. This operates as a big global competition between those groups of students. So far, P2P has launched more than 500 counter-speech campaigns from students in 68 countries, engaged more than 5,500 students, and reached more than 56 million people.11 b I think we’ve reached somewhere in the range of 60 million people [who] have been touched by one of these campaigns in the last couple years. And I think the most important thing about that program is that it actually reaches scale, which is one of the things that is very, very difficult in developing counter-speech work. We’re empowering these local students around the world to identify the kind of extremism and the form that it takes in their communities.

Sometimes this might be pushing back directly on ISIS/al-Qa`ida. Sometimes it’s going to be other types of hate organizations. There are a lot of different versions depending on what students in a local context prioritize. We don’t get into the business of dictating what the students should focus on. We just want to give them the tools to identify what’s going to be relevant in their communities.

CTC: How does Facebook promote their message? 

Fishman: The best and winning campaigns get Facebook ad credits. We don’t actively help them, algorithmically, with promoting their content, but we do give them ad credits that they can use to target their ads, to target folks they want to reach. And those can be very, very effective.

We also have a program called the Online Civil Courage Initiative,12 which operates in the U.K., Germany, and France. It takes the same basic ethos of finding civil society groups on the ground, giving them this kind of training, providing them ad credits, and trying to give them a leg up but not dictating content. We’re trying to support people that understand the local environment, that are more credible messengers, and give them the tools to be more effective messengers. We recognize that when you do these things at scale, some of the campaigns are going to be well-designed, some are not. And we’re perfectly comfortable with that.c

CTC: In terms of measuring progress, how do you do that at Facebook? Do you have a system of metrics? How can you know that you’re succeeding in taking down terrorist content? 

Fishman: That’s a really great question and something we’re grappling with. But talking about the number of takedowns isn’t necessarily meaningful because you don’t know the denominator—the baseline amount of nefarious content there in the first place. So if you remove more content and the number goes up, is it because you’re doing a better job of finding bad content, or is it because there was more extremist content to find? And if that number goes down, is it because there’s less of it overall, or is it because those folks are doing a better job of circumventing the kinds of things that you’re doing?

CTC: How do you see the challenges ahead? 

Fishman: I think dealing with scale will continue to be a challenge. Making sure that we can understand really culturally nuanced activity in a way that is consistent is a constant challenge. And it’s something that requires human beings. We really want to rely, as much as possible, on algorithms and machine learning to do this work. But we’re never going to get away from the necessity of having human beings make the gray-area calls. And when you’re dealing with terrorism, which is an inherently political but violent activity, there are going to be gray areas where you need human judgment in the loop. Anytime there is human judgment, trying to write effective policies and drive consistent application of guidelines is a challenge.     CTC

Substantive Notes
[a] Editor’s note: For more on Facebook’s counterterrorism efforts, see https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/

[b] In early 2016, West Point cadets in Combating Terrorism Center Director LTC Bryan Price’s “Combating Terrorism” course finished second out of 49 universities in the “P2P: Challenging Extremism” competition. They briefed their proposal at the White House and the U.S. State Department. “Recruiting college students to fight extremists online,” PBS Frontline, January 30, 2016; Nike Ching, “US Initiative Enlists International Students for Online Anti-extremism Campaign,” VoA, February 4, 2016.

[c] Editor’s note: More information about Facebook’s counter-speech programs is available at https://counterspeech.fb.com

Citations
[1] “Hate crime: abuse, hate and extremism online, Fourteenth Report of Session 2016–17,” House of Commons Home Affairs Committee, April 2017.

[2] Kate Samuelson, “Read Prime Minister Theresa May’s Full Speech on the London Bridge Attack,” Time, June 4, 2017.

[3] Maura Conway, Moign Khawaja, Suraj Lakhani, Jeremy Reffin, Andrew Robertson, and David Weir, “Disrupting Daesh: Measuring Takedown of Online Terrorist Material and its Impacts,” VOX-Pol Network of Excellence, August 15, 2017, p. 34.

[4] Tim Hume, Lindsay Isaac, and Paul Cruickshank, “French terror attacker threatened Euro 2016 in Facebook video, source says,” CNN, June 14, 2016.

[5] “CSU fordert Zugriff auf WhatsApp-Kommunikation,” Die Zeit, May 27, 2017; “Die CSU will Zugriff auf WhatsApp-Chats,” Wired (Germany), May 30, 2017; Andreas Ulrich, “Germany Attackers Had Contact with Suspected IS Members,” Der Spiegel (English edition), August 5, 2016; Paul Cruickshank and Chandrika Narayan, “Germany: Five ISIS recruiters arrested,” CNN, November 8, 2016.

[6] Aaron Brantly, “Banning Encryption to Stop Terrorists: A Worse than Futile Exercise,” CTC Sentinel 10:7 (2017).

[7] Dave Lee, “Message encryption a problem—Rudd,” BBC News, August 1, 2017.

[8] “Partnering to Help Curb Spread of Online Terrorist Content,” Facebook Newsroom, December 5, 2016. Available at https://newsroom.fb.com/news/2016/12/partnering-to-help-curb-spread-of-online-terrorist-content/

[9] “Facebook, Microsoft, Twitter and YouTube Announce Formation of the Global Internet Forum to Counter Terrorism,” Facebook Newsroom, June 26, 2017. Available at https://newsroom.fb.com/news/2017/06/global-internet-forum-to-counter-terrorism/

[10] See https://counterspeech.fb.com/en/

[11] See https://counterspeech.fb.com/en/initiatives/p2p-facebook-global/

[12] See https://counterspeech.fb.com/en/initiatives/online-civil-courage-initiative-occi/
