Naureen Chowdhury Fink is the Executive Director at the Global Internet Forum to Counter Terrorism (GIFCT). Prior to this, she was the Executive Director at The Soufan Center, and before that, a senior policy adviser on counterterrorism and sanctions at the U.K. Mission to the United Nations, leading related negotiations in the UN Security Council and the General Assembly. She has previously worked at the UN Counter-Terrorism Committee Executive Directorate (CTED), UN Women, the Global Center on Cooperative Security, and the International Peace Institute.
CTC: Some of our readers will be familiar with GIFCT and the work that you do. Others may not. Can you provide an overview of GIFCT and your mission, the work that you do, and some of the partnerships that you have, particularly with GNET, because we know that’s an important one.
Fink: It’s always great to be able to talk about GIFCT. Our mission is to prevent terrorists and violent extremists from exploiting digital platforms. It’s a big mission, and we take it very seriously. GIFCT was started by industry as a voluntary consortium—established by Facebook, Microsoft, YouTube, and Twitter almost a decade ago—and yet, it was designed deliberately to foster multi-stakeholder engagement and to make sure that, as an industry body, it was positioned to facilitate engagement with all the key sectors that help shape this space: governments, policymakers, academics, civil society. How do we deliver on this mission? What is it that we actually do? We do our work through four key tools. First, we have a hub for companies to be able to speak with each other, learn from each other, collaborate, engage. Through that, inter-company information-sharing and knowledge development are key aspects of what we offer our members.
Second, we manage the hash-sharing database that allows us to create—for want of a shorthand—a digital ‘fingerprint’ of terrorist or violent extremist content. Member companies can then use that digital fingerprint to run matches against their own platforms and act on it in accordance with their own policies. Hashing enables us to share information and raise awareness of terrorist or violent extremist content without actually resharing the content itself or any of the private user information associated with it.
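[Editor’s note: As a rough illustration of the matching step Fink describes, the sketch below uses the open-source Python imagehash library to check an uploaded image against a list of previously shared fingerprints. It is a minimal, hypothetical example, not GIFCT’s actual hashing pipeline or hash format; in the shared database, only the fingerprints travel between companies, and each member decides under its own policies how to act on a match.]

```python
# Illustrative sketch only: a toy version of hash-based matching, not GIFCT's
# actual pipeline or hash format. Assumes the Pillow and imagehash packages.
from PIL import Image
import imagehash

# Fingerprints previously shared by other members (no content attached).
known_hashes = {
    imagehash.hex_to_hash("d879f8f8f0f0e0c0"),
}

def matches_known_content(image_path: str, max_distance: int = 6) -> bool:
    """Hash a local image and compare it against the shared fingerprint list.

    A small Hamming distance means the image is a near-duplicate of previously
    flagged content, even if it has been resized or re-encoded.
    """
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= max_distance for known in known_hashes)

if matches_known_content("upload.jpg"):
    # Each member company applies its own policies at this point
    # (human review, removal, escalation, and so on).
    print("Potential match against shared hash list; route to policy review.")
```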
Third, we have an Incident Response Framework that allows us to work with our member companies to respond to online dimensions of offline violence, like the presence of perpetrator-produced content related to a terrorist or violent extremist attack. So, think of the Christchurch attack where you had perpetrator-produced content (livestreaming of the attack). We want to make sure that something like that, to the extent we can, doesn’t happen again.
A fourth pillar of our work and one of our key tools is of course GNET, the Global Network on Extremism and Technology. That is the research arm of GIFCT. We invest heavily in GNET and the expert networks developed around it because it really helps us make sure that our members and stakeholders have access to cutting-edge global research, perspectives, and analysis about what terrorist threats and counterterrorism solutions look like and where tech intersects with counterterrorism and national security, from a global perspective.
As you may know, we have a very diverse membership. We’ve grown from four to over 35 companies. GNET helps to ensure that our members have access to subject matter expertise. We hope it also helps researchers and practitioners to understand the tech world a bit more and understand where the solutions and challenges come from on the tech side. So GNET, which is housed at King’s College London, is really a key part of our offering to our members.
CTC: You mentioned the growth of your membership at GIFCT. Can you help us understand what that membership universe looks like and the different types of companies that are members of GIFCT? And the types of challenges that you work through with some of your members?
Fink: Sure. We have grown from an initial four founding companies, which were at the time Facebook, Twitter, Microsoft, and YouTube, that established GIFCT as an independent non-profit to support industry members. Today, GIFCT is an organization with over 35 member companies that span from the big to the small and everything in between. We have a number of companies that are working on different types of content, different kinds of user engagement. And so, we range from our founding companies to TikTok to JustPaste.it and Pinterest and Airbnb, for example. It’s really quite a range of companies, and I think that diversity is the key. We know that a lot of times, as terrorist and violent extremist groups have adapted to some of the measures that the big companies took early on, they have migrated, they have adapted to using different kinds of platforms. So, that diversity of size and type and geographies was really important and I think is a key aspect of that membership universe.
Becoming a GIFCT member is not necessarily easy, which is why we take great pride in this expansion. We have a set of membership criteria that companies have to look over and then decide if they’re willing to make that commitment to join us. And we have a Membership Advisory Program now where if a company says, ‘Look, I want to join GIFCT, but I might still need some support in making sure we can be compliant with those criteria,’ we will work with them for as long as it takes and as long as they’d like to work with us to make sure they meet those criteria. And at that point, they can become a member. So, growing in membership is not just a result of us looking at the map and the tech stack and saying, ‘This is what we’d like.’ It is actually a testament to the commitment of industry to join us.
CTC: One of the things that I’ve always been curious about with GIFCT is that it was established by tech companies to be a consortium, to help facilitate knowledge exchange and information sharing exactly in the ways that you described. But how does GIFCT interact with governments? How does GIFCT engage with entities like the E.U.’s Internet Referral Unit or comparable places?
Fink: We have an Operating Board made up of tech companies. But we do also have an Independent Advisory Committee [IAC], which is made up of governments and civil society, and we have a rotating roster of governments that has included the U.S., the U.K., Canada, the E.U., the Netherlands, and Australia—a number of countries have rotated in and out over the last number of years. We look to our IAC for advice, for support, for strategic perspectives on where things are happening and where we need to be. It also offers a forum for dialogue with GIFCT members and with GIFCT’s board.
I would say, however, that we also host a number of multi-stakeholder events and engagements throughout the year that are deliberately designed to bring together governments and practitioners and experts with industry. At least once a year, we develop a strong regional partnership and we deliver a workshop, a training, whatever is appropriate to the region and is most helpful. We worked, for example, last year with the International Institute for Justice and the Rule of Law (the IIJ) in Malta and the government of Nigeria, and we put together a workshop with a focus on West Africa looking at regional threats and trends. [We brought] together practitioners, academics, GNET experts, gaming experts, CT folks, and that really allows us to dig deep into what’s happening in a region. What do we need to know? What do we need to take back to our members? But also, what do they need to understand about industry and CT trends? And we do that at least once or twice a year.
So, government is one of the key sectors we engage with. They’re definitely an active part of many of our multi-stakeholder events. The IAC also includes the European Union and the UN Counter-Terrorism Committee Executive Directorate. So, we have some international organizations there as well.
You had asked about the EU-IRU [Internet Referral Unit], in which we are active participants. We attend most of the meetings. We work with the technical groups on thematic issues, and similarly with the U.N. and the GCTF [Global Counterterrorism Forum] and all those institutions, where we have sought to further both our substantive engagement with them, but also to make sure that what we learn and what we see there is something we can take back to our members and say, ‘This is what you need to know about the threats and trends we’re seeing globally.’
CTC: GIFCT sits at a very strategic and interesting vantage point where you’re helping to convene different stakeholders focused around your mission, but you also get a sense of areas where public-private partnerships are working well and where they aren’t. How do you evaluate the state of public-private partnerships to combat terrorist use of the internet and digital platforms?
Fink: The first metric I would look at is who we actually got to sit around the table. And I think that is a bit of a lesson learned for me. When I look back at the last two decades, I think there’s been too little engagement with the private sector in talking about counterterrorism issues. For me, what has been a marker in the evolution of public-private partnerships is making sure we have the right players at the table. As you and I know, that’s not easy to do. Getting someone to the table can be just the tip of the iceberg after years of effort. But I think that is one of the key metrics. I don’t underestimate how valuable that is. Once they’re at the table, of course you need to have discussions and see where there is scope for collaboration and where there may be divergence, and work together. But I think a huge challenge has been that we haven’t seen enough public-private partnership in the CT space, and that for me was a bit of a surprise. I thought that after 20 years, we’d have had all the sectors covered—certainly, I think, after the Christchurch attack, but certainly in the heyday of ISIS. It was a real shock that it was that hard of a step to climb.
I think the second marker of success [is] finding consensus and agreeing on the classifications and the harms that we need to address. That is, I think, the great underestimated success of all this because we take for granted that everyone believed that there was a harm that needed to be addressed and that there was something different sectors could do together. We know, on the government side, it took many years—sometimes decades depending where you want to start that conversation—for governments to realize that there was a transnational threat that required a multilateral intergovernmental response.
I think similarly with public-private partnerships, the consensus on the harm types has been hard won; as with state actors, it required collaboration among a wide range of industry actors to agree to some common understandings and classifications that allowed us to build collaborative tools and solutions. And I don’t think we talk enough about how important it is to reach common ground before developing solutions, which I think has been a big success.
CTC: Next year is a big year for GIFCT as the organization will mark its 10-year anniversary. When you reflect on what GIFCT has achieved, what its accomplishments have been over that period, what stands out to you? What do you think GIFCT should be the proudest of?
Fink: It’s so exciting to be planning for that anniversary. It is hard won, and what I’m most proud of is the growth and the diversification of GIFCT’s membership. It was a tremendous thing for four big companies to say, ‘Hey, we need to actually work together and break down some of these silos.’ But I think to get to nearly 40 companies—and as I shared, becoming a GIFCT member means making a commitment and making transformations internally to be able to join—that diversification and expansion really represents an important commitment of the sector to a certain set of norms and criteria that bind our efforts together.
Some of the things I’m also really proud of, especially since I joined: we’ve created an in-house Membership Advisory Program, which means we provide a lot more resources to our members and our candidates directly. We have also updated our Incident Response Framework to reflect what is needed in this day and age. The framework was developed several years ago, when the threat looked different and the needs of member companies looked different. We have undergone a long multi-stakeholder process to get to a new Incident Response Framework, which means companies can get more support in responding to terrorist threats as they manifest today. We have developed a huge member resource portal. It’s called Compass, [which allows members access to] the wealth of resources GIFCT puts together for them. So, these are some of the more tactical in-house things we have been able to do to get at that great strategic achievement, which I’m so proud of, which is the membership.

CTC: I’d like to pivot a little bit and talk about your own personal journey. We’ve known each other for a long time, and you have worked in various roles focused on national security and in the counterterrorism world. Can you talk a bit about your personal journey and some of the different roles you’ve had and how that’s prepared you to serve in your current role?
Fink: Looking back on it, it’s longer than I realized it had been, which makes me feel older! To give a little overview, I’ve worked with nonprofits and think tanks. I guess they would be called think-and-do tanks because we did research, we did analysis, we did publishing. But I’ve also delivered projects and programs in places as diverse as West Africa or Southeast Asia or Europe. [That] was really an eye-opener into how these issue sets and these norms develop around the world.
That work was complemented by work that I’ve done at the UN Counter-Terrorism Committee Executive Directorate, UN Women, and interestingly, I had a chance to represent the United Kingdom on counterterrorism and sanctions issues at the U.N., working with the Security Council, with the General Assembly, with governments, and with counterterrorism practitioners. When I look back on this two-decade journey, it comes down to people and partnerships. That’s the main ‘treasure chest’ I feel like I’ve collected along the way because in working with governments, with civil society, delivering programs, writing for government, working on closed-door negotiations, I’ve really been able to get a 360-degree view on how these issues develop.
In addition to that substantive learning, which is huge, working with different stakeholders teaches you a lot about how the same problem can look to different stakeholders and where you can actually find points of commonality. Divergence is easy to find. There’s a lot of noise about where we all disagree on things, but finding that convergence, which is there, has been one of the most exciting parts of working with so many different stakeholders.
CTC: How would you characterize the state of the terrorist activity online today and how extremists are using or attempting to use digital platforms? What does that landscape look like?
Fink: Over the past two decades, the state of terrorism activity online in some ways has mirrored our own activity online. [Whether] we’re working, shopping, loving, living, everything is online, and we’re all using multiplicities of apps and tools to do it. And that is very much what we’re seeing about the threat: Adversarial actors are using multiple platforms for recruitment, for propaganda, for instructional support, for operational planning. In the early years, we sort of focused on single platforms or single usage, whereas now the threat has been not only cross-sector, but cross-platform. That is very much a characteristic of the contemporary landscape. We’ve seen a lot more multi-layered exploitation of the gaming and gaming-adjacent spaces. I think it’s always good to remember that; it’s not just the game, right? It’s the social space and the environment that gaming produces that is vulnerable to exploitation.
We are also seeing a very worrying trend of increasingly young people not just as victims but sometimes as perpetrators as well. It’s not a new trend. We’ve seen violent non-state groups—whether it’s the Lord’s Resistance Army or Boko Haram or ISIS—target kids. But looking online, the volume and scale and the increasing focus on youth is really a worrisome trend, particularly as we’re seeing groups like 764 really gain strength.
This leads to another aspect, which is also worrying, and that is terrorist groups are learning and adapting. So, as counterterrorism measures are successful, or they understand what is happening on one platform, they’re able to adapt and evade—whether it’s content moderation evasion, whether it’s learning to use legitimate symbols, words, images to create that kind of meme culture or create that ‘lawful but awful’ content. The adversarial adaptation to CT is something we also have to grapple with so that we’re not always preparing for ‘September 10, 2001.’
CTC: One of those adversarial points of adaptation, as you know, and as GNET has published quite a bit about, is terrorist use of artificial intelligence and machine learning tools. Can you help unpack that a little bit for our readers? How are terrorists using AI online? And alternatively, how are some of your members using AI to counter the threat?
Fink: If I had to sum it up in a sentence, it would be: in many ways enhancing the volume and the scale of the content that we’re seeing while lowering the threshold for entry. It makes it easier to produce more volume, more scale, at a greater scope. Generative AI tools, for example, have been used to augment and manipulate terrorist and violent extremist content to avoid detection, to evade those content moderation efforts by making sure that AI can—at scale—produce content that is evasive. We have seen it in attack planning. We’ve seen the use of AI tools that can facilitate the planning and operationalization of attacks by widely sharing information regarding bomb making and 3D-printed firearms. The future potential for use in chemical and biological weapon design is definitely something we’re hearing a lot of concern about. AI has also been used to facilitate not just fundraising, but mobilization and incentivization through chat bots.
We used to talk about how important social networks were and one-on-one engagement for terrorist groups to recruit and mobilize. But now AI can do this at scale through a chat bot. You don’t have to find a safe space for a conversation. You don’t have to travel to meet an individual. We’re hearing that chat bots are an attractive way to interact and develop relationships, particularly for young people, and I think that is going to be a very challenging and worrisome use of AI. In terms of recruitment and radicalization, we’re seeing that AI can strengthen the capacities of violent extremist groups to personalize that effort to make sure that they can be linguistically more capable, that they can reach a larger number of people.
We have not yet seen attack perpetration through AI. So, I think we right now are looking at the recruitment, mobilization, incentivization, attack-planning phase, but not the perpetration phase just yet. But with developments in UAS and some of the other capabilities, it doesn’t feel that far away.
We know that AI also presents immense opportunities. We are seeing many of our members prioritizing user privacy and safety, and recognizing that as AI evolves, it means they have to think about multi-layer mitigations right from product design to hardware function to content moderation. We are seeing many of our members talk about integrating AI considerations when designing and developing tools. It’s sort of safety by design, which is done by testing out risks through red teaming, deepening understanding of how AI is classifying information or content, and really understanding that pathway.
Ensuring, through hard locks or other encryption safeguards, that the tools that are developed can’t be modified by malicious users is critical. One of the ones we talk about is Microsoft’s Prompt Shields, which is an API within Azure AI Content Safety that helps safeguard large language models from adversarial misuse by detecting and blocking harmful or policy-violating prompts before the content is generated. So, some of these proactive, built-in safety mechanisms exemplify how the industry can responsibly innovate while reducing risk. But we also know a lot of our members have been able to use AI to search for, find, and help identify harmful content on their platforms, and to enhance the volume and scale of preventive engagement as well.
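[Editor’s note: The sketch below illustrates the general pattern of screening a prompt before any content is generated, in the spirit of the safeguards Fink describes. The screen_prompt and generate functions are hypothetical stand-ins, not the actual Prompt Shields API or any specific model endpoint.]

```python
# Hypothetical sketch of pre-generation prompt screening. screen_prompt() and
# generate() are stand-ins, not the Azure AI Content Safety / Prompt Shields API.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    blocked: bool
    reason: str = ""

def screen_prompt(prompt: str) -> ScreeningResult:
    """Placeholder classifier: a real system would call a safety service or a
    model trained to detect jailbreaks and policy-violating requests."""
    banned_markers = ["ignore previous instructions", "step-by-step bomb"]
    lowered = prompt.lower()
    for marker in banned_markers:
        if marker in lowered:
            return ScreeningResult(blocked=True, reason=f"matched: {marker}")
    return ScreeningResult(blocked=False)

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model output for: {prompt}]"

def safe_generate(prompt: str) -> str:
    # Harmful or policy-violating prompts are refused before generation starts.
    result = screen_prompt(prompt)
    if result.blocked:
        return f"Request refused ({result.reason})."
    return generate(prompt)

print(safe_generate("Summarize today's trust and safety news."))
```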
CTC: What do you think governments can learn from the work of your members and how they have been utilizing AI and AI-related tools as part of their pipelines and processes to engage with and manage terrorist activity and content on their platforms? I ask because governments are investing heavily in this area, but I think if we’re honest, they’re a bit behind the curve compared to most of your members. So, there’s probably a lot that governments can learn from the work of your members.
Fink: I think the greatest aspect I notice when working with governments versus working with our industry partners is speed. We know by nature, and for very good reasons, government just doesn’t have the speed that the private sector has at its disposal. But I don’t think it’s going to be a choice any longer. We are going to have to figure out how government actors can adapt to the speed of [technological] innovation. I think one of the answers is more open multi-stakeholder engagement because, as I said, one of the sectors in 20 years of counterterrorism conversations that I think was underrepresented was the private sector, which means a lot of the initiatives, a lot of the investment didn’t really go into working with the private sector. But the GenAI evolution is just at a speed that we cannot comprehend. So, I think for governments, the key lesson learned will have to be speed and flexibility and how to find a way to make that happen.
CTC: When you think about future terrorist use of the internet and digital platforms, what concerns you the most? We have talked about the current threat environment, extremist use of AI, but when you look out and scan the horizon, what concerns you?
Fink: What concerns me the most is the speed at which it all operates and the reduced friction—the lowered bars for entry—which means that you can have more and more individuals mobilizing at pace and unseen by law enforcement or intelligence or any other kind of community safeguard. That means we’re seeing more and more kids get wrapped up in this, and that brings on a whole other responsibility to make sure that the rights and needs of children are protected while we’re making sure we’re doing effective counterterrorism. Speed and the lowering bar of entry for individuals is what worries me because it has so many repercussions for practice, for prevention, for risk mitigation.
CTC: GIFCT has had to navigate different points of controversy and different points of view regarding its work. For example, X (previously Twitter) was a founding member of GIFCT but is no longer a member. Similarly, while TikTok is a current member, its application to join was previously not approved, for reasons that proved controversial. GIFCT has also been criticized for its lack of transparency, especially about its internal deliberations. As a leader of GIFCT, how have you been navigating these issues and challenges, and how would you respond to some of them?
Fink: Well, X [Twitter at the time] was an important founding member. We really appreciate their contributions. They chose to conclude their membership with GIFCT to strengthen their internal trust and safety efforts. So, we are very grateful for the contributions while they were with us, and we look forward to collaborating again in the future. Since I joined, TikTok’s gone through the membership process. They’ve worked closely with our teams to get to the point where they meet all the criteria, and it’s been great to work with them both as a candidate company—while they were a candidate, they were active in many of our projects and events—and now since they joined us as a full-fledged member. We’re really grateful to have them as a member.
I don’t know if any of the spaces I’ve ever worked in have been without controversy. But we remain really deeply committed to working transparently and in a manner that fosters respect for human rights. GIFCT at the very early stages of its inception created a human rights impact assessment that was a foundational document. One of the things I’m really proud of is how we continue to make progress on many of its recommendations, which focus on transparency and making sure that the human rights lens is on all our work. We regularly seek feedback from our multi-stakeholder community through the workshops, working groups, consultations, and we have an annual transparency report where all our work is published.
Controversy is always going to be part of this work, and I think in many ways, that’s good. There’s a healthy tension. We work in a very important space that touches on very key considerations for different sectors. So, a healthy amount of tension and dialogue is what keeps us on our toes and aiming to be better.
CTC: Dealing with a multi-stakeholder environment, there are obviously always going to be points of tension that you have to navigate. How have the tension between the United States and European governments regarding content moderation activity, as guided by the Digital Services Act and other legislation in Europe, and disagreements about free speech and censorship impacted the work of GIFCT and its members?1 For example, in late December of last year, it was reported that the U.S. State Department directed its consular staff to “thoroughly explore” the background of H1-B visa applicants who work in content moderation and trust and safety.2
Fink: We definitely continue to monitor all these developments and keep an open conversation going with our partners across governments, industry, and civil society. It is an ever unfolding and changing context. So, it is definitely one we are tracking closely.
Collaboration is such a key part of what we do, and I earlier described our Independent Advisory Committee. I think that is a really important forum for governments that have different perspectives on how these issues should be addressed and how they’re working to provide input on GIFCT’s priorities and work. We welcome those strategic recommendations and guidance. We continue to work with our international partners to better understand the entry points and opportunities for collaboration.
A key part of all this has been to make sure that our hash-sharing database has clear and agreed boundaries, or taxonomies; these serve as formal guidance for how our hash-sharing database operates. To go down to the tactical level a little, although different member companies have slightly different operational definitions of terrorism or terrorist content, GIFCT’s taxonomy at least represents a place of consensus among members about some high-severity content, in terms of a common lexicon for the terrorist and violent extremist content that can be hashed through the database. So, GIFCT’s taxonomy gives us that important point of convergence for different actors to be able to work together.
CTC: Over the past year, the Trump administration has designated at least 25 new foreign terrorist organizations (FTOs), and in the past, the FTO list and other designation lists have been utilized by some of your members to help guide some of their activity in terms of inclusion—what is terrorism, what is not. The expansion of the FTO list is a seismic shift for our community. How have your members been responding to those types of changes?
Fink: It’s an ongoing conversation because many of our member companies are situated in countries which have their own lists and designations. So, we have always been working in a space with evolving designation lists and perspectives on terrorist groups. All our member companies, first of all, have their own policies regarding content moderation and their own approaches to lists and violent extremist organizations. But I will say that the way we operate, for example, our hash-sharing database and our IRF [Incident Response Framework] have the flexibility to adapt to evolving threats as well as boundaries. That’s because we have a number of criteria that allow for labels in the hash-sharing database and that allow us to action different kinds of content. Some of the criteria emerged early from the U.N.’s 1267 sanctions list, which largely focused on ISIS and al-Qa`ida. But we also have a number of behavioral labels that categorize content by the type of harm and incident labels that are created from activations. So, that means that you don’t just have to be a designated entity to activate the hash-sharing database or an incident; you have to be sharing perpetrator-produced content that meets the criteria, which means we don’t have a limitation on the kind of content that we action based on the perpetrator. [This] puts us in a position to adapt to the lists because we can action [terrorist and violent extremist] content by a range of perpetrators.
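[Editor’s note: A toy sketch of what such label-based actioning might look like: each hashed item carries entity, behavioral, or incident labels, and a member company decides what to act on without the perpetrator needing to appear on any designation list. The labels and policy below are hypothetical illustrations, not GIFCT’s actual taxonomy.]

```python
# Illustrative only: hypothetical labels and policy, not GIFCT's taxonomy.
from dataclasses import dataclass, field

@dataclass
class HashEntry:
    digest: str                      # the shared fingerprint; no content attached
    entity_labels: set[str] = field(default_factory=set)    # e.g. listed groups
    behavior_labels: set[str] = field(default_factory=set)  # e.g. "attacker_manifesto"
    incident_labels: set[str] = field(default_factory=set)  # e.g. a specific activation

# A member's own policy: which labels it chooses to action on its platform.
ACTIONABLE_LABELS = {"un_1267_listed", "attacker_manifesto", "perpetrator_video"}

def should_action(entry: HashEntry) -> bool:
    """Content is actionable if any of its labels match the member's policy,
    whether or not the perpetrator appears on a designation list."""
    all_labels = entry.entity_labels | entry.behavior_labels | entry.incident_labels
    return bool(all_labels & ACTIONABLE_LABELS)

entry = HashEntry(
    digest="a3f1c0de",
    behavior_labels={"perpetrator_video"},
    incident_labels={"example_incident"},
)
print(should_action(entry))  # True: actioned on behavior, not on designation
```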
CTC: So, if I understand it right, by focusing on the behavior as a core indicator and those sort of signals, your members are already looking across and beyond the list in terms of what isn’t being captured.
Fink: Exactly. That allows our members to have discussions about different kinds of perpetrators because we’re focused largely on the action. Through GNET, through our regional consultations, through working with industry partners we are also facilitating ongoing conversations about how members are reacting to these changes [and] any lessons learned they have to share with their peers. And as you said, it is a big shift. We deal with countries with many different types of lists, but we do want to make sure our members always have a chance to talk to each other about how to respond.
CTC: What’s the hardest part of your job? Given GIFCT’s diverse membership, I would imagine that it’s managing and navigating through different equities, but I don’t know if that’s right.
Fink: For me, the hardest part of the job is knowing that there’s always more to be done, and as soon as we think we’ve succeeded in one aspect or think that we have understood one trend, [we always have to] be ready for the next evolution of the threat. When you look at the environment and the context where GIFCT was established versus what we’re looking at now in terms of the threat, the solutions, the sectors involved, I think the hardest part is not ever feeling quite comfortable resting easy [on our laurels].
Another challenge we do need to think about is that as we continue to grow our membership and expand and improve the member resources, we also need to think about what our members need. If our members are going to be increasingly diverse and a larger group, we need to make sure that we are constantly introspective and make sure the tools we develop suit them. That is always an ongoing process. CTC
Citations
[1] For background, see Mark Scott, “Trump Squares Off with Brussels Over its Digital Rulebook,” Tech Policy Press, August 28, 2025.
[2] For context, see Shannon Bond, “State Department to deny visas to fact checkers and others, citing ‘censorship,’” NPR, December 4, 2025.