Dr. Amy Zegart is the Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution and Professor of Political Science (by courtesy) at Stanford University. She is also a Senior Fellow at Stanford’s Freeman Spogli Institute for International Studies, Chair of Stanford’s Artificial Intelligence and International Security Steering Committee, and a contributing writer at The Atlantic. She specializes in U.S. intelligence, emerging technologies and national security, grand strategy, and global political risk management.

Zegart has been featured by the National Journal as one of the 10 most influential experts in intelligence reform. Most recently, she served as a commissioner on the 2020 CSIS Technology and Intelligence Task Force (co-chaired by Avril Haines and Stephanie O’Sullivan) and has advised the National Security Commission on Artificial Intelligence. She served on the Clinton administration’s National Security Council staff and as a foreign policy adviser to the Bush 2000 presidential campaign. She has also testified before the Senate Select Committee on Intelligence and advised senior officials on intelligence, homeland security, and cybersecurity matters.

Zegart is the author of five books. Her award-winning research includes the leading academic study of intelligence failures before 9/11—Spying Blind: The CIA, the FBI, and the Origins of 9/11 (Princeton 2007). She co-edited with Herbert Lin Bytes, Bombs, and Spies: The Strategic Dimensions of Offensive Cyber Operations (Brookings 2019). She and Condoleezza Rice co-authored Political Risk: How Businesses and Organizations Can Anticipate Global Insecurity (Twelve 2018) based on their popular Stanford MBA course. Zegart’s forthcoming book is Spies, Lies, and Algorithms: The History and Future of American Intelligence (Princeton 2022).

CTC: Next month, your book Spies, Lies, and Algorithms: The History and Future of American Intelligence will be released by Princeton University Press. What’s the central thesis of your book, and what are some of its key findings and takeaways?

Zegart: The central thesis of the book changed. I originally was going to write this book a decade ago—I’m a little embarrassed to even admit that—and it was supposed to be a textbook for university undergraduate courses. It started back when I was at UCLA where I polled my students and found out, much to my surprise, that most of their information about intelligence came from spy-themed entertainment. So the original thesis of the book was just [to] provide a textbook that separates fact from fiction and that provides an introduction for a wide audience to understand intelligence. But that thesis changed dramatically with the rise of cyber threats, Edward Snowden’s revelations, and other profound changes driven by technology. One of the benefits of taking so long to write the book is that the world changed, and how U.S. intelligence agencies make sense of this dizzying threat landscape in the tech age became a much more interesting topic.

The thesis of the book now is that this is a moment of reckoning for the intelligence community, that we’ve never before had the convergence of so many emerging technologies—whether it’s internet connectivity, AI [artificial intelligence], quantum computing, or synthetic biology—and that this convergence of emerging tech is transforming every aspect of intelligence. I summarize this moment of reckoning as an adapt-or-fail moment, much like 9/11 was for the intelligence community, and the reason is that emerging technologies are driving what I call the five “mores”: more threats, with actors able to threaten across vast distances, through cyberspace for example; more speed, because threats are moving at the speed of networks, not the speed of bureaucracy, and so that means that collection has to be faster, analysis has to be faster, decisions using intelligence have to be faster; the third more [is] more data. Analysts are drowning in data. How can we use emerging technologies to sift vast amounts of data? The amount of data on Earth is doubling about every two years.

[The fourth more is] more customers who need intelligence to advance the national interest. Intelligence isn’t just for people with clearances anymore. Voters need intelligence, critical infrastructure leaders need intelligence, tech platforms need intelligence. So how do intelligence agencies produce for the open? That’s a radical transformation.

And then the fifth more: more competitors. The government’s ability to collect and analyze information is nowhere near dominant compared to what it used to be in the Cold War. Open source isn’t just a type of intelligence, or an “INT,” that spy agencies need to collect. Open-source intelligence is an ecosystem of new players who have their own incentives, capabilities, dynamics, and weaknesses. U.S. intelligence agencies can’t just add more open-source intelligence and stir. They have to figure out how to deal with a world where anyone can collect and analyze information and make it available to the world. Much of this information can be useful, but it can also be dead wrong, deliberately misleading, and it can create unintended consequences. For example, third-party open-source intelligence could make crises harder to manage because their real-time “fact checking” could limit the ability of states to compromise, negotiate in secret, and use useful fictions to find face-saving ways to de-escalate. When the Soviets invaded Afghanistan, we provided covert support to the Afghan rebels. The Soviets knew it and we knew the Soviets knew it, but both sides pretended not to know. That useful fiction helped keep the Cold War from escalating.

This too is a radical new environment for the intelligence community, and what it means is that U.S. dominance in intelligence is declining. The playing field of intelligence is leveling, and not to the advantage of the United States. For all of those reasons, emerging technology is creating a need for radical transformation of the intelligence community.

CTC: You’ve served on the National Security Council and have played a key role in various commissions related to AI and other intelligence reform topics. If you were back in the NSC or had a senior role in the DNI, what would be the top three initiatives that you would want to kick off so the U.S. could better prepare to tackle those five “mores”?

Zegart: That’s such a difficult and good question. To the [Biden] administration’s credit, I think there are a lot of people working on this problem. I’m not the first one to talk about it. As you mentioned, I was on the CSIS Technology and Intelligence Task Force co-chaired by Avril Haines [current Director of National Intelligence] and Stephanie O’Sullivan [former Principal Deputy Director of National Intelligence]. So there are a lot of smart people working on these problems; I didn’t invent the awareness of the problem. But if I were in the seat of government today, I would focus on three drivers. Rather than specific recommendations, I think [about] what’s going to drive reform over the long term, because this is an urgent and important issue [and] we need long-term change. Number one, organization; number two, strategy; and number three, talent.

One of the things that I really felt very strongly about in the [CSIS] task force, and you’ll see it in the report, was we need a dedicated open-source agency. I was reluctant to recommend a 19th intelligence agency because as we all know, when you have more agencies to coordinate, coordination becomes harder, and so if you’re worried about coordination as I am, a new agency may not seem like such a great idea. But I’m convinced we need a new open-source agency. Much like air power didn’t get the attention it needed until the Air Force became its own service after World War II, OSINT [open-source intelligence] will never get the priority or resources the nation needs without its own agency. There are open-source initiatives in the IC [intelligence community] already, but secret agencies will always favor secrets. For intelligence to succeed in this era, open-source intelligence has to be foundational. And for it to be foundational, it has to have a dedicated organization focused relentlessly and single-mindedly on that mission. So I think that organizational piece is key.

The second piece is strategy. What’s our strategy in intelligence for emerging technology? We need one, and it needs to guide everything we do. And then the third thing, and I know they’re focused on this and [CIA] Director [William] Burns is focused on this a lot, [is] talent. How do we get the right people in the door, and how do we get the right flow of people in and out of government in intelligence so that we can harness emerging technologies ourselves, develop better working relationships with the private sector, and better understand how technologies are driving the threat landscape?

Amy Zegart (CISAC/Stanford University)

CTC: [As you know], the challenge is not just for the intelligence agencies and the production of information, but it’s also prepping the ‘customers.’ Are our senior leaders prepared to accept guidance drawn from open sources? And when you think about that broader range of customers, how do we prep them to hear from intelligence agencies so that those agencies can be effective?

Zegart: It also gets to, how do we have customers [that] become champions of intelligence reform, not just recipients of intelligence products? They go hand-in-hand. I think there has always been a need to educate customers about what intelligence can and can’t do. Sue Gordon[a] has said to me in an interview that I use in the book: Policymakers always have some friction with intelligence because intelligence steals presidents’ decision space. By that, she meant that intelligence often has to deliver bad news—telling presidents that events may be unfolding in ways they don’t like and can’t control. I think there needs to be an education function not just within the IC but among customers about what’s possible with intelligence, what isn’t, and why. And I don’t think we’re going to get there unless customers are partnering in the intelligence reform endeavor.

CTC: What should the intelligence community be learning from entities like Bellingcat and other data journalists that are taking innovative approaches to leveraging data or making interesting, novel use of open-source data?

Zegart: I think OSINT is too often viewed in the intelligence community as an INT. It’s stuff that people can use, and I think that’s wrong. OSINT is an ecosystem; it is a group of organizations and individuals, and what an open-source center should be doing is actually providing a node of engagement with the ecosystem, so that it’s not just how can we use the stuff that Bellingcat is producing or how can we use the tools that they’re using today, but how can we have a continuous learning and collaboration process with a variety of open-source actors—and they’re constantly evolving—so that we produce our own open-source stuff, but we’re also engaging in that interaction with the open-source community? I think the most important thing an open-source agency should be doing is reframing open source: moving away from treating it as an INT, a way of collecting stuff that we already collect, and toward viewing it as a whole new ecosystem of actors with its own capabilities, weaknesses, dynamics, and incentives.

But beyond that, I think Bellingcat is such an interesting example of, how can you harness the crowd without turning the crowd into a mob? And I think Bellingcat’s done a really good job of that. Other open-source actors have not done as good a job at that, and it’s an emerging ecosystem with norms and standards and training, and it’s learning from Bellingcat: how were they able to do that globally on a volunteer basis, and how do they actually exert quality control when anyone can join and it’s a volunteer effort? Because, as you know, in the IC, quality control tends to be bureaucratic, it’s rigid, it’s top down. There are some benefits to that, but it’s slow. Bellingcat is kind of the opposite. It’s fast, it’s bottom up, it’s dispersed, it’s decentralized, but there are risks to that, too. And Bellingcat’s done a really good job at actually mitigating the risks, so I would focus there.

CTC: One element is the analysis; the other part is using that open-source data, right? I thought [it] was an interesting observation that we could use open-source data to train machine-learning tools and potentially apply that in other realms where the data is held more securely. How should the government be thinking about acquiring and using that data? And how do you work with the private sector in that realm, where a lot of this data may be held?

Zegart: Let’s start with the easy sort of open-source data, which is foreign open-source data. We’re not talking about U.S. persons on social media inside the United States, which raises constitutional, First Amendment questions. As you know, it’s not just getting foreign data; it’s getting structured data that is usable in a variety of ways. How do we collect haystacks in ways where we can actually use machine learning and other analytic tools to harness insights that we wouldn’t otherwise get? And how can we do it quickly? That’s kind of the key question. I’m always struck by the fact that foreign adversaries have access to our data in ways that we don’t have access to anybody else’s data. I feel like right now we’re living in the worst of all worlds: The internet is free and open for adversaries to collect our data and use it, but it’s not free and open for us to understand what’s going on inside of China or to share what the news actually is inside of denied environments. It’s only free and open for the adversaries who are autocrats. I don’t have a good answer [for] how we deal with getting more data other than to say I think it takes trust more than anything else. We can have laws and we can have mandates and all the rest of it, but at the end of the day, you have to trust the government with access to data, and that requires oversight. We can’t just access more data. We have to think much more systematically about what kind of oversight we’re going to have so that there are guardrails on how data of Americans, in particular, is collected and used.

CTC: The United States government has been investing in data science, machine learning, and artificial intelligence-driven approaches in various ways for decades. You served as an advisor to the National Security Commission on Artificial Intelligence (NSCAI).[1] How much progress has the U.S. government made on the AI front? What are the key remaining challenges, and how do you assess our progress given the efforts of countries like China?

Zegart: I’m going to sound like so many government reports that say, ‘Progress has been made but there’s more room to improve.’ Have we made progress? Absolutely. I think that commission (NSCAI) did tremendous work. You see a lot of its recommendations now getting into the NDAA [National Defense Authorization Act] over the past couple of years, and its work is ongoing. But we are way behind the curve on this. Look also at the Belfer Center report that just came out.[2] We are losing. People don’t like to use the ‘L’ word, but we are losing the tech race with China. We’re losing in almost every technological area. The gap is narrowing, and China is expected to surpass the United States in just about every area except semiconductors. And they’re working hard on that, too: the semiconductor supply chain. That’s not to say we have to lose, but the trend is not in the right direction, and time is not on our side.

One of the least noticed areas of the tech competition that I would highlight is the human capital dimension. I am really worried that we are eating our seed corn when it comes to AI education. What do I mean by that? Look at the percentage of AI faculty at leading research universities who are no longer faculty. They’re going to industry. Two-thirds of the people who got Ph.D.s in computer science specializing in AI in 2019 did not stay in academia. They went to industry, which means there aren’t enough professors to teach the next generation of students to do AI. So if we’re thinking about the long term, what does our tech innovation require? People. Nearly all the numbers are bad when it comes to higher education. The percentage of international students getting Ph.D.s in STEM fields: really high and getting higher. I welcome international students. The question is, do they have visas that enable them to stay so they can continue working in the United States? The answer is no. It’s too hard for them to continue to stay. Our K-12 education system [during] COVID has gotten worse. Our performance in international science and math competitions is getting worse despite spending more money. So if we look at the long term, strategic tech competition is about talent, and we are losing the race for talent. And that really concerns me, especially the sort of brain drain of our top AI researchers to industry.

CTC: Having been in industry myself [Brian Fishman], I know there is obviously incredible demand for that talent, and folks are getting offers. Is there a way to balance that out? Should the government step in? You can imagine programs to incentivize talented folks to stay in academia and train others. We’ve done that in the past with key languages, for example.[3] Are there things we ought to do to counter this specifically?

Zegart: I think the government should step in and provide more funding to keep talented researchers at universities. I had this argument with someone in industry who said, ‘Why is that a bad thing, all these people going to industry?’ and I said, ‘Because you’re looking to monetize products, and academics are looking at the frontier of the frontier.’ If you’re looking at basic research that will fundamentally change how we do what we do in 20 years, it’s not going to come out of Google most likely. It’s not going to come out of industry. It’s going to come out of academia. I have a colleague here at Stanford who says his sort of ‘test’ for his doctoral students is ‘Go come up with a dissertation topic and go pitch it around the Valley, and if you have people that want to fund it, I’m gonna reject it. Why? Because you’re not thinking big enough, because you’re thinking too near-term. You’re thinking about what can be monetized. That’s not our job. Our job is to be bolder, to be on the frontier of the frontier.’

How can we solve this problem? Money from the government to retain top talent in universities would help. Compute power and resources to enable them to do what they do in industry within universities would also help. Once you get past the really wealthy universities, it’s hard to put together packages and the capabilities for the leading researchers to actually do the work that they want to do. They have to be able to do it with access to compute power and data.

The other thing, and I’ve been talking with folks I know at the Pentagon about this, is there are more windows of opportunity for the U.S. government to harness the creative energies of top researchers in universities today. I’ll give you an example. If you’re a first-year Ph.D. student in computer science at a top university, you have to figure out what your dissertation is going to be. Guess who provides all sorts of great advanced tools for you to play around on the new gizmos in the lab. Industry. Path of least resistance: What’s my dissertation going to be? It’s going to be on this thing that I can do research on that’s already in my lab. Why doesn’t the U.S. government actually provide problem sets and capabilities to departments for first-year Ph.D. students to work on their toughest problems? There’s talent looking for problems to work on; the government has problems looking for talent. But they haven’t crossed that bridge. And first-year doctoral students are an ideal group to cross the bridge.

CTC: It’s fascinating that you’re conceptualizing technological talent in truly a national power sort of way, in industry, in academia. How do we get some of these folks? And what is the right model for bringing in this kind of talent direct to government, whether it’s the NSC or the Department of Defense or wherever? And how much do we need to [get] talent direct in[to] government versus making sure that talent is just accessible to the folks that need to make decisions within the IC or the defense community more broadly?

Zegart: There are really two questions embedded in there. One is, how can we get this talent pool to be interested in government? I think there’s a lot more interest than the government realizes. I often joke that the Pentagon is the only organization in the world that thinks it can market its products to 18- to 25-year-olds the same way now as it did 25 years ago. You have to know how to reach people, and cohorts change pretty fast in that age group in terms of what speaks to them. So I think the Pentagon tends to use too many D words: deny, degrade, destroy. And tech folks like to use C words: create, collaborate, change. So I think there’s a sort of marketing issue there, but I think there’s a reservoir of interest among top talent in the academic world, and I think the onus is on the government to figure out ways to reach them more and give them low-cost, low-risk ways to go in and out. For example, you’re a doctoral student and you’re interested in doing stuff for the government, but you’re not going to take time away from your dissertation project to go work in the Pentagon. But you might go for a couple of weeks for a boot camp to understand how policy works. So then you’ve met the right people and you understand how government works, and now you’ve got that network that you can draw on in both directions for the future. On that part of it—how we can tap that resource better—there are a whole number of ways that we could do better.

CTC: How in government do you utilize those folks that are not in government all the time?

Zegart: I think the question here is, what is the talent problem in the government? And I think we actually conflate three talent problems inside the government with respect to tech. We need champions, we need innovators, and we need implementers.

Implementers: You can grow your own. This is Kessel Run.[b] This is what a lot of DoD efforts are doing. So you can grow your own—implement better coding, etc.

Champions: That gets to, how do we get senior leaders to understand what technology can and can’t do for them? That’s education of senior leaders so that a combatant commander actually understands what AI can do with threat analysis, for example.

Innovators: There, you probably do need to have more in and out of the private sector. I think it’s the category of the innovators—who’s at the cutting edge of the cutting edge—those are the people that need to be going in and out of government more. But the model can’t be lifers, right? It’s got to be ambassadors, going in and out of the two worlds.

CTC: You’ve made comments in previous interviews, and we touched on it a little bit here in this interview, about open source being a potential laboratory for the experimentation and testing of ideas.[4] As we all know, government classification of data and some of those restrictions make it challenging to have that interaction and movement of people back and forth, which limits the pool of people who can do that. What do you see as open source’s value or utility in that regard?

Zegart: I’m glad you brought that up. When I say open-source center, I really think of three areas of goodness that it could create. One is pounding the table for open source; you’ve got a stakeholder [saying] that open source really matters, right? But two is, it encourages innovation in terms of, now we have open-source data, let’s test various tools and see what we find. And we can do that in an unclassified environment. It enables more innovation more quickly because it’s all unclassified. And then the third area of goodness is recruiting people. If you’re not geographically constrained to be in the Beltway, now you can go where talent wants to live. You have to forward deploy to where the talent wants to live. So you gotta go to Austin. You gotta go to Denver. You have to go to Portland and other places. Imagine an open-source center that has forward deployment offices in other locations because it’s all unclassified. The communication’s much easier. People can work more seamlessly. So I think it’s all three of those things: pounding the table, experimenting with new tools because it’s unclassified data, and drawing people in because you are located where they want to be.

CTC: On this organizational question, the Department of Defense recently announced it is consolidating organizations focused on digital transformation.[5] If you imagine AI contributing to the Department’s mission in three broad buckets—intelligence, operations, and the “business” or back-office side of managing the department—what would you like to see the Department do in each of these areas?

Zegart: I hope—and I think DoD is moving in this direction—that you start, particularly when you’re talking about AI, with the back office because, first of all, it’s desperately needed from what I can understand and, second, it’s less controversial. You know the old saying in the Valley: Nail it, then scale it. You gotta nail it in the back-office functions—things like logistics and maintenance, predictive maintenance, things like that [that are] crucial for the effectiveness of the force—but you’re not getting into the debate about killer robots, right? You don’t need to start with the debate about killer robots. Let’s start with the boring bureaucratic functions where we know AI can create tremendous benefit. And logistics wins wars, as the old saying goes. There, the goal of adopting AI is no friction. You should be able to do a lot: travel more easily, communicate more easily, do all your HR stuff more easily without friction, and if there’s one chorus I always hear, it’s how much friction there is in doing even the simplest things within DoD. So think about the efficiency gains you get if you actually adopt AI usefully in that area.

But when you get to the warfighter, you want some friction. When I think about ‘how do we adopt AI in a useful way?’ I want productive friction between the speed of dealing in a warfighting environment and the pause to think about the ethical and legal implications of what we’re doing. I actually want more friction with AI and the warfighter, at least as we’re ironing out what the norms should be and how ethics should apply.

With intelligence, I think about AI as augmenting the human in a serious way. So I think there’s a lot of concern about, are machines going to replace humans? And the answer is no. Machines should be augmenting humans, so that pattern matching, searching for surface-to-air missile sites is done by an AI algorithm [and] the analyst can focus on, what is Kim Jong-un’s intention here with what he’s doing? Machines can’t do that nearly as well as humans, but machines can do pattern recognition better than humans can. So [it’s] figuring out the division of labor so we free up human thinking for the kind of analysis that humans are much better at doing.

CTC: As we all appreciate, sometimes the near-term solution of government is to dedicate a variety of resources to a problem. There’s a lot of money and investment going into AI. I think DoD recognizes that it needs to accelerate and more effectively compete with countries like China given what they’ve achieved in the AI sphere. Any thoughts about how we think about, with all that money and investment, AI’s return on investment (ROI), particularly as the DoD and the intelligence community are trying to scale?

Zegart: I’d say a couple of things. One is, in the grand scheme of things, the money going to AI is not very much. We’re talking about spending half a billion to a billion dollars per plane on the next bomber. A plane. What is AI spending compared to that? Not very much, right? So in the grand scheme of things, I think we’re not actually spending nearly enough on AI and other foundational emerging technologies. Second, I’d say there’s a lot of stupid spending on AI. I’m not being very diplomatic here, but I think that’s the reality. And so where is the money going? Because I hear earfuls from really amazing startup AI companies about how they can’t get into the DoD. It’s too hard. The money is too small. They take too long. And I think part of the problem here is the defense industrial base is consolidating. Because of mergers and acquisitions, there are now only a handful of big primes, there’s less competition, they’re locking up a lot of money, and it’s not creating enough space for actual innovation. So who’s getting the AI funding is a key question.

And then there’s also a different definition of speed. [Recently], we held a Tech Track II dialogue at the Hoover Institution[6] [on] the idea that we need the Valley and DoD to actually communicate better together. We had venture capitalists and industry leaders and not just big companies, but startup companies and folks from the Pentagon and White House and others, and one of the key takeaways for me was that they had different definitions of what’s fast. In the Pentagon, there’s a lot of talk and I think a lot of genuine interest in moving fast, but fast in Pentagon speak is a decade. Fast in Silicon Valley is a month. And fast for venture capitalists is a year. That disconnect means that DoD may think it’s moving fast in AI, but not fast enough to deliver the return on investment that venture capitalists investing in startup AI companies need to make it worthwhile. I heard a lot of concern from the venture capital community that there’s been a lot of forward investment in defense-first companies, including AI companies, and the Pentagon’s got a year to show that that investment is worth it in terms of actual production or actual contracts, or that money is going to go where the returns are better. So I’m really worried about this moment in terms of speed, and the Pentagon’s moving fast, but not fast enough.

I think ultimately the ROI on AI is the government actually adopting AI from the right places that can improve effectiveness fast enough. And I don’t see enough evidence that we’re anywhere close to being where we need to be.

CTC: Is the implication there, though, that we need to be spreading those dollars more widely outside of the traditional defense industrial base and get to cutting-edge companies that are not tied to Lockheed or Raytheon or BAE or some of the big companies that are already in this space because some of the innovation is happening outside of those realms?

Zegart: Absolutely. We need more money. We need it to go to more players. We need what we call actual competition in the United States, as opposed to having two companies that make airplanes, one company that makes ships. Oh, by the way, the primes are not software companies first; they’re hardware companies first. I liken it to asking an artillery officer to fly a plane or a pilot to do land warfare. Their sweet spot is not software. Their sweet spot is hardware. So why are we thinking that primes are good at software, when software companies are good at software?

CTC: This also seems to circle back to the point you made about human capital: in government, with all this investment, are there enough people who have the skills to understand what good AI looks like, to evaluate what’s being pitched to them, to select good vendors or products, and so forth?

Zegart: I am concerned that AI is [viewed] like sort of magic fairy dust, where people sprinkle a little AI and suddenly good things happen and no one really knows when it works or what its risks are. Before the budget [and] debt ceiling [fights] intervened, we were going to have the first-ever congressional forum for bipartisan members of Congress to come to Stanford, where we were doing a tech bootcamp to educate members of Congress about a range of emerging technologies, including AI. I think it’s desperately needed. I actually counted the number of engineers in Congress, and it’s something like two dozen, and there are more than 200 lawyers in Congress. Not that I have anything against lawyers, but you can’t understand the technology just by reading a little bit about it. You actually need a little bit more of an understanding to know what it can and can’t do, and that’s true not just in DoD, but Congress has to learn more about it, too.

CTC: I don’t think we can go through this conversation without talking about killer robots at least a little bit. How do we set the ethical guidelines for the use of AI in not just combat environments, but broader defense? I do worry about trying to turn these considerations over to government completely, and I’m wondering how we come to agreement about the ethical and appropriate uses of AI. And over time, because this is a competitive space, is this a place where, however difficult it may be, we’re going to need international treaties and some kind of verification?

Zegart: These are such thorny and good questions, and they’re not limited to just government: How do we think about ethical uses of fill-in-the-blank? Engineers often think that their products are agnostic, and they don’t think about the potential downside uses of their products. I think ethics needs to be baked into engineering at the front end where the creators of technologies are thinking about ethics and potential downside uses of those technologies in the process of creating them. Right now, I can’t tell you how many engineers come to me and say, ‘Well, policy and ethics is what you guys do. We just create stuff.’ That’s not how it should work, right? So ethics needs to be baked into academia. It needs to be baked into industry. It needs to be baked into the government.

Killer robots are not just a government problem. When Google says things like, ‘We don’t want to be involved in anything involved in making weapons,’ well, Google is a weapon. The platform is a weapon for nefarious actors to do bad things, and we need to realize that. I think it’s a much broader question than just, what are the ethical guidelines of the Pentagon? But let’s put that aside for a second. How do we move forward? I think people outside the military would be surprised to know how much thought is going into ethical guidelines for AI and the use of autonomous weapons. And those discussions need to be more explicit, and they need to be more public and transparent to reassure people, in particular in the United States, that our policymakers are really thinking about why, even if we could use autonomous systems for certain capabilities, we won’t. You think about law enforcement’s use of algorithms for facial recognition and how algorithmic bias is leading to a lot of false identifications, particularly of African Americans, because we know algorithms are better at recognizing lighter-skinned faces than darker-skinned faces.[7] So we need to think at all levels of society about how to have those conversations in an open way, and it starts with understanding what the inherent limitations of the technology [are]. Where can the technology go wrong? If you start by understanding where the technology can go wrong, you can have better ethical guidelines moving forward.

You asked, should we have an autonomous weapons treaty? I think the answer is no. And the answer is no for me for a couple of reasons. Number one, how do you define an autonomous weapon? That’s a pretty tricky thing. Is a nuclear missile an autonomous weapon? Is something [that], once you launch it, you can’t recall an autonomous weapon? We might imagine a lot of disagreement about what is autonomous and how autonomous it is. Number two, there’s no incentive for other states to actually adhere to such a treaty, so it would be aspirational but not operational. And I worry that it then gives lip service to ethical guidelines without actually implementing them. If you think about cyber norms, for example, there’s a lot of discussion about the free and open internet. Well, the internet is not free and open, and we need to get over that. So if you talk about having a treaty to foster a free and open internet, aren’t we better off talking about the Balkanized internet we currently have, how China and Russia are taking advantage of it, and how we need to actually have like-minded countries with democratic principles banding more together? I think the urge for virtue signaling and feeling good about all signing on to something can get in the way of real progress in terms of figuring out where we agree with like-minded countries about what we will and will not do, and actually developing norms among the like-minded first and then expanding that circle outward. So I really worry about a treaty that just gives lip service to it and lets the bad guys off—the ones that don’t think about ethics at all.

CTC: In relation to this discussion, how concerned are you about the messy reality of conflict and future conflict when it comes to developing ethical principles for the United States and allied partners, when competitors or adversaries may have different ethical guidelines and adhere to different ethical principles in relation to autonomous weapons? Those differences might actually be an advantage that a competitor could leverage in the future; it seems like that might be a future that’s not that far away, either for a state actor, a non-state actor, or a proxy, because of the different boundaries and guidelines that govern different actors’ behavior.

Zegart: It’s such a great question. I think on that front, our history with nuclear security gives us some good guidance about how to proceed. During the Cold War, we developed a lot of confidence-building measures and mutual understandings about how nuclear war could emerge even when neither side wants it. And as I think about autonomous weapons, where is the mutual self-interest in restricting our use of these weapons? I think the mutual self-interest lies in crisis escalation. With autonomous weapons, I think crisis escalation becomes much more likely and more fraught. Crisis management when humans are against humans is hard enough, where we’re trying to predict what the other side is going to do. Now you think about crisis escalation where autonomous systems are collecting information, analyzing information, and making decisions about how to use that information and what kinetic effects there will be, and each side is not operating with the same algorithms and with the same use of autonomy. Now the chances of miscalculation rise exponentially. I think in that risk of escalation lies a silver lining, which is talking to our adversaries about the mutual dangers that arise from an increasingly autonomous world. So it’s not just that the U.S. is fighting with one hand behind our back and China isn’t; it’s that we are all worse off when one side uses autonomy in certain ways that are not well understood, that have lots of reasons for failure, and where crises can escalate out of control pretty easily.

I have this one example that has really alarmed and intrigued me since I saw it, and it has to do with AlphaGo [and the board game Go].[c] There was this moment when the machine is playing Lee Sedol, where AlphaGo makes this—I call it the move 37 problem—the machine makes a move that is just so crazy that the best Go player in the world has to leave the room. He’s freaked out. He leaves for 15 minutes, and commentators are saying, ‘That’s just not a human move.’ And the general reaction was, ‘Isn’t it amazing what the machine can do? It’s not a human move,’ and I thought, ‘Isn’t it alarming that it’s not a human move?’ So imagine in a crisis, your opponent is using this algorithmic decision-making tool, and you can’t understand it because it’s not a human move. Can you imagine the escalation risks when one side is doing things you can’t imagine in your wildest dreams they would do? We think about all the crises that escalated into war when humans were doing their best to understand other humans. That not-a-human-move component to autonomous capabilities scares the pants off me when it comes to crisis escalation. And I think if it scares the pants off of other people, then we have the opportunity to actually have real conversations about self-limiting autonomy.

CTC: As we become more dependent on technology, that dependence creates new attack surfaces for adversaries. And the inverse is obviously true: As our adversaries become more dependent on technology, that creates new surfaces for us to attack. How do we think about resilience in that environment? Because you know the first answer among most folks will be, ‘Hey, we’re going to defend those surfaces.’ But we’re not going to defend them perfectly over time. How do we prepare a workforce and an organization to deal with the loss of these technologies, if that happens in critical moments?

Zegart: You’ve raised such a critical question. I have a lot of thoughts about resilience. The first is, we’re not going to be able to deter our way out of this. I completely agree with your premise, which is that bad things are going to happen. So now we have to think about how do we defend against them happening? How do we recover once they do happen? A couple of things I think about are, number one, technical problems often don’t have technical solutions. They have non-technical solutions. Like learning how to use a paper map instead of relying on your GPS, for example. So that resilience often has to be in a non-technical way.

Number two, avoid the temptation to concentrate. You hear a lot about ‘we need to concentrate our communications capabilities.’ That to me is, ‘Oh no, we don’t want to do that.’ We want to distribute, not concentrate. That, in itself, empowers resilience. It makes coordination harder, but distributed capabilities, not concentrated capabilities, are going to be really important. And we haven’t talked about it so far, but there’s a really important psychological dimension to resilience. Resilience is a frame of mind, too. It’s not just about capabilities and regulations and what you do. It’s about your attitude, and that comes, in part, from communicating with people so they know what is likely to happen and they know the plan if something bad happens. I don’t think we’ve thought enough about the human dimension of all of these kinds of threats. I’m really struck, as we’re sitting here during COVID, by how many health officials are saying we didn’t really understand why so many people might resist vaccination or resist masking or feel the way they do. This is about human behavior, and we have to take human behavior fundamentally into account when we think about policy, including resilience.

CTC: I [Brian Fishman] was doing research for you at UCLA as an undergraduate student. One of the questions that you asked then [centered on how] the public tends to focus on the intelligence community when they get something big wrong. And you were trying to understand whether or not they were actually wrong more than the private sector, essentially. Did they fail more than the private sector? What are your thoughts about that today, the IC versus the private sector? You’ve spent a lot of time now in and around Silicon Valley; you’ve seen successes and failures there. And a lot of time around the IC, and you’ve seen successes and failures there. How do you think about that?

Zegart: I have been beaten up a lot by my friends inside the community because I focus on failures and not on successes. Fair criticism. But of course, I say, ‘Your successes are silent, and your failures are public, so how can I get a representative sample? Hard for me to do from the outside.’ I did spend a lot of time in my new book on the bin Ladin operation because I felt like as a researcher, I’m sampling on the dependent variable too much. I’m looking only at failures and what leads to failures. I didn’t have the ability to look enough at successes and what led to success so that I can actually examine the two in context. In the book, I spent a lot of time on the bin Ladin operation and what led to success in that case because we have a lot that’s in the public domain about it. I think it’s a remarkable story, actually. And I think the hero of the story in the bin Ladin operation is the ability of the intelligence community to jettison its analytic assumptions. To find him—and Leon Panetta and Jeremy Bash have written about this[8]—the analysts had to actually throw out every assumption they’d been working under about bin Ladin: that he would likely be in a rural area holed up in the mountains somewhere, that he would be surrounded by lots of security, that he wouldn’t be with his family. All of those things turned out not to be true. There were good reasons to have those assumptions, but they had to then throw them away. Think about how hard it is for us to get rid of our confirmation bias and all the things that normally focus us on analytic success. They had to throw them out to actually find him. I think that’s a remarkable accomplishment. So that tells me that the analytic ingenuity inside the community is really something. It doesn’t always work, but there’s a real self-analysis there.

To get to your question, Brian—is the private sector better than the intelligence community—I don’t know the answer, but I am absolutely certain of the fact that the intelligence community is more reflective about its successes and failures, lessons learned, and causal inference than the private sector is. Absolutely more reflective, systematically reflective, willing to acknowledge failure because that’s part of the business, whereas I think in the private sector, it tends to be either ‘I’m only going to selectively look at my successes and talk about what led to them,’ or I’m going to say, ‘Failure is a part of being successful. I’m going to kind of discount the failure as the price of admission.’ So I couldn’t say what the hit rate is, but I think the process of examining the hit rate is light years better in the intelligence community in general than it is in the private sector.

CTC: When you look out over the near- to mid-term horizon, what are the primary threats that have the potential to intersect with terrorism topics that you’re most concerned about and why?

Zegart: As I think about national security threats to the country, the biggest concern I have is us. Our polarization, our division, the threat of violence in our country—because 68 percent of Republicans actually believe that Joe Biden was not legitimately elected president[9]—the use of violence on January 6th: I really worry that our biggest national security threat is the lack of trust and the polarization of our society, and the undermining of our foundational democracy. We cannot outcompete China if we can’t work in unity as a country, and I think if we can work in unity as a country, we can handle any challenge. CTC

Substantive Notes
[a] Editor’s Note: Sue Gordon served as the Principal Deputy Director of National Intelligence from August 2017 to August 2019.

[b] Editor’s Note: “Kessel Run is the operational name for Air Force Life Cycle Management Center (AFLCMC)’s Detachment 12. Its mission is to deliver combat capabilities warfighters love and revolutionize the Air Force software acquisition process.” “The Kessel Run Mission,” Kessel Run website at kesselrun.af.mil

[c] Editor’s Note: “AlphaGo is the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and is arguably the strongest Go player in history.” See the AlphaGo page on DeepMind’s website at deepmind.com

Citations
[1] For more, see the National Security Commission on Artificial Intelligence’s website at nscai.gov

[2] Editor’s Note: Graham Allison, Kevin Klyman, Karina Barbesino, and Hugo Yen, “The Great Tech Rivalry: China vs the U.S.,” Belfer Center for Science and International Affairs, Harvard Kennedy School, December 7, 2021.

[3] For examples of foreign language programs, some of which are funded by the U.S. government, see Emily Wood, “11 Essential Scholarships for Foreign Language Study,” GoAbroad.com, last updated January 20, 2021.

[4] For some background, see Edmund L. Andrews, “Re-Imagining Espionage in the Era of Artificial Intelligence,” Human-Centered Artificial Intelligence, Stanford University, August 17, 2021.

[5] See “Memorandum on Establishment of the Chief Digital and Artificial Intelligence Officer,” Office of the Deputy Secretary of Defense, December 8, 2021.

[6] Editor’s Note: See “Readout from Tech Track II Symposium,” U.S. Department of Defense, December 2, 2021.

[7] Editor’s Note: For background, see Larry Hardesty, “Study finds gender and skin-type bias in commercial artificial-intelligence systems,” MIT News, February 11, 2018.

[8] Editor’s Note: See Leon E. Panetta and Jeremy Bash, “The Former Head of the CIA on Managing the Hunt for Bin Laden,” Harvard Business Review, May 2, 2016.

[9] Editor’s Note: David Byler, “Opinion: Why do some still deny Biden’s 2020 victory? Here’s what the data says,” Washington Post, November 10, 2021; “Competing Visions of America: An Evolving Identity or a Culture Under Attack? Findings from the 2021 American Values Survey,” Public Religion Research Institute, November 1, 2021.
