Lieutenant General John N.T. “Jack” Shanahan is Director of the Joint Artificial Intelligence Center, Office of the Department of Defense Chief Information Officer. General Shanahan is responsible for accelerating the delivery of artificial intelligence-enabled capabilities, scaling the department-wide impact of AI, and synchronizing AI activities to expand joint force advantages.
General Shanahan has served in a variety of assignments, most recently as Director of Defense Intelligence Warfighter Support, Office of the Under Secretary of Defense for Intelligence at the Pentagon. He was also Director of the Algorithmic Warfare Cross-Functional Team (Project Maven), where he led the artificial intelligence pathfinder program charged with accelerating the integration of big data, machine learning, and artificial intelligence.
General Shanahan also served as the Commander, 25th Air Force, Joint Base San Antonio-Lackland, Texas, where he led 30,000 personnel in worldwide intelligence, surveillance, and reconnaissance operations and also served as the Commander of the Service Cryptologic Component. In this capacity, he was responsible to the Director of the National Security Agency and Chief of the Central Security Service as the Air Force’s sole authority for matters involving the conduct of cryptologic activities, including the full spectrum of missions directly related to both tactical warfighting and national-level operations. Prior to these assignments, General Shanahan served as Deputy Director for Global Operations on the Joint Staff.
CTC: The Joint Artificial Intelligence Center (JAIC) is still a fairly new organization. What have been JAIC’s initial accomplishments?
Shanahan: The JAIC is now just over a year old. It’s taken a while to grow, but momentum is building as we bring more people into the organization. We are seeing initial progress across the department in terms of fielding AI [artificial intelligence]-enabled capabilities. Yet we still have a long way to go to bring pilots, prototypes, and pitches across the technology “valley of death” to the point where AI-enabled capabilities are fielded and updated at speed and at scale. Adopting and fielding AI is difficult work, especially for a department that is making the transition from an industrial-age, hardware-centric force to an information-age, software-driven one. Yet it is critically important work. It’s a multi-generational problem requiring a multi-generational solution. It demands the right combination of tactical urgency and strategic patience. Our ongoing projects include predictive maintenance for the H-60 helicopter; humanitarian assistance and disaster relief, or HA/DR, with an initial emphasis on wildfires and flooding; cyber sense-making, focusing on event detection, user activity monitoring, and network mapping; information operations; and intelligent business automation.
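To make the predictive-maintenance use case concrete, here is a minimal sketch of the kind of model such a project might train, written against entirely synthetic data. Every feature name, threshold, and number below is invented for illustration; nothing reflects the actual H-60 program or its data.

```python
# Toy predictive-maintenance classifier on synthetic sensor logs.
# All features and the failure rule are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical sensor log: engine temperature, vibration, hours since overhaul.
X = np.column_stack([
    rng.normal(650, 40, n),   # engine temperature (deg C)
    rng.gamma(2.0, 1.5, n),   # vibration amplitude (arbitrary units)
    rng.uniform(0, 2000, n),  # flight hours since last overhaul
])

# Invented ground truth: hot, high-vibration, high-hour aircraft fail more often.
risk = 0.002 * (X[:, 0] - 650) + 0.4 * X[:, 1] + 0.001 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 3.0).astype(int)  # 1 = maintenance needed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In a real program, the model is the easy part; the hard part is the data pipeline Shanahan describes later in this interview: collecting, cleaning, labeling, and governing the sensor logs the model learns from.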
For the next year, in addition to expanding the scope and scale of the projects we started over the past year, we are beginning to focus on a mission initiative we are calling Joint Warfighting. This is a broad term, of course, but as with all early-stage AI projects, we are working with the military services and combatant commands on bounded, high-priority operational mission requirements. We used the last several months to frame the problems the services and combatant commands want help solving. These include Joint All-Domain Command and Control (JADC2); autonomous ground reconnaissance and surveillance; operations center workflows; accelerated ops-intel fusion and sensor-to-shooter solutions; and targeting. We are very much in the early stages of each of these lines of effort, continuing with user-defined, data-driven workflow analyses to better understand how AI can—or, equally important, cannot—help with a given operational problem.
We are also embarking on a predictive health project with the Defense Innovation Unit (DIU) and the services’ Surgeons General, as well as other key organizations such as the Defense Health Agency and the Department of Veterans Affairs, with several proposed lines of effort, including health records analysis, medical imagery classification, and PTSD [Post-Traumatic Stress Disorder] mitigation/suicide prevention.
Our other major effort, one that is instrumental to our AI Center of Excellence concept, is what we are calling the Joint Common Foundation, or JCF. The JCF will be a platform—think platform as a service, residing on top of an enterprise cloud infrastructure as a service—that will provide access to data, tools, environments, libraries, and other certified platforms, enabling software and AI engineers to rapidly develop, evaluate, test, and deploy AI-enabled solutions to warfighters. It is being designed to lower the barriers to entry, democratize access to data, eliminate duplicative efforts, and increase value added for the department.
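As a rough illustration of what “platform as a service” means for an AI developer, the sketch below imagines requesting a managed development environment from a JCF-style platform. The JCF’s real interface is not public; every class, field, and endpoint here is hypothetical.

```python
# Hypothetical sketch of a JCF-style environment request. The real JCF
# interface is not public; every name here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class EnvironmentSpec:
    """Invented request object for a managed AI development environment."""
    project: str
    datasets: list = field(default_factory=list)  # governed, access-controlled data
    tools: list = field(default_factory=list)     # vetted frameworks and libraries
    compute: str = "gpu-small"                    # instance class on the IaaS layer

def provision(spec: EnvironmentSpec) -> dict:
    """Stand-in for the platform call that would validate the request,
    check data-access entitlements, and return connection details."""
    return {
        "project": spec.project,
        "endpoint": f"https://jcf.example.mil/envs/{spec.project}",  # placeholder URL
        "datasets_mounted": spec.datasets,
        "tools_installed": spec.tools,
    }

env = provision(EnvironmentSpec(
    project="predictive-maintenance",
    datasets=["h60-sensor-logs"],   # illustrative dataset name
    tools=["tensorflow", "mlflow"],
))
print(env["endpoint"])
```

The design point is the one Shanahan makes: the platform, not each project team, handles data access controls, vetted tooling, and the underlying cloud compute, which is what lowers barriers to entry and eliminates duplicative effort.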
CTC: What have been some of the key challenges JAIC has faced during its first year? What have been some of the lessons learned so far?
Shanahan: One of our biggest challenges as we continue to stand up the JAIC is what every startup organization goes through: namely, finding the right people—like everyone else in the AI game, we’re in a war for talent—getting stable funding, consolidating our workforce into a single operating location, and so on. We like to think of ourselves as a startup, yet we also have to operate as part of the institutional bureaucracy known as the Department of Defense. That makes for some unique challenges. Though I have to say that we continue to have tremendous support from DoD [Department of Defense] senior leaders and the Congress.
Some other lessons include the importance of problem framing, as I mentioned earlier. It is far too easy to jump to an AI solution before fully understanding the problem you are trying to solve. And far too often, I find that the solutions offered tend to be narrow solutions to a narrow problem, whereas we are seeking more comprehensive answers to a wide range of challenges. Not surprisingly, data management is a perennial challenge in DoD, especially for machine-learning projects. Data is at the heart of every AI project. We have to liberate data across the department. We are addressing challenges related to data collection, data access, data quality, data ownership and control, intellectual property protections, and data-related policies and standards. After working on Project Maven[a] for two years and with the JAIC for the last year, I am now convinced we have to divide our challenges into two major categories: legacy data, systems, and workflows—in essence, we have to play the data as it lies; and developing the AI-ready force of the future, in which data is treated as much a part of a weapon system’s lifecycle management as are cost, schedule, performance, and security. We still have a long way to go.
A third lesson is that DoD’s AI adoption capacity is limited by the pace of broader digital modernization. Along with enterprise cloud; cyber; and command, control, and communications (C3), AI is one of DoD Chief Information Officer Dana Deasy’s four digital modernization pillars. These four pillars are going to converge in such a way that digital modernization and warfighting modernization become synonymous. As we look to a future of informatized warfare, pitting algorithm against algorithm and featuring widespread use of autonomous systems, we need to help design operating concepts that harness AI, 5G, enterprise cloud, and robotics. This critical path from a hardware-centric to an all-domain digital force will shape the department for decades to come.
CTC: How does JAIC approach partnerships? Can you provide any examples of some of JAIC’s partnerships, and what these partnerships or collaborative efforts look like in practice?
Shanahan: Even as an organization dedicated to AI, we know we will never corner the market on AI expertise or experience. For that reason, our partnerships with academia and private industry are paramount to the JAIC’s success in fielding AI in the DoD at scale. We continue to have robust dialogues with thought leaders in commercial industry and academia, along with our allies and partners, to help inform our approaches to AI principles and adoption. Our strategic engagement team, working with partners such as the Defense Innovation Unit and the National Security Innovation Network (NSIN), is developing industry outreach initiatives to engage with technology companies of all sizes and scopes. We’ve also facilitated a series of hackathons and technology challenges to solicit ideas from academia and industry on the AI and ML [machine-learning] solutions that we are working on through our mission initiatives. For instance, in September, the JAIC partnered with the NSIN to facilitate a hackathon at the University of Michigan’s School of Aerospace and Engineering, where military aircraft maintenance personnel with decades of real-world experience worked alongside students and industry professionals to develop new ideas for incorporating AI-enabled solutions into aircraft preventive maintenance.
We’ve also been active in reaching out to our international allies and partners to discuss national security approaches to AI and to offer perspectives on frameworks for developing common AI governing principles. All of these outreach activities enable the JAIC to harness innovation and provide leadership that strengthens our role as the DoD AI Center of Excellence and focal point for AI adoption.
CTC: When it comes to AI and non-state threats, what does the danger look like in your view, and how can the United States government use AI/ML to try to both anticipate and manage future use or incorporation of AI by terrorist entities?
Shanahan: We acknowledge the dangers presented by the proliferation of weapons and emerging technologies. The DoD works very closely with the Department of State and other international partners to strengthen protocols and frameworks to prevent and deter weapons proliferation efforts of non-state actors and adversaries alike. As AI/ML technologies mature and adoption becomes more ubiquitous, the DoD will continue to work with other agencies and international partners to encourage international norms and provide meaningful leadership to guide the responsible and principled adoption and use of AI-enabled military capabilities.
In general, the barriers to entry for AI/ML are quite low. Unlike most big weapon systems of the past, which were dominated by the Defense Industrial Base, many if not most AI-enabled capabilities start in commercial industry. We are seeing a true democratization of technologies that, like so many other emerging technologies in history, are as capable of being used for bad as they are for good. It is going to be increasingly difficult to prevent the use of AI-enabled capabilities by those who are intent on causing harm, but you can expect that the department will continue with a concerted effort to stymie those who wish to harm the United States and our allies and partners.
CTC: The use of AI tools and technologies both to aid misinformation and disinformation campaigns and to detect and prevent them has been quite well documented. Terrorist use of these types of approaches has not yet arrived, at least not on a consistent or broad scale. How concerned are you about this issue and its potential?
Shanahan: I am very concerned about it. I’ve spent a considerable amount of time in the information operations business over the course of my career. In fact, information operations was one of my core responsibilities when I served as the Deputy Director for Global Operations on the Joint Staff six years ago. Hence, I am very well aware of the power of information, for good and for bad. The profusion of relatively low-cost, leading-edge information-related capabilities and the advancement of AI-enabled technologies such as generative adversarial networks, or GANs, have made it possible for almost anyone—from a state actor to a lone-wolf terrorist—to use information as a precision weapon. What was viewed largely as an annoyance a few years ago has now become a serious threat to national security. Even more alarming, it’s almost impossible to predict the exponential growth of these information-as-a-weapon capabilities over the next few years.
We are seeing the rapid proliferation of AI-enabled technologies that are leading to high-fidelity “Deepfakes,” or the creation of forged text, audio, and video media. While there are a number of methods available for detecting these forgeries, the level of realism is making it harder to keep up. This is becoming a cat-and-mouse game, similar to what we’ve seen historically in electronic warfare and now cyber—action, counter-action, counter-counter-action, and so on. DARPA [Defense Advanced Research Projects Agency] has a project underway to help detect Deepfakes, and there are other projects elsewhere across DoD and the intelligence community aimed at detecting counterfeit text, audio, and video. But as fast as we can come up with solutions, the other side will find creative new ways of defeating them.
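The adversarial dynamic Shanahan describes is literal in a GAN: a generator and a discriminator are trained against each other, each move answered by a counter-move. The toy PyTorch sketch below shows that mechanism on one-dimensional data standing in for real media; it is a minimal illustration under those simplifying assumptions, not a representation of any DoD or DARPA system.

```python
# Minimal GAN loop: a "forger" (generator) and a "detector" (discriminator)
# trained against each other on toy 1-D data standing in for real media.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # forger
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "authentic" samples: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))           # forged samples from random noise

    # Detector's move: learn to separate authentic from forged.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Forger's counter-move: produce forgeries the updated detector accepts.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    print(f"forged sample mean ~ {G(torch.randn(1000, 8)).mean().item():.2f} (target 3.0)")
```

The same structure explains why detection is a moving target: a detector good enough to flag today’s forgeries becomes, in effect, the training signal for tomorrow’s generator.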
A holistic response to state and non-state actors’ use of Deepfake technology for disinformation, social engineering, or other attacks will require a coordinated effort among DoD organizations with capabilities and authorities spanning AI technology, cybersecurity, criminal investigation, information operations, and public affairs, along with coordination with our interagency partners. This will require a long-term, sustained effort to monitor, engage, and deter the growing threat of disinformation campaigns enabled by Deepfake technology. We’re still working through the JAIC’s role in this, but there is no question that we need to be part of the solution, given that AI/ML is now playing a substantial role in the creation of these Deepfakes.
CTC: A core, stated area of JAIC’s focus has been to develop AI principles and to lead in the area of military AI ethics and safety. What type of work has JAIC been doing in this sphere?
Shanahan: AI-enabled capabilities will change much about the battlefield of the future, but nothing will change America’s steadfast record of honorable military service or our military’s commitment to lawful and ethical behavior. Our focus on AI follows in the path of our long history of investing in technology to preserve our most precious asset—our people—and to limit the risks to civilians and the potential for collateral damage. All of the AI-enabled systems we field will comply with the law of war, international humanitarian law, and rules of engagement. Moreover, we will take into account the safe, lawful, and ethical use of AI-enabled capabilities at every step in the AI fielding pipeline—from establishing requirements on the front end to operational employment on the back end—with rigorous validation, verification, test, and evaluation at the appropriate points in the pipeline.
We are intensely focused on accelerating the adoption of AI across the DoD, but we have no interest in using unsafe or insufficiently tested technology for mission-critical applications. The DoD’s history clearly demonstrates our track record of investing in technologies that reduce risks to our warfighters, our allies, and non-combatants even as we seek to increase our combat effectiveness. Over the last 50 years, the DoD has invested literally hundreds of billions of dollars to make our weapons more precise, safer, and more effective. Our compliance with the law of war and DoD regulations is a part of the development of every system, from setting requirements through ongoing operational testing and fielding. The DoD has a methodical testing and evaluation process for technology development; we do not field technologies for mission-critical applications before we have substantial evidence that they will work as intended. Moreover, the DoD is spending considerable time evaluating the use of AI-enabled applications in lower-consequence operations (such as ISR [intelligence, surveillance, and reconnaissance], HA/DR, and predictive maintenance) before we apply AI to higher-consequence and higher-risk warfighting operations. These initial lower-risk mission applications have allowed us to absorb valuable lessons about best practices in AI program management without disrupting existing operations or risking lives. In considering the application of AI-enabled capabilities across DoD, we will consider a number of core ethical principles to ensure that our use of AI systems is responsible, equitable, traceable, reliable, and governable. In summary, if we cannot perform a mission using AI in an ethical and responsible manner, then we will not use AI for that mission.
CTC: When near-peer competitors appear to be comfortable with AI uses and approaches that the United States is not, how does the United States ensure that it leads on both the ethics and capabilities fronts—and do so when other states might be willing to take AI steps or actions that the United States is not prepared to take due to ethical considerations?
Shanahan: Authoritarian regimes inherently possess some advantages in civil-military cooperation. They often have fewer barriers to fielding technology, without the kind of rigorous testing and evaluation processes or adherence to ethical principles considered core to DoD operations. This should not mean that DoD needs to change its approach to fielding AI-enabled capabilities. Instead, it means we must call out our adversaries when they fail to abide by ethical principles. The temporary advantages accrued by those who do not abide by these principles will, in the long run, be outweighed by the enduring advantages accrued by those who value safe, responsible, and ethical approaches to fielding AI-enabled technologies. Over time, our strong approach to AI ethics, combined with a sustained focus on fielding AI-enabled capabilities to the right places, at the right time, and in a safe and effective manner, will provide the U.S. military a strategic advantage over our adversaries.
Substantive Notes
[a] Editor’s note: For background on Project Maven, see Deputy Secretary of Defense Robert Work, “Memorandum: Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven),” Department of Defense, April 26, 2017.