Beyond Killer Robots: How Artificial Intelligence Can Improve Resilience in Cyber Space

Image courtesy of orihaus/Flickr. (CC BY 2.0)

This article was originally published by War on the Rocks on 6 September 2018.

Recently, one of us spent a week in China discussing the future of war with a group of American and Chinese academics. Everyone speculated about the role of artificial intelligence (AI), but, surprisingly, many Chinese participants equated AI almost exclusively with armies of killer robots.

Popular imagination and much of current AI scholarship tend to focus, understandably, on the more glamorous aspects of AI — the stuff of science fiction and the Terminator movies. While lethal autonomous weapons have been a hot topic in recent years, they are only one aspect of war that will change as artificial intelligence becomes more sophisticated. As Michael Horowitz wrote in the Texas National Security Review, AI will not manifest simply as a weapon in its own right; rather, it is an enabler that can support a broad spectrum of technologies. We agree: AI’s most substantial impacts are likely to fly under the radar in discussions about its potential. A more holistic conversation should therefore acknowledge AI’s potential effects in cyber space, not in facilitating cyber attacks, but in improving cyber security at scale through increased asset awareness and minimized source code vulnerabilities.

Opportunities for AI-Supported Cyber Defense

One of the most common refrains about fighting in cyber space is that the offense has the advantage over the defense: The offense only needs to be successful once, while the defense needs to be perfect all the time. Even though this has always been a bit of an exaggeration, we believe artificial intelligence has the potential to dramatically improve cyber defense to help right the offense-defense balance in cyber space.

Much of cyber defense is about situational awareness of one’s own assets. Former White House Cybersecurity Coordinator Rob Joyce said as much in a 2016 presentation at USENIX: “If you really want to protect your network,” he advised, “you have to know your network, including all the devices and technology in it.” A successful attacker will often “know networks better than the people who designed and run them.” With the right combination of data, computing power, and algorithms, artificial intelligence can help defenders gain far greater mastery over their own data and networks, detect anomalous changes (whether from insider threats or from external hackers), and quickly address configuration errors and other vulnerabilities. This will cut down on the hacking opportunities — known as the attack surface — available for adversaries to exploit. In this way, network defenders can focus their resources on the most sophisticated and damaging state-sponsored campaigns.
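The baseline-and-deviation logic this implies can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than any real product: real systems use far richer features and models, but the shape of the approach is the same — learn which devices normally appear on a network and how much traffic they send, then flag unknown devices and statistical outliers.

```python
from statistics import mean, stdev

def build_baseline(observations):
    """Learn a simple traffic baseline from (device_id, bytes_sent)
    pairs collected during a period believed to be clean."""
    known_devices = {dev for dev, _ in observations}
    volumes = [sent for _, sent in observations]
    return {"devices": known_devices,
            "mean": mean(volumes),
            "stdev": stdev(volumes)}

def flag_anomalies(baseline, new_observations, z_threshold=3.0):
    """Flag devices never seen before, and known devices whose traffic
    volume deviates sharply from the learned baseline."""
    alerts = []
    for dev, sent in new_observations:
        if dev not in baseline["devices"]:
            alerts.append((dev, "unknown device"))
        elif abs(sent - baseline["mean"]) > z_threshold * baseline["stdev"]:
            alerts.append((dev, "anomalous traffic volume"))
    return alerts
```

In this sketch, a never-before-seen device or a known server suddenly moving far more data than usual would both surface as alerts, shrinking the portion of the attack surface a defender must reason about manually.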

This is not science fiction: DARPA experimented with this kind of self-healing computer in its 2016 Cyber Grand Challenge. There, teams competed to develop automatic defensive systems capable of detecting software flaws and devising and implementing patches to fix them in real time. By automatically recognizing and repairing flaws, the competing systems not only healed themselves; they did so in seconds, compared with the days such patching had previously taken.

Another cyber defense challenge that artificial intelligence could help mitigate has to do with the prevalence of code reuse. It may come as a surprise to readers that coders don’t always write their own code from scratch. Repositories like GitHub host modular plug-ins that allow coders to piggyback off earlier work by others. As Mark Curphey, the CEO of SourceClear, a startup that focuses on securing open-source code, has said, “Everyone uses open source … Ninety percent of code could be stuff they didn’t create.” While this leads to great efficiencies, eliminating the need to reinvent the wheel, it’s also risky because no one is accountable for the integrity of the code at the core of applications and firmware. No one coder (or the software company they work for) has an incentive to devote resources to the painstaking task of auditing millions of lines of code.
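Part of the auditing burden described above is mechanical, which is why it is a natural target for automation. As a purely illustrative sketch (the advisory list and package names below are invented, not real vulnerability data), a first-pass dependency audit reduces to comparing a project’s pinned components against known-vulnerable versions:

```python
# Hypothetical advisory data: (package, version) pairs with known flaws.
ADVISORIES = {
    ("leftpad", "1.0.0"),
    ("yamlparse", "2.3.1"),
}

def audit_manifest(manifest):
    """manifest: dict mapping package name -> pinned version.
    Returns the packages pinned to a known-vulnerable version."""
    return sorted(pkg for pkg, ver in manifest.items()
                  if (pkg, ver) in ADVISORIES)
```

The lookup itself is trivial; the hard part at scale is building and maintaining the advisory data and tracing reused code back to its origins, which is where machine-assisted analysis earns its keep.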

Artificial intelligence could help companies and governments assess and identify errors in code to shore up defenses in existing apps, firmware, and programs. This can be accomplished through machine learning, an approach to AI in which machines learn from training data to discern subtle deviations from the norm, such as malware. Given the right training data, developers could much more readily go back and clean up previously released versions to reduce the risks of compromise, and could include AI-enabled reviews in the production cycle of new products. It will take time to get code review right: Humans will still be important for identifying errors and determining the best ways to remediate the errors an algorithm detects. But if successful, artificial intelligence could dramatically reduce the prevalence of undiscovered exploits and well-known vulnerabilities.
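A heavily simplified sketch of that idea follows. The feature list, training snippets, and nearest-centroid model are all illustrative assumptions, not a production technique: learn which source-level features correlate with known-vulnerable code, then flag similar new code for human review.

```python
import re

# Illustrative feature set: counts of classically risky C library calls.
RISKY_TOKENS = ["strcpy", "gets", "sprintf", "system", "exec"]

def features(code):
    """Turn a code snippet into a vector of risky-token counts."""
    tokens = re.findall(r"\w+", code)
    return [tokens.count(t) for t in RISKY_TOKENS]

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def train(vulnerable_samples, safe_samples):
    """Learn one centroid per class from labeled training code."""
    return (centroid([features(c) for c in vulnerable_samples]),
            centroid([features(c) for c in safe_samples]))

def classify(model, code):
    """Nearest-centroid decision: closer to the vulnerable class -> review."""
    vuln_c, safe_c = model
    f = features(code)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "review" if dist(vuln_c) < dist(safe_c) else "ok"
```

Even this toy version shows why humans remain in the loop: the model only triages code toward review; deciding whether a flagged pattern is actually exploitable, and how to fix it, still takes an analyst.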

What Washington Can Do

What can U.S. policymakers, specifically those in the military, do to capture AI’s potential advantages for cyber security and other less-than-lethal aspects of future war? The key is implementation. As DARPA’s competition demonstrated, the military already possesses nascent capabilities for creating situational awareness of its own assets and auditing code integrity. The necessary capacity and know-how also exist in the private sector. Companies such as Cylance and Sift Science claim to leverage artificial intelligence — and more specifically, machine learning — to detect fraud and vulnerabilities. The bottleneck lies in acquiring and implementing these advances across the government.

Therefore, the military must continue to fund and integrate commercial research, talent, and companies into government systems. Importantly, the Defense Department is already demonstrating its own situational awareness: The organization previously known as the Defense Innovation Unit Experimental (DIUx), an initially temporary body dedicated to streamlining the government’s adoption of commercial technology, is now a permanent addition. In fact, it was DIUx that awarded a defense contract to ForAllSecure’s MAYHEM Cyber Reasoning System — the winning technology from the 2016 Cyber Grand Challenge. While the 2017 contract is a step in the right direction, a gap remains between commercial and governmental AI-enabled cyber security.

Because cyber security is only one potential application of artificial intelligence, cyber defense is part of a broader discussion about how the Defense Department should be thinking about AI in ways including, but not limited to, killer robots. There are already reports that the Defense Department is creating an AI strategy, which will be a crucial step for getting the entire department and its massive budget on the same page, working toward a few common goals. A critical part of that strategy, as the department knows, is how best to work with private companies that are making huge advances in AI. The reaction at Google when some employees learned about the company’s cooperation with the U.S. military on artificial intelligence should be instructive: Not all technology companies will jump at the opportunity to work with the U.S. military, even if lucrative government contracts are at stake. Some of this, of course, is about market size, as the commercial AI market is far larger. These concerns must be taken seriously in order to bring all of the key AI players, in both the public and private sectors, to the table to create and implement a national AI strategy.

In the meantime, the government cannot afford to let these concerns overshadow the opportunities for AI research in areas beyond autonomous killing. Research on improved cyber security could be just as critical to U.S. national security as any advancement in killer robots.

While a dedicated strategy for AI will help the Department of Defense unify its efforts, the department also needs to infuse artificial intelligence throughout existing capabilities. Despite the shorthand we all use, AI does not refer to a single technology or capability: Different levels of autonomy matter, and there are important differences among types of machine learning and neural networks. The Office of the Secretary of Defense, the Joint Staff, the combatant commands, and the services need to think systematically about how artificial intelligence can better help them achieve their objectives in various realms — including cyber security. They must avoid the conceptual trap of merely asserting that AI is important and that once we can “throw AI at the problem,” we will be better off.

The Defense Department’s AI strategy should also address how to work with allies and partners to improve collective efforts in artificial intelligence and, by extension, collective defense. Today, companies like DeepMind, a subsidiary of Alphabet (Google’s parent company), are making revolutionary advances. Notably, DeepMind is based in the United Kingdom, raising the question of what London’s approach will be, and how allies can best work together to support this kind of research while limiting the potential for misuse. This is not just a U.S. problem, but the United States can be a leader among its allies and partners to promote responsible research and uses in cyber security and beyond.

Finally, the military must work with Congress to ensure that whatever artificial intelligence strategy emerges can be effectively resourced over the long term. If the Defense Department’s AI efforts get caught in the traditional acquisition and procurement machinery that can take 30 years to deliver a fighter jet, the department has already lost the next war. Congress has shown leadership and creativity when it comes to protecting investments in cyber capabilities from death by bureaucracy. It can and should do the same for AI by creating a new Major Force Program — an aggregation of funding lines for a specific mission — for artificial intelligence investments. In this way, the government would provide greater transparency into the budget for AI across the entire department. However it is done, congressional buy-in is crucial to ensure a smart AI strategy can help America prevail against adversaries in both the physical and cyber realms.

For the U.S. government to reap any benefits from artificial intelligence in cyber applications, the Defense Department needs to take an enlightened approach to artificial intelligence more broadly. However, Washington is struggling with how to prioritize, galvanize, and embrace disparate research efforts into artificial intelligence. Chinese officials see this and are charting a more deliberate (although probably less flexible) course with their 2030 plan, though the recent visit to China suggests that their focus remains on lethal applications of AI. Regardless, while the United States, China, and others will naturally seek out applications of AI that enable new and improved forms of power projection, American policymakers should not overlook how AI can shore up American defenses and ultimately improve resilience in cyber space.

About the Authors

Michael Sulmeyer is the Director of the Cyber Security Project at the Harvard Kennedy School’s Belfer Center for Science and International Affairs. 

Kathryn Dura is the Joseph S. Nye, Jr. Intern for the Technology and National Security Program at the Center for a New American Security.

