Part 2: The Pearl Harbor of Cyber Attacks


In early January, we published “The Pearl Harbor of Cyber Attacks Goes Unnoticed”. The biggest espionage attack in US history went unnoticed for the better part of 2020…if there was a better part.

And the Hits Keep Coming

Over the weekend, the Wall Street Journal reported new Russian-backed campaigns to undermine Americans’ confidence in COVID vaccines. President Biden immediately responded that the US would counter with all the tools we have. Ummm, ‘scuze me. Do we actually have the right stuff, a.k.a. the right tools? Is it time for an AI moonshot?

Steve Blank, tech entrepreneur and a father of the Lean Startup methodology, has been deeply involved with cyber technology. In his recent blog post “Software Once Led Us to the Precipice of Nuclear War. What Will AI Do?” he reminds us of a time when the two superpowers, the US and the Soviet Union, were unaware that they stood on the brink of an accidental nuclear war. It is a disturbing chapter of history that most of us who grew up during that period knew nothing about. According to Santa Cruzian Gregg Herken, author of Brotherhood of the Bomb, purchasing too much steak from Safeway could have triggered a nuclear holocaust:

The Russians had reason to be worried. As they say, “even paranoids have enemies”. And the U.S. does have a first-strike plan, and considered activating it in 1961 (on that note, the distinction between “preventive war” and “preemptive attack” was removed from the DOD’s Dictionary of Military Terms, the Pentagon’s “Bible”, during the Trump years). Among the assumptions that the KGB made as an indicator of a coming attack: Americans would start buying more steak at Safeway. You can’t make this stuff up!

AI Enters the Scene

AI is already being used for cyber attacks. In fact, AI is likely to soon be improving AI programs faster than human coders can. Think Skynet. How will AI be used to shape policy, and to determine when to prevent or preempt? Steve Blank asks the foundational question: what could happen when we start using Artificial Intelligence and Machine Learning to shape policy? The remaining content below is an excerpt from Blank’s blog post “Software Once Led Us to the Precipice of Nuclear War. What Will AI Do?” We recommend you read the entire post.

A Cautionary Tale

Forty years ago, the Soviet intelligence program RYAN attempted to automate military policy and potential actions. But in the end, RYAN failed to actually predict U.S. intent. Instead, it reinforced existing fears and accidentally created its own paranoia.

While the intelligence lessons of RYAN and Able Archer have been rehashed for decades, no one is asking, as our own AI initiatives scale, what RYAN and Able Archer should have taught us about building predictive models, and what happens when our adversaries rely on them.

Which leads to the question: What could happen when we start using Artificial Intelligence and Machine Learning to shape policy?

  • Will AI/ML actually predict human intent?

  • What happens when the machines start seeing patterns that aren’t there? (A toy sketch after this list shows how easily that can happen.)

  • How do we ensure that unintentional bias doesn’t creep into the model?

  • How much will we depend on an AI that can’t explain how it reached its decision?

  • How do we deconflict and deescalate machine-driven conclusions? Where and when should the humans be in the loop?

  • How do we ensure foreign actors can’t pollute the datasets and sensors used to drive the model and/or steal the model and look for its vulnerabilities?

  • How do we ensure that those with a specific agenda (e.g., Andropov, chairman of the KGB) don’t bias the data?

  • How do we ensure we aren’t using a software program that misleads our own leaders?
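A brief aside from us, not from Blank’s post: the “patterns that aren’t there” problem is easy to demonstrate. The hypothetical Python sketch below generates hundreds of purely random “warning indicators” (think weekly Safeway steak purchases) and shows that some of them will correlate with any set of labels by pure chance. Every name and number in it is an illustrative assumption, not a real model.

```python
# Hypothetical sketch: how an indicator-scanning system "finds" patterns
# that aren't there. All data here is pure noise by construction.
import numpy as np

rng = np.random.default_rng(0)

n_weeks = 100       # observations (e.g., weeks of collection)
n_indicators = 500  # candidate warning indicators, all random noise

X = rng.normal(size=(n_weeks, n_indicators))  # noise "indicators"
y = rng.integers(0, 2, size=n_weeks)          # arbitrary "attack prep" labels

# Correlate every indicator with the labels and pick the scariest one.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_indicators)])
best = np.argmax(np.abs(corrs))

print(f"Strongest 'warning indicator': #{best}, correlation {corrs[best]:+.2f}")
# With 500 noise indicators over 100 weeks, correlations of |r| > 0.25
# routinely appear even though no indicator carries any signal at all.
```

A RYAN-style system that scans enough indicators for alarming signals will always find some; that is a property of the math, not evidence of intent.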

The somewhat-comforting news is that others have been thinking about these problems for a while. In 2020, the Defense Department formally adopted five AI ethical principles recommended by the Defense Innovation Board for the development of artificial intelligence capabilities: AI projects need to be Responsible, Equitable, Traceable, Reliable and Governable. The Joint Artificial Intelligence Center (JAIC) appointed a head of ethics policy to translate these principles into practice. Under JAIC’s 2.0 mission, it is no longer the sole developer of AI projects but instead provides services and common software platforms. Now it’s up to the JAIC ethics front office to ensure that the hundreds of mission areas and contractors across the DoD adhere to these standards.

Here’s hoping they all remember the lessons of RYAN.

Lessons Learned

  • RYAN amplified the paranoia the Soviet leadership already had

  • The assumptions and beliefs of people who create the software shape the outcomes

  • Using data to model an adversary’s potential actions is limited by your ability to model its leadership’s intent

  • Your planning and world view are almost guaranteed not to be the same as those of your adversary

  • Having an overwhelming military advantage may force an adversary into a corner, where they may act in ways that seem irrational

  • Responsible, Equitable, Traceable, Reliable and Governable are great aspirational goals