Human Error: Living With the Weakest Link

“We have met the enemy, and he is us.”
– Walt Kelly’s Pogo

Computer security breaches have become so common as to seem like a force of nature we can’t stop or control, like hurricanes or epidemics. After each one, experts scramble to plug holes, rewrite security plans, and explain at length why that particular problem will never happen again. We want to believe that with just a few more bug fixes, our systems will be truly secure.

Unfortunately, perfect security will elude us for as long as human beings are involved, because humans—for all our greatness—are imperfect. From the data entry clerk to the Director of Security to the CEO, everyone makes mistakes, and sometimes those mistakes result in security breaches, either immediately or long after the original mistakes were made.

The truth is, our security systems are improving steadily, and they tend to get better after each breach. But even if we could create perfect security technology, human nature would not fundamentally change. Sometimes we’re fooled by social engineering attacks (such as spear phishing), sometimes we misunderstand what we’re supposed to do when confronted with a cyber threat, and sometimes we just plain mess up. Software can be more or less perfected, but human behavior cannot. In fact, the more complex, stressful, or fast-paced our working environments, the more likely we are to make mistakes; this “human factor” is well documented in industries that depend on human performance for safety, such as aviation.

Given that human error is inevitable, we need to supplement our important efforts to educate users about security with more explicit plans for handling the next security-threatening human error. Just as we can prepare for the next tsunami by building higher sea walls and zoning to discourage construction in flood-prone areas, we can anticipate the ways users will inevitably err and plan around them.

Pre-Empting Human Behavior

One thing we can do is deflect and redirect errors to where they will do the least harm. Operators in nuclear power plants have often replaced the generic-looking handles on particularly hazardous switches with beer tap handles or other objects that stand out and warn workers that this switch is especially dangerous. This may not decrease the likelihood of a worker throwing the wrong switch, but it does decrease the likelihood of throwing the worst possible switch, and it helps us outwit our own tired, stressed, or panicked subconscious, which might otherwise throw it without thinking.
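In software, the analogous move is to make the most destructive operation look and feel different from routine ones. The sketch below (written in Python, with hypothetical function names and a made-up confirmation phrase, not drawn from any particular system) illustrates one common form of the pattern: a dangerous action that refuses to run until the operator types an explicit acknowledgement, while routine actions keep their ordinary, low-friction interface.

    # Hypothetical sketch: the software equivalent of the beer tap handle.
    # The most destructive operation gets a deliberately distinctive,
    # higher-friction interface so it cannot be triggered on autopilot.

    def drop_production_database(confirm_phrase: str) -> None:
        """Destroy all production data. Irreversible."""
        # Require the operator to type an exact phrase naming the target,
        # so a tired or distracted admin cannot confirm it out of habit.
        expected = "DROP primary-db"
        if confirm_phrase != expected:
            raise ValueError(
                f"Refusing to proceed: type {expected!r} to confirm."
            )
        print("Dropping production database...")  # stand-in for the real work

    def restart_web_server() -> None:
        # Routine operations keep their ordinary, low-friction interface.
        print("Restarting web server...")

The point is not that typed confirmations are foolproof, but that the worst possible mistake should never share an interface with everyday work.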

Paradoxically, an overall system is often safer if we

Author: Nathaniel Borenstein

Nathaniel Borenstein is chief scientist for cloud-based e-mail management company Mimecast. At Mimecast, he is responsible for driving the company’s product evolution and technological innovation. Dr. Borenstein is the co-creator of the Multipurpose Internet Mail Extensions (MIME) e-mail standard and developer of the Andrew Mail System, metamail software, and the Safe-Tcl programming language. Previously, Dr. Borenstein worked as an IBM Distinguished Engineer, responsible for research and standards strategy for the Lotus brand, and as a faculty member at the University of Michigan and Carnegie Mellon University. He also founded two successful Internet cloud service startups: First Virtual Holdings, the first Internet payment system; and NetPOS, the first Internet-centric point-of-sale system.