Human Error: Living With the Weakest Link

Counterintuitively, the better answer may be to give users fewer warnings, not more. A warning that users see too frequently is like the boy who cried wolf: users become so accustomed to ignoring it that they barely notice it when it truly matters.

As our software systems grow more complex, it is high time they developed a correspondingly richer model of the user. For a brand-new user, there may be many risky actions worth warning about. But by the time a user has become adept with a system, warnings that were initially useful become nuisances to be ignored, and the habit of ignoring them eventually causes the user to overlook the occasional truly important warning scattered among the rest.

For example, if I tell a system to delete a whole directory full of files, and I’ve never done anything like that before, a warning is probably in order. If I’ve been using the system for years and have deleted directories often in the past with no bad consequences, the warning may do more harm—by desensitizing me to warnings—than good.
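To make the idea concrete, here is a minimal sketch, in Python, of what such an adaptive warning policy might look like. Everything in it (the WarningPolicy class, the threshold of five incident-free uses, the notion of "recording an outcome") is a hypothetical illustration, not a description of any real system.

from collections import defaultdict

class WarningPolicy:
    """Decide whether to warn a user before a risky action,
    based on that user's own history with the action."""

    def __init__(self, required_safe_uses=5):
        # Hypothetical threshold: after this many incident-free uses,
        # the warning is suppressed for that user and action.
        self.required_safe_uses = required_safe_uses
        self.safe_uses = defaultdict(int)  # (user, action) -> count of safe uses
        self.had_incident = set()          # (user, action) pairs with a past mishap

    def should_warn(self, user, action):
        key = (user, action)
        # Always warn if the user has ever had a bad outcome with this action,
        # or has not yet built up enough incident-free history.
        if key in self.had_incident:
            return True
        return self.safe_uses[key] < self.required_safe_uses

    def record_outcome(self, user, action, ok):
        key = (user, action)
        if ok:
            self.safe_uses[key] += 1
        else:
            # One bad outcome resets trust and re-enables the warning.
            self.had_incident.add(key)
            self.safe_uses[key] = 0

policy = WarningPolicy()
if policy.should_warn("alice", "delete-directory"):
    print("Are you sure you want to delete this directory?")
# ...later, once the deletion completes and nothing is reported lost:
policy.record_outcome("alice", "delete-directory", ok=True)

The point of the sketch is simply that the decision to warn is keyed to an individual user's track record with a specific action, rather than to a one-size-fits-all rule.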

For this kind of user modeling to work, it is probably important that it be completely automatic. To the extent that a human being decides who is a sophisticated user and who is not, the decision becomes a political one, with possible career-affecting consequences, subject to all the interpersonal drama of a performance review. Such a system could become an administrative nightmare, sapping team morale and performance as everyone settles for the lowest common denominator in an effort to avoid risk and doubt.

It seems much more likely to be effective if the system itself decides who is a sophisticated user and matches its warnings and other constraints to the capabilities it ascribes to that user. Paradoxically, people are more likely to accept being labeled and categorized by an “impartial” computer than by a human being with whom they are already enmeshed in a complex web of relationships and dependencies.

Striking a Balance

The future of managing the risks of human behavior will be a balancing act: on one hand, we need technology that can preempt human error, and in some cases even replace human labor wholesale with automated processes; on the other, we must avoid scenarios where users feel they are living and working in a robotic “nanny state” in which they have little control.

At the end of the day, while human error is the weakest link in an organization’s security, human creativity and ingenuity are still the critical factors in overall security—and we can’t risk disrupting that. If we can provide technology that empowers users to do their best work while helping them to avoid the small errors that can have large consequences, then we will strike a balance that offers the best of both worlds—security and freedom—in the workplace.

Author: Nathaniel Borenstein

Nathaniel Borenstein is chief scientist for cloud-based e-mail management company Mimecast. At Mimecast, he is responsible for driving the company’s product evolution and technological innovation. Dr. Borenstein is the co-creator of the Multipurpose Internet Mail Extensions (MIME) e-mail standard and developer of the Andrew Mail System, metamail software, and the Safe-Tcl programming language. Previously, Dr. Borenstein worked as an IBM Distinguished Engineer, responsible for research and standards strategy for the Lotus brand, and as a faculty member at the University of Michigan and Carnegie Mellon University. He also founded two successful Internet cloud service startups: First Virtual Holdings, the first Internet payment system; and NetPOS, the first Internet-centric point-of-sale system.