Recently, I found myself sitting in a large room surrounded by a bunch of really smart people. Policymakers, researchers, technologists, and practitioners had gathered to hear an invited speaker discuss the future of cybersecurity. As she concluded, one key refrain clearly resonated through the room, setting off a wave of heads nodding in agreement: Human beings, and their behavior, are the weakest link in cybersecurity. This room was the scene of my first cybersecurity conference, and since that day, as I have traveled across the country from gathering to gathering talking to other experts in the field, it has dawned on me that this was not simply a refrain — it was a clarion call. It was also the reason for my entry into cybersecurity.
Human behavior is a specialty of mine. The organization I’m part of, ideas42, is a nonprofit behavioral design lab, and we focus on applying the theories of behavioral economics and psychology to numerous behavioral challenges. Over the past year, a group of us took a hard look at how behavioral science could be applied to challenges in cybersecurity. We quickly realized that a lot of the stickiest problems experts talk about — failing to update computer software, creating weak passwords, succumbing to phishing attacks, clicking on bad links — are behavioral in nature, and occur because the software and hardware we all use on a regular basis isn’t designed with human psychology in mind.
Take updates, for instance. Security professionals frequently emphasize that applying security updates in a timely manner is one of the most important security measures anyone can take. Many operating systems even prompt users to install updates as soon as they are ready. Yet we often find that despite the ease and importance of updating, many users procrastinate on this critical step. Why? Part of the problem is that update prompts often come at the wrong time — when the user is preoccupied with something else — and provide the user an easy out in the form of various “remind me later” options. Because of this small design detail, users are much more likely to defer the update, no matter how important it is — how many times have you clicked “remind me tomorrow” before finally clicking “update now”?
How might we get around these behavioral problems? Well, one insight that came out of our formative research is to simply slow the user down. So much technological advancement is about speed — faster processors, higher upload and download speeds, more responsive user experiences. The problem is that this speed pushes users into a setting where we act reactively and automatically rather than deliberately. While we might be making a conscious decision to defer an update to a later date, we could just as easily be clicking “remind me later” out of force of habit. The same could be said for when we blow through browser warnings, download email attachments before thinking about who the sender is, or quickly enter the same old username and password we use for everything when we sign up for a new web service. By slowing the user down during these critical decision and action steps, it may be possible to move users from thinking automatically to thinking more deliberately, making it easier for them to consider consequences and modify their behavior accordingly.
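To make the idea concrete, here is a minimal sketch of a “friction” prompt for a risky action, written for a command-line setting. The function name, the default delay, and the injectable `input_fn`/`sleep_fn` parameters are all hypothetical choices for illustration, not any particular product’s design; the point is simply that the confirmation cannot be completed on autopilot because the user must pause and then type an explicit answer.

```python
import time

def confirm_risky_action(prompt, delay_seconds=3, input_fn=input, sleep_fn=time.sleep):
    """Insert a short mandatory pause before accepting a risky choice,
    nudging the user from automatic to deliberate thinking.

    input_fn and sleep_fn are injectable so the sketch can be tested
    without real waiting or a real terminal.
    """
    # During this window a real UI might grey out the "proceed" button.
    sleep_fn(delay_seconds)
    answer = input_fn(f"{prompt} Type 'yes' to proceed: ")
    # Requiring a typed word (not a one-click "OK") adds a second,
    # deliberate step to the decision.
    return answer.strip().lower() == "yes"
```

A designer could tune the delay and the required response to the severity of the action — a longer pause and a typed phrase for disabling antivirus, a shorter one for ignoring a certificate warning.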
However, usable security experts tend to disagree with this perspective. Instead, they advocate for creating user experiences that offload these key decisions from the user onto algorithms and systems. There’s a lot to be said for this approach. Satisfaction is a key concern if we hope to get users to adopt services that support improved security, and if a service requires the user to slow down or stop, getting in the way of their productivity, it’s very likely the user will find some way to bypass it — what good is a security system if users don’t want to adopt it? Additionally, in some instances, the user’s decision to ignore an update prompt so that they can keep plugging away at a piece of work is in their economic best interest — in the moment, the benefits of continued productivity far outweigh the risk they take in not updating immediately.
Developing usable security remains a work in progress. Despite numerous suggestions about ways to improve authentication systems, browser warnings, and phishing and malware detection, users are still left with critical security decisions, whether or not they know it. Take, for example, phishing and malware detection. A usable security expert might say that in the best-case scenario an email client will be able to automatically detect and filter out most phishing attacks, and surface only those emails that the system itself has a hard time categorizing. The reality is that computers are still pretty dumb. While machine learning and AI have come a long way, computers are still inferior to humans when it comes to identification tasks. Google’s neural net still has trouble differentiating a brown, curly-haired dog from a piece of fried chicken. What this means in practice is that end users are still occasionally required to make a determination about whether an email, webpage, or attachment is legitimate. While low false negative rates are better than high false negative rates, it only takes one really bad misclassified email, out of the hundreds we all skim through each day, to lead to a catastrophic attack.
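The “only bring hard cases to the user” idea can be sketched as a simple triage rule over a classifier’s confidence score. The function name and the two thresholds below are hypothetical illustrations, not values from any real mail filter; the point is that confident classifications are handled automatically, and only the uncertain middle band reaches the user as a decision.

```python
def triage_email(phishing_probability, block_above=0.95, deliver_below=0.05):
    """Route an email based on a classifier's phishing probability.

    High-confidence phishing is quarantined, high-confidence legitimate
    mail is delivered, and only genuinely ambiguous cases are escalated
    to the user for a judgment call.
    """
    if phishing_probability >= block_above:
        return "quarantine"   # system is confident it's phishing
    if phishing_probability <= deliver_below:
        return "deliver"      # system is confident it's legitimate
    return "ask_user"         # system is unsure; escalate to the human
```

The thresholds encode the trade-off the article describes: widen the middle band and users face more decisions; narrow it and more misclassified emails slip through silently.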
Solving these kinds of problems will require first admitting that we’re still in a technology transition. Until machine learning advances to the degree that we can safely offload these various identification tasks to the hardware and software we use on a regular basis, people still have to use their own judgment to protect themselves from cyber attacks. As designers of software and hardware, our responsibility is to ensure that when users are required to make a decision or take an action themselves, that decision is well facilitated. Instead of giving users an option between updating now and the nebulous, unspecified later, build an interface that prompts the user to commit to a specific date and time for the update to take place — a time when they know they won’t be busy, and their computer will be on. A similar approach can be applied to building better password interfaces and guidance on creating strong passphrases, finding effective ways to make users attend to browser warnings, or helping users identify likely phishing attacks in emails.
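The commitment-based update prompt could look something like the sketch below. The function name and the validation rule are hypothetical; the behavioral point is that the interface accepts only a concrete future time chosen by the user, rather than offering an open-ended “later.”

```python
from datetime import datetime

def schedule_update(chosen_time_iso, now=None):
    """Require a concrete commitment instead of 'remind me later'.

    chosen_time_iso is an ISO 8601 timestamp string, as might come from
    a date/time picker. Returns the scheduled time, rejecting anything
    that isn't a real future moment.
    """
    now = now or datetime.now()
    when = datetime.fromisoformat(chosen_time_iso)
    if when <= now:
        # "Later" must be an actual future time, not a dismissal.
        raise ValueError("Pick a future time for the update.")
    return when
```

Because the user names the time themselves — say, 3 a.m. on a night the machine stays on — the commitment leverages the same psychology that makes “remind me tomorrow” so easy to click, but points it toward actually completing the update.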
While we can’t quite predict what will happen in the future, one thing is certain — until AI and machine learning are developed to the degree that Skynet becomes self-aware, users will inevitably have to exercise some judgment. As designers, it’s our responsibility to help users exhibit better judgment than those dumb, uncreative computers, so, for now, we’re still our own overlords.