Risky business: The tradeoff between security and convenience
There were only a few computers in the world in 1954, and computer security, as we know it, was not yet a thing. And yet the tradeoff between convenience—or usability, if you wish—and security was already understood. That's because tradeoffs are not inherently a phenomenon of computers but one of human nature.
That's why, in 1954, Gen. Benjamin W. Chidlaw said at a conference on national security, "Simply put, it is possible to have convenience if you want to tolerate insecurity, but if you want security, you must be prepared for inconvenience."
The ultimate goal of security must be to manage risk. Therefore, one cannot design effective security without first considering the human element.
Who decides what's important? Everyone does
Security professionals have long known that people determine the level of risk they feel comfortable with and act accordingly. It's become axiomatic that people will simply refuse to comply with, turn away from, or seek ways to get around security measures they find too obtrusive, complex, or unnecessary.
People prioritize what's most important to them. While they may lock their front doors when they leave their homes, for example, they won't lock the internal doors to their bedrooms or their kitchen. People weigh their need for security by balancing the risk involved with the complexity—which always translates as the inconvenience—that solving for that risk requires. So, while locking their interior doors might add to their protection, the inconvenience doesn't make it worthwhile.
A survey taken at the height of the pandemic in July 2020 found that more than one in four bank customers said they ended an online banking transaction over an issue with the bank's access security measures. Worse, nearly half of those customers said they left the financial institution altogether as a result. That's a significant loss in revenue and business opportunities for those banks.
The security-convenience tradeoffs for customer-facing technology, as in the bank survey, are not identical to those for employee computing, but they have many similarities. You can place more burdens on employees, but you still need to be concerned about usability or you'll end up incentivizing bad behaviors. Customers may simply take their business elsewhere, so for them the usability issue is starker.
From a behavioral point of view, the modern goal of enterprise security is to provide as much protection as the organization can provide in as transparent a manner as possible to the user. This leads to what cybersecurity expert Timothy Ferrell, distinguished technologist for security, risk, and compliance at Hewlett Packard Enterprise, calls his law of conservation of complexity (a derivative of Tesler's Law): "Complexity is usually neither created nor reduced. It's only moved to a different place."
What Ferrell means is that organizations look to move the complexity, and with it the inconvenience, away from their users. Digging a little deeper, it means using technology to make that complexity invisible by shifting it to an automated platform that handles it. The result allows users to continue working as they normally would while the security system operates in the background, as transparently and frictionlessly as possible.
The flip side of this model points in the opposite direction: Organizations want their security systems to be as complex and inconvenient as possible for potential cybercriminals.
Bringing these two goals together creates an interesting and delicate balancing act, which is the most challenging part of modern cybersecurity. Stated quite simply, the challenge is, how can organizations make security as seamless and as transparent as possible for users without rendering it ineffective?
What makes it a challenge is that if users experience security measures as too difficult, they will find a way around them. "It's like a ball of mercury," says Ferrell. "If you put an obstacle in their way, users will find a way around it."
So, where should enterprises draw the line?
Driving bad behaviors
The need to implement a truly effective security model that doesn't present itself as an obstacle goes back to making security as frictionless and transparent as possible.
Often, organizations make security an unnecessary burden by not keeping up with best practices. For instance, it's been several years since the National Institute of Standards and Technology (NIST) recommended that users no longer be required to change passwords periodically, but it's still a common requirement.
A best-in-class security solution needs to be "under the hood, in the background, or behind the curtain, like the Wizard of Oz," Ferrell says. At the same time, however, users cannot be burdened with jumping through a bunch of hoops or forced to undergo a series of time-consuming machinations in order to comply with the security policy.
With ransomware and other cybercrime surging since the pandemic, advanced technology has become the "go-to" focus of industry investment in response, with such spending expected to rise to more than $133 billion per year by 2022, according to a recent MIT Sloan Management Review report.
While acknowledging that choosing the "right technology" is essential, the study points out that the vast majority of incidents involving hacks, data breaches, and ransomware attacks relate directly to gaps in human performance. The authors say that this has become a persistent and often overlooked cybersecurity issue in most organizations. They point to a vicious cycle as new technologies brought into organizations to address emerging threats tend to increase the level of complexity for users exponentially. The resulting increase then further drives bad behaviors that defeat the security protocols.
The complexity itself overwhelms users and leads to poor performance, the report concludes. Even more dangerous, workers facing spiraling complexity resort to security workarounds and "going outside the system" to meet the demands of their jobs. These two behaviors invite precisely the kinds of vulnerabilities the security protocols are intended to prevent.
The security-usability tradeoff isn't just a phenomenon of business IT. In April, Facebook explained how an unnamed party made public an unsecured database with account information, including phone numbers, of over 530 million Facebook users. The attacker got the information by asking for it. As explained by F-Secure's Mikko Hypponen, "Effectively, the attacker created an address book with every phone number on the planet and then asked Facebook if his 'friends' are on Facebook." As respected ethical hacker Charlie Miller put it, "This is a really hard problem to solve, and I personally spent a lot of time trying to 'solve' it in some acceptable manner when I worked at Twitter, which has a similar feature."
This is a feature people want, so Facebook and Twitter gave it to them. All the users whose names and phone numbers were exposed had given permission for the companies to provide them to "friends." What's the correct response from a security perspective? For Twitter and Facebook, the answer is not likely to be to make using their service harder.
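The enumeration attack described above works because a legitimate feature, matching uploaded contacts against registered accounts, can be queried at a scale no real user ever would. One common partial mitigation, offered here as an illustration rather than anything Facebook or Twitter is confirmed to use, is to cap how many contact lookups each account can perform. A toy sketch with hypothetical names:

```python
# Toy sketch of per-account lookup budgets on contact matching.
# Illustrative mitigation only; names and limits are invented, not
# any real platform's API.

class ContactMatcher:
    def __init__(self, directory, max_lookups):
        self.directory = directory      # phone number -> registered user
        self.max_lookups = max_lookups  # lifetime lookup budget per account
        self.used = {}                  # account -> numbers queried so far

    def match(self, account, phone_numbers):
        """Return matches for this batch, or refuse if over budget."""
        used = self.used.get(account, 0)
        if used + len(phone_numbers) > self.max_lookups:
            raise PermissionError("lookup budget exceeded")
        self.used[account] = used + len(phone_numbers)
        return {n: self.directory[n]
                for n in phone_numbers if n in self.directory}
```

The point of the sketch is the asymmetry: a genuine user uploading a few hundred contacts never notices the cap, while an attacker trying to "create an address book with every phone number on the planet" hits the budget almost immediately. The feature survives; only abusive use of it becomes inconvenient.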
The Goldilocks Solution: Usable security
Security has to be usable to be effective. It really needs to be transparent. That's the key to effective security policy that has thought through the role of human behavior in security policy compliance. "It just has to do its job magically under the hood, without it being a burden to the user," says Ferrell.
That's the Goldilocks Solution to effective security policy: not so inconvenient that it drives bad behaviors on the part of users, but inconvenient enough to thwart bad actors.
An April 2020 survey of cybersecurity professionals points toward the need to educate employees about the critical role their behavior plays in creating an effective security posture. Indeed, one insight of the survey is that people appear to be growing numb or inured to cybersecurity risks.
There are many possible explanations for this, including, perhaps, generational change as Gen Zers migrate into the workforce. Zoomers have spent their whole lives sharing personal information online, with very little negative consequence for the vast majority. Many don't see why security policy is important or worth caring about.
Properly implemented, zero trust can make secure and usable systems easier to develop. Zero trust is the systems philosophy that all actors must prove identity and authorization for any resource.
Zero trust increases the burden on system designers and developers to make security usable. No longer will you have free rein in a system just because you're on the VPN. Done wrong, zero trust may force users to respond to authentication challenges frequently. Good systems will provide facilities, like security keys, that make repeated authentication as easy as possible.
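The zero trust idea described above, prove identity and authorization on every resource access rather than trusting network location, can be sketched in a few lines. This is a minimal illustration with invented token and permission stores, not a production design:

```python
# Minimal sketch of per-request zero-trust checks (hypothetical names).
# Nothing is trusted because the caller is "inside the network";
# every access re-verifies who is asking and what they may do.

VALID_TOKENS = {"tok-alice": "alice"}           # identity store (stand-in)
PERMISSIONS = {("alice", "reports"): {"read"}}  # authorization store (stand-in)

def authenticate(token):
    """Prove identity on every request; return the user or None."""
    return VALID_TOKENS.get(token)

def authorize(user, resource, action):
    """Prove authorization for this specific resource and action."""
    return action in PERMISSIONS.get((user, resource), set())

def access(token, resource, action):
    user = authenticate(token)
    if user is None:
        return "challenge"  # e.g., prompt for a security key
    if not authorize(user, resource, action):
        return "denied"
    return "granted"
```

Notice where the complexity went: users with a valid token sail through, while the repeated verification happens invisibly in the platform, which is exactly the "move the complexity to a different place" pattern Ferrell describes.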
"It does the job of hiding the complexity from the user, so it doesn't drive undesirable behaviors," concludes Ferrell. "It promises to help solve some of our biggest security challenges by moving the complexity into the technology core and allowing it to do all the hard and heavy lifting from there."
Lessons for leaders
- Security must be usable to be effective.
- Investments that make users more accepting of effective security make the company more productive.
- Users will work around or avoid tasks that have excessively burdensome security; this can lead to disasters.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.