There was a time when the U.S. government thought it could control encryption, or rather, that it could stop strong encryption. Beyond a certain key length, government agencies considered distribution of encryption software a violation of federal weapons export laws and attempted to prosecute Philip Zimmermann, author of the famous PGP (Pretty Good Privacy) software, under that theory.
The rationale behind the prosecution was that strong encryption is too hard to crack. To quote one U.S. intelligence official at the time, the "ability of just about everybody to encrypt their messages is rapidly outrunning our ability to decode them."
The government eventually dropped its prosecution of Zimmermann, along with the notion that it could put the encryption toothpaste back in the tube. But the unnamed intelligence official was right. As a practical matter, modern encryption is so hard to crack that it's generally not worth the effort.
So you don’t crack it. You work around it. Effective encryption presupposes a certain level of security on the part of the people concealing the data: Private keys need to be kept secure, unencrypted copies of the data must not be left accessible, and keys should not be reused or easily guessed (e.g., using "password" as the password). Sloppy practice by data security personnel can, and often does, allow clever hackers to gain access to the data without actually defeating the encryption algorithms.
A recent academic paper, "Encryption Workarounds," explores this principle thoroughly. The authors are two of my favorite writers: Orin S. Kerr, a professor at the University of Southern California Gould School of Law and famed blogger on Fourth Amendment issues, and Bruce Schneier, adjunct lecturer in public policy at Harvard University's Berkman Klein Center for Internet and Society and probably the most famous cryptographer around.
Kerr and Schneier define the six basic encryption workarounds:

- Find the key
- Guess the key
- Compel the key
- Exploit a flaw in the encryption software
- Access plaintext while the device is in use
- Locate another plaintext copy
The paper then identifies some lessons, starting with the obvious one: No workaround works all the time, but they all work some of the time. It then considers the often-ambiguous legal status of workarounds.
The authors are specifically concerned with government action, generally when law enforcement wants to gain access to a plaintext version of content that it has (presumably) legally seized. One famous example came in the wake of the December 2015 terrorist attack in San Bernardino, California, when the FBI seized an iPhone 5C from one of the terrorists. Its inability to crack the phone’s encryption led to a hot dispute with Apple, discussed in more detail below.
Whether a person is obliged to provide a password or other key to law enforcement in such cases is an unresolved question. But as a matter of everyone’s defense against attack on our encrypted systems, it is worth considering the same workarounds.
History is full of examples of people placing too much faith in the inherent strength of their encryption. When Sherlock Holmes broke the “Dancing Men” code in 1903, it was a laughably weak substitution cipher:
[Image: Dancing Men cipher. Credit: Arthur Conan Doyle; decodes to "HP ENTERPRISE"]
Let’s look at the six different ways to work around encryption.
Find the key: If you (the government or attacker) can find an existing copy of the key, your problem is basically solved. How might you find it? It may be on a Post-it Note on the user’s monitor, it may be on a smart card, it may be saved by the user’s web browser. You may install a keylogger on the user’s computer and watch for use of the password.
Guess the key: How do you guess a password? The most simplistic guesses are birth dates and children’s names. Kerr and Schneier describe a case in which U.S. border agents asked a suspect for her birthday, used it as a passcode, and gained access to her iPhone. The next step, too often successful, is to cycle through a list of common passwords, such as this one, gleaned from mass data breaches.
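A dictionary attack of the kind described above can be sketched in a few lines. The stolen hash and password list here are purely illustrative, not from any real breach, and the unsalted SHA-256 storage shown is exactly the weak practice that makes such attacks cheap:

```python
import hashlib

# Hypothetical stolen credential: an unsalted SHA-256 hash of the
# victim's password (illustrative values only).
stolen_hash = hashlib.sha256(b"letmein").hexdigest()

# A tiny stand-in for the common-password lists gleaned from mass
# data breaches.
common_passwords = ["123456", "password", "qwerty", "letmein", "dragon"]

def dictionary_attack(target_hash, candidates):
    """Hash each candidate and compare it against the stolen hash."""
    for pw in candidates:
        if hashlib.sha256(pw.encode()).hexdigest() == target_hash:
            return pw  # key guessed
    return None

print(dictionary_attack(stolen_hash, common_passwords))  # letmein
```

Against an unsalted, single-pass hash, each guess costs one hash computation, which is why breach lists are tried first.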
It's standard practice when storing passwords (usually as hashes of passwords) to loop through the hashing algorithm a large number of times. This slows the process to a degree that doesn't matter for any one user logging in but is a major impediment for an attacker trying to brute-force a password list.
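The iterated-hashing idea is what key-derivation functions such as PBKDF2 implement. A minimal sketch using Python's standard library (the 600,000 iteration count and the salt are illustrative choices, not a mandated standard):

```python
import hashlib
import time

password = b"correct horse battery staple"
salt = b"per-user-random-salt"  # in practice: 16+ random bytes, unique per user

# One legitimate login: derive the key with many iterations.
start = time.perf_counter()
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
one_login = time.perf_counter() - start

# The same per-guess cost, multiplied across a breach list, is the
# attacker's bill for a brute-force run.
guesses = 1_000_000
print(f"one login: {one_login:.2f}s; "
      f"{guesses:,} guesses: ~{one_login * guesses / 3600:.0f} hours on this machine")
```

A fraction of a second is invisible to the user, but a million guesses at that price becomes days of compute, which is the whole point of the construction.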
Some systems impede password guessing, often by locking the system for a period of time after some number of wrong password attempts. This was the FBI’s problem in attempting to gain access to the iPhone 5C belonging to one of the San Bernardino terrorists. It asked Apple to create a special iOS version without the feature of locking after password failures so it could use an automated device to cycle through possible PINs. Apple refused. The FBI asked a judge to order Apple to help, but eventually announced that it had gained access some other way and dropped the case.
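The lockout mechanism described above can be sketched as a small guard object. This is an illustration of the general idea, not Apple's actual iOS implementation; the thresholds are arbitrary:

```python
import time

class LockoutGuard:
    """Illustrative lockout policy: after max_failures wrong attempts,
    reject all attempts (even correct ones) until `delay` seconds pass."""

    def __init__(self, max_failures=5, delay=60.0):
        self.max_failures = max_failures
        self.delay = delay
        self.failures = 0
        self.locked_until = 0.0

    def attempt(self, pin, correct_pin, now=None):
        now = time.monotonic() if now is None else now
        if now < self.locked_until:
            return "locked"           # attempt not even evaluated
        if pin == correct_pin:
            self.failures = 0
            return "ok"
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked_until = now + self.delay
            self.failures = 0
        return "wrong"

guard = LockoutGuard(max_failures=3, delay=60.0)
for pin in ("0000", "1111", "2222"):          # three wrong guesses
    guard.attempt(pin, "7391", now=0.0)
print(guard.attempt("7391", "7391", now=1.0))  # "locked" — even the right PIN fails
```

Removing this guard, so an automated rig could cycle through all 10,000 four-digit PINs without penalty, is essentially what the FBI asked Apple to do.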
Exploit a flaw in the encryption scheme: All software has bugs, and encryption software is no exception. The most popular software package for implementing encryption is OpenSSL. In 2014, a severe bug, named Heartbleed, was uncovered in OpenSSL, leading to a mad scramble to fix it and implement the fixes in the uncounted number of programs using old, insecure versions. The extent to which Heartbleed was exploited is unclear.
Heartbleed turned out to be the result of sloppiness, which is bad enough, but sometimes bugs are put in on purpose. One of the important basic functions used in encryption is a random number generator, and so there are standards for them. In 2006, such an algorithm, called Dual_EC_DRBG (Dual Elliptic Curve Deterministic Random Bit Generator), was published and proposed as a standard by NIST (National Institute of Standards and Technology). Dual_EC_DRBG was largely written by the U.S. National Security Agency, which was, and remains, common practice, as the NSA employs many of the best cryptographers in the world.
It wasn’t long before cryptographers identified flaws in the algorithm that would allow an attacker to predict future output, compromising the encryption. Eventually, NSA documents leaked by Edward Snowden confirmed that the flaws were an intentional effort to put a government backdoor into the standard, and NIST withdrew it.
Access plaintext while the device is in use: Under best practices, data is encrypted when stored and in transit, but at some point, it must be decrypted to be used. At that point, it exists in plaintext on a computer, and if attackers can gain control of that computer at that time, they get the plaintext. This can be done by malicious software running on the computer or by physically taking the computer while the user is authenticated.
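The at-rest/in-use distinction can be shown in a few lines. This sketch uses a toy SHA-256 counter-mode keystream purely for illustration; it is not a substitute for a vetted cipher such as AES-GCM, and the filenames and data are hypothetical:

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream — illustration only."""
    out = b""
    for i in count():
        if len(out) >= length:
            return out[:length]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice round-trips the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"secret key material"
on_disk = xor_cipher(b"the incriminating ledger", key)  # encrypted at rest

# ... later, while the user is actually working with the file ...
in_memory = xor_cipher(on_disk, key)  # plaintext now exists in process RAM;
                                      # an attacker who controls the machine
                                      # (or seizes it unlocked) reads it here
print(in_memory)
```

No amount of encryption strength helps at that moment: the workaround targets the window when the data must be plaintext, not the cipher itself.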
As Kerr and Schneier note, the FBI agents who arrested Ross Ulbricht for running the Silk Road used this technique to evade the heavy encryption employed by Silk Road and the full-disk encryption on his computer. They staged a distraction in the library in which Ulbricht was working and then physically took his laptop away before he could do anything to it. They then immediately copied the files to a USB thumb drive.
Locate a plaintext copy: Obviously, if an unencrypted copy of data exists, you don’t need to bother with the encrypted copy. When the FBI had trouble getting Apple to hack the San Bernardino shooter’s iPhone, it asked for a copy of the iCloud backup of the phone. The backup was out of date and insufficient for the FBI's purposes, but it was useful nonetheless.
The six methods of working around encryption provide a pretty good guide to securing encrypted data and encryption processes: Keep keys where attackers can't find them, choose passwords that can't be guessed, keep encryption software patched, protect devices while they are unlocked and in use, and account for every plaintext copy of the data.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.