Facial recognition can drive business goals, but where do we draw the line?

Some people worry that facial recognition technology undermines privacy, but there is a fundamental difference between using the technology to monitor shopping habits and using it to save lives.

If a law enforcement computer could use a street camera to find and identify your face in a crowd, would you feel uncomfortable? Violated? Worried that George Orwell's creepiest predictions in "1984" have come true? What if the face was your missing child being dragged into a van by a captor? Would you feel grateful that your child's face was in a database instead of gracing the outside of a milk carton?

That’s the dilemma we currently face with artificial intelligence suddenly bringing automated facial recognition (AFR) technology to the forefront of a red-hot ethical debate. It is pitting technologists against privacy watchdogs—and even those on each side against one another. How this debate plays out may heavily influence the fate of the tens of thousands of children who go missing every year.

Facial recognition was once considered a useful but rather imprecise science that leaned too heavily on human involvement. However, the rise of machine learning has changed everything. Today, it’s possible to capture, aggregate, and analyze massive amounts of facial feature images from cameras, sensors, smartphones, and social media sites and instantly compare them with other facial images in global databases.

That power can be used for good. With the emergence of AI face recognition, Gartner predicts that by 2023, there will be an 80 percent reduction in missing people in mature markets compared with 2018.

AI accelerating everything

The huge differentiator sparking the change is that by removing human beings from the facial recognition process, you essentially turn every facial feature—whether it’s the shape of the eyes or nose or a person’s skin texture—into a number. With the right algorithm, computer scientists can quickly and more accurately create models of what a person really looks like, even when the original picture or video of that individual is fuzzy or grainy.
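
To make that idea concrete, here is a rough Python sketch of the pattern behind most modern systems: a trained model turns each face into a fixed-length vector, and two faces are compared by how close their vectors are. The embed_face function is a hypothetical stand-in for such a model, and the 0.6 threshold is purely illustrative.

```python
# Sketch only: embed_face() stands in for a trained face-embedding model.
import numpy as np

def embed_face(aligned_face: np.ndarray) -> np.ndarray:
    """Placeholder: a real system runs a trained neural network here and
    returns something like a 128- or 512-dimensional vector."""
    raise NotImplementedError("plug in a trained face-embedding model")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Closer to 1.0 means the two embeddings are more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def likely_same_person(face_a: np.ndarray, face_b: np.ndarray,
                       threshold: float = 0.6) -> bool:
    """True if two face crops land close enough together in embedding space."""
    return cosine_similarity(embed_face(face_a), embed_face(face_b)) >= threshold
```

Once faces are reduced to vectors, comparing one face against millions becomes a fast numerical search rather than a manual review.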

And that’s one of the breakthroughs that has the public and private sectors so excited—yet worried at the same time.

The technical infrastructure is still coming together to support more widespread adoption of AFR. For example, Amazon, Facebook, Google, Microsoft, and others are working diligently to improve facial detection software's accuracy and precision. IBM plans to release a huge public dataset (more than 1 million images of people) to minimize bias and improve AI capabilities, which continue to evolve. Vendors including Dell and Hewlett Packard Enterprise are building out capabilities in next-generation servers to enable faster, seamless, and cost-effective facial-image sharing. And most vendors are trying to make the technology more budget-friendly and mainstream.

Delivering on the promise of facial recognition requires a powerful infrastructure

Lin Nease, chief technology officer for the Internet of Things at HPE, says much of the video processing work must occur at the network edge, using specialized compute appliances. As cameras move to higher resolutions and more video processing use cases emerge, cost concerns will make a comprehensive edge architecture a necessity.

"There’s an issue with complexity and costs," he says. "We have customers who would like to have processing occur all the way out in the video cameras. But the problem is, once you have software running on those network-connected systems, you have an explosion of cybersecurity and management costs, due to the number of devices that must be managed and patched. So the industry is working to address those issues to make this technology more manageable for the average organization."

Agencies racing ahead

Such challenges are not stopping government agencies from investing, however. Military and intelligence agencies, for instance, have long understood the value of facial recognition technology for identifying spies and terrorists. Adding machine learning and AI to the mix just makes the technology far more useful to them. The U.S. Army, for example, uses AI and thermal imaging to capture facial images in the dark.

Law enforcement agencies are also becoming avid users of AFR. The FBI has long used facial recognition technology to identify suspects across the country, sharing that information with various state and local law enforcement agencies. It’s also piloting Amazon Rekognition face-matching software to go through surveillance footage much faster.
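
The article gives no detail on how the FBI pilot is configured, but a minimal sketch of that kind of workflow, using Rekognition's search_faces_by_image call against a previously indexed face collection, might look like this. The collection name, threshold, and file path are placeholders.

```python
# Sketch: check one video frame against a pre-built Rekognition face collection.
import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured

def match_frame(frame_jpeg_path: str, collection_id: str = "example-collection"):
    """Return (external_image_id, similarity) pairs for faces resembling the frame."""
    with open(frame_jpeg_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=collection_id,
            Image={"Bytes": f.read()},
            FaceMatchThreshold=90,  # minimum similarity, in percent
            MaxFaces=5,
        )
    return [(m["Face"].get("ExternalImageId"), m["Similarity"])
            for m in response["FaceMatches"]]
```

Note that search_faces_by_image matches only the largest face it detects in the supplied image, so a production pipeline would typically detect and crop each face in a frame before searching.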

Across the country, at least 26 states (and possibly as many as 30) allow law enforcement to run or request searches against their databases of driver’s licenses and ID photos, according to a report from the Georgetown Law Center on Privacy & Technology. Major police departments, including those in Chicago, Dallas, Detroit, Los Angeles, New York, and Orlando, are using or piloting real-time facial recognition on live surveillance camera feeds in their cities.

Identifying suspects or suspicious individuals with facial recognition doesn’t stop with the police, by the way. In fact, Amazon last year filed a patent for home surveillance technology built around doorbell cameras from Ring, which it recently acquired. According to CNN, the patent describes creating a database of suspicious persons. Theoretically, if someone with a history of burglary showed up at your door and had their image captured by your doorbell camera, the device would consult that database, flag the individual as a potential threat, and alert you and your neighbors.

Finding missing persons

Of course, much of the real excitement around AI-assisted facial recognition technology lies in its use for finding missing persons, especially children. It’s difficult to know exactly how many people are missing at any given time, but tens of thousands of children are thought to disappear every year. According to the National Center for Missing & Exploited Children, roughly 90 percent of those children are endangered runaways and about 5 percent are abducted by family members, and an estimated one in seven of the runaways is a likely victim of child sex trafficking.

Loren O'Keeffe, founder and CEO of the Missing Persons Advocacy Network (MPAN) in Australia, says organizations such as hers struggle with the vast amount of work involved in finding and correctly identifying missing people around the world. As such, MPAN launched Invisible Friends, a campaign that uses Facebook’s new facial recognition and auto-tagging technology to find missing persons.

"Without sufficient resources, it’s not feasible for us to focus on actively searching for missing individuals," O’Keefe says. "So, a passive, low-cost adoption of facial recognition technology offers an opportunity to alleviate, if only temporarily, families’ overwhelming feelings of hopelessness and helplessness."

Facial recognition technology is also being applied to the missing persons crisis in India, where there are about 200,000 missing children. In New Delhi, police reportedly traced nearly 3,000 missing children within four days of launching a new facial recognition system. Drawing on a database called TrackChild, the system matched earlier photos of missing children against about 45,000 current images of children around the city, and the program was widely viewed as a success.
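
The reports do not describe TrackChild’s internals, but the underlying task is a one-to-many search: compare each missing child’s photograph against a large gallery of recent photos and keep the strongest candidates above a threshold for human review. A generic sketch of that pattern, with purely illustrative names and thresholds, assuming unit-normalized embeddings:

```python
# Generic 1:N matching sketch; embeddings are assumed to be unit-normalized vectors.
import numpy as np

def find_candidates(missing: dict[str, np.ndarray],
                    gallery: dict[str, np.ndarray],
                    threshold: float = 0.7) -> dict[str, list[tuple[str, float]]]:
    """Map each missing-child ID to gallery IDs whose faces score above the threshold."""
    gallery_ids = list(gallery)
    gallery_matrix = np.stack([gallery[g] for g in gallery_ids])  # shape (N, d)
    results = {}
    for child_id, emb in missing.items():
        scores = gallery_matrix @ emb  # dot product == cosine similarity for unit vectors
        hits = [(gallery_ids[i], float(s)) for i, s in enumerate(scores) if s >= threshold]
        results[child_id] = sorted(hits, key=lambda x: x[1], reverse=True)
    return results
```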

Privacy concerns persist

But no such deployment is without controversy. With facial recognition technology and the ability to share images becoming so advanced, nearly every implementation seems to come with potential misuse warnings from legal pundits, privacy watchdog groups, and technologists.

Many point to China as an example of facial recognition technology potentially getting out of control, with surveillance cameras showing up in cities everywhere to handle everything from security and border control to jaywalking and speeding. Interestingly, China is also one of the world’s leaders in implementing AI standards, including those for facial recognition.

"Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression," writes Microsoft President Brad Smith. "Imagine a government tracking everywhere you walk over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each smart shelf that you browse and product you buy—without asking you first. This has long been the stuff of science fiction and popular movies like 'Minority Report,’ 'Enemy of the State,’ and even '1984.’ But now it’s on the verge of becoming possible."

Smith, like many high-tech leaders of late, has called for thoughtful government regulation around the use of facial recognition technology and for the development of "norms" around acceptable use.

"We believe it’s important for governments in 2019 to start adopting laws to regulate this technology," Smith adds. "The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up."

Striking a balance

Privacy and technology watchers tend to agree a careful balance needs to be struck.

"Fundamental to the right to privacy is the ability to control the disclosure of one’s identity to others," says Marc Rotenberg, president and executive director of the Electronic Privacy Information Center, an independent, public interest research center in Washington. "That ability is denied in prisons and in authoritarian states. We should proceed very carefully with facial recognition, particularly when the aim is to obtain identity surreptitiously or as a means of social control. This technology is not compatible with democratic society."

"Putting cameras in public spaces to help find missing individuals can be a great benefit to society; it can reunite loved ones, help catch criminals, and possibly stop human trafficking," says Kathleen Walch, managing partner and principal analyst at Cognilytica, an AI research firm. "But what are the downsides to this technology? Do we want cameras everywhere? And do we want them connected to a missing persons database? We need to weigh the consequences of constant monitoring beyond the good benefits."

Most observers agree that nearly every connected technology comes with privacy considerations. If you gather, store, analyze, or share any kind of personal data, some people will be concerned about it. Such is the impetus for privacy regulations like the European Union’s recently enacted General Data Protection Regulation, the California Consumer Privacy Act of 2018, and the Illinois Biometric Information Privacy Act. It’s also why we’re seeing a wave of privacy lawsuits against facial recognition players like Facebook and Google.

But in the end, many observers also recognize the potential of facial recognition technology for solving the missing persons crisis—within limits.

"The reality is it’s increasingly difficult to live in a world with a feeling of complete privacy," says MPAN’s O'Keeffe. "Technological advancements are often concerning before they become mainstream, and fears of misuse are as valid as they are inevitable. Facial recognition could be the latest, more efficient form of closed-circuit television, and as long as the same laws enforcing the appropriate management of that information are applied, we’ve no doubt it could aid in the location of missing loved ones."

Facial recognition: Lessons for leaders

  • Before implementing any facial recognition projects, identify any privacy concerns your customers may express.
  • Show the value of facial recognition technologies to the business bottom line; make a clear connection between profit and privacy.
  • Make sure all applicable regulatory hurdles are addressed.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.