
Will privacy rights stymie AI?

With the right to privacy the subject of significant regulation, AI analytics will come under increasing scrutiny. Experts say companies using AI with personal data must focus on GDPR and HIPAA, but over the long term, they can expect governments and the people affected to push for audits and explanations of AI decisions.

Business use of cognitive and artificial intelligence is expected to skyrocket in the coming years, with global spending on the technology reaching $19.1 billion in 2018, a 54.2 percent increase over 2017, according to IDC.

But as businesses embrace AI to help with all kinds of tasks, they face a complex set of regulations that limit what personal data they can collect and use. Many AI systems don’t collect personal data, of course: A business using AI to predict when its own factory machinery needs maintenance has little to fear from regulations.

But a handful of significant regulations in the United States and the European Union restrict what personal data businesses can collect—and potentially what AI systems can use—from customers and other people.

GDPR is raising awareness of the problem

The elephant in the room is the EU’s General Data Protection Regulation (GDPR). GDPR requires companies to get explicit consent before collecting and using personal information, including names, home addresses, email addresses, bank details, social networking posts, and computer IP addresses.

A similar data protection and privacy law took effect in China on May 1, although it has received less attention because relatively few U.S. and European companies operate in the country.

The U.S. has less comprehensive privacy regulations than the EU, with a patchwork of laws covering some industries and technologies. Most AI experts see the Health Insurance Portability and Accountability Act (HIPAA), the 1996 law governing medical data privacy, as the U.S. regulation companies need to pay the most attention to.

But HIPAA and GDPR may not be the last of the regulations affecting AI. With Facebook’s recent data leak involving Cambridge Analytica, companies using AI should expect more government scrutiny.

The Cambridge Analytica leak and other data breaches “will almost certainly give rise to new data privacy regulations in the U.S. and elsewhere,” says Eric Schrock, chief technology officer of data management platform provider Delphix. “Governments are realizing that these massive platforms have access to massive amounts of personal information, and their terms of service are insufficient to protect consumers from potential abuse.”

In the meantime, companies using AI with personal data must focus on the EU’s GDPR. The new regulation requires most companies that collect EU residents’ data to get the consent of the people whose data is being processed. It also requires companies to safeguard collected data, through measures such as pseudonymization, and to notify people affected by a data breach.
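
To make that concrete, here is a minimal Python sketch of pseudonymizing direct identifiers before records enter an analytics pipeline. The field names and salt handling are assumptions for illustration only; note that under GDPR, pseudonymized data still counts as personal data, so this is one safeguard among many, not a compliance shortcut.

```python
import hashlib

# Hypothetical field names; a real system would map these to its own schema.
PERSONAL_FIELDS = {"name", "home_address", "email", "ip_address"}
SALT = b"rotate-and-store-this-secret-separately"  # illustrative only

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted one-way hashes."""
    cleaned = {}
    for field, value in record.items():
        if field in PERSONAL_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            cleaned[field] = digest[:16]  # stable token, not the raw identifier
        else:
            cleaned[field] = value
    return cleaned

print(pseudonymize({"name": "Ada Lovelace", "email": "ada@example.com", "purchases": 7}))
```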

GDPR has a broad impact beyond AI, says Nicolas Economou, CEO of legal technology company H5. “All companies, irrespective of their business model, will need to be able to identify and take precautions to protect and appropriately use” personal information, he says.

Consumer-facing companies will be the AI users most affected by GDPR, experts say.

The potential penalties for a violation of GDPR are huge. The maximum fine is 4 percent of annual global revenue, or €20 million, whichever is greater. 

Some tech policy experts also believe GDPR includes an implied right to an explanation when an EU resident doesn’t like an AI decision, although it’s unclear how or if this right will be enforced.


No more black-box decision processes

Schrock predicts governments and affected people will increasingly push for audits and explanations of AI decisions.

“The notion of legal liability in the world of artificial intelligence is convoluted, and it’s only going to get worse as the algorithms become more sophisticated and put into use in more places,” he says. “Part of the legal process will undoubtedly come down to litigation, which will force companies to explain how an AI algorithm came to the outcome it did and whether the algorithm is flawed … or if it’s a reasonable outcome given the data presented to it.”

While many privacy advocates have applauded GDPR, some in the tech industry aren’t big fans. The regulation could “wipe out a decade of advances in AI,” says Vian Chinner, CEO of Xineoh, a predictive analytics vendor.

Requiring AI companies to explain results would be particularly difficult, Chinner says. “Results from older-generation algorithms like decision trees are easily explainable, but significantly less accurate” than modern AI techniques.
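
Chinner’s contrast is easy to see in code. A shallow decision tree can be dumped as human-readable if/else rules, something a deep neural network offers no direct equivalent of. A minimal scikit-learn sketch, using the library’s bundled Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a deliberately shallow tree so the rules stay readable.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned decision rules as plain text.
print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))
```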

Economou, who serves as co-chair of the law committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, has a much different take on GDPR.

“GDPR is a powerful step in placing the ordinary citizen back in control of her or his own data,” he says. “It offers the hope of meaningful safeguards against a Kafkaesque world, where the institutions of state and society rely on opaque AI to make non-appealable decisions that affect citizens’ rights or opportunities.”

Don't forget to forget

Another European regulation that limits some AI data use is the long-standing right to be forgotten, says Jana Eggers, CEO of Nara Logics, vendor of an AI‑powered recommendation engine.

AI companies using personal data will need to deploy systems allowing for “frequent rebuilds needed to extract data that needs to be forgotten,” she says.
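
What such a “forget, then rebuild” step might look like is sketched below, assuming a pandas training table keyed by a hypothetical user_id column; the retrain step is left as a stub, since it depends entirely on the model in question.

```python
import pandas as pd

def rebuild_training_set(data: pd.DataFrame, forgotten_ids: set) -> pd.DataFrame:
    """Drop all rows belonging to users who invoked erasure rights."""
    return data[~data["user_id"].isin(forgotten_ids)].reset_index(drop=True)

data = pd.DataFrame({"user_id": [1, 2, 3, 2], "feature": [0.1, 0.9, 0.4, 0.7]})
cleaned = rebuild_training_set(data, forgotten_ids={2})
# model = retrain(cleaned)  # hypothetical periodic rebuild so old models stop "remembering"
print(cleaned)
```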

In the United States, companies using AI and personal data are largely focused on HIPAA compliance. HIPAA includes privacy, security, and breach notification rules for healthcare-related organizations.

“The key concept is to protect confidential identifiable information in use, at rest, and in transit,” says Gerry Blass, president and CEO of HIPAA compliance vendor ComplyAssistant. “All regulations that focus on information privacy and security impact AI as far as protecting vulnerable information.”
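
For the at-rest piece of that, one common building block is symmetric encryption. The sketch below uses the Python cryptography library’s Fernet interface and is illustrative only: a real HIPAA program also covers key management, access controls, audit logging, and transport security, none of which is shown here.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store in a key management service
fernet = Fernet(key)

record = b'{"patient_id": "A-1001", "diagnosis": "..."}'
token = fernet.encrypt(record)            # ciphertext safe to persist at rest
assert fernet.decrypt(token) == record    # round-trips back to the original
```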

As with GDPR, opinions are mixed, with Xineoh’s Chinner saying the regulation has held back AI advances in medicine.

“If all medical data could be shared and centralized, algorithm capacity for the prediction and cure of diseases would grow exponentially,” he says.

The range of industries impacted is broad

Finance and telecom companies also must comply with privacy regulations that could affect their use of AI, experts say. And the U.S. Federal Trade Commission has a broad mandate to take legal action against companies for unfair or deceptive business practices. FTC action is largely focused on companies that break privacy and other promises made to consumers.

The FTC’s willingness to investigate companies can depend on the political party in the White House, with the president controlling nominations to the commission. But the FTC, along with state consumer protection agencies, can investigate AI companies when the technology's use and effects are “not clearly explained to impacted customers or have unexplained, unintended consequences,” says H5’s Economou.

GDPR, however, is the primary concern for many companies using AI. HIPAA is well-established law, and the FTC has a long track record, but it’s still unclear how aggressively the EU will enforce the new privacy regulation or whether European consumers will push for enforcement through lawsuits.

Ensuring regulatory compliance will be difficult, because “the regulations have not been tested, thus boundaries are not clear,” Eggers says. She recommends that companies have clear processes for evaluating potential violations, offer comprehensive employee training, and have clear processes for escalating complaints.

Eggers ran a German crowdsourcing company in the early 2000s and found EU regulators willing to work with companies on enforcement issues. She says that experience gives her “a reasonable amount of my optimism for these regulations actually helping companies with their AI efforts.”

AI success will require human involvement 

Companies deploying AI systems must also remember to retain robust human oversight and avoid decisions made completely by AI, adds Chuck Davis, CTO and co-founder of Element Data, an AI business intelligence startup.

“Companies using AI can self-regulate by keeping humans involved in their criteria, which would be a positive move for the industry as a whole,” Davis says. “It will pose some initial challenges as companies may have to pivot, but I believe approaching data responsibly and ethically from the start will clear a path to gain consumer trust.”

In addition, AI users should have processes to identify personal information and should put in place reliable measures to test the effectiveness and fairness of their AI systems, Economou recommends.
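
One simple form such a fairness test could take is a disparate-impact spot check comparing a model’s positive-outcome rates across groups. The group labels and decision lists below are hypothetical, and the 0.8 threshold is a common heuristic rather than a legal standard:

```python
def selection_rate(predictions: list) -> float:
    """Fraction of cases receiving the positive outcome."""
    return sum(predictions) / len(predictions)

group_a = [1, 0, 1, 1, 0, 1]   # model decisions for group A (hypothetical)
group_b = [1, 0, 0, 0, 0, 1]   # model decisions for group B (hypothetical)

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # below ~0.8 warrants investigation
```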

Companies must also ensure their AI developers and operators are both properly trained and credentialed, “to ensure that AI meets its intended objectives,” particularly systems dealing with financial services, medicine, and legal issues, he says.

Currently, “operators are the sole judges of their own competence,” Economou adds. “This is unlikely to be sustainable: Are even the best doctors competent to understand how AI arrives at a diagnosis?”

This kind of self-regulation is important, he adds. “There is no easy answer, but if industry is to gain the trust of both regulators and civil society,” it needs to put in place these checks on its own AI systems. Failure to self-regulate will motivate governments to get involved to make their own, often politically motivated, rules.

How privacy rights impact AI: Lessons for leaders

  • Companies using AI have a variety of regulations they need to comply with, including HIPAA in the United States and GDPR in the EU.
  • GDPR requires companies using personal information to get explicit consent before collecting and using personal information, including names, home addresses, email addresses, and computer IP addresses.
  • The United States has a patchwork of privacy laws that could affect AI, but many companies are focused on HIPAA.
  • It could take years of litigation to fully define the boundaries of these regulations.
