4 obstacles to ethical AI (and how to address them)

Data bias, opacity, data monopoly, and job loss are issues that plague the field of artificial intelligence. Here are some simple solutions to these problems.

The past several years have seen a dramatic swell in development of—and discussion about—artificial intelligence. Many of these conversations have a teleological bent: AI will kill us all. Or AI will save us all. But we technologists lack an outline of the ethical obstacles to functioning AI, as well as practical steps to solve these problems so that we control AI instead of the other way around.

No one wants to be the next Sikorsky, Kalashnikov, or Oppenheimer, all of whom came to regret their roles in developing technology that was subsequently used to grim and anti-human effect. Yet there is no reason to think we are obliged to choose between scientific advances and ethics. As with any project design, though, you can’t solve problems you don’t acknowledge. So let’s outline the barriers to ethical AI development—data bias, opacity, data monopoly, and job loss—and consider simple, here-and-now fixes to avoid future regret.

Data bias: A biased AI is a useless AI

“At its foundation, data bias is how we translate racism into technology,” says Charla Griffy-Brown, professor of information systems technology management at Pepperdine University. Another element, she adds, is narrow-mindedness and favoritism. “We have seen this most recently when AI has become racist or gender-biased based on the data it was trained [with].”

Data bias has been front and center in the news, including use of AI to determine which criminals are likely to reoffend. In this case, the problem is that extraneous data, ranging from ZIP codes to the time a court hearing was scheduled, “overwhelms the history of recidivism,” explains Damien Williams, a convention associate at SRI and a researcher in science and technology studies at Virginia Tech.

Another example is AI’s use to sort resumes, which can reflect societal biases in terms of gender and racial representation. “‘Black-sounding’ names and ‘female-sounding’ names are assigned a lesser weight for a given job,” says Williams. A hiring manager using such an AI is less likely to hire the right person because the data is biased in favor of white males, irrespective of required skills and experience.

One reason we have a hard time recognizing AI bias is our faith in science and mathematics, a belief that AIs are based on black-and-white mathematical markers. However, AIs are created and trained by people. If the pool of developers and trainers is similar in background, their shared bias is likely to be communicated to—and then used by—the AI.

Fixing this requires recognition tools and conceptual frameworks, says Williams. “But these are hard to operationalize, so we haven’t put much effort into developing them.”


To “bracket out” bias, Williams says, “I have to recognize how I create systems and code my understanding of the world.” That means making an effort early on to pay attention to the data entered. The more diverse the group, the less likely an AI system is to reinforce shared bias. Those issues go beyond gender and race; they also encompass what you studied, the economic group you come from, your religious background, all of your experiences.

“Business always wins,” says Griffy-Brown. “If you own the process, you own the outcome.” So business leaders, CTOs, and developers who want to develop ethical AI need to own the process. They not only have the ability to change the complexion of AI but the motivation to do so.

That becomes another reason to diversify the technical staff, says Williams. This is not merely an ethical act; it is also sound business strategy, because the end result may be a more effective, and more profitable, AI. “The best system is the one that best reflects the wide range of lived experiences and knowledge in the world,” he says.

When it comes to personalization, according to Griffy-Brown, a lot of technology, including AI, is devoted to creating a well-designed bias that delivers what customers want to see. “This is inherently dangerous to the individual, society, and to business,” she says. You are unlikely to be exposed to a wide spectrum of possibilities in such a system because the whole point is to give people what they are most likely to already want. It’s sales as confirmation bias.

“To avoid this, be conscious of data reasoning,” says Griffy-Brown. “We’re already in the swamp, so we can’t avoid it. So how do we evaluate and mitigate? AI used properly might help us with this, though it’s easy to use it to intensify bias. Where is the bias located algorithmically? Find that.”

Griffy-Brown agrees that the key to diversifying data and minimizing bias is diversifying the human beings who are accountable for it. We need to develop specialists in finding the bias in code. For example, the University of Washington offers a course called Calling Bullshit in the Age of Big Data, one indication that an entirely new science surrounding data bias is emerging. “The key is intentionality,” she adds.
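
Griffy-Brown’s advice to find where the bias is located algorithmically can be made concrete with a disaggregated audit: compare a system’s outcomes across groups and flag large gaps. The following is a minimal sketch in Python, assuming a pandas DataFrame of model decisions; the column names, the toy data, and the four-fifths (0.8) threshold are illustrative assumptions, one possible check rather than a complete fairness review.

```python
# Minimal sketch of a disaggregated bias audit: compare a model's selection
# rates across groups and compute a disparate-impact ratio. Column names
# ("gender", "hired_pred") and the 0.8 threshold (the common "four-fifths
# rule") are illustrative assumptions, not a standard API.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate per group, divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()  # P(positive outcome | group)
    return rates / rates.max()                         # 1.0 for the most-favored group

# Hypothetical scored resumes: 1 = model recommends interview, 0 = rejects
resumes = pd.DataFrame({
    "gender":     ["f", "f", "f", "m", "m", "m", "m", "f"],
    "hired_pred": [0,    0,   1,   1,   1,   0,   1,   0],
})

ratios = disparate_impact(resumes, "gender", "hired_pred")
print(ratios)

# Groups whose ratio falls below ~0.8 warrant a closer look at the training data.
flagged = ratios[ratios < 0.8]
print("Potentially disadvantaged groups:", list(flagged.index))
```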

  • Diversify your technical staff by changing hiring practices in order to reduce shared bias.
  • Think critically about your data and ferret out your own biases.
  • Own your process. You are responsible for data bias in your AI systems. Not your customer, not society, not math—you.


Opacity: If you can’t show how an AI works, you can’t correct for bias

Transparency in AI has generated more than a few arguments. Some believe an insistence on radical transparency will slow down all the good AI can do, including its role in diagnosing illness.

The counterargument, shared by both experts quoted here, is that “the ends justify the means” does not apply when a system is inherently capable of—even prone to—bias. If we as human beings can communicate our shared biases into technology that can determine whether we get a job or the right medical treatment, or whether we are imprisoned, then we have a responsibility to start making the black box out of Lucite.

“AI is not a purely objective system,” says Williams. “Making AI transparent doesn’t solve the problem of bias, but it’s a step.”

Williams advises developers and business leaders to make their work comprehensible to non-technical people. “Honestly tell the story of your AI,” he suggests. Ask users and customers if their values are reflected in the product. “Make it clear that AI is a helpful tool but no more perfect than the people who created it.”

For Griffy-Brown, it’s an issue of risk—and openness usually carries less risk, particularly in regard to public reputation. “AI systems come with more risks, so companies need to make choices on what kind of AI tech they want to use,” she says. “One of the things we have to do as intentional decision-makers, as business people, or members of the government, is to evaluate cyber-risks: what they are, what they will cost, what will you do about it when you wind up making a mistake, and how you can mitigate these risks. We can achieve better outcomes if we’re very articulate about what we’re doing and what the risks are at every stage of project management and the technology lifecycle, from budgets to culture.”

In short, transparency improves a company’s risk posture, increases its top line, and moves it toward compliance.
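
One lightweight way to act on Williams’ advice to “honestly tell the story of your AI” is to publish a short, plain-language record alongside the model, in the spirit of a model card. The sketch below is illustrative only; the fields, the system name, and the metric values are assumptions for the example, not a formal standard or any vendor’s actual product.

```python
# Minimal sketch of a model-card-style summary: a plain-language record of what
# an AI system is for, what it was trained on, and where it is known to fail.
# All fields and values here are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    disaggregated_metrics: dict = field(default_factory=dict)  # e.g., accuracy by group

card = ModelCard(
    name="resume-screener-v2",  # hypothetical system
    intended_use="Rank resumes for recruiter review; not for automatic rejection.",
    training_data="Hiring decisions 2015-2019 from an internal ATS; reflects past practice.",
    known_limitations=["Historical data under-represents women in engineering roles."],
    disaggregated_metrics={"recall_female": 0.71, "recall_male": 0.83},  # example numbers
)

# Publishing this alongside the model is one way to show your work to non-technical readers.
print(json.dumps(asdict(card), indent=2))
```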

  • Assess the risks of opacity vs. transparency: Is there any real reason for Cold War-style secrecy?
  • Show your work: Aim for as much transparency as possible to minimize biases and make them easier to identify.
  • Move your company toward compliance with as much transparency as possible: Compliance now influences law later.

Data monopoly: Personal, political, economic

The Cambridge Analytica scandal happened because Facebook had an effective monopoly on your data, says Williams. Most people didn’t even know that the data existed, much less that it was scraped. But Facebook is far from the only organization with a data monopoly. And those data monopolies can have an outsized negative effect on the poor and working poor.

The example Williams uses is the Electronic Benefits Transfer (EBT) card. Qualified users receive EBT cards in order to spend the funds they get from the U.S. Supplemental Nutrition Assistance Program (SNAP). “EBT cards track everything the individual purchases. Anyone who controls that has a perfect picture of everything he does in a day, knowledge that can be used to make decisions about the SNAP user’s benefits.”

It’s difficult to argue against the governmental organization responsible for administering these benefits having that data, says Williams. But when a citizen wants to find out how the government is using that data to approve or deny benefits, the process of getting access to your own data is “byzantine at best and impossible at worst.”

Data monopoly has become such a concern that some have called for a Magna Carta or Bill of Rights to protect data ownership and avoid information monopolies. Others have made a case for more radical reform—to nationalize the big tech companies—given how closely they can resemble utilities.

Any promise to customers about data sharing is meaningless without a framework that gives users real control, backed by active and meaningful enforcement.

“The kicking apart of monopolies is inevitable,” Griffy-Brown says. “Laws like the GDPR are already being applied to democratize data because we don’t trust those who are holding monopolies and manipulating data.”

It’s in an organization’s best interest to avoid becoming a monopoly. “A wise company will embrace self-guidance in the hopes of influencing future laws,” says Griffy-Brown. “Monopolies are a bad posture, with an excessive amount of risk.”

  • Build in user ownership of data from the ground up: In the long run, your respect for customers engenders trust, and trust is going to be the differentiator.
  • Avoid monopoly status: Monopolies are like super-villains. They’re only powerful for a while, then they wind up imprisoned in a mirror and hurtling through space toward an angry star.

Job loss: You can program yourself out of your market

Tesla has famously begun testing self-driving trucks, as have Waymo and Uber. The technology rationale is obvious: Such freight technology could prove better for the environment and be both cheaper to operate and more efficient.

The liabilities of such a system are equally obvious. Top of that list is the possible elimination of the trucking industry as we know it. According to the U.S. Department of Transportation’s Bureau of Transportation Statistics, 13 million Americans are employed by the trucking and freight industry.

If the logging industry of the Pacific Northwest is any indication, the “improvement” of an entire industry can result in its destruction. Perhaps that destruction is best in the end; perhaps it’s a disaster. But regardless, it affects people, and technology creators must contemplate the moral consequences. Just because we can follow a train of thought or a line of scientific investigation does not mean we should.

When technology automates jobs, it is personally destructive to those who lose the job that locates them in society. Since 1999, U.S. suicides have increased by 30 percent, with only half of those deaths attributable to mental illness. These figures exist not to paralyze but to light a fire.

What we do as developers, CTOs, and business leaders has a tangible effect on humans’ hopes and opportunities. When we act ethically, with empathy, we can make things materially better.

“Designers need to think not only about disrupting industries that exist, but also about what that means for those who get disrupted,” says Williams. “If you ‘disrupt’ people's lives without putting the infrastructure in place to help them grasp and leverage new opportunities, then you have all the moral courage of a house fire.”

We need a way to invoke public voice in these issues, says Griffy-Brown: “We’ve defaulted to technocratic decision-making.” The outcomes of science are not always good, despite an underlying belief that engineering and science fixes will right the world, she adds. “We can make fewer bad decisions. Starting from a frame of reference of fear can be less useful. Let’s start before that. We need to look at this before it becomes a problem.”

  • Extend your ROI to ethics.
  • Involve the public in your decision-making.
  • Consider your customers’ long-term needs. No one has ever made a sustainable profit from a foundation of impoverished and demoralized consumers.

Moving forward: AI, trust, and ethics

We can steer artificial intelligence into a safe space where efficacy and ethics are balanced. We can control technology with eyes wide open to its dangers and with a sense of our own agency. We just have to find the will to act in our own interests.

“We have the opportunity right now to teach information literacy and a healthy skepticism, something we should have been doing all along,” says Kirk Bresniker, a Hewlett Packard Enterprise fellow and chief architect of Hewlett Packard Labs. Believing that a computer is objective is a fallacy that predates machine learning. It goes back to the origins of computability. There is no sense of the possibility of error in Alan Turing’s foundational work, says Bresniker.

“People may not understand exactly how the computer works,” Bresniker says, “but they have absorbed the idea that, based on Turing, it just cranks through the program, doing steps in an algorithm just like a human could have done only immensely faster.” AI is different, however, because it is based on statistical correlation, not symbolic algorithms.

The result: Even if a program is 100 percent logically correct and the execution flawless, we still might end up with a bad result. That’s because of the training it received based on flawed or biased data—and we won’t necessarily be able to tell.
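
A tiny, deliberately skewed example makes the point concrete. The sketch below, using scikit-learn and fabricated data, is logically correct and runs without error, yet the resulting model penalizes one group simply because that is what its training data taught it.

```python
# Minimal sketch of Bresniker's point: this code is logically correct and runs
# flawlessly, yet the model it produces reproduces whatever bias is present in
# its training data. The data here is fabricated and skewed on purpose.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring records: [years_experience, group] -> hired?
# Group 1 candidates were rarely hired, regardless of experience.
X = [[5, 0], [6, 0], [2, 0], [7, 1], [8, 1], [3, 1]]
y = [1,      1,      0,      0,      0,      0]

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates, differing only in group membership:
probs = model.predict_proba([[6, 0], [6, 1]])[:, 1]
print(probs)  # P(hired) for each; the group-1 candidate typically comes out lower
```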

Yet, given the power we see in AI, it’s tempting to ignore the dangers. We may press the techniques that power AI into service before we have a rigorous understanding of when they are not safe to apply. “Any tool used without full understanding of its full ramification is open to unintended consequences,” says Bresniker.

Ethical development of AI is not just moral action but also good business sense. On the risk spectrum, reputation and trust become a premium, and people pay more for the premium version. Or as Griffy-Brown much more succinctly puts it, “Be the hero.”

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.