
Is AI a good thing for society? Surveys say maybe

How comfortable are you with AI? A lot depends on what you believe the technology is capable of.

Marvin Minsky, widely considered the father of artificial intelligence, believed that robots would eventually inherit the earth. He also predicted that if computers ever gained total control, they would never relinquish it, though they might decide to keep humans as pets.

Minsky was probably joking, but his statements reflect long-held concerns about the impact of thinking machines on our daily lives. Over the past few years, attitudes toward AI have evolved rapidly. And how people feel about the technology depends a lot on where they live and what they believe AI is able to achieve.

Home sweet AI?

One of the biggest differences is geographical. A 2020 Pew Research Center study found significant regional differences in attitudes toward AI. For example, people in Asia have a much more positive view of the technology than many in the West: Two-thirds of those surveyed in East Asian countries believe AI to be a good thing, while fewer than half in Western Europe agree.

The U.S. is almost evenly split on the question, with 47 percent saying the development of AI is a good thing and 44 percent calling it bad, a gap within the survey's margin of error. Exactly half of the 1,502 Americans surveyed believe replacing humans with robots for certain jobs is a bad idea.

A University of Oxford study of more than 150,000 people in 142 countries arrived at similar conclusions, with people in Latin and North America perceiving the greatest risks.

The starkest outlier in the Oxford study is China, where only 9 percent see AI as mostly harmful. That's despite the country's reputation as the most surveilled nation on the planet, where some 600 million AI-driven CCTV cameras are used for things like shaming people who wear their pajamas in public.

Experts contacted for this story can only speculate as to what accounts for the disparities, but researcher Christos A. Makridis warns that we shouldn't put too much stock in the cultural differences shown by these studies.

"Cross-sectional comparisons of attitudes across countries are really unreliable, because there are so many confounding factors," says Makridis, a research professor at Arizona State University and a digital fellow at the Stanford Digital Economy Lab. It's more accurate to look at changes in the same populations over time, he adds. Unfortunately, such longitudinal studies are not yet widely available.

Welcoming our new robot overlords

These same studies also show consistent differences by gender, age, and education. Men, younger people, and those who are more familiar with technology tend to be more positive about AI and automation.

Not surprisingly, people's attitudes tend to depend on whether they believe robots are gunning for their jobs. In the Oxford study, 47 percent of business executives and government officials think AI will mostly help society; only 35 percent of construction and service workers feel the same. Even in India, where 67 percent of those surveyed by Pew are positive about AI, fewer than half believe job automation is a good thing.

A June 2018 survey by the Brookings Institution found that 52 percent of Americans believe robots are likely to perform most human activities in 30 years, and 61 percent aren't happy about it. Only one in five said they'd be interested in a robot that cleans their house (Roomba, you're fired), and even fewer would welcome robots to provide security or care for aging relatives. Yet more than 60 percent said they weren't especially worried about machines taking over.

"People are still figuring out how they feel about robots and how great a threat they pose to human workers," says Darrell M. West, director of governance studies at Brookings. "They appreciate the productivity gains but wonder what it will mean for them. Opinions will continue to evolve as we learn more about robotization."

Alexa, call my divorce attorney

Ultimately, how people feel about AI and automation boils down to what the machines are being asked to do. A study of 100 men and women in the U.K. looked at 42 activities where AI could theoretically replace a human, asking respondents how capable they believed the technology to be at each task and how comfortable they were with it taking over.

In general, people are more at ease with things they think machines should be better at—like crunching data—and less comfortable with more "human" activities, says Astrid Schepman, an experimental psychologist at the University of Chester who designed the survey.

For example, survey respondents are largely comfortable with AI searching for life on other planets (87 percent), using odors in human breath to detect illness (84 percent), or spotting art forgeries (71 percent).

But they are least comfortable with technology replacing actors, sports stars, doctors, or TV news anchors. Four out of five would not choose a robot as their personal physician, and 86 percent do not want a virtual assistant predicting relationship breakdowns by listening to their conversations.

"People seem quite comfortable if AI does something they're not very good at, such as detecting patterns in large data sets," says Schepman. "But they seem terribly uncomfortable with the idea that AI would do something that would normally be done by a sensitive human with empathy, feelings, and understanding."

Hasta la vista, baby

Much of the negative attitude toward AI in the West can be traced to popular media. Robots coming to kill or enslave us are a perennial Hollywood theme, and stark warnings from the likes of Bill Gates, Elon Musk, and Stephen Hawking about the dangers of AI run amok only add to that perception.

"The media and leading popular figures have done a great disservice to society by using the image of advanced technology or AI to create fear, apprehension, and misleading narratives," argues James Hodson, co-founder and CEO of AI for Good, a nonprofit dedicated to promoting the use of technology for positive social change.

And while studies of news coverage about AI have found it to be fairly neutral overall, reporting still leans into the sensational and the dramatic. We know, for example, that Tesla vehicles running on Autopilot have crashed at least 12 times since 2014. But you don't read many stories about a self-driving car delivering its passengers safely to their destination, even though human drivers caused many orders of magnitude more crashes over the same period.

Autonomous cars are an interesting case where perception and reality diverge. Forty-eight percent of people in the University of Chester study believe AI is less capable than humans when driving a car, and 44 percent are uncomfortable with the idea. A 2018 Brookings study is even more stark: 51 percent don't believe autonomous vehicles will reduce the number of highway accidents, and 61 percent say they don't want to ride in one.

These perceptions persist, despite the widely quoted statistic that 94 percent of serious crashes are due to human error and the fact that many new cars rolling off the assembly line are packed with self-driving technology: front- and rear-facing cameras, radar and lidar, collision sensors, and AI that makes sense of all that data.

The fact is, many people are using AI on a regular basis without realizing it, says Schepman.

"Half the time, people don't know that AI is beneath the things they use every day, like when Amazon suggests more products for them to buy or they're on Facebook and advertisements pop up," she says. "With other technologies, it's more of a conscious opt in. But if I interact with the internet, I don't necessarily know whether AI is being used."

Facing the real issues

But that doesn't mean concerns about AI aren't valid. As the technology becomes an increasingly common part of daily life, more questions will arise around ethics, explainability, personal privacy, and bias.

For example, facial and speech recognition are much less reliable for people of color. A Stanford study of speech-to-text accuracy found that error rates for Black men are more than twice those for white women. In Michigan, a Black man was arrested in January 2020 for a crime he didn't commit, due to a faulty facial recognition algorithm. This past June, progressive Democrats in Congress introduced a bill prohibiting the use of facial recognition by the federal government.

Amazon drivers have begun turning off the cameras in their delivery vans that gauge how safely they're driving, in part because the devices are intrusive and not always accurate. Even in AI-friendly China, there's a nascent backlash against facial recognition in public venues.

These concerns are likely to mount. McKinsey & Co.'s State of AI in 2020 report shows that organizations are paying less attention to issues like personal privacy and equity in AI than they have in the past. Leading technologists are pessimistic about ethical AI design becoming commonplace over the next 10 years, according to a June 2021 canvassing of experts conducted by Pew. In part, that's because it's hard to get everyone to agree on what ethical AI really means.

More than 80 percent of Americans polled by the Center for Governance of AI in 2018 said they believe there need to be limits on the technology. A third of people surveyed by Brookings want the U.S. government to set up a Federal Robotics Commission to regulate robots.

People feel more at ease with AI when there are clear boundaries around it, says Lee Rainie, director of internet and technology research at Pew. For example, they're more sanguine about autonomous cars if the driver still has some control over the vehicle or if the vehicles are limited to dedicated lanes.

"If there's something like an off button, where human intervention can supersede the system, that makes people more comfortable," he says.

"Ultimately, the very root of the difference in attitudes is not about the potential benefits or dangers of an AI system," says Hande Sahin-Bahceci, director of worldwide marketing for AI, data, and security services at Hewlett Packard Enterprise. "It's more about the attitudes about people. Do we trust in the capabilities and decision-making of the humans more than the machines or AI systems? Do we trust the humans who are making AI systems?"

On balance, most people see both the benefits and the potential dangers of AI. They're cautiously optimistic but wary. That's because people are complicated, even more so than AI. But you don't need a survey to tell you that.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.