AI in the workplace: Why companies need to get it right

As use of artificial intelligence in the workplace increases, so too does concern about job displacement and ethical issues. Careful evaluation of data, transparency, and education are the first steps towards responsible AI.

While artificial intelligence promises to blaze new paths in healthcare, manufacturing, financial services, and other industries, organizations still have a key obstacle to overcome: worries among workers that AI will make humans obsolete.

So, will AI take our jobs? Well, yes, some of them. "Technology has been making jobs obsolete for centuries," points out Michael Bird, host of this episode of Technology Untangled. But current AI use cases are narrow, says guest Matt Armstrong-Barnes, a chief technologist at Hewlett Packard Enterprise, meaning AI typically does just one task.

Please listen: Accelerating time to insights by operationalizing machine learning in the enterprise

"The most successful implementations of AI is that when they're augmented with humans," Armstrong-Barnes says, noting much of the focus today is on machine learning and deep learning. Such AI can both enhance workers' abilities and alleviate mundane tasks, he says, freeing people to work on more high-value activities.

As AI use cases increase, however, so too do questions about potential bias, AI's role in workplace decision-making, and transparency.

To explore these concerns, Armstrong-Barnes joins Toju Duke, a responsible AI program manager at Google and member of Women in AI; Mary Towers, an employment rights policy officer for the U.K.'s Trades Union Congress; and Simon Peyton Jones, a computer science researcher for Microsoft and chair of Computing at School and the National Centre for Computing Education. They look at a range of topics, from types of AI and how they will affect jobs to the ways AI can introduce bias and what organizations can do to prevent that.

It's all about the data

As Duke points out, when the data used to train AI isn't complete or comprehensive, it's not "fully encompassing," and the resulting model can carry built-in biases. She gives the example of a tech company whose recruitment software was trained on a dataset reflecting the people already working at the company.

"As a typical tech company, it was male dominated, so [the software] was trained on 70% or 80% male CVs as opposed to female CVs," Duke explains, which led to women's CVs with the term women's college on them being dropped.

Please read: From the front lines to the enterprise: The ethical considerations of AI

Armstrong-Barnes recommends that any time organizations use AI to evaluate people, they thoroughly understand what data is being used to train their algorithms and apply widely available tools and techniques to identify gaps in their datasets.
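In practice, a first-pass audit of the kind he describes can be a few lines of code. The sketch below assumes a pandas DataFrame of training records with a hypothetical "gender" column and an arbitrary 30% threshold; a real audit would cover more attributes and their intersections.

```python
# Minimal dataset audit: report each group's share of the training
# data and flag under-represented groups. The column name and
# threshold are illustrative assumptions.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str,
                         min_share: float = 0.30) -> pd.Series:
    shares = df[group_col].value_counts(normalize=True)
    gaps = shares[shares < min_share]
    print(shares.to_string())
    if not gaps.empty:
        print(f"Under-represented in '{group_col}': {list(gaps.index)}")
    return gaps

# Toy data resembling the 70/30 skew Duke describes.
records = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 30})
audit_representation(records, "gender")
```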

The need for transparency

In addition to hiring, the experts say AI is now being used for task and team assignments, performance reviews, disciplinary action, and in some cases, termination, as well as monitoring and surveillance. While AI technology is being applied in these and other areas at various levels, "a big question on our minds is, is the technology being used but actually people aren't aware of it?" Towers says.

Another significant issue is the effect of AI on workers' physical and mental health. "Real-time monitoring seems to be a particular source of stress and unhappiness," Towers notes, adding that her organization's research also shows workers experience a "general sense of loneliness and isolation" when they are managed by AI tools rather than a human being. In addition, it found that they experience a sense of helplessness when AI makes decisions that affect them but they aren't able to challenge those decisions, she says.

Please read: AI promises and perils

To help counter these negative effects, Towers' organization is working on proposals that, among other goals, aim to make information on AI in the workplace transparent and understandable to workers. A first step for companies is making sure technology and human resources teams work together to meet the needs of workers, she says, adding that "these issues matter for everyone because at the heart of the debate are questions around how do we value human beings."

Educating the next-gen workforce

Lack of expertise in applying AI technologies, and in understanding how they work and how they affect people, will be a major challenge going forward, says Peyton Jones. He notes one estimate predicts that 65 percent of students entering primary school today will work in jobs that don't yet exist, many of which will involve AI.

The problem, he says, is that "we're now working alongside the relatively simple AIs that do regression and classification, and we're running into trouble in terms of our understanding and the explainability [of applications]. So, how is the next generation going to handle it?"
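His point about explainability is visible even in a toy case. The sketch below, with invented feature names and data, shows why today's simple classifiers are still inspectable: the learned weights of a logistic regression can be read off directly, a courtesy that deeper models do not extend.

```python
# Minimal explainability sketch: for a linear classifier the learned
# weights are the explanation. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
years_experience = rng.normal(5.0, 2.0, n)
typing_speed = rng.normal(60.0, 10.0, n)

# In this toy setup, the outcome depends only on experience.
promoted = (years_experience + rng.normal(0.0, 1.0, n)) > 5.0

X = np.column_stack([years_experience, typing_speed])
model = LogisticRegression(max_iter=1000).fit(X, promoted)

for name, coef in zip(["years_experience", "typing_speed"],
                      model.coef_[0]):
    print(f"{name}: weight {coef:+.3f}")
# The near-zero weight on typing_speed shows the model ignores it.
# A deep network offers no such direct reading, which is the
# explainability gap the next generation will have to close.
```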

Please read: Future-proof your career with AI

That means education in computer science, including AI, is an imperative, he says. As chair of Computing at School, Peyton Jones is working to change the U.K.'s national curriculum to make computer science a "foundational subject," whereby all students are required to learn technology fundamentals beginning in primary school.

