
What AI means for you and your business―now and in the future

Artificial intelligence has made significant advances in the past decade or so, reshaping both our everyday lives and the way businesses operate. In this episode of Technology Untangled, learn how the technology will change the way we live and work, along with the different types of AI, how best to implement them, and the ethical considerations involved.

Artificial intelligence has been around for a long time, evolving from ideas that emerged when humans first appeared on earth to a technology that today is integrated into everything from social media and music playlists to smart devices and self-parking cars.

And in the business world, "AI is now accounting for about 40 percent of all digital transformation initiatives," according to Matt Armstrong-Barnes, Hewlett Packard Enterprise's chief technologist for AI in the U.K. and Ireland.

"Artificial intelligence needs to be a tool in every organization's toolkit, all the way from large corporations that are generating significant volumes of information to small organizations that want to process their calls with customers more effectively," he says.

For many, though, questions about AI remain—namely, will it take our jobs? In this episode of Technology Untangled, Armstrong-Barnes joins host Michael Bird for a wide-ranging discussion on AI, including everything from the history of AI to its current and future impact, both in business and our everyday lives. They drill down on the types of AI, how to implement each, and ethical considerations—and explain why AI will add high-value jobs, not take them away.

Excerpts from the podcast follow:

Bird: Artificial intelligence is everywhere, seamlessly woven into our lives to such an extent that we often don't notice it. If you're on social media, you're using AI. When you ask your smart device to play your '80s disco playlist, you're using AI. Your email inbox, your dating apps, and your self-parking car―you guessed it―it's all AI. And it's beating us at computer games, it's writing poetry, and it's even helping radiologists to predict cancer. And it's pretty likely that AI even recommended this podcast to you.

When it comes to organizations, AI is already deeply entrenched.

Armstrong-Barnes: Critically, what we see is AI is now accounting for about 40 percent of all digital transformation initiatives.

The volume of use cases that you've got with AI is pretty much limited by your imagination. So it really is a case of how you can take this technology and make it most effective in your organization to drive the kind of change that all of our customers are expecting of us.

A bit of history

AI has been around for a long time, and its main evolution really started in the 1940s, when people were starting to think about missile trajectories―for obvious reasons. But it wasn't until the 1950s, at a working group at Dartmouth College, that the term artificial intelligence was actually coined.

Bird: The Dartmouth four thought that every aspect of human learning could be described precisely enough for a machine to simulate it. They wanted to look at language, unsolved math problems, and the ability of a machine to improve itself. Quite a tall order for now, let alone 1955. And although they didn't achieve the lot in a single summer as they'd hoped, they did lay the groundwork for the next 70 years.

Armstrong-Barnes: So that's really where we are today―kind of stepping on the shoulders of giants. There's been massive evolution in AI as a discipline to drive us to where we are today.

Bird: Mainstream it certainly is. And as consumers, we've already become accustomed to AI in the form of voice-activated personal assistants that we shout at or that pop up on our devices. Consumer surveys often show high levels of mistrust in AI, yet the digital assistant market is expected to grow to 1.8 billion users worldwide by 2021.

So, how have we overcome our anxiety as a society? Well, it might be down to plain old human nature: We like things that make our lives a little easier.

 

No more waiting in coffee shop lines

Armstrong-Barnes: What AI is great at is spotting patterns and making predictions. What that means is it's very good at taking away the mundane. So imagine a scenario where … you've arrived at the train station because that's what time your personal digital assistant has told you that you need to arrive. It's actually already booked your tickets. It understands which platform you want to be on, and it knows that you'd like to get a coffee.

So you just walk [into the coffee shop], you pick it up, you walk out, and the AI knows which platform you're going to get on. So you get onto the platform and you get onto the train, into your seat, and you sit down, and you can do all of this while listening to a podcast. That does require a greater degree of integration from an application's perspective, and what it does allow you to do is to be inherently more productive.

There is a larger question, which is: How much do these mundane tasks define us as human beings, and how many of them do we want to do ourselves? The answer is that they become recreational activities. So, if you happen to enjoy standing in the coffee queue waiting for your favorite coffee to arrive, that is your decision. If you choose to have an AI take away those mundane tasks, you can allocate them to it and say, "Do all this, because I don't see it as being valuable to me."

Bird: The idea of handing over mundane tasks to a machine is pretty persuasive, but before I let AI run amok with my life, Matt, explain exactly what we mean by AI today in a nutshell, or―OK, just go with this―in an onion.

Peeling back the layers of an onion

Armstrong-Barnes: So, if you think of artificial intelligence as the outside layer of an onion, it's quite a big discipline, and there are two main types of artificial intelligence. There is strong AI, which is AI that operates at the same level as a human being and is capable of working across multiple knowledge domains. On the other side is narrow AI, which is an artificial intelligence capable of operating against a single knowledge domain and, in a lot of cases, a single use case. If we peel back the artificial intelligence onion, we get to machine learning.

If we think about traditional programming techniques, we have a collection of rules, and with the volume and complexity of data that we have today, you can get into a whole spaghetti mass of defining all of the rules you would need in order to process the data. Whereas with machine learning, we can teach the machine―for example, to understand what a human being looks like. There are lots of ways you could achieve machine learning, but the most successful, peeling back the onion another layer, is an artificial neural network. An artificial neural network is a brain-inspired mathematical model that replicates biological neurons and their interconnections. And the mathematics we use to train these artificial neural networks is something called deep learning.
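
To make that distinction concrete, here is a minimal sketch in Python (not from the podcast; the open-source scikit-learn library is used as an assumed example): instead of hand-writing a spaghetti mass of rules for recognizing handwritten digits, a small artificial neural network is shown labeled examples and learns the mapping itself.

```python
# A rules-based approach would require explicit logic for every pixel pattern
# that could mean "this is a 3" -- exactly the rule explosion described above.
# A machine learning approach instead learns from labeled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                       # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small artificial neural network: one hidden layer of 64 neurons.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                               # the "learning" step
print("accuracy on unseen digits:", model.score(X_test, y_test))
```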

Bird: It makes sense that computer scientists would model AI based on the most complex network imaginable―the brain―but much of the mistrust of artificial intelligence stems from the worry that machines will become too much like humans.

If deep learning mathematical models can replicate the biological brain, then it's not a huge leap to imagine the AI performing human tasks. So, is AI going to fundamentally change our industries, and are the machines coming for our jobs? In a word, yes, but maybe not the ones you'd think.

Armstrong-Barnes: If we're thinking about just core AI, [that] is a foundational technology underpinning robotics and underpinning robotic process automation.

Robotic process automation [applies to] … the standard activities in a business process that are carried out by humans. Robotic process automation is a way of taking those away and getting a machine to do them for you. We also see AI playing a role in robotics. This is becoming much more prevalent in the entertainment industry, hotel concierge services, and things along those lines, where some of that interaction is being handled by AI, freeing people up to deliver different, higher-value services.

From mundane to high-value work

There's been some recent analysis in the U.S. that says AI will eliminate something like 1.9 million jobs but will, in fact, create 2.3 million jobs, because AI is processing data that is physically impossible for human beings to process anyway. Of the volume of data we have today, 90 percent has been generated in the past two years. So it does mean the job landscape will change.

AI is creating industries. It's creating jobs. And it's creating more jobs and more industry than it's getting rid of. If you think about it, the plow and the printing press did exactly the same thing: They changed the types of jobs human beings needed to do. And what we find is that AI will move people from the mundane into much more fulfilling roles, where the time, effort, and energy [people] spend is of much higher value.

Bird: So it sounds like I can probably rest easy for now. AI has the capability to change human life for the better. And in medicine, AI is playing a supporting role in the quest for new vaccines, by analyzing viral protein structures and by helping medical researchers scour tens of thousands of relevant research papers at an unprecedented pace.

Armstrong-Barnes: AI is based on probabilistic mathematics. So what does an AI do? It doesn't diagnose; all it does is work out the probability of somebody having pneumonia or somebody having lung cancer. So, when it comes to radiologists looking at all of those images, they can have some prioritization of that workload, which means you can make use of your limited resources in a much more effective way.
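
As an illustration only (the podcast doesn't describe a specific system, so the dataset and model here are stand-ins), this minimal Python sketch shows the triage idea: the model doesn't diagnose, it assigns each case a probability, and the worklist is sorted so the highest-risk cases are reviewed first.

```python
# Hypothetical triage sketch: score waiting cases by predicted risk and
# review the riskiest first. A public dataset stands in for imaging data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_queue, y_train, _ = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
risk = model.predict_proba(X_queue)[:, 0]   # class 0 is "malignant" in this dataset; a probability, not a diagnosis

# Order the waiting cases from highest to lowest predicted risk.
worklist = sorted(range(len(risk)), key=lambda i: risk[i], reverse=True)
print("review these cases first:", worklist[:5])
```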

Bird: The last 15 years have seen a boom in new, exciting technologies like this. The global AI market size is projected to reach a staggering $202 billion in 2026, and governments are scrambling to find a way to legislate while still encouraging innovation. AI is very much giving us the capabilities to enhance and enrich our lives, but with great power comes great responsibility. In the wrong hands, AI can be weaponized against us.

AI gone rogue

Armstrong-Barnes: There was quite a famous case recently of a temperature control gauge inside a fish tank inside a casino. Criminals compromised this IoT device and used it as a springboard to get into other areas of the casino, because once you're inside, propagation or further infection is much easier.

So what you need is an AI that is on your side, watching for these kinds of anomalous activities. If, for example, the temperature control gauge inside a fish tank suddenly starts doing network discovery, that is unusual behavior―it's probably not operating correctly. Or if a person inside your network suddenly starts uploading documents to an IP address in a foreign state, that is also outside of their normal behavior.

Instead of having a security administrator who's looking at all of the security alerts, trying to work out where to spend their valuable time, effort, and energy, the AI sifts through all of that, looks for patterns, makes predictions and says, "You need to look at this thermometer, or you need to look at this person because I think they are operating in a way that they haven't done previously." And as a result, that's where you should investigate further.
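
Here is a minimal sketch of that pattern (synthetic data and a generic off-the-shelf algorithm, not any particular security product): an anomaly detector learns what normal device behavior looks like, then flags the outliers for a human to investigate.

```python
# Anomaly-detection sketch: learn "normal" device behavior, surface outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Normal profile per observation: [connections per minute, unique ports touched]
normal = rng.normal(loc=[20, 3], scale=[5, 1], size=(500, 2))

# A compromised "fish-tank thermometer" suddenly scanning the network.
suspicious = np.array([[180.0, 60.0]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = detector.predict(np.vstack([normal[:5], suspicious]))  # -1 = anomaly, 1 = normal
print(labels)  # the last entry should be -1: investigate that device first
```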

Bird: But whether it's identifying patterns in a radiologist's data or alerting us of weird goings-on with our fish tank, the AI we're talking about is all controlled by human input. We feed data to our machine, and it will produce an output based on what we've asked it to look at. So what, then, if we instruct the machine to learn not only from the data we feed it but from its own experiences?

Well, in 2016, computer scientists did just that. The company DeepMind developed a computer program called AlphaGo that they wanted to train to play the notoriously complex board game Go. Now, AlphaGo is made up of several neural networks―one of the innermost layers in Matt's onion example mentioned earlier―and originally its creators set it to play against amateurs, to build an understanding of human gameplay. Then, get this, they turned it loose on itself. AlphaGo played against different versions of itself thousands upon thousands of times, always learning from its mistakes. The final test came in 2016, when AlphaGo was pitted against 18-time world champion Go player Lee Sedol. The result: an incredible 4-to-1 victory in AlphaGo's favor.

Armstrong-Barnes: It could, in essence, create versions of itself to compete against itself, just to make itself better. And that's the kind of worry, as we get into the realm of science fiction, [about] these types of AIs, because unlike you or me―in the time it takes us to play one game of Go, an AI will have played 10 million. And with the rapid evolution of computing technology, it becomes faster and faster at getting better and better.

Bird: With criminal organizations profiling us with AI and machines learning from themselves without human intervention, it's starting to look a lot more dystopian than we imagined. This natural evolutionary path might have been what those academics at Dartmouth dreamt of, but should we be careful what we wish for?

What the future holds

Armstrong-Barnes: There's quite a lot of conversation about the challenges we'll face with AI. Where it is today is narrow AI, and narrow AI fails brittlely, which means it fails with unknown consequences. I'm sure everyone has interacted with a personal digital assistant and had a very different outcome than they were expecting. That's because it's powered by narrow AI.

Where we and lots of the big industry leaders see challenges coming in the AI space is when AI can operate across multiple knowledge domains. The debate in the academic community is that it will happen anywhere between 10 and 90 years from now. This is massively in the realms of theoretical probability.

And this is what the likes of Elon Musk and Bill Gates are talking about: the challenge of putting the right guardrails in place to protect against an AI that becomes malevolent or simply is not used in the right kind of way. That's the challenge we have in the AI space. As I said, it's the stuff of science fiction at present.

Bird: Thankfully, malevolent machines aren't on the immediate horizon. And despite the incredible advances in neural networks and machine learning, it's clear that our hands are still firmly on the reins. The fact that we're in control might make us sleep a little easier at night, but there is a hidden human factor that really affects the functionality of AIs we use today.

Humans vs. machines

Armstrong-Barnes: They don't think the way that we do: They're not intuitive; they don't have any empathy; they don't have a lot of the core human qualities. There's a great example of an AI that would always misclassify a Husky as a wolf. Some researchers analyzed why this was the case, and what they found was that the AI didn't look at the Husky or the wolf.

It recognized a dog, and then it recognized the background. It had only ever been shown pictures of Huskies without snow, and whenever it saw a picture of a wolf, there was always snow in the background. So, instead of looking at the wolf, it was looking at the snow. It's not like a human being, who would look at a wolf and a Husky and recognize the difference between the two. The AI was looking at the background.

Bird: The wolf-Husky AI drew connections in its data using features we wouldn't deem relevant. But if we aren't careful with the data we feed our machine, the output could be biased, skewed, or just plain wrong. Humans are imperfect, and so our data can be biased simply by design. And because of that, any organization that wants to harness the power of AI needs to square its use with ethical practices.

The pitfalls of AI done wrong

Armstrong-Barnes: You need to think about ethics right from the very beginning. When you're starting to think about what you want to do with AI, think about the ethical considerations. Lots of organizations will use their own data to train their models. And, unfortunately, historic data doesn't necessarily comply with all of the ethical considerations you have today.

There have been a number of very famous cases of this, where organizations have used their historic data to inform a decision today, and either they've got gaps in it, or they've included irrelevant data, or the decisions they would have made 10 years ago were fundamentally different from the ones they'd make today.

Bird: If we use skewed data in a lab model, like the Husky and the wolf, there's no real harm done. But in the real world, the ramifications can be life changing. Machine learning algorithms are already being used to screen candidates for some of the world's biggest organizations.

And in one case, a company developed an AI recruiting tool, which it trained on data from the last 10 years of applicants. The problem was diversity. Because the company had hired predominantly men in the past, its AI recruiter favored male candidates simply because of their gender. Suffice it to say, this AI was scrapped as soon as its developers realized their mistake. By having these ethical conversations from the get-go, organizations can harness the full potential of today's technology. And if we can do that, well, the possibilities really are as big as our collective imagination.
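
A minimal sketch of the kind of check that catches this early (hypothetical column names and synthetic data, not the actual tool): before trusting a screening model, compare how often it recommends candidates from each group.

```python
# Quick bias check: selection rate per group. A large gap is a red flag that
# the model is echoing historic hiring patterns in its training data.
import pandas as pd

results = pd.DataFrame({
    "gender":      ["male", "male", "female", "female", "female", "male"],
    "recommended": [1,       1,      0,        1,        0,        1],
})

selection_rate = results.groupby("gender")["recommended"].mean()
print(selection_rate)
print("ratio (min/max):", selection_rate.min() / selection_rate.max())
```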

Broad and fast growth

Armstrong-Barnes: So we're seeing AI being used to prioritize workloads for health professionals. We're seeing AI being used to spot the difference between a weed and a crop, so instead of bulk-spraying fields with herbicides and pesticides, you can get much more targeted, and significantly smaller amounts get into our ecosystem. We're seeing it in advanced drug discovery, in genomic research and personalized medicine, and in identifying and age-progressing people to help tackle human trafficking. So the use cases coming out of AI are broad, and they're evolving fast. From a Hewlett Packard Enterprise perspective, we're working with NASA to get a man on Mars by 2030.

And they're actually using … the Mars rover is AI-powered because it's operating in an environment that is alien, and you can't control it remotely in real time. So, if the rover finds an issue that it doesn't recognize, one that traditional rules-based approaches haven't catered for, it can't send a signal back to Earth saying, "I don't know what to do! What should I do next?" It has to be able to process that information locally, and AI is one of the critical techniques for doing that.

Bird: Now you might be thinking at this point, "Yeah, that's all well and good, but I'm not really undertaking genome research and I'm certainly not planning on sending anything to space anytime soon. So how can I start approaching AI within my organization?"

Armstrong-Barnes: Artificial intelligence needs to be a tool in every organization's toolkit, all the way from large corporations that are generating significant volumes of information to small organizations that want to process their calls with customers more effectively.

Bird: Intelligent tech advancements are happening in real time, and if organizations don't get on board now, they run the risk of being left behind. It's pertinent to mention that innovation in AI has historically thrived in times of economic unpredictability. 2020 was already tipped to be one of AI's hottest years, and the trajectory is very much set to continue.

Explore, experiment, evolve

Armstrong-Barnes: It is something we really need to embrace, but you need to think about the use cases you want to address. Just because you have a hammer with the words artificial intelligence on it, don't treat everything as a nail. Once you've identified that AI is the right tool to address a particular challenge, it's then a case of iterating through it. At HPE, we have quite a simple method: explore, experiment, evolve.

So the first step is to explore, which is to find the use cases you want to execute against, making sure that you've got the data that's going to underpin them. Once you've done that, you can go into the experiment phase, which is starting to build a proof of value. Don't prove the concept of AI; it's already been proven. Prove that it's going to add value to your organization in the right kind of way. Once you've done that, don't build it in isolation―don't build an AI island. It needs to be integrated into your existing systems; otherwise, it won't give you the right kind of benefits. Then, once you've done that and you've gone on the AI journey―because it is a journey, not a destination―you continue to iterate through.

Bird: Organizations of any size can find their own path to AI. So, when you're considering these new capabilities, you also need to ask yourself the question, "What's the best fit for us?"

Ways to implement AI

Armstrong-Barnes: There are three main ways you can implement an AI. The first one is, "I've got all of my data, I've got people who can build a successful model, and as a result, I'll do the work myself." That gives you core market differentiation because you're building something for you. At the other end of the spectrum, if I have a problem, it may well be that a software vendor has already built a model that fixes that problem. I don't need to build it myself; I'm just using AI techniques to address that business challenge.

The middle ground, the third way, is to have somebody who has taken a model 80 percent of the way there and you provide 20 percent of the customization. So, depending on where you are, if you're a large organization, you might well be thinking about market differentiation. If you're a smaller organization, you might well be thinking about the fact that you've got a specific problem that has been addressed by somebody else.

If you think about natural language processing―understanding speech, which is critical across large organizations and small ones alike―that business problem has been cracked. You wouldn't build your own custom AI to understand language, written or spoken; you'd take what somebody else has already done.
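
As a rough illustration of that "take what's already been built" option (the podcast doesn't name a specific vendor or library; the open-source Hugging Face transformers library is used here as an assumed example), this Python sketch downloads a pretrained language model and applies it to customer-call text as-is, with no model building required. The middle "80/20" option described above would correspond to fine-tuning such a pretrained model on your own labeled examples.

```python
# Using a pretrained, off-the-shelf language model rather than building one.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a pretrained model
calls = [
    "The agent resolved my issue quickly, thank you!",
    "I've been on hold for an hour and nobody can help me.",
]
for text, result in zip(calls, classifier(calls)):
    print(result["label"], round(result["score"], 3), "-", text)
```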

More innovation on the horizon

Bird: We're not living alongside the AI of science fiction yet, and that's probably a good thing. But ironically, the most exciting part of artificial intelligence right now is the way that organizations and individuals are coming together to innovate and explore new technological horizons.

Armstrong-Barnes: AI is a team sport; it's not about individuals. It's about a collection of people getting together to drive a better outcome. You need people from the business side who've got the challenges; you need people who are thinking about the data and how it could be applied; and you need people from technology, because a different type of infrastructure is required to drive a better outcome. So it is all about coming together, working as a team, and going on the journey much more effectively.

Bird: The Dartmouth researchers set out in 1955 to create super-intelligent programs that could solve problems mathematicians simply couldn't. But what our ever-evolving AI journey has demonstrated is less about what machines can do for us and more about the power and potential of human thought.

Armstrong-Barnes: There's a great phrase, and I believe it comes from Afrikaans, which is, "If you want to go fast, go alone. If you want to go farther, go together."

Bird: You've been listening to Technology Untangled, and a very special thanks to our guest today, Matt Armstrong-Barnes. You can find more information on today's episode in the show notes, or you can visit our website, which is hpe.com. Make sure to hit subscribe in your podcast app and do join us for the next episode, when we're going to be talking all things as-a-Service: Is it jargon, or is it a game changer?

Today's show was written and researched by Isobel Pollard and Phillip Lakka, with sound design and editing by Alex Bennet and production support from Harry Morton and Thomas Berry, and was hosted by me, Michael Bird.

Technology Untangled is a Lower Street Production for Hewlett Packard Enterprise in the U.K. and Ireland. Thanks for tuning in, and we'll see you next time.
