Conversational AI and the rise of the chatbots

It’s important to understand what conversational AI is, why it’s become so popular, the obstacles it faces, and its likely future.

You can hardly turn on the television news, pull a magazine off a rack in a doctor’s office, or check out your social media without being confronted by a discussion about artificial intelligence. Whether the writer or talking head is decrying the imminent robot apocalypse or celebrating our deep-learning-based salvation, most of the coverage has one thing in common: an imprecise definition of AI. AI is, at its base, nothing more than software that simulates intelligence.

One specific type of AI is cropping up all around the Internet: conversational AI, mostly in the form of chatbots. The most recent high-profile AI news was Google’s announcement that its AI, Google Assistant, had beaten the Turing test 150 times. The Turing test evaluates a machine’s ability to mimic human intelligence by communicating in a way that is indistinguishable from a human.

Given the fact that we are already interacting with this sort of AI daily—in the form of our phone’s digital assistant or mapping software or a help bot on a website we use, for instance—it’s important to understand what conversational AI is, why it’s become so popular, the obstacles to its adoption, and its likely future.


Why AI chatbots, and why now?

One reason for the rise of the machines (conversational AI bots, anyway) is that bots are becoming smarter and faster. As a result, many people now prefer to use a chatbot when engaging with a business.

The hope is that AI-driven chatbots will capture people’s attention in a platform-agnostic way. The back end and database remain much the same as for a traditional website, but the conversational interface can appear everywhere, from online help sites to Facebook.
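To make that concrete, here is a minimal Python sketch of the platform-agnostic pattern: one shared conversation handler wrapped by thin, channel-specific adapters. The function names and payload shapes are hypothetical illustrations, not any particular vendor’s API.

    # A minimal sketch of a platform-agnostic bot layer: one core handler,
    # wrapped by thin adapters for each channel. All names and payload
    # shapes here are hypothetical, not a specific vendor's API.

    def handle_message(user_id: str, text: str) -> str:
        """Core conversational logic, shared by every channel."""
        if "order" in text.lower():
            return f"Let me look up the latest order for account {user_id}."
        return "I can help with orders, billing, and product questions."

    def web_chat_adapter(payload: dict) -> dict:
        """Adapter for an embedded website chat widget."""
        reply = handle_message(payload["session_id"], payload["message"])
        return {"type": "text", "body": reply}

    def messenger_adapter(payload: dict) -> dict:
        """Adapter for a Facebook Messenger-style webhook event."""
        sender = payload["sender"]["id"]
        reply = handle_message(sender, payload["message"]["text"])
        return {"recipient": {"id": sender}, "message": {"text": reply}}

    # The same back end and database sit behind both adapters; only the
    # request and response formats change per platform.
    print(web_chat_adapter({"session_id": "abc123", "message": "Where is my order?"}))
    print(messenger_adapter({"sender": {"id": "42"}, "message": {"text": "hi"}}))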

But none of this would be possible had the technology not developed to the point where this specific form of AI became viable. The widespread use of the Internet and the appeal of social media, along with the viable networks both have created, provide an ecosystem through which chatbots can move. The fact that direct messaging is omnipresent in our lives, through texting, Skype, Apple’s iMessage, Google Chat, proprietary messaging systems, and more, has made us comfortable communicating in writing within the confines of a floating box on a screen.

Such preconditions have allowed the birth and proliferation of chatbots and provided the inspiration for companies to use conversational AI. Bots you may be familiar with include Watson (yes, IBM’s supercomputer has a bot factory you can use); Amazon’s Lex; Instalocate (to track your flights via Facebook); National Geographic’s Tina the T-Rex (yes, NatGeo has a dinosaurbot); a bot from Endurance to alleviate loneliness for those suffering from dementia; and Hugo, Hewlett Packard Enterprise’s first AI bot.

Nothing is more powerful than an idea whose time has come

Hugo was developed by HPE in cooperation with the media company Quartz. The initial customer-facing idea was to let readers of a Quartz content series called Machines With Brains dig deeper into the stories, learning more about the issues and about how HPE’s technology relates to the tech featured in the series.

Hugo is HPE’s first pilot program in the conversational experience, and it has been successful enough that the company is extending it into version 3.0. Based on an initial assessment of the opportunities in this space, up to 50 percent of the customer inquiries currently directed to the company’s online help site could be handled by Hugo by the end of 2019.

“We’re going to ingest more HPE content surrounding our key initiatives like hybrid IT,” says Pete Metrulas, worldwide digital capabilities manager for HPE. “Coming this summer, people can use Hugo to obtain both tech industry news and info, and HPE-specific content such as white papers and articles.” Like this one.

Hugo is powered by an HPE ProLiant server and uses natural language processing to surface the week's most relevant tech trends and stories. The full bot stack includes React for the front end, Node.js for the back end, and MongoDB for the database, with Python handling the machine learning and natural language processing.
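As a rough illustration of what the Python NLP piece of such a stack might look like, here is a minimal, hypothetical intent-classification sketch using scikit-learn. The intents, training phrases, and canned responses are invented for the example and do not reflect Hugo’s actual implementation.

    # A toy intent classifier of the kind a news-and-content bot might use.
    # Training data and responses are invented; a production bot would use
    # far more data and a richer NLP pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    training_utterances = [
        ("show me this week's top tech stories", "news"),
        ("what's trending in hybrid IT", "news"),
        ("find me a white paper on machine learning", "content"),
        ("I want an article about data security", "content"),
        ("hello there", "greeting"),
        ("hi hugo", "greeting"),
    ]
    texts, intents = zip(*training_utterances)

    # Bag-of-words features plus a linear classifier are enough to route
    # short utterances to an intent.
    intent_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    intent_model.fit(texts, intents)

    def respond(utterance: str) -> str:
        """Map a user utterance to a canned reply based on predicted intent."""
        responses = {
            "news": "Here are the most relevant tech trends and stories this week...",
            "content": "Here are some white papers and articles on that topic...",
            "greeting": "Hi! Ask me about tech industry news or HPE content.",
        }
        return responses[intent_model.predict([utterance])[0]]

    print(respond("any interesting tech news this week?"))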

The challenge of privacy and humanity

In the wake of Facebook’s Cambridge Analytica scandal, you cannot speak of AI and user information gathering without addressing the very real issues of privacy and data security.

One strategy for securing and maintaining user trust is to comply with the European Union’s General Data Protection Regulation (GDPR). The compliance process gives any company a way to assess its privacy needs based on the kind of company it is and the kind of customers it serves.

According to Duncan Brown, associate vice president for European infrastructure and security at IDC, the key to the efficacy of GDPR is not simply its strictures, but the way it contextualizes privacy concerns within a company’s values.

“GDPR is all about risk,” says Brown. “Each business has to decide where along the risk continuum it is going to sit. Some will play fast and loose and some conservative. If you can balance risk with opportunity to leverage good practice, you have the advantage of clarity.” GDPR provides businesses with a regulatory slide rule to decide where they will position themselves, a decision that can make the difference between thriving and struggling, as Facebook’s example shows us.

“I’m not convinced data security is really the problem that stands in the way of widespread adoption of conversational AI,” says Vlad Maraev, a PhD candidate in computational linguistics at the University of Gothenburg in Sweden and former research assistant at the University of Lisbon. “People tend to trust this kind of AI precisely because they present themselves as intelligent.” This, says Maraev, allows us to project our internal states onto it, imbuing it with the human qualities we feel are necessary in order to trust it.

The real challenge lies at the intersection of the technological and the human, specifically in what humans do better than machines.

“The main challenge here is in human dialogue,” says Maraev. “Where human beings excel is in interactions and turn-taking,” linguistic and behavioral elements that are integral to real conversation. “The developers are not paying so much attention to this.”

When Maraev began to research the work being done in this area, he was extremely surprised to find it a “very narrow field.” Maraev’s advice to companies hoping to develop effective chatbots is to pay attention to this research in dialogue and linguistics as it applies to machines.

“To really take the development of conversational AI to the next level,” says Maraev, “developers will have to focus on aspects of the conversational process that are somewhat resistant to quantification, like intonation.”

Sean Mahoney, vice president and group content director at Digitas, agrees.

“Mostly the big curve to get over is one of intent,” says Mahoney, who collaborated on Hugo. “Language is insanely nuanced, and it takes us entire lifetimes to master it. It will be some time before we can expect a level of understanding so refined that our current keyword assistants operate with true intelligence.”

It is certainly possible to translate appropriate intonation into code that a machine can process, but it is a difficult road to walk, and the temptation to follow another, easier route is always there. But whoever makes this breakthrough will be responsible for a revolutionary leap in conversational AI, allowing the trust we create by projecting our internal states onto a machine to increase exponentially.

The future

New players will continue to enter the conversational AI space in the coming years, with an emphasis on ever more responsive and relevant real-time interactions with users.

“Conversational AI will also be used to cover a broader scope of possible areas, making it even more important to distinguish between novelty and bringing true value to users,” says Michael Dolan, creative director at Quartz.

And hopefully, those human elements we value will establish themselves as a programming priority.

Mahoney’s hope is that as we develop, we will do so “politely.” He fears “the current practice of barking orders at a voice-activated device like Alexa or Google Home” might take hold in interpersonal interactions, especially with children who are growing up expecting voice assistants to simply listen to orders and do what they say. Politeness protocols may be a necessary element for increasingly intelligent AI builds.

In the meantime, AI bots will soldier on, getting brighter and closer to what users wish to know and how they wish to access it.

Don’t take my word for it—go ask Hugo.

AI bots: Lessons for leaders

  • AI is not the future. It’s the present. We’re already interacting with it daily, most commonly by “talking” with conversational AIs, better known as chatbots.
  • Conversational AI offers a practical way to reach audiences that no longer come to websites directly.
  • The next big leap in chatbot development will be in the quantification-resistant areas of intonation and intent, which could be the step that ensures universal adoption of this technology. 

Related link:

The ethics of AI: Is it moral to imbue machines with consciousness?

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.