Exploring what’s next in tech – Insights, information, and ideas for today’s IT and business leaders

Using AI to identify a toxic work culture and avoid burnout

It is getting very hard to hide who we truly are in an AI-enabled world. In fact, some of our favorite companies are using AI to evaluate us right now.

You have but to look at social media and your mother's text messages to know that people judge you all the time. But why would we want humans to judge us when artificial intelligence (AI) can do it more efficiently? After all, AI can make connections that humans haven't considered, and AI can evaluate people and their circumstances dispassionately—far more quickly than a human can.

Is it possible that one day, AI will be able to spot potential human issues and identify them before they lead to burnout or a toxic work culture? We have news for you: Not only will AI be able to spot human traits like narcissism and Machiavellianism, but AI is doing it right now.

Specific AI applications have been developed to spot these personality traits because of the monetary and emotional costs they incur. Such traits do particular damage in the workplace, where stress takes a staggering toll. It's a wonder the machines let humans out of the Matrix at all.

So how does AI do it? Machines don't understand human emotions like pity, remorse, or fear, and they can't differentiate the love of a good person from the love of a good sandwich.

Language is the heart of the matter

It turns out that people reveal their worst character flaws the same way AI can catch them: through their use of language.

Jonathan Kreindler, co-founder and CEO of Receptiviti, an analytics platform that can determine the emotional states of your workforce, says that language can help you "understand people and their emotional, psychological, and cognitive state." Receptiviti can tell you whether your employees are fatigued and stressed or satisfied and empowered, and it does so by depersonalizing content, stripping away names and addresses, and looking at groups, teams, and departments so that every individual is completely anonymous.

Receptiviti uses its AI to home in on two different types of words found in intraoffice communications systems such as Microsoft Office 365, Exchange, Gmail, Slack, and Skype. These two types of words? Content and function.

  • Content words are nouns, verbs, and adjectives.
  • Function words are pronouns, prepositions, and auxiliary verbs (for example, am, is, and have).

Receptiviti's AI focuses its efforts on function words. According to a 2013 TED Talk by Receptiviti's co-founder and chief scientist, James Pennebaker, the English language has only about 500 function words, which account for less than 1 percent of the words we know. Yet they make up 55 to 60 percent of the words we use. (Pennebaker and co-author Cindy K. Chung have published research on this topic.)

Kreindler says that function words—not the more common, more descriptive content words—indicate a user's emotional state.

"People who show signs of depression … tend to use the pronouns 'I, me, my' with more frequency than 'he, she, they.' They tend to be far more inward-focused," Kreindler says.

In addition, Kreindler says, people who show signs of burnout "tend to be more focused on 'now,' less on the future." When they use prepositions, they use words like before and after less often. "People who are stressed tend not to use prepositions that relate to the future," he says.
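As a rough illustration of the idea (not Receptiviti's actual method), a function-word profile can be computed with nothing more than word counts. The word lists below are simplified examples, not the real lexicon:

```python
import re

# Tiny, illustrative function-word lists (a real system uses far larger ones).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
THIRD_PERSON = {"he", "she", "they", "him", "her", "them"}
TIME_PREPS = {"before", "after", "until", "by"}  # future/past-oriented prepositions

def function_word_profile(text: str) -> dict:
    """Return the rate of each function-word class per total words."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "third_person_rate": sum(w in THIRD_PERSON for w in words) / total,
        "time_prep_rate": sum(w in TIME_PREPS for w in words) / total,
    }

# An inward-focused sample scores high on first-person pronouns.
sample = "I feel like my work never ends. I am behind before I even start."
profile = function_word_profile(sample)
```

In a profile like this, a persistently high first-person rate paired with few future-oriented prepositions would be the kind of signal Kreindler describes.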

AI can tell who you are by your social media posts

This isn't the only way AI uses our language to show how sad and stressed we are. Airbnb's AI—acquired from the start-up Trooly—can spot narcissism and psychopathy, a.k.a., the worst flaws of my worst boyfriends.

According to an Evening Standard article, Airbnb's AI marks down guests based on whether "keywords, images, or video associated with them are involved with drugs or alcohol, hate websites or organizations, or sex work." Airbnb states on its website: "We use predictive analytics and machine learning to instantly evaluate hundreds of signals that help us flag and investigate suspicious activity before it happens."

What this means: Airbnb's algorithm works by scanning your social media posts. So if you like to tweet about drinking while frequenting sex workers who are associated with hate crimes, you might want to forgo Airbnb and get a hotel room.

AI can also determine if you're lying, yet another human flaw found in my worst boyfriends. Researchers at Cardiff University and Charles III University of Madrid developed an AI called VeriPol that they claim can identify fake robbery reports with 80 percent accuracy. VeriPol uses content words and even punctuation marks and acronyms to spot a lying liar.

According to Cardiff University, "VeriPol identified a number of themes that were common amongst false robbery reports, including shorter statements that were more focused on the stolen property than the incident; a lack of precise detail about the incident itself; limited details of the attacker; and a lack of witnesses or other hard evidence, such as contacting a police officer or doctor straight after the incident."
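To make those themes concrete, here is a toy heuristic that scores a report against them. VeriPol itself is a trained classifier, and the keyword lists and weights below are invented purely for illustration:

```python
# Illustrative heuristic only -- VeriPol is a trained classifier, and these
# keyword lists and weights are invented for the example.
PROPERTY_TERMS = {"phone", "wallet", "bag", "watch", "cash"}
CORROBORATION_TERMS = {"witness", "police", "officer", "doctor", "camera"}

def suspicion_score(report: str) -> float:
    """Score a robbery report against the themes Cardiff University lists."""
    words = [w.strip(".,!?").lower() for w in report.split()]
    score = 0.0
    if len(words) < 40:                               # shorter statements
        score += 1.0
    if any(w in PROPERTY_TERMS for w in words):       # focus on stolen property
        score += 0.5
    if not any(w in CORROBORATION_TERMS for w in words):
        score += 1.0                                  # no witnesses or follow-up
    return score
```

A terse, property-focused report with no mention of a witness or doctor would score highest under this sketch, matching the pattern the researchers describe.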

Researchers are hopeful that this lie-detection AI will deter people from falsely reporting robberies in the first place. (Which raises the question, why do people falsely report crimes? Why are we so terrible? Why?)

AI and mental health

AI doesn't just potentially spot crimes before they happen—it may be able to identify some forms of psychosis. The Atlantic reported that a team of researchers developed an AI model that correctly predicted which members of a group of young people would develop psychosis.

It turns out that people who suffer from psychosis use language differently than people who suffer from depression. They use shorter sentences, and their language tends to be unclear. They more frequently use the words this, that, and a.

But this AI model, developed by Jim Schwoebel, CEO of NeuroLex, isn't trained on writing. Instead, it uses spoken language.

Schwoebel says, "Voice generally can be used as a biomarker for a lot of different diseases," including psychosis. By asking a patient to describe a picture or perhaps how their day is going, "we then extract features, as well as the text." These features include pitch, loudness, and rhythm.
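As a simplified sketch of the kind of acoustic features involved (not NeuroLex's actual pipeline), loudness and a crude pitch estimate can be pulled from a raw waveform with a few lines of NumPy:

```python
import numpy as np

# Toy feature extraction: RMS energy as a loudness proxy, and rising
# zero-crossings per second as a crude fundamental-frequency estimate.
def voice_features(signal: np.ndarray, sample_rate: int) -> dict:
    rms = float(np.sqrt(np.mean(signal ** 2)))            # loudness proxy
    # A rising zero-crossing happens once per cycle of a periodic tone,
    # so crossings per second approximates the pitch in Hz.
    rising = int(np.sum((signal[:-1] < 0) & (signal[1:] >= 0)))
    duration = len(signal) / sample_rate
    return {"rms": rms, "pitch_hz": rising / duration}

# Demo on a synthetic one-second 220 Hz tone at half amplitude.
sr = 16_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
features = voice_features(tone, sr)
```

A real system would add rhythm and spectral features and feed them, along with the transcribed text, into a trained model; the point here is only that pitch and loudness are ordinary, computable quantities.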

With the help of AI, people in the midst of a psychotic episode may be more quickly and accurately diagnosed, which helps sufferers get the correct treatment faster.

AI can also use a patient's voice to spot depression, according to a Smithsonian Magazine report, which notes that flatter, more monotone voices are characteristic of the mood disorder. But the written word may be more telling than the human voice: The paper "Detecting Depression with Audio/Text Sequence Modeling of Interviews" describes AI that could predict depression faster with written samples than with audio alone.

When it comes to nonverbal cues, AI can even tell you what kind of person you are, and it can do so by scanning your eyes (kind of like the Voight-Kampff test in the 1982 film "Blade Runner").

Researchers found that AI can determine how strongly you exhibit four of the Big Five personality traits—conscientiousness, extraversion, agreeableness, and neuroticism (but not openness)—by using such metrics as pupil diameter, rate of blinking, eye motion, and where your gaze lands.

There you have it: AI can tell you whether or not your workforce suffers from burnout, whether or not someone is lying, and whether or not your Airbnb guest will trash your apartment. But people are not machines. In many cases, moods and behaviors are transient or situational, and even AI can make only a best-guess assessment based on the behavioral models that have been identified. The takeaway: These tools shouldn't be considered the final word but simply another evaluation tool.

How AI evaluates people: Lessons for leaders

  • Language use is a key indicator of human traits.
  • Writing style can indicate behavior and mood, as well as issues like depression.
  • Certain nonverbal indicators can be evaluated.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.