Loneliness and isolation have long been harbingers and companions of mental illness. When social media became a fixture in everyday life, many hoped it would remedy both issues and create a more connected and healthier human existence. But the opposite has proved true.
Researchers are looking at artificial intelligence (AI) as a possible way to reverse the trends of isolation and loneliness, and thus improve mental health. Even where AI can’t make social media users feel more connected, it might still help by predicting an individual’s need for intervention and summoning assistance from friends, family, and mental health professionals when needed. Researchers are also working hard to make AI-powered digital assistants more personal and personable, so that they are more readily accepted as a friend or therapist.
One example already on the market is Woebot, which was created by a team of Stanford psychologists and AI experts. “Woebot uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health,” reports Megan Molteni in a Wired article. “After spending the last year building a beta and collecting clinical data, Woebot Labs Inc. just launched the full commercial product—a cheeky, personalized chatbot that checks on you once a day for the price of $39 a month.”
AI holds much promise for people with mental health conditions, but it isn’t a person, and it isn’t a doctor. That means it has practical limits and more than a few legal and ethical issues. Even so, early indications suggest that AI may be the best hope of getting meaningful help to those who need it, perhaps saving many lives at minimal cost. That’s especially true in an age when healthcare costs are high, mental health issues are heavily stigmatized, and health insurance is not guaranteed to cover the costs.
Several studies connect social media and depression, finding a direct correlation between time spent on social media and feelings of isolation. In short, the technology isn’t social enough. Social media can’t replace real-life, in-person social interaction.
But what social media and other Internet activity can do is reveal behavioral patterns that signify the state of a user’s mental health and perhaps even predict a person’s actions. For example, Facebook is already using AI to spot warning signs of suicidal users. The AI looks for natural language patterns in a user’s posts and in comments by the user’s friends to quickly identify signs of a potentially imminent suicide. A human team at Facebook then verifies the AI’s findings and contacts the user to suggest ways he or she can get help. A suicide helpline chief told BBC News that “the move was not just helpful but critical.”
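Facebook has not published the details of its model, but the basic pattern described above, scoring a post’s language for risk signals and routing high-scoring posts to human reviewers, can be sketched in a few lines. Everything here (the phrase list, the scoring, the threshold) is a deliberately simplified illustration, not the production system, which uses far more sophisticated natural language analysis.

```python
# Toy illustration of language-pattern triage (not Facebook's actual model):
# score each post against a small list of hypothetical risk phrases, then
# queue posts that meet a threshold for review by a human team, which
# makes the final decision about contacting the user.

RISK_PHRASES = [
    "want to disappear",
    "no reason to go on",
    "say goodbye",
]

def risk_score(text: str) -> int:
    """Count how many risk phrases appear in the post (case-insensitive)."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in RISK_PHRASES)

def triage(posts: list[str], threshold: int = 1) -> list[str]:
    """Return posts whose score meets the threshold, for human review."""
    return [post for post in posts if risk_score(post) >= threshold]

posts = [
    "Great game last night!",
    "I just want to disappear and say goodbye to everyone.",
]
flagged = triage(posts)  # only the second post is queued for review
```

The crucial design choice, mirrored in the article, is that the automated score only prioritizes posts; a human verifies each flag before anyone is contacted.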
Facebook is also using algorithms to identify posts by terrorists, among other concerning content. This effort can matter to mental health efforts too, since some people with suicidal tendencies kill others before killing themselves, whether in a case of domestic violence or a mass shooting. Early detection and intervention can save more than one life.
Facebook is looking into doing more in the way of proactive intervention, such as contacting someone in the user’s network or medical personnel to actively intervene before it is too late. “It’s something that we have been discussing with Facebook,” said Dr. John Draper in a BBC article. “The more we can mobilize the support network of an individual in distress to help them, the more likely they are to get help. The question is how we can do that in a way that doesn’t feel invasive.”
Finding the line between privacy and appropriate intervention is a difficult challenge. But it’s one that AI may be able to help with as well, if it is used for more than back-end analytics. For example, it may take the form of a digital assistant or digital friend. In such a scenario, the AI would focus on learning the idiosyncrasies of a single person rather than searching for broad and common actions among many people.
According to the World Health Organization (WHO), “More than 300 million people are now living with depression, an increase of more than 18 percent between 2005 and 2015.” WHO further reports that almost 800,000 people of all ages die by suicide every year, and that suicide is the second leading cause of death among those 15 to 29 years old.
Even in well-established, high-income countries, access to mental health care can be limited by myriad factors: overbooked professionals and facilities, too few resources in certain areas, the social stigma attached to seeking mental health care, and a lack of funds to cover the costs. According to WHO, 78 percent of suicides globally happen in low- and middle-income countries, where access to mental health care is often even more limited.
AI holds tremendous promise for identifying and rapidly responding to individuals in crisis at scale. AI-driven digital assistants and analytics can spot people in trouble and respond faster than humans can, and they can do so for many more people than current healthcare systems can accommodate. Automated AI systems programmed for mental health assistance and intervention can also help people at far lower cost than a human professional. Further, users perceive greater privacy in talking to a machine than to a human they fear may be judgmental.
Take, for example, Karim, a psychotherapy chatbot designed by X2AI, a Silicon Valley company. As reported in The New Yorker, Karim was designed largely to help Syrian refugees. But Karim is not the company’s only brainchild. Emma is a Dutch-speaking bot that helps people with mild anxieties and phobias. Tess is an English-speaking chatbot skilled in cognitive-behavioral therapy and other techniques. The company has planned other bots to address a range of mental health problems for people who have experienced gang violence in Brazil or are living with HIV in Nigeria.
However, AI is not human, and it is not a doctor. Therefore, AI-driven technologies serve as assistants to mental health professionals or as observant, supportive companions to users. In other words, AI does not prescribe medications or admit anyone to the hospital.
Even so, AI-based technologies can make it possible for a limited number of mental health care professionals to manage far more patients effectively. It is this scalable approach to delivering mental health care to a growing number of sufferers that excites the overworked healthcare industry today. Indeed, lives depend on it. X2AI reports that 90 percent of suicides are preventable, if only someone can reach the person at risk fast enough.
While researchers at Facebook, X2AI, and other organizations work to use advanced analytics and automation to expand access to mental health response programs and improve their efficiency and effectiveness, consumer technologies are adopting AI to provide users with more TLC from fellow humans in their social sphere.
Apple’s Siri, Samsung’s Bixby, Amazon’s Echo helper Alexa, and Google’s Assistant are in our phones, on our tablets, and increasingly in our homes as smart speakers. Each could be fitted with AI programs capable of analyzing our mental states through indirect observation of our actions and direct observation of our interactions with these systems. They are also capable of summoning help in a crisis.
New smart home devices are designed to help the elderly age in place and the disabled function better and more safely in their own homes. Consider the many GPS-enabled medical alert systems for the elderly already in use. Many automatically detect falls, and all can summon help at the touch of a button.
At some point, these technologies will merge into a single, personalized AI entity that users will likely come to see as a personal assistant or friend, perhaps even as their “best friend forever,” since the AI is likely to know the user better than human friends do.
This high degree of familiarity will forge a feeling of intimacy and trust. It may also help relieve feelings of isolation and loneliness.
Further, it is likely that if an AI suggests getting help, the user will trust the recommendation and act upon it. One day soon, AI may go so far as to make the appointment and even summon an autonomous car to ferry the user to a facility. It may even send notes of its observations to the person’s doctor as a wellness feature when no immediate threat exists.
While AI is still an emerging technology, it is already capable of performing a great many tasks efficiently. It makes great sense to task it with helping people suffering from a variety of mental health issues. Fortunately, researchers are already working on it.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.