Published on 09/11/2020 | Written by Heather Wright
But can we really trust those bots with our deepest, darkest secrets?…
Mental health has been a hot topic for a while now, with increased pressure on companies to look out for staff wellbeing, both physical and mental.
Now, some companies are turning to robots. And, what’s more, a study by Oracle and HR research and advisory firm Workplace Intelligence says we’re actually happier with robot therapists than with humans.
The study, which polled more than 12,000 employees, managers, HR leaders and C-level executives in 11 countries, found 82 percent of people believe robots can support their mental health better than humans can, and 68 percent would rather talk to a robot than their manager about stress and anxiety at work.
The report comes as stress levels soar on the back of the Covid pandemic, with 78 percent of those surveyed reporting that the pandemic has negatively affected their mental health.
Interestingly, 75 percent of respondents say AI has already helped their mental health at work, including by providing the information needed to do their jobs more effectively, automating tasks and decreasing workload to prevent burnout, and reducing stress by helping prioritise tasks.
“With the global pandemic, mental health has become not only a broader societal issue, but a top workplace challenge,” says Emily He, Oracle Cloud HCM senior vice president. “It has profound impact on individual performance, team effectiveness and organisational productivity.
“Now more than ever, it’s a conversation that needs to be had and employees are looking to employers to step up and provide solutions.
“There is a lot that can be done to support the mental health of the global workforce and there are so many ways that technology like AI can help. But first, organisations need to add mental health to their agenda. If we can get these conversations started – both at an HR and an executive level – we can begin to make some change.”
Using technology for mental health isn’t new. Back in the 1960s there was Eliza, a natural language processing computer program that emulated a Rogerian psychotherapist. The Doctor script for Eliza created the illusion of a therapy session, simulating conversation but without any real substance.
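For the curious, the sketch below shows roughly how that illusion was achieved. It is a hypothetical Python reconstruction for illustration only (the original was written in MAD-SLIP); the patterns and canned responses here are invented, but the pattern-match-and-reflect trick is the essence of Doctor-style scripts.

```python
import re
import random

# Hypothetical ELIZA-style rules: a regex pattern paired with response
# templates that "reflect" the captured words back at the user.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["What makes you say you are {0}?"]),
    (r"(.*) my (boss|manager|job)(.*)", ["Tell me more about your {1}."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]


def respond(text: str) -> str:
    """Return a canned 'therapeutic' reply using pattern matching alone."""
    for pattern, templates in RULES:
        match = re.match(pattern, text.strip(), re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)


print(respond("I feel anxious about my workload"))
# e.g. "Why do you feel anxious about my workload?" - the words are simply
# echoed back, with no understanding behind them.
```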
With today’s advanced algorithms and higher computing power, much more is possible.
Fast forward 60 years and San Francisco’s Woebot Labs has been providing a ‘friend’ to talk to for several years. Created by a team of Stanford psychologists and AI experts, Woebot is a ‘fully automated conversational agent’ based on cognitive behavioural therapy, sending daily texts to check in on how people feel and offering ‘in the moment’ help.
Woebot is not alone. There’s Wysa, Joyable, Talkspace… Closer to home (though of course, in the digital world, location is no obstacle) Aroha is a chatbot, running on Facebook Messenger, designed to help young people in New Zealand cope with stress during the pandemic.
“The algorithms behind these new applications have been trained with enormous data sets and can produce genuine therapeutic statements,” says Alena Buyx, Professor of Ethics in Medicine and Health Technologies at Technical University of Munich (TUM), of the slew of AI therapy offerings.
But Buyx and her team at TUM aren’t entirely convinced that the new bots are all good.
Buyx and several colleagues at TUM have been studying how ‘embodied AI’ can help treat mental illness and they’re calling for urgent action by governments, professional associations and researchers, saying ‘important ethical questions’ about the technology remain unanswered.
The TUM team say the new applications have ‘enormous potential’, making treatment accessible to more people and addressing the comfort issue highlighted by the Oracle study.
But they also note that it’s well established that human biases can be built into algorithms, reinforcing existing forms of social inequality.
“This raises the concern that AI-enabled mental health devices could also contain biases that have the potential to exclude or harm in unintended ways, such as data driven sexist or racist bias or bias produced by competing goals or endpoints of devices.”
They say there are longer term questions to be considered too, including the potential for patients to become ‘overly attached’ to the applications.
“Therapeutic AI applications are medical products for which we need appropriate approval processes and ethical guidelines,” says Buyx. “For example, if the programs can recognise whether patients are having suicidal thoughts, then they must follow clear warning protocols, just like therapists do, in case of serious concerns.”
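To make Buyx’s point concrete, here is a deliberately simplified sketch of what such a warning protocol could look like in code. It is purely illustrative: the risk phrases, the escalate_to_human handover and the generate_cbt_reply fallback are hypothetical placeholders, not any vendor’s actual safety logic, and a real system would rely on far more than keyword matching.

```python
# Purely illustrative escalation sketch - phrases, functions and wording are
# hypothetical, not taken from Woebot or any real product.
RISK_PHRASES = ("want to die", "end my life", "kill myself")


def escalate_to_human(message: str) -> str:
    # Placeholder: a real protocol would alert an on-call clinician and
    # surface local crisis-line details, not just reply with text.
    return "I'm worried about your safety. I'm connecting you with a person right now."


def generate_cbt_reply(message: str) -> str:
    # Placeholder for the normal conversational path.
    return "Thanks for sharing. What thought went through your mind just then?"


def handle(message: str) -> str:
    """Route high-risk messages to a human instead of the usual chatbot flow."""
    if any(phrase in message.lower() for phrase in RISK_PHRASES):
        return escalate_to_human(message)
    return generate_cbt_reply(message)
```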
She’s keen to see a lot more research before we all rush headlong into AI therapy.
“We have very little information on how we as human beings are affected by contact with therapeutic AI,” she says. “For example, through contact with a robot, a child with a disorder on the autism spectrum might only learn how to interact better with robots – but not with people.”
Even Woebot’s team agree robots can only do so much.
In a Reddit AMA, Alison Darcy, founder and CEO of Woebot, said the potential for AI to ever take the place of humans in this realm is ‘massively overhyped’.
“Even if AI gets to a level of sophistication where it can be a wholesale alternative to traditional mental-health treatments, I believe that to be successful, it will need to be part of an integrated care pathway,” adds Jose Hamilton Vargas, CEO of Youper.