Empathetic AI: Panacea or Ethical Problem?
Millions of people are turning to specialist therapy chatbots for mental health support. But AI tools could do more harm than good.
May 11, 2024
A Strategic Intervention Paper (SIP) from the Global Ideas Center
You may quote from this text, provided you mention the name of the author and reference it as a new Strategic Intervention Paper (SIP) published by the Global Ideas Center in Berlin on The Globalist.
Weighed down by the anxieties fueled by our uncertain times, millions of people are turning to specialist therapy chatbots for mental health support.
Compared to human therapists, these AI-driven applications are cheap, convenient and available around the clock. They could also help plug a serious clinician shortage around the world.
But because they lack genuine emotional experiences, these new AI tools could do more harm than good.
To get a grip on the ethical and neurobiological consequences of AI technologies, especially those that mimic empathy, we must ask ourselves whether current forms of AI will actually make people feel more heard and understood.
If so, will so-called empathetic AI change the way we conceive of empathy and interact with each other?
A viable alternative?
It is easy to understand why smart therapy chatbots are seen as a possible panacea for creaking, under-staffed mental health infrastructures. A recent parliamentary report in the UK showed that while the NHS’s mental health workforce grew by 22% between 2016 and 2022, patient referrals rose by 44%.
This is not just a British problem. A Eurobarometer poll revealed that on average every fourth European had difficulty accessing treatment for a mental health issue – in some countries, such as Ireland, this rose to almost every second person.
Some European countries report waiting times of more than 230 days to see a child psychiatrist, and for many people appointments with private clinicians are cost-prohibitive.
The hope is that this can be remedied by AI-driven applications that can detect and respond to human emotions. A recent study conducted by the USC Marshall School of Business showed that AI has the potential to provide “enhanced emotional support capabilities” and can make recipients feel more “heard” compared to responses from untrained humans.
A pressing global health threat
This would be a welcome development for many given the growing levels of loneliness, which the World Health Organization has described as a pressing global health threat.
A Meta-Gallup survey taken across 142 countries found that nearly 1 in 4 adults reported feeling “very or fairly lonely.” The poll also found that the rates of loneliness were highest in young adults aged 19 to 29.
These trends have led to calls for AI tools to be made “artificially vulnerable and fully empathic” – not only to cater to emotional support needs, but also to prevent the emergence of sociopathic robots.
But therein lies the rub: You need to have emotions to experience empathy. However well-meaning the attempts to create artificial empathy might be, they fail to recognize the dangers to people seeking emotional support from machines that only pretend to care.
Assisting the grieving process?
AI-assisted grieving tools that replicate the dead are a case in point. Despite being built to take the sting out of death, they raise ethical quandaries and could do more harm than good, because neuroscience and evolutionary psychology show that the grieving process is important for the human species.
The grieving process involves the activation of complex brain areas and neuroendocrine disruptions: Reminders of lost loved ones activate the brain’s reward circuitry, which can hinder our ability to “move on.”
Interactions that feel real but lack substance could be dangerous, as they interrupt the healthy progression through the stages of grief, which is critical for avoiding prolonged grief.
Close, but not quite
Despite rapid technological innovation – chatbots powered by Large Language Models (LLMs) have made progress in reading our facial expressions and vocal tone – AI technologies still, at best, only mimic the empathy expressed by humans.
This is in large part because AI chatbots lack mirror neurons, which are necessary for compassion and affective empathy. The intricate mechanisms involved in moral judgment in the human brain render human morality uniquely sophisticated and hard to replicate in machines.
This is also the case with empathy, a capacity that enables us to act as a moral species and make ethical judgments.
To get a grip on the speed of technological innovation and robot morality, we need to engage directly with issues that lie at the intersection of AI, neuroscience and philosophy (an area I have termed Neuro-Techno-Philosophy).
Transdisciplinary endeavors such as Neuro-Techno-Philosophy can teach us a lot about human frailty and empathy, both at the individual and group level.
By understanding our neurochemical motivations, neurobehavioral needs, fears and predilections, we are better placed to understand how AI may, one day, acquire notions of right and wrong, and learn to operationalize them in complex situations.
You cannot replace the human brain
Even if this happens, AI tools will still lack the affective brain circuits that render many human emotions – such as empathy – truly complex and nuanced.
Empathy is linked to the so-called pain matrix, which refers to the brain areas that are involved in processing pain, such as the bilateral anterior insula and the dorsal anterior cingulate cortex.
Recent studies also point to links between empathy and somatosensory processing – for instance, when we see the pain experienced by others. In simple terms, empathy is a two-way process: Its effectiveness depends on how we emotionally connect to the other person – or machine.
Conclusion
Advanced AI technologies may soon learn to respond in ways that provide emotional support while demonstrating understanding and validation.
It is becoming evident that as applications for artificial empathy expand, guidelines and transparency will be needed to integrate the technology ethically.
But we should be clear-sighted about the limits of AI-driven emotional support tools in meeting human psychological needs and preserving human dignity.
AI models are now advanced enough to detect emotions. But AI will, at least for the foreseeable future, lack real emotional experiences and, therefore, the ability to truly empathize.
Takeaways
It is easy to understand why smart therapy chatbots are seen as a possible panacea for under-staffed mental health infrastructures.
To get a grip on the ethical and neurobiological consequences of AI technologies, especially those that mimic empathy, we must ask ourselves whether current forms of AI will actually make people feel more heard and understood.
Millions of people are turning to specialist therapy chatbots for mental health support. But AI tools could do more harm than good.
AI models are now advanced enough to detect emotions. But AI will, at least for the foreseeable future, lack real emotional experiences and, therefore, the ability to truly empathize.
It is becoming evident that as applications for artificial empathy expand, guidelines and transparency will be needed to integrate the technology ethically.