Mental health care is expensive. Talking to AI is free.
Chatting with AI isn't just a function of loneliness or convenience. It saves real money.
I wish you and yours a happy, healthful, and peaceful new year. In the year's first edition, I ask why millions of people all over the world are using AI bots for mental health support, despite the very real risks involved. Let me know what you think.
Love,
Tanmoy
Sam Altman, CEO of OpenAI, the company behind ChatGPT, said late last year that the platform has 800 million weekly active users. The company estimates that 0.07% of this user base – or 560,000 users – show possible signs of mental health emergencies related to psychosis or mania. Approximately 0.15% – 1,200,000 users – have conversations that include explicit indicators of potential suicidal planning or intent. A similar number show potentially heightened levels of emotional attachment to ChatGPT.
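For readers who want to verify the arithmetic, the figures follow directly from applying OpenAI's stated percentages to the 800 million weekly user base (a quick sketch; treating the percentages as applying to the full base, and the rounding, are my assumptions):

```python
# Sanity-check the ChatGPT figures: percentages of 800M weekly active users.
# Assumption: the stated percentages apply to the entire user base.
weekly_users = 800_000_000

psychosis_mania = round(weekly_users * 0.0007)   # 0.07% → 560,000
suicidal_signals = round(weekly_users * 0.0015)  # 0.15% → 1,200,000

print(f"{psychosis_mania:,}")   # 560,000
print(f"{suicidal_signals:,}")  # 1,200,000
```

Small as these percentages sound, at ChatGPT's scale they translate into populations the size of a mid-sized city.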
The usual narrative around why so many people turn to AI for mental health support, despite those risks, stresses three factors:
- Loneliness: People are profoundly lonely. They don't care if their interlocutor isn't a human being – they just want to feel heard and understood. AI is good at supplying this (faux) intimacy.
- Access: Conventional mental health care through therapy or psychiatry is simply not available for most people who need care. AI becomes their first and often only port of call.
- Convenience: Your therapist can typically see you for 50 minutes once a week. AI bots are available round the clock, never take holidays, never cancel on you, and never have scheduling conflicts.
A fourth factor tends to get glossed over – cost.