Data v humans: this is how we win the battle for the future of mental healthcare

A disturbing use of user data by a popular mental health helpline has shocked the world. We must push back.

Deepa Singh

Content warning

Mentions suicide.

In late January, news broke that Crisis Text Line (CTL), a pioneering free, text-based, 24/7 mental health helpline active in the US, UK, Canada, and Ireland, was engaged in murky practices involving its user data.

CTL, a non-profit founded in 2013, has volunteer crisis counsellors who support people living through self-harm, emotional abuse, and thoughts of suicide. A Politico investigation revealed that the platform shared what it called ‘the largest mental health data set in the world’ with its for-profit subsidiary Loris AI — which in turn used ‘a sliced and repackaged version’ of that data to build and market ‘empathetic’ customer service software capable of handling ‘hard conversations’.

As Politico's Alexandra S. Levine reported, Loris pledged to share a part of its revenues with CTL, and the arrangement was presented as an example of how for-profit and non-profit companies can coexist symbiotically. "Simply put, why sell t-shirts when you can sell the thing your organization does best?" CTL says on its website.

CTL countered the investigation and the outrage it triggered by saying that all data was anonymised. It also ended its data-sharing arrangement with Loris AI. But none of that took away from the shock that it had helped exploit sensitive data related to vulnerable users, who trusted it as a safe space in moments of severe crisis.

As someone with lived experience of mental illness, I found the story agitating, disappointing, and baffling. I felt violated. And as a researcher working on the ethics of artificial intelligence, I saw in it numerous critical questions – about the growing role of technology in mental health, the rise of mental health startups, their relationship with user data, and our collective digital future – that need to be brought to public attention for an urgent, wider debate.
