
Does AI have a role to play in mental health treatment?

Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, discuss whether AI can help with mental health treatments—and what guardrails are necessary.


In recent weeks, OpenAI has faced seven lawsuits alleging that ChatGPT contributed to suicides or mental health breakdowns. In a recent conversation at the Innovation@Brown Showcase, Brown University’s Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health AI startups, discussed the controversial relationship between AI and mental health. They weighed the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friends and romantic partners.

This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Robert Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

A recent study showed that one of the major uses of ChatGPT is mental health, which makes a lot of people uneasy. Ellie, I want to start with you and the new institute you direct, known as ARIA, which stands for AI Research Institute on Interaction for AI Assistance. It’s a consortium of experts from a bunch of universities, backed by $20 million in National Science Foundation funding. So what is the goal of ARIA? What are you hoping it delivers? Why is it here?

Pavlick: Mental health is something that is, I would say, I don’t even know if polarizing is the right word. I think many people’s first reaction to the concept of AI mental health is negative. So as you can tell from the name, we didn’t actually start as a group that was trying to work on mental health.

We were a group of researchers who were interested in the biggest, hardest problems with current AI technologies. What are the hardest things that people are trying to apply AI to that we don’t think the current technology is quite up for? Mental health came up, and it was actually taken off our list of things we wanted to work on at first, because it is so scary to think about how big the risks are if you get it wrong. And then we came back to it exactly because of this. We basically realized that this is happening, people are already using it. There are companies, startups, some of them probably doing a great job, some of them not.

The truth is we actually have a hard time even being able to differentiate those right now. And then there are a ton of people just going to chatbots and using them as therapists. And so we’re like, the worst thing that could happen is we don’t actually have good scientific leadership around this. How do we decide what this technology can and can’t do? How do we evaluate these kinds of things? How do we build it safely in a way that we can trust?

There are questions like this. There’s a demand for answers, and the reality is most of them we just can’t answer right now. They depend on an understanding of the AI that we don’t yet have. An understanding of humans and mental health that we don’t yet have. A level of discourse that society isn’t up for. We don’t have the vocabulary, we don’t have the terms. There’s just a lot that we can’t do yet to make this happen the right way. So that’s what ARIA is trying to provide: this public-sector, academic kind of voice to help lead this discussion.

That’s right. You’re not waiting for this data to come out, or for the final word from academia or from this consortium. You’re already investing in companies that do this. I know you’re an early-stage investor in Slingshot AI, which delivers mental health support via the app Ash. Is Ash the kind of service that Ellie and her group should be wary about? What were you thinking about when you decided to make this investment?

Darabi: Well, actually I’m not hearing that Ellie’s wary. I think she’s being really pragmatic and realistic. In broad brushstrokes, zooming back and talking about the sobering facts and the scale of this problem, one billion out of eight billion people struggle with some sort of mental health issue. Fewer than 50% of people seek out treatment, and then the people who do seek it out find the cost to be prohibitive.

That recent study that you cited is probably the one from the Harvard Business Review, which came out in March of this year. It studied use cases of ChatGPT, and the analysis showed that the number one, number four, and number seven of the top 10 use cases for foundational models broadly are therapy or mental health related. I mean, we’re talking about something that touches half of the planet. If you’re looking at investing with an ethical lens, there’s no greater TAM [total addressable market] than people who have a mental health disorder of some sort.

We’ve known the Slingshot AI team, which has built the largest foundation model for psychology, for over a decade. We’ve followed their careers. We think exceptionally highly of the advisory board and panel they put together. But what really led us down the rabbit hole of caring deeply enough about mental health and AI to, frankly, start a fund dedicated to it, which we did in December of last year, was going back to the fact that AI therapy is so stigmatized that people hear it and immediately jump to the wrong conclusions.

They jump to the hyperbolic examples of suicide. And yes, it’s terrible. There have been incidents of deep codependence on ChatGPT or other chatbots whereby young people in particular are susceptible to very scary things. And yet those salacious headlines don’t represent the vast number of folks whom we think will be well served by these technologies.

You used this phrase: we kind of stumbled on [these] uses for ChatGPT. It’s not what it was created for, and yet people love it for that.

Darabi: It makes me think about 20 years ago when everybody was freaking out about the fact that kids were on video games all day, and now because of that we have Khan Academy and Duolingo. Fearmongering is good actually because it creates a precedent for the guardrails that I think are absolutely necessary for us to safeguard our children from anything that could be disastrous.

But at the same time, if we run in fear, we’re just repeating history, and it’s probably time to embrace the snowball, which will become an avalanche in mere seconds. AI is going to be omnipresent. Everything that we see and touch will be in some way supercharged by AI. So if we’re not understanding it to the best of our capabilities, then we’re actually doing ourselves a great disservice.

Pavlick: To this point, yes, people are drawn to AI for this particular use case. On our team in ARIA, we have a lot of computer scientists who build AI systems, but a lot of our teams also do developmental psychology, core cognitive science, and neuroscience. There are questions of why and how. What are people getting out of this? What need is it filling? I think these are really important questions to be asking soon.

I think you’re completely right. Fearmongering has a positive role to play. You don’t want to get too caught up in it, and you can point historically to examples where people freaked out and it turned out okay. There are also cases like social media, where maybe people didn’t freak out enough, and I would not say it turned out okay. People can agree to disagree, and there are pluses and minuses, but the point is that these are questions we are now in a position to start asking.

You can’t do things perfectly, but you can run studies. You can say, “What is the process that’s happening? What is it like when someone’s talking to a chatbot? Is it similar to talking to a human? What is missing there? Is this going to be okay long-term? What about young people who are doing this in core developmental stages? What about somebody who’s in a state of acute psychological distress, as opposed to using it as a general maintenance thing? What about somebody who’s struggling with substance abuse?” These are all different questions, and they’re going to have different answers. Again, with the one LLM that is just one interface for everything, a lot is unknown, but I would bet that that’s not going to be the final thing that we’re going to want.


ABOUT THE AUTHOR

Robert Safian is the editor and managing director of The Flux Group. From 2007 through 2017, Safian oversaw Fast Company’s print, digital and live-events content, as well as its brand management and business operations.
