The academic health system Virtua Health is adopting an artificial intelligence-enabled chatbot to provide care to behavioral health patients.
Virtua Health, based in Marlton, New Jersey, is using a tool from Woebot Health, which has developed an app that uses AI to deliver therapy to patients through a chatbot. Primary care physicians are referring an initial cohort of up to 1,000 Virtua patients with mild-to-moderate signs of depression or anxiety to Woebot. The two organizations are also creating a resource center to help clinicians understand how to deploy the chatbot effectively. Financial terms of the arrangement were not disclosed.
Digital health companies are increasingly using voice- and text-based AI during patient visits to alert clinicians when someone is at risk for suicide. AI has also been pegged as a tool that health systems can use to help deal with shortages of mental health clinicians. But the concept has medical ethicists worried about vulnerable patients not understanding how the tools are being used and whether the AI adheres to clinical guidelines.
Dr. Tarun Kapoor, Virtua's chief digital transformation officer, spoke about why the health system adopted an AI chatbot for mental health, his concerns over the potential safety ramifications for patients using the chatbot for therapy and more. The interview has been edited for length and clarity.
What are the mental health challenges your patient population faces that made you adopt Woebot?
What we're experiencing in southern New Jersey is happening across the country. Our community is having trouble accessing mental health professionals. That was starting to happen even before the [COVID-19] pandemic. The pandemic accelerated the mental health crisis, unfortunately. We're still recovering from that ... it's happening across the board from adults to children. We’ve come to a profound realization that we can’t recruit our way out of this. It’s not just about hiring more behavioral health therapists. We're all looking for those [providers]. Einstein said the definition of insanity is doing the same thing and expecting a different result. We’re trying some other things in addition to what we've already been doing.
Why Woebot?
What caught our attention about Woebot was the company's rigor on its studies, clinical trials and research papers. And the fact that the founder was an academic. This wasn't just a garage startup. This was a group of folks who actually put a lot of energy into validating the science the same way that you would expect a pharmaceutical drug to go through validation.
How will the collaboration work?
We're targeting patients with typically mild to moderate depression and/or anxiety who a clinician thinks would benefit from having a mental health therapist with them. A lot of work in cognitive behavioral therapy happens outside of the office. That is how cognitive behavioral therapy is supposed to work. You don’t have to be in person. We saw some interesting data from Woebot. The number one time that people interact with these tools has been between 2 a.m. and 5 a.m. I can tell you our therapists are typically not available between 2 a.m. and 5 a.m. to have conversations with patients. How do we provide tools that are available at the time a person needs them? We’re not saying this is an either/or and that you don't need a [human] therapist. This is here to accentuate the work you do with a therapist.
What safeguards are in place? If it’s 2 a.m. and a patient tells Woebot they have suicidal ideation, do you want a chatbot to deal with that?
That was legitimately the very first question I asked Woebot. After looking into this extensively, we felt very comfortable that there are very good listening definitions [within the AI], so that if it starts to pick up on any subtlety of something that could be deemed homicidal or suicidal ideation, Woebot immediately prompts the [patient] to call a crisis line and triggers an intervention. There’s an entire protocol that kicks off. Another way of thinking about it is ... a patient could be having a conversation with somebody else, such as a friend on social media. At least if they are having this conversation with this [chatbot], it is programmed to listen for either overt or subtle statements and can trigger that intervention. It’s listening and can say we need to escalate this.
We had a good discussion internally about this: If someone were to send a message to a doctor's office at 2 a.m., you may not get a response for hours. If you send a message in the portal to your doctor, the doctor is not necessarily watching that. We’ve set the parameters with this [chatbot]. It is not for emergencies. But at least it’s better than just [a message] sitting there. The only way to have no exposure in medicine is to stop practicing medicine. There's always going to be risk in medicine. You operate on somebody, there's going to be risk.
There is a lot of hype about AI and mental health, but also concerns. As an early adopter, are you opening yourselves up to criticism?
We’re not letting the machine go hog wild here. We're giving the machine guardrails. There is some flexibility in how it interprets things, but you can’t just have it say, “The best way to solve this is to go ride a bicycle down a mountain.” We’re not going to let it do that. We talk about guided and structured AI and having guardrails versus some of these other tools that are out there, where no one is even watching the results that the [AI] has been putting out. This is a much more guarded version.