Other developers are also venturing into the space. TQIntelligence’s Clarity AI voice technology uses short voice recordings from patients to help therapists diagnose mental health issues in children. The AI can measure the severity of emotional distress based on the scientific relationship between trauma, stress and the human voice, said CEO Yared Alemu. The tool was trained on 14,000 voice samples, he said.
TQIntelligence, the Morehouse School of Medicine in Atlanta and the Georgia Institute of Technology published a study that found the AI was between 79% and 85% accurate in its assessments. The AI-enabled mental health diagnoses are verified against standard mental health screenings, including the Patient Health Questionnaire-9, which measures depression severity and suicidal ideation. The company obtains parental permission before using the AI, Alemu said.
Using AI in that context is appropriate, said Dr. Stephen J. Motew, chief of clinical enterprise at Falls Church, Virginia-based Inova Health System. Motew is helping run several AI pilots at Inova and said he fields an endless stream of AI pitches each week.
“It makes a lot of sense because it's very simple, it's stepwise and you can build it on an algorithm,” Motew said.
Alemu said such tools can help offset the clinician shortage by making patient assessments more efficient. According to federal data, 160 million Americans live in areas with a shortage of mental health professionals.
“There are pretty extensive disparities in [mental health] treatment outcomes, especially for kids from low-income communities,” Alemu said. “Our approach is augmented clinical intelligence. We are looking at ways to insert AI in the right places at the right time.”
Medical ethicists' concerns
Ethicists say the gold rush to incorporate AI into medicine means some solutions aren't being built on strong clinical data, and that's a particular concern for vulnerable patient populations, such as those at risk of suicide.
Arthur Caplan, founding head of the division of medical ethics at the NYU Grossman School of Medicine, said mental health companies should have professional clinical associations validate that their AI meets the standard of care for patients. He hasn’t seen any tech company go that route.
“These associations need to set the standards,” Caplan said. “They need to say, ‘This is how you should recruit patients. This is what should be in your ads. This is what should be in place for a verified AI program that meets our standards.’”
In June, the American Psychological Association released an advisory urging clinicians to “exercise caution in recommending or incorporating AI-driven tools into their practice.” The association said many AI tools lack evidence of quality, safety and effectiveness and can cause harm. It recommended clinicians evaluate apps based on their accessibility, privacy and safety standards, clinical foundation, usability and whether they help achieve a patient’s therapeutic goals.
Caplan said he understands the appeal of these tools, but many AI interventions lack informed consent, which can leave patients unaware of how the technology uses their data.
“If you go online and use AI to pick out your new shirt, whatever,” Caplan said. “But when you use it for mental illness, this makes you vulnerable to exploitation.”
U.S. developers of AI are unlikely to face legal liability in these circumstances, said Barry Solaiman, a professor of medical ethics at Weill Cornell Medicine’s campus in Qatar. Still, he said he generally recommends doctors not use ChatGPT or other generative AI applications unless the hospital itself developed them.
“What happens if the AI says you're not at risk of suicide, then you are and harm yourself? Or if the doctor doesn’t act on the information given by the system? It’s always going to depend on the standard of care and what information the doctor had,” Solaiman said.