Mental health treatment plans can involve medication and other physical treatments, but talk therapy remains a nearly universal component. In the 1960s, Joseph Weizenbaum of the MIT AI Laboratory wrote the ELIZA program, which conversed in the style of a reflective Rogerian therapist. The challenge at the time was to give the program enough apparent intelligence that humans who interacted with it remotely via a computer terminal believed they were talking with a trained human therapist.

Fast-forward to 2019, and mental health patients still talk with humans. Recent research in France on patient acceptance of biometric monitoring devices and AI-assisted treatment shows increasingly positive attitudes toward AI in analysis and diagnosis across forms of illness, but the majority of patients still want a physician to make treatment decisions. Researchers from Columbia University and Cambridge University are working with Touchkin, a startup based in Bangalore, India, to develop a “compassionate AI chatbot for behavioral health.”

Researchers at the University of Utah Social Research Institute reversed the AI-powered conversation paradigm. The Utah group focuses on using AI-based neural conversational agents (chatbots) to talk with therapists-in-training rather than with patients. Mental health therapy training requires purposeful conversation practice in many forms: with peers, with practice patients, and with real patients or clients. Academics and professional trainers who observe trainee practice sessions to evaluate and give feedback add significant expense to these programs. The Utah work centers on that feedback.

The scientists trained a text-based neural conversational agent on a collection of 2,354 psychotherapy transcripts. The team also trained the chatbot with specific feedback responses related to interviewing and counseling skills. The researchers enlisted 151 non-therapists for the study and randomly assigned the subjects to one of two groups. The first group received immediate feedback from the chatbot on their question-asking and reflection skills during practice therapy sessions. The researchers gave the second group basic education on skills for talking with patients and encouraged them to use those skills, but there was no interaction during the practice sessions. The chatbot group used 91% more reflections than the control group during sessions in which the chatbot gave feedback. In subsequent sessions without chatbot feedback, the first group still used 76% more reflections than the control. The chatbot group asked more open-ended questions while they received feedback, but not after the feedback stopped. Overall, the chatbot group used 31% more listening skills than the control group, which received only the initial training and no feedback during practice sessions.
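To make the setup concrete, here is a minimal, hypothetical sketch of how an immediate-feedback loop like this might work. It is not the Utah team's actual system: the simple rule-based classifier below stands in for the neural models the study trained on psychotherapy transcripts, and all names, phrase lists, and feedback messages are illustrative assumptions.

```python
# Hypothetical sketch of an immediate-feedback loop for counseling-skill
# practice. The rule-based classifier is a stand-in for trained neural models.

OPEN_QUESTION_STARTERS = ("what", "how", "tell me", "describe", "why")
REFLECTION_STEMS = ("it sounds like", "you feel", "you're saying", "so you")

def classify_utterance(text: str) -> str:
    """Label a trainee utterance as 'open_question', 'reflection', or 'other'."""
    t = text.lower().strip()
    if t.endswith("?") and t.startswith(OPEN_QUESTION_STARTERS):
        return "open_question"
    if t.startswith(REFLECTION_STEMS):
        return "reflection"
    return "other"

# Illustrative feedback messages keyed by detected skill.
FEEDBACK = {
    "open_question": "Nice open-ended question: it invites the client to elaborate.",
    "reflection": "Good reflection: you restated the client's feeling in your own words.",
    "other": "Try an open question or a reflection to deepen the conversation.",
}

def practice_turn(trainee_utterance: str) -> str:
    """Return immediate skill feedback for one trainee turn, as the chatbot might."""
    return FEEDBACK[classify_utterance(trainee_utterance)]

if __name__ == "__main__":
    for line in [
        "How did that make you feel?",
        "It sounds like the week was overwhelming.",
        "Did you take your medication?",
    ]:
        print(f"{line!r} -> {practice_turn(line)}")
```

In the study's design, this kind of turn-by-turn feedback was shown only to the first group; the control group practiced without any interaction.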

The study indicates that trainee practice with chatbot feedback can improve therapist skills. The study, published in the Journal of Medical Internet Research, is a proof of concept that now needs further development and testing, but these initial results are encouraging. I recall my own training as a therapist in the late 1970s and my later years supervising and observing graduate counseling students in their training. Unobtrusive, accurate feedback during practice sessions would have been an immense improvement.