French researchers at two Paris institutions recently published the results of an investigation into patients' perceptions of wearable biometric monitoring devices (BMDs) and artificial intelligence (AI) in healthcare in npj Digital Medicine. The researchers, from the METHODS Team of the Center for Research in Epidemiology and Statistics at Université Paris Descartes and the Center for Clinical Epidemiology at Hôtel-Dieu Hospital, questioned the real-world effectiveness of BMDs and AI if patients don't accept the new technologies in their care plans. The project was inspired by concern that the "hope and hype" surrounding BMDs and AI, while spurring developments worldwide, could be overrated if patients weren't on board.
The research team enlisted the assistance of adult patients with chronic conditions enrolled in France's Community of Patients for Research (ComPaRe). In mid-2018, 1,183 ComPaRe patients participated in the study. The patients answered open-ended and quantitative questions about the potential dangers and benefits of BMDs and AI. They also expressed their willingness to use BMDs and AI in their own care based on four different scenarios.
Responding to the questionnaires, 47% of the ComPaRe participants saw BMDs and AI as a "great opportunity" to improve follow-up, reduce treatment burdens, and facilitate physicians' efforts. At the other end of the spectrum, 11% of the respondents viewed BMDs and AI as a "great danger," citing the inability to replace human intelligence in care, the risk of hacking, and the potential for misuse of personal data. Summarizing both questionnaires, the researchers reported that 20% of the patients felt the benefits of BMDs and AI far outweighed the dangers, while 3% believed the potential risks outweighed the benefits.
The researchers presented four scenarios to determine whether the patient-respondents would accept BMDs and AI in their own care. The patients were asked whether they would accept the new technologies to screen for skin cancer, predict flares of their chronic conditions, guide physical therapy, and help determine the urgency of a problem via a chatbot. In their responses, 13% opposed using the technologies in all cases, 22% would decline BMDs and AI in one to three of the scenarios, and 65% said they would accept BMDs and AI in their care in all four vignettes. The positive respondents varied in how readily they would accept the two technologies operating without human control, but acceptance, in general, was higher than rejection.
The researchers noted the patients were more likely to accept AI used for predictions but preferred human judgment for decisions, actions, and recommendations about their care. The ComPaRe participants were comfortable viewing AI as a form of clinician assistance (akin to driver assistance in automobiles) but not as a replacement for clinician judgment.
The French study is an interesting read, but it also underscores the need for additional, more generalized studies of patient acceptance of and attitudes toward new technologies. The researchers pointed out that reports from patients or subjects who are themselves involved in testing a technology may be less useful than broader studies. For example, a patient with diabetes who is helping test a specific technology for that disease can offer a useful response, but one colored by personal involvement.