Artificial intelligence and big data score once again for health tech. Researchers from the Icahn School of Medicine at Mount Sinai in New York and Boston University’s Department of Bioengineering and Bioinformatics recently published a study in Radiology. The research team set out to determine whether artificial intelligence based on natural language processing could accurately interpret radiology reports and assist physicians with diagnosis. The short version: the AI engine identified the critical language in radiology reports with 91% accuracy.
We’ve written previously about natural language and big data applications in healthcare. Several large organizations have partnered with IBM Watson’s natural language processing capabilities to help people manage their own healthcare. The potential to predict and protect against a wide range of diseases and health conditions drives the National Institutes of Health (NIH) “All of Us” campaign. The Icahn and B.U. team’s work involved comparing various artificial intelligence approaches to identifying specific head conditions from the words used in radiologists’ reports. The researchers worked with reports based on x-rays, computed tomography (CT) scans, and magnetic resonance imaging (MRI).
The team used an initial group of 96,303 CT reports to train the artificial intelligence program. They also supplied the natural language recognition engine with physician-assigned labels, so the system could associate report language with specific findings. Noting that the success of their project benefited from the standardized language used in radiology reports, the researchers concluded that artificial intelligence could indeed be employed to recognize findings in the reports. This is another step towards having machines help humans with complex tasks, which has the potential to make healthcare more accurate, faster, and less expensive.
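To make the idea concrete, here is a minimal sketch of the general technique described above: training a text classifier on physician-labeled report language. This is not the study's actual method or data; the toy sentences, the labels ("critical"/"normal"), and the simple naive Bayes model are all illustrative assumptions standing in for the team's far larger labeled dataset.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy examples standing in for physician-labeled radiology
# reports; the study's real dataset (96,303 labeled CT reports) is not shown.
TRAIN = [
    ("acute intracranial hemorrhage in the right frontal lobe", "critical"),
    ("large subdural hematoma with midline shift", "critical"),
    ("no acute intracranial abnormality", "normal"),
    ("unremarkable head ct without evidence of hemorrhage", "normal"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Fit a simple multinomial naive Bayes model on labeled report text."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the most probable label for a new report sentence."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = train(TRAIN)
print(classify("acute hemorrhage with midline shift", model))  # -> critical
print(classify("no acute abnormality on head ct", model))      # -> normal
```

The sketch also illustrates why the researchers' note about standardized report language matters: a word-frequency model like this one works best when the same findings are consistently described with the same phrases.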