Medical personnel use pain level information to determine the type and amount of pain medication to administer or prescribe. Unfortunately, pain level data depends on both objective and subjective factors that can interfere with determining the proper medication choice and dosage. The current gold standard for pain assessment, the Visual Analog Scale (VAS), asks patients “How much pain are you in on a scale of 1-10?” and is subject to manipulation as well as cultural, ethnic, and individual factors. With the opioid addiction epidemic in the U.S., accurate pain assessment tools have gained even greater significance.

Because of this subjectivity and context dependence, VAS results can vary significantly between individuals. Researchers at MIT recently published a study in the Journal of Machine Learning Research documenting the effectiveness of DeepFaceLIFT. The researchers, from MIT’s Media Lab, the Computer Science and Artificial Intelligence Laboratory, and the Department of Electrical Engineering and Computer Science, devised DeepFaceLIFT as a two-stage personalized model. The team used the UNBC-McMaster Shoulder Pain Expression Archive Database (UNBC-PAIN), a public dataset of facial image sequences and videos of subjects with one-sided shoulder pain performing various range-of-motion exercises. With the UNBC database as a reference, DeepFaceLIFT uses machine learning to analyze video of a subject’s face, together with the subject’s self-reported pain scores, to produce a pain level estimate. In the study, DeepFaceLIFT’s estimates outperformed the traditional VAS approach.
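The two-stage structure described above can be sketched roughly as follows. This is a minimal illustration only: the function names, the linear stand-in for the first-stage neural network, and the summary-statistic weights are all assumptions for demonstration, not the actual model published by the MIT team.

```python
import numpy as np

def stage1_frame_scores(frames, weights):
    # Stage 1 (hypothetical): map per-frame facial features to a
    # per-frame pain score. A simple linear model stands in here for
    # the neural network used in the real system.
    return frames @ weights

def stage2_personalized_vas(frame_scores, personal_bias=0.0):
    # Stage 2 (hypothetical): summarize the sequence of frame-level
    # scores into statistics and map them to one VAS-style estimate,
    # with a per-subject calibration offset for personalization.
    stats = np.array([frame_scores.mean(),
                      frame_scores.max(),
                      frame_scores.std()])
    coeffs = np.array([0.6, 0.3, 0.1])  # illustrative weights only
    return float(stats @ coeffs + personal_bias)

# Example: a 4-frame clip with 3 facial features per frame.
frames = np.ones((4, 3))
weights = np.array([1.0, 1.0, 1.0])
scores = stage1_frame_scores(frames, weights)
estimate = stage2_personalized_vas(scores)
```

The key idea the sketch preserves is the split of labor: one model works frame by frame, and a second, lighter model personalizes the aggregate into a single self-report-style score.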

Next steps for the MIT team include comparing DeepFaceLIFT with other pain scores and performing more advanced statistical analysis to “potentially capture additional information and improve estimates of subjective pain.”