AI-powered hand test may be used to rate Parkinson’s severity
Patients were asked to tap their fingers in front of a webcam
An online artificial intelligence (AI)-powered test where users tap their fingers in front of a webcam may ultimately help rate the severity of Parkinson’s disease from afar and within minutes, a study suggests.
The system performed almost as well as three expert neurologists, and better than two primary care physicians, at rating symptom severity from the recordings, producing scores that mirror Parkinson’s symptoms assessed with validated clinical tools.
“These findings could have huge implications for patients who have difficulty gaining access to neurologists, getting appointments, and traveling to the hospital,” Ehsan Hoque, PhD, the study’s senior author and an associate professor of computer science at the University of Rochester, said in a university news release. “It’s an example of how AI is being gradually introduced into health care to serve people outside of the clinic and improve health equity and access.” Hoque is also the co-director of the Rochester Human-Computer Interaction Laboratory.
The study, “Using AI to measure Parkinson’s disease severity at home,” was published in npj Digital Medicine.
Rating disease severity can help doctors — usually a neurologist or movement disorder specialist — keep track of Parkinson’s progression over time and decide on the most appropriate treatment plan for each patient.
AI to assess disease severity
The Movement Disorder Society’s Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) is commonly used to assess Parkinson’s severity. Because it is based on in-person patient examinations, its results depend on the experience and skill of the trained neurologist or movement disorder specialist who administers it.
Not every patient receives care from these specialists, however. “Access to neurological care is limited, and many individuals with [Parkinson’s] do not receive proper treatment or diagnosis,” the researchers wrote. “Even for those with access to care, arranging clinical visits can be challenging, especially for older individuals living in rural areas with cognitive and driving impairments.”
The finger tapping task, which consists of repeatedly tapping the thumb against the index finger as fast and as widely as possible, is used to track changes in motor performance over time.
“Imagine anyone from anywhere in the world could perform a motor task (i.e., finger-tapping) using a computer webcam and get an automated assessment of their motor performance severity,” the researchers wrote. “This presents several challenges: collecting a large amount of data in the home environment, developing interpretable computational features that can be used as digital biomarkers to track the severity of motor functions, and developing a platform where (elderly) people can complete the tasks without direct supervision.”
To address these concerns, Hoque and his university colleagues developed an AI-powered solution that tracks landmarks on the left and right hands as they move during the task.
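The landmark-tracking idea can be sketched roughly as follows. This is a toy Python example, not the study's actual Park pipeline: the function, its inputs, and the two features it derives (tap amplitude and tap rate from hypothetical per-frame thumb-tip and index-fingertip positions) are illustrative assumptions, and the study's 22 clinically relevant features are not reproduced here.

```python
import math

def tapping_features(thumb, index, fps=30):
    """Summarize a finger-tapping clip from two landmark trajectories.

    `thumb` and `index` are lists of hypothetical (x, y) positions for
    the thumb tip and index fingertip, one per video frame.
    """
    # Thumb-index aperture (distance) in each frame.
    dist = [math.dist(t, i) for t, i in zip(thumb, index)]
    amplitude = max(dist) - min(dist)  # how "big" the taps are
    # Count taps as closures: frames where the aperture crosses
    # its midpoint on the way down.
    mid = (max(dist) + min(dist)) / 2
    closures = sum(a >= mid > b for a, b in zip(dist, dist[1:]))
    return {"amplitude": amplitude, "taps_per_sec": closures * fps / len(dist)}
```

A real system would first need a hand-pose estimator to produce the landmark trajectories from webcam video; the sketch only covers the step from trajectories to interpretable features.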
The study included 172 Parkinson’s patients and 78 healthy people who used a web-based tool called Park to record themselves tapping their fingers 10 times with a webcam, mostly at home (79.6%). The task was performed with both hands. The participants’ mean age was 62.1 years (range, 18-86), most (92%) were white, and there was a greater proportion of men in the Parkinson’s group (63.4% vs. 35.9%).
Differences in assessments among experts, non-experts
First, three expert neurologists and two non-experts, who had been trained to use the MDS-UPDRS but do not actively see Parkinson’s patients, rated disease severity on a scale from 0 to 4 based on the videos.
All three experts agreed on 30.7% of the videos, and at least two agreed on 93%. Their ratings differed by no more than one point from the ground truth (the severity score agreed upon by a majority of the raters) in 98.2% to 99.5% of cases.
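Deriving a ground truth by majority vote, and measuring how often raters agree, can be illustrated with a small sketch. The ratings below are hypothetical, not the study's data:

```python
from collections import Counter

def ground_truth(ratings):
    """Majority-vote severity for one video, or None if no majority exists."""
    score, count = Counter(ratings).most_common(1)[0]
    return score if count > len(ratings) / 2 else None

# Hypothetical MDS-UPDRS ratings (0-4) from three experts for four videos.
videos = [(2, 2, 2), (1, 2, 2), (0, 1, 1), (3, 3, 4)]
truths = [ground_truth(r) for r in videos]  # one consensus score per video

# Share of videos where all experts agreed, and where at least two did.
all_agree = sum(len(set(r)) == 1 for r in videos) / len(videos)
two_agree = sum(ground_truth(r) is not None for r in videos) / len(videos)
```

With three raters, a majority simply means at least two gave the same score; videos with no majority would have no ground truth under this scheme.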
“These metrics suggest that the experts can reliably rate our videos recorded from home environments,” the researchers wrote. However, “the non-experts were less reliable than the experts, demonstrating moderate agreement with the three expert raters,” they said.
Next, the researchers used a machine learning model that predicts Parkinson’s symptom severity based on 22 clinically relevant landmarks from a finger-tapping video. A machine learning model is a form of AI that involves training algorithms on data to make predictions based on the patterns it learns.
The model’s reliability in rating disease severity was moderate, and its predictions deviated from the ground truth by an average of 0.58 points. By comparison, “the non-experts’ ratings deviated from the ground truth severity score by 0.83 points” on average, while the ratings of any pair of experts differed from each other by an average of 0.53 points.
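The deviation figures above are average absolute differences between a rater's (or the model's) scores and the consensus scores. A minimal sketch, using hypothetical severity scores rather than the study's data:

```python
def mean_abs_deviation(ratings, truth):
    """Average absolute difference from the consensus severity scores."""
    return sum(abs(r - t) for r, t in zip(ratings, truth)) / len(truth)

truth = [2, 1, 0, 3, 2]  # hypothetical majority-vote ground truth
model = [2, 2, 1, 3, 2]  # hypothetical model predictions
model_error = mean_abs_deviation(model, truth)
```

On a 0-4 scale, an average deviation of 0.58 points means the model's prediction is typically within about half a severity grade of the expert consensus.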
“In most of the metrics we tested, the … model outperformed the non-expert clinicians, but was outperformed by the experts,” the researchers wrote.
While most videos were recorded at home, their quality didn’t affect how well the model performed in rating disease severity. “The dataset includes blurry videos caused by poor internet connection, videos where participants had difficulty following instructions, and videos with overexposed or underexposed backgrounds,” the researchers wrote.
The model was found to perform similarly across sex, age, and Parkinson’s versus non-Parkinson’s groups. While the model is still in its early stage of development, the findings “offer new opportunities for utilizing AI to address movement disorders,” including Parkinson’s disease as well as conditions like ataxia and Huntington’s disease, the researchers said.