Cognitive tests given virtually can be helpful, but more work needed

Study into reliability of online testing compared with in-person evaluations

by Lindsey Shapiro, PhD |


Cognitive tests given virtually showed results similar to in-person evaluations for Parkinson’s disease patients with no or mild cognitive impairments, but more work is clearly needed to overcome their technical limitations and improve their reliability, according to a recent study.

“In-person cognitive testing with a neuropsychologist remains the gold standard, and it remains to be determined if virtual cognitive testing is feasible in [Parkinson’s disease],” the researchers wrote.

The study, “Validating virtual administration of neuropsychological testing in Parkinson disease: a pilot study,” was published in Scientific Reports.

Virtual test for cognitive decline in Parkinson’s doesn’t require office visit

Interest is growing in the development of remote tests or digital technologies to evaluate Parkinson’s motor and nonmotor symptoms. It’s thought that such tests would allow for more frequent evaluations without burdening patients with repeat visits to a doctor.

Remote testing also moved to the forefront of care during the COVID-19 pandemic, when in-person access to doctor’s appointments was limited.

Still, to fully implement such approaches, it has to be established whether virtual administration of routine tests is reliable and valid — meaning as consistent and accurate in assessing Parkinson’s symptoms as in-person testing.

In particular, current evidence is scarce on the validity of virtual cognitive assessments for people with the neurodegenerative disease, although some degree of cognitive impairment is commonly found.

“Cognitive assessments are a key component of clinical care and many clinical research projects, including randomized controlled trials (RCTs), yet there is little data reporting on the validity of virtual non-motor assessments in [Parkinson’s disease],” the researchers wrote.

Scientists at the University of Pennsylvania’s Perelman School of Medicine conducted a pilot study in 35 Parkinson’s patients, who completed a series of cognitive tests both in person and via video conferencing. These adults, with a mean age of 69.1, had varying cognitive abilities, ranging from normal cognition (65.7% of the group) to mild cognitive impairment (28.6%) and mild dementia (5.7%).

Assessments covering standard global cognition and major cognitive domains were completed virtually and in-person in a random order. Follow-up testing was performed within three to seven days of the initial test to evaluate how reliable, or consistent, the tests were.

Virtual tests were administered via video conference by a neuropsychologist and trained research coordinators, who also oversaw the in-person assessments. Prior to the video conference, patients were sent test packets for use in drawing tests and templates for some written tests, which they either mailed back completed or returned at an in-person visit.

Problems noted in successfully completing tests given via video conference

Across most tests, in-person and virtual scores did not significantly differ, results showed.

Only one test — semantic verbal fluency — showed significantly different performances by administration type, with better scores seen with virtual testing. Semantic verbal fluency involves naming as many words as possible that belong to a given category, like animals or clothing, in a set amount of time.

Still, not all tests could be completed successfully using a virtual approach. For example, the written Trail Making Test part B, which evaluates visual attention and working memory, was completed by only 54.3% of participants virtually. A problem noted by the scientists was that test administrators could not easily correct participants during the test, as required.

Oral versions of certain written tests “may prove more useful in a virtual setting,” the researchers noted.

The virtual exam’s reliability — its ability to consistently produce a similar result — relative to in-person testing was found to be good for only three cognitive tests. The remaining 11 tests were considered to have moderate or poor reliability.

Regardless of the mode of administration, performance on most tests showed significant variability between the first and second test.

Such “poor retest reliability [within] … just 3–7 days” suggests that “there are significant short-term fluctuations in cognitive performance in [Parkinson’s] patients, which has implications for interpreting a single or one-time test score in the context of clinical care and clinical research,” the researchers wrote.

They noted various challenges related to virtual test administration, including internet connectivity problems, lack of computer literacy, difficulties with seeing or hearing using the device — particularly among older adults — as well as the possibility for cheating, with patients completing the test packets prior to or after the video chat.

“While traditional in-person paper and pencil testing with a trained neuropsychologist remains the gold standard, there may be situations in which virtual testing is necessary or helpful,” the researchers wrote.

Future studies with larger patient groups and a broader range of cognitive abilities will be needed to more fully establish the feasibility of a virtual approach, the scientists noted.

“Regardless of any limitations, in a typically older population for which in-person clinical or clinical research visits can be a challenge, virtual cognitive testing in [Parkinson’s] merits further study,” the team concluded.