Using Artificial Intelligence to Analyze Brain Cells May Revolutionize Research

by Patricia Inácio, PhD

Computers can be taught to identify features in nerve cells that have not been stained or subjected to other damaging treatments needed for microscopy, an approach with the potential to revolutionize the way researchers study neurodegenerative diseases such as Parkinson’s.

“Researchers are now generating extraordinary amounts of data. For neuroscientists, this means that training machines to help analyze this information can help speed up our understanding of how the cells of the brain are put together and in applications related to drug development,” Margaret Sutherland, PhD, said in a press release. Sutherland is program director at the National Institute of Neurological Disorders and Stroke (NINDS), which helps fund the research.

The study “In silico labeling: Predicting fluorescent labels in unlabeled images” was published in the journal Cell.

Research into the nervous system began with detailed and laborious drawings, like those by pioneering neuroscientists Santiago Ramon y Cajal and Camillo Golgi in the late 19th century. From those early days, research evolved to use dyes that stain different cell types in the brain and reveal the cells’ health status.

Those methods, however, require harsh chemicals to fix the cells, stopping everything occurring inside them at a specific moment. This means analysis is performed on cells far from their natural, healthy state.

Researchers at NINDS, part of the National Institutes of Health, asked whether they could teach computers to identify structures inside cells and distinguish cell types without the need for dyes.

“Every day our lab had been creating hundreds of images, much more than we could look at and analyze ourselves. One day, a couple of researchers from Google knocked on our door to see if they could help us,” said Steven Finkbeiner, MD, PhD, director of the Gladstone Institutes in San Francisco and lead author of the study.

They used a type of artificial intelligence called deep learning, in which computers analyze data and make decisions on their own, much like the artificial intelligence behind facial recognition software.

Finkbeiner’s team trained a computer program to analyze paired sets of unstained and stained images. After the training, they tested the computer’s ability to learn by showing it new, unstained images of fixed or live cells that it had never seen.
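
In broad terms, this kind of training pairs each unstained image with its stained counterpart and asks a network to predict the stain from the unstained input. The sketch below is not the authors’ code; it uses the PyTorch library and randomly generated tensors as stand-ins for real microscope images, simply to show the general shape of such a training loop. The network size, data, and settings are illustrative assumptions.

```python
# Minimal sketch (not the study's code): train a small convolutional network
# to predict a fluorescence-like "stained" image from an unstained input.
# Random tensors stand in for real microscope image pairs.
import torch
import torch.nn as nn

# Hypothetical stand-in data: 8 pairs of single-channel 64x64 images.
unstained = torch.rand(8, 1, 64, 64)   # inputs: unstained images
stained = torch.rand(8, 1, 64, 64)     # targets: matching stained images

# A tiny network; the published model is far larger and more sophisticated.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

loss_fn = nn.MSELoss()                                     # pixel-wise error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                   # a real run trains much longer
    prediction = model(unstained)        # predict the "stain" for each image
    loss = loss_fn(prediction, stained)  # compare prediction with true stain
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the model is shown an unstained image it has never seen
# and produces a predicted label image for it.
with torch.no_grad():
    new_image = torch.rand(1, 1, 64, 64)
    predicted_label = model(new_image)
```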

The software correctly identified each cell’s nucleus, the compartment where the entire genomic DNA is stored. With additional training sessions, it also learned to distinguish dead cells from living ones, to identify specific types of brain cells, and to differentiate between axons and dendrites, the two types of extensions on nerve cells.

“Deep Learning takes an algorithm, or a set of rules, and structures it in layers, identifying simple features from parts of the image, and then passes the information to other layers that recognize increasingly complex features, such as patterns and structures. This is reminiscent of how our brain processes visual information,” said Finkbeiner. “Deep Learning methods are able to uncover much more information than can be seen with the human eye.”
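
The layered idea Finkbeiner describes can be pictured as a stack of convolutional layers, each one consuming the features produced by the one before it. The short sketch below is an illustration only, again in PyTorch and not the paper’s architecture: it passes a placeholder image through three layers and prints how the number of feature maps grows from layer to layer.

```python
# Illustrative sketch only: three stacked convolutional layers, where each
# layer builds on the features computed by the previous one. The layer sizes
# are arbitrary choices, not the architecture used in the study.
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU()),   # simple features, e.g. edges
    nn.Sequential(nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU()),  # combinations of simple features
    nn.Sequential(nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU()), # larger, more complex patterns
])

x = torch.rand(1, 1, 64, 64)   # placeholder for one unstained image
for i, layer in enumerate(layers, start=1):
    x = layer(x)               # pass this layer's output to the next layer
    print(f"layer {i}: {x.shape[1]} feature maps of size {tuple(x.shape[2:])}")
```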

The drawbacks of this technology include the need for a large training dataset (around 15,000 images in this study) and the risk of an overly specialized program that can identify structures only in certain types of images, for example those obtained under specific imaging conditions. That would undermine the broader goal of applying the program to all types of brain images.

Finkbeiner’s team is focusing on methods to overcome these limitations.

“Now that we showed that this technology works, we can start using it in disease research. Deep Learning may spot something in cells that could help predict clinical outcomes and can help us screen potential treatments,” Finkbeiner said.