Evaluating Deep Learning Methods for Identifying Nuclei

Source: news-medical.net

Caicedo et al. have developed an evaluation framework to assess the improvements in nucleus segmentation of fluorescence images achieved by deep learning versus classical approaches. The team found that both deep learning methods evaluated (DeepCell and U-Net) segmented nuclei more accurately when trained on a large set of fluorescence images.

A notable recent development in fluorescence imaging analysis has been the improvement of segmentation, the process of delimiting the boundaries of objects. The best-performing approaches to segmentation are now those offered by deep learning.

This subset of machine learning involves artificial neural networks, algorithms inspired by the human brain that ‘learn’ from repeated interrogation of large quantities of data. The team constructed an evaluation framework to assess the improvements in nucleus segmentation seen when adopting deep learning methods.

Traditionally, pixel overlap has been used to evaluate nucleus and cell segmentation, but such measures fail to diagnose missed and merged objects introduced during segmentation. The team therefore sought an evaluation method that correctly differentiates between true positives and errors.

As a result, evaluations of classical machine learning and image processing algorithms do not satisfactorily capture biologically important error modes, which makes cell and nucleus segmentation algorithms difficult to assess.

To compare classical machine learning and image processing algorithms against deep learning algorithms, the team hand-annotated more than 20,000 nuclei in a collection of 200 images sampled from sets of cells treated with different chemical compounds.

They used three deep learning strategies to test the accuracy of segmentation. Each nucleus was labeled as a separate object, and these labels were transformed into masks for the background, the nucleus interior, and the boundaries. ‘Masks’ are used to distinguish the area of a cell of interest and to exclude other cellular areas of the image that might compromise the analysis; they define a cell compartment, such as the cytoplasm or nucleus.
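
To make the mask construction concrete, the following is a minimal sketch (not the authors' code) of how an integer-labeled nucleus image could be converted into the three classes described above. It assumes the annotations are available as a 2-D NumPy array of nucleus IDs and uses scikit-image to find object boundaries.

```python
import numpy as np
from skimage.segmentation import find_boundaries

def three_class_mask(labels: np.ndarray) -> np.ndarray:
    """Return a mask where 0 = background, 1 = nucleus interior, 2 = boundary.

    `labels` is a 2-D integer array: 0 is background, each nucleus has a unique ID.
    """
    # Mark pixels that sit on the inner edge of each labeled nucleus
    boundaries = find_boundaries(labels, mode="inner")
    mask = np.zeros(labels.shape, dtype=np.uint8)
    mask[labels > 0] = 1   # every nucleus pixel starts as "interior"
    mask[boundaries] = 2   # edge pixels are re-labeled as "boundary"
    return mask
```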

A convolutional neural network (CNN) was trained using the images and their masks, generating predictions that classify each pixel into one of three classes: background, interior, and boundary. Each pixel classified as ‘interior’ is then assigned a nucleus label.
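
As an illustration of this step, the sketch below shows one assumed way of turning the network’s per-pixel class probabilities into individual nucleus labels: each pixel is assigned its most probable class, and connected regions of ‘interior’ pixels become separate nuclei. The array shapes and the use of connected-component labeling are illustrative assumptions, not the authors’ exact pipeline.

```python
import numpy as np
from scipy.ndimage import label

def probabilities_to_nuclei(probs: np.ndarray) -> np.ndarray:
    """probs: (H, W, 3) per-pixel probabilities for background, interior, boundary."""
    classes = probs.argmax(axis=-1)      # pick the most probable class per pixel
    interior = classes == 1              # keep only "interior" pixels
    nuclei, n_found = label(interior)    # connected interior regions become nuclei
    return nuclei                        # integer-labeled map, 0 = background
```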

Two CNN architectures can be used for nucleus segmentation, DeepCell and U-Net, and these were compared against classical machine learning approaches.

While traditional metrics have focused only on evaluating pixel-wise segmentation accuracy, the team used a metric that measures area coverage to identify correctly segmented nuclei. Caicedo et al. also measured other quality metrics, including the number and type of errors.
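
A common area-coverage measure for this purpose is intersection over union (IoU): the overlap between a predicted nucleus and its ground-truth annotation divided by their combined area. The sketch below assumes a nucleus is counted as correctly segmented when its IoU exceeds a chosen threshold; the 0.5 value is illustrative, not necessarily the one used in the study.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Area coverage for a single nucleus; both inputs are boolean masks."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 0.0

def correctly_segmented(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5) -> bool:
    # Threshold is an illustrative assumption
    return iou(pred, truth) >= threshold
```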

The team’s results showed that deep learning strategies reduce segmentation errors and improve segmentation accuracy compared with baseline results obtained from classical machine learning and image processing. Deep learning methods were reported to ‘excel’ at correctly splitting adjacent nuclei, recognizing the boundaries that separate touching nuclei.

Of the two deep learning methods, U-Net was reported to be more sensitive than DeepCell at detecting cell edges and nuclei of all sizes. However, U-Net sometimes detected edges where none existed.

The team also found that training with more data reduced segmentation errors and improved accuracy. Deep learning required more computing and annotation time than classical methods, but improved the sensitivity of cytometry screens.

Deep learning methods still produce errors that experts can spot, and further investigation is needed to quantify these mistakes. Despite this, deep learning improves the quality, accuracy, and reliability of measurements extracted from fluorescence images.

The team concludes that although the annotation effort and computational cost are significant, deep learning methods have an overwhelmingly positive impact on the reliability and quality of measurements extracted from fluorescence images.

Acknowledgements

Funding was provided by the National Institute of General Medical Sciences of the National Institutes of Health. The experiments were run on GPUs donated by NVIDIA Corporation through their GPU Grant Program (to AEC). The team acknowledges support from the HAS‐LENDULET‐BIOMAG and from the European Union and the European Regional Development Funds.
