Retinal Fluid Segmentation Using Ensembled 2-Dimensionally and 2.5-Dimensionally Deep Learning Networks

Alsaih, K. and Yusoff, M.Z. and Faye, I. and Tang, T.B. and Meriaudeau, F. (2020) Retinal Fluid Segmentation Using Ensembled 2-Dimensionally and 2.5-Dimensionally Deep Learning Networks. IEEE Access, 8. pp. 152452-152464.

Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

Morphological changes related to different diseases that occur in the retina are currently being extensively researched. Manual segmentation of retinal fluids is time-consuming and subject to variability, giving prominence to the demand for robust automatic segmentation methods. Optical coherence tomography (OCT) is the current standard modality for assessing the presence and extent of retinal fluids. In this study, semantic segmentation deep learning networks were examined in 2.5D and ensembled with 2D networks. This analysis aims to show how these networks perform when given depth information rather than only a single B-scan, and the effects of 2.5D patches when fitted to the deep networks. All experiments were evaluated on public data from the RETOUCH challenge as well as the OPTIMA challenge dataset and the Duke dataset. The networks trained in 2.5D performed slightly better than the 2D networks on all datasets. The best network achieved an average Dice similarity coefficient (DSC) of 0.867 on the RETOUCH dataset. On the Duke dataset, Deeplabv3+Pa outperformed the other networks in this study with a Dice score of 0.80. Experiments showed a more robust performance when networks were ensembled. Intraretinal fluid (IRF) was recognized better than the other fluids, with a DSC of 0.924. The Deeplabv3+Pa model outperformed all other networks with an average p-value of 0.03 on the RETOUCH challenge dataset. The methods used in this study to distinguish retinal disorders outperform human performance and show results competitive with the teams who joined both challenges. Stacking three consecutive B-scans as a single image, thereby including partial depth information when training the neural networks, produced more robust networks than providing only 2D information. © 2013 IEEE.
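The two core ingredients of the abstract — building a 2.5D input by stacking three consecutive B-scans as channels, and scoring segmentations with the Dice similarity coefficient — can be sketched as below. This is a minimal illustration only: the function names, the edge-clamping policy for the first and last B-scans, and the use of NumPy arrays are assumptions for the sketch, not the authors' actual implementation.

```python
import numpy as np

def stack_25d(volume, index):
    """Build a 2.5D input for B-scan `index` by stacking the previous,
    current, and next B-scans as channels (assumed edge-clamping at the
    volume boundaries). `volume` has shape (num_bscans, H, W); the
    result has shape (H, W, 3)."""
    n = volume.shape[0]
    idxs = [max(index - 1, 0), index, min(index + 1, n - 1)]
    return np.stack([volume[i] for i in idxs], axis=-1)

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary fluid masks:
    2 * |pred ∩ target| / (|pred| + |target|), with eps to avoid 0/0."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For example, `stack_25d(volume, 0)` on a volume of shape `(49, 496, 512)` returns a `(496, 512, 3)` image whose first two channels are both B-scan 0, since there is no preceding slice to include.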

Item Type: Article
Impact Factor: cited By 0
Uncontrolled Keywords: Learning systems; Ophthalmology; Optical tomography; Semantics, Automatic segmentations; Human performance; Intra-retinal fluids; Manual segmentation; Morphological changes; Robust performance; Semantic segmentation; Similarity coefficients, Deep learning
Depositing User: Ms Sharifah Fahimah Saiyed Yeop
Date Deposited: 19 Aug 2021 06:09
Last Modified: 19 Aug 2021 06:09
URI: http://scholars.utp.edu.my/id/eprint/23221
