Capsules for biomedical image segmentation
Institution: 1. Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL, USA; 2. Nvidia, Bethesda, MD, USA; 3. Ege University, Izmir, Turkey; 4. Johns Hopkins University, Baltimore, MD, USA
Abstract: Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible via the introduction of locally-constrained routing and transformation matrix sharing, which reduces the parameter/memory burden and allows for the segmentation of objects at large resolutions. To compensate for the loss of global information caused by constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend the masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate our proposed method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our experiments in lung segmentation represent the largest-scale study in pathological lung segmentation in the literature, conducted across five extremely challenging datasets containing both clinical and pre-clinical subjects and nearly 2000 CT scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters of the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to generalize to unseen rotations/reflections on natural images.
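The sketch below illustrates, in PyTorch, the two ideas the abstract highlights: locally-constrained routing (capsules vote only within a local convolutional window, and routing agreement is computed per spatial location) and transformation-matrix sharing (one set of transformation weights per capsule type, shared across all spatial positions). It is a minimal, assumption-laden illustration, not the authors' released SegCaps implementation; the layer name LocalCapsuleConv, the helper squash, and the simplified routing loop are all hypothetical.

# Illustrative sketch only; NOT the official SegCaps code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(v, dim=-1, eps=1e-8):
    # Standard capsule squashing non-linearity.
    norm2 = (v * v).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * v / torch.sqrt(norm2 + eps)

class LocalCapsuleConv(nn.Module):
    # Capsules vote only within a local k x k window; the transformation
    # weights are shared across all spatial positions of a capsule type,
    # which keeps the parameter count independent of image resolution.
    def __init__(self, in_caps, in_dim, out_caps, out_dim, k=5, stride=1, iters=3):
        super().__init__()
        self.in_caps, self.out_caps, self.out_dim, self.iters = in_caps, out_caps, out_dim, iters
        # One shared (grouped) convolution per input capsule type produces
        # votes for every output capsule type: transformation-matrix sharing.
        self.vote = nn.Conv2d(in_caps * in_dim, in_caps * out_caps * out_dim,
                              kernel_size=k, stride=stride, padding=k // 2,
                              groups=in_caps)

    def forward(self, x):                      # x: (B, in_caps, in_dim, H, W)
        B = x.shape[0]
        votes = self.vote(x.flatten(1, 2))     # (B, in_caps*out_caps*out_dim, H', W')
        Hp, Wp = votes.shape[-2:]
        votes = votes.view(B, self.in_caps, self.out_caps, self.out_dim, Hp, Wp)

        # Locally-constrained dynamic routing: agreement is accumulated per
        # spatial location, only between capsules that actually voted there.
        logits = torch.zeros(B, self.in_caps, self.out_caps, 1, Hp, Wp, device=x.device)
        for _ in range(self.iters):
            c = F.softmax(logits, dim=2)                         # routing coefficients
            s = (c * votes).sum(dim=1)                           # (B, out_caps, out_dim, H', W')
            v = squash(s, dim=2)
            logits = logits + (votes * v.unsqueeze(1)).sum(dim=3, keepdim=True)
        return v

# Tiny smoke test: 2 input capsule types of dim 8 on a 64x64 grid.
caps = LocalCapsuleConv(in_caps=2, in_dim=8, out_caps=4, out_dim=16)
print(caps(torch.randn(1, 2, 8, 64, 64)).shape)   # torch.Size([1, 4, 16, 64, 64])

Because the voting weights are realized as a convolution shared over the grid, the memory cost grows with the number of capsule types and the kernel size rather than with image resolution, which is what makes routing feasible at segmentation-scale inputs.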
Keywords:
This article is indexed by ScienceDirect and other databases.