Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images
Institution:1. School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China;2. Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA;3. Peking University People's Hospital, Beijing 100044, China;4. School of Biomedical Engineering, ShanghaiTech University, Shanghai, China;5. Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
Abstract:Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a light-weight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over the single-task learning counterparts.
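The abstract describes the joint architecture only at a high level. Below is a minimal PyTorch-style sketch of how such a multi-task network could be organized: an encoder-decoder branch for segmentation, a light-weight classification head that pools multi-scale encoder features, and feedback of the predicted probability map into the next iteration. All layer names, channel widths, and the feedback mechanism are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a joint segmentation/classification model for 3D volumes.
# Layer widths, the probability-map feedback, and the head designs are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3D conv layers, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )


class MultiTaskABUSNet(nn.Module):
    """Encoder-decoder segmentation branch plus a light-weight
    classification head over multi-scale encoder features."""

    def __init__(self, in_ch=1, base=16, num_classes=2):
        super().__init__()
        # The input is the image concatenated with a probability map from the
        # previous iteration (an uninformative 0.5 map on the first pass).
        self.enc1 = conv_block(in_ch + 1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.seg_head = nn.Conv3d(base, 1, kernel_size=1)
        # Classification head over globally pooled features from two scales.
        self.cls_head = nn.Sequential(
            nn.Linear(base * 2 + base * 4, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, x, prob_map=None):
        if prob_map is None:  # first iteration: uniform prior
            prob_map = torch.full_like(x[:, :1], 0.5)
        e1 = self.enc1(torch.cat([x, prob_map], dim=1))
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        seg_logits = self.seg_head(d1)
        # Multi-scale global features feed the classification branch.
        feat = torch.cat([e2.mean(dim=(2, 3, 4)), b.mean(dim=(2, 3, 4))], dim=1)
        return seg_logits, self.cls_head(feat)


if __name__ == "__main__":
    model = MultiTaskABUSNet()
    volume = torch.randn(1, 1, 32, 64, 64)  # (batch, channel, depth, H, W)
    seg_logits, cls_logits = model(volume)
    # Iterative refinement: feed the predicted probability map back in.
    seg_logits, cls_logits = model(volume, prob_map=torch.sigmoid(seg_logits))
    print(seg_logits.shape, cls_logits.shape)
```

In joint training, the segmentation and classification losses would typically be combined as a weighted sum so that both heads update the shared encoder; the weighting scheme here would be a design choice rather than something specified by the abstract.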
Keywords:
This article has been indexed by ScienceDirect and other databases.