Transformer-based unsupervised contrastive learning for histopathological image classification
Abstract: A large-scale, well-annotated dataset is a key factor for the success of deep learning in medical image analysis. However, assembling such large-scale annotations is very challenging, especially for histopathological images with unique characteristics (e.g., gigapixel image size, multiple cancer types, and wide staining variations). To alleviate this issue, self-supervised learning (SSL) is a promising solution: it relies only on unlabeled data to generate informative representations and generalizes well to various downstream tasks even with limited annotations. In this work, we propose a novel SSL strategy called semantically-relevant contrastive learning (SRCL), which compares the relevance between instances to mine additional positive pairs. Compared with the two views of a single instance used in traditional contrastive learning, our SRCL aligns multiple positive instances that share similar visual concepts, which increases the diversity of the positives and thus yields more informative representations. We employ a hybrid model (CTransPath) as the backbone, which integrates a convolutional neural network (CNN) with a multi-scale Swin Transformer architecture. CTransPath is pretrained on massive unlabeled histopathological images so that it can serve as a collaborative local–global feature extractor and learn universal feature representations better suited to tasks in the histopathology image domain. The effectiveness of our SRCL-pretrained CTransPath is investigated on five types of downstream tasks (patch retrieval, patch classification, weakly-supervised whole-slide image classification, mitosis detection, and colorectal adenocarcinoma gland segmentation), covering nine public datasets. The results show that our SRCL-based visual representations not only achieve state-of-the-art performance on each dataset but are also more robust and more transferable than other SSL methods and ImageNet pretraining (both supervised and self-supervised). Our code and pretrained model are available at https://github.com/Xiyue-Wang/TransPath.
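To make the SRCL idea in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation (see https://github.com/Xiyue-Wang/TransPath for the official code). It assumes a MoCo-style setup with a memory bank of L2-normalized features and mines the top-S most similar bank entries as extra positives for each query; all names (srcl_loss, memory_bank, top_s) are illustrative.

import torch
import torch.nn.functional as F

def srcl_loss(q, k, memory_bank, top_s=5, temperature=0.2):
    # q, k: L2-normalized embeddings of two augmented views of the same batch, shape (B, D).
    # memory_bank: L2-normalized embeddings of previously seen instances, shape (N, D).
    sim_bank = q @ memory_bank.t()                      # cosine similarity to the bank, (B, N)
    pos_idx = sim_bank.topk(top_s, dim=1).indices       # the S most similar bank entries per query, (B, S)
    mined_pos = memory_bank[pos_idx]                    # mined pseudo-positives, (B, S, D)

    l_aug = (q * k).sum(dim=1, keepdim=True)            # the usual augmented-view positive, (B, 1)
    l_mined = torch.einsum('bd,bsd->bs', q, mined_pos)  # similarities to the mined positives, (B, S)
    logits = torch.cat([l_aug, l_mined, sim_bank], dim=1) / temperature

    # Average the log-likelihood over all 1 + S positives per query.
    # (A full implementation would also mask the mined positives out of the bank
    # similarities so they are not simultaneously treated as negatives.)
    log_prob = F.log_softmax(logits, dim=1)
    return -log_prob[:, : 1 + top_s].mean(dim=1).mean()

# Toy usage with random features: 4 queries, 128-d embeddings, 1024 bank entries.
q = F.normalize(torch.randn(4, 128), dim=1)
k = F.normalize(torch.randn(4, 128), dim=1)
bank = F.normalize(torch.randn(1024, 128), dim=1)
print(srcl_loss(q, k, bank).item())

The only departure from a standard InfoNCE objective in this sketch is the addition of the mined top-S positives; the encoder, momentum update, and memory-bank maintenance are assumed to follow a conventional contrastive pipeline.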
Keywords: Histopathology; Transformer; Self-supervised learning; Feature extraction
This article has been indexed in ScienceDirect and other databases.