Similar Documents
20 similar documents found.
1.
Accurate cardiac segmentation of multimodal images, e.g., magnetic resonance (MR) and computed tomography (CT) images, plays a pivotal role in auxiliary diagnosis, treatment, and postoperative assessment of cardiovascular diseases. However, training a well-behaved segmentation model for cross-modal cardiac image analysis is challenging, owing to the diverse appearances/distributions produced by different devices and acquisition conditions. For instance, a segmentation model well trained on a source domain of MR images often fails to segment CT images. In this work, a cardiac segmentation scheme for cross-modal images is proposed using a symmetric fully convolutional neural network (SFCNN) with unsupervised multi-domain adaptation (UMDA) and a spatial neural attention (SNA) structure, termed UMDA-SNA-SFCNN, which requires no annotations on the test domain. Specifically, UMDA-SNA-SFCNN incorporates SNA into the classic adversarial domain adaptation network to highlight relevant regions while restraining irrelevant areas in the cross-modal images, thereby suppressing negative transfer during unsupervised domain adaptation. In addition, multi-layer feature discriminators and a predictive segmentation-mask discriminator are established to connect the multi-layer features and segmentation mask of the backbone network, SFCNN, realizing fine-grained alignment of unsupervised cross-modal feature domains. Extensive confirmative and comparative experiments on the benchmark Multi-Modality Whole Heart Challenge dataset show that the proposed model is superior to state-of-the-art cross-modal segmentation methods.
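
A minimal sketch of a spatial attention block in the spirit of the SNA module described above: a learned per-pixel weight map re-weights the feature map so relevant regions are emphasized. The class name and exact layer layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Produces a per-pixel weight map and re-weights the input features."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(in_channels, in_channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),  # attention weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.attn(x)   # (N, 1, H, W)
        return x * weights       # highlight relevant regions, suppress the rest

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(SpatialAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```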

2.
The morphological evaluation of tumor-infiltrating lymphocytes (TILs) in hematoxylin and eosin (H&E)-stained histopathological images is key to breast cancer (BCa) diagnosis, prognosis, and therapeutic response prediction. At present, the qualitative assessment of TILs is carried out by pathologists, and computer-aided automatic lymphocyte measurement remains a great challenge because of the small size and complex distribution of lymphocytes. In this paper, we propose a novel dense dual-task network (DDTNet) to simultaneously achieve automatic TIL detection and segmentation in histopathological images. DDTNet consists of a backbone network (i.e., a feature pyramid network) for extracting multi-scale morphological characteristics of TILs, a detection module for localizing TIL centers, and a segmentation module for delineating TIL boundaries, where a boundary-aware branch further provides a shape prior for segmentation. An effective feature fusion strategy introduces multi-scale features with lymphocyte location information from highly correlated branches for precise segmentation. Experiments on three independent lymphocyte datasets of BCa demonstrate that DDTNet outperforms other advanced methods in detection and segmentation metrics. As part of this work, we also propose a semi-automatic method (TILAnno) to generate high-quality boundary annotations for TILs in H&E-stained histopathological images. TILAnno has been used to produce a new lymphocyte dataset containing 5029 annotated lymphocyte boundaries, which has been released to facilitate future work in computational histopathology.

3.
In medical image segmentation, supervised machine learning models trained on one image modality (e.g., computed tomography (CT)) are often prone to failure when applied to another modality (e.g., magnetic resonance imaging (MRI)), even for the same organ, owing to significant intensity variations across modalities. In this paper, we propose a novel end-to-end deep neural network for multi-modality image segmentation, where image labels are available only for one modality (the source domain) and unavailable for the other (the target domain). In our method, a multi-resolution locally normalized gradient magnitude approach is first applied to images of both domains to minimize the intensity discrepancy. Subsequently, a dual-task encoder-decoder network, covering both image segmentation and reconstruction, adapts the segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed through adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent, shape-aware latent feature representation from both domains. We implement both 2D and 3D versions of our method and evaluate them on CT and MRI images for kidney and cardiac tissue segmentation. For the kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were used; the cardiac dataset was from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our proposed method achieves significantly higher performance with much lower model complexity than other state-of-the-art methods. More importantly, our method also produces superior segmentation results on images of an unseen target domain without model retraining. The code is available at https://github.com/MinaJf/LMISA to encourage method comparison and further research.
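
A minimal sketch of a multi-resolution locally normalized gradient magnitude, the modality-agnostic input representation mentioned above: the gradient magnitude at each pixel is divided by its neighborhood mean, computed at several Gaussian scales. The window size, epsilon, and scale set are assumptions; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, uniform_filter

def local_norm_grad(image: np.ndarray, window: int = 15, eps: float = 1e-6) -> np.ndarray:
    gx = sobel(image, axis=0)
    gy = sobel(image, axis=1)
    mag = np.hypot(gx, gy)
    # normalize each pixel by the mean gradient magnitude in its neighborhood
    local_mean = uniform_filter(mag, size=window)
    return mag / (local_mean + eps)

def multi_resolution_lngm(image: np.ndarray, sigmas=(0.0, 1.0, 2.0)) -> np.ndarray:
    """Stack the normalized gradient maps computed after Gaussian smoothing
    at several scales; channels-last output of shape (H, W, len(sigmas))."""
    maps = [local_norm_grad(gaussian_filter(image, s) if s > 0 else image)
            for s in sigmas]
    return np.stack(maps, axis=-1)

if __name__ == "__main__":
    img = np.random.rand(128, 128).astype(np.float32)
    print(multi_resolution_lngm(img).shape)  # (128, 128, 3)
```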

4.
Automatic segmentation of organs at risk is crucial for aiding diagnosis and remains a challenging task in medical image analysis. To perform the segmentation, we use multi-task learning (MTL) to accurately determine the contours of organs at risk in CT images. We train an encoder-decoder network on two tasks in parallel: the main task is organ segmentation, a pixel-level classification of the CT images, and the auxiliary task is an image-level multi-label classification of the organs present. To boost multi-label classification performance, we propose a weighted mean cross-entropy loss function for network training, where the weights are derived from the global conditional probabilities between pairs of organs. Based on MTL, we optimize a false positive filtering (FPF) algorithm to reduce the number of falsely segmented organ pixels in the CT images. Specifically, we propose a dynamic threshold selection (DTS) strategy to prevent true positive rates from decreasing when the FPF algorithm is applied. We validate these methods on the public ISBI 2019 Segmentation of THoracic Organs at Risk (SegTHOR) challenge dataset and a private medical organ dataset. The experimental results show that networks using our proposed methods outperform basic encoder-decoder networks without increasing the training time complexity.
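
A minimal sketch of a co-occurrence-weighted binary cross-entropy for the multi-label organ classification task described above. Exactly how the conditional probabilities enter the per-class weights is an assumption here; the paper's weighting may differ.

```python
import torch

def conditional_prob_matrix(labels: torch.Tensor) -> torch.Tensor:
    """labels: (N, C) binary matrix over the training set.
    Returns P[i, j] ~= P(organ j present | organ i present)."""
    co = labels.T.float() @ labels.float()             # (C, C) co-occurrence counts
    counts = labels.float().sum(dim=0).clamp(min=1.0)  # occurrences of each organ
    return co / counts.unsqueeze(1)

def weighted_bce(logits: torch.Tensor, targets: torch.Tensor,
                 cond_prob: torch.Tensor) -> torch.Tensor:
    """Weight each class by its mean conditional probability given the organs
    present in the sample (assumed weighting scheme)."""
    pos = targets.float()
    denom = pos.sum(dim=1, keepdim=True).clamp(min=1.0)
    weights = (pos @ cond_prob) / denom                # (N, C)
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction="none")
    return (weights * bce).mean()

if __name__ == "__main__":
    y = (torch.rand(100, 5) > 0.5).long()
    P = conditional_prob_matrix(y)
    print(float(weighted_bce(torch.randn(8, 5), y[:8], P)))
```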

5.
Post-prostatectomy radiotherapy requires accurate annotation of the prostate bed (PB), i.e., the residual tissue after operative removal of the prostate gland, to minimize side effects on surrounding organs-at-risk (OARs). However, PB segmentation in computed tomography (CT) images is a challenging task, even for experienced physicians, because the PB is almost a “virtual” target with non-contrast boundaries and highly variable shapes that depend on neighboring OARs. In this work, we propose an asymmetric multi-task attention network (AMTA-Net) for the concurrent segmentation of the PB and surrounding OARs. Our AMTA-Net mimics experts in delineating the non-contrast PB by explicitly leveraging its critical dependency on the neighboring OARs (i.e., the bladder and rectum), which are relatively easy to distinguish in CT images. Specifically, we first adopt a U-Net as the backbone network for the low-level (or prerequisite) task of OAR segmentation. Then, we build an attention sub-network upon the backbone U-Net with a series of cascaded attention modules, which hierarchically transfer the OAR features and adaptively learn discriminative representations for the high-level (or primary) task of PB segmentation. We comprehensively evaluate the proposed AMTA-Net on a clinical dataset of 186 CT images. According to the experimental results, our AMTA-Net significantly outperforms the current clinical state of the art (i.e., atlas-based segmentation methods), indicating its value in reducing time and labor in the clinical workflow. AMTA-Net also outperforms the technical state of the art (i.e., deep learning-based segmentation methods), especially on the most indistinguishable and clinically critical parts of the PB boundary. Source code is released at https://github.com/superxuang/amta-net.

6.
Tissue/region segmentation of pathology images is essential for quantitative analysis in digital pathology. Previous studies usually require full supervision (e.g., pixel-level annotation), which is challenging to acquire. In this paper, we propose a weakly-supervised model using joint Fully convolutional and Graph convolutional Networks (FGNet) for automated segmentation of pathology images. Instead of pixel-wise annotations, we employ an image-level label (i.e., the foreground proportion) as weakly-supervised information for training a unified convolutional model. Our FGNet consists of a feature extraction module (a fully convolutional network) and a classification module (a graph convolutional network). These two modules are connected via a dynamic superpixel operation, making joint training possible. To achieve robust segmentation performance, we propose to use mutable numbers of superpixels for both training and inference. Moreover, to cope with inaccurate image-level annotations, we employ an uncertainty range constraint in FGNet to reduce their negative effect. Compared with fully-supervised methods, the proposed FGNet achieves competitive segmentation results on three pathology image datasets (HER2, KI67, and H&E) for cancer region segmentation, suggesting the effectiveness of our method. The code is publicly available at https://github.com/zhangjun001/FGNet.
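
A minimal sketch of training against an image-level foreground proportion with an uncertainty band, one plausible reading of the "uncertainty range constraint" above: predictions are penalized only when the predicted proportion leaves a tolerance interval around the label. The hinge form and tolerance value are assumptions, not the authors' exact loss.

```python
import torch

def proportion_range_loss(prob_map: torch.Tensor, target_prop: torch.Tensor,
                          tol: float = 0.05) -> torch.Tensor:
    """prob_map: (N, 1, H, W) foreground probabilities.
    target_prop: (N,) annotated foreground proportion per image.
    Penalize only predictions outside [target - tol, target + tol]."""
    pred_prop = prob_map.mean(dim=(1, 2, 3))  # predicted foreground proportion
    overshoot = torch.clamp(pred_prop - (target_prop + tol), min=0.0)
    undershoot = torch.clamp((target_prop - tol) - pred_prop, min=0.0)
    return (overshoot + undershoot).mean()

if __name__ == "__main__":
    probs = torch.sigmoid(torch.randn(4, 1, 64, 64))
    target = torch.tensor([0.2, 0.4, 0.1, 0.3])
    print(float(proportion_range_loss(probs, target)))
```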

7.
A large-scale, well-annotated dataset is a key factor for the success of deep learning in medical image analysis. However, assembling such large annotations is very challenging, especially for histopathological images with unique characteristics (e.g., gigapixel image size, multiple cancer types, and wide staining variations). To alleviate this issue, self-supervised learning (SSL) is a promising solution that relies only on unlabeled data to generate informative representations and generalizes well to various downstream tasks, even with limited annotations. In this work, we propose a novel SSL strategy called semantically-relevant contrastive learning (SRCL), which compares relevance between instances to mine more positive pairs. Compared to the two views of one instance in traditional contrastive learning, our SRCL aligns multiple positive instances with similar visual concepts, which increases the diversity of positives and thus yields more informative representations. We employ a hybrid model (CTransPath) as the backbone, designed by integrating a convolutional neural network (CNN) with a multi-scale Swin Transformer architecture. CTransPath is pretrained on a massive set of unlabeled histopathological images and serves as a collaborative local–global feature extractor, learning universal feature representations better suited to tasks in the histopathology image domain. The effectiveness of our SRCL-pretrained CTransPath is investigated on five types of downstream tasks (patch retrieval, patch classification, weakly-supervised whole-slide image classification, mitosis detection, and colorectal adenocarcinoma gland segmentation), covering nine public datasets. The results show that our SRCL-based visual representations not only achieve state-of-the-art performance on each dataset but are also more robust and transferable than other SSL methods and ImageNet pretraining (both supervised and self-supervised). Our code and pretrained model are available at https://github.com/Xiyue-Wang/TransPath.
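
A minimal sketch of mining extra positives by feature similarity, in the spirit of the SRCL strategy above: besides the two augmented views, the top-k most similar entries of a memory bank are treated as additional positives in an InfoNCE-style loss. The memory bank, k, and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def srcl_style_loss(q: torch.Tensor, k_pos: torch.Tensor, bank: torch.Tensor,
                    top_k: int = 3, tau: float = 0.07) -> torch.Tensor:
    """q, k_pos: (N, D) L2-normalized embeddings of two augmented views.
    bank: (M, D) L2-normalized memory bank of past embeddings."""
    sim = q @ bank.T                               # (N, M) cosine similarities
    extra_idx = sim.topk(top_k, dim=1).indices     # mined pseudo-positives
    logits = torch.cat([(q * k_pos).sum(1, keepdim=True),  # the true view pair
                        sim], dim=1) / tau         # (N, 1 + M)
    log_prob = F.log_softmax(logits, dim=1)
    # positives: column 0 (augmented view) plus mined bank entries (+1 offset)
    pos_cols = torch.cat([torch.zeros_like(extra_idx[:, :1]), extra_idx + 1], dim=1)
    return -log_prob.gather(1, pos_cols).mean()

if __name__ == "__main__":
    q = F.normalize(torch.randn(8, 128), dim=1)
    kp = F.normalize(torch.randn(8, 128), dim=1)
    bank = F.normalize(torch.randn(256, 128), dim=1)
    print(float(srcl_style_loss(q, kp, bank)))
```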

8.
Object segmentation and structure localization are important steps in automated image analysis pipelines for microscopy images. We present a convolutional neural network (CNN)-based deep learning architecture for the segmentation of objects in microscopy images. The proposed network can be used to segment cells, nuclei, and glands in fluorescence microscopy and histology images after slight tuning of the input parameters. The network trains at multiple resolutions of the input image, connects the intermediate layers for better localization and context, and generates the output using multi-resolution deconvolution filters. Extra convolutional layers that bypass the max-pooling operation allow the network to train for variable input intensities and object sizes and make it robust to noisy data. We compare our results on publicly available datasets and show that the proposed network outperforms recent deep learning algorithms.

9.
Accurate segmentation of the pancreas from abdominal scans is crucial for the diagnosis and treatment of pancreatic diseases. However, the pancreas is a small, soft, and elastic abdominal organ with high anatomical variability and low tissue contrast in computed tomography (CT) scans, which makes the segmentation task challenging. To address this challenge, we propose a dual-input v-mesh fully convolutional network (FCN) to segment the pancreas in abdominal CT images. Specifically, dual inputs, i.e., the original CT scans and images processed by a contrast-specific graph-based visual saliency (GBVS) algorithm, are sent to the network simultaneously to improve the contrast of the pancreas and other soft tissues. To further enhance the ability to learn context information and extract distinct features, a v-mesh FCN with an attention mechanism is employed. In addition, we propose a spatial transformation and fusion (SF) module to better capture the geometric information of the pancreas and facilitate feature map fusion. We compare the performance of our method with several baseline and state-of-the-art methods on the publicly available NIH dataset. The comparison shows that our dual-input v-mesh FCN model outperforms previous methods in terms of the Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and Hausdorff distance (HD). Moreover, ablation studies show that the proposed modules/structures are critical for effective pancreas segmentation.

10.
Road segmentation from high-resolution visible remote sensing images provides an effective way to form road networks automatically. Recently, deep learning methods based on convolutional neural networks (CNNs) have been widely applied to road segmentation. However, most CNN-based methods struggle to achieve high segmentation accuracy when processing high-resolution visible remote sensing images with rich details. To handle this problem, we propose a road segmentation method based on a Y-shaped convolutional network (Y-Net). Y-Net contains a two-arm feature extraction module and a fusion module. The feature extraction module includes a deep downsampling-to-upsampling sub-network for semantic features and a convolutional sub-network without downsampling for detail features. The fusion module combines both kinds of features for road segmentation. Benefiting from this scheme, Y-Net can segment multi-scale roads (both wide and narrow) from high-resolution images. Testing and comparative experiments on a public dataset and a private dataset show that Y-Net achieves higher segmentation accuracy than four state-of-the-art methods: FCN (Fully Convolutional Network), U-Net, SegNet, and FC-DenseNet (Fully Convolutional DenseNet). In particular, Y-Net accurately segments the contours of narrow roads, which the comparative methods miss.
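
A minimal sketch of the two-arm idea behind Y-Net: a downsampling/upsampling arm for semantics, a full-resolution arm for details (e.g., narrow roads), and a fusion module producing the final mask. Layer widths and depths are toy assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TinyYNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        # semantic arm: downsample then upsample
        self.down = nn.Sequential(conv_block(in_ch, 32), nn.MaxPool2d(2),
                                  conv_block(32, 64), nn.MaxPool2d(2),
                                  conv_block(64, 64))
        self.up = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)
        # detail arm: no downsampling, preserves thin structures
        self.detail = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 32))
        # fusion module: concatenate both arms, predict the mask
        self.fuse = nn.Sequential(conv_block(64 + 32, 32),
                                  nn.Conv2d(32, n_classes, 1))

    def forward(self, x):
        sem = self.up(self.down(x))
        det = self.detail(x)
        return self.fuse(torch.cat([sem, det], dim=1))

if __name__ == "__main__":
    print(TinyYNet()(torch.randn(1, 3, 128, 128)).shape)  # (1, 2, 128, 128)
```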

11.
Object segmentation is an important step in the workflow of computational pathology. Deep learning-based models generally require large amounts of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in the medical imaging domain, where labels result from time-consuming analysis by one or more human experts. As nuclei, cells, and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose NuClick, a CNN-based approach that speeds up the collection of annotations for these objects while requiring minimal interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach that provides NuClick with a squiggle as a guiding signal, enabling it to segment glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with the RGB channels. Detailed experiments show that NuClick is applicable to a wide range of object scales, robust to variations in user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved first rank in the LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) a dataset of lymphocyte annotations within IHC images, and 2) a dataset of segmented white blood cells (WBCs) in blood smear images.

12.
The detection and segmentation of individual cells or nuclei is an indispensable prerequisite for image analysis across a variety of biological and biomedical applications. However, the ubiquitous presence of crowded clusters with morphological variations often hinders successful instance segmentation. In this paper, nuclei-cluster-focused annotation strategies and frameworks are proposed to overcome this challenging practical problem. Specifically, we design a nucleus segmentation framework, ClusterSeg, to tackle nuclei clusters; it consists of a convolutional-transformer hybrid encoder and a 2.5-path decoder for precise prediction of nuclei instance masks, contours, and clustered edges. Additionally, an annotation-efficient clustered-edge pointing strategy pinpoints the salient and error-prone boundaries, and a partially-supervised PS-ClusterSeg is presented using ClusterSeg as the segmentation backbone. The framework is evaluated on four privately curated image sets and two public sets featuring severely clustered nuclei across a wide range of image modalities, e.g., microscope, cytopathology, and histopathology images. The proposed ClusterSeg and PS-ClusterSeg are modality-independent and generalizable, and empirically superior to current state-of-the-art approaches on multiple metrics. Our collected data, the elaborate annotations for both the public and private sets, and the source code are publicly released at https://github.com/lu-yizhou/ClusterSeg.

13.
Liver segmentation from abdominal CT images is an essential step in computer-aided diagnosis and surgical planning for liver cancer. However, neither the accuracy nor the robustness of existing liver segmentation methods meets the requirements of clinical applications. In particular, in the common clinical cases where the liver tissue contains major pathology, current segmentation methods perform poorly. In this paper, we propose a novel low-rank tensor decomposition (LRTD)-based multi-atlas segmentation (MAS) framework that achieves accurate and robust pathological liver segmentation in CT images. First, we propose a multi-slice LRTD scheme to recover the underlying low-rank structure embedded in 3D medical images; it performs LRTD on small image segments consisting of multiple consecutive image slices. Then, we present an LRTD-based atlas construction method that generates tumor-free liver atlases, mitigating the degradation of liver segmentation caused by the presence of tumors. Finally, we introduce an LRTD-based MAS algorithm that derives patient-specific liver atlases for each test image and achieves accurate pairwise image registration and label propagation. Extensive experiments on three public databases of pathological liver cases validate the effectiveness of the proposed method. Both qualitative and quantitative results demonstrate that, in the presence of major pathology, the proposed method is more accurate and robust than state-of-the-art methods.
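
A minimal sketch of recovering low-rank structure from a segment of consecutive slices. For brevity this uses a truncated SVD of the slice-mode unfolding, i.e., a low-rank matrix approximation standing in for the paper's full tensor decomposition; the rank and segment length are assumptions.

```python
import numpy as np

def lowrank_segment(volume: np.ndarray, start: int, n_slices: int = 5,
                    rank: int = 3) -> np.ndarray:
    """volume: (Z, H, W). Returns the low-rank reconstruction of
    slices [start, start + n_slices)."""
    seg = volume[start:start + n_slices]           # (k, H, W)
    k, h, w = seg.shape
    unfolded = seg.reshape(k, h * w)               # slice-mode unfolding
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]  # rank-r reconstruction
    return approx.reshape(k, h, w)

if __name__ == "__main__":
    vol = np.random.rand(20, 64, 64)
    print(lowrank_segment(vol, start=5).shape)  # (5, 64, 64)
```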

14.
Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with a cross-entropy or Dice loss, which calculates the error between predictions and ground-truth labels for each pixel individually, often resulting in non-smooth neighborhoods in the predicted segmentation. This problem is more serious in CT prostate segmentation, since CT images usually have low tissue contrast. To address it, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module based on voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space under the supervision of a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and processed end-to-end via multi-task learning. To evaluate the proposed method, we conduct extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method learns more representative voxel-level features than conventional training with a cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms state-of-the-art methods by a reasonable margin.
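
A minimal sketch of online voxel-wise metric learning on an intermediate feature map: anchor/positive voxels are drawn from the prostate mask and negatives from the background, then a standard triplet margin loss is applied. Sampling counts and the margin are assumptions.

```python
import torch
import torch.nn.functional as F

def voxel_triplet_loss(feat: torch.Tensor, label: torch.Tensor,
                       n_triplets: int = 64, margin: float = 0.5) -> torch.Tensor:
    """feat: (C, D, H, W) feature map; label: (D, H, W) binary mask."""
    c = feat.shape[0]
    flat = feat.reshape(c, -1).T                   # (V, C) voxel embeddings
    fg = torch.nonzero(label.reshape(-1) == 1).squeeze(1)
    bg = torch.nonzero(label.reshape(-1) == 0).squeeze(1)
    # random anchor/positive pairs from foreground, negatives from background
    a = fg[torch.randint(len(fg), (n_triplets,))]
    p = fg[torch.randint(len(fg), (n_triplets,))]
    n = bg[torch.randint(len(bg), (n_triplets,))]
    return F.triplet_margin_loss(flat[a], flat[p], flat[n], margin=margin)

if __name__ == "__main__":
    feats = torch.randn(16, 8, 32, 32)
    mask = (torch.rand(8, 32, 32) > 0.7).long()
    print(float(voxel_triplet_loss(feats, mask)))
```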

15.
The high performance of deep learning models on medical image segmentation relies heavily on large amounts of pixel-wise annotated data, yet annotations are costly to collect. Obtaining highly accurate segmentation labels for medical images at limited cost (e.g., time) is therefore an urgent problem. Active learning can reduce the annotation cost of image segmentation, but it faces three challenges: the cold-start problem, the need for an effective sample selection strategy for the segmentation task, and the burden of manual annotation. In this work, we propose a Hybrid Active Learning framework using Interactive Annotation (HAL-IA) for medical image segmentation, which reduces the annotation cost both by decreasing the number of annotated images and by simplifying the annotation process. Specifically, we propose a novel hybrid sample selection strategy that selects the most valuable samples for improving segmentation performance; it combines pixel entropy, regional consistency, and image diversity to ensure that the selected samples have high uncertainty and diversity. In addition, we propose a warm-start initialization strategy that builds the initial annotated dataset to avoid the cold-start problem. To simplify manual annotation, we propose an interactive annotation module with suggested superpixels that obtains pixel-wise labels with a few clicks. We validate the proposed framework with extensive segmentation experiments on four medical image datasets. Experimental results show that the framework achieves highly accurate pixel-wise annotations and strong models with less labeled data and fewer interactions, outperforming other state-of-the-art methods. Our method can help physicians efficiently obtain accurate medical image segmentation results for clinical analysis and diagnosis.
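
A minimal sketch of a hybrid acquisition score combining pixel entropy (uncertainty) with feature diversity relative to already-selected samples. The combination weight and the use of mean entropy are assumptions, and the paper's regional-consistency term is omitted here.

```python
import numpy as np

def mean_pixel_entropy(prob: np.ndarray, eps: float = 1e-8) -> float:
    """prob: (C, H, W) softmax probabilities for one image."""
    ent = -(prob * np.log(prob + eps)).sum(axis=0)  # per-pixel entropy
    return float(ent.mean())

def hybrid_scores(probs: list, feats: np.ndarray, selected: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """probs: list of (C, H, W) maps; feats: (N, D) image embeddings;
    selected: (S, D) embeddings of images already annotated."""
    uncertainty = np.array([mean_pixel_entropy(p) for p in probs])
    # diversity: distance to the closest already-selected sample
    dists = np.linalg.norm(feats[:, None, :] - selected[None, :, :], axis=2)
    diversity = dists.min(axis=1)
    return alpha * uncertainty + (1 - alpha) * diversity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.random((10, 2, 32, 32))
    probs = [p / p.sum(axis=0, keepdims=True) for p in raw]
    scores = hybrid_scores(probs, rng.random((10, 8)), rng.random((3, 8)))
    print(scores.argmax())  # index of the next image to annotate
```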

16.
In recent years, deep learning technology has shown superior performance in different fields of medical image analysis. Several deep learning architectures have been proposed for computational pathology classification, segmentation, and detection tasks. Owing to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that can capture feature dependencies in medical images along two independent dimensions: channel and space. By stacking these group attention blocks in ResNet style, we obtain a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51–3.47 times fewer parameters than the original ResNet and can be used directly for downstream medical image segmentation tasks. Extensive experiments show that the proposed ResGANet is superior to state-of-the-art backbone models in medical image classification tasks, and applying it to different segmentation networks improves the baseline models in medical image segmentation tasks without changing the network architecture. We hope this work provides a promising way to enhance the feature representation of convolutional neural networks (CNNs) in the future.

17.
Cell instance segmentation is important in biomedical research. For living cell analysis, microscopy images are captured under various conditions (e.g., the type of microscopy and the type of cell). Deep-learning-based methods can perform instance segmentation if sufficient annotations of individual cell boundaries are prepared as training data. Generally, annotations are required for each condition, which is very time-consuming and labor-intensive. To reduce the annotation cost, we propose a weakly supervised cell instance segmentation method that can segment individual cell regions under various conditions using only rough cell centroid positions as training data. This dramatically reduces the annotation cost compared with the standard annotation required for supervised segmentation. We demonstrated the efficacy of our method on various cell images; on average, it outperformed several conventional weakly-supervised methods. In addition, we demonstrated that our method can perform cell instance segmentation without any manual annotation by using, as training data, pairs of phase contrast and fluorescence images in which the cell nuclei are stained.

18.
Organ shape plays an important role in various clinical practices, e.g., diagnosis, surgical planning, and treatment evaluation. It is usually derived from low-level appearance cues in medical images. However, due to diseases and imaging artifacts, low-level appearance cues may be weak or misleading, and shape priors then become critical for inferring and refining the shape derived from image appearance. Effective modeling of shape priors is challenging because: (1) shape variation is complex and cannot always be modeled by a parametric probability distribution; (2) a shape instance derived from image appearance cues (the input shape) may contain gross errors; and (3) local details of the input shape are difficult to preserve if they are not statistically significant in the training data. In this paper we propose a novel Sparse Shape Composition model (SSC) to address these three challenges in a unified framework. In our method, a sparse set of shapes from a shape repository is selected and composed to infer/refine an input shape, so the a priori information is implicitly incorporated on the fly. Our model leverages two sparsity observations about the input shape instance: (1) the input shape can be approximately represented by a sparse linear combination of shapes in the repository; (2) parts of the input shape may contain gross errors, but such errors are sparse. The model is formulated as a sparse learning problem; using an L1-norm relaxation, it can be solved by an efficient expectation-maximization (EM)-type framework. Our method is extensively validated on two medical applications, 2D lung localization in X-ray images and 3D liver segmentation in low-dose CT scans. Compared to state-of-the-art methods, our model exhibits better performance in both studies.
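
A minimal sketch of the sparse-composition idea: represent an input shape as a sparse linear combination of repository shapes by solving an L1-regularized least-squares problem with ISTA. The step size, lambda, and iteration count are assumptions, and the paper's sparse gross-error term on the input is omitted.

```python
import numpy as np

def sparse_shape_composition(D: np.ndarray, y: np.ndarray,
                             lam: float = 0.1, n_iter: int = 200) -> np.ndarray:
    """D: (P, K) repository, one shape (stacked landmark coords) per column.
    y: (P,) input shape. Returns sparse coefficients x with y ~= D @ x."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L               # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((100, 30))
    x_true = np.zeros(30); x_true[[2, 7, 11]] = [1.0, -0.5, 0.8]
    y = D @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = sparse_shape_composition(D, y)
    print(np.nonzero(np.abs(x_hat) > 0.05)[0])  # recovers a sparse support
```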

19.
Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, with training databases of millions of natural images. In medical image analysis, progress is also remarkable but has been slowed by the relative scarcity of annotated data and by inherent constraints of the acquisition process; these limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the strength of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, by embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, by expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation, surpassing the state of the art. On 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on the Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network, applied to the BraTS 2022 competition, also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhancing tumor with the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
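
A minimal sketch of "weight transfer" by inflating a pre-trained 2D convolution into a 3D one: the 2D kernel is replicated along the new depth axis and scaled so the response on a depth-constant volume matches the 2D response. This is the classic inflation trick; embedding a full 2D encoder into a higher-dimensional U-Net, as the paper does, involves more plumbing than shown here.

```python
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w2d = conv2d.weight  # (O, I, kH, kW)
        # replicate along depth and divide by depth to preserve magnitudes
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

if __name__ == "__main__":
    c2 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
    c3 = inflate_conv2d(c2)
    vol = torch.randn(1, 1, 16, 64, 64)
    print(c3(vol).shape)  # torch.Size([1, 8, 16, 64, 64])
```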

20.
Medical images differ from natural images in their significantly higher resolution and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images may not be applicable to medical image analysis. In this work, we propose a novel neural network model that addresses these unique properties of medical images. The model first uses a low-capacity, memory-efficient network on the whole image to identify the most informative regions. It then applies a higher-capacity network to collect details from the chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms ResNet-34 and Faster R-CNN in classifying breasts with malignant findings (AUC = 0.93). On the CBIS-DDSM dataset, our model achieves performance on par with state-of-the-art approaches (AUC = 0.858). Compared to ResNet-34, our model is 4.1x faster at inference while using 78.4% less GPU memory. Furthermore, we demonstrate in a reader study that our model surpasses radiologist-level AUC by a margin of 0.11.
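
A minimal sketch of picking the most informative regions from the low-capacity network's saliency map before running the higher-capacity network on crops. The patch size, k, and the greedy suppression window are assumptions, not the paper's exact procedure.

```python
import torch

def topk_rois(saliency: torch.Tensor, k: int = 3, patch: int = 64):
    """saliency: (H, W) map from the global network. Returns k (row, col)
    top-left corners of patch-sized crops, chosen greedily."""
    s = saliency.clone()
    h, w = s.shape
    corners = []
    for _ in range(k):
        idx = int(torch.argmax(s))
        r, c = idx // w, idx % w
        r0 = max(0, min(r - patch // 2, h - patch))
        c0 = max(0, min(c - patch // 2, w - patch))
        corners.append((r0, c0))
        # suppress a window around the pick so the next crop differs
        s[max(0, r - patch):r + patch, max(0, c - patch):c + patch] = float("-inf")
    return corners

if __name__ == "__main__":
    sal = torch.rand(256, 256)
    for r0, c0 in topk_rois(sal):
        print(r0, c0)  # feed image[r0:r0+64, c0:c0+64] to the local network
```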
