Similar Articles
1.
Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1 % (p << 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p << 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. 
Our approach may be useful for automatically differentiating between the two cancer subtypes.
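The survey-then-vote pipeline described above can be sketched as follows. This is a minimal illustration with synthetic tile features; the random decision values stand in for the paper's Elastic Net classifier, and weighting each group's vote by its share of tiles is one plausible scheme, not necessarily the authors' exact one.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
tile_features = rng.normal(size=(200, 64))   # stand-in for per-tile shape/color/texture features

# Stage 1: survey slide diversity -- reduce dimensionality, cluster tiles into groups.
reduced = PCA(n_components=8).fit_transform(tile_features)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(reduced)

# One representative tile per group: the tile closest to the cluster centre.
reps = [int(np.argmin(np.linalg.norm(reduced - c, axis=1))) for c in km.cluster_centers_]

# Stage 2: a classifier scores each representative (random stand-in values here).
decision_values = rng.normal(size=len(reps))

# Weighted vote: each group's decision is weighted by its share of tiles.
weights = np.bincount(km.labels_, minlength=5) / km.labels_.size
slide_score = float(weights @ decision_values)
diagnosis = "GBM" if slide_score > 0 else "LGG"
```

The weighting makes large, homogeneous regions count for more than rare tile types, mirroring how the whole-slide diagnosis aggregates local evidence.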

2.
Many existing approaches for mammogram analysis are based on a single view. Some recent DNN-based multi-view approaches can perform either bilateral or ipsilateral analysis, while in practice radiologists use both to achieve the best clinical outcome. MommiNet is the first DNN-based tri-view mass identification approach, which can simultaneously perform bilateral and ipsilateral analysis of mammographic images and, in turn, can fully emulate radiologists' reading practice. In this paper, we present MommiNet-v2, with improved network architecture and performance. Novel high-resolution network (HRNet)-based architectures are proposed to learn the symmetry and geometry constraints and to fully aggregate the information from all views for accurate mass detection. A multi-task learning scheme is adopted to incorporate both Breast Imaging-Reporting and Data System (BI-RADS) and biopsy information to train a mass malignancy classification network. Extensive experiments have been conducted on the public DDSM (Digital Database for Screening Mammography) dataset and our in-house dataset, and state-of-the-art results have been achieved in terms of mass detection accuracy. Satisfactory mass malignancy classification results have also been obtained on our in-house dataset.

3.
High-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for the analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer's disease (AD), including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and the small sample size of neuroimaging data, it is challenging to train a single global classifier that is robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method that builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture local spatial consistency, each brain image is partitioned into a number of local patches, and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classifier (SRC), which has been shown to be effective for classification of image data (e.g., faces), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI patients, and 229 normal controls) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification, and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, demonstrating very promising performance compared with state-of-the-art methods for AD/MCI classification using MR images.
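The random-patch-subset ensemble can be sketched as follows. This is a toy illustration on synthetic data; a logistic regression stands in for the paper's sparse representation-based classifier, and the patch counts and subset sizes are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_subjects, n_patches, fpp = 120, 30, 4          # fpp: features per patch
X = rng.normal(size=(n_subjects, n_patches * fpp))
y = (X[:, :fpp].mean(axis=1) > 0).astype(int)    # synthetic diagnosis labels

# Each weak learner is trained on a random subset of patches.
ensemble = []
for seed in range(15):
    sub = np.random.default_rng(seed).choice(n_patches, size=8, replace=False)
    cols = np.concatenate([np.arange(p * fpp, (p + 1) * fpp) for p in sub])
    ensemble.append((cols, LogisticRegression(max_iter=1000).fit(X[:, cols], y)))

def predict(x):
    # Combine the weak decisions by majority vote.
    votes = [clf.predict(x[cols][None, :])[0] for cols, clf in ensemble]
    return int(np.mean(votes) >= 0.5)

acc = float(np.mean([predict(X[i]) == y[i] for i in range(n_subjects)]))
```

Because each weak classifier only sees a few patches, the ensemble is less sensitive to noise in any single brain region than one global classifier would be.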

4.
Cervical cancer is one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope is a widely used routine in regular examinations, but it consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely high image resolution, the existence of tiny lesions, noisy datasets, and intricate clinical definitions of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on noisy datasets with fuzzy inter-class boundaries. Inspired by the clinical diagnostic criteria of cytologists, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, which aligns well with the intricate cytological definitions of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. With a high sensitivity of 0.907 and a specificity of 0.80, our method shows the potential to reduce the workload of cytologists in routine practice.

5.
Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell cancer across the globe. With the popularity of convolutional neural networks (CNNs), computer-aided diagnosis of cancer has attracted considerable attention. Such tools are easily deployable and cost-effective, and hence can enable extensive coverage of cancer diagnostic facilities. However, the development of such a tool for ALL was challenging so far due to the non-availability of a large training dataset. The visual similarity between malignant and normal cells adds to the complexity of the problem. This paper discusses the recent release of a large dataset and presents a novel deep learning architecture for the classification of cell images of ALL. The proposed architecture, SDCT-AuxNetθ, is a two-module framework that uses a compact CNN as the main classifier in one module and a kernel SVM as the auxiliary classifier in the other. While the CNN classifier uses features obtained through bilinear pooling, the auxiliary classifier uses spectral-averaged features. Further, the CNN is trained on stain-deconvolved quantity images in the optical density domain instead of conventional RGB images. A novel test strategy is proposed that exploits both classifiers for decision making, using the confidence scores of their predicted class labels. Elaborate experiments have been carried out on our recently released public dataset of 15,114 images of ALL and healthy cells to establish the validity of the proposed methodology, which is also robust to subject-level variability. A weighted F1 score of 94.8% is obtained, the best so far on this challenging dataset.
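The conversion to the optical density domain follows the standard Beer-Lambert relation, OD = -log10(I / I0), and stain quantities are recovered by projecting OD pixels onto a stain basis. A minimal sketch follows; the stain vectors below are the widely used reference values for H&E-style deconvolution, not necessarily those used by the authors.

```python
import numpy as np

def rgb_to_od(rgb, background=255.0, eps=1e-6):
    """Convert RGB intensities to optical density: OD = -log10(I / I0)."""
    rgb = np.maximum(np.asarray(rgb, dtype=float), eps)
    return -np.log10(rgb / background)

# Reference stain OD vectors (rows), normalized to unit length.
stain_matrix = np.array([
    [0.65, 0.70, 0.29],   # hematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # residual
])
stain_matrix /= np.linalg.norm(stain_matrix, axis=1, keepdims=True)

def deconvolve(rgb_pixels):
    """Project OD pixels onto the stain basis to get per-stain quantities."""
    od = rgb_to_od(rgb_pixels)           # od ~ quantities @ stain_matrix
    return od @ np.linalg.inv(stain_matrix)

pixels = np.array([[200, 120, 180], [30, 40, 50]], dtype=float)
quantities = deconvolve(pixels)          # one quantity per stain, per pixel
```

Working in the OD domain makes pixel values linear in stain concentration, which is the property the quantity images exploit.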

6.
Objective: To evaluate the diagnostic value of the ThinPrep liquid-based cytology test combined with colposcopy for cervical lesions. Methods: 2000 patients with cervical disease were initially screened with the ThinPrep cytology test, with final diagnoses made by dedicated cytology staff. Colposcopy-guided biopsy with histopathological examination was performed in 73 positive cases. Results: There were 248 positive smears (12.4%), including 4 cases of cervical squamous carcinoma, 1 case of endometrial carcinoma, and 40 cases (2.0%) of low-grade squamous intraepithelial lesions, 77.5% of which involved human papillomavirus infection. The method achieved a sensitivity of 98.9% and a specificity of 90.9%, with an 83.6% agreement rate with colposcopy. Conclusion: Initial screening with the ThinPrep cytology test, followed by colposcopic pathology in positive cases for the final diagnosis, enables early detection of precancerous lesions.

7.
Objective: To construct a carotid plaque ultrasound image dataset and explore the value of deep learning for automatic classification and diagnosis of carotid plaques. Methods: First, carotid artery ultrasound images were collected from 254 patients and 354 normal subjects, with two images per case, yielding a dataset of 1216 images. Then, the traditional HOG+SVM method and 14 deep neural network models with different architectures were trained on this dataset. Finally, three quantitative metrics (classification precision, recall, and F1 score) were used to identify the best-performing model for carotid plaque ultrasound image classification. Results: A comprehensive comparison of the 15 classification methods showed that the deep residual network ResNet50 performed best, with a precision, recall, and F1 score of 97.36%, 97.32%, and 97.34%, respectively. Conclusion: Through dataset construction, model selection, training, and testing, this work validates the effectiveness of deep learning for automatic diagnosis from carotid plaque ultrasound images; in particular, ResNet50 classifies carotid ultrasound images with high accuracy.
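The three ranking metrics used to select the best model can be computed directly from true/false positive and negative counts; a minimal sketch for the binary case:

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Binary precision, recall, and F1 from prediction/label agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 3 true positives expected? Check by hand -- 2 TP, 1 FP, 1 FN.
p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

F1 is the harmonic mean of precision and recall, so a model must balance both to rank highly, which is why it is a sensible single-number criterion for model selection here.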

8.
Machine learning for ultrasound image analysis and interpretation can be helpful in automated image classification in large-scale retrospective analyses to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, cropped images can misclassify certain structures such as the kidneys and abdomen. Also, the whole image does not encode sufficient local information about structures to identify different structures in different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with whole fetal ultrasound images and the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. A Cohen's κ of 0.72 revealed the highest agreement between the ground truth and the proposed method. The superiority of the proposed method over other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of predicting images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.

9.

Purpose 

Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules.

Methods

We evaluate the effectiveness of very deep convolutional neural networks at the task of expert-level lung nodule malignancy classification. Using the state-of-the-art ResNet architecture as our basis, we explore the effect of curriculum learning, transfer learning, and varying network depth on the accuracy of malignancy classification.

Results

Due to a lack of public datasets with standardized problem definitions and train/test splits, studies in this area tend not to compare directly against other existing work, which makes it hard to know the relative improvement of a new solution. In contrast, we directly compare our system against two state-of-the-art deep learning systems for nodule classification on the LIDC/IDRI dataset, using the same experimental setup and data. The results show that our system achieves the highest performance in terms of all metrics measured, including sensitivity, specificity, precision, AUROC, and accuracy.

Conclusions

The proposed method of combining deep residual learning, curriculum learning, and transfer learning translates to high nodule classification accuracy. This reveals a promising new direction for effective pulmonary nodule CAD systems that mirrors the success of recent deep learning advances in other image-based application domains.
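Curriculum learning, as used in the Methods above, orders training examples from easy to hard and grows the training pool in stages. A minimal sketch of one common staging scheme follows; the difficulty scores and stage fractions here are hypothetical illustrations, not the paper's actual schedule.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
difficulty = rng.random(n)        # e.g., a per-nodule ambiguity score (hypothetical)
order = np.argsort(difficulty)    # easiest examples first

# Curriculum: expand the training pool in stages rather than sampling uniformly.
stages = [order[: int(f * n)] for f in (0.25, 0.5, 1.0)]
for pool in stages:
    batch = rng.choice(pool, size=16, replace=True)
    # train_step(model, X[batch], y[batch])   # placeholder for the actual SGD step
```

Early stages expose the network only to unambiguous cases, which can stabilize optimization before the harder, borderline nodules are introduced.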

10.
We propose a BlackBox Counterfactual Explainer, designed to explain image classification models for medical applications. Classical approaches (e.g., saliency maps) that assess feature importance do not explain how imaging features in important anatomical regions are relevant to the classification decision. Such reasoning is crucial for transparent decision-making in healthcare applications. Our framework explains the decision for a target class by gradually exaggerating the semantic effect of the class in a query image. We adopted a Generative Adversarial Network (GAN) to generate a progressive set of perturbations to a query image, such that the classification decision changes from its original class to its negation. Our proposed loss function preserves essential details (e.g., support devices) in the generated images. We used counterfactual explanations from our framework to audit a classifier trained on a chest X-ray dataset with multiple labels. Clinical evaluation of model explanations is a challenging task. We proposed clinically relevant quantitative metrics, such as the cardiothoracic ratio and the score of a healthy costophrenic recess, to evaluate our explanations. We used these metrics to quantify the counterfactual changes between the populations with negative and positive decisions for a diagnosis by the given classifier. We conducted a human-grounded experiment with diagnostic radiology residents to compare different styles of explanations (no explanation, saliency map, cycleGAN explanation, and our counterfactual explanation) by evaluating different aspects of explanations: (1) understandability, (2) classifier's decision justification, (3) visual quality, (4) identity preservation, and (5) overall helpfulness of an explanation to the users. Our results show that our counterfactual explanation was the only explanation method that significantly improved the users' understanding of the classifier's decision compared to the no-explanation baseline.
Our metrics established a benchmark for evaluating model explanation methods in medical images. Our explanations revealed that the classifier relied on clinically relevant radiographic features for its diagnostic decisions, thus making its decision-making process more transparent to the end-user.

11.
Medical Image Analysis, 2015, 25(1): 245-254
Many automatic segmentation methods are based on supervised machine learning. Such methods have proven to perform well, on the condition that they are trained on a sufficiently large, manually labeled training set that is representative of the images to segment. However, due to differences between scanners, scanning parameters, and patients, such a training set may be difficult to obtain. We present a transfer-learning approach to segmentation by multi-feature voxelwise classification. The presented method can be trained using a heterogeneous set of training images that may be obtained with different scanners than the target image. In our approach, each training image is given a weight based on the distribution of its voxels in the feature space. These image weights are chosen so as to minimize the difference between the weighted probability density function (PDF) of the voxels of the training images and the PDF of the voxels of the target image. The voxels and weights of the training images are then used to train a weighted classifier. We tested our method on three segmentation tasks: brain-tissue segmentation, skull stripping, and white-matter-lesion segmentation. For all three applications, the proposed weighted classifier significantly outperformed an unweighted classifier on all training images, reducing classification errors by up to 42%. For brain-tissue segmentation and skull stripping, our method even significantly outperformed the traditional approach of training on representative training images from the same study as the target image.
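The image-weighting idea can be illustrated with histogram-based PDF estimates. The sketch below uses unconstrained least squares followed by clipping and renormalisation as a crude stand-in for the paper's constrained optimisation; the synthetic intensity distributions mimic images from different scanners.

```python
import numpy as np

rng = np.random.default_rng(3)
bins = np.linspace(-4, 4, 33)

# Voxel-intensity stand-ins: training images from different "scanners"
# (shifted distributions) and one target image.
train_imgs = [rng.normal(loc=m, size=5000) for m in (-1.0, 0.0, 0.4, 2.0)]
target = rng.normal(loc=0.3, size=5000)

def pdf(x):
    h, _ = np.histogram(x, bins=bins, density=True)
    return h

P = np.stack([pdf(im) for im in train_imgs])     # (n_images, n_bins)
q = pdf(target)

# Weights matching the weighted training PDF to the target PDF.
w, *_ = np.linalg.lstsq(P.T, q, rcond=None)
w = np.clip(w, 0, None)                          # crude non-negativity fix
w /= w.sum()                                     # normalise to a convex combination
```

Training images whose voxel distributions resemble the target's end up with large weights, so the downstream weighted classifier is effectively trained on the most target-like data.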

12.
Remote Sensing Letters, 2013, 4(11): 1095-1104

As the resolution of Synthetic Aperture Radar (SAR) images increases, fine-grained classification of ships has become a focus of the SAR field. In this paper, a ship classification framework based on a deep residual network for high-resolution SAR images is proposed. In general, networks with more layers have higher classification accuracy. However, training accuracy degradation and limited datasets are major problems in the training process. To build deeper networks, residual modules are constructed and batch normalization is applied to stabilize the activation outputs. Different fine-tuning strategies are compared to select the best training scheme. To take advantage of the proposed framework, a dataset of 835 ship slices is augmented by different multiples and then used to validate our method against other Convolutional Neural Network (CNN) models. The experimental results show that the proposed framework achieves a 99% overall accuracy on the augmented dataset under the optimal fine-tuning strategy, 3% higher than the other models, which demonstrates the effectiveness of our approach.

13.
Selecting an appropriate time to acquire imagery for land-cover classification can have a substantial effect on classification accuracy. In this research, multi-temporal analysis of six Landsat images for binary impervious surface classification was conducted to investigate whether specific image dates (beyond simply leaf on/off) have a significant effect on impervious surface classification. We further examined the image date effects across training data sample sizes and classification algorithms. For single-time classification, the selection of an appropriate image time had the largest effect on accuracy, with a range of 7% to 10% between the most and least accurate classifications. The greenness transition time between leaf off and leaf on (May images for our site) offered the highest performance. With multi-temporal images, an additional improvement in classification accuracy of up to 2.4% over the best single-time classification was achieved when an advanced classifier (Support Vector Machine) was used. In addition, using all six available images with a reference data sample size as small as 150 pixels, classification accuracy was higher than that of many single-time classifications with substantially larger calibration data sample sizes. Our study suggests that there is considerable variability in the classification accuracy of multi-temporal imagery and that image dates should be carefully considered, beyond a general leaf on/off rule. Further testing should be conducted at other sites to identify optimal image dates.

14.
Hyperspectral imagery combined with spatial features holds promise for improved remote sensing classification. In this letter, we propose a method for classification of hyperspectral data based on the incorporation of the spatial arrangement of pixel values. We use the semivariogram to measure spatial correlation, which is then combined with spectral features within the stacked-kernel support vector machine framework. The proposed method is compared with a classifier based on first-order statistics. The overall classification accuracy is tested on the AVIRIS Indian Pines benchmark dataset. Error matrices are used to estimate individual class accuracy. Statistical significance of the accuracy estimates is assessed based on the kappa coefficient and z-statistics at the 95% confidence level. Empirical results show that the proposed approach gives better performance than the method based on first-order statistics.
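The empirical semivariogram measures how dissimilarity between pixel values grows with spatial separation: gamma(h) is the mean of 0.5*(z_i - z_j)^2 over pixel pairs at (approximately) lag h. A brute-force sketch on synthetic data:

```python
import numpy as np

def semivariogram(values, coords, lags, tol=0.5):
    """Empirical semivariogram: for each lag h, average 0.5*(z_i - z_j)^2
    over pixel pairs whose separation distance is within tol of h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol
        np.fill_diagonal(mask, False)          # exclude self-pairs
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(4)
coords = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2).astype(float)
values = rng.normal(size=coords.shape[0])      # one band of an 8x8 pixel window
gamma = semivariogram(values, coords, lags=[1, 2, 3])
```

A rising gamma curve indicates spatial correlation at short range; these per-band, per-lag values are the spatial features that get stacked with the spectral features in the kernel.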

15.
Histopathology is a crucial diagnostic tool in cancer and involves the analysis of gigapixel slides. Multiple instance learning (MIL) promises success in digital histopathology thanks to its ability to handle gigapixel slides and work with weak labels. MIL is a machine learning paradigm that learns the mapping between bags of instances and bag labels. It represents a slide as a bag of patches and uses the slide's weak label as the bag's label. This paper introduces distribution-based pooling filters that obtain a bag-level representation by estimating the marginal distributions of instance features. We formally prove that distribution-based pooling filters are more expressive than their classical point-estimate-based counterparts, such as 'max' and 'mean' pooling, in terms of the amount of information captured while obtaining bag-level representations. Moreover, we empirically show that models with distribution-based pooling filters perform equal to or better than those with point-estimate-based pooling filters on distinct real-world MIL tasks defined on the CAMELYON16 lymph node metastases dataset. Our model with a distribution pooling filter achieves an area under the receiver operating characteristic curve of 0.9325 (95% confidence interval: 0.8798-0.9743) in the tumor vs. normal slide classification task.
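The contrast between point-estimate pooling and distribution-based pooling can be illustrated with a per-feature histogram over a bag's instances. This is a simplified sketch: the bin range and count are arbitrary choices, and the paper's filters estimate the marginals differently (and differentiably) inside a network.

```python
import numpy as np

def max_pool(instance_feats):
    # Point estimate: one number per feature (the maximum over instances).
    return instance_feats.max(axis=0)

def mean_pool(instance_feats):
    # Point estimate: the mean over instances.
    return instance_feats.mean(axis=0)

def distribution_pool(instance_feats, bins=8, lo=-3.0, hi=3.0):
    """Represent the bag by a normalised per-feature histogram -- a coarse
    estimate of each feature's marginal distribution over the bag."""
    edges = np.linspace(lo, hi, bins + 1)
    hists = [np.histogram(instance_feats[:, j], bins=edges)[0]
             for j in range(instance_feats.shape[1])]
    h = np.stack(hists).astype(float)
    return (h / h.sum(axis=1, keepdims=True)).ravel()

rng = np.random.default_rng(5)
bag = rng.normal(size=(50, 4))      # 50 patch embeddings of dimension 4
```

Where max/mean pooling collapses each feature to one scalar, the histogram keeps bins-per-feature numbers, so multimodal instance populations (e.g., a few tumor patches among many normal ones) remain visible in the bag representation.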

16.
Quantifying uncertainty of predictions has been identified as one way to develop more trustworthy artificial intelligence (AI) models beyond conventional reporting of performance metrics. When considering their role in a clinical decision support setting, AI classification models should ideally avoid confident wrong predictions and maximise the confidence of correct predictions. Models that do this are said to be well calibrated with regard to confidence. However, relatively little attention has been paid to how to improve calibration when training these models, i.e. to make the training strategy uncertainty-aware. In this work we: (i) evaluate three novel uncertainty-aware training strategies with regard to a range of accuracy and calibration performance measures, comparing against two state-of-the-art approaches, (ii) quantify the data (aleatoric) and model (epistemic) uncertainty of all models and (iii) evaluate the impact of using a model calibration measure for model selection in uncertainty-aware training, in contrast to the normal accuracy-based measures. We perform our analysis using two different clinical applications: cardiac resynchronisation therapy (CRT) response prediction and coronary artery disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The best-performing model in terms of both classification accuracy and the most common calibration measure, expected calibration error (ECE) was the Confidence Weight method, a novel approach that weights the loss of samples to explicitly penalise confident incorrect predictions. The method reduced the ECE by 17% for CRT response prediction and by 22% for CAD diagnosis when compared to a baseline classifier in which no uncertainty-aware strategy was included. In both applications, as well as reducing the ECE there was a slight increase in accuracy from 69% to 70% and 70% to 72% for CRT response prediction and CAD diagnosis respectively. 
However, our analysis showed a lack of consistency in terms of optimal models when using different calibration measures. This indicates the need for careful consideration of performance metrics when training and selecting models for complex, high-risk applications in healthcare.
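The expected calibration error (ECE) reported above bins predictions by confidence and takes a weighted average of the gap between per-bin accuracy and per-bin mean confidence; a minimal sketch:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: sum over confidence bins of (bin fraction) * |accuracy - mean confidence|."""
    confidences = np.asarray(confidences, float)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Perfectly calibrated toy case: 95% of the 0.95-confidence predictions are correct.
conf = np.array([0.95] * 20)
corr = np.array([1] * 19 + [0])
ece = expected_calibration_error(conf, corr)
```

An overconfident model (say, 0.95 confidence but 70% accuracy) would contribute a 0.25 gap from that bin, which is exactly the kind of confident wrong prediction the uncertainty-aware training strategies penalise.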

17.
Purpose Lower back pain affects 80–90% of all people at some point during their lifetime, and it is considered the second most common neurological ailment after headache. It is caused by defects in the discs, vertebrae, or the soft tissues. Radiologists perform diagnosis mainly from X-ray radiographs, MRI, or CT depending on the target organ. Vertebra fracture is usually diagnosed from X-ray radiographs or CT depending on the available technology. In this paper, we propose a fully automated computer-aided diagnosis (CAD) system for the diagnosis of vertebra wedge compression fracture from CT images that integrates within the clinical routine. Methods We perform vertebra localization and labeling, segment the vertebrae, and then diagnose each vertebra. We perform labeling and segmentation via a coordinated system that consists of an Active Shape Model and Gradient Vector Flow Active Contours (GVF-Snake). We propose a set of clinically motivated features that distinguish the fractured vertebra. We provide two machine learning solutions that utilize our features: a supervised learner (neural networks (NN)) and an unsupervised learner (K-Means). Results We validate our method on a set of fifty (thirty abnormal) Computed Tomography (CT) cases obtained from our collaborating radiology center. Our diagnosis detection accuracy using NN is 93.2% on average, while we obtained 98% diagnosis accuracy using K-Means. Our K-Means resulted in a specificity of 87.5% and sensitivity over 99%. Conclusions We presented a fully automated CAD system that seamlessly integrates within the clinical workflow of the radiologist. Our clinically motivated features resulted in strong performance for both the supervised and unsupervised learners that we utilize to validate our CAD system. Our CAD system results are promising for clinical application after extensive validation.

18.
There are two challenges associated with the interpretability of deep learning models in medical image analysis applications that need to be addressed: confidence calibration and classification uncertainty. Confidence calibration associates the classification probability with the likelihood that it is actually correct; hence, a sample that is classified with confidence X% has an X% chance of being correctly classified. Classification uncertainty estimates the noise present in the classification process, where this noise estimate can be used to assess the reliability of a particular classification result. Both confidence calibration and classification uncertainty are considered to be helpful in the interpretation of a classification result produced by a deep learning model, but it is unclear how much they affect classification accuracy and calibration, and how they interact. In this paper, we study the roles of confidence calibration (via post-process temperature scaling) and classification uncertainty (computed either from classification entropy or the predicted variance produced by Bayesian methods) in deep learning models. Results suggest that calibration and uncertainty improve classification interpretation and accuracy. This motivates us to propose a new Bayesian deep learning method that relies on both calibration and uncertainty to improve classification accuracy and model interpretability. Experiments are conducted on a recently proposed five-class polyp classification problem, using a data set containing 940 high-quality images of colorectal polyps, and results indicate that our proposed method achieves state-of-the-art results in terms of confidence calibration and classification accuracy.
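Post-process temperature scaling, as used above, divides the logits by a single scalar T fitted on held-out data so that softmax probabilities match empirical accuracy; predictions themselves are unchanged since argmax is invariant to T > 0. A minimal grid-search sketch with illustrative synthetic logits and labels:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the labels under temperature-scaled softmax."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the scalar T that minimises validation NLL over a grid."""
    return grid[np.argmin([nll(logits, labels, T) for T in grid])]

rng = np.random.default_rng(6)
logits = rng.normal(scale=4.0, size=(500, 5))   # overconfident toy logits
labels = rng.integers(0, 5, size=500)           # labels unrelated to the logits
T = fit_temperature(logits, labels)
```

In a real setting the logits and labels would come from a validation set; in practice T is usually fitted by gradient descent on the same NLL objective rather than a grid.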

19.
Microcystic macular edema (MME) manifests as small, hyporeflective cystic areas within the retina. For reasons that are still largely unknown, a small proportion of patients with multiple sclerosis (MS) develop MME, predominantly in the inner nuclear layer. These cystoid spaces, denoted pseudocysts, can be imaged using optical coherence tomography (OCT), where they appear as small, discrete, low-intensity areas with high contrast to the surrounding tissue. The ability to automatically segment these pseudocysts would enable a more detailed study of MME than has previously been possible. Although larger pseudocysts often appear quite clearly in the OCT images, the multi-frame averaging performed by the Spectralis scanner adds a significant amount of variability to the appearance of smaller pseudocysts. Thus, simple segmentation methods incorporating only intensity information do not perform well. In this work, we propose to use a random forest classifier to classify the MME pixels. An assortment of both intensity and spatial features is used to aid the classification. Using a cross-validation evaluation strategy with manual delineation as ground truth, our method is able to correctly identify 79% of pseudocysts with a precision of 85%. Finally, we constructed a classifier from the output of our algorithm to distinguish clinically identified MME from non-MME subjects, yielding an accuracy of 92%.
OCIS codes: (100.0100) Image processing, (170.4470) Ophthalmology, (170.4500) Optical coherence tomography

20.

Purpose

We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI).

Methods

The method is based on a superpixel technique and the classification of each superpixel. A number of novel image features, including intensity-based features, Gabor textons, fractal analysis, and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel as tumour or non-tumour.

Results

The proposed method is evaluated on two datasets: (1) our own clinical dataset, consisting of 19 FLAIR MRI images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset, consisting of 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate, and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6%, and 0.91, respectively, while for the BRATS dataset the corresponding results are 88.09%, 6%, and 0.88.

Conclusions

This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
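The per-superpixel classification step can be sketched with scikit-learn's extremely randomized trees. The synthetic feature vectors below stand in for the paper's intensity, Gabor texton, fractal, and curvature features, and the labels are artificial.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(7)
n_sp = 400
X = rng.normal(size=(n_sp, 10))                  # one feature vector per superpixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic tumour / non-tumour labels

# Train on 300 superpixels, predict the remaining 100.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X[:300], y[:300])
pred = clf.predict(X[300:])
acc = float((pred == y[300:]).mean())

# A slide-level tumour mask would then be assembled by assigning each pixel
# the predicted label of the superpixel containing it.
```

ERT differs from a standard random forest in that split thresholds are drawn at random rather than optimised, which adds variance reduction and is one reason it can outperform an SVM on noisy superpixel features.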
