Similar Literature
19 similar documents found.
1.
Optical coherence tomography (OCT) provides high-resolution, three-dimensional tomographic imaging of the retina and is essential for diagnosing retinal diseases and analyzing their stage of progression. Clinical diagnosis from OCT images relies mainly on ophthalmologists' visual analysis of pathological structures, a manual process that is time-consuming and prone to subjective misjudgment. Automated analysis and diagnosis of retinal disease would greatly reduce ophthalmologists' workload and is an effective route to efficient care. For automatic classification of retinal OCT images, this work builds a convolutional neural network with joint decision-making: the network automatically learns features at multiple levels from the raw OCT input, multiple decision layers are attached to different convolutional layers so that each classifies the image from feature maps at a different scale, and the model fuses all decision layers' outputs into the final prediction. On the Duke dataset (3,231 OCT images), the multi-level joint-decision CNN achieved an average accuracy of 94.5%, sensitivity of 90.5%, and specificity of 95.8% in recognizing normal retina, age-related macular degeneration, and macular edema. On the HUCM dataset (4,322 OCT images), it achieved an average accuracy of 89.6%, sensitivity of 88.8%, and specificity of 90.8%. Fully exploiting the rich multi-level features of a convolutional neural network enables accurate classification of retinal OCT images and provides technical support for computer-aided diagnosis of retinal disease in the clinic.
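A minimal PyTorch sketch of the multi-level joint-decision idea described above: an auxiliary classifier head on each convolutional stage, fused by averaging class probabilities. Channel widths, head design, and input size are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class JointDecisionCNN(nn.Module):
    """CNN with a decision (classifier) head on each conv stage;
    the final prediction averages all heads' class probabilities."""
    def __init__(self, num_classes=3):  # normal / AMD / macular edema
        super().__init__()
        chans = [1, 32, 64, 128]        # illustrative channel widths
        self.stages, self.heads = nn.ModuleList(), nn.ModuleList()
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            self.stages.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.BatchNorm2d(c_out), nn.ReLU(), nn.MaxPool2d(2)))
            # one decision layer per scale: global pool + linear classifier
            self.heads.append(nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(c_out, num_classes)))

    def forward(self, x):
        logits = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits.append(head(x))
        # fuse the per-scale decisions by averaging softmax outputs
        probs = torch.stack([l.softmax(dim=1) for l in logits]).mean(dim=0)
        return probs, logits  # logits kept for per-head training losses

model = JointDecisionCNN()
probs, _ = model(torch.randn(2, 1, 224, 224))  # batch of grayscale OCT slices
print(probs.shape)  # torch.Size([2, 3])
```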

2.
梁楠, 赵政辉, 周依, 武博, 李长波, 于鑫, 马思伟, 张楠. 《中国医学物理学杂志》, 2020, 37(12): 1513-1519
Objective: To propose a sliding-window algorithm that uses a deep convolutional neural network for local patch classification and whole-image breast mass segmentation, providing effective mass morphology features for clinical diagnosis. Methods: The breast region is first extracted with region growing and dilation, and the data are normalized. To obtain diagnostic information at every pixel position, mass and non-mass patches are extracted by sliding a window over the image, and a convolutional neural network classifies each patch from its texture. Aggregating the patch-level predictions yields a coarse-to-fine, pixel-level segmentation of masses in the whole mammogram. Results: Compared against state-of-the-art deep CNN models, the sliding-patch classification reached 96.71% accuracy with a DenseNet backbone, and whole-image mass segmentation on mammograms reached a best F1-score of 83.49%. Conclusion: The algorithm can segment masses in mammography images and provides a reliable basis for subsequent diagnosis of breast lesions.
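The patch-vote aggregation step might look like the following NumPy sketch; the window size, stride, and toy classifier are assumptions, with the paper's DenseNet standing in for `patch_clf`.

```python
import numpy as np

def sliding_window_mass_map(image, patch_clf, patch=64, stride=16):
    """Coarse pixel-level mass probability map by sliding-patch classification.
    `patch_clf` maps a (patch, patch) array to P(mass); window size and
    stride are illustrative assumptions, not the paper's settings."""
    h, w = image.shape
    votes = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = patch_clf(image[y:y+patch, x:x+patch])
            votes[y:y+patch, x:x+patch] += p   # spread the patch decision
            counts[y:y+patch, x:x+patch] += 1
    return votes / np.maximum(counts, 1)       # average overlapping votes

# toy classifier: "mass" = brighter-than-average patch
demo = sliding_window_mass_map(np.random.rand(256, 256),
                               lambda p: float(p.mean() > 0.5))
print(demo.shape)  # (256, 256) probability map; threshold it for a mask
```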

3.
Because of speckle noise, artifacts, and the highly variable shape of lesions, automatically detecting tumor regions and extracting lesion boundaries in breast ultrasound images is difficult, and existing methods mostly require physicians to delineate the region of interest (ROI) by hand. This study proposes a method for automatic ROI detection in breast tumor ultrasound images: local texture, local gray-level co-occurrence matrix (GLCM) features, and position information serve as features, and a self-organizing map (SOM) neural network classifies them to identify tumor regions automatically. On a database of 168 breast tumor ultrasound images, the method detected ROIs with 86.9% accuracy and can assist physicians in extracting true tumor boundaries and in further diagnosis.
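A sketch of the feature-plus-SOM pipeline, assuming scikit-image for the GLCM features and the third-party minisom package for the self-organizing map; the block size and the chosen GLCM properties are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19
from minisom import MiniSom                            # pip install minisom

def block_features(img, y, x, size=32):
    """GLCM texture features plus normalized position for one image block."""
    block = img[y:y+size, x:x+size]
    glcm = graycomatrix(block, distances=[1], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    feats += [y / img.shape[0], x / img.shape[1]]  # position cue
    return np.array(feats)

# toy data: feature vectors from blocks of a random "ultrasound" image
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
X = np.array([block_features(img, y, x)
              for y in range(0, 96, 32) for x in range(0, 96, 32)])

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5)
som.train_random(X, 500)
print([som.winner(v) for v in X])  # map units; label units tumor / non-tumor
```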

4.
Accurate early diagnosis can delay the progression of Alzheimer's disease (AD). Magnetic resonance imaging (MRI) and positron emission tomography (PET) have been shown to capture AD-related anatomical and functional neural changes, and recent studies show that fusing multimodal features improves classification performance. This study proposes a new multimodal classification framework based on a convolutional recurrent neural network: a 2D convolutional neural network is combined with a recurrent neural network to learn intra-slice and inter-slice features after 3D MRI and 3D PET volumes are cut into sequences of 2D slices, enabling early diagnosis of AD. The method achieved an accuracy of 93.3% and an AUC of 98.1% for AD vs. NC classification, 83.8% accuracy and 91.9% AUC for MCIc vs. NC, and 79.0% accuracy and 88.9% AUC for MCIc vs. MCInc, demonstrating good classification performance.
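A compact PyTorch sketch of the slice-sequence idea: a shared 2D CNN encodes each slice (intra-slice features) and an LSTM aggregates across the slice sequence (inter-slice features). All sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SliceCNNRNN(nn.Module):
    """2D CNN per slice + LSTM over the slice sequence; sizes illustrative."""
    def __init__(self, num_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> 32-d per slice
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, volume):               # (B, T, 1, H, W) slice sequence
        b, t = volume.shape[:2]
        z = self.cnn(volume.flatten(0, 1))   # encode all slices at once
        z = z.view(b, t, -1)
        _, (h, _) = self.rnn(z)              # last hidden state summarizes
        return self.fc(h[-1])

logits = SliceCNNRNN()(torch.randn(2, 24, 1, 96, 96))  # 24 slices per scan
print(logits.shape)  # torch.Size([2, 2])
```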

5.
This paper uses a radiomics approach to predict the breast tumor molecular marker estrogen receptor (ER). Breast ultrasound images are first segmented with a phase-based active contour (PBAC) model; 404 high-throughput features covering tumor morphology, texture, and wavelet characteristics are then extracted and quantified; features are screened in R with a genetic algorithm combined with the minimum-redundancy maximum-relevance (mRMR) criterion; finally, support vector machine (SVM) and AdaBoost classifiers predict the molecular pathology indicator ER from the ultrasound image. On 104 clinical breast tumor ultrasound images, the best results were obtained with AdaBoost as the classifier: ER prediction accuracy reached 75.96% and the area under the receiver operating characteristic curve (AUC) reached 79.39%. The results demonstrate the feasibility of predicting breast cancer ER expression with radiomics.
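The paper couples a genetic algorithm with the mRMR criterion; as a simpler stand-in, the sketch below does greedy mRMR-style selection (mutual-information relevance minus correlation redundancy) and feeds the selected features to scikit-learn's AdaBoost. The data and the helper `mrmr` are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

def mrmr(X, y, k=10):
    """Greedy mRMR: maximize relevance to y while penalizing mean
    correlation with already-chosen features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    chosen = [int(np.argmax(relevance))]
    while len(chosen) < k:
        redundancy = np.abs(np.corrcoef(X.T))[:, chosen].mean(axis=1)
        score = relevance - redundancy
        score[chosen] = -np.inf          # never re-pick a chosen feature
        chosen.append(int(np.argmax(score)))
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 404))          # 404 radiomic features, 104 cases
y = rng.integers(0, 2, size=104)         # ER status (toy labels)
cols = mrmr(X, y, k=10)
acc = cross_val_score(AdaBoostClassifier(), X[:, cols], y, cv=5).mean()
print(cols, round(acc, 3))
```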

6.
The automated breast volume scanner (ABVS) has become an important breast cancer screening modality owing to its efficiency and lack of radiation. Computer-aided benign/malignant classification of breast tumors in ABVS images can help clinicians diagnose breast cancer accurately and quickly, and may even help raise the diagnostic level of junior physicians. Because ABVS produces large 3D volumes, conventional deep learning on them trains slowly and consumes substantial resources. This study designs a multi-view image extraction scheme for ABVS data to replace direct 3D input, reducing the parameter count while compensating for the spatial context that 2D deep learning loses; it then proposes a deep self-attention encoder (Transformer) network over the spatial relations of the cross-view images to obtain effective feature representations. On an in-house ABVS database of 153 volumes, benign/malignant classification reached 86.88% accuracy, an F1-score of 81.70%, and an AUC of 0.8316. The proposed method shows promise for benign/malignant screening of breast tumors in ABVS images.
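A minimal PyTorch sketch of the multi-view Transformer idea: each extracted 2D view becomes one token, and a Transformer encoder attends across views. The embedding CNN, the learned view positions, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiViewTransformer(nn.Module):
    """A small CNN embeds each 2D view; a Transformer encoder attends
    across the view tokens to recover cross-view spatial context."""
    def __init__(self, n_views=3, dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))
        self.view_pos = nn.Parameter(torch.zeros(n_views, dim))  # view identity
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, views):                    # (B, V, 1, H, W)
        b, v = views.shape[:2]
        tok = self.embed(views.flatten(0, 1)).view(b, v, -1) + self.view_pos
        tok = self.encoder(tok)                  # cross-view self-attention
        return self.fc(tok.mean(dim=1))          # pool views, classify

out = MultiViewTransformer()(torch.randn(2, 3, 1, 128, 128))
print(out.shape)  # torch.Size([2, 2])
```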

7.
To extract MRI information from patients with depression more fully and improve diagnostic accuracy, this study takes functional and structural MRI as inputs and proposes a dual-modality fusion algorithm for depression classification. Four functional brain networks at different scales are first constructed to extract features from the functional MRI; a transfer-learned 3D densely connected convolutional neural network then extracts features from the structural MRI; canonical correlation analysis (CCA) fuses the two feature sets; and a support vector machine classifies the fused features, labeling each subject as healthy or depressed. Experiments show the method achieves 89.56% classification accuracy and 95.48% recall; compared with single-modality classification, the dual-modality approach performs better, and CCA fuses the two modalities' image features effectively.
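The CCA fusion step could look like this scikit-learn sketch; the feature dimensions and labels are toy stand-ins for the paper's functional-network and 3D-DenseNet features.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
f_feats = rng.normal(size=(120, 40))   # functional-network features (toy)
s_feats = rng.normal(size=(120, 60))   # structural 3D-CNN features (toy)
y = rng.integers(0, 2, size=120)       # healthy vs. depressed (toy labels)

# CCA projects both modalities into a shared, maximally correlated space
cca = CCA(n_components=10)
f_c, s_c = cca.fit(f_feats, s_feats).transform(f_feats, s_feats)
fused = np.hstack([f_c, s_c])          # concatenate the canonical variates

print(cross_val_score(SVC(kernel="rbf"), fused, y, cv=5).mean())
```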

8.
To improve the detection of breast tumors in images, this study proposes an automatic breast tumor region segmentation method based on deep neural networks. The grayscale single-channel breast tumor images are preprocessed with adaptive histogram equalization and the Sobel operator. The segmentation network built on deep neural networks consists of three main units: a multi-scale contextual feature extraction unit, a mixed 3D/2D convolution unit, and a backbone network. The generalized Dice loss…
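The preprocessing the abstract names (adaptive histogram equalization plus the Sobel operator) might be implemented as follows with OpenCV; the CLAHE parameters and the gradient-magnitude output are plausible readings, not the paper's exact settings.

```python
import cv2
import numpy as np

def preprocess(gray):
    """CLAHE contrast enhancement followed by Sobel edge extraction."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)                        # adaptive equalization
    gx = cv2.Sobel(eq, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(eq, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)                 # gradient magnitude
    return eq, cv2.normalize(edges, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)

eq, edges = preprocess((np.random.rand(256, 256) * 255).astype(np.uint8))
print(eq.shape, edges.shape)
```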

9.
Objective: To explore the feasibility and value of classifying liver tissue slice images as normal or diseased with convolutional neural networks. Methods: A method that automatically learns image features and classifies them was used: the original Inception V3 model was first trained on the liver tissue slice dataset, then fine-tuned to obtain an improved Inception V3 model, which was used to classify slice images into the two categories, normal and diseased. Results: The improved Inception V3 model classified liver slice images well, with an average accuracy of 99.2%. Conclusion: CNN-based normal/diseased classification of liver tissue slice images is feasible and reasonable, and the fine-tuned Inception V3 model classifies effectively.
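A hedged torchvision sketch of Inception V3 fine-tuning for a two-class task; `weights=None` avoids a download here, whereas the paper starts from a trained model, and the two-speed learning rates are an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# Inception V3 with both classifier heads replaced for 2 classes
# (normal vs. diseased); weights=None here only to avoid a download
model = models.inception_v3(weights=None, aux_logits=True, init_weights=True)
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

# fine-tuning: small LR on the trunk, larger LR on the new heads
trunk = [p for n, p in model.named_parameters() if "fc" not in n]
heads = list(model.fc.parameters()) + list(model.AuxLogits.fc.parameters())
opt = torch.optim.SGD([{"params": trunk, "lr": 1e-4},
                       {"params": heads, "lr": 1e-3}], momentum=0.9)

model.train()
x, y = torch.randn(2, 3, 299, 299), torch.tensor([0, 1])  # 299 = V3 input
out = model(x)                             # train mode: (logits, aux_logits)
loss = nn.functional.cross_entropy(out.logits, y) \
     + 0.4 * nn.functional.cross_entropy(out.aux_logits, y)
loss.backward(); opt.step()
print(float(loss))
```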

10.
Objective: To address the shortcomings of conventional methods for detecting pathology images in the clinic and the errors introduced by manual judgment. Methods: A breast tumor cell dataset was used; data augmentation first doubled the dataset, and the augmented data were fed into the proposed model for training. Results: After 100 epochs, accuracy reached 97.44% on the training set and 96.4% on the test set, with a recall of 95.5%, a clear improvement over comparable studies. Conclusion: The proposed improved convolutional neural network converges quickly and generalizes well, and can effectively classify breast tumor cells as benign or malignant.
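One plausible way to double a dataset by augmentation, using torchvision transforms; the specific transforms and their parameters are assumptions, as the abstract does not specify them.

```python
import torch
from torchvision import transforms

# keep each original image and add one randomly augmented copy
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def double_dataset(images):
    """images: list of (C, H, W) tensors -> originals + augmented copies."""
    return images + [augment(img) for img in images]

data = [torch.rand(3, 64, 64) for _ in range(4)]
print(len(double_dataset(data)))  # 8
```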

11.
Breast cancer is one of the leading causes of cancer death among women worldwide. Existing diagnosis relies mainly on physicians inspecting histopathology images, which is time-consuming and labor-intensive and depends on the physician's expertise and experience, so diagnostic efficiency is unsatisfactory. To address this, a deep learning framework based on histology images is designed to improve breast cancer diagnostic accuracy while reducing physicians' workload. A classification model based on multi-network feature fusion and sparse dual-relation regularized learning is developed: first, breast cancer images are preprocessed by sub-image cropping and color enhancement; second, three representative deep convolutional neural networks (InceptionV3, ResNet-50, and VGG-16) extract deep convolutional features from the pathology images, and the multi-network features are fused; finally, a supervised dual-relation regularized learning method for dimensionality reduction is proposed, exploiting two kinds of relations (sample-sample and feature-feature) together with ℓF regularization, and a support vector machine classifies the pathology images into four classes: normal, benign, carcinoma in situ, and invasive carcinoma. Validated on 400 breast cancer pathology images from the public ICIAR2018 dataset, the model achieved 93% classification accuracy. Fusing multi-network deep convolutional features effectively captures rich image information, while sparse dual-relation regularized learning reduces feature redundancy and noise, effectively improving the model's classification performance.
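A sketch of multi-network deep-feature fusion feeding an SVM; only two of the three trunks are shown (InceptionV3 would be handled analogously), `weights=None` keeps the sketch download-free, and the dual-relation regularized dimensionality reduction is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# ImageNet weights would be used in practice; weights=None avoids a download
resnet = models.resnet50(weights=None)
resnet_trunk = nn.Sequential(*list(resnet.children())[:-1])       # -> 2048-d
vgg = models.vgg16(weights=None)
vgg_trunk = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))  # -> 512-d

@torch.no_grad()
def fused_features(batch):
    r = resnet_trunk(batch).flatten(1)       # (B, 2048)
    v = vgg_trunk(batch).flatten(1)          # (B, 512)
    return torch.cat([r, v], dim=1).numpy()  # multi-network feature fusion

X = fused_features(torch.randn(8, 3, 224, 224))
y = np.array([0, 1, 2, 3, 0, 1, 2, 3])      # normal/benign/in situ/invasive
print(SVC().fit(X, y).predict(X[:2]))
```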

12.
The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracy between each neural network and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
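The frozen-feature-extractor setup the study describes, sketched with torchvision; VGG-16 stands in for VGGNet, and AlexNet would be the closest available analogue of CaffeNet.

```python
import torch.nn as nn
from torchvision import models

# freeze convolutional layers as fixed feature extractors and retrain only
# the fully connected head, as in the study (ImageNet weights in practice)
model = models.vgg16(weights=None)
for p in model.features.parameters():
    p.requires_grad = False             # frozen conv layers

model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 11)
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # only the FC head will be updated
```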

13.
Liu Yufeng, Wang Shiwei, Qu Jingjing, Tang Rui, Wang Chundan, Xiao Fengchun, Pang Peipei, Sun Zhichao, Xu Maosheng, Li Jiaying. BMC Medical Imaging, 2023, 23(1): 1-15
Grading cancer histopathology slides requires pathologists and expert clinicians, and manually inspecting whole-slide images is time consuming; automated classification of histopathological breast cancer subtypes is therefore useful for clinical diagnosis and assessing therapeutic response. Recent deep learning methods for medical image analysis suggest the utility of automated radiologic image classification for relating disease characteristics, diagnosis, and patient stratification. This study develops a hybrid model combining a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM RNN) to classify four benign and four malignant breast cancer subtypes. The proposed CNN-LSTM uses transfer learning from ImageNet to classify and predict the four subtypes of each class. The model was evaluated on the BreakHis dataset, which comprises 2480 benign and 5429 malignant cancer images acquired at magnifications of 40×, 100×, 200×, and 400×, and was compared with existing CNN models used for breast histopathological image classification such as VGG-16, ResNet50, and Inception. All models were built with three optimizers, adaptive moment estimation (Adam), root mean square propagation (RMSProp), and stochastic gradient descent (SGD), over varying numbers of epochs. Adam proved the best optimizer, with maximum accuracy and minimum loss on both the training and validation sets. The proposed hybrid CNN-LSTM model achieved the highest overall accuracy: 99% for binary classification of benign versus malignant cancer and 92.5% for multi-class classification of the benign and malignant subtypes. In conclusion, the proposed transfer learning approach outperformed state-of-the-art machine and deep learning models in classifying benign and malignant cancer subtypes, and the method appears feasible for classifying other cancers and diseases as well.
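A toy loop comparing the three optimizers the study varies (Adam, RMSProp, SGD) in PyTorch; the model and data are placeholders, not the CNN-LSTM itself.

```python
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 8))

x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 8, (16,))

for name, opt_cls in [("Adam", torch.optim.Adam),
                      ("RMSProp", torch.optim.RMSprop),
                      ("SGD", torch.optim.SGD)]:
    model = make_model()
    opt = opt_cls(model.parameters(), lr=1e-3)
    for _ in range(20):                  # a few steps for illustration
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward(); opt.step()
    print(name, float(loss))             # compare final training loss
```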

14.
Although magnetic resonance imaging (MRI) has higher sensitivity for early breast cancer than mammography, its specificity is lower. The purpose of this study was to develop a computer-aided diagnosis (CAD) scheme for distinguishing between benign and malignant breast masses on dynamic contrast material-enhanced MRI (DCE-MRI) using a deep convolutional neural network (DCNN) with Bayesian optimization. Our database consisted of 56 DCE-MRI examinations for 56 patients, each containing five sequential phase images; it included 26 benign and 30 malignant masses. We first selected a baseline DCNN model from well-known DCNN models in terms of classification performance, then determined the optimum architecture by tuning the baseline model's hyperparameters, such as the number of layers, the filter size, and the number of filters, with Bayesian optimization. As input to the proposed DCNN model, rectangular regions of interest enclosing an entire mass were selected from each DCE-MRI image by an experienced radiologist. Three-fold cross-validation was used for training and testing. The classification accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 92.9% (52/56), 93.3% (28/30), 92.3% (24/26), 93.3% (28/30), and 92.3% (24/26), respectively. These results were substantially better than those of a conventional method based on handcrafted features and a classifier. The proposed DCNN model achieved high classification performance and would be useful as a diagnostic aid in the differential diagnosis of masses in breast DCE-MRI images.
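A sketch of hyperparameter search by Bayesian optimization, assuming the third-party scikit-optimize package; the objective is a toy surrogate where the paper would train and validate a DCNN.

```python
from skopt import gp_minimize          # pip install scikit-optimize
from skopt.space import Integer

def objective(params):
    """Stand-in objective: in the paper this would train a DCNN with the
    given hyperparameters and return (1 - validation accuracy)."""
    n_layers, n_filters, filter_size = params
    return abs(n_layers - 4) * 0.05 + abs(n_filters - 32) * 0.001 \
         + abs(filter_size - 3) * 0.02

space = [Integer(2, 8, name="n_layers"),
         Integer(8, 128, name="n_filters"),
         Integer(3, 7, name="filter_size")]

res = gp_minimize(objective, space, n_calls=20, random_state=0)
print(res.x, res.fun)   # best (layers, filters, kernel) found by BO
```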

15.
The purposes of this study are to evaluate the feasibility of MRI protocol determination with a convolutional neural network (CNN) classifier based on short-text classification, and to evaluate agreement between protocols determined by the CNN and those determined by musculoskeletal radiologists. Following institutional review board approval, the hospital information system (HIS) database was queried for lists of MRI examinations, referring department, patient age, and patient gender, which were exported to a local workstation for analysis: 5258 and 1018 consecutive musculoskeletal MRI examinations formed the training and test datasets, respectively. The prediction targets were routine versus tumor protocols, and the inputs were word combinations of referring department, body region, contrast use (or not), gender, and age. A CNN classifier over embedded word vectors was used, initialized with Word2Vec Google News vectors. Each classification model was applied to the test set, outputting a routine or tumor protocol, and the CNN determinations were evaluated with receiver operating characteristic (ROC) curves, using radiologist-confirmed protocols as the reference standard. The optimal cut-off value for distinguishing routine from tumor protocols was 0.5067, with a sensitivity of 92.10%, a specificity of 95.76%, and an area under the curve (AUC) of 0.977; overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correct for pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were thus clinically useful for determining musculoskeletal MRI protocols, and CNN-based text learning could be extended to other radiologic tasks besides image interpretation, improving the work performance of the radiologist.
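A minimal Kim-style text CNN over word embeddings, matching the short-text setup described; the vocabulary size, filter widths, and two-class head are assumptions, and the embedding would be initialized from the Word2Vec Google News vectors in practice.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """CNN over word embeddings for short protocol text (routine vs. tumor)."""
    def __init__(self, vocab=5000, dim=300, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)   # load Word2Vec weights here
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, 64, k) for k in (2, 3, 4)])  # n-gram filters
        self.fc = nn.Linear(64 * 3, num_classes)

    def forward(self, tokens):                 # (B, T) word indices
        x = self.emb(tokens).transpose(1, 2)   # (B, dim, T)
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

# e.g. token ids for "orthopedics knee routine no-contrast female 45"
logits = TextCNN()(torch.randint(0, 5000, (2, 8)))
print(logits.softmax(1))  # P(routine), P(tumor) per exam
```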

16.
Combining imaging features from dynamic contrast-enhanced MRI (DCE-MRI) and diffusion-weighted imaging (DWI), models were built to predict the histological grade and Ki-67 expression of breast cancer. Data from 144 patients with invasive ductal carcinoma who had undergone no surgery or chemotherapy were retrospectively analyzed; all patients underwent preoperative breast MRI on a 3T scanner, yielding DCE-MRI and DWI images, with apparent diffusion coefficient (ADC) maps computed from the DWI. Tumors were segmented on each MR parameter map, and texture, statistical, and morphological features were extracted from the whole tumor region. Unsupervised discriminative feature selection (UDFS) and the Fisher score algorithm were used for feature selection; classification models were applied to the DCE-MRI and DWI data separately, and the resulting classifiers were fused into a multi-classifier model to produce joint multi-parametric predictions. Classification performance was assessed by the area under the ROC curve (AUC) with leave-one-out cross-validation (LOOCV). For the grading task, the second post-contrast DCE-MRI series gave the best AUC of 0.780 (specificity 0.647, sensitivity 0.934); for the Ki-67 task, the DWI series gave the best AUC of 0.756 (specificity 0.806, sensitivity 0.695). After fusion, the grading result improved to an AUC of 0.808 (specificity 0.706, sensitivity 0.895) and the Ki-67 result to an AUC of 0.783 (specificity 0.778, sensitivity 0.722). The results show that combining DCE-MRI and DWI imaging features improves classifier performance over single-parameter MR data.
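The multi-classifier fusion under LOOCV might look like this scikit-learn sketch; logistic regression and probability averaging are stand-ins for the paper's classifiers and fusion rule, and the data are toys.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_dce = rng.normal(size=(40, 12))   # DCE-MRI features (toy)
X_dwi = rng.normal(size=(40, 12))   # DWI/ADC features (toy)
y = rng.integers(0, 2, size=40)

# per-modality classifiers; fuse by averaging predicted probabilities
scores = []
for tr, te in LeaveOneOut().split(X_dce):
    p_dce = LogisticRegression().fit(X_dce[tr], y[tr]).predict_proba(X_dce[te])[:, 1]
    p_dwi = LogisticRegression().fit(X_dwi[tr], y[tr]).predict_proba(X_dwi[te])[:, 1]
    scores.append((p_dce + p_dwi) / 2)          # probability-average fusion
print(roc_auc_score(y, np.concatenate(scores)))
```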

17.
Imaging features from dynamic contrast-enhanced MRI (DCE-MRI), T2-weighted imaging (T2WI), and diffusion-weighted imaging (DWI) were combined to build multi-parametric radiomics models predicting breast cancer molecular subtype, histological grade, and Ki-67 expression. Breast MRI data were collected from 150 patients with invasive ductal carcinoma before surgery and chemotherapy, yielding DCE-MRI, T2WI, and DWI images. Lesion regions were segmented on each parameter map and multi-parametric imaging features were extracted. On the training set, support vector machine recursive feature elimination (SVM-RFE) selected the optimal radiomic feature subset and SVM-based prediction models were built; model performance was then tested on the test set. Probability averaging, probability voting, and probability-model optimization were used to fuse the models built on different parameter maps into joint multi-parametric predictions, with the area under the ROC curve (AUC) assessing classification performance. The best single-parameter AUCs for the Luminal A, Luminal B, HER2, and Basal-like subtypes were 0.6721, 0.6940, 0.6777, and 0.7086, improving to 0.7995, 0.7279, 0.7375, and 0.7925 with the multi-parametric models. The best single-parameter AUC for grading was 0.7533, improving to 0.8017; for Ki-67 expression, 0.6647, improving to 0.7718. The multi-parametric models outperformed the single-parameter models, with statistically significant differences (P<0.05). The results show that combining DCE-MRI, T2WI, and DWI radiomics significantly improves the prediction of breast cancer pathology over single-parameter models, which matters for breast cancer diagnosis and the selection of personalized treatment plans.
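SVM-RFE as the abstract uses it is available directly in scikit-learn; the sketch below selects 20 of 200 toy radiomic features with a linear-SVM ranker (the subset size and elimination step are assumptions).

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 200))     # multi-parametric radiomic features (toy)
y = rng.integers(0, 2, size=150)    # e.g. high vs. low grade (toy labels)

# SVM-RFE: a linear SVM ranks features by |weight|; the least useful are
# pruned iteratively until the target subset size remains
rfe = RFE(SVC(kernel="linear"), n_features_to_select=20, step=10)
rfe.fit(X, y)
X_sel = X[:, rfe.support_]
print(cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean())
```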

18.
To enable early prevention of diabetic peripheral neuropathy (DPN) and assist physicians in early diagnosis and decision-making, a DPN prediction model based on a one-dimensional convolutional neural network is proposed. The raw data were first preprocessed to improve data quality; in addition, because the dataset is high-dimensional, principal component analysis (PCA) was applied for dimensionality reduction to further improve prediction accuracy. The model learns feature information from the data autonomously, mining medically valuable information and patterns to predict DPN. Prediction models were built with a support vector machine, a BP neural network, and the 1D convolutional neural network. Experiments show that the 1D CNN outperformed the other two models, reaching an accuracy of 0.983, recall of 0.916, F1-score of 0.923, and AUC of 0.98.
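A compact sketch of the PCA-then-1D-CNN pipeline with scikit-learn and PyTorch; the component count, network widths, and toy data are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60)).astype(np.float32)   # tabular features (toy)
y = torch.from_numpy(rng.integers(0, 2, size=200))

X_pca = PCA(n_components=16).fit_transform(X).astype(np.float32)
x = torch.from_numpy(X_pca).unsqueeze(1)            # (N, 1, 16) for Conv1d

model = nn.Sequential(                              # small 1D CNN classifier
    nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward(); opt.step()
print(float(loss))
```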

19.
Neoadjuvant chemotherapy (NAC) improves the cure rate of breast cancer but is not effective for every patient, so accurate response prediction can inform the design of each patient's treatment plan. This study uses deep learning to fuse longitudinal dynamic contrast-enhanced MRI (DCE-MRI) features to predict NAC response. DCE-MRI scans from 164 breast cancer patients undergoing NAC were analyzed; from each patient's imaging, the slice with the largest tumor diameter and the slices immediately above and below it were selected, expanding the data to 442 cases, randomly split into 312 for training and 130 for testing. Each DCE-MRI study contains six series; the breast region in each series was segmented, removing skin and thorax, and deep learning models predicted NAC response from pre-treatment images, from images after two cycles of chemotherapy, and from the fusion of both, with ROC curves plotted and the area under the curve (AUC) computed to assess classification. The best AUCs from pre-treatment images and post-two-cycle images were 0.775 and 0.808, respectively; fusing the two timepoints raised the best AUC to 0.863, outperforming prediction from pre-treatment images alone. The results indicate that, compared with pre-treatment imaging alone, fusing longitudinal images improves the prediction of NAC response.
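A minimal two-branch PyTorch sketch of the longitudinal-fusion idea: separate encoders for pre-treatment and post-two-cycle images, concatenated for classification. All sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def branch():
    """Small CNN encoder for one timepoint's image (sizes illustrative)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class LongitudinalFusionNet(nn.Module):
    """Encode pre-treatment and post-2-cycle images separately,
    concatenate the features, and classify response vs. non-response."""
    def __init__(self):
        super().__init__()
        self.pre, self.post = branch(), branch()
        self.fc = nn.Linear(64, 2)          # 32-d per branch, concatenated

    def forward(self, x_pre, x_post):
        z = torch.cat([self.pre(x_pre), self.post(x_post)], dim=1)
        return self.fc(z)

net = LongitudinalFusionNet()
out = net(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(out.shape)  # torch.Size([2, 2])
```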
