Similar Articles
20 similar articles found (search time: 359 ms)
1.
Objective To automatically identify the open or closed state of the anterior chamber angle based on deep learning (DL) and ultrasound biomicroscopy (UBM) images, providing auxiliary analysis for the clinical automated diagnosis of primary angle-closure glaucoma. Methods The dataset consisted of anterior chamber angle UBM images of ophthalmic patients collected at the Eye Hospital of Tianjin Medical University. Ophthalmic experts classified the UBM images into open-angle and closed-angle categories, and the data were randomly split into training, validation, and test sets at a 6:2:2 ratio. To improve the robustness and recognition accuracy of the deep learning model, the training images were randomly augmented with rotations, translations, flips, and other operations that do not alter angle morphology. Transfer-learning results of the VGG16, VGG19, DenseNet121, Xception, and InceptionV3 network models were compared on this dataset; based on these results, the convolutional and fully connected layers of VGG16 were fine-tuned, and the fine-tuned VGG16 model was used to automatically identify the open/closed state of the anterior chamber angle. Recognition results were quantitatively evaluated with metrics including accuracy and the area under the receiver operating characteristic curve (AUC), and class activation heatmaps were used to visualize the regions the model attended to when classifying the angle state. Results The class activation heatmaps showed that the fine-tuned VGG16 model mainly attended to the central region of the chamber angle, consistent with the criteria used by ophthalmic experts. The model achieved an accuracy of 96.19% and an AUC of 0.9973. Conclusion Deep learning applied to anterior chamber angle UBM images can automatically identify the open/closed state of the anterior chamber angle with high accuracy, which is conducive to the development of automated diagnosis of primary angle-closure glaucoma.

2.
Pneumonia is a disease that seriously endangers health and is usually examined with chest X-ray films. Diagnosis is a critical step before treatment, but accurate diagnosis is difficult owing to interference from other lung diseases, the explosive growth of medical data, and the shortage of specialist pathologists. Deep learning can mimic mechanisms of the human brain to interpret medical image data accurately and efficiently, and has been widely applied to pneumonia image detection. Three deep-learning object-detection models were built — the Single Shot MultiBox Detector (SSD), Faster R-CNN, and an optimized Faster R-CNN — and applied to 26,684 labeled chest X-ray images from a Kaggle dataset. After preprocessing, the original X-ray images were fed into the three models to detect single and double lesion regions. Five hundred test images were randomly selected, and model performance was evaluated with metrics including the loss function, classification accuracy, regression precision, and the number of falsely detected lesions. The results show that Faster R-CNN outperformed SSD, and the optimized Faster R-CNN outperformed both other models on all metrics: its loss was small and converged quickly, its mean classification accuracy was 93.7%, its mean regression precision was 79.8%, and it produced no falsely detected lesions. The method can aid the accurate identification and diagnosis of pneumonia.

3.
Objective: To meet the practical needs of clinical COVID-19 detection, a novel COVID-19 CT recognition technique based on a lightweight artificial neural network is proposed. Methods: First, all currently public COVID-19 CT image datasets were collected and, after image brightness normalization and dataset cleaning, used as training data, improving the generalization ability of deep learning through a large sample. Second, the lightweight GhostNet was adopted to reduce network parameters so that the deep-learning model can run on a medical-grade computer, improving the efficiency of COVID-19 CT diagnosis. Third, lung-region segmentation images were added to the network input to further improve diagnostic accuracy. Finally, a weighted cross-entropy loss function was proposed to reduce the missed-diagnosis rate. Results: Tested on the dataset constructed in this study, the proposed method achieved a precision of 83%, recall of 96%, accuracy of 90%, and F1 score of 88%, with a runtime of 236 ms on a medical-grade computer. Conclusion: The proposed method is more efficient and accurate than the compared algorithms and can well meet the needs of COVID-19 diagnosis.
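The weighted cross-entropy idea above — penalizing missed positives more heavily to cut the missed-diagnosis rate — is commonly expressed through per-class loss weights. A minimal PyTorch sketch; the 1:3 weighting is an illustrative assumption, not the paper's reported value.

```python
import torch
import torch.nn as nn

# Up-weight the positive (COVID-19) class so false negatives cost more.
# The 1.0 : 3.0 ratio is illustrative; the paper's weights are not given here.
class_weights = torch.tensor([1.0, 3.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.tensor([[2.0, 0.5],
                       [0.2, 1.5]])     # batch of 2, classes {normal, COVID-19}
labels = torch.tensor([0, 1])
loss = criterion(logits, labels)        # scalar, averaged with per-class weights
```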

4.
Existing arrhythmia classification methods usually rely on manually selected electrocardiogram (ECG) features; this selection is subjective, and the feature extraction is complex, so classification accuracy is easily affected. To address these problems, this paper proposes a new automatic arrhythmia classification method based on discriminative deep belief networks (DDBNs). The constructed generative restricted Boltzmann machine (GRBM) automatically extracts morphological features from heartbeat signals, and a discriminative restricted Boltzmann machine (DRBM), capable of both feature learning and classification, then classifies arrhythmias from the extracted morphological features and RR-interval features. To further improve classification performance, the DDBNs are converted into a deep neural network (DNN) that performs supervised classification through a softmax regression layer, and the network is fine-tuned via backpropagation. Finally, experiments on the MIT-BIH Arrhythmia Database (MIT-BIH AR) show an overall classification accuracy of 99.84% ± 0.04% when the training and test sets come from the same source, and 99.31% ± 0.23% when they come from different sources and the training set is expanded with a small amount of data via active learning (AL). The results demonstrate the method's effectiveness for automatic ECG feature extraction and arrhythmia classification, providing a new solution for deep-learning-based automatic extraction and classification of ECG signal features.

5.
Objective: To automatically delineate organs in radiotherapy planning images with a deep-learning method combining fully convolutional networks (FCN) and atrous convolution (AC). Methods: CT images of 122 thoracic patients with normal organ contours already delineated by radiation oncologists were selected: 71 image sets (8,532 axial slices) as the training set, 31 sets (5,559 slices) as the validation set, and 20 sets (3,589 slices) as the test set. Five public FCN models were selected, and three improved deep convolutional networks, termed dilated fully convolutional networks (D-FCN), were formed by combining FCN with AC. The eight networks were fine-tuned on the training images and validated during training on the validation images to obtain each network's best segmentation model; the best models were then tested on the test images, and segmentation performance was evaluated by the Dice similarity coefficient between automatic and physician delineations. Results: After full training, all networks showed good automatic segmentation ability; the improved D-FCN 4s model achieved the best results in testing, with a global Dice of 94.38% and Dice values of 96.49%, 96.75%, 86.27%, 61.51%, and 65.63% for the left lung, right lung, pericardium, trachea, and esophagus, respectively. Conclusion: The proposed improved fully convolutional network D-FCN can effectively improve the automatic segmentation accuracy of thoracic radiotherapy planning images and can segment multiple targets simultaneously.
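The Dice similarity coefficient used above to score each model compares an automatic mask against the physician's mask: Dice = 2|A∩B| / (|A| + |B|). A minimal NumPy sketch with toy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A|+|B|) for binary masks; defined as 1.0 if both are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 4x4 masks: 3 foreground pixels each, overlapping in 2.
auto = np.zeros((4, 4)); auto[0, :3] = 1
manual = np.zeros((4, 4)); manual[0, 1:4] = 1
score = dice_coefficient(auto, manual)   # 2*2 / (3+3) ≈ 0.667
```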

6.
Objective To propose an automatic classification method for ophthalmic optical coherence tomography (OCT) images based on a convolutional neural network (CNN), enabling automatic classification of retinal OCT images and easing the problems of manual diagnosis, which depends on clinicians' experience and is time-consuming and laborious. Methods Two sample datasets were built from the public 2014_BOE_Srinivasan dataset. Dataset one contained images that were only preprocessed and cropped; dataset two augmented the images remaining after the test set was removed by introducing random translation and horizontal flipping during cropping, and was split into training and validation sets. A CNN-based retinal OCT classification network was built and trained on each dataset to obtain classification models. Finally, the models were evaluated on an independent test set, and confusion matrices were produced to examine classification of the three image categories. Results Computed from the confusion matrix, the model trained on the augmented images achieved an accuracy of 93.43%, sensitivity of 91.38%, and specificity of 95.88%. Conclusion The proposed CNN-based method can classify retinal OCT images into three categories: age-related macular degeneration, diabetic macular edema, and normal. Moreover, data augmentation helps improve the classifier's performance.
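The accuracy, sensitivity, and specificity reported above are all derived from the confusion matrix; for a three-class problem they are usually computed per class in one-vs-rest fashion and averaged. A small NumPy sketch with an illustrative 3-class matrix (toy counts, not the paper's):

```python
import numpy as np

# Rows = true class, columns = predicted class (AMD, DME, normal) — toy counts.
cm = np.array([[45, 3, 2],
               [4, 46, 0],
               [1, 2, 47]])

accuracy = np.trace(cm) / cm.sum()

# One-vs-rest sensitivity (recall) and specificity per class, macro-averaged.
tp = np.diag(cm)
fn = cm.sum(axis=1) - tp
fp = cm.sum(axis=0) - tp
tn = cm.sum() - tp - fn - fp
sensitivity = (tp / (tp + fn)).mean()
specificity = (tn / (tn + fp)).mean()
```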

7.
Objective To explore deep-learning-assisted automatic diagnosis of colorectal histopathology slides. Methods Archived pathology slides of 92 confirmed hyperplastic polyps, 61 tubular adenomas, 135 adenocarcinomas, and 100 neuroendocrine tumors were collected from the Department of Pathology, Beijing Chaoyang Hospital, Capital Medical University, and 1,790 digital pathology images were acquired with a digital microscope: 1,530 images served as the training set and 260 as the test set. A deep neural network was trained on the training set as an automatic auxiliary diagnosis model and evaluated on the test set. Results The overall accuracy on the colorectal pathology image test set was ≥91%, and the sensitivity for malignant tumors reached 96.7%. Conclusion Deep-learning-based automatic auxiliary diagnosis of colorectal histopathology slides is of great significance: it can improve diagnostic efficiency and reduce the risk of missed diagnoses.

8.
Objective: To automatically grade gliomas using a 3D deep residual network and multimodal MRI. Methods: A 3D deep residual convolutional network model was trained and tested with multimodal MRI data of 293 high-grade gliomas (HGG) and 76 low-grade gliomas (LGG) from the public BraTS2020 dataset. The multimodal MRI images were preprocessed by 3D cropping, resampling, and normalization, and randomly split into training (64%), validation (16%), and test (20%) samples; the preprocessed images and grading labels were fed into the network for training, validation, and testing. Grading results were evaluated with accuracy (ACC) and the area under the receiver operating characteristic (ROC) curve (AUC). Results: On the 59-case validation set (48 HGG, 11 LGG), ACC and AUC were 0.93 and 0.97, respectively; on the 75-case test set (62 HGG, 13 LGG), ACC and AUC were 0.89 and 0.93. Conclusion: The 3D deep residual network achieved good automatic glioma grading on the multimodal MRI dataset and can provide an important reference for treatment planning and prognosis prediction.

9.
Melanocytic lesions occur in the superficial skin; the malignant form, melanoma, has an extremely high fatality rate and seriously endangers human health, and histopathological analysis is the gold standard for its diagnosis. This paper studies the classification of melanocytic-lesion pathology whole-slide images (WSIs) and proposes a deep-learning-based, end-to-end intelligent diagnosis method. First, multicenter pathology WSIs are color-corrected with a CycleGAN network. Second, a deep convolutional prediction module built on the ResNet-152 architecture is trained with 745 WSIs. A decision-fusion module centered on averaging prediction probabilities is then cascaded onto it. Finally, internal and external test sets containing 182 and 54 WSIs, respectively, are used to validate diagnostic performance. The experiments show an overall accuracy of 94.12% on the internal test set and over 90% on the external test set; the adopted color-correction method outperforms traditional approaches based on color statistics or stain separation in preserving tissue structure and suppressing artifacts. The study confirms that the proposed method achieves highly accurate and robust classification of melanocytic-lesion pathology WSIs and provides important guidance for advancing AI-assisted diagnosis in clinical pathology.
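The decision-fusion step above — averaging patch-level prediction probabilities into one slide-level call — can be sketched in a few lines of NumPy. The patch probabilities below are illustrative values, not data from the study.

```python
import numpy as np

# Softmax probabilities for patches cropped from one WSI
# (rows = patches, columns = lesion classes); values are illustrative.
patch_probs = np.array([[0.90, 0.10],
                        [0.70, 0.30],
                        [0.80, 0.20],
                        [0.40, 0.60]])

# Slide-level decision: mean probability over patches, then argmax.
slide_probs = patch_probs.mean(axis=0)   # [0.70, 0.30]
slide_label = int(slide_probs.argmax())  # class 0
```

Averaging smooths out individual misclassified patches, which is why probability-mean fusion is a common choice for WSI-level decisions.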

10.
Ulcerative lesions seen under small-bowel endoscopy have complex morphology and are difficult to differentiate. To enable AI-assisted recognition of small-bowel ulcer lesions and improve diagnostic efficiency and accuracy, a recognition algorithm based on MobileNetV2 was constructed. With MobileNetV2 as the backbone feature-extraction network, the output feature maps undergo multi-scale spatial extraction and are fed into a channel-attention module for feature recalibration; features from multiple scales are then fused for the final classification. An improved loss function is proposed to mitigate the effect of dataset imbalance. The dataset comprises 2,124 clinical small-bowel endoscopy images from 282 patients at Shanghai Changhai Hospital. The proposed method achieved a recognition accuracy of 87.86% on this test set and a mean 5-fold cross-validation accuracy of 87.27%. Gradient-weighted class activation maps were used for visual verification, and applying the proposed module to different backbone architectures showed good generality. The results indicate that the network model attends more to lesion information, strengthens the discrimination of lesion features, achieves high recognition accuracy on small-bowel ulcer images, and can preliminarily realize automatic recognition of small-bowel ulcer lesions.
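The channel-attention recalibration described above follows the common squeeze-and-excitation pattern: global-pool each channel, pass the channel vector through a bottleneck MLP, and rescale the feature map. A minimal PyTorch sketch; the reduction ratio and exact wiring are assumptions, since the paper's module details are not given here.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style recalibration: global-average-pool each
    channel, learn per-channel weights with a bottleneck MLP, then rescale."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))   # squeeze: (b, c)
        return x * weights.view(b, c, 1, 1)     # excite: rescale channels

feats = torch.randn(2, 64, 7, 7)                # e.g. a MobileNetV2 feature map
out = ChannelAttention(64)(feats)               # same shape, recalibrated
```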

11.
The study aimed to determine whether computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs from 909 patients, obtained between January 2013 and July 2015 at our institution, were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73–100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
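The Youden index used above selects the binary cutoff that maximizes sensitivity + specificity − 1, i.e. TPR − FPR over candidate thresholds. A sketch with scikit-learn; the scores and labels are toy values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy classifier scores: higher means "frontal" (label 1).
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.4, 0.6, 0.8, 0.9, 0.7, 0.2])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR;
# the operating cutoff is the threshold that maximizes J.
j = tpr - fpr
best_cutoff = thresholds[np.argmax(j)]
```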

12.
To improve automatic detection of foreign objects in chest radiographs, a deep-learning network is used to efficiently extract image features of foreign objects of various scales and shapes, achieving stable automatic detection of multiple foreign-object types. In building the network, the YOLO v4 object-detection network was improved according to the imaging characteristics of foreign objects: SE (Squeeze-and-Excitation) blocks were added to the CSPDarkNet53 feature-extraction network so that the model can exploit the information of each channel differentially. Experimental results show that the improved deep-learning network achieves 92% precision and 83% recall in foreign-object detection. The new deep-learning method can therefore be used in applications such as chest-radiograph foreign-object detection to objectively assess radiographic quality, laying a foundation for quality control of radiological imaging.

13.
Kasai S, Li F, Shiraishi J, Li Q, Doi K. Medical Physics. 2006;33(12):4664-4674.
Vertebral fracture (or vertebral deformity) is a very common outcome of osteoporosis, which is one of the major public health concerns in the world. Early detection of vertebral fractures is important because timely pharmacologic intervention can reduce the risk of subsequent additional fractures. Chest radiographs are used routinely for detection of lung and heart diseases, and vertebral fractures can be visible on lateral chest radiographs. However, investigators noted that about 50% of vertebral fractures visible on lateral chest radiographs were underdiagnosed or under-reported, even when the fractures were severe. Therefore, our goal was to develop a computerized method for detection of vertebral fractures on lateral chest radiographs in order to assist radiologists' image interpretation and thus allow the early diagnosis of osteoporosis. The cases used in this study were 20 patients with severe vertebral fractures and 118 patients without fractures, as confirmed by the consensus of two radiologists. Radiologists identified the locations of fractured vertebrae, and they provided morphometric data on the vertebral shape for evaluation of the accuracy of detecting vertebral end plates by computer. In our computerized method, a curved search area, which included a number of vertebral end plates, was first extracted automatically, and was straightened so that vertebral end plates became oriented horizontally. Edge candidates were enhanced by use of a horizontal line-enhancement filter in the straightened image, and a multiple thresholding technique, followed by feature analysis, was used for identification of the vertebral end plates. The height of each vertebra was determined from locations of identified vertebral end plates, and fractured vertebrae were detected by comparison of the measured vertebral height with the expected height. 
The sensitivity of our computerized method for detection of fracture cases was 95% (19/20), with 1.03 (139/135) false-positive fractures per image. The accuracy of identifying vertebral end plates, marked by radiologists in a morphometric study, was 76.6% (400/522) for the training cases and 70.9% (420/592) for the test cases. We prepared 32 additional fracture cases for a validation test and examined the detection accuracy of our computerized method; the sensitivity for these cases was 75% (24/32) at 1.03 (33/32) false-positive fractures per image. Our preliminary results show that the automated computerized scheme for detecting vertebral fractures on lateral chest radiographs has the potential to assist radiologists in detecting vertebral fractures.

14.
Purpose: This study aimed to propose an effective end-to-end process in medical imaging using an independent task learning (ITL) algorithm and to evaluate its performance in maxillary sinusitis applications. Materials and Methods: For the internal dataset, 2122 Waters’ view X-ray images, which included 1376 normal and 746 sinusitis images, were divided into training (n=1824) and test (n=298) datasets. For external validation, 700 images, including 379 normal and 321 sinusitis images, from three different institutions were evaluated. To develop the automatic diagnosis system algorithm, four processing steps were performed: 1) preprocessing for ITL, 2) facial patch detection, 3) maxillary sinusitis detection, and 4) a localization report with the sinusitis detector. Results: The accuracy of facial patch detection, the first step in the end-to-end algorithm, was 100%, 100%, 99.5%, and 97.5% for the internal set and external validation sets #1, #2, and #3, respectively. The accuracy and area under the receiver operating characteristic curve (AUC) of maxillary sinusitis detection were 88.93% (0.89), 91.67% (0.90), 90.45% (0.86), and 85.13% (0.85) for the internal set and external validation sets #1, #2, and #3, respectively. The accuracy and AUC of the fully automatic sinusitis diagnosis system, including site localization, were 79.87% (0.80), 84.67% (0.82), 83.92% (0.82), and 73.85% (0.74) for the internal set and external validation sets #1, #2, and #3, respectively. Conclusion: ITL application for maxillary sinusitis showed reasonable performance in internal and external validation tests, compared with applications used in previous studies.

15.
To assist physicians in identifying COVID-19 and its manifestations through automatic COVID-19 recognition and classification in chest CT images with deep transfer learning. In this retrospective study, the chest CT image dataset covered 422 subjects, including 72 confirmed COVID-19 subjects (260 studies, 30,171 images), 252 other pneumonia subjects (252 studies, 26,534 images) comprising 158 viral pneumonia subjects and 94 pulmonary tuberculosis subjects, and 98 normal subjects (98 studies, 29,838 images). In the experiment, subjects were split into training (70%), validation (15%), and testing (15%) sets. We utilized the convolutional blocks of ResNets pretrained on public social image collections and modified the top fully connected layer to suit our task (COVID-19 recognition). In addition, we tested the proposed method on a fine-grained classification task: the COVID-19 images were further split into three main manifestations (ground-glass opacity, 12,924 images; consolidation, 7418 images; and fibrotic streaks, 7338 images), with the same 70%-15%-15% data partitioning. The best performance, obtained by the pretrained ResNet50 model, was 94.87% sensitivity, 88.46% specificity, and 91.21% accuracy for COVID-19 versus all other groups, with an overall accuracy of 89.01% for the three-category classification in the testing set. Consistent performance was observed in the image-level COVID-19 manifestation classification task, where the best overall accuracy of 94.08% and AUC of 0.993 were obtained by the pretrained ResNet18 (P < 0.05). All the proposed models achieved satisfying performance and are thus promising in both practical application and statistical terms. Transfer learning is worth exploring for the recognition and classification of COVID-19 on CT images with limited training data: it not only achieved higher sensitivity (COVID-19 vs. the rest) but also took far less time than radiologists, and is expected to provide auxiliary diagnosis and reduce radiologists' workload.

16.
Objective: To diagnose COVID-19-associated pneumonia based on CT density-distribution features and machine learning. Methods: Forty-two patients with COVID-19 confirmed by fluorescence reverse-transcription polymerase chain reaction (COVID-19 group) and 43 patients with community-acquired pneumonia (control group) were retrospectively collected. A total of 211 chest CT image sets were obtained and split by stratified sampling at a 6:4 ratio into a training set (126) and a validation set (85). The pneumonia module of a CAD software package was used to obtain the percentage of whole-lung volume occupied by pneumonia in different density intervals (P/L%). After dimensionality reduction of the density-distribution features, support vector machine (SVM) models were built, and the diagnostic performance of SVM models with four kernel functions was evaluated. Results: There were no statistically significant differences between the two groups in age, sex, or the proportion with pleural effusion (P>0.05). Dimensionality reduction yielded 32 features. Among the four kernel-function SVM models built on these 32 features, the polynomial SVM performed best on the validation set, with an area under the receiver operating characteristic (ROC) curve of 0.897 (95% CI 0.828-0.966), P<0.001; its accuracy was 0.906 (95% CI 0.823-0.959), sensitivity 0.906, and specificity 0.906. Conclusion: Diagnosis of COVID-19-associated pneumonia based on density-distribution features and machine learning is highly effective and can help rapidly screen COVID-19 patients.
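Comparing SVM kernel functions as described above is straightforward in scikit-learn. This sketch uses synthetic 32-dimensional features and a toy labeling rule in place of the study's density-distribution features; only the kernel-comparison pattern is the point.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 32 reduced density-distribution features.
X = rng.normal(size=(126, 32))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy, roughly separable rule

# Fit one SVM per kernel; the study found the polynomial kernel performed best.
models = {k: SVC(kernel=k).fit(X, y)
          for k in ("linear", "poly", "rbf", "sigmoid")}
train_acc = {k: m.score(X, y) for k, m in models.items()}
```

In a real comparison each model would be scored on the held-out validation set (accuracy, sensitivity, specificity, AUC), not on its own training data.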

17.
Objective: To evaluate the diagnostic value of a deep neural network (DNN) model based on CT density-distribution (CDD) features for COVID-19. Methods: Forty-two COVID-19 cases and 43 non-COVID-19 pneumonia cases were collected. All 211 chest CT image sets from these patients were divided into a training set (n=128) and a validation set (n=83). With reference to the structured CT report for COVID-19-associated pneumonia published by the Radiological Society of North America, a DNN model based on CT imaging features (DNN-CTIF) was constructed, and a DNN-CDD model was built from the pneumonia CDD on chest CT images. The two models were evaluated by ROC curve analysis and decision curve analysis. Results: The AUC of the DNN-CTIF model was 0.927 on the training set and 0.829 on the validation set; the AUC of the DNN-CDD model was 0.965 and 0.929, respectively. On the validation set, the AUC of the DNN-CDD model was higher than that of the DNN-CTIF model (P=0.047). Decision curve analysis showed that within the probability-threshold range of 0.04-1.00, the DNN-CDD model yielded a higher net benefit for patients than the DNN-CTIF model. Conclusion: Both the DNN-CTIF and DNN-CDD models have good diagnostic performance for COVID-19, with the DNN-CDD model outperforming the DNN-CTIF model.

18.
Objective: Automatic segmentation of lung fields in chest X-ray images is a key step in screening and diagnosing related diseases. To meet the requirements of computer-aided diagnosis systems, a U-Net based on atrous spatial pyramid pooling is proposed for automatic lung-field segmentation in chest X-ray images. Methods: Spatial pyramid pooling with atrous convolution is introduced between the encoder and the decoder to enlarge the receptive field, while image context information is captured at multiple scales to segment the lung fields from chest radiographs; the Montgomery and Shenzhen datasets were used for validation. Segmentation performance of the proposed network was evaluated with metrics commonly used in medical image segmentation: accuracy, Dice similarity coefficient, and intersection over union. Results: Validation accuracy was 98.29%, the Dice similarity coefficient was 96.61%, and the intersection over union was 93.47%. Conclusion: The proposed U-Net with atrous spatial pyramid pooling learns more edge-segmentation features than other methods and achieves better segmentation results.
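The atrous spatial pyramid pooling inserted between encoder and decoder samples context at several dilation rates in parallel, enlarging the receptive field without losing resolution. A minimal PyTorch sketch; the dilation rates and channel counts are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation capture multi-scale
    context; their outputs are concatenated and fused by a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Bottleneck feature map of a U-Net encoder, e.g. 512 channels at reduced resolution.
bottleneck = torch.randn(1, 512, 16, 16)
context = ASPP(512, 256)(bottleneck)   # spatial size is preserved
```

Because padding equals the dilation rate for a 3x3 kernel, every branch preserves the spatial size, so the fused output can be fed straight into the decoder.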

19.
Medical image registration is of great significance for clinical diagnosis and treatment. Compared with traditional registration methods, current deep-learning-based methods improve both registration accuracy and speed. To apply deep-learning image registration to chest radiographs and their subtraction analysis, this study first preprocessed the original radiographs with a deep-learning mask, then used the masked images as input and the ResUNet network as the registration framework to register the chest radiographs, and finally evaluated the registration results. The results show that the model trained with the deep-learning mask combined with deep-learning image registration achieves good registration accuracy on chest radiographs. This mask-based deep-learning registration model can be well applied to subtraction analysis of chest radiographs.

20.
Cho Kyungjin, Kim Ki Duk, Nam Yujin, Jeong Jiheon, Kim Jeeyoung, Choi Changyong, Lee Soyoung, Lee Jun Soo, Woo Seoyeon, Hong Gil-Sun, Seo Joon Beom, Kim Namkug. Journal of Digital Imaging. 2023;36(3):902-910.

Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray model pre-trained via self-supervised contrastive learning (CheSS) was proposed to learn various representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained on a 4.8-M CXR dataset using self-supervised contrastive learning, and its validation on various downstream tasks, including classification of 6 disease classes in an internal dataset, disease classification in CheXpert, bone suppression, and nodule generation. Compared to a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase using only 1% of the data in a stress-test manner. On bone suppression with perceptual loss, compared to an ImageNet pretrained model, we improved peak signal-to-noise ratio from 34.99 to 37.77, structural similarity index measure from 0.976 to 0.977, and root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed the decent transferability of CheSS weights, which can help researchers overcome data imbalance, data shortage, and inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.

