Similar Documents
A total of 20 similar documents were found.
1.
Objective: To investigate the feasibility of using deep neural networks to convert between conventional magnetic resonance (MR) image contrasts under fully unsupervised conditions. Methods: A perceptual loss was introduced into the cycle-consistent generative adversarial network (CycleGAN), so that the network learns image structure through the adversarial loss while combining the cycle-consistency loss and the perceptual loss to generate high-quality MR images. The generated images were quantitatively compared with those produced by the original CycleGAN model and by a supervised CycleGAN model (S_CycleGAN). Results: The images generated by the network with perceptual loss scored higher on all quantitative metrics than those generated by CycleGAN; the generated T1-weighted images (T1WI) also scored higher than the T1WI generated by S_CycleGAN, while the generated T2-weighted images (T2WI) scored similarly to the T2WI generated by S_CycleGAN. Conclusion: Introducing a perceptual loss into CycleGAN enables high-quality MR image synthesis under fully unsupervised conditions and thus high-quality conversion between conventional MR image contrasts.
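A minimal sketch of the kind of perceptual-loss term the abstract adds to CycleGAN's cycle-consistency objective; the VGG16 feature cut-off and the loss weights are assumptions, not the paper's settings.

```python
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L1 distance between fixed VGG16 feature maps of two images."""
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, fake, real):
        # Single-channel MR slices are repeated to 3 channels for VGG.
        f, r = fake.repeat(1, 3, 1, 1), real.repeat(1, 3, 1, 1)
        return nn.functional.l1_loss(self.features(f), self.features(r))

def cycle_objective(real_a, rec_a, perceptual, lam_cyc=10.0, lam_perc=1.0):
    # Cycle-consistency (L1) plus the perceptual term; the weights are assumptions.
    return (lam_cyc * nn.functional.l1_loss(rec_a, real_a)
            + lam_perc * perceptual(rec_a, real_a))
```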

2.
Malignant melanoma is one of the most common and most lethal skin cancers. Clinically, dermoscopy is the routine tool for early melanoma diagnosis, but manual inspection is laborious, time-consuming, and highly dependent on the dermatologist's experience, so algorithms that automatically recognize melanoma in dermoscopic images are of great importance. A new framework for automatic dermoscopic image assessment is proposed that uses deep learning to produce more discriminative features from limited training data. Specifically, a 152-layer residual network (Res-152) pretrained on a large-scale natural image dataset is used to extract deep convolutional features from skin lesion images; mean pooling turns these features into a feature vector, and a support vector machine (SVM) then classifies the extracted melanoma features. Evaluated on the public ISBI 2016 skin lesion challenge dataset with 248 melanoma and 1,031 non-melanoma images, the method achieves an accuracy of 86.28% and an AUC of 84.18%. Models of different depths are also compared to examine how network depth affects classification. Compared with existing work based on traditional hand-crafted features (such as a bag-of-words model over densely sampled SIFT descriptors), or with methods that classify features extracted only from the fully connected layer of a deep network, the new method yields more discriminative feature representations and, with limited training data, better handles the large intra-class variation of melanoma and the small inter-class difference between melanoma and non-melanoma lesions.
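A minimal sketch of the described pipeline — pretrained ResNet-152 deep features (torchvision's ResNet already ends in global average pooling), followed by an SVM; the weight name, input size, and SVC kernel are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet152
from sklearn.svm import SVC

# Drop the final FC layer; the pooled 2048-D vector is kept as the deep feature.
backbone = resnet152(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):           # batch: (N, 3, 224, 224) preprocessed images
    return backbone(batch).numpy()     # (N, 2048) pooled deep features

# train_x / train_y would come from the ISBI 2016 dermoscopy images:
# clf = SVC(kernel="rbf").fit(extract_features(train_x), train_y)
```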

3.
Medical image fusion integrates useful information into a single image, increasing the information content of one image. When fusing multimodal medical images, the key research questions are how to transform the images effectively, how to extract the features unique to each image, and which fusion rules to apply. With the rapid development of deep learning in recent years, it has been widely applied to medical imaging, replacing some manual steps of traditional methods and showing distinct advantages in image representation, feature extraction, and the choice of fusion rules. This paper reviews progress in deep learning-based medical image fusion: it introduces the frameworks commonly used for fusion, namely convolutional neural networks, convolutional sparse representation, deep autoencoders, and deep belief networks; it analyzes and summarizes deep learning methods applied at different steps of the fusion pipeline; and it concludes by discussing the shortcomings of current deep learning-based fusion methods and future research directions.

4.
Objective: To meet the practical needs of clinical COVID-19 detection, a new lightweight artificial neural network technique for recognizing COVID-19 in CT images is proposed. Methods: First, all currently public COVID-19 CT image datasets were collected and, after image brightness normalization and dataset cleaning, used as training data, improving the generalization of deep learning through a large sample. Second, the lightweight GhostNet was adopted to reduce the number of network parameters so that the deep learning model can run on a clinical computer, improving the efficiency of COVID-19 CT diagnosis. Third, a lung-region segmentation image was added to the network input to further improve diagnostic accuracy. Finally, a weighted cross-entropy loss function was proposed to reduce the missed-diagnosis rate. Results: On the dataset constructed in this study, the proposed method achieved a precision of 83%, a recall of 96%, an accuracy of 90%, and an F1 score of 88%, with a runtime of 236 ms on a clinical computer. Conclusion: The proposed method outperforms the compared algorithms in both efficiency and accuracy and is well suited to the needs of COVID-19 diagnosis.
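A minimal sketch of the weighted cross-entropy idea used to reduce missed diagnoses: a heavier penalty on misclassifying the positive class. The class weights below are assumptions.

```python
import torch
import torch.nn as nn

# Index 0 = non-COVID, index 1 = COVID; the larger weight on class 1 raises the
# cost of labelling a positive CT as negative, lowering the miss rate.
class_weights = torch.tensor([1.0, 3.0])      # assumed weights
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 2)                    # e.g. GhostNet outputs for 8 CT scans
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)
```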

5.
An unsupervised deep learning network is proposed for generating synthetic CT (sCT) images from head cone-beam CT (CBCT) images, and it is compared with the cycle-consistent generative adversarial network (CycleGAN) and the contrastive unpaired translation (CUT) network. Planning CT (pCT) and CBCT data from 56 brain tumor patients were collected (49 for training, 7 for testing), and sCT images were generated from CBCT using CycleGAN, CUT, and the proposed dense contrastive unpaired translation (DenseCUT) network. DenseCUT introduces two innovations: it combines the CUT network with dense blocks, and it adds structural similarity to the loss function. Compared with pCT-CBCT, pCT-sCT (DenseCUT) reduced the mean absolute error of HU values from 34.38 HU to 17.75 HU, raised the peak signal-to-noise ratio from 26.19 dB to 29.83 dB, and raised the structural similarity from 0.78 to 0.87. The method generates high-quality sCT images from CBCT without altering anatomical structures while reducing image artifacts, making it possible to use CBCT for dose calculation and adaptive radiotherapy planning.
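A minimal sketch of a dense block of the kind DenseCUT combines with the CUT generator; the growth rate, layer count, and normalization choice are assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.InstanceNorm2d(growth),
                nn.ReLU(inplace=True)))
            ch += growth                     # each layer sees all previous feature maps

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# DenseBlock(64)(torch.randn(1, 64, 128, 128)).shape -> (1, 192, 128, 128)
```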

6.
This experimental study of deep learning-based super-resolution reconstruction of medical magnetic resonance (MR) images constructs a large, high-quality MR super-resolution dataset covering four anatomical regions: head, knee, breast, and head-and-neck. After data quality screening and with different low-resolution image generation schemes, MR image datasets at three scales were produced from the original high-resolution images by downsampling at ×2, ×3, and ×4, and the relative difficulty of super-resolution for the different regions is analyzed. Seven deep learning networks that achieve state-of-the-art results for natural image super-resolution were transferred to MR images to learn the mapping from low-resolution to high-resolution MR images, and their performance was compared with their results on natural images. The experiments show that deep learning networks achieve better MR super-resolution than traditional algorithms, with some results on par with natural images; the super-resolution performance differs considerably across anatomical regions, and no single network performs best for all of them. Deep learning-based MR image super-resolution therefore has important practical and theoretical value.
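A minimal sketch of producing the ×2/×3/×4 low-resolution counterparts from high-resolution MR slices; bicubic downsampling is one assumed degradation model, not necessarily the one used in the study.

```python
import torch
import torch.nn.functional as F

def make_lr(hr, scale):
    # hr: (N, 1, H, W) high-resolution MR slices; H and W divisible by scale.
    h, w = hr.shape[-2:]
    return F.interpolate(hr, size=(h // scale, w // scale),
                         mode="bicubic", align_corners=False)

hr = torch.rand(4, 1, 240, 240)
lr_sets = {s: make_lr(hr, s) for s in (2, 3, 4)}   # the x2 / x3 / x4 datasets
```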

7.
Arterial spin labeling (ASL) imaging is a powerful magnetic resonance imaging technique that allows blood perfusion to be measured quantitatively and non-invasively, with great potential for assessing tissue viability in various clinical settings. However, the clinical applications of ASL are currently limited by its low signal-to-noise ratio (SNR), limited spatial resolution, and long imaging time. In this work, we propose an unsupervised deep learning-based image denoising and reconstruction framework to improve the SNR and accelerate the imaging speed of high-resolution ASL imaging. The unique feature of the proposed framework is that it does not require any prior training pairs but only the subject's own anatomical prior, such as T1-weighted images, as network input. The neural network was trained from scratch in the denoising or reconstruction process, with noisy images or sparsely sampled k-space data as training labels. Performance of the proposed method was evaluated using in vivo experimental data from 3 healthy subjects on a 3T MR scanner, using ASL images acquired with a 44-min acquisition time as the ground truth. Both qualitative and quantitative analyses demonstrate the superior performance of the proposed framework over the reference methods. In summary, our proposed unsupervised deep learning-based denoising and reconstruction framework can improve the image quality and accelerate the imaging speed of ASL imaging.
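A minimal sketch of the training idea: a network fitted from scratch to a single subject, with the subject's own anatomical prior (e.g., a T1-weighted image) as input and the noisy ASL image as the label; the toy architecture and iteration count are assumptions.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                       # stand-in for the paper's architecture
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t1_prior = torch.rand(1, 1, 128, 128)      # subject's own T1w image (network input)
noisy_asl = torch.rand(1, 1, 128, 128)     # noisy ASL image (training label)

for _ in range(500):                       # early stopping acts as regularisation
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(t1_prior), noisy_asl)
    loss.backward()
    opt.step()

denoised = net(t1_prior).detach()
```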

8.
张倩雯  陈明    秦玉芳    陈希 《中国医学物理学杂志》2019,(11):1356-1361
Objective: To combine a deep residual structure with U-Net to form a new network, ResUnet, and to use it to segment chest CT images and extract lung nodule regions. Methods: The CT images came from the LUNA16 dataset. The lung parenchyma was first extracted by preprocessing the CT images; volumetric patches were then cropped and augmented to enlarge the sample set, and the corresponding lung nodule mask images were generated; finally, the resulting samples were fed into the ResUnet model for training. Results: The final precision and recall of the model were 35.02% and 97.68%, respectively. Conclusion: The model automatically learns lung nodule features, providing a reliable basis for subsequent automated lung cancer diagnosis, reducing the cost of clinical diagnosis, and saving physicians' time. Keywords: lung nodule; segmentation; deep residual structure; recall; ResUnet
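A minimal sketch of the residual convolution block that ResUnet substitutes for plain U-Net blocks; the channel counts and normalization are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch))
        # 1x1 projection so the identity path matches the output channels.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

# ResidualBlock(1, 32)(torch.randn(1, 1, 64, 64)).shape -> (1, 32, 64, 64)
```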

9.
10.
董国亚    宋立明      李雅芬  李文  谢耀钦 《中国医学物理学杂志》2020,37(10):1335-1339
A deep learning method is used to synthesize MRI from brain CT scans. Cranial CT and MRI slices from 28 patients were rigidly registered; images from 20 randomly selected patients were used to train a U-Net convolutional neural network, and the trained network was then used to predict synthetic MRI from the CT images of the remaining 8 patients. Quantitative analysis of the synthesized MRI shows that the U-Net trained with an L2 loss performs well, with a mean absolute error (MAE) of 47.81 and a mean structural similarity index (SSIM) of 0.91. The study shows that deep learning can convert CT images into synthetic MRI; at this stage this can enlarge MRI image databases, and as the accuracy of the synthesized images improves, it may assist diagnosis and other clinical applications.
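A minimal sketch of the reported evaluation step — MAE and SSIM between a synthesized and a real MR slice — using scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(synth, real):
    # synth, real: 2D float arrays holding one MR slice each.
    mae = np.mean(np.abs(synth - real))
    ssim = structural_similarity(synth, real,
                                 data_range=real.max() - real.min())
    return mae, ssim
```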

11.
A small dataset commonly affects generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large, realistic, and diverse dataset. Clinical brain PET/CT/MR images of 35 patients were included: full-dose (FD) PET, low-dose (LD) PET corresponding to only 5% of the events acquired in the FD scan, non-attenuation-corrected (NAC) and CT-based measured attenuation-corrected (MAC) PET images, CT images, and T1 and T2 MR sequences. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to produce natural-looking composites using frequency-domain information from images of two separate patients together with a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without the synthesized images. Quantitative evaluation was performed using established metrics, including the peak signal-to-noise ratio (PSNR), the structural similarity index metric (SSIM), and joint histogram analysis. The comparison between the registered small dataset of 35 patients and the large dataset of 350 synthesized studies plus the 35 real ones showed improvements in RMSE and SSIM of 29% and 8% for LD to FD, 40% and 7% for LD+MRI to FD, 16% and 8% for NAC to MAC, and 24% and 11% for MRI to CT mapping, respectively. The qualitative and quantitative analyses demonstrate that the proposed approach improved the performance of all four DNN models by producing images of higher quality and lower quantitative bias and variance compared to the reference images.
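A minimal sketch of Laplacian-pyramid blending of two registered slices with a blending mask, the classical technique the paper uses to synthesize additional training images; the pyramid depth is an assumption.

```python
import cv2
import numpy as np

def laplacian_blend(img_a, img_b, mask, levels=4):
    # img_a, img_b: float32 2D slices from two MNI-registered patients;
    # mask: float32 in [0, 1], same size, 1 where img_a should dominate.
    ga, gb, gm = [img_a], [img_b], [mask]
    for _ in range(levels):                            # Gaussian pyramids
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    blended = gm[-1] * ga[-1] + (1 - gm[-1]) * gb[-1]  # coarsest level
    for i in range(levels - 1, -1, -1):                # add blended Laplacian bands
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        blended = cv2.pyrUp(blended, dstsize=size) + gm[i] * la + (1 - gm[i]) * lb
    return blended
```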

12.
Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors of different shapes, sizes, brightness, and textures can appear anywhere in the brain, and these complexities motivate the choice of a high-capacity deep convolutional neural network (DCNN) with multiple layers. The proposed DCNN contains two parts: the architecture and the learning algorithm, used respectively to design the network model and to optimize its parameters during training. The architecture contains five convolutional layers, all using 3×3 kernels, and one fully connected layer. Stacking small kernels achieves the effective receptive field of larger kernels with fewer parameters and fewer computations. Using the Dice similarity coefficient, we report accuracies on the BRATS 2016 brain tumor segmentation challenge dataset of 0.90, 0.85, and 0.84 for the complete, core, and enhancing regions, respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.
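A minimal sketch of the Dice similarity coefficient used to score the complete, core, and enhancing tumor regions.

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    # pred, truth: arrays marking the segmented region (non-zero = tumor).
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```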

13.
Purpose: The objective of this paper was to develop a computer-aided diagnostic (CAD) tool for automated analysis of capsule endoscopy (CE) images, more precisely, to detect small-intestinal abnormalities such as bleeding. Methods: In particular, we explore a convolutional neural network (CNN)-based deep learning framework to identify bleeding and non-bleeding CE images, where a pre-trained AlexNet is used to train a transfer-learning CNN that carries out the identification. Moreover, bleeding zones in a bleeding-identified image are delineated using deep learning-based semantic segmentation that leverages a SegNet deep neural network. Results: To evaluate the performance of the proposed framework, we carry out experiments on two publicly available clinical datasets and achieve F1 scores of 98.49% and 88.39% on the capsule endoscopy.org and KID datasets, respectively. For bleeding zone identification, 94.42% global accuracy and 90.69% weighted intersection over union (IoU) are achieved. Conclusion: Our results are compared to other recently developed state-of-the-art methods, and consistent advances are demonstrated in the performance measures for bleeding image and bleeding zone detection. Relative to the present practice of manual inspection and annotation of CE images by a physician, our framework enables considerable savings in annotation time and human labor for bleeding detection in CE images, while providing the additional benefits of bleeding zone delineation and increased detection accuracy. Moreover, the overall cost of CE enabled by our framework will also be much lower due to the reduction of manual labor, which can make CE affordable for a larger population.
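A minimal sketch of the transfer-learning step: a pretrained AlexNet whose final classifier layer is replaced for the two-class bleeding/non-bleeding task; freezing the convolutional features is an assumption.

```python
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)   # new head: bleeding vs. non-bleeding
# Optionally freeze the convolutional features and fine-tune only the classifier.
for p in model.features.parameters():
    p.requires_grad_(False)
```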

14.
Semantic concept recognition in medical images is a key technical step in medical image knowledge representation. Research on concept recognition helps machines understand and learn the medical knowledge latent in images and plays an important role in applications such as computer-aided diagnosis and intelligent image reading. The recognition of high-frequency concepts in medical images is cast as a multi-label classification task, and a deep transfer learning method based on convolutional neural networks is used to recognize a limited number of high-frequency medical concepts; in parallel, a topic-modeling method based on image retrieval extracts semantically related concepts from images similar to a given medical image. The international cross-language image retrieval forum ImageCLEF organized the ImageCLEFcaption 2018 evaluation in May 2018; its "concept detection" subtask provides 222,314 training images and 9,938 test images and asks participants to recognize 111,156 semantic concepts. Results from both methods were submitted. The CNN-based deep transfer learning method for high-frequency concepts achieved an F1 score of 0.0928, ranking second among the submitting teams; the retrieval-based topic model can recall some low-frequency related concepts, achieving an F1 score of 0.0907, but its performance depends on the quality of the image retrieval results. The CNN-based deep transfer learning approach is more robust for high-frequency concepts, but the recognition of large-scale, open-ended semantic concepts still needs further work.
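A minimal sketch of the multi-label formulation: one sigmoid output per high-frequency concept, trained with binary cross-entropy; the ResNet-50 backbone and the concept cut-off are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

n_concepts = 200                        # kept high-frequency concepts (assumed cut-off)
backbone = resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Linear(backbone.fc.in_features, n_concepts)

criterion = nn.BCEWithLogitsLoss()      # multi-label loss, one sigmoid per concept
logits = backbone(torch.rand(2, 3, 224, 224))
targets = torch.zeros(2, n_concepts)    # multi-hot concept labels
loss = criterion(logits, targets)
```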

15.
2D/3D registration is widely used in clinical diagnosis and surgical navigation planning; it compensates for the information missing between medical images of different dimensionality and helps surgeons localize a patient's lesion precisely during an operation. Conventional 2D/3D registration methods rely mainly on image intensity; they are very time-consuming, which conflicts with the clinical need for real-time performance, and they easily fall into local optima during registration. A deep learning approach to 2D/3D medical image registration is proposed: a convolutional neural network is trained on digitally reconstructed radiographs (DRRs), automatically learns image features, and predicts the parameters corresponding to an X-ray image, thereby achieving registration. Using a pelvic phantom as the experimental subject, 36,000 DRR images were generated from the pelvic CT data as the training set, and 50 X-ray images of the phantom acquired with a C-arm were used for validation. On three accuracy metrics, correlation coefficient, normalized mutual information, and Euclidean distance, the deep learning method scored 0.82±0.07, 0.32±0.03, and 61.56±10.91, versus 0.79±0.07, 0.29±0.03, and 37.92±7.24 for the conventional 2D/3D algorithm, indicating that the registration accuracy of the deep learning method exceeds that of the conventional 2D/3D method and that it does not get trapped in local optima. Its registration time is about 0.03 s, far less than that of conventional 2D/3D registration, meeting the clinical requirement for real-time registration; 2D/3D registration on clinical data will be investigated next.
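A minimal sketch of the regression formulation: a small CNN maps a DRR (or X-ray) to six rigid-body pose parameters and is trained with an MSE loss; the toy architecture is an assumption.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6))                       # 3 translations + 3 rotations

drr = torch.rand(8, 1, 256, 256)            # DRRs rendered from the pelvis CT
pose_gt = torch.rand(8, 6)                  # poses used to render each DRR
loss = nn.functional.mse_loss(net(drr), pose_gt)
```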

16.
Breast cancer is one of the leading causes of cancer death among women worldwide. Existing diagnosis relies mainly on physicians examining histopathology images, which is laborious, time-consuming, and dependent on specialist knowledge and experience, so diagnostic efficiency is unsatisfactory. To address this, a deep learning framework based on histology images is designed to improve the accuracy of breast cancer diagnosis while reducing physicians' workload. A classification model based on multi-network feature fusion and sparse dual-relation regularized learning is developed. First, breast cancer images are preprocessed by sub-image cropping and color augmentation. Second, three typical deep convolutional neural networks (InceptionV3, ResNet-50, and VGG-16) extract deep convolutional features from the breast cancer pathology images, and the features are fused. Finally, a supervised dual-relation regularized learning method for feature dimensionality reduction is proposed that exploits two kinds of relations (sample-sample and feature-feature) together with lF regularization, and a support vector machine classifies the pathology images into four classes: normal, benign, carcinoma in situ, and invasive carcinoma. Validated on 400 breast cancer pathology images from the public ICIAR2018 dataset, the method achieves 93% classification accuracy. Fusing deep convolutional features from multiple networks effectively captures rich image information, while sparse dual-relation regularized learning reduces feature redundancy and noise interference, effectively improving the classification performance of the model.
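A minimal sketch of the feature-fusion step: deep features from InceptionV3, ResNet-50, and VGG-16 concatenated before classification. Preprocessing and the sparse dual-relation regularized dimensionality reduction are omitted, and the helper below is an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3, resnet50, vgg16
from sklearn.svm import SVC

def headless(model, attr):
    setattr(model, attr, nn.Identity())     # strip the ImageNet classifier
    return model.eval()

nets = [headless(inception_v3(weights="IMAGENET1K_V1"), "fc"),
        headless(resnet50(weights="IMAGENET1K_V1"), "fc"),
        headless(vgg16(weights="IMAGENET1K_V1"), "classifier")]

@torch.no_grad()
def fused_features(batch):                  # batch: (N, 3, 299, 299) patches
    return torch.cat([net(batch) for net in nets], dim=1).numpy()

# clf = SVC().fit(fused_features(train_x), train_y)   # 4 ICIAR2018 classes
```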

17.
Machine learning, and deep learning in particular, has recently become popular in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and we provide code that can help those new to the field begin their informatics projects.
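A minimal sketch, in PyTorch, of the kind of small classification network such a tutorial walks through; the tutorial's own code and framework may differ.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2))   # assumes 224x224 single-channel input, 2 classes
```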

18.
In recent years, with the rapid development of medical imaging technology, medical image analysis has entered the big-data era, and mining useful information from massive volumes of image data poses a major challenge for medical image recognition. Deep learning is a new branch of machine learning. Traditional machine learning methods cannot effectively mine the rich information contained in medical images, whereas deep learning builds hierarchical models that mimic the human brain and offers powerful automatic feature extraction, complex model construction, and efficient feature representation; more importantly, it extracts features layer by layer, from low level to high level, starting from raw pixel data, which offers a new way to tackle the emerging problems of medical image recognition. This paper first describes deep learning methods, lists three common implementation models, and introduces the training process of deep learning; it then summarizes applications of deep learning in disease detection and classification and in lesion recognition, along with two issues common to applying deep learning to medical image recognition; finally, it analyzes the open problems of deep learning in medical image recognition and gives an outlook.

19.
Objective: To propose a deep learning-based method for removing noise from low-dose CT (LDCT) images. Methods: Filtered back-projection reconstruction is performed first, and the reconstructed LDCT images are then denoised with a multi-scale parallel residual U-net (MPR U-net) deep learning model. The experiments use the medical CT dataset of the LoDoPaB-CT challenge, with 35,820 training images, 3,522 validation images, and 3,553 test images; peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to evaluate denoising performance. Results: Before and after processing, the PSNR of the LDCT images was 28.80 and 38.22 dB and the SSIM was 0.786 and 0.966, respectively, with an average processing time of 0.03 s. Conclusion: The MPR U-net model removes LDCT image noise well, improving PSNR while preserving more image detail.
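A minimal sketch of a multi-scale parallel residual block of the kind suggested by the MPR U-net name: parallel branches with different kernel sizes, fused and added back to the input. The kernel sizes, channel counts, and fusion are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (3, 5, 7))
        self.fuse = nn.Conv2d(3 * ch, ch, 1)   # 1x1 fusion of the parallel scales

    def forward(self, x):
        multi = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return x + self.fuse(multi)            # residual connection
```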

20.
OBJECTIVE: Adaptive and automatic adjustment of the display window parameters for magnetic resonance images under different viewing conditions is a challenging problem in medical image perception. An adaptive hierarchical neural network-based system with online adaptation capabilities is presented in this paper to achieve this goal. METHODOLOGY: The online adaptation capabilities are primarily attributed to the use of hierarchical neural networks and the development of a new width/center mapping algorithm. The large training image set is hierarchically organized for efficient user interaction and effective re-mapping of the width/center settings. The width/center mapping functions are estimated from the new user-adjusted width/center values of some representative images by using a global spline function for the entire training set as well as a first-order polynomial function for each selected image sequence. The hierarchical neural networks are then re-trained on the new training data set after this mapping process. RESULTS: The proposed automatic display window parameter adjustment system is implemented as a program on a personal computer for testing its adaptation performance. Experimental results show that the proposed system can successfully adapt its parameter adjustment to a variety of MR images after user re-adjustment and re-training of the neural networks. CONCLUSION: This demonstrates the effective adaptation capabilities of the proposed system based on the framework of training data mapping and neural network re-training.
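A minimal sketch of applying a display window (width/center) to an image for viewing, which is the quantity the described hierarchical system predicts and adjusts; the 8-bit output range is an assumption.

```python
import numpy as np

def apply_window(image, center, width):
    # Map intensities in [center - width/2, center + width/2] to the display range.
    lo, hi = center - width / 2.0, center + width / 2.0
    display = np.clip((image - lo) / (hi - lo), 0.0, 1.0)
    return (display * 255).astype(np.uint8)
```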
