Similar Literature
 19 similar articles found; search took 109 ms
1.
[Abstract] Objective: To perform binary skull reconstruction with deep learning based on different combinations of MR Dixon images, and to evaluate bone-reconstruction quality by comparison with CT images. Methods: Head CT and MR images of 21 cases acquired between June and August 2021 were retrospectively collected. After rigid registration, voxels with CT values above 150 HU and above 400 HU were labeled as skull tissue. A U-Net neural network was trained, with 16 cases as the training set and 5 as the test set. The four Dixon contrast images and their different combinations formed ensemble models for binary skull-image reconstruction. The Dice similarity coefficient (DSC), accuracy, sensitivity and specificity were used to evaluate reconstruction quality. Results: At the 400 HU threshold, the combination of water and in-phase images gave the highest DSC (0.760±0.038); at the 150 HU threshold, the combination of water and opposed-phase images gave the highest DSC (0.795±0.040). The 150 HU reconstruction was more sensitive than the 400 HU one (0.880±0.050 vs. 0.855±0.052) but less specific (0.977±0.004 vs. 0.982±0.004). Conclusion: For deep-learning binary skull reconstruction from Dixon images, combining water and in-phase images works best at the 400 HU threshold, and combining water and opposed-phase images works best at the 150 HU threshold.
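The four evaluation metrics in this abstract (DSC, accuracy, sensitivity, specificity) can all be computed from the overlap counts of a predicted and a reference binary mask. A minimal numpy sketch of that arithmetic (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def binary_mask_metrics(pred, ref):
    """Compare a predicted binary mask against a reference mask
    (e.g. the CT-thresholded skull) and return DSC, accuracy,
    sensitivity and specificity."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.sum(pred & ref)    # true positives
    tn = np.sum(~pred & ~ref)  # true negatives
    fp = np.sum(pred & ~ref)   # false positives
    fn = np.sum(~pred & ref)   # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return dsc, acc, sens, spec
```

DSC weights overlap of the (usually small) foreground, which is why it is preferred over plain accuracy for segmentation tasks like this one.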

2.
Objective: To explore the value of a convolutional neural network-based deep-learning model for automatic rib segmentation and three-dimensional reconstruction on chest CT images. Methods: A total of 130 subjects who underwent chest CT in our hospital from November 2020 to January 2021 were collected (33,280 axial images in all); 80 served as the training set and 20 as the test set, with 10 subjects from each of three other CT scanners as independent validation sets. Performance was evaluated for four 3D segmentation net…

3.
Objective: To perform uterine T2WI with a deep-learning compressed-sensing technique and, through comprehensive quality assessment, to explore its feasibility for clinical use. Methods: Eighty female patients referred for pelvic examination were enrolled; T2WI was acquired with conventional parallel imaging (PI) in 40 and with a convolutional-neural-network compressed-sensing (CNN-CS) technique in the other 40. Motion artifacts and tissue-boundary sharpness were scored, and the signal contrast between the endometrium, myometrium and junctional zone was compared. Results: The overall image-quality score of CNN-CS T2WI was significantly higher than that of PI (2.75±0.44 vs. 2.35±0.53, P<0.05); the endometrium-junctional zone and myometrium-junctional zone contrasts were both better in the CNN-CS group (0.74±0.07 vs. 0.60±0.11, P<0.001; 0.53±0.11 vs. 0.44±0.10, P<0.05); acquisition time was shorter in the CNN-CS group. Conclusion: Compared with conventional PI, deep-learning-based CNN-CS for uterine T2WI reduces artifacts and improves tissue contrast, optimizing image quality while shortening acquisition time.
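The abstract reports tissue contrasts in the 0-1 range but does not give the formula. A common choice producing values in that range is Michelson-style contrast, (S1 − S2)/(S1 + S2); the sketch below uses it purely as an assumption for illustration:

```python
def tissue_contrast(s1, s2):
    """Michelson-style contrast between two tissue signal intensities.
    NOTE: the formula is an assumption -- the abstract reports contrast
    values in [0, 1] but does not state how they were computed."""
    return abs(s1 - s2) / (s1 + s2)
```

For example, mean signal intensities of 870 (endometrium) and 130 (junctional zone) would give a contrast of 0.74, matching the scale of the values reported.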

4.
[Abstract] As an important component of the knee joint, the meniscus and its degeneration or injury play a major role in the onset and progression of knee osteoarthritis. MRI is currently the most important imaging method for detecting meniscal lesions, but MRI-based detection and grading of meniscal lesions is labor-intensive and complex in clinical practice, and reader subjectivity often biases interpretation. Deep learning (DL) has shown great potential in automated medical-image analysis and is increasingly applied to the diagnosis and management of meniscal lesions and knee osteoarthritis. This article reviews current DL research based on meniscal MRI and discusses the remaining challenges and future research directions.

5.
With the arrival of the big-data era, artificial intelligence has emerged and developed rapidly in the medical field, showing particular promise in tumor diagnosis. Using key techniques such as automated image segmentation and feature extraction, AI can aggregate and analyze large volumes of tumor information in a short time while reflecting the real-world distribution of imaging data, shifting tumor diagnosis from subjective perception toward objective science. It can thus assist physicians efficiently and accurately, providing a solid basis for treatment planning and prognostication. This article reviews the key techniques and current applications of AI in tumor diagnosis.

6.
Mammography is an effective tool for breast cancer screening, but it has limitations. Artificial intelligence (AI), with its powerful capacity to extract and analyze image features, is a core technology driving the future of intelligent medical imaging. In recent years, deep-learning (DL) applications in mammography have developed rapidly and can improve radiologists' efficiency and diagnostic accuracy while reducing missed diagnoses. This article reviews the value and prospects of DL-based mammography in breast cancer screening, clinical diagnosis and risk assessment.

7.
Objective: To predict the dose distribution of intensity-modulated radiotherapy (IMRT) after breast-conserving surgery for breast cancer using a deep-learning method, and to evaluate its prediction accuracy. Methods: IMRT data of 110 patients with left-sided breast cancer treated after breast-conserving surgery at Shanghai International Medical Center from January 2018 to March 2023 were retrospectively analyzed; 80 cases were fixed at random as the training set, 10 as the validation set and the remaining 20 as the test set. Four-channel input features, namely the CT images, regions of interest, voxel-to-target distance and the corresponding dose distribution, were fed to a U-net for training. The trained model was used to predict doses for the test set, the contribution of the voxel-to-target distance feature was verified, and predicted doses were compared with the actual manually planned doses. Results: Adding the voxel-to-target distance feature improved prediction accuracy; the dose score and dose-volume histogram (DVH) score of the 20 test patients were 2.10±0.18 and 2.28±0.08, closer to the manual-plan dose distribution (t=2.52, 2.40, P<0.05). Deviations between predicted and manually planned doses for the target and organs at risk (OAR) were within 4%, and the mean dose to the contralateral breast increased by 13 cGy, all within clinically acceptable limits. Except for D2 and D98 of PTV60 (Di being the dose received by i% of the PTV volume)…
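The "voxel-to-target distance" input channel described above is typically derived from the target (PTV) mask with a Euclidean distance transform. A sketch of one common construction, a signed distance map (positive outside the target, negative inside), under the assumption that this is the convention used; the abstract only names the feature:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_to_target(target_mask, voxel_spacing=(1.0, 1.0, 1.0)):
    """Signed Euclidean distance from each voxel to the target volume:
    positive outside the target, negative inside. The sign convention
    and use of a signed (rather than unsigned) map are assumptions."""
    target = target_mask.astype(bool)
    # distance_transform_edt gives, for each nonzero voxel, the distance
    # to the nearest zero voxel; combining both directions yields a
    # signed map.
    outside = distance_transform_edt(~target, sampling=voxel_spacing)
    inside = distance_transform_edt(target, sampling=voxel_spacing)
    return outside - inside
```

Such a map gives the network an explicit, spatially smooth notion of "how far from the target is this voxel", which is why it helps dose prediction near steep dose gradients.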

8.
Optical coherence tomography (OCT) can clearly display the fine morphological structure of retinal tissue. Observing the OCT image features of high myopia can reveal lesions that conventional examinations cannot detect, aiding the clinical diagnosis and treatment of high-myopia maculopathy.

9.
[Abstract] Objective: To explore the feasibility of a deep-learning approach for building an MR diagnostic model of cervical spondylosis. Methods: MR images of 514 patients diagnosed with cervical spondylosis in our hospital from October 2020 to March 2023 were retrospectively collected. An existing cervical-spine segmentation model predicted the dural sac, spinal cord, intervertebral discs, posterior longitudinal ligament and ligamentum flavum on axial T2WI, and the vertebral bodies and discs on sagittal T1WI and T2WI. A junior radiologist (2 years of reading experience) revised the annotations, which were then reviewed by a senior radiologist (≥15 years). 3D or 2D deep-learning classification models were trained for the different diagnostic criteria of cervical spondylosis: (1) vertebral hyperostosis; (2) vertebral spondylolisthesis; (3) disc herniation; (4) posterior longitudinal ligament thickening; (5) ligamentum flavum thickening. Model outputs were imported into R for confusion-matrix analysis and ROC-curve plotting; accuracy, sensitivity, specificity, positive and negative predictive values and area under the ROC curve (AUC) were used to evaluate the five models. Results: The disc-herniation model performed best, with accuracy 0.90, sensitivity 0.95, specificity 0.85 and AUC 0.982. The hyperostosis and spondylolisthesis models had accuracies of 0.81 and 0.80, sensitivities of 0.74 and 0.76, specificities of 0.84 and 1.00, and AUCs of 0.855 and 0.905. The posterior-longitudinal-ligament and ligamentum-flavum thickening models had accuracies of 0.82 and 0.77, sensitivities of 0.78 and 0.84, specificities of 0.86 and 0.70, and AUCs of 0.902 and 0.929. Conclusion: This study built automatic MR classification models for cervical spondylosis with deep learning, covering vertebral hyperostosis, spondylolisthesis, disc herniation, and thickening of the posterior longitudinal ligament and ligamentum flavum; it shows that deep learning can assist MR diagnosis of cervical spondylosis and lays the groundwork for a fully automatic diagnostic model and embedded structured reporting.

10.
[Abstract] Objective: To explore the feasibility of a 3D U-Net model for automatic segmentation of cervical-spine structures on sagittal T1WI and T2WI. Methods: Sagittal T1WI and T2WI images of 92 patients with suspected cervical spondylosis were retrospectively collected; two radiologists manually annotated the cervical structures, including vertebral bodies, intervertebral discs, dural sac, spinal cord and intervertebral foramina, on both sequences of each patient. The 178 image series were randomly divided into a training set (n=138), a tuning set (n=20) and a test set (n=20). The 3D U-Net segmentation model was trained on the training set, its parameters fine-tuned on the tuning set, and its performance evaluated on the test set with a quantitative metric (Dice similarity coefficient, DSC) and a qualitative metric (subjective score); DSC values of the structures were compared within and between the three groups. Results: In the test set, the DSC values for vertebral bodies, discs, dural sac, spinal cord and intervertebral foramina were 0.87±0.03, 0.85±0.04, 0.87±0.04, 0.82±0.05 and 0.57±0.08; the overall DSC across structures was 0.80±0.13. DSC values differed significantly both within and between groups (P<0.001). Subjective evaluation showed that the segmentations of all cervical structures met clinical measurement requirements. Conclusion: The 3D U-Net model based on sagittal T1WI and T2WI achieves high accuracy in segmenting cervical-spine structures.

11.
12.
Objective: To explore the value of radiomics and deep learning based on dynamic contrast-enhanced MRI (DCE-MRI) for the differential diagnosis of spinal metastases from lung cancer. Methods: DCE-MRI examinations of 61 patients with confirmed spinal metastases were retrospectively analyzed. Time-signal intensity curves of the regions of interest were drawn and three parameters were defined from the curves; lesions were segmented with a standardized region-growing algorithm. Radiomics features were extracted from the three DCE-MRI parameter maps, and a random-forest algorithm selected the features most relevant to the differential diagnosis for building the classifier. Two deep-learning algorithms were also studied: the three DCE-MRI parameter maps served as input to a convolutional neural network (CNN), and, treating the image set at each slice location as a time series, the 12 DCE time frames served as input to a convolutional long short-term memory (CLSTM) network. Results: Diagnostic accuracy was 0.71 for radiomics; the mean accuracies of the CNN and the CLSTM were 0.71 and 0.81, respectively. Conclusion: Radiomics and deep learning based on DCE-MRI are feasible for differentiating spinal metastases of lung cancer and can provide valuable information for clinical diagnosis.
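The abstract defines three parameters from the time-signal intensity curve but does not name them. Common choices for semi-quantitative DCE-MRI curve parameters are peak relative enhancement, time to peak, and initial wash-in slope; the sketch below computes those three as an assumption, purely to illustrate how such parameter maps arise from a curve:

```python
import numpy as np

def tic_parameters(t, s):
    """Three illustrative time-intensity-curve (TIC) parameters.
    NOTE: which three parameters the study actually used is not
    stated in the abstract; these are common choices, assumed here.
    t: acquisition times (s); s: signal intensities over time."""
    s0 = s[0]                                   # baseline signal
    peak_idx = int(np.argmax(s))
    peak_enh = (s[peak_idx] - s0) / s0          # peak relative enhancement
    ttp = t[peak_idx] - t[0]                    # time to peak
    slope = (s[peak_idx] - s0) / ttp if ttp > 0 else 0.0  # wash-in slope
    return peak_enh, ttp, slope
```

Computing these per voxel (rather than per ROI) is what turns one dynamic series into the three parameter maps fed to the CNN.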

13.
Objective: To evaluate the image quality and lesion detectability of lower-dose CT (LDCT) of the abdomen and pelvis obtained using a deep learning image reconstruction (DLIR) algorithm compared with those of standard-dose CT (SDCT) images. Materials and Methods: This retrospective study included 123 patients (mean age ± standard deviation, 63 ± 11 years; male:female, 70:53) who underwent contrast-enhanced abdominopelvic LDCT between May and August 2020 and had prior SDCT obtained using the same CT scanner within a year. LDCT images were reconstructed with hybrid iterative reconstruction (h-IR) and DLIR at medium and high strengths (DLIR-M and DLIR-H), while SDCT images were reconstructed with h-IR. For quantitative image quality analysis, image noise, signal-to-noise ratio, and contrast-to-noise ratio were measured in the liver, muscle, and aorta. Among the three LDCT reconstruction algorithms, the one showing the smallest difference in quantitative parameters from those of SDCT images was selected for qualitative image quality analysis and lesion detectability evaluation. For qualitative analysis, overall image quality, image noise, image sharpness, image texture, and lesion conspicuity were graded on a 5-point scale by two radiologists. Observer performance in focal liver lesion detection was evaluated by comparing the jackknife free-response receiver operating characteristic figures-of-merit (FOM). Results: LDCT (35.1% dose reduction compared with SDCT) images obtained using DLIR-M showed quantitative measures similar to those of SDCT with h-IR images. All qualitative parameters of LDCT with DLIR-M images except image texture were similar to or significantly better than those of SDCT with h-IR images. Lesion detectability on LDCT with DLIR-M images was not significantly different from that of SDCT with h-IR images (reader-averaged FOM, 0.887 vs. 0.874, respectively; p = 0.581). Conclusion: Overall image quality and detectability of focal liver lesions are preserved in contrast-enhanced abdominopelvic LDCT obtained with DLIR-M relative to SDCT with h-IR.
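The quantitative measures in this study (SNR and CNR from ROIs in the liver, muscle and aorta) reduce to simple ROI statistics. A numpy sketch of the usual definitions, noting that exact conventions vary between papers and the specific ROI pairing here is an assumption:

```python
import numpy as np

def snr_cnr(roi, ref_roi, noise_roi):
    """SNR and CNR from ROI pixel values:
      SNR = mean(ROI) / SD(noise ROI)
      CNR = (mean(ROI) - mean(reference ROI)) / SD(noise ROI)
    The choice of which ROI supplies the noise estimate, and the use
    of the sample SD (ddof=1), are common conventions, assumed here."""
    noise_sd = np.std(noise_roi, ddof=1)
    snr = np.mean(roi) / noise_sd
    cnr = (np.mean(roi) - np.mean(ref_roi)) / noise_sd
    return snr, cnr
```

Because DLIR mainly lowers the noise SD in the denominator, dose-reduced images reconstructed with it can match the SNR/CNR of standard-dose h-IR images, which is the comparison the study performs.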

14.
15.
Objective: The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Methods: Panoramic and cone beam CT (CBCT) images obtained from June 2018 to May 2020 were screened and 1020 patients were selected. Our dataset of 2040 sound mandibular second molars comprised 887 C-shaped canals and 1153 non-C-shaped canals. To confirm the presence of a C-shaped canal, CBCT images were analyzed by a radiologist and set as the gold standard. A CNN-based deep-learning model for predicting C-shaped canals was built using Xception. The training and test sets comprised 80% and 20% of the data, respectively. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and precision. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were calculated. Further, gradient-weighted class activation maps (Grad-CAM) were generated to localize the anatomy that contributed to the predictions. Results: The accuracy, sensitivity, specificity, and precision of the CNN model were 95.1, 92.7, 97.0, and 95.9%, respectively. Grad-CAM analysis showed that the CNN model mainly identified root canal shapes converging into the apex to predict the C-shaped canals, while the root furcation was predominantly used for predicting the non-C-shaped canals. Conclusions: The deep-learning system showed high accuracy in predicting C-shaped canals of mandibular second molars on panoramic radiographs.

16.
Objectives: Performance of recently developed deep learning models for image classification surpasses that of radiologists. However, there are questions about model performance consistency and generalization in unseen external data. The purpose of this study is to determine whether the high performance of deep learning on mammograms can be transferred to external data with a different data distribution. Materials and Methods: Six deep learning models (three published models with high performance and three models designed by us) were evaluated on four different mammogram data sets, including three public (Digital Database for Screening Mammography, INbreast, and Mammographic Image Analysis Society) and one private data set (UKy). The models were trained and validated on either Digital Database for Screening Mammography alone or a combined data set that included Digital Database for Screening Mammography. The models were then tested on the three external data sets. The area under the receiver operating characteristic curve (auROC) was used to evaluate model performance. Results: The three published models reported validation auROC scores between 0.88 and 0.95 on the validation data set. Our models achieved between 0.71 (95% confidence interval [CI]: 0.70-0.72) and 0.79 (95% CI: 0.78-0.80) auROC on the same validation data set. However, on the three external test data sets the auROC scores of all six models decreased significantly, ranging only between 0.44 (95% CI: 0.43-0.45) and 0.65 (95% CI: 0.64-0.66). Conclusion: Our results demonstrate performance inconsistency across the data sets and models, indicating that the high performance of deep learning models on one data set cannot be readily transferred to unseen external data sets, and these models need further assessment and validation before being applied in clinical practice.
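The auROC used throughout this study has a useful probabilistic reading: it is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U statistic). A small numpy sketch of that equivalence (the O(n²) pairwise form is for illustration only; production code would use a library routine such as sklearn's `roc_auc_score`):

```python
import numpy as np

def auroc(scores, labels):
    """auROC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

This rank-based view also explains why auROC is insensitive to monotone recalibration of scores but drops sharply when a model's score ordering breaks down on out-of-distribution data, as observed on the external test sets here.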

17.
Objective: To explore the effectiveness and feasibility of a 3D U-Net-based segmentation model using three-phase CT images for automatic delineation of the primary gross tumor volume (GTVnx) and metastatic regional lymph nodes (GTVnd) in nasopharyngeal carcinoma. Methods: CT scans of 215 nasopharyngeal carcinoma cases were retrospectively collected, comprising 645 image series across three phases: plain (CT), contrast-enhanced (CTC) and delayed (CTD). Using a random-number table, the data were split into a training set of 172 cases and a test set of 43. Six experimental groups were set up, comprising the phase-image models, namely plain only (A1), enhanced only (A2), delayed only (A3) and all three phases (A4), and the phase fine-tuning models, CTC fine-tuned (B1) and CTD fine-tuned (B2). The Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95) served as quantitative evaluation metrics. Results: For GTVnd, automatic delineation with three-phase CT (A4) outperformed single-phase CT (A1, A2, A3) (DSC: 0.67 vs. 0.61, 0.64, 0.64; t=7.48, 3.27, 4.84; P<0.01; HD95: 36.45 mm vs. 79.23, 59.55, 65.17 mm; t=5.24, 2.99, 3.89; P<0.01), with statistically significant differences. For GTVnx, three-phase CT (A4) brought no clear improvement over single-phase CT (DSC: 0.73 vs. 0.74, 0.74, 0.74; HD95: 14.17 mm vs. 8.06, 8.11, 8.10 mm), with no statistically significant difference (P>0.05). For GTVnd delineation, the B1 and B2 models achieved better accuracy than A1 (DSC: 0.63, 0.63 vs. 0.61; t=4.10, 3.03; P<0.01; HD95: 58.11, 50.31 mm vs. 79.23 mm; t=2.75, 3.10; P<0.01). Conclusion: Three-phase CT yields better automatic delineation of the nasopharyngeal-carcinoma GTVnd target, and the phase fine-tuning models can improve GTVnd delineation accuracy on plain CT images.
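The HD95 metric reported above is the 95th percentile of the surface distances between the predicted and reference masks, which is more robust to single outlier voxels than the maximum Hausdorff distance. A scipy-based sketch, using a simplified variant that measures distances from every foreground voxel of one mask to the nearest foreground voxel of the other (surface-voxel-only variants also exist; which one the study used is not stated):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between two
    binary masks. 'spacing' is the physical voxel size, so the
    result is in the same units (e.g. mm)."""
    a, b = a.astype(bool), b.astype(bool)
    # Distance from every voxel to the nearest foreground voxel of
    # the other mask.
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    d_ab = dist_to_b[a]   # distances from a's voxels to b
    d_ba = dist_to_a[b]   # distances from b's voxels to a
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```

Reporting HD95 alongside DSC, as this study does, pairs an overlap measure with a boundary-error measure; a model can score well on one and poorly on the other.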

18.
Deep learning is currently one of the most closely watched and promising machine-learning approaches in artificial intelligence; it is expected to revolutionize traditional computer-aided diagnosis systems and play an important role in precision imaging diagnosis. This article reviews the basic concepts of artificial intelligence, machine learning, deep learning, convolutional neural networks and transfer learning, together with the current state of research on deep-learning-based computer-aided diagnosis in lung, breast, cardiac, brain, liver, prostate and skeletal imaging as well as in pathology.

19.