Similar Literature
20 similar documents found.
1.
Objective: To investigate the value of quantitative and semi-quantitative DCE-MRI parameters in differentiating prostate cancer foci from benign prostatic hyperplasia (BPH) when the time-signal intensity curve shows a plateau pattern. Methods: The quantitative and semi-quantitative DCE-MRI parameters of 48 patients with prostate disease and a plateau-type time-signal intensity curve (26 prostate cancer, 22 BPH) were analyzed retrospectively, including the volume transfer constant (Ktrans), rate constant (Kep), extravascular extracellular space volume fraction (Ve), blood volume (BV), blood flow (BF), and time to peak (TTP). Statistical analysis was performed in SPSS. Results: In the prostate cancer group versus the BPH group, Ktrans was (2.33±0.93) min⁻¹ vs (1.21±0.71) min⁻¹, Kep was (3.46±1.41) min⁻¹ vs (1.81±0.85) min⁻¹, and BF was (182.63±74.79) ml·g⁻¹·min⁻¹ vs (140.88±50.73) ml·g⁻¹·min⁻¹; all between-group differences were statistically significant (P<0.05). Conclusion: Ktrans, Kep, and BF differ significantly between prostate cancer foci and BPH, and quantitative and semi-quantitative DCE-MRI parameters are of value in differentiating the two when the time-signal intensity curve is plateau-type.
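A minimal sketch of the kind of group comparison this abstract describes: an independent two-sample t-test on one DCE-MRI parameter (Ktrans here), done in Python rather than SPSS. The arrays are synthetic placeholders drawn to roughly match the reported means and standard deviations, not study data.

```python
# Hedged sketch: compare a DCE-MRI parameter between two groups with a
# Welch two-sample t-test. Values below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ktrans_cancer = rng.normal(2.33, 0.93, size=26)   # hypothetical Ktrans values, min^-1
ktrans_bph = rng.normal(1.21, 0.71, size=22)      # hypothetical Ktrans values, min^-1

t_stat, p_value = stats.ttest_ind(ktrans_cancer, ktrans_bph, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")     # p < 0.05 suggests a group difference
```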

2.
Radiomics is among the most commonly used techniques for medical image analysis. Radiomics models built with traditional machine learning methods, together with deep learning approaches, provide new tools for analyzing medical images. This article reviews the current state of research on traditional machine-learning radiomics and deep learning in hepatocellular carcinoma, as a reference for further study.

3.
Objective: To explore the value of radiomics based on dynamic contrast-enhanced MRI (DCE-MRI) for predicting sentinel lymph node (SLN) metastasis in breast cancer. Methods: 164 pathologically confirmed invasive breast cancers examined with DCE-MRI were collected retrospectively (training group, 124; validation group, 40). Radiomics features were extracted from the DCE-MRI images and DCE parameters were calculated; features were selected with a LASSO-logistic regression model. A radiomics-only model, a DCE-parameter-only model, and a combined model were built. The area under the ROC curve (AUC) was used to compare the discriminative performance of the models, the ROC curves were compared with the DeLong test, and predictive performance was assessed in the validation cohort. Results: Of 396 extracted radiomics features, 28 were retained after selection and combined with the DCE parameters for modeling. For preoperative prediction of SLN metastasis, the training-group AUC (95% CI) was 0.81 (0.72, 0.89) for the radiomics-only model, 0.77 (0.68, 0.86) for the DCE-parameter-only model, and 0.80 (0.72, 0.89) for the combined model; in the validation group the corresponding AUCs were 0.74 (0.59, 0.89), 0.74 (0.59, 0.90), and 0.76 (0.61, 0.91). The DeLong test showed no statistically significant difference (P > 0.05), although the combined model may perform slightly better. Conclusion: A prediction model built from DCE-MRI radiomics features and DCE parameters is a promising non-invasive tool for predicting SLN metastasis in breast cancer.
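A minimal sketch, under assumed data shapes, of the LASSO-logistic workflow described above: an L1-penalized logistic regression selects radiomics features, and performance is reported as ROC AUC on a held-out validation set (the DeLong comparison is omitted). The feature matrix, labels, and hyperparameters are illustrative placeholders.

```python
# Hedged sketch: LASSO-style feature selection via L1-penalized logistic
# regression, then AUC on a held-out validation split. Data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

X = np.random.rand(164, 396)          # placeholder: 164 lesions x 396 radiomics features
y = np.random.randint(0, 2, 164)      # placeholder: SLN metastasis labels (0/1)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=40, random_state=42, stratify=y)

# The L1 penalty drives most coefficients to zero, acting as the LASSO feature filter.
lasso_logit = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=5000),
)
lasso_logit.fit(X_train, y_train)

selected = np.flatnonzero(lasso_logit[-1].coef_.ravel())
print(f"{selected.size} features retained by the L1 penalty")

auc = roc_auc_score(y_val, lasso_logit.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")
```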

4.
Objective: To use 3D CT radiomics to predict the development of brain metastases in small cell lung cancer (SCLC) after platinum-based chemotherapy. Methods: Imaging data of 148 SCLC patients treated with platinum-based regimens were analyzed retrospectively; 57 developed brain metastases and 91 did not. On the Huiying big-data research platform, the dataset was randomly split 4:1 into training and test sets, radiomics features were extracted from the delineated lesions, and a support vector machine was used to build the discriminative radiomics model. The area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy were analyzed to predict the probability of brain metastasis during platinum-based treatment. Results: The areas under the ROC curve were 0.798 for the training set and 0.789 for the test set. In the training set, specificity and accuracy were 0.72 and 0.80 for the group without brain metastases and 0.71 and 0.62 for the group with brain metastases; in the test set they were 0.83 and 0.87 for the group without brain metastases and 0.68 and 0.62 for the group with brain metastases. Conclusion: 3D CT radiomics features of the lung lesion in platinum-treated SCLC have clinical value for predicting brain metastasis, enabling earlier clinical intervention and thereby improving prognosis and survival.
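A minimal sketch of an SVM radiomics classifier with a 4:1 train/test split and ROC-based evaluation, in the spirit of the workflow above; it is not the Huiying platform pipeline. The feature matrix, labels, and kernel settings are placeholder assumptions.

```python
# Hedged sketch: RBF-kernel SVM on radiomics features, 4:1 split,
# AUC / sensitivity / specificity / accuracy on the held-out set.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score, confusion_matrix

X = np.random.rand(148, 200)             # placeholder radiomics features
y = np.random.randint(0, 2, 148)         # 1 = brain metastasis, 0 = none (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1, stratify=y)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"AUC={roc_auc_score(y_te, prob):.3f}  "
      f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}  "
      f"accuracy={(tp + tn) / len(y_te):.2f}")
```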

5.
In recent years, deep learning has attracted wide attention in medical imaging, and convolutional neural networks, an important branch of deep learning, are particularly favored by radiologists. Deep convolutional neural networks can mine from images large numbers of features that the naked eye cannot recognize, which not only improves diagnostic performance but also effectively shortens diagnosis time. In addition, the incidence and mortality of colorectal cancer remain high, and accurate imaging assessment is important for guiding treatment. This article reviews the applications of image-based deep convolutional neural networks in preoperative assessment, genetic and molecular subtyping, and evaluation of response to neoadjuvant and adjuvant therapy in colorectal cancer, as well as the problems that remain.

6.
Objective: To evaluate the diagnostic value of applying a deep convolutional neural network (CNN) to dynamic contrast-enhanced imaging for differentiating liver tumors. Materials and methods: This retrospective clinical study included three-phase CT images of liver masses (unenhanced, arterial, and delayed…

7.
Objective: To explore the application of a deep convolutional neural network (CNN) model for assessing midpalatal suture maturation on cone-beam computed tomography (CBCT) images, and to verify the effectiveness of the deep learning algorithm. Methods: Building on the existing Xception convolutional neural network, the model structure was specifically optimized by introducing attention and multi-feature fusion mechanisms. Palatal-plane screenshots from 661 CBCT examinations were preprocessed and used as the training set. After training, the network was validated on 20 typical staged samples, and its staging accuracy was then tested on 38 difficult samples (test set A) and 60 average-difficulty samples (test set B). Finally, the model's results were compared with physicians' judgments. Results: The designed deep neural network achieved accuracies of 0.868 and 0.916 on test sets A and B, respectively, versus 0.628 and 0.850 for the physicians. Conclusion: When a case shows features of multiple stages, the deep neural network gives a more accurate conclusion; it can therefore provide physicians with a valuable reference and assist them in making the correct diagnosis.
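A minimal Keras sketch of fine-tuning an Xception backbone for maturation-stage classification, in the spirit of the model described above; the attention and multi-feature fusion modules are omitted, and the number of stages, image size, and data pipeline are assumptions.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained Xception backbone for
# CBCT palatal-plane stage classification. Many study-specific details
# (attention, feature fusion, preprocessing) are intentionally omitted.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_STAGES = 5          # assumed number of maturation stages
IMG_SIZE = (299, 299)   # Xception's default input size

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # first train only the new classification head

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(NUM_STAGES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets prepared elsewhere
```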

8.
A preliminary study of an artificial neural network for differentiating benign from malignant astrocytomas
Objective: To build a computer-aided diagnosis system using an artificial neural network based on MRI features, and to study its feasibility and diagnostic performance in judging whether astrocytic tumors are benign or malignant. Materials and methods: MRI data of 280 astrocytic tumors were collected, 169 benign and 111 malignant. Radiologists extracted and recorded 12 categories of features from the MR images, which were then fed into an artificial neural network; the network was trained to build the computer-aided diagnosis system. Its diagnostic performance was evaluated preliminarily on the database cases and its accuracy was compared with that of expert radiologists. Results: On the database cases, the diagnostic accuracy of the artificial neural network was 92.1% for benign and 94.3% for malignant astrocytomas, with specificities of 93.6% and 89.9%, respectively; its accuracy approached that of the expert radiologists. Conclusion: A neural network can be used for the differential diagnosis of benign and malignant astrocytomas. The computer-aided diagnosis system built in this study has practical value for improving the accuracy of this differential diagnosis and for teaching medical imaging. With the rapid development of artificial intelligence, building computer-aided diagnosis systems that help radiologists improve diagnostic accuracy is becoming increasingly feasible.
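A minimal sketch of a CAD classifier in the spirit of the system described above: a small feed-forward neural network trained on 12 radiologist-coded MRI features to separate benign from malignant astrocytomas. The feature encoding, data, and network size are placeholder assumptions.

```python
# Hedged sketch: feed-forward network on 12 radiologist-coded MRI features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(280, 12)              # placeholder: 280 cases x 12 coded MRI features
y = np.random.randint(0, 2, 280)         # placeholder: 0 = benign, 1 = malignant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
net.fit(X_tr, y_tr)
print(f"hold-out accuracy = {net.score(X_te, y_te):.2f}")
```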

9.
Objective: To verify the diagnostic performance of quantitative dynamic contrast-enhanced MRI (DCE-MRI) parameters in benign and malignant breast lesions, to explore the diagnostic value of histogram analysis of the best-performing quantitative parameter, and to compare the value of DCE-MRI quantitative parameters and histogram parameters for differentiating benign from malignant breast lesions. Methods: 151 patients with breast lesions (166 lesions in total) examined with DCE-MRI were analyzed retrospectively and divided into benign and malignant groups according to pathology. Post-processing software was used to obtain the quantitative dynamic-enhancement parameters volume transfer constant (Ktrans), rate constant (Kep), extravascular extracellular space volume fraction (Ve), and vascular volume fraction (Vp). The Kruskal-Wallis H test was used to compare the quantitative DCE-MRI parameters between groups, and receiver operating characteristic (ROC) curves were used to evaluate each parameter's ability to differentiate benign from malignant lesions. The parameter with the best diagnostic performance was selected for histogram analysis; 14 histogram parameters were extracted, group differences were assessed, and logistic regression was used to identify the best parameters for differentiating benign from malignant breast lesions and to evaluate their diagnostic performance. The diagnostic performance of the conventional quantitative parameters and of the histogram approach was compared. Results: The Ktrans, Kep, and Vp values of malignant breast lesions were higher…
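A minimal sketch of the two analysis steps described above: first-order histogram features computed from voxel values of a quantitative parameter map (Ktrans here) within a lesion ROI, and a Kruskal-Wallis comparison between groups. The voxel values are synthetic placeholders, and the study's 14 histogram parameters are not reproduced exactly.

```python
# Hedged sketch: first-order histogram features of a parameter map ROI,
# plus a Kruskal-Wallis test between benign and malignant groups.
import numpy as np
from scipy import stats

def histogram_features(values):
    """First-order statistics of voxel values inside an ROI."""
    return {
        "mean": np.mean(values),
        "median": np.median(values),
        "p10": np.percentile(values, 10),
        "p90": np.percentile(values, 90),
        "skewness": stats.skew(values),
        "kurtosis": stats.kurtosis(values),
        "entropy": stats.entropy(np.histogram(values, bins=32, density=True)[0] + 1e-12),
    }

benign_ktrans = np.random.gamma(2.0, 0.10, 500)      # placeholder voxel values
malignant_ktrans = np.random.gamma(4.0, 0.15, 500)   # placeholder voxel values

h, p = stats.kruskal(benign_ktrans, malignant_ktrans)
print(histogram_features(malignant_ktrans))
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```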

10.
Objective: To explore the value of quantitative and semi-quantitative dynamic contrast-enhanced MRI (DCE-MRI) parameters in differentiating benign from malignant colorectal tumors. Methods: The clinical and imaging data of 105 patients with colorectal tumors treated at our hospital from April 2019 to April 2022 were analyzed retrospectively. All 105 patients underwent T1WI, T2WI, and DCE-MRI scanning; pseudo-color maps were generated for each parameter, the quantitative parameters transfer constant (Ktrans), rate constant (Kep), and extracellular volume fraction (Ve) were calculated, and the semi-quantitative parameters time to peak (TTP), maximum signal intensity (SImax), and initial 60-s area under the curve (iAUC) were derived from the time-concentration curves. With pathology as the reference standard, the patients were divided into a benign group (43 cases) and a colorectal cancer (CRC) group (62 cases). Intraclass correlation coefficients (ICC) were used to assess inter-observer agreement of the measurements, the DCE-MRI parameters were compared between the two groups, and their performance in differentiating benign from malignant colorectal tumors was analyzed. Results: Measurements by the two observers showed good agreement for all parameters (ICC > 0.8). Ktrans and Kep in the CRC group were significantly higher than in the benign group (P < 0.05); the two…
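A minimal sketch of the inter-observer agreement check mentioned above: an intraclass correlation coefficient for Ktrans measured by two readers, computed with the pingouin package (an assumption; any ICC implementation would do). The measurements are synthetic placeholders.

```python
# Hedged sketch: ICC for two readers' Ktrans measurements (long-format data).
import numpy as np
import pandas as pd
import pingouin as pg

n_lesions = 105
reader1 = np.random.rand(n_lesions)                        # placeholder Ktrans, reader 1
reader2 = reader1 + np.random.normal(0, 0.02, n_lesions)   # reader 2, small disagreement

df = pd.DataFrame({
    "lesion": np.tile(np.arange(n_lesions), 2),
    "reader": ["R1"] * n_lesions + ["R2"] * n_lesions,
    "ktrans": np.concatenate([reader1, reader2]),
})
icc = pg.intraclass_corr(data=df, targets="lesion", raters="reader", ratings="ktrans")
print(icc[["Type", "ICC"]])   # ICC > 0.8 is usually read as good agreement
```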

11.
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

12.
Primary lung cancer is the malignant tumor with the highest incidence and mortality both in China and worldwide, and its mortality is still rising, seriously endangering human health. Radiomics mines the deep relationships between high-dimensional imaging features and pathophysiological characteristics, and on that basis builds predictive models for pathological type, tumor stage, distant metastasis, and survival, guiding individualized diagnosis and treatment and improving prognosis. Positron emission tomography/computed tomography (PET/CT) reflects tumor metabolism and therefore offers high diagnostic accuracy and specificity. This article reviews the applications of PET/CT radiomics in the treatment of non-small cell lung cancer (NSCLC).

13.
Objective: To propose a prediction model for the degree of differentiation in patients with locally advanced esophageal cancer, built from the planning CT image by radiomics analysis with machine learning. Methods: Data from 104 patients with esophageal cancer who underwent chemoradiotherapy followed by surgery at Hiroshima University Hospital from 2003 to 2016 were analyzed. The treatment outcomes of these tumors were known prior to the study. The data were split into 3 sets: 57/16 tumors for training/validation and 31 tumors for model testing. The degree of differentiation of squamous cell carcinoma was classified into two groups: Group I comprised patients with poorly differentiated (POR) tumors, and Group II comprised patients with well or moderately differentiated tumors. Radiomics features were extracted from the tumor and peritumoral regions; a total of 3480 radiomics features per patient were extracted from the radiotherapy planning CT scan. Models were built with least absolute shrinkage and selection operator (LASSO) logistic regression applied to the set of candidate predictors, and the radiomics features were used as input data for machine learning. To build predictive models from the radiomics features, a neural network classifier was used. Precision, accuracy, and sensitivity were evaluated from confusion matrices, together with the area under the receiver operating characteristic curve (AUC). Results: The LASSO analysis of the training data identified 13 radiomics features from the CT images for the classification. The prediction model achieved its highest accuracy when using only CT radiomics features: the accuracy, specificity, and sensitivity of the predictive model were 85.4%, 88.6%, and 80.0%, respectively, and the AUC was 0.92. Conclusion: The proposed predictive model showed high accuracy for classifying the degree of differentiation of esophageal cancer. Given its good predictive ability, the method may help reduce the need for pathological examination by biopsy and assist in predicting local control. Advances in knowledge: For esophageal cancer, the degree of differentiation is an important index of aggressiveness. The current study proposed a prediction model for the degree of differentiation based on radiomics analysis.
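A minimal sketch of the second modeling stage described above: a small neural-network classifier trained on the 13 LASSO-selected CT radiomics features and evaluated with accuracy, sensitivity, specificity, and AUC. The feature matrices, labels, and network size are placeholder assumptions.

```python
# Hedged sketch: neural-network classifier on LASSO-selected radiomics features,
# evaluated on a held-out test set. Data arrays are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, recall_score, precision_score, roc_auc_score

X_train = np.random.rand(73, 13); y_train = np.random.randint(0, 2, 73)   # train + validation
X_test = np.random.rand(31, 13);  y_test = np.random.randint(0, 2, 31)    # held-out test

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0))
clf.fit(X_train, y_train)

prob = clf.predict_proba(X_test)[:, 1]
pred = (prob >= 0.5).astype(int)
print(f"accuracy={accuracy_score(y_test, pred):.3f}  "
      f"sensitivity={recall_score(y_test, pred):.3f}  "
      f"specificity={recall_score(y_test, pred, pos_label=0):.3f}  "
      f"precision={precision_score(y_test, pred, zero_division=0):.3f}  "
      f"AUC={roc_auc_score(y_test, prob):.3f}")
```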

14.
Objective: To explore the value of a radiomics signature based on early dynamic contrast-enhanced MR imaging for differentiating benign from malignant breast lesions. Methods: 144 patients (146 lesions) with breast nodules or masses found on dynamic contrast-enhanced MRI (DCE-MRI) were collected retrospectively; the 146 benign and malignant lesions were randomly sampled at a 7:3 ratio (102 lesions as the training group, 44 as the validation group). For all cases, based on the three-dimensional images of the lesions, the radiom…

15.
Objective: This study aims to build machine learning models based on CT radiomic features to predict which patients will develop metastasis after an osteosarcoma diagnosis. Methods and materials: This retrospective study included 81 patients with a histopathological diagnosis of osteosarcoma. The entire dataset was divided randomly into training (60%) and test (40%) sets. A data augmentation technique for the minority class was applied to the training set, along with feature selection and model training. The radiomic features were extracted from CT images of the local osteosarcoma. Three frequently used machine learning models were trained to distinguish patients with lung metastases (MT) from those without lung metastases (non-MT). The classifier with the highest area under the curve (AUC) was chosen and applied to the unseen test set to provide an unbiased evaluation of the final model. Results: The best classifier for predicting the MT and non-MT groups used a Random Forest algorithm. On the test set it achieved an accuracy of 73% (95% confidence interval [CI]: 54%; 87%) and an AUC of 0.79 (95% CI: 0.62; 0.96). The features retained in the model (the radiomics signature) were derived from Laplacian of Gaussian and wavelet filters. Conclusions: A machine learning-based CT radiomics approach can provide a non-invasive method with fair predictive accuracy for the risk of developing pulmonary metastasis in osteosarcoma patients. Advances in knowledge: Models based on CT radiomic analysis help assess the risk of developing pulmonary metastases in patients with osteosarcoma, allowing further studies for those with a worse prognosis.
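A minimal sketch of the class-imbalance handling and Random Forest modeling described above, assuming SMOTE as the minority-class augmentation technique (the abstract does not name one); the feature matrix and labels are placeholders. Only the training split is resampled, and AUC is reported on the untouched test split.

```python
# Hedged sketch: oversample the minority class in the training split (SMOTE assumed),
# fit a Random Forest, and evaluate AUC on the unseen test split.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X = np.random.rand(81, 120)              # placeholder CT radiomics features
y = np.random.randint(0, 2, 81)          # 1 = lung metastasis (MT), 0 = non-MT (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=7, stratify=y)
X_res, y_res = SMOTE(random_state=7).fit_resample(X_tr, y_tr)  # balance the training set only

rf = RandomForestClassifier(n_estimators=500, random_state=7).fit(X_res, y_res)
print(f"test AUC = {roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]):.2f}")
```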

16.
Radiography, 2022, 28(1): 61-67
Introduction: Deep learning approaches have shown high diagnostic performance in image classification tasks, such as differentiation of malignant tumors and calcified coronary plaque. However, it is unknown whether deep learning is useful for characterizing coronary plaques without calcification on coronary computed tomography angiography (CCTA). The purpose of this study was to compare the diagnostic performance of deep learning with a convolutional neural network (CNN) against that of radiologists in the assessment of coronary plaques. Methods: We retrospectively enrolled 178 patients (191 coronary plaques) who had undergone CCTA and integrated backscatter intravascular ultrasonography (IB-IVUS) studies. IB-IVUS diagnosed 81 fibrous and 110 fatty or fibro-fatty plaques. We manually captured vascular short-axis images of the coronary plaques as Portable Network Graphics (PNG) images (150 × 150 pixels). The display window level and width were 100 and 700 Hounsfield units (HU), respectively. The deep-learning system (CNN; GoogleNet Inception v3) was trained on 153 plaques; its performance was tested on 38 plaques. The areas under the curve (AUC) obtained by receiver operating characteristic analysis for the deep learning system and for two board-certified radiologists were compared. Results: With the CNN, the AUC and 95% confidence interval were 0.83 and 0.69-0.96; for radiologist 1 they were 0.61 and 0.42-0.80; for radiologist 2 they were 0.68 and 0.51-0.86. The AUC for the CNN was significantly higher than for radiologist 1 (p = 0.04); for radiologist 2 the difference was not significant (p = 0.22). Conclusion: The DL-CNN performed comparably to radiologists for discrimination between fibrous and fatty/fibro-fatty plaque on CCTA images. Implications for practice: The CNN and two radiologists showed comparable diagnostic performance in assessing 191 ROIs on CT images of coronary plaques whose type had been characterized by IB-IVUS.
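A minimal sketch of the image preparation step described above: mapping a CT patch from Hounsfield units to an 8-bit image with a display window of level 100 HU and width 700 HU, then saving it as a 150 × 150 PNG. The HU patch here is a random placeholder; in practice it would come from the DICOM series (e.g. read with pydicom) and be cropped to the plaque's short-axis view.

```python
# Hedged sketch: apply a display window (level 100 HU, width 700 HU) to a CT patch
# and save it as a PNG of the size used for the CNN input.
import numpy as np
from PIL import Image

def window_to_uint8(hu, level=100.0, width=700.0):
    """Map Hounsfield units to 0-255 using a display window."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu_patch = np.random.randint(-200, 500, size=(150, 150)).astype(np.float32)  # placeholder HU patch
Image.fromarray(window_to_uint8(hu_patch)).save("plaque_patch.png")
```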

17.
Objective: To explore the value of MRI-based radiomics for differentiating ovarian thecoma (OTCA) from broad ligament leiomyoma (BLM). Materials and methods: MRI images of 76 pathologically confirmed OTCAs and 58 BLMs treated at Anyang Cancer Hospital from January 2016 to March 2021 were analyzed retrospectively, and the MRI features of the two diseases were compared. Regions of interest were drawn on the largest tumor slice to extract texture features from fat-suppressed T2WI images. Stratified sampling was used to split the cases 7:3 into a training group (104 cases) and a test group (30 cases), which were further divided into OTCA and BLM subgroups according to pathology. In the training group, least absolute shrinkage and selection operator (LASSO) regression was used to screen the key features, and a linear equation built from the regression coefficients of the variables in the model was used to compute a radiomics signature score. Receiver operating characteristic (ROC) curves were used to evaluate the ability of the MRI image features, the radiomics signature, and their combination to distinguish the two diseases. Results: Four MRI features were independent discriminators of the two diseases: visibility of the ipsilateral ovary (χ²=5.503, P<0.05), peripheral cystic areas (χ²=7.693, P<0.05), degree of arterial-phase enhancement (P<0.05), and apparent diffusion coefficient (t=3.310, P<0.05). The radiomics signature scores of the OTCA and BLM subgroups differed significantly in both the training and test groups (P<0.05). Combining the MRI image features with the radiom…
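A minimal sketch of the radiomics signature score described above: once LASSO has selected the key texture features, the score is the intercept plus the sum of each retained feature multiplied by its regression coefficient. The coefficients and feature values below are illustrative placeholders, not the study's fitted model.

```python
# Hedged sketch: radiomics label score as a linear combination of LASSO-selected features.
import numpy as np

selected_coefs = np.array([0.84, -0.32, 0.15, 0.47])    # placeholder LASSO coefficients
intercept = -0.21                                        # placeholder intercept

def rad_score(features, coefs=selected_coefs, b0=intercept):
    """Radiomics signature score = b0 + sum(coef_i * feature_i)."""
    return b0 + float(np.dot(coefs, features))

lesion_features = np.array([1.2, 0.4, 2.1, 0.9])         # placeholder selected T2WI-FS features
print(f"rad-score = {rad_score(lesion_features):.3f}")
```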

18.
Objective: To build and validate a prediction model that can efficiently identify lung adenocarcinoma and its degree of invasiveness, and to analyze the model's predictive performance stratified by the nature of the nodule/mass. Methods: 2105 patients with pathologically confirmed pulmonary nodules/masses treated at our hospital from October 2011 to December 2018 were analyzed retrospectively. According to the nature of the tumor, they were divided into a ground-glass group (group A, 1711 cases) and a solid group (group B, 394 cases); within each group, cases were split into training and test sets with October 2017 as the cut-off. …

19.
Objective: To explore the value of 3.0 T MR intravoxel incoherent motion (IVIM) imaging in the differential diagnosis of spinal metastases versus spinal tuberculosis, and in identifying the primary tumor of spinal metastases. Methods: Imaging data were collected from patients with spinal metastases confirmed by biopsy or surgical pathology (71 cases: 43 lung cancer, 14 breast cancer, 14 renal cancer) and spinal tuberculosis (25 cases). Post-processing software was used to measure, within regions of interest (ROI), the standard apparent diffusion coefficient (ADCstand), …

20.
The quality of radiotherapy has greatly improved due to the high precision achieved by intensity-modulated radiation therapy (IMRT). Studies have been conducted to increase the quality of planning and reduce the costs associated with planning through automated planning methods; however, few studies have used deep learning for the optimization of planning. The purpose of this study was to propose an automated method based on a convolutional neural network (CNN) for predicting the dosimetric eligibility of patients with prostate cancer undergoing IMRT. Sixty patients with prostate cancer who underwent IMRT were included in the study. Experienced medical physicists divided the patients into two groups: those whose plans met all dose constraints and those whose plans did not. We used AlexNet (one of the common CNN architectures) for the CNN-based prediction of the two groups; an AlexNet CNN pre-trained on ImageNet was fine-tuned. Two dataset formats were used as input data: planning computed tomography (CT) images and structure labels. Five-fold cross-validation was used, and performance metrics included sensitivity, specificity, and prediction accuracy. Class activation mapping was used to visualize the internal representation learned by the CNN. Prediction accuracies of the model with the planning CT image dataset and with the structure label dataset were 56.7 ± 9.7% and 70.0 ± 11.3%, respectively. Moreover, the model with structure labels focused on areas associated with dose constraints. These results revealed the potential applicability of deep learning to the treatment planning of patients with prostate cancer undergoing IMRT.
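A minimal PyTorch sketch of the transfer-learning setup described above: an ImageNet-pretrained AlexNet with its final layer replaced for the two-class prediction (plan meets all dose constraints vs does not). The rendering of planning CT images or structure labels as 3-channel inputs and the training loop are assumptions sketched in comments; cross-validation and class activation mapping are omitted.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained AlexNet (torchvision >= 0.13)
# for a two-class dosimetric-eligibility prediction.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # two classes: meets / does not meet all constraints

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# for images, labels in train_loader:      # DataLoader of 224x224, 3-channel inputs
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```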
