19 similar documents found (search time: 156 ms)
Chengwen Deng, Dongyan Han, Ming Feng, Zhongwei Lv, Dan Li. The Journal of International Medical Research 2022, 50(4)
Objective To explore the differential diagnostic efficiency of the residual network (ResNet)50, random forest (RF), and DS ensemble models for papillary thyroid carcinoma (PTC) and other pathological types of thyroid nodules. Methods This study retrospectively analyzed 559 patients with thyroid nodules and collected thyroid pathological images and auxiliary examination results (laboratory and ultrasound results) to construct datasets. The pathological image dataset was used to train a ResNet50 model, the text dataset was used to train a random forest (RF) model, and a DS ensemble model was constructed from the results of the two models. The differential diagnostic values of the three models for PTC and other types of thyroid nodules were then compared. Results The DS ensemble model had the highest sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (85.87%, 97.18%, 93.77%, and 0.982, respectively). Conclusions Compared with the ResNet50 and RF models, trained only on imaging data or text information, respectively, the DS ensemble model showed better diagnostic value for PTC.
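The abstract does not say how the DS ensemble fuses the two models' outputs. Assuming "DS" denotes Dempster-Shafer evidence fusion (an assumption, not stated in the source), a minimal sketch of Dempster's combination rule over singleton hypotheses, treating each model's class probabilities as mass functions (all names and numbers below are illustrative):

```python
def ds_combine(m1, m2):
    # Dempster's rule restricted to singleton hypotheses: the fused mass
    # is the normalized product of agreeing masses; disagreeing pairs
    # contribute to the conflict term, which is normalized away.
    hypotheses = set(m1) | set(m2)
    conflict = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
                   for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / (1.0 - conflict)
            for h in hypotheses}

# Hypothetical outputs: ResNet50 softmax (images) and RF class
# probabilities (text), expressed as mass functions over two classes.
cnn_mass = {"PTC": 0.8, "other": 0.2}
rf_mass = {"PTC": 0.6, "other": 0.4}
fused = ds_combine(cnn_mass, rf_mass)
```

Because both sources lean toward PTC, the fused belief in PTC exceeds either individual estimate — the mechanism by which such an ensemble can outperform its members.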
13.
Objective To establish a deep learning–based model for diagnosing gastric cancer on gastric biopsy pathology slides and to evaluate its performance. Methods Pathology slides from patients diagnosed on gastric biopsy with normal gastric mucosa, chronic gastritis, high-grade intraepithelial neoplasia, or gastric adenocarcinoma at Zhejiang Provincial People's Hospital between January 2015 and January 2020 were retrospectively collected. Slides were scanned at 20× magnification into whole slide images (WSIs) and randomly divided into a patch-classification dataset, a slide-classification training set, and a slide-classification test set at a ratio of 2:2:1. After lesion regions in the patch-classification dataset were annotated and patches cropped, the patches were randomly divided into training, test, and validation sets at a ratio of 20:1:1. A patch-level cancer/non-cancer classification model was built on the EfficientNet and ResNet convolutional neural network (CNN) architectures, and its performance was evaluated by patch classification accuracy and area under the receiver operating characteristic curve (AUC). A cancer heatmap for each whole WSI was then assembled from this model's patch predictions, slide-level cancer/non-cancer classification features were extracted from the heatmap, and a LightGBM classifier was trained on them, completing diagnosis and recognition of whole gastric biopsy slides; recognition results were evaluated by AUC, accuracy, sensitivity, and specificity. Results Benign gastric disease cases meeting the inclusion and exclusion criteria (normal gastric mucosa, …
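The pipeline above extracts slide-level features from the patch-probability heatmap before LightGBM classification. The paper's exact feature set is not given; a plausible sketch with NumPy (the feature names and threshold are assumptions):

```python
import numpy as np

def heatmap_features(heatmap, thresh=0.5):
    # heatmap[i, j] = patch-level cancer probability from the CNN.
    probs = np.asarray(heatmap, float).ravel()
    return {
        "max_prob": float(probs.max()),          # strongest single patch
        "mean_prob": float(probs.mean()),        # overall cancer burden
        "cancer_ratio": float((probs >= thresh).mean()),  # fraction flagged
        "p90": float(np.quantile(probs, 0.9)),   # upper-tail concentration
    }

feats = heatmap_features([[0.1, 0.9], [0.6, 0.2]])
```

A vector of such features per WSI would then be fed to the slide-level LightGBM classifier.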
14.
Objective To propose an intelligent method for quantitative analysis of the Ki-67 index on breast cancer immunohistochemistry whole slide images (WSIs). Methods Pathology slides of breast cancer patients treated at Peking Union Medical College Hospital from January to December 2020 were retrospectively enrolled and scanned at 40× magnification into WSIs, and the Ki-67 index was read manually by two pathologists according to the 2019 guideline of the International Ki-67 in Breast Cancer Working Group. The WSIs were randomly divided into datasets A and B at a ratio of 5:8 (dataset A was further randomly divided into training, validation, and test sets at a ratio of 7:1:2). After pathologists manually annotated hotspot regions in dataset A, each WSI was randomly cropped into 2000 patches of 512 × 512 pixels at 40× magnification; 50 of these patches were randomly selected, tumor cells were annotated, and the Ki-67 index was calculated. A conditional random field model was used to fuse the spatial features of the patches, and a hotspot-region recognition model was built on features extracted by a pre-trained ResNet34 model, with its performance evaluated by accuracy. Within the hotspot regions, 10 fields of view were randomly selected at 40× magnification, in which the model automatically classified cells and computed the mean Ki-67 index. With manual reading as the gold standard, the accuracy of the model's Ki-67 estimates on dataset B was calculated, and the agreement between manual reading and model analysis was assessed by the Bland-Altman method. Results A total of …
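The agreement analysis at the end of the Methods can be sketched as follows. This is a generic Bland-Altman computation under the usual 95% limits-of-agreement convention, not the authors' code, and the paired readings below are made up for illustration:

```python
import numpy as np

def bland_altman(manual, model):
    # Bias (mean difference) and 95% limits of agreement
    # between paired readings of the same quantity.
    diff = np.asarray(model, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired Ki-67 readings (percent): manual vs. model.
bias, (lo, hi) = bland_altman([10, 20, 30, 40], [12, 19, 33, 41])
```

Good agreement means a bias near zero and narrow limits that contain nearly all paired differences.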
15.
Ultrasound in Medicine & Biology 2022, 48(11):2267–2275
The aim of the work described here was to develop an ultrasound (US) image–based deep learning model to reduce the rate of malignancy among breast lesions diagnosed as category 4A of the Breast Imaging-Reporting and Data System (BI-RADS) during the pre-operative US examination. A total of 479 breast lesions diagnosed as BI-RADS 4A in pre-operative US examination were enrolled. There were 362 benign lesions and 117 malignant lesions confirmed by postoperative pathology, with a malignancy rate of 24.4%. US images were collected from the database server and randomly divided into training and testing cohorts at a ratio of 4:1. To correctly classify malignant and benign tumors diagnosed as BI-RADS 4A in US, four deep learning models — MobileNet, DenseNet121, Xception and Inception V3 — were developed. The performance of the deep learning models was compared using the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Meanwhile, the robustness of the models was evaluated by five-fold cross-validation. Among the four models, MobileNet proved to be the optimal model, with the best performance in classifying benign and malignant lesions among BI-RADS 4A breast lesions. The AUROC, accuracy, sensitivity, specificity, PPV and NPV of the optimal model in the testing cohort were 0.897, 0.913, 0.926, 0.899, 0.958 and 0.784, respectively. About 14.4% of patients were expected to be upgraded to BI-RADS 4B in US with the assistance of the MobileNet model. The deep learning model MobileNet can help to reduce the rate of malignancy among BI-RADS 4A breast lesions in pre-operative US examinations, which is valuable to clinicians in tailoring treatment for suspicious breast lesions identified on US.
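The reported sensitivity, specificity, PPV and NPV all derive from a single binary confusion matrix. A minimal helper showing how each metric is computed (the counts below are made up for illustration, not the paper's data):

```python
def binary_metrics(tp, fp, tn, fn):
    # Standard binary classification metrics from confusion-matrix counts,
    # with "positive" = malignant.
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on malignant cases
        "specificity": tn / (tn + fp),   # recall on benign cases
        "ppv": tp / (tp + fp),           # precision
        "npv": tn / (tn + fn),
    }

m = binary_metrics(tp=25, fp=7, tn=62, fn=2)
```

Note the trade-off the abstract implies: with a 24.4% malignancy rate, PPV and NPV depend on prevalence as well as on sensitivity and specificity.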
17.
Objective To explore the feasibility of building a computer-aided diagnosis model for multiple stained pathology images from patients with non-inflammatory aortic medial degeneration (MD). Methods Pathology slides of non-inflammatory aortic surgical specimens from patients with thoracic aortic aneurysm and dissection treated at Beijing Anzhen Hospital, Capital Medical University, from July to December 2018 were retrospectively collected. The slides were scanned at 400× magnification into whole slide images (WSIs), and lesions were annotated by two pathologists. The annotated WSIs were randomly divided into a training set and a test set at a ratio of 6:1. SE-EmbraceNet was trained on the training set to build a multi-class MD classification model for multiple stained pathology images, covering intralamellar mucoid extracellular matrix accumulation (MEMA-I), translamellar mucoid extracellular matrix accumulation (MEMA-T), elastic fiber fragmentation and/or loss (EFFL), and loss of smooth muscle cell nuclei (smooth muscle ce…
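The 6:1 random split of annotated WSIs can be sketched as below — a generic shuffled split, not the authors' code; the seed is arbitrary:

```python
import random

def split_6_to_1(items, seed=0):
    # Shuffle a copy of the items, then cut so that train:test ≈ 6:1.
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * 6 / 7)
    return shuffled[:cut], shuffled[cut:]

train, test = split_6_to_1(range(14))
```

In practice such a split would likely be stratified by MD subtype so every class appears in both sets, but the abstract does not say whether the authors did so.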
18.
Julia P. Owen, Marian Blazes, Niranchana Manivannan, Gary C. Lee, Sophia Yu, Mary K. Durbin, Aditya Nair, Rishi P. Singh, Katherine E. Talcott, Alline G. Melo, Tyler Greenlee, Eric R. Chen, Thais F. Conti, Cecilia S. Lee, Aaron Y. Lee. Biomedical Optics Express 2021, 12(9):5387
This work explores a student-teacher framework that leverages unlabeled images to train lightweight deep learning models with fewer parameters to perform fast automated detection of optical coherence tomography B-scans of interest. Twenty-seven lightweight models (LWMs) from four families of models were trained on expert-labeled B-scans (∼70 K) as either "abnormal" or "normal", which established a baseline performance for the models. Then the LWMs were trained from random initialization using a student-teacher framework to incorporate a large number of unlabeled B-scans (∼500 K). A pre-trained ResNet50 model served as the teacher network. The ResNet50 teacher model achieved 96.0% validation accuracy, and the validation accuracy achieved by the LWMs ranged from 89.6% to 95.1%. The best-performing LWMs were 2.53 to 4.13 times faster than ResNet50 (0.109 s to 0.178 s vs. 0.452 s). All LWMs benefitted from enlarging the training set with unlabeled B-scans in the student-teacher framework, with several models achieving validation accuracy of 96.0% or higher. The three best-performing models achieved sensitivity and specificity comparable to the teacher network in two hold-out test sets. We demonstrated the effectiveness of a student-teacher framework for training fast LWMs for automated detection of B-scans of interest, leveraging unlabeled, routinely available data.
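The core student-teacher step — turning the teacher's predictions on unlabeled B-scans into training targets for the students — can be sketched as confidence-thresholded pseudo-labeling. The threshold and hard-label selection rule are assumptions; the paper may instead train students on the teacher's soft outputs:

```python
def pseudo_label(teacher_probs, thresh=0.9):
    # teacher_probs[i] = teacher's P(abnormal) for unlabeled scan i.
    # Keep only confident predictions as hard pseudo-labels
    # (1 = abnormal, 0 = normal); discard ambiguous scans.
    kept = []
    for i, p in enumerate(teacher_probs):
        if max(p, 1.0 - p) >= thresh:
            kept.append((i, int(p >= 0.5)))
    return kept

labels = pseudo_label([0.97, 0.55, 0.04, 0.70])
```

The pseudo-labeled scans are then mixed with the expert-labeled set to train each lightweight student from random initialization.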
19.
Ultrasound in Medicine & Biology 2023, 49(8):1760–1767
Objective The goal of the work described here was to construct a deep learning–based intelligent diagnostic model for ophthalmic ultrasound images to provide auxiliary analysis for the intelligent clinical diagnosis of posterior ocular segment diseases. Methods The InceptionV3–Xception fusion model was established by using two pre-trained network models — InceptionV3 and Xception — in series to achieve multilevel feature extraction and fusion, and a classifier more suitable for the multiclassification recognition task of ophthalmic ultrasound images was designed to classify 3402 ophthalmic ultrasound images. Accuracy, macro-average precision, macro-average sensitivity, macro-average F1 value, receiver operating characteristic (ROC) curves and area under the curve were used as model evaluation metrics, and the credibility of the model was assessed by testing its decision basis with a gradient-weighted class activation mapping method. Results The accuracy, precision, sensitivity and area under the ROC curve of the InceptionV3–Xception fusion model on the test set reached 0.9673, 0.9521, 0.9528 and 0.9988, respectively. The model's decision basis was consistent with the ophthalmologist's clinical diagnostic basis, which indicates that the model has good reliability. Conclusion The deep learning–based intelligent diagnostic model for ophthalmic ultrasound images can accurately screen and identify five posterior ocular segment diseases, which is beneficial to the intelligent development of ophthalmic clinical diagnosis.
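The macro-averaged precision and sensitivity reported above are unweighted per-class means, computed from a multiclass confusion matrix. A minimal sketch (the 2 × 2 example is illustrative only — the paper's task has five classes):

```python
import numpy as np

def macro_metrics(cm):
    # cm[i, j] = count of samples with true class i predicted as class j.
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)  # per-class, over predicted counts
    recall = tp / cm.sum(axis=1)     # per-class, over true counts
    # Macro-average: unweighted mean over classes, so rare diseases
    # count as much as common ones.
    return precision.mean(), recall.mean()

macro_p, macro_r = macro_metrics([[8, 2], [1, 9]])
```

Macro-averaging is the natural choice here because a class-imbalanced ophthalmic dataset would otherwise let frequent diseases dominate the score.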