Similar Documents
19 similar documents found.
1.
Pathology is the gold standard for disease diagnosis. With whole-slide scanning technology converting pathology slides into digital images, artificial intelligence, and deep learning models in particular, has shown great potential in pathological image analysis. Applications of AI to whole-slide images in lung cancer span histopathological subtyping, tumor microenvironment analysis, and prediction of treatment response and survival, and may support precision treatment decisions in the clinic. However, factors such as insufficient annotated data and variation in slide quality still limit the development of pathological image analysis. This article reviews advances in AI-based pathological image analysis in lung cancer and discusses future directions.

2.
The incidence of bladder cancer is rising year by year, and the gold standard for its diagnosis remains histopathological biopsy. Whole-slide digitization produces large numbers of high-resolution pathology images and has driven the development of digital pathology. With the surge of interest in artificial intelligence, deep learning, a newer branch of AI, has achieved notable results in pathological image analysis for bladder cancer, including tumor diagnosis, molecular subtyping, and prediction of prognosis and recurrence. Conventional pathology depends heavily on the pathologist's expertise and accumulated experience, making it subjective and poorly reproducible. By automatically extracting image features, deep learning can assist pathologists in decision-making, improving diagnostic efficiency and reproducibility and reducing missed and incorrect diagnoses. This can ease the current pressures of staff shortages and unevenly distributed medical resources, and also advance precision medicine. This article reviews the latest progress and prospects of deep learning in bladder cancer pathological image analysis.

3.
Quantitative pathological image analysis and the control of its measurement error
Pathological image analysis provides quantitative, digital evidence for pathological diagnosis and has become an indispensable aid for pathologists in reaching objective, accurate, and reliable diagnoses. However, measurement error introduced by instruments and measurement methods has a decisive influence on the accuracy of results. This article briefly introduces the principles of pathological image analysis and discusses the factors that affect measurement results and how measurement error can be controlled.
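The repeatability aspect of measurement-error control can be illustrated with a minimal sketch (the numbers below are hypothetical, not from the article): the coefficient of variation (CV) of repeated measurements of the same object summarizes how much the instrument and method scatter the result.

```python
import statistics

def coefficient_of_variation(measurements):
    """CV = standard deviation / mean; a common summary of
    measurement repeatability in quantitative image analysis."""
    mean = statistics.fmean(measurements)
    sd = statistics.stdev(measurements)
    return sd / mean

# Hypothetical repeated area measurements (um^2) of one nucleus
areas = [52.1, 50.8, 51.5, 53.0, 51.9]
cv = coefficient_of_variation(areas)
# A small CV indicates good repeatability of the measuring procedure
```

A CV tracked across repeated measurements is one simple way to decide whether an observed difference between specimens exceeds the noise of the method itself.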

4.
In recent years, artificial intelligence has been applied widely in mammography. Deep learning is now among the most advanced approaches in AI image processing, and its performance in localizing tumors and characterizing their nature has reached the level of specialist radiologists. Deep learning also serves other important roles, such as building predictive models to assess disease risk, extracting imaging features to reduce recall rates, and synthesizing mammographic images for education and research. This article reviews the concept of deep learning and its applications in mammography, for reference by breast specialists and radiologists.

5.
Thyroid cancer incidence has risen rapidly worldwide over recent decades. Ultrasound is the first-line examination for thyroid nodules: it is inexpensive, convenient, and easy to deploy, but it demands substantial experience from the interpreting physician. Diagnostic criteria for thyroid cancer on ultrasound continue to be refined, yet promoting, standardizing, and maturing their use requires considerable human and financial resources and remains difficult to achieve. In recent years demand for health care in China has been enormous and medical resources unevenly distributed, while clinical decision-making requires comprehensive assessment of the malignancy risk of thyroid nodules and the nature of cervical lymph nodes, making the diagnostic workload heavy and complex. Deep learning algorithms have advanced rapidly and shown strong performance in medical image diagnosis; combined with large datasets, they can effectively address these clinical problems and show clear advantages and promise for ultrasound diagnosis of thyroid nodules. Automatic ultrasound diagnosis systems built by applying deep learning to ultrasound images can assist thyroid tumor diagnosis, simplify sonographers' workflow, and improve clinical efficiency. This article reviews recent research on deep learning for ultrasound diagnosis of thyroid tumors and cervical lymph nodes.

6.
Objective To explore the application of quantitative molecular pathology techniques in diabetes research in light of recent methodological advances. Methods RT-PCR, ISH, IHC, FCM, and image analysis were applied in combination to study the molecular pathogenesis of diabetes. Results Combined micronutrients were shown to up-regulate insulin expression in islet β cells at both the transcriptional and translational levels, to protect normal islet structure, and to antagonize the onset and progression of diabetes. Conclusion Continued improvement of quantitative molecular pathology techniques helps raise the level of research into the molecular pathogenesis of diabetes.

7.
Objective Rapid on-site cytopathological evaluation is a common diagnostic approach for patients with intermediate- to late-stage lung cancer, but it suffers from limited diagnostic accuracy and a shortage of cytopathologists. This paper proposes a deep learning-based multi-class method for cytopathology smears, aiming to differentiate six common types of lung cytopathology smear. Methods We propose a ResNet-18 network enhanced with the CBAM attention mechanism and a coarse-to-fine multi-class framework, and analyze the feature activation maps of the deep learning classifier. Results A total of 313 Diff-Quik-stained lung cytopathology smears were collected, of which 259 were used for training and 54 for testing. In classifying six cell types (normal lung tissue, small cell carcinoma, non-small cell carcinoma, squamous carcinoma, adenocarcinoma, and carcinoid), the proposed method achieved an accuracy of 70.4%, precision of 81.5%, recall of 78.2%, and an F1 score of 78.9%. In agreement with the gold standard, the model was comparable to senior cytopathologists and better than junior cytopathologists. Conclusion We propose a deep learning multi-class method for lung cytopathology smear diagnosis that can assist cytopathologists in diagnosing lung cancer patients and improve the feasibility of rapid on-site cytopathological evaluation.
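The CBAM mechanism mentioned above reweights feature channels using pooled spatial statistics. Below is a minimal NumPy sketch of only the channel-attention half of CBAM (spatial attention omitted; the weights are random placeholders, not the authors' trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) are the shared-MLP weights."""
    avg = feat.mean(axis=(1, 2))                  # (C,) global average pool
    mx = feat.max(axis=(1, 2))                    # (C,) global max pool
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared 2-layer MLP, ReLU
    weights = sigmoid(mlp(avg) + mlp(mx))         # (C,) attention in (0, 1)
    return feat * weights[:, None, None]          # reweight each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                           # r is the reduction ratio
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = cbam_channel_attention(feat, w1, w2)
```

In the actual network this block would sit after a ResNet-18 convolution stage; here it only demonstrates the pooling-MLP-sigmoid pattern.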

8.
Objective To summarize the current applications of artificial intelligence, represented by deep learning, across the radiotherapy workflow, and to discuss outstanding problems and future prospects. Methods The PubMed and CNKI databases were searched for literature published between 2012 and 2023 using the Chinese and English keywords "radiotherapy, artificial intelligence, deep learning, automatic contouring, quality assurance, image registration". Inclusion criteria: (1) AI applied to image registration and automatic contouring in radiotherapy; (2) AI applied to radiotherapy treatment planning; (3) AI applied to radiotherapy quality assurance and outcome prediction. Exclusion criterion: low relevance to radiotherapy. Results Deep learning plays an important role in many stages of radiotherapy, especially medical image processing. Existing studies show that in image synthesis and registration and in automatic target delineation, deep learning can reduce radiation oncologists' workload and improve the consistency of results, while its application in treatment planning and quality assurance still requires further development and clinical validation to realize its full potential. Conclusion The combination of artificial intelligence and radiotherapy has initially…

9.
Objective To explore the value of digital image analysis with polarized-light dermoscopy in the diagnosis of pigmented skin lesions. Methods A dermoscope fitted with a polarized-light device and a hardware/software digital imaging system were used to capture and analyze images of 38 pigmented lesions, and findings were compared with histopathology. Results The surface pigment patterns observed showed distinct distribution, arrangement, and structural features for the different pigmented lesions, and these correlated with histopathological findings. Conclusion Noninvasive image capture with polarized-light dermoscopy improves the visibility of pigment distribution patterns and features in lesion images, providing a reference for the early diagnosis and differential diagnosis of certain pigmented skin diseases and melanocytic tumors.

10.
Objective To explore the value of 18F-FDG PET/CT in target delineation for cervical cancer radiotherapy. Methods Thirty-three patients with pathologically confirmed stage IIIb cervical squamous carcinoma treated between March 2015 and October 2016 were enrolled. Three radiation oncologists independently delineated the gross tumor volume (GTV) of the primary lesion on CT alone and on fused PET/CT images, and the delineated target volumes were compared. Results For each of the three physicians, GTVs defined on CT alone differed significantly from those defined on fused PET/CT (P < 0.001). GTV_CT differed significantly among physicians (F = 4.28, P < 0.001), whereas GTV_PET/CT did not (F = 0.21, P = 0.81). The variation in delineated tumor volume among the three physicians was smaller with PET/CT (7.75 cm³ vs 24.50 cm³). Conclusion Fused PET/CT images can improve the accuracy of target delineation.
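The inter-observer comparison above can be expressed as a simple spread statistic over the volumes contoured by several physicians. The sketch below uses hypothetical volumes (not the study's data) only to illustrate the reported trend that PET/CT narrows inter-observer variation:

```python
import statistics

def interobserver_cv(volumes_cm3):
    """Coefficient of variation of GTVs contoured by several observers;
    a lower CV means better inter-observer agreement."""
    return statistics.stdev(volumes_cm3) / statistics.fmean(volumes_cm3)

# Hypothetical GTVs (cm^3) from three physicians
ct_only = [58.0, 74.5, 82.5]   # wider spread on CT alone
pet_ct = [63.0, 66.5, 70.8]    # tighter spread on fused PET/CT
assert interobserver_cv(pet_ct) < interobserver_cv(ct_only)
```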

11.
Deep learning is becoming increasingly popular and available to new users, particularly in the medical field. Deep learning image segmentation, outcome analysis, and generators rely on presentation of Digital Imaging and Communications in Medicine (DICOM) images and often radiation therapy (RT) structures as masks. Although the technology to convert DICOM images and RT structures into other data types exists, no purpose-built Python module for converting NumPy arrays into RT structures exists. The 2 most popular deep learning libraries, Tensorflow and PyTorch, are both implemented within Python, and we believe a set of tools built in Python for manipulating DICOM images and RT structures would be useful and could save medical researchers large amounts of time and effort during the preprocessing and prediction steps. Our module provides intuitive methods for rapid data curation of RT-structure files by identifying unique region of interest (ROI) names and ROI structure locations and allowing multiple ROI names to represent the same structure. It is also capable of converting DICOM images and RT structures into NumPy arrays and SimpleITK Images, the most commonly used formats for image analysis and inputs into deep learning architectures and radiomic feature calculations. Furthermore, the tool provides a simple method for creating a DICOM RT-structure from predicted NumPy arrays, which are commonly the output of semantic segmentation deep learning models. Accessing DicomRTTool via the public GitHub project invites open collaboration, and the deployment of our module in PyPi ensures painless distribution and installation.
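The curation step described above, letting multiple ROI-name spellings represent the same structure, can be sketched in plain Python. This is an illustration of the idea only, not DicomRTTool's actual API:

```python
def build_roi_associations(wanted):
    """Map many observed ROI-name spellings to one canonical structure
    name, the kind of curation step a DICOM RT-structure tool automates.
    `wanted` maps canonical name -> accepted variants (case-insensitive)."""
    lookup = {}
    for canonical, variants in wanted.items():
        for name in [canonical, *variants]:
            lookup[name.lower()] = canonical
    return lookup

assoc = build_roi_associations({"GTV": ["gtv_primary", "GTV-1"],
                                "SpinalCord": ["cord", "spinal_cord"]})
assert assoc["gtv-1"] == "GTV"      # variant spelling resolves
assert assoc["cord"] == "SpinalCord"
```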

12.
In recent years, deep learning has increasingly been applied to automatic organ segmentation and contouring in radiotherapy, but automatic segmentation of pelvic organs on CT images remains challenging. This article introduces the basic network models and frameworks commonly used for image segmentation, along with networks, loss functions, and dataset refinements suited to medical image segmentation; summarizes the main networks and results from the past five years for deep learning-based automatic segmentation of male pelvic organs on CT images; and discusses the challenges and limitations of automatic segmentation with deep learning as well as potential future research directions.
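Among the loss functions commonly adapted for medical image segmentation is the soft Dice loss, which directly optimizes region overlap. A minimal NumPy sketch (generic formulation, not tied to any specific paper in this list):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, widely used for CT organ segmentation:
    1 - 2|P∩T| / (|P| + |T|). pred: probabilities in [0, 1];
    target: binary ground-truth mask. eps avoids division by zero."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

perfect = np.array([[0, 1], [1, 0]], dtype=float)
empty = np.zeros((2, 2))
# Perfect overlap -> loss near 0; no overlap -> loss near 1
loss_good = soft_dice_loss(perfect, perfect)
loss_bad = soft_dice_loss(empty, perfect)
```

Unlike voxel-wise cross-entropy, Dice loss is insensitive to the large class imbalance between small organs and background, which is one reason it is popular for pelvic CT segmentation.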

13.
Objective: To evaluate the accuracy and clinical value of computer-aided automatic detection of oral squamous cell carcinoma in histopathological images. Methods: The study included 1,224 oral histopathology images collected, prepared, and classified by medical experts at the B. Borooah Cancer Institute from 230 patients. Ten-fold cross-validation was used to train and test the image samples and verify the validity of the model. The classic ResNet50 model was adopted as the deep learning framework and modified according to the characteristics of the slide images to ensure effective automatic detection. Results: The classification experiments showed that the proposed deep learning model can detect oral squamous cell carcinoma in histopathological images quickly and accurately; the receiver operating characteristic (ROC) curves and area under the curve (AUC) (best AUC = 0.91, mean AUC = 0.88) demonstrated its performance, as did the model's accuracy (ACC, 0.976), sensitivity (SEN, 0.981), and specificity (SPE, 0.971). Conclusion: The proposed deep learning framework performs well for automatic detection of oral squamous cell carcinoma, and the results can be effectively translated into software that is of great help for computer-aided clinical diagnosis.
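The ten-fold cross-validation used above splits the samples so each image is tested exactly once. A minimal stratified variant (keeping class proportions roughly equal per fold) can be sketched in the standard library; the labels below are hypothetical placeholders:

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Split sample indices into k folds while keeping each class's
    proportion roughly equal across folds (stratified k-fold CV)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                 # randomize within each class
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)      # deal indices round-robin
    return folds

labels = ["tumor"] * 60 + ["normal"] * 40   # hypothetical label list
folds = stratified_kfold(labels, k=10)
# Each fold serves once as the test set; the other nine train the model
```

Each of the 10 folds here contains 6 "tumor" and 4 "normal" indices, mirroring the 60/40 class balance.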

14.
Progress in individualized cancer therapy places higher demands on the precise assessment of tumor histopathological biomarkers. The development of digital pathology (DP) has laid the foundation for applying artificial intelligence (AI)-assisted diagnosis to tumor histopathological image analysis. Deep learning (DL) algorithms based on convolutional neural networks can couple DP images with computational analysis and promise to become an important tool for the quantitative evaluation of tumor tissue biomarkers. This article outlines the development of AI in histopathology and, taking image analysis of the widely studied and clinically relevant molecular pathology markers Her-2, Ki-67, and PD-L1 as concrete examples, reviews current progress in AI analysis of tumor pathology biomarkers. AI-assisted pathological diagnosis offers strong objectivity and high reproducibility and enables quantitative analysis of tumor tissue biomarkers, thereby overcoming the challenges of manual interpretation by pathologists and improving the precision of pathological diagnosis. Building AI interpretation models for tumor tissue biomarkers with computational tools is a key step toward future intelligent cancer diagnosis and treatment systems.
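For a marker such as Ki-67, the quantitative readout is simply the fraction of tumor nuclei that stain positive, pooled over the analyzed tiles. A minimal sketch (the per-tile counts are hypothetical, standing in for the output of a nucleus-detection model):

```python
def ki67_index(tile_counts):
    """Ki-67 labelling index pooled over image tiles.
    tile_counts: iterable of (positive_nuclei, total_nuclei) per tile,
    as would be produced by an AI nucleus-detection/classification model."""
    pos = sum(p for p, _ in tile_counts)
    total = sum(t for _, t in tile_counts)
    if total == 0:
        raise ValueError("no nuclei counted")
    return pos / total

tiles = [(120, 480), (95, 410), (15, 110)]  # hypothetical per-tile counts
idx = ki67_index(tiles)                      # 230 / 1000 = 0.23
```

Pooling the counts before dividing, rather than averaging per-tile ratios, keeps tiles with few nuclei from being over-weighted.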

15.
Objective: Automated Pap smear cervical screening is one of the most effective imaging-based cancer detection tools for categorizing cervical cell images as normal or abnormal. Traditional classification methods depend on hand-engineered features and show limitations on large, diverse datasets. Effective feature extraction requires efficient image preprocessing and segmentation, which remains a prominent challenge in the field of pathology. In this paper, deep learning is used for cell image classification on large datasets. Methods: The proposed method combines abstract and complicated representations of data acquired in a hierarchical architecture. A Convolutional Neural Network (CNN) learns meaningful kernels that simulate the extraction of visual features such as edges, size, shape, and color in image classification. A deep prediction model built on such a CNN classifies the grades of cancer: normal, mild, moderate, severe, and carcinoma. It is an effective computational model that uses multiple processing layers to learn complex features. A large dataset was prepared for this study by systematically augmenting the images in the Herlev dataset. Result: Among the three sets considered for the study, the first set of single-cell enhanced original images achieved an accuracy of 94.1% for 5-class, 96.2% for 4-class, 94.8% for 3-class, and 95.7% for 2-class problems. The second set, of contour-extracted images, showed accuracies of 92.14%, 92.9%, 94.7%, and 89.9% for the 5-, 4-, 3-, and 2-class problems. The third set, of binary images, showed 85.07% for 5-class, 84% for 4-class, 92.07% for 3-class, and the highest accuracy of 99.97% for 2-class problems. Conclusion: The experimental results of the proposed model showed effective classification of different grades of cancer in cervical cell images, exhibiting the extensive potential of deep learning in Pap smear cell image classification.
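The systematic augmentation mentioned above typically multiplies each image by applying label-preserving geometric transforms. A minimal NumPy sketch of one such scheme (an illustrative choice, not necessarily the exact transforms used on the Herlev dataset):

```python
import numpy as np

def augment(image):
    """Simple geometric augmentations of one cell image: the four
    90-degree rotations plus a mirrored copy of each, an 8x expansion."""
    out = []
    for k in range(4):                 # 0/90/180/270 degree rotations
        rot = np.rot90(image, k)
        out.append(rot)
        out.append(np.fliplr(rot))     # horizontal mirror of each rotation
    return out

img = np.arange(16).reshape(4, 4)      # stand-in for a cell image
augmented = augment(img)               # 8 label-preserving variants
```

Because rotations and flips only rearrange pixels, every augmented copy keeps the original class label, which is what makes this a safe way to enlarge a training set.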

16.
The application of deep neural networks for segmentation in medical imaging has gained substantial interest in recent years. In many cases, this variant of machine learning has been shown to outperform other conventional segmentation approaches. However, little is known about its general applicability. In particular, the robustness against image modifications (e.g., intensity variations, contrast variations, spatial alignment) has hardly been investigated. Data augmentation is often used to compensate for sensitivity to such changes, although its effectiveness has not been studied systematically. Therefore, the goal of this study was to systematically investigate the sensitivity of deep learning segmentation of medical images to variations in the input data. This approach was tested with two publicly available segmentation frameworks (DeepMedic and TractSeg). In the case of DeepMedic, the performance was tested using ground truth data, while in the case of TractSeg, the STAPLE technique was employed. In both cases, sensitivity analysis revealed significant dependence of the segmentation performance on input variations. The effects of different data augmentation strategies were also shown, making this type of analysis a useful tool for selecting the right parameters for augmentation. The proposed analysis should be applied to any deep learning image segmentation approach, unless the assessment of sensitivity to input variations can be directly derived from the network.
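The core of such a sensitivity analysis is to perturb the input, re-run the segmenter, and score agreement with the unperturbed result. A toy sketch (the threshold "segmenter" stands in for a trained network; all data are synthetic):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def sensitivity_to_intensity(image, segment, shifts):
    """Sensitivity analysis: segment the original image, perturb its
    intensities, re-segment, and record the Dice overlap per shift."""
    baseline = segment(image)
    return [dice(baseline, segment(image + s)) for s in shifts]

rng = np.random.default_rng(1)
image = rng.random((32, 32))
segment = lambda img: img > 0.5            # stand-in for a trained network
overlaps = sensitivity_to_intensity(image, segment, shifts=[0.0, 0.05, 0.2])
# Larger intensity shifts produce lower overlap with the baseline mask
```

Sweeping the perturbation magnitude and plotting the resulting Dice curve is exactly the kind of analysis the study argues should accompany any deployed segmentation model.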

17.
Objective: This study aims to develop automatic breast tumor detection and classification, including automatic tumor volume estimation, using deep learning techniques based on computerized analysis of breast ultrasound images. Because detecting and diagnosing tumors with handheld ultrasound depends on radiologist skill and image quality, this approach is intended to assist the radiologist's decision in breast cancer diagnosis. Material and Methods: Breast ultrasound images were provided by the Department of Radiology of Thammasat University and the Queen Sirikit Center of Breast Cancer of Thailand. The dataset consists of 655 images, including 445 benign and 210 malignant. Several data augmentation methods, including blur, vertical flip, horizontal flip, and noise, were applied to enlarge the training and testing datasets. Tumor detection, localization, and classification were performed by drawing the appropriate bounding box around the tumor using the YOLOv7 architecture based on deep learning techniques. Automatic tumor volume estimation was then performed using a simple pixel-per-metric technique. Result: The model demonstrated excellent tumor detection performance with a confidence score of 0.95. In addition, the model yielded satisfactory predictions on the test sets, with a lesion classification accuracy of 95.07%, a sensitivity of 94.97%, a specificity of 95.24%, a PPV of 97.42%, and an NPV of 90.91%. Conclusion: Automatic breast tumor detection and classification, including automatic tumor volume estimation, using deep learning techniques yielded satisfactory predictions in distinguishing benign from malignant breast lesions. Our approach could be integrated into a conventional breast ultrasound machine to assist the radiologist's decision in breast cancer diagnosis.
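The pixel-per-metric idea above calibrates pixel distances against an object of known physical size, then converts the detector's bounding box to physical units. A minimal sketch with hypothetical numbers:

```python
def pixels_per_mm(ref_pixels, ref_mm):
    """Calibrate the scale from a reference of known physical size."""
    return ref_pixels / ref_mm

def lesion_size_mm(bbox_w_px, bbox_h_px, ppm):
    """Convert a detector's bounding-box size from pixels to millimetres
    using the pixel-per-metric calibration."""
    return bbox_w_px / ppm, bbox_h_px / ppm

ppm = pixels_per_mm(ref_pixels=200, ref_mm=10.0)  # 20 px per mm (hypothetical)
w_mm, h_mm = lesion_size_mm(240, 160, ppm)        # hypothetical YOLO box
# A volume estimate can then be derived from the physical box dimensions
```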

18.
Artificial intelligence, and in particular deep learning using convolutional neural networks, has been used extensively for image classification and segmentation, including on medical images for diagnosis and prognosis prediction. Use in radiotherapy prognostic modelling is still limited, however, especially as applied to toxicity and tumour response prediction from radiation dose distributions. We review and summarise studies that applied deep learning to radiotherapy dose data, in particular studies that utilised full three-dimensional dose distributions. Ten papers have reported on deep learning models for outcome prediction utilising spatial dose information, whereas four studies used reduced-dimensionality (dose-volume histogram) information for prediction. Many of these studies suffer from the same issues that plagued early normal tissue complication probability modelling, including small, single-institutional patient cohorts, lack of external validation, poor data and model reporting, use of late toxicity data without taking time-to-event into account, and nearly exclusive focus on clinician-reported complications. They demonstrate, however, how radiation dose, imaging and clinical data may be technically integrated in convolutional neural network-based models; and some studies explore how deep learning may help better understand spatial variation in radiosensitivity. In general, there are a number of issues specific to the intersection of radiotherapy outcome modelling and deep learning, for example translation of model developments into treatment plan optimisation, which will require further combined effort from the radiation oncology and artificial intelligence communities.
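The dose-volume histogram mentioned above reduces a full 3-D dose distribution to, for each dose level, the fraction of the structure's volume receiving at least that dose. A minimal NumPy sketch on a toy dose grid:

```python
import numpy as np

def cumulative_dvh(dose, bin_width=1.0):
    """Cumulative dose-volume histogram from a 3-D dose grid: for each
    dose level d, the fraction of voxels receiving >= d Gy."""
    levels = np.arange(0.0, dose.max() + bin_width, bin_width)
    volume_fraction = np.array([(dose >= d).mean() for d in levels])
    return levels, volume_fraction

dose = np.array([[[0.0, 10.0], [20.0, 30.0]],
                 [[40.0, 50.0], [60.0, 70.0]]])  # toy 2x2x2 dose grid (Gy)
levels, vf = cumulative_dvh(dose, bin_width=10.0)
# vf is monotonically non-increasing: higher dose, smaller covered volume
```

This reduction is exactly what discards the spatial information that the reviewed 3-D deep learning models try to retain.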

19.
We examined whether automated visual evaluation (AVE), a deep learning computer application for cervical cancer screening, can be used on cervix images taken by a contemporary smartphone camera. A large number of cervix images acquired by the commercial MobileODT EVA system were filtered for acceptable visual quality and then 7587 filtered images from 3221 women were annotated by a group of gynecologic oncologists (so the gold standard is an expert impression, not histopathology). We tested and analyzed on multiple random splits of the images using two deep learning, object detection networks. For all the receiver operating characteristics curves, the area under the curve values for the discrimination of the most likely precancer cases from least likely cases (most likely controls) were above 0.90. These results showed that AVE can classify cervix images with confidence scores that are strongly associated with expert evaluations of severity for the same images. The results on a small subset of images that have histopathologic diagnoses further supported the capability of AVE for predicting cervical precancer. We examined the associations of AVE severity score with gynecologic oncologist impression at all regions where we had a sufficient number of cases and controls, and the influence of a woman's age. The method was found generally resilient to regional variation in the appearance of the cervix. This work suggests that using AVE on smartphones could be a useful adjunct to health-worker visual assessment with acetic acid, a cervical cancer screening method commonly used in low- and middle-resource settings.
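The AUC values reported above have a simple rank interpretation: the probability that a randomly chosen case receives a higher severity score than a randomly chosen control. A minimal sketch with hypothetical scores (not the study's data):

```python
def roc_auc(scores, labels):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a randomly chosen positive scores higher than a random negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.3, 0.2]   # hypothetical AVE severity scores
labels = [1, 1, 0, 1, 0]             # 1 = likely precancer, 0 = control
auc = roc_auc(scores, labels)        # 5 of 6 case-control pairs ranked correctly
```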

