Similar Articles
18 similar articles retrieved (search time: 156 ms)
1.
Errors in placental maturity grading can contribute to small-for-gestational-age infants, stillbirth, and fetal death. At present, grading relies mainly on clinicians' experience and visual observation; it is highly subjective, and its accuracy is easily affected by physicians' workload, working hours, and experience. This paper proposes an automatic placental maturity grading algorithm based on feature fusion and discriminative learning. For a set of 544 B-mode ultrasound and color Doppler energy (CDE) placental images, keypoints are first detected and local features are extracted at each keypoint; the features are then fused and encoded with a discriminative feature-encoding scheme to form a codebook, normalized, and finally classified with a support vector machine (SVM) to produce the placental maturity grade. In the test stage, comparison against clinicians' grading yielded an accuracy of 92.7%, a sensitivity of 91.1%, a specificity of 97.6%, and a mean average precision of 97.3%. The results suggest that the method provides useful guidance for automatic placental maturity grading.
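The fuse-then-normalize step described above can be sketched in a few lines. This is only an illustration of early feature fusion with L2 normalization; the function names and the toy two-dimensional descriptors are hypothetical, not from the paper.

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

def fuse_features(bmode_feats, cde_feats):
    """Early fusion: concatenate per-modality descriptors, then normalize
    so that neither modality dominates the subsequent encoding."""
    return l2_normalize(bmode_feats + cde_feats)

# Toy descriptors standing in for keypoint features from the two modalities
fused = fuse_features([3.0, 4.0], [0.0, 0.0])
print(fused)
```

An SVM (or any codebook encoder) would consume `fused` downstream; normalizing first keeps the decision function insensitive to the raw feature scale.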

2.
Malignant melanoma is among the most common and most lethal skin cancers. Clinically, dermoscopy is the routine means of early melanoma diagnosis, but manual examination is laborious and time-consuming and depends heavily on the dermatologist's clinical experience, so algorithms that automatically recognize melanoma in dermoscopic images are of particular importance. This paper proposes a new framework for automatic dermoscopic image assessment that uses deep learning to produce more discriminative features from limited training data. Specifically, a 152-layer residual network (ResNet-152) pretrained on a large-scale natural-image dataset extracts deep convolutional features from skin-lesion images; mean pooling turns these features into a feature vector, and a support vector machine (SVM) classifies the resulting melanoma features. On the public ISBI 2016 skin-lesion challenge dataset, the method was evaluated on 248 melanoma and 1,031 non-melanoma images, reaching 86.28% accuracy and an AUC of 84.18%. To examine how network depth affects classification, model frameworks of different depths were also compared. Compared with prior work using traditional handcrafted features (e.g., bag-of-words models over densely sampled SIFT descriptors), or with methods that classify features taken only from a deep network's fully connected layers, the new method yields more discriminative feature representations and, with limited training data, copes with the large intra-class variation of melanoma and the small inter-class difference between melanoma and non-melanoma.
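The mean-pooling step above, which collapses a convolutional feature map into a fixed-length vector for the SVM, can be sketched as follows. The toy 2x2 channels are illustrative stand-ins for real deep features; the function name is hypothetical.

```python
def global_average_pool(feature_maps):
    """Collapse a C x H x W stack of feature maps to one per-channel mean,
    producing a fixed-length vector regardless of spatial size."""
    vector = []
    for channel in feature_maps:          # channel: H x W grid
        values = [v for row in channel for v in row]
        vector.append(sum(values) / len(values))
    return vector

# Two toy 2x2 channels standing in for deep conv features
maps = [[[1.0, 3.0], [5.0, 7.0]],
        [[2.0, 2.0], [2.0, 2.0]]]
print(global_average_pool(maps))  # [4.0, 2.0]
```

Because the output length equals the channel count, lesion images of different sizes all yield comparable feature vectors.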

3.
Breast cancer is one of the leading causes of cancer death among women worldwide. Existing diagnosis relies mainly on physicians examining histopathological images, which is time-consuming and labor-intensive and depends on the physician's expertise and experience, so diagnostic efficiency is unsatisfactory. To address these problems, a deep learning framework based on histological images is designed to improve breast cancer diagnostic accuracy while reducing physicians' workload. A classification model based on multi-network feature fusion and sparse dual-relation regularized learning is developed. First, breast cancer images are preprocessed by sub-image cropping and color enhancement. Second, three representative deep convolutional neural networks (Inception V3, ResNet-50, and VGG-16) extract multi-network deep convolutional features from the pathology images, and the features are fused. Finally, a supervised dual-relation regularized learning method that exploits two kinds of relations ("sample-sample" and "feature-feature") together with l_F regularization is proposed for feature dimensionality reduction, and a support vector machine classifies the pathology images into four categories: normal, benign, carcinoma in situ, and invasive carcinoma. In experiments on 400 breast cancer pathology images from the public ICIAR 2018 dataset, the model achieved 93% classification accuracy. Fusing deep convolutional features from multiple networks effectively captures rich image information, while sparse dual-relation regularized learning reduces feature redundancy and noise interference, effectively improving the model's classification performance.

4.
Assessing Placental Function During Pregnancy Using Fractal Features of Ultrasound Images (cited 1 time: 0 self-citations, 1 other)
B-mode ultrasound images of the placenta were acquired during pregnancy with an ultrasound instrument and processed fractally according to a fractional Brownian motion model. Fractal feature parameters for placental grading were extracted and, combined with the judgment of clinical experts, used with a fuzzy classification method to establish automatic grading rules for placental function. Analysis of 106 placental images shows that the fractal features of B-mode placental ultrasound images effectively characterize the functional state of the placenta, making non-invasive automatic grading of placental function possible; the approach therefore has good prospects for clinical application.

5.
The morphology and location of cell nuclei in pathological images of clear cell renal cell carcinoma are important for grading the malignancy of renal cancer. To improve the quality of nucleus segmentation in this setting, this study proposes a segmentation method based on a deep convolutional neural network. First, a nucleus-segmentation sample set is built from the nucleus contours annotated in the pathology images. Then the deep convolutional neural network trains the segmentation model through implicit feature learning, avoiding hand-designed features. Finally, the trained model performs pixel-wise segmentation of the pathology images. Experimental results show that the method reaches a pixel accuracy of 90.33% on clear cell renal cell carcinoma nucleus segmentation with stable performance; the strong robustness and adaptability of deep convolutional networks make automatic nucleus segmentation for this cancer feasible.

6.
Objective: To construct a hybrid multi-scale neural network (HMnet) for automatic delineation of clinical target volumes (CTVs) in radiotherapy, providing a high-precision automatic CT image segmentation model. Methods: HMnet is an end-to-end convolutional neural network. Features are extracted with a deep residual network and processed by a multi-scale feature-fusion module composed of four convolutional layers with different kernels, to accommodate CTVs of different sizes; an attention residual module is then introduced to strengthen the effective features output by the fusion module. CT image data and CTV contours from 117 Graves ophthalmopathy cases were used to train and evaluate HMnet, with the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (95HD) as evaluation metrics. Results: For automatic CTV delineation in Graves ophthalmopathy radiotherapy, HMnet achieved a DSC of 0.8749 and a 95HD of 2.5254 mm, outperforming the Unet, Vnet, and ResAttUnet3D networks and also exceeding the mean DSC between two delineations by the same physician. Conclusion: HMnet accurately delineates the CTV for Graves ophthalmopathy radiotherapy and can improve radiation oncologists' efficiency and delineation consistency.
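The DSC metric used above compares two binary masks. A minimal sketch (masks flattened to 0/1 lists for brevity; in practice they would be 3D volumes):

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A∩B| / (|A| + |B|) over binary masks given as flat 0/1 lists.
    Two empty masks are treated as perfect agreement."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

auto   = [1, 1, 1, 0, 0]   # automatic delineation (flattened)
manual = [1, 1, 0, 0, 0]   # physician's delineation
print(dice_coefficient(auto, manual))  # 0.8
```

A DSC of 0.8749, as reported for HMnet, means the automatic and manual contours overlap in roughly 87% of their combined volume.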

7.
Objective: To implement automatic organ delineation in radiotherapy planning images with a deep learning method combining fully convolutional networks (FCN) and atrous convolution (AC). Methods: 122 thoracic CT image sets with normal organ contours delineated by radiation oncologists were selected: 71 sets (8,532 axial slices) as the training set, 31 sets (5,559 axial slices) as the validation set, and 20 sets (3,589 axial slices) as the test set. Five public FCN models were selected, and three improved deep convolutional networks, dilated fully convolutional networks (D-FCN), were formed by combining FCN with the AC algorithm. Each of the eight networks was fine-tuned on the training set; during training, the validation set was used for automatic organ delineation to select each network's best segmentation model, and the best models obtained after full training were finally tested on the test set. The Dice similarity coefficient (Dice) between automatic and physician delineations was compared to evaluate each model's segmentation ability. Results: After thorough fine-tuning, all networks in the experiment showed good automatic segmentation ability; the improved D-FCN 4s model performed best in the test experiments, with a global Dice of 94.38% and per-structure Dice of 96.49% (left lung), 96.75% (right lung), 86.27% (pericardium), 61.51% (trachea), and 65.63% (esophagus). Conclusion: The proposed improved fully convolutional network, D-FCN, effectively raises the automatic segmentation accuracy of thoracic radiotherapy planning images and supports simultaneous multi-target segmentation.
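Atrous (dilated) convolution, the AC ingredient of D-FCN, inserts gaps between kernel taps to enlarge the receptive field without adding parameters. A 1D sketch (the 2D case used in the paper generalizes directly; this toy kernel is illustrative):

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1-D convolution with gaps of `dilation` samples between
    kernel taps; dilation=1 reduces to an ordinary convolution."""
    span = (len(kernel) - 1) * dilation      # input extent one output sees
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * dilation]
                       for k in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 1], dilation=2))  # [4, 6, 8, 10]
```

With dilation 2, each output mixes samples two positions apart, so a small kernel covers a wider context, which is why AC helps segment large organs without downsampling.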

8.
张倩雯, 陈明, 秦玉芳, 陈希. Chinese Journal of Medical Physics (《中国医学物理学杂志》), 2019(11): 1356-1361
Objective: To combine a deep residual structure with the U-Net architecture into a new network, ResUnet, and use it to segment chest CT images and extract lung-nodule regions. Methods: The CT image data come from the LUNA16 dataset. The lung parenchyma is first extracted by preprocessing the CT images; volumetric patches are then cropped and augmented to expand the sample count, with the corresponding lung-nodule mask images generated; finally, the resulting samples are fed into the ResUnet model for training. Results: The model's final precision and recall are 35.02% and 97.68%, respectively. Conclusion: The model learns lung-nodule features automatically, providing a reliable basis for subsequent automatic lung cancer diagnosis while reducing clinical diagnostic costs and saving physicians' diagnosis time. Keywords: lung nodule; segmentation; deep residual structure; recall; ResUnet
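The precision/recall pair reported above (low precision, very high recall, typical for a screening model tuned not to miss nodules) is computed from binary predictions like this. A minimal sketch over 0/1 label lists:

```python
def precision_recall(predicted, actual):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN) over 0/1 labels."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# An over-eager detector: flags many candidates, misses no true nodule
pred   = [1, 1, 1, 1, 0]
actual = [1, 1, 0, 0, 0]
print(precision_recall(pred, actual))  # (0.5, 1.0)
```

The toy example mirrors the trade-off in the abstract: flagging extra candidates lowers precision but keeps recall near 100%, which is acceptable when a downstream classifier or radiologist filters the false positives.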

9.
Recognizing gastric precancerous conditions is important for reducing the risk of malignant transformation and the incidence of gastric cancer. A recognition method based on fusing shallow and deep features of gastroscopic images is proposed. First, based on the properties of gastroscopic images, 75-dimensional shallow features are handcrafted, covering histogram, texture, and higher-order features of the image. Second, on convolutional neural networks such as ResNet and GoogLeNet, a fully connected layer is added before the output layer to serve as the image's deep feature; to keep the feature weights consistent, this layer is designed with 75 neurons. Finally, the image's shallow and deep features are concatenated, and a machine learning classifier recognizes three gastric precancerous conditions: gastric polyps, gastric ulcers, and gastric erosion. For each condition, 380 images were collected and split 4:1 into training and test sets; on this dataset, three approaches (traditional machine learning, deep learning, and feature fusion) were each used for model training and testing. Results on the test set show that the proposed feature-fusion method reaches 95.18% recognition accuracy, outperforming traditional machine learning (74.12%) and deep learning (92.54%). The proposed method makes full use of both shallow and deep features and can provide clinical decision support to physicians in diagnosing gastric precancerous conditions.

10.
Polyps are among the common diseases of the small intestine. Wireless capsule endoscopy (WCE) is the routine means of examining small-bowel disease, but each examination produces a large number of images of which only a few may show lesions. WCE lesion screening currently depends heavily on clinicians' experience, is time-consuming and laborious, and may miss or misidentify lesions, so automatic recognition of small-bowel polyps in WCE images is of great significance. Within a deep learning framework, a new polyp recognition method is proposed that combines data augmentation and a transfer learning strategy. On the original dataset (4,300 normal and 429 polyp images) and an expanded dataset (6,920 normal and 6,864 polyp images), different deep convolutional neural networks (AlexNet, VGGNet, and GoogLeNet) were compared for polyp recognition. Among randomly initialized networks, GoogLeNet performed best, with sensitivity, specificity, and accuracy of 97.18%, 98.78%, and 97.99%, showing that greater network depth effectively raises the recognition rate; however, deeper networks demand better hardware and longer training. With the transfer learning strategy, AlexNet reached 96.57% sensitivity, 98.89% specificity, 97.74% accuracy, and an AUC of 0.996, indicating that the approach effectively improves overall performance while lowering training time and hardware requirements. Compared with methods based on traditional handcrafted image features or on deep convolutional networks alone, the proposed approach offers an effective solution for automatic small-bowel polyp recognition under limited training data and experimental resources, and promises to help physicians efficiently make accurate WCE-based diagnoses of digestive-tract diseases.
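The three screening metrics quoted above come straight from confusion-matrix counts. A minimal sketch with hypothetical toy counts (not the paper's data):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # polyp frames correctly flagged
    specificity = tn / (tn + fp)          # normal frames correctly passed
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Toy counts: 90 of 100 polyp frames flagged, 980 of 1000 normals passed
sens, spec, acc = screening_metrics(tp=90, fn=10, tn=980, fp=20)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.9 0.98 0.973
```

Because WCE video is overwhelmingly normal frames, high specificity matters as much as sensitivity; otherwise the reviewer drowns in false alarms.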

11.
Diabetic retinopathy (DR) has become one of the world's four leading causes of blindness, and early diagnosis effectively reduces patients' risk of vision loss. An automatic DR diagnosis method that fuses interpretable deep learning features is proposed. First, two interpretability methods, guided Grad-CAM and saliency maps, generate lesion images with different markings; a convolutional neural network then extracts feature vectors from the original image and the two generated images; finally, the three feature vectors are fused and fed to a support vector machine for automatic DR diagnosis. On a dataset of 1,443 color fundus images, relative to a baseline ResNet50 model, the method improves diagnostic accuracy by 3.6%, specificity by 2.4%, sensitivity by 5.8%, precision by 4.6%, and the Kappa coefficient by 7.9%; the experimental results show that the method effectively reduces the risk of misdiagnosis.
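The Kappa coefficient reported above measures agreement between the model and the reference labels beyond what chance would produce. A minimal sketch of Cohen's kappa (the toy label lists are illustrative):

```python
def cohens_kappa(labels_a, labels_b):
    """Kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected
    for the chance agreement p_e implied by each rater's label frequencies."""
    n = len(labels_a)
    classes = set(labels_a) | set(labels_b)
    p_o = sum(1 for a, b in zip(labels_a, labels_b) if a == b) / n
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in classes)
    return (p_o - p_e) / (1 - p_e)

model  = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # model's DR calls
expert = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]   # reference labels
print(round(cohens_kappa(model, expert), 3))  # 0.8
```

Unlike raw accuracy, kappa stays low for a classifier that merely predicts the majority class, which is why it is the preferred agreement measure on imbalanced fundus datasets.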

12.

Diabetic retinopathy is a chronic condition that causes vision loss if not detected early. In its early stage, it can be diagnosed with the aid of exudates, a type of lesion. However, detecting exudate lesions is difficult because of blood vessels and other distractors in the image. To tackle these issues, we propose a novel exudate classifier for fundus images: a hybrid convolutional neural network (CNN) tuned by a binary local search optimizer-based particle swarm optimization algorithm. The proposed method exploits image augmentation to enlarge the fundus image to the required size without losing any features. Features extracted from the resized fundus images form a feature vector that is fed into the feed-forward CNN as input, which then classifies the exudates in the fundus image. Further, the hyperparameters are optimized to reduce computational complexity by means of a binary local search optimizer (BLSO) and particle swarm optimization (PSO). The experimental analysis is conducted on the public ROC and real-time ARA400 datasets and compared with state-of-the-art approaches such as support vector machine classifiers, multi-modal/multi-scale methods, random forests, and CNNs on the performance metrics. The classification accuracy of the proposed work is high, and it thus outperforms all the other approaches.


13.

Cancer statistics for 2020 reveal that breast cancer is the most common form of cancer among women in India: one in 28 women is likely to develop breast cancer during her lifetime, and the mortality rate is 1.6 to 1.7 times the maternal mortality rate. According to US statistics, about 42,170 women in the US were expected to die of breast cancer in 2020. The chance of survival can be increased through early and accurate diagnosis of the cancer. Pathologists manually analyze histopathology images under high-resolution microscopes to detect mitotic cells, a time-consuming process because the difference between normal and mitotic cells is minute. To overcome these challenges, automatic analysis and detection of breast cancer from histopathology images plays a vital role in prognosis. Earlier researchers used conventional image-processing techniques to detect mitotic cells, but these methods produced low-accuracy results and were time-consuming, so several deep learning techniques have been adopted to increase accuracy and reduce time. A hybrid deep learning model is proposed for selecting abstract features from the histopathology images. In the proposed approach, we concatenate two different CNN architectures into a single model for effective classification of mitotic cells. A convolutional neural network (CNN) automatically detects efficient features without human intervention and classifies cancerous and non-cancerous images using a hybrid fully connected network; it is a computationally efficient and very powerful model for automatic feature extraction, and it detects different phenotypic signatures of nuclei. To enhance accuracy and computational efficiency, the histopathology images are preprocessed, segmented, and feature-extracted through a CNN, then fed into the hybrid CNN for classification. The hybrid CNN is obtained by concatenating two CNN models, an arrangement also known as model averaging; model averaging can be improved by weighting each sub-model's contribution to the combined prediction according to the sub-model's expected performance. The proposed hybrid CNN architecture, with median-filter preprocessing and Otsu-based segmentation, is trained on 50,000 images and tested on 50,000 images, providing an overall accuracy of 98.9%.
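The weighted model-averaging idea mentioned above can be sketched directly: combine each sub-model's class probabilities, weighting by expected performance. The weights and toy probabilities are hypothetical, not from the paper.

```python
def weighted_average_predictions(prob_lists, weights):
    """Combine per-model class-probability vectors into one prediction,
    weighting each sub-model by its expected performance, then normalizing."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    return [sum(w * probs[c] for probs, w in zip(prob_lists, weights)) / total
            for c in range(n_classes)]

# Two hypothetical sub-model outputs for (non-mitotic, mitotic)
cnn_a = [0.8, 0.2]          # stronger model, weight 3
cnn_b = [0.4, 0.6]          # weaker model, weight 1
combined = weighted_average_predictions([cnn_a, cnn_b], weights=[3, 1])
print([round(v, 3) for v in combined])  # [0.7, 0.3]
```

Equal weights recover plain averaging; unequal weights let a validated stronger sub-model dominate ambiguous cases.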


14.
Optical coherence tomography (OCT) provides high-resolution three-dimensional tomographic imaging of the retina and plays a crucial role in diagnosing retinal disease types and analyzing disease stages. Clinical OCT-based diagnosis of retinal disease relies mainly on ophthalmologists analyzing the lesion structures in the images, a manual process that is time-consuming and prone to subjective misjudgment. Automatic analysis and diagnosis of retinal disease would greatly reduce ophthalmologists' workload and is an effective route to efficient care. For automatic retinal OCT image classification, a convolutional neural network model with joint decision-making is constructed. The model automatically learns features at different levels from the raw input OCT image, and multiple decision layers are designed on several of the network's convolutional layers; each decision layer classifies the OCT image from feature maps at a different scale, and the model finally fuses all decision layers' classification results into a final decision. On the Duke dataset (3,231 OCT images), the multi-level joint-decision model achieved an average recognition accuracy of 94.5%, a sensitivity of 90.5%, and a specificity of 95.8% for normal retina, age-related macular degeneration, and macular edema. On the HUCM dataset (4,322 OCT images), it achieved an average accuracy of 89.6%, a sensitivity of 88.8%, and a specificity of 90.8%. Fully exploiting the rich multi-level features of a convolutional neural network enables accurate classification of retinal OCT images and provides technical support for computer-aided diagnosis of retinal diseases.
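The final fusion step, combining several decision layers' outputs into one prediction, can be sketched as summing per-class scores and taking the argmax. The toy score vectors are illustrative, not from the paper.

```python
def joint_decision(decision_layer_scores):
    """Fuse several decision layers' class scores by summing them,
    then pick the class with the highest combined score."""
    n_classes = len(decision_layer_scores[0])
    combined = [sum(scores[c] for scores in decision_layer_scores)
                for c in range(n_classes)]
    return combined.index(max(combined))

# Scores for (normal, AMD, macular edema) from three decision layers
layers = [[0.5, 0.3, 0.2],
          [0.2, 0.5, 0.3],
          [0.1, 0.6, 0.3]]
print(joint_decision(layers))  # 1  (AMD wins the joint vote)
```

A shallow decision layer can be fooled by fine-scale noise while a deep one misses small lesions; summing the scores lets the scales vote, which is the intuition behind the joint-decision design.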

15.
Lung sounds convey relevant information about pulmonary disorders, and to evaluate patients with pulmonary conditions the physician uses the traditional auscultation technique. However, this technique has limitations: if the physician is not well trained, it may lead to a wrong diagnosis. Moreover, lung sounds are non-stationary, which complicates analysis, recognition, and discrimination, so developing automatic recognition systems can help overcome these limitations. In this paper, we compare three machine learning approaches to lung sound classification. The first two approaches are based on extracting sets of handcrafted features trained with three different classifiers (support vector machines, k-nearest neighbors, and Gaussian mixture models), while the third approach is based on designing convolutional neural networks (CNN). In the first approach, we extracted the 12 MFCC coefficients from the audio files and then calculated six statistics over the MFCCs; we also experimented with zero-mean, unit-variance normalization to enhance accuracy. In the second approach, local binary pattern (LBP) features are extracted from a visual representation of the audio files (spectrograms) and normalized using whitening. The dataset used in this work consists of seven classes (normal, coarse crackle, fine crackle, monophonic wheeze, polyphonic wheeze, squawk, and stridor). We also experimentally tested dataset augmentation techniques on the spectrograms to enhance the ultimate accuracy of the CNN. The results show that the CNN outperformed the handcrafted-feature classifiers.
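The zero-mean, unit-variance normalization mentioned for the MFCC features is plain z-scoring. A minimal sketch (the input values are toy stand-ins for per-coefficient statistics, and population variance is assumed):

```python
import math

def zscore_normalize(features):
    """Zero-mean, unit-variance scaling of a feature vector
    (population variance; constant vectors map to all zeros)."""
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / n
    std = math.sqrt(var)
    return [(x - mean) / std for x in features] if std else [0.0] * n

# Toy stand-ins for per-coefficient MFCC statistics
normed = zscore_normalize([2.0, 4.0, 6.0, 8.0])
print([round(v, 3) for v in normed])  # [-1.342, -0.447, 0.447, 1.342]
```

Normalization keeps large-magnitude coefficients from dominating distance-based classifiers such as k-NN, which is why it boosted accuracy in the first approach.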

16.
Deep learning is an important new area of machine learning that encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning on images is the convolutional neural network (CNN). Whereas traditional machine learning requires the determination and calculation of the features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

17.
To extract abnormal heartbeats from ambulatory ECG more accurately, an arrhythmia heartbeat classification model fusing a convolutional neural network (CNN) and a multi-layer bidirectional long short-term memory network (BiLSTM) is designed. The ECG signal is first segmented into heartbeat windows at two scales, 0.75 s and 4 s; an 11-layer CNN and a 3-layer BiLSTM network then extract and merge features from the small- and large-scale heartbeat signals respectively; a 3-layer fully connected network reduces the dimensionality of the merged features, and a softmax function performs the final classification. To address the imbalanced distribution of abnormal beat types in the MIT arrhythmia database, samples are expanded by adding random motion noise and baseline-wander noise, reducing the model's overfitting. Patient-based 5-fold cross-validation was used for model validation. Classification results on 116,000 heartbeats from the MIT arrhythmia database show that the model recognizes four beat types (normal, atrial premature, ventricular premature, unclassified) with 90.42% accuracy, 13.97% and 7.14% higher than models using the CNN (76.45%) or BiLSTM (83.28%) alone, respectively. The proposed fused CNN-BiLSTM arrhythmia heartbeat classification model achieves better abnormal-beat classification accuracy than machine learning algorithms based on a CNN model or a BiLSTM model alone.
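The two-scale segmentation step above cuts fixed-width windows around each detected R-peak. A minimal sketch, assuming the MIT-BIH sampling rate of 360 Hz and zero-padding at record edges (the function name and padding choice are illustrative):

```python
def beat_windows(signal, r_index, fs, width_s):
    """Cut a window of `width_s` seconds centred on an R-peak sample index,
    zero-padding at the record edges so every window has equal length."""
    half = int(width_s * fs) // 2
    out = []
    for i in range(r_index - half, r_index + half):
        out.append(signal[i] if 0 <= i < len(signal) else 0.0)
    return out

fs = 360                      # MIT-BIH sampling rate, Hz
ecg = [0.0] * 3600            # 10 s of toy samples
small = beat_windows(ecg, r_index=1800, fs=fs, width_s=0.75)
large = beat_windows(ecg, r_index=1800, fs=fs, width_s=4.0)
print(len(small), len(large))  # 270 1440
```

The 0.75 s window isolates one beat's morphology for the CNN, while the 4 s window spans neighbouring beats so the BiLSTM can exploit rhythm context.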

18.
The proposed automatic bone age estimation system is based on phalanx geometric characteristics and fuzzy carpal information. The system performs automatic calibration by analyzing the geometric properties of hand images, and physiological and morphological features are extracted from the middle-finger (medius) image in the segmentation stage. Back-propagation, radial basis function, and support vector machine neural networks were applied to classify the phalanx bone age. In addition, the proposed fuzzy bone age (BA) assessment is based on the normalized bone-area ratio of the carpals. The results reveal that carpal features effectively reduce classification errors when age is less than 9 years, but become less influential for BA assessment once children reach 10 years of age; from 10 years old to the adult stage, phalanx features become the significant parameters describing bone maturity. Owing to these properties, the proposed novel BA assessment system combines the phalanx and carpal assessments; it adopts not only neural network classifiers but also fuzzy bone age confinement, achieving results close to clinical practicality.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号