1.
Objective: To build a U-Net-based model for automatic delineation of organs at risk in radiotherapy, and to construct three improved models for liver segmentation. Methods: Computed tomography (CT) images and structure information of 184 liver cancer patients and 183 head radiotherapy patients were collected and combined with the public Sliver07 dataset for model training and evaluation. A U-Net model was built; for liver segmentation, it was trained in combination with dilated convolution, the SLIC superpixel algorithm and the region-growing algorithm, respectively, to obtain prediction models, which were then used to predict the automatic delineation results. Intersection over union (IoU) and mean intersection over union (MIoU) were used to evaluate prediction accuracy. Results: On the test set, the MIoU of the automatic delineation of head radiotherapy organs at risk was 0.795–0.970. For liver segmentation, the MIoU of the U-Net prediction was about 0.876, while that of the improved models was about 0.888; the improved models also suppressed predictions with large deviations, reducing the proportion of test samples with IoU below 0.8 from 16.67% to 7.5%. Visually, the models combined with the improved algorithms captured complex and easily confused boundary regions better than U-Net. Conclusion: The U-Net model performs well in automatic delineation of head radiotherapy organs at risk and the liver, and the three improved models perform better for liver segmentation.
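As an illustration of the IoU and MIoU metrics reported above, here is a minimal numpy sketch; the array names and label layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks (0/1 numpy arrays)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0  # both empty -> perfect match

def mean_iou(pred_labels, gt_labels, num_classes):
    """MIoU: average per-class IoU over classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        p, t = pred_labels == c, gt_labels == c
        if p.any() or t.any():
            ious.append(iou(p, t))
    return float(np.mean(ious))
```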
2.
Journal of Endodontics, 2020, 46(7): 987–993
Introduction: The aim of this study was to use a deep learning (DL) algorithm for the automated segmentation of cone-beam computed tomographic (CBCT) images and the detection of periapical lesions.
Methods: Limited field of view CBCT volumes (n = 20) containing 61 roots with and without lesions were segmented by clinicians versus using the DL approach based on a U-Net architecture. Segmentation labeled each voxel as 1 of 5 categories: "lesion" (periapical lesion), "tooth structure," "bone," "restorative materials," and "background." Repeated splits of all images into a training set and a validation set based on 5-fold cross-validation were performed using deep learning segmentation (DLS), and the results were averaged. DLS versus clinician-dependent segmentation was assessed by dichotomized lesion detection accuracy, evaluating sensitivity, specificity, positive predictive value, negative predictive value, and voxel-matching accuracy using the DICE index for each of the 5 labels.
Results: DLS lesion detection accuracy was 0.93, with specificity of 0.88, positive predictive value of 0.87, and negative predictive value of 0.93. The overall cumulative DICE indexes for the individual labels were lesion = 0.52, tooth structure = 0.74, bone = 0.78, restorative materials = 0.58, and background = 0.95. The cumulative DICE index for all actual true lesions was 0.67.
Conclusions: This DL algorithm, trained in a limited CBCT environment, showed excellent results in lesion detection accuracy. Overall voxel-matching accuracy may benefit from enhanced versions of artificial intelligence.
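For the dichotomized (lesion present/absent per root) detection accuracy above, a minimal sketch of the derived metrics is given below; the function and variable names, and the per-root boolean inputs, are assumptions, not the study's evaluation code.

```python
def detection_metrics(pred_lesion, true_lesion):
    """Accuracy, sensitivity, specificity, PPV and NPV from per-root lesion calls.

    pred_lesion, true_lesion: equal-length lists of booleans
    (lesion detected by the model / lesion actually present).
    """
    tp = sum(p and t for p, t in zip(pred_lesion, true_lesion))
    tn = sum((not p) and (not t) for p, t in zip(pred_lesion, true_lesion))
    fp = sum(p and (not t) for p, t in zip(pred_lesion, true_lesion))
    fn = sum((not p) and t for p, t in zip(pred_lesion, true_lesion))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }
```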
3.
Objective: To develop an artificial intelligence model for recognizing the pathological features of non-alcoholic fatty liver disease (NAFLD), to examine whether the model can identify and visualize features including inflammatory cells, steatotic cells and fibrosis, and to help pathologists improve the efficiency and accuracy of recognizing NAFLD pathology. Methods: Liver tissue was harvested from 65 mice with NAFLD and stained with HE and Sirius Red, yielding 65 HE and 65 Sirius Red pathological slides. For the HE slides, lesion-region images were captured with CaseViewer at 20×, 30× and 40× magnification and uploaded to the Horizope annotation platform for labeling; the dataset was split into training, validation and test sets at a 4:1:1 ratio, a deep-learning U-Net segmentation network was used to recognize the NAFLD pathological features, and four evaluation metrics were used to assess performance. For the Sirius Red slides, whole-field images were captured with CaseViewer at 5× magnification, and a color-feature extraction algorithm was used to identify fibrosis. Pathological features were identified and parameters computed for the 130 original slides, including the percentage of fat (steatosis) area (PFA), the density of inflammatory cells (DIC) and the ratio of fibrosis area (RFA), followed by statistical analysis. Results: Based on the recognition results, PFA, DIC and RFA were computed and analyzed. The mean PFA was 0.370 with a median of 0.371 (range: 0.013–0.743) and a correlation with the pathology score of R = 0.9476; the mean DIC was 313 with a median of 288 (range: 19–894) and R = 0.8883; the mean RFA was 0.049 with a median of 0.0507 (range: 0.001–0.121) and R = 0.9731. Conclusion: The artificial intelligence algorithm performed well in recognizing the pathological features of NAFLD; it can help pathologists recognize these features more efficiently and accurately and assist in correct grading, staging and efficacy evaluation of NAFLD.
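A minimal numpy sketch of how slide-level parameters of this kind could be derived from a segmentation mask and correlated with pathologist scores; the mask encoding (1 = steatosis pixel), the cells-per-mm² definition of DIC, and the per-slide arrays are assumptions for illustration only.

```python
import numpy as np

def slide_parameters(class_mask, tissue_mask, n_inflammatory_cells, tissue_area_mm2):
    """PFA and DIC for one HE slide from a per-pixel class mask.

    class_mask: integer array where 1 marks steatosis pixels;
    tissue_mask: boolean array marking the tissue region.
    """
    pfa = (class_mask == 1)[tissue_mask].mean()      # steatosis area fraction within tissue
    dic = n_inflammatory_cells / tissue_area_mm2     # inflammatory-cell density (assumed cells/mm^2)
    return pfa, dic

# Correlation of a computed parameter with the pathologist score across slides
pfa_values = np.array([0.12, 0.35, 0.51, 0.70])      # hypothetical per-slide PFA values
scores = np.array([1, 2, 3, 3])                      # hypothetical pathologist scores
r = np.corrcoef(pfa_values, scores)[0, 1]            # Pearson correlation coefficient R
```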
4.
Cancer Radiothérapie, 2023, 27(2): 109–114
Purpose: Accurate segmentation of target volumes and organs at risk from computed tomography (CT) images is essential for treatment planning in radiation therapy. The segmentation task is often done manually, making it time-consuming; it is also biased by clinician experience and subject to inter-observer variability. Therefore, with the development of artificial intelligence tools and particularly deep learning (DL) algorithms, automatic segmentation has been proposed as an alternative. The purpose of this work is to use a DL-based method to segment the kidneys on CT images for radiotherapy treatment planning.
Materials and methods: In this contribution, we used the CT scans of 20 patients. Segmentation of the kidneys was performed using the U-Net model. The Dice similarity coefficient (DSC), the Matthews correlation coefficient (MCC), the Hausdorff distance (HD), the sensitivity and the specificity were used to quantitatively evaluate the delineation.
Results: The model was able to segment the kidneys with good accuracy. The values obtained for these metrics are presented, and our results are compared to those recently obtained by other authors.
Conclusion: Fully automated DL-based segmentation of CT images has the potential to improve both the speed and the accuracy of organ contouring in radiotherapy.
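A minimal sketch of the three overlap/distance metrics named above (DSC, MCC, HD) for a pair of binary kidney masks, using scipy and scikit-learn; the function and mask names are assumptions, and voxel spacing is ignored for simplicity.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.metrics import matthews_corrcoef

def evaluate_kidney_mask(pred, gt):
    """DSC, MCC and Hausdorff distance between two binary masks (numpy arrays of 0/1)."""
    pred_b, gt_b = pred.astype(bool), gt.astype(bool)
    dsc = 2 * np.logical_and(pred_b, gt_b).sum() / (pred_b.sum() + gt_b.sum())
    mcc = matthews_corrcoef(gt_b.ravel(), pred_b.ravel())
    # Symmetric Hausdorff distance over the voxel coordinates of each mask (in voxel units)
    p_pts, g_pts = np.argwhere(pred_b), np.argwhere(gt_b)
    hd = max(directed_hausdorff(p_pts, g_pts)[0], directed_hausdorff(g_pts, p_pts)[0])
    return dsc, mcc, hd
```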
5.
Since lung nodules on computed tomography (CT) images can have different shapes, contours, textures or locations, and may be attached to neighboring blood vessels or pleural surfaces, accurate segmentation remains challenging. In this study, we propose an accurate segmentation method based on an improved U-Net convolutional network for different types of lung nodules on CT images. The first phase segments the lung parenchyma and corrects the lung contour by applying the α-hull algorithm. The second phase extracts pairs of image patches containing lung nodules in the center together with the corresponding ground truth, and builds an improved U-Net network with batch normalization. Extensive experiments show that the Dice loss yields better segmentation performance than the mean squared error and binary cross-entropy losses, and that the α-hull algorithm and batch normalization effectively improve segmentation performance. Our best Dice similarity coefficient (0.8623) is also competitive with other state-of-the-art segmentation algorithms. To segment different types of lung nodules accurately, we propose an improved U-Net network that effectively improves segmentation accuracy. Moreover, this work has practical value in helping radiologists segment lung nodules and diagnose lung cancer.
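To illustrate why a Dice loss can behave differently from mean squared error or binary cross-entropy on small structures such as nodules, here is a minimal soft Dice loss sketch in PyTorch; it is not the paper's implementation.

```python
import torch

def soft_dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for binary segmentation.

    logits: raw network output of shape (N, 1, H, W); target: binary mask of the same shape.
    Unlike MSE or binary cross-entropy, the loss depends on the overlap ratio, so the few
    foreground pixels of a small nodule are not swamped by the large background.
    """
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    inter = (probs * target).sum(dim=dims)
    denom = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * inter + eps) / (denom + eps)   # eps smooths the empty-mask case
    return 1 - dice.mean()
```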
6.
Our objective is to investigate the reliability and usefulness of anatomic point–based lung zone segmentation on chest radiographs (CXRs) as a reference-standard framework, and to evaluate the accuracy of automated point placement. Two hundred frontal CXRs were presented to two radiologists, who identified five anatomic points: two at the lung apices, one at the top of the aortic arch, and two at the costophrenic angles. Of these 1000 anatomic points, 161 (16.1%) were obscured (mostly by pleural effusions). Observer variations were investigated. Eight anatomic zones were then automatically generated from the manually placed anatomic points, and a prototype algorithm was developed that uses the point-based lung zone segmentation to detect cardiomegaly and the levels of the diaphragm and pleural effusions. A trained U-Net neural network was used to automatically place these five points within 379 CXRs of an independent database. Intra- and inter-observer variation in mean distance between corresponding anatomic points was larger for obscured points (8.7 mm and 20 mm, respectively) than for visible points (4.3 mm and 7.6 mm, respectively). The computer algorithm using the point-based lung zone segmentation could diagnostically measure the cardiothoracic ratio and the position of the diaphragm or pleural effusion. The mean distance between corresponding points placed by the radiologist and by the neural network was 6.2 mm. The network identified 95% of the radiologist-indicated points, with only 3% of network-identified points being false positives. In conclusion, a reliable anatomic point–based lung segmentation method for CXRs has been developed, with expected utility for establishing reference standards for machine learning applications.
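A minimal sketch of the kind of point-placement comparison described above (mean distance between corresponding points and the fraction within a tolerance); the 5×2 coordinate arrays, the fixed point order, and the tolerance value are assumptions, not the study's protocol.

```python
import numpy as np

def point_placement_error(radiologist_pts, network_pts, tol_mm=15.0):
    """Mean distance between corresponding anatomic points and the fraction within tolerance.

    radiologist_pts, network_pts: (5, 2) arrays of (x, y) positions in mm, in the same point
    order (two apices, aortic arch, two costophrenic angles); tol_mm is an assumed matching
    tolerance used only for illustration.
    """
    d = np.linalg.norm(radiologist_pts - network_pts, axis=1)  # per-point Euclidean distance
    return d.mean(), float((d <= tol_mm).mean())
```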
7.
Automatic brain tumor segmentation on MRI is a prerequisite for quantitative and intuitive assistance in clinical diagnosis and treatment. Meanwhile, 3D deep-neural-network-based brain tumor segmentation models have demonstrated considerable accuracy improvements over corresponding 2D methodologies. However, 3D brain tumor segmentation models generally suffer from high computation cost. Motivated by the recently proposed 3D dilated multi-fiber network (DMF-Net) architecture, which pays particular attention to reducing computation cost, we present in this work a novel encoder-decoder neural network, i.e., a 3D asymmetric expectation-maximization attention network (AEMA-Net), to automatically segment brain tumors. We modify DMF-Net by introducing an asymmetric convolution block into the multi-fiber unit and the dilated multi-fiber unit to capture more powerful deep features for brain tumor segmentation. In addition, AEMA-Net incorporates an expectation-maximization attention (EMA) module into DMF-Net by embedding the EMA block in the third stage of the skip connection, which focuses on capturing long-range contextual dependencies. We extensively evaluate AEMA-Net on three MRI brain tumor segmentation benchmarks, the BraTS 2018, 2019 and 2020 datasets. Experimental results demonstrate that AEMA-Net outperforms both 3D U-Net and DMF-Net, and achieves competitive performance compared with state-of-the-art brain tumor segmentation methods.
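A minimal PyTorch sketch of one common form of 3D asymmetric convolution, in which a 3×3×3 convolution is approximated by three axis-aligned convolutions whose outputs are summed; the exact block used in AEMA-Net is not reproduced here, and the class name is an assumption.

```python
import torch
import torch.nn as nn

class AsymmetricConv3d(nn.Module):
    """Approximate a 3x3x3 convolution with three axis-aligned convolutions (summed)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_d = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.conv_h = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 1), padding=(0, 1, 0))
        self.conv_w = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1))
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Sum the three directional responses, then normalize and activate
        return self.act(self.bn(self.conv_d(x) + self.conv_h(x) + self.conv_w(x)))
```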
8.
Autologous costal cartilage carving is currently the standard clinical treatment for congenital microtia, while auricular cartilage tissue engineering and 3D bioprinting are promising alternatives. However, the construction of the (composite) scaffold at the core of these approaches lacks a method for automatically segmenting the ear cartilage from medical images. An improved network model based on 3D U-Net is proposed that can automatically segment the anatomical structures of human ear cartilage from MRI images. By combining residual structures and multi-scale fusion, the model achieves accurate segmentation of 12 ear cartilage anatomical structures while reducing the number of network parameters. First, MRI images of one external ear of 40 volunteers were acquired with an ultrashort echo time (UTE) sequence; the images were then preprocessed, and the ear cartilage and its anatomical structures were manually annotated. Next, the dataset was split to train the improved 3D U-Net model, with 32 cases for training, 4 for validation and 4 for testing. Finally, a 3D fully connected conditional random field was used to post-process the network output. After 10-fold cross-validation, the automatic segmentation of the 12 ear cartilage anatomical structures achieved a mean Dice similarity coefficient (DSC) of 0.818 and a mean 95% Hausdorff distance (HD95) of 1.917. Compared with the baseline 3D U-Net model, the DSC improved by 6.0% and the HD95 decreased by 3.186; for the key structures, the helix and antihelix, the DSC reached 0.907 and 0.901, respectively. The experimental results show that the proposed deep learning method comes very close to expert manual annotation. In clinical application, starting from a UTE MRI of the patient's healthy side, the proposed method can quickly and automatically generate a personalized 3D carving template for the existing autologous costal cartilage carving procedure, and can also provide a high-quality printable model for constructing ear cartilage composite scaffolds by tissue engineering or 3D bioprinting.
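A minimal sketch of the HD95 metric reported above (the 95th percentile of symmetric surface distances between the predicted and manually annotated structures), built on scipy morphology and distance transforms; voxel spacing is assumed isotropic and non-empty masks are assumed, so this is an illustration rather than the paper's evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~binary_erosion(mask)

def hd95(pred, gt):
    """95th-percentile Hausdorff distance between two binary volumes, in voxel units."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Distance of every voxel to the other structure's surface
    dt_gt = distance_transform_edt(~surface(gt))
    dt_pred = distance_transform_edt(~surface(pred))
    d_pred_to_gt = dt_gt[surface(pred)]   # distances from predicted surface to GT surface
    d_gt_to_pred = dt_pred[surface(gt)]   # distances from GT surface to predicted surface
    return np.percentile(np.concatenate([d_pred_to_gt, d_gt_to_pred]), 95)
```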
9.
CT imaging has become one of the most important steps in detecting coronavirus disease 2019 (COVID-19). To address the tedium of manually segmenting the ground-glass opacity regions in patients' chest CT images, a self-attention recurrent residual U-Net model is proposed to automatically segment the lung CT images of COVID-19 patients and assist physicians in diagnosis. On the basis of the U-Net model, a recurrent residual module and a self-attention mechanism are introduced to strengthen the capture of feature information and thereby improve segmentation accuracy. Segmentation experiments on a public dataset show that the algorithm achieves a Dice coefficient, sensitivity and specificity of 85.36%, 76.64% and 76.25%, respectively, and segments well compared with other algorithms.
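A minimal PyTorch sketch of a recurrent residual convolution unit of the kind used in recurrent residual U-Nets: the same convolution is applied t times, each time re-adding the block input, and a 1×1 shortcut makes the whole unit residual. The self-attention part of the model described above is omitted, the class names are assumptions, and this is not the paper's code.

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Apply the same conv t times, feeding back the sum of the input and the previous output."""

    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t - 1):
            out = self.conv(x + out)   # recurrent step: re-inject the block input
        return out

class RecurrentResidualBlock(nn.Module):
    """Two recurrent conv units wrapped with a 1x1 shortcut (the residual connection)."""

    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)
        self.body = nn.Sequential(RecurrentConv(out_ch, t), RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.shortcut(x)
        return x + self.body(x)
```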
10.
In recent years, deep learning has been the key driver of breakthrough developments in computational pathology and other image-based approaches that support medical diagnosis and treatment. The underlying neural networks, as inherent black boxes, lack transparency and are often accompanied by approaches to explain their output. However, formally defining explainability has been a notoriously unsolved riddle. Here, we introduce a hypothesis-based framework for falsifiable explanations of machine learning models. A falsifiable explanation is a hypothesis that connects an intermediate space induced by the model with the sample from which the data originate. We instantiate this framework in a computational pathology setting using hyperspectral infrared microscopy. The intermediate space is an activation map, which is trained with an inductive bias to localize tumor. An explanation is constituted by hypothesizing that the activation corresponds to tumor and associated structures, which we validate by histological staining as an independent secondary experiment.