Similar Articles
20 similar articles found.
1.
Today’s medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data, and image analysis algorithms are needed to extract it, to make it readily available for medical decisions, and to enable an efficient workflow. Advances in medical image analysis over the past 20 years mean that many algorithms and ideas are now available that make it possible to address medical image analysis tasks in commercial solutions with sufficient accuracy, reliability, and speed. At the same time, new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted to a specific clinical task. Secondly, efficient approaches to ground truth generation are needed to meet the increasing demands of validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms that construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the ongoing need for more accurate, more reliable, and faster algorithms, and for dedicated algorithmic solutions to specific applications.

2.
Clinical assessment routinely uses terms such as development, growth trajectory, degeneration, disease progression, recovery, or prediction. This terminology inherently carries the aspect of dynamic processes, suggesting that single measurements in time and cross-sectional comparison may not sufficiently describe spatiotemporal changes. In view of medical imaging, such tasks encourage subject-specific longitudinal imaging. Whereas follow-up, monitoring, and prediction are natural tasks in the clinical diagnosis of disease progression and the assessment of therapeutic intervention, translating methodologies for calculating temporal profiles from longitudinal data to clinical routine still requires significant research and development efforts. Rapid advances in image acquisition technology, with significantly reduced acquisition times and increased patient comfort, favor repeated imaging over the observation period. For serial imaging spanning multiple years, image acquisition faces the challenging issue of scanner standardization and calibration, which is crucial for successful spatiotemporal analysis. Longitudinal 3D data, represented as 4D images, capture time-varying anatomy and function. Such data benefit from dedicated analysis methods and tools that exploit the inherent correlation and causality of repeated acquisitions of the same subject. The availability of such data has spawned progress in the development of advanced 4D image analysis methodologies that carry the notion of linear and nonlinear regression, now applied to complex, high-dimensional data such as images, image-derived shapes and structures, or a combination thereof. This paper provides examples of recently developed analysis methodologies for 4D image data, primarily focusing on progress in areas of core expertise of the authors.
These include spatiotemporal shape modeling and growth trajectories of white matter fiber tracts, demonstrated with examples from ongoing longitudinal clinical neuroimaging studies such as analysis of early brain growth in subjects at risk for mental illness and of neurodegeneration in Huntington’s disease (HD). We will discuss broader aspects of current limitations and the need for future research regarding data consistency and analysis methodologies.
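The regression idea above can be illustrated with a minimal sketch (not the authors' 4D methodology, which operates on high-dimensional shapes and nonlinear models): ordinary least-squares fitting of a linear growth trajectory to repeated measurements of one subject, applied per coordinate. All function names here are illustrative.

```python
# Toy sketch of subject-specific linear trajectory fitting, assuming each
# observation is (age, feature vector); real 4D shape regression is far
# richer, but the per-coordinate least-squares idea is the same.

def fit_linear_trajectory(ages, shapes):
    """Return per-coordinate (intercept, slope) for shape ~ age."""
    n = len(ages)
    mean_a = sum(ages) / n
    var_a = sum((a - mean_a) ** 2 for a in ages)
    params = []
    for d in range(len(shapes[0])):
        vals = [s[d] for s in shapes]
        mean_v = sum(vals) / n
        slope = sum((a - mean_a) * (v - mean_v)
                    for a, v in zip(ages, vals)) / var_a
        params.append((mean_v - slope * mean_a, slope))
    return params

def predict(params, age):
    """Evaluate the fitted trajectory at a new time point."""
    return [b0 + b1 * age for b0, b1 in params]
```

Follow-up scans of the same subject then become points on a curve, and prediction is simply evaluation of the fitted trajectory at a future age.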

3.
Lung nodule detection in chest X-ray (CXR) images is a common approach to early screening for lung cancer. Deep-learning-based computer-assisted diagnosis (CAD) systems can support radiologists in nodule screening on CXR images. However, training such robust and accurate CAD systems requires large-scale, diverse medical data with high-quality annotations. To alleviate the limited availability of such datasets, lung nodule synthesis methods have been proposed for data augmentation. Nevertheless, previous methods lack the ability to generate realistic nodules with the shape and size attributes desired by the detector. To address this issue, we introduce a novel lung nodule synthesis framework that decomposes nodule attributes into three main aspects: shape, size, and texture. A GAN-based Shape Generator first models nodule shapes by generating diverse shape masks. The following Size Modulation then enables quantitative control over the diameters of the generated nodule shapes at pixel-level granularity. A coarse-to-fine gated convolutional Texture Generator finally synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images by controlling the disentangled nodule attributes for data augmentation, in order to better compensate for the nodules that are easily missed in the detection task. Our experiments demonstrate the enhanced image quality, diversity, and controllability of the proposed lung nodule synthesis framework. We also validate the effectiveness of our data augmentation strategy in greatly improving nodule detection performance.
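The size-modulation step, as described, rescales a generated shape mask to a desired diameter in pixels. A minimal sketch of that idea (the function names and the equivalent-diameter convention are illustrative assumptions, not taken from the paper):

```python
import math

def mask_diameter(mask):
    """Equivalent diameter (in pixels) of the foreground of a binary mask."""
    area = sum(v for row in mask for v in row)
    return 2.0 * math.sqrt(area / math.pi)

def modulate_size(mask, target_diameter):
    """Nearest-neighbor rescale of a binary mask to a target diameter."""
    scale = target_diameter / mask_diameter(mask)
    h, w = len(mask), len(mask[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # Sample the source mask at the back-projected location of each pixel.
    return [[mask[min(h - 1, int(i / scale))][min(w - 1, int(j / scale))]
             for j in range(nw)] for i in range(nh)]
```

Doubling the target diameter quadruples the foreground area, which is exactly the pixel-level control over nodule size that the detector-oriented augmentation needs.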

4.
A computer-assisted system for automatic retrieval of medical images with similar content can serve as an efficient management tool for handling and mining large-scale data, and can also be used in clinical decision support systems. In this paper, we propose a deep-community-based automated medical image retrieval framework for extracting similar images from a large-scale X-ray database. The framework integrates a deep-learning-based image feature generation approach with a network community detection technique to extract similar images. Compared with state-of-the-art medical image retrieval techniques, the proposed approach demonstrated improved performance. We evaluated the proposed method on two large-scale chest X-ray datasets, where, given a query image, it extracted images with similar disease labels with a precision of 85%. To the best of our knowledge, this is the first deep-community-based image retrieval application on large-scale chest X-ray databases.
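The pipeline of deep features plus community detection can be sketched minimally as follows. Assumed details, not from the paper: cosine similarity between feature vectors, a fixed edge threshold, and connected components standing in for a full community-detection algorithm.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def communities(features, threshold=0.9):
    """Connected components of the thresholded similarity graph,
    a simple stand-in for a full community-detection method."""
    n = len(features)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # Link every pair of images whose features are similar enough.
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(features[i], features[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

At query time, retrieval reduces to returning the other members of the community containing the query image.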

5.
Rheumatoid arthritis (RA) is a chronic autoimmune disease that can result in considerable disability and pain. The metacarpophalangeal (MCP) joint is the joint most commonly affected in RA. In clinical practice, MCP synovitis is commonly diagnosed on the basis of musculoskeletal ultrasound (MSUS) images. However, because of vague criteria, the consistency of grading MCP synovitis from MSUS images varies between ultrasound practitioners, so a new method for diagnosing MCP synovitis is needed. Deep learning has developed rapidly in medicine, but it often requires a large-scale dataset. The total number of MCP-MSUS images fell far short of this demand, and the distribution of images across medical grades was unbalanced; with traditional image augmentation methods, the diversity of the data remains insufficient. In this study, a high-resolution generative adversarial network (HRGAN) method is described that generates enough images for network training and enriches the diversity of the training dataset. In comparison experiments, our proposed MSUS-based diagnostic system provided more consistent results than clinical physicians. As the proposed method is image-based, this study might serve as a reference for other medical image classification research with insufficient datasets.

6.
BACKGROUND: Three-dimensional reconstruction uses computer techniques to identify boundaries in two-dimensional medical images and restore a three-dimensional image of the examined tissue or organ. OBJECTIVE: To analyze how to choose a three-dimensional reconstruction algorithm for medical images under different circumstances. METHODS: The China National Knowledge Infrastructure and PubMed databases were searched. The Chinese search terms were "医学图像, 三维重建, 面绘制, 体绘制"; the English search terms were "medical images, three-dimensional reconstruction, surface rendering, volume rendering". Thirty-three papers on three-dimensional reconstruction algorithms for medical images were retrieved and analyzed with respect to the implementation principles, implementation complexity, and real-time display of surface rendering and volume rendering methods. RESULTS AND CONCLUSION: According to how the data are described during rendering, three-dimensional reconstruction of medical images currently falls into three categories: surface rendering, volume rendering, and hybrid rendering. Analysis of the algorithms in each category shows that surface rendering outperforms volume rendering in algorithmic efficiency and real-time interactivity; although it discards much detail, so the rendered images are less satisfactory, its relative simplicity and small memory footprint have led to its wide use. Volume rendering operates directly on the voxels of the volume data field and can render richer information from the three-dimensional data field, so its rendering quality is superior to that of surface rendering.
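The volume rendering methods discussed above typically accumulate color and opacity along each viewing ray. A toy sketch of front-to-back compositing for a single ray (sample values and the early-termination cutoff are illustrative):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing along one ray.
    samples: list of (color, opacity) pairs, each in [0, 1],
    ordered from the eye into the volume."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # light surviving to this sample
        alpha += (1.0 - alpha) * a       # accumulated opacity
        if alpha >= 0.999:               # early ray termination
            break
    return color, alpha
```

This per-ray accumulation over the whole voxel field is why volume rendering preserves more of the data than extracting a single iso-surface, at a higher computational cost.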

7.
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions about radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine-learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.

8.
Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, with training on databases of millions of natural images. In the field of medical image analysis, by contrast, progress is also remarkable but has been slowed mainly by the relative lack of annotated data and by the inherent constraints of the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the performance of a 2D classification network trained on natural images to 2D, 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, by embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, by expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation and surpassed the state of the art. On 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhancing tumor using the approach based on weight (dimensional) transfer.
Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.

9.
The widespread implementation of computerized medical records in intensive care units (ICUs) over recent years has made large databases of clinical data available for developing clinical prediction models. The typical intensive care unit has several information sources from which data are electronically collected as time series of varying time resolutions. We present an overview of research questions studied in the ICU setting that have been addressed through automatic analysis of these large databases. We focus on automatic learning methods, specifically data mining approaches, for predictive modeling based on these time series of clinical data. On the one hand, we examine short- and medium-term predictions, whose ultimate goal is the development of early warning or decision support systems. On the other hand, we examine long-term outcome prediction models and evaluate their performance with respect to established scoring systems based on static admission and demographic data.
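As a toy illustration of short-term prediction from clinical time series (an assumption for illustration, not a method from the surveyed literature), an alarm could be raised whenever the within-window linear trend of a vital sign exceeds a threshold:

```python
def window_slope(vals):
    """Least-squares slope of values sampled at t = 0, 1, ..., n-1."""
    n = len(vals)
    mt = (n - 1) / 2.0
    mv = sum(vals) / n
    return sum((t - mt) * (v - mv) for t, v in enumerate(vals)) / \
           sum((t - mt) ** 2 for t in range(n))

def early_warning(series, window=3, slope_thresh=0.5):
    """Return indices where the trend over the trailing window is too steep."""
    alarms = []
    for t in range(window - 1, len(series)):
        if abs(window_slope(series[t - window + 1:t + 1])) > slope_thresh:
            alarms.append(t)
    return alarms
```

Real ICU early-warning models replace this hand-set rule with learned models over many channels, but the sliding-window feature extraction is a common first step.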

10.
ImUnity is an original 2.5D deep-learning model designed for efficient and flexible MR image harmonization. A VAE-GAN network, coupled with a confusion module and an optional biological preservation module, is trained on multiple 2D slices taken from different anatomical locations in each subject of the training database, together with image contrast transformations. It generates 'corrected' MR images that can be used for various multi-center population studies. Using three open-source databases (ABIDE, OASIS and SRPBS), which contain MR images from multiple scanner types and vendors and a large range of subject ages, we show that ImUnity: (1) outperforms state-of-the-art methods in the quality of images generated using traveling subjects; (2) removes site or scanner biases while improving patient classification; (3) harmonizes data from new sites or scanners without the need for additional fine-tuning; and (4) allows the selection of multiple reconstructed MR images according to the desired application. Tested here on T1-weighted images, ImUnity could be used to harmonize other types of medical images.

11.
Visualization of medical images has become an important tool in basic medical research and in clinical diagnosis and treatment, and building highly accurate three-dimensional computer models of parts of the human body has become an important foundation for further advances in medical research and in the diagnosis and treatment of disease. The Visualization Toolkit (VTK), a popular scientific visualization package, is convenient and efficient to program with. In this work, VTK combined with VC++ was used for three-dimensional visualization of medical images, and three-dimensional renderings of the head were produced with the contour-connecting, marching cubes, and ray-casting algorithms, respectively. The results demonstrate that VTK is flexible and powerful, with simple reconstruction steps, high speed, and strong interactivity, and can be widely applied to three-dimensional reconstruction of medical images.

12.
Deep learning is currently the most rapidly developing branch of artificial intelligence. It can automatically learn good feature representations from large samples of data, effectively improving performance on a variety of machine learning tasks, and is widely applied in image and signal processing, computer vision, and natural language processing. With the development of digital imaging, deep learning, with its strengths in automatic feature extraction and efficient handling of high-dimensional medical image data, has become one of the key technologies for clinical medical image analysis. On some medical imaging tasks it has already reached the level of radiologists, for example in the detection and recognition of pulmonary nodules and in grading knee joint degeneration, offering a new opportunity for computer science in medical applications. Because the orthopedic field covers many diseases, and its image data have clear features and rich, complex content, the associated learning tasks and application scenarios place new demands on deep learning. This paper reviews progress in applying deep learning to orthopedics across five clinical image processing and analysis tasks: measurement of key bone and joint parameters, lesion detection, disease grading, image segmentation, and image registration. It also looks ahead to future trends, as a reference for researchers in related orthopedic fields.

13.
Machine learning for ultrasound image analysis and interpretation can be helpful in automated image classification in large-scale retrospective analyses to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, classification from cropped images can confuse structures such as the kidneys and abdomen. Conversely, the whole image does not encode sufficient local information to distinguish structures at different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with the whole ultrasound fetal images and with the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. The Cohen κ of 0.72 showed that the proposed method had the highest agreement with the ground truth. The superiority of the proposed method over the other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of predicting images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
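The fusion of global and local classification decisions can be sketched as averaging the class posteriors of the two branches. This is a simple illustrative stand-in, not the paper's exact integration scheme, and the equal weighting is an assumption:

```python
import math

def softmax(logits):
    """Convert raw scores into class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(global_logits, local_logits):
    """Average the posteriors of the whole-image and cropped-region
    branches, then return (argmax class, fused probabilities)."""
    pg, pl = softmax(global_logits), softmax(local_logits)
    fused = [(g + l) / 2.0 for g, l in zip(pg, pl)]
    return max(range(len(fused)), key=fused.__getitem__), fused
```

A confident local branch can thus overturn a weakly confident global branch, which is the behavior the combined global/local design is after.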

14.
Deep hashing methods have been shown to be the most efficient approximate nearest neighbor search techniques for large-scale image retrieval. However, existing deep hashing methods have a poor small-sample ranking performance for case-based medical image retrieval. The top-ranked images in the returned query results may be of a different class than the query image. This ranking problem is caused by classification, region-of-interest (ROI), and small-sample information loss in the hashing space. To address the ranking problem, we propose an end-to-end framework, called Attention-based Triplet Hashing (ATH) network, to learn low-dimensional hash codes that preserve the classification, ROI, and small-sample information. We embed a spatial-attention module into the network structure of our ATH to focus on ROI information. The spatial-attention module aggregates the spatial information of feature maps by jointly utilizing max-pooling, element-wise maximum, and element-wise mean operations along the channel axis. To highlight the essential role of classification in differentiating case-based medical images, we propose a novel triplet cross-entropy loss to achieve maximal class-separability and maximal hash-code discriminability simultaneously during model training. The triplet cross-entropy loss can help to map the classification information of images and the similarity between images into the hash codes. Moreover, by adopting triplet labels during model training, we can fully utilize the small-sample information to alleviate the imbalanced-sample problem. Extensive experiments on two case-based medical datasets demonstrate that our proposed ATH can further improve retrieval performance compared to state-of-the-art deep hashing methods and boost the ranking performance for small samples. Compared to the other loss methods, the triplet cross-entropy loss can enhance the classification performance and hash-code discriminability.
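The channel-axis aggregation inside the spatial-attention module can be sketched on C x H x W nested lists. The equal 0.5/0.5 combination of the max and mean maps below is an assumption for illustration; in the actual module the combination is learned:

```python
def spatial_attention(fmaps):
    """Aggregate C x H x W feature maps along the channel axis with
    element-wise max and mean, then average the two maps into a single
    H x W attention map (sketch of the aggregation step only)."""
    C, H, W = len(fmaps), len(fmaps[0]), len(fmaps[0][0])
    att = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            vals = [fmaps[c][i][j] for c in range(C)]
            att[i][j] = 0.5 * (max(vals) + sum(vals) / C)
    return att
```

Each spatial location is summarized across channels, so regions that fire strongly in any channel (likely ROI) receive high attention regardless of which channel detected them.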

15.
This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces, which are curved manifolds with a well-defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine X-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding versus indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding.
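For 2-D landmark shapes, the partial Procrustes distance that the embedding preserves has a closed form when landmarks are encoded as complex numbers: remove translation, then minimize over rotation analytically. A sketch (scale is assumed to be handled beforehand, e.g. by normalizing to unit centroid size; this is a generic illustration, not the paper's implementation):

```python
import math

def partial_procrustes(z, w):
    """Partial Procrustes distance between two 2-D landmark sets given as
    equal-length lists of complex numbers. Translation is removed and the
    optimal rotation found in closed form; scale is left unchanged."""
    n = len(z)
    cz, cw = sum(z) / n, sum(w) / n
    z = [p - cz for p in z]           # center both configurations
    w = [q - cw for q in w]
    # min over rotation theta of ||z - e^{i theta} w||^2
    #   = |z|^2 + |w|^2 - 2 |sum_k z_k conj(w_k)|
    inner = sum(a * b.conjugate() for a, b in zip(z, w))
    nz = sum(abs(a) ** 2 for a in z)
    nw = sum(abs(b) ** 2 for b in w)
    return math.sqrt(max(0.0, nz + nw - 2.0 * abs(inner)))
```

A rotated and translated copy of a shape is at distance zero, which is exactly the invariance shape-space metrics require before any Euclidean embedding is attempted.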

16.
With the arrival of the big-data era, deep learning has achieved remarkable breakthroughs over traditional pattern recognition methods in tasks such as image classification and detection. In January 2017, the Stanford Artificial Intelligence Laboratory applied deep learning to the automatic classification of dermoscopic and clinical skin lesion images and published the results in Nature, representing the state of the art in automated skin image analysis. This paper interprets that work from the perspectives of database construction, study design, and analysis of the experimental results, surveys the current state of computer-aided diagnosis of skin images in China, and considers the room for development of multi-source skin imaging big-data analysis and intelligent diagnostic assistance, with the aim of advancing the diagnosis of skin diseases in China.

17.
BACKGROUND: Wavelet image fusion merges two images to obtain a more accurate, complete, and reliable description of the same scene. OBJECTIVE: To fuse MRI images of cerebral infarction with wavelet-transform image fusion in order to restore defective images. METHODS: The core mechanism of the fusion is to decompose the MRI cerebral infarction images with two-dimensional wavelet analysis and to fuse the high- and low-frequency signals under several fusion rules. By comparing the resulting images, the fusion rule best suited to MRI images of this region was identified. RESULTS AND CONCLUSION: Different fusion rules successfully repaired different defective regions, and an appropriate combination of rules repaired all missing regions completely. For the MRI cerebral infarction images presented here, the minimum-value fusion rule gave the best result. Two-dimensional wavelet analysis of medical images is thus simple and fast, effectively improves the visual quality of the images, and can assist clinical diagnosis.
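The fusion mechanism described above can be sketched in one dimension with a single-level Haar transform. The minimum rule on approximation coefficients follows the abstract's conclusion; applying a larger-magnitude rule to the detail coefficients is an assumed, commonly used choice, and the 1-D setting is a simplification of the 2-D case:

```python
import math

def haar_1level(x):
    """One-level 1-D Haar decomposition into (approx, detail)."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def inv_haar_1level(approx, detail):
    """Inverse of haar_1level."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def fuse_min(x, y):
    """Fuse two signals: minimum rule on approximation coefficients,
    larger-magnitude rule on detail coefficients."""
    ax, dx = haar_1level(x)
    ay, dy = haar_1level(y)
    a = [min(p, q) for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return inv_haar_1level(a, d)
```

Fusing an image with itself reconstructs it exactly, while fusing a defective image with an intact one lets the intact coefficients fill the missing regions.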

18.
Medical images play an important clinical role, but acquiring multiple imaging modalities simultaneously is difficult, which has motivated medical image synthesis. Image synthesis applies a transformation to a set of given input images to generate new images sufficiently close to real ones; the main approaches are atlas-registration-based, intensity-transformation-based, learning-based, and deep-learning-based methods. This paper reviews medical image synthesis methods, open problems, and future directions.

19.
20.
In pathology image analysis, morphological characteristics of cells are critical for grading many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image-retrieval-based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate an image representation using a novel graph-based hashing model, and then conduct image retrieval by applying a group-to-group matching method for similarity measurement. To improve computational efficiency and reduce memory requirements, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used.
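Once each cell is encoded as a binary code, comparing two images reduces to comparing two sets of codes. A sketch of symmetric group-to-group matching with Hamming distances (the paper's exact matching rule may differ; this nearest-neighbor averaging is an illustrative assumption):

```python
def hamming(a, b):
    """Hamming distance between two binary codes stored as integers."""
    return bin(a ^ b).count("1")

def group_distance(codes_a, codes_b):
    """Group-to-group matching: average, over each cell code in one image,
    of its nearest-neighbor Hamming distance in the other image,
    symmetrized over both directions."""
    def one_way(src, dst):
        return sum(min(hamming(c, d) for d in dst) for c in src) / len(src)
    return 0.5 * (one_way(codes_a, codes_b) + one_way(codes_b, codes_a))
```

Because Hamming distances on integer codes are cheap bit operations, this scales to images with thousands of cells, which is the point of hashing cell features in the first place.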
