Similar Documents
20 similar documents found (search time: 46 ms)
1.
Image retrieval at the semantic level mostly depends on image annotation or image classification. Image annotation performance largely depends on three issues: (1) automatic image feature extraction; (2) semantic image concept modeling; (3) an algorithm for semantic image annotation. To address the first issue, multilevel features are extracted to construct the feature vector that represents the contents of the image. To address the second issue, a domain-dependent concept hierarchy is constructed for the interpretation of image semantic concepts. To address the third issue, automatic multilevel code generation is proposed for image classification and multilevel image annotation. We make use of existing image annotations to address the second and third issues. Our experiments on a specific domain of X-ray images have given encouraging results.
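The multilevel annotation idea above — annotating an image at every level of a domain-dependent concept hierarchy — can be sketched as a walk from a leaf concept up to the root. This is a minimal sketch; the hierarchy and concept names are illustrative, not taken from the paper.

```python
# Toy domain concept hierarchy: child concept -> parent concept (None = root).
# The actual hierarchy in the paper is domain-dependent and much larger.
HIERARCHY = {
    "fracture": "bone",
    "bone": "skeletal",
    "skeletal": None,
}

def multilevel_annotation(leaf_concept):
    """Walk up the hierarchy to emit an annotation at every semantic level,
    most specific first."""
    labels = []
    node = leaf_concept
    while node is not None:
        labels.append(node)
        node = HIERARCHY.get(node)
    return labels
```

A classifier that predicts only the leaf concept thus yields annotations at every coarser level for free.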

2.
3.
The demand for automatic classification of medical X-ray images is rising faster than ever. In this paper, an approach is presented to attain a high accuracy rate for classes of medical databases with a high degree of intraclass variability and interclass similarity. The classification framework was constructed via annotation using three techniques: annotation by binary classification, annotation by probabilistic latent semantic analysis, and annotation using top similar images. The final annotation was then constructed by applying ranking similarity to the keywords produced by each technique. The final annotation keywords were divided into three levels according to body region, specific bone structure within the body region, and imaging direction. A different weight was assigned to each keyword level; these weights were then used to compute a weighted score for each category of medical images based on its ground-truth annotation. The score computed from the generated annotation of a query image was compared with the score of each category, and the query image was assigned to the category with the closest score. The average accuracy rate reported is 87.5%.
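The weighted-level matching described above can be sketched as follows: keywords are grouped into three levels, each level carries a weight, and the query is assigned to the category whose weighted score is closest. The level names, weights, and categories here are illustrative assumptions, not the paper's values.

```python
# Assumed per-level weights (the paper assigns different weights to each of
# the three keyword levels; these particular values are illustrative).
LEVEL_WEIGHTS = {"region": 3.0, "structure": 2.0, "direction": 1.0}

def weightage(annotation):
    """annotation: dict mapping level -> set of keywords at that level.
    Score = sum over levels of (level weight * number of keywords)."""
    return sum(LEVEL_WEIGHTS[level] * len(kws) for level, kws in annotation.items())

def classify(query_annotation, category_annotations):
    """Assign the query to the category whose score is closest to the query's."""
    q = weightage(query_annotation)
    return min(category_annotations,
               key=lambda cat: abs(weightage(category_annotations[cat]) - q))
```

For example, a query annotated at all three levels matches a category whose ground-truth annotation yields the same score.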

4.
5.
This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

6.
Visual pattern mining in histology image collections using bag of features

Objective

The paper addresses the problem of finding visual patterns in histology image collections. In particular, it proposes a method for correlating basic visual patterns with high-level concepts combining an appropriate image collection representation with state-of-the-art machine learning techniques.

Methodology

The proposed method starts by representing the visual content of the collection using a bag-of-features strategy. Two main visual mining tasks are then performed: finding associations between visual patterns and high-level concepts, and performing automatic image annotation. Associations are found using minimum-redundancy maximum-relevance feature selection and co-clustering analysis. Annotation is done by applying a support vector machine classifier. Additionally, the proposed method includes an interpretation mechanism that associates concept annotations with the corresponding image regions. The method was evaluated on two data sets: one comprising histology images of the four fundamental tissue types, and the other composed of histopathology images used for cancer diagnosis. Different visual-word representations and codebook sizes were tested, and performance on both the concept association and image annotation tasks was evaluated qualitatively and quantitatively.
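The core of the bag-of-features step is straightforward: each local descriptor is assigned to its nearest codeword in a learned visual codebook, and the image becomes a normalized codeword histogram. A minimal sketch, assuming the codebook has already been learned (e.g., by k-means) and using toy 2-D vectors in place of real local descriptors:

```python
def nearest(codebook, desc):
    """Index of the codeword closest to the descriptor (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], desc)))

def bag_of_features(codebook, descriptors):
    """Represent an image as a normalized histogram of codeword assignments."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

The resulting fixed-length histogram is what the feature selection, co-clustering, and SVM stages described above operate on.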

Results

The results show that the method is able to find highly discriminative visual features and to associate them to high-level concepts. In the annotation task the method showed a competitive performance: an increase of 21% in f-measure with respect to the baseline in the histopathology data set, and an increase of 47% in the histology data set.

Conclusions

The experimental evidence suggests that the bag-of-features representation is a good alternative to represent visual content in histology images. The proposed method exploits this representation to perform visual pattern mining from a wider perspective where the focus is the image collection as a whole, rather than individual images.

7.
As the use of positron emission tomography-computed tomography (PET-CT) has increased rapidly, there is a need to retrieve relevant medical images that can assist image interpretation. However, the images themselves lack the explicit information needed for querying. We constructed a semantically structured database of nuclear medicine images using the Annotation and Image Markup (AIM) format and evaluated the ability of the AIM annotations to improve image search. We created AIM annotation templates specific to the nuclear medicine domain and used them to annotate 100 nuclear medicine PET-CT studies in AIM format using a controlled vocabulary. We evaluated image retrieval on 20 specific clinical queries. As the gold standard, two nuclear medicine physicians manually retrieved the relevant images from the image database using free-text search of the radiology reports for the same queries. We compared the query results with the results retrieved manually by the physicians. Query performance showed 98% recall for simple queries and 89% recall for complex queries. In total, the queries provided 95% recall (75 of 79 images), 100% precision, and an F1 score of 0.97 for the 20 clinical queries. Three of the four images missed by the queries required reasoning for successful retrieval. Nuclear medicine images augmented with semantic annotations in AIM enabled high recall and precision for simple queries, helping physicians retrieve the relevant images. Further study using a larger data set and the implementation of an inference engine may improve query results for more complex queries.

8.
This paper proposes a new method for content-based medical image retrieval that considers a fused, context-sensitive similarity. First, we fuse the semantic and visual similarities between the query image and each image in the database into pairwise similarities. We then construct a weighted graph whose nodes represent the images and whose edges measure their pairwise similarities. By running a shortest-path algorithm over the weighted graph, we obtain a new, context-sensitive similarity measure between the query image and each database image to complete the retrieval process. In effect, the fused pairwise similarity narrows the semantic gap to obtain a more accurate pairwise similarity measure, which is then spread over the intrinsic data manifold to achieve the context-sensitive similarity and better retrieval performance. The proposed method was evaluated on retrieval of the Common CT Imaging Signs of Lung Diseases (CISLs) and achieved not only better retrieval results but also satisfactory computational efficiency.
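The shortest-path step above can be sketched with Dijkstra's algorithm: pairwise fused similarities define edge weights after conversion to distances, and the shortest path from the query to each database image yields the context-sensitive similarity. The `1 - sim` distance conversion is an illustrative choice, not necessarily the paper's.

```python
import heapq

def context_sensitive_similarity(sim, query):
    """sim: dict node -> {neighbor: fused pairwise similarity in (0, 1]}.
    Returns context-sensitive similarity from `query` to every node."""
    dist = {node: float("inf") for node in sim}
    dist[query] = 0.0
    heap = [(0.0, query)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, s in sim[u].items():
            nd = d + (1.0 - s)  # high similarity -> short edge
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return {node: 1.0 - d for node, d in dist.items()}
```

Note how a database image with low direct similarity to the query can still rank highly if it is connected through a chain of highly similar intermediate images — this is the manifold-spreading effect the abstract describes.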

9.
Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations (“semantic” metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service and parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in the search results. The system's “match observations” function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics such as disease prevalence. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.

10.
Quantitative imaging biomarkers are of particular interest in drug development for their potential to accelerate the drug development pipeline. The lack of consensus methods and carefully characterized performance hampers the widespread availability of these quantitative measures. A framework to support collaborative work on quantitative imaging biomarkers would entail advanced statistical techniques, the development of controlled vocabularies, and a service-oriented architecture for processing large image archives. Until now, this framework has not been developed. With the availability of tools for automatic ontology-based annotation of datasets, coupled with image archives and a means for batch selection and processing of image and clinical data, imaging will go through an increase in capability analogous to what advanced genetic profiling techniques have brought to molecular biology. We report on our current progress in developing an informatics infrastructure to store, query, and retrieve imaging biomarker data across a wide range of resources in a semantically meaningful way that facilitates the collaborative development and validation of potential imaging biomarkers by many stakeholders. Specifically, we describe the semantic components of our system, QI-Bench, that are used to specify and support experimental activities for statistical validation in quantitative imaging.

11.
12.
This work proposes a histology image indexing strategy based on multimodal representations obtained from the combination of visual features and associated semantic annotations. Both data modalities are complementary information sources for an image retrieval system, since visual features lack explicit semantic information and semantic terms do not usually describe the visual appearance of images. The paper proposes a novel strategy to build a fused image representation using matrix factorization algorithms and data reconstruction principles to generate a set of multimodal features. The methodology can seamlessly recover the multimodal representation of images without semantic annotations, allowing us to index new images using visual features only and to accept single example images as queries. Experimental evaluations on three different histology image data sets show that our strategy is a simple yet effective approach to building multimodal representations for histology image search, and it outperforms the popular late-fusion approach to combining information.
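The data-reconstruction idea above can be sketched with a single latent factor: the factorized basis has a visual block `Wv` and a semantic block `Ws`; a new image without annotations is projected onto `Wv` to obtain its latent code, and the semantic block reconstructs a pseudo-semantic part, yielding a multimodal feature vector from visual input alone. The basis values below are toy numbers, not a learned factorization.

```python
def latent_code(Wv, visual):
    """Least-squares projection of a visual feature vector onto a single
    latent factor (the visual block of the factorized basis)."""
    num = sum(w * x for w, x in zip(Wv, visual))
    den = sum(w * w for w in Wv) or 1.0
    return num / den

def multimodal_representation(Wv, Ws, visual):
    """Recover the fused (visual + reconstructed semantic) representation
    for an image that has no semantic annotations."""
    h = latent_code(Wv, visual)
    semantic = [w * h for w in Ws]
    return list(visual) + semantic
```

A real system would use a multi-dimensional latent space learned by matrix factorization over the annotated training images; the one-factor case just makes the reconstruction principle explicit.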

13.
Semantic concept recognition in medical images is a key step in medical image knowledge representation. Research on semantic concept recognition helps machines understand and learn the medical knowledge latent in medical images, and plays an important role in applications such as imaging-assisted diagnosis and intelligent image reading. This work casts the recognition of high-frequency concepts in medical images as a multi-label classification task and uses deep transfer learning based on convolutional neural networks to recognize a limited number of high-frequency medical concepts; in parallel, a retrieval-based topic modeling method extracts semantically related concepts from images similar to a given medical image. The international Cross-Language Image Retrieval Forum, ImageCLEF, organized the ImageCLEFcaption 2018 evaluation in May 2018; its "concept detection" subtask provided 222,314 training images and 9,938 test images, with 111,156 semantic concepts to recognize. Results from both methods were submitted. The experiments show that the CNN-based deep transfer learning method recognized high-frequency medical concepts with an F1 of 0.0928, ranking second among the submitting teams; the retrieval-based topic model was able to recall some low-frequency related concepts, with an F1 of 0.0907, although its performance depends on the quality of the retrieval results. The CNN-based deep transfer learning method is more robust than the retrieval-based method for recognizing high-frequency concepts, but recognition over large-scale, open sets of semantic concepts still requires further work.

14.
Content-based medical image retrieval (CBMIR) is a powerful resource for improving differential computer-aided diagnosis. The major problem with CBMIR applications is the semantic gap, a situation in which the system does not follow the users' sense of similarity. This gap can be bridged by adequate modeling of similarity queries, which ultimately depends on the combination of feature extractor methods and distance functions. In this study, such combinations are referred to as perceptual parameters, as they affect how images are compared. In a CBMIR, the perceptual parameters must be set manually by the users, which imposes a heavy burden on the specialists; otherwise, the system follows a predefined sense of similarity. This paper presents a novel approach to endow a CBMIR with a proper sense of similarity, in which the system defines the perceptual parameter depending on the query element. The method employs an ensemble strategy, in which an extreme learning machine acts as a meta-learner and identifies the most suitable perceptual parameter for a given query image. This parameter defines the search space for the similarity query that retrieves the most similar images. An instance-based learning classifier then labels the query image according to the query result set. As a proof of concept, we integrated the approach into a mammogram CBMIR. For each query image, the resulting tool provided a complete second opinion, including lesion class, system certainty degree, and the set of most similar images. Extensive experiments on a large mammogram dataset showed that our proposal achieved a hit ratio up to 10% higher than the traditional CBMIR approach without requiring external parameters from the users. Our database-driven solution was also up to 25% faster than traditional content retrieval approaches.
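The query-dependent selection of a perceptual parameter can be sketched as a meta-learner mapping the query's features to the best (feature extractor, distance function) pair. Here the extreme learning machine is stubbed as a nearest-centroid rule, and the parameter names and centroids are illustrative assumptions.

```python
# Hypothetical centroids in some query-feature space, one per perceptual
# parameter (extractor, distance) pair; a trained ELM would replace this.
PARAM_CENTROIDS = {
    ("haralick", "euclidean"): (0.2, 0.8),
    ("wavelet", "manhattan"): (0.9, 0.1),
}

def pick_perceptual_parameter(query_feature):
    """Return the (extractor, distance) pair whose centroid is closest to
    the query image's feature vector (squared Euclidean distance)."""
    def sqdist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, query_feature))
    return min(PARAM_CENTROIDS, key=lambda p: sqdist(PARAM_CENTROIDS[p]))
```

The selected pair then configures the similarity query itself; the point of the design is that no perceptual parameter needs to be fixed globally or supplied by the user.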

15.
A growing number of clinicians, educators, researchers, and others use digital images in their work and search for them via image retrieval systems. Yet, this area of information retrieval is much less understood and developed than searching for text-based content, such as biomedical literature and its derivations. The goal of the ImageCLEF medical image retrieval task (ImageCLEFmed) is to improve understanding and system capability in search for medical images. In this paper, we describe the development and use of a medical image test collection designed to facilitate research with image retrieval systems and their users. We also provide baseline results with the new collection and describe them in the context of past research with portions of the collection.

16.
Ideally, an image should be reported and interpreted in the same way (e.g., with the same perceived likelihood of malignancy) or similarly by any two radiologists; however, as much research has demonstrated, this is often not the case. Various efforts have attempted to tackle the problem of reducing the variability in radiologists' interpretations of images. The Lung Image Database Consortium (LIDC) has provided a database of lung nodule images and associated radiologist ratings to aid in the analysis of computer-aided tools. Likewise, the Radiological Society of North America has developed a radiological lexicon called RadLex. The goal of this paper is therefore to investigate the feasibility of associating LIDC characteristics and terminology with RadLex terminology. If matches between LIDC characteristics and RadLex terms are found, probabilistic models based on image features may be used as decision rules to predict whether an image or lung nodule can be characterized or classified with an associated RadLex term. This study found matches for 25 (74%) of the 34 LIDC terms in RadLex. This suggests that LIDC characteristics and the associated rating terminology may be better conceptualized or reduced to produce even more matches with RadLex. Ultimately, the goal is to identify and establish a more standardized rating system and terminology to reduce subjective variability between radiologist annotations. A standardized rating system can then be utilized by future researchers to develop automatic annotation models and tools for computer-aided decision systems.
Key words: Chest CT, digital imaging, image data, image interpretation, imaging informatics, lung, radiographic image interpretation, computer-assisted, reporting, RadLex, semantic, LIDC

17.
《ITBM》2008,29(1):35-43
In this article, we present a case-based reasoning system for the retrieval of patient files similar to a case posed as a query. We focus on patient files made up of several images with contextual information (such as the patient's age, sex, and medical history). Indeed, medical experts generally need varied sources of information (which might be incomplete) to diagnose a pathology. Consequently, we derive a retrieval framework from decision trees, which are well suited to processing heterogeneous and incomplete information. To be integrated into the system, images are indexed by their digital content. The method is evaluated on a classified diabetic retinopathy database. On this database, results are promising: the retrieval sensitivity reaches 79.5% for a window of five cases, which is almost twice as good as the retrieval of single images alone.

18.
To achieve semantic annotation of mammographic (breast X-ray) images, a multilevel semantic modeling method for calcifications in mammograms based on a Bayesian network (BN) is proposed. The method first uses a support vector machine (SVM) to obtain a mapping from low-level visual features of the image to mid-level feature semantics, then uses the BN to fuse the feature semantics and finally extract a probabilistic expression of the high-level pathological semantics, i.e., the degree of malignancy, completing the semantic model. The model was applied to the semantic annotation of mammograms; in this experiment, 142 images were used as the training set and 50 images as the test set. The results show that the annotation accuracy for the diagnostic semantics was 81.48% for malignant and 73.91% for benign cases.

19.
Objective: To design a computer-based method for automatic detection of the fetal nuchal translucency (NT) in ultrasound images based on digital image visual features. Methods: A method for automatically detecting the fetal NT in ultrasound images was established, comprising four main steps: fetal ultrasound image extraction, connected-component extraction, target detection, and NT measurement. Three image sets — standard experimental images (n=35), clinically selected images (n=1208), and manually annotated images (n=120) — were used to evaluate the automatic NT measurement, including localization accuracy and measurement error, with the maximum error recorded. Results: The localization accuracy of the automatic method was 100% (35/35) on the standard experimental images, 90.7% (1096/1208) on the clinically selected images, and 90.8% (109/120) on the manually annotated images, and all NT measurement errors were below 0.03 mm. Conclusion: A method for automatically detecting the fetal NT in ultrasound images based on digital image visual features was successfully established; it can reduce the subjectivity and randomness of manual measurement and improve the accuracy of ultrasound screening.

20.
This paper presents a technical framework to support the development and installation of a system for content-based image retrieval in medical applications (IRMA). A strict separation of feature extraction, feature storage, feature comparison, and the user interface is proposed. This makes it possible to reuse implemented components in different retrieval algorithms, which improves software quality, shortens the development cycle for applications, and allows standardized end-user interfaces to be introduced. Based on the proposed framework, the IRMA engine has been built and is currently used to evaluate content-based retrieval methods on a collection of 20,000 medical and 135,000 non-medical images.
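The strict separation of concerns described above can be sketched as a retrieval engine that is wired together from independent extraction, storage, and comparison components, so each can be swapped without touching the others. All component internals below (mean intensity as the feature, absolute difference as the comparison) are toy placeholders, not IRMA's actual algorithms.

```python
class RetrievalEngine:
    """Glue object: delegates to pluggable extractor / store / compare
    components, mirroring the framework's strict separation of concerns."""

    def __init__(self, extractor, store, compare):
        self.extractor, self.store, self.compare = extractor, store, compare

    def index(self, image_id, image):
        self.store[image_id] = self.extractor(image)

    def query(self, image, k=3):
        q = self.extractor(image)
        ranked = sorted(self.store, key=lambda i: self.compare(q, self.store[i]))
        return ranked[:k]

# Wiring with toy components; any conforming component can be substituted.
engine = RetrievalEngine(
    extractor=lambda img: sum(img) / len(img),  # placeholder feature
    store={},                                   # placeholder feature storage
    compare=lambda a, b: abs(a - b),            # placeholder distance
)
```

Because the user interface would talk only to `RetrievalEngine`, a new feature extractor or distance function can be evaluated without changing the storage layer or front end — the reuse benefit the abstract claims.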
