A graph-based approach for the retrieval of multi-modality medical images
Abstract: In this paper, we address the retrieval of multi-modality medical volumes that consist of two different imaging modalities acquired sequentially from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships, and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved higher precision than several other retrieval techniques: gray-level histograms, as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects.
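The abstract describes constraining tumour-vertex edges by spatial proximity while keeping the full organ-to-organ structure. The sketch below is only an illustration of that idea, not the paper's actual construction: the `Region` type, the `build_constrained_graph` function, and the distance-threshold rule for tumour-organ edges are assumptions introduced here for clarity.

```python
# Illustrative sketch (not the authors' implementation) of a tumour-constrained graph:
# organ vertices form a complete subgraph (patient-specific anatomy), while each
# tumour vertex is linked only to organs within a hypothetical proximity threshold.
from dataclasses import dataclass
from itertools import combinations
import math


@dataclass
class Region:
    label: str            # e.g. "lung_tumour", "left_lung" (hypothetical labels)
    is_tumour: bool
    centroid: tuple       # (x, y, z) in scanner coordinates


def centroid_distance(a: Region, b: Region) -> float:
    """Euclidean distance between region centroids."""
    return math.dist(a.centroid, b.centroid)


def build_constrained_graph(regions, proximity_threshold):
    """Return (vertices, edges); each edge stores the centroid distance as a spatial attribute."""
    organs = [r for r in regions if not r.is_tumour]
    tumours = [r for r in regions if r.is_tumour]
    edges = {}

    # Organ-organ edges: keep the complete subgraph to model anatomical variation.
    for a, b in combinations(organs, 2):
        edges[(a.label, b.label)] = centroid_distance(a, b)

    # Tumour-organ edges: keep only spatially proximate organs, localising the tumour.
    for t in tumours:
        for o in organs:
            d = centroid_distance(t, o)
            if d <= proximity_threshold:
                edges[(t.label, o.label)] = d

    return [r.label for r in regions], edges
```

For example, a PET-CT study segmented into two lungs, the mediastinum, and one tumour would yield a complete triangle over the three organs, plus tumour edges only to the structures within the chosen threshold, so the graph itself encodes where the tumour is located.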
Keywords: Content-based image retrieval; Graph similarity; Multi-modality; PET-CT
This article is indexed by ScienceDirect and other databases.