Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field.

2.
3.
We present a new non-uniform sampling method for the accurate estimation of mutual information in multi-modal brain image rigid registration. Most existing density estimators used for mutual information computation incorrectly assume that the intensity of each voxel is independent of its neighborhood. Our method uses the 3D Fast Discrete Curvelet Transform to reduce the sampled voxels' interdependency by sampling voxels that are less dependent on their neighborhood, and thus provides a more accurate estimate of the mutual information and a more accurate registration. The main advantages of our method over other non-uniform sampling schemes are that: (1) it provides a more accurate estimate of the image statistics with fewer samples; (2) it is less sensitive to the variability of anatomical structures' shapes, orientations, and sizes; and (3) it yields more accurate registration results. Extensive evaluation on 1000 synthetic registrations between T1- and T2-weighted clinical MRI images and 20 real clinical registrations of brain CT images to Proton Density (PD), T1- and T2-weighted MRI images from the public RIRE database shows the effectiveness of our method. Our method has the lowest mean registration errors recorded to date for CT-MR image registration on the RIRE website among methods tested on more than five datasets. These results indicate that our sampling scheme can be used to achieve the more accurate multi-modal registration required for image-guided therapy and surgery.
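A minimal Python sketch of the sampling-then-estimation idea described above. The paper scores voxel interdependence with the 3D Fast Discrete Curvelet Transform; this sketch substitutes a local gradient-magnitude saliency as a placeholder, and the function name `estimate_mi_sampled`, the sample count and the bin count are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def estimate_mi_sampled(fixed, moving, n_samples=20000, bins=64, rng=None):
    """Estimate mutual information from a non-uniform voxel sample.

    The paper scores voxel interdependence with the 3D Fast Discrete Curvelet
    Transform; a local gradient magnitude is used here only as a placeholder
    saliency. n_samples must not exceed the number of voxels."""
    rng = np.random.default_rng() if rng is None else rng
    saliency = gaussian_gradient_magnitude(fixed.astype(float), sigma=1.0)
    p = saliency.ravel() + 1e-12
    p /= p.sum()
    idx = rng.choice(fixed.size, size=n_samples, replace=False, p=p)

    f, m = fixed.ravel()[idx], moving.ravel()[idx]
    joint, _, _ = np.histogram2d(f, m, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```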

4.
Semi-supervised cluster analysis of imaging data
In this paper, we present a semi-supervised clustering-based framework for discovering coherent subpopulations in heterogeneous image sets. Our approach involves limited supervision in the form of labeled instances from two distributions that reflect a rough guess about the subspace of features that is relevant for cluster analysis. By assuming that images are defined in a common space via registration to a common template, we propose a segmentation-based method for detecting locations that signify local regional differences in the two labeled sets. A PCA model of local image appearance is then estimated at each location of interest, and ranked with respect to its relevance for clustering. We develop an incremental k-means-like algorithm that discovers novel meaningful categories in a test image set. The application of our approach in this paper is in the analysis of populations of healthy older adults. We validate our approach on a synthetic dataset, as well as on a dataset of brain images of older adults. We assess our method's performance on the problem of discovering clusters of MR images of the human brain, and present a cluster-based measure of pathology that reflects the deviation of a subject's MR image from the normal (i.e., cognitively stable) state. We analyze the clusters' structure, and show that clustering results obtained using our approach correlate well with clinical data.
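As a rough illustration of the incremental k-means-like step described above, the sketch below seeds clusters from the labeled instances and opens a new cluster whenever a sample lies far from every existing centroid; the distance threshold, the feature construction and the function name are hypothetical choices, not the paper's algorithm.

```python
import numpy as np

def seeded_incremental_kmeans(X, seed_labels, new_cluster_dist=3.0, n_iter=20):
    """Seeded, incremental k-means-like clustering (illustrative only).

    X           : (n_samples, n_features) appearance features, e.g. PCA scores.
    seed_labels : -1 for unlabeled rows, 0..k-1 for the labeled seed instances.
    A new cluster is opened when a sample lies farther than `new_cluster_dist`
    from every existing centroid (the threshold is a hypothetical choice)."""
    centroids = [X[seed_labels == k].mean(axis=0)
                 for k in np.unique(seed_labels[seed_labels >= 0])]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - np.asarray(centroids)[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        far = d.min(axis=1) > new_cluster_dist       # candidates for a novel category
        if far.any():
            centroids.append(X[far].mean(axis=0))
            assign[far] = len(centroids) - 1
        centroids = [X[assign == k].mean(axis=0) if np.any(assign == k) else c
                     for k, c in enumerate(centroids)]
    return assign, np.asarray(centroids)
```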

5.
Image registration is a very important problem in computer vision and medical image processing. Numerous algorithms for registering single- and multi-modal image data have been reported in these areas. Robustness and computational efficiency are of prime importance in image data registration. In this paper, a robust, reliable, and efficient algorithm for estimating the transformation between two image data sets of a patient taken from the same modality over time is presented. Estimating the registration between two image data sets is formulated as a motion-estimation problem. We use a hierarchical optical flow motion model which allows for both global as well as local motion between the data sets. In this hierarchical motion model, we represent the flow field with a B-spline basis which implicitly incorporates smoothness constraints on the field. In computing the motion, we minimize the expectation of the squared-differences energy function numerically via a modified Newton iteration scheme. The main idea in the modified Newton method is that we precompute the Hessian of the energy function at the optimum without explicitly knowing the optimum. This idea is used for both global and local motion estimation in the hierarchical motion model. We present examples of motion estimation on synthetic and real data (from a patient, acquired at pre- and post-operative stages) and compare the performance of our algorithm with that of competing ones.
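The core of the modified Newton scheme (precomputing the Gauss-Newton Hessian from the reference-image gradients so that it need not be re-evaluated at each iterate) can be sketched as follows; for brevity the warp is reduced to a single global 2-D translation rather than the paper's hierarchical B-spline flow field, and the helper name is an assumption.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def ssd_translation_modified_newton(fixed, moving, n_iter=30, tol=1e-4):
    """Modified-Newton SSD minimisation, reduced to a global 2-D translation.

    The key idea is kept: the Gauss-Newton Hessian is precomputed once from
    the reference-image gradients instead of being re-evaluated at every
    iterate. The hierarchical B-spline flow field of the paper is replaced by
    a single translation purely for illustration."""
    fixed = fixed.astype(float)
    gy, gx = np.gradient(fixed)
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)   # n_voxels x 2 Jacobian
    H = J.T @ J                                      # precomputed Hessian
    t = np.zeros(2)                                  # (row, col) displacement
    for _ in range(n_iter):
        warped = nd_shift(moving.astype(float), shift=-t, order=1)  # moving(x + t)
        r = (warped - fixed).ravel()
        step = np.linalg.solve(H, J.T @ r)
        t -= step
        if np.linalg.norm(step) < tol:
            break
    return t
```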

6.
The hierarchical subdivision strategy which decomposes a non-rigid matching problem into numerous local rigid transformations is a very common approach in image registration. While mutual information (MI) has proven to be a very robust and reliable similarity measure for intensity-based matching of multi-modal images, numerous problems have to be faced if it is applied to small-sized images, compromising its usefulness for such subdivision schemes. We examine and explain the loss of MI's statistical consistency along the hierarchical subdivision. Information-theoretic measures are proposed to identify the problematic regions in order to overcome the MI drawbacks. This not only improves the accuracy and robustness of the registration, but can also be used as a very efficient stopping criterion for the further subdivision of nodes in the hierarchy, which drastically reduces the computational cost of the entire registration procedure. Moreover, we present a new intensity mapping technique that allows MI to be replaced by more reliable measures for small patches. Integrated into the hierarchical framework, this mapping can locally transform the multi-modal images into an intermediate pseudo-modality. The intensity mapping uses the local joint intensity histograms of the coarsely registered sub-images and allows the use of the more robust and computationally more efficient cross-correlation coefficient (CC) for the matching at lower levels of the hierarchy.
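A simplified sketch of the intensity-mapping idea: the joint histogram of the coarsely registered pair is used to replace each moving-image intensity by the conditional mean of the fixed-image intensity, producing a pseudo-modality on which plain cross-correlation can be used. The bin count and function names are assumptions, and the paper applies the mapping locally per sub-image rather than globally as shown here.

```python
import numpy as np

def pseudo_modality_map(fixed, moving, bins=64):
    """Map `moving` into a pseudo-modality resembling `fixed`, using the joint
    intensity histogram of the coarsely registered pair: each moving-intensity
    bin is replaced by the conditional mean fixed intensity in that bin."""
    edges = np.histogram_bin_edges(moving, bins=bins)
    idx = np.clip(np.digitize(moving.ravel(), edges[1:-1]), 0, bins - 1)
    lut = np.zeros(bins)
    for b in range(bins):
        sel = idx == b
        lut[b] = fixed.ravel()[sel].mean() if sel.any() else 0.0
    return lut[idx].reshape(moving.shape)

def cross_correlation(a, b):
    """Normalised cross-correlation, usable on small patches after the mapping."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```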

7.
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
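For context only, the sketch below shows a conventional intensity-weighted voting label fusion, i.e. the same-modality baseline that the paper's generative variational-EM model is designed to replace; it is not the proposed method, and the weighting kernel width is an arbitrary assumption.

```python
import numpy as np

def weighted_vote_label_fusion(target, warped_atlas_images, warped_atlas_labels,
                               n_labels, sigma=10.0):
    """Conventional intensity-weighted voting label fusion (baseline only).

    Each registered atlas votes for its propagated label, weighted by its local
    intensity similarity to the target; this is exactly the same-modality
    assumption that the paper's generative model removes."""
    votes = np.zeros((n_labels,) + target.shape)
    for img, lab in zip(warped_atlas_images, warped_atlas_labels):
        w = np.exp(-((img - target) ** 2) / (2.0 * sigma ** 2))  # similarity weight
        for l in range(n_labels):
            votes[l] += w * (lab == l)
    return votes.argmax(axis=0)
```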

8.
We propose an automatic medical image registration method that combines image segmentation and mutual information. The images are first preprocessed with thresholding and mathematical morphology, then segmented with the k-means method, and finally registered using a mutual-information criterion optimized with Powell's method. Applied to the registration of clinical magnetic resonance imaging (MRI) and positron emission tomography (PET) images, the method yields satisfactory results.
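A minimal sketch of the final registration step described above: Powell optimization of a mutual-information criterion over a rigid 2-D transform, assuming the thresholding/morphology/k-means preprocessing has already been applied; the parameterisation and bin count are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, optimize

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def register_rigid_powell(fixed, moving):
    """Rigid 2-D registration by Powell search over (angle, row-shift, col-shift),
    maximising mutual information on the (already preprocessed) images."""
    def neg_mi(params):
        angle, ty, tx = params
        warped = ndimage.rotate(moving, angle, reshape=False, order=1)
        warped = ndimage.shift(warped, (ty, tx), order=1)
        return -mutual_information(fixed, warped)
    res = optimize.minimize(neg_mi, x0=np.zeros(3), method='Powell')
    return res.x, -res.fun
```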

9.
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
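A sketch of the resulting two-channel mono-modal dissimilarity, assuming the synthesis step has already produced proxy images for the opposite modality; the normalisation and the summed-SSD form shown here are illustrative assumptions rather than the exact cost used in the paper.

```python
import numpy as np

def znorm(img):
    """Simple intensity normalisation (zero mean, unit variance)."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + 1e-12)

def two_channel_ssd(fixed_t1, fixed_t2_synth, warped_t2, warped_t1_synth):
    """Two-channel mono-modal dissimilarity for a T1 target and a T2 moving
    image, using synthesised proxy images for the opposite modality.
    (The image-synthesis step itself is outside this sketch.)"""
    c1 = np.sum((znorm(fixed_t1) - znorm(warped_t1_synth)) ** 2)   # T1 channel
    c2 = np.sum((znorm(fixed_t2_synth) - znorm(warped_t2)) ** 2)   # T2 channel
    return c1 + c2
```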

10.
We describe a new algorithm for non-rigid registration capable of estimating a constrained dense displacement field from multi-modal image data. We applied this algorithm to capture non-rigid deformation between digital images of histological slides and digital flat-bed scanned images of cryotomed sections of the larynx, and carried out validation experiments to measure the effectiveness of the algorithm. The implementation was carried out by extending the open-source Insight ToolKit software. In diagnostic imaging of cancer of the larynx, imaging modalities sensitive to both anatomy (such as MRI and CT) and function (PET) are valuable. However, these modalities differ in their capability to discriminate the margins of tumor. Gold-standard tumor margins can be obtained from histological images from cryotomed sections of the larynx. Unfortunately, the process of freezing, fixation, cryotoming and staining the tissue to create histological images introduces non-rigid deformations and significant contrast changes. We demonstrate that the non-rigid registration algorithm we present captures these deformations and allows us to align histological images with scanned images of the larynx. Our non-rigid registration algorithm constructs a deformation field to warp one image onto another. The algorithm measures image similarity using a mutual information similarity criterion, and avoids spurious deformations due to noise by constraining the estimated deformation field with a linear elastic regularization term. The finite element method is used to represent the deformation field, and our implementation enables us to assign inhomogeneous material characteristics so that hard regions resist internal deformation whereas soft regions are more pliant. A gradient-descent optimization strategy is used, enabling rapid and accurate convergence to the desired estimate of the deformation field. A further speed-up, at no cost in accuracy, is achieved by using an adaptive mesh-refinement strategy.

11.
Semi-supervised learning has great potential for medical image segmentation with few labeled data, but most existing methods consider only single-modal data. The complementary characteristics of multi-modal data can improve the performance of semi-supervised segmentation for each image modality. However, a shortcoming of most existing multi-modal solutions is that, because the processing models for the different modalities are highly coupled, multi-modal data are required not only during training but also at inference, which limits their usage in clinical practice. Consequently, we propose a semi-supervised contrastive mutual learning (Semi-CML) segmentation framework, in which a novel area-similarity contrastive (ASC) loss leverages cross-modal information and prediction consistency between different modalities to conduct contrastive mutual learning. Although Semi-CML can improve the segmentation performance of both modalities simultaneously, there is a performance gap between the two modalities, i.e., one modality's segmentation performance is usually better than the other's. Therefore, we further develop a soft pseudo-label re-learning (PReL) scheme to remedy this gap. We conducted experiments on two public multi-modal datasets. The results show that Semi-CML with PReL greatly outperforms state-of-the-art semi-supervised segmentation methods and achieves performance similar to (and sometimes even better than) that of fully supervised segmentation methods with 100% labeled data, while reducing the cost of data annotation by 90%. We also conducted ablation studies to evaluate the effectiveness of the ASC loss and the PReL module.
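As a hedged illustration of cross-modal mutual learning, the sketch below implements a generic symmetric prediction-consistency term between the two modality branches; it is not the paper's area-similarity contrastive (ASC) loss, whose exact form is not given in the abstract, and the temperature is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

def cross_modal_consistency_loss(logits_m1, logits_m2, temperature=0.1):
    """Symmetric prediction-consistency term between two modality branches.

    NOT the paper's ASC loss: this generic term only encourages the softened
    segmentation predictions of the two (registered) modalities to agree.

    logits_m1, logits_m2: (B, C, H, W) segmentation logits from each branch."""
    p1 = F.softmax(logits_m1 / temperature, dim=1)
    p2 = F.softmax(logits_m2 / temperature, dim=1)
    # symmetric KL divergence between the modality-specific predictions
    kl12 = F.kl_div(p1.clamp_min(1e-8).log(), p2, reduction='batchmean')
    kl21 = F.kl_div(p2.clamp_min(1e-8).log(), p1, reduction='batchmean')
    return 0.5 * (kl12 + kl21)
```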

12.
Fast and robust registration of PET and MR images of human brain
In recent years, mutual information has proved to be an excellent criterion for registration of intra-individual images from different modalities. Multi-resolution, coarse-to-fine optimization has been proposed to speed up the registration process. The aim of our work was to further improve registration speed without compromising robustness or accuracy. We present and evaluate two procedures for co-registration of positron emission tomography (PET) and magnetic resonance (MR) images of the human brain that combine a multi-resolution approach with an automatic segmentation of the input image volumes into areas of interest and background. We show that an acceleration factor of 10 can be achieved for clinical data and that suitable preprocessing can improve the robustness of registration. Emphasis was placed on creating an automatic registration system that could be used routinely in a clinical environment. For this purpose, an easy-to-use graphical user interface has been developed. It allows physicians with no special knowledge of the registration algorithm to perform a fast and reliable alignment of images. Registration progress is displayed on the fly as a fusion of the images, enabling visual checking during registration.
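A compact sketch of the multi-resolution, coarse-to-fine strategy: rigid parameters found at a coarse level initialise the next finer level, and mutual information is evaluated only inside a foreground mask. A simple intensity threshold stands in for the paper's automatic segmentation into areas of interest and background; the levels, parameterisation and optimizer are assumptions.

```python
import numpy as np
from scipy import ndimage, optimize

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def multires_rigid_register(fixed, moving, levels=(4, 2, 1)):
    """Coarse-to-fine rigid registration maximising MI inside a foreground mask.
    A simple intensity threshold stands in for the paper's automatic
    segmentation into areas of interest and background."""
    params = np.zeros(3)                      # (angle, row-shift, col-shift), full res
    for factor in levels:
        fx = ndimage.zoom(fixed, 1.0 / factor, order=1)
        mv = ndimage.zoom(moving, 1.0 / factor, order=1)
        mask = fx > fx.mean()                 # placeholder for the ROI segmentation

        def neg_mi(p, fx=fx, mv=mv, mask=mask):
            warped = ndimage.rotate(mv, p[0], reshape=False, order=1)
            warped = ndimage.shift(warped, (p[1], p[2]), order=1)
            return -mutual_information(fx[mask], warped[mask])

        x0 = np.array([params[0], params[1] / factor, params[2] / factor])
        res = optimize.minimize(neg_mi, x0, method='Powell')
        params = np.array([res.x[0], res.x[1] * factor, res.x[2] * factor])
    return params
```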

13.
Methods for deep-learning-based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and often insufficient availability of expert-supervised correspondence annotations has led to slower progress than in other domains such as image segmentation. Yet image registration could also benefit more directly from an iterative solution than segmentation does. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning from deformation estimation. In this work, we examine an end-to-end trainable, weakly supervised deep-learning-based feature extraction approach that is able to map complex appearance to a common space. Our results on thoracoabdominal CT and MRI image registration show that the proposed method compares favourably to state-of-the-art hand-crafted multi-modal features, mutual-information-based approaches, and fully integrated CNN-based methods, and even handles the limitation of small and only weakly labeled training data sets.

14.
In settings where high-level inferences are made based on registered image data, the registration uncertainty can contain important information. In this article, we propose a Bayesian non-rigid registration framework where conventional dissimilarity and regularization energies can be included in the likelihood and the prior distribution on deformations respectively through the use of Boltzmann's distribution. The posterior distribution is characterized using Markov Chain Monte Carlo (MCMC) methods with the effect of the Boltzmann temperature hyper-parameters marginalized under broad uninformative hyper-prior distributions. The MCMC chain permits estimation of the most likely deformation as well as the associated uncertainty. On synthetic examples, we demonstrate the ability of the method to identify the maximum a posteriori estimate and the associated posterior uncertainty, and demonstrate that the posterior distribution can be non-Gaussian. Additionally, results from registering clinical data acquired during neurosurgery for resection of brain tumor are provided; we compare the method to single transformation results from a deterministic optimizer and introduce methods that summarize the high-dimensional uncertainty. At the site of resection, the registration uncertainty increases and the marginal distribution on deformations is shown to be multi-modal.
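A minimal sketch of the MCMC characterisation described above: random-walk Metropolis sampling of a Boltzmann posterior p(theta) proportional to exp(-E(theta)/T) over a low-dimensional deformation parameterisation. The fixed temperature and isotropic Gaussian proposal are simplifying assumptions; the paper marginalises the temperature hyper-parameters under broad uninformative hyper-priors.

```python
import numpy as np

def metropolis_registration(energy, theta0, n_samples=5000, step=0.05,
                            temperature=1.0, rng=None):
    """Random-walk Metropolis sampling of p(theta) ~ exp(-E(theta)/T).

    `energy(theta)` should return dissimilarity + regularisation for a
    low-dimensional deformation parameterisation. T is fixed here, unlike the
    paper, where the temperature hyper-parameters are marginalised."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    e = energy(theta)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)
        e_prop = energy(prop)
        # Metropolis acceptance in log space
        if np.log(rng.random()) < (e - e_prop) / temperature:
            theta, e = prop, e_prop
        samples.append(theta.copy())
    samples = np.asarray(samples)
    return samples, samples.mean(axis=0), samples.std(axis=0)  # chain, mean, spread
```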

15.
Diabetic retinopathy (DR) is a common ophthalmic disease among diabetic patients, and it is essential to diagnose DR in its early stages so that treatment can begin. Various imaging systems have been proposed to detect and visualize retinal diseases. The fluorescein angiography (FA) imaging technique is now widely used as the gold-standard technique to evaluate the clinical manifestations of DR. Optical coherence tomography (OCT) imaging is another technique that provides 3D information about the retinal structure. FA and OCT images are captured in different phases and fields of view, and image fusion of these modalities is of interest to clinicians. This paper proposes a hybrid registration framework based on the extraction and refinement of segmented major blood vessels of retinal images. The newly extracted features significantly improve the success rate of global registration in the complex blood-vessel network of retinal images. Afterward, intensity-based and deformable transformations are used to further compensate for the residual motion between the FA and OCT images. Experimental results on 26 images from patients at various stages of DR indicate that this algorithm yields promising registration and fusion results for routine clinical use.

16.
In the past few years, convolutional neural networks (CNNs) have proven powerful at extracting image features crucial for medical image registration. However, challenging applications and recent advances in computer vision suggest that CNNs are limited in their ability to understand the spatial correspondence between features, which is at the core of image registration. The issue is further exacerbated in multi-modal image registration, where the appearances of the input images can differ significantly. This paper presents a novel cross-modal attention mechanism for correlating features extracted from the multi-modal input images and mapping such correlation to the registration transformation. To train the developed network efficiently, a contrastive-learning-based pre-training method is also proposed to aid the network in extracting high-level features across the input modalities for the subsequent cross-modal attention learning. We validated the proposed method on transrectal ultrasound (TRUS) to magnetic resonance (MR) registration, a clinically important procedure that benefits prostate cancer biopsy. Our experimental results demonstrate that for MR-TRUS registration, a deep neural network embedded with the cross-modal attention block outperforms other advanced CNN-based networks ten times its size. We also incorporated visualization techniques to improve the interpretability of our network, which helps bring insight into deep-learning-based image registration methods. The source code of our work is available at https://github.com/DIAL-RPI/Attention-Reg.
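A generic cross-attention block in the spirit of the mechanism described above (queries from one modality's feature map, keys and values from the other); this is a sketch, not the published Attention-Reg implementation, which is available at the repository linked in the abstract. In practice such attention is applied at a coarse feature resolution, since the attention matrix is quadratic in the number of voxels.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Generic cross-attention between two modality feature maps (a sketch, not
    the published Attention-Reg block).  Queries come from one modality, keys
    and values from the other, so the output encodes where features of
    modality A find corresponding features in modality B."""
    def __init__(self, channels, dim=64):
        super().__init__()
        self.q = nn.Conv3d(channels, dim, kernel_size=1)
        self.k = nn.Conv3d(channels, dim, kernel_size=1)
        self.v = nn.Conv3d(channels, dim, kernel_size=1)
        self.scale = dim ** -0.5

    def forward(self, feat_a, feat_b):
        b, _, d, h, w = feat_a.shape
        q = self.q(feat_a).flatten(2).transpose(1, 2)      # B x N x dim
        k = self.k(feat_b).flatten(2)                      # B x dim x N
        v = self.v(feat_b).flatten(2).transpose(1, 2)      # B x N x dim
        attn = torch.softmax(q @ k * self.scale, dim=-1)   # B x N x N correlation
        out = (attn @ v).transpose(1, 2).reshape(b, -1, d, h, w)
        return out                                         # B x dim x D x H x W
```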

17.
Medical Image Analysis, 2014, 18(7): 1132-1142
Echo Planar Imaging (EPI) is routinely used in diffusion and functional MR imaging due to its rapid acquisition time. However, the long readout period makes it prone to susceptibility artefacts, which result in geometric and intensity distortions of the acquired image. The use of these distorted images for neuronavigation hampers the effectiveness of image-guided surgery systems, as critical white matter tracts and functionally eloquent brain areas cannot be accurately localised. In this paper, we present a novel method for correcting distortions arising from susceptibility artefacts in EPI images. The proposed method combines fieldmap- and image-registration-based correction techniques in a unified framework. A phase unwrapping algorithm is presented that can efficiently compute the B0 magnetic field inhomogeneity map, as well as the uncertainty associated with the estimated solution, through the use of dynamic graph cuts. This information is fed to a subsequent image registration step to further refine the results in areas with high uncertainty. This work has been integrated into the surgical workflow at the National Hospital for Neurology and Neurosurgery, and its effectiveness in correcting geometric distortions due to susceptibility artefacts is demonstrated on EPI images acquired with an interventional MRI scanner during neurosurgery.
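A minimal sketch of the fieldmap-based part of the correction: with the B0 inhomogeneity map in Hz, the voxel shift along the phase-encode axis equals fieldmap times total readout time, and each phase-encode line is resampled to undo it. The graph-cut phase unwrapping, the uncertainty estimate and the registration refinement are omitted, and the sign convention depends on the acquisition.

```python
import numpy as np

def unwarp_epi(distorted, b0_hz, total_readout_time, pe_axis=1):
    """Fieldmap-based EPI unwarping along the phase-encode axis.

    Pixel shift = fieldmap [Hz] * total readout time [s]; each phase-encode
    line is resampled accordingly.  Intensity/Jacobian modulation is ignored
    and the shift sign depends on the phase-encode direction."""
    img = np.moveaxis(distorted.astype(float), pe_axis, -1)
    shift = np.moveaxis(b0_hz * total_readout_time, pe_axis, -1)
    out = np.empty_like(img)
    n = img.shape[-1]
    y = np.arange(n)
    for idx in np.ndindex(img.shape[:-1]):
        # sample the distorted line at the displaced positions
        out[idx] = np.interp(y + shift[idx], y, img[idx])
    return np.moveaxis(out, -1, pe_axis)
```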

18.
We developed a new flexible approach for a co-analysis of multi-modal brain imaging data using a non-parametric framework. In this approach, results from separate analyses on different modalities are combined using a combining function and assessed with a permutation test. This approach identifies several cross-modality relationships, such as concordance and dissociation, without explicitly modeling the correlation between modalities. We applied our approach to structural and perfusion MRI data from an Alzheimer's disease (AD) study. Our approach identified areas of concordance, where both gray matter (GM) density and perfusion decreased together, and areas of dissociation, where GM density and perfusion did not decrease together. In conclusion, these results demonstrate the utility of this new non-parametric method to quantitatively assess the relationships between multiple modalities.
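A minimal sketch of the combine-and-permute idea: per-modality voxel-wise two-sample t maps are merged with a simple additive combining function and assessed by jointly permuting group labels. The additive combiner and the permutation count are illustrative assumptions; the paper discusses combining functions targeting both concordance and dissociation.

```python
import numpy as np

def two_sample_t(x, y):
    """Voxel-wise two-sample t-statistic (x, y: subjects x voxels)."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(axis=0, ddof=1), y.var(axis=0, ddof=1)
    se = np.sqrt(vx / nx + vy / ny) + 1e-12
    return (x.mean(axis=0) - y.mean(axis=0)) / se

def combined_permutation_test(mod1_a, mod1_b, mod2_a, mod2_b,
                              n_perm=1000, rng=None):
    """Non-parametric co-analysis of two modalities: additive combining
    function on per-modality t maps, significance by permuting group labels
    jointly across modalities."""
    rng = np.random.default_rng() if rng is None else rng
    obs = two_sample_t(mod1_a, mod1_b) + two_sample_t(mod2_a, mod2_b)
    all1 = np.vstack([mod1_a, mod1_b])
    all2 = np.vstack([mod2_a, mod2_b])
    na, n = len(mod1_a), len(all1)
    exceed = np.zeros_like(obs)
    for _ in range(n_perm):
        perm = rng.permutation(n)
        t = (two_sample_t(all1[perm[:na]], all1[perm[na:]])
             + two_sample_t(all2[perm[:na]], all2[perm[na:]]))
        exceed += (t >= obs)
    return (exceed + 1) / (n_perm + 1)      # voxel-wise permutation p-values
```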

19.
The registration of functional brain data to common stereotaxic brain space facilitates data sharing and integration across different subjects, studies, and even imaging modalities. Thus, we previously described a method for the probabilistic registration of functional near-infrared spectroscopy (fNIRS) data onto Montreal Neurological Institute (MNI) coordinate space that can be used even when magnetic resonance images of the subjects are not available. This method, however, requires the careful measurement of scalp landmarks and fNIRS optode positions using a 3D-digitizer. Here we present a novel registration method, based on simulations in place of physical measurements for optode positioning. First, we constructed a holder deformation algorithm and examined its validity by comparing virtual and actual deformation of holders on spherical phantoms and real head surfaces. The discrepancies were negligible. Next, we registered virtual holders on synthetic heads and brains that represent size and shape variations among the population. The registered positions were normalized to MNI space. By repeating this process across synthetic heads and brains, we statistically estimated the most probable MNI coordinate values, and clarified errors, which were in the order of several millimeters across the scalp, associated with this estimation. In essence, the current method allowed the spatial registration of completely stand-alone fNIRS data onto MNI space without the use of supplementary measurements. This method will not only provide a practical solution to the spatial registration issues in fNIRS studies, but will also enhance cross-modal communications within the neuroimaging community.

20.
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
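A sketch of the final interpolation step, reduced to one spatial axis for brevity: subvolume translations are interpolated linearly and rotational poses by quaternion SLERP, here via SciPy's Rotation/Slerp utilities; the 1-D layout and the Euler-angle input format are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_rigid_field(centers, rotations_deg, translations, query_points):
    """Interpolate per-subvolume rigid transforms to a smooth field along one
    axis (a 1-D simplification of the octree scheme): translations are
    interpolated linearly, rotational poses by quaternion SLERP.

    centers        : (K,)   sorted subvolume centre coordinates
    rotations_deg  : (K, 3) Euler angles (xyz, degrees) per subvolume
    translations   : (K, 3) translation vectors per subvolume
    query_points   : (Q,)   coordinates at which to evaluate the field"""
    rots = Rotation.from_euler('xyz', rotations_deg, degrees=True)
    q = np.clip(query_points, centers[0], centers[-1])  # SLERP needs in-range times
    slerp = Slerp(centers, rots)
    interp_rot = slerp(q)                                # Rotation for each query point
    interp_trans = np.stack([np.interp(q, centers, translations[:, d])
                             for d in range(3)], axis=1)
    return interp_rot, interp_trans
```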
