Similar Literature
20 similar documents retrieved (search time: 312 ms).
1.
A brain image registration algorithm, referred to as RABBIT, is proposed to achieve fast and accurate image registration with the help of an intermediate template generated by a statistical deformation model. The statistical deformation model is built by principal component analysis (PCA) on a set of training samples of brain deformation fields that warp a selected template image to the individual brain samples. The statistical deformation model is capable of characterizing individual brain deformations by a small number of parameters, which is used to rapidly estimate the brain deformation between the template and a new individual brain image. The estimated deformation is then used to warp the template, thus generating an intermediate template close to the individual brain image. Finally, the shape difference between the intermediate template and the individual brain is estimated by an image registration algorithm, e.g., HAMMER. The overall registration between the template and the individual brain image can be achieved by directly combining the deformation fields that warp the template to the intermediate template, and the intermediate template to the individual brain image. The algorithm has been validated for spatial normalization of both simulated and real magnetic resonance imaging (MRI) brain images. Compared with HAMMER, the experimental results demonstrate that the proposed algorithm can achieve over five times speedup, with similar registration accuracy and statistical power in detecting brain atrophy.  相似文献   
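Below is a minimal sketch of the statistical-deformation-model idea described above: PCA over a set of training deformation fields, followed by projection of a rough deformation estimate onto the model to obtain a small parameter vector and a regularized intermediate warp. The field sizes, the number of components, and the random stand-in data are illustrative assumptions, not the RABBIT implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Training deformation fields: N samples, each a (D, H, W, 3) displacement field
N, D, H, W = 20, 16, 16, 16
train_fields = np.random.randn(N, D, H, W, 3)      # stand-in for real training warps

# Build the statistical deformation model by PCA on the flattened fields
X = train_fields.reshape(N, -1)
model = PCA(n_components=5)
model.fit(X)                                       # each field is now described by 5 parameters

# Given a rough deformation estimate for a new subject, project it onto the
# model to get a few parameters, then reconstruct a regularized intermediate warp
rough_estimate = np.random.randn(D, H, W, 3).reshape(1, -1)
params = model.transform(rough_estimate)           # small parameter vector
intermediate_warp = model.inverse_transform(params).reshape(D, H, W, 3)
print(params.shape, intermediate_warp.shape)       # (1, 5) (16, 16, 16, 3)
```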

2.
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.  相似文献   
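As a rough illustration of the similarity measure maximized above, the sketch below computes normalized mutual information (NMI) from a joint intensity histogram using the common definition NMI = (H(A) + H(B)) / H(A, B). The bin count is an assumption, and the paper's combination of current and prior histograms for small subvolumes is not included.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI of two equally shaped image volumes, from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

reference = np.random.rand(32, 32, 32)
floating = reference + 0.05 * np.random.rand(32, 32, 32)   # nearly aligned images
print(normalized_mutual_information(reference, floating))
```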

3.
Rigid point feature registration using mutual information
We have developed a new mutual information-based registration method for matching unlabeled point features. In contrast to earlier mutual information-based registration methods, which estimate the mutual information using image intensity information, our approach uses the point feature location information. A novel aspect of our approach is the emergence of correspondence (between the two sets of features) as a natural by-product of joint density estimation. We have applied this algorithm to the problem of geometric alignment of primate autoradiographs. We also present preliminary results on three-dimensional robust matching of sulci derived from anatomical magnetic resonance images. Finally, we present an experimental comparison between the mutual information approach and other recent approaches which explicitly parameterize feature correspondence.  相似文献   

4.
Image registration aims to find geometric transformations that align images. Most algorithmic and deep learning-based methods solve the registration problem by minimizing a loss function consisting of a similarity metric that compares the aligned images and a regularization term that ensures smoothness of the transformation. Existing similarity metrics such as Euclidean distance or normalized cross-correlation focus on aligning pixel intensity values or correlations, which causes difficulties with low intensity contrast, noise, and ambiguous matching. We propose a semantic similarity metric for image registration that instead focuses on aligning image areas based on semantic correspondence. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised variant, which extracts features with an auto-encoder, and a semi-supervised variant, which uses supplemental segmentation data. We validate the semantic similarity metric using both deep-learning-based and algorithmic image registration methods. Compared to existing methods across four different image modalities and applications, the method achieves consistently high registration accuracy and smooth transformation fields.
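A schematic sketch of the semantic-similarity idea follows: rather than comparing raw intensities, compare feature maps of the warped and fixed images produced by a learned encoder, here via a mean cosine similarity. The random linear "encoder" over local patches is only a stand-in for the auto-encoder or segmentation-supervised features described in the abstract.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patch_features(img, weights, patch=5):
    """Stand-in encoder: a linear map applied to every local patch."""
    windows = sliding_window_view(img, (patch, patch))       # (H-4, W-4, 5, 5)
    flat = windows.reshape(*windows.shape[:2], -1)
    return flat @ weights                                     # (H-4, W-4, F)

def semantic_similarity(warped, fixed, weights):
    fa, fb = patch_features(warped, weights), patch_features(fixed, weights)
    num = np.sum(fa * fb, axis=-1)
    den = np.linalg.norm(fa, axis=-1) * np.linalg.norm(fb, axis=-1) + 1e-8
    return np.mean(num / den)                                 # in [-1, 1]; higher is better

rng = np.random.default_rng(0)
weights = rng.normal(size=(25, 8))                            # "learned" encoder weights
fixed = rng.random((64, 64))
warped = fixed + 0.05 * rng.random((64, 64))                  # nearly aligned images
print(semantic_similarity(warped, fixed, weights))
```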

5.
In the last decade, convolutional neural networks (ConvNets) have been a major focus of research in medical image analysis. However, the performances of ConvNets may be limited by a lack of explicit consideration of the long-range spatial relationships in an image. Recently, Vision Transformer architectures have been proposed to address the shortcomings of ConvNets and have produced state-of-the-art performances in many medical imaging applications. Transformers may be a strong candidate for image registration because their substantially larger receptive field enables a more precise comprehension of the spatial correspondence between moving and fixed images. Here, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. This paper also presents diffeomorphic and Bayesian variants of TransMorph: the diffeomorphic variants ensure the topology-preserving deformations, and the Bayesian variant produces a well-calibrated registration uncertainty estimate. We extensively validated the proposed models using 3D medical images from three applications: inter-patient and atlas-to-patient brain MRI registration and phantom-to-CT registration. The proposed models are evaluated in comparison to a variety of existing registration methods and Transformer architectures. Qualitative and quantitative results demonstrate that the proposed Transformer-based model leads to a substantial performance improvement over the baseline methods, confirming the effectiveness of Transformers for medical image registration.  相似文献   

6.
Neuroimaging studies are increasingly performed in macaque species, including the pig-tailed macaque (Macaca nemestrina). At times experimental questions can be answered by analysis of functional images in individual subjects and reference to a structural image in that subject. However, coregistration of functional brain images across many subjects offers the experimental advantage of enabling voxel-based analysis over multiple subjects and is therefore widely used in human studies. Voxel-based coregistration methods require a high-quality 3D template image. We created such templates, derived from T1-weighted MRI and blood-flow PET images from 12 nemestrina monkeys. We designed the macaque templates to be maximally compatible with the baboon template images described in a companion paper, to facilitate cross-species comparison of functional imaging data. Here we present data showing the reliability and validity of automatic image registration to the template. Alignment of selected internal fiducial points was accurate to within 1.9 mm overall (mean) even across species. The template images, along with copies aligned to the UCLA nemestrina brain atlas, are available on the Internet (purl.org/net/kbmd/n2k) and can be used as targets with any image registration software.  相似文献   

7.
Extensive research has been conducted over the past 20 years on medical image registration, which combines the detailed, important, and complementary information from two or more images by aligning them into a single, more informative image. The nature of the transformation and the domain of the transformation are two important criteria for categorizing registration techniques, as they characterize how objects (and their motion) in the images are handled. This article presents a detailed survey of the registration techniques in both categories, with elaboration on their features, issues, and challenges. The main objective of this work is an investigation of similarity and dissimilarity measures and of performance evaluation. The article also provides reference knowledge in a compact form for researchers and clinicians looking for the proper registration technique for a particular application.

8.
Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensure that the estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because the transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates of landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying it to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased (i.e., our confidence in the registration of images from different individuals increased) for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals.
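The sketch below illustrates the core statistical step described above: treating registration as nonlinear least squares, taking the parameter covariance as sigma^2 (J^T J)^(-1), and converting its diagonal into 95% confidence intervals. The toy one-parameter "registration" (a single 1-D translation) is an assumption made purely to keep the example short; the paper estimates the full 7-parameter similarity transform.

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-5, 5, 200)
subject = np.exp(-(x - 0.7) ** 2) + 0.02 * np.random.randn(x.size)   # noisy shifted profile

def residuals(p):
    # p[0] is a translation; residual = warped template - subject
    return np.exp(-(x - p[0]) ** 2) - subject

fit = least_squares(residuals, x0=[0.0])
dof = x.size - fit.x.size
sigma2 = np.sum(fit.fun ** 2) / dof
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)    # parameter covariance
ci_half_width = 1.96 * np.sqrt(np.diag(cov))         # 95% confidence half-widths
print(f"translation = {fit.x[0]:.3f} +/- {ci_half_width[0]:.3f} (95% CI)")
```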

9.
Deformable image registration (DIR) can be used to track cardiac motion. Conventional DIR algorithms aim to establish a dense and non-linear correspondence between independent pairs of images. They are, nevertheless, computationally intensive and do not consider temporal dependencies to regulate the estimated motion in a cardiac cycle. In this paper, leveraging deep learning methods, we formulate a novel hierarchical probabilistic model, termed DragNet, for fast and reliable spatio-temporal registration in cine cardiac magnetic resonance (CMR) images and for generating synthetic heart motion sequences. DragNet is a variational inference framework, which takes an image from the sequence in combination with the hidden states of a recurrent neural network (RNN) as inputs to an inference network per time step. As part of this framework, we condition the prior probability of the latent variables on the hidden states of the RNN utilised to capture temporal dependencies. We further condition the posterior of the motion field on a latent variable from hierarchy and features from the moving image. Subsequently, the RNN updates the hidden state variables based on the feature maps of the fixed image and the latent variables. Different from traditional methods, DragNet performs registration on unseen sequences in a forward pass, which significantly expedites the registration process. Besides, DragNet enables generating a large number of realistic synthetic image sequences given only one frame, where the corresponding deformations are also retrieved. The probabilistic framework allows for computing spatio-temporal uncertainties in the estimated motion fields. Our results show that DragNet performance is comparable with state-of-the-art methods in terms of registration accuracy, with the advantage of offering analytical pixel-wise motion uncertainty estimation across a cardiac cycle and being a motion generator. We will make our code publicly available.  相似文献   

10.
Longitudinal atlas construction plays an important role in medical image analysis. Given a set of longitudinal images from different subjects, the task of longitudinal atlas construction is to build an atlas sequence which can represent the trend of anatomical changes of the population. The major challenge for longitudinal atlas construction is how to effectively incorporate both the subject-specific information and population information to build the unbiased atlases. In this paper, a novel groupwise longitudinal atlas construction framework is proposed to address this challenge, and the main contributions of the proposed framework lie in the following aspects: (1) The subject-specific longitudinal information is captured by building the growth model for each subject. (2) The longitudinal atlas sequence is constructed by performing groupwise registration among all the subject image sequences, and only one transformation is needed to transform each subject's image sequence to the atlas space. The constructed longitudinal atlases are unbiased and no explicit template is assumed. (3) The proposed method is general, where the number of longitudinal images of each subject and the time points at which they are taken can be different. The proposed method is extensively evaluated on two longitudinal databases, namely the BLSA and ADNI databases, to construct the longitudinal atlas sequence. It is also compared with a state-of-the-art longitudinal atlas construction algorithm based on kernel regression on the temporal domain. Experimental results demonstrate that the proposed method consistently achieves higher registration accuracies and more consistent spatial-temporal correspondences than the compared method on both databases.  相似文献   

11.
This paper describes the design, implementation and preliminary results of a technique for creating a comprehensive probabilistic atlas of the human brain based on high-dimensional vector field transformations. The goal of the atlas is to detect and quantify distributed patterns of deviation from normal anatomy, in a 3-D brain image from any given subject. The algorithm analyzes a reference population of normal scans and automatically generates color-coded probability maps of the anatomy of new subjects. Given a 3-D brain image of a new subject, the algorithm calculates a set of high-dimensional volumetric maps (with typically 384² × 256 × 3 ≈ 10⁸ degrees of freedom) elastically deforming this scan into structural correspondence with other scans, selected one by one from an anatomic image database. The family of volumetric warps thus constructed encodes statistical properties and directional biases of local anatomical variation throughout the architecture of the brain. A probability space of random transformations, based on the theory of anisotropic Gaussian random fields, is then developed to reflect the observed variability in stereotaxic space of the points whose correspondences are found by the warping algorithm. A complete system of 384² × 256 probability density functions is computed, yielding confidence limits in stereotaxic space for the location of every point represented in the 3-D image lattice of the new subject's brain. Color-coded probability maps are generated, densely defined throughout the anatomy of the new subject. These indicate locally the probability of each anatomic point being unusually situated, given the distributions of corresponding points in the scans of normal subjects. 3-D MRI and high-resolution cryosection volumes are analyzed from subjects with metastatic tumors and Alzheimer's disease. Gradual variations and continuous deformations of the underlying anatomy are simulated and their dynamic effects on regional probability maps are animated in video format (on the accompanying CD-ROM). Applications of the deformable probabilistic atlas include the transfer of multi-subject 3-D functional, vascular and histologic maps onto a single anatomic template, the mapping of 3-D atlases onto the scans of new subjects, and the rapid detection, quantification and mapping of local shape changes in 3-D medical images in disease and during normal or abnormal growth and development.
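A toy sketch of the per-point probability idea follows: model the distribution of corresponding point locations across a reference population as a multivariate Gaussian, then score how unusually situated the same point is in a new subject via a chi-squared tail probability of its Mahalanobis distance. The sample size, covariance handling, and reduction of the anisotropic Gaussian random-field model to a single point are simplifying assumptions.

```python
import numpy as np
from scipy.stats import chi2

# Locations (in stereotaxic space) of one anatomical point in 30 normal scans
reference_points = np.random.multivariate_normal([10.0, -22.0, 35.0],
                                                  np.diag([1.0, 2.0, 1.5]), size=30)
mean = reference_points.mean(axis=0)
cov = np.cov(reference_points, rowvar=False)

new_subject_point = np.array([12.5, -19.0, 37.0])
d = new_subject_point - mean
mahalanobis_sq = d @ np.linalg.inv(cov) @ d
p_normal = chi2.sf(mahalanobis_sq, df=3)    # small value => unusually situated point
print(f"tail probability under the normal-anatomy model: {p_normal:.3f}")
```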

12.
In this paper, a novel non-rigid registration method is proposed for registering the Talairach-Tournoux brain atlas with MRI images and with the Schaltenbrand-Wahren brain atlas. A finite element method based on metal-forming principles and formulated for the large-deformation problem is used to find the local deformation; the finite element equations are governed by constraints in the form of displacements derived from the correspondence relationship between extracted feature points. Detectable substructures, such as the cortical surface, ventricles, and corpus callosum, are first extracted from MRI, forming feature points that are classified into different groups. The softassign method is used to establish the correspondence relationship between feature points within each group and to obtain the global transformation concurrently. The displacement constraints are then derived from the correspondence relationship, and the finite element equations are reorganized and simplified by integrating these displacement constraints into the system equations. Our method not only matches the model to the data efficiently, but also decreases the degrees of freedom of the system and consequently reduces the computational cost. The method is illustrated by matching the Talairach-Tournoux brain atlas to normal and pathological MRI data and to the Schaltenbrand-Wahren brain atlas. We quantitatively compare the results of the force-assignment-based method and the proposed method; the proposed method yields more accurate results in a fraction of the time taken by the previous method.

13.
Deformable image registration seeks anatomically plausible results while improving a model's registration accuracy by minimizing the difference between a pair of fixed and moving images. Since many anatomical features are closely related to each other, leveraging supervision from auxiliary tasks (such as supervised anatomical segmentation) has the potential to enhance the realism of the warped images after registration. In this work, we employ a multi-task learning framework that formulates registration and segmentation as a joint problem, in which anatomical constraints from auxiliary supervised segmentation enhance the realism of the predicted images. First, we propose a Cross-Task Attention Block to fuse the high-level features from the registration and segmentation networks. With the help of an initial anatomical segmentation, the registration network benefits from learning the task-shared feature correlation and rapidly focuses on the parts that need deformation. In addition, the anatomical segmentation discrepancy between the ground-truth fixed annotations and the predicted segmentation maps of the initially warped images is integrated into the loss function to guide the convergence of the registration network. Ideally, a good deformation field should minimize the loss functions of both registration and segmentation. The voxel-wise anatomical constraint inferred from segmentation helps the registration network reach a global optimum for both deformation and segmentation learning. Both networks can be employed independently during the testing phase, so that only the registration output is predicted when segmentation labels are unavailable. Qualitative and quantitative results indicate that, within our experimental setup, the proposed methodology significantly outperforms previous state-of-the-art approaches on inter-patient brain MRI registration and on pre- and intra-operative uterus MRI registration, achieving state-of-the-art Dice similarity coefficients (DSC) of 0.755 and 0.731 for the two tasks, respectively (increases of 0.8% and 0.5%).
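A compact sketch of the joint-loss idea follows: an image similarity term (here mean squared error) plus an anatomical term comparing the fixed ground-truth labels with the segmentation of the warped image (a soft Dice loss). The network architecture and attention block are outside this sketch, and the 0.5 weighting is an assumed value.

```python
import numpy as np

def soft_dice(seg_a, seg_b, eps=1e-6):
    """Soft Dice overlap between two (possibly probabilistic) label maps."""
    inter = np.sum(seg_a * seg_b)
    return (2.0 * inter + eps) / (np.sum(seg_a) + np.sum(seg_b) + eps)

def joint_registration_loss(warped_img, fixed_img, warped_seg, fixed_seg, lam=0.5):
    similarity = np.mean((warped_img - fixed_img) ** 2)    # image term
    anatomy = 1.0 - soft_dice(warped_seg, fixed_seg)       # segmentation (anatomical) term
    return similarity + lam * anatomy

warped = np.random.rand(64, 64)
fixed = np.random.rand(64, 64)
warped_seg = (warped > 0.5).astype(float)
fixed_seg = (fixed > 0.5).astype(float)
print(joint_registration_loss(warped, fixed, warped_seg, fixed_seg))
```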

14.
Medical Image Analysis, 2014, 18(6): 914-926
Correlative microscopy is a methodology that combines the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies for the same biological specimen. In this paper, we propose an image registration method for correlative microscopy, which is challenging due to the distinct appearance of biological structures when imaged with different modalities. Our method is based on image analogies and makes it possible to transform images of a given modality into the appearance space of another modality. Hence, the registration between two different types of microscopy images reduces to a mono-modality image registration problem. We use a sparse representation model to obtain image analogies: the method uses corresponding training image patches from the two imaging modalities to learn a dictionary capturing appearance relations. We test our approach on backscattered electron (BSE) scanning electron microscopy (SEM)/confocal and transmission electron microscopy (TEM)/confocal images. We perform rigid, affine, and deformable registration via B-splines and show improvements over direct registration using both mutual information and sum-of-squared-differences similarity measures to account for differences in image appearance.
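A toy sketch of the image-analogy idea: learn a sparse code from paired patches of the two modalities, then synthesize the appearance of modality B from a new modality-A patch so that registration becomes mono-modal. The shared-code construction, synthetic patches, and dictionary sizes below are simplifications, not the paper's coupled dictionary model.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(3)
n_patches, patch_dim, n_atoms = 200, 25, 16
codes_true = rng.laplace(0, 1, (n_patches, n_atoms))
dict_a_true = rng.normal(size=(n_atoms, patch_dim))
dict_b_true = rng.normal(size=(n_atoms, patch_dim))
patches_a = codes_true @ dict_a_true          # paired training patches, modality A
patches_b = codes_true @ dict_b_true          # paired training patches, modality B

# Learn a dictionary for modality A, then fit a modality-B dictionary to the same codes
dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="lasso_lars",
                        random_state=0, max_iter=20)
codes_a = dl.fit_transform(patches_a)
dict_b = np.linalg.lstsq(codes_a, patches_b, rcond=None)[0]

# "Translate" a new modality-A patch into the appearance space of modality B
new_patch_a = codes_true[:1] @ dict_a_true
new_code = sparse_encode(new_patch_a, dl.components_, algorithm="lasso_lars")
synthesized_b = new_code @ dict_b
print(synthesized_b.shape)                    # (1, 25)
```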

15.
In the past few years, convolutional neural networks (CNNs) have proven powerful in extracting image features crucial for medical image registration. However, challenging applications and recent advances in computer vision suggest that CNNs are limited in their ability to understand the spatial correspondence between features, which is at the core of image registration. The issue is further exacerbated in multi-modal image registration, where the appearances of the input images can differ significantly. This paper presents a novel cross-modal attention mechanism for correlating features extracted from the multi-modal input images and mapping this correlation to the registration transformation. To efficiently train the developed network, a contrastive learning-based pre-training method is also proposed to aid the network in extracting high-level features across the input modalities for the subsequent cross-modal attention learning. We validated the proposed method on transrectal ultrasound (TRUS) to magnetic resonance (MR) registration, a clinically important procedure that benefits prostate cancer biopsy. Our experimental results demonstrate that for MR-TRUS registration, a deep neural network embedded with the cross-modal attention block outperforms other advanced CNN-based networks ten times its size. We also incorporated visualization techniques to improve the interpretability of our network, which provides insight into deep learning-based image registration methods. The source code of our work is available at https://github.com/DIAL-RPI/Attention-Reg.
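A minimal numpy sketch of the cross-modal attention idea follows: features from one modality form the queries, features from the other modality form the keys and values, and softmax attention produces correlated features that a registration head could map to transformation parameters. The feature dimensions and random projection weights are illustrative assumptions, not the published network.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(feat_a, feat_b, dim=32, seed=0):
    """feat_a: (Na, C) query features; feat_b: (Nb, C) key/value features."""
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.normal(0, 0.02, (feat_a.shape[1], dim)) for _ in range(3))
    q, k, v = feat_a @ wq, feat_b @ wk, feat_b @ wv
    attn = softmax(q @ k.T / np.sqrt(dim))     # (Na, Nb) soft correspondence weights
    return attn @ v                            # correlated cross-modal features

mr_features = np.random.rand(100, 64)          # e.g. flattened MR feature map
trus_features = np.random.rand(100, 64)        # e.g. flattened TRUS feature map
print(cross_modal_attention(mr_features, trus_features).shape)   # (100, 32)
```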

16.
Image registration is an important procedure for medical diagnosis. Since the large inter-site retrospective validation study led by Fitzpatrick at Vanderbilt University, voxel-based methods and more specifically mutual information-based registration methods (see for instance [IEEE Trans. Med. Imag. 22 (8) (2003) 986] for a review on these methods) have been regarded as the method of choice for rigid-body intra-subject registration problems. In this study we propose a method that is based on the Iterative Closest Point algorithm and a pre-computed closest point map obtained with a slight modification of the fast marching method proposed by Sethian. Pre-computing the closest point map speeds up the process because at each iteration point correspondence can be established by table lookup. We also show that because the closest point map is defined on a regular grid it introduces a registration error and we propose an interpolation scheme that addresses this issue. The method has been tested both on synthetic and real images, and registration results have been assessed quantitatively using the data set provided by the Retrospective Registration Evaluation Project. For these volumes, MR and CT head surfaces were extracted automatically using a level-set technique. Results show that on these data sets this registration method leads to accuracy numbers that are comparable to those obtained with voxel-based methods.  相似文献   
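Below is a 2-D toy sketch of the key speed-up described above: precompute, once, a map giving for every grid location the index of the closest reference point, so each ICP iteration obtains correspondences by table lookup rather than nearest-neighbour search. SciPy's Euclidean distance transform stands in for the fast marching method, and the point set, grid size, and Kabsch rigid fit are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

size = 128
rng = np.random.default_rng(0)
ref_points = rng.integers(20, 108, size=(40, 2)).astype(float)   # reference "surface" points

occupancy = np.ones((size, size), dtype=bool)
occupancy[ref_points[:, 0].astype(int), ref_points[:, 1].astype(int)] = False

# Pre-computed closest-point map: for every pixel, coordinates of its nearest reference point
_, cp_index = distance_transform_edt(occupancy, return_indices=True)

def closest_ref(points):
    ij = np.clip(np.round(points).astype(int), 0, size - 1)
    return np.stack([cp_index[0][ij[:, 0], ij[:, 1]],
                     cp_index[1][ij[:, 0], ij[:, 1]]], axis=1).astype(float)

def rigid_fit(src, dst):
    """Kabsch estimate of the rotation + translation mapping src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    rot = (u @ vt).T
    return rot, dc - rot @ sc

moving = ref_points + np.array([4.0, -3.0])        # misaligned copy to register back
for _ in range(10):                                # ICP with table-lookup correspondences
    rot, t = rigid_fit(moving, closest_ref(moving))
    moving = moving @ rot.T + t
print(np.abs(moving - ref_points).max())           # small residual after convergence
```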

17.
A constrained non-rigid registration (CNRR) algorithm for use in prostate image-guided adaptive radiotherapy is presented in a coherent mathematical framework. The registration algorithm is based on a global rigid transformation combined with a series of local injective non-rigid multi-resolution cubic B-spline Free Form Deformation (FFD) transformations. The control points of the FFD are used to non-rigidly constrain the transformation to the prostate, rectum, and bladder. As well, the control points are used to rigidly constrain the transformation to the estimated position of the pelvis, left femur, and right femur. The algorithm was tested with both 3D conformal radiotherapy (3DCRT) and intensity-modulated radiotherapy (IMRT) dose plan data sets. The 3DCRT dose plan set consisted of 10 fan-beam CT (FBCT) treatment-day images acquired from four different patients. The IMRT dose plan set consisted of 32 cone-beam CT (CBCT) treatment-day images acquired from 4 different patients. The CNRR was tested with different combinations of anatomical constraints and each test significantly outperformed both rigid and non-rigid registration at aligning constrained bones and critical organs. The CNRR results were used to adapt the dose plans to account for patient positioning errors as well as inter-day bone motion and intrinsic organ deformation. Each adapted dose plan improved performance by lowering radiation distribution to the rectum and bladder while increasing or maintaining radiation distribution to the prostate.  相似文献   

18.
Whole-body computed tomography (CT) image registration is important for cancer diagnosis, therapy planning and treatment. Such registration requires accounting for large differences between source and target images caused by deformations of soft organs/tissues and articulated motion of skeletal structures. The registration algorithms relying solely on image processing methods exhibit deficiencies in accounting for such deformations and motion. We propose to predict the deformations and movements of body organs/tissues and skeletal structures for whole-body CT image registration using patient-specific non-linear biomechanical modelling. Unlike the conventional biomechanical modelling, our approach for building the biomechanical models does not require time-consuming segmentation of CT scans to divide the whole body into non-overlapping constituents with different material properties. Instead, a Fuzzy C-Means (FCM) algorithm is used for tissue classification to assign the constitutive properties automatically at integration points of the computation grid. We use only very simple segmentation of the spine when determining vertebrae displacements to define loading for biomechanical models. We demonstrate the feasibility and accuracy of our approach on CT images of seven patients suffering from cancer and aortic disease. The results confirm that accurate whole-body CT image registration can be achieved using a patient-specific non-linear biomechanical model constructed without time-consuming segmentation of the whole-body images.  相似文献   
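A compact sketch of the Fuzzy C-Means step used above to assign material properties without whole-body segmentation: cluster voxel intensities into a few fuzzy classes, then use membership-weighted properties at each voxel (or integration point). The 1-D intensity model, the class count, and the stiffness values are assumptions for illustration only.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=50, seed=0):
    """1-D FCM on intensities x; returns memberships (N, c) and cluster centers (c,)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, c))
    u /= u.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return u, centers

intensities = np.concatenate([np.random.normal(mu, 10, 500) for mu in (40, 100, 220)])
memberships, centers = fuzzy_c_means(intensities)

# Membership-weighted stiffness per voxel (class-to-property mapping is illustrative)
stiffness_per_class = np.array([1.0, 5.0, 50.0])
voxel_stiffness = memberships @ stiffness_per_class
print(centers, voxel_stiffness[:5])
```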

19.
Wu G, Jia H, Wang Q, Shen D. NeuroImage, 2011, 56(4): 1968-1981
Groupwise registration has become more and more popular due to its attractiveness for unbiased analysis of population data. One of the most popular approaches for groupwise registration is to iteratively calculate the group mean image and then register all subject images towards the latest estimated group mean image. However, its performance might be undermined by the fuzzy mean image estimated in the very beginning of groupwise registration procedure, because all subject images are far from being well-aligned at that moment. In this paper, we first point out the significance of always keeping the group mean image sharp and clear throughout the entire groupwise registration procedure, which is intuitively important but has not been explored in the literature yet. To achieve this, we resort to developing the robust mean-image estimator by the adaptive weighting strategy, where the weights are adaptive across not only the individual subject images but also all spatial locations in the image domain. On the other hand, we notice that some subjects might have large anatomical variations from the group mean image, which challenges most of the state-of-the-art registration algorithms. To ensure good registration results in each iteration, we explore the manifold of subject images and build a minimal spanning tree (MST) with the group mean image as the root of the MST. Therefore, each subject image is only registered to its parent node often with similar shapes, and its overall transformation to the group mean image space is obtained by concatenating all deformations along the paths connecting itself to the root of the MST (the group mean image). As a result, all the subjects will be well aligned to the group mean image adaptively. Our method has been evaluated in both real and simulated datasets. In all experiments, our method outperforms the conventional algorithm which generally produces a fuzzy group mean image throughout the entire groupwise registration.  相似文献   
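The sketch below illustrates the tree-organization idea: build a minimal spanning tree over pairwise image dissimilarities, with the group mean as the root, so each subject is registered only to a similar parent and its overall warp to the mean is obtained by composing deformations along the path to the root. The dissimilarity measure (mean squared difference) and the toy images are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

rng = np.random.default_rng(1)
images = [rng.random((32, 32)) for _ in range(6)]    # node 0 plays the group mean image
n = len(images)

dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = np.mean((images[i] - images[j]) ** 2)

mst = minimum_spanning_tree(dist).toarray()
adjacency = mst + mst.T                              # undirected MST edges
_, parents = breadth_first_order(adjacency, i_start=0, directed=False)

def path_to_root(node):
    """Registration path: node -> parent -> ... -> group mean (node 0)."""
    path = [node]
    while parents[path[-1]] >= 0:
        path.append(parents[path[-1]])
    return path

for subject in range(1, n):
    print(subject, "->", path_to_root(subject))
```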

20.
We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario.  相似文献   
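A brief sketch of the sparse-to-dense step that distinguishes TPS-HAMMER follows: displacements known only at matched landmarks are interpolated to every grid point with a thin-plate spline, here via SciPy's RBFInterpolator (a stand-in, not the paper's implementation). The landmark count and the 2-D grid are assumptions made to keep the example small.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
landmarks = rng.uniform(0, 63, size=(25, 2))          # matched landmark locations
landmark_disp = rng.normal(0, 2.0, size=(25, 2))      # estimated landmark displacements

tps = RBFInterpolator(landmarks, landmark_disp, kernel="thin_plate_spline")

# Dense deformation field on a 64 x 64 grid from the sparse correspondences
yy, xx = np.mgrid[0:64, 0:64]
grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
dense_field = tps(grid).reshape(64, 64, 2)
print(dense_field.shape)                               # (64, 64, 2)
```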
