Similar Articles
1.
2.
Image registration aims to find geometric transformations that align images. Most algorithmic and deep-learning-based methods solve the registration problem by minimizing a loss function consisting of a similarity metric comparing the aligned images and a regularization term ensuring smoothness of the transformation. Existing similarity metrics such as Euclidean distance or normalized cross-correlation focus on aligning pixel intensity values or correlations, which causes difficulties under low intensity contrast, noise, and ambiguous matching. We propose a semantic similarity metric for image registration that instead focuses on aligning image areas based on semantic correspondence. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised approach that extracts features with an auto-encoder and a semi-supervised approach that uses supplemental segmentation data. We validate the semantic similarity metric using both deep-learning-based and algorithmic image registration methods. Compared to existing methods across four different image modalities and applications, the method achieves consistently high registration accuracy and smooth transformation fields.

3.
Medical Image Analysis, 2014, 18(6): 914-926
Correlative microscopy is a methodology that combines the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies for the same biological specimen. In this paper, we propose an image registration method for correlative microscopy, which is challenging due to the distinct appearance of biological structures when imaged with different modalities. Our method is based on image analogies and transforms images of a given modality into the appearance space of another modality. Hence, the registration between two different types of microscopy images can be reduced to a mono-modality image registration. We use a sparse representation model to obtain image analogies. The method uses corresponding image training patches from two different imaging modalities to learn a dictionary capturing appearance relations. We test our approach on backscattered electron (BSE) scanning electron microscopy (SEM)/confocal and transmission electron microscopy (TEM)/confocal images. We perform rigid, affine, and deformable registration via B-splines and show improvements over direct registration using both mutual information and sum of squared differences similarity measures to account for differences in image appearance.

4.
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality-independent features or information-theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single-channel registrations using the same algorithm with mutual information.
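A minimal sketch of how a two-channel cost might combine mono-modal SSD over the real image pair and a synthesized proxy pair. The function names, the equal-weight default, and the specific channel pairing are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences: a mono-modal measure."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sum((a - b) ** 2))

def two_channel_cost(fixed, warped_moving, fixed_proxy, warped_moving_proxy, w=0.5):
    """Weighted sum of SSD over two channels: the original-modality pair
    and the synthesized proxy pair (hypothetical weighting scheme)."""
    return (w * ssd(fixed, warped_moving)
            + (1.0 - w) * ssd(fixed_proxy, warped_moving_proxy))
```

A deformable optimizer would minimize this combined cost over the transformation parameters, exactly as it would a single-channel SSD.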

5.
6.
Purpose: We present a system that supports deformable image registration guided by a haptic device. Methods: The haptic device is tied to a block matching method in which a set of uniformly distributed control points determines the block positions. Each control point constitutes a particle in a mass-spring grid that limits the space of allowed movements to elastic movements. Control points are manipulated by the haptic device, and the negative gradient of the similarity metric over the corresponding block is rendered as a force on the haptic device, guiding the user to a minimum of the optimization landscape. Fast update of forces was achieved by exploiting the GPU for computation of the similarity metric and for interpolation of the deformation field. Results: In a user study on synthetic images, we show that haptic-guided registration is faster and more accurate than purely visual alignment. We also demonstrate the feasibility of applying the system to medical images through a comparison with an automatic block matching algorithm. A radiologist using the haptic registration system achieved faster registration times and better registration results than the automatic block matching algorithm when using identical grid and block sizes. Conclusions: Possible applications of the system are refinement of registration results from automatic registration methods and construction of the initial state used in automatic deformable registration methods.
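The force rendered on the haptic device is described as the negative gradient of the block similarity metric. A toy sketch of that idea with SSD and central finite differences follows; the function names, 2D setting, and block parameterization are illustrative assumptions.

```python
import numpy as np

def block_ssd(fixed, moving, fx, fy, mx, my, r):
    """SSD between the fixed block centred at (fx, fy) and the moving
    block centred at (mx, my); r is the block half-width."""
    f = fixed[fy - r:fy + r + 1, fx - r:fx + r + 1]
    m = moving[my - r:my + r + 1, mx - r:mx + r + 1]
    return float(((f - m) ** 2).sum())

def haptic_force(fixed, moving, fx, fy, mx, my, r=1, h=1):
    """Negative central-difference gradient of the block SSD with respect
    to the moving block position: the force pushes toward better alignment."""
    gx = (block_ssd(fixed, moving, fx, fy, mx + h, my, r)
          - block_ssd(fixed, moving, fx, fy, mx - h, my, r)) / (2 * h)
    gy = (block_ssd(fixed, moving, fx, fy, mx, my + h, r)
          - block_ssd(fixed, moving, fx, fy, mx, my - h, r)) / (2 * h)
    return (-gx, -gy)
```

At the aligned position the gradient vanishes, so the rendered force is zero, which is what lets the user feel the minimum of the optimization landscape.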

7.
Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration has been supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to make training ConvNets for image registration more convenient, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework, ConvNets are trained for image registration by exploiting image similarity, analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. For registration of cardiac cine MRI and chest CT, we show that the performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.
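Unsupervised registration training of this kind typically minimizes a loss of the form dissimilarity plus a smoothness penalty on the predicted displacement field. A minimal NumPy sketch follows, using plain SSD for brevity; the function names, 2D field layout, and the weighting `lam` are illustrative assumptions, not the DLIR loss.

```python
import numpy as np

def smoothness_penalty(disp):
    """Mean squared forward difference of a 2D displacement field.

    disp: array of shape (2, H, W) holding the x/y displacement components;
    penalizes spatial variation of the field, not its magnitude.
    """
    dy = np.diff(disp, axis=1)
    dx = np.diff(disp, axis=2)
    return float((dy ** 2).mean() + (dx ** 2).mean())

def unsupervised_loss(fixed, warped_moving, disp, lam=0.1):
    """Image dissimilarity (mean SSD here) plus weighted smoothness term."""
    f = np.asarray(fixed, dtype=float)
    m = np.asarray(warped_moving, dtype=float)
    return float(((f - m) ** 2).mean()) + lam * smoothness_penalty(disp)
```

In an actual training loop the warped moving image would come from a differentiable spatial transformer applied to the network's predicted field.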

8.
A registration method for motion estimation in dynamic medical imaging data is proposed. Registration is performed directly on the dynamic image, thus avoiding a bias towards a specifically chosen reference time point. Both spatial and temporal smoothness of the transformations are taken into account. Optionally, cyclic motion can be imposed, which can be useful for visualization (viewing the segmentation sequentially) or model-building purposes. The method is based on a 3D (2D+time) or 4D (3D+time) free-form B-spline deformation model, a similarity metric that minimizes the intensity variances over time, and constrained optimization using a stochastic gradient descent method with adaptive step size estimation. The method was quantitatively compared with existing registration techniques on synthetic data and on 3D+t computed tomography data of the lungs. This showed subvoxel accuracy while delivering smooth transformations and high consistency of the registration results. Furthermore, the accuracy of semi-automatic derivation of left ventricular volume curves from 3D+t computed tomography angiography data of the heart was evaluated. On average, the deviation from the curves derived from the manual annotations was approximately 3%. The potential of the method for other imaging modalities was shown on 2D+t ultrasound and 2D+t magnetic resonance images. The software is publicly available as an extension to the registration package elastix.
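The groupwise metric described above, intensity variance over time, is simple to state: after warping, each voxel's intensity trace across time points should be constant, so its variance should be zero. A minimal sketch, assuming a (T, ...) frame stack; the function name is illustrative.

```python
import numpy as np

def variance_over_time(frames):
    """Groupwise metric: mean over voxels of the intensity variance
    across time. Perfectly aligned, noise-free frames give zero.

    frames: array of shape (T, ...) with one image per time point.
    """
    frames = np.asarray(frames, dtype=float)
    return float(frames.var(axis=0).mean())
```

Because the metric treats all time points symmetrically, no single frame serves as the reference, which is the stated advantage over pairwise registration to a chosen time point.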

9.
A great deal of research has been conducted over the past 20 years in the area of medical image registration, with the goal of obtaining detailed, important, and complementary information from two or more images and aligning them into a single, more informative image. The nature of the transformation and the domain of the transformation are two important criteria for categorizing medical image registration techniques, dealing with the character of object motion in images. This article presents a detailed survey of the registration techniques belonging to both categories, with detailed elaboration on their features, issues, and challenges. An investigation of similarity and dissimilarity measures and of performance evaluation is the main objective of this work. This article also provides reference knowledge in a compact form for researchers and clinicians looking for the proper registration technique for a particular application.

10.
In the past few years, convolutional neural networks (CNNs) have proven powerful in extracting image features crucial for medical image registration. However, challenging applications and recent advances in computer vision suggest that CNNs are limited in their ability to understand the spatial correspondence between features, which is at the core of image registration. The issue is further exacerbated in multi-modal image registration, where the appearances of the input images can differ significantly. This paper presents a novel cross-modal attention mechanism for correlating features extracted from the multi-modal input images and mapping such correlation to the image registration transformation. To efficiently train the developed network, a contrastive learning-based pre-training method is also proposed to aid the network in extracting high-level features across the input modalities for the subsequent cross-modal attention learning. We validated the proposed method on transrectal ultrasound (TRUS) to magnetic resonance (MR) registration, a clinically important procedure that benefits prostate cancer biopsy. Our experimental results demonstrate that for MR-TRUS registration, a deep neural network embedded with the cross-modal attention block outperforms other advanced CNN-based networks with ten times its size. We also incorporated visualization techniques to improve the interpretability of our network, which helps bring insight into deep-learning-based image registration methods. The source code of our work is available at https://github.com/DIAL-RPI/Attention-Reg.
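As a generic illustration of correlating features across modalities, here is standard scaled dot-product attention in NumPy: queries from one modality attend over keys/values from the other. This is a textbook attention sketch, not the paper's cross-modal attention block; all names and shapes are assumptions.

```python
import numpy as np

def cross_modal_attention(q_feats, k_feats, v_feats):
    """Scaled dot-product attention across modalities.

    q_feats: (N, d) features from modality A (queries);
    k_feats, v_feats: (M, d) features from modality B.
    Returns (N, d): modality-B information re-expressed at A's positions,
    with the (N, M) score matrix acting as a soft correspondence map.
    """
    d = q_feats.shape[-1]
    scores = q_feats @ k_feats.T / np.sqrt(d)      # pairwise correlation
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over B positions
    return attn @ v_feats
```

When features from the two modalities match strongly at corresponding positions, the softmax concentrates and the output approaches a hard lookup of the matching value vectors.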

11.
We present a groupwise US-to-CT registration algorithm for guiding percutaneous spinal interventions. In addition, we introduce a comprehensive validation scheme that accounts for changes in the curvature of the spine between preoperative and intraoperative imaging. In our registration methodology, each vertebra in CT is treated as a sub-volume and transformed individually. A biomechanical model is used to constrain the displacement of the vertebrae relative to one another. The sub-volumes are then reconstructed into a single volume. During each iteration of registration, a US image is simulated from the reconstructed CT volume and an intensity-based similarity metric is calculated against the real US image. Validation studies are performed on CT and US images from a sheep cadaver, on five patient-based phantoms designed to preserve realistic curvatures of the spine, and on a sixth patient-based phantom where the curvature of the spine is changed between preoperative and intraoperative imaging. For datasets where the spine curvature was artificially perturbed between the two imaging modalities, the proposed methodology was able to register initial misalignments of up to 20 mm with a success rate of 95%. For the phantom with a physical change in the curvature of the spine introduced between the US and CT datasets, the registration success rate was 98.5%. Finally, the registration success rate for the sheep cadaver with soft-tissue information was 87%. The results demonstrate that our algorithm allows for robust registration of US and CT datasets, regardless of a change in the patient's pose between preoperative and intraoperative image acquisitions.

12.
We describe a new algorithm for non-rigid registration capable of estimating a constrained dense displacement field from multi-modal image data. We applied this algorithm to capture non-rigid deformation between digital images of histological slides and digital flat-bed scanned images of cryotomed sections of the larynx, and carried out validation experiments to measure the effectiveness of the algorithm. The implementation was carried out by extending the open-source Insight ToolKit software. In diagnostic imaging of cancer of the larynx, imaging modalities sensitive to both anatomy (such as MRI and CT) and function (PET) are valuable. However, these modalities differ in their capability to discriminate the margins of tumor. Gold standard tumor margins can be obtained from histological images from cryotomed sections of the larynx. Unfortunately, the process of freezing, fixation, cryotoming and staining the tissue to create histological images introduces non-rigid deformations and significant contrast changes. We demonstrate that the non-rigid registration algorithm we present is able to capture these deformations and the algorithm allows us to align histological images with scanned images of the larynx. Our non-rigid registration algorithm constructs a deformation field to warp one image onto another. The algorithm measures image similarity using a mutual information similarity criterion, and avoids spurious deformations due to noise by constraining the estimated deformation field with a linear elastic regularization term. The finite element method is used to represent the deformation field, and our implementation enables us to assign inhomogeneous material characteristics so that hard regions resist internal deformation whereas soft regions are more pliant. A gradient descent optimization strategy is used and this has enabled rapid and accurate convergence to the desired estimate of the deformation field. A further acceleration in speed without cost of accuracy is achieved by using an adaptive mesh refinement strategy.

13.
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and the target scan, which is often problematic in medical imaging, in particular when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registration and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion; hence, the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain from exploiting within-target intensity consistency and integrating registration into label fusion.
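For readers unfamiliar with the label fusion step itself, here is the simplest baseline, per-voxel majority vote over propagated atlas labels. This is the standard baseline that the paper's generative model improves upon, not the paper's method; the interface is an assumption.

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Baseline label fusion: per-voxel majority vote over atlas labels
    already propagated (warped) into the target space.

    propagated_labels: integer array of shape (n_atlases, ...).
    Returns the winning class label per voxel.
    """
    labels = np.asarray(propagated_labels)
    n_classes = int(labels.max()) + 1
    # Count, per voxel, how many atlases vote for each class.
    votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)
```

Weighted variants replace the raw counts with per-atlas weights derived from local image similarity, which is exactly the intensity dependence the abstract identifies as problematic across modalities.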

14.
Current minimally invasive techniques for beating-heart surgery are associated with three major limitations: the shortage of realistic and safe training methods, the process of selecting port locations for optimal target coverage from X-rays and angiograms, and the sole use of the endoscope for instrument navigation in a dynamic and confined 3D environment. To supplement current surgery training, planning, and guidance methods, we continue to develop our Virtual Cardiac Surgery Planning environment (VCSP), a virtual-reality, patient-specific thoracic cavity model derived from 3D pre-procedural images. In this work, we create and validate dynamic models of the heart and its components. A static model is first generated by segmenting one of the image frames in a given 4D data set. The dynamics of this model are then extracted from the remaining image frames using a non-linear, intensity-based registration algorithm with a choice of six different similarity metrics. The algorithm is validated on an artificial CT image set created using an excised porcine heart, on CT images of canine subjects, and on MR images of human volunteers. We found that with the appropriate choice of similarity metric, our algorithm extracts the motion of the epicardial surface in CT images, or of the myocardium, right atrium, right ventricle, aorta, left atrium, pulmonary arteries, vena cava, and epicardial surface in MR images, with a root mean square error in the 1 mm range. These results indicate that our method of modeling the motion of the heart is easily adaptable and sufficiently accurate to meet the requirements for reliable cardiac surgery training, planning, and guidance.

15.
Objective: As surgery guided by medical images increasingly requires three-dimensional information, we establish a 2D/3D medical image registration algorithm that introduces 3D information on top of 2D planning. Methods: First, a digitally reconstructed radiograph (DRR) is generated by projecting the acquired pelvic volume data set, and the acquired X-ray image is corrected for distortion; the DRR and the distortion-free X-ray image are then registered using a gray-level similarity measure. Results: For initial relative displacements of less than 20 mm and rotations of less than 5°, the registration error was below 1.47%, a good registration result. Conclusion: The proposed gray-level registration technique can register intraoperative 2D medical images with preoperative 3D medical images, enabling combined intraoperative 2D/3D surgical planning.
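The DRR generation step can be illustrated with a toy parallel projection: summing attenuation along rays through the CT volume. Real DRRs use perspective ray casting through the X-ray source geometry; this parallel sketch, with an assumed function name, only conveys the idea.

```python
import numpy as np

def parallel_drr(volume, axis=0):
    """Toy digitally reconstructed radiograph: integrate (sum) the CT
    volume along parallel rays down one axis, producing a 2D projection
    comparable to an X-ray image."""
    return np.asarray(volume, dtype=float).sum(axis=axis)
```

During 2D/3D registration, the volume pose is iteratively adjusted and a fresh DRR is rendered and compared against the real X-ray with the gray-level similarity measure.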

16.
Synthesized medical images have several important applications. For instance, they can be used as an intermediary in cross-modality image registration or as augmented training samples to boost the generalization capability of a classifier. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic-looking 2D/3D images without needing paired training data; 2) ensuring consistent anatomical structures, which could be changed by geometric distortion in cross-modality synthesis; and 3) more importantly, improving volume segmentation by using synthetic data for modalities with limited training samples. We show that these goals can be achieved with an end-to-end 2D/3D convolutional neural network (CNN) composed of mutually beneficial generators and segmentors for the image synthesis and segmentation tasks. The generators are trained with an adversarial loss, a cycle-consistency loss, and also a shape-consistency loss (supervised by the segmentors) to reduce geometric distortion. From the segmentation view, the segmentors are boosted by synthetic data from the generators in an online manner. Generators and segmentors prompt each other alternately in an end-to-end training fashion. We validate our proposed method on three datasets, including cardiovascular CT and magnetic resonance imaging (MRI), abdominal CT and MRI, and mammography X-rays from different data domains, showing that the two tasks are beneficial to each other and that coupling them results in better performance than solving them exclusively.
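The cycle-consistency loss mentioned above has a compact definition: map an image A→B and back B→A, then penalize the difference from the input. A minimal sketch with an L1 penalty; the function names and the choice of L1 are illustrative assumptions.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle-consistency: ||G_BA(G_AB(x)) - x||_1 averaged per element.

    g_ab, g_ba: callables mapping images between the two modalities
    (stand-ins for the two generator networks).
    """
    x = np.asarray(x, dtype=float)
    return float(np.abs(g_ba(g_ab(x)) - x).mean())
```

This term lets the generators train without paired data: any pair of mappings whose composition is the identity on real images scores zero, regardless of how the intermediate modality looks.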

17.
A global optimisation method for robust affine registration of brain images
Registration is an important component of medical image analysis, and for analysing large amounts of data it is desirable to have fully automatic registration methods. Many different automatic registration methods have been proposed to date, and almost all share a common mathematical framework: optimising a cost function. To date, little attention has been focused on the optimisation method itself, even though the success of most registration methods hinges on the quality of this optimisation. This paper examines the assumptions underlying the problem of registration for brain images using inter-modal voxel similarity measures. It is demonstrated that the use of local optimisation methods together with the standard multi-resolution approach is not sufficient to reliably find the global minimum. To address this problem, a global optimisation method is proposed that is specifically tailored to this form of registration. A full discussion of all the necessary implementation details is included, as this is an important part of any practical method. Furthermore, results are presented for inter-modal, inter-subject registration experiments showing that the proposed method is more reliable at finding the global minimum than several of the registration packages in common usage.
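To see why local optimisation can fail and why a global strategy helps, consider a 1-D cost with two basins: a local search started in the wrong basin never reaches the global minimum, whereas restarting from several initialisations does. This toy sketch is a generic multi-start illustration, not the paper's tailored method; the names and the grid-step descent are assumptions.

```python
def local_descent(cost, x0, step=0.1, iters=200):
    """Naive local search: step to a neighbouring point while it improves."""
    x = float(x0)
    for _ in range(iters):
        best = min((x - step, x, x + step), key=cost)
        if best == x:
            break  # local minimum on this grid
        x = best
    return x

def multi_start_minimise(cost, starts):
    """Simple global strategy: run local descent from many initialisations
    and keep the best result -- the safeguard a single local search lacks."""
    candidates = [local_descent(cost, s) for s in starts]
    return min(candidates, key=cost)
```

In registration, the analogue of `starts` is a coarse search over rotations and translations before any local refinement.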

18.
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
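Normalized mutual information, the metric maximized above, can be estimated from a joint intensity histogram. A minimal sketch using the common definition NMI = (H(A) + H(B)) / H(A, B); the bin count and function name are illustrative assumptions (the paper additionally blends current and prior histograms, which this sketch omits).

```python
import numpy as np

def normalised_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), from a joint histogram.

    Ranges from 1 (independent intensities) to 2 (one image's intensities
    fully determine the other's), so it rewards consistent intensity
    mappings rather than identical values -- hence its use across modalities.
    """
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

With few voxels per subvolume the histogram becomes sparse and the estimate noisy, which motivates the paper's use of prior histograms for low-count subvolumes.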

19.
In this paper, we present a Bayesian framework both for generating inter-subject large-deformation transformations between two multi-modal image sets of the brain and for forming multi-class brain atlases. In this framework, the estimated transformations are generated using maximal information about the underlying neuroanatomy present in each of the different modalities. This modality-independent registration framework is achieved by jointly estimating the posterior probabilities associated with the multi-modal image sets and the high-dimensional registration transformations mapping these posteriors. To maximally use the information present in all the modalities for registration, the Kullback-Leibler divergence between the estimated posteriors is minimized. Registration results are presented for image sets composed of multi-modal MR images of healthy adult human brains. Atlas formation results are presented for a population of five infant human brains.
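The Kullback-Leibler divergence minimized above compares discrete probability distributions (here, per-voxel class posteriors). A minimal sketch of the discrete KL divergence; the epsilon smoothing and normalization are implementation assumptions for numerical safety, not part of the paper's formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, e.g. per-voxel tissue-class
    posteriors. Zero iff p == q; asymmetric in its arguments."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float((p * np.log(p / q)).sum())
```

Driving registration by matching posteriors rather than intensities is what makes the framework modality-independent: two modalities that disagree in raw intensity can still agree on the underlying tissue-class probabilities.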

20.
In the last decade, convolutional neural networks (ConvNets) have been a major focus of research in medical image analysis. However, the performance of ConvNets may be limited by a lack of explicit consideration of the long-range spatial relationships in an image. Recently, Vision Transformer architectures have been proposed to address the shortcomings of ConvNets and have produced state-of-the-art performance in many medical imaging applications. Transformers may be a strong candidate for image registration because their substantially larger receptive field enables a more precise comprehension of the spatial correspondence between moving and fixed images. Here, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. This paper also presents diffeomorphic and Bayesian variants of TransMorph: the diffeomorphic variants ensure topology-preserving deformations, and the Bayesian variant produces a well-calibrated estimate of registration uncertainty. We extensively validated the proposed models using 3D medical images from three applications: inter-patient brain MRI registration, atlas-to-patient brain MRI registration, and phantom-to-CT registration. The proposed models are evaluated against a variety of existing registration methods and Transformer architectures. Qualitative and quantitative results demonstrate that the proposed Transformer-based model leads to a substantial performance improvement over the baseline methods, confirming the effectiveness of Transformers for medical image registration.
