Similar Literature
Found 20 similar documents (search time: 46 ms)
1.
This paper presents a mass-preserving image registration algorithm for lung CT images. To account for the local change in lung tissue intensity during the breathing cycle, a tissue appearance model based on the principle of preservation of total lung mass is proposed. This model is incorporated into a standard image registration framework with a composition of a global affine and several free-form B-spline transformations with increasing grid resolution. The proposed mass-preserving registration method is compared to registration using the sum of squared intensity differences as a similarity function on four groups of data: 44 pairs of longitudinal inspiratory chest CT scans with small differences in lung volume; 44 pairs of longitudinal inspiratory chest CT scans with large differences in lung volume; 16 pairs of expiratory and inspiratory CT scans; and 5 pairs of images extracted at the end-exhale and end-inhale phases of 4D-CT images. Registration errors, measured as the average distance between vessel tree centerlines in the matched images, are significantly lower for the proposed mass-preserving method in the second, third and fourth groups, while there is no statistically significant difference between the two methods in the first group. Target registration error, assessed via a set of manually annotated landmarks in the last group, was also significantly smaller for the proposed method.
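A minimal numpy sketch of the mass-preservation idea (not the authors' implementation): when the moving image is warped, its intensity is rescaled by the local Jacobian determinant of the transformation so that total lung mass is conserved; function and variable names are illustrative.

```python
import numpy as np

def mass_preserving_ssd(fixed, warped_moving, jac_det):
    """Sum of squared differences after rescaling the warped moving image
    by the local Jacobian determinant, so that intensity changes caused by
    local expansion/compression of lung tissue are accounted for.

    fixed, warped_moving : 3-D arrays of CT intensities (same shape)
    jac_det              : 3-D array, determinant of the transform Jacobian
                           at every voxel of the fixed-image grid
    """
    mass_corrected = warped_moving * jac_det   # local volume change rescales intensity
    return np.mean((fixed - mass_corrected) ** 2)
```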

2.
3.
4.
Medical Image Analysis, 2014, 18(6): 914-926
Correlative microscopy is a methodology that combines the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies for the same biological specimen. In this paper, we propose an image registration method for correlative microscopy, which is challenging due to the distinct appearance of biological structures when imaged with different modalities. Our method is based on image analogies and transforms images of a given modality into the appearance space of another modality. Hence, the registration between two different types of microscopy images is reduced to a mono-modal image registration. We use a sparse representation model to obtain the image analogies: corresponding training patches from the two imaging modalities are used to learn a dictionary that captures the appearance relation between them. We test our approach on backscattered electron (BSE) scanning electron microscopy (SEM)/confocal and transmission electron microscopy (TEM)/confocal images. We perform rigid, affine, and deformable registration via B-splines and show improvements over direct registration using both mutual information and sum of squared differences similarity measures to account for differences in image appearance.
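A rough sketch of the coupled-dictionary idea behind such image analogies, using scikit-learn; the patch sizes, placeholder data and variable names are assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

# Paired training patches from the two modalities, one flattened patch per row,
# extracted at corresponding locations (placeholder random data here).
patches_a = np.random.rand(500, 64)   # e.g. TEM patches
patches_b = np.random.rand(500, 64)   # corresponding confocal patches

# Learn a joint dictionary over concatenated patch pairs so that the two
# halves of each atom encode corresponding appearances.
joint = np.hstack([patches_a, patches_b])
dl = DictionaryLearning(n_components=128, transform_algorithm="lasso_lars", random_state=0)
dl.fit(joint)
dict_a, dict_b = dl.components_[:, :64], dl.components_[:, 64:]

# "Analogy": sparse-code a new modality-A patch with the A half of the
# dictionary, then synthesize its modality-B appearance from the B half.
new_a = np.random.rand(1, 64)
code = sparse_encode(new_a, dict_a, algorithm="lasso_lars")
synthesized_b = code @ dict_b          # proxy patch for mono-modal registration
```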

5.
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images of different modalities using modality-independent features or information-theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem with a two-channel registration algorithm capable of using mono-modal similarity measures such as the sum of squared differences or cross-correlation. To make these same-modality measures applicable, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the choice of multi-channel deformable registration algorithm. With a single exception, all results demonstrated improvements when compared against single-channel registrations using the same algorithm with mutual information.
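For reference, the two mono-modal measures mentioned above can be written in a few lines of numpy (a generic sketch, not the paper's code):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences; assumes a and b are already in the same modality."""
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1]."""
    a = a.astype(float).ravel(); b = b.astype(float).ravel()
    a -= a.mean(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```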

6.
7.
This paper presents the Population Learning followed by One Shot Learning (PLOSL) pulmonary image registration method. PLOSL is a fast unsupervised learning-based framework for 3D-CT pulmonary image registration that combines population learning (PL) and one-shot learning (OSL). PLOSL retains the advantages of the PL and OSL approaches while reducing their respective drawbacks: it improves performance over PL alone, substantially reduces OSL training time, and reduces the likelihood of OSL getting stuck in local minima. PLOSL uses tissue-volume-preserving and vesselness constraints for registration of inspiration-to-expiration and expiration-to-inspiration pulmonary CT images. A coarse-to-fine convolutional encoder-decoder CNN architecture is used to register both large and small shape features. During training, the sum of squared tissue volume difference (SSTVD) compensates for intensity differences between inspiration and expiration computed tomography (CT) images, and the sum of squared vesselness measure difference (SSVMD) helps match the lung vessel tree. Results show that the PLOSL (SSTVD+SSVMD) algorithm achieved subvoxel landmark error while preserving pulmonary topology on the SPIROMICS data set and the public DIR-LAB COPDGene and 4DCT data sets.
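A hedged numpy sketch of the SSTVD idea as it is commonly formulated (the HU reference values, names and voxel-volume handling are assumptions): CT intensities are converted to tissue fractions, scaled by the local volume change, and then compared.

```python
import numpy as np

HU_AIR, HU_TISSUE = -1000.0, 55.0     # commonly used reference values (assumption)

def tissue_fraction(ct_hu):
    """Fraction of a voxel occupied by tissue, estimated from its HU value."""
    return np.clip((ct_hu - HU_AIR) / (HU_TISSUE - HU_AIR), 0.0, 1.0)

def sstvd(fixed_hu, warped_moving_hu, jac_det, voxel_volume=1.0):
    """Sum of squared tissue volume difference: compares per-voxel tissue
    volume, so intensity changes caused purely by inflation/deflation
    between inspiration and expiration do not dominate the loss."""
    v_fixed  = voxel_volume * tissue_fraction(fixed_hu)
    v_moving = voxel_volume * jac_det * tissue_fraction(warped_moving_hu)
    return np.mean((v_fixed - v_moving) ** 2)
```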

8.
We propose a selective measurement method for computing image similarities based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required for image-guided treatment or therapy systems. In recent years, ultra-tiny electromagnetic sensors have become commercially available, and many image-guided treatment or therapy systems use such sensors for tracking the camera position and orientation. However, due to space limitations, it is difficult to equip the tip of a bronchoscope with such a position sensor, especially in the case of ultra-thin bronchoscopes. Therefore, continuous image registration between real and virtual bronchoscopic images becomes an efficient tool for tracking the bronchoscope. Usually, image registration is done by calculating the image similarity between real and virtual bronchoscopic images. Since global schemes to measure image similarity, such as mutual information, squared gray-level difference, or cross-correlation, average differences in intensity values over an entire region, they fail on scenes in which few characteristic structures can be observed. The proposed method divides the entire image into a set of small subblocks and selects only those in which characteristic shapes are observed. Image similarity is then calculated within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied the proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that bronchoscope tracking using the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method utilizing squared gray-level differences over the entire images.
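A simplified 2-D sketch of the subblock-selection idea, using intensity variance as a stand-in for the paper's characteristic-structure feature; the block size and selection fraction are illustrative.

```python
import numpy as np

def selective_similarity(real_img, virtual_img, block=16, keep_frac=0.3):
    """Squared gray-level difference computed only in subblocks whose
    feature value (here: intensity variance, a simple stand-in for a
    'characteristic structure' score) is among the highest."""
    h, w = real_img.shape
    scores, block_errs = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r = real_img[y:y + block, x:x + block].astype(float)
            v = virtual_img[y:y + block, x:x + block].astype(float)
            scores.append(r.var())
            block_errs.append(np.mean((r - v) ** 2))
    scores, block_errs = np.array(scores), np.array(block_errs)
    keep = scores >= np.quantile(scores, 1.0 - keep_frac)
    return block_errs[keep].mean()         # similarity over selected subblocks only
```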

9.
Image registration aims to find geometric transformations that align images. Most algorithmic and deep learning-based methods solve the registration problem by minimizing a loss function consisting of a similarity metric comparing the aligned images and a regularization term ensuring smoothness of the transformation. Existing similarity metrics such as Euclidean distance or normalized cross-correlation focus on aligning pixel intensity values or correlations, which causes difficulties with low intensity contrast, noise, and ambiguous matching. We propose a semantic similarity metric for image registration that instead focuses on aligning image areas based on semantic correspondence. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised approach that extracts features with an auto-encoder and a semi-supervised approach that uses supplemental segmentation data. We validate the semantic similarity metric using both deep learning-based and algorithmic image registration methods. Compared to existing methods across four different image modalities and applications, the method achieves consistently high registration accuracy and smooth transformation fields.

10.
In this paper, we present a protocol for the evaluation of similarity measures for non-rigid registration. The evaluation is based on five intuitive properties that characterize the behavior of a similarity measure: accuracy, capture range, distinctiveness of the optimum, number of local minima, and risk of non-convergence. These five properties are estimated locally from similarity measure values that correspond to a range of systematic local free-form deformations, obtained by displacing control points in random directions from the gold-standard position. Global similarity measure properties are obtained by combining the local properties over image regions or over the entire image. The feasibility of the proposed evaluation protocol is demonstrated for three similarity measures: mutual information, normalized mutual information and correlation ratio. The evaluation is carried out on a number of MR and CT images: a pair of simulated MR T1 and MR T2 images of the head, three pairs of real MR T1 and T2 images of the head, six pairs of real MR T1 and CT images of the head, and pairs of MR and CT images of three vertebrae. The protocol may help researchers select the most appropriate similarity measure for a non-rigid registration task.
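A small sketch of how such local properties might be probed around the gold-standard position (illustrative only; the paper's exact estimators are not reproduced here). `similarity(offset_vector)` is assumed to return the to-be-minimized measure for a given control-point displacement.

```python
import numpy as np

def local_properties(similarity, max_disp=10.0, n_steps=41, n_dirs=20, rng=None):
    """Displace one control point along random directions, record the 1-D
    similarity profile, and summarize it with two of the five properties:
    accuracy (offset of the profile optimum) and number of local minima."""
    rng = np.random.default_rng(rng)
    steps = np.linspace(-max_disp, max_disp, n_steps)
    acc, n_minima = [], []
    for _ in range(n_dirs):
        d = rng.normal(size=3); d /= np.linalg.norm(d)
        profile = np.array([similarity(t * d) for t in steps])
        acc.append(abs(steps[np.argmin(profile)]))            # accuracy
        interior = (profile[1:-1] < profile[:-2]) & (profile[1:-1] < profile[2:])
        n_minima.append(int(interior.sum()))                  # local minima count
    return {"accuracy": float(np.mean(acc)), "local_minima": float(np.mean(n_minima))}
```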

11.
We present a groupwise US-to-CT registration algorithm for guiding percutaneous spinal interventions. In addition, we introduce a comprehensive validation scheme that accounts for changes in the curvature of the spine between preoperative and intraoperative imaging. In our registration methodology, each vertebra in CT is treated as a sub-volume and transformed individually. A biomechanical model is used to constrain the displacement of the vertebrae relative to one another. The sub-volumes are then reconstructed into a single volume. During each iteration of registration, a US image is simulated from the reconstructed CT volume and an intensity-based similarity metric is calculated against the real US image. Validation studies are performed on CT and US images from a sheep cadaver, five patient-based phantoms designed to preserve realistic curvatures of the spine, and a sixth patient-based phantom in which the curvature of the spine is changed between preoperative and intraoperative imaging. For datasets where the spine curvature between the two imaging modalities was artificially perturbed, the proposed methodology was able to register initial misalignments of up to 20 mm with a success rate of 95%. For the phantom with a physical change in the curvature of the spine introduced between the US and CT datasets, the registration success rate was 98.5%. Finally, the registration success rate for the sheep cadaver with soft-tissue information was 87%. The results demonstrate that our algorithm allows for robust registration of US and CT datasets, regardless of a change in the patient's pose between preoperative and intraoperative image acquisitions.

12.
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of the six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities for most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated based on expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
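For reference, a generic NMI computation from a joint histogram, with an optional prior histogram blended in as a rough analogue of the paper's use of prior mutual histograms (names and the weighting scheme are assumptions):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64, prior_hist=None, prior_weight=0.0):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint histogram; an optional
    prior joint histogram can be blended in to stabilize the estimate for
    subvolumes containing few voxels."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    if prior_hist is not None:
        joint = (1.0 - prior_weight) * joint + prior_weight * prior_hist
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    eps = 1e-12
    hx = -np.sum(px * np.log(px + eps))
    hy = -np.sum(py * np.log(py + eps))
    hxy = -np.sum(p * np.log(p + eps))
    return (hx + hy) / hxy
```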

13.
We describe a new algorithm for non-rigid registration capable of estimating a constrained dense displacement field from multi-modal image data. We applied this algorithm to capture non-rigid deformation between digital images of histological slides and digital flat-bed scanned images of cryotomed sections of the larynx, and carried out validation experiments to measure the effectiveness of the algorithm. The implementation was carried out by extending the open-source Insight ToolKit software. In diagnostic imaging of cancer of the larynx, imaging modalities sensitive to both anatomy (such as MRI and CT) and function (PET) are valuable. However, these modalities differ in their capability to discriminate tumor margins. Gold-standard tumor margins can be obtained from histological images of cryotomed sections of the larynx. Unfortunately, the process of freezing, fixation, cryotoming and staining the tissue to create histological images introduces non-rigid deformations and significant contrast changes. We demonstrate that the non-rigid registration algorithm we present is able to capture these deformations and allows us to align histological images with the scanned images of the larynx. Our non-rigid registration algorithm constructs a deformation field to warp one image onto another. The algorithm measures image similarity using a mutual information criterion, and avoids spurious deformations due to noise by constraining the estimated deformation field with a linear elastic regularization term. The finite element method is used to represent the deformation field, and our implementation enables us to assign inhomogeneous material characteristics so that hard regions resist internal deformation whereas soft regions are more pliant. A gradient descent optimization strategy is used, enabling rapid and accurate convergence to the desired estimate of the deformation field. A further gain in speed, at no cost in accuracy, is achieved by using an adaptive mesh refinement strategy.

14.
Whole-body computed tomography (CT) image registration is important for cancer diagnosis, therapy planning and treatment. Such registration requires accounting for large differences between source and target images caused by deformations of soft organs/tissues and articulated motion of skeletal structures. Registration algorithms that rely solely on image processing methods exhibit deficiencies in accounting for such deformations and motion. We propose to predict the deformations and movements of body organs/tissues and skeletal structures for whole-body CT image registration using patient-specific non-linear biomechanical modelling. Unlike conventional biomechanical modelling, our approach for building the biomechanical models does not require time-consuming segmentation of CT scans to divide the whole body into non-overlapping constituents with different material properties. Instead, a Fuzzy C-Means (FCM) algorithm is used for tissue classification to assign the constitutive properties automatically at the integration points of the computation grid. We use only a very simple segmentation of the spine when determining vertebrae displacements to define loading for the biomechanical models. We demonstrate the feasibility and accuracy of our approach on CT images of seven patients suffering from cancer and aortic disease. The results confirm that accurate whole-body CT image registration can be achieved using a patient-specific non-linear biomechanical model constructed without time-consuming segmentation of the whole-body images.
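A plain-numpy sketch of the fuzzy C-means membership update used for this kind of soft tissue classification (cluster count and iteration settings are illustrative, not the paper's):

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=4, m=2.0, n_iter=50, rng=None):
    """Plain fuzzy C-means on voxel intensities x (1-D array).  The returned
    membership matrix gives soft class weights that can be used to assign
    constitutive properties without a hard segmentation."""
    rng = np.random.default_rng(rng)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)   # fuzzy cluster means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12         # voxel-to-center distances
        u = 1.0 / (d ** (2.0 / (m - 1)))                          # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```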

15.
Graph-based groupwise registration methods are widely used in atlas construction. Given a group of images, a graph is built whose nodes represent the images and whose edges represent geodesic paths between pairs of nodes. The distribution of images on an image manifold is explored through edge traversal in the graph. The final atlas is a mean image at the population center of the distribution on the manifold. The procedure of warping all images to the mean image then becomes dynamic graph shrinkage, in which nodes move closer to each other. Most conventional groupwise registration frameworks construct and shrink a graph without considering the local distribution of images on the dataset manifold or the local structure variations between image pairs. Neglecting this local information fundamentally decreases accuracy and efficiency when population atlases are built for organs with large inter-subject anatomical variability. To overcome this problem, this paper proposes a global-local graph shrinkage approach that can generate an accurate atlas. A connected graph is constructed automatically based on global similarities across the images to explore the global distribution. A local image distribution, obtained by image clustering, is used to simplify the edges of the constructed graph. Subsequently, local image similarities refine the deformation estimated from global image similarity for each image warped along the graph edges. Through this image warping, the simplified graph gradually shrinks to yield an atlas that respects both global and local features. The proposed method is evaluated on 61 synthetic and 20 clinical liver datasets, and the results are compared with those of six state-of-the-art groupwise registration methods. The experimental results show that the proposed method outperforms the approaches that do not combine global and local information in terms of accuracy.
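A toy sketch of the graph-construction and population-center step using networkx; the neighbourhood size and the pairwise dissimilarity matrix are assumptions, and the subsequent shrinkage/warping is not shown.

```python
import numpy as np
import networkx as nx

def population_center(pairwise_dist, k=5):
    """Build a k-nearest-neighbour graph over the images (nodes) from a
    pairwise dissimilarity matrix, then pick the node whose summed geodesic
    distance to all others is smallest -- a simple stand-in for the
    population center toward which the graph is shrunk."""
    n = pairwise_dist.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(pairwise_dist[i])[1:k + 1]:   # k nearest neighbours of node i
            g.add_edge(i, int(j), weight=float(pairwise_dist[i, j]))
    geo = dict(nx.all_pairs_dijkstra_path_length(g, weight="weight"))
    sums = [sum(geo[i].values()) if len(geo[i]) == n else np.inf for i in range(n)]
    return int(np.argmin(sums))
```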

16.
3D Gabor wavelets for evaluating SPM normalization algorithm

17.
18.
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate that FBA achieves alignment accuracy similar to widely used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate that FBA is an effective system for automatic human body alignment where other alignment methods break down.

19.
This study presents methods for 2-D registration of retinal image sequences and 3-D shape inference from fluorescein images. The Y-feature is a robust geometric entity that is largely invariant across modalities as well as across the temporal grey-level variations induced by the propagation of the dye in the vessels. We first present a Y-feature extraction method that finds a set of Y-feature candidates using local image gradient information. A gradient-based approach is then used to align an articulated model of the Y-feature to the candidates more accurately while optimizing a cost function. Using mutual information, the fitted Y-features are subsequently matched across images, including color and fluorescein angiographic frames, for registration. To reconstruct the retinal fundus in 3-D, the extracted Y-features are used to estimate the epipolar geometry with a plane-and-parallax approach. The proposed solution provides a robust estimation of the fundamental matrix suitable for plane-like surfaces such as the retinal fundus. The mutual information criterion is used to accurately estimate the dense disparity map. Our experimental results validate the proposed method on a set of difficult fluorescein image pairs.

20.
This paper describes a method for tracking the camera motion of a flexible endoscope, in particular a bronchoscope, using epipolar geometry analysis and intensity-based image registration. The method proposed here does not use a positional sensor attached to the endoscope. Instead, it tracks camera motion using real endoscopic (RE) video images obtained at the time of the procedure and X-ray CT images acquired before the endoscopic examination. A virtual endoscope system (VES) is used for generating virtual endoscopic (VE) images. The basic idea of this tracking method is to find the viewpoint and view direction of the VES that maximize a similarity measure between the VE and RE images. To assist the parameter search process, camera motion is also computed directly from epipolar geometry analysis of the RE video images. The complete method consists of two steps: (a) rough estimation using epipolar geometry analysis and (b) precise estimation using intensity-based image registration. In the rough registration process, the method computes camera motion from optical flow patterns between two consecutive RE video image frames using epipolar geometry analysis. In the image registration stage, we search for the VES viewing parameters that generate the VE image most similar to the current RE image. The correlation coefficient and the mean square intensity difference are used for measuring image similarity. The result obtained in the rough estimation process is used to restrict the parameter search area. We applied the method to bronchoscopic video image data from three patients who had chest CT images. The method successfully tracked camera motion for about 600 consecutive frames in the best case. Visual inspection suggests that the tracking is sufficiently accurate for clinical use. Tracking results obtained by performing the method without the epipolar geometry analysis step were substantially worse. Although the method required about 20 s to process one frame, the results demonstrate the potential of image-based tracking for use in an endoscope navigation system.
