Similar Literature
20 similar documents found (search time: 31 ms)
1.
The aim of deformable brain image registration is to align anatomical structures, which can potentially vary with large and complex deformations. Anatomical structures vary in size and shape, requiring the registration algorithm to estimate deformation fields at various degrees of complexity. Here, we present a difficulty-aware model based on an attention mechanism to automatically identify hard-to-register regions, allowing better estimation of large complex deformations. The difficulty-aware model is incorporated into a cascaded neural network consisting of three sub-networks to fully leverage both global and local contextual information for effective registration. The first sub-network is trained at the image level to predict a coarse-scale deformation field, which is then used for initializing the subsequent sub-network. The next two sub-networks progressively optimize at the patch level with different resolutions to predict a fine-scale deformation field. Embedding difficulty-aware learning into the hierarchical neural network allows harder patches to be identified in the deeper sub-networks at higher resolutions for refining the deformation field. Experiments conducted on four public datasets validate that our method achieves promising registration accuracy with better preservation of topology, compared with state-of-the-art registration methods.

2.
Objective: To build a large-deformation image registration network (LDIRnet) based on multiple cascaded deep convolutional neural networks (CNNs) trained in an unsupervised manner, and to evaluate its performance in registering brain MRI and lung CT images. Methods: Multiple deep CNNs with identical architecture but different parameters were cascaded to learn, end to end, a series of small deformation fields between the images to be registered; the large deformation field between the images was then computed by composing these small deformation fields, achieving large-deformation image registration. Results: Registration of 3D...
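The composition of small deformation fields described above can be sketched as follows; `compose_displacements` is a hypothetical helper (not the LDIRnet code), assuming the displacement convention warp(I, u)(x) = I(x + u(x)):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u1, u2):
    """Compose two dense 2D displacement fields of shape (2, H, W).

    If warp(I, u)(x) = I(x + u(x)) and u1 is applied first, u2 second,
    the combined field is u(x) = u2(x) + u1(x + u2(x)).
    """
    _, H, W = u1.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([yy + u2[0], xx + u2[1]])  # sample points x + u2(x)
    u1_warped = np.stack([
        map_coordinates(u1[c], coords, order=1, mode="nearest")
        for c in range(2)
    ])
    return u2 + u1_warped
```

Composing two constant translations of 1 and 2 voxels yields a 3-voxel translation, as expected.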

3.
Automatic correction of intensity nonuniformity (also termed bias correction) is an essential step in brain MR image analysis. Existing methods are typically developed for adult brain MR images based on the assumption that the image intensities within the same brain tissue are relatively uniform. However, this assumption is not valid in infant brain MR images, due to the dynamic and regionally-heterogeneous image contrast and appearance changes, which are caused by the underlying spatiotemporally-nonuniform myelination process. Therefore, it is not appropriate to directly use existing methods to correct the infant brain MR images. In this paper, we propose an end-to-end 3D adversarial bias correction network (ABCnet), tailored for direct prediction of bias fields from the input infant brain MR images for bias correction. The “ground-truth” bias fields for training our network are carefully defined by an improved N4 method, which integrates manually-corrected tissue segmentation maps as anatomical prior knowledge. The whole network is trained alternately by minimizing generative and adversarial losses. To handle the heterogeneous intensity changes, our generative loss includes a tissue-aware local intensity uniformity term to reduce the local intensity variation in the corrected image. In addition, it integrates two additional terms to enhance the smoothness of the estimated bias field and to improve the robustness of the proposed method, respectively. Comprehensive experiments with different sizes of training datasets have been carried out on a total of 1492 T1w and T2w MR images from neonates, infants, and adults. Both qualitative and quantitative evaluations on simulated and real datasets consistently demonstrate the superior performance of our ABCnet in both accuracy and efficiency, compared with widely used methods.

4.
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The registration framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis. Therefore, the network implicitly models the underlying biomechanical constraint when performing point cloud matching. A total of 50 patients’ datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics including Dice similarity coefficient (DSC), mean surface distance (MSD) and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). Jacobian determinant and strain tensors of the predicted deformation field were calculated to analyze the physical fidelity of the deformation field. On average, the mean and standard deviation were 0.94±0.02, 0.90±0.23 mm, 2.96±1.00 mm and 1.57±0.77 mm for DSC, MSD, HD and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrated that the proposed method could rapidly perform MR-TRUS image registration with good registration accuracy and robustness.
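As a rough illustration of the overlap and surface metrics named above (DSC, MSD, HD), here is a minimal NumPy/SciPy sketch; the function names and the binary-mask representation are assumptions, not the authors' evaluation code:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_distances(a, b, spacing=1.0):
    """Distances from each surface voxel of mask a to the surface of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)                      # boundary voxels of a
    surf_b = b ^ binary_erosion(b)
    dt_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dt_b[surf_a]

def msd_hd(a, b, spacing=1.0):
    """Symmetric mean surface distance and Hausdorff distance."""
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    msd = (d_ab.mean() + d_ba.mean()) / 2
    hd = max(d_ab.max(), d_ba.max())
    return msd, hd
```

For identical masks all three metrics are perfect (DSC 1, MSD 0, HD 0); a one-voxel shift yields HD of one voxel spacing.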

5.
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
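The NMI similarity measure driving this registration can be sketched from a joint intensity histogram; this is a minimal illustration only (the paper additionally blends current and prior histograms for low-voxel-count subvolumes, which is omitted here):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

NMI reaches its maximum value of 2 for identical images and decreases toward 1 as the images become statistically independent.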

6.
Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field.

7.
8.
Non-rigid image registration techniques are commonly used to estimate complex tissue deformations in medical imaging. A range of non-rigid registration algorithms have been proposed, but they typically have high computational complexity. To reduce this complexity, combinations of multiple less complex deformations have been proposed, such as hierarchical techniques which successively split the non-rigid registration problem into multiple locally rigid or affine components. However, to date, the splitting has been regular and has not considered the underlying image content. This can lead to errors and artefacts in the resulting motion fields. In this paper, we propose three novel adaptive splitting techniques: an image-based, a similarity-based, and a motion-based technique within a hierarchical framework, which attempt to process regions of similar motion and/or image structure in single registration components. We evaluate our technique on free-breathing whole-chest 3D MRI data from 10 volunteers and two publicly available CT datasets. We demonstrate a reduction in registration error of up to 49.1% over a non-adaptive technique and compare our results with a commonly used free-form registration algorithm.

9.
This paper proposes a 3D statistical model aiming at effectively capturing statistics of high-dimensional deformation fields and then uses this prior knowledge to constrain 3D image warping. The conventional statistical shape model methods, such as the active shape model (ASM), have been very successful in modeling shape variability. However, their accuracy and effectiveness typically drop dramatically in high-dimensionality problems involving relatively small training datasets, which is customary in 3D and 4D medical imaging applications. The proposed statistical model of deformation (SMD) uses wavelet-based decompositions coupled with PCA in each wavelet band, in order to more accurately estimate the probability density function (pdf) of high-dimensional deformation fields when a relatively small number of training samples are available. SMD is further used as a statistical prior to regularize the deformation field in an SMD-constrained deformable registration framework. As a result, more robust registration results are obtained relative to using generic smoothness constraints on deformation fields, such as Laplacian-based regularization. In experiments, we first illustrate the performance of SMD in representing the variability of deformation fields and then evaluate the performance of the SMD-constrained registration, via comparing a hierarchical volumetric image registration algorithm, HAMMER, with its SMD-constrained version, referred to as SMD+HAMMER. This SMD-constrained deformable registration framework can potentially incorporate various registration algorithms to improve robustness and stability via statistical shape constraints.

10.
Discrete optimisation strategies have a number of advantages over their continuous counterparts for deformable registration of medical images. For example: it is not necessary to compute derivatives of the similarity term; dense sampling of the search space reduces the risk of becoming trapped in local optima; and (in principle) an optimum can be found without resorting to iterative coarse-to-fine warping strategies. However, the large complexity of high-dimensional medical data renders a direct voxel-wise estimation of deformation vectors impractical. For this reason, previous work on medical image registration using graphical models has largely relied on using a parameterised deformation model and on the use of iterative coarse-to-fine optimisation schemes. In this paper, we propose an approach that enables accurate voxel-wise deformable registration of high-resolution 3D images without the need for intermediate image warping or a multi-resolution scheme. This is achieved by representing the image domain as multiple comprehensive supervoxel layers and making use of the full marginal distribution of all probable displacement vectors after inferring regularity of the deformations using belief propagation. The optimisation acts on the coarse scale representation of supervoxels, which provides sufficient spatial context and is robust to noise in low contrast areas. Minimum spanning trees, which connect neighbouring supervoxels, are employed to model pair-wise deformation dependencies. The optimal displacement for each voxel is calculated by considering the probabilities for all displacements over all overlapping supervoxel graphs and subsequently seeking the mode of this distribution. We demonstrate the applicability of this concept for two challenging applications: first, for intra-patient motion estimation in lung CT scans; and second, for atlas-based segmentation propagation of MRI brain scans. 
For lung registration, the voxel-wise mode of displacements is found using the mean-shift algorithm, which enables us to determine continuous valued sub-voxel motion vectors. Finding the mode of brain segmentation labels is performed using a voxel-wise majority voting weighted by the displacement uncertainty estimates. Our experimental results show significant improvements in registration accuracy when using the additional information provided by the registration uncertainty estimates. The multi-layer approach enables fusion of multiple complementary proposals, extending the popular fusion approaches from multi-image registration to probabilistic one-to-one image registration.
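The uncertainty-weighted majority voting used for label fusion might look roughly like the sketch below; `weighted_label_vote` and the flattened (K, N) layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weighted_label_vote(label_maps, weights):
    """Fuse K candidate label maps per voxel, weighting each proposal by a
    per-voxel confidence (e.g. derived from displacement uncertainty).

    label_maps: (K, N) int array of candidate labels per voxel.
    weights:    (K, N) float array of per-proposal confidences.
    Returns the (N,) label map maximizing the summed weight per label.
    """
    K, N = label_maps.shape
    n_labels = label_maps.max() + 1
    scores = np.zeros((n_labels, N))
    for k in range(K):
        np.add.at(scores, (label_maps[k], np.arange(N)), weights[k])
    return scores.argmax(axis=0)
```

With equal weights this reduces to plain majority voting; a high-confidence proposal can override the majority.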

11.
Image registration is a fundamental task in medical image analysis. Recently, many deep learning based image registration methods have been extensively investigated due to their comparable performance with the state-of-the-art classical approaches despite the ultra-fast computational time. However, the existing deep learning methods still have limitations in the preservation of original topology during the deformation with registration vector fields. To address this issue, here we present a cycle-consistent deformable image registration, dubbed CycleMorph. The cycle consistency enhances image registration performance by providing an implicit regularization to preserve topology during the deformation. The proposed method is flexible and can be applied to both 2D and 3D registration problems for various applications, and can be easily extended to a multi-scale implementation to deal with the memory issues in large volume registration. Experimental results on various datasets from medical and non-medical applications demonstrate that the proposed method provides effective and accurate registration on diverse image pairs within a few seconds. Qualitative and quantitative evaluations on deformation fields also verify the effectiveness of the cycle consistency of the proposed method.
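A cycle-consistency term of the kind described can be illustrated in a few lines; this NumPy sketch uses fixed 2D displacement fields rather than network predictions, and `warp`/`cycle_loss` are hypothetical names (CycleMorph itself works on learned fields inside a deep network):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, u):
    """Warp a 2D image by a displacement field u of shape (2, H, W)."""
    H, W = img.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return map_coordinates(img, [yy + u[0], xx + u[1]], order=1, mode="nearest")

def cycle_loss(img_a, img_b, u_ab, u_ba):
    """L1 cycle-consistency: warping A toward B and back should recover A,
    and symmetrically for B; this implicitly discourages topology-breaking
    deformations."""
    a_cycle = warp(warp(img_a, u_ab), u_ba)
    b_cycle = warp(warp(img_b, u_ba), u_ab)
    return np.abs(a_cycle - img_a).mean() + np.abs(b_cycle - img_b).mean()
```

Identity (zero) fields give a cycle loss of zero, the minimum of the term.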

12.
We propose a Dual-stream Pyramid Registration Network (referred to as Dual-PRNet) for unsupervised 3D brain image registration. Unlike recent CNN-based registration approaches, such as VoxelMorph, which computes a registration field from a pair of 3D volumes using a single-stream network, we design a two-stream architecture able to estimate multi-level registration fields sequentially from a pair of feature pyramids. Our main contributions are: (i) we design a two-stream 3D encoder-decoder network that computes two convolutional feature pyramids separately from two input volumes; (ii) we propose sequential pyramid registration where a sequence of pyramid registration (PR) modules is designed to predict multi-level registration fields directly from the decoding feature pyramids. The registration fields are refined gradually in a coarse-to-fine manner via sequential warping, which equips the model with a strong capability for handling large deformations; (iii) the PR modules can be further enhanced by computing local 3D correlations between the feature pyramids, resulting in the improved Dual-PRNet++ able to aggregate rich detailed anatomical structure of the brain; (iv) our Dual-PRNet++ can be integrated into a 3D segmentation framework for joint registration and segmentation, by precisely warping voxel-level annotations. Our methods are evaluated on two standard benchmarks for brain MRI registration, where Dual-PRNet++ outperforms the state-of-the-art approaches by a large margin, i.e., improving recent VoxelMorph from 0.511 to 0.748 (Dice score) on the Mindboggle101 dataset. In addition, we further demonstrate that our methods can greatly facilitate the segmentation task in a joint learning framework, by leveraging limited annotations.

13.
14.
We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and the hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that the TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of the deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability in clinical scenarios.
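The TPS sparse-to-dense step can be approximated with SciPy's thin-plate-spline interpolator; a minimal 2D sketch, assuming landmarks given as (y, x) coordinates (the paper's own TPS implementation may differ):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_dense_field(landmarks, displacements, shape):
    """Interpolate sparse landmark displacements to a dense 2D field with
    thin-plate splines (scipy's RBFInterpolator, kernel='thin_plate_spline').

    landmarks:     (n, 2) array of (y, x) landmark positions.
    displacements: (n, 2) array of displacement vectors at the landmarks.
    Returns an (H, W, 2) dense displacement field.
    """
    H, W = shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    tps = RBFInterpolator(landmarks, displacements, kernel="thin_plate_spline")
    return tps(grid).reshape(H, W, 2)
```

Because the TPS model includes an affine polynomial term, affine landmark displacements are reproduced exactly everywhere, which illustrates the smoothness property mentioned above.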

15.
Accurate 3D segmentation of calf muscle compartments in volumetric MR images is essential to diagnose as well as assess progression of muscular diseases. Recently, good segmentation performance was achieved using state-of-the-art deep learning approaches, which, however, require large amounts of annotated data for training. Considering that obtaining sufficiently large medical image annotation datasets is often difficult, time-consuming, and requires expert knowledge, minimizing the necessary sizes of expert-annotated training datasets is of great importance. This paper reports CMC-Net, a new deep learning framework for calf muscle compartment segmentation in 3D MR images that selects an effective small subset of 2D slices from the 3D images to be labelled, while also utilizing unannotated slices to facilitate proper generalization of the subsequent training steps. Our model consists of three parts: (1) an unsupervised method to select the most representative 2D slices on which expert annotation is performed; (2) ensemble model training employing these annotated as well as additional unannotated 2D slices; (3) a model-tuning method using pseudo-labels generated by the ensemble model that results in a trained deep network capable of accurate 3D segmentations. Experiments on segmentation of calf muscle compartments in 3D MR images show that our new approach achieves good performance with very small annotation ratios, and when utilizing full annotation, it outperforms state-of-the-art full annotation segmentation methods. Additional experiments on a 3D MR thigh dataset further verify the ability of our method in segmenting leg muscle groups with sparse annotation.
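One plausible, purely hypothetical stand-in for the unsupervised representative-slice selection step is k-means over per-slice features, keeping the slice nearest each centroid; this is not the paper's method, just a simple realization of the idea:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def select_representative_slices(volume, n_select, seed=0):
    """Pick n_select representative 2D slices from a 3D volume (Z, H, W) by
    k-means clustering of flattened per-slice intensities; for each cluster,
    keep the slice closest to the centroid."""
    feats = volume.reshape(volume.shape[0], -1).astype(float)
    centroids, labels = kmeans2(feats, n_select, seed=seed, minit="++")
    chosen = []
    for k in range(n_select):
        idx = np.where(labels == k)[0]
        if idx.size:
            d = np.linalg.norm(feats[idx] - centroids[k], axis=1)
            chosen.append(int(idx[d.argmin()]))
    return sorted(chosen)
```

On a volume containing two clearly distinct slice populations, the selection returns one representative from each.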

16.
Deep-learning-based registration methods emerged as a fast alternative to conventional registration methods. However, these methods often still cannot achieve the same performance as conventional registration methods because they are either limited to small deformations or they fail to handle a superposition of large and small deformations without producing implausible deformation fields with foldings inside. In this paper, we identify important strategies of conventional registration methods for lung registration and successfully develop the deep-learning counterpart. We employ a Gaussian-pyramid-based multilevel framework that can solve the image registration optimization in a coarse-to-fine fashion. Furthermore, we prevent foldings of the deformation field and restrict the determinant of the Jacobian to physiologically meaningful values by combining a volume change penalty with a curvature regularizer in the loss function. Keypoint correspondences are integrated to focus on the alignment of smaller structures. We perform an extensive evaluation to assess the accuracy, the robustness, the plausibility of the estimated deformation fields, and the transferability of our registration approach. We show that it achieves state-of-the-art results on the COPDGene dataset compared to conventional registration methods with much shorter execution time. In our experiments on the DIRLab exhale-to-inhale lung registration, we demonstrate substantial improvements (TRE below 1.2 mm) over other deep learning methods. Our algorithm is publicly available at https://grand-challenge.org/algorithms/deep-learning-based-ct-lung-registration/.
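The folding check on the Jacobian determinant can be sketched for a 2D displacement field with finite differences; this minimal version only illustrates the quantity being constrained (it does not reproduce the paper's volume-change penalty or curvature regularizer):

```python
import numpy as np

def jacobian_determinant_2d(u):
    """Determinant of the Jacobian of the transform x + u(x) for a dense 2D
    displacement field u of shape (2, H, W). det <= 0 indicates folding."""
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    return (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy

def folding_penalty(u):
    """Mean magnitude of negative Jacobian determinants (0 if fold-free)."""
    det = jacobian_determinant_2d(u)
    return np.clip(-det, 0, None).mean()
```

A zero field gives det = 1 everywhere; a uniform 1.5x expansion gives det = 2.25, both fold-free.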

17.
To compensate for the neglect of spatial information in the conventional mutual information based image registration framework, this paper designs a novel spatial feature field, namely the maximum distance-gradient (MDG) vector field, for registration tasks. It encodes both the local edge information and globally defined spatial information related to the intensity difference, the distance, and the direction of a voxel to an MDG source point. A novel similarity measure is proposed as the combination of the multi-dimensional mutual information and an angle measure on the MDG vector field. This measure integrates both the magnitude and orientation information of the MDG vector field into the image registration process. Experimental results on clinical 3D CT and T1-weighted MR image volumes show that, as compared with the conventional mutual information based method and two of its adaptations incorporating spatial information, the proposed method gives longer capture ranges at different image resolutions. This leads to more robust registrations. Around 2000 randomized rigid registration experiments demonstrate that our method consistently gives much higher success rates than the aforementioned three related methods. Moreover, the registration accuracy of our method is shown to be high.

18.
Over the last decade, convolutional neural networks have emerged and advanced the state-of-the-art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, as they are trained on databases of millions of natural images. Conversely, in the field of medical image analysis, progress, while also remarkable, has been slowed by the relative lack of annotated data and by the inherent constraints of the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the effectiveness of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. In this direction, we designed novel architectures based on two key principles: weight transfer, by embedding a 2D pre-trained encoder into a higher dimensional U-Net, and dimensional transfer, by expanding a 2D segmentation network into a higher dimension one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echo-cardiographic data segmentation and surpassed the state-of-the-art. Regarding 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for enhanced tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
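The weight-transfer idea of embedding a 2D pre-trained encoder in a higher-dimensional network is often realized by inflating 2D kernels along the new depth axis; a minimal sketch of one common inflation scheme (an assumption, not necessarily the authors' exact procedure):

```python
import numpy as np

def inflate_kernel_2d_to_3d(w2d, depth):
    """Inflate a 2D conv kernel (out, in, kH, kW) to 3D (out, in, depth, kH, kW)
    by replicating along the new depth axis and dividing by depth, so the 3D
    response to a depth-constant input matches the original 2D response."""
    return np.repeat(w2d[:, :, None, :, :], depth, axis=2) / depth
```

By construction, summing the inflated kernel over its depth axis recovers the original 2D weights exactly.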

19.
A deformable registration method is described that enables automatic alignment of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland. The method employs a novel "model-to-image" registration approach in which a deformable model of the gland surface, derived from an MR image, is registered automatically to a TRUS volume by maximising the likelihood of a particular model shape given a voxel-intensity-based feature that represents an estimate of surface normal vectors at the boundary of the gland. The deformation of the surface model is constrained by a patient-specific statistical model of gland deformation, which is trained using data provided by biomechanical simulations. Each simulation predicts the motion of a volumetric finite element mesh due to the random placement of a TRUS probe in the rectum. The use of biomechanical modelling in this way also allows a dense displacement field to be calculated within the prostate, which is then used to non-rigidly warp the MR image to match the TRUS image. Using data acquired from eight patients, and anatomical landmarks to quantify the registration accuracy, the median final RMS target registration error after performing 100 MR-TRUS registrations for each patient was 2.40 mm.

20.
We propose a multimodal free-form registration algorithm based on maximization of mutual information. The warped image is modeled as a viscous fluid that deforms under the influence of forces derived from the gradient of the mutual information registration criterion. Parzen windowing is used to estimate the joint intensity probability of the images to be matched. The method is evaluated for non-rigid inter-subject registration of MR brain images. The accuracy of the method is verified using simulated multi-modal MR images with known ground truth deformation. The results show that the root mean square difference between the recovered and the ground truth deformation is smaller than 1 voxel. We illustrate the application of the method for atlas-based brain tissue segmentation in MR images in case of gross morphological differences between atlas and patient images.
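A Parzen-window estimate of mutual information can be approximated by Gaussian-smoothing the joint histogram; a minimal sketch (`parzen_mutual_information` is a hypothetical helper, and histogram smoothing is only an approximation of true Parzen windowing on the samples):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def parzen_mutual_information(a, b, bins=64, sigma=1.0):
    """Mutual information with a Parzen-window-like (Gaussian-smoothed
    histogram) estimate of the joint intensity density."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint = gaussian_filter(joint, sigma)   # smooth the histogram counts
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```

MI is non-negative, near zero for independent intensities, and large when one image determines the other, which is what drives the fluid forces above.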


Copyright©北京勤云科技发展有限公司  京ICP备09084417号