Similar articles (20 results)
1.
Early identification of kidney function deterioration is essential to determine which newborn patients with congenital kidney disease should be considered for surgical intervention rather than observation. Kidney function can be measured by fitting a tracer kinetic (TK) model to a series of dynamic contrast-enhanced (DCE) MR images and estimating the filtration rate parameter from the model. Unfortunately, breathing and large bulk motion events due to patient movement in the scanner create outliers and misalignments that introduce large errors in the TK model parameter estimates, even when a motion-robust dynamic radial VIBE sequence is used for DCE-MR imaging. The misalignments between the series of volumes are difficult to correct using standard registration because of 1) the large differences in geometry and contrast between volumes of the dynamic sequence, and 2) the degradation of image quality caused by motion and by the fast dynamic imaging required for high temporal resolution. These difficulties reduce the accuracy and stability of registration over the dynamic sequence. An alternative registration approach is to generate noise- and motion-free templates of the original data from the TK model and use them to register each volume to its contrast-matched template. However, the TK models used to characterize DCE-MRI are tissue specific, non-linear, and sensitive to the same motion and sampling artifacts that hinder registration in the first place. Hence, they can only be applied to accurately register pre-segmented regions of interest, such as the kidneys, and might converge to local minima in the presence of large artifacts. Here we introduce a novel linear time-invariant (LTI) model to characterize DCE-MR data for different tissue types within a volume. We approximate the LTI model as a sparse sum of first-order LTI functions to introduce robustness to motion and sampling artifacts. Hence, this model is well suited for registering the entire field of view of DCE-MR data with artifacts and outliers. We incorporate this LTI model into a registration framework and evaluate it on both synthetic data and data from 20 children. For each subject, we reconstructed the sequence of DCE-MR images, detected corrupted volumes acquired during motion, aligned the sequence of volumes, and recovered the corrupted volumes using the LTI model. The results show that our approach correctly aligned the volumes, provided the most temporally stable registration, and improved the tracer kinetic model fit.
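The core idea of the sparse sum of first-order LTI functions can be illustrated with a small numerical sketch. This is not the authors' implementation: the time constants, enhancement curve, and plain least-squares fit below are illustrative assumptions (a sparsity penalty is omitted for brevity).

```python
import numpy as np

# Approximate an enhancement curve as a sparse combination of first-order
# LTI responses h_tau(t) = exp(-t / tau), drawn from a small dictionary.
t = np.linspace(0.0, 10.0, 200)                # acquisition times (arbitrary units)
taus = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # candidate time constants (assumed)

# Dictionary: one first-order response per candidate time constant.
D = np.exp(-t[:, None] / taus[None, :])        # shape (200, 5)

# Synthetic "ground truth": a sparse combination of two dictionary atoms.
w_true = np.array([0.0, 1.5, 0.0, 0.7, 0.0])
y = D @ w_true + 0.01 * np.random.default_rng(0).standard_normal(t.size)

# Ordinary least-squares fit of the dictionary weights; an L1 penalty would
# encourage sparsity but is left out to keep the sketch short.
w_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
residual = float(np.linalg.norm(D @ w_hat - y) / np.linalg.norm(y))
```

With only five atoms, the fitted curve reproduces the noisy signal closely; the relative residual is on the order of the added noise.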

2.
Medical image registration with partial data   Cited by: 2 (self-citations: 0, by others: 2)
We have developed a general-purpose registration algorithm for medical images and volumes. The transformation between images is modeled as locally affine but globally smooth, and explicitly accounts for local and global variations in image intensities. An explicit model of missing data is also incorporated, allowing us to simultaneously segment and register images with partial or missing data. The algorithm is built upon a differential multiscale framework and incorporates the expectation maximization algorithm. We show that this approach is highly effective in registering a range of synthetic and clinical medical images.
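The interplay between an explicit outlier/missing-data model and transform estimation via expectation maximization can be sketched in one dimension. This is a hypothetical toy (a 1D affine map and a Gaussian-plus-uniform mixture), not the paper's locally affine multiscale method.

```python
import numpy as np

# EM-style robust estimation of y = a*x + b when a fraction of samples is
# corrupted ("partial data"), modeled as a uniform outlier component.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.5 + 0.02 * rng.standard_normal(100)
y[::10] += rng.uniform(-3.0, 3.0, 10)          # corrupt every tenth sample

A = np.stack([x, np.ones_like(x)], axis=1)
a, b = np.linalg.lstsq(A, y, rcond=None)[0]    # unweighted initial fit
sigma, p_out = 0.5, 0.1                        # inlier noise std, fixed outlier prior
for _ in range(30):
    r = y - (a * x + b)
    # E-step: posterior probability that each sample is an inlier.
    lik_in = (1 - p_out) * np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    lik_out = p_out / 6.0                      # uniform outlier density on [-3, 3]
    w = lik_in / (lik_in + lik_out)
    # M-step: weighted least squares for the affine parameters, then sigma.
    sw = np.sqrt(w)[:, None]
    a, b = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)[0]
    sigma = max(float(np.sqrt(np.sum(w * r ** 2) / max(np.sum(w), 1e-9))), 1e-3)
```

The corrupted samples receive near-zero weights, so the recovered slope and intercept match the clean generating model closely.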

3.
F. L. Bookstein, NeuroImage, 2001, 14(6):1454-1462
John Ashburner and Karl Friston (2000) introduced a standardized method of "voxel-based morphometry" (VBM) for comparisons of local concentrations of gray matter between two groups of subjects. Segmented images of gray matter from grossly normalized high-resolution images are smoothed and their group differences analyzed by the now-conventional voxelwise Worsley approach to Gaussian random fields of differences. This comment concerns an unfortunate interaction between the algorithm's spatial normalization and voxelwise comparison steps, whereby several obvious quantitative confounds are injected at the core of the inference engine the authors put forward. Specifically, the statistics of the resulting voxelwise comparisons are uninformative about group differences wherever the spatial normalization algorithm has failed to register on any robustly appearing image gradient. The method of Ashburner and Friston is defensible only far from all image gradients.

4.
We propose a method for registration of 3D fetal brain ultrasound with a reconstructed magnetic resonance fetal brain volume. This method, for the first time, allows the alignment of models of the fetal brain built from magnetic resonance images with 3D fetal brain ultrasound, opening possibilities to develop new, prior information based image analysis methods for 3D fetal neurosonography. The reconstructed magnetic resonance volume is first segmented using a probabilistic atlas and a pseudo ultrasound image volume is simulated from the segmentation. This pseudo ultrasound image is then affinely aligned with clinical ultrasound fetal brain volumes using a robust block-matching approach that can deal with intensity artefacts and missing features in the ultrasound images. A qualitative and quantitative evaluation demonstrates good performance of the method for our application, in comparison with other tested approaches. The intensity average of 27 ultrasound images co-aligned with the pseudo ultrasound template shows good correlation with the anatomy of the fetal brain as seen in the reconstructed magnetic resonance image.

5.

6.
7.
Introduction: Retinal layer segmentation in optical coherence tomography (OCT) images is an important approach for detecting and prognosing disease. Automating segmentation using robust machine learning techniques leads to computationally efficient solutions and significantly reduces the cost of labor-intensive labeling, which is traditionally performed by trained graders at a reading center, sometimes aided by semi-automated algorithms. Although several algorithms have been proposed since the revival of deep learning, eyes with severe pathological conditions continue to challenge fully automated segmentation approaches. There remains an opportunity to leverage the underlying spatial correlations between the retinal surfaces in the segmentation approach. Methods: Some of these traditional methods can be extended to exploit the three-dimensional spatial context of retinal image volumes by replacing 2D filters with 3D filters. Towards this purpose, we propose a semantic segmentation algorithm that preserves spatial context, continuity, and anatomical relationships by applying 3D filters across the image volumes. We propose a 3D deep neural network capable of learning the surface positions of the layers in the retinal volumes. Results: We use a dataset of OCT images from patients with age-related macular degeneration (AMD) to assess the performance of our model and provide both qualitative (segmentation maps and thickness maps) and quantitative (error metric and volumetric comparisons) results, which demonstrate that our proposed method performs favorably even for eyes with pathological changes caused by severe retinal diseases. The mean absolute error (MAE) and root mean squared error (RMSE) for patients with a wide range of AMD severity scores (0–11) were within 0.84±0.41 and 1.33±0.73 pixels, respectively, significantly better than several other state-of-the-art algorithms. Conclusion: The results demonstrate the utility of extracting features from the entire OCT volume by treating the volume as a correlated entity, and show the benefit of 3D autoencoder-based regression networks for smoothing the approximated retinal layers by inducing shape-based regularization constraints.
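The 2D-to-3D filter substitution described in the Methods can be illustrated with a toy averaging filter: applied per slice (2D) it never shares information between B-scans, whereas the 3D version does. The kernel and volume below are illustrative, not the paper's network.

```python
import numpy as np

def box_filter(vol, axes):
    """Average each voxel with its two neighbours along the given axes."""
    out = vol.copy()
    for ax in axes:
        out = (np.roll(out, 1, axis=ax) + out + np.roll(out, -1, axis=ax)) / 3.0
    return out

vol = np.zeros((5, 5, 5))
vol[2, 2, 2] = 27.0                             # a single bright voxel in slice 2

smoothed_2d = box_filter(vol, axes=(1, 2))      # per-slice: slices 1 and 3 stay zero
smoothed_3d = box_filter(vol, axes=(0, 1, 2))   # spatial context crosses slices
```

Only the 3D filter propagates the bright voxel's influence into the neighbouring slices, which is the spatial correlation the proposed 3D network exploits.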

8.
Accurate quantification of the morphology of vessels is important for diagnosis and treatment of cardiovascular diseases. We introduce a new joint segmentation and registration approach for the quantification of the aortic arch morphology that combines 3D model-based segmentation with elastic image registration. With this combination, the approach benefits from the robustness of model-based segmentation and the accuracy of elastic registration. The approach can cope with a large spectrum of vessel shapes and particularly with pathological shapes that deviate significantly from the underlying model used for segmentation. The performance of the approach has been evaluated on the basis of 3D synthetic images, 3D phantom data, and clinical 3D CTA images including pathologies. We also performed a quantitative comparison with previous approaches.

9.
Minimally invasive surgery (MIS) offers great benefits to patients compared with open surgery. Nevertheless, during MIS surgeons often need to contend with the narrow field-of-view of the endoscope and obstruction from other surgical instruments. They may also need to relate the surgical scene to information derived from previously acquired 3D medical imaging. We therefore present a new framework to reconstruct the 3D surface of an internal organ from endoscopic images that is robust to measurement noise, missing data, and outliers. This provides surgeons with a 3D surface covering a wide field-of-view, and it can also be used for 3D-3D registration of the anatomy to pre-operative CT/MRI data in image-guided interventions. Our proposed method first removes most of the outliers using an outlier removal method based on the trilinear constraints over three images. Then data that are missing from one or more of the video images (missing data) and the 3D structure are recovered using the structure-from-motion (SFM) technique. Evolutionary agents are applied to improve both the efficiency of data recovery and robustness to outliers. Furthermore, an incremental bundle adjustment strategy is used to refine the camera parameters and 3D structure and produce a more accurate 3D surface. Experimental results with synthetic data show that the method is able to reconstruct surfaces in the presence of feature-tracking errors (up to 5 pixels standard deviation) and a large amount of missing data (up to 50%). Experiments on a realistic phantom model and in vivo data further demonstrate the good performance of the proposed approach in terms of accuracy (1.7 mm residual phantom surface error) and robustness (50% missing-data rate, and 20% outliers in the in vivo experiments).

10.
The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus, such as an optical tracking system, to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method for localizing the endoscope orientation in the intraoperative image using standard image processing algorithms. Second, we highlight that the axis of the endoscope requires a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results demonstrating the feasibility and clinical potential of our approach.

11.
Purpose: We propose and compare different registration approaches to align small-animal PET studies, together with a procedure to validate the results by means of objective registration-consistency measurements. Procedures: We applied a registration algorithm based on information theory, using different approaches to mask the reference image. The registration-consistency measure allows incorrect registrations to be detected. This methodology was evaluated on a test dataset of FDG-PET rat brain images. Results: A multiresolution two-step registration approach, using the whole image at the low-resolution step while masking the brain at the high-resolution step, provides the best robustness (87.5% registration success) and highest accuracy (0.67 mm on average). Conclusions: The major advantages of our approach are minimal user interaction and automatic assessment of the registration error, avoiding visual inspection of the results and thus facilitating the accurate, objective, and rapid analysis of large groups of rodent PET images.
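The information-theoretic similarity underlying such registration algorithms is mutual information estimated from a joint intensity histogram. The sketch below is a generic illustration with an assumed bin count and synthetic images, not the paper's pipeline.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally shaped images, in nats."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
aligned = mutual_information(a, 2.0 * a + 1.0)          # perfectly co-varying
misaligned = mutual_information(a, np.roll(a, 17, axis=0))
```

Because mutual information depends only on the statistical relationship between intensities, the linearly remapped image scores high while the shifted one scores near zero, which is why it works across modalities.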

12.
Estimating the forces acting between instruments and tissue is a challenging problem for robot-assisted minimally-invasive surgery. Recently, numerous vision-based methods have been proposed to replace electro-mechanical approaches. Moreover, optical coherence tomography (OCT) and deep learning have been used to estimate forces from the deformation observed in volumetric image data; that work demonstrated the advantage of deep learning with 3D volumetric data over 2D depth images for force estimation. In this work, we extend the problem of deep learning-based force estimation to 4D spatio-temporal data with streams of 3D OCT volumes. For this purpose, we design and evaluate several methods extending spatio-temporal deep learning to 4D, which is largely unexplored so far. Furthermore, we provide an in-depth analysis of multi-dimensional image data representations for force estimation, comparing our 4D approach to previous, lower-dimensional methods. Also, we analyze the effect of temporal information, and we study the prediction of short-term future force values, which could facilitate safety features. For our 4D force estimation architectures, we find that efficient decoupling of spatial and temporal processing is advantageous. We show that using 4D spatio-temporal data outperforms all previously used data representations, with a mean absolute error of 10.7 mN. We find that temporal information is valuable for force estimation, and we demonstrate the feasibility of force prediction.

13.

Purpose

Simulated 2D X-ray images called digitally reconstructed radiographs (DRRs) have important applications within medical image registration frameworks, where they are compared with reference X-rays or used in implementations of digital tomosynthesis (DTS). However, rendering DRRs from a CT volume is computationally demanding and relatively slow using the conventional ray-casting algorithm. Image-guided radiation therapy systems using DTS to verify target location require a large number of DRRs to be precomputed, since there is insufficient time within the automatic image registration procedure to generate DRRs and search for an optimal pose.

Method

DRRs were rendered from octree-compressed CT data. Previous work showed that octree-compressed volumes rendered by conventional ray casting deliver a registration with acceptable clinical accuracy, but efficiently rendering the irregular grid of an octree data structure is a challenge for conventional ray casting. We address this by using vertex and fragment shaders of modern graphics processing units (GPUs) to directly project internal spaces of the octree, represented by textured particle sprites, onto the view plane. The texture is procedurally generated and depends on the CT pose.
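The octree compression this method relies on can be sketched as a recursive collapse of near-uniform sub-blocks. The tolerance, leaf representation, and test volume below are illustrative assumptions; the paper's actual encoder and GPU splatting are not reproduced here.

```python
import numpy as np

def build_octree(vol, tol=5.0):
    """Return a nested structure: a leaf mean value, or a list of 8 children."""
    if vol.max() - vol.min() <= tol or vol.shape[0] == 1:
        return float(vol.mean())                  # leaf: one value for the whole block
    h = vol.shape[0] // 2                         # cubic, power-of-two volumes assumed
    return [build_octree(vol[ix:ix + h, iy:iy + h, iz:iz + h], tol)
            for ix in (0, h) for iy in (0, h) for iz in (0, h)]

def count_leaves(node):
    return 1 if isinstance(node, float) else sum(count_leaves(c) for c in node)

vol = np.zeros((16, 16, 16))
vol[:8, :8, :8] = 100.0                           # one bright, uniform octant

tree = build_octree(vol)
compression = 1.0 - count_leaves(tree) / vol.size
```

A piecewise-uniform volume collapses to a handful of leaves, which is the high compression ratio (~95% in the Results) that makes per-leaf particle splatting tractable.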

Results

The performance of this new algorithm was found to be 4 times faster than that of a ray-casting algorithm implemented using the NVIDIA Compute Unified Device Architecture (CUDA) on an equivalent GPU (~95% octree compression). Rendering artifacts are apparent (consistent with other splatting algorithms), but image quality tends to improve with compression, and fewer particles are needed. A peak signal-to-noise ratio analysis confirmed that the images rendered from compressed volumes were of marginally better quality than those rendered using Gaussian footprints.

Conclusions

Using octree-encoded DRRs within a 2D/3D registration framework indicated that the approach may be useful for accelerating automatic image registration.

14.
Accurate 3D segmentation of calf muscle compartments in volumetric MR images is essential to diagnose as well as assess progression of muscular diseases. Recently, good segmentation performance was achieved using state-of-the-art deep learning approaches, which, however, require large amounts of annotated data for training. Considering that obtaining sufficiently large medical image annotation datasets is often difficult, time-consuming, and requires expert knowledge, minimizing the necessary sizes of expert-annotated training datasets is of great importance. This paper reports CMC-Net, a new deep learning framework for calf muscle compartment segmentation in 3D MR images that selects an effective small subset of 2D slices from the 3D images to be labelled, while also utilizing unannotated slices to facilitate proper generalization of the subsequent training steps. Our model consists of three parts: (1) an unsupervised method to select the most representative 2D slices on which expert annotation is performed; (2) ensemble model training employing these annotated as well as additional unannotated 2D slices; (3) a model-tuning method using pseudo-labels generated by the ensemble model that results in a trained deep network capable of accurate 3D segmentations. Experiments on segmentation of calf muscle compartments in 3D MR images show that our new approach achieves good performance with very small annotation ratios, and when utilizing full annotation, it outperforms state-of-the-art full annotation segmentation methods. Additional experiments on a 3D MR thigh dataset further verify the ability of our method in segmenting leg muscle groups with sparse annotation.

15.
16.
A method for registration of speckle-tracked freehand 3D ultrasound (US) to preoperative CT volumes of the spine is proposed. We register the US volume to the CT volume by creating individual US "sub-volumes", each consisting of a small section of the entire US volume. The registration proceeds incrementally from the beginning of the US volume to the end, registering every sub-volume, where each sub-volume contains images overlapping with the previous sub-volume. Each registration is performed by generating simulated US images from the CT volume. As a by-product of our registration, the significant drift error common in speckle-tracked US volumes is corrected. Results are validated through a phantom study of plastic spine phantoms created from clinical patient CT data as well as an animal study using a lamb cadaver. We successfully registered a speckle-tracked US volume to the CT volume in four out of five phantoms, with a success rate greater than 98%. The final error of the registered US volumes decreases by over 50 percent relative to the speckle-tracking error, to consistently below 3 mm. Studies on the lamb cadaver showed a mean registration error consistently below 2 mm.

17.
Freehand 3D ultrasound reconstruction algorithms--a review   Cited by: 1 (self-citations: 0, by others: 1)
Three-dimensional (3D) ultrasound (US) is increasingly being introduced in the clinic, both for diagnostics and for image guidance. Although dedicated 3D US probes exist, 3D US can also be acquired with the still widely used two-dimensional (2D) US probes. Obtaining 3D volumes with 2D US probes is a two-step process: first, a position sensor must be attached to the probe; second, a 3D volume is reconstructed into a regular voxel grid. Various algorithms have been used to perform 3D reconstruction from 2D images. Until now, a complete overview of these algorithms, how they work, and their benefits and drawbacks for various applications has been missing. The need for such an overview is made clear by the confusion about algorithm and group names in the existing literature. This article is a review aimed at explaining and categorizing the various algorithms into groups according to algorithm implementation. The algorithms are compared based on published data and our own laboratory results. Positive and practical uses of the various algorithms for different applications are discussed, with a focus on image guidance.
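The simplest family the review categorizes, pixel-based reconstruction, can be sketched as forward-mapping each tracked 2D pixel into a voxel grid and averaging overlapping contributions. The poses, grid size, and averaging rule here are illustrative assumptions.

```python
import numpy as np

grid = np.zeros((32, 32, 32))      # regular voxel grid accumulating intensities
counts = np.zeros_like(grid)       # number of contributions per voxel

def insert_frame(frame, pose):
    """Scatter one tracked 2D US frame into the voxel grid.

    pose: function mapping (row, col) pixel indices to (x, y, z) voxel coords.
    """
    rows, cols = np.indices(frame.shape)
    for r, c, v in zip(rows.ravel(), cols.ravel(), frame.ravel()):
        x, y, z = pose(r, c)
        if 0 <= x < 32 and 0 <= y < 32 and 0 <= z < 32:
            grid[x, y, z] += v
            counts[x, y, z] += 1

# Toy sweep: parallel frames stacked along z, one frame per tracked position,
# each frame filled with its own z index so the result is easy to verify.
for z in range(32):
    insert_frame(np.full((32, 32), float(z)), lambda r, c, z=z: (r, c, z))

recon = np.divide(grid, counts, out=np.zeros_like(grid), where=counts > 0)
```

Real probes sweep along arbitrary, possibly intersecting trajectories, which is where the hole-filling and interpolation strategies compared in the review come in.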

18.
Medical Image Analysis, 2015, 25(1):106-124
In this paper we present a group-wise non-rigid registration/mosaicing algorithm based on block-matching, which is developed within a probabilistic framework. The discrete form of its energy functional is linked to a Markov Random Field (MRF) containing double and triple cliques, which can be effectively optimized using modern MRF optimization algorithms popular in computer vision. Also, the registration problem is simplified by introducing a mosaicing function which partitions the composite volume into regions filled with data from unique, partially overlapping source volumes. Ultrasound confidence maps are incorporated into the registration framework in order to give accurate results in the presence of image artifacts. The algorithm is initially tested on simulated images where shadows have been generated. Also, validation results for the group-wise registration algorithm using real ultrasound data from an abdominal phantom are presented. Finally, composite obstetrics image volumes are constructed using clinical scans of pregnant subjects, where fetal movement makes registration/mosaicing especially difficult. In addition, results are presented suggesting that a fusion approach to MRF registration can produce accurate displacement fields much faster than standard approaches.
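The confidence-weighted mosaicing step can be sketched as a per-voxel blend of overlapping source volumes, with ultrasound confidence maps suppressing shadowed data. The volumes, confidences, and blending rule below are illustrative, not the paper's MRF formulation.

```python
import numpy as np

# Two overlapping source volumes with per-voxel confidence maps.
vol_a = np.full((8, 8, 8), 10.0)
vol_b = np.full((8, 8, 8), 20.0)
conf_a = np.ones_like(vol_a)
conf_b = np.ones_like(vol_b)
conf_a[:, :, 4:] = 0.0          # simulate an acoustic shadow in volume A

# Confidence-weighted average; voxels with zero total confidence stay zero.
num = conf_a * vol_a + conf_b * vol_b
den = conf_a + conf_b
composite = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

Where both volumes are trusted the composite averages them; inside A's shadow it falls back entirely on B, which is the behaviour the confidence maps buy in the presence of artifacts.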

19.
Fast and robust registration of PET and MR images of human brain   Cited by: 8 (self-citations: 0, by others: 8)
In recent years, mutual information has proved to be an excellent criterion for registration of intra-individual images from different modalities. Multi-resolution coarse-to-fine optimization has been proposed to speed up the registration process. The aim of our work was to further improve registration speed without compromising robustness or accuracy. We present and evaluate two procedures for co-registration of positron emission tomography (PET) and magnetic resonance (MR) images of the human brain that combine a multi-resolution approach with an automatic segmentation of the input image volumes into areas of interest and background. We show that an acceleration factor of 10 can be achieved for clinical data and that suitable preprocessing can improve the robustness of registration. Emphasis was placed on creating an automatic registration system that could be used routinely in a clinical environment. For this purpose, an easy-to-use graphical user interface has been developed. It allows physicians with no special knowledge of the registration algorithm to perform fast and reliable alignment of images. Registration progress is displayed on the fly as a fusion of the images, enabling visual checking during registration.
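The coarse-to-fine strategy can be sketched in one dimension: find an integer shift at a downsampled scale, then refine it at full resolution over a small window. SSD stands in for mutual information to keep the example short; the images and shift are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
fixed = rng.standard_normal((128, 128))
for _ in range(3):                                   # smooth so the coarse level is informative
    fixed = (np.roll(fixed, 1, axis=0) + fixed + np.roll(fixed, -1, axis=0)) / 3.0
moving = np.roll(fixed, -14, axis=0)                 # shift of +14 rows undoes this

def ssd_shift(f, m, candidates):
    """Return the circular row shift of m that minimizes SSD against f."""
    costs = {s: float(np.sum((f - np.roll(m, s, axis=0)) ** 2)) for s in candidates}
    return min(costs, key=costs.get)

# Coarse step: exhaustive search on 4x-downsampled images (shift in coarse units).
s_coarse = ssd_shift(fixed[::4], moving[::4], range(-8, 9))
# Fine step: refine only in a small window around the upsampled coarse estimate.
s_fine = ssd_shift(fixed, moving, range(4 * s_coarse - 8, 4 * s_coarse + 9))
```

The coarse search covers a wide range cheaply; the fine search then evaluates only ~17 candidates instead of all 128, which is where the acceleration comes from.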

20.
In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods, which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that, although the deformations are not smooth across the location of the sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.
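The diffeomorphic machinery LogDemons-style frameworks rely on can be sketched by exponentiating a stationary velocity field with scaling and squaring. The 1D field and step count below are illustrative, and sliding conditions are deliberately not modelled here.

```python
import numpy as np

n, steps = 64, 6
x = np.arange(n, dtype=float)
v = 2.0 * np.sin(2 * np.pi * x / n)                  # stationary velocity field

def compose(phi_a, phi_b):
    """(phi_a o phi_b)(x) via linear interpolation of phi_a at phi_b(x)."""
    return np.interp(phi_b, x, phi_a)

# Scaling and squaring: start from a tiny deformation x + v / 2^steps, then
# square (self-compose) repeatedly so that phi approximates exp(v).
phi = x + v / 2 ** steps
for _ in range(steps):
    phi = compose(phi, phi)

# A diffeomorphism of the line must be strictly monotone (hence invertible).
monotone = bool(np.all(np.diff(phi) > 0))
```

Large deformations built this way stay invertible by construction, which is the smoothness guarantee that sliding conditions then deliberately relax across the thoracic cage boundary.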
