Related Articles
20 related articles found (search time: 31 ms).
1.
This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine X-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding versus indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding.
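The embedding step described in this abstract is closely related to classical multidimensional scaling: given a matrix of pairwise (partial Procrustes) shape distances, find Euclidean coordinates whose pairwise distances approximate them. The sketch below assumes a precomputed distance matrix; the toy data and function name are illustrative, not taken from the paper.

```python
import numpy as np

def classical_mds_embedding(D, dim=2):
    """Embed points into Euclidean space from a pairwise distance matrix D
    so that Euclidean distances approximate D (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pos = np.clip(eigvals[:dim], 0, None)        # guard against negative eigenvalues
    return eigvecs[:, :dim] * np.sqrt(pos)

# Toy example: pairwise distances between four hypothetical vertebral shapes.
D = np.array([[0.0, 0.3, 0.7, 0.8],
              [0.3, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.2],
              [0.8, 0.7, 0.2, 0.0]])
coords = classical_mds_embedding(D, dim=2)
print(coords)   # Euclidean coordinates usable with a coordinate-based tree index
```

After such an embedding, any standard kd-tree or R-tree can serve the similarity queries, which is the efficiency argument the abstract makes.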

2.
BACKGROUND: Super-resolution reconstruction has been widely studied and applied in many fields such as video and remote sensing. OBJECTIVE: To introduce an adaptive super-resolution reconstruction algorithm that reconstructs a high-resolution image from a sequence of low-resolution images. METHODS: The first scheme used the constant λ = 2/3 as the regularization parameter together with an adaptive step size. The second scheme fully accounted for the influence of motion estimation errors in the low-resolution images, the point spread function, and additive white Gaussian noise on the reconstruction algorithm. A new nonlinear adaptive regularization function was constructed, and the convexity of the cost function was analysed experimentally. Based on mathematical theory and the convexity experiments on the cost function, an adaptive step-size factor was derived, improving both the spatial resolution of the image and the convergence speed of the algorithm. RESULTS AND CONCLUSION: Optical images were used in experiments to verify the effectiveness of the algorithm. The second scheme yielded a higher peak signal-to-noise ratio and converged more than twice as fast as the first scheme; its average computation time was 68.25 s. The results confirm that the adaptive super-resolution image reconstruction algorithm significantly improves both image resolution and the convergence speed of the iterations, with good stability.
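As a rough illustration of the kind of regularized iterative super-resolution described above, the sketch below uses a simplified forward model (Gaussian blur followed by decimation, no motion), a Tikhonov-style smoothness term with λ = 2/3, and an "adaptive step" computed by exact line search for this quadratic cost. The forward model, penalty, and all names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def degrade(x, factor=2, sigma=1.0):
    """Forward model: blur (point spread function) then decimate."""
    return gaussian_filter(x, sigma)[::factor, ::factor]

def degrade_T(y, shape, factor=2, sigma=1.0):
    """Adjoint of the forward model: zero-fill upsample, then blur."""
    up = np.zeros(shape)
    up[::factor, ::factor] = y
    return gaussian_filter(up, sigma)

def super_resolve(lr_images, hr_shape, lam=2.0 / 3.0, n_iter=50):
    x = degrade_T(np.mean(lr_images, axis=0), hr_shape)          # initial guess
    for _ in range(n_iter):
        # Gradient direction of sum_k ||degrade(x) - y_k||^2 + lam * ||laplace(x)||^2
        g = sum(degrade_T(degrade(x) - y, hr_shape) for y in lr_images)
        g += lam * laplace(laplace(x))
        # Adaptive step: exact line search for the quadratic cost
        Ag = sum(degrade_T(degrade(g), hr_shape) for _ in lr_images)
        Ag += lam * laplace(laplace(g))
        step = (g * g).sum() / ((g * Ag).sum() + 1e-12)
        x -= step * g
    return x

# Toy usage: four noisy low-resolution observations of a random scene.
truth = np.random.rand(64, 64)
lrs = [degrade(truth) + 0.01 * np.random.randn(32, 32) for _ in range(4)]
hr = super_resolve(lrs, hr_shape=truth.shape)
```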

3.
Semantic segmentation using convolutional neural networks (CNNs) is the state-of-the-art for many medical image segmentation tasks including myocardial segmentation in cardiac MR images. However, the predicted segmentation maps obtained from such a standard CNN do not allow direct quantification of regional shape properties such as regional wall thickness. Furthermore, the CNNs lack explicit shape constraints, occasionally resulting in unrealistic segmentations. In this paper, we use a CNN to predict shape parameters of an underlying statistical shape model of the myocardium learned from a training set of images. Additionally, the cardiac pose is predicted, which allows the myocardial contours to be reconstructed. The integrated shape model regularizes the predicted contours and guarantees realistic shapes. We enforce robustness of shape and pose prediction by simultaneously performing pixel-wise semantic segmentation during training and define two loss functions to impose consistency between the two predicted representations: one distance-based loss and one overlap-based loss. We evaluated the proposed method in a 5-fold cross validation on an in-house clinical dataset with 75 subjects and on the ACDC and LVQuan19 public datasets. We show that the two newly defined loss functions successfully increase the consistency between shape and pose parameters and semantic segmentation, which leads to a significant improvement of the reconstructed myocardial contours. Additionally, these loss functions drastically reduce the occurrence of unrealistic shapes in the semantic segmentation output.
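The reconstruction step from predicted shape coefficients and pose can be illustrated with a linear point distribution model: the contour is the mean shape plus a weighted sum of modes, then rotated, scaled and translated. The mean shape, modes and parameter names below are synthetic placeholders, not the trained model from the paper.

```python
import numpy as np

def reconstruct_contour(mean_shape, modes, shape_params, pose):
    """Rebuild a 2-D contour from statistical shape model coefficients and pose.

    mean_shape   : (N, 2) mean landmark positions
    modes        : (N*2, K) principal modes of variation
    shape_params : (K,) predicted shape coefficients
    pose         : dict with rotation angle (rad), scale, translation (2,)
    """
    pts = mean_shape.reshape(-1) + modes @ shape_params        # shape in model frame
    pts = pts.reshape(-1, 2)
    c, s = np.cos(pose["angle"]), np.sin(pose["angle"])
    R = np.array([[c, -s], [s, c]])
    return pose["scale"] * pts @ R.T + pose["translation"]     # map to image frame

# Toy usage with a circular mean shape and two synthetic modes.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
mean_shape = np.stack([np.cos(t), np.sin(t)], axis=1)
modes = np.random.randn(100, 2) * 0.05
contour = reconstruct_contour(mean_shape, modes, np.array([1.0, -0.5]),
                              {"angle": 0.3, "scale": 40.0,
                               "translation": np.array([128.0, 120.0])})
```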

4.
5.
Images consist of structures of varying scales: large scale structures such as flat regions, and small scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L2) image decomposition, Tadmor et al. (2004) start with extracting coarse scale structures from a given image, and successively extract finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise could be considered as a fine scale structure. Thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image-dependent weight in the regularization functional, and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control on speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improved performance for registration of noisy cardiac MRI images, compared to other methods such as bilateral or Gaussian filtering. A clinical application of the multiscale registration algorithm is also demonstrated for aligning viability assessment magnetic resonance (MR) images from 8 patients with previous myocardial infarctions.
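The fine-to-coarse idea can be illustrated with a sequence of TV denoisings at increasing regularization weights, taking the differences between successive smoothings as the scale layers; the finest layer (mostly noise) is extracted first. This uses scikit-image's TV solver as a stand-in for the weighted TV flow and is only a sketch of the multiscale decomposition, not the paper's numerical scheme.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

def fine_to_coarse_layers(image, weights=(0.02, 0.05, 0.1, 0.2, 0.4)):
    """Decompose an image into layers from finest (noise/texture) to coarsest.

    Returns the detail layers plus the final coarse residual image;
    summing all returned arrays recovers the input exactly."""
    layers, current = [], image
    for w in weights:
        smoothed = denoise_tv_chambolle(current, weight=w)
        layers.append(current - smoothed)      # structure removed at this scale
        current = smoothed
    layers.append(current)                     # coarsest piece
    return layers

img = img_as_float(data.camera())
noisy = img + 0.05 * np.random.randn(*img.shape)
layers = fine_to_coarse_layers(noisy)
denoised = sum(layers[1:])                     # dropping the finest layer ~ fast denoising
```

Registering the coarser layers first and the finer layers later is the multiscale registration strategy the abstract refers to.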

6.
Diffusion tensor MRI (DT-MRI) is an imaging technique that is gaining importance in clinical applications. However, there is very little work concerning the human heart. When applying DT-MRI to in vivo human hearts, the data have to be acquired rapidly to minimize artefacts due to cardiac and respiratory motion and to improve patient comfort, often at the expense of image quality. This results in diffusion weighted (DW) images corrupted by noise, which can have a significant impact on the shape and orientation of tensors and leads to diffusion tensor (DT) datasets that are not suitable for fibre tracking. This paper compares regularization approaches that operate either on diffusion weighted images or on diffusion tensors. Experiments on synthetic data show that, for high signal-to-noise ratio (SNR), the methods operating on DW images produce the best results; they substantially reduce noise error propagation throughout the diffusion calculations. However, when the SNR is low, Rician Cholesky and Log-Euclidean DT regularization methods handle the bias introduced by Rician noise and ensure symmetry and positive definiteness of the tensors. Results based on a set of sixteen ex vivo human hearts show that the different regularization methods tend to provide equivalent results.
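The Log-Euclidean idea referred to above (process tensors in the matrix-logarithm domain so that smoothing cannot destroy symmetry or positive definiteness) can be sketched as below. The Gaussian smoothing stands in for the actual regularization functional; field sizes and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tensor_log(T):
    """Matrix logarithm of a field of symmetric positive-definite tensors (..., 3, 3)."""
    w, v = np.linalg.eigh(T)
    w = np.clip(w, 1e-8, None)                 # guard against numerical negatives
    return np.einsum('...ij,...j,...kj->...ik', v, np.log(w), v)

def tensor_exp(L):
    w, v = np.linalg.eigh(L)
    return np.einsum('...ij,...j,...kj->...ik', v, np.exp(w), v)

def log_euclidean_smooth(tensors, sigma=1.0):
    """Smooth a (X, Y, Z, 3, 3) tensor field component-wise in the log domain."""
    logs = tensor_log(tensors)
    smoothed = np.empty_like(logs)
    for i in range(3):
        for j in range(3):
            smoothed[..., i, j] = gaussian_filter(logs[..., i, j], sigma)
    return tensor_exp(smoothed)                # result is guaranteed SPD

# Toy field: identity tensors with random SPD perturbations.
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(8, 8, 8, 3, 3))
field = np.eye(3) + A @ np.swapaxes(A, -1, -2)       # SPD by construction
regularized = log_euclidean_smooth(field, sigma=1.5)
```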

7.

Purpose  

For many image registration tasks, the information contained in the original resolution of the image data is crucial for a subsequent medical analysis, e.g. accurate assessment of local pulmonary ventilation. However, the complexity of a non-parametric registration scheme is directly connected to the resolution of the images. Therefore, the registration is often performed on a downsampled version in order to meet runtime demands, thereby producing suboptimal results. To enable the application of the highest resolution at least in regions of high clinical importance, an approach is presented that replaces the usual equidistant grids with tensor grids for image representation.
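A tensor grid is formed as the outer product of non-equidistant 1-D node sets, so the sampling is fine only where it matters clinically. The construction below is a generic sketch under that reading; the region of interest, spacings and image size are made-up values.

```python
import numpy as np

def tensor_grid_axis(length, coarse_step, fine_step, roi):
    """1-D node positions: fine spacing inside the region of interest, coarse elsewhere."""
    lo, hi = roi
    parts = [np.arange(0, lo, coarse_step),
             np.arange(lo, hi, fine_step),
             np.arange(hi, length + 1e-9, coarse_step)]
    return np.unique(np.concatenate(parts))

# A 2-D tensor grid for a 256x256 image, refined over a clinically important region.
gx = tensor_grid_axis(256, coarse_step=16, fine_step=2, roi=(96, 160))
gy = tensor_grid_axis(256, coarse_step=16, fine_step=2, roi=(80, 144))
grid_x, grid_y = np.meshgrid(gx, gy, indexing='ij')   # outer product of the two axes
print(grid_x.size, "nodes vs", (256 // 2 + 1) ** 2, "for a uniformly fine grid")
```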

8.
Patient-specific computational models of structure and function are increasingly being used to diagnose disease and predict how a patient will respond to therapy. Models of anatomy are often derived after segmentation of clinical images or from mapping systems which are affected by image artefacts, resolution and contrast. Quantifying the impact of uncertain anatomy on model predictions is important, as models are increasingly used in clinical practice where decisions need to be made regardless of image quality. We use a Bayesian probabilistic approach to estimate the anatomy and to quantify the uncertainty about the shape of the left atrium derived from Cardiac Magnetic Resonance images. We show that we can quantify uncertain shape, encode uncertainty about the left atrial shape due to imaging artefacts, and quantify the effect of uncertain shape on simulations of left atrial activation times.
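A schematic of how posterior shape uncertainty can be propagated: draw shape samples from a Gaussian posterior over the coefficients of a linear shape model and summarize per-vertex variability (each sample could then feed an activation-time simulation). The linear model, the Gaussian posterior and all dimensions are placeholders, not the paper's Bayesian formulation.

```python
import numpy as np

def sample_shapes(mean_coeffs, post_cov, mean_shape, modes, n_samples=200, seed=0):
    """Draw shapes x = mean_shape + modes @ b with b ~ N(mean_coeffs, post_cov)."""
    rng = np.random.default_rng(seed)
    b = rng.multivariate_normal(mean_coeffs, post_cov, size=n_samples)   # (S, K)
    shapes = mean_shape[None, :] + b @ modes.T                           # (S, 3N)
    return shapes.reshape(n_samples, -1, 3)

# Placeholder linear shape model: 500 vertices, 5 modes.
K, N = 5, 500
modes = np.random.randn(3 * N, K) * 0.5
mean_shape = np.random.randn(3 * N)
samples = sample_shapes(np.zeros(K), 0.1 * np.eye(K), mean_shape, modes)
per_vertex_std = samples.std(axis=0).mean(axis=1)   # per-vertex shape uncertainty
```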

9.
We describe a new 3-D statistical shape model of the heart consisting of atria, ventricles and epicardium. The model was constructed by combining information on standard short- and long-axis cardiac MR images. In the model, the variability of the shape was modeled with PCA- and ICA-based shape models as well as with non-parametric landmark probability distributions and a probabilistic atlas. The statistical atlas was built from 25 healthy subjects. The shape model was evaluated by applying it to image segmentation. The probabilistic atlas was found to be superior to the other shape models (P < 0.001) in this study.
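A compact sketch of a PCA-based point distribution model built from landmark sets that are assumed already aligned; the ICA variant, the non-parametric landmark distributions and the probabilistic atlas mentioned in the abstract are not shown, and the training data below are synthetic.

```python
import numpy as np

def build_pca_shape_model(landmark_sets, var_explained=0.95):
    """landmark_sets: (S, N, 3) aligned landmarks for S training subjects."""
    X = landmark_sets.reshape(len(landmark_sets), -1)          # (S, 3N)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (len(X) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_explained)) + 1
    return mean, Vt[:k].T, var[:k]                             # mean, modes, variances

def generate_shape(mean, modes, variances, b):
    """Instantiate a shape from standardized coefficients b (typically |b_i| <= 3)."""
    return (mean + modes @ (b * np.sqrt(variances))).reshape(-1, 3)

# Example with 25 synthetic training shapes of 300 landmarks each.
train = np.random.randn(25, 300, 3)
mean, modes, variances = build_pca_shape_model(train)
new_shape = generate_shape(mean, modes, variances,
                           np.array([2.0] + [0.0] * (len(variances) - 1)))
```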

10.
A new approach to the problem of automatic construction of eigenshape models is presented. Eigenshape models have proved to be successful in a variety of medical image analysis problems. However, automatic construction of eigenshape models has proved to be a difficult problem, and in many applications the models are built by hand, a painstaking process. We show that the fundamental problem is the choice of the correct pose and parametrization of each shape in the training set. Eigenshape models are not invariant under reparametrizations and pose transformations of the training shapes. Since there is no a priori correct choice for the pose and parametrization of each shape, their values should be chosen so as to produce a model that is compact and specific. This problem can be solved by finding an objective function that measures these properties and varying the pose and parametrization of each shape to optimize this function. We show that the appropriate objective function is the determinant of the covariance matrix. We go on to show how this objective function can be optimized by a genetic algorithm (GA) and thus give a practical method for building eigenshape models. The models produced are often better than hand-built ones. The advantages of a GA over other choices of optimization method are that no assumptions about the nature of the shapes being modelled are required and that the global minimum of the objective function can, in principle, be found.
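The objective can be written directly as the (log-)determinant of the landmark covariance matrix: the smaller it is, the more compact the model. The crude random search over cyclic reparametrizations below is only a stand-in for the genetic algorithm, and the small ridge term is an illustrative fix for rank deficiency when there are fewer shapes than coordinates.

```python
import numpy as np

def logdet_covariance(shapes):
    """shapes: (S, N, 2) landmark sets; a smaller log-determinant means a more compact model."""
    X = shapes.reshape(len(shapes), -1)
    C = np.cov(X, rowvar=False) + 1e-8 * np.eye(X.shape[1])   # regularize rank deficiency
    return np.linalg.slogdet(C)[1]

def best_cyclic_reparam(shapes, n_trials=200, seed=0):
    """Search cyclic shifts of each closed contour's landmarks (a toy reparametrization)."""
    rng = np.random.default_rng(seed)
    S, N, _ = shapes.shape
    best_shifts, best_val = np.zeros(S, dtype=int), logdet_covariance(shapes)
    for _ in range(n_trials):
        shifts = rng.integers(0, N, size=S)
        candidate = np.stack([np.roll(s, k, axis=0) for s, k in zip(shapes, shifts)])
        val = logdet_covariance(candidate)
        if val < best_val:
            best_shifts, best_val = shifts, val
    return best_shifts, best_val
```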

11.
A novel method for vertebral fracture quantification from X-ray images is presented. Using pairwise conditional shape models trained on a set of healthy spines, the most likely normal vertebra shapes are estimated conditional on the shapes of all other vertebrae in the image. The difference between the true shape and the reconstructed normal shape is subsequently used as a measure of abnormality. In contrast with the current (semi-)quantitative grading strategies, this method takes the full shape into account, it develops a patient-specific reference by combining population-based information on biological variation in vertebral shape and vertebra interrelations, and it provides a continuous measure of deformity. The method is demonstrated on 282 lateral spine radiographs with 93 fractures in total. Vertebral fracture detection is shown to be in good agreement with semi-quantitative scoring by experienced radiologists and is superior to the performance of shape models alone.
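The core of a conditional shape model is the conditional Gaussian estimate of one vertebra's shape given the others. The sketch below assumes the concatenated shape vectors are jointly Gaussian and uses placeholder dimensions and synthetic "healthy" training data; it is not the paper's trained model.

```python
import numpy as np

def conditional_normal_shape(mu, Sigma, idx_a, idx_b, x_b):
    """Most likely shape of vertebra 'a' given the observed shape x_b of vertebra 'b'.

    mu, Sigma describe the joint Gaussian over the concatenated shape vectors;
    idx_a / idx_b index the coordinates belonging to each vertebra."""
    S_ab = Sigma[np.ix_(idx_a, idx_b)]
    S_bb = Sigma[np.ix_(idx_b, idx_b)]
    return mu[idx_a] + S_ab @ np.linalg.solve(S_bb, x_b - mu[idx_b])

def abnormality_score(x_true, x_reconstructed):
    """Continuous deformity measure: distance between true and reconstructed normal shape."""
    return np.linalg.norm(x_true - x_reconstructed) / np.sqrt(len(x_true))

# Toy joint model from 100 synthetic 'healthy' spines (2 vertebrae, 6 landmarks each).
healthy = np.random.randn(100, 24)
mu, Sigma = healthy.mean(0), np.cov(healthy, rowvar=False) + 1e-6 * np.eye(24)
idx_a, idx_b = np.arange(12), np.arange(12, 24)
x = healthy[0]
pred_a = conditional_normal_shape(mu, Sigma, idx_a, idx_b, x[idx_b])
score = abnormality_score(x[idx_a], pred_a)
```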

12.

Objective

Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested.

Methods

Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations.

Results

The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes.

Conclusion

The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
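The Methods section above combines feature-space classification with local and non-local spatial regularization in a graph-cut setting. The sketch below only assembles what such a formulation needs, unary (feature-space) costs and pairwise capacities mixing a local contrast term with a crude non-local patch-similarity term; the specific weight formulas are illustrative, and the max-flow/min-cut solver itself is not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unary_costs(image, class_means, class_covs):
    """Per-pixel, per-class negative log-likelihood in feature space
    (image: H x W x C multichannel data, e.g. an MR image sequence)."""
    H, W, C = image.shape
    X = image.reshape(-1, C)
    costs = []
    for mu, cov in zip(class_means, class_covs):
        d = X - mu
        maha = np.einsum('nc,cd,nd->n', d, np.linalg.inv(cov), d)
        costs.append(0.5 * (maha + np.linalg.slogdet(cov)[1]))
    return np.stack(costs, axis=-1).reshape(H, W, len(class_means))

def horizontal_edge_weights(image, beta=5.0, patch=3):
    """Capacities for edges to the right-hand neighbour: local contrast combined
    with a simple non-local term based on patch-mean similarity."""
    gray = image.mean(axis=-1)
    local = np.exp(-beta * (gray - np.roll(gray, -1, axis=1)) ** 2)
    patch_mean = uniform_filter(gray, size=patch)
    non_local = np.exp(-beta * (patch_mean - np.roll(patch_mean, -1, axis=1)) ** 2)
    return 0.5 * (local + non_local)

# Toy two-class example on a synthetic 2-channel image.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64, 2)); img[16:48, 16:48] += 1.5
U = unary_costs(img, class_means=[np.zeros(2), 1.5 * np.ones(2)],
                class_covs=[np.eye(2), np.eye(2)])
W = horizontal_edge_weights(img)
# U and W would be passed to a max-flow/min-cut solver to obtain the segmentation.
```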

13.
Xu C, Dai M, You F, Shi X, Fu F, Liu R, Dong X. Physiological Measurement 2011, 32(5):585-598
Delayed detection of an internal hemorrhage may result in serious disabilities and possibly death for a patient. Currently, there are no portable medical imaging instruments that are suitable for long-term monitoring of patients at risk of internal hemorrhage. Electrical impedance tomography (EIT) has the potential to monitor patients continuously as a novel functional image modality and instantly detect the occurrence of an internal hemorrhage. However, the low spatial resolution and high sensitivity to noise of this technique have limited its application in clinics. In addition, due to the circular boundary display mode used in current EIT images, it is difficult for clinicians to identify precisely which organ is bleeding using this technique. The aim of this study was to propose an optimized strategy for EIT reconstruction to promote the use of EIT for clinical studies, which mainly includes the use of anatomically accurate boundary shapes, rapid selection of optimal regularization parameters and image fusion of EIT and computed tomography images. The method was evaluated on retroperitoneal and intraperitoneal bleeding piglet data. Both traditional backprojection images and optimized images among different boundary shapes were reconstructed and compared. The experimental results demonstrated that EIT images with precise anatomical information can be reconstructed, with effectively improved image resolution and resistance to noise.
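One common way to realize "rapid selection of optimal regularization parameters" for a linearized difference reconstruction is a Tikhonov solve combined with a discrepancy-principle sweep over candidate parameters. The abstract does not specify the selection rule, so treat the following as a generic sketch; the matrix sizes and noise level are made up.

```python
import numpy as np

def tikhonov_reconstruct(J, dv, lam, L=None):
    """Linearized difference reconstruction: solve (J^T J + lam L^T L) dc = J^T dv."""
    if L is None:
        L = np.eye(J.shape[1])
    return np.linalg.solve(J.T @ J + lam * L.T @ L, J.T @ dv)

def pick_lambda_discrepancy(J, dv, noise_level, lambdas):
    """Choose the largest lambda whose residual stays within the estimated noise level."""
    for lam in sorted(lambdas, reverse=True):
        dc = tikhonov_reconstruct(J, dv, lam)
        if np.linalg.norm(J @ dc - dv) <= noise_level:
            return lam, dc
    lam_min = min(lambdas)
    return lam_min, tikhonov_reconstruct(J, dv, lam_min)

# Toy linear problem standing in for the EIT sensitivity matrix.
rng = np.random.default_rng(1)
J = rng.normal(size=(208, 576))          # e.g. 208 boundary measurements, 576 mesh elements
true_dc = np.zeros(576); true_dc[100:120] = 1.0
dv = J @ true_dc + 0.05 * rng.normal(size=208)
lam, dc = pick_lambda_discrepancy(J, dv, noise_level=0.05 * np.sqrt(208),
                                  lambdas=np.logspace(-4, 2, 13))
```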

14.
The first step in the spatial normalization of brain images is usually to determine the affine transformation that best maps the image to a template image in a standard space. We have developed a rapid and automatic method for performing this registration, which uses a Bayesian scheme to incorporate prior knowledge of the variability in the shape and size of heads. We compared affine registrations with and without incorporating the prior knowledge. We found that the affine transformations derived using the Bayesian scheme are much more robust and that the rate of convergence is greater.
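A minimal sketch of a MAP formulation of this idea: an intensity mismatch term plus a Gaussian prior over the affine parameters that penalizes unlikely head sizes and shapes. The 2-D setting, the prior covariance, the optimizer and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def map_affine_cost(params, moving, fixed, prior_mean, prior_icov, weight=1e-3):
    """Negative log-posterior: image mismatch + Gaussian prior on affine parameters.

    params = [a11, a12, a21, a22, tx, ty] for a 2-D affine map."""
    A = params[:4].reshape(2, 2)
    t = params[4:]
    warped = affine_transform(moving, A, offset=t, order=1)
    data_term = np.mean((warped - fixed) ** 2)
    d = params - prior_mean
    prior_term = weight * d @ prior_icov @ d        # penalize unlikely zoom/shear
    return data_term + prior_term

# Toy example: register a shifted copy of an image back to itself.
rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = affine_transform(fixed, np.eye(2), offset=(-3.0, 2.0), order=1)
prior_mean = np.array([1, 0, 0, 1, 0, 0], dtype=float)
prior_icov = np.diag([50, 50, 50, 50, 0.01, 0.01])  # tight on zoom/shear, loose on shift
res = minimize(map_affine_cost, prior_mean,
               args=(moving, fixed, prior_mean, prior_icov), method='Powell')
```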

15.
Average shape estimates are often used to characterize normal morphological variation and discriminate dysmorphology in populations. The purpose of this paper is to estimate "average" or the most representative shapes in populations by using high-resolution medical images as input. The "average" shape is computed from the high-dimensional spatial transformations used to co-register each subject in the population rather than from the image intensities. Inverse consistent image registration is used to help minimize correspondence errors and produce better population average estimates. The method was tested using a population of adult MR brain scans from 22 individuals with no known structural abnormalities. Population averages were computed using the spatial transformation method and local changes in morphology were mapped. Results suggest that this method is a feasible means for robust estimation of population average shape. It is also shown that using inverse consistent transformations produces average shape estimates with less error compared to those produced with transformations with nontrivial inverse consistency errors.
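The inverse consistency error that distinguishes "good" transformation pairs can be sketched as the residual displacement of the forward map composed with the backward map; the population average shape is then obtained by averaging such transformations across subjects. Composing displacement fields by linear interpolation, as below, is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u, v):
    """Displacement of the composed map (x + u) applied after (x + v): v(x) + u(x + v(x))."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in u.shape[1:]], indexing='ij'))
    warped_u = np.stack([map_coordinates(u[d], grid + v, order=1, mode='nearest')
                         for d in range(u.shape[0])])
    return warped_u + v

def inverse_consistency_error(fwd, bwd):
    """Mean norm of the residual of fwd composed with bwd (zero for a perfect inverse pair)."""
    r = compose_displacements(fwd, bwd)
    return np.sqrt((r ** 2).sum(axis=0)).mean()

# Toy 2-D forward/backward displacement pair, shape (2, H, W).
H, W = 32, 32
fwd = np.zeros((2, H, W)); fwd[0] += 1.5        # shift by 1.5 voxels along the first axis
bwd = -fwd                                       # exact inverse up to interpolation
ice = inverse_consistency_error(fwd, bwd)        # ~0 for a consistent pair
# Averaging such transformations across subjects and applying the mean transformation
# to a reference image yields the population average shape estimate.
```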

16.
In non-rigid registration, the tradeoff between warp regularization and image fidelity is typically determined empirically. In atlas-based segmentation, this leads to a probabilistic atlas of arbitrary sharpness: weak regularization results in well-aligned training images and a sharp atlas; strong regularization yields a "blurry" atlas. In this paper, we employ a generative model for the joint registration and segmentation of images. The atlas construction process arises naturally as estimation of the model parameters. This framework allows the computation of unbiased atlases from manually labeled data at various degrees of "sharpness", as well as the joint registration and segmentation of a novel brain in a consistent manner. We study the effects of the tradeoff of atlas sharpness and warp smoothness in the context of cortical surface parcellation. This is an important question because of the increasing availability of atlases in public databases and the development of registration algorithms separate from the atlas construction process. We find that the optimal segmentation (parcellation) corresponds to a unique balance of atlas sharpness and warp regularization, yielding statistically significant improvements over the FreeSurfer parcellation algorithm. Furthermore, we conclude that one can simply use a single atlas computed at an optimal sharpness for the registration-segmentation of a new subject with a pre-determined, fixed, optimal warp constraint. The optimal atlas sharpness and warp smoothness can be determined by probing the segmentation performance on available training data. Our experiments also suggest that segmentation accuracy is tolerant up to a small mismatch between atlas sharpness and warp smoothness.

17.
The development of effective multi-modality imaging methods typically requires an efficient information fusion model, particularly when combining structural images with a complementary imaging modality that provides functional information. We propose a composition-based image segmentation method for X-ray digital breast tomosynthesis (DBT) and a structural-prior-guided image reconstruction for a combined DBT and diffuse optical tomography (DOT) breast imaging system. Using the 3D DBT images from 31 clinically measured healthy breasts, we create an empirical relationship between the X-ray intensities for adipose and fibroglandular tissue. We then use this relationship to segment another 58 healthy breast DBT images from 29 subjects into compositional maps of different tissue types. For each breast, we build a weighted graph in the compositional space and construct a regularization matrix to incorporate the structural priors into a finite-element-based DOT image reconstruction. Use of the compositional priors enables us to fuse tissue anatomy into optical images with less restriction than when using a binary segmentation. This allows us to recover the image contrast captured by DOT but not by DBT. We show that it is possible to fine-tune the strength of the structural priors by changing a single regularization parameter. By estimating the optical properties for adipose and fibroglandular tissue using the proposed algorithm, we found that the results are comparable or superior to those estimated with expert segmentations, without the time-consuming manual selection of regions of interest.
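The structural prior can be encoded as a graph-Laplacian-type regularization matrix whose weights depend on how similar the tissue composition is between mesh nodes, so the optical reconstruction is smoothed within, but not across, tissue types, with a single parameter controlling the prior strength. The weight kernel, node count, and parameter names below are illustrative, not the paper's construction.

```python
import numpy as np

def compositional_regularization_matrix(compositions, sigma=0.2):
    """Build a Laplacian-type regularization matrix for n mesh nodes.

    compositions: (n, 2) fractions of adipose / fibroglandular tissue per node."""
    d = compositions[:, None, :] - compositions[None, :, :]
    W = np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))   # similarity weights
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W                         # graph Laplacian

def regularized_solution(J, y, L, beta=1e-2):
    """Structural-prior-guided linear update: solve (J^T J + beta L) x = J^T y."""
    return np.linalg.solve(J.T @ J + beta * L, J.T @ y)

# Toy problem: 50 nodes, composition fractions from a hypothetical DBT segmentation.
rng = np.random.default_rng(2)
comp = rng.random((50, 1)); comp = np.hstack([comp, 1 - comp])
L = compositional_regularization_matrix(comp)
J = rng.normal(size=(30, 50)); y = J @ rng.normal(size=50)
x = regularized_solution(J, y, L, beta=0.1)   # beta tunes the strength of the prior
```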

18.
Incorporating shape information is essential for the delineation of many organs and anatomical structures in medical images. While previous work has mainly focused on parametric spatial transformations applied to reference template shapes, in this paper, we address the Bayesian inference of parametric shape models for segmenting medical images with the objective of providing interpretable results. The proposed framework defines a likelihood appearance probability and a prior label probability based on a generic shape function through a logistic function. A reference length parameter defined in the sigmoid controls the trade-off between shape and appearance information. The inference of shape parameters is performed within an Expectation-Maximisation approach in which a Gauss-Newton optimization stage provides an approximation of the posterior probability of the shape parameters. This framework is applied to the segmentation of cochlear structures from clinical CT images constrained by a 10-parameter shape model. It is evaluated on three different datasets, one of which includes more than 200 patient images. The results show performances comparable to supervised methods and better than previously proposed unsupervised ones. It also enables an analysis of parameter distributions and the quantification of segmentation uncertainty, including the effect of the shape model.
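The prior label probability described above can be written as a logistic function of a shape function scaled by a reference length: a small reference length gives a sharp, shape-dominated prior, while a large one leaves more room for the appearance likelihood. The sign convention (negative inside) and the spherical toy shape are assumptions for illustration.

```python
import numpy as np

def prior_label_probability(shape_function_values, reference_length):
    """Map a signed shape function (negative inside, positive outside) to a
    prior probability of the 'inside' label through a logistic function."""
    return 1.0 / (1.0 + np.exp(shape_function_values / reference_length))

# Example: signed distance to a sphere of radius 10 voxels on a small grid.
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
phi = np.sqrt(x**2 + y**2 + z**2) - 10.0
sharp_prior = prior_label_probability(phi, reference_length=0.5)   # shape dominates
soft_prior = prior_label_probability(phi, reference_length=4.0)    # appearance dominates
```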

19.
Cardiovascular magnetic resonance (CMR) imaging is the gold standard for the non-invasive assessment of left-ventricular (LV) function. The prognostic value of deformation metrics extracted directly from regular SSFP CMR images has been shown by numerous studies in the clinical setting, but with limited ability to detect torsion of the myocardium. Tagged CMR introduces trackable features in the myocardium that allow for the assessment of local myocardial deformation, including torsion; it is, however, limited in the quantification of radial strain, which is a decisive metric for assessing the contractility of the heart. In order to improve on SSFP-only and tagged-only approaches, we propose to combine the advantages of both image types by fusing global shape motion obtained from SSFP images with the local deformation obtained from tagged images. To this end, tracking is first performed on SSFP images, and subsequently, the resulting motion is utilized to mask and track tagged data. Our implementation is based on a recent finite element-based motion tracking tool with mechanical regularization. Joint SSFP and tagged image registration performance is assessed based on deformation metrics including LV strain and twist using human and in-house porcine datasets. Results show that joint analysis of SSFP and 3DTAG images provides better quantification of LV strain and twist than either data source alone.

20.
Indirect image registration is a promising technique to improve image reconstruction quality by providing a shape prior for the reconstruction task. In this paper, we propose a novel hybrid method that seeks to reconstruct high quality images from few measurements whilst requiring low computational cost. With this purpose, our framework intertwines indirect registration and reconstruction tasks in a single functional. It is based on two major novelties. Firstly, we introduce a model based on deep nets to solve the indirect registration problem, in which the inversion and registration mappings are recurrently connected through a fixed-point interaction based sparse optimisation. Secondly, we introduce specific inversion blocks, which use the explicit physical forward operator, to map the acquired measurements to the image reconstruction. We also introduce registration blocks based on deep nets to predict the registration parameters and warp transformation accurately and efficiently. We demonstrate, through extensive numerical and visual experiments, that our framework significantly outperforms classic reconstruction schemes and other bi-task methods, in terms of both image quality and computational time. Finally, we show the generalisation capabilities of our approach by demonstrating its performance on fast Magnetic Resonance Imaging (MRI), sparse view computed tomography (CT) and low dose CT with measurements much below the Nyquist limit.
