Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Iterative image estimation methods have been widely used in emission tomography. Accurate estimation of the uncertainty of the reconstructed images is essential for quantitative applications. While both iteration-based noise analysis and fixed-point noise analysis have been developed, current iteration-based results are limited to only a few algorithms that have an explicit multiplicative update equation and some may not converge to the fixed-point result. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. Under a certain condition, the proposed method does not require an explicit expression of the preconditioner. By deriving the fixed-point expression from the iteration-based result, we show that the proposed iteration-based noise analysis is consistent with fixed-point analysis. Examples in emission tomography and transmission tomography are shown. The results are validated using Monte Carlo simulations.
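The consistency between iteration-based and fixed-point noise analysis can be illustrated in the simplest setting the abstract covers: a linear preconditioned gradient iteration on a least-squares objective. The toy system matrix, noise covariance, and scalar preconditioner below are hypothetical values chosen for illustration, not anything from the paper:

```python
import numpy as np

# Toy system: 3 measurements, 2 image pixels (illustrative values).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
cov_y = np.diag([1.0, 2.0, 3.0])   # measurement noise covariance
c = 0.4                             # scalar preconditioner / step size

# Preconditioned gradient descent on ||y - Ax||^2:
#   x_{n+1} = x_n + c A^T (y - A x_n), so x_n is linear in y: x_n = F_n y.
M = np.eye(2) - c * (A.T @ A)
F = np.zeros((2, 3))
for _ in range(60):
    F = M @ F + c * A.T

cov_iter = F @ cov_y @ F.T          # iteration-based image covariance

# Fixed-point covariance, from the limit x* = (A^T A)^{-1} A^T y.
pinv = np.linalg.inv(A.T @ A) @ A.T
cov_fixed = pinv @ cov_y @ pinv.T
```

Because the iterate is linear in the data, its covariance is `F_n cov_y F_n^T`; as n grows this approaches the covariance of the fixed point, which is the kind of consistency the paper establishes in the far more general nonlinear, Poisson setting.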

2.
Iterative algorithms such as maximum-likelihood expectation maximization (ML-EM) have become the standard for reconstruction in emission computed tomography. However, such algorithms are sensitive to noise artifacts, so the reconstruction begins to degrade once the number of iterations exceeds a certain value. In this paper, we investigate a new iterative algorithm for penalized-likelihood image reconstruction that uses fuzzy nonlinear anisotropic diffusion (AD) as a penalty function. The proposed algorithm does not suffer from the same problem as the ML-EM algorithm, and it converges to a low-noise solution even when the iteration number is high. Fuzzy reasoning, instead of a nonnegative monotonically decreasing function, is used to calculate the diffusion coefficients that control the whole diffusion process; the diffusion strength is thus governed by fuzzy rules expressed in linguistic form. The proposed method exploits the advantages of fuzzy set theory in handling uncertainty and of nonlinear AD techniques in removing noise while preserving edges. Quantitative analysis shows that the proposed algorithm produces better reconstructed images than ML-EM, ordered-subsets EM (OS-EM), Gaussian-MAP, MRP, and TV-EM.
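For orientation, here is a minimal sketch of the plain ML-EM update that penalized algorithms of this kind modify; the 2-pixel system and data are made-up values, and the penalty/diffusion term is omitted entirely:

```python
import numpy as np

def ml_em_update(x, A, y, eps=1e-12):
    """One ML-EM iteration for emission tomography.

    x : current image estimate (nonnegative vector)
    A : system matrix mapping image to projections
    y : measured projection counts
    """
    proj = A @ x                        # forward projection
    ratio = y / np.maximum(proj, eps)   # measured / estimated counts
    sens = A.T @ np.ones_like(y)        # sensitivity image
    return x * (A.T @ ratio) / np.maximum(sens, eps)

# Toy 2-pixel, 2-bin example (hypothetical numbers).
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
y = np.array([3.0, 1.5])
x = np.ones(2)
for _ in range(200):
    x = ml_em_update(x, A, y)
```

Each update preserves nonnegativity and monotonically increases the Poisson likelihood; the noise build-up at high iteration numbers described above is precisely what a diffusion penalty is meant to suppress.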

3.
A reconstruction theory for intensity diffraction tomography (I-DT) has been proposed that permits reconstruction of a weakly scattering object without explicit knowledge of phase information. In this work, we examine the application of I-DT, using either planar- or spherical-wave incident wavefields, for imaging three-dimensional (3D) phase objects. We develop and investigate two algorithms for reconstructing phase objects that utilize only half of the measurements that would be needed to reconstruct a complex-valued object function. Each reconstruction algorithm reconstructs the phase object by use of different sets of intensity measurements. Although the developed reconstruction algorithms are equivalent mathematically, we demonstrate that their numerical and noise propagation properties differ considerably. We implement numerically the reconstruction algorithms and present reconstructed images to demonstrate their use and to corroborate our theoretical assertions.

4.
This paper discusses the iterative solution of the nonlinear problem of optical tomography. In the established forward model-based iterative image reconstruction (MOBIIR) method, a linear perturbation equation containing the first derivative of the forward operator is solved to obtain the update vector for the optical properties. In MOBIIR, the perturbation equation is updated by recomputing the first derivative after each update of the optical properties. In the method presented here, a nonlinear perturbation equation, containing terms up to the second derivative, is used to iteratively solve for the optical property updates. Through this modification, reconstructions with reasonable contrast recovery and accuracy are obtained without the need for updating the perturbation equation, thereby eliminating the outer iteration of the usual MOBIIR algorithm. To improve the performance of the algorithm, the outer iteration is reintroduced, in which the perturbation equation is recomputed without re-estimating the derivatives and with only updated computed data. The system of quadratic equations is solved using either a modified conjugate gradient descent scheme or a two-step linearized predictor-corrector scheme. A quick method employing the adjoint of the forward operator is used to estimate the derivatives. By solving the nonlinear perturbation equation, it is shown that the iterative scheme is able to recover large contrast variations in absorption coefficient with improved noise tolerance in the data. Such recovery has not so far been possible with linear algorithms. This is demonstrated by presenting results of numerical simulations from objects with inhomogeneous inclusions in absorption coefficient with different contrasts and shapes.

5.
We report on the development of an iterative image reconstruction scheme for optical tomography that is based on the equation of radiative transfer. Unlike the commonly applied diffusion approximation, the equation of radiative transfer accurately describes the photon propagation in turbid media without any limiting assumptions regarding the optical properties. The reconstruction scheme consists of three major parts: (1) a forward model that predicts the detector readings based on solutions of the time-independent radiative transfer equation, (2) an objective function that provides a measure of the differences between the detected and the predicted data, and (3) an updating scheme that uses the gradient of the objective function to perform a line minimization to get new guesses of the optical properties. The gradient is obtained by employing an adjoint differentiation scheme, which makes use of the structure of the finite-difference discrete-ordinate formulation of the transport forward model. Based on the new guess of the optical properties a new forward calculation is performed to get new detector predictions. The reconstruction process is completed when the minimum of the objective function is found within a defined error. To illustrate the performance of the code we present initial reconstruction results based on simulated data.

6.
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By the use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications.
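A PLS reconstruction with a quadratic smoothness penalty, as in the first of the two methods above, reduces to solving a regularized normal equation. The operator, data, and penalty weight below are hypothetical stand-ins for the much larger 3D OAT imaging model:

```python
import numpy as np

# Toy forward operator (4 measurements, 3 unknowns) and data (invented).
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
y = np.array([1.0, 2.0, 1.5, 3.0])

# First-difference operator giving the quadratic smoothness penalty ||Dx||^2.
D = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
beta = 0.1                          # penalty weight (hypothetical)

# PLS estimate: minimize ||Ax - y||^2 + beta ||Dx||^2 via normal equations.
H = A.T @ A + beta * D.T @ D
x_pls = np.linalg.solve(H, A.T @ y)
```

The total-variation variant replaces `||Dx||^2` with `||Dx||_1`, which is no longer a linear solve and is typically handled with iterative proximal or gradient-based schemes.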

7.
The expectation-maximization (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. By contrast, linear reconstruction methods, such as filtered back-projection in tomography, show a much more global noise pattern, with high-intensity regions of the object contributing to noise at rather distant low-intensity regions. The theoretical results of this paper depend on two approximations, but in the second paper in this series we demonstrate through Monte Carlo simulation that the approximations are justified over a wide range of conditions in emission tomography. The theory can, therefore, be used as a basis for calculation of objective figures of merit for image quality.
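The claim that the standard-deviation map resembles the object can be checked with a small Monte Carlo experiment in the spirit of the second paper of the series. The 6-pixel object and smoothing system matrix below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simple 1D "system": a smoothing matrix acting on a 6-pixel object.
A = np.array([[0.6 if i == j else 0.2 if abs(i - j) == 1 else 0.0
               for j in range(6)] for i in range(6)])
x_true = np.array([1.0, 1.0, 20.0, 20.0, 1.0, 1.0])  # hot centre, cold edges
ybar = A @ x_true                                     # mean projection counts

def em(y, n_iter=30):
    """Plain ML-EM reconstruction of one noisy data realization."""
    x = np.ones(6)
    sens = A.T @ np.ones(6)
    for _ in range(n_iter):
        x = x * (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
    return x

# Reconstruct many Poisson realizations and map mean and std per pixel.
recons = np.array([em(rng.poisson(ybar)) for _ in range(400)])
mean_img = recons.mean(axis=0)
std_img = recons.std(axis=0)
```

With the variance carrying a factor proportional to the square of the mean image, the hot centre pixels show markedly higher standard deviation than the cold edge pixels, so the std map tracks the object.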

8.
It has been well recognized that, in comparison with the conventional positron emission tomography (PET), the differential-time measurements made available in time-of-flight (TOF) PET imaging can reduce the propagation of data noise in reconstruction and lead to images having better statistical quality. This observation has been the motivation driving the interest in developing TOF-PET systems. In this paper, we make new observations that can extend the use of TOF-PET. We develop a new mathematical formulation showing that the TOF information can be utilized to achieve new modes of reconstruction. In particular, it enables windowed and regions-of-interest reconstructions by use of TOF-PET measurements having a restricted coverage in the TOF or transverse direction, or both. A class of analytic algorithms is developed to perform such reconstructions. We employ computer-simulated TOF-PET data containing Poisson noise to validate the developed algorithms and evaluate their response to data noise with respect to a confidence-weighting analytic TOF-PET reconstruction method. We also demonstrate that in certain situations, the new reconstruction algorithms can generate images having improved statistics by recruiting suitable subsets of the TOF-PET data to minimize the use of deteriorating measurements in reconstruction. Potential implications of the new reconstruction approach to PET imaging are discussed.

9.
An accelerated convergent ordered subsets algorithm for emission tomography
We propose an algorithm, E-COSEM (enhanced complete-data ordered subsets expectation-maximization), for fast maximum likelihood (ML) reconstruction in emission tomography. E-COSEM is founded on an incremental EM approach. Unlike the familiar OSEM (ordered subsets EM) algorithm, which is not convergent, we show that E-COSEM converges to the ML solution. Alternatives to OSEM include RAMLA, and, for the related maximum a posteriori (MAP) problem, the BSREM and OS-SPS algorithms. These are fast and convergent, but require a judicious choice of a user-specified relaxation schedule. E-COSEM itself uses a sequence of iteration-dependent parameters (very roughly akin to relaxation parameters) to control a tradeoff between a greedy, fast but non-convergent update and a slower but convergent update. These parameters are computed automatically at each iteration and require no user specification. For the ML case, our simulations show that E-COSEM is nearly as fast as RAMLA.
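For reference, a minimal sketch of the OSEM baseline that E-COSEM improves on: each sub-iteration applies an EM-type update using only one subset of the projections. The system matrix and subset split are hypothetical:

```python
import numpy as np

def osem(y, A, subsets, n_iter=50, eps=1e-12):
    """Ordered-subsets EM: each sub-iteration uses one subset of projections."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            sens = As.T @ np.ones(len(s))
            x = x * (As.T @ (ys / np.maximum(As @ x, eps))) / np.maximum(sens, eps)
    return x

# Toy 4-bin, 2-pixel problem with noiseless, consistent data (invented).
A = np.array([[1.0, 0.2],
              [0.2, 1.0],
              [0.5, 0.5],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x = osem(y, A, subsets=[[0, 2], [1, 3]])
```

With noiseless, consistent data the subset cycle settles at the true solution; with noisy data OSEM instead enters a limit cycle, which is the convergence failure that E-COSEM's automatically computed parameters are designed to remove.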

10.
Ordered subsets algorithms for transmission tomography
The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with a similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterations when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested.

11.
In emission tomography, statistically based iterative methods can improve image quality relative to analytic image reconstruction through more accurate physical and statistical modelling of high-energy photon production and detection processes. Continued exponential improvements in computing power, coupled with the development of fast algorithms, have made routine use of iterative techniques practical, resulting in their increasing popularity in both clinical and research environments. Here we review recent progress in developing statistically based iterative techniques for emission computed tomography. We describe the different formulations of the emission image reconstruction problem and their properties. We then describe the numerical algorithms that are used for optimizing these functions and illustrate their behaviour using small-scale simulations.

12.
R W Rowe, S Dai. Medical Physics, 1992, 19(4): 1113-1119
Although radioactive decay obeys Poisson statistics, because of the corrections that are applied to the projection data prior to reconstruction, the noise in positron emission tomography (PET) projections does not follow a Poisson distribution. Use of Poisson noise when simulating PET projections in order to test the performance of reconstruction and processing techniques is therefore not appropriate. The magnitude of PET projection noise was observed to be as much as 10 to 100 times greater than Poisson noise in some instances. A quadratic function was found to fit the relationship between noise power spectral density and total projection count. The coefficients of the quadratic function were determined for projections of different tracer distributions and types. Using these observations, a method of simulating PET projections was developed based on a pseudo-Poisson noise model. Projections simulated according to this method are good approximations to real projection data and take into account the characteristics of individual PET cameras and particular tracer distributions. Such simulated projections have been valuable in predicting the performance of reconstruction algorithms. This approach can also be applied to single photon emission tomography.
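The simplest way to mimic the super-Poisson variance reported above is a variance-inflation trick: draw Poisson counts at a reduced mean and rescale. This is only a sketch of the idea; the paper's actual model fits a quadratic function to the noise power spectral density, which the single inflation factor `f` below does not attempt to reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_poisson(mean, f, rng):
    """Draw counts with Poisson-like behaviour but variance inflated by f.

    Sampling f * Poisson(mean / f) preserves the mean while giving
    variance f * mean, mimicking post-correction PET projection noise.
    """
    return f * rng.poisson(mean / f)

# Check the first two moments on many draws (values are illustrative).
mean, f, n = 50.0, 10.0, 200_000
samples = pseudo_poisson(np.full(n, mean), f, rng)
```

For `f = 10` the simulated bins have the same mean as a Poisson model but ten times its variance, in the range the paper reports for corrected PET projections.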

13.
Alessio AM, Kinahan PE. Medical Physics, 2006, 33(11): 4095-4103
Accurate quantitation of positron emission tomography (PET) tracer uptake levels in tumors is important for staging and monitoring response to treatment. Quantitative accuracy in PET is particularly poor for small tumors because of system partial volume errors and smoothing operations. This work proposes a reconstruction algorithm to reduce the quantitative errors due to limited system resolution and due to necessary image noise reduction. We propose a method for finding and using the detection system response in the projection matrix of a statistical reconstruction algorithm. In addition, we use aligned anatomical information, available in PET/CT scanners, to govern the penalty term applied during each image update. These improvements are combined with Fourier rebinning in a clinically feasible algorithm for reconstructing fully three-dimensional PET data. Results from simulation and measured studies show improved quantitation of tumor values in terms of bias and variance across multiple tumor sizes and activity levels with the proposed method. At common clinical image noise levels for the detection task, the proposed method reduces the error in maximum tumor values by 11% compared to filtered back-projection and 5% compared to conventional iterative methods.

14.
Knowledge of the statistical properties of reconstructed single photon emission computed tomography (SPECT) and positron emission tomography (PET) images would be helpful for optimizing acquisition and image processing protocols. We describe a non-parametric bootstrap approach to accurately estimate the statistical properties of SPECT or PET images whatever the noise properties in the projections and the reconstruction algorithm. Using analytical simulations and real PET data, this method is shown to accurately predict the statistical properties, including the variance and covariance, of reconstructed pixel values for both linear (filtered backprojection) and non-linear (ordered subset expectation maximization) reconstruction algorithms.
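A minimal sketch of the non-parametric bootstrap idea: resample the detected events among projection bins and push each replicate through the reconstructor to estimate per-pixel variance. The sinogram counts and the random matrix standing in for a linear reconstruction operator (e.g. FBP) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Measured (noisy) sinogram counts and a fixed linear reconstructor R.
y = np.array([40.0, 55.0, 30.0, 70.0, 25.0])
R = rng.normal(size=(3, 5)) * 0.1   # stand-in for e.g. an FBP operator

def bootstrap_variance(y, R, n_boot=1000, rng=rng):
    """Non-parametric bootstrap: resample detected events among bins."""
    n_events = int(y.sum())
    p = y / y.sum()
    recs = []
    for _ in range(n_boot):
        y_boot = rng.multinomial(n_events, p).astype(float)
        recs.append(R @ y_boot)
    return np.var(recs, axis=0)

var_img = bootstrap_variance(y, R)
```

Because only the measured counts are resampled, the same procedure works unchanged for a non-linear reconstructor such as OSEM: replace the matrix product with a call to the reconstruction routine.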

15.
Hwang D, Zeng GL. Medical Physics, 2005, 32(7): 2312-2319
This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as the others and performs better in some cases.
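The transmission log-likelihood that algorithms of this kind maximize can be illustrated with a simple projected gradient ascent. This is a generic sketch with invented values, not the paper's multiplicative algorithm:

```python
import numpy as np

# Toy transmission setup: blank scan b, two attenuation pixels (invented).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.full(3, 1000.0)              # blank-scan counts
mu_true = np.array([0.3, 0.5])      # true attenuation values
y = b * np.exp(-A @ mu_true)        # noiseless transmission data

# Projected gradient ascent on the transmission Poisson log-likelihood:
# with yhat = b * exp(-A mu), the gradient is A^T (yhat - y).
mu = np.zeros(2)
step = 2e-4
for _ in range(3000):
    yhat = b * np.exp(-A @ mu)
    mu = np.maximum(mu + step * (A.T @ (yhat - y)), 0.0)
```

The explicit projection step enforces the nonnegativity that the proposed multiplicative algorithm guarantees by construction, without any projection.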

16.
Yu L, Pan X. Medical Physics, 2003, 30(10): 2629-2637
The half-scan strategy can be used for reducing scanning time and radiation dose delivered to the patient in fan-beam computed tomography (CT). In helical CT, the data weighting/interpolation functions are often devised based upon half-scan configurations. The half-scan fan-beam filtered backprojection (FFBP) algorithm is generally used for image reconstruction from half-scan data. It can, however, be susceptible to sample aliasing and data noise for configurations with short focal lengths and/or large fan-angles, leading to nonuniform resolution and noise properties in reconstructed images. Uniform resolution and noise properties are generally desired because they may lead to an increased utility of reconstructed images in estimation and/or detection/classification tasks. In this work, we propose an algorithm for reconstruction of images with uniform noise and resolution properties in half-scan CT. In an attempt to evaluate the image-noise properties, we derive analytic expressions for image variances obtained by use of the half-scan algorithms. We also perform numerical studies to assess quantitatively the resolution and noise properties of the algorithms. The results in these studies confirm that the proposed algorithm yields images with more uniform spatial resolution and with lower and more uniform noise levels than does the half-scan FFBP algorithm. Empirical results obtained in noise studies also verify the validity of the derived expressions for image variances. The proposed algorithm would be particularly useful for image reconstruction from data acquired by use of configurations with short focal lengths and large fields of measurement, which may be encountered in compact micro-CT and radiation therapeutic CT applications. The analytic results of the image-noise properties can be used for image-quality assessment in detection/classification tasks by use of model observers.

17.
Gauss-Newton method for image reconstruction in diffuse optical tomography
We present a regularized Gauss-Newton method for solving the inverse problem of parameter reconstruction from boundary data in frequency-domain diffuse optical tomography. To avoid the explicit formation and inversion of the Hessian, which is often prohibitively expensive in terms of memory resources and runtime for large-scale problems, we propose to solve the normal equation at each Newton step by means of an iterative Krylov method, which accesses the Hessian only in the form of matrix-vector products. This allows us to represent the Hessian implicitly by the Jacobian and regularization term. Further, we introduce transformation strategies for data and parameter space to improve the reconstruction performance. We present simultaneous reconstructions of absorption and scattering distributions using this method for a simulated test case and experimental phantom data.
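The matrix-free Newton step can be sketched on a tiny nonlinear least-squares problem. The scalar exponential model below is a hypothetical stand-in for the frequency-domain diffusion forward model, but the structure is the same: conjugate gradients on the regularized normal equation, with the Hessian accessed only through Jacobian products:

```python
import numpy as np

def cg_solve(matvec, b, n_iter=25, tol=1e-12):
    """Conjugate gradients using only matrix-vector products."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        if rs < tol:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hypothetical forward model m(p) = p0 * exp(-p1 * t), noiseless data.
t = np.array([0.0, 1.0, 2.0, 3.0])
p_true = np.array([2.0, 0.5])
data = p_true[0] * np.exp(-p_true[1] * t)

p = np.array([1.0, 0.8])            # starting guess
lam = 1e-6                          # Tikhonov regularization weight
for _ in range(60):
    model = p[0] * np.exp(-p[1] * t)
    r = data - model
    J = np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
    # Hessian applied implicitly: H v = J^T (J v) + lam * v
    hess_vec = lambda v: J.T @ (J @ v) + lam * v
    delta = cg_solve(hess_vec, J.T @ r)
    p = p + 0.5 * delta             # damped Gauss-Newton step
```

In the full-scale problem `J` itself is never stored; the two Jacobian products inside `hess_vec` are replaced by one forward and one adjoint model evaluation, which is the memory saving the abstract describes.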

18.
Since the introduction of the expectation-maximization (EM) algorithm for generating maximum-likelihood (ML) and maximum a posteriori (MAP) estimates in emission tomography, there have been many investigators applying the ML method. However, almost all of the previous work has been restricted to two-dimensional (2D) reconstructions. The major focus and contribution of this paper is to demonstrate a fully three-dimensional (3D) implementation of the MAP method for single-photon-emission computed tomography (SPECT). The 3D reconstruction exhibits an improvement in resolution when compared to the generation of a series of separate 2D slice reconstructions. As has been noted, the iterative EM algorithm for 2D reconstruction is highly computational; the 3D algorithm is far worse. To accommodate the computational complexity, we have extended our previous 2D work and demonstrate an implementation of the 3D algorithm on a class of massively parallel processors. Using a 16,000-processor MasPar machine, the algorithm is demonstrated to execute at 1.24 s/EM iteration for the entire 64 x 64 x 64 cube of 64 planar measurements obtained from the Siemens Orbiter rotating camera operating in the high-resolution mode.

19.
Modern computed tomography systems allow volume imaging of the heart. Up to now, approximate two-dimensional (2D) and 3D algorithms based on filtered backprojection have been used for the reconstruction. These algorithms become more sensitive to artifacts as the cone angle of the x-ray beam increases, which is the current trend in computed tomography (CT) technology. In this paper, we investigate the potential of iterative reconstruction based on the algebraic reconstruction technique (ART) for helical cardiac cone-beam CT. Iterative reconstruction has the advantages that it takes the cone angle into account exactly and that it can be combined with retrospective cardiac gating fairly easily. We introduce a modified ART algorithm for cardiac CT reconstruction. We apply it to clinical cardiac data from a 16-slice CT scanner and compare the images to those obtained with a current analytical reconstruction method. In a second part, we investigate the potential of iterative reconstruction for a large area detector with 256 slices. For the clinical cases, iterative reconstruction produces excellent images of diagnostic quality. For the large area detector, iterative reconstruction produces images superior to analytical reconstruction in terms of cone-beam artifacts.
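The ART core referred to above is the classical Kaczmarz sweep: each measured ray updates the image by projecting onto its ray equation in turn. The tiny consistent system below is an invented stand-in for the cone-beam geometry (no gating or cone-angle modelling):

```python
import numpy as np

def art_sweep(x, A, y, relax=1.0):
    """One ART (Kaczmarz) sweep: project onto each ray equation in turn."""
    for a_i, y_i in zip(A, y):
        x = x + relax * (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Consistent toy system (3 rays, 2 pixels) with known solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true

x = np.zeros(2)
for _ in range(50):
    x = art_sweep(x, A, y)
```

On noisy, inconsistent data a relaxation factor below 1 is typically used to damp the cycling between ray equations, at the cost of slower convergence.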

20.
Statistical iterative methods for image reconstruction, such as maximum-likelihood expectation maximization (ML-EM), are more robust and flexible than analytical inversion methods and allow the counting statistics and the photon transport during acquisition to be modeled accurately. They are rapidly becoming the standard for image reconstruction in emission computed tomography. The maximum-likelihood approach provides images with superior noise characteristics compared to the conventional filtered back-projection algorithm, but a major drawback of statistical iterative image reconstruction is its high computational cost. In this paper, a fast modified OS-EM (MOS-EM) algorithm is proposed that applies a penalty function to the least-squares merit function to accelerate image reconstruction and achieve better convergence. Experimental results show that the algorithm can provide high-quality reconstructed images within a small number of iterations.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号