Similar Literature
20 similar documents found
1.
In emission tomography, anatomical side information, in the form of organ and lesion boundaries, derived from intra-patient coregistered CT or MR scans can be incorporated into the reconstruction. Our interest is in exploring the efficacy of such side information for lesion detectability. To assess detectability we used the SNR of a channelized Hotelling observer and a signal-known exactly/background-known exactly detection task. In simulation studies, we incorporated anatomical side information into a SPECT MAP (maximum a posteriori) reconstruction by smoothing within but not across organ or lesion boundaries. A non-anatomical prior was applied by uniform smoothing across the entire image. We investigated whether the use of anatomical priors with organ boundaries alone or with perfect lesion boundaries alone would change lesion detectability relative to the case of a prior with no anatomical information. Furthermore, we investigated whether any such detectability changes for the organ-boundary case would be a function of the distance of the lesion to the organ boundary. We also investigated whether any detectability changes for the lesion-boundary case would be a function of the degree of proximity, i.e. a difference in the radius of the true functional lesion and the radius of the anatomical lesion boundary. Our results showed almost no detectability difference with versus without organ boundaries at any lesion-to-organ boundary distance. Our results also showed no difference in lesion detectability with and without lesion boundaries, and no variation of lesion detectability with degree of proximity.
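As a concrete illustration of the figure of merit used in this abstract, below is a minimal sketch of a channelized Hotelling observer SNR computation for an SKE/BKE task. All names are illustrative, and the channel matrix (e.g. difference-of-Gaussians channels) is assumed to be supplied by the caller; this is not the authors' code.

```python
import numpy as np

def cho_snr(signal_imgs, background_imgs, channels):
    """Channelized Hotelling observer SNR for an SKE/BKE detection task.

    signal_imgs, background_imgs : (n_imgs, n_pix) reconstructed image samples
    channels                     : (n_pix, n_ch) channel matrix, e.g. DOG channels
    """
    v_sig = signal_imgs @ channels            # channel outputs, signal present
    v_bkg = background_imgs @ channels        # channel outputs, signal absent
    dv = v_sig.mean(axis=0) - v_bkg.mean(axis=0)
    # intra-class scatter: average of the two class covariance matrices
    S = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_bkg, rowvar=False))
    w = np.linalg.solve(S, dv)                # Hotelling template in channel space
    return np.sqrt(dv @ w)                    # observer SNR
```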

2.
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms because they incorporate accurate models of the imaging physics, instrument response and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms to improve image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small-animal imager: a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performance of the iterative reconstruction algorithms with that of a 3D filtered backprojection (FBP) algorithm. Using quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than the FBP algorithm. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications.
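For reference, the two penalized objectives described above have the generic form (a sketch in our notation, not necessarily the authors' exact formulation):

```latex
\hat{\mathbf{x}} = \operatorname*{arg\,min}_{\mathbf{x} \ge 0}
  \;\|\mathbf{y} - \mathbf{H}\mathbf{x}\|_2^2 + \lambda R(\mathbf{x}),
\qquad
R(\mathbf{x}) =
\begin{cases}
  \|\mathbf{D}\mathbf{x}\|_2^2 & \text{quadratic smoothness penalty,}\\
  \mathrm{TV}(\mathbf{x}) & \text{total variation penalty,}
\end{cases}
```

where H models the acoustic propagation and transducer impulse response, D is a finite-difference operator, and λ trades data fidelity against regularity.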

3.
Statistical iterative reconstruction (SIR) algorithms have shown potential to substantially improve low-dose cone-beam CT (CBCT) image quality. The penalty term plays an important role in determining the performance of SIR algorithms. In this work, we quantitatively evaluate the impact of the penalty on the performance of a statistics-based penalized weighted least-squares (PWLS) iterative reconstruction algorithm for improving the image quality of low-dose CBCT. Three different edge-preserving penalty terms, an exponential-form anisotropic quadratic (AQ) penalty (PWLS-Exp), an inverse-square-form AQ penalty (PWLS-InverseSqr) and a total variation penalty (PWLS-TV), were compared against the conventional isotropic quadratic penalty (PWLS-Iso) using both computer simulation and experimental studies. Noise in low-dose CBCT can be substantially suppressed by the PWLS reconstruction algorithm, and edges are well preserved by both the AQ- and TV-based penalty terms. The noise-resolution tradeoff measurements show that PWLS-Exp exhibits the best spatial resolution of all three anisotropic penalty terms at matched noise level when reconstructing high-contrast objects. For the reconstruction of low-contrast objects, the TV-based penalty outperforms the AQ-based ones, with better resolution preservation at matched noise levels. Different penalty terms may thus be chosen for better edge preservation at different targeted contrast levels.
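As an illustration of how such edge-preserving penalties are typically built (our notation; the paper's exact forms may differ), an anisotropic quadratic penalty weights each neighbor difference, e.g. with an exponential form:

```latex
R(\boldsymbol{\mu}) = \sum_{j} \sum_{k \in \mathcal{N}_j} w_{jk}\,(\mu_j - \mu_k)^2,
\qquad
w_{jk} = \exp\!\left[-\left(\frac{\mu_j - \mu_k}{\delta}\right)^{2}\right],
```

so that large local differences (edges) receive small weights and are smoothed less; the isotropic quadratic penalty corresponds to constant weights.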

4.
It is well known that the reconstruction problem in optical tomography is ill-posed. In other words, many different spatial distributions of optical properties inside the medium can lead to the same detector readings on the surface of the medium under consideration. Therefore, the choice of an appropriate method to overcome this problem is of crucial importance for any successful optical tomographic image reconstruction algorithm. In this work we approach the problem within a gradient-based iterative image reconstruction scheme. The image reconstruction is cast as the minimization of an appropriately defined objective function. The objective function can be separated into a least-squares error term, which compares predicted and actual detector readings, and additional penalty terms that may contain a priori information about the system. For the efficient minimization of this objective function, the gradient with respect to the spatial distribution of optical properties is calculated. Besides presenting the underlying concepts of our approach to overcoming ill-posedness in optical tomography, we show numerical results that demonstrate how prior knowledge, represented as penalty terms, can improve the reconstruction results.
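A minimal sketch of such a scheme, assuming a user-supplied forward model and its adjoint; all names are illustrative and a simple 1D quadratic smoothness penalty stands in for the paper's penalty terms:

```python
import numpy as np

def reconstruct(forward, adjoint, y_meas, x0, beta, step=1e-3, n_iter=200):
    """Gradient descent on F(x) = ||P(x) - y||^2 + beta * R(x).

    forward : x -> predicted detector readings P(x)
    adjoint : (x, r) -> J(x)^T r, Jacobian-transpose of P applied to residual r
    """
    x = x0.copy()
    for _ in range(n_iter):
        r = forward(x) - y_meas                       # prediction error
        grad = 2.0 * adjoint(x, r)                    # gradient of data term
        # gradient of ||Dx||^2 for 1D first differences: a discrete Laplacian
        # (interior stencil only; boundaries are handled crudely here)
        grad += beta * np.convolve(x, [-2.0, 4.0, -2.0], mode='same')
        x -= step * grad
    return x
```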

5.
Image quality assessment is required for the optimal use of mammographic units. On the one hand, there are objective image quality assessment methods based on the measurement of technical parameters, such as the modulation transfer function (MTF), noise power spectrum (NPS) or detective quantum efficiency (DQE), describing the performance of digital detectors. These parameters, however, have no direct relationship with lesion detectability in clinical practice. On the other hand, there are image quality assessment methods that involve time-consuming procedures but present a direct relationship with lesion detectability. This contribution describes an X-ray source/digital detector model leading to the simulation of virtual contrast-detail phantom (CDMAM) images. The virtual image computation method requires the acquisition of only a few real images and allows for an objective image quality assessment with a direct relationship to lesion detectability. The transfer function of the proposed model takes as input physical parameters (MTF* and noise) measured under clinical conditions on mammographic units. As presented in this contribution, MTF* is a modified MTF taking into account the effects of X-ray scatter in the breast and of magnification. Results obtained with the structural similarity index show that the simulated images are quite realistic in terms of contrast and noise. Tests using contrast-detail curves highlight that the simulated and real images lead to very similar data quality in terms of lesion detectability. Finally, various statistical tests show that the quality factors computed for the simulated and the real images are very close.

6.
In this paper, we investigate the benefits of a spatiotemporal approach to the reconstruction of image sequences. In the proposed approach, we introduce a temporal prior in the form of motion compensation to account for the statistical correlations among the frames in a sequence, and reconstruct all the frames collectively as a single function of space and time. The reconstruction algorithm is derived from the maximum a posteriori estimate, for which the one-step-late expectation-maximization algorithm is used. We demonstrate the method in experiments using simulated single photon emission computed tomography (SPECT) cardiac perfusion images. The four-dimensional (4D) gated mathematical cardiac-torso phantom was used to simulate gated SPECT perfusion imaging with Tc-99m-sestamibi. In addition to bias-variance analysis and time-activity curves, we also used a channelized Hotelling observer to evaluate the detectability of perfusion defects in the reconstructed images. Our experimental results demonstrate that incorporating temporal regularization into image reconstruction can significantly improve the accuracy of cardiac images without introducing significant cross-frame blurring from the cardiac motion. This leads not only to improved detection of perfusion defects, but also to improved reconstruction of the heart wall, which is important for functional assessment of the myocardium.
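The one-step-late update referred to above has a compact form; below is a minimal single-frame sketch (names are ours, and the motion-compensated temporal prior enters only through its gradient, supplied as a callable):

```python
import numpy as np

def osl_em_update(lam, A, y, beta, dU, eps=1e-12):
    """One-step-late (Green) EM update with the penalty gradient evaluated
    at the previous estimate; a minimal sketch, not the authors' code.

    lam : current activity estimate (n_vox,)
    A   : system matrix (n_bins, n_vox)
    y   : measured counts (n_bins,)
    dU  : callable returning the prior gradient at lam
    """
    sens = A.sum(axis=0)                       # sensitivity image
    ratio = y / np.maximum(A @ lam, eps)       # measured / predicted counts
    back = A.T @ ratio                         # backprojected ratio
    return lam * back / np.maximum(sens + beta * dU(lam), eps)
```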

7.
We consider noise in computed tomography images that are reconstructed using the classical direct fan-beam filtered backprojection algorithm, from both full- and short-scan data. A new, accurate method for computing image covariance is presented. The utility of the new covariance method is demonstrated by applying it to the implementation of a channelized Hotelling observer for a lesion detection task. Results from the new covariance method and its application to the channelized Hotelling observer are compared with results from Monte Carlo simulations. In addition, the impact of a bowtie filter and x-ray tube current modulation on reconstruction noise and lesion detectability is explored for full-scan reconstruction.

8.
Purpose: The low contrast sensitivity of CT scanners is regularly assessed by subjective scoring of low contrast detectability in phantom CT images. Since the low contrast objects in these phantoms are arranged in known fixed patterns, subjective rating of low contrast visibility might be biased. The purpose of this study was to develop and validate software for automated, objective low contrast detectability assessment based on a model observer. Methods: Images of the low contrast module of the Catphan 600 phantom were used for the evaluation of the software. This module contains two subregions: the supraslice region with three groups of low contrast objects (each consisting of nine circular objects with diameter 2-15 mm and contrast 0.3, 0.5 and 1.0%, respectively) and the subslice region with three groups of four circular objects each (diameter 3-9 mm; contrast 1.0%). The software provides automated determination of low contrast detectability using an NPWE (nonprewhitening matched filter with an eye filter) model observer for the supraslice region. The model observer correlates templates of the low contrast objects with the acquired images of the Catphan phantom, and a discrimination index d' is calculated. This index is transformed into a proportion correct (PC) value. In the two-alternative forced choice (2-AFC) experiments used in this study, PC ≥ 75% was proposed as the threshold for deciding whether objects were visible. As a proof of concept, the influence of kVp (between 80 and 135 kV), mAs (25-200 mAs range) and reconstruction filter (four filters, two soft and two sharp) on low contrast detectability was investigated. To validate the outcome of the software qualitatively, a human observer study was performed. Results: The expected influence of kV, mAs and reconstruction filter on image quality is consistent with the results of the proposed automated model. Higher values of d' (or PC) are found with increasing mAs or kV values and for the soft reconstruction filters. For the highest contrast group (1%), PC values were well above 75% for all object diameters >2 mm under all conditions. For the 0.5% contrast group, the same behavior was observed for object diameters >3 mm under all conditions. For the 0.3% contrast group, PC values were higher than 75% for object diameters >6 mm, except for the series acquired at the lowest dose (25 mAs), which gave lower PC values. Similar trends were found in the human observer study. Conclusions: We have developed an automated method to objectively investigate image quality using the NPWE model in combination with images of the Catphan phantom low contrast module. As a first step, low contrast detectability as a function of both acquisition and reconstruction parameter settings was successfully investigated with the software. In future work, this method could play a role in the evaluation of image reconstruction algorithms, dose reduction strategies or novel CT technologies, and other model observers may be implemented as well.
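The d'-to-PC transformation mentioned above is, for a 2-AFC task, the standard relation PC = Φ(d'/√2); a short sketch for checking the 75% visibility threshold (assuming this standard relation is the one the authors used):

```python
import numpy as np
from scipy.stats import norm

def pc_from_dprime(d):
    """Proportion correct in a two-alternative forced choice (2-AFC) task."""
    return norm.cdf(d / np.sqrt(2.0))

# PC = 75% corresponds to d' = sqrt(2) * Phi^-1(0.75), i.e. d' of about 0.95;
# objects scoring above this d' would be counted as visible under the
# threshold proposed in the abstract.
```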

9.
Kilovoltage cone-beam computed tomography (kV-CBCT) has shown potential to improve the accuracy of patient setup in radiotherapy. However, daily, repeated use of CBCT delivers a considerable extra radiation dose to patients. One way to reduce the patient dose is to lower the mAs when acquiring projection data. This, however, dramatically degrades the quality of low-mAs CBCT images due to excessive noise. In this work, we aim to improve CBCT image quality from low-mAs scans. Based on the measured noise properties of the sinogram, a penalized weighted least-squares (PWLS) objective function was constructed, and the ideal sinogram was then estimated by minimizing the PWLS objective function. To preserve edge information in the projection data, an anisotropic penalty term was designed using the intensity difference between neighboring pixels. The effectiveness of the presented algorithm was demonstrated by two experimental phantom studies. Noise in the reconstructed CBCT image acquired with a low-mAs protocol was greatly suppressed after the proposed sinogram-domain processing, without noticeable sacrifice of spatial resolution.

10.
Statistical iterative methods for image reconstruction, such as maximum-likelihood expectation maximization (ML-EM), are more robust and flexible than analytical inversion methods and allow the counting statistics and the photon transport during acquisition to be modeled accurately. They are rapidly becoming the standard for image reconstruction in emission computed tomography. The maximum-likelihood approach provides images with superior noise characteristics compared to the conventional filtered backprojection algorithm, but a major drawback of statistical iterative image reconstruction is its high computational cost. In this paper, a fast algorithm, a modified OS-EM (MOS-EM), is proposed in which a penalty function applied to the least-squares merit function accelerates image reconstruction and achieves better convergence. Experimental results show that the algorithm can provide high-quality reconstructed images within a small number of iterations.
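For orientation, a plain OS-EM pass over ordered subsets looks as follows (a sketch with illustrative names; the penalized least-squares modification that defines MOS-EM is not reproduced here):

```python
import numpy as np

def os_em_pass(lam, A, y, subsets, eps=1e-12):
    """One pass of OS-EM: the EM update applied subset-by-subset.

    lam     : current activity estimate (n_vox,)
    A       : system matrix (n_bins, n_vox)
    y       : measured counts (n_bins,)
    subsets : list of index arrays partitioning the projection bins
    """
    for s in subsets:
        As = A[s]                                   # rows for this subset
        ratio = y[s] / np.maximum(As @ lam, eps)    # measured / predicted
        lam = lam * (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
    return lam
```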

11.
Based on Bayesian theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain smoothed reconstructions for positron emission tomography. This algorithm is flexible and convenient for most penalties, but its convergence is hard to guarantee. With the same goal, Fessler penalized a weighted least-squares (WLS) estimator with a quadratic penalty and solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm is time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation-maximization (EM) type algorithms to solve them. Unlike MAP and SOR, the proposed algorithms yield update rules by minimizing auxiliary functions constructed from the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrate the robustness and effectiveness of the proposed algorithms.
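The monotonicity argument behind such auxiliary-function (majorize-minimize) updates is short enough to state here; this is the generic argument, not the paper's specific derivation. If the auxiliary function φ majorizes the cost F at the current iterate,

```latex
\phi(x;\, x^{n}) \ge F(x)\quad \forall x,
\qquad
\phi(x^{n};\, x^{n}) = F(x^{n}),
```

then the update x^{n+1} = arg min_x φ(x; x^n) guarantees

```latex
F(x^{n+1}) \le \phi(x^{n+1};\, x^{n}) \le \phi(x^{n};\, x^{n}) = F(x^{n}),
```

so the cost function decreases monotonically, which is exactly the property claimed above.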

12.
We present the results of utilizing aligned anatomical information from CT images to locally adjust image smoothness during the reconstruction of three-dimensional (3D) whole-body positron emission tomography (PET) data. The ability of whole-body PET imaging to detect malignant neoplasms is becoming widely recognized. Potentially useful, however, is the role of whole-body PET in the quantitative estimation of tracer uptake. The utility of PET in oncology is often limited by the high level of statistical noise in the images. Noise can be reduced by incorporating a priori image smoothness information from correlated anatomical information during the reconstruction of PET data. A combined PET/CT scanner allows the acquisition of accurately aligned PET and x-ray CT whole-body data. We use the Fourier rebinning algorithm (FORE) to accurately convert the 3D PET data to two-dimensional (2D) data to accelerate the image reconstruction process. The 2D datasets are reconstructed with successive over-relaxation of a penalized weighted least-squares (PWLS) objective function to model the statistics of the acquisition, data corrections and rebinning. A 3D voxel label model is presented that incorporates the anatomical information via the penalty weights of the PWLS objective function. This combination of FORE + PWLS + labels was developed because it allows both the reconstruction of 3D whole-body data sets in clinically feasible times and the inclusion of anatomical information in such a way that convergence can be guaranteed. Since mismatches between anatomical (CT) and functional (PET) data are unavoidable in practice, the labels are 'blurred' to reflect the uncertainty associated with the anatomical information. Simulated and experimental results show the potential advantage of incorporating anatomical information by using blurred labels to calculate the penalty weights. We conclude that while the effect of this method on detection tasks is complicated and unclear, there is an improvement in the estimation task.

13.
Iterative reconstruction algorithms have been widely used in PET and SPECT emission tomography. Accurate modeling of photon noise propagation is crucial for quantitative tomography applications. Iteration-based noise propagation methods have been developed for only a few algorithms that have explicit multiplicative update equations, and discrepancies exist between the iteration-based methods and Fessler's fixed-point method because of improper approximations. In this paper, we present a unified theoretical prediction of noise propagation for any penalized expectation-maximization (EM) algorithm, i.e. an EM approach incorporating a penalty term. The proposed method does not require an explicit update equation; the update is assumed to be defined implicitly through the stationarity condition of a surrogate function. We derive the expressions using the implicit function theorem, Taylor series and the chain rule from vector calculus. We also derive the fixed-point expressions obtained when the iterative algorithms converge, and show the consistency between the proposed method and the fixed-point method. These expressions are defined solely in terms of the partial derivatives of the surrogate function and the Fisher information matrices. We then apply the theoretical noise predictions to iterative reconstruction algorithms in emission tomography. Finally, we validate the theoretical predictions for the MAP-EM and OSEM algorithms using Monte Carlo simulations with Jaszczak-like and XCAT phantoms, respectively.
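The fixed-point expressions mentioned above follow from first-order (linearized) noise propagation. Writing the implicitly defined update as x^{n+1} = M(x^n, y) and evaluating at a converged fixed point x̂ = M(x̂, y), the chain rule gives, in generic notation (a sketch, not the paper's exact expressions):

```latex
\operatorname{Cov}(\hat{x}) \approx
\left(I - \nabla_{x}\mathcal{M}\right)^{-1}
\nabla_{y}\mathcal{M}\;
\operatorname{Cov}(y)\;
\nabla_{y}\mathcal{M}^{\,T}
\left(I - \nabla_{x}\mathcal{M}\right)^{-T},
```

where both Jacobians are evaluated at the fixed point and the mean data; in the penalized-EM setting these Jacobians reduce to expressions in the partial derivatives of the surrogate function and the Fisher information matrices, as the abstract states.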

14.
An efficient approach for the automatic detection of red lesions in ocular fundus images, based on pixel classification and mathematical morphology, is proposed. Experimental evaluation demonstrates better performance than other red-lesion detection algorithms; when determining whether an image contains red lesions, the proposed approach achieves a sensitivity of 100% and a specificity of 91%.

15.
We have developed a two-stage Gauss-Newton reconstruction process with an automatic procedure for determining the regularization parameter. The combination is utilized by our microwave imaging system and has facilitated the recovery of quantitatively improved images. The first stage employs Levenberg-Marquardt regularization along with a spatial filtering technique for a few iterations to produce an intermediate image. In effect, this first set of iterative reconstruction steps synthesizes a priori information from the measurement data, rather than requiring physical prior information about the interrogated object. Because of the interaction of the Levenberg-Marquardt regularization and the spatial filtering at each iteration, the intermediate image produced by the first stage improves on the initial uniform guess in terms of least-squared error; however, it has not completely converged in a least-squares sense. The second stage uses this distribution as a priori information in an iteratively regularized Gauss-Newton reconstruction with a weighted Euclidean distance penalty term. The penalty term restricts the final image to a vicinity (determined by the scale of the weighting parameter) of the intermediate image while allowing more flexibility in extracting internal object structures. The second stage makes use of an empirical Bayesian/random-effects model that enables an optimal determination of the weighting parameter of the penalty term. The new approach demonstrates quantifiably improved images in simulation, phantom and in vivo experiments, with particularly striking improvements in the recovery of heterogeneities internal to large, high-contrast scatterers, such as those encountered when imaging the human breast in a water-coupled configuration.
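In generic notation (a sketch; the paper's particular weighting and scaling details are omitted), the two stages correspond to the update steps

```latex
\text{stage 1 (Levenberg-Marquardt):}\quad
\left(J^{T}J + \lambda I\right)\delta = J^{T} r,
\qquad
\text{stage 2:}\quad
\left(J^{T}J + \alpha W\right)\delta = J^{T} r - \alpha W\,(x - x_{\mathrm{prior}}),
```

where J is the Jacobian of the forward model, r the data residual, x_prior the intermediate image produced by stage 1, and W the weighting that sets the size of the allowed vicinity around it.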

16.
Marker-Controlled Watershed for Lesion Segmentation in Mammograms
Lesion segmentation, a critical step in computer-aided diagnosis systems, is a challenging task because lesion boundaries are usually obscured, irregular and of low contrast. In this paper, an accurate and robust algorithm for the automatic segmentation of breast lesions in mammograms is proposed. The traditional watershed transformation is applied to the morphological gradient image, smoothed by morphological reconstruction, to obtain the lesion boundary in the belt between the internal and external markers. To determine the markers automatically, the rough region of the lesion is first identified by template matching and thresholding. The internal marker is then determined by a distance transform and the external marker by morphological dilation. The proposed algorithm is quantitatively compared to the dynamic programming boundary tracing method and the plane fitting and dynamic programming method on a set of 363 lesions (size range, 5–42 mm in diameter; mean, 15 mm), using the area overlap metric (AOM), Hausdorff distance (HD) and average minimum Euclidean distance (AMED). The mean ± SD values of AOM, HD and AMED for our method were 0.72 ± 0.13, 5.69 ± 2.85 mm and 1.76 ± 1.04 mm, respectively, a better performance than that of the two comparison methods. The results confirm the potential of the proposed algorithm for reliable segmentation and quantification of breast lesions in mammograms.
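A minimal sketch of the marker-controlled watershed pipeline described above, using scikit-image; the marker rules, structuring-element sizes and thresholds here are illustrative, and the rough mask stands in for the template-matching/thresholding stage:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import rank
from skimage.morphology import dilation, disk
from skimage.segmentation import watershed

def segment_lesion(img, rough_mask):
    """Marker-controlled watershed segmentation of a lesion (a sketch).

    img        : uint8 grayscale mammogram patch
    rough_mask : bool mask from the template-matching/thresholding stage
    """
    # internal marker: core of the rough region via a distance transform
    dist = ndi.distance_transform_edt(rough_mask)
    internal = dist > 0.6 * dist.max()
    # external marker: a thin ring safely outside the dilated rough region
    external = dilation(rough_mask, disk(25)) ^ dilation(rough_mask, disk(23))
    markers = np.zeros(img.shape, dtype=np.int32)
    markers[internal] = 1
    markers[external] = 2
    # morphological gradient; the paper additionally smooths it by
    # morphological reconstruction, which is omitted here
    gradient = rank.gradient(img, disk(3))
    labels = watershed(gradient, markers)   # boundary falls between markers
    return labels == 1
```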

17.
In this paper, we investigate the performance of time-of-flight (TOF) positron emission tomography (PET) in improving lesion detectability. We present a theoretical approach for comparing the lesion detectability of TOF versus non-TOF systems and perform computer simulations to validate the theoretical prediction. A single-ring TOF PET tomograph is simulated using the SimSET software, and images are reconstructed in 2D from list-mode data using a maximum a posteriori method. We use a channelized Hotelling observer to assess detection performance. Both receiver operating characteristic (ROC) and localization ROC curves are compared for the TOF and non-TOF PET systems. We first studied the SNR gains of TOF PET for different scatter and random fractions, system timing resolutions and object sizes. We found that the TOF information improves lesion detectability, and the improvement is greater for larger random fractions, better timing resolution and bigger objects. Scattered events by themselves have little impact on the SNR gain after correction. Since the true system timing resolution may not be known precisely in practice, we investigated the effect of mismatched timing kernels and showed that using a mismatched kernel during reconstruction always degrades detection performance, regardless of whether it is narrower or wider than the real value. Using the proposed theoretical framework, we also studied the effect of lumpy backgrounds on detection performance. Our results indicate that with lumpy backgrounds, TOF PET still outperforms non-TOF PET, but the improvement is smaller than in the uniform background case. More specifically, at the same correlation length, the SNR gain decreases with a larger number of lumps and greater lump amplitude. At the same variance, the SNR gain reaches its minimum when the width of the Gaussian lumps is close to the size of the tumor.
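The dependence on object size and timing resolution reported above is consistent with the commonly quoted rule of thumb for the TOF gain (a standard approximation, not necessarily the paper's exact figure of merit):

```latex
\frac{\mathrm{SNR}^{2}_{\mathrm{TOF}}}{\mathrm{SNR}^{2}_{\mathrm{non\text{-}TOF}}}
\approx \frac{D}{\Delta x},
\qquad
\Delta x = \frac{c\,\Delta t}{2},
```

where D is the object diameter, Δt the coincidence timing resolution and Δx the corresponding localization uncertainty along the line of response; better timing (smaller Δt) and larger objects (bigger D) both increase the gain.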

18.
This study develops and demonstrates a realistic x-ray imaging simulator with computerized observers to maximize lesion detectability and minimize patient exposure. A software package, ViPRIS, incorporating two computational patient phantoms, has been developed for simulating x-ray radiographic images. A tomographic phantom, VIP-Man, constructed from Visible Human anatomical colour images, is used to simulate the scattered portion of the image using the EGSnrc Monte Carlo code. The primary portion of an x-ray image is simulated using the projection ray-tracing method through the Visible Human CT data set. To produce a realistic image, the software simulates quantum noise, blurring effects, lesions, detector absorption efficiency and other imaging artefacts. The primary and scattered portions of an x-ray chest image are combined to form a final image for computerized observer studies and image quality analysis. Absorbed doses in the organs and tissues of the segmented VIP-Man phantom were also obtained from the Monte Carlo simulations. Approximately 25,000 simulated images and 2,500,000 data files were analysed using computerized observers. Hotelling and Laguerre-Gauss Hotelling observers are used to perform various lesion detection tasks. Several model observer tasks were used, including SKE/BKE, MAFC and SKEV. The energy levels and fluence at the minimum dose required to detect a small lesion were determined with respect to lesion size, location and system parameters.

19.
Virtual-pinhole PET (VP-PET) imaging is a new technology in which one or more high-resolution detector modules are integrated into a conventional PET scanner with lower-resolution detectors. It can locally enhance the spatial resolution and contrast recovery near the add-on detectors and, depending on the configuration, may also increase the sensitivity of the system. This novel scanner geometry makes the reconstruction problem more challenging than reconstruction from a stand-alone PET scanner, as new techniques are needed to model and account for the non-standard acquisition. In this paper, we present a general framework for fully 3D modeling of an arbitrary VP-PET insert system. The model components are incorporated into a statistical reconstruction algorithm to estimate an image from the multi-resolution data. For validation, we apply the proposed model and reconstruction approach to one of our custom-built VP-PET systems: a half-ring insert device integrated into a clinical PET/CT scanner. Details regarding the most important implementation issues are provided. We show that the proposed data model is consistent with the measured data, and that our approach can lead to reconstructions with improved spatial resolution and lesion detectability.

20.
Objective: Exploiting the brightness and edge characteristics of hard exudates (HE) in fundus images, we propose an automatic HE detection method that combines the Canny edge detection algorithm with morphological reconstruction, in order to address the low sensitivity of current algorithms and the interference of the optic disc and blood vessels in detection results; this is of importance for the automated screening of diabetic retinopathy (DR). Methods: The detection algorithm consists of four steps. Step 1: image preprocessing, mainly RGB channel selection and morphology-based contrast enhancement. Step 2: elimination of key retinal structures; a Gabor-filter-based vessel segmentation method removes the influence of vessel edges on HE detection, and our optic cup segmentation algorithm is applied to the red channel of the fundus image to segment the optic disc automatically, removing the influence of the disc and its margin on HE detection. Step 3: HE extraction using an improved Canny edge detection algorithm combined with morphological reconstruction. Step 4: morphology-based post-processing to remove false-positive regions near the image border. The algorithm was tested on 40 images from a public database (35 images with HE lesions, 5 normal images). Results: The algorithm achieved a lesion-based sensitivity (SE) of 93.18% and a positive predictive value (PPV) of 79.26%; the image-based sensitivity, specificity (SP) and accuracy (ACC) were 97.14%, 80.00% and 95.00%, respectively. Conclusion: Compared with other methods, the automatic HE detection algorithm combining Canny edge detection with morphological reconstruction shows good feasibility.
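A minimal sketch of steps 2-4 of the pipeline above, using scikit-image; the thresholds, the way the edge map seeds the reconstruction, and the mask inputs are our own illustrative choices, not the paper's implementation:

```python
import numpy as np
from skimage import feature, morphology

def detect_hard_exudates(channel, vessel_mask, disc_mask):
    """Canny edges + grayscale morphological reconstruction for HE candidates.

    channel     : float image in [0, 1], the preprocessed fundus channel
    vessel_mask : bool mask from the Gabor-based vessel segmentation (step 2)
    disc_mask   : bool mask of the segmented optic disc (step 2)
    """
    edges = feature.canny(channel, sigma=1.5)          # step 3: edge map
    edges &= ~(vessel_mask | disc_mask)                # step 2: drop structures
    # grayscale reconstruction: grow bright regions that touch HE edges
    # (seed <= mask holds because channel is non-negative)
    seed = np.where(edges, channel, 0.0)
    recon = morphology.reconstruction(seed, channel)
    candidates = recon > 0.7 * recon.max()             # illustrative threshold
    return morphology.remove_small_objects(candidates, 10)  # step 4: cleanup
```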

