Similar Documents
20 similar documents found (search time: 31 ms)
1.
High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy function consisting of an EPTV norm and a data fidelity term derived from the x-ray projections. The EPTV term preferentially performs smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original TV norm. During the reconstruction process, pixels at the edges are gradually identified and given a low penalty weight. Our iterative algorithm is implemented on a graphics processing unit (GPU) to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information in low-contrast structures and therefore maintains acceptable spatial resolution.
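The edge-preserving weighting described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the exponential weight, the gradient scale and the smoothing constant `eps` are hypothetical choices; the key idea is only that pixels with large local gradients (edges) receive a small penalty weight in the TV norm.

```python
import numpy as np

def eptv_weights(img, eps=1e-3):
    # Forward differences; the last row/column difference is zero (replicated edge).
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    # Hypothetical weight: exp(-|grad| / scale). Strong edges get a weight near
    # zero, so TV smoothing is relaxed there; flat regions keep weight near one.
    scale = grad_mag.mean() + eps
    return np.exp(-grad_mag / scale)

def eptv_norm(img, eps=1e-3):
    # Weighted TV norm: sum of w * |grad| over pixels (eps smooths |.| at zero).
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    grad_mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return float((eptv_weights(img, eps) * grad_mag).sum())
```

In a full reconstruction this norm would be combined with the projection data-fidelity term and minimized iteratively; here only the regularizer is shown.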

2.
Images reconstructed with the maximum-likelihood-by-expectation-maximization (ML) algorithm have lower noise in some regions, particularly low-count areas, compared with images reconstructed with filtered backprojection (FBP). The use of a statistically correct noise model coupled with the positivity constraint in the ML algorithm provides this noise improvement, but whether this model confers a general advantage for ML over FBP with no noise model and any reconstruction filter is unclear. We have studied the quantitative impact of the correct noise model in the ML algorithm applied to simulated and real PET fluorodeoxyglucose (FDG) brain images, given a simplified but accurate reconstruction model with spatially invariant resolution. For FBP reconstruction, several Metz filters were chosen and images with different resolution were obtained depending on the order (1-400) of the Metz filters. Comparisons were made based on the mean Fourier spectra of the projection amplitudes, the noise-power spectra, and the mean region-of-interest signal and noise behaviour in the images. For images with resolution recovery beyond the intrinsic detector resolution, the noise increased significantly for FBP compared with ML. This indicates that in the process of signal recovery using ML, the noise is decoupled from the signal. Such noise decoupling is not possible for FBP. However, for image resolution equivalent to or less than the intrinsic detector resolution, FBP with Metz filters of various orders can achieve a performance similar to ML. The significance of the noise-decoupling advantage in ML depends on the reconstructed image resolution required for specific imaging tasks.
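The ML-EM update underlying the comparison above is standard and can be sketched compactly. This is an illustrative dense-matrix version (real PET systems use sparse or on-the-fly projectors); the multiplicative form enforces the positivity constraint and matches the Poisson likelihood:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Minimal ML-EM sketch for emission tomography (illustration only).

    A: (n_bins, n_pixels) system matrix; y: measured counts.
    The multiplicative update keeps the estimate positive and increases
    the Poisson likelihood at every iteration."""
    x = np.ones(A.shape[1])            # strictly positive initial estimate
    sens = A.sum(axis=0)               # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                   # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

On noise-free data from a tiny toy system the iteration converges to the true activity; with noisy data, stopping rules or regularization are needed, which is exactly the noise/resolution trade-off the abstract discusses.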

3.
Background

Model-based iterative reconstruction (MBIR) is a promising reconstruction method that could improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of MBIR over adaptive statistical iterative reconstruction (ASIR) and the conventional filtered back-projection (FBP) technique for noise reduction and image quality improvement in low-dose chest CT for children with necrotizing pneumonia.

Methods

Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard-of-care low-dose CT scans were included. Thin-slice (0.625 mm) images were retrospectively reconstructed using the MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using Friedman's test.

Results

Radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in better detection and identification of necrotic lesions (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images). The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. The image noise decreased by 58.9% and 26.3% in MBIR images as compared to FBP and ASIR images. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images.

Conclusions

Compared with ASIR and FBP reconstruction, the MBIR technique significantly improved the quality of chest CT images in children with necrotizing pneumonia, supporting a more confident and accurate diagnosis of necrotizing pneumonia.


4.
Exact BPF and FBP algorithms for nonstandard saddle curves
Yu H, Zhao S, Ye Y, Wang G. Medical Physics 2005, 32(11):3305-3312
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. In particular, the nonstandard saddle curve attracts attention, as this construct allows continuous periodic scanning of a volume of interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. A rebinning procedure is then used to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both algorithms exhibit similar noise characteristics.

5.
Pinhole collimation can be used to improve spatial resolution in SPET. However, the resolution improvement is achieved at the cost of reduced sensitivity, which leads to projection images with poor statistics. Images reconstructed from these projections using maximum likelihood expectation maximization (ML-EM) algorithms, which have been used to reduce the artefacts generated by filtered backprojection (FBP) based reconstruction, suffer from a noise/bias trade-off: noise contaminates the images at high iteration numbers, whereas early termination of the algorithm produces images that are excessively smooth and biased towards the initial estimate of the algorithm. To limit the noise accumulation we propose the pinhole median root prior (PH-MRP) reconstruction algorithm. MRP is a Bayesian reconstruction method that has already been used in PET imaging and shown to possess good noise reduction and edge preservation properties. In this study the PH-MRP algorithm was accelerated with the ordered subsets (OS) procedure and compared to the FBP, OS-EM and conventional Bayesian reconstruction methods in terms of noise reduction, quantitative accuracy, edge preservation and visual quality. The results showed that the accelerated PH-MRP algorithm was very robust. It provided visually pleasing images with a lower noise level than FBP or OS-EM, and with smaller bias and sharper edges than the conventional Bayesian methods.
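The median root prior idea can be sketched minimally: pull each pixel toward its local median, so flat regions and clean step edges (both invariant under a median filter) are left untouched while isolated noise spikes are penalized. The one-step-late-style correction and the prior weight `beta` below are hypothetical simplifications for illustration, not the authors' PH-MRP implementation:

```python
import numpy as np

def median3x3(img):
    # 3x3 median filter with edge replication (pure NumPy, illustration only).
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    shifts = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def mrp_step(x, beta=0.3):
    """One MRP-style correction: penalize the deviation of each pixel from
    its local median. Because flat areas and step edges are roots of the
    median filter, they pass through unchanged, which is the source of the
    edge-preservation property. 'beta' is a hypothetical prior weight."""
    med = median3x3(x)
    return x / (1.0 + beta * (x - med) / np.maximum(med, 1e-12))
```

In a full reconstruction this correction would be interleaved with (OS-)EM updates; here only the prior step is shown.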

6.
Objective: To investigate the effect of an image-domain iterative reconstruction algorithm on image quality and radiation dose in unenhanced abdominal CT. Methods: 150 patients who underwent unenhanced abdominal CT at Liaoyang Central Hospital between January 2017 and April 2018 were randomly assigned, in order of presentation, to an observation group and a control group (75 patients each). All scans used automatic tube-current modulation at a tube voltage of 130 kV. The observation group was scanned with a quality reference of 150 mAs and reconstructed with the image-domain iterative algorithm; the control group used a quality reference of 250 mAs and filtered backprojection. Objective image quality was evaluated by CT number, image noise (SD), signal-to-noise ratio and contrast-to-noise ratio; subjective image quality was also scored, and the volume CT dose index (CTDIvol) was recorded for both groups. Results: Image noise (SD) in the liver and spleen was significantly lower, and signal-to-noise ratio significantly higher, in the observation group than in the control group (P<0.05); CT number, contrast-to-noise ratio and subjective overall quality scores did not differ significantly between the groups (P>0.05); CTDIvol was (10.02±2.85) mGy in the observation group, significantly lower than (15.68±4.36) mGy in the control group (P<0.05). Conclusion: The image-domain iterative reconstruction algorithm maintains image quality in unenhanced abdominal CT while effectively reducing radiation dose.

7.
Pan X. Medical Physics 2000, 27(9):2031-2036
The hybrid algorithms developed recently for the reconstruction of fan-beam images possess computational and noise properties superior to those of the fan-beam filtered backprojection (FFBP) algorithm. However, the hybrid algorithms cannot be applied directly to a halfscan fan-beam sinogram because they require knowledge of a fullscan fan-beam sinogram. In this work, we developed halfscan-hybrid algorithms for image reconstruction in halfscan computed tomography (CT). Numerical evaluation indicates that the proposed halfscan-hybrid algorithms are computationally more efficient than the widely used halfscan-FFBP algorithms. Also, the results of quantitative studies demonstrated clearly that the noise levels in images reconstructed by use of the halfscan-hybrid algorithm are generally lower and spatially more uniform than those in images reconstructed by use of the halfscan-FFBP algorithm. Such reduced and uniform image noise levels may translate into improved accuracy and precision of lesion detection and parameter estimation in noisy CT images without increasing the radiation dose to the patient. Therefore, the halfscan-hybrid algorithms may have significant implications for image reconstruction in conventional and helical CT.

8.
This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. The performance of the algorithms is illustrated using simulated data.

9.
For the purpose of obtaining x-ray tomographic images, statistical reconstruction (SR) provides a general framework with possible advantages over analytical algorithms such as filtered backprojection (FBP) in terms of flexibility, resolution, contrast and image noise. However, SR images may be seriously affected by some artefacts that are not present in FBP images. These artefacts appear as aliasing patterns and as severe overshoots in areas of sharp intensity transitions ('edge artefacts'). We characterize this inherent property of iterative reconstructions and hypothesize how discretization errors during reconstruction contribute to the formation of the artefacts. An adequate solution to the problem is to perform the reconstructions on an image grid that is finer than that typically employed for FBP reconstruction, followed by a downsampling of the resulting image to the granularity normally used for display. Furthermore, it is shown that such a procedure is much more effective than post-filtering of the reconstructions. The resulting SR images have a superior noise-resolution trade-off compared to FBP, which may facilitate dose reduction during CT examinations.
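The remedy described above amounts to reconstructing on a finer grid and block-averaging the result down to display granularity. A minimal sketch of the downsampling step (the factor-of-2 usage below is only an example):

```python
import numpy as np

def downsample(img, factor):
    """Block-average a fine-grid reconstruction to display granularity.
    Assumes both image side lengths are multiples of 'factor'."""
    h, w = img.shape
    if h % factor or w % factor:
        raise ValueError("image dimensions must be multiples of factor")
    # Group pixels into factor x factor blocks and average each block.
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

The averaging suppresses the fine-grid aliasing/overshoot components before display, which is why the abstract reports it outperforming post-filtering of a coarse-grid reconstruction.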

10.
Objective: To investigate the factors affecting the axial resolution, i.e. the slice sensitivity profile (SSP), and the image noise of multi-detector spiral CT (MSCT). Methods: Using a routine clinical abdominal scan protocol, phantoms of different diameters were scanned on MSCT while varying the slice thickness, pitch, tube voltage (kV) and reconstruction kernel, and the effects of these scan parameters on the SSP and image noise were analyzed statistically. Results: With pitch and collimation width held constant, the full width at half maximum (FWHM) of the SSP remained essentially unchanged across slice thicknesses and reconstruction kernels (P>0.05); with reconstruction kernel and collimation width held constant, the FWHM remained essentially unchanged across pitches and slice thicknesses (P>0.05); the FWHM also remained essentially unchanged across collimation widths at the same pitch and slice thickness (P>0.05). Image noise decreased with increasing slice thickness and mAs (P<0.05), and with increasing kV; image noise differed significantly between reconstruction kernels (P<0.05). Conclusion: Reconstruction kernel, pitch and collimation width have little effect on the SSP, and pitch has little effect on image noise, whereas slice thickness, reconstruction kernel, mAs and kV strongly affect image noise: noise decreases as slice thickness, mAs and kV increase, and higher-resolution kernels yield higher noise.

11.
Ziegler A, Nielsen T, Grass M. Medical Physics 2008, 35(4):1317-1327
It has been shown that images reconstructed for transmission tomography with iterative maximum likelihood (ML) algorithms exhibit a higher signal-to-noise ratio than images reconstructed with filtered back-projection type algorithms. However, a drawback of ML reconstruction in particular, and iterative reconstruction in general, is the requirement that the reconstructed field of view (FOV) cover the whole volume that contributes to the absorption. In the case of a high resolution reconstruction, this demands a huge number of voxels. This article shows how an iterative ML reconstruction can be limited to a region of interest (ROI) without losing the advantages of ML reconstruction. Compared with a full-FOV ML reconstruction, the reconstruction speed is increased mainly by reducing the number of voxels necessary for an ROI reconstruction. In addition, the speed of convergence is increased.

12.
Statistical reconstruction (SR) methods provide a general and flexible framework for obtaining tomographic images from projections. For several applications SR has been shown to outperform analytical algorithms in terms of the resolution-noise trade-off achieved in the reconstructions. A disadvantage of SR is the long computational time required to obtain the reconstructions, in particular when the large data sets characteristic of x-ray computed tomography (CT) are involved. As was shown recently, by combining statistical methods with block-iterative acceleration schemes [e.g., as in the ordered subsets convex (OSC) algorithm], the reconstruction time for x-ray CT applications can be reduced by about two orders of magnitude. There are, however, some factors lengthening the reconstruction process that hamper both accelerated and standard statistical algorithms to a similar degree. In this simulation study based on monoenergetic and scatter-free projection data, we demonstrate that one of these factors is the extremely high number of iterations needed to remove artifacts that can appear around high-contrast structures. We also show (using the OSC method) that these artifacts can be adequately suppressed if the statistical reconstruction is initialized with images generated by means of Radon inversion algorithms such as filtered backprojection (FBP). This allows the reconstruction time to be shortened by as much as one order of magnitude. Although initialization of the statistical algorithm with an FBP image introduces some additional noise into the first iteration of the OSC reconstruction, the resolution-noise trade-off and the contrast-to-noise ratio of the final images are not markedly compromised.

13.
X-ray computed tomography (CT) images of patients bearing metal intracavitary applicators or other metal foreign objects exhibit severe artifacts, including streaks and aliasing. We have systematically evaluated via computer simulations the impact of scattered radiation, the polyenergetic spectrum, and measurement noise on the performance of three reconstruction algorithms: conventional filtered backprojection (FBP), deterministic iterative deblurring, and a new iterative algorithm, alternating minimization (AM), based on a CT detector model that includes noise, scatter, and polyenergetic spectra. Contrary to the dominant view in the literature, FBP streaking artifacts are due mostly to mismatches between FBP's simplified model of CT detector response and the physical process of signal acquisition. Artifacts in AM images are significantly mitigated, as this algorithm substantially reduces detector-model mismatches. However, metal artifacts are reduced to acceptable levels only when prior knowledge of the metal object in the patient, including its pose, shape, and attenuation map, is used to constrain AM's iterations. AM image reconstruction, in combination with object-constrained CT to estimate the pose of metal objects in the patient, is a promising approach for effectively mitigating metal artifacts and making quantitative estimation of tissue attenuation coefficients a clinical possibility.

14.
Positron emission tomography (PET) can provide in vivo, quantitative and functional information for diagnosis; however, PET image quality depends strongly on the reconstruction algorithm. Iterative algorithms, such as the maximum likelihood expectation maximization (MLEM) algorithm, are rapidly becoming the standard for image reconstruction in emission computed tomography. The conventional MLEM algorithm assumes a Poisson model in its system matrix, which is no longer valid after delay-subtraction randoms correction of the data. The aim of this study is to overcome this problem. Maximum likelihood estimation using the expectation maximization algorithm (MLE-EM) is adopted and modified to reconstruct microPET images with randoms correction from joint prompt and delay sinograms; this reconstruction method is called PDEM. The proposed joint Poisson model preserves Poisson properties without increasing the variance (noise) associated with randoms correction. The work here is an initial application/demonstration without normalization, scattering, attenuation, or arc correction. Coefficient of variation (CV) and full width at half maximum (FWHM) values were used to compare the quality of reconstructed microPET images of physical phantoms acquired by the filtered backprojection (FBP), ordered-subsets expectation maximization (OSEM) and PDEM approaches. Experimental and simulated results demonstrate that the proposed PDEM produces better image quality than the FBP and OSEM approaches.
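The motivation for a joint model can be sketched generically: instead of pre-subtracting delayed coincidences (which destroys the Poisson statistics of the data), keep the prompts Poisson and include the expected randoms in the forward model. The following is an illustrative dense-matrix MLEM variant under that assumption, not the authors' exact PDEM algorithm:

```python
import numpy as np

def mlem_with_randoms(A, prompts, randoms_mean, n_iter=2000):
    """Illustrative MLEM variant that models the expected randoms in the
    forward projection rather than subtracting delayed coincidences from
    the data. 'randoms_mean' is the assumed mean randoms contribution per
    sinogram bin; a sketch of the joint-model idea only."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        expected = A @ x + randoms_mean          # trues + expected randoms
        ratio = prompts / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Because the prompt data stay Poisson, the update retains the usual MLEM noise behaviour, whereas subtracted data can go negative and no longer obey a Poisson model.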

15.
Zou Y, Pan X, Xia D, Wang G. Medical Physics 2005, 32(8):2639-2648
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector assembly remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable table speed, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.

16.
In the last few years, mathematically exact algorithms, including the backprojection-filtration (BPF) algorithm, have been developed for accurate image reconstruction in helical cone-beam CT. The BPF algorithm requires minimum data, and can reconstruct region-of-interest (ROI) images from data containing truncations. However, similar to other existing reconstruction algorithms for helical cone-beam CT, the BPF algorithm involves a backprojection with a spatially varying weighting factor, which is computationally demanding and, more importantly, can lead to undesirable numerical properties in reconstructed images. In this work, we develop a rebinned BPF algorithm in which the backprojection invokes no spatially varying weighting factor for accurate image reconstruction from helical cone-beam projections. This rebinned BPF algorithm is computationally more efficient and numerically more stable than the original BPF algorithm, while it also retains the nice properties of the original BPF algorithm such as minimum data requirement and ROI-image reconstruction from truncated data. We have also performed simulation studies to validate and evaluate the rebinned BPF algorithm.

17.
In this paper, we address the problem of two-dimensional image reconstruction from fan-beam data acquired along a full 2π scan. Conventional approaches that follow the filtered-backprojection (FBP) structure require a weighted backprojection, with the weight depending on the point to be reconstructed and also on the source position; this weight appears only in the case of divergent beam geometries. Compared to reconstruction from parallel-beam data, the backprojection weight implies an increase in computational effort and is also thought to have some negative impact on the noise properties of the reconstructed images. We demonstrate here that direct FBP reconstruction from full-scan fan-beam data is possible with no backprojection weight. Using computer-simulated, realistic fan-beam data, we compared our novel FBP formula with no backprojection weight to an FBP formula based on equal weighting of all data. Comparisons in terms of signal-to-noise ratio, spatial resolution and computational efficiency are presented. These studies show that the suggested formula yields images with a reduced noise level at almost identical spatial resolution. This effect increases quickly with the distance from the center of the field of view, from 0% at the center to 20% less noise at 20 cm, and to 40% less noise at 25 cm. Furthermore, the suggested method is computationally less demanding and reduces computation time by a gain that was found to vary between 12% and 43% on the computers used for evaluation.

18.
Yu L, Pan X. Medical Physics 2003, 30(10):2629-2637
A half-scan strategy can be used to reduce scanning time and the radiation dose delivered to the patient in fan-beam computed tomography (CT). In helical CT, the data weighting/interpolation functions are often devised based upon half-scan configurations. The half-scan fan-beam filtered backprojection (FFBP) algorithm is generally used for image reconstruction from half-scan data. It can, however, be susceptible to sample aliasing and data noise for configurations with short focal lengths and/or large fan angles, leading to nonuniform resolution and noise properties in reconstructed images. Uniform resolution and noise properties are generally desired because they may increase the utility of reconstructed images in estimation and/or detection/classification tasks. In this work, we propose an algorithm for reconstruction of images with uniform noise and resolution properties in half-scan CT. In an attempt to evaluate the image-noise properties, we derive analytic expressions for the image variances obtained by use of the half-scan algorithms. We also perform numerical studies to assess quantitatively the resolution and noise properties of the algorithms. The results of these studies confirm that the proposed algorithm yields images with more uniform spatial resolution and with lower and more uniform noise levels than does the half-scan FFBP algorithm. Empirical results obtained in noise studies also verify the validity of the derived expressions for the image variances. The proposed algorithm would be particularly useful for image reconstruction from data acquired with configurations with short focal lengths and a large field of measurement, which may be encountered in compact micro-CT and radiation-therapy CT applications. The analytic results on the image-noise properties can be used for image-quality assessment in detection/classification tasks by use of model observers.

19.
20.
Purpose: Low contrast sensitivity of CT scanners is regularly assessed by subjective scoring of low contrast detectability within phantom CT images. Since in these phantoms the low contrast objects are arranged in known fixed patterns, subjective rating of low contrast visibility might be biased. The purpose of this study was to develop and validate software for automated, objective low contrast detectability based on a model observer.

Methods: Images of the low contrast module of the Catphan 600 phantom were used for the evaluation of the software. This module contains two subregions: the supraslice region with three groups of low contrast objects (each consisting of nine circular objects with diameter 2-15 mm and contrast 0.3, 0.5, and 1.0%, respectively) and the subslice region with three groups of four circular objects each (diameter 3-9 mm; contrast 1.0%). The software offered automated determination of low contrast detectability using an NPWE (nonprewhitening matched filter with an eye filter) model observer for the supraslice region. The model observer correlated templates of the low contrast objects with the acquired images of the Catphan phantom, and a discrimination index d' was calculated. This index was transformed into a proportion correct (PC) value. In the two-alternative forced choice (2-AFC) experiments used in this study, a PC ≥ 75% was proposed as the threshold for deciding whether objects were visible. As a proof of concept, the influence of kVp (between 80 and 135 kV), mAs (25-200 mAs range) and reconstruction filter (four filters, two soft and two sharp) on low contrast detectability was investigated. To validate the outcome of the software in a qualitative way, a human observer study was performed.

Results: The expected influence of kV, mAs and reconstruction filter on image quality is consistent with the results of the proposed automated model. Higher values for d' (or PC) are found with increasing mAs or kV values and for the soft reconstruction filters. For the highest contrast group (1%), PC values were well above 75% for all object diameters >2 mm, for all conditions. For the 0.5% contrast group, the same behavior was observed for object diameters >3 mm for all conditions. For the 0.3% contrast group, PC values were higher than 75% for object diameters >6 mm, except for the series acquired at the lowest dose (25 mAs), which gave lower PC values. In the human observer study similar trends were found.

Conclusions: We have developed an automated method to objectively investigate image quality using the NPWE model in combination with images of the Catphan phantom low contrast module. As a first step, low contrast detectability as a function of both acquisition and reconstruction parameter settings was successfully investigated with the software. In future work, this method could play a role in the evaluation of image reconstruction algorithms, dose reduction strategies or novel CT technologies, and other model observers may be implemented as well.
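The transformation from the detectability index d' to proportion correct used for the 75% visibility threshold above follows the standard 2-AFC relation PC = Φ(d'/√2), with Φ the standard normal cumulative distribution function:

```python
import math

def pc_from_dprime(d):
    """2-AFC proportion correct from detectability index d':
    PC = Phi(d'/sqrt(2)) = 0.5 * (1 + erf(d'/2))."""
    return 0.5 * (1.0 + math.erf(d / 2.0))
```

At d' = 0 the observer is guessing (PC = 0.5), and the PC = 75% threshold corresponds to d' ≈ 0.95.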
