Similar Articles
20 similar articles found
1.
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer from the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm, which specifically aims to reduce the effect of echo artefacts as well as improve the signal-to-noise ratio and contrast and extend the field of view. Our method weights image information based on local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue-mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. On phantom, volunteer and patient data our method achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality when using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
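The core idea — averaging the overlapping views with per-pixel weights derived from how consistent each view is with the others — can be sketched as follows. This is a simplified illustration that uses deviation from the pixel-wise median as the consistency measure; it is not the authors' exact coherence metric:

```python
import numpy as np

def compound(images, eps=1e-6):
    """Consistency-weighted compounding of co-registered images.

    `images` is a stack of shape (N, H, W). Each pixel of each view is
    weighted by how consistent it is with the other views at that location
    (inverse absolute deviation from the pixel-wise median), then the
    weighted mean is taken. A simplified sketch, not the paper's exact
    coherence-based weighting.
    """
    imgs = np.asarray(images, dtype=float)
    med = np.median(imgs, axis=0)                  # consensus estimate
    w = 1.0 / (np.abs(imgs - med) + eps)           # consistent pixels get high weight
    return (w * imgs).sum(axis=0) / w.sum(axis=0)  # normalized weighted mean
```

A single inconsistent artefact pixel in one view is strongly down-weighted, so it is suppressed far more effectively than with plain mean compounding.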

2.
An aim of magnetic resonance electrical impedance tomography (MREIT) is to visualize the internal current density and conductivity of the electrically imaged object by injecting current through electrodes attached to it. Due to the limited amount of injection current, one of the most important factors in MREIT is how to control the noise contained in the measured magnetic flux density data. This paper describes a new iterative algorithm, called the transversal J-substitution algorithm, which is robust to measurement noise. As a result, the proposed transversal J-substitution algorithm considerably improves the quality of the reconstructed conductivity image under a low injection current. The relation between the reconstructed conductivity contrast and the measured noise in the magnetic flux density is analyzed. We show that the contrast of the first conductivity update from a homogeneous initial guess using the proposed algorithm has sufficient distinguishability to detect the anomaly. Results from numerical simulations demonstrate that the transversal J-substitution algorithm is robust to noise. For practical implementation of MREIT, we performed experiments on an agarose gel phantom using low injection currents of 1 mA and 5 mA to reconstruct the interior conductivity distribution.

3.
Motion estimation for cardiac emission tomography by optical flow methods
This paper describes a new method for estimating 3D, non-rigid object motion in a time sequence of images. The method is a generalization of a standard optical flow algorithm that is incorporated into a successive quadratic approximation framework. The method was evaluated for gated cardiac emission tomography using images obtained from a mathematical, 4D phantom and a physical, dynamic phantom. The results showed that the proposed method offers improved motion estimation accuracy relative to the standard optical flow method. Convergence of the proposed algorithm was evidenced by a monotonically decreasing objective function value over iterations. Practical applications of the motion estimation method in cardiac emission tomography include quantitative myocardial motion estimation and 4D, motion-compensated image reconstruction.
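The standard optical-flow step that the paper generalizes can be illustrated with a minimal least-squares estimate of a single global translation between two frames — a toy stand-in for the full non-rigid, successive-quadratic scheme:

```python
import numpy as np

def lucas_kanade_shift(im1, im2):
    """Estimate one global (dy, dx) translation between two frames.

    Solves the optical-flow constraint Ix*u + Iy*v + It = 0 in least
    squares over the whole image. A minimal sketch of the basic,
    non-iterative optical-flow step, not the paper's 3D non-rigid method.
    """
    Iy, Ix = np.gradient(im1.astype(float))        # spatial gradients
    It = im2.astype(float) - im1.astype(float)     # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return v, u                                    # (dy, dx)
```

For a smooth image and a sub-pixel shift, the least-squares solution recovers the displacement closely.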

4.
Objective: To compare different breast CAD segmentation algorithms using a method based on ROC analysis and the ALVIM phantom, obtain comparative results on accuracy and signal detection rate, and establish a method suitable for cross-comparison of segmentation algorithms. Methods: Four complete segmentation algorithms commonly used in breast CAD were implemented and applied to both clinical images and ALVIM phantom images. Signal-detection parameters, including the overlap ratio with the gold standard and the segmentation true-positive and false-positive rates, were computed. ROC analysis of phantom-image reading aided by the four segmentation contours was performed to determine how much each algorithm assisted the readers. Results: We obtained the overlap ratio between each algorithm's segmentation and the actual lesion area, the signal-detection capability of each algorithm, and the degree to which each segmentation aided the diagnosing physicians' reading. Conclusion: The method based on ROC analysis and segmentation of the ALVIM phantom can comprehensively characterize the segmentation performance and signal-detection capability of different algorithms, enabling cross-comparison of segmentation algorithms and testing of their accuracy and robustness. With the five-point rating scale, the ALVIM phantom computation is simple and convenient, facilitating ROC comparisons across large datasets and multiple methods.

5.
A comparative study of CT performance testing using two phantoms
This paper describes performance testing of three newly installed CT scanners using two different standard phantoms (the AAPM phantom and the RMI 461A phantom), with side-by-side comparison of the two sets of measurements. Results: For most performance items, including high-contrast resolution, image uniformity and noise, slice-thickness deviation and CT-number linearity, the measurements showed good agreement. A clear discrepancy was found in the measured low-contrast resolution, possibly because long-term X-ray exposure of the phantom inserts caused chemical effects that shifted the material CT numbers and raised the background contrast; a correction based on the inverse contrast-detail relationship is needed. The paper also discusses several current technical issues in CT performance testing, including improvements to test methods and evaluation criteria.

6.
Contrast-detail phantom scoring methodology
Published results of medical imaging studies that use contrast detail mammography (CDMAM) phantom images for analysis are difficult to compare, since the data are often not analyzed in the same way. To address this situation, the concept of ideal contrast detail curves is suggested. The ideal contrast detail curves are constructed by requiring the same product of diameter and contrast (disk thickness) for the minimal correctly determined object in every row of the CDMAM phantom image. A correlation and comparison of five different quality parameters of the CDMAM phantom image, determined for the obtained ideal contrast detail curves, is performed. The image quality parameters compared are: (1) contrast detail curve--a graph correlating the "minimal correct reading" diameter and the disk thickness; (2) correct observation ratio--the ratio of the number of correctly identified objects to the actual total number of objects, multiplied by 100; (3) image quality figure--the sum of the products of the diameter of the smallest scored object and its relative contrast; (4) figure-of-merit--the value obtained by extrapolating the contrast detail curve to the origin (zero disk diameter); and (5) k-factor--the product of the thickness and the diameter of the smallest correctly identified disks. The analysis showed a nonlinear relationship between the above parameters, which means that using different CDMAM image quality parameters can potentially lead to different conclusions about changes in image quality. Construction of the ideal contrast detail curves for the CDMAM phantom is an attempt to determine the quantitative limits of the CDMAM phantom as employed for image quality evaluation. These limits are determined by the relationship between certain parameters of a digital mammography system and the set of gold disk sizes in the CDMAM phantom. Recommendations are made on the selection of CDMAM phantom regions that should be used for scoring at different image quality levels, and on which scoring methodology may be most appropriate. Special attention is also paid to the use of the CDMAM phantom for image quality assessment of digital mammography systems, particularly in the vicinity of the Nyquist frequency.
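Several of the scalar quality parameters listed above are straightforward to compute from a scored reading. The sketch below assumes a simplified reading format (one list of correctly identified disk diameters per thickness row) and interprets "relative contrast" as disk thickness; real CDMAM scoring additionally applies nearest-neighbour correction and fixed row/column grids:

```python
def cdmam_metrics(rows, n_total):
    """Scalar quality parameters from a CDMAM-style reading.

    `rows` is a list of (thickness_mm, diameters_correct) pairs, one per
    phantom row, where diameters_correct lists the disk diameters (mm)
    correctly identified at that thickness. Illustrative only.
    """
    n_correct = sum(len(d) for _, d in rows)
    correct_ratio = 100.0 * n_correct / n_total      # correct observation ratio
    # image quality figure: sum over rows of thickness * smallest seen diameter
    iqf = sum(t * min(d) for t, d in rows if d)
    # k-factor: thickness * diameter of the single smallest correct disk
    t_min, d_min = min(((t, min(d)) for t, d in rows if d),
                       key=lambda td: td[1])
    return correct_ratio, iqf, t_min * d_min
```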

7.
X-ray scatter correction algorithm for cone beam CT imaging
Ning R, Tang X, Conover D. Medical Physics 2004;31(5):1195-1202
Developing and optimizing an x-ray scatter control and reduction technique is one of the major challenges for cone beam computed tomography (CBCT), because CBCT is much less immune to scatter than fan-beam CT. X-ray scatter reduces image contrast, increases image noise and introduces reconstruction error into CBCT. To reduce scatter interference, a practical algorithm based upon the beam stop array technique and image sequence processing has been developed on a flat-panel-detector-based CBCT prototype scanner. This paper presents the beam-stop-array-based scatter correction algorithm and its evaluation through phantom studies. The results indicate that the algorithm is practical and effective in reducing and correcting x-ray scatter for CBCT imaging tasks.
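The principle of beam-stop-array scatter estimation — sample the scatter-only signal behind opaque stops, interpolate it over the detector, and subtract — can be sketched for a single projection. This is a simplified 2D illustration; the paper's algorithm additionally uses image sequence processing:

```python
import numpy as np

def beam_stop_scatter_correct(proj, stop_rows, stop_cols):
    """Beam-stop-array scatter correction for one projection image.

    Behind each opaque stop the primary beam is blocked, so the detector
    reads scatter only. Those samples are interpolated over the whole
    detector (separable 1D linear interpolation here) and subtracted.
    `stop_rows`/`stop_cols` are the sorted detector indices of the stops.
    """
    samples = proj[np.ix_(stop_rows, stop_cols)]   # scatter-only readings
    full_r = np.arange(proj.shape[0])
    full_c = np.arange(proj.shape[1])
    # interpolate along columns at each stop row, then along rows
    tmp = np.array([np.interp(full_c, stop_cols, s) for s in samples])
    scatter = np.array([np.interp(full_r, stop_rows, tmp[:, c])
                        for c in range(proj.shape[1])]).T
    return np.clip(proj - scatter, 0.0, None)
```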

8.
The x-ray imaging dose from serial cone-beam computed tomography (CBCT) scans raises a clinical concern in most image-guided radiation therapy procedures. The goal of this paper is to develop a fast graphics processing unit (GPU)-based algorithm to reconstruct high-quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. For this purpose, we have developed an iterative tight-frame (TF)-based CBCT reconstruction algorithm. The condition that a real CBCT image has a sparse representation under a TF basis is imposed during the iteration as a regularization on the solution. To speed up the computation, a multi-grid method is employed. Our GPU implementation has achieved high computational efficiency, and a CBCT image of resolution 512 × 512 × 70 can be reconstructed in ~5 min. We have tested our algorithm on a digital NCAT phantom and a physical Catphan phantom. We find that our TF-based algorithm is able to reconstruct CBCT images in the context of undersampling and low mAs levels. We have also quantitatively analyzed the reconstructed CBCT image quality in terms of the modulation transfer function and contrast-to-noise ratio under various scanning conditions. The results confirm the high CBCT image quality obtained from our TF algorithm. Moreover, our algorithm has been validated in a real clinical context using a head-and-neck patient case. The developed TF algorithm was also compared with the current state-of-the-art TV algorithm in the various cases studied, in terms of reconstructed image quality and computational efficiency.
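Sparsity-regularized iterative reconstruction of this kind typically alternates a data-fidelity update with shrinkage of the frame coefficients. The shrinkage step is soft-thresholding, shown here as a generic sketch (the paper's full TF implementation and multi-grid scheme are considerably more involved):

```python
import numpy as np

def soft_threshold(c, t):
    """Soft-thresholding: the proximal step of an L1 sparsity penalty.

    Frame coefficients with magnitude below `t` are set to zero; larger
    ones are shrunk toward zero by `t`. Generic sketch of the shrinkage
    used in tight-frame regularized iterations.
    """
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```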

9.
This study simulates a multi-pinhole single-photon emission computed tomography (SPECT) system using the Monte Carlo method, and investigates different multi-pinhole designs for quantitative mouse brain imaging. Prior approaches investigating multi-pinhole SPECT were often not optimal, as the number and geometrical arrangement of pinholes were usually chosen empirically. The present study seeks to optimize the number of pinholes for a given pinhole arrangement, and also for the specific application of quantitative neuroreceptor binding in the mouse brain. An analytical Monte Carlo simulation-based method was used to generate the projection data for various count levels. A three-dimensional ordered-subsets expectation-maximization algorithm was developed and used to reconstruct the images, incorporating a realistic pinhole model for resolution recovery and noise reduction. Although artefacts arising from overlapping projections could be a major problem in multi-pinhole reconstruction, the cold-rod phantom study showed minimal loss of spatial resolution in multi-pinhole systems compared to a single-pinhole system with the same pinhole diameter. A quantitative study of neuroreceptor binding sites using a mouse brain phantom and low activity (37 MBq) showed that the multi-pinhole system outperformed the single-pinhole system by maintaining the mean and lowering the variance of the measured uptake ratio. Multi-pinhole collimation can be used to reduce the injected dose and thereby reduce the radiation exposure to the animal. The results also suggest that the nine-pinhole configuration shown in this paper is a good choice for mouse brain imaging.
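The ordered-subsets expectation-maximization update used for reconstruction can be sketched for a generic nonnegative linear system, without the realistic pinhole resolution model described in the paper:

```python
import numpy as np

def osem(A, y, n_iter=50, n_subsets=2, x0=None):
    """Ordered-subsets EM for a nonnegative linear model y ~ Poisson(A @ x).

    The projection rows are split into interleaved subsets and the
    multiplicative EM update is applied subset by subset. Bare-bones
    sketch: no pinhole resolution model, attenuation, or scatter terms.
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            fwd = np.maximum(As @ x, 1e-12)                  # forward projection
            sens = np.maximum(As.T @ np.ones(len(idx)), 1e-12)
            x = x * (As.T @ (ys / fwd)) / sens               # multiplicative update
    return x
```

The multiplicative form keeps the estimate nonnegative at every step, which is why EM-type updates are standard for emission data.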

10.
The strain image contrast of some in vivo breast lesions changes with increasing applied load. This change is attributed to differences in the nonlinear elastic properties of the constituent tissues suggesting some potential to help classify breast diseases by their nonlinear elastic properties. A phantom with inclusions and long-term stability is desired to serve as a test bed for nonlinear elasticity imaging method development, testing, etc. This study reports a phantom designed to investigate nonlinear elastic properties with ultrasound elastographic techniques. The phantom contains four spherical inclusions and was manufactured from a mixture of gelatin, agar and oil. The phantom background and each of the inclusions have distinct Young's modulus and nonlinear mechanical behavior. This phantom was subjected to large deformations (up to 20%) while scanning with ultrasound, and changes in strain image contrast and contrast-to-noise ratio between inclusion and background, as a function of applied deformation, were investigated. The changes in contrast over a large deformation range predicted by the finite element analysis (FEA) were consistent with those experimentally observed. Therefore, the paper reports a procedure for making phantoms with predictable nonlinear behavior, based on independent measurements of the constituent materials, and shows that the resulting strain images (e.g., strain contrast) agree with that predicted with nonlinear FEA.
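The two image-quality measures tracked in this study — strain contrast and contrast-to-noise ratio between inclusion and background — can be computed from region samples as below. This uses the standard elastographic CNR definition; whether it matches the authors' exact convention is an assumption:

```python
import numpy as np

def strain_contrast_cnr(strain_inc, strain_bg):
    """Strain contrast and elastographic CNR between inclusion and background.

    `strain_inc` and `strain_bg` are arrays of strain samples from the two
    regions. Contrast is taken as mean background strain over mean inclusion
    strain (a stiff inclusion strains less); CNR uses the usual definition
    sqrt(2 (m_bg - m_inc)^2 / (var_inc + var_bg)). Generic metric sketch.
    """
    mi, mb = np.mean(strain_inc), np.mean(strain_bg)
    vi, vb = np.var(strain_inc), np.var(strain_bg)
    contrast = mb / mi
    cnr = np.sqrt(2.0 * (mb - mi) ** 2 / (vi + vb))
    return contrast, cnr
```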

11.
A recently developed blind deblurring algorithm based on the edge-to-noise ratio has been applied to improve the quality of spiral CT images. Since the discrepancy measure used to quantify the edge and noise effects is not symmetric, there are several ways to formulate the edge-to-noise ratio. This article investigates the performance of these ratios with phantom and patient data. In the phantom study, it is shown that all the ratios share similar properties, validating the blind deblurring algorithm. The image fidelity improvement varies from 29% to 33% for the different ratios, according to the root mean square error (RMSE) criterion; the optimal iteration number determined for each ratio varies from 25 to 35. The ratios associated with the most satisfactory performance, yielding an image fidelity improvement of about 33% in the numerical simulation, are singled out. After automatic blind deblurring with the selected ratios, the spatial resolution of CT is substantially refined in all the cases tested.

12.
High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge parts of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original TV norm. During the reconstruction process, the pixels at the edges are gradually identified and given low penalty weight. Our iterative algorithm is implemented on a graphics processing unit to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information in low-contrast structures and therefore maintains acceptable spatial resolution.
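The key ingredient — a TV norm whose per-pixel penalty weight is small at edges — can be sketched as below. The exponential weight is one common choice and is an assumption here, not necessarily the paper's exact weighting form:

```python
import numpy as np

def eptv_norm(u, delta=0.1):
    """Edge-preserving TV norm of a 2D image.

    Gradient magnitudes are computed with forward differences; each pixel's
    TV contribution is scaled by a penalty weight that is near 1 in flat
    regions and near 0 at strong edges, so minimizing this norm smooths
    preferentially away from edges. The exponential weight is an assumed,
    illustrative choice.
    """
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    grad = np.hypot(gx, gy)
    w = np.exp(-(grad / delta) ** 2)   # ~1 in flat areas, ~0 at edges
    return float((w * grad).sum())
```

A sharp step edge therefore contributes almost nothing to the EPTV norm, while it contributes its full magnitude to the plain TV norm.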

13.
Digital tomosynthesis is an imaging technique that produces a tomographic image from a series of angular digital images, in a manner similar to conventional focal plane tomography. Unlike film focal plane tomography, acquisition of the data in a C-arm geometry causes the image receptor to be positioned at various angles to the reconstruction tomogram. The digital nature of the data allows input images to be combined into the desired plane, with the flexibility of generating tomograms of many separate planes from a single set of input data. Angular datasets were obtained of a low contrast detectability (LCD) phantom and a cadaver breast utilizing a Lorad stereotactic biopsy unit with a coupled source and digital detector in a C-arm configuration. Datasets of 9 and 41 low-dose projections were collected over a 30° angular range. Tomographic images were reconstructed using a Backprojection (BP) algorithm, an Iterative Subtraction (IS) algorithm that allows partial subtraction of out-of-focus planes, and an Algebraic Reconstruction (AR) algorithm. These were compared with single-view digital radiographs. The methods' effectiveness at enhancing visibility of an obscured LCD phantom was quantified in terms of the Signal to Noise Ratio (SNR) and Signal to Background Ratio (SBR), each normalized to the metric value for the single projection image. The methods' effectiveness at removing ghosting artifacts in a cadaver breast was quantified in terms of the Artifact Spread Function (ASF). The technology proved effective at partially removing out-of-focus structures and enhancing SNR and SBR. The normalized SNR was highest at 4.85 for the obscured LCD phantom, using nine projections and the IS algorithm. The normalized SBR was highest at 23.2 for the obscured LCD phantom, using 41 projections and the AR algorithm. The highest normalized metric values occurred with the obscured phantom. This supports the assertion that the greatest value of tomosynthesis is in imaging fibroglandular breasts. The ASF performance was best with the AR technique and nine projections.
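The backprojection (shift-and-add) reconstruction of a single plane can be sketched with integer pixel shifts; real systems derive sub-pixel shifts from the acquisition geometry:

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Shift-and-add (backprojection) tomosynthesis of one plane.

    Each projection is translated by the pixel shift that brings the chosen
    reconstruction plane into registration, then the stack is averaged:
    in-plane structures reinforce while out-of-plane structures blur.
    Integer shifts via np.roll keep the sketch minimal.
    """
    acc = np.zeros_like(projections[0], dtype=float)
    for p, s in zip(projections, shifts):
        acc += np.roll(p, s, axis=1)
    return acc / len(projections)
```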

14.
15.
This work demonstrates that high quality cone beam CT images can be generated for a volume of interest (VOI), and investigates the exposure reduction, dose saving and scatter reduction achieved with the VOI scanning technique. The VOI scanning technique involves inserting a filtering mask between the x-ray source and the breast during image acquisition. The mask has an opening that allows full x-ray exposure to be delivered to a preselected VOI and a lower, filtered exposure to the region outside the VOI. To investigate the effects on the reconstructed VOI image of increased noise due to reduced exposure outside the VOI, we directly extracted the projection data inside the VOI from the full-field projection data and added data to the projection outside the VOI to simulate the relative noise increase due to reduced exposure. Nonuniform reference images were simulated in an identical manner to normalize the projection images and measure the x-ray attenuation factor for the object. The standard Feldkamp-Davis-Kress filtered backprojection algorithm was used to reconstruct the 3D images. The noise level inside the VOI was evaluated and compared with that of the full-field, higher-exposure image. A calcification phantom and a low-contrast phantom were imaged. Dose reduction was investigated by estimating the dose distribution in a cylindrical water phantom using Monte Carlo simulation with the Geant4 package. Scatter reduction at the detector input was also studied. Our results show that with the exposure level reduced by the VOI mask, the dose levels were significantly reduced both inside and outside the VOI without compromising the accuracy of image reconstruction, allowing the VOI to be imaged with more clarity while helping to reduce the breast dose. The contrast-to-noise ratio inside the VOI was improved. The VOI images were not adversely affected by the noisier projection data outside the VOI. Scatter intensities at the detector input were also shown to decrease significantly both inside and outside the VOI in the projection images, indicating potential improvement of image quality inside the VOI and a contribution to dose reduction both inside and outside the VOI.

16.
A new nonlinear reconstruction method for tomosynthesis is described. This method is suited for "dilute" objects, i.e., objects in which most of the voxels have negligibly small absorption. Images of blood vessels filled with contrast material approximate this condition if the background is subtracted. The technique has been tested experimentally using a wire phantom and a prepared human heart. The results show significantly fewer artifacts than the well-known backprojection. It is possible to obtain diagnostic image quality with a few projections. The reconstruction algorithm can be realized with dedicated real-time hardware.

17.
PURPOSE: The purpose of this work is to describe a new algorithm for the automatic detection of implanted radioactive seeds within the prostate. The algorithm is based on the traditional Hough transform. A method of quality assurance is described, as well as a quantitative phantom study to determine the accuracy of the algorithm. METHODS AND MATERIALS: An algorithm based on the Hough transform is described. The Hough transform is a well-known transform traditionally used to automatically segment lines and other well-defined geometric objects from images. The traditional Hough transform is extended to three dimensions and applied to CT images of seed-implanted prostate glands. A method based on digitally reconstructed radiographs is described to quality-assure the determined three-dimensional positions of the detected seeds. Two phantom studies, utilizing eight seeds and nine seeds, are described. The eight seeds form a contiguous square, while the nine-seed phantom has seeds placed side by side in groups of two and three. The algorithm was applied to the CT scans of both phantoms and the seed positions determined. RESULTS: The algorithm has been commercially developed and used to perform postsurgical dosimetric assessment on approximately 1000 patients. Using the described quality assurance tool, it was determined that the algorithm accurately determined the seed positions in all 1000 patients. The algorithm was also applied to the eight-seed phantom and successfully found all eight seeds and their coordinates. The average radial error was determined to be 0.9 mm. For the nine-seed phantom, the algorithm correctly identified all nine seeds, with an average radial error of 3 mm. CONCLUSIONS: The described algorithm is a robust, accurate, automatic, three-dimensional application for CT-based seed determination.
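As a simplified stand-in for the paper's 3D Hough-transform voting, bright, isolated seeds in a thresholded CT volume can be located by connected-component centroiding:

```python
import numpy as np

def detect_seeds(vol, thresh):
    """Locate bright, seed-like objects in a CT volume.

    Voxels above `thresh` are grouped into 6-connected components by flood
    fill, and the centroid of each component is returned. A simplified
    stand-in for the 3D Hough accumulation described in the paper, adequate
    only for isolated, high-contrast seeds.
    """
    mask = vol > thresh
    seen = np.zeros_like(mask)
    centroids = []
    for idx in zip(*np.nonzero(mask)):
        if seen[idx]:
            continue
        stack, comp = [idx], []
        seen[idx] = True
        while stack:
            z, y, x = stack.pop()
            comp.append((z, y, x))
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= n[i] < vol.shape[i] for i in range(3))
                        and mask[n] and not seen[n]):
                    seen[n] = True
                    stack.append(n)
        centroids.append(tuple(np.mean(comp, axis=0)))
    return centroids
```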

18.
Three algorithms for breast tomosynthesis reconstruction are compared in this paper: (1) a back-projection (BP) algorithm (equivalent to the shift-and-add algorithm), (2) a Feldkamp filtered back-projection (FBP) algorithm, and (3) an iterative Maximum Likelihood (ML) algorithm. Our breast tomosynthesis system acquires 11 low-dose projections over a 50° angular range using an a-Si (CsI:Tl) flat-panel detector. The detector was stationary during the acquisition. Quality metrics such as signal difference to noise ratio (SDNR) and artifact spread function (ASF) were used for quantitative evaluation of the tomosynthesis reconstructions. The results of the quantitative evaluation were in good agreement with the results of the qualitative assessment. In patient imaging, the superimposed breast tissues observed in two-dimensional (2D) mammograms were separated in the tomosynthesis reconstructions by all three algorithms. It was shown in both phantom imaging and patient imaging that the BP algorithm provided the best SDNR for low-contrast masses, but the conspicuity of feature details was limited by interplane artifacts; the FBP algorithm provided the highest edge sharpness for microcalcifications, but the quality of masses was poor; the information in both the masses and the microcalcifications was well restored with balanced quality by the ML algorithm, superior to the results from the other two algorithms.

19.
We investigated the use of multifrequency diffuse optical tomography (MF-DOT) data for the reconstruction of optical parameters. The experiments were performed in a 63 mm diameter cylindrical phantom containing a 15 mm diameter cylindrical object. Modulation frequencies ranging from 110 MHz to 280 MHz were used in the phantom experiments, changing the absorption contrast of the object with respect to the phantom while keeping the scattering value the same. The diffusion equation was solved using the finite element method. The sensitivity information from each frequency was combined to form a single Jacobian. The inverse problem was solved iteratively by minimizing the difference between the measurements and the forward problem using single and multiple modulation frequency data. A multiparameter Tikhonov scheme was used for regularization. The phantom results show that the peak absorption coefficient in a region of interest was obtained with an error of less than 5% using two-frequency reconstruction for absorption contrast values up to 2.2 times the background, and less than 10% for contrast values larger than 2.2. The use of two-frequency data is sufficient, with appropriate selection of these frequencies, to improve the quantitative accuracy compared with single-frequency reconstruction.
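Combining the sensitivity information from several modulation frequencies into a single Jacobian and solving a Tikhonov-regularized update, as the abstract describes, can be sketched generically:

```python
import numpy as np

def multifreq_update(jacobians, residuals, alpha):
    """One Tikhonov-regularized Gauss-Newton update combining frequencies.

    The per-frequency Jacobians and data residuals are stacked into a
    single system, and dx = (J^T J + alpha I)^{-1} J^T r is solved.
    A generic linear-algebra sketch, not the paper's full multiparameter
    Tikhonov scheme.
    """
    J = np.vstack(jacobians)           # stack sensitivities from all frequencies
    r = np.concatenate(residuals)      # stack the corresponding data residuals
    A = J.T @ J + alpha * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ r)
```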

20.
Poisson noise is one of the factors degrading scintigraphic images, especially at low count levels, due to the statistical nature of photon detection. We have developed an original procedure, named statistical and heuristic image noise extraction (SHINE), to reduce the Poisson noise contained in scintigraphic images while preserving the resolution, the contrast and the texture. The SHINE procedure consists of dividing the image into 4 × 4 blocks and performing a correspondence analysis on these blocks. Each block is then reconstructed using its own significant factors, which are selected using an original statistical variance test. The SHINE procedure has been validated using a numerical line phantom and a real phantom containing hot and cold spots. The reference images are the noise-free simulated images for the numerical phantom and an extremely high-count image for the real phantom. The SHINE procedure has then been applied to the Jaszczak phantom and to clinical data, including planar bone scintigraphy, planar Sestamibi scintigraphy and Tl-201 myocardial SPECT. The SHINE procedure reduces the mean normalized error between the noisy images and the corresponding reference images. This reduction is constant and does not change with the count level. The SNR in a SHINE-processed image is close to that of the corresponding raw image with twice the number of counts. The visual results with the Jaszczak phantom SPECT showed that SHINE preserves the contrast and the resolution of the slices well. Clinical examples have shown no visual difference between the SHINE images and the corresponding raw images obtained with twice the acquisition duration. SHINE is an entirely automatic procedure which enables halving the acquisition time or the injected dose in scintigraphic acquisitions. It can be applied to all scintigraphic images, including PET data, and to all low-count photon images.
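The block-wise factor-analysis idea behind SHINE can be sketched with truncated SVD standing in for the correspondence analysis and for the statistical variance test that selects the significant factors:

```python
import numpy as np

def shine_like_denoise(img, k=4, block=4):
    """Block-wise factor denoising in the spirit of SHINE.

    The image is cut into `block` x `block` tiles, each tile is flattened
    into a row of a matrix, and the matrix is rebuilt from its `k` leading
    SVD factors. Truncated SVD here is an assumed stand-in for SHINE's
    correspondence analysis and per-block significance test.
    """
    h, w = img.shape
    assert h % block == 0 and w % block == 0
    tiles = (img.reshape(h // block, block, w // block, block)
                .transpose(0, 2, 1, 3)
                .reshape(-1, block * block))
    u, s, vt = np.linalg.svd(tiles, full_matrices=False)
    den = (u[:, :k] * s[:k]) @ vt[:k]        # keep k leading factors only
    return (den.reshape(h // block, w // block, block, block)
               .transpose(0, 2, 1, 3)
               .reshape(h, w))
```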

