Similar literature
20 similar records found (search time: 31 ms)
1.
Reliable attenuation correction is an essential component of the long chain of modules required to reconstruct artifact-free, quantitative brain positron emission tomography (PET) images. In this work we demonstrate proof of principle for segmented magnetic resonance imaging (MRI)-guided attenuation and scatter correction in three-dimensional (3D) brain PET. We have developed a method for attenuation correction based on registered T1-weighted MRI, eliminating the need for an additional transmission (TX) scan. The MR images were realigned to preliminary reconstructions of the PET data using an automatic algorithm and then segmented by means of a fuzzy clustering technique that identifies tissues of significantly different density and composition. The voxels were classified into air, skull, brain tissue and nasal sinuses, and assigned theoretical tissue-dependent attenuation coefficients as reported in ICRU report 44, followed by Gaussian smoothing and the addition of a high-statistics bed image. The MRI-derived attenuation map was then forward projected to generate attenuation correction factors (ACFs) used to correct the emission (EM) data. The method was evaluated and validated on data from 10 patients for whom both TX and MRI brain images were available. Differences between TX-guided and segmented MRI-guided 3D reconstructions were assessed both visually and by estimating parameters of clinical interest. The results indicated a small but noticeable improvement in image quality owing to reduced propagation of noise from the TX data into the EM data.
Considering the difficulties associated with preinjection TX-based attenuation correction and the limitations of current calculated attenuation correction, MRI-based attenuation correction in 3D brain PET would likely remain, for the foreseeable future, the method of choice as a second-best approach in a busy nuclear medicine center, and it could be applied to other functional brain imaging modalities such as SPECT.
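The segmentation-to-ACF pipeline described in this abstract can be sketched in a few lines. The class labels, μ values and pixel size below are illustrative assumptions (loosely ICRU 44-like values in cm^-1, not the paper's exact coefficients), and a single parallel-ray projection along one axis stands in for a full forward projector:

```python
import numpy as np

# Assumed tissue classes and attenuation coefficients at 511 keV (cm^-1);
# values are illustrative, loosely following ICRU 44-style tissue data.
MU = {0: 0.0,      # air
      1: 0.151,    # skull (bone)
      2: 0.096,    # brain tissue
      3: 0.054}    # nasal sinuses (assumed intermediate value)

def acf_from_labels(labels, pixel_cm=0.2):
    """Build a mu-map from a segmented label image and forward project it
    along one axis to obtain attenuation correction factors (ACFs)."""
    mu_map = np.vectorize(MU.get)(labels).astype(float)
    # Parallel-ray forward projection: line integral of mu along axis 0.
    line_integrals = mu_map.sum(axis=0) * pixel_cm
    return np.exp(line_integrals)          # ACF = exp(integral of mu dl)

# Toy example: a 10 x 10 "head" of brain tissue inside a one-pixel skull rim.
labels = np.zeros((10, 10), dtype=int)
labels[1:9, 1:9] = 1       # skull rim
labels[2:8, 2:8] = 2       # brain interior
acfs = acf_from_labels(labels)
```

Rays that miss the head see an ACF of exactly 1 (no attenuation), while rays through bone and brain pick up the expected exponential factor; in the paper's pipeline the same ACFs would then multiply the measured emission sinogram.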

2.
An accurate attenuation correction has been developed for a small-volume three-dimensional positron emission tomography (PET) system. Transmission data were measured as twenty-four 2D slices which were reconstructed and combined to form a 3D attenuation image. Emission data were reconstructed using a backproject-then-filter technique, and each event was corrected for attenuation at backprojection time by a reprojection through the attenuation image. This correction restores the spatial invariance of the point response function, thus allowing a valid deconvolution and producing an undistorted emission image. Scattering corrections were not applied to either the transmission or the emission data but simulation studies indicated that scattering made only a small contribution to the attenuation measurement. Results are presented for two phantoms, in which transmission scans of 57,500 and 18,700 events/slice were used to correct emission images of 5.2 and 2.8 million events. Although the attenuation images had poor statistical accuracy and a resolution of 13 mm, the method resulted in accurate attenuation-corrected images with no degradation in image resolution (which was 3 mm for the first emission image), and with little effect on image noise.

3.
Segmented attenuation correction is now a widely accepted technique for reducing noise propagation from transmission scanning in positron emission tomography (PET). In this paper, we present a new method for segmenting transmission images in whole-body scanning. It reduces the noise in the correction maps while still correcting for the differing attenuation coefficients of specific tissues. Based on the fuzzy C-means (FCM) algorithm, and preceded by a median filtering procedure, the method segments the PET transmission images into a given number of clusters to extract areas of differing attenuation such as air, the lungs and soft tissue. The reconstructed transmission image voxels are thereby segmented into populations of uniform attenuation based on knowledge of human anatomy. The clustering procedure starts with an overspecified number of clusters, followed by a merging process that groups clusters with similar properties (redundant clusters) and the removal of some undesired substructures using anatomical knowledge. The method is unsupervised and adaptive, and classifies either pre- or post-injection transmission images, obtained using coincidence 68Ge or single-photon 137Cs sources, into the main tissue components in terms of attenuation coefficients. A high-quality transmission image of the scanner bed, obtained from a high-statistics scan, is added to the transmission image. The segmented transmission images are then forward projected to generate attenuation correction factors used in reconstructing the corresponding emission scan. The technique has been tested on a chest phantom simulating the lungs, heart cavity and spine, on the Rando-Alderson phantom, and on whole-body clinical PET studies, showing a marked improvement in image quality and a clear reduction of noise propagation from transmission into emission data, allowing the transmission scan duration to be shortened.
There was very good correlation (R2 = 0.96) between maximum standardized uptake values (SUVs) in lung nodules measured on images reconstructed with measured and with segmented attenuation correction, with a statistically significant decrease in SUV (17.03% ± 8.4%, P < 0.01) on the latter images, whereas no statistically significant difference in average SUVs was observed. Finally, the potential of the FCM algorithm as a segmentation method, its limitations, and other prospective applications of the technique are discussed.
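A minimal version of the fuzzy C-means step at the heart of this method can be sketched as follows. The cluster count, fuzziness exponent m and the toy intensity populations (air, lung, soft tissue) are illustrative choices, not the paper's settings:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal 1D fuzzy C-means: alternately update cluster centers
    (membership-weighted means) and fuzzy memberships (inverse-distance
    weights). Parameters here are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Toy transmission intensities: air, lung and soft-tissue populations
# (approximate attenuation coefficients in cm^-1, plus noise).
x = np.concatenate([np.full(50, 0.00), np.full(50, 0.03), np.full(50, 0.096)])
x = x + np.random.default_rng(1).normal(0, 0.002, x.size)
centers, u = fuzzy_c_means(x, n_clusters=3)
labels = u.argmax(axis=1)
```

Hardening each voxel to its highest-membership cluster (`argmax`) is the final classification step; the paper's merging of redundant clusters would operate on `centers` before this.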

4.
An attenuation-correction method for three-dimensional PET imaging, which obtains attenuation-correction factors from transmission measurements using an uncollimated flood source, is described. The correction is demonstrated for two different phantoms using transmission data acquired with QPET, a rotating imaging system with two planar detectors developed for imaging small volumes. The scatter amplitude in the transmission projections reached a maximum of 30%; to obtain accurate attenuation-correction factors, the scatter distribution was first calculated and subtracted. The attenuation-corrected emission images for both phantoms show that their original uniform amplitudes have been restored. The attenuation correction adds only a small amount of noise to the emission images, as evaluated from the standard deviation over a central region: for the first phantom, with a maximum attenuation of 48%, the added noise was 2.6%; the second phantom was attenuated by a maximum of 37%, and 1.9% noise was added. Because the transmission data are smoothed, some artifacts are visible at the edges of the phantom, where the correction factors change abruptly within the emission image.

5.
Attenuation correction in positron emission tomography (PET) is an essential part of clinical and research studies. However, correction using noisy transmission data acquired over short scan durations has been a problem, as the noise is introduced into the emission images. This study investigates the effect of smoothing the two-dimensional projections of the attenuation maps (μ-maps) using the nonlinear anisotropic diffusion filtering method. Experiments on a whole-body study are presented to qualitatively evaluate the efficacy of the method in reducing random noise and streak artefacts. The results show that image quality is significantly improved with minimal resolution loss. A reduction in statistical noise was quantitatively demonstrated when the same approach was applied to a cylindrical phantom dataset.
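Nonlinear anisotropic diffusion of the kind referred to here is usually the Perona-Malik scheme; the sketch below assumes that formulation, with illustrative parameter values. The edge-stopping function g() shuts diffusion off across large gradients, which is what preserves resolution while the noise is smoothed:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik nonlinear anisotropic diffusion of a 2D projection:
    smooths noise while the conductance term g() preserves edges.
    kappa, step and n_iter are illustrative values."""
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        img += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return img

# Noisy step edge: diffusion should reduce the noise but keep the edge.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + rng.normal(0, 0.05, clean.shape)
smoothed = anisotropic_diffusion(noisy)
```

On this toy image the standard deviation in the flat regions drops while the step amplitude is retained, which is the qualitative behaviour the abstract reports for μ-map projections.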

6.
An accurate, low-noise estimate of photon attenuation in the subject is required for quantitative microPET studies of molecular tracer distributions in vivo. In this work, several transmission-based measurement techniques were compared, including coincidence mode with and without rod windowing, singles mode with two different energy sources (68Ge and 57Co), and postinjection transmission scanning. In addition, the effectiveness of transmission segmentation and the propagation of transmission bias and noise into the emission images were examined. The 57Co singles measurements provided the most accurate attenuation coefficients and superior signal-to-noise ratio, while 68Ge singles measurements were degraded by scattering from the object. Scatter correction of 68Ge transmission data improved the accuracy for a 10 cm phantom but over-corrected for a mouse phantom. 57Co scanning also resulted in low bias and noise in postinjection transmission scans for emission activities up to 20 MBq. Segmentation worked most reliably for transmission data acquired with 57Co, but the minor improvement in the accuracy of attenuation coefficients and signal-to-noise may not justify its use, particularly for small subjects. We conclude that 57Co singles transmission scanning is the most suitable method for measured attenuation correction on the microPET Focus 220 animal scanner.

7.
In positron emission tomography (PET), a quantitative reconstruction of the tracer distribution requires accurate attenuation correction. We consider situations where a direct measurement of the attenuation coefficient of the tissues is not available or is unreliable, and where one attempts to estimate the attenuation sinogram directly from the emission data by exploiting the consistency conditions that must be satisfied by the non-attenuated data. We show that in time-of-flight PET, the attenuation sinogram is determined by the emission data except for a constant and that its gradient can be estimated efficiently using a simple analytic algorithm. The stability of the method is illustrated numerically by means of a 2D simulation.

8.
In present positron emission tomography (PET)/computed tomography (CT) scanners, PET attenuation correction relies on the information given by a single CT scan. Scaling the linear attenuation coefficients from CT x-ray energies to the PET 511 keV gamma energy is prone to errors, especially in the presence of CT contrast agents. Attenuation correction based upon two CT scans at different energies, performed at the same time and patient position, should reduce such errors and therefore improve the accuracy of the reconstructed PET images, at the cost of additional noise. Such CT scans could be provided by future PET/CT scanners with either dual-source CT or energy-sensitive CT. Three different dual-energy scaling methods for attenuation correction are introduced and assessed by measurements with a modified NEMA 1994 phantom containing different CT contrast agent concentrations. The scaling differentiates between (1) Compton and photoelectric contributions, (2) atomic number and density, or (3) water-bone and water-iodine scaling schemes; method (3) is called hybrid dual-energy computed tomography attenuation correction (hybrid DECTAC). All three dual-energy scaling methods reduce contrast agent artifacts relative to single-energy scaling, with the hybrid DECTAC method yielding the weakest artifacts. Both hybrid DECTAC and Compton/photoelectric scaling also produced images with the lowest PET background variability. Atomic number/density scaling and Compton/photoelectric scaling had difficulty scaling water correctly, while hybrid DECTAC and single-energy scaling had difficulty scaling Teflon correctly; atomic number/density scaling and hybrid DECTAC could be generalized to reduce these problems.
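For context, here is a minimal sketch of the standard single-energy scaling that these dual-energy methods aim to improve. The bilinear break point at 0 HU and the reduced bone slope are assumed illustrative values, not the calibration of any particular scanner; contrast-enhanced voxels (high HU without being bone) are exactly the case this scheme mis-handles:

```python
import numpy as np

# Water attenuation at 511 keV, approximately 0.096 cm^-1.
MU_WATER_511 = 0.096

def ct_to_mu511(hu):
    """Bilinear conversion of CT numbers (HU) to 511 keV attenuation:
    one slope for the air-water segment (HU <= 0) and a shallower slope
    for the water-bone segment, because the photoelectric contribution
    that raises bone HU at CT energies does not persist at 511 keV.
    The bone slope factor of 0.5 is an assumed illustrative value."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)           # air/water segment
    bone = MU_WATER_511 * (1.0 + 0.5 * hu / 1000.0)     # water/bone segment
    return np.where(hu <= 0, np.maximum(soft, 0.0), bone)

# Air, water and a bone-like voxel:
mu = ct_to_mu511([-1000, 0, 1000])
```

A dual-energy scheme replaces the single `hu` input with two measurements per voxel, which lets it distinguish an iodine-filled voxel from bone instead of forcing both onto the same water-bone line.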

9.
Accurate attenuation correction is important for quantitative positron emission tomography (PET) studies. When performing transmission measurements using an external rotating radioactive source, object motion during the transmission scan can distort the attenuation correction factors, computed as the ratio of blank to transmission counts, and cause errors and artefacts in reconstructed PET images. In this paper we report a compensation method for rigid-body motion during PET transmission measurements, in which list-mode transmission data are motion-corrected event by event, based on known motion, to ensure that all events which traverse the same path through the object are recorded on a common line of response (LOR). As a result, the motion-corrected transmission LOR may record a combination of events originally detected on different LORs. To ensure that the corresponding blank LOR records events from the same combination of contributing LORs, the list-mode blank data are spatially transformed event by event based on the same motion information. The number of counts recorded on the resulting blank LOR is then equivalent to the number of counts that would have been recorded on the corresponding motion-corrected transmission LOR in the absence of any attenuating object. The proposed method has been verified in phantom studies with both stepwise movements and continuous motion. We found that attenuation maps derived from motion-corrected transmission and blank data agree well with those of the stationary phantom and are significantly better than uncorrected attenuation data.
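The event-by-event correction can be sketched as follows: applying the inverse of the known rigid-body motion to each event's detector endpoints maps it back onto a common LOR in the reference frame. The rotation angle and translation below are invented for illustration:

```python
import numpy as np

def correct_lor(p1, p2, R, t):
    """Map one list-mode event's LOR endpoints back into the reference
    frame, given the rigid-body motion (rotation R, translation t) of the
    object at the event's timestamp. Applying the inverse motion to the
    detector coordinates is equivalent to undoing the object motion, so
    all events traversing the same tissue path land on a common LOR."""
    Rinv = R.T                     # inverse of a rotation matrix
    return Rinv @ (p1 - t), Rinv @ (p2 - t)

# Toy motion: 5-degree rotation about z plus a small translation (mm).
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -0.5, 0.0])

# Reference-frame LOR endpoints on the detector ring:
p1 = np.array([100.0, 0.0, 0.0])
p2 = np.array([-100.0, 0.0, 0.0])
# The same tissue path observed after the object moved by (R, t):
q1, q2 = R @ p1 + t, R @ p2 + t
r1, r2 = correct_lor(q1, q2, R, t)   # recovers p1, p2
```

The paper's key point is that the blank events must be pushed through the same transform so that each motion-corrected transmission LOR is normalized by a blank LOR built from the same combination of contributing detector pairs.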

10.
Total-body positron emission tomography (PET) is a useful diagnostic tool for evaluating malignant disease. However, tumour detection is limited by image artefacts due to the lack of attenuation correction, and by noise. Attenuation correction may be possible using transmission data acquired after, or simultaneously with, the emission data. Despite the elimination of attenuation artefacts, however, tumour detection is still hampered by noise, which is amplified during image reconstruction by filtered backprojection (FBP). We have investigated, as an alternative to FBP, an accelerated expectation maximization (EM) algorithm for its potential to improve tumour detectability in total-body PET. The signal-to-noise ratio (SNR), calculated for a tumour with respect to the surrounding background, is used as a figure of merit. A software tumour phantom, with conditions typical of those encountered in a total-body PET study using simultaneous acquisition, is used to optimize and compare various reconstruction approaches. Accelerated EM reconstruction followed by two-dimensional filtering is shown to yield significantly higher SNR than FBP for a range of tumour sizes, concentrations and counting statistics (ΔSNR = 6.3 ± 3.9, p < 0.001). The methods developed are illustrated by examples derived from physical phantom and patient data.
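A basic, unaccelerated MLEM update of the kind underlying the EM reconstruction above can be sketched as follows; the system matrix and phantom here are toy stand-ins, and the paper's accelerated variant would process ordered subsets of the data per update rather than all of it:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Basic MLEM for emission tomography. A is the system matrix
    (n_bins x n_pixels), y the measured sinogram counts. Each iteration
    multiplies the current image by the backprojected ratio of measured
    to estimated projections, normalized by the sensitivity image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny consistency check: noiseless data generated from a known image.
rng = np.random.default_rng(0)
A = rng.random((40, 8))
x_true = np.array([0.0, 1.0, 2.0, 0.5, 3.0, 0.0, 1.5, 0.25])
y = A @ x_true
x_hat = mlem(A, y)
```

The multiplicative update keeps the image nonnegative by construction, which is one reason EM-family methods behave better than FBP in the low-count, high-noise regime the abstract describes.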

11.
Simultaneous emission and transmission measurement is appealing in PET because the geometrical conditions of emission and transmission match and the acquisition time for the study is reduced. A potential problem remains: when transmission statistics are low, the attenuation correction can be very noisy. Although noise in the attenuation map can be controlled through regularization during statistical reconstruction, the regularization parameters are usually selected empirically. In this paper, we investigate the use of discrete data consistency conditions (DDCC) to optimally select one or two regularization parameters. The advantages of the method are that the reconstructed attenuation map is consistent with the emission data and that the method accounts for the particulars of the emission reconstruction algorithm and acquisition geometry. The methodology is validated using a computer-generated whole-body phantom for both emission and transmission, neglecting random events and scattered radiation. MAP-TR was used for attenuation map reconstruction, while 3D OS-EM was used for estimating the emission image. The estimation of the regularization parameters depends on the resolution of the emission image, which is controlled by the number of OS-EM iterations. The computer simulation shows that, on the one hand, a DDCC-regularized attenuation map reduces propagation of transmission scan noise into the emission image, while on the other hand DDCC prevents excessive attenuation map smoothing that could result in resolution-mismatch artefacts between emission and transmission.

12.
The quality of the attenuation correction strongly influences the outcome of the reconstructed emission scan in positron emission tomography. The calculation of the attenuation correction factors must take into account the Poisson nature of the radioactive decay process because, for a reasonable scan duration, the transmission measurements contain lines of response with low count numbers wherever the attenuation factors are large. Our purpose in this study is to investigate a maximum likelihood estimator for attenuation correction factor calculation in positron emission tomography that incorporates the Poisson nature of radioactive decay into both the transmission and the blank measurement. The full maximum likelihood function is used to derive two estimators for the attenuation coefficient image and the corresponding attenuation correction factors, depending on the measured blank and transmission data. Log-likelihood convergence, mean differences, and mean squared differences of the attenuation correction factors were determined and compared for a mathematical thorax phantom. Both algorithms yield adequate attenuation correction factors; however, the algorithm that takes the noise in the blank scan into account performs better for noisy blank scans. We conclude that maximum likelihood estimation that includes the blank likelihood is advantageous for reconstructing attenuation correction factors from low-statistics blank and high-statistics transmission data. For normal blank and transmission statistics, modelling the statistical nature of the blank is not mandatory.

13.
In order to obtain an accurate and quantitative positron emission tomography (PET) image, emission data need to be corrected for random coincidences, photon attenuation and Compton scattering of photons in the tissue, and detector efficiency response or normalization. The accuracy of these corrections strongly affects the quality of the PET image. There is evidence that time-of-flight (TOF) PET reconstruction is less sensitive than non-TOF reconstruction to inconsistencies between emission data and corrections. The purpose of this study is to analyze and discuss such experimental evidence. In this work, inconsistent correction data (inconsistent normalization, absence of scatter correction and mismatched attenuation correction) are introduced in experimental phantom data. Both TOF and non-TOF reconstructed images are analyzed to examine the effect of flawed data. The behavior of TOF reconstruction in respiratory artifacts, a very common example of inconsistency in the data, is studied in patient images. TOF reconstruction is less sensitive to mismatched attenuation correction, erroneous normalization and poorly estimated scatter correction. Such robustness depends strongly on the time resolution of the TOF PET scanner. In particular, the robustness of TOF in the presence of attenuation correction inconsistencies is discussed, using a simulation of a simple model of respiratory artifacts. We expect new generations of PET scanners, with improved time resolution, to be less and less sensitive to poor quality normalization, scatter and attenuation corrections. This not only reduces artifacts in the PET image, but also opens the way to less stringent requirements for the quality of the CT image (reducing either the equipment cost or the dose to the patient), and for the normalization protocols (simplifying or shortening the normalization procedures). 
Moreover, TOF reconstruction can be beneficial in multimodality systems such as PET/MR, where a direct attenuation measurement is not available and attenuation correction can only be approximated.

14.
Detection of scattered gamma quanta degrades the image contrast and quantitative accuracy of single-photon emission computed tomography (SPECT). This paper reviews methods to characterize and model scatter in SPECT and to correct for its image-degrading effects, for both clinical and small-animal SPECT. Traditionally, scatter correction methods were limited in accuracy, noise properties and/or generality, and were not widely applied. For small-animal SPECT, these approximate correction methods are often sufficient, since the fraction of detected scattered photons is small; this contrasts with patient imaging, where better accuracy can lead to significant improvements in image quality. As a result, over the last two decades several new and improved scatter correction methods have been developed, although often at the cost of increased complexity and computation time. In concert with (i) the increasing number of energy windows on modern SPECT systems and (ii) the excellent attenuation maps provided by SPECT/CT, some of these methods offer new opportunities to remove the degrading effects of scatter in both standard and complex situations, and are therefore a gateway to highly quantitative single- and multi-tracer molecular imaging with improved noise properties. Widespread implementation of such scatter correction methods, however, still requires significant effort.

15.
Three-dimensional positron emission tomography admits a significant scatter fraction, due to the large aperture of the detectors, and requires accurate scatter subtraction. A scatter-correction method, applicable to both emission and transmission imaging, calculates the projections of the single-scatter distribution using an approximate image of the source and attenuating object. The scatter background is subtracted in projection space for transmission data and in image space for emission data, yielding corrected attenuation and emission images. The accuracy of this single-scatter distribution is validated for the authors' small imaging system by comparison with Monte Carlo simulations. The correction is demonstrated using transmission and emission data obtained from measurements on the authors' QPET imaging system using two acrylic phantoms. For the transmission data, generated with a flood source, errors of up to 24% in the linear attenuation coefficients resulted when no scatter subtraction was applied, but the correction yielded an accurate value of μ = 0.11 ± 0.01 cm^-1. For the emission data, the corrected images show that the scattered background has been removed to within the level of the background noise outside the source. The residual amplitude within a cold spot in one of the phantoms was reduced from 21% to 3% of the image amplitude.

16.
Emission scan data acquired by a multi-ring PET camera operated with its septa retracted must be corrected for (1) geometrical and detector sensitivity variations between the different lines of response (normalisation), (2) photon attenuation, and (3) mispositioned events due to photon scattering. These corrections must be applied to the full 3-D set of lines of response before reconstruction. The standard normalisation and attenuation correction procedures for 2-D scans increase the statistical noise in the emission scan, a problem which becomes even more serious in 3-D because of the large number of LORs involved (approximately 8 million). This paper describes a fully 3-D reconstruction algorithm for multi-angle PET data incorporating a practical normalisation and attenuation correction procedure that minimises the increase in emission scan statistical noise. The correction factors are derived from 2-D, septa-extended scans. The algorithm is currently used to reconstruct 3-D emission data from an ECAT 953B, a sixteen-ring PET camera with retractable septa.

17.
In this study, the quantitative accuracy of different attenuation correction strategies presently available for the High Resolution Research Tomograph (HRRT) was investigated. These attenuation correction methods differ in the reconstruction and processing (segmentation) algorithms used to generate a μ-image from measured 2D transmission scans, an intermediate step in the generation of 3D attenuation correction factors. The available methods are maximum a posteriori reconstruction (MAP-TR), unweighted OSEM (UW-OSEM) and NEC-TR, which transforms sinogram values back to their noise-equivalent counts (NEC) to restore the Poisson distribution. All methods can be applied with or without μ-image segmentation; MAP-TR, however, uses a μ-histogram as a prior during reconstruction. All possible strategies were evaluated using phantoms of various sizes, simulating preclinical and clinical situations. Furthermore, the effects of emission contamination of the transmission scan on the accuracy of the various attenuation correction strategies were studied. Finally, the accuracy of the various attenuation correction strategies and their relative impact on the reconstructed activity concentration (AC) were evaluated using small-animal and human brain studies. For small structures, MAP-TR with human brain priors showed smaller differences in μ-values between transmission scans with and without emission contamination (<8%) than the other methods (<26%). In addition, it showed the best agreement with the true AC (deviation <4.5%). A specific prior designed to account for the presence of small-animal fixation devices improved AC precision only very slightly, to 4.3%. All methods scaled the μ-values of a large homogeneous phantom to within 4% of the water peak, but MAP-TR provided the most accurate AC after reconstruction.
However, for clinical data MAP-TR with the default prior settings overestimated the thickness of the skull, resulting in overestimated μ-values in regions near the skull and thus in incorrect AC for cortical regions. NEC-TR with segmentation and MAP-TR with an adjusted human brain prior showed less overestimation of both skull thickness and AC for these structures and are therefore the recommended methods for human brain studies.

18.
Positron emission tomography (PET) can provide in vivo, quantitative and functional information for diagnosis; however, PET image quality depends strongly on the reconstruction algorithm. Iterative algorithms, such as the maximum likelihood expectation maximization (MLEM) algorithm, are rapidly becoming the standard for image reconstruction in emission computed tomography. The conventional MLEM algorithm assumes a Poisson model in its system matrix, which is no longer valid for delay-subtracted, randoms-corrected data. The aim of this study is to overcome this problem. Maximum likelihood estimation using the expectation maximization algorithm is adopted and modified to reconstruct microPET images with random correction from joint prompt and delay sinograms; this reconstruction method is called PDEM. The proposed joint Poisson model preserves Poisson properties without increasing the variance (noise) associated with random correction. The work here is an initial demonstration, without normalization, scatter, attenuation or arc corrections applied. Coefficients of variation (CV) and full width at half-maximum (FWHM) values were used to compare the quality of reconstructed microPET images of physical phantoms acquired with filtered backprojection (FBP), ordered-subsets expectation maximization (OSEM) and PDEM approaches. Experimental and simulated results demonstrate that the proposed PDEM produces better image quality than the FBP and OSEM approaches.

19.
Nye JA, Esteves F, Votaw JR. Medical Physics 2007;34(6):1901-1906
The introduction of positron emission tomography/computed tomography (PET/CT) systems coupled with multidetector CT arrays has greatly increased the amount of clinical information available in myocardial perfusion studies. The CT acquisition serves the dual role of providing high-resolution anatomical detail and attenuation correction for PET. However, the respiratory and cardiac cycles interact differently with the CT and PET acquisitions, which presents a challenge when using the CT for PET attenuation correction. Three CT attenuation correction protocols were tested for their ability to produce accurate emission images: gated, a step-mode acquisition covering the diastolic heart phase; normal, a high-pitch helical CT; and slow, a low-pitch, low-temporal-resolution helical CT. The amount of cardiac tissue in the emission image that overlaid lung tissue in the transmission image was used as the measure of mismatch between acquisitions. Phantom studies simulating misalignment of the heart between the transmission and emission sequences were used to correlate the amount of mismatch with artificial defect changes in the emission image. Consecutive patients were studied prospectively with either paired gated (diastolic phase, 120 kVp, 280 mA, 2.6 s) and slow CT (0.562:1 pitch, 120 kVp, Auto-mA, 16 s) or paired normal (0.938:1 pitch, 120 kVp, Auto-mA, 4.8 s) and slow CT protocols, prior to a Rb-82 perfusion study. To determine the amount of mismatch, the transmission and emission images were converted to binary representations of attenuating tissue and cardiac tissue and overlaid using their native registration. The number of cardiac tissue pixels from the emission image present in the CT lung field gave the magnitude of misalignment, expressed as a volume, where a smaller volume indicates better registration. Acquiring a slow CT improved registration between the transmission and emission acquisitions compared to the gated and normal CT protocols.
The volume of PET cardiac tissue in the CT lung field was significantly lower (p < 0.03) for the slow CT protocol in both the rest and stress emission studies. Phantom studies showed that an overlying volume greater than 2.6 mL produces significant artificial defects, as determined by a quantitative software package that employs a normal database. The percentage of patient studies with an overlying volume greater than 2.6 mL was reduced from 71% with the normal CT protocol to 28% with the slow CT protocol; the remaining 28% exhibited artifacts consistent with heart drift and patient motion that could not be corrected by adjusting the CT acquisition protocol. The low pitch of the slow CT protocol provided the best match to the emission study and is recommended for attenuation correction in cardiac PET/CT studies. Further reduction of artifacts arising from cardiac drift is required and warrants an image registration solution.
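The mismatch measure described above reduces to counting overlap voxels in binary masks. The masks and voxel volume below are toy values; 2.6 mL is the artifact threshold reported in the abstract:

```python
import numpy as np

def mismatch_volume(emission_cardiac, ct_lung, voxel_ml):
    """Count emission-image cardiac voxels that fall inside the CT lung
    field and convert the overlap to a volume in mL (voxel_ml is the
    volume of one voxel). Both inputs are binary masks in the images'
    native registration, as in the paper's mismatch measure."""
    overlap = np.logical_and(emission_cardiac, ct_lung)
    return overlap.sum() * voxel_ml

# Toy 16^3 masks: a 4 x 4 x 4 "heart" whose top 2 slices overlie lung.
heart = np.zeros((16, 16, 16), dtype=bool)
heart[4:8, 4:8, 4:8] = True
lung = np.zeros_like(heart)
lung[6:12, 4:8, 4:8] = True
vol = mismatch_volume(heart, lung, voxel_ml=0.1)   # 32 voxels -> 3.2 mL
```

A study with `vol` above the 2.6 mL threshold would be flagged as likely to show artificial perfusion defects, which is how the 71% versus 28% protocol comparison above was tallied.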

20.
A spatially variant convolution-subtraction scatter correction was developed for a Hamamatsu SHR-7700 animal PET scanner. This scanner, with retractable septa and a gantry that can be tilted 90 degrees, was designed for studies of conscious monkeys. The implemented dual-exponential scatter kernel takes into account both radiation scattered inside the object and radiation scattered in the gantry and detectors; this is necessary because of the relatively large contribution of gantry and detector scatter in this scanner. The correction is used for scatter correction of emission as well as transmission data. Transmission scatter correction using the dual-exponential kernel leads to a measured attenuation coefficient of 0.096 cm^-1 in water, compared to 0.089 cm^-1 without scatter correction. Scatter correction of both emission and transmission data resulted in a residual correction error of 2.1% in water, as well as improved image contrast and hot-spot quantification.
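A convolution-subtraction correction with a dual-exponential kernel can be sketched as follows. The kernel amplitudes, slopes and scatter fraction are invented for illustration (in practice they are fitted to line-source measurements); the narrow exponential stands in for object scatter and the broad one for gantry/detector scatter:

```python
import numpy as np

def convolution_subtract(proj, k1=0.3, mu1=0.5, k2=0.05, mu2=0.1, dx=1.0):
    """Convolution-subtraction scatter correction with a dual-exponential
    kernel: the scatter estimate is the projection convolved with a
    unit-area kernel, scaled by an assumed scatter fraction, and then
    subtracted. All parameter values are illustrative, not fitted."""
    r = np.arange(-50, 51) * dx
    kernel = k1 * np.exp(-mu1 * np.abs(r)) + k2 * np.exp(-mu2 * np.abs(r))
    kernel /= kernel.sum()                       # unit-area scatter kernel
    scatter_frac = 0.2                           # assumed scatter fraction
    scatter = scatter_frac * np.convolve(proj, kernel, mode="same")
    return proj - scatter

# A rectangular projection profile loses its scatter "wings" after correction.
proj = np.zeros(201)
proj[80:121] = 100.0
corrected = convolution_subtract(proj)
```

Because the kernel has two length scales, the subtraction removes both the sharp scatter pedestal under the object and the long, shallow tails spilling outside it, which is the behaviour the abstract attributes to the dual-exponential form.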
