Similar literature
20 similar articles found.
1.
张新阳 贺鹏博 刘新国 戴中颖 马圆圆 申国盛 张晖 陈卫强 李强 《中国医学物理学杂志》(Chinese Journal of Medical Physics) 2021,(10):1223-1228
[Abstract] Objective: To propose a deep learning-based method for three-dimensional (3D) reconstruction of computed tomography (CT) images from a single-view projection, enabling 3D CT reconstruction for different patients while reducing both the amount of acquired data and the imaging dose. Methods: CT images of different patients were augmented and used to simulate the corresponding digitally reconstructed radiographs (DRRs), followed by data normalization. The preprocessed data were used to train, via a convolutional neural network, a model generalizable across different patients. The trained model was then deployed on a test dataset, and the reconstruction results were evaluated with mean absolute error (MAE), root-mean-square error (RMSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR). Results: Qualitative and quantitative analyses show that the method can reconstruct high-quality 3D CT images from a single 2D image of each patient, with MAE, RMSE, SSIM, and PSNR of 0.006, 0.079, 0.982, and 38.424 dB, respectively. Moreover, compared with training a patient-specific model, the method greatly increases reconstruction speed and saves 70% of model training time. Conclusion: The constructed neural network model can reconstruct a patient's 3D CT images from a single 2D view of that patient. This study is therefore of value for simplifying clinical imaging equipment and for image guidance in radiotherapy.
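The evaluation metrics named in this abstract (MAE, RMSE, PSNR) have standard definitions that are easy to state precisely. Below is a minimal NumPy sketch of them on made-up toy volumes; SSIM is omitted because it requires windowed statistics. None of the data here comes from the study.

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two images/volumes."""
    return float(np.mean(np.abs(x - y)))

def rmse(x, y):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB, for data scaled to [0, data_range]."""
    err = np.mean((x - y) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / err))

# Toy example: a reference volume and a slightly perturbed "reconstruction".
rng = np.random.default_rng(0)
ref = rng.random((8, 8, 8))
rec = np.clip(ref + rng.normal(0.0, 0.01, ref.shape), 0.0, 1.0)
print(mae(ref, rec), rmse(ref, rec), psnr(ref, rec))
```

Note that RMSE is never smaller than MAE for the same pair of images, which is a quick sanity check on any reported metric table.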

2.
An object-oriented, artificial neural network (ANN) based application system for reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back propagation algorithm is utilized to train a three-layer sigmoidal feed-forward, supervised ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images that are reconstructed using the filtered back projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty-five different network configurations are investigated to test the generalization ability of the network. The trained ANN then reconstructs two-dimensional temporal spatial images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than conventional iterative reconstruction algorithms such as the multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data, and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from a limited-angle EMR data set.
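As an illustration of the training scheme this abstract describes (a three-layer sigmoidal feed-forward network trained by standard back propagation on squared error), here is a self-contained NumPy sketch on a toy surrogate problem. The 16-dimensional "projection" vectors, the fixed operator, and all layer sizes are invented stand-ins, not the EMR data or the network configurations from the study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Toy stand-in for the sinogram -> image relationship: 16-dim "projection"
# vectors mapped through a fixed operator to 9-dim "image" vectors.
A = rng.normal(size=(9, 16))
X = rng.random((200, 16))
Y = sigmoid(X @ A.T)

# Three-layer sigmoidal feed-forward net, trained by batch backpropagation.
W1 = rng.normal(scale=0.5, size=(16, 12))
W2 = rng.normal(scale=0.5, size=(12, 9))

def forward(X):
    H = sigmoid(X @ W1)          # hidden-layer activations
    return H, sigmoid(H @ W2)    # network output

mse0 = np.mean((forward(X)[1] - Y) ** 2)   # error before training
lr = 0.5
for _ in range(2000):
    H, O = forward(X)
    dO = (O - Y) * O * (1 - O)             # output-layer delta
    dH = (dO @ W2.T) * H * (1 - H)         # hidden-layer delta (backpropagated)
    W2 -= lr * (H.T @ dO) / len(X)
    W1 -= lr * (X.T @ dH) / len(X)
mse1 = np.mean((forward(X)[1] - Y) ** 2)
print(mse0, "->", mse1)
```

The same loop structure scales to sinogram-sized inputs; only the layer dimensions and training set change.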

3.
Bremsstrahlung spectra from thick cylindrical targets of Be, Al, and Pb have been measured at angles of 0 degrees, 1 degree, 2 degrees, 4 degrees, 10 degrees, 30 degrees, 60 degrees, and 90 degrees relative to the beam axis for electrons of 15-MeV incident energy. The spectra are absolute (photons per incident electron) and have a 145-keV lower-energy cutoff. The target thicknesses were nominally 110% of the electron CSDA range. A thin transmission detector, calibrated against a toroidal current monitor, was placed upstream of the target to measure the beam current. The spectrometer was a 20-cm-diam by 25-cm-long cylindrical NaI detector. Measured spectra were corrected for pile-up, background, detector response, detector efficiency, attenuation in materials between the target and detector, and collimator effects. Spectra were also calculated using the EGS4 Monte Carlo system for simulating the radiation transport. There was excellent agreement between the measured and calculated spectral shapes. The measured yield of photons per incident electron was 9% and 7% greater than the calculated yield for Be and Al, respectively, and 2% less for Pb, all with an uncertainty of +/- 5%. There was no significant angular variation in the ratio of the measured and calculated yields. The angular distributions of bremsstrahlung calculated using available analytical theories dropped off more quickly with angle than the measured distributions. The predictions of the theories would be improved by including target-scattered photons.

4.
Low-dose computed tomography (LDCT) has offered tremendous benefits in radiation-restricted applications, but the quantum noise resulting from an insufficient number of photons can harm diagnostic performance. Current image-based denoising methods tend to produce a blurring effect on the final reconstructed results, especially at high noise levels. In this paper, a deep learning-based approach is proposed to mitigate this problem. An adversarially trained network and a sharpness detection network were trained to guide the training process. Experiments on both simulated and real datasets show that the results of the proposed method have very little resolution loss and achieve better performance than state-of-the-art methods, both quantitatively and visually.

5.
Laplace reconstruction of experimental diagnostic x-ray spectra   Cited by: 1 (self-citations: 0, external citations: 1)
This paper presents the results of a blind study used to determine the capability of a Laplace transform pair model to accurately reconstruct diagnostic x-ray spectra from experimental attenuation data. Spectra reconstructed from attenuation measurements are compared to experimental spectra obtained on the same unit using an intrinsic germanium spectrometer system. The results show that when pure attenuation materials are used, good agreement is obtained between the experimental and computed spectra. If an alloy attenuator such as 1100 aluminum is used, the proportion of contaminants must be included in the Laplace formulation for accurate reconstruction.
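The forward problem that such transform-pair models invert is the polyenergetic Beer-Lambert attenuation curve: transmitted fraction T(t) = Σ_E S(E) exp(-μ(E) t), which, after a change of variables to μ, is a discrete Laplace transform of the spectrum. A short sketch with an entirely invented spectrum and made-up attenuation coefficients (not the experimental data of the study):

```python
import numpy as np

# Hypothetical diagnostic-range spectrum on a coarse energy grid.
E  = np.array([20., 30., 40., 50., 60.])      # keV (illustrative grid)
S  = np.array([0.10, 0.30, 0.35, 0.20, 0.05]) # relative fluence, sums to 1
mu = np.array([3.0, 1.2, 0.6, 0.4, 0.3])      # made-up mu(E) values, 1/cm

def transmission(t_cm):
    """Fraction of fluence transmitted through t_cm of attenuator.

    T(t) = sum_E S(E) * exp(-mu(E) * t) -- a discrete Laplace transform
    of the spectrum with respect to mu, which is why a Laplace
    transform-pair model can recover S from a measured T(t) curve.
    """
    return float(np.sum(S * np.exp(-mu * t_cm)))

curve = [transmission(t) for t in (0.0, 0.5, 1.0, 2.0, 4.0)]
print(curve)
```

The curve is strictly decreasing, and its shape hardens with depth (low-energy components die off first), which is exactly the information the inversion exploits.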

6.
Simultaneous emission and transmission measurement is appealing in PET due to the matching of geometrical conditions between emission and transmission and the reduced acquisition time for the study. A potential problem remains: when transmission statistics are low, attenuation correction can be very noisy. Although noise in the attenuation map can be controlled through regularization during statistical reconstruction, the selection of regularization parameters is usually empirical. In this paper, we investigate the use of discrete data consistency conditions (DDCC) to optimally select one or two regularization parameters. The advantages of the method are that the reconstructed attenuation map is consistent with the emission data and that it accounts for particularities of the emission reconstruction algorithm and acquisition geometry. The methodology is validated using a computer-generated whole-body phantom for both emission and transmission, neglecting random events and scattered radiation. MAP-TR was used for attenuation map reconstruction, while 3D OS-EM was used for estimating the emission image. The estimation of the regularization parameters depends on the resolution of the emission image, which is controlled by the number of iterations in OS-EM. The computer simulation shows that, on the one hand, the DDCC-regularized attenuation map reduces propagation of transmission scan noise into the emission image, while on the other hand DDCC prevents excessive attenuation map smoothing that could result in resolution-mismatch artefacts between emission and transmission.

7.
Zhang B  Zeng GL 《Medical physics》2006,33(9):3124-3134
A rotating slat collimator can be used to acquire planar-integral data. It achieves higher geometric efficiency than a parallel-hole collimator by accepting more photons, but the planar-integral data contain less tomographic information, which may result in larger noise amplification in the reconstruction. Lodge evaluated the rotating slat system and the parallel-hole system based on noise behavior for an FBP reconstruction. Here, we evaluate the noise propagation properties of the two collimation systems for iterative reconstruction. We extend Huesman's noise propagation analysis of the line-integral system to the planar-integral case, and show that approximately 2.0(D/dp) SPECT angles, 2.5(D/dp) self-spinning angles at each detector position, and a 0.5dp detector sampling interval are required in order for the planar-integral data to be efficiently utilized. Here, D is the diameter of the object and dp is the linear dimension of the voxels that subdivide the object. The noise propagation behaviors of the two systems are then compared based on a least-squares reconstruction, using the ratio of the SNR in the image reconstructed using a planar-integral system to that reconstructed using a line-integral system. The ratio is found to be proportional to the square root of F/D, where F is a geometric efficiency factor. This result has been verified by computer simulations. It confirms that for an iterative reconstruction, the noise tradeoff of the two systems depends not only on the increase in geometric efficiency afforded by the planar projection method, but also on the size of the object. The planar-integral system works better for small objects, while the line-integral system performs better for large ones. This result is consistent with Lodge's results based on the FBP method.

8.
Beam-hardening-free synthetic images with the absolute CT numbers that radiologists are used to can be constructed from spectral CT data by forming 'dichromatic' images after basis decomposition. The CT numbers are accurate for all tissues, and the method does not require additional reconstruction. This method spares radiologists from having to relearn rules-of-thumb regarding absolute CT numbers for various organs and conditions as conventional CT is replaced by spectral CT. Displaying the synthetic Hounsfield unit images side-by-side with images reconstructed for optimal detectability for a certain task can ease the transition from conventional to spectral CT.

9.
Electronic portal imagers have promising dosimetric applications in external beam radiation therapy. In this study a patient dose computation algorithm based on Monte Carlo (MC) simulations and on portal images is developed and validated. The patient exit fluence from primary photons is obtained from the portal image after correction for scattered radiation. The scattered radiation at the portal imager and the spectral energy distribution of the primary photons are estimated from MC simulations at the treatment planning stage. The patient exit fluence and the spectral energy distribution of the primary photons are then used to ray-trace the photons from the portal image towards the source through the CT geometry of the patient. Photon weights, which reflect the probability of a photon being transmitted, are computed during this step. A dedicated MC code is used to transport these photons back from the source through the patient CT geometry to obtain the patient dose. Only Compton interactions are considered. This code also produces a reconstructed portal image which is used as a verification tool to ensure that the dose reconstruction is reliable. The dose reconstruction algorithm is compared against MC dose calculation (MCDC) predictions and against measurements in phantom. The reconstructed absolute absorbed doses and the MCDC predictions in homogeneous and heterogeneous phantoms agree within 3% for simple open fields. Comparison with film-measured relative dose distributions for IMRT fields yields agreement within 3 mm, 5%. This novel dose reconstruction algorithm allows for daily patient-specific dosimetry and verification of patient movement.

10.
There are a number of different quantitative models that can be used in a medical diagnostic decision support system, including parametric methods (linear discriminant analysis or logistic regression), nonparametric models (k-nearest neighbor or kernel density) and several neural network models. The complexity of the diagnostic task is thought to be one of the prime determinants of model selection. Unfortunately, there is no theory available to guide model selection. This paper illustrates the use of combined neural network models to guide model selection for diagnosis of ophthalmic and internal carotid arterial disorders. The ophthalmic and internal carotid arterial Doppler signals were decomposed into time-frequency representations using the discrete wavelet transform, and statistical features were calculated to depict their distribution. The first-level networks were implemented for the diagnosis of ophthalmic and internal carotid arterial disorders using the statistical features as inputs. To improve diagnostic accuracy, the second-level networks were trained using the outputs of the first-level networks as input data. The combined neural network models achieved accuracy rates higher than those of the stand-alone neural network models.

11.
Segmented attenuation correction is now a widely accepted technique to reduce noise propagation from transmission scanning in positron emission tomography (PET). In this paper, we present a new method for segmenting transmission images in whole-body scanning. This reduces the noise in the correction maps while still correcting for the differing attenuation coefficients of specific tissues. Based on the fuzzy C-means (FCM) algorithm, the method segments the PET transmission images into a given number of clusters to extract specific areas of differing attenuation such as air, the lungs and soft tissue, preceded by a median filtering procedure. The reconstructed transmission image voxels are, therefore, segmented into populations of uniform attenuation based on knowledge of the human anatomy. The clustering procedure starts with an overspecified number of clusters, followed by a merging process to group clusters with similar properties (redundant clusters) and removal of some undesired substructures using anatomical knowledge. The method is unsupervised, adaptive, and allows the classification of both pre- and post-injection transmission images, obtained using either coincident 68Ge or single-photon 137Cs sources, into main tissue components in terms of attenuation coefficients. A high-quality transmission image of the scanner bed is obtained from a high-statistics scan and added to the transmission image. The segmented transmission images are then forward projected to generate attenuation correction factors to be used for the reconstruction of the corresponding emission scan. The technique has been tested on a chest phantom simulating the lungs, heart cavity and the spine, on the Rando-Alderson phantom, and on whole-body clinical PET studies, showing a remarkable improvement in image quality and a clear reduction of noise propagation from transmission into emission data, allowing the transmission scan duration to be reduced.
There was very good correlation (R2 = 0.96) between maximum standardized uptake values (SUVs) in lung nodules measured on images reconstructed with measured and segmented attenuation correction, with a statistically significant decrease in SUV (17.03% +/- 8.4%, P < 0.01) on the latter images, whereas no evidence of statistically significant differences in the average SUVs was observed. Finally, the potential of the FCM algorithm as a segmentation method and its limitations, as well as other prospective applications of the technique, are discussed.
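A minimal 1-D fuzzy C-means sketch in the spirit of the segmentation described above; the three attenuation populations, their values, and all sizes are invented for illustration and are not the study's data or its full pipeline (no median filtering, cluster merging, or anatomical pruning).

```python
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, iters=100):
    """Fuzzy C-means on 1-D samples x; returns (centers, memberships).

    Minimizes sum_ik u_ik^m * |x_i - v_k|^2 subject to sum_k u_ik = 1.
    """
    # Initialize centers from spread-out quantiles of the data.
    v = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d2 = (x[:, None] - v[None, :]) ** 2 + 1e-12      # squared distances
        u = (1.0 / d2) ** (1.0 / (m - 1.0))              # membership update
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        v = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)  # center update
    return v, u

# Toy 1-D "transmission image" intensities: three attenuation populations
# standing in for air, lung and soft tissue (made-up values, 1/cm).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.000, 0.004, 200),   # air
                    rng.normal(0.030, 0.004, 200),   # lung
                    rng.normal(0.096, 0.004, 200)])  # soft tissue
centers, u = fuzzy_cmeans(x, c=3)
print(np.sort(centers))
```

Each voxel would then be assigned the attenuation coefficient of its highest-membership cluster, which is what makes the subsequent forward projection low-noise.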

12.
13.
The purpose of the study was to evaluate the resolution recovery in the list-mode iterative reconstruction algorithm (LMIRA) for SPECT. In this study we compare the performance of the proposed method with other iterative resolution recovery methods at different noise levels. We developed an iterative reconstruction method which uses list-mode data instead of binned data. The new algorithm makes use of a more accurate model of the collimator structure. We compared the SPECT list-mode reconstruction with MLEM, OSEM and RBI, all including resolution recovery. For the evaluation we used Gaussian-shaped sources with different FWHM at three different locations and three noise levels. For these distributions we calculated the reconstructed images for different numbers of iterations. The absolute error of the reconstructed images was used to evaluate the performance. The performance of all four methods is comparable for the sources located in the centre of the field of view. For the sources located off-centre, the error of the list-mode method is significantly lower than that of the other methods. Splitting the system model into separate object-dependent and detector-dependent modules gives us a flexible reconstruction method. With this we can very easily adapt the resolution recovery to different collimator types.
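For context, here is a minimal binned ML-EM iteration (one of the baselines this study compares against) on a toy 4-bin, 4-voxel system. The system matrix below is an invented row/column-sum geometry and has nothing to do with the collimator model in the paper.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Binned ML-EM update: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / estimated projections
        x = x / sens * (A.T @ ratio)
    return x

# Tiny 2x2 "object" measured by its row and column sums (4 bins, 4 voxels).
A = np.array([[1., 1., 0., 0.],    # row 0 sum
              [0., 0., 1., 1.],    # row 1 sum
              [1., 0., 1., 0.],    # column 0 sum
              [0., 1., 0., 1.]])   # column 1 sum
x_true = np.array([4., 1., 2., 3.])
y = A @ x_true                     # noise-free projections
x_hat = mlem(A, y)
print(x_hat.round(3))
```

A list-mode variant replaces the binned backprojection with a sum over individual detected events, each carrying its own system-response row; the multiplicative update structure is the same.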

14.
A Monte Carlo study is carried out to quantify the effects of seed anisotropy and interseed attenuation for 103Pd and 125I prostate implants. Two idealized and two real prostate implants are considered. Full Monte Carlo simulation (FMCS) of implants (seeds are physically and simultaneously simulated) is compared with isotropic point-source dose-kernel superposition (PSKS) and line-source dose-kernel superposition (LSKS) methods. For clinical pre- and post-procedure implants, the dose to the different structures (prostate, rectum wall, and urethra) is calculated. The discretized volumes of these structures are reconstructed using transrectal ultrasound contours. Local dose differences (PSKS versus FMCS and LSKS versus FMCS) are investigated. The dose contributions from primary versus scattered photons are calculated separately. For 103Pd, the average absolute total dose difference between FMCS and PSKS can be as high as 7.4% for the idealized model and 6.1% for the clinical pre-procedure implant. The total dose difference is lower for 125I: 4.4% for the idealized model and 4.6% for a clinical post-procedure implant. Average absolute dose differences between LSKS and FMCS are less significant for both seed models: 3 to 3.6% for the idealized models and 2.9 to 3.2% for the clinical plans. Dose differences between PSKS and FMCS are due to the absence of both seed anisotropy and interseed attenuation modeling in the PSKS approach. LSKS accounts for seed anisotropy but not for the interseed effect, leading to systematically overestimated dose values in comparison with the more accurate FMCS method. For both idealized and clinical implants, the dose from scattered photons represents less than 1/3 of the total dose. For all studied cases, LSKS prostate DVHs overestimate D90 by 2 to 5% because of the missing interseed attenuation effect. PSKS and LSKS predictions of V150 and V200 are overestimated by up to 9% in comparison with the FMCS results.
Finally, the effects of seed anisotropy and interseed attenuation must be viewed in the context of other significant sources of dose uncertainty, namely seed orientation, source misplacement, prostate morphological changes and tissue heterogeneity.

15.
We investigated the accuracy of qSPECT, a quantitative SPECT reconstruction algorithm we have developed which employs corrections for collimator blurring, photon attenuation and scatter, and provides images in units of absolute radiotracer concentrations (kBq cm(-3)). Using simulated and experimental phantom data with characteristics similar to clinical cardiac perfusion data, we studied the implementation of a scatter correction (SC) as part of an iterative reconstruction protocol. Additionally, with experimental phantom studies we examined the influence of CT-based attenuation maps, relative to those obtained from conventional SPECT transmission scans, on SCs and quantitation. Our results indicate that the qSPECT estimated scatter corrections did not change appreciably after the third iteration of the reconstruction. For the simulated data, qSPECT concentrations agreed with images reconstructed using ideal, scatter-free, simulated data to within 6%. For the experimental data, we observed small systematic differences in the scatter fractions for data using different combinations of SCs and attenuation maps. The SCs were found to be significantly influenced by errors in image coregistration. The reconstructed concentrations using CT-based corrections were more quantitatively accurate than those using attenuation maps from conventional SPECT transmission scans. However, segmenting the attenuation maps from SPECT transmission scans could provide sufficient accuracy for most applications.

16.
An artificial neural network (ANN) trained on high-quality medical tomograms or phantom images may be able to learn the planar data-to-tomographic image relationship with very high precision. As a result, a properly trained ANN can produce comparably accurate image reconstruction without the high computational cost inherent in some traditional reconstruction techniques. We have previously shown that a standard backpropagation neural network can be trained to reconstruct sections of single photon emission computed tomography (SPECT) images based on the planar image projections as inputs. In this study, we present a method of deriving activation functions for a backpropagation ANN that make it readily trainable for full SPECT image reconstruction. The activation functions used for this work are based on the estimated probability density functions (PDFs) of the ANN training set data. The statistically tailored ANN and the standard sigmoidal backpropagation ANN methods are compared both in terms of their trainability and generalization ability. The results presented show that a statistically tailored ANN can reconstruct novel tomographic images of a quality comparable with that of the images used to train the network. Ultimately, an adequately trained ANN should be able to properly compensate for physical photon transport effects, background noise, and artifacts while reconstructing the tomographic image.

17.
The LIPOMETER is an optical device for measuring the thickness of a subcutaneous adipose tissue layer. It illuminates the layer of interest, measures the backscattered light signals and, from these, computes absolute values of subcutaneous adipose tissue layer thickness (in mm). Previously, these light pattern values were fitted by nonlinear regression analysis to absolute values provided by computed tomography. Nonlinear regression analysis has a slight limitation for our problem: a selected curve type cannot be changed afterwards during the application of the measurement device. Artificial neural networks yield a more flexible approach to this fitting problem and might be able to refine the fitting results. In the present paper we compare nonlinear regression analysis with the behaviour of different architectures of multilayer feed-forward neural networks trained by error back propagation. Specifically, we are interested in whether neural networks are able to yield a better fit of the LIPOMETER light patterns to absolute subcutaneous adipose tissue layer thicknesses than the nonlinear regression techniques. Different architectures of these networks are able to surpass the best result of regression analysis in both training and testing, providing higher correlation coefficients, regression lines with absolute values obtained from computed tomography closer to the line of identity, smaller sums of absolute and squared deviations, and higher measurement agreement.

18.
Schmidt TG  Fahrig R  Pelc NJ 《Medical physics》2005,32(11):3234-3245
An inverse-geometry volumetric computed tomography (IGCT) system has been proposed that is capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method, which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing the feasibility of the IGCT system.

19.
Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image-degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least-squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods.
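The weighted-versus-unweighted comparison in this abstract can be illustrated analytically for a linear model (the compartment-model case is nonlinear, so this is only the linearized analogue); the design matrix and per-bin noise variances below are arbitrary choices for illustration.

```python
import numpy as np

# Linear model y = A theta + noise, with heteroscedastic noise covariance V.
# Analytic covariance of the two least-squares estimators:
#   unweighted: C_u = (A^T A)^-1 A^T V A (A^T A)^-1
#   weighted:   C_w = (A^T V^-1 A)^-1   (the Gauss-Markov optimum)
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 2))               # arbitrary design matrix
var = rng.uniform(0.5, 5.0, size=40)       # made-up per-bin noise variances
V = np.diag(var)
Vinv = np.diag(1.0 / var)

AtA_inv = np.linalg.inv(A.T @ A)
C_u = AtA_inv @ A.T @ V @ A @ AtA_inv      # unweighted-LS parameter covariance
C_w = np.linalg.inv(A.T @ Vinv @ A)        # weighted-LS parameter covariance

print(np.diag(C_u))
print(np.diag(C_w))
```

By the Gauss-Markov theorem the weighted variances are never larger; how much smaller they are in practice is exactly the kind of question the analytic propagation framework above is built to answer without Monte Carlo noise trials.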

20.
Simultaneous 99mTc/123I SPECT allows the assessment of two physiological functions under identical conditions. The separation of these radionuclides is difficult, however, because their energies are close. Most energy-window-based scatter correction methods do not fully model either physical factors or patient-specific activity and attenuation distributions. We have developed a fast Monte Carlo (MC) simulation-based multiple-radionuclide and multiple-energy joint ordered-subset expectation-maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. MC-JOSEM simultaneously corrects for scatter and cross talk as well as detector response within the reconstruction algorithm. We evaluated MC-JOSEM for simultaneous brain perfusion (99mTc-HMPAO) and neurotransmission (123I-altropane) SPECT. MC simulations of 99mTc and 123I studies were generated separately and then combined to mimic simultaneous 99mTc/123I SPECT. All the details of photon transport through the brain, the collimator, and the detector, including Compton and coherent scatter, septal penetration, and backscatter from components behind the crystal, were modeled. We reconstructed images from simultaneous dual-radionuclide projections in three ways. First, we reconstructed the photopeak-energy-window projections (with an asymmetric energy window for 123I) using the standard ordered-subsets expectation-maximization algorithm (NSC-OSEM). Second, we used standard OSEM to reconstruct 99mTc photopeak-energy-window projections while including an estimate of scatter from a Compton-scatter energy window (SC-OSEM). Third, we jointly reconstructed both 99mTc and 123I images using projection data associated with the two photopeak energy windows and an intermediate energy window using MC-JOSEM.
For 15 iterations of reconstruction, the bias and standard deviation of 99mTc activity estimates in several brain structures were calculated for NSC-OSEM, SC-OSEM, and MC-JOSEM, using images reconstructed from primary (unscattered) photons as a reference. Similar calculations were performed for 123I images for NSC-OSEM and MC-JOSEM. For 123I images, the dopamine binding potential (BP) at equilibrium and its signal-to-noise ratio (SNR) were also calculated. Our results demonstrate that MC-JOSEM performs better than NSC- and SC-OSEM for quantitation tasks. After 15 iterations of reconstruction, the relative bias of 99mTc activity estimates in the thalamus, striata, white matter, and gray matter volumes from MC-JOSEM ranged from -2.4% to 1.2%, while the same estimates for NSC-OSEM (SC-OSEM) ranged from 20.8% to 103.6% (7.2% to 41.9%). Similarly, the relative bias of 123I activity estimates from 15 iterations of MC-JOSEM in the striata and background ranged from -1.4% to 2.9%, while the same estimates for NSC-OSEM ranged from 1.6% to 10.0%. The relative standard deviation of 99mTc activity estimates from MC-JOSEM ranged from 1.1% to 4.8%, versus 1.2% to 6.7% (1.2% to 5.9%) for NSC-OSEM (SC-OSEM). The relative standard deviation of 123I activity estimates using MC-JOSEM ranged from 1.1% to 1.9%, versus 1.5% to 2.7% for NSC-OSEM. Using the 123I dopamine BP obtained from the reconstruction produced by primary photons as a reference, the result for MC-JOSEM was 50.5% closer to the reference than that of NSC-OSEM after 15 iterations. The SNR for dopamine BP was 23.6 for MC-JOSEM, as compared to 18.3 for NSC-OSEM.
