Similar Documents
20 similar documents found (search time: 22 ms).
1.
Objective: We provide a survey of recent advances in biomedical image analysis and classification from emergent imaging modalities such as terahertz (THz) pulse imaging (TPI) and dynamic contrast-enhanced magnetic resonance images (DCE-MRIs), and identify their underlying commonalities. Methods: Both time- and frequency-domain signal pre-processing techniques are considered: noise removal, spectral analysis, principal component analysis (PCA) and wavelet transforms. Feature extraction and classification methods based on feature vectors built with the above processing techniques are reviewed. A tensorial signal-processing de-noising framework suitable for spatiotemporal association between features in MRI is also discussed. Validation: Examples where the proposed methodologies have been successful in classifying TPIs and DCE-MRIs are discussed. Results: Identifying commonalities in the structure of such heterogeneous datasets potentially leads to a unified multi-channel signal-processing framework for biomedical image analysis. Conclusion: The proposed complex-valued classification methodology enables fusion of entire datasets from a sequence of spatial images taken at different time stamps; this is of interest for inferring disease proliferation. The approach is also of interest for other emergent multi-channel biomedical imaging modalities and of relevance across the biomedical signal-processing community.
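As a concrete illustration of the feature-extraction-plus-classification pipeline surveyed above, the following sketch reduces multi-channel pixel time series to PCA feature vectors and classifies them with an SVM. The data, labels, and parameter choices are synthetic stand-ins, not TPI or DCE-MRI measurements.

```python
# Sketch of a PCA feature-extraction / classification pipeline on synthetic
# multi-channel pixel time series; data and labels are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))      # 500 pixels, 128 time samples each
y = rng.integers(0, 2, size=500)     # binary tissue labels (stand-in)

# Reduce each time series to a small feature vector via PCA, then classify.
features = PCA(n_components=10).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(features, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```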

2.
A post-processing noise suppression technique for biomedical MRI images is presented. The described procedure recovers both sharp edges and smooth surfaces from a given noisy MRI image; it does not blur the edges and does not introduce spikes or other artefacts. The fine details of the image are also preserved. The proposed algorithm first extracts the edges from the original image and then performs noise reduction using a wavelet de-noising method. After the application of the wavelet method, the edges are restored to the filtered image. The result is the original image with less noise, fine detail and sharp edges. Edge extraction is performed using an algorithm based on Sobel operators. The wavelet de-noising method is based on the calculation of the correlation factor between wavelet coefficients belonging to different scales. The algorithm was tested on several MRI images and, as an example of its application, we report the results obtained from a spin-echo (multi-echo) MRI image of a human wrist collected with a low-field experimental scanner (the signal-to-noise ratio, SNR, of the experimental image was 12). Further filtering operations were performed after the addition of white noise to both channels of the experimental image, before the magnitude calculation. The results at SNR = 7, SNR = 5 and SNR = 3 are also reported. For SNR values between 5 and 12, the improvement in SNR was substantial and the fine details were preserved, the edges were not blurred and no spikes or other artefacts were evident, demonstrating the good performance of our method. At very low SNR (SNR = 3) our result is worse than that obtained by a simpler filtering procedure.
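A minimal sketch of the described pipeline follows: Sobel edge extraction, wavelet shrinkage, then edge restoration. The paper's inter-scale correlation rule is replaced here by simple soft thresholding, and all threshold values are illustrative assumptions.

```python
# Extract a Sobel edge map, denoise with wavelet soft thresholding, then
# restore the original pixel values at the detected edges.
import numpy as np
import pywt
from scipy import ndimage

def denoise_keep_edges(img, wavelet="db4", level=3, thr=20.0, edge_thr=60.0):
    gx, gy = ndimage.sobel(img, 0), ndimage.sobel(img, 1)
    edges = np.hypot(gx, gy) > edge_thr           # binary edge mask
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    smooth = pywt.waverec2(coeffs, wavelet)[: img.shape[0], : img.shape[1]]
    out = smooth.copy()
    out[edges] = img[edges]                       # put original edges back
    return out
```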

3.
Ultrasound images are easily corrupted by speckle noise, which limits their further use in medical diagnosis. A de-noising method for ultrasound images is proposed that combines the dual-tree complex wavelet transform (DT-CWT) with nonlinear diffusion. First, the image is decomposed with the dual-tree complex wavelet; adaptive contrast diffusion is then applied to the high-frequency part and total-variation diffusion to the low-frequency part, and finally the image is reconstructed. Experimental results are given and compared with a method combining wavelet threshold shrinkage and total-variation diffusion, and with wavelet-based and multiwavelet-based nonlinear diffusion methods. The results show that the proposed method achieves superior de-noising: it suppresses noise more strongly while better preserving the original edges and texture features of the ultrasound image.
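The sketch below implements only the nonlinear-diffusion component, in a generic Perona-Malik form; the dual-tree complex wavelet decomposition and the adaptive contrast/total-variation split are omitted, and the conductivity function and parameters are assumptions.

```python
# Perona-Malik style nonlinear diffusion with an exponential edge-stopping
# conductivity; kappa and n_iter are illustrative choices.
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # conductivities: small across strong gradients, so edges survive
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```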

4.
An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed. The TV norm has been described in the literature as a method for reducing noise in two-dimensional images while preserving edges, without introducing ringing or edge artefacts. It has also been proposed as a 2D regularization function in Bayesian reconstruction, implemented in an expectation maximization algorithm (TV-EM). TV-EM was developed for 2D single photon emission computed tomography imaging, and the algorithm is capable of smoothing noise while maintaining edges without introducing artefacts. The TV norm was extended from 2D to 3D and incorporated into an ordered subsets expectation maximization algorithm for limited view angle geometry. The algorithm, called TV3D-EM, was evaluated using a modelled point spread function and digital phantoms. Reconstructed images were compared with those reconstructed with the 2D filtered backprojection algorithm currently used in ectomography. Results show a substantial reduction in artefacts related to the limited view angle geometry, and noise levels were also improved. Perhaps most importantly, depth resolution was improved by at least 45%. In conclusion, the proposed algorithm has been shown to improve the perceived image quality.
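For reference, a minimal sketch of the isotropic 3D TV norm that such a prior is built on (forward differences; the EM machinery is not shown):

```python
# Isotropic 3D total-variation norm of a volume, using forward differences
# with replicated boundary values.
import numpy as np

def tv3d(vol, eps=1e-8):
    dx = np.diff(vol, axis=0, append=vol[-1:, :, :])
    dy = np.diff(vol, axis=1, append=vol[:, -1:, :])
    dz = np.diff(vol, axis=2, append=vol[:, :, -1:])
    return np.sum(np.sqrt(dx**2 + dy**2 + dz**2 + eps))
```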

5.
Pan X, Yu L. Medical Physics 2003;30(4):590-600.
In computed tomography (CT), the fan-beam filtered backprojection (FFBP) algorithm is used widely for image reconstruction. It is known that the FFBP algorithm can significantly amplify data noise and aliasing artifacts in situations where the focal lengths are comparable to or smaller than the size of the field of measurement (FOM). In this work, we propose an algorithm that is less susceptible to data noise, aliasing, and other data inconsistencies than is the FFBP algorithm while retaining the favorable resolution properties of the FFBP algorithm. In an attempt to evaluate the noise properties in reconstructed images, we derive analytic expressions for image variances obtained by use of the FFBP algorithm and the proposed algorithm. Computer simulation studies are conducted for quantitative evaluation of the spatial resolution and noise properties of images reconstructed by use of the algorithms. Numerical results of these studies confirm the favorable spatial resolution and noise properties of the proposed algorithm and verify the validity of the theoretically predicted image variances. The proposed algorithm and the derived analytic expressions for image variances can have practical implications for both estimation and detection/classification tasks making use of CT images, and they can readily be generalized to other fan-beam geometries.
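The paper derives analytic variance expressions; as a hedged brute-force counterpart, the sketch below estimates per-pixel reconstruction variance by Monte Carlo over noisy sinograms, using scikit-image's parallel-beam FBP rather than the fan-beam geometry treated in the paper.

```python
# Empirical per-pixel variance of FBP reconstructions under repeated
# sinogram noise realizations (parallel-beam stand-in for the fan-beam case).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()[::4, ::4]   # downsample to keep this quick
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=theta)

rng = np.random.default_rng(0)
recons = [
    iradon(sino + rng.normal(scale=1.0, size=sino.shape), theta=theta)
    for _ in range(50)
]
var_map = np.var(np.stack(recons), axis=0)  # empirical image variance
```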

6.
Conventional injected-current electrical impedance tomography (EIT) and magnetic resonance imaging (MRI) techniques can be combined to reconstruct high resolution true conductivity images. The magnetic flux density distribution generated by the internal current density distribution is extracted from MR phase images. This information is used to form a finely detailed conductivity image using an Ohm's-law-based update equation. The reconstructed conductivity image is assumed to differ from the true image by a scale factor. EIT surface potential measurements are then used to scale the reconstructed image in order to find the true conductivity values. This process is iterated until a stopping criterion is met. Several simulations are carried out for opposite and cosine current injection patterns to select the best current injection pattern for a 2D thorax model. The contrast resolution and accuracy of the proposed algorithm are also studied. In all simulation studies, realistic noise models for voltage and magnetic flux density measurements are used. It is shown that, in contrast to conventional EIT techniques, the proposed method has the capability of reconstructing conductivity images with uniform and high spatial resolution. The spatial resolution is limited by the larger of the finite element mesh element size and twice the magnetic resonance image pixel size.
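The following sketch shows only the global scaling step described above, with the forward EIT solver abstracted as a callable; all names are illustrative, and the 1/sigma scaling of potentials assumes a fixed injected current.

```python
# Recover the unknown global conductivity scale by least-squares matching of
# simulated to measured boundary potentials.
import numpy as np

def scale_conductivity(sigma_rel, v_measured, forward_solve):
    """sigma_rel: relative conductivity from MR phase data (scale unknown).
    forward_solve(sigma) -> simulated boundary potentials for the EIT
    current pattern, same shape as v_measured."""
    v_sim = forward_solve(sigma_rel)
    # For fixed injected currents, scaling sigma by s scales potentials by
    # 1/s; fit a = 1/s by least squares, then invert.
    a = np.dot(v_sim, v_measured) / np.dot(v_sim, v_sim)
    return sigma_rel / a
```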

7.
Arterial spin-labeling (ASL) perfusion MRI is a non-invasive method for quantifying cerebral blood flow (CBF). Standard ASL CBF calibration mainly relies on pair-wise subtraction of the spin-labeled images and control images at each voxel separately, ignoring the abundant spatial correlations in ASL data. To address this issue, we previously proposed a multivariate support vector machine (SVM) learning-based algorithm for ASL CBF quantification (SVMASLQ). However, the original SVMASLQ was designed to do CBF quantification for all image voxels simultaneously, which is not ideal for considering local signal and noise variations. To address this, in this paper we extend SVMASLQ into a patch-wise method using a patch-wise classification kernel. At each voxel, an image patch centered at that voxel is extracted from both the control images and labeled images and input into SVMASLQ to find the corresponding patch of the surrogate perfusion map using a non-linear SVM classifier. Those patches are eventually combined into the final perfusion map. Method evaluations were performed using ASL data from 30 young healthy subjects. The results showed that the patch-wise SVMASLQ increased perfusion map SNR by 6.6% compared to the non-patch-wise SVMASLQ.
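A toy sketch of the patch-wise idea: each voxel is classified from the patch of control-label differences around it using a non-linear SVM. Data, labels, and patch size are random stand-ins; the actual surrogate-map construction in SVMASLQ is not reproduced.

```python
# Patch-wise classification: one feature vector per local image patch.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.svm import SVC

rng = np.random.default_rng(0)
diff_img = rng.normal(size=(32, 32))          # control - label difference
patches = extract_patches_2d(diff_img, (5, 5))
X = patches.reshape(len(patches), -1)         # flatten each 5x5 patch
y = rng.integers(0, 2, size=len(X))           # stand-in perfusion labels
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X)                         # per-patch (central voxel) class
```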

8.
Diffusion-weighted imaging, a contrast unique to MRI, is used for assessment of tissue microstructure in vivo. However, this exquisite sensitivity to scales far finer than the imaging resolution comes at the cost of vulnerability to errors caused by sources of motion other than diffusion. Addressing the issue of motion has traditionally limited diffusion-weighted imaging to a few acquisition techniques and, as a consequence, to poorer spatial resolution than other MRI applications. Advances in MR imaging methodology have allowed diffusion-weighted MRI to push to ever higher spatial resolution. In this review we focus on the pulse sequences and associated techniques under development that have pushed the limits of image quality and spatial resolution in diffusion-weighted MRI.

9.
Voxel-based estimation of PET images, generally referred to as parametric imaging, can provide invaluable information about the heterogeneity of an imaging agent in a given tissue. Due to the high level of noise in dynamic images, however, the estimated parametric image is often noisy and unreliable. Several approaches have been developed to address this challenge, including spatial noise reduction techniques, cluster analysis and spatially constrained weighted nonlinear least-squares (SCWNLS) methods. In this study, we develop and test several noise reduction techniques combined with SCWNLS using simulated dynamic PET images. Both spatial smoothing filters and wavelet-based noise reduction techniques are investigated. In addition, 12 different parametric imaging methods are compared using simulated data. With the combination of noise reduction techniques and SCWNLS methods, more accurate parameter estimation can be achieved than with either of the two techniques alone. A relative root-mean-square error of less than 10% is achieved with the combined approach in the simulation study. The wavelet-denoising-based approach is less sensitive to noise and provides more accurate parameter estimation at higher noise levels. Further evaluation of the proposed methods is performed using actual small-animal PET datasets. We expect the proposed method to be useful for cardiac, neurological and oncologic applications.
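A hedged sketch of one such combination: wavelet-denoise each frame, then fit a simple mono-exponential kinetic model voxel-by-voxel with nonlinear least squares. The true compartment model and the SCWNLS spatial weighting are simplified away; all names and parameters are illustrative.

```python
# Frame-wise wavelet denoising followed by voxel-wise nonlinear least squares.
import numpy as np
import pywt
from scipy.optimize import curve_fit

def denoise_frame(frame, wavelet="db2", thr=0.5):
    cA, (cH, cV, cD) = pywt.dwt2(frame, wavelet)
    cH, cV, cD = (pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

def model(t, k1, k2):
    return k1 * np.exp(-k2 * t)      # stand-in for a compartment model

def fit_voxel(tac, t):
    popt, _ = curve_fit(model, t, tac, p0=(1.0, 0.1), maxfev=2000)
    return popt                      # (k1, k2) parametric values per voxel
```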

10.
Medical images exchanged over public networks require a methodology to provide confidentiality for the image, authenticity of the image ownership and source of origin, and image integrity verification. To provide these three security requirements, we propose in this paper a region-based algorithm based on multiple watermarking in the frequency and spatial domains. Confidentiality and authenticity are provided by embedding robust watermarks in the region-of-non-interest (RONI) of the image using a blind scheme in the discrete wavelet transform and singular value decomposition domain (DWT-SVD). On the other hand, integrity is provided by embedding local fragile watermarks in the region-of-interest (ROI) of the image using a reversible scheme in the spatial domain. The integrity provided by the proposed algorithm is implemented on a block-level of the partitioned-image, thus enabling localized detection of tampered regions. The algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization capability, using MRI, Ultrasound, and X-ray gray-scale medical images. Performance results demonstrate the effectiveness of the proposed algorithm in providing the required security services for telemedicine applications.
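A minimal sketch of the robust RONI embedding step in the DWT-SVD domain, under the assumption that the watermark additively perturbs the singular values of the LL sub-band; the reversible ROI scheme and block-wise tamper localization are not shown.

```python
# Robust watermark embedding: modify the singular values of the LL sub-band.
import numpy as np
import pywt

def embed_dwt_svd(roni, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(roni.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    S_marked = S + alpha * watermark[: len(S)]   # additive SV modification
    LL_marked = (U * S_marked) @ Vt              # == U @ diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")
```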

11.
Noise and bias fields are the main factors degrading the quality of magnetic resonance (MRI) images. Targeting tissue segmentation of brain MRI images corrupted by both additive noise and a multiplicative bias field, a noise-robust, locally coherent fuzzy clustering algorithm is proposed; by adding a fuzzy factor and a consistent local-information constraint to the objective function, the adverse effects of noise and the bias field are suppressed simultaneously, improving segmentation accuracy and stability. The clustering performance was evaluated on 20 synthetic images, 60 simulated brain MRI images from BrainWeb, and 100 real brain MRI images from the IBSR. The experimental results show that, in the presence of both noise and bias-field interference, the proposed algorithm reaches an average segmentation accuracy (SA) of 0.97 on the synthetic image set, higher than several classical improved FCM algorithms, with gains of up to 0.37; on the real brain MRI set it has a clear advantage for cerebrospinal fluid segmentation, improving the average similarity measure (KI) by about 0.1. The analysis indicates that the proposed algorithm offers better classification accuracy and stability.
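For orientation, a sketch of plain fuzzy c-means on a 1D intensity vector; the paper's fuzzy factor and consistent local-information constraint are omitted, and m is the usual fuzzifier.

```python
# Plain fuzzy c-means on flattened image intensities (no spatial term).
import numpy as np

def fcm(x, c=3, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))      # memberships (N, c)
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)         # fuzzy-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))                 # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```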

12.
Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new guided-filter-based approach for efficient MRI recovery. The guided filter is an edge-preserving smoothing operator and behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image serves as the guidance image, the other as the input to be filtered. By introducing the guided filter, our reconstruction algorithm recovers more detail. We compare our reconstruction algorithm with several competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
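The sketch below is the standard guided filter (in the sense of He et al.) used as the edge-preserving step described above; it is the generic filter, not the paper's full two-cost-function reconstruction scheme, and r/eps are illustrative.

```python
# Standard guided filter: local linear model of the output in the guide.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=4, eps=1e-2):
    box = lambda a: uniform_filter(a, size=2 * r + 1)
    mean_I, mean_p = box(guide), box(src)
    cov_Ip = box(guide * src) - mean_I * mean_p
    var_I = box(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)      # edge-preserving output
```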

13.
The multiwire camera (MWC) produces high speed, quantitative autoradiography of radiolabelled substances in two-dimensional systems. While greatly superior to film-based systems in respect of speed and quantitative accuracy, the MWC has significantly poorer spatial resolution (particularly for high energy beta-emitting radiolabels), and its performance is ultimately limited by the noise induced in the images by Poisson statistics and counter background. Processing the MWC images with a maximum entropy algorithm significantly improves the performance of the system in these respects. The algorithm has been tested using one-dimensional data taken from images of known tritium, 14C and 125I distributions. Processed images are visually more acceptable, with improved quantitative accuracy and spatial resolution. Quantitative accuracy, calculated as the root mean square deviation between an image and the known sample activities, is 10-40% lower for processed images compared with original camera images. Spatial resolution, calculated from slopes in the images representing edges of activity in the sources, is improved by 20-40% for the processed images. The algorithm is used to improve a two-dimensional image from a biological study. The source distribution consisted of a set of circular dots of varying activity. The dots with lowest activity were barely discernible in the raw MWC image but are clearly resolved after processing. The algorithm is simple and effective and executes acceptably quickly on a personal computer. It should prove useful in any context where the imaging performance of a system is limited by Poisson statistics.
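As a hedged 1D illustration of maximum-entropy restoration: maximize the image entropy minus a chi-squared data term by projected gradient ascent, given a known blur matrix A and positive count data. This is a generic MEM scheme under assumed parameters, not the paper's exact algorithm.

```python
# Generic maximum-entropy restoration for 1D data: ascend Q = S - lam*chi2
# with a positivity constraint. Assumes d holds positive count-like data.
import numpy as np

def maxent_restore(d, A, sigma=1.0, lam=0.05, step=1e-3, n_iter=2000):
    f = np.full(A.shape[1], d.mean())            # flat positive start
    default = f.mean()                           # entropy default level
    for _ in range(n_iter):
        grad_S = -np.log(f / default) - 1.0      # d/df of -sum f ln(f/m)
        grad_chi2 = 2.0 * A.T @ (A @ f - d) / sigma**2
        f = np.clip(f + step * (grad_S - lam * grad_chi2), 1e-12, None)
    return f
```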

14.
High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original TV norm. During the reconstruction process, pixels at edges are gradually identified and given low penalty weight. Our iterative algorithm is implemented on a graphics processing unit to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information in low-contrast structures and therefore maintains acceptable spatial resolution.
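A sketch of the edge-preserving weighting idea in denoising form: a per-pixel penalty weight that decays on strong gradients multiplies the TV flow, so edges are smoothed less. The weight function and parameters are assumptions, not the paper's exact formulation.

```python
# One gradient-descent step of weighted (edge-preserving) TV denoising.
import numpy as np

def eptv_step(u, lam=0.1, delta=10.0, eps=1e-8):
    gx = np.roll(u, -1, 0) - u
    gy = np.roll(u, -1, 1) - u
    mag = np.sqrt(gx**2 + gy**2 + eps)
    w = np.exp(-mag / delta)              # low weight at edges -> preserved
    # divergence of w * normalized gradient (weighted TV flow)
    px, py = w * gx / mag, w * gy / mag
    div = (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))
    return u + lam * div
```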

15.
A fully 4D joint-estimation approach to reconstruction of temporal sequences of 3D positron emission tomography (PET) images is proposed. The method estimates both a set of temporal basis functions and the corresponding coefficient for each basis function at each spatial location within the image. The joint estimation is performed through a fully 4D version of the maximum likelihood expectation maximization (ML-EM) algorithm in conjunction with two different models of the mean of the Poisson measured data. The first model regards the coefficients of the temporal basis functions as the unknown parameters to be estimated and the second model regards the temporal basis functions themselves as the unknown parameters. The fully 4D methodology is compared to the conventional frame-by-frame independent reconstruction approach (3D ML-EM) for varying levels of both spatial and temporal post-reconstruction smoothing. It is found that using a set of temporally extensive basis functions (estimated from the data by 4D ML-EM) significantly reduces the spatial noise when compared to the independent method for a given level of image resolution. In addition to spatial image quality advantages, for smaller regions of interest (where statistical quality is often limited) the reconstructed time-activity curves show a lower level of bias and a lower level of noise compared to the independent reconstruction approach. Finally, the method is demonstrated on clinical 4D PET data.
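Joint estimation of temporal basis functions and their coefficients under a Poisson model is closely related to Kullback-Leibler NMF; as a stand-in for the 4D ML-EM scheme, the sketch below factors a (voxels x frames) matrix with scikit-learn's multiplicative-update NMF.

```python
# KL-NMF factorization of time-activity data into per-voxel coefficients
# and shared temporal basis functions; the data are a Poisson stand-in.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
tacs = rng.poisson(lam=20.0, size=(1000, 24)).astype(float)  # voxels x frames

nmf = NMF(n_components=4, beta_loss="kullback-leibler",
          solver="mu", max_iter=500, init="random", random_state=0)
coeffs = nmf.fit_transform(tacs)     # per-voxel basis coefficients
bases = nmf.components_              # estimated temporal basis functions
```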

16.
In FDG-PET imaging of thoracic tumors, blurring due to breathing motion often significantly degrades the quality of the observed image, obscuring the tumor boundary. We demonstrate a deblurring technique that combines patient-specific motion estimates of tissue trajectories with image deconvolution, thereby partially eliminating breathing-motion-induced artifacts. Two data sets were used to evaluate the methodology: mobile phantoms and clinical images. The clinical images consist of co-registered PET/CT images of patients diagnosed with lung cancer. A breathing motion model was used to locally estimate the location-dependent tissue location probability function (TLP) due to breathing. The deconvolution is carried out by an expectation-maximization (EM) iterative algorithm using the motion-based TLP. Several methods were used to improve the robustness of the deblurring process by mitigating noise amplification and compensating for uncertainties in the motion estimates. The mobile phantom study, with controlled settings, demonstrated a significant reduction in the underestimation error of activity concentration in the high-activity case, without significant superiority among the different applied methods. In the case of medium activity concentration (moderate noise levels), less improvement was observed (10%-15% reduction in underestimation error, versus 15%-20% in the high-concentration case); residual denoising using wavelets offered the best performance here. In the clinical data, the image spatial resolution was significantly improved, especially in the direction of greatest motion (cranio-caudal). The EM algorithm converged within 15 and 5 iterations in the large and small tumor cases, respectively. A compromise between a figure-of-merit and entropy minimization was suggested as a stopping criterion. Regularization techniques such as wavelets and Bayesian methods provided further refinement by suppressing noise amplification. Our initial results show that the proposed method provides a feasible framework for improving thoracic PET images, without the need for gated/4-D PET imaging, when 4-D CT is available to estimate tumor motion.
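Richardson-Lucy deconvolution is the classic EM algorithm for Poisson data; the sketch below uses it with a spatially invariant motion kernel as a simplified stand-in for the location-dependent TLP, with a uniform cranio-caudal blur as a purely illustrative PSF.

```python
# Richardson-Lucy (Poisson EM) deconvolution with a motion-blur PSF.
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(blurred, psf, n_iter=15):
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        ratio = blurred / (convolve(est, psf) + 1e-12)
        est *= convolve(ratio, psf_flip)
    return est

motion_psf = np.ones((7, 1)) / 7.0   # 7-pixel vertical (cranio-caudal) blur
```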

17.
Statistical methods for image reconstruction, such as maximum likelihood expectation maximization, are more robust and flexible than analytical inversion methods and allow for accurate modelling of the counting statistics and photon transport during acquisition of projection data. Statistical reconstruction is prohibitively slow when applied to clinical x-ray CT due to the large data sets and the high number of iterations required for reconstructing high-resolution images. Recently, however, powerful methods for accelerating statistical reconstruction have been proposed which, instead of accessing all projections simultaneously when updating an image estimate, access one subset of projections at a time during iterative reconstruction. In this paper we study images generated by the convex algorithm accelerated by the use of ordered subsets (the OS convex algorithm, OSC) for data sets with sizes, noise levels and spatial resolution representative of x-ray CT imaging. Only in the case of extremely high acceleration factors (higher than 50, corresponding to fewer than 20 projections per subset) do areas with incorrect grey values appear in the reconstructed images and image noise increase compared with the standard convex algorithm. These image degradations can be adequately corrected for by running the final iteration of OSC with a reduced number of subsets. Even with such a relatively slow final iteration, OSC produces nearly equal resolution and lesion contrast to the standard convex algorithm, but more than two orders of magnitude faster.
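A toy ordered-subsets EM update with an explicit dense system matrix A makes the subset-access idea concrete (real CT implementations use on-the-fly projectors, and OSC itself differs in detail from plain OSEM):

```python
# Ordered-subsets EM for Poisson projection data y with system matrix A.
import numpy as np

def osem(y, A, n_subsets=4, n_iter=10):
    n_proj, n_vox = A.shape
    x = np.ones(n_vox)
    subsets = [np.arange(s, n_proj, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:                       # one update per subset
            As = A[idx]
            ratio = y[idx] / (As @ x + 1e-12)
            x *= (As.T @ ratio) / (As.T @ np.ones(len(idx)) + 1e-12)
    return x
```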

18.
It has long been recognized that the motion-artifact problems of conventional time-subtraction digital subtraction angiography (DSA) may be overcome using energy subtraction techniques. Of the variety of energy subtraction techniques investigated, non-k-edge dual-energy subtraction offers the best signal-to-noise ratio (SNR). However, this technique achieves only 55% of the temporal DSA SNR. Noise reduction techniques that average the noisier high-energy image produce various degrees of noise improvement while minimally affecting iodine contrast and resolution. A more significant improvement in dual-energy DSA iodine SNR, however, results when the correlated noise that exists in material-specific images is appropriately cancelled. The correlated noise reduction (CNR) algorithm presented here follows directly from the dual-energy computed tomography work of Kalender, who made explicit use of noise correlations in material-specific images to reduce noise. The results are identical to those achieved using a linear version of the two-stage filtering process described by Macovski, in which the selective image is filtered to reduce high-frequency noise and added to a weighted, high-SNR, nonselective image that has been processed with a high-frequency bandpass filter. The dual-energy DSA CNR algorithm presented here combines selective tissue and iodine images to produce a significant increase in iodine SNR while fully preserving iodine spatial resolution. Theoretical calculations predict a factor of 2-4 improvement in SNR compared to conventional dual-energy images. The improvement factor achieved depends upon the x-ray beam spectra and the size of the blurring kernel used in the algorithm. (Abstract truncated at 250 words.)
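A sketch of the linear two-stage combination attributed to Macovski in the text: low-pass the noisy selective (iodine) image and add back the high-pass of a weighted, high-SNR nonselective image. The Gaussian kernel and weight are illustrative assumptions.

```python
# Two-stage combination: low-pass selective image + weighted high-pass of
# the nonselective image, restoring high-frequency detail with low noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def cnr_combine(iodine, nonselective, sigma=2.0, w=0.3):
    low = gaussian_filter(iodine, sigma)                 # denoised iodine
    high = nonselective - gaussian_filter(nonselective, sigma)
    return low + w * high                                # edges added back
```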

19.
Bayesian methods have been widely applied to the ill-posed problem of image reconstruction. Typically, prior information about the object image is needed to produce reasonable reconstructions. In this paper, we propose a novel generalized Gibbs prior (GG-prior), which exploits the basic affinity structure information in an image. The motivation for using the GG-prior is that it has been shown to be effective at noise suppression while maintaining sharp edges without oscillations. This feature makes it particularly attractive for the reconstruction of positron emission tomography (PET) images, where the aim is to identify the shape of objects against the background by sharp edges. We show that the standard paraboloidal surrogate coordinate ascent (PSCA) algorithm can be modified to incorporate the GG-prior using a locally linearized scheme in each iteration. The proposed GG-prior MAP reconstruction algorithm based on PSCA has been tested on simulated and real phantom data. Comparison studies with the conventional filtered backprojection (FBP) method and the Huber prior clearly demonstrate that the proposed GG-prior performs better at lowering noise and preserving image edges, achieving a higher signal-to-noise ratio (SNR).
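The GG-prior itself is not reproduced here; for comparison, this is the Huber potential the paper benchmarks against, with the derivative used inside penalized (e.g., one-step-late MAP-EM style) updates.

```python
# Huber potential and its derivative: quadratic core, linear tails, so
# small differences are smoothed while large (edge) differences are not
# over-penalized.
import numpy as np

def huber(t, delta=1.0):
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

def huber_deriv(t, delta=1.0):
    return np.clip(t, -delta, delta)   # psi'(t)
```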

20.
Yu L, Pan X. Medical Physics 2003;30(10):2629-2637.
A half-scan strategy can be used to reduce the scanning time and the radiation dose delivered to the patient in fan-beam computed tomography (CT). In helical CT, the data weighting/interpolation functions are often devised based upon half-scan configurations. The half-scan fan-beam filtered backprojection (FFBP) algorithm is generally used for image reconstruction from half-scan data. It can, however, be susceptible to sample aliasing and data noise for configurations with short focal lengths and/or large fan angles, leading to nonuniform resolution and noise properties in reconstructed images. Uniform resolution and noise properties are generally desired because they may increase the utility of reconstructed images in estimation and detection/classification tasks. In this work, we propose an algorithm for reconstruction of images with uniform noise and resolution properties in half-scan CT. In an attempt to evaluate the image-noise properties, we derive analytic expressions for image variances obtained by use of the half-scan algorithms. We also perform numerical studies to assess quantitatively the resolution and noise properties of the algorithms. The results of these studies confirm that the proposed algorithm yields images with more uniform spatial resolution and with lower and more uniform noise levels than does the half-scan FFBP algorithm. Empirical results obtained in noise studies also verify the validity of the derived expressions for image variances. The proposed algorithm would be particularly useful for image reconstruction from data acquired with configurations having short focal lengths and a large field of measurement, which may be encountered in compact micro-CT and radiation-therapy CT applications. The analytic results for the image-noise properties can be used for image-quality assessment in detection/classification tasks by use of model observers.
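For context, the classical Parker weighting for a short (half) fan-beam scan is sketched below; this is the standard weighting scheme, not the paper's proposed uniform-noise algorithm. beta is the projection angle in [0, pi + 2*gm] and gamma the fan angle in [-gm, gm].

```python
# Classical Parker weights for short-scan fan-beam CT: smooth sin^2 ramps
# at the start and end of the scan, unity in between.
import numpy as np

def parker_weight(beta, gamma, gm):
    """beta: projection angle(s); gamma: fan angle(s); gm: half fan angle."""
    beta = np.asarray(beta, dtype=float)
    w = np.ones(np.broadcast(beta, gamma).shape)
    r1 = beta < 2.0 * (gm - gamma)                       # ramp-up region
    w1 = np.sin(np.pi / 4.0 * beta / (gm - gamma)) ** 2
    r3 = beta > np.pi - 2.0 * gamma                      # ramp-down region
    w3 = np.sin(np.pi / 4.0 * (np.pi + 2.0 * gm - beta) / (gm + gamma)) ** 2
    return np.where(r1, w1, np.where(r3, w3, w))
```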
