Similar Documents
19 similar documents found.
1.
Objective: In CT examinations, limited-angle and sparse-view projections can reduce the X-ray dose, but the resulting projection data are incomplete, which makes image reconstruction difficult. To overcome this problem and obtain better reconstructions, this paper proposes an iterative CT reconstruction algorithm based on computing the generalized inverse of the projection matrix. Methods: The algorithm represents the reconstructed image in terms of the projection matrix and its generalized inverse. A first-order iteration is used to compute the generalized inverse; because both the projection matrix and its generalized inverse are very large, products with them are replaced during the iteration by forward projection and filtered backprojection operations. Parallel-beam, limited-angle and sparse-view projection data are then reconstructed with different algorithms, and the mean squared error, universal image quality index and mutual information of the results are compared. Results: The proposed method yields better mean squared error, universal image quality index and mutual information, with a shorter reconstruction time. Conclusion: The method effectively improves reconstruction quality without prior structural information about the unknown image or assumptions about artifacts, and since no explicit system matrix needs to be computed, the reconstruction process is simple and practical.
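A minimal operator-form sketch of this matrix-free idea (not the paper's exact first-order generalized-inverse iteration): products with the projection matrix and with an approximate generalized inverse are replaced by user-supplied `forward` and `fbp` callables, so no large matrix is ever stored. All names are illustrative.

```python
import numpy as np

def pinv_reconstruct(forward, fbp, sinogram, x0, n_iter=20):
    """Matrix-free residual-correction iteration toward x = A^+ b.

    forward(x): plays the role of the projection matrix A.
    fbp(p):     plays the role of an approximate generalized inverse A^+.
    Both are assumed callables; no explicit matrix is ever formed.
    """
    x = x0.copy()
    for _ in range(n_iter):
        residual = sinogram - forward(x)  # mismatch in projection space
        x = x + fbp(residual)             # correct the image via filtered backprojection
    return x
```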

2.
Objective: In conventional iterative CT algorithms, to simplify computation the projection coefficients are determined by whether the imaging ray passes through a pixel: an element of the projection coefficient matrix is set to 1 if the ray crosses the pixel and to 0 otherwise. Rays that merely clip the edge of a pixel are also assigned 1, which exaggerates their contribution to the corresponding projection value. To reduce reconstruction error and improve image quality, an iterative reconstruction algorithm weighted by intersection length is proposed: through accurate modelling, each element of the projection coefficient matrix is set to the length of the ray path within the pixel. Methods: Using MATLAB 7.0, a computer-simulated scan of the Shepp-Logan phantom was performed, and ML-EM iterative reconstruction was carried out with projection coefficients computed the conventional way and with intersection-length weights, respectively. Results: The simulations show that ML-EM reconstruction with intersection-length-weighted projection coefficients improves image quality compared with reconstruction using conventional projection coefficients. Conclusion: Through accurate modelling, the intersection-length-weighted ML-EM algorithm controls noise, reduces error and reconstructs more accurately. Accurately modelling the geometric and physical factors of imaging effectively controls the non-random part of their influence and reduces reconstruction error; this provides a new way of computing projection coefficients for iterative reconstruction and further improves the image quality of iterative CT reconstruction.
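The key change is how a single element of the projection coefficient matrix is computed. Below is a sketch of the intersection-length weight for one ray and one pixel using standard slab clipping (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def intersection_length(p0, p1, x_lo, x_hi, y_lo, y_hi):
    """Length of the segment p0 -> p1 inside the axis-aligned pixel
    [x_lo, x_hi] x [y_lo, y_hi], via slab clipping of the ray parameter t."""
    d = p1 - p0
    t0, t1 = 0.0, 1.0
    for axis, (lo, hi) in enumerate(((x_lo, x_hi), (y_lo, y_hi))):
        if abs(d[axis]) < 1e-12:
            if not (lo <= p0[axis] <= hi):
                return 0.0          # parallel to this slab and outside it
        else:
            ta = (lo - p0[axis]) / d[axis]
            tb = (hi - p0[axis]) / d[axis]
            t0 = max(t0, min(ta, tb))
            t1 = min(t1, max(ta, tb))
    if t1 <= t0:
        return 0.0                   # ray misses the pixel entirely
    return (t1 - t0) * np.linalg.norm(d)
```

A ray that only grazes a pixel corner now contributes a weight near zero instead of 1, which is exactly the error the paper sets out to remove.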

3.
Objective: Building 2D-3D medical image registration datasets from clinical data is a key step toward applying learning algorithms in real clinical practice. However, the acquisition of clinical data involves many sources of uncertainty, so the calibration of such datasets needs to be analysed and evaluated. This paper analyses and evaluates several sets of calibration data for X-ray and CT images acquired during thoracic endovascular aortic repair, and determines the correct calibration. Methods: The calibration results were evaluated with a similarity-measure method and a projection-distance-error method. For the former, a similarity criterion was used to compute the similarity between digitally reconstructed radiographs (DRRs) generated from the CT images and the X-ray images; the higher the similarity, the closer the corresponding calibration is to the truth. For the latter, the marker positions in the X-ray image were read as reference positions; the computed marker positions in the CT image were projected onto the X-ray image to obtain projected positions; and the distance between reference and projected positions was computed, with smaller distances indicating calibrations closer to the truth. Results: For the given calibration sets, the DRR-to-X-ray similarities were close to one another and showed no clear preference, whereas the projection-distance-error analysis discriminated clearly and quantified the quality of the calibrations. The main reasons are that the differences between the calibration sets are small and that the modality gap between the DRRs and the X-ray images is large. Conclusion: The projection distance error is an effective means of evaluating the calibration of 2D-3D medical image registration datasets. Moreover, if the provided calibration data differ markedly, or...
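A sketch of the projection-distance-error measure, assuming the imaging geometry is summarized by a 3×4 homogeneous projection matrix `P` (this parameterization is an assumption for illustration; the paper does not specify one):

```python
import numpy as np

def projection_distance_error(P, markers_3d, markers_2d):
    """Mean 2D distance between detected marker positions (markers_2d, n x 2)
    and the projections of their 3D counterparts (markers_3d, n x 3) under P."""
    n = markers_3d.shape[0]
    homog = np.hstack([markers_3d, np.ones((n, 1))])  # to homogeneous coordinates
    proj = (P @ homog.T).T                            # project into the X-ray plane
    proj = proj[:, :2] / proj[:, 2:3]                 # perspective divide
    return np.linalg.norm(proj - markers_2d, axis=1).mean()
```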

4.
For CT imaging from undersampled projection data, this paper proposes an algebraic reconstruction technique (ART) with iterative bilateral-filtering correction. In each iteration, the image is first reconstructed with ART and constrained to be non-negative; the constrained image is then corrected with a bilateral filter before entering the next iteration, until the stopping criterion is met. To further improve reconstruction quality and speed up convergence, an improved bilateral filtering algorithm is used to increase the efficiency of the iteration. Reconstructions of the Shepp-Logan phantom and of real projection data verify the feasibility of the algorithm, which is compared with filtered backprojection (FBP), ART, and ART with Gaussian filtering (GF-ART). The results show that the proposed algorithm yields a higher signal-to-noise ratio and better preserves image edges.
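A sketch of the outer loop, with a plain Kaczmarz (ART) sweep and scikit-image's `denoise_bilateral` standing in for the paper's improved bilateral filter; the dense system matrix `A`, relaxation factor `lam` and iteration count are illustrative:

```python
import numpy as np
from skimage.restoration import denoise_bilateral

def art_bilateral(A, p, shape, n_iter=10, lam=0.5):
    """ART with per-iteration non-negativity and bilateral-filter correction.

    A: (n_rays, n_pixels) system matrix; p: measured projections.
    """
    x = np.zeros(A.shape[1])
    norms = (A ** 2).sum(axis=1) + 1e-12           # row norms for Kaczmarz updates
    for _ in range(n_iter):
        for i in range(A.shape[0]):                # one ART (Kaczmarz) sweep
            r = p[i] - A[i] @ x
            x += lam * r / norms[i] * A[i]
        x = np.clip(x, 0.0, None)                  # non-negativity constraint
        img = denoise_bilateral(x.reshape(shape))  # bilateral-filter correction
        x = img.ravel()
    return x.reshape(shape)
```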

5.
Objective: To take only the regularization term of the sparse MRI reconstruction formula as the objective function to be minimized, so that the system matrix is not involved in the iterative computation, thereby reducing the computational load and speeding up the reconstruction of sparse MRI data. Methods: The regularization function used here combines the total variation of the image with the L1 norm of its wavelet coefficients, and the minimization is solved with a subgradient optimization algorithm. In each iteration, the subgradient of the regularization term is computed and a subgradient step yields an intermediate image, which is Fourier transformed; following the projection-onto-convex-sets principle, the k-space data randomly undersampled along the phase-encoding direction are substituted directly into the corresponding positions of the intermediate image's spectrum; the inverse Fourier transform of the substituted spectrum then serves as the initial image for the next iteration. Results: With the same regularization function and number of iterations, the method reconstructs images of quality comparable to the NCG-SMRI method, but more than twice as fast. Conclusion: Experiments show that, without degrading image quality, the method speeds up the reconstruction of sparse MRI data and better meets clinical requirements on MRI reconstruction speed.
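The data-consistency (projection-onto-convex-sets) step described above is easy to make concrete. A sketch, assuming `mask` is a boolean array marking the sampled k-space positions along the phase-encoding direction:

```python
import numpy as np

def data_consistency(img, kspace_samples, mask):
    """POCS-style step: re-impose the acquired k-space samples onto the
    spectrum of the current image estimate at the sampled positions."""
    spectrum = np.fft.fft2(img)
    spectrum[mask] = kspace_samples[mask]   # replace values at sampled positions
    return np.fft.ifft2(spectrum).real
```

In the paper's scheme this replacement follows each subgradient step on the TV plus wavelet-L1 regularizer, so the system matrix never appears in the iteration.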

6.
3D modelling from C-arm X-ray projection images refers to the intraoperative reconstruction of 3D bone models from 2D X-ray projections acquired with a C-arm. Compared with plain 2D slices or projection images, a reconstructed 3D model not only contains richer anatomical information, such as the external shape of the bone, but can also carry useful internal information such as bone density and strength. The technique has broad application prospects in bone biopsy, pedicle screw placement, intramedullary nailing, and repair of hand and foot fractures. This paper reviews the significance, state of the art and open problems of 3D modelling from C-arm X-ray projections, analyses the main research topics involved, and proposes studying the construction of 3D anatomical bone models from ordinary C-arm 2D projections along two lines: first, 3D reconstruction from dense 2D projections acquired at specified angular intervals, using limited-angle cone-beam X-ray projection synthesis; and second, 3D reconstruction from a small number of 2D projections acquired in anteroposterior, lateral and similar poses, using non-rigid registration based on statistical deformable models. A solution is proposed for each line.

7.
Ray casting produces some of the best image quality among volume rendering algorithms, but it is computationally heavy and slow to render. This paper therefore proposes a new acceleration algorithm for ray-casting volume rendering. It exploits the matrix-transform relationship of resampling points between the two coordinate systems to cut down matrix computation and speed up resampling, extends the Bresenham algorithm to three dimensions, and uses bounding-box techniques to avoid sampling empty voxels, thereby making ray casting more efficient. Experimental results show that the improved algorithm preserves rendering quality while markedly reducing computation and increasing rendering speed.
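A sketch of the per-ray inner loop showing the accelerations this line of work relies on: entry/exit parameters `t_near`/`t_far`, assumed precomputed from a bounding-box test so empty space outside the box is never sampled, plus early ray termination. A fixed-step march stands in for the paper's 3D Bresenham traversal, and the transfer function is a toy:

```python
import numpy as np

def cast_ray(volume, origin, direction, t_near, t_far, step=0.5):
    """Front-to-back compositing along one ray; assumes the bounding box
    (and hence the [t_near, t_far] range) lies inside the volume."""
    color, alpha = 0.0, 0.0
    t = t_near
    while t < t_far and alpha < 0.99:        # early ray termination
        p = origin + t * direction
        i, j, k = np.floor(p).astype(int)    # nearest-voxel sample (no interpolation)
        s = volume[i, j, k]
        a = min(1.0, s)                      # toy opacity transfer function
        color += (1.0 - alpha) * a * s
        alpha += (1.0 - alpha) * a
        t += step
    return color
```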

8.
The TV (total variation) algorithm is a good image reconstruction algorithm for limited-angle projection data, but when applied to 3D limited-angle reconstruction its high computational cost becomes especially prominent and is the bottleneck for its application. This paper presents a fast implementation of the 3D TV algorithm on graphics processors based on general-purpose GPU techniques. Experimental results show that, compared with the 3D TV algorithm running on a CPU, the proposed implementation achieves comparable reconstruction results while effectively improving reconstruction speed.
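TV maps well to GPUs because its gradient reduces to whole-array stencil operations. A sketch of a smoothed isotropic 3D TV gradient written that way (periodic boundaries via `np.roll` for brevity); replacing `numpy` with `cupy` would run the same code on a GPU. This illustrates the data-parallel structure, not the paper's actual GPU implementation:

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of smoothed isotropic 3D total variation: -div(grad x / |grad x|).
    Forward differences for the gradient, backward differences for the divergence."""
    grads = [np.diff(x, axis=a, append=np.take(x, [-1], axis=a)) for a in range(3)]
    mag = np.sqrt(sum(g ** 2 for g in grads) + eps)   # smoothed gradient magnitude
    div = np.zeros_like(x)
    for a, g in enumerate(grads):
        n = g / mag
        div += n - np.roll(n, 1, axis=a)              # backward difference (periodic)
    return -div
```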

9.
BACKGROUND: 3D model reconstruction from C-arm 2D projections is a technique that, starting from XRII (X-ray image intensifier) images and after calibration, uses numerical methods to reconstruct a 3D model; it can provide the surgeon with rich image information during an operation and facilitate the procedure. OBJECTIVE: To discuss the various aspects of 3D model reconstruction based on C-arm 2D projections. METHODS: The first author searched the PubMed, CNKI and Wanfang databases (1990/2010) for literature on image-guided surgery, correction and reconstruction of C-arm 2D projection images, 3D model reconstruction from 2D images, and medical image registration. RESULTS AND CONCLUSION: 3D model reconstruction from C-arm 2D projection images means intraoperative reconstruction of a 3D bone model from 2D projections acquired with a C-arm. The reconstructed 3D model contains not only richer anatomical information, such as the external shape of the bone, but also useful internal information such as bone density and strength. The technique follows two main lines: limited-angle cone-beam X-ray synthesis, and non-rigid registration based on statistical deformable models. Future research may combine the technique with surgical navigation to build navigation systems. Keywords: digital medicine; C-arm; 2D projection; working principle; key techniques; reconstruction. doi:10.3969/j.issn.1673-8225.2012.13.036

10.
Objective: CT images reconstructed from sparse projections suffer from noise and artifacts; this work uses a neural network model to enhance such low-quality sparse-view CT reconstructions. Methods: Building on the residual encoder-decoder convolutional neural network, a U-Net model trained adversarially is proposed, and it is trained and tested on cancer CT images from the public TCGA-CESC dataset. Processing quality is measured by peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and root mean square error (RMSE). Results: On test reconstructions from 180-view CT, images processed by the model improved the mean PSNR, SSIM and RMSE by 15.10%, 37.89% and 38.20%, respectively, over unprocessed images; in terms of mean PSNR and SSIM, the processed images even surpassed unprocessed reconstructions from 1800 views. Conclusion: The proposed network reduces artifacts and noise and is effective for enhancing sparse-view CT images.
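The three reported metrics are standard; a sketch of RMSE and PSNR (for SSIM, `skimage.metrics.structural_similarity` can be used):

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a test image."""
    return np.sqrt(np.mean((ref - img) ** 2))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB, for images scaled to [0, data_range]."""
    return 20.0 * np.log10(data_range / rmse(ref, img))
```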

11.
In recent years, image reconstruction methods for cone-beam computed tomography (CT) have been extensively studied. However, few of these studies discussed computing parallel-beam projections from cone-beam projections. In this paper, we focus on the exact synthesis of complete or incomplete parallel-beam projections from cone-beam projections. First, an extended central slice theorem is described to establish a relationship between the Radon space and the Fourier space. Then, data sufficiency conditions are proposed for computing parallel-beam projection data from cone-beam data. Using these results, a general filtered backprojection algorithm is formulated that can exactly synthesize parallel-beam projection data from cone-beam projection data. As an example, we prove that parallel-beam projections can be exactly synthesized in an angular range in the case of circular cone-beam scanning. Interestingly, this angular range is larger than that derived in the Feldkamp reconstruction framework. Numerical experiments are performed in the circular scanning case to verify our method.
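For context, the classical central slice theorem that the paper extends states that the 1D Fourier transform of a parallel projection equals a radial slice of the object's Fourier transform (2D form shown; the paper's extended version connects Radon and Fourier space for the cone-beam setting):

```latex
p_\theta(s) = \int_{\mathbb{R}} f(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta)\,\mathrm{d}t,
\qquad
\hat{p}_\theta(\omega) = \hat{f}(\omega\cos\theta,\; \omega\sin\theta).
```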

12.
Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirements. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processing unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometric projection matrix is precomputed based on a simple line-integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far fewer nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.
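A sketch of the factored projectors; the three factors would be stored as sparse matrices (e.g. `scipy.sparse`), with `G` built from a line-integral model and the two blurring matrices estimated from point-source measurements, as the paper describes:

```python
def forward_project(B_sino, G, B_img, x):
    """Factored forward model y = B_sino @ G @ B_img @ x: image-space blur,
    geometric line-integral projection, then sinogram blur. Applying the
    factors one at a time preserves their individual sparsity."""
    return B_sino @ (G @ (B_img @ x))

def back_project(B_sino, G, B_img, y):
    """Matched adjoint, applied factor by factor in reverse order."""
    return B_img.T @ (G.T @ (B_sino.T @ y))
```

Because each factor is far sparser than their product, both memory traffic and the GPU footprint stay small, which is what makes the GPU implementation practical.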

13.
Algorithms for image reconstruction by convolution-filtered backprojection in fan and parallel scanning geometries are described, together with an algorithm for rebinning (regrouping) projections from fan to parallel geometry. The impact of the rebinning parameters (number of detectors, number of view angles) on the error level between images reconstructed in fan geometry and in parallel geometry after rebinning is analysed.
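A sketch of the rebinning step for an equiangular fan geometry, using the standard identities θ = β + γ and s = D·sin γ (uniform β and γ grids and this particular parameterization are assumptions; the paper's geometry may differ):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rebin_fan_to_parallel(fan, betas, gammas, thetas, s_vals, D):
    """Resample fan-beam data p(beta, gamma) onto a parallel grid p(theta, s)
    via theta = beta + gamma, s = D*sin(gamma); D is the source radius."""
    th, s = np.meshgrid(thetas, s_vals, indexing="ij")
    g = np.arcsin(np.clip(s / D, -1.0, 1.0))      # fan angle of each parallel ray
    b = th - g                                    # corresponding source angle
    # convert physical coordinates to fractional grid indices (uniform grids)
    bi = (b - betas[0]) / (betas[1] - betas[0])
    gi = (g - gammas[0]) / (gammas[1] - gammas[0])
    return map_coordinates(fan, [bi, gi], order=1, mode="nearest")  # bilinear
```

The rebinning error the abstract studies comes from exactly this interpolation, so it shrinks as the detector and angular sampling become finer.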

14.
BACKGROUND: CT image quality depends not only on the precision and sophistication of the scanner, but to a large extent also on the reconstruction algorithm. Moving from 2D fan-beam to 3D cone-beam scanning is the direction of CT development, so finding a suitable cone-beam reconstruction algorithm is of undeniable importance. OBJECTIVE: To investigate image synthesis from cone-beam projections along a C-arm super-short scan path, providing algorithmic support for 3D model reconstruction from C-arm 2D projection images. METHODS: From March to May 2012, the first author searched the PubMed, CNKI and Wanfang databases for literature from 1990 to 2011, using the search terms "C-arm, super-short scan path, FDK algorithm, limited-angle cone-beam 3D reconstruction, super-short-scan fan-beam reconstruction algorithm" in Chinese and English. Of the 58 articles initially retrieved, 19 met the inclusion criteria and were retained. RESULTS AND CONCLUSION: 3D model reconstruction from C-arm 2D projections requires a 3D reconstruction step, and the most widely used 3D reconstruction method is still FDK. However, FDK applies to full scan paths and cannot be used directly for super-short paths; FDK-type cone-beam algorithms for short scan trajectories, obtained by extending 2D fan-beam short-scan reconstruction to 3D, can perform region-of-interest reconstruction from the acquired cone-beam projection data. Future research may address reducing the impact of noise and other corrupting data on reconstruction quality.
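Short-scan FDK-type algorithms weight redundant rays before filtering and backprojection. A sketch of the classical Parker weight on which such schemes build, for fan angle γ ∈ [−γ_m, γ_m] and source angle β ∈ [0, π + 2γ_m] (the paper's super-short-scan weighting itself is not reproduced here):

```python
import numpy as np

def parker_weight(beta, gamma, gamma_m):
    """Classical Parker weight for short-scan fan-beam data: smooth sin^2
    ramps at the start and end of the scan, weight 1 in between."""
    if beta < 2.0 * (gamma_m - gamma):
        return np.sin(np.pi / 4.0 * beta / (gamma_m - gamma)) ** 2
    if beta <= np.pi - 2.0 * gamma:
        return 1.0
    return np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_m - beta)
                  / (gamma_m + gamma)) ** 2
```

The weight sums to one over each redundant ray pair, so doubly measured rays are not counted twice during backprojection.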

15.
In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector covers only one half of the scanning field of view, so the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration wreaks havoc on many known fan-beam image reconstruction schemes, including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm, which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD), survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full-scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than a 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm.

16.
Filtered backprojection reconstruction is an efficient image reconstruction method which is widely used in CT and 3D x-ray imaging. The way data have to be filtered depends on the acquisition geometry and the number of projections (views) which were acquired. For standard geometries like circle or helix it is known how to effectively filter the data. But there are acquisition geometries for which the application of standard filters yields poor results, e.g. in situations where the number of views is very small or for a limited angular range. In tomosynthesis, both conditions apply, i.e. the number of projections is typically very small and, moreover, the angular coverage is much less than 180°. This paper proposes a new method to design effective filters which are specific for the acquisition geometry. Examples from x-ray tomosynthesis are utilized to demonstrate the excellent performance of the proposed method.

17.
Dual energy computed tomography (DECT) is currently a subject of extensive investigation. DECT is implemented using either a dual-source scanner, with high and low kVp data acquired from separate sources, or a single-source scanner with high and low kVp data acquired in an alternating manner. Both methods require dedicated hardware to enable data acquisition and image reconstruction for DECT. In this paper, we present a method to enable DECT using a single x-ray source with a slow kVp-switching data acquisition. The enabling reconstruction technique allowing for the reduction in slew rate is the prior image constrained compressed sensing (PICCS) algorithm. When a slow kVp-switching data acquisition is used, the high- and low-kVp projection data are undersampled, and conventional filtered backprojection (FBP) image reconstruction does not yield streaking-artifact-free images for material decomposition in DECT. In this paper, all of the acquired high- and low-kVp projection data were used to generate a prior image with the conventional FBP method. The PICCS algorithm was then used to reconstruct both high- and low-kVp images to enable material decomposition in the image domain. Both numerical simulations and physical phantom experiments were conducted to validate the proposed DECT scheme. The results demonstrate that a slew rate corresponding to 123 views at each of the high and low kVp values used for dual-energy decomposition is sufficient for the PICCS-based DECT method. In contrast, the slew rate must be high enough to obtain over 500 projections at each kVp for artifact-free reconstruction with an FBP-based DECT method.
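For reference, PICCS is usually posed as a prior-penalized total-variation problem, with the prior image x_p here being the FBP reconstruction from all acquired high- and low-kVp views (notation as in the PICCS literature; the weight α balances the two terms):

```latex
\min_{x}\;\; \alpha\,\mathrm{TV}(x - x_p) \;+\; (1-\alpha)\,\mathrm{TV}(x)
\quad \text{subject to} \quad A x = y ,
```

where A is the system matrix for the undersampled views at one kVp and y the corresponding measured projections. Because x − x_p is far sparser under TV than x itself, far fewer views suffice than for FBP.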

18.
Statistical methods for image reconstruction, such as maximum likelihood expectation maximization, are more robust and flexible than analytical inversion methods and allow for accurate modelling of the counting statistics and photon transport during acquisition of projection data. Statistical reconstruction is prohibitively slow when applied to clinical x-ray CT due to the large data sets and the high number of iterations required for reconstructing high-resolution images. Recently, however, powerful methods for accelerating statistical reconstruction have been proposed which, instead of accessing all projections simultaneously when updating an image estimate, access one subset of projections at a time during iterative reconstruction. In this paper we study images generated by the convex algorithm accelerated by the use of ordered subsets (the OS convex algorithm, OSC) for data sets with sizes, noise levels and spatial resolution representative of x-ray CT imaging. Only at extremely high acceleration factors (above 50, corresponding to fewer than 20 projections per subset) do areas with incorrect grey values appear in the reconstructed images and does image noise increase compared with the standard convex algorithm. These image degradations can be adequately corrected by running the final iteration of OSC with a reduced number of subsets. Even with such a relatively slow final iteration, OSC produces almost the same resolution and lesion contrast as the standard convex algorithm, but more than two orders of magnitude faster.
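The ordered-subsets acceleration is independent of the particular statistical update. A sketch of the access pattern using the simpler OS-EM update (the paper's OS convex algorithm applies a transmission-CT update instead; the dense matrix `A` and all names are illustrative):

```python
import numpy as np

def os_em(A, y, n_subsets=10, n_iter=5):
    """Ordered-subsets EM sketch: each sub-iteration updates the image using
    only one subset of projection rows, giving roughly n_subsets-fold speedup."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(i, A.shape[0], n_subsets) for i in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:                 # one image update per subset
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)   # measured / estimated
            sens = np.maximum(As.T @ np.ones(len(rows)), 1e-12)
            x *= (As.T @ ratio) / sens       # multiplicative EM-style update
    return x
```

Running the final full iteration with fewer subsets, as the abstract describes, simply means shrinking `n_subsets` for the last pass to suppress subset-induced noise.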

19.