Similar Literature
1.
OBJECTIVE: To test a two-dimensional image registration method based on body-surface localization and to achieve, pair by pair, accurate three-dimensional fusion of PET, MRI and CT images acquired on separate scanners. METHODS: After the original PET/CT/MRI data were imported and converted to a digital format, a "9-point, 3-plane" stereotactic localization scheme was designed for registration; image fusion was then completed on the real-time workstation Mimics in an interactive automatic-fusion mode using signal-overlay techniques. RESULTS: Cross-over experiments on the head, chest and knee of lung cancer patients produced software-based (separate-scanner) fusion of CT+MRI, PET+MRI and PET+CT volumetric images, yielding clear, complementary images that distinguish the nature and location of soft- and hard-tissue lesions. CONCLUSION: This digital fusion algorithm is clinically meaningful for improving early diagnosis and differential diagnosis. Although software fusion of separately acquired images is not yet as mature as hardware PET+CT fusion, the experiment provides a reference for manufacturers developing integrated CT+MRI or PET+MRI scanners.
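
The "9-point, 3-plane" approach above is essentially landmark-based rigid alignment. Below is a minimal sketch (not the authors' implementation) of how nine paired fiducials could be aligned with a least-squares rigid transform via the Kabsch/SVD solution; the point coordinates, rotation and translation are invented for illustration.

```python
import numpy as np

def rigid_from_landmarks(src, dst):
    """Least-squares rigid transform (R, t) mapping paired points src onto dst (Kabsch/SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # guard against reflection
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

# Nine hypothetical fiducials (mm) picked on the body surface in the PET study,
# and the same fiducials as seen in the MR study (rotated 10 deg about z, then shifted).
rng = np.random.default_rng(0)
pet_pts = rng.uniform(0.0, 200.0, size=(9, 3))
a = np.radians(10.0)
true_R = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
true_t = np.array([5.0, -3.0, 12.0])
mr_pts = pet_pts @ true_R.T + true_t

R, t = rigid_from_landmarks(pet_pts, mr_pts)
print("rotation recovered:", np.allclose(R, true_R), " translation:", np.round(t, 3))
```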

2.
Selection of similarity measures in voxel-intensity-based three-dimensional multimodal medical image registration
OBJECTIVE: To identify, within voxel-intensity-based medical image registration, the similarity measure best suited to clinical multimodal registration. METHODS: Under extreme rigid-body registration conditions, the cross-correlation coefficient, mutual information and the correlation ratio were verified to be suitable similarity measures. It is further explained that registration driven by mutual information tends to become trapped in local optima, whereas the correlation ratio more readily yields a globally optimal registration. Finally, multimodal registration experiments based on the correlation ratio were performed on clinical images using an accelerated multiresolution scheme and Powell's optimization algorithm. RESULTS: In the judgment of clinical experts, multimodal registration based on the correlation ratio fully meets clinical requirements and performs very well for three-dimensional MR/CT and MR/PET registration. CONCLUSION: Compared with other similarity measures, the correlation ratio is a more appropriate and accurate similarity measure for voxel-intensity-based three-dimensional multimodal medical image registration.
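
As a rough illustration of the similarity measures compared here, the sketch below estimates mutual information and the correlation ratio from a joint histogram; the bin count and the synthetic "two-modality" volumes are arbitrary choices, not the authors' data or code.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Shannon mutual information I(A;B) estimated from a joint histogram."""
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    nz = pab > 0
    return np.sum(pab[nz] * np.log(pab[nz] / (pa[:, None] * pb[None, :])[nz]))

def correlation_ratio(a, b, bins=32):
    """Correlation ratio eta^2(B|A): fraction of Var(B) explained by intensity classes of A."""
    a_idx = np.digitize(a.ravel(), np.histogram_bin_edges(a, bins))
    b = b.ravel()
    within = 0.0
    for k in np.unique(a_idx):
        sel = b[a_idx == k]
        within += sel.size * sel.var()
    return 1.0 - within / (b.size * b.var())

# Two synthetic "modalities": b is a nonlinear remapping of a plus noise.
rng = np.random.default_rng(0)
a = rng.normal(size=(64, 64, 16))
b = np.tanh(a) + 0.05 * rng.normal(size=a.shape)
print("MI   =", round(mutual_information(a, b), 3))
print("eta2 =", round(correlation_ratio(a, b), 3))
```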

3.
OBJECTIVE: Conventional registration of chest PET images with CT images is performed by rotating and shifting the images, using median lines and contours on axial slices as reference indexes. In the thoracic and abdominal regions, however, respiratory movement has prevented satisfactory registration reproducibility and accuracy. To address this, we analyzed respiratory movements of the chest and derived an image fusion method. METHODS: Respiratory movements of the lung along each axis (X-axis: left-right, Y-axis: dorsoventral, Z-axis: craniocaudal) during deep breathing were analyzed using 3D CT images. In addition, respiratory movements of the lung and thorax in the Y-axis and Z-axis directions during deep breathing and at rest were analyzed with an MR system, which is non-invasive and allows arbitrary tomographic planes to be acquired. Respiratory movement was then compensated for on PET images of the lung; moving-average deviations in the Y-axis and Z-axis directions, obtained from the respiration analysis (30 samples), were used to derive the compensation values. RESULTS: Analysis of the 3D CT images showed that movement in the X-axis direction was negligible. Registration of PET with CT images was found useful when performed on sagittal planes. Analysis of sagittal MR images revealed that the region extending from the apex of the lung to the posterior lung wall provided useful reference indexes for registration. After compensation for the respiratory displacement in the pulmonary hilar region, the PET image was fused onto the CT image; in the hilar region, accuracy improved by 3.6 mm in the dorsoventral and 6.1 mm in the craniocaudal direction compared with fusion based on the reference indexes alone. CONCLUSION: The developed image fusion technique with respiratory compensation was more effective around the hilum of the lung than the conventional technique.
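
A hedged sketch of the compensation step described above: shift the PET volume by the mean respiratory displacement along the dorsoventral (Y) and craniocaudal (Z) axes before fusing with CT. The voxel spacing and the volume are placeholders; only the two displacement values are taken from the abstract.

```python
import numpy as np
from scipy.ndimage import shift

# Hypothetical mean respiratory displacement of the hilum (mm) and PET voxel size (mm).
mean_disp_mm = {"y": 3.6, "z": 6.1}            # dorsoventral, craniocaudal
voxel_mm = {"x": 4.0, "y": 4.0, "z": 4.25}

pet = np.random.rand(60, 128, 128)             # axes ordered (z, y, x)

# Convert the physical compensation into voxel units and resample the PET volume.
shift_vox = (mean_disp_mm["z"] / voxel_mm["z"],
             mean_disp_mm["y"] / voxel_mm["y"],
             0.0)                              # left-right movement judged negligible
pet_compensated = shift(pet, shift_vox, order=1, mode="nearest")
print(pet_compensated.shape)
```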

4.
OBJECTIVE: To explore the value of MRI-CT image fusion for precise radiotherapy localization of laryngeal carcinoma. METHODS: Image fusion was performed in 10 patients with laryngeal carcinoma confirmed by endoscopy and surgical pathology. Thin-slice CT localization scans were acquired first, followed by thin-slice MRI localization scans of the neck; the MRI data were then transferred to an image-fusion workstation for registration and fusion. RESULTS: All lesions were clearly displayed on the fused images with satisfactory registration. Thin-slice MRI largely compensates for the low soft-tissue resolution and limited depiction of lesion extent of CT localization scans, providing reliable information for more precise radiotherapy. CONCLUSION: Thin-slice MRI compensates for the poor depiction of laryngeal soft-tissue lesions on CT localization images and improves the accuracy of radiotherapy localization for laryngeal carcinoma.

5.
Three-dimensional (3D) CT and 3D magnetic resonance (MR) imaging were performed in four patients with congenital dysplasia of the hip. Two patients were studied by 3D CT and two by 3D MR. Prior to volume segmentation, two-dimensional (2D) MR image preprocessing was used to correct for nonuniform signal intensity distribution from local variations in field strength and coil response. An unsharp mask of the original MR scan was computed by extreme blurring of the image to suppress the details of the object. The unsharp mask was divided into the image on a pixel-by-pixel basis. For improved object contrast, first and second echo images were combined in a 1:2 ratio. To add an additional feature for volume segmentation, 2D MR image homogeneity was computed based on 3 × 3 pixel neighborhoods. Volume segmentation was performed using one feature for CT, i.e., attenuation range, and two features for MR, i.e., signal intensity and image homogeneity range. Three-dimensional CT and 3D MR demonstrated the 3D relationships of femoral heads and acetabula. Three-dimensional CT was limited to patients who had ossified femoral heads, whereas 3D MR demonstrated the cartilaginous femoral head. The extent of acetabular coverage on which the mode of therapy is based was shown. Three-dimensional MR permitted imaging without gonadal irradiation. The 2D MR image preprocessing described here should provide even better results in objects with greater contrast, i.e., nonosseous structures, and those of larger size with relation to image degradation from partial volume effect.
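
The intensity-nonuniformity correction and the 3 × 3 homogeneity feature can be sketched roughly as follows, using a wide Gaussian blur as the "extreme blurring" and a local standard deviation as the homogeneity measure; the kernel width, thresholds and test data are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def unsharp_mask_correct(img, sigma=25.0, eps=1e-6):
    """Divide the image by a heavily blurred copy of itself to flatten slow shading."""
    background = gaussian_filter(img, sigma)
    return img / (background + eps)

def local_homogeneity(img, size=3):
    """Local standard deviation over size x size neighbourhoods (low = homogeneous)."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0, None))

# Hypothetical first and second echo slices combined in a 1:2 ratio, then corrected.
echo1 = np.random.rand(256, 256)
echo2 = np.random.rand(256, 256)
combined = (1.0 * echo1 + 2.0 * echo2) / 3.0
corrected = unsharp_mask_correct(combined)
homogeneity = local_homogeneity(corrected)

# Two-feature segmentation: keep pixels inside an intensity range and a homogeneity range.
mask = (corrected > 0.8) & (corrected < 1.3) & (homogeneity < 0.15)
print("segmented fraction:", round(float(mask.mean()), 3))
```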

6.

Objective

To investigate a two-dimensional (2D) registration approach based on feature localization, achieving three-dimensional (3D) fusion of PET, CT and MR images pair by pair.

Method

A cube-oriented "9-point & 3-plane" co-registration scheme was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (with the radiotracer 18F-FDG, etc.), internal anatomical feature points obtained through 3D reconstruction and virtual dissection were combined with preselected external feature points for matching. Following feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Finally, image fusion was implemented on the real-time workstation Mimics using the automatic fusion techniques known as "information exchange" and "signal overlay".

Result

2D and 3D cross-modality fusion of [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] was tested on data from tumor patients. Complementary 2D/3D images simultaneously presenting metabolic activity and anatomic structure were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference among them.

Conclusion

Given that no fully integrated triple-modality [PET + CT + MR] hybrid scanner is currently available anywhere, this kind of multimodality fusion is undoubtedly an essential complement to existing single-modality imaging.
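
The "picking points to form planes" step in the Method above amounts to defining reference planes from triplets of landmarks; a small geometric sketch with made-up coordinates is given below.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return the unit normal n and offset d of the plane n . x = d through three points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, float(n @ p1)

# Three hypothetical external markers (mm) defining one of the three reference planes.
n, d = plane_from_points([10.0, 0.0, 0.0], [0.0, 15.0, 0.0], [0.0, 0.0, 20.0])
print("normal:", np.round(n, 3), "offset:", round(d, 2))
```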

7.
Positron emission tomography (PET) imaging is rapidly expanding its role in clinical practice for cancer management. The high sensitivity of PET for functional abnormalities associated with cancer can be confounded by the minimal anatomical information it provides for cancer localization. Computed tomography (CT) provides detailed anatomical information but is less sensitive to pathologies than PET. Thus, combining (i.e., registering) PET and CT images would enable both accurate and sensitive cancer localization with respect to detailed patient anatomy. An additional application area of registration is to align CT–CT scans from serial studies on a patient on a PET/CT scanner to facilitate accurate assessment of therapeutic response from the co-aligned PET images. To facilitate image fusion, we are developing a deformable registration software system using mutual information and a B-spline model of the deformation. When applying deformable registration to whole body images, one of the obstacles is that the arms are present in PET images but not in CT images or are in different positions in serial CT images. This feature mismatch requires a preprocessing step to remove the arms where present and thus adds a manual step in an otherwise automatic algorithm. In this paper, we present a simple yet effective method for automatic arm removal. We demonstrate the efficiency and robustness of this algorithm on both clinical PET and CT images. By streamlining the entire registration process, we expect that the fusion technology will soon find its way into clinics, greatly benefiting cancer diagnosis, staging, therapy planning and treatment monitoring.
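
The paper's arm-removal algorithm is not reproduced here; the sketch below only illustrates one common way such a preprocessing step can be approached, by thresholding an axial CT slice and keeping the largest connected component so that arm regions detached from the torso are discarded. The threshold and the synthetic slice are assumptions.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(ct_slice, air_threshold=-400):
    """Binary body mask of an axial CT slice with detached regions (e.g. arms) removed."""
    body = ct_slice > air_threshold                  # everything denser than air
    labels, n = ndimage.label(body)
    if n == 0:
        return body
    sizes = ndimage.sum(body, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    return labels == largest

# Synthetic slice in HU: a torso disc plus two small detached "arm" discs.
yy, xx = np.mgrid[:256, :256]
slice_hu = np.full((256, 256), -1000.0)
slice_hu[(yy - 128) ** 2 + (xx - 128) ** 2 < 70 ** 2] = 40.0    # torso
slice_hu[(yy - 128) ** 2 + (xx - 20) ** 2 < 15 ** 2] = 40.0     # left arm
slice_hu[(yy - 128) ** 2 + (xx - 236) ** 2 < 15 ** 2] = 40.0    # right arm
mask = keep_largest_component(slice_hu)
print("kept fraction of body pixels:", round(mask.sum() / (slice_hu > -400).sum(), 2))
```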

8.
PURPOSE: A system for digital integration of an open MR scanner (0.23 T, Figure 1) in therapy simulation and 3D radiation treatment planning is described. METHOD: MR images were acquired using the body coil and various positioning and immobilization aids. A gradient echo sequence (TR/TE 320 ms/24 ms) was used to create axial and coronal data sets. Image distortions were measured and corrected using phantom measurements (Figure 2) and specially developed software. RESULTS: Maximal and mean distortions of the MR images could be reduced from 19 mm to 8.2 mm and from 2.7 mm to 0.7 mm, respectively (Figures 3 to 5, Table 1). Coronal MR images were recalculated in fan beam projection for use at the therapy simulator. Tumor and organ contours were transferred from the MR image to the digitally acquired and corrected simulator image using a landmark matching algorithm (Figures 6 and 7). For 3D treatment planning, image fusion of axial MR images with standard CT planning images was performed using a landmark matching algorithm as well (Figure 8). Representative cases are shown to demonstrate potential applications of the system. CONCLUSION: The described system enables the integration of the imaging information from an open MR system in therapy simulation and 3D treatment planning. The low-field MR scanner is an attractive adjunct for the radio-oncologist because of the open design and the low costs.
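
As a hedged illustration of the distortion-correction idea (not the authors' software), the sketch below resamples an MR slice through a per-pixel displacement field of the kind a phantom measurement could provide; the field here is synthetic.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_distortion(img, dy, dx):
    """Resample img at (y + dy, x + dx): dy/dx give, per pixel, where the true
    signal was displaced to, as estimated from phantom grid measurements."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode="nearest")

# Synthetic MR slice and a smooth distortion field that grows toward the image edge (pixels).
img = np.random.rand(256, 256)
yy, xx = np.mgrid[0:256, 0:256].astype(float)
r = np.hypot(yy - 128, xx - 128)
dy = 0.02 * (yy - 128) * (r / 128)
dx = 0.02 * (xx - 128) * (r / 128)
corrected = correct_distortion(img, dy, dx)
print(corrected.shape)
```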

9.
PURPOSE: A pilot study was conducted to detect volume changes of cerebral structures in growth hormone (GH)-deficient adults treated with GH, using serial 3D MR image processing, and to assess the need for segmentation prior to registration. METHOD: Volume MR scans of the brain were obtained in five patients and six control subjects. Patients were scanned before and after 3 and 6 months of therapy. Control subjects were scanned at the same intervals. A phantom was used to quantify scaling errors. Second and third volumes were aligned with the baseline by maximizing normalized mutual information and transformed using sinc interpolation. Registration was performed with and without brain segmentation and correction of scaling errors. Each registered, transformed image had the original subtracted, generating a difference image. Structural change and the effects of segmentation and scaling error correction were assessed on original and difference images. The radiologists' ability to detect volume change was also assessed. RESULTS: Compared with control subjects, GH-treated subjects had an increase in cerebral volume and reduction in ventricular volume (p = 0.91 × 10^-3). Scale correction and segmentation made no difference (p = 1 and p = 0.873). Structural changes were identified in the difference images but not in the original (p = 0.136). The radiologists detected changes >200 µm. CONCLUSION: GH treatment in deficient patients results in cerebral volume changes detectable by registration and subtraction of serial MR studies but not by standard assessment of images. This registration method did not require prior segmentation.
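
A rough sketch of the two core operations referred to above: the normalized mutual information maximized during alignment and the subtraction (difference) image inspected afterwards. The histogram bin count and the toy volumes are assumptions.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI(A,B) = (H(A) + H(B)) / H(A,B), estimated from a joint histogram."""
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (h(pa) + h(pb)) / h(pab)

# Baseline and follow-up volumes (synthetic); the follow-up is assumed already
# registered to the baseline, so a voxelwise subtraction reveals structural change.
rng = np.random.default_rng(1)
baseline = rng.normal(size=(32, 64, 64))
followup = baseline + 0.02 * rng.normal(size=baseline.shape)    # subtle change
print("NMI:", round(normalized_mutual_information(baseline, followup), 3))
difference = followup - baseline
print("max |difference|:", round(float(np.abs(difference).max()), 3))
```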

10.
RATIONALE AND OBJECTIVES: The therapeutic response to radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC) is often evaluated by comparing pre- and post-RFA computed tomography (CT). However, it is sometimes difficult to judge whether an ablative margin, i.e., 5-10 mm of normal hepatic tissue, has been ensured. The aim of this study was to assess the feasibility of fusing pre- and post-RFA CT images. MATERIALS AND METHODS: HCCs (n = 20) measuring 13 ± 5 mm (range, 4-23 mm) were included. For pre-RFA CT, the arterial phase of intravenous dynamic CT (n = 17), CT arterioportography (n = 2), or CT hepatic arteriography (n = 1) was used. Fusion images were created in combination with post-RFA CT (equilibrium phase of intravenous CT) using automatic image registration software (n = 20) and a manual segmentation technique (n = 4). RESULTS: Automatic image registration and the manual segmentation technique took approximately 2-3 and 5 minutes, respectively; the total time required to create the fusion images was less than 10 minutes in all cases. Fusion images made it easier to understand the relationship between the tumor and the ablation zone, helping judge whether an ablative margin had been ensured. CONCLUSION: Fusion of pre- and post-RFA CT images is a feasible tool for evaluating RFA therapy for HCC.
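
One way the ablative-margin question can be posed numerically is sketched below: on co-registered volumes, measure the shortest distance from any tumor voxel to the boundary of the ablation zone and compare it with the desired 5-10 mm. The masks and voxel spacing are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def minimum_ablative_margin(tumor_mask, ablation_mask, spacing_mm):
    """Smallest distance (mm) from tumor voxels to the edge of the ablation zone;
    returns a negative value if part of the tumor lies outside the ablation zone."""
    if np.any(tumor_mask & ~ablation_mask):
        return -1.0
    # Distance, inside the ablation zone, to its boundary (i.e. to the nearest outside voxel).
    dist_to_edge = distance_transform_edt(ablation_mask, sampling=spacing_mm)
    return float(dist_to_edge[tumor_mask].min())

# Synthetic spheres: tumor of radius 8 mm, ablation zone of radius 15 mm, 1 mm voxels.
zz, yy, xx = np.mgrid[:64, :64, :64]
r = np.sqrt((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2)
tumor, ablation = r <= 8, r <= 15
margin = minimum_ablative_margin(tumor, ablation, spacing_mm=(1.0, 1.0, 1.0))
print("minimum margin:", round(margin, 1), "mm  (>= 5 mm desired)")
```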

11.
BACKGROUND AND PURPOSE: Precise registration of CT and MR images is crucial in many clinical cases for proper diagnosis, decision making or navigation in surgical interventions. Various algorithms can be used to register CT and MR datasets, but prior to clinical use the result must be validated. Evaluating the registration result by visual inspection is tiring and time-consuming. We propose a new automatic registration assessment method, which provides the user with a color-coded fused representation of the CT and MR images and indicates the location and extent of poor registration accuracy. METHODS: The method for local assessment of CT-MR registration is based on segmentation of bone structures in the CT and MR images, followed by a voxel correspondence analysis. The result is represented as a color-coded overlay. The algorithm was tested on simulated and real datasets with different levels of noise and intensity non-uniformity. RESULTS: Based on tests on simulated MR imaging data, it was found that the algorithm was robust for noise levels up to 7% and intensity non-uniformities up to 20% of the full intensity scale. Due to the inability to distinguish clearly between bone and cerebrospinal fluid in the MR image (T1-weighted), the algorithm was found to be optimistic in the sense that a number of voxels are classified as well-registered although they should not be. However, nearly all voxels classified as misregistered are correctly classified. CONCLUSION: The proposed algorithm offers a new way to automatically assess CT-MR image registration accuracy locally in all areas of the volume that contain bone and to represent the result with a user-friendly, intuitive color-coded overlay on the fused dataset.
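
A toy version of the color-coded representation described above: after bone has been segmented in the registered CT and MR volumes, agreement and the two kinds of disagreement are painted into an RGB overlay. The masks and the agreement statistic are illustrative, not the authors' segmentation or classification rule.

```python
import numpy as np

def bone_agreement_overlay(ct_bone, mr_bone):
    """RGB image: green where both masks agree on bone, red where only CT has bone,
    blue where only the MR-derived mask has bone."""
    h, w = ct_bone.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    rgb[ct_bone & mr_bone, 1] = 1.0       # agreement -> green
    rgb[ct_bone & ~mr_bone, 0] = 1.0      # CT-only   -> red (possible misregistration)
    rgb[~ct_bone & mr_bone, 2] = 1.0      # MR-only   -> blue
    return rgb

# Synthetic bone masks: identical rings, with the "MR" ring shifted by a few pixels.
yy, xx = np.mgrid[:128, :128]
ring = lambda cy, cx: (np.hypot(yy - cy, xx - cx) > 30) & (np.hypot(yy - cy, xx - cx) < 38)
ct_bone, mr_bone = ring(64, 64), ring(64, 68)
overlay = bone_agreement_overlay(ct_bone, mr_bone)
agreement = (ct_bone & mr_bone).sum() / (ct_bone | mr_bone).sum()
print("agreement (Jaccard):", round(float(agreement), 2))
```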

12.
Magnetic resonance (MR) imaging is useful for the diagnosis of brain atrophy and intracranial abnormalities. We have developed a method of automated volumetry to evaluate the degree of brain atrophy for the diagnosis of dementia. Whole-brain MR images with thin, gap-free slices are required for segmentation and volumetry. However, obtaining such images requires that the patient remain at rest for a prolonged period, thereby reducing the throughput of MR imaging examinations. Therefore, a method is needed for reconstructing isotropic three-dimensional (3D) data from routine axial, sagittal, and coronal MR images with 30% gaps and for measuring brain volume. The method of reconstructing 3D data consists of four processes: 1) segmentation of the brain region on axial, sagittal, and coronal MR images using the region-growing technique; 2) setting the data into a 3D domain; 3) registration by manual operation; and 4) linear interpolation between the data. In clinical MR images, the differences between this method and the conventional technique were less than 10%. These results demonstrate that this technique is able to construct 3D data from axial, sagittal, and coronal MR images.
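
A minimal sketch of the interpolation step (process 4) for a single stack: linearly resample an axial series acquired with inter-slice gaps onto an isotropic grid. Combining the three registered orthogonal stacks is omitted, and the spacings are assumed values.

```python
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(stack, slice_spacing_mm, inplane_mm, target_mm=1.0):
    """Linearly interpolate an axial stack acquired with inter-slice gaps
    (slice_spacing_mm = thickness + gap) onto an isotropic grid."""
    factors = (slice_spacing_mm / target_mm,
               inplane_mm / target_mm,
               inplane_mm / target_mm)
    return zoom(stack, factors, order=1)   # order=1 -> linear interpolation

# Hypothetical routine axial series: 5 mm slices with a 30% gap (6.5 mm spacing), 1 mm pixels.
axial = np.random.rand(24, 256, 256)
iso = to_isotropic(axial, slice_spacing_mm=6.5, inplane_mm=1.0)
print("isotropic shape:", iso.shape)       # ~ (156, 256, 256)
```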

13.
Segmentation and registration tools are commonly used in radiotherapy for localisation of the target and organs at risk. In this work, the performance of three different segmentation tools and of a surface-matching registration technique, applied to computed tomography (CT) and magnetic resonance (MR) images for conformal treatment planning of prostate carcinoma, is studied. The accuracy of the segmentation and registration tools was evaluated in a phantom experiment and on patient data, respectively. A preliminary estimate of MR image distortion was also performed.

14.
We evaluated 4 volume-based automatic image registration algorithms from 2 commercially available treatment planning systems (Philips Syntegra and BrainScan). The algorithms, based on cross correlation (CC), local correlation (LC), normalized mutual information (NMI), and BrainScan mutual information (BSMI), were evaluated with: (1) synthetic computed tomography (CT) images, (2) CT and magnetic resonance (MR) phantom images, and (3) CT and MR head image pairs from 12 patients with brain tumors. For the synthetic images, the registration results were compared with known transformation parameters, and all algorithms achieved submillimeter accuracy in translation and subdegree accuracy in rotation. For the phantom images, the registration results were compared with those provided by frame- and marker-based manual registration. For the patient images, the results were compared with anatomical landmark-based manual registration to determine qualitatively how close the results were to a clinically acceptable registration. NMI and LC outperformed CC and BSMI in the sense of being closer to a clinically acceptable result; in terms of robustness, NMI and BSMI outperformed CC and LC. A guideline for image registration at our institution is given, and final visual assessment remains necessary to guarantee reasonable results.
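
For the synthetic-image test described (comparing recovered parameters with a known transformation), the registration error can be summarized as below; the 4 × 4 matrices are arbitrary examples rather than output of Syntegra or BrainScan.

```python
import numpy as np

def rigid_errors(T_est, T_true):
    """Translation error (same units as the matrices) and rotation error (degrees)
    between two 4x4 rigid transforms."""
    dT = np.linalg.inv(T_true) @ T_est
    trans_err = np.linalg.norm(dT[:3, 3])
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return trans_err, np.degrees(np.arccos(cos_angle))

def rigid(tx, ty, tz, rz_deg):
    """4x4 transform: rotation about z by rz_deg, then translation (tx, ty, tz)."""
    c, s = np.cos(np.radians(rz_deg)), np.sin(np.radians(rz_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

T_true = rigid(10.0, -4.0, 2.0, 5.0)
T_est = rigid(10.4, -3.8, 2.1, 5.3)            # hypothetical algorithm output
dt, dr = rigid_errors(T_est, T_true)
print(f"translation error {dt:.2f} mm, rotation error {dr:.2f} deg")
```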

16.
Registration of craniomaxillofacial CT and MR images
OBJECTIVE: To register craniomaxillofacial CT and MR medical images. MATERIALS AND METHODS: A contour-feature-based singular value decomposition-iterative closest point (SVD-ICP) method was used. RESULTS: The registration is simple to perform, yields satisfactory images and is reliable; it can also be applied to matching point sets of arbitrary dimension. CONCLUSION: Registration of craniomaxillofacial CT and MR images is feasible in clinical practice and lays the foundation for subsequent image fusion.
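
A compact sketch of the SVD-ICP idea named in this abstract: alternate nearest-neighbour correspondence on the extracted contours with an SVD-based rigid update. The contour points are synthetic, and the original contour-extraction step is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Iterative closest point: rigidly align contour points src to dst via SVD updates."""
    dim = src.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                        # nearest-neighbour correspondences
        matched = dst[idx]
        c_s, c_m = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - c_s).T @ (matched - c_m))
        D = np.diag([1.0] * (dim - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = c_m - R_step @ c_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step          # accumulate the total transform
    return R, t

# Synthetic skull-like contour (an ellipse of points), rotated and shifted as the "MR" contour.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ct_contour = np.c_[80 * np.cos(theta), 60 * np.sin(theta)]
ang = np.radians(8.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
mr_contour = ct_contour @ R_true.T + [4.0, -7.0]
R, t = icp(ct_contour, mr_contour)
print("recovered rotation (deg):", round(float(np.degrees(np.arctan2(R[1, 0], R[0, 0]))), 1))
print("recovered translation:", np.round(t, 2))
```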

17.
Image fusion using a picture archiving and communication system
OBJECTIVE: To explore a methodology for fusing images of different modalities using a picture archiving and communication system (PACS) and image-fusion software. METHODS: Original image acquisition and processing were performed on a Siemens spiral CT scanner, an MR scanner and an E.CAM dual-head coincidence SPECT system. Images were queried, transmitted and retrieved via PACS between the CT, MR and nuclear medicine departments, and fusion was performed with the Medical Image Merge (MIM) software installed on the SPECT computer. The original volume data were resliced to create new images, and fusion was achieved by translating and rotating the relevant viewing planes and adjusting image contrast and transparency. RESULTS: CT, MR and nuclear medicine images were successfully transmitted, retrieved and loaded via PACS; fusion of MR with nuclear medicine brain images and of CT with nuclear medicine chest images yielded satisfactory results. CONCLUSION: Image fusion using PACS and the MIM software produces good images.
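
The "adjust contrast and transparency" step maps naturally onto a windowed alpha blend; a rough sketch follows (window settings, the crude hot colormap and the alpha value are arbitrary, and no PACS or MIM interface is involved).

```python
import numpy as np

def window(img, level, width):
    """Clip an image to a display window and scale it to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def fuse(anatomy, function, alpha=0.4):
    """Alpha-blend a grayscale anatomical slice with a hot-colored functional slice."""
    base = np.stack([anatomy] * 3, axis=-1)                            # gray -> RGB
    hot = np.stack([function, function ** 2, function ** 4], axis=-1)  # crude hot colormap
    return (1.0 - alpha) * base + alpha * hot

# Hypothetical resliced CT slice (HU) and a normalized SPECT/PET uptake slice.
ct = np.random.normal(0.0, 300.0, size=(128, 128))
uptake = np.clip(np.random.rand(128, 128), 0, 1)
rgb = fuse(window(ct, level=40, width=400), uptake, alpha=0.4)
print(rgb.shape, round(float(rgb.max()), 2))
```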

18.
Clinical application of CT-MR image fusion
OBJECTIVE: To provide new diagnostic information for the clinic using medical image fusion. METHODS: Thirty patients with intracranial lesions (18 male, 12 female) were studied; 20 underwent both CT and MR examinations within 1-2 weeks, and 10 had MR follow-up after a CT diagnosis. Legendre moments were used to locate the centroid and principal axes of the images, after which translation, scaling and rotation were applied to fuse the CT and MR images. RESULTS: Of the 30 CT-MR fusions, 28 yielded mutually complementary image information; in 19 cases the trend of the lesion could be judged more clearly than by viewing CT or MR alone, 4 of which were confirmed at surgery; in 2 cases fusion offered no obvious advantage. CONCLUSION: Fusing multimodal images from different sources provides useful information for clinicians in confirming diagnoses and planning surgery or radiotherapy. As a fusion algorithm, computation with Legendre moments is a relatively direct, fast and simple approach.
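
The moment-based alignment described above (the paper uses Legendre moments) can be approximated with ordinary geometric moments: match the intensity centroids and principal axes of the two images. The sketch below is this simplified variant on synthetic 2D blobs; in practice the sign ambiguity of the principal axes and any scaling difference must also be handled.

```python
import numpy as np

def centroid_and_axes(img):
    """Intensity centroid and principal axes (eigenvectors of the second-order
    central moment / covariance matrix) of a 2D image."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    w = img / img.sum()
    cy, cx = (w * yy).sum(), (w * xx).sum()
    dy, dx = yy - cy, xx - cx
    cov = np.array([[(w * dy * dy).sum(), (w * dy * dx).sum()],
                    [(w * dx * dy).sum(), (w * dx * dx).sum()]])
    evals, evecs = np.linalg.eigh(cov)
    return np.array([cy, cx]), evecs[:, ::-1]        # axes ordered major, minor

def moment_alignment(moving, fixed):
    """Translation + rotation mapping the moving image's centroid/axes onto the fixed image's."""
    c_m, A_m = centroid_and_axes(moving)
    c_f, A_f = centroid_and_axes(fixed)
    R = A_f @ A_m.T                                  # rotate moving axes onto fixed axes
    t = c_f - R @ c_m
    return R, t

# Synthetic "CT" and "MR": the same elliptical blob, the second one shifted.
yy, xx = np.mgrid[0:128, 0:128].astype(float)
blob = lambda cy, cx: np.exp(-(((yy - cy) / 20.0) ** 2 + ((xx - cx) / 10.0) ** 2))
R, t = moment_alignment(blob(60, 60), blob(70, 66))
print("estimated shift:", np.round(t, 1))            # close to [10, 6]
```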

19.
OBJECTIVE: To test a two-dimensional image registration method based on body-surface localization and achieve accurate fusion of PET and MRI images acquired on separate scanners. METHODS: After the original PET/MRI data were imported and converted to a digital format, a "3-plane, 9-point" stereotactic localization scheme was designed for registration; fusion was completed on the real-time workstation Mimics in an interactive automatic-fusion mode using signal-overlay techniques. RESULTS: Cross-over experiments on the chest and hip of lung cancer patients produced software-based fusion of two-dimensional PET and MRI images, generating complementary images that simultaneously present the anatomy and the metabolic status of the chest and hip. CONCLUSION: Given that integrated scanners are expensive and not widely available, this kind of software fusion is undoubtedly a necessary complement to existing hardware (same-scanner) imaging.

20.

Purpose

This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model.

Methods

A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed.

Results

The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10% in most of the organs considered.

Conclusion

The proposed automated quantification technique is reliable, robust and suitable for fast quantification of preclinical PET data in large serial studies.
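
The two accuracy metrics used in this evaluation can be computed directly from binary label volumes; a minimal sketch on synthetic masks (not Digimouse data) follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice overlap of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between two masks (all voxels), in physical units."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Synthetic "atlas" and "registered" organ masks: two overlapping spheres.
zz, yy, xx = np.mgrid[:48, :48, :48]
atlas = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 <= 10 ** 2
registered = (zz - 26) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 <= 10 ** 2
print("Dice      :", round(dice(atlas, registered), 3))
print("Hausdorff :", round(hausdorff(atlas, registered, spacing=(0.2, 0.2, 0.2)), 2), "mm")
```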
