Similar Articles
1.
Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative. This study aims to quantify the accuracy of MRI-based 3D models of long bones relative to CT-based models. The femora of five intact cadaver ovine limbs were scanned with a 1.5 T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the soft-tissue-free bone surfaces with a mechanical contact scanner, and the MRI- and CT-derived models were validated against them. The CT-based models contained an average error of 0.15 mm, while the MRI-based models contained an average error of 0.23 mm; statistical validation showed no significant differences between the two. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models and that MRI is therefore a potential alternative to CT for generating 3D models with high geometric accuracy.
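The model-versus-reference validation step above amounts to measuring, for every vertex of the reconstructed surface, its distance to the reference geometry. A minimal sketch of such a mean-deviation measure (the function name and toy point clouds are illustrative, not taken from the study):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_deviation(model_pts, reference_pts):
    """Mean nearest-neighbour distance from model vertices to reference points."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(model_pts)
    return dists.mean()

# toy example: a jittered copy of a reference point cloud
rng = np.random.default_rng(0)
ref = rng.uniform(0, 50, size=(2000, 3))
model = ref + rng.normal(0, 0.1, size=ref.shape)
dev = mean_surface_deviation(model, ref)
print(round(dev, 3))
```

With densely sampled surfaces this nearest-point distance approximates the point-to-surface deviation reported in such validation studies.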

2.
Fast segmentation of bone in CT images using 3D adaptive thresholding
Fast bone segmentation is often important in computer-aided medical systems. Thresholding-based techniques have been widely used to identify the object of interest (bone) against dark backgrounds. However, the darker areas that are often present in bone tissue may adversely affect the results obtained using existing thresholding-based segmentation methods. We propose an automatic, fast, robust and accurate method for the segmentation of bone using 3D adaptive thresholding. An initial segmentation is first performed to partition the image into bone and non-bone classes, followed by an iterative process of 3D correlation to update voxel classification. This iterative process significantly improves the thresholding performance. A post-processing step of 3D region growing is used to extract the required bone region. The proposed algorithm can achieve sub-voxel accuracy very rapidly. In our experiments, the segmentation of a CT image set required on average less than 10 s per slice. This execution time can be further reduced by optimizing the iterative convergence process.
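An iterative 3D adaptive-thresholding pass of the kind described could be sketched as follows; the neighbourhood size, iteration count, and fallback rules are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold_3d(vol, radius=3, n_iter=5):
    """Start from a global threshold, then iteratively re-threshold each voxel
    against the midpoint of locally estimated bone and background means."""
    t = vol.mean()                      # crude initial global threshold
    seg = vol > t
    size = 2 * radius + 1
    hi, lo = vol.max(), vol.min()
    for _ in range(n_iter):
        # smoothed per-class sums and counts give local class means
        s1 = ndimage.uniform_filter(vol * seg, size)
        c1 = ndimage.uniform_filter(seg.astype(float), size)
        s0 = ndimage.uniform_filter(vol * ~seg, size)
        c0 = ndimage.uniform_filter((~seg).astype(float), size)
        bone_mean = np.where(c1 > 0, s1 / np.maximum(c1, 1e-9), hi)
        bg_mean = np.where(c0 > 0, s0 / np.maximum(c0, 1e-9), lo)
        seg = vol > 0.5 * (bone_mean + bg_mean)
    return seg

# toy volume: bright "bone" cube in a dark, noisy background
rng = np.random.default_rng(0)
vol = rng.normal(0, 2, (20, 20, 20))
vol[5:15, 5:15, 5:15] += 100
seg = adaptive_threshold_3d(vol)
print(seg[10, 10, 10], seg[0, 0, 0])
```

The local midpoint rule lets darker bone regions keep a lower threshold than bright ones, which is the motivation for adaptive over global thresholding.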

3.
High resolution peripheral quantitative computed tomography (HR-pQCT) is a promising method for detailed in vivo 3D characterization of the densitometric, geometric, and microstructural features of human bone. Currently, a hybrid densitometric, direct, and plate model-based calculation is used to quantify trabecular microstructure. In the present study, this legacy methodology is compared to direct methods derived from a new local thresholding scheme independent of densitometric and model assumptions. Human femoral trabecular bone samples were acquired from patients undergoing hip replacement surgery. HR-pQCT (82 μm isotropic voxels) and micro-computed tomography (μCT; 16 μm isotropic voxels) images were acquired. HR-pQCT images were segmented and analyzed in three ways: (1) using the hybrid method provided by the manufacturer based on a fixed global threshold, (2) using direct 3D methods based on the fixed global threshold segmentation, and (3) using direct 3D methods based on a novel local threshold scheme. The results were compared against standard direct 3D indices from μCT analysis. Standard trabecular parameters determined by HR-pQCT correlated strongly with μCT. BV/TV and Tb.Th were significantly underestimated by the hybrid method and significantly overestimated by direct methods based on the global threshold segmentation, while the local method yielded optimal intermediate results. The direct-local method also performed favorably for Tb.N (R² = 0.85 vs. R² = 0.70 for the direct-global method) and Tb.Sp (R² = 0.93 vs. R² = 0.85 for the hybrid method and R² = 0.87 for the direct-global method). These results indicate that direct methods, with the aid of advanced segmentation techniques, may yield equivalent or improved accuracy for quantification of trabecular bone microstructure without relying on densitometric or model assumptions.
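Direct (model-free) morphometric indices such as BV/TV can be read straight off a binary volume; the sketch below also shows a crude distance-transform proxy for trabecular thickness. The sphere-fitting step of the true direct Tb.Th method is omitted, so treat the thickness value only as an approximation:

```python
import numpy as np
from scipy import ndimage

def direct_indices(bone_mask, voxel_mm=0.082):
    """Direct-style indices from a binary bone volume. BV/TV is exact; the
    thickness proxy (2 x mean distance-to-background inside bone) only
    approximates the sphere-fitting direct Tb.Th."""
    bvtv = bone_mask.mean()
    dist = ndimage.distance_transform_edt(bone_mask) * voxel_mm
    tb_th = 2.0 * dist[bone_mask].mean() if bone_mask.any() else 0.0
    return bvtv, tb_th

# toy structure: one 4-voxel-thick plate in a 40^3 volume
vol = np.zeros((40, 40, 40), dtype=bool)
vol[:, :, 18:22] = True
bvtv, tb_th = direct_indices(vol)
print(round(bvtv, 2), round(tb_th, 3))
```

The 82 μm default voxel size mirrors the HR-pQCT resolution quoted in the abstract; any other value can be passed in.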

4.
Tumor volume delineation on positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in fields such as optical character recognition, tissue engineering, and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical ¹⁸F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
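Of the automated algorithms tested, the Ridler (ISODATA) threshold is the simplest to reproduce: iterate the midpoint of the two class means until it stabilises. A sketch on synthetic bimodal data (the data and tolerance are illustrative):

```python
import numpy as np

def ridler_threshold(values, tol=1e-3):
    """Ridler-Calvard (ISODATA) clustering threshold: iterate the midpoint
    of the two class means until it stops moving."""
    t = values.mean()
    while True:
        lo, hi = values[values <= t], values[values > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# bimodal toy data: background uptake ~1, hot sphere uptake ~10
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(1, 0.2, 5000), rng.normal(10, 1.0, 500)])
t = ridler_threshold(vals)
print(round(t, 2))
```

For well-separated modes the iteration converges near the midpoint of the two class means, here about 5.5, without any scanner calibration or spatial prior.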

5.
This work aims to test the accuracy and comparability of 3D models of human skeletal fibulae generated by clinical CT and laser-scanner acquisitions. Mesh topology, segmentation, and smoothing protocols were tested to assess variation among meshes generated with different scanning methods and procedures, and to evaluate mesh interchangeability in 3D geometric morphometric analysis. A sample of 13 left human fibulae was scanned separately with a Revolution Discovery dual-energy CT (0.625 mm resolution) and an ARTEC Space Spider structured-light scanner (0.1 mm resolution). Different segmentation methods, including half-maximum height (HMH) and MIA-clustering protocols, were compared with the high-resolution laser-scanner standard by calculating topological surface deviations. Different smoothing algorithms, such as Laplacian and Taubin smoothing, were also evaluated. A total of 142 semilandmarks were used to capture the shape of the proximal and distal fibular epiphyses. After Generalized Procrustes superimposition, the Procrustes coordinates of the proximal and distal epiphyses were analysed separately to assess variation due to scanning method and operator error. Smoothing algorithms at low iteration counts do not introduce significant variation among reconstructions, but the segmentation protocol may influence final mesh quality (0.09-0.24 mm). Mean deviation between CT-generated meshes segmented with the MIA-clustering protocol and laser-scanner-generated meshes is optimal (0.42 mm, range 0.35-0.56 mm). Principal component analysis reveals that homologous samples scanned with the two methods cluster together for both the proximal and distal epiphyses. Similarly, Procrustes ANOVA reveals no shape differences between scanning methods and replicates, with only 1.38-1.43% of shape variation attributable to the scanning device. Topological similarities support the comparability of CT- and laser-scanner-generated meshes and validate their combined use in shape analyses of potential clinical relevance. As a precaution, we suggest that dedicated trials be performed in each study that merges different data sources prior to analysis.
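Procrustes superimposition of the kind used above removes translation, scale, and rotation before shape comparison. A minimal sketch with SciPy's ordinary (two-configuration) Procrustes, using fabricated landmark sets rather than the study's data:

```python
import numpy as np
from scipy.spatial import procrustes

# toy landmark sets: a "laser" configuration and a rotated, scaled,
# translated copy standing in for the CT-derived one
rng = np.random.default_rng(2)
laser = rng.normal(size=(142, 3))          # 142 semilandmarks, as in the study
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
ct = 1.7 * laser @ R.T + np.array([5.0, -3.0, 2.0])

# disparity is the residual shape difference after superimposition
_, _, disparity = procrustes(laser, ct)
print(disparity < 1e-9)
```

Because the second configuration is an exact similarity transform of the first, the residual disparity is numerically zero; real CT/laser pairs would leave a small nonzero residual, which is what the Procrustes ANOVA partitions.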

6.
Objective: To construct a finite element model of the Wallis lumbar non-fusion system and provide a biomechanical basis for clinical application. Methods: Eight volunteers underwent continuous spiral CT scanning. The resulting DICOM-format slice images were imported into Materialise Mimics 10.01, where bone-tissue thresholds were defined, contour lines extracted for each slice, image edges segmented, and 3D models of the L4-L5 vertebrae and intervertebral disc reconstructed. The reconstructed models were saved in .stl format and imported into Materialise 3-Matic 4.3 for triangular-mesh optimization. A model of the Wallis system was built in AutoCAD 2009, saved in .stl format, imported into 3-Matic 4.3 for mesh optimization, and fitted to the lumbar model following the standard surgical technique. The assembly was then imported into Ansys 10.0 for material-property assignment and meshing to generate the finite element model. Results: The reconstructed 3D model accurately simulated fixation with the Wallis non-fusion system. Conclusion: CT scanning with the DICOM standard and Mimics software, which interfaces directly with Ansys and can assign material properties directly from CT values, makes construction of a finite element model of the Wallis lumbar non-fusion system faster and more accurate.

7.
Different kinds of bone measurements are commonly derived from computed-tomography (CT) volumes to answer a multitude of questions in biology and related fields. The underlying steps of bone segmentation and, optionally, polygon surface generation are crucial to keep the measurement error small. In this study, the performance of different, easily accessible segmentation techniques (global thresholding, automatic local thresholding, weighted random walk, neural network, and watershed) and surface generation approaches (different algorithms combined with varying degrees of simplification) was analyzed and recommendations for minimizing inaccuracies were derived. The different approaches were applied to synthetic CT volumes for which the correct segmentation and surface geometry were known. The most accurate segmentations of the synthetic volumes were achieved by setting a case-specific window to the gray value histogram and subsequently applying automatic local thresholding with appropriately chosen thresholding method and radius. Surfaces generated by the Amira® module Generate Lego Surface in combination with careful surface simplification were the most accurate. Surfaces with sub-voxel accuracy were obtained even for synthetic CT volumes with low contrast-to-noise ratios. Segmentation trials with real CT volumes supported the findings. Very accurate segmentations and surfaces can be derived from CT volumes by using readily accessible software packages. The presented results and derived recommendations will help to reduce the measurement error in future studies. Furthermore, the demonstrated strategies for assessing segmentation and surface qualities can be adopted to quantify the performance of new segmentation approaches in future studies.
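When the correct segmentation of a synthetic volume is known, overlap metrics quantify each technique's performance directly. A sketch of the standard Dice coefficient (the toy volumes are illustrative; the study's own error metrics may differ):

```python
import numpy as np

def dice(seg, truth):
    """Dice overlap between a trial segmentation and the known ground truth."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# synthetic ground truth: a 14^3 cube; the trial misses one slab of voxels
truth = np.zeros((30, 30, 30), dtype=bool)
truth[8:22, 8:22, 8:22] = True
trial = np.zeros_like(truth)
trial[9:22, 8:22, 8:22] = True
print(round(dice(trial, truth), 4))
```

A Dice of 1.0 means perfect agreement; scoring each candidate segmentation this way is one simple route to the kind of ranking the study performs.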

8.
A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object of known geometry, which enables assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object deviated from reference values by −1.07 ± 0.52 mm (mean ± standard deviation, SD) for edge distances and −0.647 ± 0.43 mm for parallel side distances. Coprocessing a reference object enables assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the size of the scan resolution.

9.
3D models of long bones are used in a number of fields, including orthopaedic implant design. Accurate reconstruction of these models is of utmost importance for designing implants that achieve good alignment between two bone fragments. CT scanners are typically employed to acquire accurate bone data, but they expose the individual to a high dose of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose of CT. In MR imaging of long bones, artefacts due to random movements of the skeletal system create challenges, as they generate inaccuracies in 3D models reconstructed from data sets containing such artefacts. One defect observed in an initial study is a lateral shift artefact in the reconstructed 3D models. This artefact is believed to result from the volunteer moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages because of the scanner's limited scanning length). As this artefact introduces inaccuracies into implants designed from such models, it must be corrected before the models are used for implant design. This study therefore aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3 T MRI scanner using a 3D VIBE-based protocol. Scanning was conducted in two halves with a good overlap between them, and a lateral shift was generated by moving the limb several millimetres between the two stages. The 3D models were reconstructed using a multi-threshold segmentation method. The artefact was corrected by aligning the two halves with a robust iterative closest point (ICP) algorithm, exploiting the overlapping region between them. The corrected models were compared with a reference model generated by CT scanning of the same sample. The artefact was corrected with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model; in comparison, the model obtained from a single MRI scan showed an average error of 0.25 ± 0.02 mm against the reference. An average deviation of 0.34 ± 0.04 mm was seen when models generated after table movement were compared with the reference models, indicating that table movement also contributes to motion artefacts.
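The point-to-point ICP alignment step could be sketched as below: match each point of the moving half to its nearest neighbour in the fixed half, solve the least-squares rigid transform, and repeat. This is a bare-bones illustration, not the robust ICP variant used in the study:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(moving, fixed, n_iter=30):
    """Minimal point-to-point ICP: nearest-neighbour matching plus rigid fit."""
    tree = cKDTree(fixed)
    pts = moving.copy()
    for _ in range(n_iter):
        _, idx = tree.query(pts)
        R, t = best_rigid(pts, fixed[idx])
        pts = pts @ R.T + t
    return pts

rng = np.random.default_rng(3)
fixed = rng.uniform(0, 20, size=(500, 3))
moving = fixed + np.array([0.3, -0.2, 0.1])   # simulated lateral shift
aligned = icp(moving, fixed)
print(np.abs(aligned - fixed).max() < 1e-3)
```

The overlap region plays the role of `fixed` here: only points with true correspondences in the other half should drive the alignment.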

10.
Hangartner TN, Short DF. Medical Physics 2007;34(10):3777-3784
In computed tomography (CT), the representation of edges between objects of different densities is influenced by the limited spatial resolution of the scanner. This results in the misrepresentation of the density of narrow objects, leading to errors of 70% or more. Our interest is in the imaging and measurement of narrow bone structures, and the issues are the same for imaging with clinical CT scanners, peripheral quantitative CT scanners or micro-CT scanners. Mathematical models, phantoms and tests with patient data led to the following procedure: (i) extract density profiles at one-degree increments from the CT images at right angles to the bone boundary; (ii) consider the outer and inner edge of each profile separately, because the adjacent soft tissues differ; (iii) measure the width of each profile based on a threshold at a fixed percentage of the difference between the soft-tissue value and a first-approximation bone value; (iv) correct the underlying material density of bone for each profile based on the measured width, with the help of the density-versus-width curve obtained from computer simulations and phantom measurements. This latter curve is specific to a given scanner and does not depend on the tissue densities within the range seen in patients. The procedure allows calculation of the material density of bone. Based on phantom measurements, we estimate the density error to be below 2% relative to the density of normal bone, and the bone-width error to be about one tenth of a pixel.
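Step (iii), measuring profile width at a fixed percentage between the soft-tissue baseline and the peak value, could look like the following sketch (the 50% fraction and the synthetic blurred profile are illustrative):

```python
import numpy as np

def profile_width(profile, x, frac=0.5):
    """Width of a density profile measured where it crosses a threshold set
    at a fixed fraction between the baseline and the peak value."""
    baseline = min(profile[0], profile[-1])
    thr = baseline + frac * (profile.max() - baseline)
    above = np.where(profile >= thr)[0]
    return x[above[-1]] - x[above[0]]

# toy profile: a 4 mm cortical wall blurred by limited scanner resolution
x = np.linspace(-10, 10, 401)                       # 0.05 mm steps
ideal = ((x > -2) & (x < 2)).astype(float) * 1000
kernel = np.exp(-np.linspace(-3, 3, 61) ** 2)
blurred = np.convolve(ideal, kernel / kernel.sum(), mode="same")
w = profile_width(blurred, x)
print(round(w, 2))
```

For a symmetric blur the half-height crossings stay near the true edges, which is why the fixed-percentage rule recovers the wall width; the measured width then indexes the density-versus-width correction curve in step (iv).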

11.
Automatic bone segmentation of computed tomography (CT) images is an important step in image-guided surgery that requires both high accuracy and minimal user interaction. Previous attempts include global thresholding, region growing, region competition, watershed segmentation, and parametric active contour (AC) approaches, but none claim fully satisfactory performance. Recently, geometric or level-set-based AC models have been developed and appear to have characteristics suitable for automatic bone segmentation such as initialization insensitivity and topology adaptability. In this study, we have tested the feasibility of five level-set-based AC approaches for automatic CT bone segmentation with both synthetic and real CT images: namely, the geometric AC, geodesic AC, gradient vector flow fast geometric AC, Chan–Vese (CV) AC, and our proposed density distance augmented CV AC (Aug. CV AC). Qualitative and quantitative evaluations have been made in comparison with the segmentation results from standard commercial software and a medical expert. The first three models showed their robustness to various image contrasts, but their performances decreased much when noise level increased. On the contrary, the CV AC’s performance was more robust to noise, yet dependent on image contrast. On the other hand, the Aug. CV AC demonstrated its robustness to both noise and contrast levels and yielded improved performances on a set of real CT data compared with the commercial software, proving its suitability for automatic bone segmentation from CT images.
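Stripped of its curve-length regulariser, the two-phase Chan–Vese model reduces to alternately updating the two region means and reassigning each pixel to the closer one. The sketch below shows only this data term, not a full level-set implementation:

```python
import numpy as np

def chan_vese_means(img, n_iter=20):
    """Data term of the two-phase Chan-Vese model without the curve-length
    regulariser: alternately update the region means c1/c2 and reassign each
    pixel to the closer mean."""
    inside = img > img.mean()
    for _ in range(n_iter):
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        new_inside = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_inside, inside):
            break
        inside = new_inside
    return inside

# noisy toy slice with a bright "bone" square
rng = np.random.default_rng(4)
img = rng.normal(20, 5, (64, 64))
img[20:44, 20:44] += 100
seg = chan_vese_means(img)
print(seg[32, 32], seg[0, 0])
```

Because it fits piecewise-constant means rather than relying on edge gradients, this data term behaves robustly under noise, matching the abstract's observation about the CV model; the length penalty and density-distance augmentation would be added on top.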

12.
Objective: Exploiting the brightness and edge characteristics of hard exudates (HE) in fundus images, we propose an automatic HE detection method combining the Canny edge detector with morphological reconstruction, addressing the low sensitivity of current algorithms and the interference of the optic disc and blood vessels in detection results; this is of practical significance for automated screening of diabetic retinopathy (DR). Methods: The detection algorithm comprises four steps. Step 1: image preprocessing, including RGB channel selection and morphology-based contrast enhancement. Step 2: elimination of key retinal structures; a Gabor-filter-based vessel segmentation removes the influence of vessel edges on HE detection, and our optic-cup segmentation algorithm, applied to the red channel of the fundus image, automatically segments the optic disc, removing the disc and its rim from consideration. Step 3: HE extraction using an improved Canny edge detector combined with morphological reconstruction. Step 4: morphology-based post-processing to remove false-positive regions near the image border. The algorithm was tested on 40 images from a public database (35 with HE lesions, 5 normal). Results: Lesion-based sensitivity (SE) and positive predictive value (PPV) were 93.18% and 79.26%, respectively; image-based sensitivity, specificity (SP), and accuracy (ACC) were 97.14%, 80.00%, and 95.00%. Conclusion: Compared with other methods, the proposed HE detection algorithm combining Canny edge detection with morphological reconstruction is feasible and effective.
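Morphological reconstruction lends itself to bright-lesion extraction through the h-dome transform: reconstructing the image from itself minus h and subtracting leaves only peaks that rise more than h above their surroundings. A sketch with scikit-image on a synthetic patch (intensities and thresholds are illustrative, and the Canny map here merely stands in for the paper's improved edge step):

```python
import numpy as np
from skimage.feature import canny
from skimage.morphology import reconstruction

# synthetic fundus-like patch: dim background with two small bright lesions
img = np.full((80, 80), 0.2)
img[30:34, 30:34] = 0.9
img[60:63, 15:18] = 0.8

# h-dome via reconstruction by dilation: keeps only peaks rising more than
# h above their surroundings (candidate bright lesions)
h = 0.3
dome = img - reconstruction(img - h, img, method="dilation")
lesions = dome > 0.2

edges = canny(img, sigma=1.0)   # edge map that could refine lesion borders
print(lesions[31, 31], lesions[0, 0])
```

In a full pipeline the vessel and optic-disc masks from Step 2 would be subtracted from `lesions` before post-processing.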

13.
Traditional fuzzy connectedness struggles to segment hepatic vessels in CT images well, requires multiple seed points, and is time-consuming. We improve the traditional fuzzy-connectedness segmentation algorithm as follows: the recent Jerman vessel-enhancement filter is improved; its enhancement response is incorporated into the fuzzy affinity function; and Otsu multi-thresholding replaces confidence connectedness for initializing the fuzzy-connectedness computation. Preprocessing consists of adaptive sigmoid gray-level mapping and isotropic interpolation resampling. The improved Jerman filter is then applied, its response is embedded in the affinity function, and Otsu multi-thresholding gathers foreground statistics to initialize the fuzzy connectedness; finally, automatic 3D segmentation of the hepatic vessels is achieved from a single seed point. The improved enhancement and segmentation algorithms were evaluated quantitatively on a public dataset of 20 CT scans, using contrast-to-noise ratio (CNR), accuracy, sensitivity, and specificity as criteria. The enhancement filter achieved a mean CNR of 8.43 dB, surpassing traditional vessel-enhancement filters. The segmentation accuracy reached 98.11%, outperforming confidence-connectedness-initialized fuzzy connectedness, region growing, and level-set segmentation, with a clear advantage in running time as well. The proposed 3D method effectively overcomes the shortcomings of traditional fuzzy connectedness for hepatic vessel segmentation in CT images, improving both accuracy and efficiency.
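The Otsu multi-threshold initialisation can be reproduced with scikit-image's `threshold_multiotsu`, which returns the cut points separating the requested number of intensity classes. A sketch on a synthetic slice (intensity values are illustrative):

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# synthetic slice: background, liver parenchyma, and enhanced vessels
rng = np.random.default_rng(5)
img = rng.normal(40, 8, (128, 128))
img[32:96, 32:96] = rng.normal(120, 8, (64, 64))      # parenchyma
img[60:68, 32:96] = rng.normal(220, 8, (8, 64))       # bright vessel band

# multi-Otsu yields two cut points separating the three intensity classes;
# the statistics of the top class can initialise fuzzy connectedness
t1, t2 = threshold_multiotsu(img, classes=3)
vessel_mask = img > t2
print(round(float(t1)), round(float(t2)))
```

Feeding the mean and spread of `img[vessel_mask]` into the affinity function is one way such foreground statistics could seed the fuzzy-connectedness computation.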

14.
Mullally W, Betke M, Wang J, Ko JP. Medical Physics 2004;31(4):839-848
Several segmentation methods for evaluating the growth of small isolated pulmonary nodules on chest computed tomography (CT) are presented. The methods adaptively threshold attenuation levels and use measures of nodule shape. They were first tested on a realistic chest phantom to evaluate performance with respect to specific nodule characteristics, and then on sequential CT scans of patients, where their growth estimates were compared to the volume change calculated by a chest radiologist. Across all nodule variations on the phantom, the best method's segmentations deviated from actual nodule size by 43% on average. Some methods achieved smaller errors for certain nodule properties: on the phantom, individual methods segmented solid nodules to within 23% of their actual size and nodules with 60.7 mm³ volumes to within 14%. On the clinical data, none of the methods examined showed a statistically significant difference in growth estimation from the radiologist.

15.
Inner views of tubular structures based on computed tomography (CT) and magnetic resonance (MR) data sets may be created by virtual endoscopy. After a preliminary segmentation procedure selects the organ to be represented, virtual endoscopy proceeds as a postprocessing technique using surface or volume rendering of the data sets. In the case of surface rendering, the segmentation is based on grey-level thresholding. To avoid artifacts from noise introduced in the imaging process, and to restore spurious resolution degradations, a robust Wiener filter was applied. Working in Fourier space, this filter approximates the noise spectrum by a simple function proportional to the square root of the signal amplitude, so that only points with tiny amplitudes, consisting mostly of noise, are suppressed. Further artifacts are avoided by correct selection of the threshold range. The lumen and inner walls of the tubular structures are then well represented and allow one to distinguish between harmless fluctuations and medically significant structures.
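SciPy ships an adaptive Wiener filter that, like the paper's Fourier-domain variant, suppresses regions dominated by noise while preserving higher-amplitude signal. Note this is a spatial-domain stand-in, not the square-root noise-spectrum model described above:

```python
import numpy as np
from scipy.signal import wiener

# noisy synthetic CT slice of a tubular cross-section (a ring-shaped wall)
rng = np.random.default_rng(6)
yy, xx = np.mgrid[-64:64, -64:64]
ring = ((xx**2 + yy**2 > 20**2) & (xx**2 + yy**2 < 30**2)).astype(float)
noisy = ring + rng.normal(0, 0.4, ring.shape)

# adaptive Wiener filtering: attenuates low-amplitude, noise-dominated
# regions more strongly than the high-amplitude wall signal
smoothed = wiener(noisy, mysize=5)
print(np.var(smoothed - ring) < np.var(noisy - ring))
```

Thresholding `smoothed` instead of `noisy` would give a far cleaner lumen/wall boundary for the surface-rendering step.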

16.
Background: Proper use of three-dimensional (3D) models generated from medical imaging data in clinical preoperative planning, training and consultation depends on first demonstrating that they accurately replicate the patient's anatomy. This study therefore investigated the dimensional accuracy of 3D reconstructions of the knee joint generated from computed tomography scans via automatic segmentation, by comparing them with 3D models generated through manual segmentation.
Methods: Three unpaired, fresh-frozen right legs were investigated. Three-dimensional models of the femur and tibia of each leg were manually segmented using commercial software and compared, in terms of geometric accuracy, with the 3D models automatically segmented using proprietary software. Bony landmarks were identified and used to calculate clinically relevant distances: femoral epicondylar distance; posterior femoral epicondylar distance; femoral trochlear groove length; and tibial knee center tubercle distance (TKCTD). Pearson's correlation coefficient and Bland-Altman plots were used to evaluate the level of agreement between measured distances.
Results: Differences between parameters measured on manually and automatically segmented 3D models were below 1 mm (range: −0.06 to 0.72 mm), except for TKCTD (between 1.00 and 1.40 mm in two specimens). In addition, there was a significant strong correlation between measurements.
Conclusions: The results are comparable to those of previous studies investigating the accuracy of 3D bone reconstruction. Automatic segmentation techniques can be used to quickly reconstruct reliable 3D models of bone anatomy, and these results may help spread this technology in preoperative and operative settings, where it has shown considerable potential.
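Bland-Altman agreement between two sets of distance measurements reduces to the bias (mean difference) and the 95% limits of agreement. A sketch with fabricated distances (the numbers are illustrative, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics: bias (mean difference) and 95% limits of
    agreement between two measurement methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# toy femoral epicondylar distances (mm): manual vs automatic models
manual = np.array([75.2, 78.1, 80.4, 76.9, 79.3])
auto = np.array([75.0, 78.4, 80.2, 77.3, 79.1])
bias, lo_loa, hi_loa = bland_altman(manual, auto)
print(round(bias, 3))
```

Plotting each pair's difference against its mean, with these three horizontal lines overlaid, gives the familiar Bland-Altman plot used in the study.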

17.
The Sensimmer platform represents our ongoing research on simultaneous haptic and graphic rendering of 3D models. To simulate medical and surgical procedures with Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques determine the anatomies of interest in the images; 3D models are obtained from the segmentation, and triangle reduction is required for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on pixel contrast, integrating the interface between Sensimmer and medical imaging devices by means of a volumetric approach, a Hough transform method, and a manual centering method. Automating the process reduced segmentation time by 56.35% while maintaining output accuracy to within ±2 voxels.
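A Hough-transform step for locating circular cross-sections might be sketched as below with scikit-image: vote over a range of candidate radii and keep the strongest accumulator peak (the synthetic edge map is illustrative):

```python
import numpy as np
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# synthetic edge map containing one circular anatomy cross-section
edges = np.zeros((100, 100), dtype=bool)
rr, cc = circle_perimeter(50, 50, 25)
edges[rr, cc] = True

# accumulate votes over candidate radii, then keep the strongest peak
radii = np.arange(20, 31)
hspaces = hough_circle(edges, radii)
_, cx, cy, rfound = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
print(int(cx[0]), int(cy[0]), int(rfound[0]))
```

Repeating this per slice yields centre and radius estimates that can seed a volumetric segmentation without manual centering.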

18.
Recent advances in physical models of skeletal dosimetry utilize high-resolution 3-dimensional microscopic computed tomography images of trabecular spongiosa. These images are coupled to radiation transport codes to assess energy deposition within active bone marrow and trabecular endosteum. These transport codes rely primarily on the segmentation of the spongiosa images into bone and marrow voxels. Image thresholding has been the segmentation of choice for bone sample images because of its extreme simplicity. However, the ability of the segmentation to reproduce the physical boundary between bone and marrow depends on the selection of the threshold value. Statistical models, as well as visual inspection of the image, have been employed extensively to determine the correct threshold; both techniques are affected by the partial volume effect and can give unexpected results if performed without care. In this study, we propose a new technique for thresholding trabecular spongiosa images based on visual inspection of the image gradient magnitude. We first show that the gradient magnitude of the image reaches a maximum along a surface that remains almost independent of the partial volume effect and that is a good representation of the physical boundary between bone and marrow. A computer program was then developed to let a user compare the position of the iso-surface produced by a threshold with the gradient magnitude; the threshold producing the iso-surface that best coincides with the maximum gradient is chosen. The technique was tested with a set of images of a true bone sample at different resolutions, as well as with three images of a cube of Duocell aluminium foam of known mass and density. Both tests demonstrate the ability of the gradient-magnitude technique to retrieve sample volumes or media volume fractions with 1% accuracy at 30 μm voxel size.
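The threshold-selection idea, picking the threshold whose iso-surface best coincides with the gradient-magnitude ridge, can be sketched by scoring each candidate threshold by the mean gradient magnitude on its segmentation boundary. The scoring rule and phantom below are illustrative simplifications of the paper's interactive visual comparison:

```python
import numpy as np
from scipy import ndimage

def gradient_guided_threshold(vol, candidates):
    """Score each candidate threshold by the mean gradient magnitude on its
    segmentation boundary; the boundary lying on the gradient ridge wins."""
    grad = np.linalg.norm(np.gradient(vol.astype(float)), axis=0)
    best_t, best_score = None, -np.inf
    for t in candidates:
        seg = vol > t
        # border_value=1 keeps array edges, leaving only the interior face
        boundary = seg ^ ndimage.binary_erosion(seg, border_value=1)
        if not boundary.any():
            continue
        score = grad[boundary].mean()
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# blurred bone/marrow phantom: a sigmoid interface centred on one voxel plane
z = np.arange(40)
profile = 100.0 / (1.0 + np.exp(-(z - 20) / 2.0))
vol = np.tile(profile, (40, 40, 1))
t = gradient_guided_threshold(vol, np.arange(10, 91, 10))
print(t)
```

The winning threshold is the one whose boundary voxels sit at the steepest part of the intensity transition, mirroring the maximum-gradient criterion described above.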

19.
Objective: To extract the liver from medical images and provide accurate data for 3D liver localization and radiotherapy planning. The liver differs little in gray level from surrounding organs and its boundary is indistinct, while the single growth criterion of traditional region growing cannot meet the required segmentation accuracy and leaves a rough, unprocessed contour. To address these problems, an improved region-growing algorithm is proposed. Methods: The algorithm improves on three fronts: seed-region selection based on prior knowledge and liver characteristics; dynamic optimization of the growth criterion using Canny edge-detection results; and contour post-processing based on flood filling and curve fitting. Results: The algorithm was tested on several clinical abdominal CT series, with manual delineation by physicians as the reference standard. Automatic liver segmentation achieved good results on most CT slices in a very short time, ensuring efficiency. Conclusion: The tests show that the algorithm performs well in dynamically controlling region growth and smoothing contours, effectively improving the accuracy of automatic liver segmentation while maintaining speed.
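A bare-bones seeded region grower with a tolerance criterion and an optional edge barrier (standing in for the Canny-based dynamic criterion described above) might look like this; the seed, tolerance, and toy image are illustrative:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol, barrier=None):
    """Breadth-first region growing: accept 4-connected neighbours whose
    intensity stays within tol of the running region mean; an optional
    barrier mask (e.g. Canny edges) blocks growth across boundaries."""
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]):
                continue
            if region[nr, nc] or (barrier is not None and barrier[nr, nc]):
                continue
            if abs(img[nr, nc] - total / count) <= tol:
                region[nr, nc] = True
                total += float(img[nr, nc])
                count += 1
                queue.append((nr, nc))
    return region

# toy "liver" blob of intensity 100 beside background 60
img = np.full((50, 50), 60.0)
img[10:40, 10:40] = 100.0
mask = region_grow(img, seed=(25, 25), tol=20.0)
print(mask[25, 25], mask[5, 5], int(mask.sum()))
```

Passing a Canny edge map as `barrier` is one simple way to make the growth criterion edge-aware, in the spirit of the dynamic optimization the abstract describes.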

20.
To address the shortcomings of the traditional Snake-model segmentation algorithm, namely its small force-field capture range, its sensitivity to the initial contour, and its difficulty converging to thin, deeply concave boundaries, a new Snake-based segmentation algorithm for brain CT images is proposed. The algorithm first applies the Canny edge operator to the image and superimposes the resulting edge map on the original image; the Snake model and the gradient vector flow (GVF) Snake model are then applied separately to segment the superimposed image. Experimental results show that the algorithm overcomes the missed segmentation caused by unclear edge contours in the traditional Snake and GVF Snake models, prevents the over-segmentation caused by interacting GVF force fields, drives the contour to converge to thin, deeply concave boundaries, and improves localization accuracy, yielding better segmentation results.
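GVF snakes are not available in common Python imaging libraries, but the classic snake that GVF extends is; the sketch below runs scikit-image's `active_contour` on a blurred synthetic disk. Parameters follow the library's documentation example and are illustrative, not the paper's settings:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# synthetic "brain CT" slice: a bright disk whose boundary the snake should find
img = np.zeros((100, 100))
rr, cc = np.mgrid[:100, :100]
img[(rr - 50) ** 2 + (cc - 50) ** 2 < 30 ** 2] = 1.0

# initialise the contour as a circle outside the disk; blurring widens the
# capture range (the limitation GVF was designed to overcome)
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([50 + 40 * np.sin(s), 50 + 40 * np.cos(s)])
snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)
radii = np.hypot(snake[:, 0] - 50, snake[:, 1] - 50)
print(snake.shape, round(float(radii.mean()), 1))
```

Superimposing a Canny edge map on the input, as the paper does, strengthens the external edge force and helps the contour lock onto weak boundaries.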
