Similar Documents
20 similar documents found.
1.
SUVmax is currently the most common semi-quantitative method of response assessment on FDG PET. By defining the tumour volume of interest (VOI), a measure of total glycolytic volume (TGV) may be obtained. We aimed to comprehensively examine, in a phantom setting, the accuracy of TGV in reflecting actual lesion activity and to compare TGV with SUVmax for response assessment. The algorithms for VOI generation from which TGV was derived included fixed threshold techniques at 50% of maximum (MAX50), 70% of maximum (MAX70), an adaptive threshold of 50% of (maximum + background)/2 (BM50) and a semi-automated iterative region-growing algorithm, GRAB. Comparison with both actual lesion activity and response scenarios was performed. SUVmax correlated poorly with actual lesion activity (r = 0.651) and change in lesion activity (r = 0.605). In a response matrix scenario SUVmax performed poorly when all scenarios were considered, but performed well when only clinically likely scenarios were included. The TGV derived using MAX50 and MAX70 algorithms performed poorly in evaluation of lesion change. The TGV derived from BM50 and GRAB algorithms however performed extremely well in correlation with actual lesion activity (r = 0.993 and r = 0.982, respectively), change in lesion activity (r = 0.972 and r = 0.963, respectively) and in the response scenario matrix. TGV(GRAB) demonstrated narrow confidence bands when modelled with actual lesion activity. Measures of TGV generated by iterative algorithms such as GRAB show potential for increased sensitivity of metabolic response monitoring compared to SUVmax, which may have important implications for improved patient care.
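As a rough illustration of how the threshold rules above turn into a TGV figure, here is a minimal numpy sketch; the array name `suv`, the reading of TGV as summed SUV times voxel volume, and the literal form of the BM50 threshold are assumptions taken from the abstract's wording, and the iterative GRAB algorithm is not reproduced.

```python
import numpy as np

def tgv_fixed_threshold(suv, voxel_volume_ml, fraction=0.5):
    """TGV from a fixed-threshold VOI: MAX50 for fraction=0.5, MAX70 for 0.7."""
    voi = suv >= fraction * suv.max()
    return suv[voi].sum() * voxel_volume_ml   # summed SUV x voxel volume

def tgv_adaptive_threshold(suv, voxel_volume_ml, background):
    """BM50-style VOI, thresholding at 50% of (maximum + background)/2,
    as literally stated in the abstract."""
    threshold = 0.5 * (suv.max() + background) / 2.0
    return suv[suv >= threshold].sum() * voxel_volume_ml
```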

2.
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations in semi-quantitative parameters, most often the maximum SUV measured in PET scans during treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven owing to aberrant delineations. The ASEM method demonstrated improved and more robust estimation of the response evaluation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.
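To make the fusion machinery concrete, below is a toy stochastic EM (SEM) loop for a two-component 1-D Gaussian mixture; the actual ASEM method fuses paired PET acquisitions with spatial modelling, so treat this purely as a sketch of the E-step / stochastic S-step / M-step cycle with illustrative variable names.

```python
import numpy as np

def sem_two_gaussians(x, n_iter=100, seed=0):
    """Stochastic EM: compute posteriors (E), sample hard labels (S), refit (M)."""
    rng = np.random.default_rng(seed)
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: unnormalized Gaussian likelihoods, then posteriors
        lik = np.stack([pi[k] / sigma[k] *
                        np.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                        for k in range(2)])
        post = lik / lik.sum(axis=0)
        # S-step: draw one hard label per sample (the stochastic part of SEM)
        z = rng.random(x.size) < post[1]
        # M-step: maximum-likelihood refit from the sampled labels
        for k, mask in enumerate([~z, z]):
            if mask.any():
                mu[k], sigma[k] = x[mask].mean(), x[mask].std() + 1e-6
                pi[k] = mask.mean()
    return mu, sigma, pi
```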

3.
Li X, Li L, Lu H, Liang Z. Medical Physics 2005, 32(7): 2337-2345
Noise, the partial volume (PV) effect, and image-intensity inhomogeneity make segmentation of brain magnetic resonance (MR) images a challenging task. Most current MR image segmentation methods address only one or two of these effects. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, that takes all of these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero-mean Gaussian prior probability. The well-known fuzzy C-means model is extended to define the likelihood function of the observed image. Under some assumptions, this framework reduces theoretically to the adaptive fuzzy C-means (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where inhomogeneity, noise, and the PV effect are commonly encountered.

4.
Motion-related artifacts are still a major problem in data analysis of functional magnetic resonance imaging (FMRI) studies of brain activation. However, traditional image registration algorithms are prone to inaccuracy when there are residual variations owing to counting statistics, partial volume effects, or biological variation. In particular, susceptibility artifacts usually produce marked signal intensity variance and can mislead the estimation of motion parameters. In this study, two robust estimation algorithms for the registration of FMRI images are described. The first was based on the Newton method and used Tukey's biweight objective function; the second was based on the Levenberg-Marquardt technique and used a skipped mean objective function. Robust M-estimators can suppress the effects of outliers by scaling down their error magnitudes or rejecting them completely via a weighting function. The proposed registration methods consisted of the following steps: fast segmentation of the brain region from the noisy background as a preprocessing step; pre-registration of the volume centroids to provide a good initial estimate; and the two robust estimation algorithms with a voxel sampling technique to find the affine transformation parameters. The accuracy of the algorithms was within 0.5 mm in translation and within 0.5° in rotation. For the FMRI data sets, the performance of the algorithms was visually compared with the AIR 2.0 image registration software, using colour-coded statistical mapping by the Kolmogorov-Smirnov method. Experimental results showed that the algorithms provided significant improvement in correcting motion-related artifacts and can enhance the detection of real brain activation.
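The rejection behaviour described above comes from the shape of the weighting function. Below is a minimal sketch of Tukey's biweight weights with the scale taken from the median absolute deviation; the constant 4.685 is the conventional tuning value, and the Newton and Levenberg-Marquardt optimization loops themselves are not shown.

```python
import numpy as np

def tukey_biweight_weights(residuals, c=4.685):
    """Tukey's biweight: smooth down-weighting, hard zero beyond c * scale."""
    r = np.asarray(residuals, float)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust MAD scale
    u = r / (c * scale + 1e-12)
    w = (1.0 - u ** 2) ** 2
    w[np.abs(u) >= 1.0] = 0.0        # complete rejection of gross outliers
    return w
```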

5.
To address the noise sensitivity and unstable threshold selection encountered when quantifying coronary calcification from CTA images, a coronary calcification segmentation and quantification method based on clustering and adaptive thresholding is proposed. First, feature vectors are built for the pixels inside the coronary vessels from their CT values and spatial positions, and an adaptive number of clusters is set according to the number of vessel skeleton points; fuzzy C-means (FCM) clustering then partitions the coronary region into subregions with similar CT-value distributions. Next, the coronary grey-level histogram is fitted with Gaussian functions, and an adaptive threshold constructed from the Gaussian fit parameters segments the calcifications within these subregions. Finally, calcium scores are computed from the segmentation results following the Agatston scoring standard. On test data from 30 human coronary CTA studies, the method achieved a sensitivity of 89.5% and a specificity of 98.6% for calcification quantification; the Pearson coefficients between the computed calcified volume and Agatston score and the reference results were 0.974 and 0.975, respectively, far higher than the 0.523 and 0.501 obtained by a comparable first-derivative-based threshold selection method (DBTD). The experimental results show that the method is applicable to coronary calcification segmentation and quantification and is fully automatic, robust, and effective against noise.
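For reference, a minimal sketch of the Agatston scoring step mentioned above: per-slice connected regions above 130 HU are scored as area times a peak-density weight. The input names (`ct_slices`, `pixel_area_mm2`) and the 1 mm² minimum-area rule are assumptions; the clustering and adaptive-threshold stages of the proposed method are not reproduced.

```python
import numpy as np
from scipy import ndimage

def agatston_score(ct_slices, pixel_area_mm2, hu_threshold=130):
    """Sum over slices and lesions of area (mm^2) x density weight (1-4)."""
    total = 0.0
    for sl in ct_slices:                      # 2-D arrays of HU values
        labels, n = ndimage.label(sl >= hu_threshold)
        for i in range(1, n + 1):
            region = labels == i
            area = region.sum() * pixel_area_mm2
            if area < 1.0:                    # ignore sub-millimetre specks
                continue
            # weight: 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
            weight = min(4, int(sl[region].max() // 100))
            total += area * weight
    return total
```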

6.
Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize the tabulated kernels instead of analytical parametrizations and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible differences (0.1%) at the fine resolution (0.5 mm voxels), but the differences become significant as the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose; for the CCK algorithm, the corresponding differences are around 1% of the maximum dose. Among the three methods, the CCK algorithm is demonstrated to be the most accurate for multi-resolution dose calculations.
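The relationship between the three kernel tables is just repeated integration, which the sketch below expresses with cumulative sums over a tabulated DK; the function names and the right-bin-edge convention are assumptions.

```python
import numpy as np

def build_kernel_tables(dk, bin_width):
    """CK(r) = integral of DK up to r; CCK(r) = integral of CK up to r."""
    ck = np.cumsum(dk) * bin_width
    cck = np.cumsum(ck) * bin_width
    return ck, cck

def mean_ck(cck, a, b, bin_width):
    """Average of CK over [a, b] via the CCK table; averaging the kernel over
    the voxel extent is what tempers the voxel-size effect."""
    r = np.arange(1, len(cck) + 1) * bin_width   # right bin edges
    return (np.interp(b, r, cck) - np.interp(a, r, cck)) / (b - a)
```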

7.
Yuan Y, Giger ML, Li H, Suzuki K, Sennett C. Medical Physics 2007, 34(11): 4180-4193
Mass lesion segmentation on mammograms is a challenging task since mass lesions are usually embedded and hidden in parenchymal tissue structures of varying density. In this article, we present a method for automatic delineation of lesion boundaries on digital mammograms. The method utilizes a geometric active contour model that minimizes an energy function based on the homogeneities inside and outside of the evolving contour. Prior to the application of the active contour model, a radial gradient index (RGI)-based segmentation method is applied to yield an initial contour close to the lesion boundary in a computationally efficient manner. Based on the initial segmentation, an automatic background estimation method is applied to identify the effective surroundings of the lesion, and a dynamic stopping criterion is implemented to terminate the contour evolution when it reaches the lesion boundary. Using a full-field digital mammography database of 739 images, we quantitatively compare the proposed algorithm with a conventional region-growing method and an RGI-based algorithm by means of the area overlap ratio between computer segmentation and manual segmentation by an expert radiologist. At an overlap threshold of 0.4, 85% of the images are correctly segmented by the proposed method, while only 69% and 73% of the images are correctly delineated by our previously developed region-growing and RGI methods, respectively. This improvement in segmentation is statistically significant.
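A sketch of the overlap criterion used in the evaluation, assuming "area overlap ratio" means intersection over union of binary masks (the paper's exact definition may differ):

```python
import numpy as np

def area_overlap_ratio(seg, ref):
    """Intersection over union of computer (seg) and manual (ref) masks."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    union = np.logical_or(seg, ref).sum()
    return np.logical_and(seg, ref).sum() / union if union else 0.0

# A case counts as correctly segmented when the ratio exceeds 0.4.
```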

8.
Wang J, Engelmann R, Li Q. Medical Physics 2007, 34(12): 4678-4689
Accurate segmentation of pulmonary nodules in computed tomography (CT) is an important and difficult task for computer-aided diagnosis of lung cancer. Therefore, the authors developed a novel automated method for accurate segmentation of nodules in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. To simplify nodule segmentation, the 3D VOI was transformed into a two-dimensional (2D) image by use of a key "spiral-scanning" technique, in which a number of radial lines originating from the center of the VOI spirally scanned the VOI from the "north pole" to the "south pole." The voxels scanned by the radial lines provided a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified the segmentation method and enabled reliable segmentation results to be obtained. A dynamic programming technique was employed to delineate the "optimal" outline of a nodule in the 2D image, which corresponded to the surface of the nodule in the 3D image. The optimal outline was then transformed back into 3D image space to provide the surface of the nodule. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric for evaluating the segmentation method. The database included two Lung Imaging Database Consortium (LIDC) data sets that contained 23 and 86 CT scans, respectively, with 23 and 73 nodules that were 3 mm or larger in diameter. For the two data sets, six and four radiologists manually delineated the outlines of the nodules as reference standards in a performance evaluation for nodule segmentation. The segmentation method was trained on the first and tested on the second LIDC data set. The mean overlap values were 66% and 64% for the nodules in the first and second LIDC data sets, respectively, which represented a higher performance level than those of two existing segmentation methods that were also evaluated by use of the LIDC data sets. The segmentation method provided relatively reliable results for pulmonary nodule segmentation and would be useful for lung cancer quantification, detection, and diagnosis.
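The core geometric idea, radial lines whose directions spiral from pole to pole, can be sketched as follows; the sampling counts and the use of linear interpolation are assumptions, and the dynamic-programming outline search is not included.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spiral_scan(voi, n_lines=360, n_samples=64, n_turns=10):
    """Resample a cubic 3-D VOI into a 2-D image: one row per radial line,
    with line directions spiralling from the north to the south pole."""
    center = (np.array(voi.shape) - 1) / 2.0
    r_max = min(voi.shape) / 2.0 - 1.0
    t = np.linspace(0.0, 1.0, n_lines)
    theta = np.pi * t                     # polar angle: 0 (north) -> pi (south)
    phi = 2.0 * np.pi * n_turns * t       # azimuth winds n_turns times
    radii = np.linspace(0.0, r_max, n_samples)
    rows = []
    for th, ph in zip(theta, phi):
        d = np.array([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph),
                      np.cos(th)])
        pts = center[:, None] + d[:, None] * radii[None, :]  # (3, n_samples)
        rows.append(map_coordinates(voi, pts, order=1))
    return np.asarray(rows)               # the nodule surface becomes a curve
```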

9.
The purpose of this study is to assess the variance and error in nodule diameter measurement associated with variations in nodule-slice position in cross-sectional imaging. A computer program utilizing a standard geometric model was used to simulate theoretical slices through a perfectly spherical nodule of known size, position, and density within a background of "lung" of known fixed density. Assuming a threshold density, the partial volume effect of a voxel was simulated using published slice and pixel sensitivity profiles. At a given slice thickness and nodule size, 100 scans were simulated differing only in scan start position; this was repeated for multiple nodule sizes at three simulated slice thicknesses. Diameter was measured using a standard, automated algorithm. The frequency of measured diameters was tabulated; average errors and standard deviations (SD) were calculated. For a representative 5-mm nodule, average measurement error ranged from +10% to −23% and SD ranged from 0.07 to 0.99 mm at slice thicknesses of 0.75 to 5 mm, respectively. At fixed slice thickness, average error and SD decreased from peak values as nodule size increased. At fixed nodule size, SD increased as slice thickness increased. Average error exhibited dependence on both slice thickness and threshold. Variance and error in nodule diameter measurement associated with nodule-slice position exist due to geometrical limitations. This can lead to false interpretations of nodule growth or stability that could affect clinical management. The variance is most pronounced at higher slice thicknesses and for small nodule sizes. Measurement error is slice-thickness and threshold dependent.
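The position dependence has a simple geometric core: which z-height the slab happens to sample on the sphere. The sketch below captures only that geometry, ignoring the slice/pixel sensitivity profiles and density threshold the study models; all names are illustrative.

```python
import numpy as np

def max_visible_diameter(radius, thickness, start_offset):
    """Largest cross-section diameter any slice sees for a sphere centered at
    z = 0, with slice boundaries shifted by start_offset."""
    z0s = np.arange(-radius - thickness, radius + thickness, thickness) + start_offset
    best = 0.0
    for z0 in z0s:
        # nearest point of the slab [z0, z0 + thickness] to the equator z = 0
        z = 0.0 if z0 <= 0.0 <= z0 + thickness else min(abs(z0), abs(z0 + thickness))
        if z <= radius:
            best = max(best, 2.0 * np.sqrt(radius ** 2 - z ** 2))
    return best

# Repeating with start_offset drawn uniformly from [0, thickness) reproduces
# the position-dependent spread in measured diameter.
```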

10.
Spatial smoothing using isotropic Gaussian kernels to remove noise reduces spatial resolution and increases the partial volume effect in functional magnetic resonance images (fMRI), thereby reducing localization power. To minimize these limitations, we propose a novel anisotropic smoothing method for fMRI data. To extract an anisotropic tensor for each voxel of the functional data, we derived an intensity gradient using the distance transformation of the segmented gray matter of the fMRI-coregistered T1-weighted image. The intensity gradient was then used to determine the anisotropic smoothing kernel at each voxel of the fMRI data. Performance evaluations on both real and simulated data showed that the proposed method had 10% higher statistical power and about 20% higher gray matter localization compared to isotropic smoothing, as well as robustness to registration errors (up to 4 mm translations and 4° rotations) between T1 structural images and fMRI data. The proposed method also outperformed anisotropic smoothing with diffusion gradients derived from the fMRI intensity data.

11.
Objective: Magnetic resonance imaging (MRI) images brain tissue well, but noise, bias fields, and the partial volume effect (PVE) make fully automatic segmentation of MRI images difficult. The fuzzy C-means (FCM) clustering algorithm has been studied extensively for brain tissue segmentation. Taking the segmentation of brain MRI images affected by noise and bias fields as the application setting, this paper surveys a large body of related work and discusses ideas for improving FCM-based brain image segmentation. Methods: The theoretical foundations of nine FCM variants are examined, and each algorithm is analyzed through brain tissue segmentation experiments. Results: The strengths and weaknesses of the different algorithms are compared, with both visual and quantitative evaluation results reported. Conclusion: Bias fields and noise clearly degrade tissue classification quality in brain MR images. Several of the methods can attenuate these adverse effects, but their classification performance remains unsatisfactory because suitable parameters are hard to choose. How to exploit spatial information properly remains a worthwhile direction for future research.
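For orientation, here is the plain FCM baseline that the nine surveyed variants extend with spatial, noise, and bias-field terms, sketched for a flattened intensity vector; initialization and convergence checks are kept deliberately simple.

```python
import numpy as np

def fcm(x, c=3, m=2.0, n_iter=100, seed=0):
    """Alternate fuzzy membership and centroid updates on 1-D intensities."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # (c, n) distances
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                 # memberships: each column sums to 1
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)  # weighted centroid update
    return u, centers
```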

12.
Digital breast tomosynthesis (DBT) has recently emerged as a new and promising three-dimensional modality in breast imaging. In DBT, the breast volume is reconstructed from 11 projection images, taken at source angles equally spaced over an arc of 50 degrees. Reconstruction algorithms for this modality are not yet fully optimized. Because computerized lesion detection in the reconstructed breast volume will be affected by the reconstruction technique, we are developing a novel mass detection algorithm that operates instead on the set of raw projection images. Mass detection is done in three stages. First, lesion candidates are obtained for each projection image separately, using a mass detection algorithm initially developed for screen-film mammography. Second, the locations of a lesion candidate are backprojected into the breast volume. In this feature volume, voxel intensities are a combined measure of detection frequency (e.g., the number of projections in which a given lesion candidate was detected) and of the angular range over which a given lesion was detected. Third, features are extracted after reprojecting the three-dimensional (3-D) locations of lesion candidates into the projection images. Features are combined using linear discriminant analysis. The database used to test the algorithm consisted of 21 mass cases (13 malignant, 8 benign) and 15 cases without mass lesions. Based on this database, the algorithm yielded a sensitivity of 90% at 1.5 false positives per breast volume. Algorithm performance is positively biased because this dataset was used for development, training, and testing, and because the number of algorithm parameters was approximately the same as the number of patient cases. Our results indicate that computerized mass detection in the sequence of projection images for DBT may be effective despite the higher noise level in those images.

13.
This work demonstrates that high-quality cone beam CT images can be generated for a volume of interest (VOI) and investigates the exposure reduction, dose saving, and scatter reduction achieved with the VOI scanning technique. The technique involves inserting a filtering mask between the x-ray source and the breast during image acquisition. The mask has an opening that allows full x-ray exposure to be delivered to a preselected VOI and a lower, filtered exposure to the region outside the VOI. To investigate the effects of increased noise due to reduced exposure outside the VOI on the reconstructed VOI image, we directly extracted the projection data inside the VOI from the full-field projection data and added additional data to the projection outside the VOI to simulate the relative noise increase due to reduced exposure. The nonuniform reference images were simulated in an identical manner to normalize the projection images and measure the x-ray attenuation factor for the object. The regular Feldkamp-Davis-Kress filtered backprojection algorithm was used to reconstruct the 3D images. The noise level inside the VOI was evaluated and compared with that of the full-field, higher-exposure image. A calcification phantom and a low-contrast phantom were imaged. Dose reduction was investigated by estimating the dose distribution in a cylindrical water phantom using Monte Carlo simulation with the Geant4 package. Scatter reduction at the detector input was also studied. Our results show that with the exposure level reduced by the VOI mask, dose levels were significantly reduced both inside and outside the VOI without compromising the accuracy of image reconstruction, allowing the VOI to be imaged with more clarity while reducing breast dose. The contrast-to-noise ratio inside the VOI was improved, and the VOI images were not adversely affected by the noisier projection data outside the VOI. Scatter intensities at the detector input also decreased significantly both inside and outside the VOI in the projection images, indicating potential improvement of image quality inside the VOI and a contribution to dose reduction both inside and outside the VOI.

14.
HMM-Based Recovery and Parameter Estimation of Low-SNR Ion Channel Signals
Single-channel ion currents across the cell membrane are random currents at the picoampere level that can be recorded with the patch-clamp technique, and they are generally modeled as a first-order, finite-state Markov process. For some types of ion channels the current is so weak that it is completely buried in background noise and can hardly be detected by conventional patch-clamp recording; it can only be recovered and estimated by mathematical methods. At low sampling rates the background noise can be regarded as white because of aliasing; at high sampling rates (above the Nyquist frequency) it is colored. This paper reviews the recovery and parameter estimation of low-SNR single-channel signals based on hidden Markov models under white background noise and on hidden vector Markov models under colored background noise, covering in particular the forward-backward and EM algorithms.
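A minimal sketch of the forward recursion referenced above, for the white-noise case where emissions are conditionally independent Gaussians; the colored-noise vector-HMM variant and the EM re-estimation loop are not shown, and all parameter names are illustrative.

```python
import numpy as np

def forward_log_likelihood(y, A, pi, means, sigma):
    """Scaled forward algorithm: log-likelihood of a current trace y under a
    finite-state HMM with Gaussian emissions (transition matrix A, initial
    distribution pi, per-state emission means and standard deviations)."""
    loglik, alpha = 0.0, None
    for t, obs in enumerate(y):
        b = np.exp(-0.5 * ((obs - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        alpha = pi * b if t == 0 else (alpha @ A) * b
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s             # rescaling prevents numerical underflow
    return loglik
```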

15.
Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stage of diffuse parenchymal lung diseases. However, compared with other lung diseases, patterns of GGO are computationally difficult to segment and analyze, since GGO usually do not have clear boundaries. In this paper, we present a new approach that automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory, and we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation as well as quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.

16.
Contrast-enhanced ultrasound (CEUS), with the recent development of both contrast-specific imaging modalities and microbubble-based contrast agents, allows noninvasive quantification of microcirculation in vivo. Nevertheless, functional parameters obtained by modeling contrast uptake kinetics can be impaired by respiratory motion. Accordingly, we developed an automatic respiratory gating method and tested it on 35 CEUS hepatic datasets with focal lesions. Each dataset included fundamental-mode and cadence contrast pulse sequencing (CPS) mode sequences acquired simultaneously. The developed method consists of (1) estimation of the respiratory kinetics as a linear combination of the first components provided by a principal components analysis, constrained by prior knowledge of the respiratory rate in the frequency domain, and (2) automated generation of two respiratory-gated subsequences from the CPS mode sequence by detecting end-of-inspiration and end-of-expiration phases from the respiratory kinetics. The fundamental mode enabled a more reliable estimation of the respiratory kinetics than the CPS mode. The k-means algorithm was applied to both the original CPS mode sequences and the respiratory-gated subsequences, producing clustering maps and associated mean kinetics. Our respiratory gating process allowed better superimposition of manually drawn lesion contours on k-means clustering maps as well as substantial improvement in the quality of contrast uptake kinetics. While the quality of maps and kinetics was satisfactory in only 11/35 datasets before gating, it was satisfactory in 34/35 datasets after gating. Moreover, noise amplitude estimated within the delineated lesions was reduced from 62 ± 21 to 40 ± 10 (p < 0.01) after gating. These findings were supported by the low residual horizontal (0.44 ± 0.29 mm) and vertical (0.15 ± 0.16 mm) shifts found during manual motion correction of each respiratory-gated subsequence. The developed technique could serve as a basis for accurate quantification of perfusion parameters in the evaluation and follow-up of patients under antiangiogenic therapies.
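A simplified sketch of step (1): take temporal principal components of the frame sequence and keep the one whose spectral power concentrates in a plausible respiratory band. Selecting a single component (rather than a constrained linear combination) and the band limits are simplifying assumptions.

```python
import numpy as np

def respiratory_kinetics(frames, fps, f_lo=0.1, f_hi=0.5, n_comp=5):
    """frames: (T, H, W) array; returns the temporal component whose spectrum
    is most concentrated in the [f_lo, f_hi] Hz respiratory band."""
    X = frames.reshape(len(frames), -1).astype(float)
    X -= X.mean(axis=0)
    U, S, _ = np.linalg.svd(X, full_matrices=False)   # U[:, k]: k-th kinetics
    freqs = np.fft.rfftfreq(len(frames), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    best, best_ratio = None, -1.0
    for k in range(min(n_comp, U.shape[1])):
        spec = np.abs(np.fft.rfft(U[:, k])) ** 2
        ratio = spec[band].sum() / spec[1:].sum()     # skip the DC bin
        if ratio > best_ratio:
            best, best_ratio = U[:, k] * S[k], ratio
    return best
```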

17.
Discrete HMM Parameter Estimation and Signal Recovery Based on Stochastic Relaxation
Single-channel ion currents across the cell membrane are picoampere-level random transmembrane currents, and because the signal is so weak, the single-channel current in patch-clamp recordings is often buried in strong background noise. Traditionally a threshold detector is used to recover the channel current, which requires a manually chosen threshold and fails at low signal-to-noise ratios. This study applies hidden Markov model (HMM) techniques to channel signal recovery and parameter estimation. A stochastic relaxation (SR)-based global optimization algorithm for discrete HMM parameters is first used to estimate the channel's kinetic parameters, ensuring that the parameters converge to the global optimum during model training; the channel current is then recovered from the noise-contaminated patch-clamp recording. Theoretical and experimental results show that at low signal-to-noise ratios (SNR < 5.0), when used for single-channel parameter estimation and signal recovery under white background noise, the method converges quickly, recovers signals accurately, and is strongly noise resistant, describing the behavior of real channels well.
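The abstract does not spell out the SR update itself; its usual core is a Metropolis-style acceptance rule applied at a slowly decreasing temperature, sketched here under that assumption.

```python
import numpy as np

def sr_accept(delta, temperature, rng):
    """Accept a candidate parameter move: always if it improves the objective
    (delta <= 0), otherwise with probability exp(-delta / T), which lets the
    search escape local optima and, under a slow cooling schedule, approach
    the global optimum."""
    return delta <= 0 or rng.random() < np.exp(-delta / temperature)
```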

18.
Unified Approach for Multiple Sclerosis Lesion Segmentation on Brain MRI
The presence of a large number of false lesion classifications in segmented brain MR images is a major problem for the accurate determination of lesion volumes in multiple sclerosis (MS) brains. In order to minimize false lesion classifications, a strategy that combines parametric and nonparametric techniques is developed and implemented. This approach uses information from proton density (PD)-weighted, T2-weighted, and fluid attenuation inversion recovery (FLAIR) images. The strategy involves CSF and lesion classification using the Parzen window classifier. Image processing, morphological operations, and ratio maps of PD- and T2-weighted images are used to minimize false positives. Contextual information is exploited to minimize false negative lesion classifications using the hidden Markov random field-expectation maximization (HMRF-EM) algorithm. Lesions are delineated using fuzzy connectivity. The performance of this algorithm is quantitatively evaluated on 23 MS patients. The similarity index and the percentages of over-, under-, and correct estimation of lesions are computed by spatially comparing the results of the present procedure with expert manual segmentation. The automated processing scheme detected 80% of the manually segmented lesions in cases with low lesion load and 93% of the lesions in cases with high lesion load.
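A sketch of the Parzen-window classification step, assuming Gaussian windows of common width h and labelled training samples for each class; the shared normalizing constant cancels in the comparison, and the feature and variable names are illustrative.

```python
import numpy as np

def parzen_classify(x, train_lesion, train_csf, h=1.0):
    """Label each row of x as lesion (True) or CSF (False) by comparing
    Gaussian Parzen-window density estimates built from training samples."""
    def density(pts, train):
        d2 = ((pts[:, None, :] - train[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * h ** 2)).mean(axis=1)
    return density(x, train_lesion) > density(x, train_csf)
```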

19.
The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and the automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously, but without the context of several experts and patient volumes; this study provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. In a ranking of true positive rates at a 2 mm threshold from the simulated ground truth over all structures, the automatic system ranked second of the nine raters. This work underscores the need for large-scale studies utilizing statistically robust numbers of patients and experts in evaluating the quality of automatic algorithms.
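For completeness, here is the Dice similarity coefficient used as one of the three geometric measures, in a minimal form for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```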

20.
Tumor volume estimation, as well as accurate and reproducible segmentation of borders in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and the volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
