Similar Articles
20 similar articles retrieved.
1.
Armato SG, Altman MB, Wilkie J, Sone S, Li F, Doi K, Roy AS. Medical Physics 2003, 30(6): 1188-1197
We have evaluated the performance of an automated classifier applied to the task of differentiating malignant and benign lung nodules in low-dose helical computed tomography (CT) scans acquired as part of a lung cancer screening program. The nodules classified in this manner were initially identified by our automated lung nodule detection method, so that the output of automated lung nodule detection was used as input to automated lung nodule classification. This study begins to narrow the distinction between the "detection task" and the "classification task." Automated lung nodule detection is based on two- and three-dimensional analyses of the CT image data. Gray-level-thresholding techniques are used to identify initial lung nodule candidates, for which morphological and gray-level features are computed. A rule-based approach is applied to reduce the number of nodule candidates that correspond to non-nodules, and the features of remaining candidates are merged through linear discriminant analysis to obtain final detection results. Automated lung nodule classification merges the features of the lung nodule candidates identified by the detection algorithm that correspond to actual nodules through another linear discriminant classifier to distinguish between malignant and benign nodules. The automated classification method was applied to the computerized detection results obtained from a database of 393 low-dose thoracic CT scans containing 470 confirmed lung nodules (69 malignant and 401 benign nodules). Receiver operating characteristic (ROC) analysis was used to evaluate the ability of the classifier to differentiate between nodule candidates that correspond to malignant nodules and nodule candidates that correspond to benign lesions. The area under the ROC curve for this classification task attained a value of 0.79 during a leave-one-out evaluation.
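As an illustration of the final classification step (merging candidate features with a linear discriminant and evaluating the result by leave-one-out ROC analysis), a minimal Python sketch using scikit-learn is given below. The feature matrix and labels are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

# Placeholder data: rows are nodule candidates, columns are morphological and
# gray-level features; labels are 1 = malignant, 0 = benign.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = rng.integers(0, 2, size=120)

scores = np.empty(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    lda = LinearDiscriminantAnalysis()
    lda.fit(X[train_idx], y[train_idx])
    # Discriminant score for the single held-out candidate.
    scores[test_idx] = lda.decision_function(X[test_idx])

print("Leave-one-out ROC AUC:", roc_auc_score(y, scores))
```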

2.
Automated detection of lung nodules in CT scans: preliminary results (cited 15 times: 0 self-citations, 15 by others)
We have developed a fully automated computerized method for the detection of lung nodules in helical computed tomography (CT) scans of the thorax. This method is based on two-dimensional and three-dimensional analyses of the image data acquired during diagnostic CT scans. Lung segmentation proceeds on a section-by-section basis to construct a segmented lung volume within which further analysis is performed. Multiple gray-level thresholds are applied to the segmented lung volume to create a series of thresholded lung volumes. An 18-point connectivity scheme is used to identify contiguous three-dimensional structures within each thresholded lung volume, and those structures that satisfy a volume criterion are selected as initial lung nodule candidates. Morphological and gray-level features are computed for each nodule candidate. After a rule-based approach is applied to greatly reduce the number of nodule candidates that correspond to nonnodules, the features of remaining candidates are merged through linear discriminant analysis. The automated method was applied to a database of 43 diagnostic thoracic CT scans. Receiver operating characteristic (ROC) analysis was used to evaluate the ability of the classifier to differentiate nodule candidates that correspond to actual nodules from false-positive candidates. The area under the ROC curve for this categorization task attained a value of 0.90 during leave-one-out-by-case evaluation. The automated method yielded an overall nodule detection sensitivity of 70% with an average of 1.5 false-positive detections per section when applied to the complete 43-case database. A corresponding nodule detection sensitivity of 89% with an average of 1.3 false-positive detections per section was achieved with a subset of 20 cases that contained only one or two nodules per case.
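A minimal sketch of the multiple-threshold step with 18-point connectivity labeling and a volume criterion might look as follows; the threshold list and volume limits are illustrative assumptions, not the published parameters.

```python
import numpy as np
from scipy import ndimage

def initial_nodule_candidates(lung_volume, thresholds, min_vox=10, max_vox=2000):
    """Collect 3D structures that satisfy a volume criterion at any threshold."""
    # 18-connectivity in 3D: face and edge neighbours, but not corner neighbours.
    structure = ndimage.generate_binary_structure(3, 2)
    candidates = []
    for t in thresholds:
        labeled, n = ndimage.label(lung_volume >= t, structure=structure)
        sizes = np.bincount(labeled.ravel())
        for lab in range(1, n + 1):
            if min_vox <= sizes[lab] <= max_vox:
                candidates.append((t, lab, int(sizes[lab])))
    return candidates
```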

3.
We developed and tested a fully automated computerized scheme that identifies pulmonary airway sections depicted on computed tomography (CT) images and computes their sizes including the lumen and airway wall areas. The scheme includes four processing modules that (1) segment left and right lung areas, (2) identify airway locations, (3) segment airway walls from neighboring pixels, and (4) compute airway sizes. The scheme uses both a raster scanning and a labeling algorithm complemented by simple classification rules for region size and circularity to automatically search for and identify airway sections of interest. A profile tracking method is used to segment airway walls from neighboring pixels including those associated with dense tissue (i.e., pulmonary arteries) along scanning radial rays. A partial pixel membership method is used to compute airway size. The scheme was tested on ten randomly selected CT studies that included 26 sets of CT images acquired using both low and conventional dose CT examinations with one of four reconstruction algorithms (namely, "bone," "lung," "soft," and "standard" convolution kernels). Three image section thicknesses (1.25, 2.5, and 5 mm) were evaluated. The scheme detected a large number of quantifiable airway sections when the CT images were reconstructed using high spatial frequency convolution kernels. The detection results demonstrated a consistent trend for all test image sets in that as airway lumen size increases, on average the airway wall area increases as well and the wall area percentage decreases. The study suggested that CT images reconstructed using high spatial frequency convolution kernels and thin-section thickness were most amenable to automated detection, reasonable segmentation, and quantified assessment when the airways are close to being perpendicular to the CT image plane.
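To illustrate the ray-based idea behind the profile-tracking step (not the authors' implementation), intensity profiles along radial rays from an airway centre can be sampled as below; the ray count, maximum radius, and interpolation order are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_profiles(image, center, n_rays=64, max_radius_px=20, n_samples=80):
    """Sample intensity along radial rays from an airway centre (row, col)."""
    cy, cx = center
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    radii = np.linspace(0.0, max_radius_px, n_samples)
    rows = cy + radii[None, :] * np.sin(angles[:, None])
    cols = cx + radii[None, :] * np.cos(angles[:, None])
    # Bilinear interpolation; one profile per ray, shape (n_rays, n_samples).
    return map_coordinates(image.astype(float), [rows, cols], order=1)
```

Each profile would then be searched for the lumen-to-wall and wall-to-parenchyma transitions along its ray.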

4.
The purpose of this paper is to develop a method of eliminating CT image artifacts generated by objects extending outside the scan field of view, such as obese or inadequately positioned patients. CT projection data are measured only within the scan field of view and thus are abruptly discontinuous at the projection boundaries if the scanned object extends outside the scan field of view. This data discontinuity causes an artifact that consists of a bright peripheral band that obscures objects near the boundary of the scan field of view. An adaptive mathematical extrapolation scheme with low computational expense was applied to reduce the data discontinuity prior to convolution in a filtered backprojection reconstruction. Despite extended projection length, the convolution length was not increased and thus the reconstruction time was not affected. Raw projection data from ten patients whose bodies extended beyond the scan field of view were reconstructed using a conventional method and our extended reconstruction method. Limitations of the algorithm are investigated and extensions for further improvement are discussed. The images reconstructed by conventional filtered backprojection demonstrated peripheral bright-band artifacts near the boundary of the scan field of view. Images reconstructed with our technique were free of such artifacts and clearly showed the anatomy at the periphery of the scan field of view with correct attenuation values. We conclude that bright-band artifacts generated by obese patients whose bodies extend beyond the scan field of view were eliminated with our reconstruction method, which reduces boundary data discontinuity. The algorithm can be generalized to objects with inhomogeneous peripheral density and to true "Region of Interest Reconstruction" from truncated projections.
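The paper's adaptive extrapolation scheme is not reproduced here, but the basic idea of smoothing the boundary discontinuity before convolution can be sketched by appending cosine-tapered tails to each truncated projection row (note that, unlike the published method, this naive padding does lengthen the convolution):

```python
import numpy as np

def extend_projection(row, pad=64):
    """Append cosine-tapered tails so the truncated projection falls smoothly to zero."""
    taper = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, pad)))  # 1 -> 0
    left = row[0] * taper[::-1]    # rises from ~0 up to the left boundary value
    right = row[-1] * taper        # decays from the right boundary value to ~0
    return np.concatenate([left, row, right])
```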

5.
The concept of the internal target volume (ITV) is highly significant in radiotherapy of the lung, an organ strongly affected by respiratory motion. Several methods for obtaining the ITV have been published to date. To define the ITV, we developed a new method that adapts a time filter to four-dimensional CT (4DCT) at the projection-data level (4D projection-data maximum attenuation, 4DPM), and compared it with processing of the reconstructed images (4D image maximum intensity projection, 4DIM) in phantom and clinical evaluations. Both 4DIM and 4DPM readily captured an accurate maximum intensity volume (MIV), that is, the tumour-encompassing volume. Whereas 4DIM raised the CT number to 1.8 times that obtained with 4DPM, 4DPM preserved the original tumour CT number in the MIV through its reconstruction algorithm. In a patient with honeycomb lung fibrosis, the MIV obtained with 4DIM was 0.7 cm larger in the cranio-caudal direction than that obtained with cine imaging. 4DPM therefore provided an accurate MIV independent of patient characteristics and reconstruction conditions. These findings indicate the usefulness of 4DPM for determining the ITV in radiotherapy.
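In the image domain, the 4DIM maximum intensity volume reduces, in its simplest form, to a voxel-wise maximum over the respiratory phases; a minimal sketch, assuming the phase volumes are already reconstructed on a common grid:

```python
import numpy as np

def maximum_intensity_volume(phase_volumes):
    """Voxel-wise maximum over a sequence of 4DCT phase volumes of equal shape."""
    stack = np.stack(phase_volumes, axis=0)   # (n_phases, z, y, x)
    return stack.max(axis=0)
```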

6.
Digital breast tomosynthesis (DBT) has recently emerged as a new and promising three-dimensional modality in breast imaging. In DBT, the breast volume is reconstructed from 11 projection images, taken at source angles equally spaced over an arc of 50 degrees. Reconstruction algorithms for this modality are not fully optimized yet. Because computerized lesion detection in the reconstructed breast volume will be affected by the reconstruction technique, we are developing a novel mass detection algorithm that operates instead on the set of raw projection images. Mass detection is done in three stages. First, lesion candidates are obtained for each projection image separately, using a mass detection algorithm that was initially developed for screen-film mammography. Second, the locations of a lesion candidate are backprojected into the breast volume. In this feature volume, voxel intensities are a combined measure of detection frequency (e.g., the number of projections in which a given lesion candidate was detected), and a measure of the angular range over which a given lesion was detected. Third, features are extracted after reprojecting the three-dimensional (3-D) locations of lesion candidates into projection images. Features are combined using linear discriminant analysis. The database used to test the algorithm consisted of 21 mass cases (13 malignant, 8 benign) and 15 cases without mass lesions. Based on this database, the algorithm yielded a sensitivity of 90% at 1.5 false positives per breast volume. Algorithm performance is positively biased because this dataset was used for development, training, and testing, and because the number of algorithm parameters was approximately the same as the number of patient cases. Our results indicate that computerized mass detection in the sequence of projection images for DBT may be effective despite the higher noise level in those images.

7.
Li Q, Doi K. Medical Physics 2006, 33(2): 320-328
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In CAD schemes, classifiers play a key role in achieving a high lesion detection rate and a low false-positive rate. Although many popular classifiers such as linear discriminant analysis and artificial neural networks have been employed in CAD schemes for reduction of false positives, a rule-based classifier has probably been the simplest and most frequently used one since the early days of development of various CAD schemes. However, existing rule-based classifiers have major disadvantages that significantly reduce their practicality and credibility. The disadvantages include manual design, poor reproducibility, poor evaluation methods such as resubstitution, and a large overtraining effect. An automated rule-based classifier with a minimized overtraining effect can overcome or significantly reduce the extent of the above-mentioned disadvantages. In this study, we developed an "optimal" method for the selection of cutoff thresholds and a fully automated rule-based classifier. Experimental results obtained with Monte Carlo simulation and a real lung nodule CT data set demonstrated that the automated threshold selection method can completely eliminate the overtraining effect in the cutoff threshold selection procedure, and thus can minimize the overall overtraining effect in the constructed rule-based classifier. We believe that this threshold selection method is very useful in the construction of automated rule-based classifiers with minimized overtraining effect.
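A toy version of data-driven cutoff selection for a single rule (choosing the tightest threshold that still retains every true lesion in a training set) is sketched below; it only illustrates the kind of automated threshold selection the paper addresses, not its "optimal" method or its overtraining analysis.

```python
import numpy as np

def cutoff_keeping_all_positives(feature_values, labels):
    """Largest lower cutoff on one feature that keeps every positive candidate.

    feature_values : 1D array, one feature value per candidate
    labels         : 1 for true lesions, 0 for false positives
    Candidates with feature >= cutoff are retained by the rule.
    """
    positives = feature_values[labels == 1]
    cutoff = positives.min()                      # keep all true lesions
    kept = feature_values >= cutoff
    removed_fp = int(np.sum((labels == 0) & ~kept))
    return cutoff, removed_fp
```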

8.
Pan X. Medical Physics 2000, 27(9): 2031-2036
The hybrid algorithms developed recently for the reconstruction of fan-beam images possess computational and noise properties superior to those of the fan-beam filtered backprojection (FFBP) algorithm. However, the hybrid algorithms cannot be applied directly to a halfscan fan-beam sinogram because they require knowledge of a fullscan fan-beam sinogram. In this work, we developed halfscan-hybrid algorithms for image reconstruction in halfscan computed tomography (CT). Numerical evaluation indicates that the proposed halfscan-hybrid algorithms are computationally more efficient than are the widely used halfscan-FFBP algorithms. Also, the results of quantitative studies demonstrated clearly that the noise levels in images reconstructed by use of the halfscan-hybrid algorithm are generally lower and spatially more uniform than are those in images reconstructed by use of the halfscan-FFBP algorithm. Such reduced and uniform image noise levels may be translated into improvement of the accuracy and precision of lesion detection and parameter estimation in noisy CT images without increasing the radiation dose to the patient. Therefore, the halfscan-hybrid algorithms may have significant implication for image reconstruction in conventional and helical CT.

9.
A multi-criterion algorithm for automatic delineation of small pulmonary nodules on helical CT images has been developed. In a slice-by-slice manner, the algorithm uses density, gradient strength, and a shape constraint of the nodule to automatically control the segmentation process. The multiple criteria applied to separation of the nodule from its surrounding structures in the lung are based on the fact that typical small pulmonary nodules on CT images have high densities, show a distinct difference in density at the boundary, and tend to be compact in shape. Prior to the segmentation, a region of interest containing the nodule is manually selected on the CT images. The segmentation process then begins with a high density threshold that is decreased stepwise, resulting in expansion of the area of nodule candidates. This progressive region-growing approach is terminated when subsequent thresholds provide either a diminished gradient strength of the nodule contour or significant changes of nodule shape from the compact form. The shape criterion added to the algorithm can effectively prevent high-density surrounding structures (e.g., blood vessels) from being falsely segmented as part of the nodule, which occurs frequently when only the gradient strength criterion is applied; this is demonstrated by examples given in the Results section. The algorithm's accuracy has been compared with that of a radiologist's manual segmentation, and no statistically significant difference has been found between the nodule areas delineated by the radiologist and those obtained by the multi-criterion algorithm. The improved nodule boundary allows for more accurate assessment of nodule size and hence nodule growth over a short time period, and for better characterization of nodule edges. This information is useful in determining the malignancy status of a nodule at an early stage and thus provides significant guidance for further clinical management.
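The stepwise threshold-lowering procedure with a compactness (shape) check can be sketched as follows; the compactness bound, perimeter estimate, and step size are stand-ins for the paper's actual criteria, and the gradient-strength criterion is omitted.

```python
import numpy as np
from scipy import ndimage

def grow_nodule(roi, seed, t_start, t_stop, step=10, min_compactness=0.4):
    """Lower the threshold stepwise; stop when the seed region loses compactness.

    roi  : 2D array containing the nodule
    seed : (row, col) index inside the nodule
    """
    best = None
    for t in np.arange(t_start, t_stop, -step):
        labeled, _ = ndimage.label(roi >= t)
        lab = labeled[seed]
        if lab == 0:                              # seed not yet above threshold
            continue
        region = labeled == lab
        area = region.sum()
        # Crude perimeter estimate: number of boundary pixels.
        perimeter = area - ndimage.binary_erosion(region).sum()
        compactness = 4.0 * np.pi * area / max(perimeter, 1) ** 2
        if compactness < min_compactness:
            break                                 # shape no longer compact: stop
        best = region
    return best
```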

10.
To address CT image reconstruction from a limited range of projection angles, an improved algebraic iterative reconstruction algorithm constrained by an adaptive total p-variation (TpV) of the image is proposed. The improved algorithm adopts a two-phase reconstruction structure: first, an intermediate image is reconstructed with the algebraic reconstruction technique (ART) and corrected for non-negativity; then an adaptive TpV regularization term is used to enforce the sparsity of the image and further refine the result. The regularizer adaptively adjusts the parameter p, which determines the smoothing strength, according to local image characteristics, and the two phases alternate until the convergence criterion is met. The improved algorithm was evaluated in simulated reconstructions of the classical Shepp-Logan phantom, using the reconstructed images and locally magnified views for subjective assessment and profile plots and the normalized absolute distance for objective evaluation, and was compared with the classical ART-TV algorithm. The comparison shows that the images reconstructed by the proposed algorithm are closer to the true phantom, have smaller reconstruction error, and better preserve image edges.
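A hedged sketch of the alternating ART / total-variation structure described above is given below, with ordinary TV (fixed p) standing in for the adaptive TpV term and a generic system matrix A standing in for the real limited-angle projection geometry.

```python
import numpy as np

def art_tv(A, b, shape, n_outer=20, n_tv=10, lam=0.2, alpha=0.05):
    """Alternate ART (Kaczmarz) sweeps with explicit TV smoothing on a 2D image.

    A : (n_rays, n_pixels) system matrix, b : measured projection data.
    In practice A would be derived from the limited-angle scan geometry.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1) + 1e-12
    for _ in range(n_outer):
        # ART phase: one relaxed Kaczmarz sweep, then non-negativity correction.
        for i in range(A.shape[0]):
            x = x + lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.clip(x, 0.0, None)
        # TV phase: a few explicit descent steps on the total variation.
        img = x.reshape(shape)
        for _ in range(n_tv):
            d0 = np.diff(img, axis=0, append=img[-1:, :])
            d1 = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(d0 * d0 + d1 * d1) + 1e-8
            p0, p1 = d0 / mag, d1 / mag
            # Divergence of the normalised gradient (crude boundary handling).
            div = (p0 - np.roll(p0, 1, axis=0)) + (p1 - np.roll(p1, 1, axis=1))
            img = img + alpha * div
        x = img.ravel()
    return x.reshape(shape)
```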

11.
Lung nodule detection in low-dose and thin-slice computed tomography (cited 3 times: 0 self-citations, 3 by others)
A computer-aided detection (CAD) system for the identification of small pulmonary nodules in low-dose and thin-slice CT scans has been developed. The automated procedure for selecting the nodule candidates is mainly based on a filter enhancing spherical-shaped objects. A neural approach based on the classification of each single voxel of a nodule candidate has been purposely developed and implemented to reduce the amount of false-positive findings per scan. The CAD system has been trained to be sensitive to small internal and sub-pleural pulmonary nodules collected in a database of low-dose and thin-slice CT scans. The system performance has been evaluated on a data set of 39 CT scans containing 75 internal and 27 sub-pleural nodules. The FROC curve obtained on this data set shows high values of sensitivity to lung nodules (80-85% range) at an acceptable level of false-positive findings per patient (10-13 FP/scan).
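The abstract does not specify the sphere-enhancing filter in detail; a common blob-enhancement stand-in (an assumption, not the authors' filter) is a multi-scale Laplacian-of-Gaussian response:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def blob_enhancement(volume, sigmas_mm=(2.0, 3.0, 4.0), voxel_mm=1.0):
    """Maximum of scale-normalized negative LoG responses; bright, roughly
    spherical structures of comparable size yield high values."""
    responses = []
    for s in sigmas_mm:
        sigma_vox = s / voxel_mm
        responses.append(-(sigma_vox ** 2) *
                         gaussian_laplace(volume.astype(float), sigma_vox))
    return np.max(np.stack(responses), axis=0)
```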

12.
We present a number of approaches based on the radial gradient index (RGI) to achieve false-positive reduction in automated CT lung nodule detection. A database of 38 cases was used that contained a total of 82 lung nodules. For each CT section, a complementary image known as an "RGI map" was constructed to enhance regions of high circularity and thus improve the contrast between nodules and normal anatomy. Thresholds on three RGI parameters were varied to construct RGI filters that sensitively eliminated false-positive structures. In a consistency approach, RGI filtering eliminated 36% of the false-positive structures detected by the automated method without the loss of any true positives. Use of an RGI filter prior to a linear discriminant classifier yielded notable improvements in performance, with the false-positive rate at a sensitivity of 70% being reduced from 0.5 to 0.28 per section. Finally, the performance of the linear discriminant classifier was evaluated with RGI-based features. RGI-based features achieved a substantial improvement in overall performance, with a 94.8% reduction in the false-positive rate at a fixed sensitivity of 70%. These results demonstrate the potential role of RGI analysis in an automated lung nodule detection method.
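One common formulation of a radial gradient index for a candidate region is sketched below, under the assumption (made explicit here, not taken from the paper) that the index is the fraction of total gradient magnitude directed toward the candidate centre:

```python
import numpy as np

def radial_gradient_index(patch, center):
    """RGI in [-1, 1]: near 1 when gradients point toward `center`,
    as for a bright, circular nodule."""
    gy, gx = np.gradient(patch.astype(float))
    rows, cols = np.indices(patch.shape)
    # Unit vectors from each pixel toward the candidate centre (row, col).
    dy, dx = center[0] - rows, center[1] - cols
    norm = np.hypot(dy, dx) + 1e-8
    dy, dx = dy / norm, dx / norm
    radial_component = gy * dy + gx * dx
    grad_mag = np.hypot(gy, gx)
    return radial_component.sum() / (grad_mag.sum() + 1e-8)
```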

13.
We developed a novel digital tomosynthesis (DTS) reconstruction method using a deformation field map to optimally estimate volumetric information in DTS images. The deformation field map is solved by using prior information, a deformation model, and new projection data. Patients' previous cone-beam CT (CBCT) or planning CT data are used as the prior information, and the new patient volume to be reconstructed is considered as a deformation of the prior patient volume. The deformation field is solved by minimizing bending energy and maintaining new projection data fidelity using a nonlinear conjugate gradient method. The new patient DTS volume is then obtained by deforming the prior patient CBCT or CT volume according to the solution to the deformation field. This method is novel because it is the first method to combine deformable registration with limited angle image reconstruction. The method was tested in 2D cases using simulated projections of a Shepp-Logan phantom, liver, and head-and-neck patient data. The accuracy of the reconstruction was evaluated by comparing both organ volume and pixel value differences between DTS and CBCT images. In the Shepp-Logan phantom study, the reconstructed pixel signal-to-noise ratio (PSNR) for the 60 degrees DTS image reached 34.3 dB. In the liver patient study, the relative error of the liver volume reconstructed using 60 degrees projections was 3.4%. The reconstructed PSNR for the 60 degrees DTS image reached 23.5 dB. In the head-and-neck patient study, the new method using 60 degrees projections was able to reconstruct the 8.1 degrees rotation of the bony structure with 0.0 degrees error. The reconstructed PSNR for the 60 degrees DTS image reached 24.2 dB. In summary, the new reconstruction method can optimally estimate the volumetric information in DTS images using 60 degrees projections. Preliminary validation of the algorithm showed that it is both technically and clinically feasible for image guidance in radiation therapy.
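The final step (deforming the prior volume by the solved deformation field) and the PSNR figure of merit can be sketched as follows; the displacement field here is a placeholder input, not the output of the paper's bending-energy minimization.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_deformation(prior_volume, deformation_field):
    """Warp a prior CT/CBCT volume by a per-voxel displacement field of shape (3, z, y, x)."""
    grid = np.indices(prior_volume.shape).astype(float)
    warped_coords = grid + deformation_field
    return map_coordinates(prior_volume, warped_coords, order=1, mode="nearest")

def psnr(reference, test):
    """Signal-to-noise ratio in dB relative to the reference image's peak value."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2) + 1e-12
    peak = float(reference.max())
    return 10.0 * np.log10(peak ** 2 / mse)
```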

14.
We present and validate a computed tomography (CT) metal artifact reduction method that is effective for a wide spectrum of clinical implant materials. Projections through low-Z implants such as titanium were corrected using a novel physics correction algorithm that reduces beam hardening errors. In the case of high-Z implants (dental fillings, gold, platinum), projections through the implant were considered missing and regularized iterative reconstruction was performed. Both algorithms were combined if multiple implant materials were present. For comparison, a conventional projection interpolation method was implemented. In a blinded and randomized evaluation, ten radiation oncologists ranked the quality of patient scans on which the different methods were applied. For scans that included low-Z implants, the proposed method was ranked as the best method in 90% of the reviews. It was ranked superior to the original reconstruction (p = 0.0008), conventional projection interpolation (p < 0.0001) and regularized limited data reconstruction (p = 0.0002). All reviewers ranked the method first for scans with high-Z implants, and better as compared to the original reconstruction (p < 0.0001) and projection interpolation (p = 0.004). We conclude that effective reduction of CT metal artifacts can be achieved by combining algorithms tailored to specific types of implant materials.
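The conventional projection-interpolation baseline mentioned above (bridging the metal trace in each view by 1D interpolation) might look like this minimal sketch, assuming a binary metal-trace mask is already available:

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-affected detector samples by linear interpolation within each view.

    sinogram   : (n_views, n_channels) projection data
    metal_mask : boolean array of the same shape, True where metal dominates
    """
    corrected = sinogram.astype(float).copy()
    channels = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and not bad.all():
            corrected[v, bad] = np.interp(channels[bad], channels[~bad],
                                          sinogram[v, ~bad])
    return corrected
```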

15.
Reconstruction algorithms for cone-beam CT have been the focus of many studies. Several exact and approximate reconstruction algorithms were proposed for step-and-shoot and helical scanning trajectories to combat cone-beam related artefacts. In this paper, we present a new closed-form cone-beam reconstruction formula for tilted gantry data acquisition. Although several algorithms were proposed in the past to combat errors induced by the gantry tilt, none of the algorithms addresses the scenario in which the cone-beam geometry is first rebinned to a set of parallel beams prior to the filtered backprojection. We show that the image quality advantages of the rebinned parallel-beam reconstruction are significant, which makes the development of such an algorithm necessary. Because of the rebinning process, the reconstruction algorithm becomes more complex and the amount of iso-centre adjustment depends not only on the projection and tilt angles, but also on the reconstructed pixel location. In this paper, we first demonstrate the advantages of the row-wise fan-to-parallel rebinning and derive a closed-form solution for the reconstruction algorithm for the step-and-shoot and constant-pitch helical scans. The proposed algorithm requires the 'warping' of the reconstruction matrix on a view-by-view basis prior to the backprojection step. We further extend the algorithm to the variable-pitch helical scans in which the patient table travels at non-constant speeds. The algorithm was tested extensively on both the 16- and 64-slice CT scanners. The efficacy of the algorithm is clearly demonstrated by multiple experiments.

16.
A computer-aided detection (CAD) system for the selection of lung nodules in computed tomography (CT) images is presented. The system is based on region growing (RG) algorithms and a new active contour model (ACM), implementing a local convex hull, able to draw the correct contour of the lung parenchyma and to include the pleural nodules. The CAD consists of three steps: (1) the lung parenchymal volume is segmented by means of a RG algorithm; the pleural nodules are included through the new ACM technique; (2) a RG algorithm is iteratively applied to the previously segmented volume in order to detect the candidate nodules; (3) a double-threshold cut and a neural network are applied to reduce the false positives (FPs). After having set the parameters on a clinical CT, the system works on whole scans, without the need for any manual selection. The CT database was recorded at the Pisa center of the ITALUNG-CT trial, the first Italian randomized controlled trial for the screening of lung cancer. The detection rate of the system is 88.5% with 6.6 FPs/CT on 15 CT scans (about 4700 sectional images) with 26 nodules: 15 internal and 11 pleural. A reduction to 2.47 FPs/CT is achieved at 80% efficiency.
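A bare-bones seeded region-growing routine of the kind used in steps (1) and (2) is sketched below; the intensity interval and 6-connectivity are illustrative, and the active contour step is not reproduced.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, low, high):
    """Grow a 6-connected region from `seed` over voxels with intensity in [low, high]."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown
```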

17.
Predicting the malignancy of solitary pulmonary nodules from computed tomography scans is a difficult and important problem in the diagnosis of lung cancer. This paper investigates the contribution of nodule characteristics to the prediction of malignancy. Using data from the Lung Image Database Consortium (LIDC) database, we propose a weighted rule-based classification approach for predicting the malignancy of pulmonary nodules. The LIDC database contains CT scans of nodules and information about nodule characteristics evaluated by multiple annotators. In the first step of our method, votes for nodule characteristics are obtained from ensemble classifiers by using image features. In the second step, votes and rules obtained from radiologist evaluations are used by a weighted rule-based method to predict malignancy. The rule-based method is constructed by using radiologist evaluations on previous cases. Correlations between malignancy and other nodule characteristics and the agreement ratio of the radiologists are considered in rule evaluation. To handle the unbalanced nature of the LIDC data, ensemble classifiers and data balancing methods are used. The proposed approach is compared with classification methods trained on image features. Classification accuracy, specificity and sensitivity of the classifiers are measured. The experimental results show that using nodule characteristics for malignancy prediction can improve classification results.

18.
A computer-aided diagnosis (CAD) scheme is being developed to identify image regions considered suspicious for lung nodules in chest radiographs to assist radiologists in making correct diagnoses. Automated classifiers (an artificial neural network, discriminant analysis, and a rule-based scheme) are used to reduce the number of false-positive detections of the CAD scheme. The CAD scheme first detects nodule candidates from chest radiographs based on a difference-image technique. Nine image features characterizing nodules are extracted automatically for each of the nodule candidates. The extracted image features are then used as input data to the classifiers for distinguishing actual nodules from the false-positive detections. The performances of the classifiers are evaluated by receiver operating characteristic analysis. On the basis of a database of 30 normal and 30 abnormal chest images, the neural network achieves an Az value (area under the receiver operating characteristic curve) of 0.79 in detecting lung nodules, as tested by the round-robin method. The neural network, after being trained with a training database, is able to eliminate more than 83% of the false-positive detections reported by the CAD scheme. Moreover, the combination of the trained neural network and a rule-based scheme eliminates 96% of the false-positive detections of the CAD scheme.

19.
We are developing a computer-aided diagnosis (CAD) system for lung nodule detection on thoracic helical computed tomography (CT) images. In the first stage of this CAD system, lung regions are identified by a k-means clustering technique. Each lung slice is classified as belonging to the upper, middle, or lower part of the lung volume. Within each lung region, structures are segmented again using weighted k-means clustering. These structures may include true lung nodules and normal structures consisting mainly of blood vessels. Rule-based classifiers are designed to distinguish nodules and normal structures using 2D and 3D features. After rule-based classification, linear discriminant analysis (LDA) is used to further reduce the number of false positive (FP) objects. We performed a preliminary study using 1454 CT slices from 34 patients with 63 lung nodules. When only LDA classification was applied to the segmented objects, the sensitivity was 84% (53/63) with 5.48 (7961/1454) FP objects per slice. When rule-based classification was used before LDA, the free response receiver operating characteristic (FROC) curve improved over the entire sensitivity and specificity ranges of interest. In particular, the FP rate decreased to 1.74 (2530/1454) objects per slice at the same sensitivity. Thus, compared to FP reduction with LDA alone, the inclusion of rule-based classification led to an improvement in detection accuracy for the CAD system. These preliminary results demonstrate the feasibility of our approach to lung nodule detection and FP reduction on CT images.
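The first-stage idea (k-means clustering of voxel intensities and keeping the low-attenuation, interior air cluster as lung) can be illustrated roughly as below; the two-cluster choice and the border-removal post-processing are simplifications, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def kmeans_lung_mask(ct_slice):
    """Two-class k-means on HU values; keep the low-attenuation (air/lung) cluster."""
    hu = ct_slice.astype(float).reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(hu)
    lung_label = int(np.argmin(km.cluster_centers_.ravel()))
    mask = (km.labels_ == lung_label).reshape(ct_slice.shape)
    # Keep interior air only: drop components touching the image border (room air).
    labeled, _ = ndimage.label(mask)
    border_labels = np.unique(np.concatenate([labeled[0], labeled[-1],
                                              labeled[:, 0], labeled[:, -1]]))
    mask &= ~np.isin(labeled, border_labels[border_labels > 0])
    return mask
```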

20.
Traditional lung segmentation algorithms do not handle well the situations in which most juxtapleural nodules are excluded (treated as fat) and the lung is not segmented completely. In this paper, several methods are combined, including optimal iterative thresholding, three-dimensional connectivity labeling, and three-dimensional region growing for the initial segmentation of the lung parenchyma, followed by improved chain-code and Bresenham algorithms to repair the lung parenchyma. The paper thus proposes a fully automatic method for lung parenchyma segmentation and repair. Ninety-seven lung nodule thoracic computed tomography scans and 25 juxtapleural nodule scans are used to test the proposed method and compare it with the most-cited rolling-ball method. Experimental results show that the algorithm can segment the lung parenchyma region automatically and accurately. The sensitivity of juxtapleural nodule inclusion is 100%, the segmentation accuracy of juxtapleural nodule regions is 98.6%, the segmentation accuracy of the lung parenchyma is more than 95.2%, and the average segmentation time is 0.67 s/frame. The algorithm achieves good results for lung parenchyma segmentation and repair in various cases in which nodules or tumors adhere to the lung wall.
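The "optimal iterative threshold" step is commonly implemented as the classic Ridler-Calvard iteration, shown here as a hedged sketch; whether the paper uses exactly this variant is an assumption.

```python
import numpy as np

def iterative_optimal_threshold(image, tol=0.5, max_iter=100):
    """Ridler-Calvard style iteration: threshold = mean of the two class means."""
    t = float(image.mean())
    for _ in range(max_iter):
        lower = image[image <= t]
        upper = image[image > t]
        if lower.size == 0 or upper.size == 0:
            break
        t_new = 0.5 * (lower.mean() + upper.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```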
