Similar Documents
20 similar documents found.
1.
One of the major challenges in computer-aided detection (CAD) of polyps in CT colonography (CTC) is the reduction of false-positive detections (FPs) without a concomitant reduction in sensitivity. A large number of FPs is likely to confound the radiologist's task of image interpretation, lower the radiologist's efficiency, and cause radiologists to lose their confidence in CAD as a useful tool. Major sources of FPs generated by CAD schemes include haustral folds, residual stool, rectal tubes, the ileocecal valve, and extra-colonic structures such as the small bowel and stomach. Our purpose in this study was to develop a method for the removal of various types of FPs in CAD of polyps while maintaining a high sensitivity. To achieve this, we developed a "mixture of expert" three-dimensional (3D) massive-training artificial neural networks (MTANNs) consisting of four 3D MTANNs that were designed to differentiate between polyps and four categories of FPs: (1) rectal tubes, (2) stool with bubbles, (3) colonic walls with haustral folds, and (4) solid stool. Each expert 3D MTANN was trained with examples from a specific non-polyp category along with typical polyps. The four expert 3D MTANNs were combined with a mixing artificial neural network (ANN) such that different types of FPs could be removed. Our database consisted of 146 CTC datasets obtained from 73 patients whose colons were prepared by standard pre-colonoscopy cleansing. Each patient was scanned in both supine and prone positions. Radiologists established the locations of polyps through the use of optical-colonoscopy reports. Fifteen patients had 28 polyps, 15 of which were 5-9 mm and 13 were 10-25 mm in size. The CTC cases were subjected to our previously reported CAD method consisting of centerline-based extraction of the colon, shape-based detection of polyp candidates, and a Bayesian-ANN-based classification of polyps. The original CAD method yielded 96.4% (27/28) by-polyp sensitivity with an average of 3.1 (224/73) FPs per patient. The mixture of expert 3D MTANNs removed 63% (142/224) of the FPs without the loss of any true positive; thus, the FP rate of our CAD scheme was improved to 1.1 (82/73) FPs per patient while the original sensitivity was maintained. By use of the mixture of expert 3D MTANNs, the specificity of a CAD scheme for detection of polyps in CTC was substantially improved while a high sensitivity was maintained.
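As an illustration of the decision logic described above, here is a minimal Python sketch of the mixture-of-experts step: each expert 3D MTANN produces an output subvolume that is reduced to a score by 3D Gaussian weighting, and a small logistic "mixing" layer combines the four scores into one polyp likelihood. The function names, volume shape, sigma, weights, and threshold are all illustrative assumptions, not the trained values from the paper.

```python
import numpy as np

def gaussian_weighted_score(output_volume, sigma=4.0):
    """Score one candidate: weight the expert MTANN's output volume with an
    isotropic 3D Gaussian centered on the candidate (sigma is a guess)."""
    z, y, x = np.indices(output_volume.shape)
    c = (np.array(output_volume.shape) - 1) / 2.0
    w = np.exp(-((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2) / (2 * sigma**2))
    return float((w * output_volume).sum() / w.sum())

def mixing_ann(expert_scores, weights, bias):
    """Tiny logistic 'mixing ANN' combining the four expert scores into a
    single polyp likelihood; weights/bias would come from training."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, expert_scores) + bias)))

# Hypothetical usage: outputs of the four expert MTANNs for one candidate.
experts = [np.random.rand(15, 15, 15) for _ in range(4)]  # stand-in outputs
scores = [gaussian_weighted_score(v) for v in experts]
is_polyp = mixing_ann(scores, weights=np.ones(4), bias=-2.0) > 0.5
```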

2.
PURPOSE: To eliminate false-positive (FP) polyp detections on the rectal tube (RT) in CT colonography (CTC) computer-aided detection (CAD). METHODS: We use a three-stage approach to detect the RT: detect the RT shaft, track the tube to the tip, and label all the voxels that belong to the RT. We applied our RT detection algorithm to a CTC database consisting of 80 datasets (40 patients scanned in both prone and supine positions). Two different types of RTs were present, characterized by differences in shaft/bulb diameters, wall intensities, and shape of tip. RESULTS: The algorithm detected 90% of RT shafts and completely tracked 72% of them. We labeled all the voxels belonging to the completely tracked RTs (72%), and in 11 out of 80 (14%) cases the RT voxels were partially labeled. We obtained a 9.2% reduction of the FPs in the initial polyp candidate population, and a 7.9% reduction of the FPs generated by our CAD system. None of the true-positive detections were mislabeled. CONCLUSIONS: The algorithm detects the RTs with good accuracy, is robust with respect to the two different types of RT used in our study, and is effective at reducing the number of RT FPs reported by our CAD system.
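The three-stage structure (detect shaft, track to tip, label voxels) can be sketched in Python with SciPy; the thresholds, the seed, and the simple connected-component "tracking" below are stand-ins for the paper's more elaborate detectors:

```python
import numpy as np
from scipy import ndimage

def detect_rt_shaft(volume, wall_hu=200):
    """Stage 1 (sketch): flag bright tube-wall voxels by thresholding;
    the paper's shaft detector is more sophisticated than this."""
    return volume > wall_hu

def track_to_tip(shaft_mask, seed):
    """Stage 2 (sketch): keep the connected component containing the seed,
    standing in for tracking the tube from shaft to tip."""
    labels, _ = ndimage.label(shaft_mask)
    return labels == labels[seed]

def suppress_rt_candidates(candidates, rt_mask, dilate_iters=2):
    """Stage 3 (sketch): dilate the labeled RT voxels and drop any polyp
    candidate whose location falls inside the dilated mask."""
    grown = ndimage.binary_dilation(rt_mask, iterations=dilate_iters)
    return [c for c in candidates if not grown[c]]

# Hypothetical usage on a toy volume; the seed and HU threshold are guesses.
vol = np.zeros((40, 40, 40)); vol[5:35, 20, 20] = 300  # fake tube
rt = track_to_tip(detect_rt_shaft(vol), seed=(10, 20, 20))
kept = suppress_rt_candidates([(10, 20, 20), (30, 5, 5)], rt)
```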

3.
The prevalence of colon cancer has created strong demand for colorectal neoplasia screening, drawing considerable attention to technological advances in Computed Tomographic Colonography (CTC). With the assistance of an oral contrast agent, an imaging technique known as Electronic Cleansing (EC) can effect virtual cleansing of the computed tomography (CT) images, removing fecal material that is tagged by the agent. Technical problems can arise with electronic cleansing, however, when the air lumen causes distortions to the tagged regions, resulting in partial volume effects. By combining the simple image arithmetic of an electronic cleansing algorithm with a vertical motion filter at the fluid level of the bowel, artifacts such as those caused by an air lumen are eliminated. Essentially, the filter becomes a vector that carries the measurement of vertical motion to neutralise the artifact causing the partial volume effects. Results demonstrate that despite its simplicity, this technique offers accuracy and successfully maintains the normal intra-colonic structure, while supporting digital cleansing of tagged residual material appearing on the colon wall.
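A minimal sketch of the cleansing-plus-vertical-filter idea, assuming z is the vertical axis and using guessed HU thresholds and a median filter as the vertical smoothing step; the paper's actual filter may differ:

```python
import numpy as np
from scipy import ndimage

def electronic_cleanse(volume, tag_hu=300, depth=3):
    """Minimal EC sketch: set contrast-tagged voxels to air, then apply a
    vertical (z-axis) median filter at the air/fluid boundary to suppress
    partial-volume artifacts. Thresholds and the z-is-vertical orientation
    are illustrative assumptions, not the paper's exact algorithm."""
    tagged = volume > tag_hu
    cleansed = volume.copy()
    cleansed[tagged] = -1000.0  # replace tagged residue with air HU

    # Boundary (sketch): tagged voxels that have air a few voxels above them.
    air_above = np.roll(volume < -800, depth, axis=0)
    boundary = tagged & air_above
    vertical = ndimage.median_filter(cleansed, size=(2 * depth + 1, 1, 1))
    cleansed[boundary] = vertical[boundary]
    return cleansed

# Hypothetical usage on a toy HU volume with a fluid pool at the bottom.
vol = np.full((30, 30, 30), -1000.0); vol[20:, :, :] = 400.0
print(electronic_cleanse(vol).mean())
```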

4.
One of the limitations of the current computer-aided detection (CAD) of polyps in CT colonography (CTC) is a relatively large number of false-positive (FP) detections. Rectal tubes (RTs) are one of the typical sources of FPs because a portion of an RT, especially a portion of a bulbous tip, often exhibits a cap-like shape that closely mimics the appearance of a small polyp. Radiologists can easily recognize and dismiss RT-induced FPs; thus, they may lose their confidence in CAD as an effective tool if the CAD scheme generates such "obvious" FPs due to RTs consistently. In addition, RT-induced FPs may distract radiologists from less common true positives in the rectum. Therefore, removal of RT-induced FPs as well as other types of FPs is desirable while maintaining a high sensitivity in the detection of polyps. We developed a three-dimensional (3D) massive-training artificial neural network (MTANN) for distinction between polyps and RTs in 3D CTC volumetric data. The 3D MTANN is a supervised volume-processing technique which is trained with input CTC volumes and the corresponding "teaching" volumes. The teaching volume for a polyp contains a 3D Gaussian distribution, and that for an RT contains zeros, for enhancement of polyps and suppression of RTs, respectively. For distinction between polyps and nonpolyps including RTs, a 3D scoring method based on a 3D Gaussian weighting function is applied to the output of the trained 3D MTANN. Our database consisted of CTC examinations of 73 patients, scanned in both supine and prone positions (146 CTC data sets in total), with optical colonoscopy as a reference standard for the presence of polyps. Fifteen patients had 28 polyps, 15 of which were 5-9 mm and 13 were 10-25 mm in size. These CTC cases were subjected to our previously reported CAD scheme that included centerline-based segmentation of the colon, shape-based detection of polyps, and reduction of FPs by use of a Bayesian neural network based on geometric and texture features. Application of this CAD scheme yielded 96.4% (27/28) by-polyp sensitivity with 3.1 (224/73) FPs per patient, among which 20 FPs were caused by RTs. To eliminate the FPs due to RTs and possibly other normal structures, we trained a 3D MTANN with ten representative polyps and ten RTs, and applied the trained 3D MTANN to the above CAD true- and false-positive detections. In the output volumes of the 3D MTANN, polyps were represented by distributions of bright voxels, whereas RTs and other normal structures partly similar to RTs appeared as darker voxels, indicating the ability of the 3D MTANN to suppress RTs as well as other normal structures effectively. Application of the 3D MTANN to the CAD detections showed that the 3D MTANN eliminated all 20 RT-induced FPs, as well as 53 FPs due to other causes, without removal of any true positives. Overall, the 3D MTANN was able to reduce the FP rate of the CAD scheme from 3.1 to 2.1 FPs per patient (33% reduction), while the original by-polyp sensitivity of 96.4% was maintained.
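The teaching-volume design is concrete enough to sketch directly: a 3D Gaussian centered in the subvolume for a polyp, all zeros for an RT. The shape and sigma below are guesses:

```python
import numpy as np

def teaching_volume(shape=(15, 15, 15), is_polyp=True, sigma=3.0):
    """Sketch of the MTANN 'teacher': a 3D Gaussian centered in the volume
    for a polyp, an all-zero volume for a rectal tube (shape/sigma guessed)."""
    if not is_polyp:
        return np.zeros(shape)
    z, y, x = np.indices(shape)
    c = (np.array(shape) - 1) / 2.0
    r2 = (z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2
    return np.exp(-r2 / (2 * sigma**2))

# Hypothetical training pairs: each input subvolume gets its teacher volume.
polyp_teacher = teaching_volume(is_polyp=True)
rt_teacher = teaching_volume(is_polyp=False)
```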

5.
6.
Suzuki K, Armato SG, Li F, Sone S, Doi K. Medical Physics 2003;30(7):1602-1617
In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the "likelihood of being a nodule." The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of multiple MTANNs arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).
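The logical-AND combination rule can be written in a few lines; the per-expert thresholds (here a uniform placeholder) would in practice be tuned so that no training nodule is eliminated:

```python
import numpy as np

def multi_mtann_and(expert_scores, thresholds):
    """Sketch of the Multi-MTANN decision: each expert has its own score
    threshold chosen so it removes no training nodules; a candidate survives
    as a nodule only if EVERY expert accepts it (logical AND)."""
    return all(s >= t for s, t in zip(expert_scores, thresholds))

# Hypothetical usage: nine expert scores for one candidate.
scores = np.random.rand(9)
keep = multi_mtann_and(scores, thresholds=[0.3] * 9)  # placeholder thresholds
```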

7.
There is rapid expansion of newborn screening throughout the United States. A recent report from the American College of Medical Genetics has recommended a mechanism by which decisions are made about adding tests to the screening panel, and a core panel of 29 conditions has been recommended. Implementing such a program is a major undertaking in every state and involves not only the laboratory but also a number of diagnostic and follow-up services. It is essential to have laboratory programs in place to minimize false positive screening tests and at the same time communicate in the most effective manner possible about the nature of false positives.

8.

Purpose

The purpose of this study is to assess the performance of computer-aided detection (CAD) software in detecting and measuring polyps for CT Colonography, based on an in vitro phantom study.

Material and methods

A colon phantom was constructed from a PVC pipe of 3.8 cm diameter. Nine simulated polyps of various sizes (3.2-25.4 mm) were affixed inside the phantom, which was placed in a water bath. The phantom was scanned on a 64-slice CT scanner with a tube voltage of 120 kV and a current of 205 mAs. Two separate scans were performed with different slice thicknesses and reconstruction intervals. The first scan (thin) had a slice thickness of 1 mm and a reconstruction interval of 0.5 mm. The second scan (thick) had a slice thickness of 2 mm and a reconstruction interval of 1 mm. Images from both scans were processed using CT Colonography software that automatically segments the colon phantom and applies CAD that automatically highlights and provides the size (maximum and minimum diameters, volume) of each polyp. Two readers independently measured each polyp (two orthogonal diameters) using both 2D and 3D views. Readers' manual measurements (diameters) and automatic measurements from CAD (diameters and volume) were compared to actual polyp sizes as measured by mechanical calipers.

Results

All polyps except the smallest (3.2 mm) were detected by CAD. CAD achieved 100% sensitivity in detecting polyps ≥6 mm. Mean errors in CAD automated volume measurements for the thin and thick slice scans were 8.7% and 6.8%, respectively. Almost all CAD and manual readers' 3D measurements overestimated the size of polyps to a variable extent. Both over- and underestimation of polyp sizes were observed in the readers' manual 2D measurements. Overall, Reader 1 (expert) had a smaller mean error than Reader 2 (non-expert).

Conclusion

CAD provided accurate size measurements for all polyps, and the results were comparable to the two readers' manual measurements.
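One plausible way to compute the mean percent error reported in the Results, with placeholder numbers rather than the study's data:

```python
import numpy as np

def mean_percent_error(measured_mm, truth_mm):
    """Mean absolute percent error of size estimates against the caliper
    reference -- one plausible way to summarize the phantom measurements."""
    m = np.asarray(measured_mm, dtype=float)
    t = np.asarray(truth_mm, dtype=float)
    return float(np.mean(np.abs(m - t) / t) * 100.0)

# Placeholder numbers only, not the study's data:
print(mean_percent_error([6.4, 10.1, 24.0], [6.0, 9.8, 25.4]))
```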

9.
Using images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), we developed a methodology for classifying lung nodules. The proposed methodology uses image processing and pattern recognition techniques. To classify volumes of interest into nodules and non-nodules, we used shape measurements only, analyzing their shape using shape diagrams, proportion measurements, and a cylinder-based analysis. In addition, we used the support vector machine classifier. To test the proposed methodology, it was applied to 833 images from the LIDC-IDRI database, and k-fold cross-validation with k = 5 was used to validate the results. The proposed methodology for the classification of nodules and non-nodules achieved a mean accuracy of 95.33%. Lung cancer causes more deaths than any other cancer worldwide; early detection therefore allows for faster therapeutic intervention and a more favorable prognosis for the patient. Our proposed methodology contributes to the classification of lung nodules and should help in the diagnosis of lung cancer.
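The evaluation protocol (shape features, SVM, 5-fold cross-validation) maps directly onto scikit-learn; the feature matrix below is random stand-in data, not the LIDC-IDRI features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in data: 833 candidates with 12 hypothetical shape features each
# (the paper's shape-diagram/cylinder features are not reproduced here).
rng = np.random.default_rng(0)
X = rng.normal(size=(833, 12))
y = rng.integers(0, 2, size=833)  # 1 = nodule, 0 = non-nodule

# Shape features -> scaling -> SVM, evaluated with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean())  # near 0.5 on random data; ~0.95 reported on real features
```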

10.
11.
Several studies have found evidence for more positive selection on the chimpanzee lineage compared with the human lineage since the two species split. A potential concern, however, is that these findings may simply reflect artifacts of the data: inaccuracies in the underlying chimpanzee genome sequence, which is of lower quality than human. To test this hypothesis, we generated de novo genome assemblies of chimpanzee and macaque and aligned them with human. We also implemented a novel bioinformatic procedure for producing alignments of closely related species that uses synteny information to remove misassembled and misaligned regions, and sequence quality scores to remove nucleotides that are less reliable. We applied this procedure to re-examine 59 genes recently identified as candidates for positive selection in chimpanzees. The great majority of these signals disappear after application of our new bioinformatic procedure. We also carried out laboratory-based resequencing of 10 of the regions in multiple chimpanzees and humans, and found that our alignments were correct wherever there was a conflict with the published results. These findings throw into question previous findings that there has been more positive selection in chimpanzees than in humans since the two species diverged. Our study also highlights the challenges of searching the extreme tails of distributions for signals of natural selection. Inaccuracies in the genome sequence at even a tiny fraction of genes can produce false-positive signals, which make it difficult to identify loci that have genuinely been targets of selection.

A powerful approach for finding genes affected by positive selection is to align the coding sequences of closely related species (for example human and chimpanzee) and more distantly related out-groups (for example macaque), and to screen these alignments for loci, where on one lineage there is a much higher rate of protein coding changes than is observed on other lineages (Hughes and Nei 1988; Nielsen et al. 2005; Bakewell et al. 2007; Rhesus Macaque Genome Sequencing and Analysis Consortium 2007). This test has been formalized as the study of the ratio of the rate of nonsynonymous substitutions per site that could harbor a nonsynonymous mutation (dN), to the rate of synonymous substitutions per site that could harbor a synonymous mutation (dS). If the value of ω = dN/dS is significantly greater than 1 in specific codons or on a specific lineage, the observation is interpreted as evidence of a history of positive selection (Nielsen 2001).

The macaque genome (Rhesus Macaque Genome Sequencing and Analysis Consortium 2007) provides a valuable reference for studies comparing the human and chimpanzee genomes (The Chimpanzee Sequencing and Analysis Consortium 2005), both by making it possible to determine the lineage on which a mutation occurred and by providing a way to estimate the degree of sequence conservation at each codon averaged over primate evolutionary history. Two recent analyses have scanned the genome to identify lists of putative positively selected genes (PSGs) in which there is statistically significant evidence of an acceleration in the rate of amino acid changes on the human or chimpanzee lineages since the two species diverged (Bakewell et al. 2007; Rhesus Macaque Genome Sequencing and Analysis Consortium 2007). Intriguingly, of the genes that met thresholds for being PSGs in human or chimpanzee, but not both, the studies found a significant excess on the chimpanzee side.
For example, 59 of 61 genes in the study by Bakewell et al. (2007) that met a false discovery rate (FDR) threshold of <5% showed evidence of positive selection in chimpanzees; we call this set of genes "test set 1." Similarly, 13 of the 14 genes in a second analysis that met a P-value threshold of <0.001 showed evidence of positive selection in chimpanzees (Rhesus Macaque Genome Sequencing and Analysis Consortium 2007); we call this set of genes "test set 2." (Of concern, however, the lists of the most significant chimpanzee PSGs in the two studies did not overlap.) A third study aligned human, chimpanzee, mouse, rat, and dog genes and also found evidence for accelerated positive selection in chimpanzees (Arbiza et al. 2006).

A potential concern for dN/dS-based tests for positive selection, when applied on a genome-wide scale, is that they can be confounded by a small error rate in the data. Even if a great majority of bases are correctly determined, if there are a handful affected by errors, and especially if these errors are clustered within particular codons, a statistical signal can be generated that will cause these genes to artifactually appear as PSGs. A genome scan examines many thousands of genes, so that even if the overall error rate is low (<<1%), enough genes with false clusters of mutations could be observed to make it difficult to distinguish true signals. The concern is particularly acute for a comparison of human and chimpanzee. Due to the lower quality of the chimpanzee than that of the human genome sequence, more false-positive mutations are expected in the chimpanzee. The errors in the chimpanzee sequence can produce an artifactual signal of accelerated evolution on the chimpanzee lineage if they appear to reflect multiple nonsynonymous changes specific to the chimpanzee lineage. Moreover, such artifactual signals can be statistically significant in light of the low average divergence between these closely related species. This could provide a trivial explanation for the signal of accelerated chimpanzee evolution that has been suggested by several recent studies (Arbiza et al. 2006; Bakewell et al. 2007; Rhesus Macaque Genome Sequencing and Analysis Consortium 2007).

The two analyses that compared human, chimpanzee, and macaque genes applied multiple filters to increase the quality of their alignments and to minimize errors. Bakewell et al. (2007) (who primarily analyzed the 4× chimpanzee assembly; panTro1) repeated their analyses in data sets in which they only analyzed nucleotides with chimpanzee sequence quality scores of at least Q0, Q10, and Q20 (corresponding to estimated error rates of <1, <0.1, and <0.01 per base pair) (Ewing et al. 1998). They found that the dN/dS ratio averaged across the genome achieved an asymptote with the most stringent of these filters. However, this method for assessing the efficacy of quality filtering may not be sufficient, as false-positive signals are expected to arise from the extreme tail of the statistical distribution, and genome averages are not very sensitive to the behavior of the extreme tail. Quality score filtering also cannot eliminate errors arising from misassembly of the chimpanzee genome or inaccuracies in multiple sequence alignment. The Rhesus Macaque Genome Sequencing and Analysis Consortium (2007) applied a different set of filters to their alignments using the more complete 6× chimpanzee assembly (panTro2). The most novel of these filters were synteny and frame-shift filters. The latter filter prohibited insertion/deletion changes (indels) that produced a frame shift in the alignment that was not compensated within 15 bases.

Here we reanalyzed genes that were highlighted as positively selected in chimpanzees in both test set 1 and test set 2 (see Methods). We implemented a bioinformatics procedure (Fig. 1) whose goal was to generate aligned bases of high reliability, even at the expense of a loss of some exon coverage. The procedure had three steps:
(1) We used the ARACHNE genome assembler to generate a de novo genome assembly of chimpanzee, corresponding to about 7× coverage of the genome since it used approximately the same raw data as the panTro2 6× assembly, but also included an additional ∼7 million sequencing reads that became available in public databases after the preparation of that assembly. We also generated a de novo assembly of macaque, which included about 6× coverage and corresponded to approximately the same raw data as the rheMac2 assembly (Jaffe et al. 2003; S. Gnerre, E. Lander, K. Lindblad-Toh, and D. Jaffe, in prep.). We modified ARACHNE so that we did not automatically set heterozygous sites within the sequenced genomes (single nucleotide polymorphisms [SNPs]) to be of low quality, as is done in many current assemblies including chimpanzee. Instead, if we could identify a SNP with confidence, we picked one of the bases and allowed its score to be high (see Methods). A particular benefit of our bioinformatic procedure was that the genome assembly for each species was compared with human in a way that generated a syntenic map between that species and human, reducing the rate of misalignment.

(2) We generated alignments of each of the genomes with human, breaking long alignments into a series of small alignment problems that can be more reliably processed using conventional aligners (we used ClustalW, version 1.83; Larkin et al. 2007). The position of each of the smaller alignments was guided by the synteny map built during our reassembly of chimpanzee and macaque. This acted as a filter to prevent possible alignments to paralogous regions, in contrast to the more common reciprocal BLAST approach (Nembaware et al. 2002). There was an advantage in using our own assemblies, as it allowed us to customize the generation of consensus sequence in each species.

(3) We applied a series of filters to remove problematic regions. These included short alignments (<100 base pairs [bp]), regions near the ends of alignments, and regions near insertion/deletion polymorphisms (Methods). Alignments of genes could then be obtained by stripping out introns. We identified divergent sites only at nucleotides that passed a set of aggressive base quality filters. We required the quality score of every nucleotide used in analysis to be at least Q30, all bases within five nucleotides to have a quality score of at least Q20, and no base to be in a hypermutable CpG dinucleotide.
Figure 1. Alignment pipeline. Flowchart of our bioinformatic procedure for generating multiple sequence alignments. For non-human species in step 1, publicly available traces are turned into genome assemblies using ARACHNE (Jaffe et al. 2003). This allows us to construct a synteny map and to use assembly information to guide the positioning of the non-human sequence on the reference (human) genome. In step 2, pairwise alignments of non-human sequence with its human counterpart are constructed using synteny information and information on the uniqueness of the alignment to filter out spurious alignments and regions of duplication. BLASTZ (Schwartz et al. 2003) is used to generate local alignments that are then combined to create a nonoverlapping pairwise alignment, allowing for the possibility of local inversions. The human genome is scanned to determine regions that have alignments to all the non-human species. Multiple sequence alignments are constructed using ClustalW (Larkin et al. 2007). In step 3, alignments are scanned to determine divergent sites, after which aggressive filters are applied (see Methods).

We applied this procedure to 49 of the chimpanzee PSGs from test set 1 and 10 of the chimpanzee PSGs from test set 2, corresponding to all the genes for which we obtained enough coverage in our alignments (after filtering) to permit useful comparison. If these genes genuinely reflect accelerated evolution on the chimpanzee lineage since the split from humans, we would expect to confirm a signal of accelerated evolution in chimpanzees at these genes by "branch-site" tests of evolution similar to the tests that the authors applied. We only replicated 1 of the 49 signals of accelerated evolution on the chimpanzee lineage that we were able to reanalyze from test set 1, and 5 of the 10 signals of accelerated evolution that we were able to reanalyze from test set 2. We also experimentally resequenced 10 of the regions where previous analyses had reported a signal of selection, while our reanalysis had not, and confirmed that our alignments were correct wherever a direct comparison could be made.
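A minimal Python sketch of the step-3 base-quality filters (site at least Q30, all bases within five nucleotides at least Q20, no CpG dinucleotide). The single-sequence CpG check and the function signature are simplifications; the actual pipeline evaluates multi-species alignments:

```python
def passes_quality_filters(quals, seq, i, window=5):
    """Sketch of the step-3 filters: the site must be >= Q30, every base
    within `window` nucleotides >= Q20, and the site must not lie in a CpG
    dinucleotide (checked here on a single sequence -- a simplification)."""
    if quals[i] < 30:
        return False
    lo, hi = max(0, i - window), min(len(quals), i + window + 1)
    if any(q < 20 for q in quals[lo:hi]):
        return False
    in_cpg = seq[i:i + 2].upper() == "CG" or (i > 0 and seq[i - 1:i + 1].upper() == "CG")
    return not in_cpg

# Hypothetical usage: a divergent site at position 10 of a toy read.
print(passes_quality_filters([40] * 20, "ACGTACGTACGTACGTACGT", 10))  # False: CpG site
```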

12.
A method is described which can be used to determine whether a series of PCR reactions carried out in a microtitre plate are inherently unlikely to have occurred by chance, and hence to show 'false' results. The method is an extension of the familiar 'runs test' which can be used for tube-based PCRs. A Monte Carlo simulation program is discussed which can be used to generate expected probability distributions for either symmetric or asymmetric plate designs. In addition, systematic departures from a random pattern due to 'edge effects' can be detected.
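A Monte Carlo version of the extended runs test can be sketched as follows; the row-wise run statistic and the one-sided direction (too few runs indicating clustering) are simplifying assumptions about the plate design:

```python
import numpy as np

def count_runs(row):
    """Number of runs (maximal blocks of equal results) in one plate row."""
    return 1 + int(np.sum(row[1:] != row[:-1]))

def runs_pvalue(plate, n_sim=10000, seed=0):
    """Monte Carlo sketch: permute the observed +/- results over the plate
    and compare the total row-wise run count against the null distribution."""
    rng = np.random.default_rng(seed)
    observed = sum(count_runs(r) for r in plate)
    flat = plate.ravel()
    stats = np.array([sum(count_runs(r) for r in
                          rng.permutation(flat).reshape(plate.shape))
                      for _ in range(n_sim)])
    return float(np.mean(stats <= observed))  # one-sided: too few runs

# Hypothetical 8x12 plate of PCR results (1 = positive, 0 = negative).
plate = np.random.default_rng(1).integers(0, 2, size=(8, 12))
print(runs_pvalue(plate))
```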

13.
Li P, Napel S, Acar B, Paik DS, Jeffrey RB, Beaulieu CF. Medical Physics 2004;31(10):2912-2923
Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.
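Once both centerline paths are registered, matching candidates by their normalized position along the path is the natural next step; this greedy sketch with a guessed tolerance is a stand-in for the paper's statistical matching:

```python
import numpy as np

def normalized_path_position(polyp_dist_mm, path_length_mm):
    """Fraction of the colon centerline traversed at the candidate --
    a crude stand-in for the paper's registered path coordinate."""
    return polyp_dist_mm / path_length_mm

def match_candidates(supine_pos, prone_pos, tol=0.03):
    """Greedy sketch: pair supine/prone candidates whose normalized
    centerline positions differ by less than `tol` (guessed tolerance)."""
    pairs, used = [], set()
    for i, s in enumerate(supine_pos):
        j = int(np.argmin([abs(s - p) for p in prone_pos]))
        if j not in used and abs(s - prone_pos[j]) < tol:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Hypothetical normalized positions: the third supine hit finds no match.
print(match_candidates([0.12, 0.55, 0.90], [0.13, 0.57, 0.40]))
```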

14.
Wu YT, Wei J, Hadjiiski LM, Sahiner B, Zhou C, Ge J, Shi J, Zhang Y, Chan HP. Medical Physics 2007;34(8):3334-3344
We have developed a false positive (FP) reduction method based on analysis of bilateral mammograms for computerized mass detection systems. The mass candidates on each view were first detected by our unilateral computer-aided detection (CAD) system. For each detected object, a regional registration technique was used to define a region of interest (ROI) that is "symmetrical" to the object location on the contralateral mammogram. Texture features derived from the spatial gray level dependence matrices and morphological features were extracted from the ROI containing the detected object on a mammogram and its corresponding ROI on the contralateral mammogram. Bilateral features were then generated from corresponding pairs of unilateral features for each object. Two linear discriminant analysis (LDA) classifiers were trained from the unilateral and the bilateral feature spaces, respectively. Finally, the scores from the unilateral LDA classifier and the bilateral LDA asymmetry classifier were fused with a third LDA whose output score was used to distinguish true masses from FPs. A data set of 341 cases of bilateral two-view mammograms was used in this study, of which 276 cases with 552 bilateral pairs contained 110 malignant and 166 benign biopsy-proven masses and 65 cases with 130 bilateral pairs were normal. The mass data set was divided into two subsets for twofold cross-validation training and testing. The normal data set was used for estimation of FP rates. It was found that our bilateral CAD system achieved a case-based sensitivity of 70%, 80%, and 85% at average FP rates of 0.35, 0.75, and 0.95 FPs/image, respectively, on the test data sets with malignant masses. In comparison to the average FP rates for the unilateral CAD system of 0.58, 1.33, and 1.63, respectively, at the corresponding sensitivities, the FP rates were reduced by 40%, 44%, and 42% with the bilateral symmetry information. The improvement was statistically significant (p < 0.05) as estimated by JAFROC analysis.
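The three-classifier fusion maps onto scikit-learn's LDA; for brevity this sketch trains and scores on the same stand-in data, whereas the paper used twofold cross-validation:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Sketch of the three-classifier fusion: one LDA on unilateral features,
# one on bilateral (asymmetry) features, and a third LDA on the two scores.
rng = np.random.default_rng(0)
X_uni = rng.normal(size=(552, 10))  # stand-in unilateral features
X_bil = rng.normal(size=(552, 10))  # stand-in bilateral features
y = rng.integers(0, 2, size=552)    # 1 = true mass, 0 = FP (stand-in labels)

lda_uni = LDA().fit(X_uni, y)
lda_bil = LDA().fit(X_bil, y)
scores = np.column_stack([lda_uni.decision_function(X_uni),
                          lda_bil.decision_function(X_bil)])
fusion = LDA().fit(scores, y)       # final score separates masses from FPs
final_score = fusion.decision_function(scores)
```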

15.
Improved methods for evaluation and quantification of the three-dimensional (3D) architecture of bone are needed in order to more fully understand the role of trabecular architecture in bone strength. Micro-computed tomography (microCT) is capable of examining bone at resolutions below 30 µm (isotropic), with collection of a three-dimensional data set which can then be subjected to image analysis. In this paper, we discuss automated methods for important steps in this analysis, including methods for (1) segmenting the image into bone and background; (2) defining the volume of interest for determination of structural parameters; and (3) segmenting the bone into trabecular and cortical components. Evaluation of bone structure using these techniques provides new information about the 3D architecture of bone tissue, and may be useful for evaluation of structural changes in bone caused by aging, disease, or drug treatment.
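Step (1), segmenting bone from background, might look like the following sketch using a global Otsu threshold (the paper's automated method is not specified here), followed by a simple structural parameter (bone volume fraction) inside a volume of interest:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_bone(volume):
    """Step-1 sketch: global Otsu threshold to split bone from background;
    the paper's automated segmentation may differ from this."""
    return volume > threshold_otsu(volume)

def bone_volume_fraction(bone_mask, voi_mask):
    """Simple structural parameter inside a volume of interest: BV/TV."""
    return float(bone_mask[voi_mask].mean())

# Hypothetical toy volume: a bright rod in noise, with a box-shaped VOI.
vol = np.random.normal(0, 10, (50, 50, 50))
vol[20:30, 20:30, :] += 100
voi = np.zeros(vol.shape, dtype=bool)
voi[10:40, 10:40, 10:40] = True
print(bone_volume_fraction(segment_bone(ndimage.gaussian_filter(vol, 1)), voi))
```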

16.
17.
Summary. Anti-hepatitis C virus antibody screening of blood donors in different countries revealed prevalences ranging from 0.4% to 1.4%. These results were obtained with an enzyme immunoassay based on a recombinant hepatitis C virus antigen. We applied a specific inhibition assay (neutralization assay) and a recombinant immunoblot assay to determine the specificity of positive reactions in the enzyme immunoassay.

Of 2836 blood donor sera tested, 10 (0.35%) were reactive in the enzyme immunoassay; however, only 3 sera (0.1%) proved to be specifically anti-HCV positive in the inhibition assay. The recombinant immunoblot assay gave similar results. The prevalence of anti-hepatitis C virus antibodies among blood donors has been overestimated in recent publications. Furthermore, the high rate of false positives in the enzyme immunoassay may explain reports claiming that only a minor part of EIA-positive blood units transmitted the hepatitis C virus to recipients.

The inhibition assay was also applied to sera of haemophiliacs and of patients with hepatopathy which had reacted positively in the anti-hepatitis C virus antibody enzyme immunoassay. The anti-hepatitis C virus specificity was confirmed for all sera from the haemophiliacs group (100%) and for 77% of the hepatopathy patients group. Thus, the anti-hepatitis C virus enzyme immunoassay has a high predictive value when it is used to screen groups with high risks of parenteral hepatitis C virus infection; however, its predictive value is very low when it is used for blood donor screening.

Abbreviations: EIA, enzyme immunoassay; HCV, hepatitis C virus; RIBA, recombinant immunoblot assay; SOD, superoxide dismutase.
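The prevalence dependence of predictive value follows directly from Bayes' rule; the sensitivity and specificity below are illustrative assumptions, not values from the paper:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value from Bayes' rule, illustrating why the same
    EIA performs so differently in blood donors vs. high-risk groups."""
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp)

# Illustrative numbers only (sens/spec are assumptions, not from the paper):
print(ppv(0.001, 0.95, 0.9975))  # blood donors: PPV is low (~0.28)
print(ppv(0.60, 0.95, 0.9975))   # haemophiliacs: PPV approaches 1
```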

18.
19.
The detection limitations inherent in statistically limited computed tomographic (CT) images are described through the application of signal detection theory. The detectability of large-area, low-contrast objects is shown to be chiefly dependent upon the low-frequency content of the noise power spectral density. For projection data containing uncorrelated noise, the resulting ramplike, low-frequency behavior of the noise power spectrum of CT reconstructions may be conveniently characterized by the number of noise-equivalent x-ray quanta (NEQ) detected in the projection measurements. The NEQ for a given image may be determined either from a measurement of the noise power spectrum or from the noise granularity computed with an appropriate weighting function. A measure of the efficiency of scanner dose utilization is proposed which compares the average dose to that required by an ideal scanner to obtain the same NEQ.
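If one assumes the ramp-like low-frequency behavior NPS(f) ∝ f/NEQ (the proportionality constant is omitted here, an assumption rather than the paper's exact expression), NEQ can be estimated from the slope of the measured noise power spectrum:

```python
import numpy as np

def estimate_neq(freqs, nps, f_max=0.2):
    """Crude NEQ estimate assuming the ramp model NPS(f) ~ f / NEQ at low
    frequency (proportionality constant omitted -- an assumption, not the
    paper's exact formula). Fits the low-frequency slope and inverts it."""
    sel = freqs < f_max
    slope = np.polyfit(freqs[sel], nps[sel], 1)[0]
    return 1.0 / slope

# Synthetic example: a ramp NPS generated with NEQ = 1e5 is recovered.
f = np.linspace(0.01, 1.0, 100)
nps = f / 1e5
print(estimate_neq(f, nps))  # ~1e5
```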

20.
OBJECTIVE: This study aimed to use three-dimensional reconstruction and volume rendering techniques to localize and measure the distances between important structures in the retrosigmoid approach. METHODS: Thin-slice head CT scans were performed on 120 volunteers to obtain the final results. RESULTS: The distance AC measured 39.46 (4.22) mm (range, 15.80-50.80 mm; 95% confidence interval, 38.69-40.22 mm). The diameter of the internal acoustic meatus measured 5.39 (0.77) mm (range, 3.40-8.20 mm; 95% confidence interval, 5.25-5.53 mm). The distance AB measured 41.10 (4.22) mm (range, 34.90-51.30 mm; 95% confidence interval, 39.43-42.77 mm). The distance BC measured 5.93 (1.31) mm (range, 4.10-7.50 mm; 95% confidence interval, 5.70-6.17 mm). The vertical distance measured 2.33 (0.26) mm (range, 1.87-2.80 mm; 95% confidence interval, 2.23-2.42 mm). CONCLUSIONS: Using volume rendering and more precise three-dimensional measurement tools, the distances between important anatomical structures in the retrosigmoid approach, the vertical distance, and the diameter of the internal auditory canal were measured accurately, and a safety margin was calculated from these data, allowing surgeons to avoid risk during the operation and helping to ensure surgical success. These results can aid in localizing these structures and thereby reduce intraoperative injury to nerves and vessels.
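A sketch of how such landmark distances and their summary statistics could be computed from volume-rendered CT coordinates; the landmark coordinates below are hypothetical, not the study's data:

```python
import numpy as np

def distance_mm(p, q):
    """Euclidean distance between two 3D landmark coordinates (in mm)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def summarize(distances):
    """Mean (SD) and normal-approximation 95% CI, matching the entry's
    'mean (SD) mm (range; 95% CI)' reporting style."""
    d = np.asarray(distances, dtype=float)
    m, sd = d.mean(), d.std(ddof=1)
    se = sd / np.sqrt(d.size)
    return m, sd, (m - 1.96 * se, m + 1.96 * se)

# Hypothetical landmark pairs from volume-rendered CT (not the study's data):
ac = [distance_mm((0, 0, 0), (30 + r, 20, 10)) for r in np.random.randn(10)]
print(summarize(ac))
```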
