Similar Documents
20 similar documents retrieved.
1.
Diagnostic statistics for outliers in linear regression analysis
In linear regression analysis, outlying points in the observed data can be divided into high-leverage points and outliers. Neither the studentized ordinary residual nor the studentized prediction (deleted) residual is satisfactory for detecting outliers in the data. The authors propose using the reduction in the residual sum of squares, and the partial F test derived from it, as the statistic for testing outliers. This statistic is also applicable to diagnosing outliers in nonlinear regression. The paper illustrates the statistic in detail with a worked example.
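As a rough numpy/scipy sketch of the statistic described above (not code from the paper): each observation is deleted in turn, the reduction in the residual sum of squares is converted into a partial F statistic, and the largest value flags the most suspicious point. All data and variable names below are invented for illustration.

```python
import numpy as np
from scipy import stats

def partial_f_outlier_test(X, y):
    """For each observation, delete it, refit, and convert the drop in the
    residual sum of squares into a partial F statistic on (1, n - p - 1) df."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_full = np.sum((y - X @ beta) ** 2)
    f_stats, p_values = np.empty(n), np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        rss_del = np.sum((y[keep] - X[keep] @ beta_i) ** 2)
        # drop in RSS when i is deleted, scaled by the deleted-fit error variance
        f_stats[i] = (rss_full - rss_del) / (rss_del / (n - p - 1))
        p_values[i] = stats.f.sf(f_stats[i], 1, n - p - 1)
    return f_stats, p_values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 30)
    y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 30)
    y[5] += 4.0                                   # plant one outlier
    X = np.column_stack([np.ones_like(x), x])
    f, p = partial_f_outlier_test(X, y)
    i = int(np.argmax(f))
    print(f"most suspicious observation: {i}, F = {f[i]:.2f}, p = {p[i]:.4g}")
```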

2.
Rapid and accurate diagnosis is of the utmost importance in the control of epizootic diseases such as classical swine fever (CSF), and efficacious vaccination can be used as a supporting tool. While most of the recently developed CSF vaccines and diagnostic kits have been validated according to World Organisation for Animal Health (OIE) standards, not all of the well-established traditional vaccines and diagnostic tests were subject to these validation procedures and requirements. In this report, data were compiled on performance and validation of CSF diagnostic tests and vaccines. In addition, current strategies for differentiating infected from vaccinated animals are reviewed, as is information on the control of CSF in wildlife. Evaluation data on diagnostic tests were kindly provided by National Reference Laboratories for CSF in various European countries.

3.
Using heuristics offers several cognitive advantages, such as increased speed and reduced effort when making decisions, in addition to allowing us to make decisions in situations where missing data do not allow for formal reasoning. But the traditional view of heuristics is that they trade accuracy for efficiency. Here the authors discuss sources of bias in the literature implicating the use of heuristics in diagnostic error and highlight the fact that there are also data suggesting that under certain circumstances using heuristics may lead to better decisions than formal analysis. They suggest that diagnostic error is frequently misattributed to the use of heuristics and propose an alternative view whereby content knowledge is the root cause of diagnostic performance and heuristics lie on the causal pathway between knowledge and diagnostic error or success.

4.
This paper introduces a method for processing acoustic power metrology data from medical ultrasound diagnostic equipment. The acoustic power of the same transducer on the same ultrasound device is measured repeatedly, an appropriate mathematical model is built for the measured data, and a Kalman filter algorithm is applied to remove the fluctuations in acoustic power output of the diagnostic device and transducer themselves as well as the measurement instability errors of the ultrasonic power meter, finally yielding a relatively stable, faithful estimate of the ultrasonic acoustic power. Kalman filtering greatly reduces the dispersion of the acoustic power measurements; compared with the commonly used averaging method, its result better reflects the true acoustic power, and the approach is worth extending to the processing of other metrology data.
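The abstract does not state its state-space model, so the following is only a minimal sketch under a common assumption: the true acoustic power is modeled as a slowly drifting scalar state observed with noise and smoothed with a one-dimensional Kalman filter, then compared with the plain mean. All numeric values (power level, noise variances) are hypothetical.

```python
import numpy as np

def kalman_1d(measurements, q, r, x0=None, p0=1.0):
    """Scalar random-walk state x_k = x_{k-1} + w (variance q), observed as
    z_k = x_k + v (variance r).  Returns the filtered state after each reading."""
    x = measurements[0] if x0 is None else x0
    p = p0
    filtered = []
    for z in measurements:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the new reading
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_power = 85.0                                   # mW, hypothetical
    readings = true_power + rng.normal(0.0, 2.0, 25)    # noisy repeat measurements
    smoothed = kalman_1d(readings, q=1e-3, r=4.0)
    print("plain mean         :", round(readings.mean(), 3))
    print("final Kalman state :", round(smoothed[-1], 3))
```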

5.
The goal in diagnostic medicine is often to estimate the diagnostic accuracy of multiple experimental tests relative to a gold standard reference. When a gold standard reference is not available, investigators commonly use an imperfect reference standard. This paper proposes methodology for estimating the diagnostic accuracy of multiple binary tests with an imperfect reference standard when information about the diagnostic accuracy of the imperfect test is available from external data sources. We propose alternative joint models for characterizing the dependence between the experimental tests and discuss the use of these models for estimating individual-test sensitivity and specificity as well as prevalence and multivariate post-test probabilities (predictive values). We show using analytical and simulation techniques that, as long as the sensitivity and specificity of the imperfect test are high, inferences on diagnostic accuracy are robust to misspecification of the joint model. The methodology is demonstrated with a study examining the diagnostic accuracy of various HIV-antibody tests for HIV. Published in 2008 by John Wiley & Sons, Ltd.

6.
There has been a recent increase in the diagnosis of diseases through radiographic images such as x-rays, magnetic resonance imaging, and computed tomography. The outcome of a radiological diagnostic test is often in the form of discrete ordinal data, and we usually summarize the performance of the diagnostic test using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The ROC curve will be concave and called proper when the outcomes of the diagnostic test in the actually positive subjects are higher than in the actually negative subjects. The diagnostic test for disease detection is clinically useful when a ROC curve is proper. In this study, we develop a hierarchical Bayesian model to estimate the proper ROC curve and AUC using stochastic ordering in several domains when the outcome of the diagnostic test is discrete ordinal data and compare it with the model without stochastic ordering. The model without stochastic ordering can estimate the improper ROC curve with a nonconcave shape or a hook when the true ROC curve of the population is a proper ROC curve. Therefore, the model with stochastic ordering is preferable over the model without stochastic ordering to estimate the proper ROC curve with clinical usefulness for ordinal data.
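This is not the hierarchical Bayesian model itself, but a small sketch of the quantities it targets: an empirical ROC curve and trapezoidal AUC from ordinal category counts, with a crude check for the non-concave "hook" that marks an improper curve. The category counts are invented.

```python
import numpy as np

def ordinal_roc(pos_counts, neg_counts):
    """Empirical ROC points and trapezoidal AUC from ordinal category counts,
    with higher categories meaning 'more likely diseased'."""
    pos = np.asarray(pos_counts, float)
    neg = np.asarray(neg_counts, float)
    # threshold at 'category >= k', sweeping k from highest to lowest
    tpr = np.concatenate([[0.0], np.cumsum(pos[::-1]) / pos.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(neg[::-1]) / neg.sum()])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
    return fpr, tpr, auc

if __name__ == "__main__":
    # 5-point ordinal scale (1 = definitely negative ... 5 = definitely positive)
    diseased     = [5, 10, 20, 30, 35]
    non_diseased = [40, 30, 15, 10, 5]
    fpr, tpr, auc = ordinal_roc(diseased, non_diseased)
    slopes = np.diff(tpr) / np.maximum(np.diff(fpr), 1e-12)
    hooked = bool(np.any(np.diff(slopes) > 1e-9))   # increasing slopes => non-concave segment
    print(f"AUC = {auc:.3f}; non-concave ('hook') segment present: {hooked}")
```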

7.
Data from routine CT scan examinations are employed to illustrate the use of the polychotomous logistic regression model as a statistical diagnostic tool. The assumptions of the model, the interpretation of its parameters, and its capabilities are described in detail. In carrying out the analysis on the CT data, a large, relatively sparse data set, many technical difficulties were encountered. Modifications to the methodology that were necessary to permit its implementation are described, and it is demonstrated that an unbiased analysis of T + 1 diagnostic categories can be implemented by separately performing T individual simple logistic analyses. The limitations of the methodology are discussed. It is hoped that this paper may serve as a basis for the practical implementation of the polychotomous logistic model in similar diagnostic settings.
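A hedged sketch of the decomposition mentioned in the abstract: with T + 1 diagnostic categories, each non-baseline category is compared with the baseline in its own simple logistic fit restricted to subjects in those two categories. The data are synthetic and scikit-learn is used only for convenience.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def separate_binary_fits(X, y, baseline=0):
    """One binary logistic fit per non-baseline category, each restricted to
    subjects falling in that category or the baseline category."""
    fits = {}
    for cat in np.unique(y):
        if cat == baseline:
            continue
        mask = (y == cat) | (y == baseline)
        clf = LogisticRegression(C=1e6, max_iter=1000)   # large C: essentially unpenalized
        clf.fit(X[mask], (y[mask] == cat).astype(int))
        fits[cat] = (clf.intercept_[0], clf.coef_[0])
    return fits

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 300
    X = rng.normal(size=(n, 2))
    # three synthetic diagnostic categories driven by the two covariates
    logits = np.column_stack([np.zeros(n), 1.0 + X[:, 0], -0.5 + 1.5 * X[:, 1]])
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    y = np.array([rng.choice(3, p=p) for p in probs])
    for cat, (b0, b) in separate_binary_fits(X, y).items():
        print(f"category {cat} vs baseline: intercept = {b0:.2f}, coefficients = {np.round(b, 2)}")
```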

8.
The purpose of this study was to review economic considerations related to establishing a diagnosis of Crohn's disease, and to compare the costs of a diagnostic algorithm incorporating wireless capsule endoscopy (WCE) with the current algorithm for diagnosing Crohn's disease suspected in the small bowel. Published literature, clinical trial data on WCE in comparison to other diagnostic tools, and input from clinical experts were used as data sources for (1) identifying contributors to the costs of diagnosing Crohn's disease; (2) exploring where WCE should be placed within the diagnostic algorithm for Crohn's; and (3) constructing decision tree models with sensitivity analyses to explore costs (from a payor perspective) of diagnosing Crohn's disease using WCE compared to other diagnostic methods. Literature review confirms that Crohn's disease is a significant and growing public health concern from clinical, humanistic and economic perspectives, and results in a long-term burden for patients, their families, providers, insurers, and employers. Common diagnostic procedures include radiologic studies such as small bowel follow through (SBFT), enteroclysis, CT scans, ultrasounds, and MRIs, as well as serologic testing, and various forms of endoscopy. Diagnostic costs for Crohn's disease can be considerable, especially given the cycle of repeat testing due to the low diagnostic yield of certain procedures and the inability of current diagnostic procedures to image the entire small bowel. WCE has a higher average diagnostic yield than comparative procedures due to imaging clarity and the ability to visualize the entire small bowel. Literature review found the average diagnostic yield of SBFT and colonoscopy for work-up of Crohn's disease to be 53.87%, whereas WCE had a diagnostic yield of 69.59%. A simple decision tree model comparing two arms--colonoscopy and SBFT, or WCE--estimates that WCE produces a cost savings of $291 for each case presenting for diagnostic work-up for Crohn's. Sensitivity analysis varying diagnostic yields of colonoscopy and SBFT vs. WCE demonstrates that WCE is still less costly than SBFT and colonoscopy even at their highest reported yields, as long as the diagnostic yield of WCE is 64.10% or better. Employing WCE as a first-line diagnostic procedure appears to be less costly, from a payor perspective, than current common procedures for diagnosing suspected Crohn's disease in the small bowel. Although not addressed in this model, earlier diagnosis with WCE (due to higher diagnostic yield) also could lead to earlier management, improved quality of life and workplace productivity for people with Crohn's disease.
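A toy version of the decision-tree comparison, assuming the yields quoted in the abstract (53.87% vs. 69.59%) and wholly invented unit costs; because the paper's actual cost inputs are not given here, the sketch will not reproduce the $291 figure; it only illustrates the structure of the calculation.

```python
# Hypothetical expected-cost comparison of two first-line work-up strategies.
# The diagnostic yields (53.87% for SBFT plus colonoscopy, 69.59% for WCE) come
# from the abstract; every cost figure below is invented for illustration only.

def expected_cost(first_line_cost, first_line_yield, followup_cost):
    """Cost of the first-line work-up plus, whenever it fails to yield a
    diagnosis, the cost of one round of additional testing."""
    return first_line_cost + (1.0 - first_line_yield) * followup_cost

COST_SBFT_COLONOSCOPY = 1200.0   # hypothetical combined procedure cost, $
COST_WCE = 950.0                 # hypothetical capsule endoscopy cost, $
COST_FOLLOWUP = 1500.0           # hypothetical cost of a repeat work-up, $

conventional = expected_cost(COST_SBFT_COLONOSCOPY, 0.5387, COST_FOLLOWUP)
capsule_first = expected_cost(COST_WCE, 0.6959, COST_FOLLOWUP)
print(f"conventional first-line: ${conventional:,.0f}")
print(f"WCE first-line:          ${capsule_first:,.0f}")
print(f"difference per work-up:  ${conventional - capsule_first:,.0f}")
```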

9.
In many areas of medical research, 'gold standard' diagnostic tests do not exist and so evaluating the performance of standardized diagnostic criteria or algorithms is problematic. In this paper we propose an approach to evaluating the operating characteristics of diagnoses using a latent class model. By defining 'true disease' as our latent variable, we are able to estimate sensitivity, specificity and negative and positive predictive values of the diagnostic test. These methods are applied to diagnostic criteria for depression using Baltimore's Epidemiologic Catchment Area Study Wave 3 data.
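A compact EM sketch of the idea: a two-class latent class model for several conditionally independent binary diagnostic items, with "true disease" as the latent variable, yielding prevalence, sensitivity, and specificity. The data are simulated, not the ECA depression data, and class labels can in principle switch.

```python
import numpy as np

def lca_em(data, n_iter=500, seed=0):
    """Two-class latent class EM for binary items, assuming conditional
    independence given the latent class.  Returns the estimated prevalence,
    P(item positive | diseased) and P(item positive | not diseased)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    prev = 0.5
    p_pos = rng.uniform(0.55, 0.90, k)   # start the 'diseased' class above the other
    p_neg = rng.uniform(0.05, 0.40, k)
    for _ in range(n_iter):
        # E-step: posterior probability of disease for each subject
        like_d = prev * np.prod(p_pos ** data * (1 - p_pos) ** (1 - data), axis=1)
        like_n = (1 - prev) * np.prod(p_neg ** data * (1 - p_neg) ** (1 - data), axis=1)
        post = like_d / (like_d + like_n)
        # M-step: update prevalence and per-item response probabilities
        prev = post.mean()
        p_pos = (post[:, None] * data).sum(axis=0) / post.sum()
        p_neg = ((1 - post)[:, None] * data).sum(axis=0) / (1 - post).sum()
    return prev, p_pos, p_neg

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    truly_diseased = rng.random(2000) < 0.20
    sens = np.array([0.85, 0.75, 0.90])
    spec = np.array([0.90, 0.95, 0.80])
    data = np.where(truly_diseased[:, None],
                    rng.random((2000, 3)) < sens,                 # diseased: positive with prob = sensitivity
                    rng.random((2000, 3)) > spec).astype(float)   # healthy: positive with prob = 1 - specificity
    prev, p_pos, p_neg = lca_em(data)
    print("estimated prevalence :", round(prev, 3))
    print("estimated sensitivity:", np.round(p_pos, 3))
    print("estimated specificity:", np.round(1 - p_neg, 3))
```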

10.
11.
Objective: To introduce the idea of the entropy method and explore its application in the evaluation of clinical diagnosis. Methods: Entropy values were computed using medical imaging data as an example. Results: The entropy method can turn fuzzy, disordered experimental data into a quantitative judgment. Conclusion: Applying the entropy method to the evaluation of clinical diagnosis is feasible, and it complements the traditional ROC-curve evaluation method.
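The abstract does not give its entropy formula, so the sketch below assumes the simplest reading: Shannon entropy and the mutual information between test result and true status, computed from a 2x2 table of invented counts.

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(table):
    """Mutual information (in bits) between test result and true status,
    computed from a 2x2 table of joint counts."""
    joint = np.asarray(table, float)
    joint = joint / joint.sum()
    h_test = shannon_entropy(joint.sum(axis=1))      # entropy of the test result
    h_true = shannon_entropy(joint.sum(axis=0))      # entropy of the true status
    h_joint = shannon_entropy(joint.ravel())
    return h_test + h_true - h_joint

# rows: test positive / negative; columns: disease present / absent (invented counts)
table = [[80, 20],
         [15, 185]]
print("I(test; disease) =", round(mutual_information(table), 3), "bits")
```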

12.
Technological advances continue to develop for early detection of disease. Research studies are required to define the statistical properties of such screening or diagnostic tests. However, statistical methodology currently used to evaluate diagnostic tests is limited. We propose the use of marginal regression models with robust sandwich variance estimators to make inference about the sensitivity and specificity of diagnostic tests. This method is more flexible than standard methods in that it allows comparison of sensitivity between two or more tests even if all tests are not carried out on all subjects, it can accommodate correlated data, and the effect of covariates can be evaluated. This last feature is important since it allows researchers to understand the effects on sensitivity and specificity of various environmental and patient characteristics. If such factors are under the control of the clinician, it provides the opportunity to modify the diagnostic testing program to maximize sensitivity and/or specificity. We show that the marginal regression modelling methods generalize standard statistical methods. In particular, when we compare two screening tests and we test each subject with both screens, the method corresponds to McNemar's test. We describe data from an ongoing audiology screening study and we analyse a simulated version of the data to illustrate the methodology. We also analyse data from a longitudinal study of PCR as a diagnostic test for cytomegalovirus. © 1997 by John Wiley & Sons, Ltd.
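As the abstract notes, when two screening tests are both applied to every subject the marginal-model comparison reduces to McNemar's test; the snippet below computes that statistic from the discordant-pair counts (which are invented here).

```python
from scipy import stats

def mcnemar(b, c):
    """McNemar's chi-square for paired binary outcomes, where b and c count the
    discordant pairs (positive on test A only, and positive on test B only)."""
    chi2 = (b - c) ** 2 / (b + c)
    return chi2, stats.chi2.sf(chi2, df=1)

# Among truly diseased subjects screened with both tests (invented counts):
chi2, p = mcnemar(b=25, c=12)
print(f"McNemar chi-square = {chi2:.2f}, p = {p:.4f}")
```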

13.
ROC methodology within a monitoring framework
Receiver operating characteristic (ROC) methodology is widely used to evaluate and compare diagnostic tests. Generally, each diagnostic test is applied once to each subject in a population and the results, reported on a continuous scale, are used to construct the ROC curve. We extend the standard method to accommodate a framework in which the diagnostic test is repeated over time to monitor for occurrence of an event. Unlike the usual situation in which event status is static, the problem we address involves event status that is not constant over the monitoring period. Subjects generally are classified as non-events, or controls, until they experience events that convert them to cases. Viewing the data as incomplete discrete failure time data with time-varying covariates, potentially useful diagnostic markers can be related appropriately in time with the true condition and varying amounts of information per individual can be taken into account. The ROC curve provides an assessment of the performance of the test in combination with the schedule of testing. Within this framework, a computational simplification is introduced to calculate variances and covariances for the areas under the ROC curves. Periodic monitoring for reperfusion following thrombolytic treatment for acute myocardial infarction provides a detailed example, whereby the lengths of the testing interval combined with different diagnostic markers are compared.

14.
There is now a large literature on the analysis of diagnostic test data. In the absence of a gold standard test, latent class analysis is most often used to estimate the prevalence of the condition of interest and the properties of the diagnostic tests. When test results are measured on a continuous scale, both parametric and nonparametric models have been proposed. Parametric methods such as the commonly used bi-normal model may not fit the data well; nonparametric methods developed to date have been relatively complex to apply in practice, and their properties have not been carefully evaluated in the diagnostic testing context. In this paper, we propose a simple yet flexible Bayesian nonparametric model which approximates a Dirichlet process for continuous data. We compare results from the nonparametric model with those from the bi-normal model via simulations, investigating both how much is lost in using a nonparametric model when the bi-normal model is correct and how much can be gained in using a nonparametric model when normality does not hold. We also carefully investigate the trade-offs that occur between flexibility and identifiability of the model as different Dirichlet process prior distributions are used. Motivated by an application to tuberculosis clustering, we extend our nonparametric model to accommodate two additional dichotomous tests and proceed to analyze these data using both the continuous test alone as well as all three tests together.
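This is not the Dirichlet process model; it only sketches the bi-normal parametric baseline the abstract compares against: with non-diseased scores N(0, 1) and diseased scores N(mu, sigma^2), the ROC curve and AUC have the closed forms used below (parameter values are arbitrary).

```python
import numpy as np
from scipy.stats import norm

def binormal_roc(mu, sigma, fpr):
    """Bi-normal ROC: non-diseased scores ~ N(0, 1), diseased ~ N(mu, sigma^2),
    so ROC(t) = Phi(a + b * Phi^{-1}(t)) with a = mu / sigma and b = 1 / sigma."""
    a, b = mu / sigma, 1.0 / sigma
    return norm.cdf(a + b * norm.ppf(fpr))

def binormal_auc(mu, sigma):
    """Closed-form AUC = Phi(mu / sqrt(1 + sigma^2)) for the same model."""
    return norm.cdf(mu / np.sqrt(1.0 + sigma ** 2))

fpr = np.linspace(0.01, 0.99, 5)
print("ROC points:", np.round(binormal_roc(1.2, 1.0, fpr), 3))
print("AUC       :", round(binormal_auc(1.2, 1.0), 3))
```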

15.
A diagnosis in practice is a sequential process starting with a patient with a particular set of signs and symptoms. To serve practice, diagnostic research should aim to quantify the added value of a test to clinical information that is commonly available before the test is applied. Routine care databases commonly include all documented patient information, and therefore seem suitable for quantifying a test's added value over prior information. It is well known, however, that retrospective use of routine care data in diagnostic research may cause various methodologic problems. But, given the increased attention to electronic patient records including data from routine patient care, we believe it is time to reconsider these problems. We discuss four problems related to routine care databases. First, most databases do not label patients by their symptoms or signs but by their final diagnosis. Second, in routine care the diagnostic workup of a patient is by definition determined by previous diagnostic (test) results. Therefore, routinely documented data are subject to so-called workup bias. Third, in practice, the reference test is always interpreted with knowledge of the preceding test information, such that in scientific studies using routine data the diagnostic value of a test under evaluation is commonly overestimated. Fourth, routinely documented databases are likely to suffer from missing data. For each problem we discuss methods that are presently available and may (partly) overcome it. All this could contribute to more frequent and appropriate use of routine care data in diagnostic research. The discussed methods to overcome the above problems may well be similarly useful for prospective diagnostic studies.
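One common way to quantify the added value argued for above is to compare the (in-sample) discrimination of a model built from prior clinical information alone against the same model plus the new test result. The sketch below does this on synthetic data; the covariates, coefficients, and the use of scikit-learn are all illustrative assumptions, not part of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 1000
age = rng.normal(60, 10, n)              # prior clinical information
symptom = rng.integers(0, 2, n)
new_test = rng.normal(0, 1, n)           # the test under evaluation
logit = -8 + 0.1 * age + 0.8 * symptom + 1.2 * new_test
disease = rng.random(n) < 1 / (1 + np.exp(-logit))

prior_only = np.column_stack([age, symptom])
with_test = np.column_stack([age, symptom, new_test])

model_prior = LogisticRegression(max_iter=1000).fit(prior_only, disease)
model_full = LogisticRegression(max_iter=1000).fit(with_test, disease)
auc_prior = roc_auc_score(disease, model_prior.predict_proba(prior_only)[:, 1])
auc_full = roc_auc_score(disease, model_full.predict_proba(with_test)[:, 1])
print(f"AUC, prior clinical information only: {auc_prior:.3f}")
print(f"AUC, after adding the new test:       {auc_full:.3f}")
```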

16.
This paper presents a general approach for simultaneously assessing, from serial data, diagnostic consistency, interrater reliability and incidence of a strictly progressive disease. Observed data are viewed as incomplete: diagnostic errors are not distinguished from true diagnoses. We introduce a broad class of models to separate rater errors from underlying patterns of disease incidence. The analysis can include covariates and risk factors. We provide variance expressions for parameter estimates. Categorical data for estimating the incidence of dental caries serve as an example.

17.
In this paper, we develop methods to combine multiple biomarker trajectories into a composite diagnostic marker using functional data analysis (FDA) to achieve better diagnostic accuracy in monitoring disease recurrence in the setting of a prospective cohort study. In such studies, the disease status is usually verified only for patients with a positive test result in any biomarker and is missing in patients with negative test results in all biomarkers. Thus, the test result will affect disease verification, which leads to verification bias if the analysis is restricted only to the verified cases. We treat verification bias as a missing data problem. Under both missing at random (MAR) and missing not at random (MNAR) assumptions, we derive the optimal classification rules using the Neyman-Pearson lemma based on the composite diagnostic marker. We estimate thresholds adjusted for verification bias to dichotomize patients as test positive or test negative, and we evaluate the diagnostic accuracy using the verification bias corrected area under the ROC curves (AUCs). We evaluate the performance and robustness of the FDA combination approach and assess the consistency of the approach through simulation studies. In addition, we perform a sensitivity analysis of the dependency between the verification process and disease status for the approach under the MNAR assumption. We apply the proposed method on data from the Religious Orders Study and from a non-small cell lung cancer trial.
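The paper's FDA-based correction is more involved than this; shown here is only the classical Begg-Greenes correction under MAR, which recovers sensitivity and specificity when disease status is verified mainly in test-positive subjects. All counts are invented.

```python
def begg_greenes(n_pos, n_neg, ver_pos_d, ver_pos_nd, ver_neg_d, ver_neg_nd):
    """Begg-Greenes corrected sensitivity and specificity under MAR verification.

    n_pos, n_neg           : all subjects testing positive / negative
    ver_pos_d, ver_pos_nd  : verified test-positives found diseased / not diseased
    ver_neg_d, ver_neg_nd  : verified test-negatives found diseased / not diseased
    """
    p_t_pos = n_pos / (n_pos + n_neg)
    p_t_neg = 1.0 - p_t_pos
    p_d_pos = ver_pos_d / (ver_pos_d + ver_pos_nd)   # P(D+ | T+), from verified cases
    p_d_neg = ver_neg_d / (ver_neg_d + ver_neg_nd)   # P(D+ | T-), from verified cases
    se = p_d_pos * p_t_pos / (p_d_pos * p_t_pos + p_d_neg * p_t_neg)
    sp = (1 - p_d_neg) * p_t_neg / ((1 - p_d_pos) * p_t_pos + (1 - p_d_neg) * p_t_neg)
    return se, sp

# Invented counts: 300 test positive (250 verified), 700 test negative (100 verified)
se, sp = begg_greenes(n_pos=300, n_neg=700,
                      ver_pos_d=150, ver_pos_nd=100,
                      ver_neg_d=5, ver_neg_nd=95)
print(f"corrected sensitivity = {se:.3f}, corrected specificity = {sp:.3f}")
```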

18.
Latent class analysis of diagnostic agreement
We describe methods based on latent class analysis for analysis and interpretation of agreement on dichotomous diagnostic ratings. This approach formulates agreement in terms of parameters directly related to diagnostic accuracy and leads to many practical applications, such as estimation of the accuracy of individual ratings and the extent to which accuracy may improve with multiple opinions. We describe refinements in the estimation of parameters for varying panel designs, and apply latent class methods successfully to examples of medical agreement data that include data previously found to be poorly fitted by two-class models. Latent class techniques provide a powerful and flexible set of tools to analyse diagnostic agreement and one should consider them routinely in the analysis of such data.

19.
In analysis of diagnostic data with multiple tests, it is often the case that these tests are correlated. Modeling the correlation explicitly not only produces valid inference results but also enables borrowing of information. Motivated by the Physician Reliability Study (PRS) that investigated the diagnostic performance of physicians in diagnosing endometriosis, we construct a correlated modeling framework to estimate ROC curves and the associated areas under the curves. This correlated approach is quite appealing for the PRS data set that suffers from the problem of small sample sizes, as it enables information borrowing between physician groups and sessions. Given that the test scores appear to be non-normal even after logarithm transformation, we use the ranks of the data to conduct likelihood estimation and inference. We use the deviance information criterion to select competing models and conduct simulation studies to assess model performances. In application to the PRS data set, we found that the physicians are not significantly different in their diagnostic performance between groups; however, they are different between the sessions. This suggests that clinical information may play a more important role in physicians' diagnostic performance than their experience. Our empirical evidence also demonstrates that when using both woman- and physician-specific random effects, the model parameter estimates are much smoother.

20.
OBJECTIVE: For diagnostic tests, the most common graphical representation of the information is the receiver-operating characteristic (ROC) curve. The "agreement chart" displays the information of two observers independently classifying the same n items into the same k categories, and can be used if one considers one of the "observers" as the diagnostic test and the other as the known outcome. This study compares the two charts and their ability to visually portray the various relevant summary statistics that assess how good a diagnostic test may be, such as sensitivity, specificity, predictive values, and likelihood ratios. STUDY DESIGN AND SETTING: The geometric relationships displayed in the charts are first described. The relationship between the two graphical representations and various summary statistics is illustrated using data from three common epidemiologically relevant health issues: coronary heart disease, screening for breast cancer, and screening for tuberculosis. RESULTS: Whereas the ROC curve incorporates information on sensitivity and specificity, the agreement chart includes information on the positive and negative predictive values of the diagnostic test. CONCLUSION: The agreement chart should be considered as an alternative visual representation to the ROC for diagnostic tests.
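A small helper that computes the summary statistics both charts are meant to portray (sensitivity, specificity, predictive values, and likelihood ratios) from a 2x2 table of test result against known outcome; the counts are invented.

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values and likelihood ratios from a
    2x2 table of test result versus known outcome."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

# Invented screening counts: 90 true positives, 60 false positives, 10 false negatives, 840 true negatives
for name, value in diagnostic_summary(tp=90, fp=60, fn=10, tn=840).items():
    print(f"{name:12s}: {value:.3f}")
```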

