Similar Articles
13 similar articles found (search time: 0 ms)
1.
Current analysis of event-related potential (ERP) data is usually based on the a priori selection of channels and time windows of interest for studying the differences between experimental conditions in the spatio-temporal domain. In this work we put forward a new strategy designed for situations in which there is no a priori information about ‘when’ and ‘where’ these differences appear in the spatio-temporal domain, so that numerous hypotheses are tested simultaneously, which increases the risk of false positives. This issue is known as the problem of multiple comparisons and has been managed with methods such as the permutation test and methods that control the false discovery rate (FDR). Although the former has been applied previously, to our knowledge FDR methods have not been introduced into ERP data analysis. Here we compare the performance (on simulated and real data) of the permutation test and two FDR methods (Benjamini and Hochberg (BH) and local-fdr, by Efron). All these methods are shown to be valid for dealing with the problem of multiple comparisons in ERP analysis, avoiding the ad hoc selection of channels and/or time windows. FDR methods are a good alternative to the commonly used and computationally more expensive permutation test. The BH method for independent tests gave the best overall performance regarding the balance between type I and type II errors. The local-fdr method is preferable for high-dimensional (multichannel) problems where most of the tests conform to the empirical null hypothesis. Differences among the methods according to assumptions, null distributions and dimensionality of the problem are also discussed. Copyright © 2009 John Wiley & Sons, Ltd.
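The abstract above compares the permutation test with the BH and local-fdr procedures; as a concrete reference point, the following is a minimal sketch (not the authors' implementation) of the BH step-up rule applied to a flattened vector of per-channel, per-time-point p-values. The simulated p-values, the FDR level and the array sizes are illustrative assumptions.

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure for independent tests,
# applied to p-values from all (channel, time point) pairs flattened into one vector.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)                      # sort p-values in ascending order
    ranked = pvals[order]
    below = ranked <= (np.arange(1, m + 1) / m) * q  # p_(k) <= (k/m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])           # largest qualifying rank (0-based)
        reject[order[: k + 1]] = True              # reject all hypotheses up to that rank
    return reject

# Toy usage: e.g. 64 channels x 100 time points of null tests plus a few true effects.
rng = np.random.default_rng(0)
pv = np.concatenate([rng.uniform(size=6000), rng.beta(1, 40, size=400)])
print(benjamini_hochberg(pv, q=0.05).sum(), "tests rejected")
```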

2.
The multiplicity problem has become increasingly important in genetic studies as the capacity for high-throughput genotyping has increased. Control of the False Discovery Rate (FDR) (Benjamini and Hochberg [1995] J. R. Stat. Soc. Ser. B 57:289-300) has been adopted to address the problems of false-positive control and low power inherent in high-volume genome-wide linkage and association studies. In many genetic studies there is often a natural stratification of the m hypotheses to be tested. Given the FDR framework and the presence of such stratification, we investigate the performance of a stratified false discovery control approach (i.e. control or estimate FDR separately for each stratum) and compare it with the aggregated method (i.e. consider all hypotheses in a single stratum). Under the fixed rejection region framework (i.e. reject all hypotheses with unadjusted p-values less than a pre-specified level and then estimate FDR), we demonstrate that the aggregated FDR is a weighted average of the stratum-specific FDRs. Under the fixed FDR framework (i.e. reject as many hypotheses as possible while controlling FDR at a pre-specified level), we specify a condition necessary for the expected total number of true positives under the stratified FDR method to be equal to or greater than that obtained from the aggregated FDR method. Application to a recent Genome-Wide Association (GWA) study by Maraganore et al. ([2005] Am. J. Hum. Genet. 77:685-693) illustrates the potential advantages of controlling or estimating FDR by stratum. Our analyses also show that controlling FDR at a low rate, e.g. 5% or 10%, may not be feasible for some GWA studies.
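To make the stratified-versus-aggregated comparison concrete, here is a minimal sketch under the fixed-FDR framework: BH is run once over all hypotheses pooled, and then separately within each stratum at the same level. The stratum sizes, signal strengths and the use of statsmodels' multipletests are illustrative assumptions, not the paper's analysis.

```python
# Sketch: aggregated vs stratified FDR control with the BH procedure at the 5% level.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# e.g. stratum 0: candidate-region SNPs (signal-rich), stratum 1: genome-wide backbone
pvals = np.concatenate([rng.beta(1, 30, 200), rng.uniform(size=5000)])
strata = np.concatenate([np.zeros(200, dtype=int), np.ones(5000, dtype=int)])

# Aggregated: one BH run over all m hypotheses in a single stratum.
agg_reject, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")

# Stratified: BH run separately at the same level within each stratum.
strat_reject = np.zeros_like(agg_reject)
for s in np.unique(strata):
    idx = strata == s
    strat_reject[idx] = multipletests(pvals[idx], alpha=0.05, method="fdr_bh")[0]

print("aggregated rejections:", agg_reject.sum())
print("stratified rejections:", strat_reject.sum())
```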

3.
Tong T, Zhao H. Statistics in Medicine 2008, 27(11):1960-1972
One major goal in microarray studies is to identify genes having different expression levels across different classes/conditions. In order to achieve this goal, a study needs to have an adequate sample size to ensure the desired power. Owing to the importance of this topic, a number of approaches to sample size calculation have been developed. However, due to the cost and/or experimental difficulties in obtaining sufficient biological materials, it might be difficult to attain the required sample size. In this article, we address more practical questions for assessing power and the false discovery rate (FDR) for a fixed sample size. The relationships between power, sample size and FDR are explored. We also conduct simulations and a real data study to evaluate our findings.
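One commonly used way to relate power, per-test level and FDR for a fixed sample size (a sketch, not necessarily the article's exact formulation) treats the expected FDR as approximately pi0*alpha / (pi0*alpha + (1 - pi0)*power), where pi0 is the proportion of true nulls. The snippet below combines this with an approximate two-sample z-test power formula; the effect size, pi0 and the per-gene test are illustrative assumptions.

```python
# Sketch of the power / sample size / FDR trade-off for a fixed per-test level.
import numpy as np
from scipy.stats import norm

def two_sample_power(delta, n_per_group, alpha):
    """Approximate power of a two-sided z-test for standardized effect size delta."""
    z = norm.ppf(1 - alpha / 2)
    ncp = delta * np.sqrt(n_per_group / 2)        # noncentrality for equal group sizes
    return norm.cdf(ncp - z) + norm.cdf(-ncp - z)

def expected_fdr(power, alpha, pi0):
    """Approximate expected FDR: pi0*alpha / (pi0*alpha + (1 - pi0)*power)."""
    return pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)

pi0, delta, alpha = 0.95, 1.0, 0.001              # illustrative values
for n in (5, 10, 20, 40):
    p = two_sample_power(delta, n, alpha)
    print(f"n per group = {n:3d}  power = {p:.3f}  approx FDR = {expected_fdr(p, alpha, pi0):.3f}")
```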

4.
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using the maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl obtained by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to a reduction of the FDR without a large increase in the FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise in the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension of the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.
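The selection rule described above can be illustrated schematically: instead of the λ maximizing the cross-validated log-likelihood (max-cvl), choose the λ maximizing cvl minus a term that grows with the number of selected biomarkers. The cvl curve, model sizes and penalty weight in the sketch below are toy numbers, and the exact penalization term proposed by the authors is not reproduced here.

```python
# Sketch of a penalized-cvl selection rule over a grid of lasso penalty values.
import numpy as np

lambdas = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.50])
cvl     = np.array([-612., -609., -607., -606.5, -608., -615.])  # cross-validated log-lik (toy)
n_sel   = np.array([42,    30,    18,    9,      4,     1])      # biomarkers selected (toy)

c = 0.5                                # weight of the parsimony penalty (assumption)
extended = cvl - c * n_sel             # penalized criterion: goodness-of-fit minus parsimony cost

i_cvl, i_ext = np.argmax(cvl), np.argmax(extended)
print("max-cvl  choice: lambda =", lambdas[i_cvl], " selects", n_sel[i_cvl], "biomarkers")
print("extended choice: lambda =", lambdas[i_ext], " selects", n_sel[i_ext], "biomarkers")
```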

5.
Control rate regression is a widely used approach to account for heterogeneity among studies in meta-analysis by including information about the outcome risk of patients in the control condition. Correcting for the presence of measurement error affecting risk information in the treated and in the control group has been recognized as a necessary step to derive reliable inferential conclusions. Within this framework, the paper considers the problem of small sample size as an additional source of misleading inference about the slope of the control rate regression. Likelihood procedures relying on first-order approximations are shown to be substantially inaccurate, especially when dealing with increasing heterogeneity and correlated measurement errors. We suggest addressing the problem by relying on higher-order asymptotics. In particular, we derive Skovgaard's statistic as an instrument to improve the accuracy of the approximation of the signed profile log-likelihood ratio statistic to the standard normal distribution. The proposal is shown to provide much more accurate results than standard likelihood solutions, at no appreciable computational cost. The advantages of Skovgaard's statistic in control rate regression are shown in a series of simulation experiments and illustrated in a real data example. R code for applying the first- and second-order statistics for inference on the slope of the control rate regression is provided.
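For reference, Skovgaard's proposal belongs to the family of modified signed likelihood ratio statistics of the general form below (a sketch; u(ψ) denotes Skovgaard's data-dependent correction term, whose full expression involving expected likelihood quantities is not reproduced here):

```latex
% Modified signed likelihood ratio statistic: r is the signed profile log-likelihood
% ratio statistic for the slope psi, u(psi) is Skovgaard's correction term.
r^{*}(\psi) \;=\; r(\psi) \;+\; \frac{1}{r(\psi)}\,
  \log\!\left\{\frac{u(\psi)}{r(\psi)}\right\},
\qquad r^{*}(\psi) \text{ is standard normal to second order.}
```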

6.
This paper demonstrates an inflation of the type I error rate that occurs when testing the statistical significance of a continuous risk factor after adjusting for a correlated continuous confounding variable that has been converted into a categorical variable. We used Monte Carlo simulation methods to assess the inflation of the type I error rate when testing the statistical significance of a risk factor after adjusting for a continuous confounding variable that has been divided into categories. We found that the inflation of the type I error rate increases with increasing sample size, with increasing correlation between the risk factor and the confounding variable, and with a decreasing number of categories into which the confounder is divided. Even when the confounder was divided into a five-level categorical variable, the inflation of the type I error rate remained high when both the sample size and the correlation between the risk factor and the confounder were high.
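A minimal Monte Carlo sketch of the mechanism described above: the outcome depends only on a continuous confounder Z, the risk factor X is merely correlated with Z, and Z is adjusted for only after being cut into categories; the rejection rate for X then exceeds the nominal 5% level. The sample size, correlation, number of categories and linear outcome model are illustrative assumptions, not the paper's exact design.

```python
# Simulated type I error for a null risk factor X when a correlated confounder Z
# is categorized before adjustment in an ordinary linear regression.
import numpy as np
import statsmodels.api as sm

def type1_rate(n=500, rho=0.7, n_cat=3, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        z = rng.standard_normal(n)
        x = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(x, z) = rho
        y = 1.0 * z + rng.standard_normal(n)                        # outcome depends on z only
        cuts = np.quantile(z, np.linspace(0, 1, n_cat + 1)[1:-1])   # interior quantile cutpoints
        z_cat = np.digitize(z, cuts)                                # categorized confounder
        dummies = np.eye(n_cat)[z_cat][:, 1:]                       # reference-coded categories
        design = sm.add_constant(np.column_stack([x, dummies]))
        fit = sm.OLS(y, design).fit()
        rejections += fit.pvalues[1] < 0.05                         # p-value for x
    return rejections / n_sim

print("empirical type I error for x:", type1_rate())                # well above the nominal 0.05
```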

7.
Objective: To examine the factors related to Papanicolaou (Pap) tests, mammography and cholesterol testing in mid-aged Australian women as they age. Methods: Data were obtained from the 1946–51 cohort of the Australian Longitudinal Study on Women's Health, a prospective study of the health and lifestyle of Australian women. Data were collected via self-report mailed surveys on a three-yearly basis since 1996, when participants were aged 45–50. Demographic factors, health service use and health-related factors were examined in relation to screening practices in a lagged analysis. Results: As women aged, they were less likely to have a Pap test and more likely to report having a mammogram and a cholesterol test. Smokers were less likely to have all screening tests, and HRT use and more general practitioner (GP) visits were associated with increased odds of having health checks. Compared with healthy weight, higher BMI was associated with increased odds of cholesterol testing but decreased odds of Pap testing; obese women had lower odds of mammography. Underweight women had lower odds of mammography and Pap testing. Worse self-rated health and self-report of a chronic condition were significantly related to an increased likelihood of cholesterol testing. While some demographic and area-of-residence factors were also significantly associated with screening, large inequities based on socioeconomic status were not evident. Conclusions: Health and healthcare use are important determinants of screening. Implications: Greater advantage should be taken of opportunities to encourage women with more health-risk behaviours and health problems to engage in screening.

8.
Step-up procedures have been shown to be powerful testing methods in clinical trials for comparisons of several treatments with a control. In this paper, the determination of the optimal sample size for a step-up procedure that allows a pre-specified power level to be attained is discussed. Various definitions of power, such as all-pairs power, any-pair power, per-pair power and average power, in one- and two-sided tests are considered. An extensive numerical study confirms that square root allocation of sample size among treatments provides a better approximation of the optimal sample size than equal allocation. Based on square root allocation, tables are constructed, and users can conveniently obtain the approximate required sample size for the selected configurations of parameters and power. For clinical studies with difficulties in recruiting patients, or when additional subjects lead to a significant increase in cost, a more precise computation of the required sample size is recommended. In such circumstances, our proposed procedure may be adopted to obtain the optimal sample size. It is also found that, contrary to conventional belief, the optimal allocation may considerably reduce the total sample size requirement in certain cases. The determination of the required sample sizes using both allocation rules is illustrated with two examples from clinical studies. Copyright © 2010 John Wiley & Sons, Ltd.
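For orientation, the square-root allocation rule referenced above gives the control arm sqrt(k) times the per-treatment sample size when k treatments are compared with one control; the sketch below splits a total budget under this rule and under equal allocation. The total sample size and the number of treatments are illustrative, and the paper's tables should be consulted for the actual optimal sizes.

```python
# Sketch of square-root versus equal allocation of a total sample size budget
# among k treatment arms and one control arm.
import math

def square_root_allocation(N_total, k):
    """Per-treatment and control sizes with n_control = sqrt(k) * n_treatment."""
    n_treat = N_total / (k + math.sqrt(k))
    n_ctrl = math.sqrt(k) * n_treat
    return round(n_treat), round(n_ctrl)

def equal_allocation(N_total, k):
    n = N_total / (k + 1)
    return round(n), round(n)

N_total, k = 300, 4                                   # illustrative budget and arm count
print("square-root allocation (treatment, control):", square_root_allocation(N_total, k))
print("equal allocation       (treatment, control):", equal_allocation(N_total, k))
```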

9.
10.
Objective: To analyse the awareness of core tuberculosis (TB) prevention and control knowledge among pulmonary TB patients managed by primary healthcare institutions in Nanning, to explore the factors influencing it, and to provide a basis for improving TB health management. Methods: Face-to-face structured questionnaire interviews were conducted with 311 pulmonary TB patients served by primary healthcare institutions in Nanning to assess their awareness of core TB prevention and control knowledge, their attitudes, and their sources of knowledge; statistical analyses were performed with IBM SPSS Statistics 23. Results: The overall awareness rate of core TB knowledge among the 311 patients was 80.02% (1742/2177). Awareness was highest for the TB case-management (referral) policy (94.86%) and lowest for TB treatment outcomes (46.95%); the differences in awareness across the seven core TB prevention and control messages were statistically significant (χ2 = 379.459, P < 0.01). Awareness also differed significantly by ethnicity and occupation (χ2 = 36.613 and 18.178; P < 0.001 and P = 0.003, respectively). Multivariable analysis showed that age and occupation influenced awareness of core TB knowledge. More than 70% of the 311 patients held fairly positive attitudes towards TB and welcomed the health management services provided by primary healthcare workers. Sources of core TB knowledge were scattered: doctors were the most common source (28.63%) and school education the least common (1.83%). Conclusions: The overall awareness of core TB prevention and control knowledge among pulmonary TB patients managed by primary healthcare institutions in Nanning has not yet reached the 85% target set by the TB prevention and control programme. Future work should continue to strengthen the role of primary healthcare workers in TB health promotion, work with the health and education authorities to include TB health education in school curricula, and explore health-promotion approaches tailored to different population groups, so as ultimately to raise awareness of core TB knowledge in the whole population.

11.
Most North American workers drink coffee throughout their workday, although the cumulative effect of job stress and coffee is not well known. Research has shown that coffee affects the cardiovascular system and mental alertness primarily through the active ingredient caffeine; however, the dose of caffeine used in these studies is greater than that in a normal cup of coffee. In addition, these changes have mostly been determined in male caffeine-habituated consumers. Therefore, this study examined the effect of a normal cup of coffee on the cardiovascular and mental alertness responses both before and after a mental stress task in 10 caffeine-naïve (23 ± 5.0 years) and 10 caffeine-habituated (25 ± 6 years) females. Blood pressure, heart rate, and mental alertness were measured at baseline (before coffee), 50 minutes after finishing coffee, and immediately after a 9-minute mental stress task. The volume of coffee ingested over a 15-minute period was 350 mL (12 oz), which is equivalent to 140 mg of caffeine. The combined effect of coffee and mental stress significantly decreased diastolic blood pressure (Δ8 mm Hg) and increased heart rate (Δ6 beats per minute) and mental alertness (Δ67.3%) in caffeine-naïve and caffeine-habituated females, whereas systolic blood pressure (Δ10.3 mm Hg) increased only in the caffeine-naïve participants. Our results indicate that a normal cup of coffee can effect changes in blood pressure and mental alertness and that mental stress may alter the magnitude of change; however, the transient increase in systolic blood pressure after drinking coffee in caffeine-naïve participants requires further investigation.

12.
Common data sources for assessing the health of a population of interest include large-scale surveys based on interviews that often pose questions requiring a self-report, such as, ‘Has a doctor or other health professional ever told you that you have 〈health condition of interest〉?’ or ‘What is your 〈height/weight〉?’ Answers to such questions might not always reflect the true prevalences of health conditions (for example, if a respondent misreports height/weight or does not have access to a doctor or other health professional). Such ‘measurement error’ in health data could affect inferences about measures of health and health disparities. Drawing on two surveys conducted by the National Center for Health Statistics, this paper describes an imputation-based strategy for using clinical information from an examination-based health survey to improve on analyses of self-reported data in a larger interview-based health survey. Models predicting clinical values from self-reported values and covariates are fitted to data from the National Health and Nutrition Examination Survey (NHANES), which asks self-report questions during an interview component and also obtains clinical measurements during a physical examination component. The fitted models are used to multiply impute clinical values for the National Health Interview Survey (NHIS), a larger survey that obtains data solely via interviews. Illustrations involving hypertension, diabetes, and obesity suggest that estimates of health measures based on the multiply imputed clinical values are different from those based on the NHIS self-reported data alone and have smaller estimated standard errors than those based solely on the NHANES clinical data. The paper discusses the relationship of the methods used in the study to two-phase/two-stage/validation sampling and estimation, along with limitations, practical considerations, and areas for future research. Published in 2009 by John Wiley & Sons, Ltd.
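The imputation-based strategy can be sketched on toy data (not NHANES/NHIS): fit a model predicting the clinical value from the self-report in a small "examination" sample, multiply impute clinical values for a larger "interview-only" sample, and combine the estimates with Rubin's rules. The normal linear imputation model, the obesity cut-off and the variable names are assumptions for illustration; a fully proper multiple imputation would also draw the regression parameters from their posterior rather than fixing them at their estimates.

```python
# Toy sketch: impute clinical BMI from self-reported BMI, then combine with Rubin's rules.
import numpy as np

rng = np.random.default_rng(2)

# "Examination" survey: both self-reported and clinically measured values observed.
n_exam = 400
self_rep_exam = rng.normal(27, 5, n_exam)
clinical_exam = 1.5 + 0.98 * self_rep_exam + rng.normal(0, 1.5, n_exam)

# "Interview" survey: only the self-report is observed.
n_int = 4000
self_rep_int = rng.normal(27, 5, n_int)

# Fit the imputation model on the examination data (simple normal linear regression).
X = np.column_stack([np.ones(n_exam), self_rep_exam])
beta, res, *_ = np.linalg.lstsq(X, clinical_exam, rcond=None)
sigma = np.sqrt(res[0] / (n_exam - 2))

# Multiply impute clinical values for the interview survey; combine via Rubin's rules.
M = 20
est, var = [], []
for _ in range(M):
    imputed = beta[0] + beta[1] * self_rep_int + rng.normal(0, sigma, n_int)
    est.append(np.mean(imputed > 30))              # e.g. obesity prevalence per imputation
    var.append(est[-1] * (1 - est[-1]) / n_int)    # within-imputation variance
qbar = np.mean(est)                                # combined point estimate
B = np.var(est, ddof=1)                            # between-imputation variance
T = np.mean(var) + (1 + 1 / M) * B                 # Rubin's total variance
print(f"estimated prevalence: {qbar:.3f}  (SE {np.sqrt(T):.3f})")
```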

13.