Similar Articles
20 similar articles retrieved.
1.
Identifying genes that are differentially expressed between classes of samples is an important objective of many microarray experiments. Because of the thousands of genes typically considered, there is a tension between identifying as many of the truly differentially expressed genes as possible, but not too many genes that are not really differentially expressed (false discoveries). Controlling the proportion of identified genes that are false discoveries, the false discovery proportion (FDP), is a goal of interest. In this paper, two multivariate permutation methods are investigated for controlling the FDP. One is based on a multivariate permutation testing (MPT) method that probabilistically controls the number of false discoveries, and the other is based on the Significance Analysis of Microarrays (SAM) procedure that provides an estimate of the FDP. Both methods account for the correlations among the genes. We find the ability of the methods to control the proportion of false discoveries varies substantially depending on the implementation characteristics. For example, for both methods one can proceed from the most significant gene to the least significant gene until the estimated FDP is just above the targeted level ('top-down' approach), or from the least significant gene to the most significant gene until the estimated FDP is just below the targeted level ('bottom-up' approach). We find that the top-down MPT-based method probabilistically controls the FDP, whereas our implementation of the top-down SAM-based method does not. Bottom-up MPT-based or SAM-based methods can result in poor control of the FDP.  相似文献   
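A minimal sketch of the 'top-down' scan described above, assuming a matrix of test statistics recomputed under class-label permutations is already available (larger statistics meaning stronger evidence); this illustrates the general idea rather than the MPT or SAM implementations studied in the paper.

```python
import numpy as np

def top_down_selection(obs_stats, perm_stats, target_fdp=0.10):
    """Top-down scan: move from the most to the least significant gene and
    stop before the estimated false discovery proportion exceeds the target.

    obs_stats  : (m,) observed statistics, larger = more significant
    perm_stats : (B, m) statistics recomputed under B permutations of the labels
    """
    order = np.argsort(obs_stats)[::-1]              # most significant first
    selected = []
    for k, idx in enumerate(order, start=1):
        threshold = obs_stats[idx]
        # estimated number of false discoveries at this threshold: the average,
        # over permutations, of how many permutation statistics exceed it
        est_false = np.mean((perm_stats >= threshold).sum(axis=1))
        if est_false / k > target_fdp:
            break
        selected.append(idx)
    return np.array(selected, dtype=int)
```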

2.
Objectives: Procedures for controlling the false positive rate when performing many hypothesis tests are commonplace in health and medical studies. Such procedures, most notably the Bonferroni adjustment, suffer from the problem that error rate control cannot be localized to individual tests, and that these procedures do not distinguish between exploratory and/or data-driven testing vs. hypothesis-driven testing. Instead, procedures derived from limiting false discovery rates may be a more appealing method to control error rates in multiple tests. Study Design and Setting: Controlling the false positive rate can lead to philosophical inconsistencies that can negatively impact the practice of reporting statistically significant findings. We demonstrate that the false discovery rate approach can overcome these inconsistencies and illustrate its benefit through an application to two recent health studies. Results: The false discovery rate approach is more powerful than methods like the Bonferroni procedure that control false positive rates. Controlling the false discovery rate in a study that arguably consisted of scientifically driven hypotheses found nearly as many significant results as without any adjustment, whereas the Bonferroni procedure found no significant results. Conclusion: Although still unfamiliar to many health researchers, the use of false discovery rate control in the context of multiple testing can provide a solid basis for drawing conclusions about statistical significance.
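For readers comparing the two adjustments discussed here, a generic sketch of the Bonferroni correction and the Benjamini-Hochberg FDR procedure (standard textbook versions, not code from the cited health studies):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha / m; controls the family-wise error rate."""
    p = np.asarray(pvals)
    return p <= alpha / len(p)

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # largest k with p_(k) <= (k/m) * alpha; reject the k smallest p-values
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1
        reject[order[:k]] = True
    return reject
```

Because Benjamini-Hochberg compares the i-th smallest p-value with i*alpha/m rather than every p-value with alpha/m, it typically retains far more discoveries when many hypotheses are truly non-null, which is the behaviour reported in the abstract.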

3.
Large exploratory studies are often characterized by a preponderance of true null hypotheses, with a small though multiple number of false hypotheses. Traditional multiple-test adjustments consider either each hypothesis separately, or all hypotheses simultaneously, but it may be more desirable to consider the combined evidence for subsets of hypotheses, in order to reduce the number of hypotheses to a manageable size. Previously, Zaykin et al. ([2002] Genet. Epidemiol. 22:170-185) proposed forming the product of all P-values at less than a preset threshold, in order to combine evidence from all significant tests. Here we consider a complementary strategy: form the product of the K most significant P-values. This has certain advantages for genomewide association scans: K can be chosen on the basis of a hypothesised disease model, and is independent of sample size. Furthermore, the alternative hypothesis corresponds more closely to the experimental situation where all loci have fixed effects. We give the distribution of the rank truncated product and suggest some methods to account for correlated tests in genomewide scans. We show that, under realistic scenarios, it provides increased power to detect genomewide association, while identifying a candidate set of good quality and fixed size for follow-up studies.  相似文献   
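A minimal sketch of the rank truncated product statistic, with a Monte Carlo p-value computed under independent uniform p-values; the corrections for correlated tests in genomewide scans that the paper suggests are not included here.

```python
import numpy as np

def rank_truncated_product(pvals, k):
    """log of W_k = product of the k smallest p-values; smaller = stronger evidence."""
    p = np.sort(np.asarray(pvals))[:k]
    return np.sum(np.log(p))

def rtp_pvalue(pvals, k, n_sim=10000, rng=None):
    """Monte Carlo p-value for the rank truncated product, assuming independent
    U(0,1) p-values under the global null (no correlation correction)."""
    rng = np.random.default_rng(rng)
    m = len(pvals)
    obs = rank_truncated_product(pvals, k)
    sims = np.array([rank_truncated_product(rng.uniform(size=m), k)
                     for _ in range(n_sim)])
    return np.mean(sims <= obs)
```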

4.
Multiple testing has been widely adopted for genome-wide studies such as microarray experiments. To improve the power of multiple testing, Storey (J. Royal Statist. Soc. B 2007; 69: 347-368) recently developed the optimal discovery procedure (ODP) which maximizes the number of expected true positives for each fixed number of expected false positives. However, in applying the ODP, we must estimate the true status of each significance test (null or alternative) and the true probability distribution corresponding to each test. In this article, we derive the ODP under hierarchical, random effects models and develop an empirical Bayes estimation method for the derived ODP. Our methods can effectively circumvent the estimation problems in applying the ODP presented by Storey. Simulations and applications to clinical studies of leukemia and breast cancer demonstrated that our empirical Bayes method achieved theoretical optimality and performed well in comparison with existing multiple testing procedures.  相似文献   

5.
High-throughput screening (HTS) is a large-scale hierarchical process in which a large number of chemicals are tested in multiple stages. Conventional statistical analyses of HTS studies often suffer from high testing error rates and soaring costs in large-scale settings. This article develops new methodologies for false discovery rate control and optimal design in HTS studies. We propose a two-stage procedure that determines the optimal numbers of replicates at different screening stages while simultaneously controlling the false discovery rate in the confirmatory stage subject to a constraint on the total budget. The merits of the proposed methods are illustrated using both simulated and real data. We show that, at the expense of a limited budget, the proposed screening procedure effectively controls the error rate and the design leads to improved detection power.  相似文献   

6.
We address the problem of testing whether a possibly high-dimensional vector may act as a mediator between some exposure variable and the outcome of interest. We propose a global test for mediation, which combines a global test with the intersection-union principle. We discuss theoretical properties of our approach and conduct simulation studies that demonstrate that it performs equally well or better than its competitor. We also propose a multiple testing procedure, ScreenMin, that provides asymptotic control of either familywise error rate or false discovery rate when multiple groups of potential mediators are tested simultaneously. We apply our approach to data from a large Norwegian cohort study, where we look at the hypothesis that smoking increases the risk of lung cancer by modifying the level of DNA methylation.  相似文献   
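The intersection-union principle invoked above has a simple generic form for a single mediator: the mediation null holds whenever either the exposure-mediator or the mediator-outcome association is absent, so both component nulls must be rejected and the combined p-value is the maximum of the two. A hedged sketch (the paper's global test for a high-dimensional mediator, and the ScreenMin procedure, are more involved than this):

```python
def intersection_union_pvalue(p_exposure_mediator, p_mediator_outcome):
    """Generic intersection-union combination for a single mediator: the
    composite mediation null is rejected only when both component nulls are,
    so the combined p-value is the maximum of the two component p-values."""
    return max(p_exposure_mediator, p_mediator_outcome)

# e.g. p = 0.001 for exposure -> mediator and p = 0.03 for mediator -> outcome
# gives a combined mediation p-value of 0.03
print(intersection_union_pvalue(0.001, 0.03))
```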

7.
Validation studies have been used to increase the reliability of the statistical conclusions for scientific discoveries; such studies improve the reproducibility of the findings and reduce the possibility of false positives. Here, one of the important roles of statistics is to quantify reproducibility rigorously. Two concepts were recently defined for this purpose: (i) rediscovery rate (RDR), which is the expected proportion of statistically significant findings in a study that can be replicated in the validation study and (ii) false discovery rate in the validation study (vFDR). In this paper, we aim to develop a nonparametric approach to estimate the RDR and vFDR and show an explicit link between the RDR and the FDR. Among other things, the link explains why reproducing statistically significant results even with low FDR level may be difficult. Two metabolomics datasets are considered to illustrate the application of the RDR and vFDR concepts in high‐throughput data analysis. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   

8.
The genetic basis of multiple phenotypes such as gene expression, metabolite levels, or imaging features is often investigated by testing a large collection of hypotheses, probing the existence of association between each of the traits and hundreds of thousands of genotyped variants. Appropriate multiplicity adjustment is crucial to guarantee replicability of findings, and the false discovery rate (FDR) is frequently adopted as a measure of global error. In the interest of interpretability, results are often summarized so that reporting focuses on variants discovered to be associated to some phenotypes. We show that applying FDR‐controlling procedures on the entire collection of hypotheses fails to control the rate of false discovery of associated variants as well as the expected value of the average proportion of false discovery of phenotypes influenced by such variants. We propose a simple hierarchical testing procedure that allows control of both these error rates and provides a more reliable basis for the identification of variants with functional effects. We demonstrate the utility of this approach through simulation studies comparing various error rates and measures of power for genetic association studies of multiple traits. Finally, we apply the proposed method to identify genetic variants that impact flowering phenotypes in Arabidopsis thaliana, expanding the set of discoveries.  相似文献   
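One way to make the hierarchical idea concrete is the generic two-level scheme sketched below: combine each variant's phenotype p-values (here with Simes' method), screen variants with Benjamini-Hochberg, and then test phenotypes within the selected variants at a level scaled by the selection proportion. This is an illustration of hierarchical FDR control under those assumptions, not necessarily the exact procedure proposed in the paper.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def simes(pvals):
    """Simes combination p-value for one variant across its phenotypes."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    return np.min(p * m / np.arange(1, m + 1))

def hierarchical_testing(pmat, q=0.05):
    """Two-level hierarchical testing sketch.

    pmat : (n_variants, n_phenotypes) array of association p-values
    Returns the screened variants and the within-variant rejections.
    """
    variant_p = np.apply_along_axis(simes, 1, pmat)
    selected = multipletests(variant_p, alpha=q, method="fdr_bh")[0]
    n_sel = selected.sum()
    inner = np.zeros_like(pmat, dtype=bool)
    if n_sel:
        level = q * n_sel / len(variant_p)   # account for selection at level 1
        for i in np.nonzero(selected)[0]:
            inner[i] = multipletests(pmat[i], alpha=level, method="fdr_bh")[0]
    return selected, inner
```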

9.
Pharmacovigilance spontaneous reporting systems are primarily devoted to early detection of the adverse reactions of marketed drugs. They maintain large spontaneous reporting databases (SRD) for which several automatic signalling methods have been developed. A common limitation of these methods lies in the fact that they do not provide an auto‐evaluation of the generated signals so that thresholds of alerts are arbitrarily chosen. In this paper, we propose to revisit the Gamma Poisson Shrinkage (GPS) model and the Bayesian Confidence Propagation Neural Network (BCPNN) model in the Bayesian general decision framework. This results in a new signal ranking procedure based on the posterior probability of null hypothesis of interest and makes it possible to derive with a non‐mixture modelling approach Bayesian estimators of the false discovery rate (FDR), false negative rate, sensitivity and specificity. An original data generation process that can be suited to the features of the SRD under scrutiny is proposed and applied to the French SRD to perform a large simulation study. Results indicate better performances according to the FDR for the proposed ranking procedure in comparison with the current ones for the GPS model. They also reveal identical performances according to the four operating characteristics for the proposed ranking procedure with the BCPNN and GPS models but better estimates when using the GPS model. Finally, the proposed procedure is applied to the French data. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献   

10.
We study the link between two quality measures of SNP (single nucleotide polymorphism) data in genome‐wide association (GWA) studies, that is, per SNP call rates (CR) and p‐values for testing Hardy–Weinberg equilibrium (HWE). The aim is to improve these measures by applying methods based on realized randomized p‐values, the false discovery rate and estimates for the proportion of false hypotheses. While exact non‐randomized conditional p‐values for testing HWE cannot be recommended for estimating the proportion of false hypotheses, their realized randomized counterparts should be used. P‐values corresponding to the asymptotic unconditional chi‐square test lead to reasonable estimates only if SNPs with low minor allele frequency are excluded. We provide an algorithm to compute the probability that SNPs violate HWE given the observed CR, which yields an improved measure of data quality. The proposed methods are applied to SNP data from the KORA (Cooperative Health Research in the Region of Augsburg, Southern Germany) 500 K project, a GWA study in a population‐based sample genotyped by Affymetrix GeneChip 500 K arrays using the calling algorithm BRLMM 1.4.0. We show that all SNPs with CR = 100 per cent are nearly in perfect HWE which militates in favor of the population to meet the conditions required for HWE at least for these SNPs. Moreover, we show that the proportion of SNPs not being in HWE increases with decreasing CR. We conclude that using a single threshold for judging HWE p‐values without taking the CR into account is problematic. Instead we recommend a stratified analysis with respect to CR. Copyright © 2010 John Wiley & Sons, Ltd.  相似文献   

11.
The multiplicity problem has become increasingly important in genetic studies as the capacity for high-throughput genotyping has increased. The control of False Discovery Rate (FDR) (Benjamini and Hochberg. [1995] J. R. Stat. Soc. Ser. B 57:289-300) has been adopted to address the problems of false positive control and low power inherent in high-volume genome-wide linkage and association studies. In many genetic studies, there is often a natural stratification of the m hypotheses to be tested. Given the FDR framework and the presence of such stratification, we investigate the performance of a stratified false discovery control approach (i.e. control or estimate FDR separately for each stratum) and compare it to the aggregated method (i.e. consider all hypotheses in a single stratum). Under the fixed rejection region framework (i.e. reject all hypotheses with unadjusted p-values less than a pre-specified level and then estimate FDR), we demonstrate that the aggregated FDR is a weighted average of the stratum-specific FDRs. Under the fixed FDR framework (i.e. reject as many hypotheses as possible and meanwhile control FDR at a pre-specified level), we specify a condition necessary for the expected total number of true positives under the stratified FDR method to be equal to or greater than that obtained from the aggregated FDR method. Application to a recent Genome-Wide Association (GWA) study by Maraganore et al. ([2005] Am. J. Hum. Genet. 77:685-693) illustrates the potential advantages of control or estimation of FDR by stratum. Our analyses also show that controlling FDR at a low rate, e.g. 5% or 10%, may not be feasible for some GWA studies.  相似文献   
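The stratified versus aggregated contrast is easy to sketch with an off-the-shelf Benjamini-Hochberg implementation (a generic illustration of the fixed-FDR framework; the fixed-rejection-region variant, in which the FDR is estimated rather than controlled, is not shown):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def stratified_bh(pvals, strata, alpha=0.05):
    """Benjamini-Hochberg applied separately within each stratum of hypotheses."""
    pvals = np.asarray(pvals)
    strata = np.asarray(strata)
    reject = np.zeros(len(pvals), dtype=bool)
    for s in np.unique(strata):
        idx = np.nonzero(strata == s)[0]
        reject[idx] = multipletests(pvals[idx], alpha=alpha, method="fdr_bh")[0]
    return reject

def aggregated_bh(pvals, alpha=0.05):
    """Single Benjamini-Hochberg analysis treating all hypotheses as one stratum."""
    return multipletests(np.asarray(pvals), alpha=alpha, method="fdr_bh")[0]
```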

12.
Inference based on large sample results can be highly inaccurate if applied to logistic regression with small data sets. Furthermore, maximum likelihood estimates for the regression parameters will on occasion not exist, and large sample results will be invalid. Exact conditional logistic regression is an alternative that can be used whether or not maximum likelihood estimates exist, but can be overly conservative. This approach also requires grouping the values of continuous variables corresponding to nuisance parameters, and inference can depend on how this is done. A simple permutation test of the hypothesis that a regression parameter is zero can overcome these limitations. The variable of interest is replaced by the residuals from a linear regression of it on all other independent variables. Logistic regressions are then done for permutations of these residuals, and a p-value is computed by comparing the resulting likelihood ratio statistics to the original observed value. Simulations of binary outcome data with two independent variables that have binary or lognormal distributions yield the following results: (a) in small data sets consisting of 20 observations, type I error is well-controlled by the permutation test, but poorly controlled by the asymptotic likelihood ratio test; (b) in large data sets consisting of 1000 observations, performance of the permutation test appears equivalent to that of the asymptotic test; and (c) in small data sets, the p-value for the permutation test is usually similar to the mid-p-value for exact conditional logistic regression.  相似文献   
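A minimal sketch of the residual-permutation test described above, using statsmodels for the residualizing linear regression and the logistic fits; the function and variable names are illustrative, not from the paper.

```python
import numpy as np
import statsmodels.api as sm

def permutation_lr_test(y, x_interest, X_other, n_perm=1999, rng=None):
    """Permutation test of H0: the coefficient of x_interest is zero in a
    logistic regression of y on (x_interest, X_other).

    x_interest is replaced by the residuals from a linear regression of it on
    the other independent variables; permutations of these residuals provide
    the reference distribution of the likelihood-ratio statistic.
    """
    rng = np.random.default_rng(rng)
    X_other = sm.add_constant(np.asarray(X_other))
    resid = sm.OLS(np.asarray(x_interest), X_other).fit().resid

    reduced_llf = sm.Logit(y, X_other).fit(disp=0).llf   # model without x_interest

    def lr_stat(r):
        full = sm.Logit(y, np.column_stack([X_other, r])).fit(disp=0)
        return 2 * (full.llf - reduced_llf)

    obs = lr_stat(resid)
    perm = np.array([lr_stat(rng.permutation(resid)) for _ in range(n_perm)])
    return (1 + np.sum(perm >= obs)) / (n_perm + 1)      # permutation p-value
```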

13.
The aim of this paper is to generalize permutation methods for multiple testing adjustment of significant partial regression coefficients in a linear regression model used for microarray data. Using a permutation method outlined by Anderson and Legendre [1999] and the permutation P-value adjustment from Simon et al. [2004], the significance of disease related gene expression will be determined and adjusted after accounting for the effects of covariates, which are not restricted to be categorical. We apply these methods to a microarray dataset containing confounders and illustrate the comparisons between the permutation-based adjustments and the normal theory adjustments. The application of a linear model is emphasized for data containing confounders and the permutation-based approaches are shown to be better suited for microarray data.  相似文献   

14.
Tong T, Zhao H. Statistics in Medicine 2008, 27(11): 1960-1972
One major goal in microarray studies is to identify genes having different expression levels across different classes/conditions. In order to achieve this goal, a study needs to have an adequate sample size to ensure the desired power. Owing to the importance of this topic, a number of approaches to sample size calculation have been developed. However, due to the cost and/or experimental difficulties in obtaining sufficient biological materials, it might be difficult to attain the required sample size. In this article, we address more practical questions for assessing power and false discovery rate (FDR) for a fixed sample size. The relationships between power, sample size and FDR are explored. We also conduct simulations and a real data study to evaluate the proposed findings.  相似文献   
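The kind of relationship the article explores can be illustrated with a standard two-group mixture approximation (a generic back-of-the-envelope identity, not the authors' exact calculations): with a proportion pi0 of truly null genes, a per-gene significance level alpha, and a given average power for the non-null genes, the expected FDR is roughly pi0*alpha / (pi0*alpha + (1 - pi0)*power), so raising power through a larger sample size lowers the FDR at a fixed alpha.

```python
def approx_fdr(pi0, alpha, power):
    """Approximate expected FDR under a simple two-group mixture:
        FDR ~ pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)
    A generic illustration of how FDR, per-test level and power are linked."""
    return (pi0 * alpha) / (pi0 * alpha + (1 - pi0) * power)

# e.g. 90% null genes, per-gene alpha = 0.01, average power 0.8  ->  FDR ~ 0.10;
# a larger sample size raises the power and drives the FDR down at fixed alpha.
print(round(approx_fdr(0.9, 0.01, 0.8), 3))
```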

15.
Generalized additive models (GAMs) with bivariate smoothers are frequently used to map geographic disease risks in epidemiology studies. A challenge in identifying health disparities has been the lack of intuitive and computationally feasible methods to assess whether the pattern of spatial effects varies over time. In this research, we accommodate time-stratified smoothers into the GAM framework to estimate time-specific spatial risk patterns while borrowing information from confounding effects across time. A backfitting algorithm for model estimation is proposed along with a permutation testing framework for assessing temporal heterogeneity of geospatial risk patterns across two or more time points. Simulation studies show that our proposed permuted mean squared difference (PMSD) test performs well with respect to type I error and power in various settings when compared with existing methods. The proposed model and PMSD test are used to assess geospatial risk patterns of patent ductus arteriosus (PDA) in the state of Massachusetts over 2003-2009. We show that there is variation over time in spatial patterns of PDA risk, adjusting for other known risk factors, suggesting the presence of potential time-varying and space-related risk factors beyond those adjusted for.

16.
With recent advances in genomewide microarray technologies, whole-genome association (WGA) studies have aimed at identifying susceptibility genes for complex human diseases using hundreds of thousands of single nucleotide polymorphisms (SNPs) genotyped at the same time. In this context and to take into account multiple testing, false discovery rate (FDR)-based strategies are now used frequently. However, a critical aspect of these strategies is that they are applied to a collection or a family of hypotheses and, thus, critically depend on these precise hypotheses. We investigated how modifying the family of hypotheses to be tested affected the performance of FDR-based procedures in WGA studies. We showed that FDR-based procedures performed more poorly when excluding SNPs with high prior probability of being associated. Results of simulation studies mimicking WGA studies according to three scenarios are reported, and show the extent to which SNP elimination (family contraction) prior to the analysis impairs the performance of FDR-based procedures. To illustrate this situation, we used the data from a recent WGA study on type-1 diabetes (Clayton et al. [2005] Nat. Genet. 37:1243-1246) and report the results obtained when SNPs located inside the human leukocyte antigen region were excluded or retained. Based on our findings, excluding markers with high prior probability of being associated cannot be recommended for the analysis of WGA data with FDR-based strategies.

17.
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross‐validated log‐likelihood (max‐cvl). However, this method has often a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness‐of‐fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to the reduction of the FDR without large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one‐standard‐error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited raise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   

18.
Recent work on prospective power and sample size calculations for analyses of high‐dimension gene expression data that control the false discovery rate (FDR) focuses on the average power over all the truly nonnull hypotheses, or equivalently, the expected proportion of nonnull hypotheses rejected. Using another characterization of power, we adapt Efron's ([2007] Ann Stat 35:1351–1377) empirical Bayes approach to post hoc power calculation to develop a method for prospective calculation of the “identification power” for individual genes. This is the probability that a gene with a given true degree of association with clinical outcome or state will be included in a set within which the FDR is controlled at a specified level. An example calculation using proportional hazards regression highlights the effects of large numbers of genes with little or no association on the identification power for individual genes with substantial association.  相似文献   

19.
Analyzing safety data from clinical trials to detect safety signals worth further examination involves testing multiple hypotheses, one for each observed adverse event (AE) type. These hypotheses have a hierarchical structure owing to the classification of the AEs into system organ classes, and the AEs are also likely correlated. Many approaches have been proposed to identify safety signals under the multiple testing framework while aiming to control the false discovery rate (FDR). FDR control concerns the expectation of the false discovery proportion (FDP). In practice, control of the actual random variable FDP could be more relevant and has recently drawn much attention. In this paper, we propose a two-stage procedure for safety signal detection with direct control of the FDP, through a permutation-based approach for screening groups of AEs and a permutation-based approach for constructing simultaneous upper bounds on the false discovery proportion. Our simulation studies show that this new approach controls the FDP. We demonstrate our approach using data sets derived from a drug clinical trial.

20.
Lipidomics is an emerging field of science that holds the potential to provide a readout of biomarkers for an early detection of a disease. Our objective was to identify an efficient statistical methodology for lipidomics—especially in finding interpretable and predictive biomarkers useful for clinical practice. In two case studies, we address the need for data preprocessing for regression modeling of a binary response. These are based on a normalization step, in order to remove experimental variability, and on a multiple imputation step, to make the full use of the incompletely observed data with potentially informative missingness. Finally, by cross‐validation, we compare stepwise variable selection to penalized regression models on stacked multiple imputed data sets and propose the use of a permutation test as a global test of association. Our results show that, depending on the design of the study, these data preprocessing methods modestly improve the precision of classification, and no clear winner among the variable selection methods is found. Lipidomics profiles are found to be highly important predictors in both of the two case studies. Copyright © 2014 John Wiley & Sons, Ltd.  相似文献   
