Similar Documents
20 similar documents found (search time: 0 ms)
1.
Meta-analyses of genome-wide association studies require numerous study partners to conduct pre-defined analyses, and thus simple but efficient analysis plans. Potential differences between strata (e.g. men and women) are usually ignored, but the question often arises whether stratified analyses help to unravel the genetics of a phenotype or merely increase the burden of analyses. To decide whether or not to stratify, we compare general analytical power computations for the overall analysis with those of stratified analyses, considering quantitative trait analyses and two strata. We also relate the stratification problem to interaction modeling and illustrate the theoretical considerations with examples from obesity and renal function genetics. We demonstrate that the overall analysis has better power than stratified analyses as long as the signals are pronounced in both strata with consistent effect direction. Stratified analyses are advantageous for signals with zero (or very small) effect in one stratum and for signals with opposite effect directions in the two strata. Applying the joint test for a main SNP effect and SNP-stratum interaction beats both the overall and the stratified analyses in terms of power, but involves more complex models. In summary, we recommend employing stratified analyses or the joint test to better understand the potential of strata-specific signals with opposite effect directions. Only after systematic genome-wide searches for opposite-effect-direction loci have been conducted will we know whether such signals exist and to what extent stratified analyses can detect loci that are otherwise missed.
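The power trade-off in this abstract is easy to sketch numerically. The snippet below compares a pooled two-sided z-test with Bonferroni-corrected per-stratum tests for a hypothetical SNP whose effects have opposite directions in the two strata; the effect sizes, sample sizes, and thresholds are invented for illustration, not taken from the paper.

```python
from scipy.stats import norm

def power_two_sided(ncp, alpha):
    """Power of a two-sided z-test with non-centrality parameter ncp."""
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - ncp) + norm.cdf(-z - ncp)

alpha = 5e-8                 # genome-wide significance level
n = 50_000                   # individuals per stratum (hypothetical)
b1, b2 = 0.02, -0.02         # standardized per-sample effects, opposite directions

ncp1, ncp2 = abs(b1) * n**0.5, abs(b2) * n**0.5
ncp_overall = abs(b1 + b2) / 2 * (2 * n)**0.5   # pooled: effects cancel

p_overall = power_two_sided(ncp_overall, alpha)
# Stratified: two Bonferroni-corrected tests; success if either stratum hits.
p_strat = 1 - (1 - power_two_sided(ncp1, alpha / 2)) \
            * (1 - power_two_sided(ncp2, alpha / 2))
print(f"overall power: {p_overall:.2e}, stratified power: {p_strat:.3f}")
```

With opposite effect directions the pooled non-centrality collapses to zero, so the stratified analysis wins, matching the abstract's conclusion; flipping `b2` to `+0.02` reverses the comparison.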

2.
Improving power in genome-wide association studies: weights tip the scale (cited 3 times: 0 self-citations, 3 by others)
The potential of genome-wide association analyses can only be realized when they have power to detect signals despite the detrimental effect of multiple testing on power. We develop a weighted multiple testing procedure that facilitates the input of prior information in the form of groupings of tests. For each group, a weight is estimated from the observed test statistics within the group. Differentially weighting groups improves the power to detect signals in likely groupings. The advantage of the grouped-weighting concept over fixed weights based on prior information is that it often leads to an increase in power even if many of the groupings are not correlated with the signal. Being data dependent, the procedure is remarkably robust to poor choices of groupings. Power is typically improved if one (or more) of the groups clusters multiple tests with signals, yet little power is lost when the groupings are totally random. If there is no apparent signal in a group, relative to a group that appears to have several tests with signals, the former group will be down-weighted relative to the latter. If no groups show apparent signals, the weights will be approximately equal. The only restriction on the procedure is that the number of groups be small relative to the total number of tests performed.
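The grouped-weighting idea can be illustrated with a toy weighted Bonferroni procedure. The specific weighting rule below (excess of each group's mean chi-square over its null expectation, normalized to average 1) is a simplified stand-in for the authors' estimator; all numbers are invented.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
m, g = 10_000, 10                        # m tests split into g groups
groups = np.repeat(np.arange(g), m // g)
stats = rng.chisquare(df=1, size=m)      # null 1-df chi-square statistics
stats[:30] = rng.noncentral_chisquare(1, 20, 30)   # signals land in group 0

pvals = chi2.sf(stats, df=1)

# Weight each group by the excess of its mean statistic over the null
# expectation (1 for a 1-df chi-square); normalize so the weights average 1,
# which preserves the overall type I error budget.
excess = np.array([max(stats[groups == k].mean() - 1.0, 0.01)
                   for k in range(g)])
w = excess / excess.mean()

alpha = 0.05 / m                         # flat Bonferroni threshold
hits_flat = int(np.sum(pvals < alpha))
hits_weighted = int(np.sum(pvals < alpha * w[groups]))
print(f"group weights: {np.round(w, 2)}")
print(f"rejections flat: {hits_flat}, weighted: {hits_weighted}")
```

Group 0, which contains the signals, earns a weight well above 1 and hence a more lenient threshold, while the near-null groups get down-weighted, exactly the behaviour the abstract describes.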

3.
Case-control association studies often collect information on secondary phenotypes from their subjects. Reusing these data to study the association between genes and secondary phenotypes provides an attractive and cost-effective approach that can lead to the discovery of new genetic associations. A number of approaches have been proposed, including simple and computationally efficient ad hoc methods that ignore ascertainment or stratify on case-control status. Justification for these approaches relies on the assumption of no covariates and on the correct specification of the primary disease model as a logistic model. Both might not be true in practice, for example, in the presence of population stratification or when the primary disease model follows a probit model. In this paper, we investigate the validity of ad hoc methods in the presence of covariates and possible disease-model misspecification. We show that when taking an ad hoc approach, it may be desirable to include covariates that affect the primary disease in the secondary phenotype model, even though these covariates are not necessarily associated with the secondary phenotype. We also show that when the disease is rare, ad hoc methods can lead to severely biased estimation and inference if the true disease model follows a probit model instead of a logistic model. Our results are justified theoretically and via simulations. Applied to a real-data analysis of genetic associations with cigarette smoking, ad hoc methods collectively identified highly significant single nucleotide polymorphisms in over 10 genes that had been identified in previous studies of smoking cessation.

4.
Association mapping based on family studies can identify genes that influence complex human traits while providing protection against population stratification. Because no gene is likely to have a very large effect on a complex trait, most family studies have limited power. Among the commonly used family-based tests of association for quantitative traits, the quantitative transmission-disequilibrium test (QTDT) based on the variance-components model is the most flexible and most powerful. This method assumes that the trait values are normally distributed. Departures from normality can inflate the type I error and reduce the power. Although the family-based association tests (FBAT) and pedigree disequilibrium tests (PDT) do not require normal traits, nonnormality can also result in loss of power. In many cases, approximate normality can be achieved by transforming the trait values. However, the true transformation is unknown, and incorrect transformations may compromise the type I error and power. We propose a novel class of association tests for arbitrarily distributed quantitative traits that allows the true transformation function to be completely unspecified and empirically estimated from the data. Extensive simulation studies showed that the new methods provide accurate control of the type I error and can be substantially more powerful than existing methods. We applied the new methods to the Collaborative Study on the Genetics of Alcoholism and discovered significant association of single nucleotide polymorphism (SNP) tsc0022400 on chromosome 7 with the quantitative electrophysiological phenotype TTTH1, which was not detected by any existing method. We have implemented the new methods in a freely available computer program.
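The paper leaves the transformation unspecified and estimates it from the data; a simpler, widely used device in the same spirit is the rank-based inverse normal transform, sketched here as a baseline for intuition (this is not the authors' estimator).

```python
import numpy as np
from scipy.stats import norm, skew

def rank_inverse_normal(y, c=0.5):
    """Rank-based inverse normal transform with offset c:
    maps rank r of n values to Phi^{-1}((r - c) / (n - 2c + 1));
    c = 0.5 gives (r - 0.5) / n.  No special tie handling."""
    r = np.argsort(np.argsort(y)) + 1        # ranks 1..n
    return norm.ppf((r - c) / (len(y) - 2 * c + 1))

rng = np.random.default_rng(0)
y = rng.exponential(size=1000)               # strongly right-skewed trait
z = rank_inverse_normal(y)
print(f"skewness before: {skew(y):.2f}, after: {skew(z):.2f}")
```

The transformed values follow normal quantiles by construction, so skewness (and kurtosis) problems disappear regardless of the original trait distribution.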

5.
Multiple linear regression is commonly used to test for association between genetic variants and continuous traits and to estimate genetic effect sizes. Confounding variables are controlled for by including them as additional covariates. An alternative technique that is increasingly used is to regress covariates out of the raw trait and then perform regression analysis with only the genetic variants included as predictors. In the case of single-variant analysis, this adjusted trait regression (ATR) technique is known to be less powerful than the traditional technique when the genetic variant is correlated with the covariates. We extend previous results for single-variant tests by deriving exact relationships between the single-variant score, Wald, likelihood-ratio, and F test statistics and their ATR analogs. We also derive the asymptotic power of ATR analogs of the multiple-variant score and burden tests. We show that the maximum power loss of the ATR analog of the multiple-variant score test is completely characterized by the canonical correlations between the set of genetic variants and the set of covariates. Further, we show that for both single- and multiple-variant tests, the power loss for ATR analogs increases with increasing stringency of Type I error control and increasing correlation (or canonical correlations) between the genetic variant (or multiple variants) and the covariates. We recommend using ATR only when the maximum canonical correlation between variants and covariates is low, as is typically true.
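The power loss of ATR can be seen in a small simulation. Everything here is hypothetical (effect sizes, MAF, correlation structure); the point is only that residualizing the trait on a covariate that is correlated with the genotype attenuates the genotype's test statistic relative to the joint model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
g = rng.binomial(2, 0.3, n).astype(float)    # genotype dosages, MAF 0.3
x = 0.5 * g + rng.normal(size=n)             # covariate correlated with genotype
y = 0.1 * g + 0.5 * x + rng.normal(size=n)   # trait

def ols_t_last(y, *cols):
    """OLS t-statistic for the last predictor; intercept included."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    sigma2 = r @ r / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[-1] / np.sqrt(cov[-1, -1])

def residualize(y, *cols):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

t_full = ols_t_last(y, x, g)                 # covariate in the model
t_atr = ols_t_last(residualize(y, x), g)     # adjusted trait regression
print(f"joint-model t = {t_full:.2f}, ATR t = {t_atr:.2f}")
```

Residualizing `y` on `x` removes part of the genotype signal that `x` carries, which is exactly the attenuation the abstract quantifies through canonical correlations.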

6.
The genetic dissection of quantitative traits, or endophenotypes, usually involves genetic linkage or association analysis in pedigrees and subsequent fine-mapping association analysis in the population. The ascertainment procedure for quantitative traits often results in unequal variance of observations. For example, some phenotypes may be clinically measured whilst others come from self-reports, or phenotypes may be the average of multiple measures but with the number of measurements varying. The resulting heterogeneity of variance poses no real problem for analysis, as long as it is properly modelled and thereby taken into account. However, if statistical significance is determined using an empirical permutation procedure, it is not obvious what the units of sampling are. We investigated a number of permutation approaches in a simulation study of an association analysis between a quantitative trait and a single nucleotide polymorphism. Our simulations were designed such that we knew the true p-value of the test statistics. The permutation methods were compared on the basis of the regression of true on empirical p-values and the precision of the empirical p-values. We show that the best procedure involves an implicit adjustment of the original data for the effects in the model before permutation, and that other methods, some of which seemed appropriate a priori, are relatively biased. Genet. Epidemiol. 33:710–716, 2009. © 2009 Wiley-Liss, Inc.
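The recommended "adjust, then permute" idea can be sketched as follows: the trait is residualized on the modelled effects, and the residuals, rather than the raw observations, are treated as the exchangeable units. This is a generic residual-permutation sketch with invented data, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 800
x = rng.normal(size=n)                       # modelled nuisance covariate
g = rng.binomial(2, 0.2, n).astype(float)    # SNP genotype, MAF 0.2
y = 0.8 * x + rng.normal(size=n)             # trait: covariate effect, null SNP

def residualize(y, x):
    X = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ b

def stat(y, g):
    """Squared trait-genotype correlation as the association statistic."""
    return np.corrcoef(y, g)[0, 1] ** 2

# Adjust the trait for the modelled effects first, then permute the
# adjusted values rather than the raw observations.
y_adj = residualize(y, x)
obs = stat(y_adj, g)
perm = np.array([stat(rng.permutation(y_adj), g) for _ in range(999)])
p_emp = (1 + np.sum(perm >= obs)) / (1 + len(perm))
print(f"empirical p-value: {p_emp:.3f}")
```

Because the SNP truly has no effect here, the empirical p-value should be roughly uniform; permuting the raw `y` instead would shuffle the covariate effect into the null distribution and bias it.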

7.
Genome-wide association (GWA) studies have been extremely successful in identifying novel loci contributing effects to a wide range of complex human traits. However, despite this success, the joint marginal effects of these loci account for only a small proportion of the heritability of these traits. Interactions between variants in different loci are not typically modelled in traditional GWA analysis, but may account for some of the missing heritability in humans, as they do in other model organisms. One of the key challenges in performing gene-gene interaction studies is the computational burden of the analysis. We propose a two-stage interaction analysis strategy to address this challenge in the context of both quantitative traits and dichotomous phenotypes. We have performed simulations to demonstrate that the two-stage strategy incurs only a negligible loss in power while minimizing the computational burden. Application of this interaction strategy to GWA studies of type 2 diabetes (T2D) and obesity highlights potential novel signals of association, which warrant follow-up in larger cohorts.
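A minimal version of a two-stage interaction scan might look like the following: a cheap marginal screen at stage 1, then product-term tests only among the retained SNPs at stage 2. The filtering rule, statistics, and parameters are illustrative assumptions, not the authors' exact strategy.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n, m = 1000, 200
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
# Hypothetical trait: marginal effects at SNPs 0 and 1 plus an interaction.
y = 0.3 * G[:, 0] + 0.3 * G[:, 1] + 0.4 * G[:, 0] * G[:, 1] \
    + rng.normal(size=n)

# Stage 1: cheap marginal screen; keep only the top-k SNPs.
marg_p = np.array([pearsonr(G[:, j], y)[1] for j in range(m)])
k = 20
keep = sorted(np.argsort(marg_p)[:k])

# Stage 2: test products only among the k*(k-1)/2 retained pairs instead
# of all m*(m-1)/2.  (Correlating the product term with the trait is a
# crude stand-in for a proper interaction regression.)
best_p, best_pair = min(
    (pearsonr(G[:, i] * G[:, j], y)[1], (int(i), int(j)))
    for i, j in combinations(keep, 2))
print(f"pairs tested: {k*(k-1)//2} of {m*(m-1)//2}; "
      f"top pair {best_pair}, p = {best_p:.1e}")
```

Here stage 2 tests 190 pairs instead of 19,900, which is the computational saving the strategy is after; the abstract's simulations address how little power this filtering costs.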

8.
In case-control studies of unrelated subjects, gene-based hypothesis tests consider whether any tested feature in a candidate gene (single nucleotide polymorphisms (SNPs), haplotypes, or both) is associated with disease. Standard statistical tests are available that control the false-positive rate at the nominal level over all polymorphisms considered. However, more powerful tests can be constructed that use permutation resampling to account for correlations between polymorphisms and test statistics. A key question is whether the gain in power is large enough to justify the computational burden. We compared the computationally simple Simes Global Test to the min P test, which considers the permutation distribution of the minimum p-value from marginal tests of each SNP. In simulation studies incorporating empirical haplotype structures in 15 genes, the min P test controlled the type I error and was modestly more powerful than the Simes test, by 2.1 percentage points on average. When disease susceptibility was conferred by a haplotype, the min P test sometimes, but not always, under-performed haplotype analysis. A resampling-based omnibus test combining the min P and haplotype frequency tests controlled the type I error and closely tracked the more powerful of the two component tests. This test achieved consistent gains in power (5.7 percentage points on average) compared to a simple Bonferroni test of the Simes and haplotype analyses. Using data from the Shanghai Biliary Tract Cancer Study, the advantages of the newly proposed omnibus test were apparent in a population-based study of bile duct cancer and polymorphisms in the prostaglandin-endoperoxide synthase 2 (PTGS2) gene.
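Both tests in this comparison are straightforward to prototype. The sketch below implements the Simes global p-value and a permutation min P test on simulated genotypes; the disease model and all parameters are invented, and a simple z-test on allele dosages stands in for whatever marginal test one would actually use.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n_case, n_ctrl, n_snp = 500, 500, 8

def simes(p):
    """Simes global p-value: min over i of m * p_(i) / i."""
    p = np.sort(p)
    return float(min(1.0, (len(p) * p / np.arange(1, len(p) + 1)).min()))

def marginal_p(G, status):
    """Two-sample z-test on allele dosage, one p-value per SNP."""
    g1, g0 = G[status == 1], G[status == 0]
    se = np.sqrt(g1.var(axis=0) / len(g1) + g0.var(axis=0) / len(g0))
    z = (g1.mean(axis=0) - g0.mean(axis=0)) / se
    return 2 * norm.sf(np.abs(z))

status = np.repeat([1, 0], [n_case, n_ctrl])
G = rng.binomial(2, 0.3, size=(n_case + n_ctrl, n_snp)).astype(float)
G[status == 1, 0] += rng.binomial(1, 0.3, n_case)  # crude case enrichment, SNP 0

p_simes = simes(marginal_p(G, status))

# min P test: permutation distribution of the smallest marginal p-value,
# which automatically accounts for correlation between the SNP tests.
obs = marginal_p(G, status).min()
perm = np.array([marginal_p(G, rng.permutation(status)).min()
                 for _ in range(499)])
p_minp = (1 + np.sum(perm <= obs)) / (1 + len(perm))
print(f"Simes p = {p_simes:.1e}, min P p = {p_minp:.3f}")
```

The min P p-value is floored at 1/(B+1) for B permutations, which is why large permutation counts are needed for gene-based tests at stringent thresholds; Simes needs no resampling at all, which is its computational appeal.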

9.
In anticipation of the availability of next-generation sequencing data, there is increasing interest in investigating association between complex traits and rare variants (RVs). In contrast to association studies of common variants (CVs), due to the low frequencies of RVs, common wisdom suggests that existing statistical tests for CVs might not work, motivating the recent development of several new tests for analyzing RVs, most of which are based on the idea of pooling/collapsing RVs. However, there is a lack of evaluations of, and thus guidance on the use of, existing tests. Here we provide a comprehensive comparison of various statistical tests using simulated data. We consider both independent and correlated rare mutations, and representative tests for both CVs and RVs. As expected, if there are no or few non-causal (i.e. neutral or non-associated) RVs in a locus of interest while the effects of causal RVs on the trait are all (or mostly) in the same direction (i.e. either protective or deleterious, but not both), then the simple pooled association tests (without selecting RVs and their association directions) and a new test called kernel-based adaptive clustering (KBAC) perform similarly and are most powerful; KBAC is more robust than simple pooled association tests in the presence of non-causal RVs. However, as the number of non-causal RVs increases and/or in the presence of opposite association directions, the winners are two methods originally proposed for CVs and a new test called the C-alpha test proposed for RVs, each of which can be regarded as testing on a variance component in a random-effects model. Interestingly, several methods based on sequential model selection (i.e. selecting causal RVs and their association directions), including two new methods proposed here, perform robustly and often have statistical power between those of the above two classes. Genet. Epidemiol. 35:606–619, 2011. © 2011 Wiley Periodicals, Inc.
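A minimal pooled (burden) test, the baseline that many of the compared RV methods build on, can be written in a few lines. The simulated architecture below (all causal effects deleterious and equal, plus some neutral variants) is exactly the favourable setting the abstract describes for pooling; all numbers are invented.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n, m = 2000, 20
mafs = rng.uniform(0.005, 0.02, size=m)          # rare-variant frequencies
G = rng.binomial(2, mafs, size=(n, m)).astype(float)

# Favourable setting for pooling: the first 10 variants are causal, all
# deleterious with equal effects; the remaining 10 are neutral.
y = 0.8 * G[:, :10].sum(axis=1) + rng.normal(size=n)

# Simple pooled (burden) test: collapse the locus to one rare-allele count
# per person and test that count against the trait.
burden = G.sum(axis=1)
r, p_burden = pearsonr(burden, y)

# Contrast: best single-variant p-value, Bonferroni-corrected over m tests.
p_single = min(pearsonr(G[:, j], y)[1] for j in range(m)) * m
print(f"burden p = {p_burden:.1e}, best single-variant p "
      f"(corrected) = {min(p_single, 1.0):.1e}")
```

When effect directions are mixed or many variants are neutral, this collapsed count washes out the signal, which is exactly why the variance-component-style tests (C-alpha and the CV methods mentioned above) take over in those settings.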

10.
We develop novel statistical tests for transmission disequilibrium testing (tests of linkage in the presence of association) for quantitative traits using parents and offspring. These joint tests utilize information in both the covariance (or, more generally, dependency) between genotype and phenotype and the marginal distribution of genotype. Using computer simulation, we test the validity (Type I error rate control) and power of the proposed methods for additive, dominant, and recessive modes of inheritance; locus-specific trait heritabilities of 0.05, 0.1, and 0.2; allele frequencies of P = 0.2 and 0.4; and sample sizes of 500, 200, and 100 trios. Both random sampling and extreme sampling schemes were investigated. A multinomial logistic joint test provides the highest overall power irrespective of sample size, allele frequency, heritability, and mode of inheritance.

11.
Meta-analysis has become a key component of well-designed genetic association studies due to the boost in statistical power achieved by combining results across multiple samples of individuals and the need to validate observed associations in independent studies. Meta-analyses of genetic association studies based on multiple SNPs and traits are subject to the same multiple testing issues as single-sample studies, but it is often difficult to adjust accurately for the multiple tests. Procedures such as Bonferroni may control the type-I error rate but will generally provide an overly harsh correction if SNPs or traits are correlated. Depending on study design, availability of individual-level data, and computational requirements, permutation testing may not be feasible in a meta-analysis framework. In this article, we present methods for adjusting for multiple correlated tests under several study designs commonly employed in meta-analyses of genetic association tests. Our methods are applicable to both prospective meta-analyses in which several samples of individuals are analyzed with the intent to combine results, and retrospective meta-analyses, in which results from published studies are combined, including situations in which (1) individual-level data are unavailable, and (2) different sets of SNPs are genotyped in different studies due to random missingness or two-stage design. We show through simulation that our methods accurately control the rate of type I error and achieve improved power over multiple testing adjustments that do not account for correlation between SNPs or traits.
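When permutation is infeasible, one common lightweight alternative, shown here purely as an illustration and not as the authors' method, is to estimate an effective number of independent tests from the eigenvalues of the marker correlation matrix (Li & Ji-style) and Bonferroni-correct by that number instead.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 500, 50
# Correlated markers via a shared latent factor (a crude stand-in for LD).
latent = rng.normal(size=(n, 1))
G = ((latent + rng.normal(size=(n, m))) > 0.5).astype(float)

R = np.corrcoef(G, rowvar=False)
eig = np.clip(np.linalg.eigvalsh(R), 0.0, None)

# Effective number of independent tests: each eigenvalue contributes its
# fractional part, plus 1 whenever it is at least 1 (Li & Ji 2005 style).
m_eff = float(np.sum((eig >= 1) + (eig - np.floor(eig))))

print(f"nominal tests: {m}, effective tests: {m_eff:.1f}")
print(f"Bonferroni 0.05/{m} = {0.05 / m:.2e}; "
      f"adjusted 0.05/m_eff = {0.05 / m_eff:.2e}")
```

The stronger the correlation between markers, the further `m_eff` falls below `m`, and the less harsh the corrected threshold becomes, which is the power gain the abstract targets.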

12.
We applied sib-pair and association methods to a GAW data set of nuclear families with quantitative traits. Our approaches included 1) preliminary statistical studies including correlations and linear regressions, 2) sib-pair methods, and 3) association studies. We used a single data set to screen for linkage and association and, subsequently, additional data sets to confirm the preliminary results. Using this sequential approach, sib-pair analysis provided evidence for the genes influencing Q1, Q2, and Q4. We correctly predicted MG1 for Q1, MG2 for Q2, and MG4 for Q4. We did not find any false positives using this approach. Association studies identified chromosomes 8 and 9 as associated with Q4; however, these are assumed to be false positives, as no associations were modeled into the data. © 1997 Wiley-Liss, Inc.

13.
Qin H, Zhu X. Genetic Epidemiology 2012; 36(3): 235–243.
When dense markers are available, one can interrogate almost every common variant across the genome via imputation and single nucleotide polymorphism (SNP) tests, which has become routine in current genome-wide association studies (GWASs). As a complement, admixture mapping exploits the long-range linkage disequilibrium (LD) generated by admixture between genetically distinct ancestral populations. It is therefore questionable whether admixture mapping analysis is still necessary for detecting disease-associated variants in admixed populations. We argue that admixture mapping is able to reduce the burden of massive comparisons in GWASs; it can therefore be a powerful tool to locate disease variants with substantial allele frequency differences between ancestral populations. In this report we studied a two-stage approach, in which candidate regions are defined by conducting admixture mapping at stage 1, and single-SNP association tests follow at stage 2 within the candidate regions defined at stage 1. We first established, by simulation, the genome-wide significance levels corresponding to the criteria used to define the candidate regions at stage 1. We next compared the power of the two-stage approach with direct association analysis. Our simulations suggest that the two-stage approach can be more powerful than the standard genome-wide association analysis when the allele frequency difference of a causal variant between the ancestral populations is larger than 0.4. Our conclusion is consistent with a theoretical prediction by Risch and Tang ([2006] Am J Hum Genet 79:S254). Surprisingly, our study also suggests that power can be improved when we use less strict criteria to define the candidate regions at stage 1.

14.
We propose a cost-effective two-stage approach to investigate gene-disease associations when testing a large number of candidate markers using a case-control design. Under this approach, all the markers are genotyped and tested at stage 1 using a subset of affected cases and unaffected controls, and the most promising markers are genotyped on the remaining individuals and tested using all the individuals at stage 2. The sample size at stage 1 is chosen such that the power to detect the true markers of association is 1 − β1 at significance level α1. The most promising markers are tested at significance level α2 at stage 2. In contrast, a one-stage approach would evaluate and test all the markers on all the cases and controls to identify the markers significantly associated with the disease. The goal is to determine the two-stage parameters (α1, β1, α2) that minimize the cost of the study such that the desired overall significance is α and the desired power is close to 1 − β, the power of the one-stage approach. We provide analytic formulae to estimate the two-stage parameters. The properties of the two-stage approach are evaluated under various parametric configurations and compared with those of the corresponding one-stage approach. The optimal two-stage procedure does not depend on the signal of the markers associated with the disease. Further, when there is a large number of markers, the optimal procedure is not substantially influenced by the total number of markers associated with the disease. The results show that, compared to a one-stage approach, a two-stage procedure typically halves the cost of the study.
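The cost accounting behind the "typically halves the cost" conclusion is simple arithmetic, sketched below with invented numbers: stage 1 genotypes all M markers on a fraction of the subjects, and stage 2 genotypes only the ~α1·M promoted markers on the rest (per-genotype cost assumed constant; the optimal parameter choice is what the paper's analytic formulae provide).

```python
# Genotyping cost of a two-stage design relative to one-stage.
M, n = 100_000, 2000            # markers; total cases + controls (hypothetical)

def relative_cost(alpha1, frac_stage1):
    """Two-stage genotyping cost divided by one-stage cost (M * n).
    alpha1: fraction of markers promoted to stage 2;
    frac_stage1: fraction of subjects genotyped at stage 1."""
    n1 = frac_stage1 * n
    cost_two = M * n1 + alpha1 * M * (n - n1)
    return cost_two / (M * n)

for alpha1 in (0.001, 0.01, 0.1):
    print(f"alpha1 = {alpha1:5}: relative cost = "
          f"{relative_cost(alpha1, 0.5):.3f}")
```

With half the subjects at stage 1 and a promotion rate of 1% or less, the total cost sits just above 50% of the one-stage design, matching the abstract's headline figure; note the marker count M cancels out of the ratio.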

15.
Linkage disequilibrium mapping of quantitative traits is a powerful method for dissecting the genetic etiology of complex phenotypes. Quantitative traits, however, often exhibit characteristics that make their use problematic. For example, the distribution of the trait may be censored, highly skewed, or contaminated with outlying values. We propose here a rank-based framework for deriving tests of gene and trait association that explicitly take censoring into account and are insensitive to skewness and outlying values. Standard methods for mapping quantitative traits do not take these characteristics into account, which leads to the discarding of valuable information or their improper application. We show how this framework can be applied in nuclear families and discuss its implementation in general pedigrees. The power and efficacy of the approach are illustrated through a series of simulation experiments in which the approach is compared to existing methods.

16.
Genome-wide association studies (GWAS) have revealed many fascinating insights into complex diseases, even from simple, single-marker statistical tests. Most of these tests are designed for testing associations between a phenotype and an autosomal genotype and are therefore not applicable to X chromosome data. Testing for association on the X chromosome raises unique challenges that have motivated the development of X-specific statistical tests in the literature. However, to date there has been no study of these methods under a wide range of realistic study designs, allele frequencies, and disease models to assess the size and power of each test. To address this, we have performed an extensive simulation study to investigate the effects of the sex ratios in the case and control cohorts, as well as the allele frequencies, on the size and power of eight test statistics under three different disease models that each account for X-inactivation. We show that existing, but under-used, methods that make use of both male and female data are uniformly more powerful than popular methods that make use of only female data. In particular, we show that Clayton's one degree of freedom statistic [Clayton, 2008] is robust and powerful across a wide range of realistic simulation parameters. Our results provide guidance on selecting the most appropriate test statistic for analysing X chromosome data from GWAS and show that much power can be gained by a more careful analysis of X chromosome GWAS data.

17.
A model was developed to detect effects of quantitative trait loci (QTLs) in sibships from simulated nuclear family data, using the full covariance structure of the data and analyzing all five quantitative traits simultaneously in a multivariate model. Evidence for the presence of loci was detected on chromosomes 4, 8, 9, and 10. The method provided stable results and is worth further exploration of its performance and optimal sample-size requirements under realistic conditions. © 1997 Wiley-Liss, Inc.

18.
We develop linear mixed models (LMMs) and functional linear mixed models (FLMMs) for gene-based tests of association between a quantitative trait and genetic variants in pedigrees. The effects of a major gene are modeled as a fixed effect, the contributions of polygenes are modeled as a random effect, and the correlations of pedigree members are modeled via inbreeding/kinship coefficients. F-statistics and χ² likelihood ratio test (LRT) statistics based on the LMMs and FLMMs are constructed to test for association. We show empirically that the F-distributed statistics provide good control of the type I error rate. The F-test statistics of the LMMs have similar or higher power than the FLMMs, the kernel-based famSKAT (family-based sequence kernel association test), and the burden test famBT (family-based burden test). The F-statistics of the FLMMs perform well when analyzing a combination of rare and common variants. For small samples, the LRT statistics of the FLMMs control the type I error rate well at the nominal levels. For moderate/large samples, the LRT statistics of the FLMMs control the type I error rates well. The LRT statistics of the LMMs can lead to inflated type I error rates. The proposed models are useful in whole-genome and whole-exome association studies of complex traits.

19.
Rare-variant studies are now being used to characterize the genetic diversity between individuals and may help to identify substantial amounts of the genetic variation underlying complex diseases and quantitative phenotypes. Family data have been shown to be powerful for interrogating rare variants. Consequently, several rare-variant association tests have recently been developed for family-based designs, but these typically assume normality of the quantitative phenotypes. In this paper, we present a family-based test for rare-variant association in the presence of non-normal quantitative phenotypes. The proposed model relaxes the normality assumption and does not specify any parametric distribution for the marginal distribution of the phenotype. The dependence between relatives is modeled via a Gaussian copula. A score-type test is derived, and several strategies to approximate its distribution under the null hypothesis are proposed and investigated. The performance of the proposed test is assessed and compared with existing methods by simulation. The methodology is illustrated with an association study involving the adiponectin trait from the UK10K project. Copyright © 2015 John Wiley & Sons, Ltd.
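The Gaussian copula construction used for the dependence between relatives can be sketched directly: dependence is specified on a latent normal scale, while the phenotype keeps an arbitrary, here gamma, margin. The correlation and marginal parameters are invented for illustration.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(8)
n_pairs = 2000
rho = 0.5                                   # latent (copula) correlation, e.g. sib pairs
cov = [[1.0, rho], [rho, 1.0]]

# Gaussian copula: correlated normals -> uniforms -> arbitrary margins.
z = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)
u = norm.cdf(z)
y = gamma.ppf(u, a=2.0, scale=1.5)          # skewed, adiponectin-like margin

r_latent = np.corrcoef(z[:, 0], z[:, 1])[0, 1]
r_trait = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
print(f"latent correlation {r_latent:.2f}, trait correlation {r_trait:.2f}")
```

The margin can be swapped for any distribution without touching the dependence structure, which is precisely what lets the test above leave the marginal phenotype distribution unspecified.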

20.
Genome-wide association studies have achieved unprecedented success in the identification of novel genes and pathways implicated in complex traits. Typically, studies of disease use a case-control (CC) design, whereas studies of quantitative traits (QT) are population based. The question we address is: what is the equivalence between CC and QT association studies in terms of detection power and sample size? We compare binary and continuous traits by assuming a threshold model for disease and assuming that the effect size on disease liability is similar to that on the QT. We derive the approximate ratio of the non-centrality parameter (NCP) between CC and QT association studies, which is determined by the sample size, the disease prevalence (K), and the proportion of cases (v) in the CC study. For a disease with prevalence <0.1, a CC association study with equal numbers of cases and controls (v=0.5) needs a smaller sample size than a QT association study to achieve equivalent power; e.g. a CC association study of schizophrenia (K=0.01) needs only ~55% of the sample size required for an association study of height. So a planned meta-analysis for height on ~120,000 individuals has power equivalent to a CC study of 33,100 schizophrenia cases and 33,100 controls, a size not yet achievable for this disease. With equal sample size, when v=K, the power of the CC association study is much less than that of the QT association study because of the information lost by transforming a continuous quantitative trait into a binary trait. Genet. Epidemiol. 34:254–257, 2010. © 2009 Wiley-Liss, Inc.
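The ~55% figure can be reproduced from the approximate NCP ratio under the liability threshold model. The formula below is my reading of the abstract's result (v(1−v)·z²/(K(1−K))², with z the normal density at the liability threshold), so treat it as a sketch of the derivation rather than a quotation of the paper.

```python
from scipy.stats import norm

def ncp_ratio_cc_vs_qt(K, v):
    """Approximate per-sample NCP ratio of a case-control study (prevalence K,
    case fraction v) to a quantitative-trait study, under the liability
    threshold model.  Sketch based on the abstract, not a verbatim formula."""
    z = norm.pdf(norm.ppf(1 - K))        # liability density at the threshold
    return v * (1 - v) * z**2 / (K * (1 - K))**2

# Schizophrenia-like prevalence with balanced cases and controls:
ratio = ncp_ratio_cc_vs_qt(K=0.01, v=0.5)
print(f"CC sample needed: ~{100 / ratio:.0f}% of the QT sample")   # ~55%

# Sampling cases at the population rate (v = K) loses most of the power:
print(f"NCP ratio at v = K: {ncp_ratio_cc_vs_qt(0.01, 0.01):.3f}")
```

Plugging in K = 0.01 and v = 0.5 gives an NCP ratio of about 1.8, i.e. a CC study needs roughly 55% of the QT sample size, while v = K drives the ratio far below 1, matching both of the abstract's worked conclusions.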

