Similar Documents
20 similar documents found.
1.
The genetic case-control association study of unrelated subjects is a leading method to identify single nucleotide polymorphisms (SNPs) and SNP haplotypes that modulate the risk of complex diseases. Association studies often genotype several SNPs in a number of candidate genes; we propose a two-stage approach to address the inherent statistical multiple comparisons problem. In the first stage, each gene's association with disease is summarized by a single p-value that controls a familywise error rate. In the second stage, summary p-values are adjusted for multiplicity using a false discovery rate (FDR) controlling procedure. For the first stage, we consider marginal and joint tests of SNPs and haplotypes within genes, and we construct an omnibus test that combines SNP and haplotype analysis. Simulation studies show that when disease susceptibility is conferred by a SNP, and all common SNPs in a gene are genotyped, marginal analysis of SNPs using the Simes test has similar or higher power than marginal or joint haplotype analysis. Conversely, haplotype analysis can be more powerful when disease susceptibility is conferred by a haplotype. The omnibus test tracks the more powerful of the two approaches, which is generally unknown. Multiple testing balances the desire for statistical power against the implicit costs of false positive results, which up to now appear to be common in the literature.
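
A minimal sketch of this two-stage scheme (not the authors' code): a Simes combination of SNP-level p-values within each gene in stage 1, then Benjamini-Hochberg FDR control across the gene-level summaries in stage 2. The gene names and p-values below are hypothetical.

```python
import numpy as np

def simes(pvals):
    """Simes combined p-value for the SNP-level p-values within one gene."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    return float(np.min(m * p / np.arange(1, m + 1)))

def benjamini_hochberg(pvals, q=0.05):
    """Boolean array marking hypotheses rejected at FDR level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = (np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True   # reject everything up to the largest passing rank
    return reject

# Stage 1: one familywise-error-controlling summary p-value per gene.
genes = {"GENE1": [0.002, 0.04, 0.30], "GENE2": [0.25, 0.6], "GENE3": [0.01, 0.02]}
summary = {g: simes(p) for g, p in genes.items()}
# Stage 2: FDR control across the gene-level summaries.
print(dict(zip(summary, benjamini_hochberg(list(summary.values())))))
```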

2.
Chen HY, Li M. Genetic Epidemiology 2011;35(8):823-830.
Extreme-value sampling design that samples subjects with extremely large or small quantitative trait values is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and is more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C), which includes study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to model the quantitative trait appropriately and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs.
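
The contrast the abstract draws can be seen in a toy simulation (this is not the authors' robust estimator): dichotomizing an extreme-sampled trait discards the dose-response signal, while a naive linear fit keeps the signal but inherits bias from the sampling, which the proposed method is designed to correct. All numbers are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
g = rng.binomial(2, 0.3, n)                     # additive genotype coding 0/1/2
y = 0.15 * g + rng.normal(size=n)               # quantitative trait (e.g. HDL-C)
lo, hi = np.quantile(y, [0.1, 0.9])
keep = (y < lo) | (y > hi)                      # extreme-value sampling
g_s, y_s = g[keep], y[keep]

# "Case-control" analysis: trend in the dichotomized trait.
r_bin, p_bin = stats.pearsonr(g_s, (y_s > hi).astype(int))
# Dose-response analysis: naive linear regression of the raw trait on genotype
# (the slope estimate is biased by the extreme sampling).
fit = stats.linregress(g_s, y_s)
print(f"dichotomized: p = {p_bin:.2e}")
print(f"linear: slope = {fit.slope:.3f} (true 0.15), p = {fit.pvalue:.2e}")
```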

3.
One of the current issues in genetic epidemiology is detecting susceptibility genes on the genome. It is now common to undertake systematic screening of the genome using approaches based on a measure of haplotype sharing in sib pairs. Here, we compare the efficiency of two statistics, the maximum likelihood score (MLS) and the nonparametric linkage score (NPL), on the simulated data provided for GAW11. A question often raised is whether it is better to perform a single-step or a two-step strategy. For the simulated model, and whatever the strategy used, we show here that the answer is not unequivocal. In both cases, the power to detect susceptibility genes in a single replicate with MLS or NPL is extremely low. With two replicates, only one of the four simulated loci could be detected with reasonable power. When gametic disequilibrium is suspected, methods testing for both linkage and association might be more powerful.

4.
The usefulness of association studies for fine mapping loci with common susceptibility alleles for complex genetic diseases in outbred populations is unclear. We investigate this issue for a battery of tightly linked anonymous genetic markers spanning a candidate region centered around a disease locus, and study the joint behavior of chi-square statistics used to discover and to localize the disease locus. We used simulation methods based on a coalescent process with mutation, recombination, and genetic drift to examine the spatial distribution of markers with large noncentrality parameters in a case-control study design. Simulations with a disease allele at intermediate frequency, presumably representing an old mutation, tend to exhibit the largest noncentrality parameter values at markers near the disease locus. In contrast, simulations with a disease allele at low frequency, presumably representing a young mutation, often exhibit the largest noncentrality parameter values at markers scattered over the candidate region. In the former case, sample sizes or marker densities sufficient to detect association are likely to lead to useful localization, whereas, in the latter case, localization of the disease locus within the candidate region is much less likely, regardless of the sample size or density of the map. The effects of increasing sample size or marker density are also investigated. Based upon a single-marker analysis, we find that a simple strategy of choosing the marker with the smallest associated P value to begin a laboratory search for the disease locus performs adequately for a common disease allele. We also investigated a strategy of pooling nearby sites to form multiple-allele markers. Using multiple-degree-of-freedom chi-square tests for two or three nearby sites, we found no clear advantage of this form of pooling over a single-marker analysis. Genet. Epidemiol. 20:432-457, 2001.

5.
We present a Bayesian semiparametric model for the meta-analysis of candidate gene studies with a binary outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping genetic markers in the same genetic region. Meta-analyses of the results at each marker in isolation are seldom appropriate as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. Also, such marker-wise meta-analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power. A better strategy is one which incorporates information about the LD between markers so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian semiparametric model which models the observed genotype group frequencies conditional on case/control status and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach allows borrowing of strength across studies and across markers. The analysis is based on a mixture of Dirichlet processes model as the underlying semiparametric model. Full posterior inference is performed through Markov chain Monte Carlo algorithms. The approach is demonstrated on simulated and real data.

6.
A Bayesian toolkit for genetic association studies
We present a range of modelling components designed to facilitate Bayesian analysis of genetic-association-study data. A key feature of our approach is the ability to combine different submodels together, almost arbitrarily, for dealing with the complexities of real data. In particular, we propose various techniques for selecting the "best" subset of genetic predictors for a specific phenotype (or set of phenotypes). At the same time, we may control for complex, non-linear relationships between phenotypes and additional (non-genetic) covariates as well as accounting for any residual correlation that exists among multiple phenotypes. Both of these additional modelling components are shown to potentially aid in detecting the underlying genetic signal. We may also account for uncertainty regarding missing genotype data. Indeed, at the heart of our approach is a novel method for reconstructing unobserved haplotypes and/or inferring the values of missing genotypes. This can be deployed independently or, alternatively, it can be fully integrated into arbitrary genotype- or haplotype-based association models such that the missing data and the association model are "estimated" simultaneously. The impact of such simultaneous analysis on inferences drawn from the association model is shown to be potentially significant. Our modelling components are packaged as an "add-on" interface to the widely used WinBUGS software, which allows Markov chain Monte Carlo analysis of a wide range of statistical models. We illustrate their use with a series of increasingly complex analyses conducted on simulated data based on a real pharmacogenetic example.

7.
Selecting the best design for genetic association studies requires careful deliberation; different study designs can be used to scan for different genetic effects, and each design has its own set of strengths and limitations. A variety of family and unrelated control configurations are amenable to genetic association analyses, including the case-control design, case-parent triads, and case-parent triads in combination with unrelated controls or control-parent triads. Ultimately, the goal is to choose the design that achieves the highest statistical power at the lowest cost. For given parameter values and genotyped individuals, designs can be compared directly by computing the power. However, a more informative and general design comparison can be achieved by studying the relative efficiency, defined as the ratio of variances of two different parameter estimators, corresponding to two separate designs. Using log-linear modeling, we derive the relative efficiency from the asymptotic variance of the parameter estimators and relate it to the concept of Pitman efficiency. The relative efficiency takes into account the fact that different designs impose different costs relative to the number of genotyped individuals. We show that while optimal efficiency for analyses of regular autosomal effects is achieved using the standard case-control design, the case-parent triad design without unrelated controls is efficient when searching for parent-of-origin effects. Due to the potential loss of efficiency, maternal genes should generally not be adjusted for in an initial genome-wide association study scan of offspring genes but instead checked post hoc. The relative efficiency calculations are implemented in our R package Haplin.
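
A back-of-the-envelope version of the cost-adjusted comparison, under the assumption (mine, for illustration) that efficiency per genotyping cost is the variance ratio rescaled by the number of genotypes each design unit requires. The variances and counts are hypothetical; the Haplin package implements the actual calculations.

```python
def relative_efficiency(var1, var2, genos_per_unit1, genos_per_unit2):
    """Efficiency of design 1 relative to design 2 at equal genotyping cost."""
    return (var2 / var1) * (genos_per_unit2 / genos_per_unit1)

# Case-control unit = 1 case + 1 control (2 genotypes); case-parent triad
# unit = 3 genotypes. var1/var2: asymptotic variances of the effect estimator.
re = relative_efficiency(var1=0.040, var2=0.052,
                         genos_per_unit1=2, genos_per_unit2=3)
print(f"RE of case-control vs triad = {re:.2f}")  # values > 1 favor design 1
```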

8.

Background  

Economic evaluations in the medical literature compare competing diagnostic or treatment methods in terms of their resource use and expected outcomes. The body of research evidence on cost and economic comparisons will continue to expand as this type of information becomes more important in clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health cost and economic studies from EMBASE.

9.
Large-scale genome-wide association studies (GWAS) have become feasible recently because of the development of bead and chip technology. However, the success of GWAS partially depends on statistical methods that can manage and analyze this sort of large-scale data. Currently, the commonly used tests for GWAS include the Cochran-Armitage trend test, the allelic χ² test, the genotypic χ² test, the haplotypic χ² test, and the multi-marker genotypic χ² test, among others. From a methodological point of view, it is a great challenge to improve the power of commonly used tests, since these tests are commonly used precisely because they are already among the most powerful tests. In this article, we propose an improved score test that is uniformly more powerful than the score test based on the generalized linear model. Since the score test based on the generalized linear model includes the aforementioned commonly used tests as special cases, our proposed improved score test is thus uniformly more powerful than these commonly used tests. We evaluate the performance of the improved score test by simulation studies and application to a real data set. Our results show that the power gains of the improved score test over the score test are non-negligible in most cases.
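
For reference, a minimal implementation of the baseline Cochran-Armitage trend test with additive scores, one of the commonly used tests listed above; the genotype counts are hypothetical, and the improved score test itself is not reproduced here.

```python
import numpy as np
from scipy import stats

def catt(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test from genotype counts (AA, Aa, aa)."""
    r, s, t = (np.asarray(x, dtype=float) for x in (cases, controls, scores))
    n = r + s                               # per-genotype totals
    R, N = r.sum(), n.sum()                 # cases and overall sample size
    num = N * (t * r).sum() - R * (t * n).sum()
    den = R * (N - R) * (N * (t * t * n).sum() - (t * n).sum() ** 2)
    chi2 = N * num ** 2 / den               # 1-df trend statistic
    return chi2, stats.chi2.sf(chi2, df=1)

chi2, p = catt(cases=(30, 60, 40), controls=(50, 55, 25))
print(f"CATT chi2 = {chi2:.2f}, p = {p:.3g}")
```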

10.
Evaluating the association of multiple genetic variants with a trait of interest by use of kernel-based methods has made a significant impact on how genetic association analyses are conducted. An advantage of kernel methods is that they tend to be robust when the genetic variants have effects that are a mixture of positive and negative effects, as well as when there is a small fraction of causal variants. Another advantage is that kernel methods fit within the framework of mixed models, providing flexible ways to adjust for additional covariates that influence traits. Herein, we review the basic ideas behind the use of kernel methods for genetic association analysis as well as recent methodological advancements for different types of traits, multivariate traits, pedigree data, and longitudinal data. Finally, we discuss opportunities for future research.
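
A rough sketch of the kernel score statistic underlying such methods, assuming a linear kernel K = GG' and a permutation p-value in place of the usual mixture-of-chi-squares null; the data are simulated, and this is an illustration rather than any specific published test.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 300, 20
G = rng.binomial(2, 0.2, size=(n, m)).astype(float)    # genotypes at m variants
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta = np.zeros(m); beta[:3] = [0.4, -0.3, 0.25]       # a few causal variants
y = X @ [0.5, 0.3] + G @ beta + rng.normal(size=n)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)           # null model: covariates only
resid = y - X @ coef
K = G @ G.T                                            # linear kernel
Q = resid @ K @ resid                                  # variance-component score stat

null = []
for _ in range(999):                                   # permutation null: a rough
    r = rng.permutation(resid)                         # stand-in for the usual
    null.append(r @ K @ r)                             # mixture-of-chi-squares
p = (1 + sum(q >= Q for q in null)) / (1 + len(null))
print(f"Q = {Q:.1f}, permutation p = {p:.3f}")
```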

11.
Zheng G  Meyer M  Li W  Yang Y 《Statistics in medicine》2008,27(24):5054-5075
To test for genetic association between a marker and a complex disease using a case-control design, Cochran-Armitage trend tests (CATTs) and Pearson's chi-square test are often employed. Both tests are genotype-based. Song and Elston (Statist. Med. 2006; 25:105-126) introduced the Hardy-Weinberg disequilibrium trend test and combined it with CATT to test for association. Compared to using a single statistic to test for case-control genetic association (referred to as single-phase analysis), two-phase analysis is a new strategy in that it employs two test statistics in one analysis framework, each statistic using all available case-control data. Two such two-phase analysis procedures were studied, in which Hardy-Weinberg equilibrium (HWE) in the population is a key assumption, although the procedures are robust to moderate departure from HWE. Our goal in this article is to study a new two-phase procedure and compare all three two-phase analyses and common single-phase procedures by extensive simulation studies. For illustration, the results are applied to real data from two case-control studies. On the basis of the results, we conclude that with an appropriate choice of significance level for the analysis in phase 1, some two-phase analyses could be more powerful than commonly used test statistics.
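
One stylized rendering of the two-phase idea (not the exact procedures compared in the paper): phase 1 screens for Hardy-Weinberg disequilibrium in cases at level α1, and its outcome selects the genotype scores used by the phase-2 trend test (see the CATT sketch under item 9). The counts and α1 are hypothetical, and the overall significance level must be calibrated to account for the phase-1 screen.

```python
import numpy as np
from scipy import stats

def hwe_p(counts):
    """1-df chi-square test of Hardy-Weinberg equilibrium from (AA, Aa, aa) counts."""
    n = sum(counts)
    q = (counts[1] + 2 * counts[2]) / (2 * n)          # frequency of allele a
    exp = n * np.array([(1 - q) ** 2, 2 * q * (1 - q), q ** 2])
    chi2 = ((np.array(counts) - exp) ** 2 / exp).sum()
    return stats.chi2.sf(chi2, df=1)

cases, alpha1 = (30, 40, 30), 0.05
# Phase 1: Hardy-Weinberg disequilibrium in cases hints at a non-additive model.
scores = (0, 0, 1) if hwe_p(cases) < alpha1 else (0, 1, 2)
# Phase 2 would run a trend test with these scores on the full case-control data.
print("phase-2 genotype scores:", scores)
```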

12.
The explosion of genetic information over the last decade presents an analytical challenge for genetic association studies. As the number of genetic variables examined per individual increases, both variable selection and statistical modeling tasks must be performed during analysis. While these tasks could be performed separately, coupling them is necessary to select meaningful variables that effectively model the data. This challenge is heightened due to the complex nature of the phenotypes under study and the complex underlying genetic etiologies. To address this problem, a number of novel methods have been developed. In the current study, we compare the performance of six analytical approaches to detect both main effects and gene-gene interactions in a range of genetic models. Multifactor dimensionality reduction, grammatical evolution neural networks, random forests, the focused interaction testing framework, stepwise logistic regression, and explicit logistic regression were compared. As one might expect, the relative success of each method is context dependent. This study demonstrates the strengths and weaknesses of each method and illustrates the importance of continued methods development.

13.
14.
A major concern for all copy number variation (CNV) detection algorithms is their reliability and repeatability. However, it is difficult to evaluate the reliability of CNV-calling strategies due to the lack of gold-standard data that would tell us which CNVs are real. We propose that if CNVs are called in duplicate samples, or inherited from parent to child, then these can be considered validated CNVs. We used two large family-based genome-wide association study (GWAS) datasets from the GENEVA consortium to look at concordance rates of CNV calls between duplicate samples, parent-child pairs, and unrelated pairs. Our goal was to make recommendations for ways to filter and use CNV calls in GWAS datasets that do not include family data. We used PennCNV as our primary CNV-calling algorithm, and tested CNV calls using different datasets and marker sets, and with various filters on CNVs and samples. Using the Illumina core HumanHap550 single nucleotide polymorphism (SNP) set, we saw duplicate concordance rates of approximately 55% and parent-child transmission rates of approximately 28% in our datasets. GC model adjustment and sample quality filtering had little effect on these reliability measures. Stratification on CNV size and DNA sample type did have some effect. Overall, our results show that it is probably not possible to find a CNV-calling strategy (including filtering and algorithm) that will give us a set of "reliable" CNV calls using current chip technologies. But if we understand the error process, we can still use CNV calls appropriately in genetic association studies.
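
A minimal reciprocal-overlap matcher of the kind used to compute such concordance rates; the 50% overlap rule and the intervals below are hypothetical, not PennCNV's or GENEVA's criteria.

```python
def overlap_frac(a, b):
    """Reciprocal overlap between two CNV intervals given as (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return min(inter / (a[1] - a[0]), inter / (b[1] - b[0]))

def concordance(calls1, calls2, min_overlap=0.5):
    """Share of calls in sample 1 matched by a call in sample 2."""
    hits = sum(any(overlap_frac(c1, c2) >= min_overlap for c2 in calls2)
               for c1 in calls1)
    return hits / len(calls1) if calls1 else float("nan")

dup1 = [(10_000, 25_000), (40_000, 48_000), (90_000, 95_000)]
dup2 = [(9_500, 24_000), (91_000, 96_000)]
print(f"duplicate concordance = {concordance(dup1, dup2):.2f}")  # 2 of 3 match
```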

15.
Association mapping based on family studies can identify genes that influence complex human traits while providing protection against population stratification. Because no gene is likely to have a very large effect on a complex trait, most family studies have limited power. Among the commonly used family-based tests of association for quantitative traits, the quantitative transmission-disequilibrium test (QTDT) based on the variance-components model is the most flexible and most powerful. This method assumes that the trait values are normally distributed. Departures from normality can inflate the type I error and reduce the power. Although the family-based association tests (FBAT) and pedigree disequilibrium tests (PDT) do not require normal traits, nonnormality can also result in loss of power. In many cases, approximate normality can be achieved by transforming the trait values. However, the true transformation is unknown, and incorrect transformations may compromise the type I error and power. We propose a novel class of association tests for arbitrarily distributed quantitative traits by allowing the true transformation function to be completely unspecified and empirically estimated from the data. Extensive simulation studies showed that the new methods provide accurate control of the type I error and can be substantially more powerful than the existing methods. We applied the new methods to the Collaborative Study on the Genetics of Alcoholism and discovered a significant association of single nucleotide polymorphism (SNP) tsc0022400 on chromosome 7 with the quantitative electrophysiological phenotype TTTH1, which was not detected by any existing method. We have implemented the new methods in a freely available computer program.

16.
When testing genotype–phenotype associations using linear regression, departure of the trait distribution from normality can impact both Type I error rate control and statistical power, with worse consequences for rarer variants. Because genotypes are expected to have small effects (if any), investigators now routinely use a two-stage method, in which they first regress the trait on covariates, obtain residuals, rank-normalize them, and then use the rank-normalized residuals in association analysis with the genotypes. Potential confounding signals are assumed to be removed at the first stage, so in practice, no further adjustment is done in the second stage. Here, we show that this widely used approach can lead to tests with undesirable statistical properties, due to the combination of a mis-specified mean–variance relationship and remaining covariate associations between the rank-normalized residuals and genotypes. We demonstrate these properties theoretically, and also in applications to genome-wide and whole-genome sequencing association studies. We further propose and evaluate an alternative, fully adjusted two-stage approach that adjusts for covariates both when residuals are obtained and in the subsequent association test. This method can reduce excess Type I errors and improve statistical power.
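
A small simulation of the contrast described above, assuming statsmodels is available: stage-1 residuals are rank-normalized, and the stage-2 genotype model either drops the covariates (the common practice the paper critiques) or retains them (the fully adjusted approach). The data-generating values are made up.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
sex = rng.binomial(1, 0.5, n).astype(float)
g = rng.binomial(2, 0.2 + 0.2 * sex, n).astype(float)  # genotype correlated with sex
y = 0.8 * sex + 0.1 * g + rng.gamma(2.0, size=n)       # skewed (non-normal) trait

# Stage 1: regress out covariates, then rank-normalize the residuals.
resid = sm.OLS(y, sm.add_constant(sex)).fit().resid
z = stats.norm.ppf((stats.rankdata(resid) - 0.5) / n)

# Stage 2, common practice: genotype only, no covariate re-adjustment.
naive = sm.OLS(z, sm.add_constant(g)).fit()
# Stage 2, fully adjusted: keep the covariates alongside the genotype.
full = sm.OLS(z, sm.add_constant(np.column_stack([g, sex]))).fit()
print(f"naive p = {naive.pvalues[1]:.3g}, fully adjusted p = {full.pvalues[1]:.3g}")
```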

17.
Genetic association studies are a powerful tool to detect genetic variants that predispose to human disease. Once an associated variant is identified, investigators are also interested in estimating the effect of the identified variant on disease risk. Estimates of the genetic effect based on new association findings tend to be upwardly biased due to a phenomenon known as the "winner's curse." Overestimation of genetic effect size in initial studies may cause follow-up studies to be underpowered and so to fail. In this paper, we quantify the impact of the winner's curse on the allele frequency difference and odds ratio estimators for one- and two-stage case-control association studies. We then propose an ascertainment-corrected maximum likelihood method to reduce the bias of these estimators. We show that overestimation of the genetic effect by the uncorrected estimator decreases as the power of the association study increases and that the ascertainment-corrected method reduces absolute bias and mean square error unless power to detect association is high. Genet. Epidemiol. 33:453-462, 2009.
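
For the simplest case, a normal effect estimate b ~ N(β, se²) that is reported only when |b|/se exceeds the significance threshold, the ascertainment correction can be sketched by maximizing the likelihood conditional on being reported. The numbers are hypothetical, and the paper's estimators for allele-frequency differences and odds ratios in one- and two-stage designs are richer than this.

```python
import numpy as np
from scipy import stats, optimize

def corrected_mle(b, se, alpha=5e-8):
    """Ascertainment-corrected MLE for b ~ N(beta, se^2), reported only if significant."""
    c = stats.norm.isf(alpha / 2)                       # two-sided z threshold
    def neg_cond_loglik(beta):
        loglik = stats.norm.logpdf(b, loc=beta, scale=se)
        # P(|b|/se > c | beta): probability the estimate would be reported.
        p_report = stats.norm.sf(c - beta / se) + stats.norm.cdf(-c - beta / se)
        return -(loglik - np.log(p_report))             # condition on ascertainment
    res = optimize.minimize_scalar(neg_cond_loglik, method="bounded",
                                   bounds=(b - 10 * se, b + 10 * se))
    return res.x

# Hypothetical genome-wide hit: naive log-OR 0.25 with SE 0.04 (z ~ 6.3).
print(f"corrected estimate = {corrected_mle(0.25, 0.04):.3f}")
```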

18.
For a dense set of genetic markers, such as single nucleotide polymorphisms (SNPs) in high linkage disequilibrium within a small candidate region, a haplotype-based approach for testing association between a disease phenotype and the set of markers is attractive in reducing data complexity and increasing statistical power. However, because the status of the underlying disease variant is unknown, a comprehensive association test may require consideration of various combinations of the SNPs, which often leads to severe multiple testing problems. In this paper, we propose a latent variable approach to test for association of multiple tightly linked SNPs in case-control studies. First, we introduce a latent variable into the penetrance model to characterize a putative disease susceptibility locus (DSL) that may consist of a marker allele, a haplotype from a subset of the markers, or an allele at a putative locus between the markers. Next, using a retrospective likelihood to adjust for the case-control sampling ascertainment and to appropriately handle the Hardy-Weinberg equilibrium constraint, we develop an expectation-maximization (EM)-based algorithm to fit the penetrance model and estimate the joint haplotype frequencies of the DSL and markers simultaneously. With the latent variable describing a flexible role for the DSL, the likelihood ratio statistic can then provide a joint association test for the set of markers without requiring an adjustment for testing of multiple haplotypes. Our simulation results also reveal that the latent variable approach may have improved power under certain scenarios compared with classical haplotype association methods.
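
This is not the authors' latent-DSL penetrance model, but the basic ingredient such EM algorithms build on can be shown compactly: EM estimation of two-SNP haplotype frequencies from unphased genotypes, where only double heterozygotes have latent phase. The genotype data are made up.

```python
import numpy as np

def haplo_em(genos, n_iter=50):
    """EM estimates of two-SNP haplotype frequencies from unphased genotypes.

    genos: iterable of (g1, g2) minor-allele counts; returns freqs of 00,01,10,11.
    """
    p = np.full(4, 0.25)
    for _ in range(n_iter):
        c = np.zeros(4)
        for g1, g2 in genos:
            if g1 == 1 and g2 == 1:             # double heterozygote: latent phase
                w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
                c += w * np.array([1.0, 0, 0, 1]) + (1 - w) * np.array([0, 1.0, 1, 0])
            else:                                # phase is deterministic
                for a, b in (((g1 >= 1), (g2 >= 1)), ((g1 == 2), (g2 == 2))):
                    c[2 * a + b] += 1
        p = c / c.sum()                          # M-step: renormalize expected counts
    return p

genos = [(0, 0), (1, 1), (2, 1), (1, 0), (1, 1), (2, 2)]
print(dict(zip(["00", "01", "10", "11"], haplo_em(genos).round(3))))
```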

19.
20.
The heritability of complex diseases including cancer is often attributed to multiple interacting genetic alterations. Such a non-linear, non-additive gene–gene interaction effect, that is, epistasis, renders univariable analysis methods ineffective for genome-wide association studies. In recent years, network science has seen increasing applications in modeling epistasis to characterize the complex relationships between a large number of genetic variations and the phenotypic outcome. In this study, by constructing a statistical epistasis network of colorectal cancer (CRC), we proposed to use multiple network measures to prioritize genes that influence the disease risk of CRC through synergistic interaction effects. We computed and analyzed several global and local properties of the large CRC epistasis network. We utilized topological properties of network vertices such as the edge strength, vertex centrality, and occurrence at different graphlets to identify genes that may be of potential biological relevance to CRC. We found 512 top-ranked single-nucleotide polymorphisms, among which COL22A1, RGS7, WWOX, and CELF2 were the four susceptibility genes prioritized by all described metrics as the most influential on CRC.
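
A minimal sketch of centrality-based prioritization with networkx; the gene-level edge list and weights are hypothetical (stand-ins for aggregated SNP-pair synergy scores), and the graphlet analysis from the paper is not reproduced.

```python
import networkx as nx

# Hypothetical gene-level interaction strengths; not values from the paper.
edges = [("COL22A1", "RGS7", 0.9), ("COL22A1", "WWOX", 0.7),
         ("RGS7", "CELF2", 0.6), ("WWOX", "CELF2", 0.5),
         ("CELF2", "GENE_X", 0.2)]
G = nx.Graph()
for u, v, w in edges:
    G.add_edge(u, v, weight=w, dist=1.0 - w)  # betweenness treats weights as distances

strength = dict(G.degree(weight="weight"))     # summed edge strength per vertex
between = nx.betweenness_centrality(G, weight="dist")
ranked = sorted(G.nodes, key=lambda node: (strength[node], between[node]), reverse=True)
print("prioritized genes:", ranked)
```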
