Similar Articles (20 results)
1.
In this paper, we investigate how the correlation structure of independent variables affects the discrimination of a risk prediction model. Using multivariate normal data and a binary outcome, we prove that zero correlation among predictors is often detrimental for discrimination in a risk prediction model and that negatively correlated predictors with positive effect sizes are beneficial. A very high multiple R-squared from regressing the new predictor on the old ones can also be beneficial. As a practical guide to new variable selection, we recommend selecting predictors that are negatively correlated with the risk score based on the existing variables. This step is easy to implement even when the number of new predictors is large. We illustrate our results using real-life Framingham data, which suggest that the conclusions hold outside of normality. The findings presented in this paper may be useful for preliminary selection of potentially important predictors, especially in situations where the number of predictors is large. Copyright © 2013 John Wiley & Sons, Ltd.
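As a hedged sketch of the main claim (not the authors' code; the effect sizes below are assumed), consider two predictors that are multivariate normal within outcome classes with equal positive standardized effects: the binormal AUC of the optimal linear score increases as their correlation becomes negative.

```python
# Minimal sketch (assumed effect sizes): discrimination of the optimal
# linear score for within-class multivariate normal predictors, where
# AUC = Phi(sqrt(delta' Sigma^-1 delta) / sqrt(2)).
import numpy as np
from scipy.stats import norm

delta = np.array([0.5, 0.5])  # hypothetical positive effect sizes
for rho in (0.5, 0.0, -0.5):
    Sigma = np.array([[1.0, rho], [rho, 1.0]])
    d2 = delta @ np.linalg.solve(Sigma, delta)      # delta' Sigma^-1 delta
    print(f"rho={rho:+.1f}  AUC={norm.cdf(np.sqrt(d2 / 2)):.3f}")
# rho=+0.5 -> 0.658, rho=0.0 -> 0.691, rho=-0.5 -> 0.760: negative
# correlation between positive-effect predictors improves discrimination.
```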

2.
We compare the calibration and variability of risk prediction models that were estimated using various approaches for combining information on new predictors, termed ‘markers’, with parameter information available for other variables from an earlier model, which was estimated from a large data source. We assess the performance of risk prediction models updated using likelihood ratio (LR) approaches that incorporate dependence between new and old risk factors, as well as approaches that assume independence (‘naive Bayes’ methods). We study the impact of estimating the LR by (i) fitting a single model to cases and non-cases when the distribution of the new markers is in the exponential family or (ii) fitting separate models to cases and non-cases. We also evaluate a new constrained maximum likelihood method. We study updating the risk prediction model when the new data arise from a cohort and extend available methods to accommodate updating when the new data source is a case-control study. To create realistic correlations between predictors, we also based simulations on real data on response to antiviral therapy for hepatitis C. From these studies, we recommend the LR method fit using a single model or constrained maximum likelihood. Copyright © 2016 John Wiley & Sons, Ltd.
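A minimal illustration of the LR updating idea (the Gaussian marker model, means, and SD below are assumptions, not the paper's settings): the updated risk multiplies the prior odds from the existing model by the new marker's likelihood ratio.

```python
# Sketch (assumed marker model): posterior odds = prior odds x LR(marker),
# where LR(m) = f(m | case) / f(m | non-case); assuming the marker is
# conditionally independent of the old predictors gives the naive Bayes variant.
from scipy.stats import norm

def updated_risk(old_risk, marker, mu_case=1.0, mu_ctrl=0.0, sd=1.0):
    lr = norm.pdf(marker, mu_case, sd) / norm.pdf(marker, mu_ctrl, sd)
    prior_odds = old_risk / (1 - old_risk)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

print(round(updated_risk(old_risk=0.10, marker=1.5), 3))  # 0.232: marker raises risk
```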

3.
The first metabolite of alcohol, acetaldehyde, may trigger replication errors and mutations in DNA, which may predispose to developing colorectal cancer (CRC). In a prospective study on colon and rectal cancer, we investigated the following hypotheses: alcohol consumption is associated with an increased risk of mutations in the K-ras oncogene, and beer consumption is associated with an increased risk of G-->A mutations in this gene. Therefore, we studied the associations between consumption of alcohol and alcoholic beverages and the risk of CRC without and with specific K-ras gene mutations. In 1986, 120,852 men and women, aged 55-69 years, completed a questionnaire on risk factors for cancer. The case-cohort approach was used for data processing and analyses. After 7.3 years of follow-up, excluding the first 2.3 years, complete data from 4,076 subcohort members, 428 colon and 150 rectal cancer patients, were available for data analyses. Incidence rate ratios (RRs) and corresponding 95% confidence intervals (95% CIs) were estimated using Cox proportional hazards models. Compared to abstaining, a total alcohol consumption of 30.0 g/day or more was not associated with the risk of colon or rectal cancer with or without a K-ras mutation, in either men or women. Independent of alcohol intake, liquor consumption compared to nonliquor consumption was associated with an increased risk of rectal cancer with a wild-type K-ras in men (RR: 2.25, 95% CI: 1.0-5.0). Beer consumption was not clearly associated with the risk of colon and rectal tumors harboring G-->A mutations in the K-ras gene in men. This association could not be assessed in women because of sparse beer consumption. In conclusion, alcohol does not seem to be involved in predisposing to CRC through mutations in the K-ras gene, and specifically beer consumption is not associated with colon and rectal tumors harboring a G-->A mutation.

4.
Family data are useful for estimating disease risk in carriers of specific genotypes of a given gene (penetrance). Penetrance is frequently estimated assuming that relatives' phenotypes are independent, given their genotypes for the gene of interest. This assumption is unrealistic when multiple shared risk factors contribute to disease risk. In this setting, the phenotypes of relatives are correlated even after adjustment for the genotypes of any one gene (residual correlation). Many methods have been proposed to address this problem, but their performance has not been evaluated systematically. In simulations we generated genotypes for a rare (frequency 0.35%) allele of moderate penetrance and a common (frequency 15%) allele of low penetrance, and then generated correlated disease survival times using the Clayton‐Oakes copula model. We ascertained families using both population and clinic designs. We then compared the estimates of several methods to the optimal ones obtained from the model used to generate the data. We found that penetrance estimates for common low‐risk genotypes were more robust to model misspecification than those for rare, moderate‐risk genotypes. For the latter, penetrance estimates obtained ignoring residual disease correlation had large biases. Also biased were estimates based only on families that segregate the risk allele. In contrast, a method for accommodating phenotype correlation by assuming the presence of genetic heterogeneity performed nearly optimally, even when the survival data were coded as binary outcomes. We conclude that penetrance estimates that accommodate residual phenotype correlation (even only approximately) outperform those that ignore it, and that coding censored survival outcomes as binary does not substantially increase the mean‐square error of the estimates, provided the censoring is not extensive. Genet. Epidemiol. 34: 373–381, 2010. © 2010 Wiley‐Liss, Inc.
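A short sketch of the simulation's key step (baseline hazard and dependence parameter are assumed values, not the authors'): correlated within-family survival times can be drawn from the Clayton–Oakes copula via its shared gamma-frailty representation.

```python
# Minimal sketch (assumed parameters, not the authors' simulation code).
import numpy as np

rng = np.random.default_rng(1)

def clayton_family_times(n_families, size, theta, rate=0.01):
    """U_ij = (1 + E_ij / W_i)^(-1/theta) with W_i ~ Gamma(1/theta) gives a
    Clayton copula; -log(U)/rate converts to exponential event times."""
    w = rng.gamma(1.0 / theta, 1.0, size=(n_families, 1))  # shared frailty
    e = rng.exponential(size=(n_families, size))
    u = (1.0 + e / w) ** (-1.0 / theta)
    return -np.log(u) / rate

t = clayton_family_times(5000, 3, theta=2.0)
print(np.corrcoef(t[:, 0], t[:, 1])[0, 1])  # positive within-family dependence
```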

5.
We are interested in developing integrative approaches for variable selection problems that incorporate external knowledge on a set of predictors of interest. In particular, we have developed an integrative Bayesian model uncertainty (iBMU) method, which formally incorporates multiple sources of data via a second‐stage probit model on the probability that any predictor is associated with the outcome of interest. Using simulations, we demonstrate that iBMU leads to an increase in power to detect true marginal associations over more commonly used variable selection techniques, such as the least absolute shrinkage and selection operator and the elastic net. In addition, iBMU leads to a more efficient model search algorithm than the basic BMU method, even when the predictor‐level covariates are only modestly informative. The increase in power and efficiency of our method becomes more substantial as the predictor‐level covariates become more informative. Finally, we demonstrate the power and flexibility of iBMU for integrating both gene structure and functional biomarker information into a candidate gene study investigating over 50 genes in the brain reward system and their role in smoking cessation, using data from the Pharmacogenetics of Nicotine Addiction and Treatment Consortium. Copyright © 2013 John Wiley & Sons, Ltd.
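A minimal sketch of the second-stage component (covariates and coefficients below are hypothetical): each predictor's prior probability of association is a probit function of predictor-level covariates, which is how external knowledge steers the model search.

```python
# Sketch (hypothetical values): iBMU-style prior inclusion probabilities.
import numpy as np
from scipy.stats import norm

W = np.array([[1.0, 0.9],   # predictor-level covariates: intercept + annotation score
              [1.0, 0.1],
              [1.0, 0.5]])
gamma = np.array([-1.5, 2.0])        # assumed second-stage coefficients
print(norm.cdf(W @ gamma).round(2))  # informative prior inclusion probabilities
```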

6.
Case‐control genome‐wide association (GWA) studies have facilitated the identification of susceptibility loci for many complex diseases; however, these studies are often not adequately powered to detect gene‐environment (G×E) and gene‐gene (G×G) interactions. Case‐only studies are more efficient than case‐control studies for detecting interactions and require no data on control subjects. In this article, we discuss the concept and utility of the case‐only genome‐wide interaction (COGWI) study, in which common genetic variants, measured genome‐wide, are screened for association with environmental exposures or genetic variants of interest. An observed G‐E (or G‐G) association, as measured by the case‐only odds ratio (OR), suggests interaction, but only if the interacting factors are unassociated in the population from which the cases were drawn. The case‐only OR is equivalent to the interaction risk ratio. In addition to risk‐related interactions, we discuss how the COGWI design can be used to efficiently detect G×G, G×E and pharmacogenetic interactions related to disease outcomes in the context of observational clinical studies or randomized clinical trials. Such studies can be conducted using only data on individuals experiencing an outcome of interest or individuals not experiencing the outcome of interest. Sharing data among GWA and COGWI studies of disease risk and outcome can further enhance efficiency. Sample size requirements for COGWI studies, as compared to case‐control GWA studies, are provided. In the current era of genome‐wide analyses, the COGWI design is an efficient and straightforward method for detecting G×G, G×E and pharmacogenetic interactions related to disease risk, prognosis and treatment response. Genet. Epidemiol. 34:7–15, 2010. © 2009 Wiley‐Liss, Inc.
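A worked toy example (the counts are invented): the case-only OR is the cross-product ratio of the genotype-by-exposure table among cases, and it estimates the multiplicative interaction only under G–E independence in the source population.

```python
# Cases cross-classified by genotype (G) and exposure (E); hypothetical counts.
n_g1e1, n_g1e0 = 60, 40
n_g0e1, n_g0e0 = 100, 200

case_only_or = (n_g1e1 * n_g0e0) / (n_g1e0 * n_g0e1)
print(case_only_or)  # 3.0 -> suggests a 3-fold multiplicative G x E interaction
```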

7.
This paper proposes a new statistical approach for predicting postoperative morbidity such as intensive care unit length of stay and number of complications after cardiac surgery in children. In a recent multi‐center study sponsored by the National Institutes of Health, 311 children undergoing cardiac surgery were enrolled. Morbidity data are count data in which the observations take only nonnegative integer values. Often, the number of zeros in the sample cannot be accommodated properly by a simple model, thus requiring a more complex model such as the zero‐inflated Poisson regression model. We are interested in identifying important risk factors for postoperative morbidity among many candidate predictors. There is only limited methodological work on variable selection for zero‐inflated regression models. In this paper, we consider regularized zero‐inflated Poisson models through a penalized likelihood function and develop a new expectation–maximization algorithm for numerical optimization. Simulation studies show that the proposed method has better performance than some competing methods. Using the proposed method, we analyzed the postoperative morbidity data, which improved the model fit and identified important clinical and biomarker risk factors. Copyright © 2014 John Wiley & Sons, Ltd.
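As a stand-in for the paper's penalized-likelihood EM algorithm (which is not reproduced here), the sketch below fits an L1-penalized zero-inflated Poisson model to simulated morbidity counts using statsmodels; the tuning constant is an assumed value.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(2)
n, p = 311, 10
X = sm.add_constant(rng.normal(size=(n, p)))
mu = np.exp(0.5 + X[:, 1] - 0.5 * X[:, 2])         # only two true predictors
y = rng.poisson(mu) * (rng.random(n) > 0.3)        # ~30% structural zeros

zip_model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation='logit')
fit = zip_model.fit_regularized(method='l1', alpha=2.0, disp=False)
print(fit.params.round(2))  # coefficients of noise predictors are shrunk to zero
```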

8.
Many clinical decisions require accurate estimates of disease risks associated with mutations of known disease-susceptibility genes. Such risk estimation is difficult when the mutations are rare. We used computer simulations to compare the performance of estimates obtained from two types of designs based on family data. In the first (clinic-based designs), families are ascertained because they meet certain criteria concerning multiple disease occurrences among family members. In the second (population-based designs), families are sampled through a population-based registry of affected individuals called probands, with oversampling of probands whose families are more likely to segregate mutations. We generated family structures, genotypes, and phenotypes using models that reflect the frequencies and penetrances of mutations of the BRCA1/2 genes. We studied the effects of risk heterogeneity due to unmeasured, shared risk factors by including risk variation due to unmeasured genotypes of another gene. The simulations were chosen to mimic the ascertainment and selection processes commonly used in the two types of designs. We found that penetrance estimates from both designs are nearly unbiased in the absence of unmeasured shared risk factors, but are biased upward in the presence of such factors. The bias increases with increasing variation in risks across genotypes of the second gene. However, it is small compared to the standard error of the estimates. Standard errors from population-based designs are roughly twice those from clinic-based designs with the same number of families. Using the root-mean-square error as a measure of performance, we found that in all instances, the clinic-based designs gave more accurate estimates than did the population-based designs with the same numbers of families. Rough variance calculations suggest that clinic-based designs give more accurate estimates because they include more identified mutation carriers.

9.
Most complex human diseases are likely the consequence of the joint actions of genetic and environmental factors. Identification of gene‐environment (G × E) interactions not only contributes to a better understanding of the disease mechanisms, but also improves disease risk prediction and targeted intervention. In contrast to the large number of genetic susceptibility loci discovered by genome‐wide association studies, there have been very few successes in identifying G × E interactions, which may be partly due to limited statistical power and inaccurately measured exposures. Although existing statistical methods only consider interactions between genes and static environmental exposures, many environmental/lifestyle factors, such as air pollution and diet, change over time, and cannot be accurately captured at one measurement time point or by simply categorizing into static exposure categories. There is a dearth of statistical methods for detecting gene by time‐varying environmental exposure interactions. Here, we propose a powerful functional logistic regression (FLR) approach to model the time‐varying effect of longitudinal environmental exposure and its interaction with genetic factors on disease risk. Capitalizing on the powerful functional data analysis framework, our proposed FLR model is capable of accommodating longitudinal exposures measured at irregular time points and contaminated by measurement errors, commonly encountered in observational studies. We use extensive simulations to show that the proposed method can control the Type I error and is more powerful than alternative ad hoc methods. We demonstrate the utility of this new method using data from a case‐control study of pancreatic cancer to identify the windows of vulnerability of lifetime body mass index on the risk of pancreatic cancer as well as genes that may modify this association.
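One way to realize such a model, sketched below with simulated, irregularly sampled exposures (every setting is assumed rather than taken from the paper), is to project each subject's trajectory onto a small B-spline basis and let the basis scores and their products with genotype enter a logistic regression.

```python
import numpy as np
from scipy.interpolate import BSpline
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
knots = np.r_[np.zeros(4), 0.5, np.ones(4)]        # cubic B-splines, 5 functions
G = rng.binomial(1, 0.3, size=n)                   # genetic factor
scores = np.empty((n, 5))
for i in range(n):
    t = np.sort(rng.uniform(0, 1, size=12))        # irregular measurement times
    x = np.sin(np.pi * t) + 0.5 * G[i] * t + rng.normal(0, 0.3, size=12)
    B = BSpline.design_matrix(t, knots, 3).toarray()
    scores[i] = np.linalg.lstsq(B, x, rcond=None)[0]   # smoothed trajectory

eta = -1.0 + scores[:, -1] * (1.0 + G)             # effect in a late time window
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
X = sm.add_constant(np.column_stack([G, scores, G[:, None] * scores]))
print(sm.Logit(y, X).fit(disp=0).params.round(2))  # G x exposure-window terms
```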

10.
Biomedical studies have a common interest in assessing relationships between multiple related health outcomes and high‐dimensional predictors. For example, in reproductive epidemiology, one may collect pregnancy outcomes such as length of gestation and birth weight and predictors such as single nucleotide polymorphisms in multiple candidate genes and environmental exposures. In such settings, there is a need for simple yet flexible methods for selecting true predictors of adverse health responses from a high‐dimensional set of candidate predictors. To address this problem, one may either consider linear regression models for the continuous outcomes or convert these outcomes into binary indicators of adverse responses using predefined cutoffs. The former strategy has the disadvantage of often leading to a poorly fitting model that does not predict risk well, whereas the latter approach can be very sensitive to the cutoff choice. As a simple yet flexible alternative, we propose a method for adverse subpopulation regression, which relies on a two‐component latent class model, with the dominant component corresponding to (presumed) healthy individuals and the risk of falling in the minority component characterized via a logistic regression. The logistic regression model is designed to accommodate high‐dimensional predictors, as occur in studies with a large number of gene by environment interactions, through the use of a flexible nonparametric multiple shrinkage approach. The Gibbs sampler is developed for posterior computation. We evaluate the methods with the use of simulation studies and apply these to a genetic epidemiology study of pregnancy outcomes. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献   

11.
BACKGROUND: In 1994/95, two genes, BRCA1/2, associated with a predisposition to breast or ovarian cancer were identified. Genetic testing for deleterious BRCA1/2 mutations can consequently be offered to individuals with a family history of breast or ovarian cancer to identify who is at risk. The granting of U.S. patents on BRCA1/2 to a privately owned company has led to the monopoly use of a single technique (direct sequencing of the gene, DS) for BRCA1/2 testing in this country. Alternative strategies using prescreening techniques, however, have been adopted worldwide. METHODS: On the basis of data collected at three laboratories of French public hospitals, we carried out a cost-effectiveness study comparing DS to 19 alternative strategies, with the number of deleterious BRCA1 mutations detected as the outcome. RESULTS: Results show that the DS strategy has the highest average cost per mutation detected (9,882.5 euros) and that strategies using prescreening techniques exist that can reach similar effectiveness while reducing total costs. Moreover, other strategies can obtain a four- to sevenfold reduction in the average cost per mutation detected once false-negative rates of 2% to 13% are deemed acceptable. CONCLUSIONS: Results suggest that gene patents with a very broad scope, covering all potential medical applications, may prevent health care systems from identifying and adopting the most efficient genetic testing strategies because of the monopoly granted for the exploitation of the gene. Policy implications for regulatory authorities, in the current context of the extension of BRCA1/2 patents to other countries, are discussed.

12.
Statistics in Medicine, 2017, 36(29): 4705–4718
Methods have been developed for Mendelian randomization that can obtain consistent causal estimates while relaxing the instrumental variable assumptions. These include multivariable Mendelian randomization, in which a genetic variant may be associated with multiple risk factors so long as any association with the outcome is via the measured risk factors (measured pleiotropy), and the MR‐Egger (Mendelian randomization‐Egger) method, in which a genetic variant may be directly associated with the outcome not via the risk factor of interest, so long as the direct effects of the variants on the outcome are uncorrelated with their associations with the risk factor (unmeasured pleiotropy). In this paper, we extend the MR‐Egger method to a multivariable setting to correct for both measured and unmeasured pleiotropy. We show, through theoretical arguments and a simulation study, that the multivariable MR‐Egger method has advantages over its univariable counterpart in terms of plausibility of the assumption needed for consistent causal estimation and power to detect a causal effect when this assumption is satisfied. The methods are compared in an applied analysis to investigate the causal effect of high‐density lipoprotein cholesterol on coronary heart disease risk. The multivariable MR‐Egger method will be useful to analyse high‐dimensional data in situations where the risk factors are highly related and it is difficult to find genetic variants specifically associated with the risk factor of interest (multivariable by design), and as a sensitivity analysis when the genetic variants are known to have pleiotropic effects on measured risk factors.
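A compact sketch with simulated summary statistics (all values assumed): multivariable MR-Egger is a weighted regression of variant–outcome associations on the variant–risk-factor associations, with an intercept that absorbs directional pleiotropy.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
J = 100                                            # genetic variants
bx = rng.normal(0.1, 0.05, size=(J, 2))            # associations with 2 risk factors
theta = np.array([0.4, -0.2])                      # true causal effects
pleio = rng.normal(0.02, 0.01, size=J)             # directional pleiotropy
se_y = np.full(J, 0.02)
by = bx @ theta + pleio + rng.normal(0, se_y)      # variant-outcome associations

sign = np.sign(bx[:, [0]])                         # orient on the first risk factor
res = sm.WLS(by * sign[:, 0], sm.add_constant(bx * sign),
             weights=1 / se_y**2).fit()
print(res.params.round(3))  # [intercept ~ mean pleiotropy, causal estimates]
```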

13.
Despite the successful discovery of hundreds of variants for complex human traits using genome‐wide association studies, the degree to which genes and environmental risk factors jointly affect disease risk is largely unknown. One obstacle toward this goal is that the computational effort required for testing gene‐gene and gene‐environment interactions is enormous. As a result, numerous computationally efficient tests were recently proposed. However, the validity of these methods often relies on unrealistic assumptions such as additive main effects, main effects at only one variable, no linkage disequilibrium between the two single‐nucleotide polymorphisms (SNPs) in a pair or gene‐environment independence. Here, we derive closed‐form and consistent estimates for interaction parameters and propose to use Wald tests for testing interactions. The Wald tests are asymptotically equivalent to the likelihood ratio tests (LRTs), largely considered to be the gold standard tests but generally too computationally demanding for genome‐wide interaction analysis. Simulation studies show that the proposed Wald tests have very similar performances with the LRTs but are much more computationally efficient. Applying the proposed tests to a genome‐wide study of multiple sclerosis, we identify interactions within the major histocompatibility complex region. In this application, we find that (1) focusing on pairs where both SNPs are marginally significant leads to more significant interactions when compared to focusing on pairs where at least one SNP is marginally significant; and (2) parsimonious parameterization of interaction effects might decrease, rather than increase, statistical power.
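A minimal illustration (simulated data, a single SNP pair rather than a genome-wide scan): the Wald statistic for the interaction term, (beta-hat / SE)², is referred to a chi-square(1) distribution and avoids refitting the null model that the LRT requires.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 5000
g1, g2 = rng.binomial(2, 0.3, size=(2, n))         # genotype counts for two SNPs
eta = -1.0 + 0.2 * g1 + 0.1 * g2 + 0.3 * g1 * g2   # true interaction = 0.3
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))
fit = sm.Logit(y, X).fit(disp=0)
wald = (fit.params[3] / fit.bse[3]) ** 2
print(f"Wald chi2 = {wald:.2f}, p = {chi2.sf(wald, 1):.2e}")
```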

14.
BACKGROUND: Each branch of the U.S. armed forces has standards for physical fitness as well as programs for ensuring compliance with these standards. In the U.S. Air Force (USAF), physical fitness is assessed using submaximal cycle ergometry to estimate maximal oxygen uptake (VO2max). The purpose of this study was to identify the independent effects of demographic and behavioral factors on risk of failure to meet USAF fitness standards (hereafter called low fitness). METHODS: A retrospective cohort study (N=38,837) was conducted using self-reported health risk assessment data and cycle ergometry data from active-duty Air Force (ADAF) members. Poisson regression techniques were used to estimate the associations between the factors studied and low fitness. RESULTS: The factors studied had different effects depending on whether members passed or failed fitness testing in the previous year. All predictors had weaker effects among those with previous failure. Among those with a previous pass, demographic groups at increased risk were those toward the upper end of the ADAF age distribution, senior enlisted men, and blacks. Overweight/obesity was the behavioral factor with the largest effect among men, with aerobic exercise frequency ranked second; among women, the order of these two factors was reversed. Cigarette smoking only had an adverse effect among men. For a hypothetical ADAF man who was sedentary, obese, and smoked, the results suggested that aggressive behavioral risk factor modification would produce a 77% relative decrease in risk of low fitness. CONCLUSIONS: Among ADAF members, both demographic and behavioral factors play important roles in physical fitness. Behavioral risk factors are prevalent and potentially modifiable. These data suggest that, depending on a member's risk factor profile, behavioral risk factor modification may produce impressive reductions in risk of low fitness among ADAF personnel.
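A small sketch in the same spirit (simulated data; the abstract does not give its exact model): Poisson regression with robust standard errors is a standard way to estimate risk ratios for a binary outcome such as fitness-test failure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
obese = rng.binomial(1, 0.25, size=n)
smoker = rng.binomial(1, 0.20, size=n)
risk = 0.10 * 1.8**obese * 1.3**smoker             # multiplicative risk ratios
y = rng.binomial(1, risk)

X = sm.add_constant(np.column_stack([obese, smoker]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type='HC0')
print(np.exp(fit.params).round(2))                 # ~[0.10, 1.80, 1.30]
```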

15.
Genome-wide association studies have facilitated the construction of risk predictors for disease from multiple single nucleotide polymorphism markers. The ability of such "genetic profiles" to predict outcome is usually quantified in an independent data set. Coefficients of determination (R²) have been a useful measure to quantify the goodness-of-fit of the genetic profile. Various pseudo-R² measures for binary responses have been proposed. However, there is no standard or consensus measure because the concept of residual variance is not easily defined on the observed probability scale. Unlike other nongenetic predictors such as environmental exposure, there is prior information on genetic predictors because for most traits there are estimates of the proportion of variation in risk in the population due to all genetic factors, the heritability. It is this useful ability to benchmark that makes the choice of a measure of goodness-of-fit in genetic profiling different from that of nongenetic predictors. In this study, we use a liability threshold model to establish the relationship between the observed probability scale and underlying liability scale in measuring R² for binary responses. We show that currently used R² measures are difficult to interpret, biased by ascertainment, and not comparable to heritability. We suggest a novel and globally standard measure of R² that is interpretable on the liability scale. Furthermore, even when using ascertained case-control studies that are typical in human disease studies, we can obtain an R² measure on the liability scale that can be compared directly to heritability.
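A sketch of the kind of liability-scale conversion this work proposes (the formula below is the version commonly implemented for ascertained case-control data and should be treated as an assumption here): K is the population prevalence and P the case proportion in the sample.

```python
import numpy as np
from scipy.stats import norm

def r2_liability(r2_obs, K, P):
    t = norm.ppf(1 - K)                  # liability threshold
    z = norm.pdf(t)                      # normal density at the threshold
    m = z / K                            # mean liability of cases
    C = (K * (1 - K) / z**2) * (K * (1 - K) / (P * (1 - P)))
    theta = m * (P - K) / (1 - K) * (m * (P - K) / (1 - K) - t)
    return C * r2_obs / (1 + C * theta * r2_obs)

# An observed-scale R2 of 0.05 in a 50/50 case-control sample of a disease
# with 1% population prevalence maps to a smaller liability-scale value:
print(round(r2_liability(0.05, K=0.01, P=0.5), 3))   # ~0.029
```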

16.
Down syndrome (DS) is a complex genetic and metabolic disorder attributed to the presence of three copies of chromosome 21. The extra chromosome derives from the mother in 93% of cases and is due to abnormal chromosome segregation during meiosis (nondisjunction). Except for advanced age at conception, maternal risk factors for meiotic nondisjunction are not well established. A recent preliminary study suggested that abnormal folate metabolism and the 677 (C-->T) mutation in the methylene-tetrahydrofolate reductase (MTHFR) gene may be maternal risk factors for DS. The frequency of the MTHFR 677 (C-->T) and 1298 (A-->C) mutations was evaluated in 36 mothers of children with DS and in 200 controls. The results are consistent with the observation that the MTHFR 677 (C-->T) and 1298 (A-->C) mutations are more prevalent among mothers of children with DS than among controls. In addition, the most prevalent genotype was the combination of both mutations. The results suggest that mutations in the MTHFR gene are associated with maternal risk for DS.

17.
In genetic and genomic studies, gene‐environment (G×E) interactions have important implications. Some of the existing G×E interaction methods are limited by analyzing a small number of G factors at a time, by assuming linear effects of E factors, by assuming no data contamination, and by adopting ineffective selection techniques. In this study, we propose a new approach for identifying important G×E interactions. It jointly models the effects of all E and G factors and their interactions. A partially linear varying coefficient model is adopted to accommodate possible nonlinear effects of E factors. A rank‐based loss function is used to accommodate possible data contamination. Penalization, which has been extensively used with high‐dimensional data, is adopted for selection. The proposed penalized estimation approach can automatically determine if a G factor has an interaction with an E factor, main effect but not interaction, or no effect at all. The proposed approach can be effectively realized using a coordinate descent algorithm. Simulation shows that it has satisfactory performance and outperforms several competing alternatives. The proposed approach is used to analyze a lung cancer study with gene expression measurements and clinical variables. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Mendelian randomization analyses are often performed using summarized data. The causal estimate from a one‐sample analysis (in which data are taken from a single data source) with weak instrumental variables is biased in the direction of the observational association between the risk factor and outcome, whereas the estimate from a two‐sample analysis (in which data on the risk factor and outcome are taken from non‐overlapping datasets) is less biased and any bias is in the direction of the null. When using genetic consortia that have partially overlapping sets of participants, the direction and extent of bias are uncertain. In this paper, we perform simulation studies to investigate the magnitude of bias and Type 1 error rate inflation arising from sample overlap. We consider both a continuous outcome and a case‐control setting with a binary outcome. For a continuous outcome, bias due to sample overlap is a linear function of the proportion of overlap between the samples. So, in the case of a null causal effect, if the relative bias of the one‐sample instrumental variable estimate is 10% (corresponding to an F parameter of 10), then the relative bias with 50% sample overlap is 5%, and with 30% sample overlap is 3%. In a case‐control setting, if risk factor measurements are only included for the control participants, unbiased estimates are obtained even in a one‐sample setting. However, if risk factor data on both control and case participants are used, then bias is similar with a binary outcome as with a continuous outcome. Consortia releasing publicly available data on the associations of genetic variants with continuous risk factors should provide estimates that exclude case participants from case‐control samples.
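The stated rule is easy to verify numerically with the figures quoted in the abstract:

```python
# Worked check (numbers from the abstract): under a null causal effect, bias
# from sample overlap scales linearly with the overlap proportion, relative
# to the one-sample (100% overlap) bias.
one_sample_relative_bias = 0.10            # corresponds to an F parameter of 10
for overlap in (1.0, 0.5, 0.3, 0.0):
    print(f"overlap {overlap:.0%}: relative bias "
          f"{overlap * one_sample_relative_bias:.1%}")
# -> 10.0%, 5.0%, 3.0%, 0.0%, matching the figures quoted above
```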

19.
Background: Home parenteral nutrition (HPN) is increasingly used for nutrition support after patients are discharged from the hospital. Catheter‐related bloodstream infections (CR‐BSI) are a common and potentially fatal complication of HPN. The risk factors for development of CR‐BSI in the outpatient setting are poorly understood. Methods: We conducted an observational, retrospective study of 225 patients discharged from Barnes‐Jewish Hospital on HPN between January 1, 2007, and December 31, 2009. HPN complications were defined as any cause that led to either premature discontinuation of HPN therapy or catheter replacement. CR‐BSI events were identified by provider documentation. We calculated the overall complication rate and the complication rate specifically due to CR‐BSI. Backward stepwise Cox regression analyses were used to assess for independent predictors of catheter‐related complications. Results: In total, 111 of 225 patients (49%) developed complications while receiving HPN (incidence = 5.06 episodes/1000 catheter days). Sixty‐eight of 225 patients (30%) required catheter removal for CR‐BSI (incidence = 3.10 episodes/1000 catheter days). Independent predictors of line removal specifically due to infection included anticoagulant use, ulcer or open wound, and Medicare or Medicaid insurance. The following risk factors were associated with catheter‐associated complications and/or CR‐BSI: the presence of ulcers, the use of systemic anticoagulants, public insurance (Medicare or Medicaid), and patient age. Independent predictors of line removal for any complication included age and anticoagulant use. Conclusion: Catheter‐related complications were extremely common in patients receiving HPN. Healthcare providers caring for individuals who require HPN should be aware of risk factors for complications.

20.
Since the development of next generation sequencing (NGS) technology, researchers have been extending their efforts on genome‐wide association studies (GWAS) from common variants to rare variants to find the missing inheritance. Although various statistical methods have been proposed to analyze rare variants data, they generally face difficulties for complex disease models involving multiple genes. In this paper, we propose a tree‐based analysis of rare variants (TARV) that adopts a nonparametric disease model and is capable of exploring gene–gene interactions. We found that TARV outperforms the sequence kernel association test (SKAT) in most of our simulation scenarios, and by notable margins in some cases. By applying TARV to the Study of Addiction: Genetics and Environment (SAGE) data, we successfully detected gene CTNNA2 and its 43 specific variants that increase the risk of alcoholism in women, with an odds ratio (OR) of 1.94. This gene had not been detected in previous analyses of the SAGE data. A post hoc literature search also supports the role of CTNNA2 as a likely risk gene for alcohol addiction. In addition, we detected a plausible protective gene, CNTNAP2, whose 97 rare variants can reduce the risk of alcoholism in women, with an OR of 0.55. These findings suggest that TARV can be effective in dissecting genetic variants for complex diseases using rare variants data.
