Similar Articles
1.
Familial aggregation of prostate cancer is likely to be due to multiple susceptibility loci, perhaps acting in conjunction with shared lifestyle risk factors. Models that assume a single mode of inheritance may be unrealistic. We analyzed genetic models of susceptibility to prostate cancer using segregation analysis of occurrence in families ascertained through population‐based series totaling 4390 incident cases. We investigated major gene models (dominant, recessive, general, X‐linked), polygenic models, and mixed models of susceptibility using the pedigree analysis software MENDEL. The hypergeometric model was used to approximate polygenic inheritance. The best‐fitting model for the familial aggregation of prostate cancer was the mixed recessive model. The frequency of the susceptibility allele in the population was estimated to be 0.15 (95% confidence interval (CI) 0.11–0.20), with a relative risk for homozygote carriers of 94 (95% CI 46–192), and a polygenic standard deviation of 2.01 (95% CI 1.72–2.34). These analyses suggest that one or more genes having a strong recessively inherited effect on risk, as well as a number of genes with variants having small multiplicative effects on risk, may account for the genetic susceptibility to prostate cancer. The recessive component would predict the observed higher familial risk for siblings of cases than for fathers, but this could also be due to other factors such as shared lifestyle by siblings, targeted screening effects, and/or non‐additive effects of one or more genes. Genet. Epidemiol. 34:42–50, 2010. © 2009 Wiley‐Liss, Inc.
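As a rough illustration of what these point estimates imply, the sketch below uses only the major-locus component (ignoring the polygenic term) and the values reported in the abstract to compute the homozygote carrier frequency under Hardy-Weinberg equilibrium and the expected proportion of cases who are carriers:

```python
# Illustrative sketch using the abstract's point estimates; the polygenic
# component of the fitted mixed model is ignored here for simplicity.
q = 0.15          # estimated susceptibility allele frequency
rr_hom = 94.0     # relative risk for homozygote carriers

q2 = q ** 2       # homozygote frequency under Hardy-Weinberg equilibrium

# Expected proportion of cases who are homozygous carriers, by Bayes' rule:
# P(carrier | case) = f * RR / (f * RR + (1 - f))
p_carrier_case = q2 * rr_hom / (q2 * rr_hom + (1 - q2))
```

Under this simplification about 2.3% of the population would be homozygous carriers, yet roughly two-thirds of cases would carry the high-risk genotype.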

2.
Pedigrees collected for linkage studies are a valuable resource that could be used to estimate genetic relative risks (RRs) for genetic variants recently discovered in case‐control genome wide association studies. To estimate RRs from highly ascertained pedigrees, a pedigree “retrospective likelihood” can be used, which adjusts for ascertainment by conditioning on the phenotypes of pedigree members. We explore a variety of approaches to compute the retrospective likelihood, and illustrate a Newton‐Raphson method that is computationally efficient particularly for single nucleotide polymorphisms (SNPs) modeled as having a log‐additive effect of alleles on the RR. We also illustrate, by simulations, that a naïve “composite likelihood” method can lead to biased RR estimates, mainly by not conditioning on the ascertainment process or, as we propose, on the disease status of all pedigree members. Applications of the retrospective likelihood to pedigrees collected for a prostate cancer linkage study and recently reported risk‐SNPs illustrate the utility of our methods, with results showing that the RRs estimated from the highly ascertained pedigrees are consistent with odds ratios estimated in case‐control studies. We also evaluate the potential impact of residual correlations of disease risk among family members due to shared unmeasured risk factors (genetic or environmental) by allowing for a random baseline risk parameter. When modeling only the affected family members in our data, there was little evidence for heterogeneity in baseline risks across families. Genet. Epidemiol. 34: 287–298, 2010. © 2009 Wiley‐Liss, Inc.

3.
Providing valid risk estimates of a genetic disease with variable age of onset is a major challenge for prevention strategies. When data are obtained from pedigrees ascertained through affected individuals, an adjustment for ascertainment bias is necessary. This article focuses on ascertainment through at least one affected individual and presents a maximum-likelihood estimation method, called the Proband's phenotype exclusion likelihood (PEL), for estimating age‐dependent penetrance using disease status and genotypic information of family members in pedigrees unselected for family history. We studied the properties of the PEL and compared it with another method, the prospective likelihood, in terms of bias and efficiency of the risk estimates. For that purpose, family samples were simulated under various disease risk models and under various ascertainment patterns. We showed that, whatever the genetic model and the ascertainment scheme, the PEL provided unbiased estimates, whereas the prospective likelihood exhibited some bias in a number of situations. As an illustration, we estimated the disease risk for transthyretin amyloid neuropathy from a French sample and a Portuguese sample, and for BRCA1/2‐associated breast cancer from a sample ascertained on early‐onset breast cancer cases. Genet. Epidemiol. 33:379–385, 2009. © 2008 Wiley‐Liss, Inc.

4.
Instrumental variable estimates of causal effects can be biased when using many instruments that are only weakly associated with the exposure. We describe several techniques to reduce this bias and estimate corrected standard errors. We present our findings using a simulation study and an empirical application. For the latter, we estimate the effect of height on lung function, using genetic variants as instruments for height. Our simulation study demonstrates that, using many weak individual variants, two‐stage least squares (2SLS) is biased, whereas the limited information maximum likelihood (LIML) and the continuously updating estimator (CUE) are unbiased and have accurate rejection frequencies when standard errors are corrected for the presence of many weak instruments. Our illustrative empirical example uses data on 3631 children from England. We used 180 genetic variants as instruments and compared conventional ordinary least squares estimates with results for the 2SLS, LIML, and CUE instrumental variable estimators using the individual height variants. We further compare these with instrumental variable estimates using an unweighted or weighted allele score as single instruments. In conclusion, the allele scores and CUE gave consistent estimates of the causal effect. In our empirical example, estimates using the allele score were more efficient. CUE with corrected standard errors, however, provides a useful additional statistical tool in applications with many weak instruments. The CUE may be preferred over an allele score if the population weights for the allele score are unknown or when the causal effects of multiple risk factors are estimated jointly. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
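The weak-instrument bias discussed above is easiest to appreciate against the baseline single-instrument case. Below is a minimal 2SLS sketch on simulated data; all variable names and parameter values are illustrative assumptions, not taken from the study. With one strong instrument the 2SLS estimate recovers the causal effect while naive OLS is confounded:

```python
import numpy as np

# Minimal two-stage least squares (2SLS) sketch with one strong instrument.
rng = np.random.default_rng(0)
n = 20000
beta = 0.5                       # true causal effect of x on y (illustrative)

u = rng.normal(size=n)           # unmeasured confounder
z = rng.normal(size=n)           # instrument (e.g. a genetic variant score)
x = 1.0 * z + u + rng.normal(size=n)
y = beta * x + u + rng.normal(size=n)

# Stage 1: project x on z; Stage 2: regress y on the fitted values.
x_hat = z * (z @ x) / (z @ z)
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)

# Naive OLS (no intercept; variables are mean-zero) is biased upward here,
# because the confounder u raises both x and y.
beta_ols = (x @ y) / (x @ x)
```

With many instruments that are individually weak, this same 2SLS construction drifts toward the confounded OLS answer, which is the bias LIML and CUE are designed to avoid.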

5.
Family data are useful for estimating disease risk in carriers of specific genotypes of a given gene (penetrance). Penetrance is frequently estimated assuming that relatives' phenotypes are independent, given their genotypes for the gene of interest. This assumption is unrealistic when multiple shared risk factors contribute to disease risk. In this setting, the phenotypes of relatives are correlated even after adjustment for the genotypes of any one gene (residual correlation). Many methods have been proposed to address this problem, but their performance has not been evaluated systematically. In simulations we generated genotypes for a rare (frequency 0.35%) allele of moderate penetrance, and a common (frequency 15%) allele of low penetrance, and then generated correlated disease survival times using the Clayton‐Oakes copula model. We ascertained families using both population and clinic designs. We then compared the estimates of several methods to the optimal ones obtained from the model used to generate the data. We found that penetrance estimates for common low‐risk genotypes were more robust to model misspecification than those for rare, moderate‐risk genotypes. For the latter, penetrance estimates obtained ignoring residual disease correlation had large biases. Also biased were estimates based only on families that segregate the risk allele. In contrast, a method for accommodating phenotype correlation by assuming the presence of genetic heterogeneity performed nearly optimally, even when the survival data were coded as binary outcomes. We conclude that penetrance estimates that accommodate residual phenotype correlation (even only approximately) outperform those that ignore it, and that coding censored survival outcomes as binary does not substantially increase the mean‐square error of the estimates, provided the censoring is not extensive. Genet. Epidemiol. 34: 373–381, 2010. © 2010 Wiley‐Liss, Inc.
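A minimal sketch of how correlated uniforms (convertible to correlated survival times) can be drawn from a Clayton copula by conditional inversion, in the spirit of the simulation design described above; the dependence parameter theta and sample size are illustrative assumptions:

```python
import numpy as np

# Sketch: sample (U, V) from a Clayton copula by conditional inversion.
# theta > 0 controls the strength of positive dependence (illustrative).
def clayton_sample(n, theta, rng):
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)   # auxiliary uniform for the conditional inverse
    v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(1)
u, v = clayton_sample(5000, theta=2.0, rng=rng)

# Correlated uniforms can then be mapped to survival times, e.g. via
# t = -log(u) / hazard for an exponential baseline.
```

The copula induces the residual phenotype correlation among relatives that, per the abstract, biases penetrance estimates when ignored.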

6.
Evidence for age-specific genetic relative risks in lung cancer
Recent studies of familial aggregation suggest that family history of lung cancer among first-degree relatives is associated with increased risk for early-onset, but not late-onset, lung cancer. To assess whether this could be explained by variability in genetic relative risk across age, segregation analysis was performed on the Louisiana Lung Cancer Dataset. This data set consisted of 337 probands who died of lung cancer between 1976 and 1979 and their first-degree relatives. A variation of the Cox proportional hazards model was used that allowed estimation of age- and genotype-specific incidence rates, from which the authors obtained estimates of age-specific genetic relative risks. The best-fitting model included an autosomal dominant locus (allele frequency, 0.043), with carrier-to-noncarrier relative risks that exceeded 100 for ages less than 60 years and declined monotonically to 1.6 by age 80. The hypothesis of proportional genetic relative risk across age was rejected (p = 0.009). The estimated proportion of persons with lung cancer who carry the high-risk allele exceeds 90% for cases with onset at age 60 years or less and decreases to approximately 10% for cases with onset at age 80 years or older. These findings support previous evidence of a major susceptibility locus for lung cancer and suggest that linkage studies should preferentially recruit young lung cancer cases and their relatives.
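The reported carrier proportions among cases follow from the fitted dominant model by Bayes' rule. The sketch below uses the abstract's point estimates, collapsing the continuous decline in relative risk with age to two illustrative values:

```python
# Sketch: P(carrier | case) under the fitted autosomal dominant model.
p = 0.043                         # estimated high-risk allele frequency
f = 1 - (1 - p) ** 2              # carrier frequency under HWE (dominant model)

def p_carrier_given_case(f, rr):
    # Bayes' rule with a carrier-to-noncarrier relative risk rr.
    return f * rr / (f * rr + (1 - f))

young = p_carrier_given_case(f, 100.0)  # RR > 100 below age 60
old = p_carrier_given_case(f, 1.6)      # RR ~ 1.6 by age 80
```

This approximately reproduces the abstract's figures: over 90% of early-onset cases, but only on the order of 10% of late-onset cases, are expected to carry the high-risk allele.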

7.
The case-crossover method is an efficient study design for evaluating associations between transient exposures and the onset of acute events. In one common implementation of this design, odds ratios are estimated using conditional logistic or stratified Cox proportional hazards models, with data stratified on each individual event. In environmental epidemiology, where aggregate time-series data are often used, combining strata with identical exposure histories may be computationally convenient. However, when the SAS software package (SAS Institute Inc., Cary, North Carolina) is used for analysis, users can obtain biased results if care is not taken to properly account for multiple cases observed at the same time. The authors show that fitting a stratified Cox model with the "Breslow" option for handling tied failure times (i.e., ties = Breslow) provides unbiased health-effect estimates in case-crossover studies with shared exposures. The authors' simulations showed that using conditional logistic regression (or, equivalently, a stratified Cox model with the "ties = discrete" option) in this setting leads to health-effect estimates which can be biased away from the null hypothesis of no association by 22%-39%, even for small simulated relative risks. All methods tested by the authors yielded unbiased results under a simulated scenario with a relative risk of 1.0. This potential bias does not arise in R (R Foundation for Statistical Computing, Vienna, Austria) or Stata (Stata Corporation, College Station, Texas).
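The difference between the two tie-handling options comes down to the denominator of the partial-likelihood contribution for a stratum with tied failures. A toy computation with illustrative risk scores (not from the study) makes the distinction concrete:

```python
from itertools import combinations
from math import prod

# One stratum: risk scores exp(beta * x) for five subjects in the risk set
# (illustrative values), with d = 2 cases failing at the same time.
risk_scores = [1.0, 1.5, 2.0, 0.8, 1.2]
d = 2

# Breslow approximation: the full risk-set sum, raised to the number of ties.
breslow_denom = sum(risk_scores) ** d

# Exact discrete (conditional logistic) model: sum over all size-d subsets.
discrete_denom = sum(prod(s) for s in combinations(risk_scores, d))
```

The Breslow denominator treats the risk set as unchanged across the tied failures (and so always exceeds the exact subset sum); which option is appropriate depends on whether ties reflect genuinely simultaneous events, which is why the choice matters for shared-exposure case-crossover data.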

8.
Family studies to identify disease-related genes frequently collect only families with multiple cases. It is often desirable to determine if risk factors that are known to influence disease risk in the general population also play a role in the study families. If so, these factors should be incorporated into the genetic analysis to control for confounding. Pfeiffer et al. [2001 Biometrika 88: 933-948] proposed a variance components or random effects model to account for common familial effects and for different genetic correlations among family members. After adjusting for ascertainment, they found maximum likelihood estimates of the measured exposure effects. Although it is appealing that this model accounts for genetic correlations as well as for the ascertainment of families, in order to perform an analysis one needs to specify the distribution of random genetic effects. The current work investigates the robustness of the proposed model with respect to various misspecifications of genetic random effects in simulations. When the true underlying genetic mechanism is polygenic with a small dominant component, or Mendelian with low allele frequency and penetrance, the effects of misspecification on the estimation of fixed effects in the model are negligible. The model is applied to data from a family study on nasopharyngeal carcinoma in Taiwan.

9.
We assess the asymptotic bias of estimates of exposure effects conditional on covariates when summary scores of confounders, instead of the confounders themselves, are used to analyze observational data. First, we study regression models for cohort data that are adjusted for summary scores. Second, we derive the asymptotic bias for case‐control studies when cases and controls are matched on a summary score, and then analyzed either using conditional logistic regression or by unconditional logistic regression adjusted for the summary score. Two scores, the propensity score (PS) and the disease risk score (DRS), are studied in detail. For cohort analysis, when regression models are adjusted for the PS, the estimated conditional treatment effect is unbiased only for linear models, or at the null for non‐linear models. Adjustment of cohort data for DRS yields unbiased estimates only for linear regression; all other estimates of exposure effects are biased. Matching cases and controls on DRS and analyzing them using conditional logistic regression yields unbiased estimates of exposure effect, whereas adjusting for the DRS in unconditional logistic regression yields biased estimates, even under the null hypothesis of no association. Matching cases and controls on the PS yields unbiased estimates only under the null for both conditional and unconditional logistic regression, adjusted for the PS. We study the bias for various confounding scenarios and compare our asymptotic results with those from simulations with limited sample sizes. To create realistic correlations among multiple confounders, we also based simulations on a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.

10.
Association studies assessing the relationship between a common polymorphism and disease generally compare allele frequencies in cases and controls. In such studies, a limited amount of information is often available about disease incidence in relatives. We hypothesised that more power could be obtained by incorporating the constraints imposed by the properties of a genetic polymorphism, and that power could be further increased by using family history (FH) information. We have developed a simple method for incorporating basic FH information from cases and controls into a genetic association study, assuming Hardy-Weinberg equilibrium (HWE) in the general population. We model the likelihood of the data in terms of the allele frequency and its relative risk (RR) of disease and perform likelihood ratio tests. Using simulations, we compared the power to detect an association using this approach with that of a 2 x 2 chi-squared test, for a range of disease models. The sample size required to detect an association is consistently lower for tests including the HWE constraint, with the largest reduction for more common alleles. The required sample size is reduced further by stratifying by FH. Stratifying by FH also improves the precision of the RR estimates. In situations where basic FH data are already available, this study shows that efficiency can be improved by the inclusion of even this small amount of extra information.
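A sketch of the HWE constraint at work: given an allele frequency and a relative risk, the expected genotype distribution among cases follows directly, so the likelihood can be parameterized in just those two quantities. The allele frequency, the per-allele RR, and the log-additive risk model below are illustrative assumptions, not the paper's specific parameterization:

```python
# Sketch: expected genotype frequencies among cases under HWE in the
# population and a multiplicative per-allele relative risk r.
p = 0.3            # risk allele frequency (illustrative)
r = 1.5            # per-allele relative risk (illustrative)

# Population genotype frequencies (0, 1, 2 copies) under HWE.
pop = [(1 - p) ** 2, 2 * p * (1 - p), p ** 2]

# Case genotype frequencies are proportional to pop frequency * r^(copies).
w = [pop[g] * r ** g for g in range(3)]
cases = [x / sum(w) for x in w]
```

Because the three genotype frequencies are tied together by the two parameters (p, r), a constrained likelihood ratio test spends fewer degrees of freedom than an unconstrained genotype comparison, which is the source of the power gain described above.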

11.
BACKGROUND: Cross-sectional genetic association studies are now widely employed to look for genes which confer longevity. Such studies are based on two assumptions: (a) initial relative allele frequencies in the different age cohorts are similar, and (b) the risk of mortality conferred by genotypes does not depend on year of birth. METHODS: We explored the validity of these assumptions and reviewed 15 cross-sectional studies of common apolipoprotein E (APOE) polymorphisms and longevity. RESULTS: Higher relative epsilon2 frequencies, and lower relative epsilon4 allele frequencies, were observed in elderly versus younger populations. If assumptions (a) and (b) were correct, the estimates for epsilon2 and epsilon4 alleles respectively versus epsilon3 alleles would be 1.34 (95% CI: 1.19, 1.35) and 0.54 (95% CI: 0.46, 0.63) in elderly versus younger individuals. However, there was an association between relative epsilon4 allele frequency in controls and APOE epsilon4 effect (beta = -0.45, 95% CI: -0.89, 0.00). In relation to assumption (a), there is substantial variation in relative APOE allele frequencies (4-21%), with considerable heterogeneity evident within geographically proximate populations, and population admixture is likely to have resulted in changes in allele frequency over time; in relation to assumption (b), APOE-related causes of death are context specific and have changed considerably over the last 30 years. CONCLUSION: The validity of case-control type studies of the genetic basis of longevity based on the above assumptions is questionable, especially when considerable differences exist in allele frequency by population and when the genes in question interact with environmental factors, which vary by time and place.

12.
Mendelian randomization analyses are often performed using summarized data. The causal estimate from a one‐sample analysis (in which data are taken from a single data source) with weak instrumental variables is biased in the direction of the observational association between the risk factor and outcome, whereas the estimate from a two‐sample analysis (in which data on the risk factor and outcome are taken from non‐overlapping datasets) is less biased and any bias is in the direction of the null. When using genetic consortia that have partially overlapping sets of participants, the direction and extent of bias are uncertain. In this paper, we perform simulation studies to investigate the magnitude of bias and Type 1 error rate inflation arising from sample overlap. We consider both a continuous outcome and a case‐control setting with a binary outcome. For a continuous outcome, bias due to sample overlap is a linear function of the proportion of overlap between the samples. So, in the case of a null causal effect, if the relative bias of the one‐sample instrumental variable estimate is 10% (corresponding to an F parameter of 10), then the relative bias with 50% sample overlap is 5%, and with 30% sample overlap is 3%. In a case‐control setting, if risk factor measurements are only included for the control participants, unbiased estimates are obtained even in a one‐sample setting. However, if risk factor data on both control and case participants are used, then bias is similar with a binary outcome as with a continuous outcome. Consortia releasing publicly available data on the associations of genetic variants with continuous risk factors should provide estimates that exclude case participants from case‐control samples.  相似文献   
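The linear overlap relationship described above can be written as a one-line function; the numbers below are the ones given in the text (a 10% one-sample relative bias, i.e. an F parameter of 10):

```python
# Sketch of the stated linear relationship: relative bias of the causal
# estimate scales with the proportion of sample overlap.
def relative_bias(one_sample_bias, overlap):
    # overlap = 0 corresponds to a two-sample analysis (no bias),
    # overlap = 1 to a fully one-sample analysis.
    return one_sample_bias * overlap

b_full = relative_bias(0.10, 1.0)   # one-sample setting: 10%
b_half = relative_bias(0.10, 0.5)   # 50% overlap: 5%
b_third = relative_bias(0.10, 0.3)  # 30% overlap: 3%
```

This makes the practical point explicit: reducing overlap between the variant-risk factor and variant-outcome samples proportionally shrinks weak-instrument bias toward the two-sample (unbiased) limit.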

13.
In this population-based case control study, we recruited 1601 pulmonary tuberculosis cases and 1526 healthy controls, aiming to investigate the association of genetic polymorphisms of the P2X7 gene with the susceptibility to and prognosis of pulmonary tuberculosis in a Chinese Han population. Five single-nucleotide polymorphisms (SNPs) in the P2X7 gene were genotyped. The odds ratio (OR) or relative risk (RR), together with the 95% confidence interval (CI), were used to estimate the effect of genetic polymorphisms on the disease. After correction for multiple comparisons, the SNP rs1718119 remained significant. The allele A of rs1718119 was related to a reduced risk for all active tuberculosis (OR for each additional allele A: 0.81, 95% CI: 0.69–0.94) and sputum smear-positive cases (OR for each additional allele A: 0.78, 95% CI: 0.66–0.93). The effects of these genetic variations were more evident among smokers. Survival analysis showed a weak association between rs7958311 and treatment outcome, where each additional allele A of the SNP rs7958311 contributed to a 59% increase in the probability of a successful treatment outcome (adjusted RR: 1.59, 95% CI: 1.05–2.40, P = 0.028); however, this was not significant after the Bonferroni correction. We demonstrated that genetic variations of the P2X7 gene might be involved in the risk and prognosis of human tuberculosis.

14.
In affected-sib-pair (ASP) studies, parameters such as the locus-specific sibling relative risk, lambda(s), may be estimated and used to decide whether or not to continue the search for susceptibility genes. Typically, a maximum likelihood point estimate of lambda(s) is given, but since this estimate may have substantial variability, it is of interest to obtain confidence limits for the true value of lambda(s). While a variety of methods for doing this exist, there is considerable uncertainty over their reliability. This is because the discrete nature of ASP data and the imposition of genetic "possible triangle" constraints during the likelihood maximization mean that asymptotic results may not apply. In this paper, we use simulation to evaluate the reliability of various asymptotic and simulation-based confidence intervals, the latter being based on a resampling, or bootstrap, approach. We seek to identify, from the large pool of methods available, those methods that yield short intervals with accurate coverage probabilities for ASP data. Our results show that many of the most popular bootstrap confidence interval methods perform poorly for ASP data, giving coverage probabilities much lower than claimed. The test-inversion, profile-likelihood, and asymptotic methods, however, perform well, although some care is needed in choice of nuisance parameter. Overall, in simulations under a variety of different genetic hypotheses, we find that the asymptotic methods of confidence interval evaluation are the most reliable, even in small samples. We illustrate our results with a practical application to a real data set, obtaining confidence intervals for the sibling relative risks associated with several loci involved in type 1 diabetes.
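For context, the simplest of the resampling intervals in this family is the percentile bootstrap, sketched below on generic data. This shows the mechanics only; per the abstract, intervals of this type can undercover badly for ASP data, so this is not a recommendation for that setting. The data-generating choices here are illustrative assumptions:

```python
import numpy as np

# Minimal percentile-bootstrap confidence interval sketch for a sample mean
# (stand-in for a more complex estimator such as lambda_s).
rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=200)   # illustrative data

boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
point = data.mean()
```

The paper's finding is that for discrete, boundary-constrained ASP likelihoods, intervals built this way claim 95% coverage but deliver much less, whereas test-inversion, profile-likelihood, and asymptotic intervals behave as advertised.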

15.
Next generation sequencing technologies have made it possible to investigate the role of rare variants (RVs) in disease etiology. Because RVs associated with disease susceptibility tend to be enriched in families with affected individuals, study designs based on affected sib pairs (ASP) can be more powerful than case–control studies. We construct tests of RV-set association in ASPs for single genomic regions as well as for multiple regions. Single-region tests can efficiently detect a gene region harboring susceptibility variants, while multiple-region extensions are meant to capture signals dispersed across a biological pathway, potentially as a result of locus heterogeneity. Within ascertained ASPs, the test statistics contrast the frequencies of duplicate rare alleles (usually appearing on a shared haplotype) against frequencies of a single rare allele copy (appearing on a nonshared haplotype); we call these allelic parity tests. Incorporation of minor allele frequency estimates from reference populations can markedly improve test efficiency. Under various genetic penetrance models, application of the tests in simulated ASP data sets demonstrates good type I error properties as well as power gains over approaches that regress ASP rare allele counts on sharing state, especially in small samples. We discuss robustness of the allelic parity methods to the presence of genetic linkage, misspecification of reference population allele frequencies, sequencing error and de novo mutations, and population stratification. As proof of principle, we apply single- and multiple-region tests in a motivating study data set consisting of whole exome sequencing of sisters ascertained with early onset breast cancer.

16.
In epidemiology, one approach to investigating the dependence of disease risk on an explanatory variable in the presence of several confounding variables is by fitting a binary regression using a conditional likelihood, thus eliminating the nuisance parameters. When the explanatory variable is measured with error, the estimated regression coefficient is biased usually towards zero. Motivated by the need to correct for this bias in analyses that combine data from a number of case-control studies of lung cancer risk associated with exposure to residential radon, two approaches are investigated. Both employ the conditional distribution of the true explanatory variable given the measured one. The method of regression calibration uses the expected value of the true given measured variable as the covariate. The second approach integrates the conditional likelihood numerically by sampling from the distribution of the true given measured explanatory variable. The two approaches give very similar point estimates and confidence intervals not only for the motivating example but also for an artificial data set with known properties. These results and some further simulations that demonstrate correct coverage for the confidence intervals suggest that for studies of residential radon and lung cancer the regression calibration approach will perform very well, so that nothing more sophisticated is needed to correct for measurement error.

17.
We consider a relative risk and a risk difference model for binomial data, and a rate difference model for Poisson (person‐year) data. It is assumed that the data are stratified in a large number of small strata. If each stratum has its own parameter in the model, then, due to the large number of parameters, straightforward maximum likelihood leads to inconsistent estimates of the relevant parameters. In contrast to the logistic model, conditioning on the number of events per stratum does not help in eliminating the stratum nuisance parameters. We propose a pseudo likelihood method to overcome these consistency problems. The resulting pseudo maximum likelihood estimates can easily be computed with standard statistical software. Our approach gives a more general framework for the Mantel–Haenszel type estimators proposed in the literature. In the special case of a series of 2 × 2 tables, for the risk and rate difference models, our approach yields exactly these ad hoc Mantel–Haenszel estimators, while for the relative risk model it gives a close approximation of the Mantel–Haenszel relative risk estimator. For the regression models corresponding to the association measures relative risk, risk difference and rate difference, our method provides analogues of conditional logistic regression, which were not previously available.
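For the special case of a series of 2 × 2 tables mentioned above, the classical Mantel–Haenszel relative risk estimator can be sketched directly; the stratum counts below are illustrative:

```python
# Sketch: Mantel-Haenszel relative risk across a series of 2x2 tables.
# Each table: (cases_exposed, n_exposed, cases_unexposed, n_unexposed).
tables = [
    (10, 100, 5, 100),   # illustrative stratum counts
    (8, 50, 6, 60),
]

# RR_MH = sum_i(a_i * n0_i / N_i) / sum_i(c_i * n1_i / N_i)
num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in tables)
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in tables)
rr_mh = num / den
```

With a single stratum this reduces to the crude relative risk; the paper's pseudo-likelihood framework generalizes estimators of this type to full regression models.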

18.
There is considerable evidence indicating that disease risk in carriers of high-risk mutations (e.g. BRCA1 and BRCA2) varies by other genetic factors. Such mutations tend to be rare in the population and studies of genetic modifiers of risk have focused on sampling mutation carriers through clinical genetics centres. Genetic testing targets affected individuals from high-risk families, making ascertainment of mutation carriers non-random with respect to disease phenotype. Standard analytical methods can lead to biased estimates of associations. Methods proposed to address this problem include a weighted-cohort (WC) and retrospective likelihood (RL) approach. Their performance has not been evaluated systematically. We evaluate these methods by simulation and extend the RL to analysing associations of two diseases simultaneously (competing risks RL, CRRL). The standard cohort approach (Cox regression) yielded the most biased risk ratio (RR) estimates (relative bias, RB: -25% to -17%) and had the lowest power. The WC and RL approaches provided similar RR estimates, were least biased (RB: -2.6% to 2.5%), and had the lowest mean-squared errors. The RL method generally had more power than WC. When analysing associations with two diseases, ignoring a potential association with one disease leads to inflated type I errors for inferences with respect to the second disease and biased RR estimates. The CRRL generally gave unbiased RR estimates for both disease risks and had correct nominal type I errors. These methods are illustrated by analyses of genetic modifiers of breast and ovarian cancer risk for BRCA1 and BRCA2 mutation carriers.

19.
Results of a simulation study with two methods of analysis of data simulated under the mixed model on a 232-member pedigree are presented. The programs Pedigree Analysis Package (PAP), which approximates the likelihoods needed in a complex segregation analysis, and MIXD, which uses Markov chain Monte Carlo (MCMC) methods to estimate likelihoods, were used. PAP obtained unbiased estimates of the major locus genotype means and the gene frequency, but biased estimates of the environmental variance component, and thus the heritability. A substantial fraction of the runs did not converge to an internal set of parameter estimates when analyzed with PAP. MIXD, which uses the Gibbs sampler to perform the MCMC sampling, produced unbiased estimates of all parameters with considerably more accuracy than obtained with PAP, and did not suffer from convergence of estimates to the boundary of the parameter space. The difference in behavior and accuracy of parameter estimates between PAP and MIXD was most apparent for models with either high or low residual additive genetic variance. Thus in situations where accuracy of the model is important, use of MCMC methods may be useful. In situations where less accuracy is needed, approximation methods may be adequate. Practical issues in using MCMC as implemented in MIXD to fit the mixed model are also discussed. Results of the simulations indicate that, unlike PAP, the starting configurations of most parameter estimates do not substantially influence the final parameter estimates in analysis with MIXD. © 1996 Wiley-Liss, Inc.

20.
Mendelian randomization is the use of genetic instrumental variables to obtain causal inferences from observational data. Two recent developments for combining information on multiple uncorrelated instrumental variables (IVs) into a single causal estimate are as follows: (i) allele scores, in which individual‐level data on the IVs are aggregated into a univariate score, which is used as a single IV, and (ii) a summary statistic method, in which causal estimates calculated from each IV using summarized data are combined in an inverse‐variance weighted meta‐analysis. To avoid bias from weak instruments, unweighted and externally weighted allele scores have been recommended. Here, we propose equivalent approaches using summarized data and also provide extensions of the methods for use with correlated IVs. We investigate the impact of different choices of weights on the bias and precision of estimates in simulation studies. We show that allele score estimates can be reproduced using summarized data on genetic associations with the risk factor and the outcome. Estimates from the summary statistic method using external weights are biased towards the null when the weights are imprecisely estimated; in contrast, allele score estimates are unbiased. With equal or external weights, both methods provide appropriate tests of the null hypothesis of no causal effect even with large numbers of potentially weak instruments. We illustrate these methods using summarized data on the causal effect of low‐density lipoprotein cholesterol on coronary heart disease risk. It is shown that a more precise causal estimate can be obtained using multiple genetic variants from a single gene region, even if the variants are correlated. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
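A minimal sketch of the inverse-variance weighted (IVW) combination of per-variant causal estimates from summarized data, as described in point (ii) above. The association estimates and standard errors are illustrative, and the first-order weights below ignore uncertainty in the variant-risk factor associations:

```python
import numpy as np

# Sketch: IVW combination of per-variant ratio (Wald) estimates for
# uncorrelated instruments, using only summarized association data.
bx = np.array([0.10, 0.20, 0.15])   # variant-risk factor associations (illustrative)
by = np.array([0.05, 0.10, 0.075])  # variant-outcome associations (illustrative)
se = np.array([0.01, 0.02, 0.015])  # standard errors of by

ratio = by / bx                     # per-variant causal estimates
weights = (bx / se) ** 2            # first-order inverse-variance weights

beta_ivw = np.sum(ratio * weights) / np.sum(weights)
se_ivw = 1.0 / np.sqrt(np.sum(weights))
```

In this toy example every variant implies the same ratio, so the IVW estimate equals that common value; with correlated variants from a single gene region, the weights would instead involve the variants' correlation matrix, as in the extensions the paper proposes.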
