Similar Articles
20 similar articles found (search time: 15 ms)
1.
Countermatching designs can provide more efficient estimates than simple matching or case–cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models with time-varying covariates. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case–control designs in the presence of time-varying variables. A simulation study is carried out covering four scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each of these with interactions. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large efficiency gains compared with the case–cohort design. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood without calibration, and estimators were more efficient under countermatching than under case–cohort sampling in the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
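As a companion to this abstract, here is a minimal sketch of the standard cumulative-hazard inversion for generating a survival time under a Cox model with one binary time-varying covariate that switches on at a known time. The baseline hazard, effect size, and switch time are illustrative assumptions, and this is not claimed to be the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def sim_time_tv_binary(lam0, beta, t_switch):
    """Draw one event time under h(t) = lam0 * exp(beta * x(t)), where
    the binary covariate x(t) jumps from 0 to 1 at t_switch.
    Solves H(T) = -log(U) for T, piecewise in closed form."""
    target = -np.log(rng.uniform())      # target cumulative hazard
    h0 = lam0                            # hazard while x(t) = 0
    h1 = lam0 * np.exp(beta)             # hazard after the jump
    if target <= h0 * t_switch:          # event occurs before the switch
        return target / h0
    return t_switch + (target - h0 * t_switch) / h1

times = np.array([sim_time_tv_binary(lam0=0.1, beta=np.log(2), t_switch=5.0)
                  for _ in range(10_000)])
print(times.mean())
```

Because the hazard is piecewise constant given the covariate path, the cumulative hazard is piecewise linear and the inversion is exact; more general covariate paths require numerical root-finding.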

2.
Analysing the determinants and consequences of hospital-acquired infections involves the evaluation of large cohorts. Infected patients are often rare for a specific pathogen, because most patients admitted to the hospital are discharged or die without such an infection. Death and discharge are competing events for acquiring an infection, because affected individuals are no longer at risk of a hospital-acquired infection. The data are therefore best analysed with an extended survival model, the extended illness-death model. A common problem in cohort studies is the costly collection of covariate values. To make efficient use of data from infected as well as uninfected patients, we propose a tailored case-cohort approach for the extended illness-death model. The basic idea of the case-cohort design is to use only a random sample of the full cohort, referred to as the subcohort, together with all cases, namely the infected patients. Covariate values are thus obtained for only a small part of the full cohort. The method builds on existing and established methods and performs regression analysis in adapted Cox proportional hazards models. We propose estimation of all cause-specific cumulative hazards and transition probabilities in an extended illness-death model based on case-cohort sampling. As an example, we apply the methodology to infection with a specific pathogen using a large cohort from Spanish hospital data. The results of the case-cohort design are compared with those from the full cohort to investigate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
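To make the sampling idea concrete, the following is a small illustrative sketch (not the paper's implementation) of drawing a classical case-cohort sample and attaching inverse-probability design weights; the subcohort fraction and case prevalence are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def case_cohort_sample(is_case, subcohort_frac):
    """Classical case-cohort sample: a random subcohort plus all cases.
    Non-case subcohort members carry weight 1/subcohort_frac (inverse
    sampling probability); cases are always selected and get weight 1."""
    in_subcohort = rng.uniform(size=is_case.size) < subcohort_frac
    selected = in_subcohort | is_case
    weights = np.where(is_case, 1.0, 1.0 / subcohort_frac)
    return np.flatnonzero(selected), weights[selected]

is_case = rng.uniform(size=10_000) < 0.03   # ~3% infected patients (hypothetical)
idx, w = case_cohort_sample(is_case, subcohort_frac=0.10)
print(len(idx), w[:5])
```

These weights can then be passed to a weighted Cox fit, which is the spirit of the adapted proportional hazards analysis described above.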

3.
We recently proposed a bias correction approach for accurately estimating the odds ratio (OR) of genetic variants associated with a secondary phenotype, where the secondary phenotype is associated with the primary disease, using the original case-control data collected for studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and power of the proposed approach and compared the results with those obtained from logistic regression analysis (with or without adjustment for primary disease status). We performed a simulation study based on a frequency-matched case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. Based on the simulation results, the logistic regression approaches, whether or not they adjusted for primary disease status, had low power for detecting secondary-phenotype-associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying SNP-secondary phenotype associations and had better-controlled type I error probabilities. Genet. Epidemiol. 35:739-743, 2011. © 2011 Wiley Periodicals, Inc.

4.
Genome-wide association studies (GWAS) often assess gene–environment interactions (G × E). We consider the problem of accurately estimating a G × E interaction in a case–control GWAS when a subset of the controls have silent, or undiagnosed, disease and the frequency of the silent disease varies with the environmental variable. We show that using case–control status without accounting for misdiagnosis can lead to biased estimates of the G × E interaction. We further propose a pseudolikelihood approach to remove the bias and accurately estimate how the relationship between the genetic variant and the true disease status varies with the environmental variable. We demonstrate our method in extensive simulations and apply it to a GWAS of prostate cancer.

5.
The case–cohort study design has often been used in studies of a rare disease, or of a common disease when some biospecimens need to be preserved for future studies. A case–cohort design consists of a random sample, called the subcohort, and all or a portion of the subjects with the disease of interest. One advantage of the case–cohort design is that the same subcohort can be used for studying multiple diseases. Stratified random sampling is often used for the subcohort. Additive hazards models are often preferred in studies where the risk difference, rather than the relative risk, is of main interest. Existing methods do not fully use the available covariate information. We propose a more efficient estimator that makes full use of available covariate information for the additive hazards model with data from a stratified case–cohort design with rare (the traditional situation) and non-rare (the generalized situation) diseases, using an estimating equation approach with a new weight function. The proposed estimators are shown to be consistent and asymptotically normally distributed. Simulation studies show that the proposed method, using all available information, leads to efficiency gains, and that stratification of the subcohort improves efficiency when the strata are highly correlated with the covariates. The proposed method is applied to data from the Atherosclerosis Risk in Communities study. Copyright © 2015 John Wiley & Sons, Ltd.

6.
Objectives: We provide a case-cohort approach and show that a full competing risks analysis is feasible even in a reduced data set. Competing events for hospital-acquired infections are death and discharge from the hospital, because they preclude the observation of such infections.
Study Design and Setting: Using surveillance data on 6,568 patient admissions (the full cohort) from two Spanish intensive care units, we propose a case-cohort approach that uses only data from a random sample of the full cohort plus all infected patients (the cases). We combine established methodology to study the following measures: event-specific and subdistribution hazard ratios for all three events (infection, death, and discharge), as well as cumulative hazards and incidence functions by risk factor for all three events.
Results: Compared with the values from the full cohort, all measures are well approximated under the case-cohort design. For the event of interest (infection), event-specific and subdistribution hazards can be estimated with the full efficiency of the case-cohort design, so standard errors are only slightly increased, whereas the precision of the estimated hazards of the competing events is reduced according to the size of the subcohort.
Conclusion: The case-cohort design provides an appropriate sampling design for studying hospital-acquired infections in a reduced data set. Potential effects of risk factors on the competing events (death and discharge) can still be evaluated.
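As an illustration of how cumulative incidence functions can be recovered from such a weighted sample, here is a hedged sketch of a weighted Aalen–Johansen estimator. It handles tied times naively and assumes externally supplied design weights, so it is a teaching aid rather than the authors' estimator.

```python
import numpy as np

def weighted_cif(time, cause, weight, target_cause):
    """Weighted Aalen-Johansen cumulative incidence for one cause.
    cause == 0 codes censoring; weight holds case-cohort design weights.
    The CIF increment at an event time is S(t-) * w_i / Y(t), where
    Y(t) is the weighted number at risk."""
    order = np.argsort(time)
    time, cause, weight = time[order], cause[order], weight[order]
    at_risk = np.cumsum(weight[::-1])[::-1]   # weighted risk-set size
    surv, cif = 1.0, 0.0
    out = []
    for i in range(time.size):
        if cause[i] == target_cause:
            cif += surv * weight[i] / at_risk[i]
        if cause[i] != 0:                      # any event depletes survival
            surv *= 1.0 - weight[i] / at_risk[i]
        out.append((time[i], cif))
    return np.array(out)
```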

7.
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise in accounting for within-study correlation and between-study heterogeneity. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, requiring no transformation of probabilities or link function, having a closed-form likelihood, and placing no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of the bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference with current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model in simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model whether or not the true model is Sarmanov beta-binomial, and that the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case–control studies are presented for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
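The composite-likelihood idea can be sketched as follows: each of the two outcomes (say, numbers of true positives and true negatives across diagnostic studies) contributes only its beta-binomial marginal, parameterised directly by a mean and an overdispersion on the probability scale. The mean/overdispersion parameterisation, starting values, and counts below are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

def betabin_loglik(y, n, mu, rho):
    """Beta-binomial log-likelihood (up to the binomial coefficient,
    constant in the parameters), with mean mu and overdispersion rho
    mapped to shape parameters a, b."""
    a = mu * (1 - rho) / rho
    b = (1 - mu) * (1 - rho) / rho
    return np.sum(betaln(y + a, n - y + b) - betaln(a, b))

def neg_composite_loglik(theta, y1, n1, y2, n2):
    """Working-independence composite likelihood: the joint law of the
    two study-specific probabilities is never specified."""
    mu1, mu2, rho1, rho2 = theta
    return -(betabin_loglik(y1, n1, mu1, rho1) +
             betabin_loglik(y2, n2, mu2, rho2))

# Hypothetical counts from 5 diagnostic studies (events, sample sizes)
y1, n1 = np.array([45, 30, 60, 20, 50]), np.array([50, 40, 70, 25, 60])
y2, n2 = np.array([80, 55, 90, 35, 70]), np.array([100, 60, 100, 40, 80])
res = minimize(neg_composite_loglik, x0=[0.8, 0.8, 0.1, 0.1],
               args=(y1, n1, y2, n2), method="L-BFGS-B",
               bounds=[(1e-3, 1 - 1e-3)] * 4)
print(res.x)   # [mu1, mu2, rho1, rho2] on the original probability scale
```

Note that under a composite likelihood, valid standard errors require a sandwich-type variance rather than the inverse Hessian alone.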

8.
Competing risks analysis considers the time to the first event ('survival time') and the event type ('cause'), possibly subject to right-censoring. The cause-specific (i.e., event-specific) hazards completely determine the competing risks process, yet simulation studies often fall back on the much-criticized latent failure time model. Simulation driven by cause-specific hazards appears to be the exception, and when done usually considers only constant hazards, which is unrealistic in many medical situations. We explain how to simulate competing risks data based on possibly time-dependent cause-specific hazards. The simulation design is as easy as any other, relies on identifiable quantities only, and adds to our understanding of the competing risks process. In addition, it immediately generalizes to more complex multistate models. We apply the proposed simulation design to computing the least false parameter of a misspecified proportional subdistribution hazards model, a research question of independent interest in competing risks. The simulation specifications were motivated by data on infectious complications in stem-cell transplanted patients, where results from cause-specific hazards analyses were difficult to interpret in terms of cumulative event probabilities. The simulation illustrates that results from a misspecified proportional subdistribution hazards analysis can be interpreted as a time-averaged effect on the cumulative event probability scale. Copyright © 2009 John Wiley & Sons, Ltd.
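For the special case of constant cause-specific hazards, the simulation recipe reduces to a two-step draw, as sketched below; the rates are invented for illustration, and the paper's time-dependent case replaces the exponential draw with inversion of the all-cause cumulative hazard.

```python
import numpy as np

rng = np.random.default_rng(7)

def sim_competing_risks(n, cause_hazards, censor_rate):
    """Simulate from constant cause-specific hazards: the latent event
    time is exponential with the all-cause rate, and the cause is drawn
    with probability proportional to its cause-specific hazard."""
    lam = np.asarray(cause_hazards, float)
    t_event = rng.exponential(1.0 / lam.sum(), size=n)
    cause = rng.choice(lam.size, size=n, p=lam / lam.sum()) + 1  # 1..K
    t_cens = rng.exponential(1.0 / censor_rate, size=n)
    time = np.minimum(t_event, t_cens)
    status = np.where(t_event <= t_cens, cause, 0)               # 0 = censored
    return time, status

time, status = sim_competing_risks(1_000, cause_hazards=[0.05, 0.10],
                                   censor_rate=0.02)
print(np.bincount(status))   # counts of censored, cause 1, cause 2 observations
```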

9.
Power for time-to-event analyses is usually assessed under continuous-time models. Often, however, times are discrete or grouped, as when the event is observed only when a procedure is performed. Wallenstein and Wittes (Biometrics, 1993) describe the power of the Mantel–Haenszel test for discrete lifetables under their chained binomial model for specified vectors of event probabilities over intervals of time. Herein, the expressions for these probabilities are derived under a piecewise exponential model allowing for staggered entry and losses to follow-up. Radhakrishna (Biometrics, 1965) showed that the Mantel–Haenszel test is maximally efficient under the alternative of a constant odds ratio and derived the optimal weighted test under other alternatives. Lachin (Biostatistical Methods: The Assessment of Relative Risks, 2011) described the power function of this family of weighted Mantel–Haenszel tests. Prentice and Gloeckler (Biometrics, 1978) described a generalization of the proportional hazards model for grouped-time data and the corresponding maximally efficient score test. Their test is also shown to be a weighted Mantel–Haenszel test, and its power function is likewise obtained. The loss in power under the discrete chained binomial model relative to the continuous-time case is trivial provided there is a modest number of periodic evaluations. Relative to the case of homogeneous odds ratios, there can be substantial loss in power when there is substantial heterogeneity of odds ratios, especially when the heterogeneity occurs early in a study, when most subjects are at risk, but little loss in power when the heterogeneity occurs late in a study. Copyright © 2012 John Wiley & Sons, Ltd.
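The link between the piecewise exponential model and the discrete chained binomial probabilities is direct: within interval j of width Δ_j and hazard λ_j, the conditional event probability is p_j = 1 − exp(−λ_j Δ_j). A small sketch with made-up rates (ignoring staggered entry and losses to follow-up, which the paper handles):

```python
import numpy as np

def interval_event_probs(hazards, cuts):
    """Conditional per-interval event probabilities under a piecewise
    exponential model, p_j = 1 - exp(-lambda_j * width_j), together
    with the unconditional survival to each interval's start."""
    widths = np.diff(np.concatenate(([0.0], cuts)))
    p = 1.0 - np.exp(-np.asarray(hazards) * widths)
    surv_start = np.concatenate(([1.0], np.cumprod(1.0 - p)[:-1]))
    return p, surv_start

p, s = interval_event_probs(hazards=[0.02, 0.03, 0.05], cuts=[1.0, 2.0, 3.0])
print(p)   # conditional event probabilities per interval
print(s)   # probability of reaching each interval event-free
```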

10.
This paper discusses regression analysis of multivariate current status failure time data (The Statistical Analysis of Interval-censored Failure Time Data. Springer: New York, 2006), which occur quite often in, for example, tumorigenicity experiments and epidemiologic investigations of the natural history of a disease. For this problem, several marginal approaches have been proposed that model each failure time of interest individually (Biometrics 2000; 56:940–943; Statist. Med. 2002; 21:3715–3726). In this paper, we present a full likelihood approach based on the proportional hazards frailty model. For estimation, an expectation-maximization (EM) algorithm is developed, and simulation studies suggest that the presented approach performs well in practical situations. The approach is applied to a set of bivariate current status data arising from a tumorigenicity experiment. Copyright © 2009 John Wiley & Sons, Ltd.

11.
An index measuring the utility of testing a DNA marker before deciding between two alternative treatments is proposed that can be estimated from pharmaco-epidemiological case-control or cohort studies. In the case-control design, external estimates of the prevalence of the disease and of the frequency of the genetic risk variant are required for estimating the utility index. Formulas for point and interval estimates are derived. Empirical coverage probabilities of the confidence intervals were estimated under different scenarios of disease prevalence, prevalence of drug use, and population frequency of the genetic variant. To illustrate the method, we re-analyse pharmaco-epidemiological case-control data on oral contraceptive intake and venous thrombosis in carriers and non-carriers of the factor V Leiden mutation. We also re-analyse cross-sectional data from the Framingham study on a gene-diet interaction between an APOA2 polymorphism and high saturated fat intake on obesity. We conclude that the utility index may help evaluate and appraise the potential clinical and public health relevance of gene-environment interaction effects detected in genomic and candidate gene association studies, and may be a valuable decision support tool for designing prospective studies of clinical utility.

12.
Biomarkers are often measured over time in epidemiological studies and clinical trials to better understand disease mechanisms. In large cohort studies, case-cohort sampling provides a cost-effective way to collect expensive biomarker data for revealing the relationship between biomarker trajectories and time to event. However, biomarker measurements are often limited by the sensitivity and precision of a given assay, resulting in data that are censored at detection limits and prone to measurement error. Additionally, the occurrence of the event of interest may preclude biomarkers from being evaluated further. Inappropriate handling of these features can lead to biased conclusions. Under a classical case-cohort design, we propose a modified likelihood-based approach that accommodates these special features of longitudinal biomarker measurements in accelerated failure time models. The maximum likelihood estimators based on the full likelihood function are obtained by the Gaussian quadrature method. We evaluate the performance of our case-cohort estimator and compare its relative efficiency to the full-cohort estimator through simulation studies. The proposed method is further illustrated using data from a biomarker study of sepsis among patients with community-acquired pneumonia. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Recently, multivariate random-effects meta-analysis models have received a great deal of attention despite their greater complexity compared with univariate meta-analyses. One of their advantages is the ability to account for within-study and between-study correlations. However, standard inference procedures, such as maximum likelihood or restricted maximum likelihood inference, require the within-study correlations, which are usually unavailable. In addition, the standard procedures suffer from the problem of a singular estimated covariance matrix. In this paper, we propose a pseudolikelihood method to overcome these problems. The pseudolikelihood method does not require within-study correlations and is not prone to the singular covariance matrix problem. It can also properly estimate the covariance between pooled estimates for different outcomes, which enables valid inference on functions of pooled estimates, and it can be applied to meta-analyses in which some studies have outcomes missing completely at random. Simulation studies show that the pseudolikelihood method provides unbiased estimates for functions of pooled estimates, well-estimated standard errors, and confidence intervals with good coverage probability. Furthermore, the pseudolikelihood method maintains high relative efficiency compared with standard inference with known within-study correlations. We illustrate the proposed method through three meta-analyses: a comparison of prostate cancer treatments, the association between paraoxonase 1 activities and coronary heart disease, and the association between homocysteine level and coronary heart disease. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

14.
Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of an adverse event. Points-based risk-scoring systems are popular among physicians because they permit a rapid assessment of patient risk without computers or other electronic devices, which facilitates evidence-based clinical decision making. There is growing interest in cause-specific mortality and in non-fatal outcomes; when considering these outcomes, one must account for competing risks whose occurrence precludes the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events and illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
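A common recipe for turning model coefficients into integer points (in the style of the Framingham points systems) divides each covariate's contribution to the linear predictor, measured from a reference category, by a chosen base unit B. The coefficient, categories, and base unit below are hypothetical, and this sketch does not address the competing-risks aspect, which affects how the resulting scores map to absolute risk.

```python
import numpy as np

def points_for_covariate(beta, category_midpoints, ref_midpoint, base_unit):
    """Sullivan-style points: the distance of each category from the
    reference on the linear-predictor scale, rounded to multiples of
    the base unit B."""
    lp = beta * (np.asarray(category_midpoints, float) - ref_midpoint)
    return np.round(lp / base_unit).astype(int)

# Hypothetical: age coefficient 0.06 per year from a cause-specific model;
# base unit B = the risk increment associated with 5 years of age.
B = 0.06 * 5
print(points_for_covariate(0.06, [45, 55, 65, 75], 45, B))   # [0 2 4 6]
```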

15.
Molecularly targeted agent (MTA) combination therapy is in the early stages of development. When the dose of one agent is fixed in a combination of MTAs, toxicity and efficacy do not necessarily increase with an increasing dose of the other agent. Thus, in dose-finding trials for combinations of MTAs, interest may lie in identifying the optimal biological dose combinations (OBDCs), defined as the lowest dose combinations (in a certain sense) that are safe and have the highest efficacy level meeting a prespecified target. The few existing designs for these trials use parametric dose–efficacy and dose–toxicity models. Motivated by a phase I/II clinical trial of a combination of two MTAs in patients with pancreatic, endometrial, or colorectal cancer, we propose Bayesian dose-finding designs that identify the OBDCs without parametric model assumptions. The proposed approach is based only on partial stochastic ordering assumptions for the effects of the combined MTAs and uses isotonic regression to estimate partially stochastically ordered marginal posterior distributions of the efficacy and toxicity probabilities. We demonstrate that our proposed method appropriately accounts for the partial ordering constraints, including potential plateaus on the dose–response surfaces, and is computationally efficient. We develop a dose-combination-finding algorithm to identify the OBDCs. We use simulations to compare the proposed designs with an alternative design based on a Bayesian isotonic regression transformation and a design based on parametric change-point dose–toxicity and dose–efficacy models, and demonstrate desirable operating characteristics of the proposed designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
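The isotonic-regression step can be illustrated in one dimension: along a row of the dose grid (agent B escalating at a fixed dose of agent A), raw posterior mean toxicity estimates are replaced by their weighted monotone projection. This one-dimensional sketch with invented numbers ignores the bivariate partial ordering the paper actually exploits.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_tox = np.array([0.08, 0.15, 0.12, 0.30])   # raw posterior means (hypothetical)
n_treated = np.array([9, 12, 6, 10])           # patients per dose level, as weights

iso = IsotonicRegression(increasing=True)
smoothed = iso.fit_transform(np.arange(raw_tox.size), raw_tox,
                             sample_weight=n_treated)
print(smoothed)   # non-decreasing in dose: adjacent violators are pooled
```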

16.
One of the main perceived advantages of a case-cohort design over a nested case-control design in an epidemiologic study is the ability to evaluate, with the same subcohort, outcomes other than the primary outcome of interest. In this paper, we show that valid inferences about secondary outcomes can also be achieved in nested case-control studies by using the inclusion probability weighting method in combination with an approximate jackknife standard error that can be computed with existing software. Simulation studies demonstrate that, when the sample size is sufficient, this approach yields valid type I error and coverage rates for the analysis of secondary outcomes in nested case-control designs. Interestingly, the statistical power of the nested case-control design was comparable with that of the case-cohort design when the primary and secondary outcomes were positively correlated. The proposed method is illustrated with data from the Cardiovascular Health Study cohort, examining the association of C-reactive protein levels with the incidence of congestive heart failure. Copyright © 2014 John Wiley & Sons, Ltd.
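The inclusion probability weighting referenced above can be sketched as follows, in the spirit of Samuelsen-type weights for nested case-control data: a non-case's probability of ever being sampled as a control is one minus the product, over the case times at which it was at risk, of the probability of not being drawn at that time. This brute-force version, assuming m controls per case and no matching, is illustrative and makes no claim to match the paper's exact estimator.

```python
import numpy as np

def ncc_inclusion_probs(time, is_case, m):
    """Probability that each subject ever enters the nested case-control
    sample: cases with probability 1; non-cases via the complement of
    never being drawn at any case time at which they were at risk."""
    p = np.ones(time.size)                     # cases are always included
    case_times = np.sort(time[is_case])
    for i in np.flatnonzero(~is_case):
        at_risk_times = case_times[case_times <= time[i]]
        miss = []
        for t in at_risk_times:
            n_risk = np.sum(time >= t)         # risk-set size at the case time
            miss.append(1.0 - min(1.0, m / max(n_risk - 1, 1)))
        p[i] = 1.0 - np.prod(miss) if miss else 0.0  # 0: can never be sampled
    return p
```

Sampled subjects are then analysed with weights 1/p, with the approximate jackknife supplying the standard errors.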

17.
In analyzing competing risks data, a quantity of considerable interest is the cumulative incidence function. Often, the effect of covariates on the cumulative incidence function is modeled via the proportional hazards model for the cause-specific hazard function. Because the proportionality assumption may be too restrictive in practice, we consider an alternative, more flexible semiparametric additive hazards model (Biometrika 1994; 81:501–514) for the cause-specific hazard. This model specifies the effect of covariates on the cause-specific hazard to be additive and allows the effect of some covariates to be fixed while that of others is time-varying. We present an approach for constructing confidence intervals and confidence bands for the cause-specific cumulative incidence function of subjects with given covariate values, and likewise for comparing two cumulative incidence functions given covariate values. The finite-sample properties of the proposed estimators are investigated through simulations. We conclude with an analysis of the well-known malignant melanoma data using our method. Published in 2009 by John Wiley & Sons, Ltd.

18.
The intraclass correlation for binary outcome data sampled from clusters is an important and versatile measure in many biological and biomedical investigations. Properties of the different estimators of the intraclass correlation based on parametric, semi-parametric, and nonparametric approaches have been studied extensively, mainly in terms of bias and efficiency [see, for example, Ridout et al., Biometrics 1999, 55:137–148; Paul et al., Journal of Statistical Computation and Simulation 2003, 73:507–523; and Lee, Statistical Modelling 2004, 4:113–126], but little attention has been paid to extending these results to confidence intervals. In this article, we generalize the results for four point estimators by constructing asymptotic confidence intervals, obtaining closed-form asymptotic and sandwich variance expressions for those four point estimators. Simulation results suggest that the asymptotic confidence intervals based on these four estimators have serious under-coverage. To remedy this, we introduce a Fisher z-transformation approach for the intraclass correlation coefficient, a profile likelihood approach based on the beta-binomial model, and a hybrid profile variance approach based on the quadratic estimating equation for constructing confidence intervals for the intraclass correlation of binary outcome data. As assessed by Monte Carlo simulations, these confidence interval approaches show significant improvement in coverage probabilities. Moreover, the profile likelihood approach performs quite well, providing coverage levels close to nominal over a wide range of parameter combinations. We provide applications to biological data to illustrate the methods. Copyright © 2012 John Wiley & Sons, Ltd.
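The Fisher z-transformation approach is the simplest of the three remedies to sketch: move the point estimate and its delta-method standard error to the z scale, build a symmetric interval there, and back-transform. The estimate and standard error below are placeholders; for clustered binary data, the standard error of the ICC itself would come from one of the variance expressions discussed above.

```python
import numpy as np
from scipy.stats import norm

def icc_fisher_z_ci(rho_hat, se_rho, level=0.95):
    """Fisher z-transformed CI for an intraclass correlation:
    z = atanh(rho), se_z = se_rho / (1 - rho^2) by the delta method,
    symmetric interval on the z scale, back-transformed with tanh."""
    z = np.arctanh(rho_hat)
    se_z = se_rho / (1.0 - rho_hat ** 2)
    zcrit = norm.ppf(0.5 + level / 2.0)
    return np.tanh(z - zcrit * se_z), np.tanh(z + zcrit * se_z)

print(icc_fisher_z_ci(rho_hat=0.25, se_rho=0.08))   # asymmetric on the rho scale
```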

19.
Almqvist C, Garden F, Kemp AS, Li Q, Crisafulli D, Tovey ER, Xuan W, Marks GB, for the CAPS investigators. Effects of early cat or dog ownership on sensitisation and asthma in a high-risk cohort without disease-related modification of exposure. Paediatric and Perinatal Epidemiology 2010; 24:171–178. Variation in the observed association between pet ownership and allergic disease may be attributable to selection bias and confounding. The aims of this study were to suggest a method for assessing disease-related modification of exposure and to examine how cat acquisition or dog ownership in early life affects atopy and asthma at 5 years. Information on sociodemographic factors and cat and dog ownership was collected longitudinally in an initially cat-free Australian birth cohort of children with a family history of asthma. At age 5 years, 516 children were assessed for wheezing and 488 for sensitisation. By age 5 years, 82 children had acquired a cat, and early manifestations of allergic disease did not foreshadow a reduced rate of subsequent cat acquisition. Independent risk factors for acquiring a cat were exposure to tobacco smoke at home (odds ratio (OR) 1.92 [95% confidence interval (CI) 1.13, 3.26]), maternal education ≤12 years (OR 1.95 [1.08, 3.51]), and dog ownership (OR 2.23 [1.23, 4.05]). Cat or dog exposure in the first 5 years was associated with a decreased risk of sensitisation to any allergen (OR 0.50 [0.28, 0.88]) but showed no association with wheeze (OR 0.96 [0.57, 1.61]). This risk was not affected by the age at which the cat was acquired or by whether the pet was kept indoors or outdoors. In conclusion, cat or dog ownership reduced the risk of subsequent atopy in this high-risk birth cohort, and this cannot be explained by disease-related modification of exposure. Public health recommendations on the effect of cat and dog ownership should be based on birth cohort studies in which possible selection bias has been taken into account.

20.
In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, the main exposure variables are measured only on selected subjects, while other covariates are often available for the whole cohort; we treat this as a special case of a covariate missing by design. We propose to employ multiple imputation, a popular incomplete-data method, for estimating the regression parameters in additive hazards models. For the imputation models, an imputation modeling procedure based on rejection sampling is developed. A simpler imputation model that applies naturally to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, misspecification of the imputation model is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.
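Whatever imputation model generates the M completed data sets, the final step is standard: combine the M completed-data estimates by Rubin's rules. A minimal sketch with invented numbers:

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Rubin's rules: the pooled estimate is the mean of the M
    completed-data estimates; the total variance is the within-imputation
    variance plus (1 + 1/M) times the between-imputation variance."""
    est, var = np.asarray(estimates, float), np.asarray(variances, float)
    m = est.size
    qbar = est.mean()
    total_var = var.mean() + (1.0 + 1.0 / m) * est.var(ddof=1)
    return qbar, np.sqrt(total_var)

beta_hat, se = rubin_pool([0.42, 0.39, 0.45, 0.40, 0.44],
                          [0.010, 0.011, 0.009, 0.010, 0.012])
print(beta_hat, se)   # pooled coefficient and its standard error
```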
