Similar Documents
20 similar documents found (search time: 31 ms)
1.
Familial aggregation and the role of genetic and environmental factors can be investigated through family studies analysed using the liability-threshold model. The liability-threshold model ignores the timing of events including the age of disease onset and right censoring, which can lead to estimates that are difficult to interpret and are potentially biased. We incorporate the time aspect into the liability-threshold model for case-control-family data following the same approach that has been applied in the twin setting. Thus, the data are considered as arising from a competing risks setting and inverse probability of censoring weights are used to adjust for right censoring. In the case-control-family setting, recognising the existence of competing events is highly relevant to the sampling of control probands. Because of the presence of multiple family members who may be censored at different ages, the estimation of inverse probability of censoring weights is not as straightforward as in the twin setting but requires consideration. We propose to employ a composite likelihood conditioning on proband status that markedly simplifies adjustment for right censoring. We assess the proposed approach using simulation studies and apply it in the analysis of two Danish register-based case-control-family studies: one on cancer diagnosed in childhood and adolescence, and one on early-onset breast cancer. Copyright © 2017 John Wiley & Sons, Ltd.
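As a point of reference for the censoring adjustment described above, the sketch below shows generic inverse probability of censoring weighting in Python: the censoring distribution is estimated with a Kaplan-Meier estimator and each observed event is weighted by the inverse probability of remaining uncensored just before its event time. The data and function names are hypothetical, and this is not the composite-likelihood estimator proposed in the paper.

```python
import numpy as np

def censoring_survival(times, event):
    """Kaplan-Meier estimate of the censoring survival function G(t):
    censoring (event == 0) plays the role of the 'event' here."""
    order = np.argsort(times)
    t, d = times[order], event[order]
    n = len(t)
    g, grid, surv = 1.0, [], []
    for i, (ti, di) in enumerate(zip(t, d)):
        at_risk = n - i
        if di == 0:                       # an observed censoring time
            g *= 1.0 - 1.0 / at_risk
        grid.append(ti)
        surv.append(g)
    return np.array(grid), np.array(surv)

def ipc_weights(times, event):
    """Inverse probability of censoring weights: 1/G(t-) for events, 0 otherwise."""
    grid, g = censoring_survival(times, event)
    w = np.zeros(len(times))
    for i, (ti, di) in enumerate(zip(times, event)):
        if di == 1:
            past = g[grid < ti]           # G evaluated just before the event time
            w[i] = 1.0 / (past[-1] if len(past) else 1.0)
    return w

# toy example: event times subject to independent right censoring
rng = np.random.default_rng(1)
t_event = rng.exponential(10, 200)
t_cens = rng.exponential(15, 200)
times = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
print(ipc_weights(times, event)[:5])
```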

2.
Rich meta-epidemiological data sets have been collected to explore associations between intervention effect estimates and study-level characteristics. Welton et al proposed models for the analysis of meta-epidemiological data, but these models are restrictive because they force heterogeneity among studies with a particular characteristic to be at least as large as that among studies without the characteristic. In this paper we present alternative models that are invariant to the labels defining the 2 categories of studies. To exemplify the methods, we use a collection of meta-analyses in which the Cochrane Risk of Bias tool has been implemented. We first investigate the influence of small trial sample sizes (less than 100 participants), before investigating the influence of multiple methodological flaws (inadequate or unclear sequence generation, allocation concealment, and blinding). We fit both the Welton et al model and our proposed label-invariant model and compare the results. Estimates of mean bias associated with the trial characteristics and of between-trial variances are not very sensitive to the choice of model. Results from fitting a univariable model show that heterogeneity variance is, on average, 88% greater among trials with less than 100 participants. On the basis of a multivariable model, heterogeneity variance is, on average, 25% greater among trials with inadequate/unclear sequence generation, 51% greater among trials with inadequate/unclear blinding, and 23% lower among trials with inadequate/unclear allocation concealment, although the 95% intervals for these ratios are very wide. Our proposed label-invariant models for meta-epidemiological data analysis facilitate investigations of between-study heterogeneity attributable to certain study characteristics.
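As rough orientation on the quantity these models target, the ratio of between-trial heterogeneity variances, one crude approach is to estimate a DerSimonian-Laird tau-squared separately among trials with and without the characteristic and take the ratio. The sketch below uses invented toy numbers and ignores the joint modelling of bias and heterogeneity in both Welton et al's model and the label-invariant alternative.

```python
import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird moment estimator of between-trial variance tau^2.
    y: trial effect estimates (e.g. log odds ratios), v: their variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / denom)

# toy meta-analysis: effect estimates, variances, and a small-trial flag
y = np.array([-0.2, -0.9, 0.3, 0.2, -0.6, -1.0, 0.1, 0.3])
v = np.array([0.02, 0.10, 0.03, 0.12, 0.02, 0.09, 0.03, 0.15])
small = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=bool)
ratio = dl_tau2(y[small], v[small]) / max(dl_tau2(y[~small], v[~small]), 1e-8)
print("heterogeneity variance ratio (small vs. large trials):", round(ratio, 2))
```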

3.
Sequentially administered, laboratory-based diagnostic tests or self-reported questionnaires are often used to determine the occurrence of a silent event. In this paper, we consider issues relevant in the design of studies aimed at estimating the association of one or more covariates with a non-recurring, time-to-event outcome that is observed using a repeatedly administered, error-prone diagnostic procedure. The problem is motivated by the Women's Health Initiative, in which diabetes incidence among the approximately 160,000 women is obtained from annually collected self-reported data. For settings of imperfect diagnostic tests or self-reports with known sensitivity and specificity, we evaluate the effects of various factors on resulting power and sample size calculations and compare the relative efficiency of different study designs. The methods illustrated in this paper are readily implemented using our freely available R software package icensmis, which is available at the Comprehensive R Archive Network website. An important special case is that of perfect diagnostic procedures, which result in interval-censored, time-to-event outcomes. The proposed methods are applicable for the design of studies in which a time-to-event outcome is interval censored. Copyright © 2016 John Wiley & Sons, Ltd.

4.
An improved method of sample size calculation for the one-sample log-rank test is provided. The one-sample log-rank test may be the method of choice if the survival curve of a single treatment group is to be compared with that of a historic control. Such settings arise, for example, in clinical phase-II trials if the response to a new treatment is measured by a survival endpoint. Present sample size formulas for the one-sample log-rank test are based on the number of events to be observed; that is, in order to achieve approximately the desired power for the allocated significance level and effect size, the trial is stopped as soon as a certain critical number of events is reached. We propose a new stopping criterion to be followed. Both approaches are shown to be asymptotically equivalent. For small sample sizes, though, a simulation study indicates that the new criterion might be preferred when planning a corresponding trial. In our simulations, the trial is usually underpowered and the targeted significance level is not fully exhausted if the traditional stopping criterion based on the number of events is used, whereas a trial based on the new stopping criterion maintains power with the type-I error rate still controlled. Copyright © 2014 John Wiley & Sons, Ltd.
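For context, the classical one-sample log-rank statistic compares the observed number of events O with the number E expected under the cumulative hazard of the historic control, via Z = (O - E)/sqrt(E). The sketch below implements this standard statistic with simulated data; it does not reproduce the sample size formulas or the new stopping criterion studied in the paper.

```python
import numpy as np
from scipy.stats import norm

def one_sample_logrank(follow_up, event, cum_hazard_0):
    """Classical one-sample log-rank test.
    follow_up: observed times (event or censoring), event: 1/0 indicator,
    cum_hazard_0: cumulative hazard function of the historic control."""
    observed = event.sum()
    expected = np.sum(cum_hazard_0(follow_up))
    z = (observed - expected) / np.sqrt(expected)
    return z, norm.cdf(z)   # one-sided p-value for a hazard reduction

# toy example: historic control is exponential with a median of 12 months
lam0 = np.log(2) / 12.0
cum_hazard_0 = lambda t: lam0 * t
rng = np.random.default_rng(7)
t_true = rng.exponential(1 / (0.6 * lam0), 50)   # 40% hazard reduction
cens = rng.uniform(6, 36, 50)                    # administrative censoring
follow_up = np.minimum(t_true, cens)
event = (t_true <= cens).astype(int)
print(one_sample_logrank(follow_up, event, cum_hazard_0))
```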

5.
Pattern-mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data. The placebo-based pattern-mixture model (Little and Yau, Biometrics 1996; 52:1324-1333) treats missing data in a transparent and clinically interpretable manner and has been used as sensitivity analysis for monotone missing data in longitudinal studies. The standard multiple imputation approach (Rubin, Multiple Imputation for Nonresponse in Surveys, 1987) is often used to implement the placebo-based pattern-mixture model. We show that Rubin's variance estimate of the multiple imputation estimator of treatment effect can be overly conservative in this setting. As an alternative to multiple imputation, we derive an analytic expression of the treatment effect for the placebo-based pattern-mixture model and propose a posterior simulation or delta method for the inference about the treatment effect. Simulation studies demonstrate that the proposed methods provide consistent variance estimates and outperform the imputation methods in terms of power for the placebo-based pattern-mixture model. We illustrate the methods using data from a clinical study of major depressive disorders. Copyright © 2013 John Wiley & Sons, Ltd.
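For reference, the standard Rubin's rules combination whose variance estimate the paper argues can be overly conservative in this setting is sketched below. It is the generic multiple-imputation pooling formula, not the authors' analytic or posterior-simulation alternative, and the numbers are invented.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine m multiply-imputed estimates (Rubin, 1987).
    Returns the pooled estimate and total variance T = W + (1 + 1/m) B,
    where W is the within- and B the between-imputation variance."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()
    w = variances.mean()             # within-imputation variance
    b = estimates.var(ddof=1)        # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b
    return qbar, t

# toy example: treatment effects from m = 5 imputed data sets
est = [1.8, 2.1, 1.9, 2.3, 2.0]
var = [0.25, 0.27, 0.24, 0.26, 0.25]
print(rubins_rules(est, var))
```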

6.
This paper presents a new goodness-of-fit test for an ordered stereotype model used for an ordinal response variable. The proposed test is based on the well-known Hosmer-Lemeshow test and its version for the proportional odds regression model. The latter test statistic is calculated from a grouping scheme that assumes the levels of the ordinal response are equally spaced, which might not be true. One of the main advantages of the ordered stereotype model is that it allows us to determine a new, uneven spacing of the ordinal response categories, dictated by the data. The proposed test makes use of this new adjusted spacing to partition the data. A simulation study shows good performance of the proposed test under a variety of scenarios. Finally, the results of the application in two examples are presented. Copyright © 2016 John Wiley & Sons, Ltd.

7.
8.
9.
Point estimation for the selected treatment in a two-stage drop-the-loser trial is not straightforward because a substantial bias can be induced in the standard maximum likelihood estimate (MLE) through the first-stage selection process. Research has generally focused on alternative estimation strategies that apply a bias correction to the MLE; however, such estimators can have a large mean squared error. Carreras and Brannath (Stat. Med. 32:1677-90) have recently proposed using a special form of shrinkage estimation in this context. Given certain assumptions, their estimator is shown to dominate the MLE in terms of mean squared error loss, which provides a very powerful argument for its use in practice. In this paper, we suggest the use of a more general form of shrinkage estimation in drop-the-loser trials that has parallels with model fitting in the area of meta-analysis. Several estimators are identified and are shown to perform favourably compared with Carreras and Brannath's original estimator and the MLE. However, they necessitate either explicit estimation of an additional parameter measuring the heterogeneity between treatment effects or a rather unnatural prior distribution for the treatment effects that can only be specified after the first-stage data have been observed. Shrinkage methods are a powerful tool for accurately quantifying treatment effects in multi-arm clinical trials, and further research is needed to understand how to maximise their utility. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
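A generic illustration of the kind of shrinkage the paper relates to meta-analytic model fitting is the normal-normal empirical-Bayes estimator below: stage-1 arm estimates are pulled toward their mean by an amount governed by a method-of-moments estimate of between-arm heterogeneity. The data are invented and this is not one of the specific estimators evaluated in the paper.

```python
import numpy as np

def eb_shrinkage(estimates, se):
    """Shrink arm-level estimates toward their mean (normal-normal model).
    tau2 is a crude method-of-moments estimate of between-arm heterogeneity."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(se, dtype=float) ** 2
    tau2 = max(0.0, y.var(ddof=1) - v.mean())
    weight = tau2 / (tau2 + v)        # shrinkage factor per arm
    return y.mean() + weight * (y - y.mean())

# stage-1 estimates from a 4-arm drop-the-loser trial (toy numbers)
stage1 = [0.45, 0.30, 0.10, 0.05]
se1 = [0.15, 0.15, 0.15, 0.15]
shrunk = eb_shrinkage(stage1, se1)
selected = int(np.argmax(stage1))     # arm carried forward to stage 2
print("MLE of selected arm:", stage1[selected],
      "shrunken:", round(shrunk[selected], 3))
```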

10.
For analyzing complex trait association with sequencing data, most current studies test aggregated effects of variants in a gene or genomic region. Although gene-based tests have insufficient power even for moderately sized samples, pathway-based analyses combine information across multiple genes in biological pathways and may offer additional insight. However, most existing pathway association methods were originally designed for genome-wide association studies and have not been comprehensively evaluated for sequencing data. Moreover, region-based rare variant association methods, although potentially applicable to pathway-based analysis by extending their region definition to gene sets, have never been rigorously tested. In the context of exome-based studies, we use simulated and real datasets to evaluate pathway-based association tests. Our simulation strategy adopts a genome-wide genetic model that distributes total genetic effects hierarchically into pathways, genes, and individual variants, allowing the evaluation of pathway-based methods with realistic quantifiable assumptions on the underlying genetic architectures. The results show that, although no single pathway-based association method offers superior performance in all simulated scenarios, a modification of the Gene Set Enrichment Analysis approach using statistics from single-marker tests without gene-level collapsing (the weighted Kolmogorov-Smirnov [WKS]-Variant method) is consistently powerful. Interestingly, directly applying rare variant association tests (e.g., the sequence kernel association test) to pathway analysis offers similar power, but its results are sensitive to assumptions about the genetic architecture. We applied pathway association analysis to exome-sequencing data from a chronic obstructive pulmonary disease study and found that the WKS-Variant method confirms previously published associated genes.
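The running-sum statistic underlying the WKS-Variant strategy is the weighted Kolmogorov-Smirnov enrichment score of Gene Set Enrichment Analysis, applied to single-variant statistics without gene-level collapsing. The sketch below computes that score for a hypothetical pathway; significance would normally be assessed by permutation, which is omitted, and this is not the authors' implementation.

```python
import numpy as np

def weighted_ks_enrichment(stats, in_set, p=1.0):
    """GSEA-style weighted Kolmogorov-Smirnov running-sum enrichment score.
    stats: association statistic per variant, in_set: bool mask for the pathway."""
    stats = np.asarray(stats, dtype=float)
    in_set = np.asarray(in_set, dtype=bool)
    order = np.argsort(-np.abs(stats))            # rank variants, strongest first
    r, hit = np.abs(stats[order]) ** p, in_set[order]
    n, n_hit = len(r), hit.sum()
    p_hit = np.cumsum(np.where(hit, r, 0.0)) / r[hit].sum()
    p_miss = np.cumsum(~hit) / (n - n_hit)
    running = p_hit - p_miss
    return running[np.argmax(np.abs(running))]    # signed maximum deviation

# toy example: 1,000 variant statistics, 50 of them in the pathway of interest
rng = np.random.default_rng(3)
z = rng.normal(size=1000)
pathway = np.zeros(1000, dtype=bool)
pathway[:50] = True
z[:50] += 0.5                                     # modest enrichment signal
print(weighted_ks_enrichment(z, pathway))
```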

11.
We generalize a multistage procedure for parallel gatekeeping to what we refer to as k-out-of-n gatekeeping, in which at least k out of n hypotheses (1 ≤ k ≤ n) in a gatekeeper family must be rejected in order to test the hypotheses in the following family. This gatekeeping restriction arises in certain types of clinical trials; for example, in rheumatoid arthritis trials, it is required that efficacy be shown on at least three of the four primary endpoints. We provide a unified theory of multistage procedures for arbitrary k, with k = 1 corresponding to parallel gatekeeping and k = n to serial gatekeeping. The theory provides an insight into the construction of truncated separable multistage procedures using the closure method. Explicit formulae for calculating the adjusted p-values are given. The proposed procedure is simpler to apply for this particular problem using a stepwise algorithm than the mixture procedure and the graphical procedure with memory using entangled graphs. Copyright © 2013 John Wiley & Sons, Ltd.
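A deliberately simplified illustration of the k-out-of-n restriction itself is given below: the gatekeeper family is tested with a plain Bonferroni adjustment and the next family is opened only when at least k hypotheses are rejected. This conveys the gatekeeping rule but is conservative and is not the truncated separable multistage procedure developed in the paper.

```python
from typing import List

def k_out_of_n_gate(p_family1: List[float], p_family2: List[float],
                    k: int, alpha: float = 0.05):
    """Open the second family only if at least k of the n gatekeeper hypotheses
    are rejected; Bonferroni is used within each family for simplicity."""
    n1, n2 = len(p_family1), len(p_family2)
    rejected1 = [p <= alpha / n1 for p in p_family1]
    gate_open = sum(rejected1) >= k
    rejected2 = [gate_open and p <= alpha / n2 for p in p_family2]
    return rejected1, gate_open, rejected2

# rheumatoid-arthritis-style example: 4 primary endpoints, gate requires k = 3
primary = [0.004, 0.010, 0.011, 0.20]
secondary = [0.02, 0.03]
print(k_out_of_n_gate(primary, secondary, k=3))
```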

12.
Dynamic prediction uses longitudinal biomarkers for real-time prediction of an individual patient's prognosis. This is critical for patients with an incurable disease such as cancer. Biomarker trajectories are usually not linear, nor even monotone, and vary greatly across individuals. Therefore, it is difficult to fit them with parametric models. With this consideration, we propose an approach for dynamic prediction that does not need to model the biomarker trajectories. Instead, as a trade-off, we assume that the biomarker effects on the risk of disease recurrence are smooth functions over time. This approach turns out to be computationally easier. Simulation studies show that the proposed approach achieves stable estimation of biomarker effects over time, has good predictive performance, and is robust against model misspecification. It is a good compromise between two major approaches, namely, (i) joint modeling of longitudinal and survival data and (ii) landmark analysis. The proposed method is applied to patients with chronic myeloid leukemia. At any time following their treatment with tyrosine kinase inhibitors, longitudinally measured BCR-ABL gene expression levels are used to predict the risk of disease progression. Copyright © 2016 John Wiley & Sons, Ltd.
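For comparison, the simpler of the two approaches the proposal sits between, landmark analysis, can be sketched as follows: at a chosen landmark time the last observed biomarker value is carried forward and a Cox model is fitted to the patients still at risk, with follow-up truncated at a prediction horizon. The column names and data frames are hypothetical, and this is not the authors' smooth time-varying-effect estimator.

```python
import pandas as pd
from lifelines import CoxPHFitter

def landmark_fit(long_df, surv_df, landmark, horizon):
    """Landmark analysis: the last biomarker value observed before `landmark`
    is carried forward for patients still event-free at `landmark`; follow-up
    is administratively truncated at `landmark + horizon`.
    long_df: id, meas_time, biomarker ; surv_df: id, time, event."""
    at_risk = surv_df[surv_df["time"] > landmark].copy()
    last_val = (long_df[long_df["meas_time"] <= landmark]
                .sort_values("meas_time").groupby("id")["biomarker"].last())
    at_risk["biomarker"] = at_risk["id"].map(last_val)
    at_risk = at_risk.dropna(subset=["biomarker"])
    over = at_risk["time"] > landmark + horizon   # administrative censoring
    at_risk.loc[over, "time"] = landmark + horizon
    at_risk.loc[over, "event"] = 0
    cph = CoxPHFitter()
    cph.fit(at_risk[["time", "event", "biomarker"]],
            duration_col="time", event_col="event")
    return cph

# usage with hypothetical data frames:
# model = landmark_fit(bcr_abl_long, progression_surv, landmark=12, horizon=24)
# model.print_summary()
```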

13.
In medical research, continuous markers are widely employed in diagnostic tests to distinguish diseased and non-diseased subjects. The accuracy of such diagnostic tests is commonly assessed using the receiver operating characteristic (ROC) curve. To summarize an ROC curve and determine its optimal cut-point, the Youden index is popularly used. In the literature, the estimation of the Youden index has been widely studied via various statistical modeling strategies on the conditional density. This paper proposes a new model-free estimation method, which directly estimates the covariate-adjusted cut-point without estimating the conditional density. Consequently, the covariate-adjusted Youden index can be estimated based on the estimated cut-point. The proposed method formulates the estimation problem in a large margin classification framework, which allows flexible modeling of the covariate-adjusted Youden index through kernel machines. The advantage of the proposed method is demonstrated in a variety of simulated experiments as well as in a real application to the Pima Indians diabetes study. Copyright © 2014 John Wiley & Sons, Ltd.
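For orientation, the empirical Youden index for a single marker without covariate adjustment is simply the maximum of sensitivity plus specificity minus one over candidate cut-points, as sketched below with simulated data. The covariate-adjusted, large-margin kernel-machine estimator proposed in the paper is not reproduced here.

```python
import numpy as np

def empirical_youden(marker, disease):
    """Empirical Youden index J = max_c {sensitivity(c) + specificity(c) - 1}
    and the corresponding optimal cut-point for a continuous marker."""
    marker = np.asarray(marker, dtype=float)
    disease = np.asarray(disease, dtype=int)
    cuts = np.unique(marker)
    sens = np.array([(marker[disease == 1] > c).mean() for c in cuts])
    spec = np.array([(marker[disease == 0] <= c).mean() for c in cuts])
    j = sens + spec - 1.0
    best = int(np.argmax(j))
    return j[best], cuts[best]

# toy example: diseased subjects have a shifted marker distribution
rng = np.random.default_rng(11)
marker = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.2, 1, 200)])
disease = np.concatenate([np.zeros(300, int), np.ones(200, int)])
print(empirical_youden(marker, disease))
```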

14.
We examine goodness-of-fit tests for the proportional odds logistic regression model, the most commonly used regression model for an ordinal response variable. We derive a test statistic based on the Hosmer-Lemeshow test for binary logistic regression. Using a simulation study, we investigate the distribution and power properties of this test and compare these with those of three other goodness-of-fit tests. The new test has lower power than the existing tests; however, it was able to detect a greater number of the different types of lack of fit considered in this study. Moreover, the test allows for the results to be summarized in a contingency table of observed and estimated frequencies, which is a useful supplementary tool to assess model fit. We illustrate the ability of the tests to detect lack of fit using a study of aftercare decisions for psychiatrically hospitalized adolescents. The test proposed in this paper is similar to a recently developed goodness-of-fit test for multinomial logistic regression. A unified approach for testing goodness of fit is now available for binary, multinomial, and ordinal logistic regression models. Copyright © 2012 John Wiley & Sons, Ltd.
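The binary-outcome Hosmer-Lemeshow statistic from which the proposed ordinal test is derived groups observations by quantiles of the fitted probability and compares observed with expected event counts via a chi-squared statistic. The sketch below is that generic binary test under the usual g - 2 degrees-of-freedom convention, not the ordinal extension itself.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, g=10):
    """Hosmer-Lemeshow goodness-of-fit test for binary outcomes.
    Groups by quantiles of the fitted probability and compares observed and
    expected event counts; df = g - 2 is the usual convention."""
    y, p_hat = np.asarray(y, float), np.asarray(p_hat, float)
    edges = np.quantile(p_hat, np.linspace(0, 1, g + 1))
    groups = np.clip(np.searchsorted(edges, p_hat, side="right") - 1, 0, g - 1)
    stat = 0.0
    for k in range(g):
        idx = groups == k
        n_k = idx.sum()
        if n_k == 0:
            continue
        obs, exp = y[idx].sum(), p_hat[idx].sum()
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n_k))
    return stat, chi2.sf(stat, df=g - 2)

# usage with fitted probabilities from a logistic model estimated elsewhere:
# stat, pval = hosmer_lemeshow(y_observed, fitted_probabilities)
```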

15.
Parent-of-origin effects have been suggested as one plausible source of the heritability left unexplained by genome-wide association studies. Here, we consider a case-control mother-child pair design for studying parent-of-origin effects of offspring genes on neonatal/early-life disorders or pregnancy-related conditions. In contrast to the standard case-control design, the case-control mother-child pair design contains valuable parental information and therefore permits powerful assessment of parent-of-origin effects. Suppose the region under study is in Hardy-Weinberg equilibrium, inheritance is Mendelian at the diallelic locus under study, there is random mating in the source population, and the SNP under study is not related to risk for the phenotype under study because of linkage disequilibrium (LD) with other SNPs. Using a maximum likelihood method that simultaneously assesses likely parental sources and estimates effect sizes of the two offspring genotypes, we investigate the extent of power increase for testing parent-of-origin effects through the incorporation of genotype data for adjacent markers that are in LD with the test locus. Our method does not need to assume the outcome is rare because it exploits supplementary information on phenotype prevalence. Analysis with simulated SNP data indicates that incorporating genotype data for adjacent markers greatly helps recover the parent-of-origin information. This recovery can sometimes substantially improve statistical power for detecting parent-of-origin effects. We demonstrate our method by examining parent-of-origin effects of the gene PPARGC1A on low birth weight using data from 636 mother-child pairs in the Jerusalem Perinatal Study.
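The following small utility illustrates why parental origin is only partially recoverable from a mother-child pair at a single diallelic SNP: when both mother and child are heterozygous, the origin of the transmitted allele is ambiguous, and that is precisely the information the paper recovers by borrowing linkage disequilibrium with adjacent markers. It is illustrative Mendelian bookkeeping only, not the paper's likelihood.

```python
def parental_origin(mother, child):
    """Infer the parental origin of the child's minor allele(s) at a diallelic
    SNP. Genotypes are coded as minor-allele counts (0, 1, 2)."""
    if (mother == 0 and child == 2) or (mother == 2 and child == 0):
        return "Mendelian inconsistency"
    if child == 0:
        return "none (no minor allele)"
    if child == 2:
        return "both parents"
    # heterozygous child: exactly one minor allele to attribute
    if mother == 0:
        return "paternal"          # mother can only transmit the major allele
    if mother == 2:
        return "maternal"          # mother can only transmit the minor allele
    return "ambiguous"             # heterozygous mother and heterozygous child

# the ambiguous cell is exactly where LD with adjacent markers adds information
for m in (0, 1, 2):
    for c in (0, 1, 2):
        print(f"mother={m}, child={c}: {parental_origin(m, c)}")
```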

16.
We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS SNP (where SNP is single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulations. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared to marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure.

17.
18.
A fundamental challenge in analyzing next-generation sequencing (NGS) data is to determine an individual's genotype accurately, as the accuracy of the inferred genotype is essential to downstream analyses. Correctly estimating the base-calling error rate is critical to accurate genotype calls. Phred scores that accompany each call can be used to decide which calls are reliable. Some genotype callers, such as GATK and SAMtools, directly calculate the base-calling error rates from phred scores or recalibrated base quality scores. Others, such as SeqEM, estimate error rates from the read data without using any quality scores. It is also a common quality control procedure to filter out reads with low phred scores. However, choosing an appropriate phred score threshold is problematic, as too high a threshold may lose data while too low a threshold may introduce errors. We propose a new likelihood-based genotype-calling approach that exploits all reads and estimates the per-base error rates by incorporating phred scores through a logistic regression model. The approach, which we call PhredEM, uses the expectation-maximization (EM) algorithm to obtain consistent estimates of genotype frequencies and logistic regression parameters. It also includes a simple, computationally efficient screening algorithm to identify loci that are estimated to be monomorphic, so that only loci estimated to be nonmonomorphic require application of the EM algorithm. Like GATK, PhredEM can be used together with a linkage-disequilibrium-based method such as Beagle, which can further improve genotype calling as a refinement step. We evaluate the performance of PhredEM using both simulated data and real sequencing data from the UK10K project and the 1000 Genomes project. The results demonstrate that PhredEM performs better than either GATK or SeqEM, and that PhredEM is an improved, robust, and widely applicable genotype-calling approach for NGS studies. The relevant software is freely available.
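The arithmetic such approaches build on can be sketched as follows: a phred score Q corresponds to a base-calling error probability of 10^(-Q/10), and per-read error probabilities combine into genotype likelihoods at a diallelic site. The code below uses a common biallelic simplification in which any miscalled base is read as the other allele; it is not the PhredEM logistic-regression recalibration or its EM algorithm.

```python
import numpy as np

def phred_to_error(q):
    """Phred score Q -> base-calling error probability 10^(-Q/10)."""
    return 10.0 ** (-np.asarray(q, dtype=float) / 10.0)

def genotype_likelihoods(is_alt, q):
    """Log-likelihood of the read data for genotypes carrying 0, 1, or 2
    alternative alleles at a diallelic site, treating any miscalled base as
    the other allele (a common biallelic simplification)."""
    e = phred_to_error(q)
    is_alt = np.asarray(is_alt, dtype=bool)
    loglik = []
    for g, p_alt in ((0, e), (1, np.full_like(e, 0.5)), (2, 1.0 - e)):
        per_read = np.where(is_alt, p_alt, 1.0 - p_alt)
        loglik.append(np.log(per_read).sum())
    return np.array(loglik)

# toy pileup: 8 reads, 3 supporting the alternative allele, phred 20-35
reads_alt = [0, 0, 1, 0, 1, 0, 1, 0]
phred = [30, 32, 22, 35, 28, 31, 20, 33]
ll = genotype_likelihoods(reads_alt, phred)
print("best genotype (alt-allele count):", int(np.argmax(ll)), ll.round(2))
```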

19.
20.
While intent-to-treat (ITT) analysis is widely accepted for superiority trials, there remains debate about its role in non-inferiority trials. It has often been said that ITT analysis tends to be anti-conservative in demonstrating non-inferiority, suggesting that per-protocol (PP) analysis may be preferable for non-inferiority trials, despite the inherent bias of such analyses. We propose using randomization-based g-estimation analyses that more effectively preserve the integrity of randomization than do the more widely used PP analyses. Simulation studies were conducted to investigate the impacts of different types of treatment changes on the conservatism or anti-conservatism of analyses using the ITT, PP, and g-estimation methods for a time-to-event outcome. The ITT results were anti-conservative for all simulations. Anti-conservativeness increased with the percentage of treatment changes and was more pronounced for outcome-dependent treatment changes. PP analysis, in which treatment-switching cases were censored at the time of treatment change, maintained the type I error near the nominal level for independent treatment changes, whereas for outcome-dependent cases, PP analysis was either conservative or anti-conservative depending on the mechanism underlying the treatment changes and the percentage of such changes. G-estimation analysis maintained the type I error near the nominal level even for outcome-dependent treatment changes, although information on unmeasured covariates is not used in the analysis. Thus, randomization-based g-estimation analyses should be used to supplement the more conventional ITT and PP analyses, especially for non-inferiority trials. Copyright © 2010 John Wiley & Sons, Ltd.
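One concrete randomization-based g-estimation scheme in this spirit is the rank-preserving structural failure time model: observed follow-up is split into time on and off treatment, counterfactual untreated times U(psi) = T_off + exp(psi) T_on are formed, and psi is chosen so that a rank test of U(psi) between randomized arms is as close to null as possible. The sketch below ignores censoring and recensoring, which a real analysis must handle, and is an illustration of the idea rather than the procedure used in the paper.

```python
import numpy as np
from scipy.stats import ranksums

def g_estimate_psi(t_on, t_off, arm, grid=np.linspace(-1.0, 1.0, 201)):
    """Randomization-based g-estimation for an accelerated failure time effect.
    t_on / t_off: time spent on / off the experimental treatment,
    arm: randomized assignment (1 = experimental, 0 = control).
    Returns the psi whose counterfactual untreated times
    U(psi) = t_off + exp(psi) * t_on are best balanced across arms.
    Censoring and recensoring are ignored in this sketch."""
    best_psi, best_z = None, np.inf
    for psi in grid:
        u = t_off + np.exp(psi) * t_on
        z = abs(ranksums(u[arm == 1], u[arm == 0]).statistic)
        if z < best_z:
            best_psi, best_z = psi, z
    return best_psi

# toy example: treatment multiplies survival time by exp(-psi_true) while on it
rng = np.random.default_rng(5)
n = 400
arm = rng.integers(0, 2, n)
base = rng.exponential(10, n)
psi_true = -0.3                      # negative psi: treatment lengthens survival
t_on = np.where(arm == 1, base * np.exp(-psi_true), 0.0)
t_off = np.where(arm == 1, 0.0, base)
print(g_estimate_psi(t_on, t_off, arm))
```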
