Similar Literature
Found 20 similar articles.
1.
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the 'standard' approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal 'one-step' estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies.
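To see why sharing data across strategies helps, consider a toy two-stage design in which responders to initial treatment A are consistent with both strategies (A, then B) and (A, then C). The NumPy sketch below (all rates and means hypothetical; simple plug-in means stand in for the paper's Bayesian predictive estimators) compares the variance of a strategy-mean estimate that pools all responders against a 'standard' analysis that splits responders into disjoint strategy groups:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_resp = 400, 0.6          # hypothetical stage-1 size and response rate to A
mu_r, mu_b = 2.0, 1.0         # hypothetical mean outcomes (responders; B arm)

def strat_mean(p_hat, y_resp, y_b):
    # plug-in strategy mean: P(resp)*E[Y | resp] + P(nonresp)*E[Y | B]
    return p_hat * y_resp.mean() + (1 - p_hat) * y_b.mean()

seq, std = [], []
for _ in range(4000):
    resp = rng.random(n) < p_resp
    to_b = ~resp & (rng.random(n) < 0.5)   # nonresponders randomized to B or C
    y = np.where(resp, rng.normal(mu_r, 1, n), rng.normal(mu_b, 1, n))
    # sequential analysis: every responder informs strategy (A, then B)
    seq.append(strat_mean(resp.mean(), y[resp], y[to_b]))
    # 'standard' analysis: responders split into disjoint strategy groups
    half = resp & (rng.random(n) < 0.5)
    std.append(strat_mean(resp[half | to_b].mean(), y[half], y[to_b]))

print(np.var(seq), np.var(std))   # shared responder data: smaller variance
```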

2.
This paper discusses causal inference with survival data from cluster randomized trials. It is argued that cluster randomization carries the potential for post-randomization exposures which involve differentially selective compliance between treatment arms, even for an all-or-nothing exposure at the individual level. Structural models can be employed to account for post-randomization exposures, but should not ignore clustering. We show how marginal modelling and random effects models can be used to adapt structural estimators to account for clustering. Our findings are illustrated with data from a vitamin A trial for the prevention of infant mortality in the rural plains of Nepal.

3.
In this article we consider the problem of making inferences about the parameter β0 indexing the conditional mean of an outcome given a vector of regressors when a subset of the variables (outcome or covariates) are missing for some study subjects and the probability of non-response depends upon both observed and unobserved data values, that is, non-response is non-ignorable. We propose a new class of inverse probability of censoring weighted estimators that are consistent and asymptotically normal (CAN) for estimating β0 when the non-response probabilities can be parametrically modelled and a CAN estimator exists. The proposed estimators do not require full specification of the likelihood and their computation does not require numerical integration. We show that the asymptotic variance of the optimal estimator in our class attains the semi-parametric variance bound for the model. In some models, no CAN estimator of β0 exists. We provide a general algorithm for determining when CAN estimators of β0 exist. Our results follow after specializing a general representation described in the article for the efficient score and the influence function of regular, asymptotically linear estimators in an arbitrary semi-parametric model with non-ignorable non-response in which the probability of observing complete data is bounded away from zero and the non-response probabilities can be parametrically modelled. © 1997 by John Wiley & Sons, Ltd.
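A minimal NumPy sketch of the inverse-probability-weighting idea, under the simplifying assumption that the non-response probabilities are known exactly (the article treats the realistic case where they are parametrically modelled); all data-generating values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)             # true beta0 = (1, 2)

# Non-ignorable non-response: P(observe y) depends on y itself
# (response-model coefficients are hypothetical and treated as known here)
pi = 1 / (1 + np.exp(-(1.0 - 0.4 * y)))
r = rng.random(n) < pi

X = np.column_stack([np.ones(n), x])
Xo, yo, wo = X[r], y[r], 1.0 / pi[r]               # complete cases, IPW weights

beta_cc = np.linalg.lstsq(Xo, yo, rcond=None)[0]   # unweighted complete-case fit
beta_ipw = np.linalg.solve(Xo.T @ (wo[:, None] * Xo), Xo.T @ (wo * yo))
print(beta_cc)    # biased, because selection depends on the outcome
print(beta_ipw)   # ~ (1, 2): weighting restores the estimating equation
```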

4.
Objective: Randomized trials generally use “frequentist” statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. Study Design and Setting: We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P < 0.05 in favor of the intervention were deemed “positive”; otherwise, they were “negative.” We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Results: Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio < 1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Conclusion: Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
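A sketch of one such exact conjugate analysis for a hypothetical two-arm trial with a dichotomous outcome, using independent Beta(1, 1) priors on the arm-specific event risks (the event counts are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trial: events / patients in control and intervention arms
e_c, n_c, e_t, n_t = 30, 150, 20, 150

# Conjugate Beta posteriors for each arm's event risk; Monte Carlo for RR
pc = rng.beta(1 + e_c, 1 + n_c - e_c, 100_000)
pt = rng.beta(1 + e_t, 1 + n_t - e_t, 100_000)
rr = pt / pc

print((rr < 1.00).mean())   # posterior probability of any benefit (RR < 1)
print((rr < 0.75).mean())   # probability of a larger, clinically relevant benefit
```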

5.
In network meta-analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between-studies heterogeneities. Although their standard inference methods depend on large sample approximations (eg, restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (and the type I error probabilities of the corresponding tests cannot be retained). This invalidity may seriously influence the overall conclusions of network meta-analyses. In this article, we develop several improved inference methods for network meta-analyses to resolve these problems. We first introduce two efficient likelihood-based inference methods, the likelihood ratio test-based and efficient score test-based methods, in a general framework of network meta-analysis. Then, to improve the small-sample inferences, we develop improved higher-order asymptotic methods using Bartlett-type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case-by-case analyses and to permit flexible application to various statistical models for network meta-analysis. These methods can also be straightforwardly applied to multivariate meta-regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood-based inference method. Applications to two network meta-analysis datasets are provided.

6.
We introduce a new approach to inference for subgroups in clinical trials. We use Bayesian model selection, and a threshold on posterior model probabilities, to identify subgroup effects for reporting. For each covariate of interest, we define a separate class of models, and use the posterior probability associated with each model and the threshold to determine the existence of a subgroup effect. As usual in Bayesian clinical trial design, we compute frequentist operating characteristics, and we achieve the desired error probabilities by choosing appropriate thresholds for the posterior probabilities.

7.
Ideally, the basis for estimation of variance components is large random samples selected from a well-defined reference population. Some large biomedical studies, however, consist of a random sample (S) of individuals ascertained at an initial visit, with a selected subsample from S seen on one or more follow-up visits. In this setting, the usual formulae for estimation of variance components are problematic since they do not take into account the censored nature of the data. For this purpose, we consider both maximum likelihood and moments estimation methods that take the censoring into account, and we compare their performance, in terms of bias and mean squared error, with that of the usual variance components estimators that ignore censoring. We find the maximum likelihood estimators somewhat more efficient than method of moments estimators, provided that the assumption of multivariate normality is met; furthermore, these estimators are substantially more efficient than those that ignore the censoring. It is important to record data on all individuals, even those who do not meet screening criteria; one can estimate between- and within-person variance more accurately with use of all available data. The resulting estimates are crucial in calculation of power for the design of future studies.

8.
Randomization models are useful in supporting the validity of linear model analyses applied to data from a clinical trial that employed randomization via permuted blocks. Here, a randomization model for clinical trials data with arbitrary randomization methodology is developed, with treatment effect estimators and standard error estimators valid from a randomization perspective. A central limit theorem for the treatment effect estimator is also derived. As with permuted-blocks randomization, a typical linear model analysis provides results similar to the randomization model results when, roughly, unit effects display no pattern over time. A key requirement for the randomization inference is that the unconditional probability that any patient receives active treatment is constant across patients; when this probability condition is violated, the treatment effect estimator is biased from a randomization perspective. Most randomization methods for balanced, 1 to 1, treatment allocation satisfy this condition. However, many dynamic randomization methods for planned unbalanced treatment allocation, like 2 to 1, do not satisfy this constant probability condition, and these methods should be avoided. Copyright © 2012 John Wiley & Sons, Ltd.
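The constant-probability condition is easy to check by simulation for any given method. The sketch below (a hypothetical trial of 20 patients) verifies it for 1:1 permuted blocks, where every patient position has unconditional probability 1/2 of receiving active treatment:

```python
import numpy as np

rng = np.random.default_rng(3)

def permuted_blocks(n, block=4):
    """1:1 permuted-block assignment: each block holds 2 active, 2 control."""
    base = np.array([1, 1, 0, 0])
    return np.concatenate([rng.permutation(base) for _ in range(n // block)])

# Unconditional P(active) at each patient position, over many replicates
reps = np.array([permuted_blocks(20) for _ in range(20_000)])
print(reps.mean(axis=0))   # ~0.5 everywhere: the constant-probability
                           # condition holds for this method
```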

9.
“…The customary test for an observed difference…is based on an enumeration of the probabilities, on the initial hypothesis that two treatments do not differ in their effects,…of all the various results which would occur if the trial were repeated indefinitely with different random samples of the same size as those actually used.” –Peter Armitage (“Sequential tests in prophylactic and therapeutic trials” in Quarterly Journal of Medicine, 1954;23(91):255-274). Randomization has been the hallmark of the clinical trial since Sir Bradford Hill adopted it in the 1946 streptomycin trial. An exploration of the early literature yields three rationales, ie, (i) the incorporation of randomization provides unpredictability in treatment assignments, thereby mitigating selection bias; (ii) randomization tends to ensure similarity in the treatment groups on known and unknown confounders (at least asymptotically); and (iii) the act of randomization itself provides a basis for inference when random sampling is not conducted from a population model. Of these three, rationale (iii) is often forgotten, ignored, or left untaught. Today, randomization is a rote exercise, scarcely considered in protocols or medical journal articles. Yet, the literature of the last century is rich with statistical articles on randomization methods and their consequences, authored by some of the pioneers of the biostatistics and statistics world. In this paper, we review some of this literature and describe very simple methods to rectify some of the oversight. We describe how randomization-based inference can be used for virtually any outcome of interest in a clinical trial. Special mention is made of nonstandard clinical trials situations.

10.
As randomization methods use more information in more complex ways to assign patients to treatments, analysis of the resulting data becomes challenging. The treatment assignment vector and outcome vector become correlated whenever randomization probabilities depend on data correlated with outcomes. One straightforward analysis method is a re-randomization test that fixes outcome data and creates a reference distribution for the test statistic by repeatedly re-randomizing according to the same randomization method used in the trial. This article reviews re-randomization tests, especially in nonstandard settings like covariate-adaptive and response-adaptive randomization. We show that re-randomization tests provide valid inference in a wide range of settings. Nonetheless, there are simple examples demonstrating limitations.
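A minimal sketch of a re-randomization test under complete randomization (outcomes and effect size hypothetical); for a covariate- or response-adaptive trial, `randomize()` would instead replay that trial's own assignment algorithm:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical trial data: outcomes fixed, assignment vector observed
y = rng.normal(size=40)
y[:20] += 0.8                                  # treated patients respond more
z = np.array([1] * 20 + [0] * 20)              # actual assignment vector

def randomize():
    """Re-draw assignments by the trial's own method (here: complete
    randomization of 20 of 40 patients to treatment)."""
    znew = np.zeros(40, dtype=int)
    znew[rng.choice(40, 20, replace=False)] = 1
    return znew

def stat(z, y):
    return y[z == 1].mean() - y[z == 0].mean()

obs = stat(z, y)
ref = np.array([stat(randomize(), y) for _ in range(10_000)])
print(obs, np.mean(np.abs(ref) >= abs(obs)))   # randomization-based p-value
```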

11.
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length.

12.
In sequential multiple assignment randomized trials, longitudinal outcomes may be the most important outcomes of interest because trials of this type are usually conducted for chronic diseases or conditions. We propose to use a weighted generalized estimating equation (GEE) approach to analyzing data from such trials for comparing two adaptive treatment strategies based on generalized linear models. Although the randomization probabilities are known, we consider estimated weights in which the randomization probabilities are replaced by their empirical estimates, and we prove that the resulting weighted GEE estimator is more efficient than the estimator with true weights. The variance of the weighted GEE estimator is estimated by an empirical sandwich estimator. The time variable in the model can be linear, piecewise linear, or take more complicated forms. This flexibility is important because, in the adaptive treatment setting, the treatment changes over time and, hence, a single linear trend over the whole period of study may not be practical. Simulation results show that the weighted GEE estimators of regression coefficients are consistent regardless of the specification of the correlation structure of the longitudinal outcomes. The weighted GEE method is then applied in analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.
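For a single end-of-study outcome, the weighting idea reduces to inverse-probability weighting of the subjects whose observed treatment sequence is consistent with the strategy being evaluated. A NumPy sketch with hypothetical response rates and outcome means, contrasting the known and the empirically estimated second-stage randomization probabilities (the paper proves the estimated-weight version is more efficient):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
resp = rng.random(n) < 0.4        # hypothetical response rate to initial treatment
to_b = rng.random(n) < 0.5        # nonresponders re-randomized to B vs C
y = np.where(resp, rng.normal(2, 1, n),
             np.where(to_b, rng.normal(1, 1, n), rng.normal(0, 1, n)))

# Strategy "initial treatment, then B if no response": responders are
# consistent with the strategy (weight 1); B-nonresponders get weight 1/p
consistent = resp | to_b
p_hat = to_b[~resp].mean()                     # empirical estimate of p = 0.5
for p in (0.5, p_hat):                         # known vs estimated probability
    w = consistent * np.where(resp, 1.0, 1 / p)
    print((w * y).sum() / w.sum())             # both ~ 0.4*2 + 0.6*1 = 1.4
```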

13.
Generalized estimating equations (GEEs) are commonly used to estimate transition models. When the Markov assumption does not hold but first-order transition probabilities are still of interest, the transition inference is sensitive to the choice of working correlation. In this paper, we consider a random process transition model as the true underlying data generating mechanism, which characterizes subject heterogeneity and complex dependence structure of the outcome process in a very flexible way. We formally define two types of transition probabilities at the population level: “naive transition probabilities” that average across all the transitions and “population-average transition probabilities” that average the subject-specific transition probabilities. Through asymptotic bias calculations and finite-sample simulations, we demonstrate that the unstructured working correlation provides unbiased estimators of the population-average transition probabilities while the independence working correlation provides unbiased estimators of the naive transition probabilities. For population-average transition estimation, we demonstrate that the sandwich estimator fails for unstructured GEE and recommend the use of either jackknife or bootstrap variance estimates. The proposed method is motivated by and applied to the NEXT Generation Health Study, where the interest is in estimating the population-average transition probabilities of alcohol use in adolescents.
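The distinction between the two estimands can be reproduced in a few lines. In the sketch below (a hypothetical random-effects chain), pooling all observed transitions recovers the naive probability, while averaging subject-specific estimates recovers the population-average probability; the two differ because subjects with high stay probabilities contribute more transitions to the pool:

```python
import numpy as np

rng = np.random.default_rng(9)
n, T = 500, 60

# Heterogeneous subjects: the stay probability P(1 -> 1) is a random effect
p11 = rng.beta(2, 2, n)                 # subject-specific, mean 0.5 (hypothetical)
p01 = 0.3                               # common 0 -> 1 probability

chains = np.zeros((n, T), dtype=int)
for t in range(1, T):
    stay = np.where(chains[:, t - 1] == 1, p11, p01)
    chains[:, t] = rng.random(n) < stay

prev, curr = chains[:, :-1], chains[:, 1:]
# Naive: pool every observed 1 -> ? transition across all subjects
naive = curr[prev == 1].mean()
# Population-average: estimate P(1 -> 1) per subject, then average
per_subj = [curr[i][prev[i] == 1].mean() for i in range(n)
            if (prev[i] == 1).any()]
print(naive, np.mean(per_subj), p11.mean())   # naive exceeds the
                                              # population average here
```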

14.
In two‐stage randomization designs, patients are randomized to one of the initial treatments, and at the end of the first stage, they are randomized to one of the second stage treatments depending on the outcome of the initial treatment. Statistical inference for survival data from these trials uses methods such as marginal mean models and weighted risk set estimates. In this article, we propose two forms of weighted Kaplan–Meier (WKM) estimators based on inverse‐probability weighting—one with fixed weights and the other with time‐dependent weights. We compare their properties with that of the standard Kaplan–Meier (SKM) estimator, marginal mean model‐based (MM) estimator and weighted risk set (WRS) estimator. Simulation study reveals that both forms of weighted Kaplan–Meier estimators are asymptotically unbiased, and provide coverage rates similar to that of MM and WRS estimators. The SKM estimator, however, is biased when the second randomization rates are not the same for the responders and non‐responders to initial treatment. The methods described are demonstrated by applying to a leukemia data set. Copyright © 2010 John Wiley & Sons, Ltd.
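A minimal sketch of a weighted Kaplan–Meier estimator with fixed per-subject weights (synthetic data; with all weights equal to 1 it reduces to the standard Kaplan–Meier estimator):

```python
import numpy as np

def weighted_km(time, event, w):
    """Kaplan-Meier survival curve with fixed per-subject weights."""
    out, surv = [], 1.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = w[time >= t].sum()               # weighted risk set at t
        deaths = w[(time == t) & (event == 1)].sum()
        surv *= 1 - deaths / at_risk
        out.append((t, surv))
    return out

rng = np.random.default_rng(6)
n = 300
time = rng.exponential(2, n).round(2)
event = (rng.random(n) < 0.8).astype(int)    # ~20% censored (hypothetical)
w = np.where(rng.random(n) < 0.5, 2.0, 1.0)  # e.g., inverse second-stage
                                             # randomization probabilities
print(weighted_km(time, event, w)[:5])
```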

15.
Empirical Bayes methods and false discovery rates for microarrays
In a classic two-sample problem, one might use Wilcoxon's statistic to test for a difference between treatment and control subjects. The analogous microarray experiment yields thousands of Wilcoxon statistics, one for each gene on the array, and confronts the statistician with a difficult simultaneous inference situation. We will discuss two inferential approaches to this problem: an empirical Bayes method that requires very little a priori Bayesian modeling, and the frequentist method of "false discovery rates" proposed by Benjamini and Hochberg in 1995. It turns out that the two methods are closely related and can be used together to produce sensible simultaneous inferences.
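For reference, the Benjamini–Hochberg step-up procedure itself is only a few lines; this sketch applies it to synthetic p-values (a mix of uniform nulls and small "signal" p-values standing in for per-gene Wilcoxon tests):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.10):
    """Reject the k smallest p-values, where k is the largest index
    with p_(k) <= q*k/m (Benjamini & Hochberg, 1995)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(8)
pvals = np.concatenate([rng.uniform(size=900),        # null genes
                        rng.beta(1, 50, size=100)])   # differentially expressed
print(benjamini_hochberg(pvals).sum(), "genes declared significant at FDR 0.10")
```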

16.
Parameters for latent transition analysis (LTA) are easily estimated by maximum likelihood (ML) or by Bayesian methods via Markov chain Monte Carlo (MCMC). However, unusual features in the likelihood can cause difficulties in ML and Bayesian inference and estimation, especially with small samples. In this study, we explore several problems in drawing inference for LTA in the context of a simulation study and a substance use example. We argue that when conventional ML and Bayesian estimates behave erratically, these problems may often be alleviated with a small amount of prior input for LTA with small samples. This paper proposes a dynamic, data-dependent prior for LTA with small samples and compares the performance of the estimation methods with the proposed prior in drawing inference.

17.
There are no gold-standard methods that perform well in every situation when it comes to the analysis of multiple time series of counts. In this paper, we consider a positively correlated bivariate time series of counts and propose a parameter-driven Poisson regression model for its analysis. In our proposed model, we employ a latent autoregressive process, AR(p), to accommodate the temporal correlations in the two series. We compute the familiar maximum likelihood estimators of the model parameters and their standard errors via a Bayesian data cloning approach. We apply the model to the analysis of a bivariate time series arising from asthma-related visits to emergency rooms across the Canadian province of Ontario.

18.
Mendelian randomization (MR) uses genetic information as an instrumental variable (IV) to estimate the causal effect of an exposure of interest on an outcome in the presence of unknown confounding. We are interested in the causal effect of cigarette smoking on lung cancer survival, which is subject to confounding by underlying pulmonary function. Despite well-developed IV analyses for continuous and binary outcomes, the scarcity of methodology for survival outcomes limits the utility of MR for the time-to-event data collected in many observational studies. We propose an IV analysis method in the survival context, estimating causal effects on a transformed survival time and on survival probabilities using semiparametric linear transformation models. We study the conditions under which the hazard ratio and the effect on survival probability can be approximated. For statistical inference, we construct estimating equations to circumvent the difficulty in deriving the joint likelihood of the exposure and the outcome due to the unknown confounding. Asymptotic properties of the proposed estimators are established without parametric assumptions about the confounders. We study the finite-sample performance in extensive simulation studies. The MR analysis of a lung cancer study suggests a harmful prognostic effect of smoking pack-years that would have been missed by the crude association.
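The core IV logic is easiest to see outside the survival setting. Below is a sketch of the simple ratio (Wald) estimator with a simulated genetic instrument and an unmeasured confounder; the continuous outcome is a stand-in, since the paper's transformation-model machinery for censored survival times is considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.binomial(2, 0.3, n)             # genetic instrument (allele count)
u = rng.normal(size=n)                  # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)    # exposure (hypothetical model)
y = 0.3 * x + u + rng.normal(size=n)    # outcome; true causal effect 0.3

beta_iv = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]   # Wald/ratio IV estimator
beta_ols = np.cov(x, y)[0, 1] / np.var(x)           # crude association
print(beta_iv, beta_ols)   # IV ~ 0.3; the crude estimate is confounded upward
```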

19.
To estimate causal effects of vaccine on post-infection outcomes, Hudgens and Halloran (2006) defined a post-infection causal vaccine efficacy estimand VEI based on the principal stratification framework. They also derived closed forms for the maximum likelihood estimators of the causal estimand under some assumptions. Extending their research, we propose a Bayesian approach to estimating the causal vaccine effects on binary post-infection outcomes. The identifiability of the causal vaccine effect VEI is discussed under different assumptions on selection bias. The performance of the proposed Bayesian method is compared with the maximum likelihood method through simulation studies and two case studies: a clinical trial of a rotavirus vaccine candidate and a field study of pertussis vaccination. For both case studies, the Bayesian approach provided inferences similar to those of the frequentist analysis. However, simulation studies with small sample sizes suggest that the Bayesian approach provides smaller bias and shorter confidence interval length. Copyright © 2015 John Wiley & Sons, Ltd.

20.
We present a Bayesian design for a multi-centre, randomized clinical trial of two chemotherapy regimens for advanced or metastatic unresectable soft tissue sarcoma. After randomization, each patient receives up to four stages of chemotherapy, with the patient's disease evaluated after each stage and categorized on a trinary scale of severity. Therapy is continued to the next stage if the patient's disease is stable, and is discontinued if either tumour response or treatment failure is observed. We assume a probability model that accounts for baseline covariates and the multi-stage treatment and disease evaluation structure. The design uses covariate-adjusted adaptive randomization based on a score that combines the patient's probabilities of overall treatment success or failure. The adaptive randomization procedure generalizes the method proposed by Thompson (1933) for two binomial distributions with beta priors. A simulation study of the design in the context of the sarcoma trial is presented.
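The Thompson (1933) rule that the design generalizes can be sketched in a few lines for two binomial arms with Beta priors (success rates hypothetical; the trial's actual procedure adds covariate adjustment and the multi-stage trinary disease evaluations):

```python
import numpy as np

rng = np.random.default_rng(7)
p_true = (0.35, 0.50)                  # hypothetical true success rates
a = np.ones(2)                         # Beta(1, 1) prior: alpha (successes + 1)
b = np.ones(2)                         # Beta(1, 1) prior: beta (failures + 1)

for patient in range(200):
    draws = rng.beta(a, b)             # one posterior draw per arm
    arm = int(np.argmax(draws))        # assigns arm k with probability
                                       # Pr(arm k has the higher success rate)
    success = rng.random() < p_true[arm]
    a[arm] += success
    b[arm] += 1 - success

print(a / (a + b))                     # posterior means: allocation drifts
                                       # toward the better-performing arm
```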
