Similar Articles

20 similar articles found.
1.
This paper focuses on the classical problem of comparing treatment effects. We show that a simple and intuitive approach to comparing two treatments can be based on the proportion of similar responses. This approach is equivalent to the standard comparison of treatment means in the normal case with equal known variances, but is quite different in other cases. Our approach applies in two different settings: testing a null hypothesis of no treatment difference against an alternative hypothesis of a difference, and testing a null hypothesis of at least a specified difference against an alternative hypothesis of equivalence. We develop our approach both for parallel-group (independent samples) and cross-over (paired samples) studies. The two situations give rise to the known concepts of population and individual equivalence. We present a graphical procedure to supplement the method.
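
The abstract does not spell out how the proportion of similar responses is computed, so the sketch below shows one plausible reading for parallel groups: the fraction of cross-arm pairs of responses that fall within a similarity margin `delta`. Both the margin and the simulated data are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_proportion(x, y, delta):
    """Proportion of cross-arm pairs whose responses differ by at most delta
    (one possible reading of 'proportion of similar responses')."""
    diffs = np.abs(x[:, None] - y[None, :])   # all pairwise differences
    return (diffs <= delta).mean()

# Two parallel groups with a modest mean shift.
x = rng.normal(0.0, 1.0, size=50)
y = rng.normal(0.5, 1.0, size=50)
print(similarity_proportion(x, y, delta=1.0))
```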

2.
Meta-analysis of randomized controlled trials based on aggregated data is vulnerable to ecological bias if trial results are pooled over covariates that influence the outcome variable, even when the covariate does not modify the treatment effect or is not associated with the treatment. This paper shows how, when trial results are aggregated over different levels of covariates, the within-study covariate distribution and the effects of both covariates and treatments can be estimated simultaneously, and ecological bias reduced. Bayesian Markov chain Monte Carlo methods are used. The method is applied to a mixed treatment comparison evidence synthesis of six alternative approaches to post-stroke inpatient care. Results are compared with a model using only the stratified covariate data available, where each stratum is treated as a separate trial, and a model using fully aggregated data, where no covariate data are used. Copyright © 2010 John Wiley & Sons, Ltd.

3.
In this paper, we propose a testing procedure for detecting and estimating the subgroup with an enhanced treatment effect in survival data analysis. We consider a new proportional hazards model that includes a nonparametric component for the covariate effect in the control group and a subgroup-treatment interaction effect defined by a change plane. We develop a score-type test for detecting the existence of the subgroup; under mild assumptions on censoring, the test is doubly robust against misspecification of either the baseline effect model or the propensity score, but not both. When the null hypothesis of no subgroup is rejected, the change-plane parameters that define the subgroup can be estimated on the basis of the supremum of the normalized score statistic. The asymptotic distributions of the proposed test statistic under the null and local alternative hypotheses are established. On the basis of these asymptotic distributions, we further propose a sample size calculation formula for detecting a given subgroup effect and derive a numerical algorithm for implementing the calculation in clinical trial designs. The performance of the proposed approach is evaluated by simulation studies. An application to AIDS clinical trial data is also given for illustration.

4.
In clinical trials comparing two treatments, ordinal scales of three, four or five points are often used to assess severity, both prior to and after treatment. Analysis of covariance is an attractive technique; however, the data clearly violate the normality assumption and, in the presence of small samples, large-sample theory may not apply. The robustness and power of various versions of parametric analysis of covariance applied to small samples of ordinal scaled data are investigated through computer simulation. Subjects are randomized to one of two competing treatments, and the pre-treatment, or baseline, assessment is used as the covariate. We compare two parametric analysis of covariance tests that vary according to their treatment of the homogeneity of regression slopes, and the two-independent-samples t-test on difference scores. Under the null hypothesis of no difference in adjusted treatment means, we estimated actual significance levels by comparing observed test statistics to appropriate critical values from the F- and t-distributions for nominal significance levels of 0.10, 0.05, 0.02 and 0.01. We estimated power by similar comparisons under various alternative hypotheses. The model which assumes homogeneous slopes and the t-test on difference scores were robust in the presence of three-, four- and five-point ordinal scales. The hierarchical approach, which first tests for homogeneity of regression slopes and then fits separate slopes if there is significant non-homogeneity, produced significance levels that exceeded the nominal levels, especially when the sample sizes were small. The model which assumes homogeneous regression slopes produced the highest power among competing tests for all of the configurations investigated. The t-test on difference scores also produced good power in the presence of small samples.
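
A minimal sketch of the kind of simulation the abstract describes, assuming a latent-variable mechanism for generating 5-point ordinal pre- and post-treatment scores; the cut-offs, noise scale and sample sizes are hypothetical, and statsmodels supplies the ANCOVA fit (treatment plus baseline covariate).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_trial(n_per_arm=15, n_levels=5):
    """Simulate one null-hypothesis trial with ordinal pre/post scores."""
    n = 2 * n_per_arm
    treat = np.repeat([0, 1], n_per_arm)
    latent = rng.normal(size=n)                      # shared prognostic latent score
    cuts = np.linspace(-1, 1, n_levels - 1)
    pre = np.digitize(latent, cuts)                  # baseline on an ordinal scale
    post = np.digitize(latent + rng.normal(scale=0.8, size=n), cuts)
    X = sm.add_constant(np.column_stack([treat, pre]))
    fit = sm.OLS(post, X).fit()                      # ANCOVA with homogeneous slopes
    return fit.pvalues[1]                            # p-value of the treatment term

pvals = np.array([one_trial() for _ in range(2000)])
for alpha in (0.10, 0.05, 0.02, 0.01):
    print(alpha, (pvals < alpha).mean())             # estimated actual significance level
```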

5.
The problem of testing for a centre effect in multi-centre studies following a proportional hazards regression analysis is considered. Two approaches to the problem can be used. One fits a proportional hazards model with a fixed covariate included for each centre (except one). The need for a centre-specific adjustment is evaluated using either a score, Wald or likelihood ratio test of the hypothesis that all the centre-specific effects are equal to zero. An alternative approach is to introduce a random effect, or frailty, for each centre into the model. Recently, Commenges and Andersen proposed a score test for this random effects model. Through a Monte Carlo study we compare the performance of these two approaches when either the fixed or random effects model holds true. The study shows that for moderate samples the fixed effects tests have actual significance levels much higher than the nominal ones, whereas the random effect test performs as expected under the null hypothesis. Under the alternative hypothesis the random effect test has good power to detect relatively small fixed or random centre effects. Also, if the centre effect is ignored, the estimator of the main treatment effect may be quite biased and is inconsistent. The tests are illustrated on a retrospective multi-centre study of recovery from bone marrow transplantation.
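
As a rough illustration of the fixed-effects approach only, the sketch below fits Cox models with and without centre indicator variables (using lifelines) and forms the likelihood ratio test that all centre-specific effects are zero. The simulated data, parameter values, and the crude censoring mechanism are all hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical multi-centre data: exponential event times with a treatment
# effect and a small random centre effect; roughly 20% censoring.
n, n_centres = 400, 8
centre = rng.integers(0, n_centres, n)
treat = rng.integers(0, 2, n)
frailty = rng.normal(0, 0.3, n_centres)
rate = np.exp(-0.5 * treat + frailty[centre])
df = pd.DataFrame({"time": rng.exponential(1.0 / rate),
                   "event": (rng.random(n) > 0.2).astype(int),
                   "treat": treat})

# Fixed-effects approach: one indicator per centre (one centre as reference).
dummies = pd.get_dummies(centre, prefix="centre", drop_first=True).astype(float)
full = CoxPHFitter().fit(pd.concat([df, dummies], axis=1), "time", "event")
reduced = CoxPHFitter().fit(df, "time", "event")

# Likelihood ratio test that all centre-specific effects equal zero.
lr = 2 * (full.log_likelihood_ - reduced.log_likelihood_)
print("LR stat:", lr, "p:", stats.chi2.sf(lr, df=n_centres - 1))
```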

6.
Trials in which treatments induce clustering of observations in one of two treatment arms, such as when comparing group therapy with pharmacological treatment or with a waiting‐list group, are examined with respect to the efficiency loss caused by varying cluster sizes. When observations are (approximately) normally distributed, treatment effects can be estimated and tested through linear mixed model analysis. For maximum likelihood estimation, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. In an extensive Monte Carlo simulation for small sample sizes, the asymptotic relative efficiency turns out to be accurate for the treatment effect, but less accurate for the random intercept variance. For the treatment effect, the efficiency loss due to varying cluster sizes rarely exceeds 10 per cent, which can be regained by recruiting 11 per cent more clusters for one arm and 11 per cent more persons for the other. For the intercept variance the loss can be 16 per cent, which requires recruiting 19 per cent more clusters for one arm, with no additional recruitment of subjects for the other arm. Copyright © 2009 John Wiley & Sons, Ltd.
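
The efficiency comparison can be illustrated with the standard random-intercept result that a cluster mean has variance var_u + var_e / n_j, so the inverse-variance weighted arm mean has variance 1 / sum_j (var_u + var_e / n_j)^(-1). A sketch for one clustered arm, with hypothetical cluster sizes and variance components:

```python
import numpy as np

def arm_mean_variance(cluster_sizes, var_u=0.1, var_e=1.0):
    """Variance of the inverse-variance weighted arm mean under a
    random-intercept model: Var(cluster mean) = var_u + var_e / n_j."""
    n = np.asarray(cluster_sizes, dtype=float)
    weights = 1.0 / (var_u + var_e / n)
    return 1.0 / weights.sum()

unequal = [2, 4, 6, 8, 10, 30]               # hypothetical varying cluster sizes
equal = [np.mean(unequal)] * len(unequal)    # same total N, equal cluster sizes
re = arm_mean_variance(equal) / arm_mean_variance(unequal)
print(f"relative efficiency of unequal vs equal sizes: {re:.3f}")
# Variance scales as 1/(number of clusters), so the loss can be regained by
# recruiting proportionally more clusters:
print(f"extra clusters needed to compensate: {100 * (1 / re - 1):.1f}%")
```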

7.
The primary objective of a randomized clinical trial usually is to investigate whether one treatment is better than its alternatives on average. However, treatment effects may vary across different patient subpopulations. In contrast to demonstrating that one treatment is superior to another in the average sense, one is often more concerned with the question of which treatment strategy is most appropriate to achieve a desired outcome for a particular patient, or a group of patients with similar characteristics. Various interaction tests have been proposed to detect treatment effect heterogeneity; however, they typically examine covariates one at a time, do not offer an integrated approach that incorporates all available information, and can greatly increase the chance of a false positive finding when the number of covariates is large. We propose a new permutation test for the null hypothesis of no interaction effects for any covariate. The proposed test allows us to consider the interaction effects of many covariates simultaneously without having to group subjects into subsets based on pre-specified criteria, and it applies generally to randomized clinical trials of multiple treatments. The test provides an attractive alternative to the standard likelihood ratio test, especially when the number of covariates is large. We illustrate the proposed methods using a dataset from the Treatment of Adolescents with Depression Study. Copyright © 2015 John Wiley & Sons, Ltd.
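
A minimal sketch of the general idea of such a permutation test, using the maximum absolute treatment-by-covariate interaction t-statistic across covariates as the global statistic; the paper's actual statistic and permutation scheme may differ. The data are simulated under the null (a prognostic covariate but no interaction).

```python
import numpy as np

rng = np.random.default_rng(2)

def max_interaction_stat(y, treat, X):
    """Largest absolute treatment-by-covariate interaction t-statistic,
    scanning covariates one at a time (a sketch, not the paper's statistic)."""
    stats_ = []
    for j in range(X.shape[1]):
        D = np.column_stack([np.ones_like(y), treat, X[:, j], treat * X[:, j]])
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        resid = y - D @ beta
        sigma2 = resid @ resid / (len(y) - D.shape[1])
        cov = sigma2 * np.linalg.inv(D.T @ D)
        stats_.append(abs(beta[3]) / np.sqrt(cov[3, 3]))
    return max(stats_)

n, p = 200, 10
X = rng.normal(size=(n, p))
treat = rng.permutation(np.repeat([0, 1], n // 2))
y = X[:, 0] + rng.normal(size=n)          # prognostic effect, no interaction

observed = max_interaction_stat(y, treat, X)
perm = [max_interaction_stat(y, rng.permutation(treat), X) for _ in range(500)]
print("permutation p-value:", np.mean([s >= observed for s in perm]))
```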

8.
We consider modelling the interaction between a categorical covariate T and a continuous covariate Z in a regression model. Here T represents the two treatment arms in a parallel-group clinical trial and Z is a prognostic factor which may influence response to treatment (known as a predictive factor). Generalization to more than two treatments is straightforward. The usual approach to analysis is to categorize Z into groups according to cutpoint(s) and to analyse the interaction in a model with main effects and multiplicative terms. The cutpoint approach raises several well-known and difficult issues for the analyst. We propose an alternative approach based on fractional polynomial (FP) modelling of Z in all patients and at each level of T. Other prognostic variables can also be incorporated by first constructing a multivariable adjustment model, which may contain binary covariates and FP transformations of continuous covariates other than Z. The main step involves FP modelling of Z and testing equality of regression coefficients between treatment groups in an interaction model adjusted for other covariates. Extensive experience suggests that a two-term fractional polynomial (FP2) function may describe the effect of a prognostic factor on a survival outcome quite well. In a controlled trial, this FP2 function describes the prognostic effect averaged over the treatment groups. We refit this function in each treatment group to see if there are substantial differences between groups. Allowing different parameter values for the chosen FP2 function is flexible enough to detect such differences. Within the same algorithm we can also deal with the conceptually different cases of a predefined hypothesis of interaction and searching for interactions. We demonstrate the ability of the approach to detect and display treatment/covariate interactions in two examples from controlled trials in cancer.
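
For orientation, a sketch of the FP2 machinery: powers are drawn from the conventional set {-2, -1, -0.5, 0, 0.5, 1, 2, 3}, power 0 denotes log z, and a repeated power (p, p) contributes z^p and z^p log z. The grid search below selects the best-fitting power pair by residual sum of squares for a continuous outcome; the paper works with survival outcomes, so this is only illustrative, and the data are simulated.

```python
import numpy as np
from itertools import combinations_with_replacement

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional FP power set

def fp_term(z, p):
    """Single fractional-polynomial term; power 0 is defined as log(z)."""
    return np.log(z) if p == 0 else z ** p

def fp2_design(z, p1, p2):
    """FP2 basis; a repeated power (p, p) uses z^p and z^p * log(z)."""
    t1 = fp_term(z, p1)
    t2 = fp_term(z, p2) * np.log(z) if p1 == p2 else fp_term(z, p2)
    return np.column_stack([np.ones_like(z), t1, t2])

def best_fp2(z, y):
    """Grid-search the FP2 power pair minimising the residual sum of squares."""
    best = None
    for p1, p2 in combinations_with_replacement(POWERS, 2):
        D = fp2_design(z, p1, p2)
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        rss = np.sum((y - D @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, (p1, p2))
    return best

rng = np.random.default_rng(3)
z = rng.uniform(0.5, 5.0, size=300)        # positive prognostic factor (FP needs z > 0)
y = np.log(z) + 0.3 / z + rng.normal(scale=0.2, size=300)
print("selected FP2 powers:", best_fp2(z, y)[1])
```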

9.
Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data‐driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed‐effects and mixed‐effects linear and nonlinear models for cross‐sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

10.
This paper discusses design considerations and the role of randomization-based inference in randomized community intervention trials. We stress that longitudinal follow-up of cohorts within communities often yields useful information on the effects of intervention on individuals, whereas cross-sectional surveys can usefully assess the impact of intervention on group indices of health. We also discuss briefly special design considerations, such as sampling cohorts from targeted subpopulations (for example, heavy smokers), matching the communities, calculating sample size, and other practical issues. We present randomization tests for matched and unmatched cohort designs. As is well known, these tests necessarily have proper size under the strong null hypothesis that treatment has no effect on any community response. It is less well known, however, that the size of randomization tests can exceed nominal levels under the ‘weak’ null hypothesis that intervention does not affect the average community response. Because this weak null hypothesis is of interest in community intervention trials, we study the size of randomization tests by simulation under conditions in which the weak null hypothesis holds but the strong null hypothesis does not. In unmatched studies, size may exceed nominal levels under the weak null hypothesis if there are more intervention than control communities and if the variance among community responses is larger among control communities than among intervention communities; size may also exceed nominal levels if there are more control than intervention communities and if the variance among community responses is larger among intervention communities. Otherwise, size is likely near nominal levels. To avoid such problems, we recommend use of the same numbers of control and intervention communities in unmatched designs. Pair-matched designs usually have size near nominal levels, even under the weak null hypothesis. We have identified some extreme cases, unlikely to arise in practice, in which even the size of pair-matched studies can exceed nominal levels. These simulations, however, tend to confirm the robustness of randomization tests for matched and unmatched community intervention trials, particularly if the latter designs have equal numbers of intervention and control communities. We also describe adaptations of randomization tests to allow for covariate adjustment, missing data, and application to cross-sectional surveys. We show that covariate adjustment can increase power, but such power gains diminish as the random component of variation among communities increases, which corresponds to increasing intraclass correlation of responses within communities. We briefly relate our results to model-based methods of inference for community intervention trials that include hierarchical models such as an analysis of variance model with random community effects and fixed intervention effects. Although we have tailored this paper to the design of community intervention trials, many of the ideas apply to other experiments in which one allocates groups or clusters of subjects at random to intervention or control treatments.
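
For the pair-matched design, the randomization test has a particularly simple form: under the strong null hypothesis, the sign of each within-pair (intervention minus control) difference is equally likely to be positive or negative. A sketch with hypothetical pair differences:

```python
import numpy as np

rng = np.random.default_rng(4)

def matched_pairs_randomization_test(d, n_rep=10000):
    """Randomization test for pair-matched community trials: flip the sign of
    each pair difference at random (valid under the strong null) and compare
    the resulting mean differences with the observed one."""
    d = np.asarray(d, dtype=float)
    observed = d.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_rep, d.size))
    null = (signs * d).mean(axis=1)
    return np.mean(np.abs(null) >= abs(observed))   # two-sided p-value

# Hypothetical pair differences in a community-level health index.
diffs = [0.8, 1.1, -0.2, 0.5, 0.9, 0.3, 1.4, -0.1]
print("randomization p-value:", matched_pairs_randomization_test(diffs))
```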

11.
Meta-analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time-to-event models are unavailable. Assuming identical drop-out time distributions across arms, random censorship, and low proportions of patients with an event, a binomial approach results in a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared with time-to-event methods. To deal with differences in follow-up, at the cost of assuming specific distributions for event and drop-out times, we propose a hierarchical multivariate meta-analysis model using the aggregate data likelihood based on the number of cases, fatal cases, and discontinuations in each group, as well as the planned trial duration and group sizes. Such a model also enables exchangeability assumptions about parameters of survival distributions, for which they are more appropriate than for the expected proportion of patients with an event across trials of substantially different length. Borrowing information from other trials within a meta-analysis or from historical data is particularly useful for rare-events data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using more flexible event and drop-out time distributions than the exponential one. We discuss the derivation of robust historical priors and illustrate the discussed methods using an example. We also compare the proposed approach against other aggregate data meta-analysis methods in a simulation study. Copyright © 2016 John Wiley & Sons, Ltd.

12.
13.
We investigate through computer simulations the robustness and power of two-group analysis of covariance tests applied to small samples distorted from normality by floor effects, when the regression slopes are homogeneous. We consider four parametric analysis of covariance tests that vary according to their treatment of the homogeneity of regression slopes, and two t-tests, on unadjusted means and on difference scores. Under the null hypothesis of no difference in means, we estimated actual significance levels by comparing observed test statistics to appropriate values from the F and t distributions for nominal significance levels of 0.10, 0.05, 0.02 and 0.01. We estimated power by similar comparisons under various alternative hypotheses. The hierarchical approach (which adjusts for non-homogeneous slopes if found significant), the test that assumes homogeneous regression slopes, and the test that estimates separate regression slopes in each treatment were robust. In general, each test produced power at least equal to that expected from normal theory. The textbook approach, which does not test for mean differences when there is significant non-homogeneity, was conservative but also had good power. The t-tests were robust but had poorer power properties than the above procedures.

14.
MCP-MOD is a testing and model selection approach for clinical dose finding studies. During testing, contrasts of dose group means are derived from candidate dose response models. A multiple-comparison procedure is applied that controls the alpha level for the family of null hypotheses associated with the contrasts. Provided at least one contrast is significant, a corresponding set of “good” candidate models is identified. The model generating the most significant contrast is typically selected. There have been numerous publications on the method, and it has been endorsed by the European Medicines Agency. The MCP-MOD procedure can alternatively be represented as a method based on simple linear regression, where “simple” refers to the inclusion of an intercept and a single predictor variable, which is a transformation of dose. It is shown that the contrasts are equal to least squares linear regression slope estimates after a rescaling of the predictor variables. The test for each contrast is the usual t statistic for a null slope parameter, except that a variance estimate with fewer degrees of freedom is used in the standard error. Selecting the model corresponding to the most significant contrast P value is equivalent to selecting the predictor variable yielding the smallest residual sum of squares. This criterion orders the models like a common goodness-of-fit test, but it does not assure a good fit. Common inferential methods applied to the selected model are subject to distortions that are often present following data-based model selection.
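
The claimed equivalence is easy to check numerically: with equal group sizes the optimal contrast is proportional to the centred model-predicted means, so the contrast t-statistic matches the slope t-statistic from regressing the responses on the transformed dose, up to the variance estimate (pooled ANOVA variance versus two-parameter regression residual variance, as the abstract notes). A sketch with a hypothetical Emax candidate shape and simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
n_per = 20
dose_i = np.repeat(doses, n_per)
y = 0.2 * dose_i / (1.0 + dose_i) + rng.normal(scale=0.5, size=dose_i.size)

# Candidate model: Emax shape with a guessed ED50 (hypothetical value 0.8).
f_group = doses / (0.8 + doses)
f = dose_i / (0.8 + dose_i)

# (1) MCP-MOD style contrast of dose-group means; with equal n the optimal
#     contrast is proportional to the centred model-predicted means.
group_means = np.array([y[dose_i == d].mean() for d in doses])
c = f_group - f_group.mean()
pooled_var = np.mean([y[dose_i == d].var(ddof=1) for d in doses])
t_contrast = (c @ group_means) / np.sqrt(pooled_var * np.sum(c**2) / n_per)

# (2) Simple linear regression of y on the transformed dose.
X = np.column_stack([np.ones_like(f), f])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (y.size - 2)
t_slope = beta[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

print(t_contrast, t_slope)   # nearly equal; they differ only in the variance estimate
```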

15.
In lifetime data, as in cancer studies, there may be long-term survivors, which leads to heavy censoring at the end of the follow-up period. Since a standard survival model is not appropriate for such data, a cure model is needed. In the literature, covariate hypothesis tests for cure models are limited to parametric and semiparametric methods. We fill this important gap by proposing a nonparametric covariate hypothesis test for the probability of cure in mixture cure models. A bootstrap method is proposed to approximate the null distribution of the test statistic. The procedure can be applied to any type of covariate and could be extended to the multivariate setting. Its efficiency is evaluated in a Monte Carlo simulation study. Finally, the method is applied to a colorectal cancer dataset.

16.
There are different kinds of randomised controlled trials: trials in which the superiority of a treatment can be demonstrated (superiority trials) and trials in which the equal efficacy of two treatments can be shown (equivalence trials). The main reason for performing an equivalence trial is that for many diseases and disorders an effective treatment already exists. Equivalence trials are appropriate when a new treatment offers some advantages over an existing treatment (lower cost, greater safety, improved convenience, or freedom of choice for the patient) in addition to the expected equal therapeutic effectiveness. The design of equivalence trials is to a large extent comparable to that of superiority trials, but there are some methodological differences. In equivalence trials, the null hypothesis and alternative hypothesis are interchanged compared to superiority trials, and an equivalence margin is defined for the treatments being compared. Clinical professionals decide on the equivalence margin beforehand on the basis of clinical relevance. To demonstrate equivalence, the confidence interval of the difference between the two treatments must lie completely within the equivalence margin. Equivalence trials usually need more patients: the smaller the equivalence margin, the more patients are needed. In equivalence trials, both per-protocol analyses and intention-to-treat analyses should be used to prove the equal therapeutic effectiveness of the treatments under study.
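
A minimal sketch of the confidence-interval criterion, assuming normally distributed outcomes and a symmetric margin; using a 1 - 2*alpha interval makes the check equivalent to two one-sided tests at level alpha. The margin and data here are hypothetical.

```python
import numpy as np
from scipy import stats

def equivalence_by_ci(x, y, margin, alpha=0.05):
    """Declare equivalence if the (1 - 2*alpha) CI for the mean difference
    lies entirely inside (-margin, +margin)."""
    diff = x.mean() - y.mean()
    df = x.size + y.size - 2
    sp2 = ((x.size - 1) * x.var(ddof=1) + (y.size - 1) * y.var(ddof=1)) / df
    se = np.sqrt(sp2 * (1 / x.size + 1 / y.size))
    half = stats.t.ppf(1 - alpha, df) * se
    lo, hi = diff - half, diff + half
    return (lo, hi), (-margin < lo) and (hi < margin)

rng = np.random.default_rng(6)
new = rng.normal(10.0, 2.0, 120)      # new treatment
std = rng.normal(10.1, 2.0, 120)      # existing treatment
print(equivalence_by_ci(new, std, margin=1.0))
```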

17.
We consider the problem of mapping the risk from a disease using a series of regional counts of observed and expected cases, and information on potential risk factors. To analyse this problem from a Bayesian viewpoint, we propose a methodology which extends a spatial partition model by including categorical covariate information. Such an extension allows detection of clusters in the residual variation, reflecting further, possibly unobserved, covariates. The methodology is implemented by means of reversible jump Markov chain Monte Carlo sampling. An application is presented in order to illustrate and compare our proposed extensions with a purely spatial partition model. Here we analyse a well-known data set on lip cancer incidence in Scotland.

18.
A common problem that arises in the meta-analysis of several studies, each with independent treatment and control groups, is to test for the homogeneity of effect sizes without the assumptions of equal variances of the treatment and the control groups and of equal variances among the separate studies. A commonly used test statistic, frequently denoted as Q, is the weighted sum of squares of the differences of the individual effect sizes from the mean effect size, with weights inversely proportional to the variances of the effect sizes. The primary contributions of this article are the presentation of improved and very accurate approximations to the distributions of the Q statistic when the effect size is a linear contrast such as the difference between the treatment and control means. Our improved approximation to the distribution of Q under the null hypothesis is based on a multiple of an F-distribution; its use yields a substantial reduction in the type I error rate of the homogeneity test. Our improved approximation to the distribution of Q under an alternative hypothesis is based on a shift of a chi-square distribution; its use allows for much greater accuracy in the computation of the power of the homogeneity test. These two improved approximate distributions are developed using the Welch methodology of approximating the moments of Q by the use of multivariate Taylor expansions. The quality of these approximations is studied by simulation. A secondary contribution of this article is a study of how best to combine the variances of the treatment and control groups (needed for the calculation of weights in the Q statistic). Our conclusion, based on simulations, is that use of pooled variances can result in substantially erroneous conclusions.
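
For reference, the Q statistic itself is straightforward to compute. The sketch below uses the conventional chi-square reference distribution with k - 1 degrees of freedom, which is precisely the approximation the article improves upon; the effect sizes and variances are hypothetical.

```python
import numpy as np
from scipy import stats

def homogeneity_q(effects, variances):
    """Cochran-style Q: weighted squared deviations of study effect sizes
    from their inverse-variance weighted mean."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    d_bar = np.sum(w * d) / w.sum()
    q = np.sum(w * (d - d_bar) ** 2)
    # Conventional reference: chi-square with k - 1 df; the article replaces
    # this with a more accurate F-based approximation.
    return q, stats.chi2.sf(q, df=d.size - 1)

effects = [0.30, 0.10, 0.45, 0.05, 0.25]      # hypothetical mean differences
variances = [0.02, 0.03, 0.05, 0.04, 0.02]    # their estimated variances
print(homogeneity_q(effects, variances))
```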

19.
We used theoretical and simulation‐based approaches to study Type I error rates for one‐stage and two‐stage analytic methods for cluster‐randomized designs. The one‐stage approach uses the observed data as outcomes and accounts for within‐cluster correlation using a general linear mixed model. The two‐stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one‐stage and two‐stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one‐stage and six two‐stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two‐stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one‐stage model with Kenward–Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one‐stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster‐randomized trials, the Kenward–Roger method is the preferred one‐stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
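
A sketch of the simplest two-stage approach, reducing each cluster to its mean and applying an ordinary t-test; the paper's preferred inverse-variance weighted variant and the one-stage Kenward-Roger analysis are not reproduced here. The data are simulated under the null with unbalanced clusters and low intracluster correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def two_stage_test(y_by_cluster, arm):
    """Two-stage analysis: collapse each cluster to its mean, then compare the
    arm-level sets of cluster means with an unweighted two-sample t-test."""
    means = np.array([np.mean(c) for c in y_by_cluster])
    arm = np.asarray(arm)
    return stats.ttest_ind(means[arm == 1], means[arm == 0])

# Eight clusters per arm with varying sizes and a small cluster random effect.
clusters, arms = [], []
for a in (0, 1):
    for _ in range(8):
        size = rng.integers(5, 40)
        u = rng.normal(0, 0.2)                 # cluster-level random intercept
        clusters.append(u + rng.normal(0, 1.0, size))
        arms.append(a)
print(two_stage_test(clusters, arms))
```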

20.
In a cluster randomized cross-over trial, all participating clusters receive both intervention and control treatments consecutively, in separate time periods. Patients recruited by each cluster within the same time period receive the same intervention, and randomization determines order of treatment within a cluster. Such a design has been used on a number of occasions. For analysis of the trial data, the approach of analysing cluster-level summary measures is appealing on the grounds of simplicity, while hierarchical modelling allows for the correlation of patients within periods within clusters and offers flexibility in the model assumptions. We consider several cluster-level approaches and hierarchical models and make comparison in terms of empirical precision, coverage, and practical considerations. The motivation for a cluster randomized trial to employ cross-over of trial arms is particularly strong when the number of clusters available is small, so we examine performance of the methods under small, medium and large (6, 18, 30) numbers of clusters. One hierarchical model and two cluster-level methods were found to perform consistently well across the designs considered. These three methods are efficient, provide appropriate standard errors and coverage, and continue to perform well when incorporating adjustment for an individual-level covariate. We conclude that choice between hierarchical models and cluster-level methods should be influenced by the extent of complexity in the planned analysis.
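
A sketch of one cluster-level summary approach, assuming each cluster contributes a mean outcome per period: take each cluster's intervention-minus-control difference and apply a one-sample t-test, ignoring period effects for brevity. All values are hypothetical.

```python
import numpy as np
from scipy import stats

def cluster_crossover_test(period_means, order):
    """Cluster-level analysis of a cluster randomized cross-over trial.

    period_means: shape (n_clusters, 2) array of outcome means by period
    order: 1 if the cluster received the intervention in period 1, else 0
    """
    m = np.asarray(period_means, dtype=float)
    order = np.asarray(order)
    # Intervention-minus-control difference within each cluster.
    diff = np.where(order == 1, m[:, 0] - m[:, 1], m[:, 1] - m[:, 0])
    return stats.ttest_1samp(diff, 0.0)

# Six clusters (a deliberately small trial), hypothetical period means.
means = [[5.1, 4.4], [4.9, 4.6], [5.4, 4.8], [4.2, 4.5], [5.0, 4.3], [4.7, 4.4]]
order = [1, 0, 1, 0, 1, 0]
print(cluster_crossover_test(means, order))
```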
