Similar Documents
20 similar documents found (search time: 31 ms)
1.
Patient-reported outcome and observer evaluative studies in clinical trials and post-hoc analyses often use instruments that measure responses on ordinal-rating or Likert scales. We propose a flexible distributional approach that models the change scores from baseline to the end of the study using independent beta distributions. The two shape parameters of the fitted beta distributions are estimated by moment matching. Covariates and interaction terms are included in multivariate beta-regression analyses under generalized linear mixed models. These methods are illustrated on treatment satisfaction data from an overactive bladder drug study with four treatment arms. Monte Carlo simulations were conducted to compare the type I error and statistical power of a beta likelihood ratio test under the proposed method against its fully nonparametric and parametric alternatives. Copyright © 2010 John Wiley & Sons, Ltd.
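A minimal sketch of the moment-matching step, assuming the change scores have already been rescaled to the open interval (0, 1); the vector y01 and the simulated values are hypothetical, not the authors' data or code.

```r
## Moment-matching (method of moments) estimates of beta shape parameters
## for change scores rescaled to (0, 1); 'y01' is a hypothetical vector.
beta_mom <- function(y01) {
  m <- mean(y01); v <- var(y01)
  stopifnot(v < m * (1 - m))          # needed for valid beta parameters
  c(alpha = m * (m * (1 - m) / v - 1),
    beta  = (1 - m) * (m * (1 - m) / v - 1))
}

set.seed(1)
y01 <- rbeta(200, shape1 = 2, shape2 = 5)  # stand-in rescaled change scores
beta_mom(y01)
```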

2.
Diabetes mellitus is a common condition with several serious associated complications. In this paper a mixture model, based on one previously used to predict the onset of AIDS, is used to predict the onset of one of these complications: diabetic retinopathy, the major cause of adult blindness in the U.K. The model differs from the earlier AIDS model by introducing covariates and allowing a wider choice of mixture distributions. The model is fitted to the data by maximum likelihood, and its fit and distributional assumptions are then discussed for this example.

3.
Methods for sample size calculation in ROC studies often assume independent normal distributions for test scores in the diseased and nondiseased populations. We consider sample size requirements under the default two-group normal model when the data distribution for the diseased population is either skewed or multimodal. For these two common scenarios we investigate the robustness of sample sizes calculated under the mis-specified normal model and compare them with sample sizes calculated under a more flexible nonparametric Dirichlet process mixture model. We also highlight the utility of flexible models for ROC data analysis and their importance to study design. When nonstandard distributional shapes are anticipated, our Bayesian nonparametric approach allows investigators to determine a sample size based on more appropriate distributional assumptions than are generally applied. The method also provides researchers a tool for conducting sensitivity analyses of sample size calculations that are based on the two-group normal model. We extend the proposed approach to comparative studies involving two continuous tests. Our simulation-based procedure is implemented using the WinBUGS and R software packages, and example code is made available.
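As a rough illustration of the simulation-based reasoning (not the paper's WinBUGS/R code), one can check how the power of a one-sided Mann-Whitney test of AUC > 0.5 at a planned per-group sample size shifts when the diseased scores are skewed rather than normal; all distributions and parameter values below are assumptions for illustration.

```r
## Power of a one-sided Mann-Whitney test at sample size n per group,
## under a normal versus a skewed diseased-score distribution.
auc_power <- function(n, rdis, rnon = rnorm, nsim = 2000, alpha = 0.05) {
  mean(replicate(nsim,
    wilcox.test(rdis(n), rnon(n), alternative = "greater")$p.value < alpha))
}

set.seed(2)
auc_power(50, rdis = function(n) rnorm(n, 1))           # planning assumption
auc_power(50, rdis = function(n) rgamma(n, 2, 1) - 1)   # skewed truth, same mean
```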

4.
Meta-analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time-to-event models are unavailable. Assuming identical drop-out time distributions across arms, random censorship, and low proportions of patients with an event, a binomial approach yields a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared with time-to-event methods. To deal with differences in follow-up, at the cost of assuming specific distributions for event and drop-out times, we propose a hierarchical multivariate meta-analysis model using the aggregate-data likelihood based on the number of cases, fatal cases, and discontinuations in each group, together with the planned trial duration and group sizes. Such a model also enables exchangeability assumptions about the parameters of the survival distributions, for which these assumptions are more appropriate than for the expected proportion of patients with an event across trials of substantially different lengths. Borrowing information from other trials within a meta-analysis, or from historical data, is particularly useful for rare-event data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using event and drop-out time distributions more flexible than the exponential. We discuss the derivation of robust historical priors and illustrate the methods with an example, and we compare the proposed approach against other aggregate-data meta-analysis methods in a simulation study. Copyright © 2016 John Wiley & Sons, Ltd.
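Under the exponential assumption, the expected proportion of patients with an event by the planned end of follow-up has a closed form when event and drop-out times are independent exponentials; the helper below is an illustrative sketch with hypothetical rates and duration, not the authors' hierarchical model code.

```r
## Expected proportion with an event by time tau, with independent
## exponential event (rate lambda_event) and drop-out (rate lambda_drop) times:
## P(event by tau) = lambda_e / (lambda_e + lambda_d) * (1 - exp(-(lambda_e + lambda_d) * tau))
p_event <- function(lambda_event, lambda_drop, tau) {
  total <- lambda_event + lambda_drop
  lambda_event / total * (1 - exp(-total * tau))
}
p_event(lambda_event = 0.01, lambda_drop = 0.05, tau = 24)  # e.g. 24 months
```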

5.
The ‘gold standard’ design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This design is recommended whenever it is ethically justifiable, since it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years, but they often tend to be liberal or conservative when their distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the ‘gold standard’ design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations, with emphasis on whether the test meets the target significance level. For comparison, commonly used Wald-type tests, which make no distributional assumptions, are included in the simulation study. The simulation study shows that the presented studentized permutation test for assessing non-inferiority in three-arm trials in the ‘gold standard’ design outperforms its competitors for count data, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
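A rough sketch of the studentized permutation principle for a retention-of-effect statistic follows; the groups E (experimental), R (reference/active control), P (placebo), the margin theta, and the exact form of the statistic are simplifying assumptions for illustration — the authors' implementation is the R package ThreeArmedTrials.

```r
## Studentized retention-of-effect statistic and its permutation null:
## permute the pooled sample, recompute the studentized statistic each time.
stat <- function(xE, xR, xP, theta) {
  est <- mean(xE) - theta * mean(xR) - (1 - theta) * mean(xP)
  se  <- sqrt(var(xE) / length(xE) + theta^2 * var(xR) / length(xR) +
              (1 - theta)^2 * var(xP) / length(xP))
  est / se
}

perm_test <- function(xE, xR, xP, theta = 0.8, B = 5000) {
  t_obs  <- stat(xE, xR, xP, theta)
  pooled <- c(xE, xR, xP)
  n      <- c(length(xE), length(xR), length(xP))
  t_perm <- replicate(B, {
    g <- sample(pooled)
    stat(g[1:n[1]], g[(n[1] + 1):(n[1] + n[2])], g[(n[1] + n[2] + 1):sum(n)], theta)
  })
  mean(t_perm >= t_obs)   # one-sided permutation p-value
}

set.seed(3)
perm_test(rnorm(30, 3), rnorm(30, 2.5), rnorm(30, 1))
```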

6.
When testing for a treatment effect or a difference among groups, the distributional assumptions made about the response variable can have a critical impact on the conclusions drawn. For example, controversy has arisen over transformations of the response (Keene). An alternative approach is to use some member of the family of generalized linear models, but this raises the issue of selecting the appropriate member, a problem of testing non-nested hypotheses. Standard model selection criteria, such as the Akaike information criterion (AIC), can be used to resolve such problems. These procedures for comparing generalized linear models are applied to checking for a difference in T4 cell counts between two disease groups. We conclude that appropriate model selection criteria should be specified in the protocol for any study, including clinical trials, so that optimal inferences can be drawn about treatment differences. © 1998 John Wiley & Sons, Ltd.
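A minimal sketch of this selection strategy with simulated stand-in data (not the T4 cell counts): fit competing members of the generalized linear model family to the same untransformed response and compare their AIC values.

```r
## Fit candidate GLMs to one positive response and compare by AIC;
## the smallest AIC indicates the preferred family/link member.
set.seed(4)
counts <- rgamma(100, shape = 3, rate = 0.005)   # stand-in cell counts
group  <- gl(2, 50, labels = c("A", "B"))
fits <- list(
  gaussian_identity = glm(counts ~ group, family = gaussian()),
  gaussian_log      = glm(counts ~ group, family = gaussian(link = "log")),
  gamma_log         = glm(counts ~ group, family = Gamma(link = "log"))
)
sapply(fits, AIC)
```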

7.
Repeated measurements of surrogate markers are frequently used to track disease progression, but such series are often prematurely terminated by disease progression or death. Analysing these data with standard likelihood-based approaches can yield severely biased estimates if the censoring mechanism is non-ignorable. Motivated by this problem, we previously proposed the bivariate joint multivariate random effects (JMRE) model, which performs well in terms of bias reduction and precision when correctly specified. The bivariate JMRE model is fully parametric and belongs to the class of shared-parameter joint models, in which a survival model for the dropouts and a mixed model for the markers' evolution are linked through a multivariate normal distribution of random effects. As in every parametric model, robustness under violations of its distributional assumptions is of great importance. In this study we generated 500 simulated data sets assuming that the random effects jointly follow a heavy-tailed distribution, one of two skewed distributions, or a mixture of two normal distributions. We also generated data in which the level-1 errors, or the residuals in the survival part of the model, follow a skewed distribution, and performed further sensitivity analyses for reduced sample size, increased level-1 variances, and altered fixed-effects values. We found that the fixed-effects estimates are almost unaffected, but their standard errors (SEs) may be underestimated, especially under heavily skewed distributions. The proposed model seems robust enough, but its performance on smaller data sets or under more extreme departures from its assumptions needs further investigation.

8.
This article concerns testing for differences in the distributions of multigroup proportional data, motivated by the problem of comparing the distributions of quality of life (QoL) outcomes among treatment groups in clinical trials. Proportional data, such as QoL outcomes assessed by answers to questions on a questionnaire, are bounded in a closed interval such as [0,1], with continuous observations in (0,1) and, in addition, excess observations taking the boundary values 0 and/or 1. Common statistical procedures, such as t- and rank-based tests, may not be very powerful because they ignore this specific feature of proportional data. We propose a three-component mixture model for the proportional data and a density ratio model for the distributions of the continuous observations in (0,1). A semiparametric test statistic for the homogeneity of distributions of multigroup proportional data is derived from the empirical likelihood ratio principle and shown to be asymptotically chi-squared under the null hypothesis. A nonparametric bootstrap procedure is proposed to further improve the performance of the semiparametric test. Simulation studies evaluate the empirical type I error and power of the proposed test and compare it with likelihood ratio tests (LRTs) under parametric distributional assumptions, the rank-based Kruskal-Wallis test, and a Wald-type test. The proposed procedure is also applied to the analysis of QoL outcomes from a clinical trial on colorectal cancer that motivated our study.
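As a small illustration of the three-component decomposition (assumed data; the density ratio and empirical likelihood machinery of the paper is omitted), the boundary masses can be estimated by sample proportions while the continuous component is modelled separately.

```r
## Split proportional outcomes into point masses at 0 and 1 plus a
## continuous component on (0, 1).
fit_three_part <- function(y) {
  list(p0    = mean(y == 0),            # mass at the lower boundary
       p1    = mean(y == 1),            # mass at the upper boundary
       y_mid = y[y > 0 & y < 1])        # continuous part, modelled separately
}

set.seed(5)
y <- c(rep(0, 10), rep(1, 25), rbeta(65, 4, 2))  # stand-in QoL scores
str(fit_three_part(y))
```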

9.
Missing not at random (MNAR) data pose key challenges for statistical inference because the substantive model of interest is typically not identifiable without imposing further (e.g., distributional) assumptions. Selection models have been routinely used for handling MNAR data by jointly modeling the outcome and selection variables, typically assuming that these follow a bivariate normal distribution. Recent studies have advocated parametric selection approaches, estimated for example by multiple imputation or maximum likelihood, that are more robust to departures from normality than those assuming that nonresponse and outcome are jointly normally distributed; however, these methods have mostly been restricted to a specific joint distribution (e.g., a bivariate t-distribution). This paper discusses a flexible copula-based selection approach, which accommodates a wide range of non-Gaussian outcome distributions and offers great flexibility in the choice of functional form for both the outcome and selection equations, and proposes a flexible imputation procedure that generates plausible imputed values from the copula selection model. A simulation study characterizes the relative performance of the copula model compared with the most commonly used selection models for estimating average treatment effects with MNAR data. We illustrate the methods in the REFLUX study, which evaluates the effect of laparoscopic surgery on long-term quality of life in patients with reflux disease. We provide software code for implementing the proposed copula framework using the R package GJRM.

10.
Bayesian methods are proposed for analysing matched case-control studies in which a binary exposure variable is sometimes measured with error, but whose correct values have been validated for a random sample of the matched case-control sets. Three models are considered. Model 1 makes few assumptions other than randomness and independence between matched sets, while Models 2 and 3 are logistic models, with Model 3 making additional distributional assumptions about the variation between matched sets. With Models 1 and 2 the data are examined in two stages: the first stage analyses data from the validation sample and is easy to perform, while the second stage analyses the main body of data and requires MCMC methods. All relevant information is transferred between the stages by using the posterior distributions from the first stage as the prior distributions for the second stage. With Model 3, a hierarchical structure is used to model the relationship between the exposure probabilities of the matched sets, which gives the potential to extract more information from the data. All the proposed methods generalize to studies with more than one control per case. The Bayesian methods and a maximum likelihood method are applied to a data set in which the exposure of every patient was measured using both an imperfect measure subject to misclassification and a much better measure whose classifications may be treated as correct; to test the methods, the latter information was suppressed for all but a random sample of matched sets.

11.
Drop-out is a prevalent complication in the analysis of data from longitudinal studies and remains an active area of research for statisticians and other quantitative methodologists. This tutorial is designed to synthesize and illustrate the broad array of techniques used to address outcome-related drop-out, with emphasis on regression-based methods. We begin with a review of important assumptions underlying likelihood-based and semi-parametric models, followed by an overview of models and methods used to draw inferences from incomplete longitudinal data. The majority of the tutorial is devoted to detailed analysis of two studies with substantial rates of drop-out, designed to illustrate the use of effective methods that are relatively easy to apply: in the first example, we use both semi-parametric and fully parametric models to analyse repeated binary responses from a clinical trial of smoking cessation interventions; in the second, pattern mixture models are used to analyse longitudinal CD4 counts from an observational cohort study of HIV-infected women. In each example, we describe exploratory analyses, model formulation, estimation methodology, and interpretation of results. Analysis of incomplete data requires making unverifiable assumptions, and these are discussed in detail within the context of each application. Relevant SAS code is provided.

12.
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial.
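A toy, single-level sketch of the k-multiplier idea follows; the paper uses multilevel multiple imputation, whereas here a simple regression imputation stands in, and the variables y, trt, and the k grid are hypothetical.

```r
## Scale imputed values by k and track the treatment p-value across a k grid
## ('tipping point' style); trt is a 0/1 indicator, y has missing values.
tipping <- function(y, trt, k_grid = seq(0.5, 1.5, by = 0.1)) {
  miss <- is.na(y)
  fit0 <- lm(y ~ trt)                                  # complete cases only
  imp0 <- predict(fit0, newdata = data.frame(trt = trt[miss]))
  sapply(k_grid, function(k) {
    y_k <- y
    y_k[miss] <- k * imp0                              # multiply imputations by k
    coef(summary(lm(y_k ~ trt)))["trt", "Pr(>|t|)"]    # treatment p-value
  })
}

set.seed(6)
trt <- rep(0:1, each = 100)
y   <- 0.3 * trt + rnorm(200); y[sample(200, 40)] <- NA
round(tipping(y, trt), 3)
```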

13.
Pattern-mixture models (PMM) and selection models (SM) are alternative approaches to statistical analysis when faced with incomplete data and a nonignorable missing-data mechanism. Both models make empirically unverifiable assumptions and need additional constraints to identify the parameters. Here, we first introduce intuitive parameterizations that identify PMMs for different outcome types with distributions in the exponential family; we then translate these into their equivalent SM formulations. This provides a unified framework for performing sensitivity analysis under either setting. The new parameterizations are transparent, easy to use, and admit dual interpretation from both the PMM and SM perspectives. A Bayesian approach is used to perform sensitivity analysis, deriving inferences under informative prior distributions on the sensitivity parameters. These models can be fitted using software that implements Gibbs sampling. Copyright © 2014 John Wiley & Sons, Ltd.

14.
In oncology clinical trials, overall survival, time to progression, and progression-free survival are three commonly used endpoints. Empirical correlations among them have been published for different cancers, but statistical models describing their dependence structure are limited. Recently, Fleischer et al. proposed a mathematically tractable model, based on exponential distributions, that is flexible enough to describe the dependencies realistically. This paper extends their model to the more flexible Weibull distribution. We derived theoretical correlations among the different survival outcomes, as well as the distribution of overall survival induced by the model. Model parameters were estimated by maximum likelihood, and goodness of fit was assessed by plotting estimated versus observed survival curves for overall survival. We applied the method to three cancer clinical trials. In the non-small-cell lung cancer trial, both the exponential and the Weibull models provided an adequate fit to the data, and the estimated correlations were very similar under both models. In the prostate cancer trial and the laryngeal cancer trial, the Weibull model exhibited advantages over the exponential model and yielded larger estimated correlations. Simulations suggested that the proposed Weibull model is robust for data generated from a range of distributions. Copyright © 2015 John Wiley & Sons, Ltd.
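A simulation sketch of the latent-time construction under Weibull components (a Fleischer-type structure; all shape and scale values below are illustrative assumptions, not estimates from the trials discussed above):

```r
## Latent times: progression, death before progression, survival after
## progression; PFS and OS are derived, and the induced correlation computed.
set.seed(7)
n  <- 1e5
t1 <- rweibull(n, shape = 1.3, scale = 12)   # latent time to progression
t2 <- rweibull(n, shape = 1.1, scale = 30)   # latent death without progression
t3 <- rweibull(n, shape = 1.2, scale = 10)   # survival after progression
progressed <- t1 < t2
pfs <- pmin(t1, t2)                          # progression-free survival
os  <- ifelse(progressed, t1 + t3, t2)       # induced overall survival
cor(pfs, os, method = "spearman")            # model-induced PFS-OS correlation
```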

15.
This article summarizes recommendations on the design and conduct of clinical trials of a National Research Council study on missing data in clinical trials. Key findings of the study are that (a) substantial missing data is a serious problem that undermines the scientific credibility of causal conclusions from clinical trials; (b) the assumption that analysis methods can compensate for substantial missing data is not justified; hence (c) clinical trial design, including the choice of key causal estimands, the target population, and the length of the study, should include limiting missing data as one of its goals; (d) missing-data procedures should be discussed explicitly in the clinical trial protocol; (e) clinical trial conduct should take steps to limit the extent of missing data; (f) there is no universal method for handling missing data in the analysis of clinical trials – methods should be justified on the plausibility of the underlying scientific assumptions; and (g) when alternative assumptions are plausible, sensitivity analysis should be conducted to assess robustness of findings to these alternatives. This article focuses on the panel's recommendations on the design and conduct of clinical trials to limit missing data. A companion paper addresses the panel's findings on analysis methods. Copyright © 2012 John Wiley & Sons, Ltd.

16.
Background: Health-related quality of life (HRQoL) measures are becoming more frequently used in clinical trials, and investigators now ask statisticians for advice on how to plan and analyse studies using HRQoL measures, including questions on sample size. Sample size requirements depend critically on the aims of the study, the outcome measure and its summary measure, the effect size, and the method of calculating the test statistic. The SF-6D is a new single-summary, preference-based measure of health derived from the SF-36 that is suitable for use in clinical trials. Objectives: To describe and compare two methods of calculating sample sizes when using the SF-6D in comparative clinical trials, and to give pragmatic guidance to researchers on which method to use. Methods: We describe two main methods of sample size estimation. The parametric (t-test) method assumes that the SF-6D data are continuous and normally distributed and that the effect size is the difference between two means. The non-parametric (Mann-Whitney, or MW) method makes no distributional assumptions about the data, and the effect size is defined in terms of the probability that an observation drawn at random from population Y exceeds an observation drawn at random from population X. We used bootstrap computer simulation to compare the power of the two methods for detecting a shift in location. Results: Computer simulation suggested that if the distribution of the SF-6D is reasonably symmetric then the t-test is more powerful than the MW test at detecting differences in means, whereas if the distribution is skewed then the MW test appears more powerful at detecting a location shift (difference in means). However, the differences in power between the t and MW tests are small and decrease as the sample size increases. Conclusions: Computer simulation suggests that parametric methods work reasonably well; pragmatically, we therefore recommend that parametric methods be used for sample size calculation and analysis when using the SF-6D.
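A simple power simulation in the spirit of this comparison (assumed skewed SF-6D-like scores on roughly the 0.3 to 1.0 range; not the authors' bootstrap code) might look like:

```r
## Estimate the power of the t-test and the Mann-Whitney test to detect
## a location shift, for a given sample size per group and score generator.
power_sim <- function(n, shift, rgen, nsim = 2000, alpha = 0.05) {
  rowMeans(replicate(nsim, {
    x <- rgen(n); y <- rgen(n) + shift
    c(t  = t.test(x, y)$p.value < alpha,
      mw = wilcox.test(x, y)$p.value < alpha)
  }))
}

set.seed(8)
rskew <- function(n) 0.3 + 0.7 * rbeta(n, 5, 2)   # left-skewed, range (0.3, 1)
power_sim(n = 50, shift = 0.05, rgen = rskew)
```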

17.
Liu C, Liu A, Halabi S. Statistics in Medicine 2011, 30(16):2005-2014.
Diagnostic accuracy can be improved considerably by combining multiple biomarkers. Although the likelihood ratio provides the optimal combination of biomarkers, it is sensitive to distributional assumptions that are often difficult to justify. Alternatively, simple linear combinations can be considered, but their empirical solution may require intensive computation when the number of biomarkers is relatively large; moreover, the optimal linear combinations derived under multivariate normality may suffer substantial loss of efficiency if the distributions depart from normality. In this paper, we propose a new approach that linearly combines the minimum and maximum values of the biomarkers. Such a combination involves searching for only a single combination coefficient that maximizes the area under the receiver operating characteristic (ROC) curve and is thus computationally efficient. Simulation results show that the min-max combination may yield larger partial or full area under the ROC curve and is more robust to distributional assumptions. The methods are illustrated using the growth-related hormones data from the Growth and Maturation in Children with Autism or Autistic Spectrum Disorder Study (Autism/ASD Study).
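A sketch of the min-max idea under assumed data (not the authors' code): keep only each subject's minimum and maximum biomarker values and grid-search the single coefficient that maximizes the empirical AUC.

```r
## Empirical AUC via pairwise comparisons of diseased vs nondiseased scores.
emp_auc <- function(s_d, s_n) {
  mean(outer(s_d, s_n, ">")) + 0.5 * mean(outer(s_d, s_n, "=="))
}

## Score = max + lambda * min; choose lambda by a one-dimensional grid search.
minmax_auc <- function(Xd, Xn, lambdas = seq(0, 5, by = 0.05)) {
  aucs <- sapply(lambdas, function(l)
    emp_auc(apply(Xd, 1, max) + l * apply(Xd, 1, min),
            apply(Xn, 1, max) + l * apply(Xn, 1, min)))
  c(lambda = lambdas[which.max(aucs)], auc = max(aucs))
}

set.seed(9)
Xd <- matrix(rnorm(300, mean = 0.6), ncol = 3)  # diseased: 100 x 3 markers
Xn <- matrix(rnorm(300), ncol = 3)              # nondiseased: 100 x 3 markers
minmax_auc(Xd, Xn)
```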

18.
Standard linear regression is commonly used for genetic association studies of quantitative traits. This approach may not be appropriate if the trait, on its original or transformed scales, does not follow a normal distribution. A rank-based nonparametric approach that does not rely on any distributional assumptions can be an attractive alternative. Although several nonparametric tests exist in the literature, their performance in the genetic association setting is not well studied. We evaluate various nonparametric tests for the analysis of quantitative traits and propose a new class of nonparametric tests that have robust performance for traits with various distributions and under different genetic models. We demonstrate the advantage of our proposed methods through simulation studies and real data applications.
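For instance, a basic rank-based association test needs nothing beyond base R; the genotype frequencies and trait model below are hypothetical, and this illustrates the general rank-based idea rather than the authors' proposed class of tests.

```r
## Compare a skewed quantitative trait across genotype groups without
## any normality assumption, using the Kruskal-Wallis rank test.
set.seed(10)
genotype <- factor(sample(c("AA", "Aa", "aa"), 300, replace = TRUE,
                          prob = c(0.49, 0.42, 0.09)))
trait <- rexp(300) + 0.4 * (genotype == "aa")   # skewed trait, small effect
kruskal.test(trait ~ genotype)
```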

19.
In this paper we explore the potential of multilevel models for meta-analysis of trials with binary outcomes, for both summary data, such as log-odds ratios, and individual patient data. Conventional fixed-effect and random-effects models are put into a multilevel-model framework, which provides maximum likelihood or restricted maximum likelihood estimation. To exemplify the methods, we use the results from 22 trials of interventions to prevent respiratory tract infections, and we make comparisons with a second example data set comprising fewer trials. With summary-data methods, confidence intervals for the overall treatment effect and for the between-trial variance may be derived from likelihood-based methods, from a parametric bootstrap, or from Wald methods; the bootstrap intervals are preferred because they relax the assumptions required by the other two approaches. When modelling individual patient data, a bias-corrected bootstrap may be used to provide unbiased estimation and correctly located confidence intervals; this method is particularly valuable for the between-trial variance. The trial effects may be modelled as either fixed or random within individual-data models, and we discuss the corresponding assumptions and implications. If random trial effects are used, the covariance between these and the random treatment effects should be included; the resulting model is equivalent to a bivariate approach to meta-analysis. Having implemented these techniques, the flexibility of multilevel modelling may be exploited to facilitate extensions to standard meta-analysis methods.
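A compact sketch of the summary-data pieces (a DerSimonian-Laird moment estimator plus a parametric bootstrap interval for the between-trial variance; yi and vi are assumed trial log-odds ratios and within-trial variances, not the paper's data):

```r
## DerSimonian-Laird moment estimator of the between-trial variance tau^2.
dl_tau2 <- function(yi, vi) {
  w <- 1 / vi
  q <- sum(w * (yi - sum(w * yi) / sum(w))^2)           # Cochran's Q
  max(0, (q - (length(yi) - 1)) / (sum(w) - sum(w^2) / sum(w)))
}

## Parametric bootstrap CI for tau^2: simulate trial effects from the fitted
## random-effects model and re-estimate tau^2 each time.
boot_tau2_ci <- function(yi, vi, B = 1000) {
  tau2 <- dl_tau2(yi, vi)
  mu   <- sum(yi / (vi + tau2)) / sum(1 / (vi + tau2))  # pooled effect
  reps <- replicate(B, dl_tau2(rnorm(length(yi), mu, sqrt(vi + tau2)), vi))
  quantile(reps, c(0.025, 0.975))
}

set.seed(11)
yi <- rnorm(22, -0.4, 0.4); vi <- runif(22, 0.02, 0.2)  # 22 stand-in trials
boot_tau2_ci(yi, vi)
```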

20.
Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each paradigm relies on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities, and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not an exact correspondence, between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters differ from the conditions that lead to a correspondence of these quantities.
