Similar Literature
20 similar documents found.
1.
A robust likelihood approach for the analysis of overdispersed correlated count data that takes into account cluster-varying covariates is proposed. We emphasise two characteristics of the proposed method: the correlation structure satisfies the constraints on the second moments, and the estimation of the correlation structure guarantees consistent estimates of the regression coefficients. In addition, we extend the mean specification to include within- and between-cluster effects. The method is illustrated through the analysis of data from two studies. In the first study, cross-sectional count data from a randomised controlled trial are analysed to evaluate the efficacy of a communication skills training programme. The second study involves longitudinal count data representing counts of damaged hand joints in patients with psoriatic arthritis. Motivated by this study, we generalise our model to accommodate a subpopulation of patients who are not susceptible to the development of damaged hand joints.
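As a hedged illustration of the within-/between-cluster mean specification mentioned above, the sketch below splits a cluster-varying covariate into its cluster mean (between-cluster effect) and the deviation from that mean (within-cluster effect). The data and column names are hypothetical, not from the paper.

```python
import numpy as np
import pandas as pd

# Toy clustered data: cluster id and a cluster-varying covariate x.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(10), 5),
    "x": rng.normal(size=50),
})

# Between-cluster component: the cluster mean of x.
df["x_between"] = df.groupby("cluster")["x"].transform("mean")
# Within-cluster component: deviation from the cluster mean.
df["x_within"] = df["x"] - df["x_between"]

# A mean model with separate effects would then use both columns, e.g.
# log E[y] = b0 + b_w * x_within + b_b * x_between.
```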

2.
Information from historical trials is important for the design, interim monitoring, analysis, and interpretation of clinical trials. Meta-analytic models can be used to synthesize the evidence from historical data, which are often only available in aggregate form. We consider evidence synthesis methods for trials with recurrent event endpoints, which are common in many therapeutic areas. Such endpoints are typically analyzed by negative binomial regression. However, the individual patient data necessary to fit such a model are usually unavailable for historical trials reported in the medical literature. We describe approaches for back-calculating model parameter estimates and their standard errors from available summary statistics with various techniques, including approximate Bayesian computation. We propose to use a quadratic approximation to the log-likelihood for each historical trial based on two independent terms for the log mean rate and the log of the dispersion parameter. A Bayesian hierarchical meta-analysis model then provides the posterior predictive distribution for these parameters. Simulations show that this approach with back-calculated parameter estimates yields inference very similar to that obtained using parameter estimates from individual patient data as input. We illustrate how to design and analyze a new randomized placebo-controlled exacerbation trial in severe eosinophilic asthma using data from 11 historical trials.
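A minimal sketch of the back-calculation step for the log mean rate, assuming a historical trial reports a rate estimate with a 95% CI constructed on the log scale (function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def log_rate_from_ci(rate, lo, hi, level=0.95):
    """Back-calculate the log mean rate and its standard error from a
    reported point estimate and CI, assuming the CI was constructed on
    the log scale (typical for rate estimates)."""
    z = norm.ppf(0.5 + level / 2)
    return np.log(rate), (np.log(hi) - np.log(lo)) / (2 * z)

def quad_loglik(theta, est, se):
    """Quadratic approximation to the log-likelihood of theta, i.e. a
    normal likelihood centred at the back-calculated estimate."""
    return -0.5 * ((theta - est) / se) ** 2

est, se = log_rate_from_ci(rate=2.1, lo=1.7, hi=2.6)
print(est, se, quad_loglik(np.log(2.0), est, se))
```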

3.
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empirical data, i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general, with Bayes-HM producing less dispersed results for random-effects estimates, although these were upwardly biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the coverage of GEE in small samples. Important effects of accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia.
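A sketch of one such data-generating mechanism, assuming lognormal cluster effects and a gamma-Poisson (negative binomial) individual layer with individually varying follow-up; the parameter values are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_crt_counts(n_clusters, n_per_cluster, base_rate, rate_ratio,
                        sigma_b, nb_shape):
    """Simulate counts for a two-arm CRT: lognormal cluster effects,
    gamma-Poisson (negative binomial) individual overdispersion, and
    individual follow-up times drawn uniformly."""
    arm = np.repeat([0, 1], n_clusters // 2)
    rows = []
    for j, a in enumerate(arm):
        # Cluster random effect with mean 1 on the rate scale.
        u = rng.lognormal(mean=-sigma_b**2 / 2, sigma=sigma_b)
        for _ in range(n_per_cluster):
            t = rng.uniform(0.5, 1.5)                 # follow-up time
            mu = base_rate * rate_ratio**a * u * t    # expected count
            lam = rng.gamma(nb_shape, mu / nb_shape)  # gamma frailty -> NB marginal
            rows.append((j, a, t, rng.poisson(lam)))
    return np.array(rows)  # columns: cluster, arm, time, count

data = simulate_crt_counts(20, 25, base_rate=2.0, rate_ratio=0.7,
                           sigma_b=0.3, nb_shape=1.5)
```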

4.
Statistical inference based on correlated count measurements is frequently performed in biomedical studies. Most existing sample size calculation methods for count outcomes were developed under the Poisson model. Deviation from the Poisson assumption (equality of mean and variance) has been widely documented in practice, indicating an urgent need for sample size methods with more realistic assumptions to ensure valid experimental design. In this study, we investigate sample size calculation for clinical trials with correlated count measurements based on the negative binomial distribution. This approach is flexible enough to accommodate overdispersion and unequal measurement intervals, as well as arbitrary randomization ratios, missing data patterns, and correlation structures. Importantly, the derived sample size formulas have closed forms both for the comparison of slopes and for the comparison of time-averaged responses, which greatly reduces the burden of implementation in practice. We conducted extensive simulations to demonstrate that the proposed method maintains the nominal levels of power and type I error over a wide range of design configurations. We illustrate the application of this approach using a real epilepsy trial.
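For orientation, a standard per-arm sample-size calculation for a two-sample negative binomial rate comparison is sketched below. It is not the paper's closed-form formula for slopes or time-averaged responses, and the design values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def nb_sample_size(rate0, rate1, t, shape, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided Wald test of the rate ratio
    under a negative binomial model with common follow-up time t and
    dispersion 'shape' (Var(Y) = mu + mu^2/shape); a standard textbook
    formula, not the paper's derivations."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    # Approximate variance of the log rate ratio, per subject pair.
    var = 1 / (rate0 * t) + 1 / (rate1 * t) + 2 / shape
    effect = np.log(rate1 / rate0)
    return int(np.ceil((z_a + z_b) ** 2 * var / effect ** 2))

print(nb_sample_size(rate0=1.5, rate1=1.0, t=1.0, shape=1.2))
```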

5.
Bayesian approaches to inference in cluster randomized trials have been investigated for normally distributed and binary outcome measures. However, relatively little attention has been paid to outcome measures which are counts of events. We discuss an extension of previously published Bayesian hierarchical models to count data, which can usually be assumed to follow a Poisson distribution. We develop two models: one based on the traditional rate ratio, and one based on the rate difference, which may often be more intuitively interpreted for clinical trials and is needed for economic evaluation of interventions. We examine the relationship between the intracluster correlation coefficient (ICC) and the between-cluster variance for each of these two models. In practice, this allows one to use previously published evidence on ICCs to derive an informative prior distribution, which can then be used to increase the precision of the posterior distribution of the ICC. We demonstrate our models using a previously published trial assessing the effectiveness of an educational intervention and a previously derived prior distribution. We assess the robustness of the posterior distribution for effectiveness to departures from a normal distribution of the random effects.

6.
Blinded sample size re-estimation and information monitoring based on blinded data have been suggested to mitigate risks due to planning uncertainties regarding nuisance parameters. Motivated by a randomized controlled trial in pediatric multiple sclerosis (MS), a continuous monitoring procedure for overdispersed count data was proposed recently. However, this procedure assumed constant event rates, an assumption often not met in practice. Here we extend the procedure to accommodate time trends in the event rates, considering two blinded approaches: (a) the mixture approach, which models the number of events by a mixture of two negative binomial distributions, and (b) the lumping approach, which approximates the marginal distribution of the event counts by a negative binomial distribution. Through simulations, the operating characteristics of the proposed procedures are investigated under decreasing event rates. We find that the type I error rate is not relevantly inflated by either of the monitoring procedures, with the exception of strong time dependencies, where the procedure assuming constant rates exhibits some inflation. Furthermore, the procedure accommodating time trends has generally favorable power properties compared with the procedure based on constant rates, which often stops too late. The proposed method is illustrated by the clinical trial in pediatric MS.
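A hedged sketch of the blinded mixture idea: under 1:1 randomization, pooled blinded counts follow a 50:50 mixture of two negative binomials, whose parameters can be estimated by maximum likelihood without unblinding. This simplified version omits the paper's time-trend extension, and all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def neg_loglik(params, y, frac=0.5):
    """Negative log-likelihood of blinded counts under a 50:50 mixture
    of two negative binomials with rates mu0, mu1 and common shape k."""
    mu0, mu1, k = np.exp(params)  # log-parameterization keeps values positive
    # scipy's nbinom uses size n = k and p = k / (k + mu).
    f0 = nbinom.pmf(y, k, k / (k + mu0))
    f1 = nbinom.pmf(y, k, k / (k + mu1))
    return -np.sum(np.log(frac * f0 + (1 - frac) * f1))

rng = np.random.default_rng(0)
k_true = 1.0
y = np.concatenate([
    rng.negative_binomial(k_true, k_true / (k_true + 2.0), size=100),  # control
    rng.negative_binomial(k_true, k_true / (k_true + 1.2), size=100),  # treated
])
rng.shuffle(y)  # "blind" the pooled data
fit = minimize(neg_loglik, x0=np.log([1.0, 2.0, 1.0]), args=(y,),
               method="Nelder-Mead")
print(np.exp(fit.x))  # estimated (mu0, mu1, k), up to label switching
```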

7.
The recent 21st Century Cures Act promotes innovations to accelerate the discovery, development, and delivery of 21st century cures. These include the broader application of Bayesian statistics and the use of evidence from clinical expertise. An example of the latter is the use of trial-external (or historical) data, which promises more efficient or ethical trial designs. We propose a Bayesian meta-analytic approach for leveraging historical data for time-to-event endpoints, which are common in oncology and cardiovascular disease. The approach is based on a robust hierarchical model for piecewise exponential data. It allows for various degrees of between-trial heterogeneity and for leveraging individual as well as aggregate data. An ovarian carcinoma trial and a non-small cell lung cancer trial illustrate methodological and practical aspects of leveraging historical data for the analysis and design of time-to-event trials.
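Stripped of the hierarchical layer, the piecewise exponential model reduces to interval-specific event counts and exposures, with hazard MLEs d_k / E_k per interval. A minimal sketch with illustrative data (not from the cited trials):

```python
import numpy as np

def piecewise_exp_mle(event_times, status, cuts):
    """MLE of piecewise-constant hazards: lambda_k = d_k / E_k, where
    d_k counts events and E_k the person-time accrued in interval k."""
    edges = np.concatenate([[0.0], cuts, [np.inf]])
    d = np.zeros(len(edges) - 1)
    E = np.zeros(len(edges) - 1)
    for t, s in zip(event_times, status):
        for k in range(len(edges) - 1):
            lo, hi = edges[k], edges[k + 1]
            if t <= lo:
                break
            E[k] += min(t, hi) - lo          # exposure in interval k
            if s == 1 and lo < t <= hi:
                d[k] += 1                    # event falls in interval k
    return d / E, d, E

haz, d, E = piecewise_exp_mle(
    event_times=np.array([0.5, 1.2, 2.3, 0.8, 3.0]),
    status=np.array([1, 1, 0, 1, 0]),        # 1 = event, 0 = censored
    cuts=np.array([1.0, 2.0]),
)
print(haz)
```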

8.
In practice, count data may exhibit varying dispersion patterns and excessive zero values; additionally, they may appear in groups or clusters sharing a common source of variation. We present a novel Bayesian approach for analyzing such data. To model these features, we combine the Conway-Maxwell-Poisson distribution, which allows both overdispersion and underdispersion, with a hurdle component for the zeros and random effects for clustering. We propose an efficient Markov chain Monte Carlo sampling scheme to obtain posterior inference from our model. Through simulation studies, we compare our hurdle Conway-Maxwell-Poisson model with a hurdle Poisson model to demonstrate the effectiveness of our approach. Furthermore, we apply our model to an illustrative dataset containing information on the number and types of carious lesions on each tooth in a population of 9-year-olds from the Iowa Fluoride Study, an ongoing longitudinal study of a cohort of Iowa children that began in 1991.
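The Conway-Maxwell-Poisson pmf has no closed-form normalizing constant, so it is commonly evaluated by truncating the defining series. A hedged sketch (the truncation length is a pragmatic choice, not from the paper):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def cmp_logpmf(y, lam, nu, max_terms=200):
    """Log-pmf of the Conway-Maxwell-Poisson distribution,
    P(Y = y) = lam^y / (y!)^nu / Z(lam, nu), with the normalizing
    constant Z truncated at max_terms terms. nu < 1 gives
    overdispersion, nu > 1 underdispersion, nu = 1 the Poisson."""
    j = np.arange(max_terms)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)
    log_Z = np.logaddexp.reduce(log_terms)   # numerically stable sum
    return y * np.log(lam) - nu * gammaln(y + 1) - log_Z

# Sanity check: nu = 1 recovers the Poisson pmf.
print(np.exp(cmp_logpmf(3, lam=2.0, nu=1.0)), poisson.pmf(3, 2.0))
```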

9.
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, we derive sample size and power formulas as well as optimal sample size allocations. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and therefore recommend it for application in clinical trials. The proposed methods are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN.

10.
Generally, a two-stage design is employed in Phase II clinical trials to avoid giving patients an ineffective drug. If the number of patients with significant improvement, a binomial response, is greater than a pre-specified value at the first stage, then another binomial response is observed at the second stage. This paper considers interval estimation of the response probability when the second stage is allowed to continue. Two asymptotic interval estimators, Wald and score, as well as two exact interval estimators, Clopper-Pearson and Sterne, are constructed from the two binomial responses of this two-stage design, where the binomial response at the first stage follows a truncated binomial distribution. The mean actual coverage probability and expected interval width are employed to evaluate the performance of these interval estimators. Based on the comparison results, the score interval is recommended for both Simon's optimal and minimax designs.
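For reference, the single-sample versions of the exact Clopper-Pearson and the score (Wilson) intervals are sketched below; these ignore the truncated-binomial first stage that the paper accounts for.

```python
from scipy.stats import beta, norm

def clopper_pearson(x, n, level=0.95):
    """Exact (Clopper-Pearson) CI for a binomial proportion, via the
    beta-distribution representation of binomial tail probabilities."""
    a = 1 - level
    lo = beta.ppf(a / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - a / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

def wilson_score(x, n, level=0.95):
    """Wilson score CI for a binomial proportion."""
    z = norm.ppf(0.5 + level / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * ((p * (1 - p) + z**2 / (4 * n)) / n) ** 0.5 / (1 + z**2 / n)
    return centre - half, centre + half

print(clopper_pearson(7, 25), wilson_score(7, 25))
```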

11.
The process of undertaking a meta-analysis involves a sequence of decisions, one of which is deciding which measure of treatment effect to use. In particular, for comparative binary data from randomised controlled trials, a wide variety of measures are available, such as the odds ratio and the risk difference. It is often of interest to know whether important conclusions would have been substantively different if an alternative measure had been used. Here we develop a new type of sensitivity analysis that incorporates standard measures of treatment effect. Thus, rather than examining the implications of a variety of measures in an ad hoc manner, we can simultaneously examine an entire family of possibilities, including the odds ratio, the arcsine difference and the risk difference.
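The three named members of the family are easy to compute from a 2x2 table; a small sketch with hypothetical counts:

```python
import numpy as np

def effect_measures(a, n1, c, n0):
    """Treatment-effect measures from a 2x2 table: a events out of n1
    in the treated arm, c events out of n0 in the control arm."""
    p1, p0 = a / n1, c / n0
    return {
        "risk difference": p1 - p0,
        "log odds ratio": np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0)),
        "arcsine difference": np.arcsin(np.sqrt(p1)) - np.arcsin(np.sqrt(p0)),
    }

print(effect_measures(a=12, n1=50, c=20, n0=50))
```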

12.
We consider random effects meta-analysis where the outcome variable is the occurrence of some event of interest. The data structures handled are those in which each study has one or more groups, and for each group either the number of subjects with and without the event, or the number of events and the total duration of follow-up, is available. Traditionally, the meta-analysis follows the summary measures approach based on the estimates of the outcome measure(s) and the corresponding standard error(s). This approach assumes an approximately normal within-study likelihood and treats the standard errors as known. It has several potential disadvantages: it does not account for the standard errors being estimated or for correlation between the estimate and the standard error, it requires an (arbitrary) continuity correction in the case of zero events, and the normal approximation can be poor in studies with few events. We show that these problems can be overcome in most cases occurring in practice by replacing the approximate normal within-study likelihood with the appropriate exact likelihood. This leads to a generalized linear mixed model that can be fitted in standard statistical software. For instance, for odds ratio meta-analysis, one can use the non-central hypergeometric likelihood, leading to mixed-effects conditional logistic regression. For incidence rate ratio meta-analysis, it leads to random effects logistic regression with an offset variable. We also present bivariate and multivariate extensions. We present a number of examples, especially with rare events, among which is an example of network meta-analysis.
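The logistic-regression-with-offset trick for rate ratios rests on conditioning: given the total events in a study, the treated-arm count is binomial with logit(p) = log(IRR) + log(t1/t0). A hedged fixed-effect simplification (the paper's model adds a random effect; data are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

# Per-study events and person-time (hypothetical).
e1 = np.array([4, 10, 3]);  t1 = np.array([120.0, 300.0, 80.0])
e0 = np.array([9, 15, 7]);  t0 = np.array([110.0, 290.0, 95.0])

# Conditional on e1 + e0, the treated-arm count is binomial with
# logit(p_i) = log(IRR) + log(t1_i / t0_i), so logistic regression
# with offset log(t1 / t0) estimates the log incidence rate ratio.
endog = np.column_stack([e1, e0])       # (successes, failures)
exog = np.ones((len(e1), 1))            # intercept = log IRR
fit = sm.GLM(endog, exog, family=sm.families.Binomial(),
             offset=np.log(t1 / t0)).fit()
print(np.exp(fit.params[0]))            # pooled incidence rate ratio
```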

13.
The over-dispersion parameter is an important and versatile measure in the analysis of one-way layouts of count data in biological studies. For example, it is commonly used as an inverse measure of aggregation in biological count data. Its estimation from finite data sets is a recognized challenge. Many simulation studies have examined the bias and efficiency of different estimators of the over-dispersion parameter for finite data sets (see, for example, Clark and Perry, Biometrics 1989; 45:309-316, and Piegorsch, Biometrics 1990; 46:863-867), but little attention has been paid to the accuracy of its confidence intervals (CIs). In this paper, we first derive asymptotic procedures for the construction of confidence limits for the over-dispersion parameter using four estimators that are specified by only the first two moments of the counts. We also obtain closed-form asymptotic variance formulae for these four estimators. In addition, we consider the asymptotic CI based on the maximum likelihood (ML) estimator under the negative binomial model. The simulation results suggest that the asymptotic CIs based on these five estimators have coverage below the nominal coverage probability. To remedy this, we also study the properties of the asymptotic CIs based on the restricted ML, extended quasi-likelihood, and double extended quasi-likelihood estimates, eliminating the nuisance parameter effect using their adjusted profile likelihood and quasi-likelihoods. These CIs are shown to outperform the competitors by providing coverage levels close to nominal over a wide range of parameter combinations. Two examples with biological count data are presented.
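One simple estimator specified by the first two moments is the method-of-moments estimator, sketched below under the parameterization Var(Y) = mu + mu^2/k (conventions for the over-dispersion parameter differ across papers, so this is illustrative rather than one of the paper's four estimators specifically):

```python
import numpy as np

def overdispersion_moment(y):
    """Method-of-moments estimate of the negative binomial shape k from
    Var(Y) = mu + mu^2 / k  =>  k = mu^2 / (s^2 - mu). Returns inf when
    the sample shows no overdispersion (s^2 <= mean)."""
    m = np.mean(y)
    s2 = np.var(y, ddof=1)
    return m**2 / (s2 - m) if s2 > m else np.inf

rng = np.random.default_rng(3)
k_true, mu = 2.0, 4.0
y = rng.negative_binomial(k_true, k_true / (k_true + mu), size=500)
print(overdispersion_moment(y))  # should be near k_true = 2.0
```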

14.
Paired count data usually arise in medicine when before- and after-treatment measurements are considered. In the present paper, we assume that the correlated paired count data follow a bivariate Poisson distribution in order to derive the distribution of their difference. The derived distribution is shown to be the same as that of the difference of independent Poisson variables, thus renewing interest in the distribution introduced by Skellam. Using this distribution, we remove the correlation that naturally exists in paired data and improve the quality of our inference by using exact distributions instead of normal approximations. A zero-inflated version is considered to account for an excess of zero counts. Bayesian estimation and hypothesis testing for the models considered are discussed. An example from dental epidemiology is used to illustrate the proposed methodology.
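In the common-shock construction of the bivariate Poisson, the shared component cancels in the difference, which is why the difference is Skellam-distributed just as for independent Poisson counts. scipy exposes this distribution directly; the means below are illustrative.

```python
from scipy.stats import skellam

# Difference D = Y1 - Y2 of Poisson counts with means mu1 and mu2
# follows a Skellam distribution, enabling exact (not normal-
# approximate) probability statements.
mu1, mu2 = 2.5, 4.0
print(skellam.pmf(-2, mu1, mu2))         # P(D = -2)
print(skellam.cdf(0, mu1, mu2))          # P(D <= 0), exact
print(skellam.interval(0.95, mu1, mu2))  # exact central 95% interval for D
```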

15.
Zhou XH, Li SM. Statistics in Medicine 2006; 25(16): 2737-2761
In this paper, we consider a missing outcome problem in causal inference for a randomized encouragement design study. We propose both moment and maximum likelihood estimators for the marginal distributions of the potential outcomes and for the local complier average causal effect (CACE) parameter. We illustrate our methods with a randomized encouragement design study on the effectiveness of flu shots.
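In its simplest complete-data form, the moment estimator of the CACE is the ratio of two intention-to-treat contrasts (the instrumental-variable/Wald estimator); the paper's estimators additionally handle missing outcomes. A hedged sketch with simulated data:

```python
import numpy as np

def cace_moment(z, d, y):
    """Moment (instrumental-variable) estimator of the complier average
    causal effect: the ITT effect on the outcome divided by the ITT
    effect on treatment receipt. z = randomized encouragement (0/1),
    d = treatment received (0/1), y = outcome."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d

rng = np.random.default_rng(7)
n = 1000
z = rng.integers(0, 2, n)                # randomized encouragement
compliant = rng.random(n) < 0.6          # 60% compliers (hypothetical)
d = np.where(compliant, z, 0)            # compliers take treatment iff encouraged
y = 0.3 * d + rng.normal(size=n)         # true effect 0.3 among the treated
print(cace_moment(z, d, y))              # should be near 0.3
```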

16.
Last observation carried forward (LOCF) and analysis using only data from subjects who complete a trial (Completers) are commonly used techniques for analysing clinical trials with incomplete data when the endpoint is change from baseline at the last scheduled visit. We propose two alternative methods. The semi-parametric method, which cumulates changes observed between consecutive time points, is conceptually similar to the familiar life-table method and the corresponding Kaplan-Meier estimation used when the primary endpoint is time to event. A non-parametric analogue of LOCF is obtained by carrying forward, not the observed value, but the rank of the change from baseline at the last observation for each subject; we refer to this as the LRCF method. Both procedures retain the simplicity of the LOCF and Completers analyses and, like those methods, do not require data imputation or modelling assumptions. In the absence of any incomplete data, they reduce to the usual two-sample tests. In simulations intended to reflect chronic diseases that one might encounter in practice, LOCF produced markedly biased estimates and markedly inflated type I error rates when censoring was unequal in the two treatment arms. These problems did not arise with the Completers, Cumulative Change, or LRCF methods. Cumulative Change and LRCF were more powerful than Completers, and the Cumulative Change test provided more efficient estimates than the Completers analysis in all simulations. We conclude that the Cumulative Change and LRCF methods are preferable to the LOCF and Completers analyses. Mixed model repeated measures (MMRM) performed similarly to Cumulative Change and LRCF and makes somewhat less restrictive assumptions about missingness mechanisms, so it is also a reasonable alternative to LOCF and Completers analyses.

17.
Background: Individual patient data (IPD) meta-analysis is the gold standard. Aggregate data (AD) and IPD can be combined using conventional pairwise meta-analysis when IPD cannot be obtained for all relevant studies. We extend the methodology to combine IPD and AD in a mixed treatment comparison (MTC) meta-analysis. Methods: The proposed random-effects MTC models combine IPD and AD for a dichotomous outcome. We study the benefits of acquiring IPD for a subset of trials when assessing the underlying consistency assumption by including treatment-by-covariate interactions in the model. We describe three different model specifications that make increasingly stronger assumptions regarding the interactions. We illustrate the methodology through application to real data sets to compare drugs for treating malaria, using the outcome of unadjusted treatment success at day 28. We compare results from AD alone, IPD alone and all data. Results: When IPD contributed (i.e. either using IPD alone or combining IPD and AD), the chains converged and we identified statistically significant regression coefficients for the interactions. Using IPD alone, we were able to compare only three of the six treatments of interest. When models were fitted to AD alone, the treatment effects and the regression coefficients for the interactions were far more imprecise, and the chains did not converge. Conclusions: The models combining IPD and AD encapsulated all available evidence. When exploring interactions, it can be beneficial to obtain IPD for a subset of trials and to combine this IPD with additional AD.

18.
Poisson regression is widely used in medical studies and can be extended to negative binomial regression to allow for heterogeneity. When there is an excess of zero counts, a useful approach is to use a mixture model with a proportion P of subjects not at risk and a proportion 1 − P of at-risk subjects whose outcome values follow a Poisson or negative binomial distribution. Covariate effects can be incorporated into both components of the model. In child assessment, fine motor development is often measured by test items that involve a process of imitation and a process of fine motor exercise. One such developmental milestone is 'building a tower of cubes'. This study analyses the impact of foetal growth and postnatal somatic growth on this milestone, operationalized as the number of cubes and measured around the age of 22 months. It is shown that the two aspects of early growth may have different implications for imitation and fine motor dexterity. The usual approach of recording and analysing the milestone as a binary outcome, such as whether the child can build a tower of three cubes, may leave out important information.
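A hedged sketch of the zero-inflated Poisson likelihood described above, without covariates (the paper incorporates covariates into both components); the data are simulated for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import logistic

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson model with a
    not-at-risk proportion pi and Poisson mean mu for the at-risk group."""
    pi = logistic.cdf(params[0])       # keeps pi in (0, 1)
    mu = np.exp(params[1])             # keeps mu positive
    log_pois = y * np.log(mu) - mu - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-mu))   # structural or sampling zero
    ll_pos = np.log(1 - pi) + log_pois              # at-risk, positive count
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(11)
n = 500
at_risk = rng.random(n) < 0.7                    # 30% structural zeros
y = np.where(at_risk, rng.poisson(3.0, n), 0)
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="BFGS")
print(logistic.cdf(fit.x[0]), np.exp(fit.x[1]))  # approx. 0.3 and 3.0
```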

19.
This article summarizes recommendations on the design and conduct of clinical trials of a National Research Council study on missing data in clinical trials. Key findings of the study are that (a) substantial missing data is a serious problem that undermines the scientific credibility of causal conclusions from clinical trials; (b) the assumption that analysis methods can compensate for substantial missing data is not justified; hence (c) clinical trial design, including the choice of key causal estimands, the target population, and the length of the study, should include limiting missing data as one of its goals; (d) missing-data procedures should be discussed explicitly in the clinical trial protocol; (e) clinical trial conduct should take steps to limit the extent of missing data; (f) there is no universal method for handling missing data in the analysis of clinical trials – methods should be justified on the plausibility of the underlying scientific assumptions; and (g) when alternative assumptions are plausible, sensitivity analysis should be conducted to assess robustness of findings to these alternatives. This article focuses on the panel's recommendations on the design and conduct of clinical trials to limit missing data. A companion paper addresses the panel's findings on analysis methods.

20.
Overdispersion and structural zeros are two major manifestations of departure from the Poisson assumption when modeling count responses with Poisson log-linear regression. As noted in a large body of literature, ignoring such departures can yield biased estimates and lead to wrong conclusions. Different approaches have been developed to tackle these two problems. In this paper, we review available methods for dealing with overdispersion and structural zeros in a longitudinal data setting and propose a distribution-free modeling approach that addresses the limitations of these methods by utilizing a new class of functional response models. We illustrate our approach with both simulated and real study data.
