Similar Documents (20 results)
1.
Community intervention evaluations that measure changes over time may conduct repeated cross-sectional surveys, follow a cohort of residents over time, or (often) use both designs. Each survey design has implications for precision and cost. To explore these issues, we assume that two waves of surveys are conducted, and that the goal is to estimate change in behavior for people who reside in the community at both times. Cohort designs are shown to provide more accurate estimates (in the sense of lower mean squared error) than cross-sectional estimates if (1) there is strong correlation over time in an individual's behavior at time 0 and time 1, (2) relatively few subjects are lost to follow-up, (3) the bias is relatively small, and (4) the available sample size is not too large. Otherwise, a repeated cross-sectional design is more efficient. We developed methods for choosing between the two designs, and applied them to actual survey data. Owing to drop-outs and losses to follow-up, the cohort estimates were usually more biased than the cross-sectional estimates. The correlations over time for most of the variables studied were also high. In many instances the cohort estimate, although biased, is preferred to the relatively unbiased cross-sectional estimate because the mean squared error was smaller for the cohort than for the cross-sectional estimate. If these results are replicated in other data, they may result in guidelines for choosing a more efficient study design.
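
The decision rule described above is a mean-squared-error comparison, and it is easy to make concrete. Below is a minimal sketch, assuming a paired-difference cohort estimator and an unbiased cross-sectional estimator with a common variance at each wave; all parameter values in the example are hypothetical, not taken from the paper.

```python
import numpy as np

def mse_choice(sigma2, rho, bias_cohort, n_cohort, n_cross):
    """Compare MSE of cohort vs. repeated cross-sectional estimates of change.

    sigma2      : variance of the behavior measure at each wave
    rho         : correlation of an individual's responses across waves
    bias_cohort : anticipated bias of the cohort estimate (drop-out, attrition)
    n_cohort    : subjects retained at both waves (cohort design)
    n_cross     : subjects surveyed per wave (cross-sectional design)
    """
    # Cohort: paired differences, so variance shrinks with high correlation.
    var_cohort = 2 * sigma2 * (1 - rho) / n_cohort
    mse_cohort = var_cohort + bias_cohort ** 2
    # Cross-sectional: independent samples at each wave, assumed unbiased.
    mse_cross = 2 * sigma2 / n_cross
    return "cohort" if mse_cohort < mse_cross else "cross-sectional"

# High within-person correlation and modest bias favour the cohort design.
print(mse_choice(sigma2=1.0, rho=0.8, bias_cohort=0.05, n_cohort=400, n_cross=500))
```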

2.
We considered design issues for multiple treatment arms in survival intervention trials and used optimal design theory to allocate patients adaptively in such trials. We proposed three types of optimal designs: one ensures that we have the most precise estimates of the treatment effects, another guarantees that we have the minimal sample size subject to user-specified allocation ratio assignments among treatment arms, and the third ensures that the design has minimal total hazard for the cohort. The latter two types of optimal designs are also subject to user-specified power constraints for testing contrasts among treatment effects. The operating characteristics of these optimal designs along with balanced designs are compared theoretically and by simulation, including their robustness properties with respect to model misspecifications. Our results show that the proposed optimal designs are frequently unbalanced and that they are generally more efficient and more ethical than the popular balanced designs. We also apply our response-adaptive allocation strategy to redesign a three-arm head and neck cancer trial and make comparisons.

3.
Development of methods to accurately estimate human immunodeficiency virus (HIV) incidence rate remains a challenge. Ideally, one would follow a random sample of HIV-negative individuals under a longitudinal study design and identify incident cases as they arise. Such designs can be prohibitively resource-intensive, and therefore alternative designs may be preferable. We propose such a simple, less resource-intensive study design and develop a weighted log likelihood approach which simultaneously accounts for selection bias and outcome misclassification error. The design is based on a cross-sectional survey which queries individuals' time since last HIV-negative test, validates their test results with formal documentation whenever possible, and tests all persons who do not have documentation of being HIV-positive. To gain efficiency, we update the weighted log likelihood function with potentially misclassified self-reports from individuals who could not produce documentation of a prior HIV-negative test, and investigate large sample properties of validated-subsample-only versus pooled-sample estimators through extensive Monte Carlo simulations. We illustrate our method by estimating the incidence rate for individuals who tested HIV-negative within 1.5 and 5 years prior to Botswana Combination Prevention Project enrolment. This article establishes that accurate estimates of HIV incidence rate can be obtained from individuals' history of testing in a cross-sectional cohort study design by appropriately accounting for selection bias and misclassification error. Moreover, this approach is notably less resource-intensive compared to longitudinal and laboratory-based methods.
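
To make the approach concrete, here is a stylized sketch of a weighted, misclassification-adjusted log-likelihood for a constant incidence rate. The exponential waiting-time model, the sensitivity/specificity values, and the toy data are all illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weighted_loglik(lam, t, y, w, se=0.98, sp=0.99):
    """Weighted log-likelihood for incidence rate lam.

    t  : years since last documented HIV-negative test
    y  : 1 if currently testing positive, else 0 (possibly self-reported)
    w  : sampling weights correcting for selection bias
    se, sp : assumed sensitivity/specificity of the reported outcome
    """
    p = 1.0 - np.exp(-lam * t)                  # true seroconversion probability
    p_obs = se * p + (1.0 - sp) * (1.0 - p)     # misclassification-adjusted
    return np.sum(w * (y * np.log(p_obs) + (1 - y) * np.log(1.0 - p_obs)))

# Toy data: 5 respondents, weights from the sampling design (all hypothetical).
t = np.array([0.5, 1.0, 1.5, 3.0, 5.0])
y = np.array([0, 0, 1, 0, 1])
w = np.array([1.2, 0.8, 1.0, 1.5, 0.9])
fit = minimize_scalar(lambda lam: -weighted_loglik(lam, t, y, w),
                      bounds=(1e-6, 2.0), method="bounded")
print(f"estimated incidence rate: {fit.x:.3f} per person-year")
```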

4.
Both traditional phase I designs and the increasingly popular continual reassessment method (CRM) designs select an estimate of maximum tolerable dose (MTD) from among a set of prespecified dose levels. Although CRM designs use an implied dose-response model to select the next dose level, in general it is neither assumed nor necessary that this model be tied to the actual dose of a drug. In contrast, in our two-stage design the fitting of a dose-response model after data have been collected is a necessary feature of the design, and the MTD is not constrained to be one of the prespecified dose levels. We conducted a simulation study to evaluate the performance of the two-stage design, two likelihood-based CRM designs, and two traditional designs in estimating the MTD in situations where one assumes that an explicit dose-response model exists. Under a wide variety of dose-response settings, we examined the bias and precision of estimates, and the fraction of estimates that were extremely high or low. We also studied the effect of adding a model fitting step at the end of a traditional design or a CRM design. The best performance was achieved using the two-stage and CRM designs. Although the CRM designs generally had smaller bias, the two-stage design yielded equal or somewhat smaller precision in some cases. The addition of a model-fitting step slightly improved the precision of the CRM estimates and decreased the percentage of extreme estimates. Allowing interpolation between doses for updating during CRM did not improve overall performance.
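
The distinguishing feature of the two-stage design — fitting an explicit dose-response model and inverting it for an MTD that need not equal a prespecified level — can be sketched as follows. The two-parameter logistic model, target toxicity rate, and toy data are assumptions for illustration, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import minimize

def fit_mtd(doses, n_tox, n_pat, target=0.30):
    """Fit logit P(toxicity) = a + b*log(dose), then invert for the MTD."""
    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a + b * np.log(doses))))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(n_tox * np.log(p) + (n_pat - n_tox) * np.log(1 - p))
    a, b = minimize(nll, x0=[-3.0, 1.0], method="Nelder-Mead").x
    # The MTD solves target = expit(a + b*log(dose)); it is not constrained
    # to be one of the prespecified dose levels.
    logit_target = np.log(target / (1 - target))
    return np.exp((logit_target - a) / b)

# Hypothetical trial data: toxicities / patients at each prespecified level.
doses = np.array([10.0, 20.0, 40.0, 80.0])
n_tox = np.array([0, 1, 2, 4])
n_pat = np.array([6, 6, 6, 6])
print(f"model-based MTD estimate: {fit_mtd(doses, n_tox, n_pat):.1f} mg")
```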

5.
In order to better inform study design decisions when sampling patients within and across health care providers, we develop a simulation-based approach for designing complex multi-stage samples. The approach explores the tradeoff between competing design goals such as precision of estimates, coverage of the target population, and cost. We elicit a number of sensible candidate designs, evaluate these designs with respect to multiple sampling goals, investigate their tradeoffs, and identify the design that is the best compromise among all goals. This approach recognizes that, in the practice of sampling, precision of the estimates is not the only important goal, and that there are tradeoffs with coverage and cost that should be explicitly considered. One can easily add other goals. We construct a sample frame with all phase III clinical cancer treatment trials that were conducted by cooperative oncology groups of the National Cancer Institute from October 1, 1998 through December 31, 1999. Simulation results for our study suggest sampling a different number of trials and institutions than initially considered. Simulations of different study designs can uncover efficiency gains both in terms of improved precision of the estimates and in terms of improved coverage of the target population. Simulations enable us to explore the tradeoffs between competing sampling goals and to quantify these efficiency gains. This is true even for complex designs where the stages are not strictly nested in one another.

6.
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention's benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. Copyright © 2014 John Wiley & Sons, Ltd.

7.
When calculating sample size or power for stepped wedge or other types of longitudinal cluster randomized trials, it is critical that the planned sampling structure be accurately specified. One common assumption is that participants will provide measurements in each trial period, that is, a closed cohort, and another is that each participant provides only one measurement during the course of the trial. However, some studies have an “open cohort” sampling structure, where participants may provide measurements in variable numbers of periods. To date, sample size calculations for longitudinal cluster randomized trials have not accommodated open cohorts. Feldman and McKinlay (1994) provided some guidance, stating that the participant-level autocorrelation could be varied to account for the degree of overlap in different periods of the study, but did not indicate precisely how to do so. We present sample size and power formulas that allow for open cohorts and discuss the impact of the degree of “openness” on sample size and power. We consider designs where the number of participants in each cluster will be maintained throughout the trial, but individual participants may provide differing numbers of measurements. Our results unify the closed cohort and repeated cross-sectional results of Hooper et al (2016), and indicate precisely how the participant autocorrelation of Feldman and McKinlay should be varied to account for an open cohort sampling structure. We discuss different types of open cohort sampling schemes and how the open cohort sampling structure affects power in the presence of decaying within-cluster correlations and autoregressive participant-level errors.

8.
In this paper, we consider two-stage designs with failure-time endpoints in single-arm phase II trials. We propose designs in which stopping rules are constructed by comparing the Bayes risk of stopping at stage I with the expected Bayes risk of continuing to stage II, using both the observed data in stage I and the predicted survival data in stage II. Terminal decision rules are constructed by comparing the posterior expected loss of a rejection decision versus an acceptance decision. Simple threshold loss functions are applied to time-to-event data modeled either parametrically or nonparametrically, and the cost parameters in the loss structure are calibrated to obtain desired type I error and power. We ran simulation studies to evaluate design properties, including type I and II errors, probability of early stopping, expected sample size, and expected trial duration, and compared them with the Simon two-stage designs and an extension of Simon's designs to time-to-event endpoints. An example based on a recently conducted phase II sarcoma trial illustrates the method.

9.
Multistage designs allow considerable reductions in the expected sample size of a trial. When stopping for futility or efficacy is allowed at each stage, the expected sample size under different possible true treatment effects (δ) is of interest. The δ-minimax design is the one for which the maximum expected sample size is minimised amongst all designs that meet the type I and II error constraints. Previous work has compared a two-stage δ-minimax design with other optimal two-stage designs. Applying the δ-minimax design to designs with more than two stages was not previously considered because of computational issues. In this paper, we identify δ-minimax designs with more than two stages through a novel application of simulated annealing. We compare them with other optimal multistage designs and the triangular design. We show that, as for two-stage designs, the δ-minimax design has good expected sample size properties across a broad range of treatment effects but generally has a higher maximum sample size. To overcome this drawback, we use the concept of admissible designs to find trials which balance the maximum expected sample size and maximum sample size. We show that such designs have good expected sample size properties and a reasonable maximum sample size and, thus, are very appealing for use in clinical trials.
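
A generic simulated-annealing loop of the kind that could drive this search is sketched below. The objective and neighbourhood functions here are placeholders; in the actual application the objective would be the maximum expected sample size over δ, evaluated only for designs meeting the type I and II error constraints.

```python
import math
import random

def simulated_annealing(objective, init, neighbour, iters=5000, t0=1.0):
    """Minimise `objective` over discrete designs by simulated annealing."""
    current, best = init, init
    f_cur = f_best = objective(init)
    for i in range(iters):
        temp = t0 * (1.0 - i / iters) + 1e-9          # linear cooling schedule
        cand = neighbour(current)
        f_cand = objective(cand)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if f_cand < f_cur or random.random() < math.exp((f_cur - f_cand) / temp):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
    return best, f_best

# Placeholder problem: a "design" is (per-stage sample size n, stages k);
# the real objective would be the maximum expected sample size over delta.
obj = lambda d: d[0] * d[1] + 100.0 / d[0]
nbr = lambda d: (max(2, d[0] + random.choice([-1, 1])),
                 max(2, min(5, d[1] + random.choice([-1, 1]))))
print(simulated_annealing(obj, (20, 3), nbr))
```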

10.
This paper discusses design considerations and the role of randomization-based inference in randomized community intervention trials. We stress that longitudinal follow-up of cohorts within communities often yields useful information on the effects of intervention on individuals, whereas cross-sectional surveys can usefully assess the impact of intervention on group indices of health. We also discuss briefly special design considerations, such as sampling cohorts from targeted subpopulations (for example, heavy smokers), matching the communities, calculating sample size, and other practical issues. We present randomization tests for matched and unmatched cohort designs. As is well known, these tests necessarily have proper size under the strong null hypothesis that treatment has no effect on any community response. It is less well known, however, that the size of randomization tests can exceed nominal levels under the ‘weak’ null hypothesis that intervention does not affect the average community response. Because this weak null hypothesis is of interest in community intervention trials, we study the size of randomization tests by simulation under conditions in which the weak null hypothesis holds but the strong null hypothesis does not. In unmatched studies, size may exceed nominal levels under the weak null hypothesis if there are more intervention than control communities and if the variance among community responses is larger among control communities than among intervention communities; size may also exceed nominal levels if there are more control than intervention communities and if the variance among community responses is larger among intervention communities. Otherwise, size is likely near nominal levels. To avoid such problems, we recommend use of the same numbers of control and intervention communities in unmatched designs. Pair-matched designs usually have size near nominal levels, even under the weak null hypothesis. We have identified some extreme cases, unlikely to arise in practice, in which even the size of pair-matched studies can exceed nominal levels. These simulations, however, tend to confirm the robustness of randomization tests for matched and unmatched community intervention trials, particularly if the latter designs have equal numbers of intervention and control communities. We also describe adaptations of randomization tests to allow for covariate adjustment, missing data, and application to cross-sectional surveys. We show that covariate adjustment can increase power, but such power gains diminish as the random component of variation among communities increases, which corresponds to increasing intraclass correlation of responses within communities. We briefly relate our results to model-based methods of inference for community intervention trials that include hierarchical models such as an analysis of variance model with random community effects and fixed intervention effects. Although we have tailored this paper to the design of community intervention trials, many of the ideas apply to other experiments in which one allocates groups or clusters of subjects at random to intervention or control treatments.
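
The randomization test at the heart of this discussion is short to implement. A minimal sketch for an unmatched design follows; the community-level summary measure and the mean-difference statistic are illustrative choices, and the toy data are hypothetical.

```python
import numpy as np

def randomization_test(y, treat, n_perm=10000, seed=None):
    """Two-sided randomization test on community-level responses.

    y     : summary response for each community (e.g., mean change)
    treat : 1 for intervention communities, 0 for control
    """
    rng = np.random.default_rng(seed)
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(treat)        # re-randomize community labels
        stat = y[perm == 1].mean() - y[perm == 0].mean()
        count += abs(stat) >= abs(observed)
    return count / n_perm

# Toy example: 5 intervention and 5 control communities (equal numbers,
# as the abstract recommends for unmatched designs).
y = np.array([2.1, 1.8, 2.5, 1.9, 2.2, 1.2, 1.5, 1.1, 1.6, 1.4])
treat = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(f"randomization p-value: {randomization_test(y, treat, seed=0):.3f}")
```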

11.
Estimating dietary intake for children is an essential component of conducting pesticide exposure assessments, given that children are predominantly exposed to certain pesticides, such as organophosphorus pesticides, through dietary intake. Different study designs, and the respective sampling methodologies used to estimate food consumption patterns, can significantly alter the parameter estimates and the variability in the values obtained. This study investigated the impacts of study design on overall estimates of dietary intake by applying the temporal sampling characteristics used in cross-sectional approaches, as in the Continuing Survey of Food Intakes by Individuals (CSFII), to food consumption data collected in a longitudinal manner via a bootstrap sampling technique. We examined the precision of time-averaged dietary intake estimates under various sampling schemes and explored the contribution of seasonality to dietary patterns. A comparison between the estimates of food consumption obtained from the bootstrap replicates and the longitudinal study estimates indicates that variability is significantly decreased when employing a longitudinal study design. Moreover, both between- and within-subject variability decrease when individuals are followed over an increasing number of days. Finally, within the longitudinal study cohort, we observed a seasonal component to dietary intake for fruits and grains. Our findings suggest that longitudinal dietary surveys offer substantial improvements for exposure assessment compared to a standard cross-sectional design.
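
The bootstrap device — drawing a single survey day per person from longitudinal records to mimic cross-sectional sampling — can be sketched as below. The data layout, simulated intakes, and variance components are hypothetical.

```python
import numpy as np

def bootstrap_design_sd(intake, n_boot=2000, seed=0):
    """Compare sampling variability of mean intake under a cross-sectional
    design (one random day per person) vs. a longitudinal design (each
    person's multi-day average), via bootstrap resampling of subjects.

    intake : array of shape (n_subjects, n_days) of daily intakes.
    """
    rng = np.random.default_rng(seed)
    n_subj, n_days = intake.shape
    person_avg = intake.mean(axis=1)          # longitudinal time-averaged intake
    cross, longit = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        subj = rng.integers(0, n_subj, size=n_subj)   # resample subjects
        days = rng.integers(0, n_days, size=n_subj)   # one survey day each
        cross[b] = intake[subj, days].mean()
        longit[b] = person_avg[subj].mean()
    return cross.std(), longit.std()

# Simulated records: 200 children followed for 14 days (hypothetical data).
rng = np.random.default_rng(1)
base = rng.gamma(shape=4.0, scale=50.0, size=(200, 1))    # between-subject
intake = base + rng.normal(0.0, 40.0, size=(200, 14))     # within-subject
sd_cross, sd_long = bootstrap_design_sd(intake)
print(f"cross-sectional SD: {sd_cross:.2f}, longitudinal SD: {sd_long:.2f}")
```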

12.
On sample sizes to estimate the protective efficacy of a vaccine
To estimate vaccine protective efficacy, defined as VE = 1 - ARV/ARU, where ARV is the disease attack rate in the vaccinated group and ARU is the disease attack rate in the controls, investigators have used both cohort and case-control designs. For each design, we present a method for calculating the sample size required to provide an approximate confidence interval for VE of predetermined width and probability of coverage. The required sample size is a function of the desired width of the confidence interval, the probability of coverage, the assumed VE, and, for cohort designs, the assumed disease attack rate in the controls, or, for case-control designs, the assumed vaccine exposure prevalence for the controls.
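
A numerical sketch of the cohort-design calculation follows. It uses the standard log-relative-risk variance approximation, which is our assumption; the paper's exact method may differ, and the example inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def n_per_group_for_ve_ci(ve, aru, width, coverage=0.95):
    """Smallest per-group cohort size giving a VE confidence interval
    no wider than `width`, via the log relative-risk approximation."""
    arv = (1.0 - ve) * aru                    # attack rate in the vaccinated
    rr = 1.0 - ve                             # relative risk, since VE = 1 - RR
    z = norm.ppf(0.5 + coverage / 2.0)
    c2 = (1 - arv) / arv + (1 - aru) / aru    # n * Var(log RR)

    def ci_width(n):
        se = np.sqrt(c2 / n)
        # VE CI is [1 - RR*exp(z*se), 1 - RR*exp(-z*se)] on the VE scale.
        return rr * (np.exp(z * se) - np.exp(-z * se))

    lo, hi = 2, 100_000_000
    while lo < hi:                            # bisect; width(n) is decreasing
        mid = (lo + hi) // 2
        if ci_width(mid) <= width:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Assumed VE = 0.70, control attack rate 5%, want a 95% CI of width 0.20.
print(n_per_group_for_ve_ci(ve=0.70, aru=0.05, width=0.20))
```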

13.
We consider the situation where in the first stage of a clinical trial several treatments are compared with a single control and the ‘best’ treatment(s) are selected in an interim analysis to be carried on to the second stage. We quantify the mean bias and mean square error of the conventional estimates after selection, depending on the number of treatments and the selection time during the trial. The cases with and without reshuffling the planned sample size of the dropped treatments to the selected ones are investigated. The mean bias shows very different patterns depending on the selection rule and the unknown parameter values. We stress the fact that the quantification of the bias is possible only in designs with planned adaptivity, where the design allows reacting to new evidence but the decision rules are laid down in advance. Finally, we calculate the mean bias that arises under a simple but influential regulatory selection rule: to register a new medical therapy only when two pivotal trials have both proven an effect by a statistical test. Copyright © 2009 John Wiley & Sons, Ltd.
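
The selection bias being quantified is easy to reproduce by Monte Carlo. The sketch below ignores the shared control-arm noise (which would correlate the arm estimates) and simply selects the largest of k independent stage-1 estimates; all inputs are hypothetical.

```python
import numpy as np

def selection_bias(k, delta, n_stage1, sigma=1.0, n_sim=100_000, seed=0):
    """Monte Carlo mean bias of the stage-1 estimate of the selected
    ('best') treatment when all k true effects equal `delta`."""
    rng = np.random.default_rng(seed)
    se = sigma / np.sqrt(n_stage1)
    # Stage-1 effect estimates for the k treatments.
    est = rng.normal(delta, se, size=(n_sim, k))
    selected = est.max(axis=1)                # pick the apparent best arm
    return selected.mean() - delta            # mean bias of the naive estimate

# Five arms with no true differences: selecting the maximum inflates the
# conventional estimate well above the true effect of zero.
print(f"mean selection bias: {selection_bias(k=5, delta=0.0, n_stage1=50):.3f}")
```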

14.
This study was carried out to assess the validity of the pure cross-sectional study in the ascertainment of nosocomial infection risk factors. The results yielded by two designs (cross-sectional and case-control) are compared. A cross-sectional design was performed in a tertiary hospital. A total of 592 patients were studied, 38 of whom were nosocomially infected. The clinical information on all the patients included in this design was reviewed after hospital discharge. A matched case-control study was nested in the population cross-sectionally surveyed. Sixty-six cases (28 additional patients developed a hospital infection) and 132 controls were selected. Odds ratios (ORs) for the risk factors analyzed by both designs were compared. There were no significant differences between the estimates yielded by the two designs; however, a trend of lower OR estimates for the cross-sectional study was seen, which may be important for risk factors not strongly related to (low relative risk) nosocomial infection. Several factors which might account for the results observed (random error, bias introduced by matching) are discussed. It is suggested that pure cross-sectional designs for the study of risk factors of nosocomial infection may introduce a negative (toward-the-null) bias.

15.
The design and analysis of cluster randomized trials has been a recurrent theme in Statistics in Medicine since the early volumes. In celebration of 25 years of Statistics in Medicine, this paper reviews recent developments, particularly those that featured in the journal. Issues in design such as sample size calculations, matched-pair designs, cohort versus cross-sectional designs, and practical design problems are covered. Developments in analysis include modification of robust methods to cope with small numbers of clusters, generalized estimating equations, and population-averaged and cluster-specific models. Finally, issues in presenting data, some other clustering issues, and the general problem of evaluating complex interventions are briefly mentioned.

16.
We review recent developments in the design and analysis of group-randomized trials (GRTs). Regarding design, we summarize developments in estimates of intraclass correlation, power analysis, matched designs, designs involving one group per condition, and designs in which individuals are randomized to receive treatments in groups. Regarding analysis, we summarize developments in marginal and conditional models, the sandwich estimator, model-based estimators, binary data, survival analysis, randomization tests, survey methods, latent variable methods and nonlinear mixed models, time series methods, global tests for multiple endpoints, mediation effects, missing data, trial reporting, and software. We encourage investigators who conduct GRTs to become familiar with these developments and to collaborate with methodologists who can strengthen the design and analysis of their trials.

17.
The case-control design is widely used in retrospective database studies, often leading to spectacular findings. However, results of these studies often cannot be replicated, and the advantage of this design over others is questionable. To demonstrate the shortcomings of applications of this design, we replicate two published case-control studies. The first investigates isotretinoin and ulcerative colitis using a simple case-control design. The second focuses on dipeptidyl peptidase-4 inhibitors and acute pancreatitis, using a nested case-control design. We include large sets of negative control exposures (where the true odds ratio is believed to be 1) in both studies. Both replication studies produce effect size estimates consistent with the original studies, but also generate estimates for the negative control exposures showing substantial residual bias. In contrast, applying a self-controlled design to answer the same questions using the same data reveals far less bias. Although the case-control design in general is not at fault, its application in retrospective database studies, where all exposure and covariate data for the entire cohort are available, is unnecessary, as alternatives such as cohort and self-controlled designs are available. Moreover, by focusing on cases and controls, the design opens the door to inappropriate comparisons between exposure groups, leading to confounding that the design has few options to adjust for. We argue that this design should no longer be used in these types of data. At the very least, negative control exposures should be used to prove that the concerns raised here do not apply.

18.
Objective: The stepped wedge design is increasingly being used in cluster randomized trials (CRTs). However, there is not much information available about design and analysis strategies for these kinds of trials. Approaches to sample size and power calculations have been provided, but a simple sample size formula is lacking. Therefore, our aim is to provide a sample size formula for cluster randomized stepped wedge designs. Study Design and Setting: We derived a design effect (sample size correction factor) that can be used to estimate the required sample size for stepped wedge designs. Furthermore, we compared the required sample size for the stepped wedge design with a parallel group and analysis of covariance (ANCOVA) design. Results: Our formula corrects for clustering as well as for the design. Apart from the cluster size and intracluster correlation, the design effect depends on choices of the number of steps, the number of baseline measurements, and the number of measurements between steps. The stepped wedge design requires a substantially smaller sample size than a parallel group and ANCOVA design. Conclusion: For CRTs, the stepped wedge design is far more efficient than the parallel group and ANCOVA design in terms of sample size.
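
As a hedged illustration of how such a design effect is used, the sketch below implements the stepped wedge correction factor as we recall it from Woertman et al. (2013, J Clin Epidemiol); the formula should be verified against the published paper before use, and the example inputs are hypothetical.

```python
def stepped_wedge_design_effect(k, b, t, n, icc):
    """Stepped wedge design effect, as reported (to the best of our reading)
    in Woertman et al. (2013); verify against the paper before use.

    k   : number of steps (k >= 2)
    b   : number of baseline measurement rounds
    t   : number of measurement rounds after each step
    n   : participants per cluster per measurement round
    icc : intracluster correlation coefficient
    """
    num = 1 + icc * (k * t * n + b * n - 1)
    den = 1 + icc * (0.5 * k * t * n + b * n - 1)
    return (num / den) * (3 * (1 - icc)) / (2 * t * (k - 1.0 / k))

def total_n_stepped_wedge(n_unadjusted, k, b, t, n, icc):
    """Total N: the individually randomized sample size times the design
    effect (clusters then follow by dividing by n per round and rounds)."""
    return n_unadjusted * stepped_wedge_design_effect(k, b, t, n, icc)

# Example: N = 400 for an individually randomized trial; 4 steps, 1 baseline
# round, 1 round per step, 10 participants per cluster per round, ICC 0.05.
print(round(total_n_stepped_wedge(400, k=4, b=1, t=1, n=10, icc=0.05)))
```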

19.
Clinical trials often employ two or more primary efficacy endpoints. One of the major problems in such trials is how to determine a sample size suitable for multiple co-primary correlated endpoints. We provide fundamental formulae for the calculation of power and sample size in order to achieve statistical significance for all the multiple primary endpoints given as binary variables. On the basis of three association measures among primary endpoints, we discuss five methods of power and sample size calculation: the asymptotic normal method with and without continuity correction, the arcsine method with and without continuity correction, and Fisher's exact method. For all five methods, the achieved sample size decreases as the value of the association measure increases when the effect sizes among endpoints are approximately equal. In particular, a high positive association has a greater effect on the decrease in the sample size. On the other hand, such a relationship is not very strong when the effect sizes are different. Copyright © 2010 John Wiley & Sons, Ltd.
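
For the asymptotic normal method (without continuity correction), requiring significance on both endpoints amounts to evaluating a bivariate normal rejection probability. A sketch follows; the effect sizes, correlation, and α are hypothetical inputs, not values from the paper.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def power_coprimary_binary(n, p1, p2, rho, alpha=0.025):
    """Approximate power to show superiority on BOTH binary endpoints.

    n      : per-arm sample size
    p1, p2 : (treatment, control) success probabilities for each endpoint
    rho    : assumed correlation between the two test statistics
    """
    z_alpha = norm.ppf(1 - alpha)
    means = []
    for pt, pc in (p1, p2):
        se = np.sqrt((pt * (1 - pt) + pc * (1 - pc)) / n)
        means.append((pt - pc) / se)          # asymptotic mean of the Z statistic
    # P(Z1 > z_alpha, Z2 > z_alpha) via inclusion-exclusion on the
    # bivariate normal with unit variances and correlation rho.
    mvn = multivariate_normal(mean=means, cov=[[1, rho], [rho, 1]])
    joint_low = mvn.cdf([z_alpha, z_alpha])
    p1_low = norm.cdf(z_alpha - means[0])
    p2_low = norm.cdf(z_alpha - means[1])
    return 1 - p1_low - p2_low + joint_low

# Endpoint 1: 60% vs 45%; endpoint 2: 55% vs 40%; correlation 0.5.
pw = power_coprimary_binary(n=150, p1=(0.60, 0.45), p2=(0.55, 0.40), rho=0.5)
print(f"power for both endpoints: {pw:.3f}")
```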

20.
Estimates of relative efficacy between alternative treatments are crucial for decision making in health care. Bayesian mixed treatment comparison models provide a powerful methodology to obtain such estimates when head-to-head evidence is not available or insufficient. In recent years, this methodology has become widely accepted and applied in economic modelling of healthcare interventions. Most evaluations only consider evidence from randomized controlled trials, while information from other trial designs is ignored. In this paper, we propose three alternative methods of combining data from different trial designs in a mixed treatment comparison model. Naive pooling is the simplest approach and does not differentiate between trial designs. Utilizing observational data as prior information allows adjusting for bias due to trial design. The most flexible technique is a three-level hierarchical model. Such a model allows for bias adjustment while also accounting for heterogeneity between trial designs. These techniques are illustrated using an application in rheumatoid arthritis. Copyright © 2013 John Wiley & Sons, Ltd.
