Similar Documents
 20 similar documents found (search time: 281 ms)
1.
Alternative analysis strategies for the three-period crossover design with two treatments are discussed in this paper. One analysis strategy involves a parametric model that incorporates the effects of interest. To implement this method, one usually assumes that the covariance structure for the data has the sphericity, or circularity, property. Alternative approaches that do not require this assumption are described. They are based on the parametric and non-parametric analysis of appropriate within-subject linear functions of the data. The advantage of these methods is that one only needs the assumption that the resulting linear functions for the respective subjects are independent and have a common distribution. The parametric approach also requires normality of the resulting within-subject linear functions for small sample situations. An extension of the non-parametric method is considered for cases in which the treatment sequences are randomly assigned within strata. The various methods are illustrated for a three-period crossover design involving two strata with randomly assigned treatment sequences of the form A:B:A and B:A:B.
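As an illustration of the within-subject linear function approach, the sketch below (with hypothetical measurements) computes one common contrast for the A:B:A and B:A:B sequences and compares it between sequence groups both parametrically and non-parametrically; the paper's exact contrasts may differ.

```python
# Within-subject contrast analysis for a two-treatment, three-period
# crossover with sequences A:B:A and B:A:B.  The arrays below are
# hypothetical placeholder data (rows = subjects, columns = periods).
import numpy as np
from scipy import stats

aba = np.array([[5.1, 3.9, 5.3], [6.0, 4.8, 5.8], [5.5, 4.1, 5.9]])
bab = np.array([[4.0, 5.6, 4.2], [4.4, 6.1, 4.6], [3.8, 5.2, 4.1]])

# One common within-subject linear function: mean of periods 1 and 3
# minus period 2.  Subject effects cancel within each contrast, and
# period effects cancel when the two sequence groups are compared.
c_aba = (aba[:, 0] + aba[:, 2]) / 2 - aba[:, 1]
c_bab = (bab[:, 0] + bab[:, 2]) / 2 - bab[:, 1]

# Parametric comparison (assumes normality of the contrasts) ...
t_stat, t_p = stats.ttest_ind(c_aba, c_bab)
# ... and a non-parametric alternative that drops that assumption.
u_stat, u_p = stats.mannwhitneyu(c_aba, c_bab, alternative="two-sided")
print(t_p, u_p)
```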

2.
In this study, we compare the statistical properties of a number of methods for estimating P-values for allele-sharing statistics in non-parametric linkage analysis. Some of the methods are based on the normality assumption, using different variance estimation methods, and others use simulation (gene-dropping) to find empirical distributions of the test statistics. For variance estimation methods, we consider the perfect variance approximation and two empirical variance estimates. The simulation-based methods are gene-dropping with and without conditioning on the observed founder alleles. We also consider the Kong and Cox linear and exponential models and a Monte Carlo method modified from a method for finding genome-wide significance levels. We discuss the analytical properties of these various P-value estimation methods and then present simulation results comparing them. Assuming that the sample sizes are large enough to justify a normality assumption for the linkage statistic, the best P-value estimation method depends to some extent on the (unknown) genetic model and on the types of pedigrees in the sample. If the sample sizes are not large enough to justify a normality assumption, then gene-dropping is the best choice. We discuss the differences between conditional and unconditional gene-dropping.
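A minimal sketch of the gene-dropping bookkeeping, assuming a pedigree simulator is available; `simulate_null_statistic` is a hypothetical placeholder for the Mendelian drop-and-recompute step.

```python
# Empirical P-value from gene-dropping replicates.  Only the P-value
# bookkeeping is shown; a real implementation of the simulator would
# assign founder alleles (optionally conditioning on those observed),
# transmit them by Mendelian segregation, and recompute the statistic.
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_statistic(rng):
    return rng.normal()   # placeholder for one gene-dropping replicate

observed = 2.7            # hypothetical observed allele-sharing statistic
n_reps = 10_000
null_stats = np.array([simulate_null_statistic(rng) for _ in range(n_reps)])

# Add-one correction keeps the estimate away from an impossible P = 0.
p_value = (1 + np.sum(null_stats >= observed)) / (n_reps + 1)
print(p_value)
```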

3.
Diagnostic methods are key components of any good statistical analysis. Because the variance components approach and regression analysis share the normality assumption, when performing quantitative genetic linkage analysis using variance component methods one must check the normality assumption for the quantitative trait and screen for outliers. Thus, the main purposes of this paper are to describe methods for testing the normality assumption, to describe various diagnostic methods for identifying outliers, and to discuss the issues that may arise when outliers are present in variance components models for quantitative trait linkage analysis. Data from the Rochester Family Heart Study are used to illustrate the various diagnostic methods and related issues.
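The kind of checks described can be sketched as follows; the Shapiro-Wilk test and a plus-or-minus 3 standardized-value screen are conventional choices, not necessarily the paper's.

```python
# Checking the normality assumption and screening for outliers before a
# variance-components linkage analysis, on simulated trait values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trait = rng.normal(loc=100, scale=15, size=200)   # hypothetical trait

# Shapiro-Wilk test of normality.
w, p_normal = stats.shapiro(trait)

# Simple outlier screen: standardized values beyond +/- 3.
z = (trait - trait.mean()) / trait.std(ddof=1)
outliers = np.where(np.abs(z) > 3)[0]
print(f"Shapiro-Wilk p = {p_normal:.3f}, flagged subjects: {outliers}")
```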

4.
In population pharmacokinetic studies, one of the main objectives is to estimate the population pharmacokinetic parameters specifying the population distributions of the pharmacokinetic parameters. Confidence intervals for population pharmacokinetic parameters are generally estimated by assuming asymptotic normality, which is a large-sample property, that is, one that holds only when sample sizes are sufficiently large. In actual clinical trials, however, sample sizes are limited and generally not very large. Likelihood functions in population pharmacokinetic modelling include a multiple integral and are quite complicated. We therefore suspect that the sample sizes of actual trials are often not large enough to justify the asymptotic normality assumption, and that the asymptotic confidence intervals understate the uncertainty in the estimates of the population pharmacokinetic parameters. As an alternative to the asymptotic normality approach, we can employ a bootstrap approach. This paper proposes a bootstrap standard error approach for constructing confidence intervals for population pharmacokinetic parameters. Comparisons between the asymptotic and bootstrap confidence intervals are made through applications to a simulated data set and an actual phase I trial.
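A sketch of the bootstrap standard-error interval; for runnability the "model fit" here is a simple mean, whereas in the population PK setting the `estimate` step would be a full nonlinear mixed-effects refit on resampled subjects.

```python
# Bootstrap standard-error confidence interval for a generic estimator.
import numpy as np

rng = np.random.default_rng(2)
subject_values = rng.lognormal(mean=1.0, sigma=0.5, size=30)  # hypothetical

def estimate(sample):
    return sample.mean()   # placeholder for a full model refit

theta_hat = estimate(subject_values)
boot = np.array([
    estimate(rng.choice(subject_values, size=len(subject_values), replace=True))
    for _ in range(2000)
])
se_boot = boot.std(ddof=1)

# Bootstrap-SE interval: point estimate +/- z * bootstrap SE.
z = 1.96
print(theta_hat, (theta_hat - z * se_boot, theta_hat + z * se_boot))
```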

5.
The Spearman (ρ_s) and Kendall (τ) rank correlation coefficients are routinely used as measures of association between non-normally distributed random variables. However, confidence limits for ρ_s are available only under the assumption of bivariate normality, and for τ only under the assumption of asymptotic normality of τ. In this paper, we introduce another approach for obtaining confidence limits for ρ_s or τ based on the arcsin transformation of sample probit score correlations. This approach is shown to be applicable for an arbitrary bivariate distribution. The arcsin-based estimators for ρ_s and τ (denoted by ρ_s,a and τ_a) are shown to have an asymptotic relative efficiency (ARE) of 9/π² compared with the usual estimators ρ_s and τ when ρ_s and τ are each 0. In some nutritional applications, the Spearman rank correlation between nutrient intake as assessed by a reference instrument and nutrient intake as assessed by a surrogate instrument is used as a measure of validity of the surrogate instrument. However, if only a single replicate (or a few replicates) of the reference instrument is available, then the estimated Spearman rank correlation will be biased downward due to measurement error. In this paper, we use the probit transformation as a tool for specifying an ANOVA-type model for replicate ranked data, resulting in point and interval estimates of a measurement-error-corrected rank correlation. This extends previous work by Rosner and Willett on point and interval estimates of measurement-error-corrected Pearson correlations.
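The probit-score correlation at the core of the approach can be computed as below; the bootstrap interval shown is an illustrative stand-in for the paper's analytic arcsin-based limits, and the data are hypothetical.

```python
# Probit-score correlation: rank each variable, map ranks to probit
# scores, and correlate the scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.gamma(2.0, size=100)               # hypothetical skewed intake
y = 0.5 * x + rng.gamma(2.0, size=100)     # correlated surrogate

def probit_corr(x, y):
    n = len(x)
    px = stats.norm.ppf(stats.rankdata(x) / (n + 1))
    py = stats.norm.ppf(stats.rankdata(y) / (n + 1))
    return np.corrcoef(px, py)[0, 1]

r_hat = probit_corr(x, y)

# Bootstrap percentile interval (illustrative substitute for the
# paper's arcsin-transformation limits).
idx = np.arange(len(x))
boot = []
for _ in range(1000):
    s = rng.choice(idx, size=len(idx), replace=True)
    boot.append(probit_corr(x[s], y[s]))
print(r_hat, np.percentile(boot, [2.5, 97.5]))
```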

6.
Correlated data arise in longitudinal studies in epidemiological and clinical research. Random effects models are commonly used to model correlated data. In the longitudinal data setting, the random effects and within-subject errors are usually assumed to be normally distributed. However, the normality assumption may not always give robust results, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to a bivariate mixed model and relax the normality assumption by using a multivariate skew-normal distribution. Specifically, we compare various candidate models and illustrate the procedure using a real data set from an HIV study.
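To see what the relaxed assumption buys, a minimal illustration: a skew-normal distribution with nonzero shape parameter captures asymmetry that a normal random-effects distribution cannot, and shape 0 recovers the normal as a special case.

```python
# Skew-normal versus normal draws: the shape parameter `a` controls
# skewness (a = 0 gives the normal distribution).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = 4.0                                   # hypothetical shape parameter
draws = stats.skewnorm.rvs(a, loc=0, scale=1, size=5000, random_state=rng)
print(stats.skew(draws))                  # clearly nonzero for a != 0
print(stats.skew(stats.norm.rvs(size=5000, random_state=rng)))  # near 0
```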

7.
This paper explores the use of simple summary statistics for analysing repeated measurements in randomized clinical trials with two treatments. Quite often the data for each patient may be effectively summarized by a pre-treatment mean and a post-treatment mean. Analysis of covariance is the method of choice and its superiority over analysis of post-treatment means or analysis of mean changes is quantified, as regards both reduced variance and avoidance of bias, using a simple model for the covariance structure between time points. Quantitative consideration is also given to practical issues in the design of repeated measures studies: the merits of having more than one pre-treatment measurement are demonstrated, and methods for determining sample sizes in repeated measures designs are provided. Several examples from clinical trials are presented, and broad practical recommendations are made. The examples support the value of the compound symmetry assumption as a realistic simplification in quantitative planning of repeated measures trials. The analysis using summary statistics makes no such assumption. However, allowance in design for alternative non-equal correlation structures can and should be made when necessary.
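A minimal sketch of the recommended analysis of covariance on per-patient summary statistics, using hypothetical data and statsmodels.

```python
# ANCOVA on summary statistics: regress the post-treatment mean on
# treatment, adjusting for the pre-treatment mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 40
pre = rng.normal(10, 2, size=n)                       # pre-treatment mean
treat = np.repeat([0, 1], n // 2)
post = 0.6 * pre + 1.5 * treat + rng.normal(0, 1, n)  # post-treatment mean

df = pd.DataFrame({"pre": pre, "post": post, "treat": treat})
fit = smf.ols("post ~ pre + treat", data=df).fit()
print(fit.params["treat"], fit.conf_int().loc["treat"])
```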

8.
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
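One common closed-form variant, which evaluates the variance of the log rate ratio under the alternative, looks like the sketch below; the paper's three variations differ in how the null variance is estimated, so treat this as orientation rather than the paper's formula.

```python
# Per-arm sample size for a negative binomial rate comparison with
# 1:1 allocation; dispersion k and exposure time t enter explicitly.
import math
from scipy.stats import norm

def nb_sample_size(rate0, rate1, dispersion, exposure,
                   alpha=0.05, power=0.9):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Per-subject variance of a log rate estimate: 1/(t*rate) + k.
    var_log_rr = (1 / (exposure * rate0) + 1 / (exposure * rate1)
                  + 2 * dispersion)
    return math.ceil(z**2 * var_log_rr / math.log(rate1 / rate0) ** 2)

# Hypothetical inputs: control rate 0.8/yr, 25% reduction, k = 0.7.
print(nb_sample_size(rate0=0.8, rate1=0.6, dispersion=0.7, exposure=1.0))
```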

9.
We explore the potential of Bayesian hierarchical modelling for the analysis of cluster randomized trials with binary outcome data, and apply the methods to a trial randomized by general practice. An approximate relationship is derived between the intracluster correlation coefficient (ICC) and the between-cluster variance used in a hierarchical logistic regression model. By constructing an informative prior for the ICC on the basis of available information, we are thus able to specify, implicitly, an informative prior for the between-cluster variance. The approach also provides us with a credible interval for the ICC for binary outcome data. Several approaches to constructing informative priors from empirical ICC values are described. We investigate the sensitivity of results to the prior specified and find that the estimate of the intervention effect changes very little in this data set, while its interval estimate is more sensitive. The Bayesian approach allows us to assume distributions other than the normal for the random effects used to model the clustering. This enables us to gain insight into the robustness of our parameter estimates to the classical normality assumption. In a model with a more complex variance structure, Bayesian methods can provide credible intervals for the difference between two variance components, in order, for example, to investigate whether the effect of intervention varies across clusters. We compare our results with those obtained from classical estimation, discuss the relative merits of the Bayesian framework, and conclude that the flexibility of the Bayesian approach offers substantial advantages, although the selection of prior distributions is not straightforward.
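One standard way to pass between an ICC prior and a between-cluster variance prior in a hierarchical logistic model is the latent-scale approximation ICC = σ_b² / (σ_b² + π²/3); the paper derives its own approximate relationship, so the sketch below is only the textbook version.

```python
# Converting between ICC and between-cluster variance on the latent
# logistic scale, where the residual variance is pi^2 / 3.
import math

def icc_to_between_var(icc):
    return icc * (math.pi**2 / 3) / (1 - icc)

def between_var_to_icc(s2):
    return s2 / (s2 + math.pi**2 / 3)

print(icc_to_between_var(0.05))   # ICC 0.05 -> sigma_b^2 about 0.17
print(between_var_to_icc(0.17))
```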

10.
The conventional seamless phase 2/3 design with fixed sample size determination (SSD) has gained popularity in oncology drug development thanks to attractive features such as a significantly shortened development timeline, a minimized sample size, and early decision making. However, this design is not immune to inaccurate assumptions about the treatment effect when only limited efficacy data are available at the study design stage. We propose an innovative seamless phase 2/3 study design with flexible SSD for oncology trials, in which the trial is designed under a distribution of treatment effects rather than a single assumption, reflecting the large uncertainty about the treatment effect at the design stage; the sample size for the end-of-phase-3 analysis is not predetermined, but rather determined dynamically from the treatment effect observed in the phase 2 portion. Some practical sample size determination rules for the end-of-phase-3 analysis are discussed. The proposed design can lead to a reduced sample size and/or improved power compared with the conventional seamless phase 2/3 design with fixed SSD. This innovative study design can be especially useful for programs with an aggressive development strategy, expediting the delivery of efficacious treatments to patients.
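A toy version of flexible SSD: recompute the phase 3 per-arm sample size from the observed phase 2 effect with a standard two-sample formula and clamp it to pre-specified bounds. The bounds and the rule are hypothetical; the paper discusses more refined practical rules.

```python
# Dynamic phase 3 sample size from the observed phase 2 effect.
import math
from scipy.stats import norm

def phase3_n_per_arm(observed_effect, sd, alpha=0.025, power=0.9,
                     n_min=50, n_max=500):
    z = norm.ppf(1 - alpha) + norm.ppf(power)     # one-sided alpha
    n = math.ceil(2 * (z * sd / observed_effect) ** 2)
    # Clamp the recomputed n to pre-specified (hypothetical) bounds.
    return min(max(n, n_min), n_max)

print(phase3_n_per_arm(observed_effect=0.3, sd=1.0))
```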

11.
Statistical inference based on correlated count measurements is frequently performed in biomedical studies. Most existing sample size calculation methods for count outcomes are developed under the Poisson model. Deviation from the Poisson assumption (equality of mean and variance) has been widely documented in practice, indicating an urgent need for sample size methods with more realistic assumptions to ensure valid experimental design. In this study, we investigate sample size calculation for clinical trials with correlated count measurements based on the negative binomial distribution. This approach is flexible enough to accommodate overdispersion and unequal measurement intervals, as well as arbitrary randomization ratios, missing data patterns, and correlation structures. Importantly, the derived sample size formulas have closed forms both for the comparison of slopes and for the comparison of time-averaged responses, which greatly reduces the burden of implementation in practice. We conducted extensive simulations to demonstrate that the proposed method maintains the nominal levels of power and type I error over a wide range of design configurations. We illustrate the application of this approach using a real epilepsy trial.
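Such designs are conveniently checked by simulation; a shared gamma frailty is one standard way to generate correlated counts whose margins are negative binomial. A sketch with hypothetical parameters, not the paper's derivation:

```python
# Correlated negative binomial counts via a shared gamma frailty:
# conditional on the subject's frailty, visit counts are Poisson, so
# the marginal counts are negative binomial and repeated counts within
# a subject are positively correlated.
import numpy as np

rng = np.random.default_rng(6)

def correlated_nb_counts(n_subjects, n_visits, rate, dispersion):
    # Gamma frailty with mean 1 and variance `dispersion`.
    frailty = rng.gamma(shape=1 / dispersion, scale=dispersion,
                        size=n_subjects)
    lam = np.outer(frailty, np.full(n_visits, rate))
    return rng.poisson(lam)                 # shape (subjects, visits)

counts = correlated_nb_counts(n_subjects=200, n_visits=4,
                              rate=1.2, dispersion=0.5)
print(counts.mean(), np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])
```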

12.
Xie T, Waksman J. Statistics in Medicine 2003, 22(18): 2835-2846
Many clinical trials involve the collection of data on the times to occurrence of multiple events of the same type within sample units, where the ordering of events is arbitrary and the times are usually correlated. To design a clinical trial with this type of clustered survival time as the primary endpoint, estimating the number of subjects (sampling units) required for a given power to detect a specified treatment difference is an important issue. In this paper we derive a sample size formula for clustered survival data via the marginal model of Lee, Wei and Amato. It can easily be used to plan a clinical trial in which clustered survival times are of primary interest. Simulation studies demonstrate that the formula works very well. We also discuss and compare clustered survival time designs and single survival time designs (for example, time to the first event) in different scenarios.
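For rough orientation only: a back-of-the-envelope clustered survival sample size that starts from Schoenfeld's event count for a 1:1 log-rank comparison and inflates it by a design effect for within-unit correlation. The paper's formula is based on the Lee-Wei-Amato marginal model and will generally differ.

```python
# Schoenfeld event count inflated by a design effect for clustering.
import math
from scipy.stats import norm

def clustered_survival_n(hazard_ratio, p_event, cluster_size, icc,
                         alpha=0.05, power=0.9):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    events = 4 * z**2 / math.log(hazard_ratio) ** 2   # Schoenfeld, 1:1
    n_subjects = events / p_event
    deff = 1 + (cluster_size - 1) * icc               # clustering penalty
    return math.ceil(n_subjects * deff)

print(clustered_survival_n(hazard_ratio=0.7, p_event=0.6,
                           cluster_size=2, icc=0.3))
```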

13.
The concept of mediation has broad applications in medical health studies. Although the statistical assessment of a mediational effect under the normality assumption has been well established in linear structural equation models (SEM), it has not been extended to the general case where normality is not a usual assumption. In this paper, we propose to extend the definition of mediational effects through causal inference. The new definition is consistent with that in linear SEM and does not rely on the assumption of normality. Here, we focus our attention on the logistic mediation model, in which all variables involved are binary. Three approaches to the estimation of mediational effects are investigated: the Delta method, the bootstrap, and Bayesian modelling via Monte Carlo simulation. Simulation studies are used to examine the behaviour of the three approaches. Measured by 95 per cent confidence interval (CI) coverage rate and root mean square error (RMSE), the Bayesian method using a non-informative prior outperformed both the bootstrap and the Delta method, particularly for small sample sizes. Case studies are presented to demonstrate the application of the proposed method to public health research using a nationally representative database. Extending the proposed method to other types of mediational models and to multiple mediators is also discussed.
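A sketch of the bootstrap approach for the all-binary logistic mediation model, using the product of coefficients on the log-odds scale as an approximate indirect effect; the paper's causal-inference definition is more general, and the data below are simulated.

```python
# Percentile bootstrap for a mediated effect with binary X, M, Y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
x = rng.binomial(1, 0.5, n)
m = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * x + 1.2 * m))))

def indirect_effect(x, m, y):
    # a: effect of X on M; b: effect of M on Y given X (log-odds scale).
    a = sm.Logit(m, sm.add_constant(x)).fit(disp=0).params[1]
    exog = sm.add_constant(np.column_stack([x, m]))
    b = sm.Logit(y, exog).fit(disp=0).params[2]
    return a * b

boot = []
for _ in range(200):                      # bootstrap over subjects
    s = rng.integers(0, n, n)
    boot.append(indirect_effect(x[s], m[s], y[s]))
print(indirect_effect(x, m, y), np.percentile(boot, [2.5, 97.5]))
```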

14.
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or becoming long-term survivors), as in trials for non-Hodgkin's lymphoma. The popular sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice: a PH model describes the survival times of uncured patients and a logistic model describes the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in short-term survival and/or in the cure fraction. Furthermore, we investigate, through numerical examples, the impact of accrual methods and of the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application using data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.
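The mixture cure model underlying the design can be simulated directly: a fraction of patients never experiences the event, while the rest follow a proper survival distribution. The parameters below are hypothetical; the simulation shows why ignoring the plateau distorts expected event counts and hence power.

```python
# Simulating event times from a mixture cure model.
import numpy as np

rng = np.random.default_rng(8)

def cure_model_times(n, cure_fraction, scale):
    cured = rng.random(n) < cure_fraction
    t = rng.exponential(scale, size=n)     # latency for uncured patients
    t[cured] = np.inf                      # cured: event never occurs
    return t

t = cure_model_times(n=1000, cure_fraction=0.3, scale=2.0)
follow_up = 5.0
print("proportion with an event by year 5:", np.mean(t < follow_up))
```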

15.
Studies of HIV dynamics in AIDS research are very important for understanding the pathogenesis of HIV-1 infection and for assessing the effectiveness of antiviral therapies. Nonlinear mixed-effects (NLME) models have been used to model between-subject and within-subject variation in viral load measurements. Normality of both the within-subject random errors and the random effects is a routine assumption in NLME models, but it may be unrealistic, obscuring important features of between-subject and within-subject variation, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by allowing both the model random errors and the random effects to follow a multivariate skew-normal distribution. The proposed model provides the flexibility to capture a broad range of non-normal behaviour and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew-normality provides a better fit to the observed data, and that the corresponding parameter estimates differ significantly from those based on the model with normality when skewness is present in the data. These findings suggest that it is very important to assume a model with a skew-normal distribution in order to achieve robust and reliable results, in particular when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.
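Within-subject HIV dynamics are often summarized by a biexponential viral decay curve; a minimal single-subject least-squares fit on hypothetical data is sketched below, whereas the paper embeds such curves in a Bayesian NLME model with skew-normal errors.

```python
# Single-subject fit of a biexponential viral decay model:
# log10 V(t) = log10(P1 * exp(-l1 * t) + P2 * exp(-l2 * t)).
import numpy as np
from scipy.optimize import curve_fit

def log10_viral_load(t, p1, l1, p2, l2):
    return np.log10(p1 * np.exp(-l1 * t) + p2 * np.exp(-l2 * t))

rng = np.random.default_rng(9)
t = np.array([0, 2, 7, 14, 28, 56], dtype=float)   # days (hypothetical)
true = (1e5, 0.4, 1e3, 0.03)
y = log10_viral_load(t, *true) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(log10_viral_load, t, y,
                    p0=(1e5, 0.5, 1e3, 0.05), maxfev=10000)
print(popt)
```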

16.
Objectives. We reviewed published individually randomized group treatment (IRGT) trials to assess researchers' awareness of within-group correlation and to determine whether appropriate design and analytic methods were used to test for treatment effectiveness.
Methods. We assessed sample size and analytic methods in IRGT trials published in 6 public health and behavioral health journals between 2002 and 2006.
Results. Our review included 34 articles; in 32 (94.1%) of these articles, inappropriate analytic methods were used. In only 1 article did the researchers claim that expected intraclass correlations (ICCs) were taken into account in sample size estimation; in most articles, sample size was not mentioned or ICCs were ignored in the reported calculations.
Conclusions. Trials in which individuals are randomly assigned to study conditions and treatments are administered in groups may induce within-group correlation, violating the assumption of independence underlying commonly used statistical methods. Methods that take expected ICCs into account should be used in reexamining past studies and planning future studies to ensure that interventions are not judged effective solely on the basis of statistical artifacts. We strongly encourage investigators to report ICCs from IRGT trials and to describe study characteristics clearly to aid these efforts.
Randomized trials evaluating the effectiveness of interventions delivered to groups of participants, rather than individuals, are common in public health. These interventions may be preferred because they are often less expensive than interventions delivered to individuals and because the group environment may enhance the effectiveness of the intervention. Trials designed to evaluate group interventions may assign intact groups (e.g., schools or worksites) to study conditions; such studies are examples of group-randomized trials (GRTs; also called cluster-randomized trials). Alternatively, such trials may assign individuals to study conditions, with interventions then delivered to groups; we propose labeling these trials individually randomized group treatment (IRGT) trials. In the past 30 years, a great deal of attention has been paid to the unique design and analytic methods needed for GRTs [1-4], but comparatively little attention has been devoted to IRGT trials.
As do GRTs, IRGT trials involve the problem of potential correlation among observations within treatment conditions. In GRTs, the correlation is present at the beginning of the study because intact groups are randomly assigned to study conditions. Observations within these groups are correlated because people select themselves into groups, share a history, and interact with each other. In IRGT trials, correlation may develop over time as group members share the treatment environment and interact with each other. Regardless of how it develops, any correlation within groups violates one of the major assumptions of the statistical methods used in the analysis of randomized clinical trials and, often erroneously, of GRTs and IRGT trials: that observations are independent within conditions. Violations of this assumption can inflate type I error rates [5-7]. Correlation within groups gives rise to an additional between-group component of variance, and standard analytic methods developed for randomized clinical trials ignore this extra variation, underestimating the error term.
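The correction the review calls for is the usual design effect: inflate an individually powered sample size by 1 + (m - 1) * ICC when treatment is delivered in groups of size m. A sketch with hypothetical inputs:

```python
# Design-effect inflation of a sample size for an IRGT trial.
import math

def irgt_sample_size(n_individual, group_size, icc):
    deff = 1 + (group_size - 1) * icc
    return math.ceil(n_individual * deff)

# Even a small ICC matters: groups of 10 with ICC 0.05 inflate n by 45%.
print(irgt_sample_size(n_individual=128, group_size=10, icc=0.05))
```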

17.
Power analysis constitutes an important component of modern clinical trials and research studies. Although a variety of methods and software packages are available, almost all of them are focused on regression models, with little attention paid to correlation analysis. However, the latter is arguably a simpler and more appropriate approach for modelling concurrent events, especially in psychosocial research. In this paper, we discuss power and sample size estimation for correlation analysis arising from clustered study designs. Our approach is based on the asymptotic distribution of correlated Pearson-type estimates. Although this asymptotic distribution is easy to use in data analysis, the presence of a large number of parameters creates a major problem for power analysis due to the lack of real data to estimate them. By introducing a surrogacy-type assumption, we show that all nuisance parameters can be eliminated, making it possible to perform power analysis based only on the parameters of interest. Simulation results suggest that power and sample size estimates obtained under the proposed approach are robust to this assumption.
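For the non-clustered baseline, power for a correlation is usually handled through the Fisher z transform, as sketched below; the paper's clustered variance expressions are not reproduced here.

```python
# Fisher z sample size for detecting a correlation rho:
# n = ((z_alpha + z_beta) / atanh(rho))^2 + 3.
import math
from scipy.stats import norm

def corr_sample_size(rho, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z**2 / math.atanh(rho) ** 2 + 3)

print(corr_sample_size(0.3))   # n needed to detect rho = 0.3
```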

18.
Tian L. Statistics in Medicine 2006, 25(12): 2008-2017
The within-subject coefficient of variation (WCV) is widely used as a measure of the precision and reproducibility of data in the medical and biological sciences. In this paper, generalized confidence intervals and tests for a single WCV are developed using the concept of generalized pivots under a one-way random effects model. This approach is further extended to two-sample cases. The resulting procedures are easy to compute and have good properties in terms of coverage probability and type I error control at small sample sizes. The proposed methods are illustrated by a real-life example.
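A point estimate of the WCV under the one-way random effects model is simply the square root of the within-subject mean square over the grand mean; the generalized-pivot intervals developed in the paper go beyond this sketch. The replicate data below are hypothetical.

```python
# WCV point estimate from balanced replicate data.
import numpy as np

rng = np.random.default_rng(10)
k, n = 15, 3                              # subjects, replicates each
subject_means = rng.normal(50, 5, size=k)
data = subject_means[:, None] + rng.normal(0, 2, size=(k, n))

msw = np.mean(data.var(axis=1, ddof=1))   # pooled within-subject MS
wcv = np.sqrt(msw) / data.mean()
print(wcv)                                # roughly 2/50 = 0.04 here
```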

19.
In vaccine studies, a specific diagnosis of a suspected case by culture or serology of the infectious agent is expensive and difficult. Implementing validation sets in the study is less expensive and easier to carry out. In studies using validation sets, the non-specific or auxiliary outcome is measured on every participant, while the specific outcome is measured only for a small proportion of the participants. Vaccine efficacy, defined as one minus some measure of relative risk, can be severely attenuated if based only on the auxiliary outcome. Applying missing data analysis techniques can thus correct the bias while maintaining statistical efficiency. However, when the validation sets are small and the vaccine is highly efficacious, all specific outcomes in the validation set of the vaccinated group are likely to be negative. Two commonly used missing data analysis methods, the mean score method and multiple imputation, estimate the confidence interval by relying on an ad hoc continuity correction when none of the specific outcomes is positive, and on a normality or log-normality assumption for the relative risk, which may not hold when the relative risk is highly skewed. In this paper, we propose a Bayesian method for estimating vaccine efficacy and its highest probability density (HPD) credible set using Monte Carlo (MC) methods when auxiliary outcome data and a small validation sample are used. Comparing the performance of these approaches using data from a field study of influenza vaccine and simulations, we recommend the Bayesian method in this situation.
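The Monte Carlo flavour of the Bayesian approach can be illustrated in a deliberately simplified form: draw the two attack rates from Beta posteriors, form VE = 1 - p1/p0, and read off an HPD set from the draws. The paper additionally folds in the auxiliary outcomes and the small validation sample; the counts and priors below are hypothetical.

```python
# Monte Carlo HPD credible set for vaccine efficacy VE = 1 - p1/p0.
import numpy as np

rng = np.random.default_rng(11)

def posterior_draws(cases, n, size):
    # Beta posterior under a Jeffreys Beta(0.5, 0.5) prior.
    return rng.beta(0.5 + cases, 0.5 + n - cases, size=size)

p_vac = posterior_draws(cases=2, n=1000, size=100_000)
p_ctl = posterior_draws(cases=20, n=1000, size=100_000)
ve = 1 - p_vac / p_ctl

def hpd(samples, mass=0.95):
    # Shortest interval containing `mass` of the sorted draws.
    s = np.sort(samples)
    m = int(np.ceil(mass * len(s)))
    widths = s[m:] - s[:-m]
    i = np.argmin(widths)
    return s[i], s[i + m]

print(np.median(ve), hpd(ve))
```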

20.
This paper investigates estimating and testing treatment effects in randomized controlled trials where an imperfect diagnostic device is used to assign subjects to the treatment and control group(s). The paper focuses on the pre-post design and proposes two new methods for estimating and testing treatment effects. Furthermore, methods for computing sample sizes for such designs, accounting for misclassification of subjects, are devised. The methods are compared with each other and with a traditional method that ignores the imperfection of the diagnostic device. In particular, the likelihood-based approach shows a significant advantage in terms of power and coverage probability and, consequently, in reduction of the required sample size. The application of the results is illustrated with data from an aging trial for dementia and with electroencephalogram (EEG) recordings of alcoholic and non-alcoholic subjects. Copyright © 2016 John Wiley & Sons, Ltd.
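For orientation, the classical Rogan-Gladen correction shows how an observed proportion is adjusted for known sensitivity and specificity; the paper's likelihood-based estimators for the pre-post design are more involved. The inputs below are hypothetical.

```python
# Rogan-Gladen correction of an observed proportion for an imperfect
# diagnostic device: p = (p_obs + spec - 1) / (sens + spec - 1).
def rogan_gladen(p_obs, sensitivity, specificity):
    return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

# Observed 30% positive with an 85%-sensitive, 90%-specific device.
print(rogan_gladen(p_obs=0.30, sensitivity=0.85, specificity=0.90))
```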
