Similar Articles
20 similar articles were found (search time: 15 ms)
1.
Robust rank-based methods are proposed for the analysis of data from multicenter clinical trials using a mixed model (including covariates) in which the treatment effects are assumed to be fixed and the center effects are assumed to be random. These rank-based methods are developed under the usual mixed-model structure but without the normality assumption on the random components of the model. For this mixed model, our proposed estimation includes R estimation of the fixed effects, robust estimation of the variance components, and studentized residuals. Our accompanying inference includes estimates of the standard errors of the fixed-effects estimators and tests of general linear hypotheses concerning fixed effects. While the development is for general score functions, the Wilcoxon linear scores are emphasized. A discussion of the relative efficiency results shows that the R estimates are highly efficient compared to the traditional maximum likelihood (ML) estimates. A small Monte Carlo study confirms the validity of the analysis and its gain in power over the ML analysis for heavy-tailed distributions. We further develop a rank-based test for center-by-treatment interactions. We discuss the results of our analysis for an example of a multicenter clinical trial, which shows the robustness of our procedure.
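The R estimation described above minimizes a rank-based dispersion of the residuals (Jaeckel's criterion with Wilcoxon linear scores). The following is a minimal sketch for a single fixed-effect slope, ignoring the random center effects and using a generic optimizer; it is an illustration of the technique, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def wilcoxon_scores(n):
    # Wilcoxon linear scores a(i) = sqrt(12) * (i/(n+1) - 1/2)
    i = np.arange(1, n + 1)
    return np.sqrt(12.0) * (i / (n + 1) - 0.5)

def jaeckel_dispersion(beta, X, y):
    # Jaeckel's dispersion: sum of a(R(e_i)) * e_i over the residuals e_i
    resid = y - X @ beta
    ranks = resid.argsort().argsort() + 1  # ranks 1..n (distinct values)
    a = wilcoxon_scores(len(y))
    return np.sum(a[ranks - 1] * resid)

def r_fit(X, y):
    # R estimate of the slope coefficients; the intercept is not identified
    # by the dispersion and would be estimated separately (e.g. residual median)
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares start
    res = minimize(jaeckel_dispersion, beta0, args=(X, y), method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.standard_t(df=2, size=n)  # heavy-tailed errors
beta_hat = r_fit(x[:, None], y)
print(beta_hat)  # close to the true slope 2 despite the heavy tails
```

The dispersion is a convex function of the coefficients, so a simple direct-search optimizer suffices in low dimensions.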

2.
The major limitations of growth curve mixture models for HIV/AIDS data are the usual assumptions of normality and of monophasic curves within latent classes. This article addresses these limitations by using non-normal skewed distributions and multiphasic patterns for outcomes of prospective studies. For such outcomes, new skew-t (ST) distributions are proposed for modeling heterogeneous growth trajectories, which exhibit gradual rather than abrupt multiphasic changes from a declining trend to an increasing trend over time. We assess these clinically important features of longitudinal HIV/AIDS data using the bent-cable framework within the context of joint modeling of the time-to-event and response processes. A real dataset from an AIDS clinical study is used to illustrate the proposed methods.
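The bent-cable framework mentioned above replaces an abrupt change point with a quadratic bend of half-width gamma, so the trajectory passes gradually from one slope to another. A minimal sketch of the mean trajectory, with illustrative parameter values rather than estimates from the study:

```python
import numpy as np

def bent_cable(t, tau, gamma):
    """Bent-cable transition basis q(t; tau, gamma): zero before the bend,
    quadratic inside the bend of half-width gamma, linear (t - tau) after."""
    t = np.asarray(t, dtype=float)
    q = np.where(t > tau + gamma, t - tau, 0.0)
    inside = np.abs(t - tau) <= gamma
    return np.where(inside, (t - tau + gamma) ** 2 / (4 * gamma), q)

def trajectory(t, b0, b1, b2, tau, gamma):
    # mean trajectory: slope b1 before the bend, b1 + b2 after it
    return b0 + b1 * t + b2 * bent_cable(t, tau, gamma)

t = np.linspace(0, 10, 101)
# declining (slope -0.8) before the bend at tau = 4, rising (slope 0.7) after
y = trajectory(t, b0=5.0, b1=-0.8, b2=1.5, tau=4.0, gamma=1.0)
print(y[0], y.min(), y[-1])
```

The quadratic piece makes the fitted curve continuously differentiable at the bend, which is exactly the "gradual rather than abrupt" change the abstract describes.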

3.
The design of AIDS clinical trials is of growing importance. These studies tend to be longitudinal and typically involve missing data. HIV-1 RNA is a common endpoint for these studies and is inherently non-normal; moreover, viral load can be measured only within certain bounds, resulting in censored data. We compared several analysis methods, both univariate and multivariate, on the basis of empirical power, and provide an illustrative example using data from a controlled clinical trial. Simulated viral load data demonstrate that methods adjusting for baseline data gain power as the positive intrasubject correlation expected with this type of data increases. Several of the summary measures considered have power comparable to that of multivariate tests.

4.
The normal distribution is among the most useful distributions in statistical applications. Accordingly, testing for normality is of fundamental importance in many fields including biopharmaceutical research. A generally powerful test for normality is the Shapiro-Wilk test, which can be derived based on estimated entropy divergence. Another well-known test for normality based on entropy divergence was proposed by Vasicek (1976), which has inspired the development of many goodness-of-fit tests for other important distributions. Despite extensive research on the subject, there still exists considerable confusion concerning the fundamental characteristics of Vasicek’s test. This article presents a unified derivation of both the Shapiro-Wilk test and Vasicek’s test based on estimated entropy divergence and clarifies some existing confusion. A comparative study of power performance for these two well-known tests for normality is presented with respect to a wide range of alternatives.
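Vasicek's statistic compares a spacing-based estimate of the sample entropy with the entropy the sample would have if it were normal: under normality the statistic approaches sqrt(2*pi*e) ≈ 4.13, and smaller values indicate departure from normality. A rough sketch (the window choice m = 8 is arbitrary here, and this is the classical statistic, not the article's unified derivation):

```python
import numpy as np
from scipy.stats import shapiro

def vasicek_entropy(x, m):
    """Vasicek (1976) spacing estimator of differential entropy."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(n)
    upper = x[np.minimum(i + m, n - 1)]  # X_(i+m), clipped at the maximum
    lower = x[np.maximum(i - m, 0)]      # X_(i-m), clipped at the minimum
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

def vasicek_K(x, m):
    """Vasicek's normality statistic exp(H)/s; near sqrt(2*pi*e) ~ 4.13
    under normality, smaller for non-normal data."""
    return np.exp(vasicek_entropy(x, m)) / np.std(x)

rng = np.random.default_rng(1)
x_norm = rng.normal(size=500)
x_exp = rng.exponential(size=500)
k_norm = vasicek_K(x_norm, m=8)
k_exp = vasicek_K(x_exp, m=8)
print(k_norm, k_exp)                 # normal sample scores higher
print(shapiro(x_exp).pvalue)         # Shapiro-Wilk also rejects the exponential
```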

5.
Summary Guidelines for the performance and analysis of bioequivalence studies are not very specific. The advantages and disadvantages of the following methods and tests are discussed: analysis of variance by summation or by use of general linear models, nonparametric procedures, a posteriori probabilities, and tests on the normality of residuals and on the variability of the results. Arguments for and against analysis of the data after logarithmic transformation versus analysis of untransformed data are presented. If the confidence intervals lie within certain limits, preparations may be considered equivalent. The criteria leading to those limits are discussed. It is recommended that concentration-dependent data from bioequivalence studies be evaluated by analysis of variance after logarithmic transformation, applying general linear models. Data that for theoretical reasons cannot be normally or log-normally distributed should be analysed by nonparametric methods. Otherwise these methods can only be recommended if a significant deviation from normality has been noted, and only for two-way cross-over designs. For a geometric evaluation (after logarithmic transformation) the regions of acceptance should be symmetrical in the logarithm, e.g. (80%, 125%).
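The recommended geometric evaluation amounts, for log-transformed data, to checking whether a 90% confidence interval for the test/reference ratio lies inside (80%, 125%). This simplified sketch treats the data as paired log differences rather than a full cross-over ANOVA, and all data below are simulated for illustration:

```python
import numpy as np
from scipy import stats

def be_interval(log_test, log_ref, alpha=0.10):
    """90% CI for the geometric mean ratio test/reference from paired
    log-transformed data; equivalence if the CI lies within (0.80, 1.25)."""
    d = np.asarray(log_test) - np.asarray(log_ref)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    lo, hi = d.mean() - t * se, d.mean() + t * se
    return np.exp(lo), np.exp(hi)   # back-transform to the ratio scale

rng = np.random.default_rng(2)
log_ref = rng.normal(3.0, 0.3, size=24)                  # e.g. log AUC, n = 24
log_test = log_ref + rng.normal(0.02, 0.10, size=24)     # ~2% higher on average
lo, hi = be_interval(log_test, log_ref)
print(lo, hi, 0.80 <= lo and hi <= 1.25)
```

Note how the (80%, 125%) region is symmetric on the log scale, since log(0.8) = -log(1.25).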

6.
ABSTRACT

Analysis of covariance (ANCOVA) is commonly used in the analysis of randomized clinical trials to adjust for baseline covariates and improve the precision of the treatment effect estimate. We derive the exact power formulas for testing a homogeneous treatment effect in superiority, noninferiority, and equivalence trials under both unstratified and stratified randomizations, and for testing the overall treatment effect and treatment × stratum interaction in the presence of heterogeneous treatment effects when the covariates excluding the intercept, treatment, and prestratification factors are normally distributed. These formulas also work very well for nonnormal covariates. The sample size methods based on the normal approximation or the asymptotic variance generally underestimate the required size. We adapt the recently developed noniterative and two-step sample size procedures to the above tests. Both methods take into account the nonnormality of the t statistic and the lower-order variance term commonly ignored in sample size estimation. Numerical examples demonstrate the excellent performance of the proposed methods, particularly in small samples. We revisit the topic of prestratification versus poststratification by comparing their relative efficiency and power. Supplementary materials for this article are available online.
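The precision gain behind ANCOVA can be seen in a simple approximate power calculation: adjusting for a baseline covariate with correlation rho to the outcome shrinks the error variance by (1 - rho**2). This noncentral-t sketch is the textbook approximation, not the exact formulas derived in the article:

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def ancova_power(n_per_arm, delta, sigma, rho, alpha=0.05):
    """Approximate power of the ANCOVA treatment test in a 1:1 two-arm trial
    with one baseline covariate whose correlation with the outcome is rho."""
    df = 2 * n_per_arm - 3                           # intercept, treatment, covariate
    se = sigma * np.sqrt(1 - rho ** 2) * np.sqrt(2.0 / n_per_arm)
    ncp = delta / se                                 # noncentrality parameter
    tcrit = t_dist.ppf(1 - alpha / 2, df)
    return nct.sf(tcrit, df, ncp) + nct.cdf(-tcrit, df, ncp)

p_unadj = ancova_power(50, delta=0.5, sigma=1.0, rho=0.0)  # no adjustment
p_adj = ancova_power(50, delta=0.5, sigma=1.0, rho=0.6)    # baseline adjustment
print(p_unadj, p_adj)  # adjustment raises the power
```

This approximation treats the covariate as fixed; the article's point is precisely that the exact formulas account for the covariate's randomness, which matters in small samples.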

7.
ABSTRACT

Hypothesis tests based on linear models are widely accepted by organizations that regulate clinical trials. These tests are derived using strong assumptions about the data-generating process so that the resulting inference can be based on parametric distributions. Because these methods are well understood and robust, they are sometimes applied to data that depart from the assumptions, such as ordinal integer scores. Permutation tests are a nonparametric alternative that require minimal assumptions, which are often guaranteed by the randomization that was conducted. We compare analysis of covariance (ANCOVA), a special case of linear regression that incorporates stratification, to several permutation tests based on linear models that control for pretreatment covariates. In simulations of randomized experiments using models which violate some of the parametric regression assumptions, the permutation tests maintain power comparable to ANCOVA. We illustrate the use of these permutation tests alongside ANCOVA using data from a clinical trial comparing the effectiveness of two treatments for gastroesophageal reflux disease. Given the considerable costs and scientific importance of clinical trials, an additional nonparametric method, such as a linear model permutation test, may serve as a robustness check on the statistical inference for the main study endpoints. Supplementary materials for this article are available online.
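One common way to build such a covariate-adjusted permutation test is to residualize the outcome on the pretreatment covariate and then permute the (randomized) treatment labels. This is a minimal sketch of that idea, not necessarily one of the specific tests compared in the article; the data are simulated with ordinal-style integer noise to mimic a score outcome:

```python
import numpy as np

def permutation_pvalue(y, x, g, n_perm=5000, seed=0):
    """Permutation p-value for a covariate-adjusted treatment effect:
    residualize y on the baseline covariate x once, then re-randomize
    the treatment labels g, as the actual randomization did."""
    rng = np.random.default_rng(seed)
    resid = y - np.polyval(np.polyfit(x, y, 1), x)

    def effect(labels):
        return resid[labels == 1].mean() - resid[labels == 0].mean()

    obs = effect(g)
    perm = np.array([effect(rng.permutation(g)) for _ in range(n_perm)])
    return (np.sum(np.abs(perm) >= np.abs(obs)) + 1) / (n_perm + 1)

rng = np.random.default_rng(3)
n = 60
x = rng.normal(size=n)                                # baseline covariate
g = rng.permutation(np.repeat([0, 1], n // 2))        # randomized 1:1
y = 1.0 * x + 2.0 * g + rng.integers(-2, 3, size=n)   # ordinal-style noise
p = permutation_pvalue(y, x, g)
print(p)
```

Because the labels are re-randomized exactly as in the design, the p-value is valid without any distributional assumption on the integer-valued noise.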

8.
The importance of the use of appropriate biostatistical methods is stressed. In this article some problems and common errors in the data-reduction methods applied in biopharmaceutical and pharmacokinetic research are discussed. A commonly used representation of a set of concentration-time curves is the so-called ‘mean curve’, a curve through the arithmetic means of concentrations at discrete time points. If individual curves are compared with the ‘mean curve’ it appears that important characteristics have disappeared while other, incorrect, characteristics have been created. Unreliable conclusions may result from this procedure. Rather, every single concentration-time curve should be fitted by appropriate regression methods and the resulting parameters considered as multiple characteristics of individual pharmacokinetic behaviour. In a second data-analysis step these parameters may be clustered into more or less homogeneous subgroups, which subsequently may be represented by a representative curve. Standard errors of the mean, and confidence intervals based on them rather than on the standard deviation, are often misused as dispersion measures to characterize the sample or population distribution. Standard errors of the mean and such confidence intervals measure the precision of the mean of a sample and are sensitive to the sample size. Vertical bars (in curves) representing the standard deviation, standard errors of the mean or confidence intervals suggest symmetrical distributions, but this is sometimes not justified. Deviations from normality appear to occur often. A simple graphical method to indicate the dispersion of non-normal sets is presented. Methods for the determination of confidence intervals for normal and non-normal distributions are discussed. Attention is given to a distribution-free method for the determination of confidence intervals based on Wilcoxon's test.
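The distinction between the standard deviation and the standard error of the mean is easy to demonstrate: as the sample grows, the SD keeps describing the spread of individuals while the SEM shrinks toward zero. A small simulated illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
population_sd = 10.0
results = {}
for n in (10, 100, 1000):
    x = rng.normal(50.0, population_sd, size=n)
    sd = x.std(ddof=1)        # describes the spread of individual values
    sem = sd / np.sqrt(n)     # describes the precision of the sample mean
    results[n] = (sd, sem)
    print(n, round(sd, 2), round(sem, 2))
# the SD stays near 10 as n grows, while the SEM shrinks like 1/sqrt(n);
# SEM bars therefore say nothing about the dispersion of the sample itself
```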

9.
The data obtained from toxicity studies are examined for homogeneity of variance, but usually they are not examined for normality. In this study I examined the measured items of a carcinogenicity/chronic toxicity study with rats for both homogeneity of variance and normal distribution. It was observed that many hematology and biochemistry items showed a non-normal distribution. To test the normality of data obtained from toxicity studies, the data of the concurrent control group may be examined; for data that show a non-normal distribution, robust non-parametric tests may be applied.
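The suggested workflow, screening the concurrent control group for normality and falling back to a non-parametric comparison when normality is rejected, can be sketched as follows (the threshold, tests, and skewed simulated data are illustrative, not from the rat study):

```python
import numpy as np
from scipy.stats import shapiro, ttest_ind, mannwhitneyu

def compare_to_control(control, treated, alpha_norm=0.05):
    """Test the control group for normality first; fall back to a robust
    non-parametric test when normality is rejected."""
    if shapiro(control).pvalue < alpha_norm:
        return "mann-whitney", mannwhitneyu(control, treated).pvalue
    return "t-test", ttest_ind(control, treated).pvalue

rng = np.random.default_rng(5)
# skewed values, as many clinical-chemistry items are
control = rng.lognormal(0.0, 1.0, size=50)
treated = rng.lognormal(0.8, 1.0, size=50)
name, p = compare_to_control(control, treated)
print(name, p)
```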

10.
This article reviews nonparametric alternatives to the mixed model normal theory analysis for the analyses of multicenter clinical trials. Under a mixed model, the traditional analysis is based on maximum likelihood theory under normal errors. This analysis, though, is not robust to outliers. Robust, rank-based, Wilcoxon-type procedures are reviewed for a multicenter clinical trial for the mixed model but without the assumption of normality. These procedures retain the high efficiency of Wilcoxon methods for simple location problems and are based on a fitting criterion which is robust to outliers in response space. A simple weighting scheme can be employed so that the procedures are robust to outliers in factor (design) space as well as response space. These rank-based analyses offer a complete analysis, including estimation of fixed effects and their standard errors, and tests of linear hypotheses. Both rank-based estimates of contrasts and individual treatment effects are reviewed. We illustrate the analyses using real data from a clinical trial.

11.
This article discusses statistical methods for the analysis of multivariate data arising in clinical trials involving a small number of subjects randomly assigned to one of several treatment groups. Possible violations of traditional assumptions such as variance homogeneity and normality of errors are often dealt with by carrying out the statistical analysis using strategies such as transforming the data or applying nonparametric procedures. Multivariate nonparametric tests provide a realistic alternative for analyzing such data. We present a permutation procedure for analyzing data arising in randomized experiments.

12.
It is important yet challenging to choose an appropriate analysis method for the analysis of repeated binary responses with missing data. The conventional method using the last observation carried forward (LOCF) approach can be biased in both parameter estimates and hypothesis tests. The generalized estimating equations (GEE) method is valid only when missing data are missing completely at random, which may not be satisfied in many clinical trials. Several random-effects models based on likelihood or pseudo-likelihood methods and multiple-imputation-based methods have been proposed in the literature. In this paper, we evaluate the random-effects models with full- or pseudo-likelihood methods, GEE, and several multiple-imputation approaches. Simulations are used to compare the results and performance among these methods under different simulation settings.
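The bias risk of LOCF is easy to see on a toy example: carried-forward values enter the visit means as if they had been observed. A small pandas sketch with hypothetical subjects and visits:

```python
import numpy as np
import pandas as pd

# toy repeated binary responses: rows = subjects, columns = visits,
# NaN = missing after dropout
wide = pd.DataFrame(
    {"v1": [1, 0, 1, 0], "v2": [1, 0, np.nan, 0], "v3": [np.nan, 0, np.nan, 1]},
    index=["s1", "s2", "s3", "s4"],
)

locf = wide.ffill(axis=1)   # last observation carried forward along visits
print(wide["v3"].mean())    # observed data only at visit 3: 0.5
print(locf["v3"].mean())    # after LOCF: 0.75 -- the imputed values shift the estimate
```

Whether LOCF biases the estimate up or down depends on who drops out and when, which is exactly why the missing-data mechanism matters for the methods compared in the paper.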

14.
A statistical study of amino acid side chain contact interactions was carried out using a data set based on 36 protein structures. For each type of amino acid, a distribution of per-residue inter-side-chain contacts was obtained, over the observed span of zero to 11 contacts per residue. Significant observations included the following: 1) The mean number of inter-side-chain contacts is proportional to side chain surface area with the exception of Lys and Arg. 2) The mean number of contacts was greater for amino acids in β-sheet relative to α-helical regions. 3) The more polar or surface-loving amino acids exhibited non-normal distributions, whereas distributions for the non-polar or interior-loving amino acids fell within accepted limits of normality.

15.
ABSTRACT

The design and analysis of cancer clinical trials with biomarkers depends on various factors, such as the phase of the trial, the type of biomarker, whether the biomarker has been validated, and the study objectives. In this article, we demonstrate the design and analysis of two Phase II cancer clinical trials, one with a predictive biomarker and the other with an imaging prognostic biomarker. Statistical testing methods and their sample size calculation methods are presented for each trial. We assume that the primary endpoint of these trials is a time-to-event variable, but the concept can be applied to any type of endpoint.
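For a time-to-event endpoint, a standard sample-size ingredient (a classical formula, not necessarily the method used in the article) is Schoenfeld's formula for the number of events required by a two-arm log-rank comparison:

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.8, p1=0.5):
    """Schoenfeld's required number of events for a two-arm log-rank test:
    d = (z_{1-alpha/2} + z_{power})^2 / (p1 * (1 - p1) * log(hr)^2),
    with allocation fraction p1 to arm 1."""
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    return (za + zb) ** 2 / (p1 * (1 - p1) * np.log(hr) ** 2)

# e.g. detecting a hazard ratio of 0.6 with 80% power at two-sided 5%
d_req = schoenfeld_events(hr=0.6)
print(np.ceil(d_req))  # 121 events
```

The formula counts events, not patients; the enrollment needed to observe that many events then depends on accrual and follow-up assumptions.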

16.
The problem of estimating parameters and testing hypotheses pertaining to categorical data is well known in statistical analysis. Much of the literature on the subject specifies and fits linear models to multinomial data using methods such as weighted least squares. This article describes maximum-likelihood estimation and likelihood ratio tests for ordered categorical response variates with either discrete or continuous underlying probability distributions. Emphasis is on fitting and making inferences about parameters of mixture distributions, especially mixtures of normal distributions. Goodness-of-fit tests are given to check the adequacy of the fitted distributional models.
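Maximum-likelihood fitting of a mixture of normals is typically done with the EM algorithm. This is a generic two-component sketch on continuous data, not the authors' procedure for ordered categorical responses:

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """EM for a two-component normal mixture (minimal sketch)."""
    x = np.asarray(x, dtype=float)
    # crude initialization from the quartiles
    w, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E step: posterior probability that each point came from component 1
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M step: weighted maximum-likelihood updates
        w = r.mean()
        mu1 = np.average(x, weights=r)
        mu2 = np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return w, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 700)])
w, c1, c2 = em_two_normals(x)
print(round(w, 2), c1, c2)  # weight near 0.3, means near 0 and 4
```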

17.
In this paper, we develop a general method of testing for independence when unobservable generalized errors are involved. Our method can be applied to testing for serial independence of generalized errors, and testing for independence between the generalized errors and observable covariates. The former can serve as a unified approach to testing the adequacy of time series models, as model adequacy often implies that the generalized errors obtained after a suitable transformation are independent and identically distributed. The latter is a key identification assumption in many nonlinear economic models. Our tests are based on a classical sample dependence measure, the Hoeffding–Blum–Kiefer–Rosenblatt‐type empirical process applied to generalized residuals. We establish a uniform expansion of the process, thereby deriving an explicit expression for the parameter estimation effect, which causes our tests not to be nuisance‐parameter‐free. To circumvent this problem, we propose a multiplier‐type bootstrap to approximate the limit distribution. Our bootstrap procedure is computationally very simple as it does not require a re‐estimation of the parameters in each bootstrap replication. Simulations and empirical applications to daily exchange rate data highlight the merits of our approach.

18.
Abstract

This is the second of a series of articles presented in this Journal introducing statistical terminology and methods useful in the design and analysis of clinical trials. This article describes some graphical techniques for data display, sampling procedures and introduces the normal and binomial distributions. These articles are based, in part, on material presented in greater detail in the book, Pharmaceutical Statistics, by the author (1).

19.
Bivariate correlated (clustered) data often encountered in epidemiological and clinical research are routinely analyzed under a linear mixed-effects (LME) model with normality assumptions for the random effects and within-subject errors. However, those analyses might not provide robust inference when the normality assumptions are questionable, particularly if the data exhibit skewness and heavy tails. In this article, we develop a Bayesian approach to bivariate linear mixed-effects (BLME) models, replacing the Gaussian assumptions for the random terms with skew-normal/independent (SNI) distributions. The SNI distribution is an attractive class of asymmetric heavy-tailed parametric structures which includes the skew-normal, skew-t, skew-slash, and skew-contaminated normal distributions as special cases. We assume that the random effects and the within-subject (random) errors, respectively, follow multivariate SNI and normal/independent (NI) distributions, which provide an appealing robust alternative to the symmetric normal distribution in a BLME model framework. The method is exemplified through an application to an AIDS clinical data set to compare potential models with different distribution specifications, and clinically important findings are reported.

20.
This paper is intended to assist pharmacologists to make the most of statistical analysis and avoid common errors. A scenario is presented in which an experimenter performed an experiment in two separate stages, combined the control groups for analysis, and found some surprising results. The consequences of combined controls are discussed, appropriate display and analysis of the data are described, and an analysis of the likelihood of erroneous conclusions is made. Comparisons between data from separately conducted experimental series are hazardous when there is any possibility that the properties of the experimental units have changed between the series. Experiments that have been performed independently should be analyzed independently. Unlikely or surprising results should be treated with caution, a high standard of evidence should be required, and verification by repeated experiments should be performed and reported. Box-and-whisker plots contain more information than more commonly used displays and should be used where the sample size is large enough (say, n ≥ 5). In most biomedical experiments the observations are not random samples from large populations, as assumed by conventional parametric analyses such as Student's t-test; permutation tests, which do not lose their validity when a sampled population is non-normal or when the data are not random samples, should therefore frequently be used instead of Student's t-tests.
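A two-sample permutation test of the kind recommended above requires only that treatment labels were assigned by randomization, not that the data are normal random samples. A minimal sketch on small, skewed simulated samples, compared with the t-test p-value:

```python
import numpy as np
from scipy.stats import ttest_ind

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of means; valid under
    randomization without assuming normal, randomly sampled populations."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    obs = a.mean() - b.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)                     # reassign labels
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
a = rng.exponential(1.0, size=20)   # skewed, small samples
b = rng.exponential(4.0, size=20)
p_perm = permutation_test(a, b)
p_t = ttest_ind(a, b).pvalue
print(p_perm, p_t)
```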


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号