Similar Articles (20 results)
2.
The summary measures approach to analysing repeated measures is described. The circumstances under which it can be advantageous to use such measures are considered. Strategies for baseline adjustment where there are multiple baselines are examined, as is the choice of an appropriate summary statistic. A compromise trend/mean measure, regression through the origin, is proposed as being useful under some circumstances. An analysis using this measure is illustrated with a suitable example.
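The regression-through-the-origin summary measure can be sketched in a few lines: each subject's repeated measurements are reduced to the least-squares slope through the origin of (response − baseline) on time, and the per-subject slopes are then compared between groups. All data, times, and baseline values below are hypothetical.

```python
import numpy as np
from scipy import stats

def origin_slope(times, values, baseline):
    """Per-subject summary: least-squares slope through the origin of
    (response - baseline) on time, b = sum(t*y) / sum(t^2)."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(values, dtype=float) - baseline
    return np.sum(t * y) / np.sum(t * t)

# Hypothetical repeated-measures data: times in weeks, one row per subject.
times = [2, 4, 6, 8]
treated = [origin_slope(times, y, b) for y, b in [
    ([1.0, 2.1, 2.9, 4.2], 0.2), ([0.8, 1.9, 3.1, 3.8], 0.1)]]
control = [origin_slope(times, y, b) for y, b in [
    ([0.3, 0.5, 0.8, 1.1], 0.2), ([0.2, 0.6, 0.7, 1.0], 0.3)]]

# The per-subject summaries are then compared with an ordinary two-sample test.
t_stat, p = stats.ttest_ind(treated, control)
```

Because each subject contributes a single number, the between-group comparison needs no repeated-measures machinery.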

4.
Longitudinal endpoints are used in clinical trials, and the analysis of the results is often conducted using within-individual summary statistics. When these trials are monitored, interim analyses that include subjects with incomplete follow-up can lead to incorrect decisions because of bias caused by non-linearity in the true time trajectory of the treatment effect. Linear mixed-effects models can be used to remove this bias, but there is a lack of software to support both the design and implementation of monitoring plans in this setting. This paper considers a clinical trial in which the measurement time schedule is fixed (at least for pre-trial design) and the scientific question is parameterized by a contrast across these measurement times. This setting ensures generalizable inference in the presence of non-linear time trajectories. The distribution of the treatment effect estimate at the interim analyses using the longitudinal outcome measurements is given, and software to calculate the amount of information at each interim analysis is provided. The interim information specifies the analysis timing, thereby allowing standard group sequential design software packages to be used for trials with longitudinal outcomes. The practical issues with implementation of these designs are described; in particular, methods are presented for consistent estimation of treatment effects at the interim analyses when outcomes are not measured according to the pre-trial schedule. Splus/R functions implementing this inference using appropriate linear mixed-effects models are provided. These designs are illustrated using a clinical trial of statin treatment for the symptoms of peripheral arterial disease.
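A minimal sketch of the information calculation for a contrast across fixed measurement times, assuming complete follow-up and an exchangeable covariance structure; the covariance parameters and sample sizes are hypothetical, not taken from the paper. Statistical information is the reciprocal of the variance of the contrast estimate, and the interim/final information ratio is what drives the group sequential spending.

```python
import numpy as np

def contrast_information(Sigma, c, n_a, n_b):
    """Statistical information 1/Var for the contrast c across measurement
    times, given the within-subject covariance Sigma of the repeated
    outcomes and the current per-arm sample sizes (complete follow-up)."""
    c = np.asarray(c, dtype=float)
    var = c @ Sigma @ c * (1.0 / n_a + 1.0 / n_b)
    return 1.0 / var

# Hypothetical schedule: 3 visits, exchangeable correlation 0.5, SD 2.
sd, rho = 2.0, 0.5
Sigma = sd**2 * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))
c = np.array([0.0, 0.0, 1.0])        # treatment effect at the last visit

info_interim = contrast_information(Sigma, c, 50, 50)
info_final = contrast_information(Sigma, c, 150, 150)
info_fraction = info_interim / info_final   # feeds the spending function
```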

5.
Recently, Wu and Follmann developed summary measures to adjust for informative drop-out in longitudinal studies where drop-out depends on the underlying true value of the response. In this paper we evaluate these procedures in the common situation where drop-out depends on the observed responses. We also discuss various design and analysis strategies which minimize the bias obtained with this type of drop-out. Of particular interest is the use of multiple measurements of the response at each visit to reduce bias. These strategies are evaluated with a simulation study. The results are highlighted with applications to both a hypertensive and a respiratory disease clinical trial, where multiple measurements of the primary response were made for all participants at each visit.

6.
This paper explores the application of statistical decision theory to treatment choices in cancer which involve difficult value judgements in weighing beneficial and deleterious outcomes of treatment. Strengths and weaknesses of using decision theory are illustrated by considering the problem of selecting chemotherapy in advanced ovarian cancer. The paper includes an assessment of individual preferences in 27 volunteers and a discussion of some problems in utility assessment. An alternative approach, using threshold analysis, is presented in which the results of the decision analysis are expressed as a function of utility parameters. By knowing what sets of utilities favour each treatment, the assessment of patient preferences can then be focused on important differences of treatment options. The implications of these results for the design and analysis of clinical trials are discussed.
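A threshold analysis of the kind described can be sketched as follows: expected utility for each treatment is written as a function of a utility parameter, and the crossing point tells us which patient preferences favour which treatment. The outcome probabilities and the simple three-outcome utility structure below are invented for illustration, not taken from the ovarian cancer analysis.

```python
# Hypothetical two-treatment threshold analysis. u_tox is the patient's
# utility for surviving with treatment toxicity, on a scale where cure
# without toxicity = 1 and death (no cure) = 0.
def expected_utility(p_cure, p_tox_given_cure, u_tox):
    """EU = P(cure, no tox)*1 + P(cure, tox)*u_tox + P(no cure)*0."""
    return p_cure * (1 - p_tox_given_cure) + p_cure * p_tox_given_cure * u_tox

def threshold(pa, ta, pb, tb, grid=10001):
    """Smallest u in [0, 1] at which therapy A is preferred to B."""
    for i in range(grid):
        u = i / (grid - 1)
        if expected_utility(pa, ta, u) >= expected_utility(pb, tb, u):
            return u
    return None

# A: more effective but more toxic; B: gentler but less effective.
u_star = threshold(pa=0.5, ta=0.6, pb=0.35, tb=0.2)
```

Rather than eliciting a precise utility from each patient, the analysis only needs to know on which side of `u_star` the patient's preference falls.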

7.
Longitudinal data is often collected in clinical trials to examine the effect of treatment on the disease process over time. This paper reviews and summarizes much of the methodological research on longitudinal data analysis from the perspective of clinical trials. We discuss methodology for analysing Gaussian and discrete longitudinal data and show how these methods can be applied to clinical trials data. We illustrate these methods with five examples of clinical trials with longitudinal outcomes. We also discuss issues of particular concern in clinical trials, including sequential monitoring and adjustments for missing data. A review of current software for analysing longitudinal data is also provided. Published in 1999 by John Wiley & Sons, Ltd. This article is a US Government work and is in the public domain in the United States.

8.
Clinical trials often assess therapeutic benefit on the basis of an event such as death or the diagnosis of disease. Usually, there are several additional longitudinal measures of clinical status which are collected to be used in the treatment comparison. This paper proposes a simple non-parametric test which combines a time to event measure and a longitudinal measure so that a substantial treatment difference on either of the measures will reject the null hypothesis. The test is applied to AIDS prophylaxis and paediatric trials.
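As a rough illustration of combining an event-time statistic with a longitudinal statistic, the sketch below uses a generic Bonferroni max-combination of two standardized statistics; this is an assumption for illustration, not the paper's exact non-parametric test.

```python
from scipy import stats

def combined_max_test(z_event, z_longitudinal, alpha=0.05):
    """Reject if either standardized statistic is large; a Bonferroni
    cut-off keeps the overall two-sided level at alpha across the two
    component tests. (Generic max-combination sketch only.)"""
    z_max = max(abs(z_event), abs(z_longitudinal))
    crit = stats.norm.ppf(1 - alpha / 4)            # two tests, two-sided
    p_bonf = min(1.0, 2 * 2 * (1 - stats.norm.cdf(z_max)))
    return z_max > crit, p_bonf

# Hypothetical interim results: weak survival signal, strong longitudinal one.
reject, p = combined_max_test(z_event=1.2, z_longitudinal=2.9)
```

The design goal stated in the abstract is visible here: a substantial difference on either component alone is enough to reject.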

9.
Genome-wide association studies (GWAS) for complex diseases have focused primarily on single-trait analyses for disease status and disease-related quantitative traits. For example, GWAS on risk factors for coronary artery disease analyze genetic associations of plasma lipids such as total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides (TGs) separately. However, traits are often correlated, and a joint analysis may yield increased statistical power for association over multiple univariate analyses. Recently, several multivariate methods have been proposed that require individual-level data. Here, we develop metaUSAT (where USAT is unified score-based association test), a novel unified association test of a single genetic variant with multiple traits that uses only summary statistics from existing GWAS. Although the existing methods either perform well when most correlated traits are affected by the genetic variant in the same direction or are powerful when only a few of the correlated traits are associated, metaUSAT is designed to be robust to the association structure of correlated traits. metaUSAT does not require individual-level data and can test genetic associations of categorical and/or continuous traits. One can also use metaUSAT to analyze a single trait over multiple studies, appropriately accounting for overlapping samples, if any. metaUSAT provides an approximate asymptotic P-value for association and is computationally efficient for implementation at a genome-wide level. Simulation experiments show that metaUSAT maintains proper type-I error at low error levels. It has similar and sometimes greater power to detect association across a wide array of scenarios compared to existing methods, which are usually powerful for some specific association scenarios only. When applied to plasma lipids summary data from the METSIM and the T2D-GENES studies, metaUSAT detected genome-wide significant loci beyond the ones identified by univariate analyses. Evidence from larger studies suggests that the variants additionally detected by our test are, indeed, associated with lipid levels in humans. In summary, metaUSAT can provide novel insights into the genetic architecture of a common disease or traits.
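A simplified relative of this approach is the classical multivariate score (Wald-type) test from summary statistics, which likewise needs only per-trait z-scores and the trait correlation matrix, not individual-level data. The z-scores and correlations below are hypothetical, and this is not metaUSAT's omnibus statistic.

```python
import numpy as np
from scipy import stats

def multitrait_chi2(z, R):
    """Joint test of one variant against k correlated traits using only
    per-trait z-scores and the trait correlation matrix R (estimable from
    genome-wide summary statistics). Under H0, z ~ N(0, R), so
    z' R^{-1} z follows a chi-square distribution with k degrees of
    freedom."""
    z = np.asarray(z, dtype=float)
    stat = z @ np.linalg.solve(R, z)
    return stat, stats.chi2.sf(stat, df=len(z))

# Hypothetical variant: z-scores for TC, LDL, HDL, TG and their correlations.
z = np.array([3.1, 2.8, -0.4, 1.9])
R = np.array([[1.0, 0.8, 0.1, 0.3],
              [0.8, 1.0, 0.0, 0.2],
              [0.1, 0.0, 1.0, -0.4],
              [0.3, 0.2, -0.4, 1.0]])
stat, p_value = multitrait_chi2(z, R)
```

A fixed chi-square test like this loses power in some association structures; robustness to that structure is the motivation for the omnibus statistic described above.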

10.
BACKGROUND: Little is known about the degree to which behavioural, biological, and genetic traits contribute to within-person variation in serum cholesterol. MATERIALS AND METHODS: The authors studied within-person variation in serum total and high density lipoprotein (HDL) cholesterol in 458 participants of 27 dietary intervention studies in Wageningen, The Netherlands, from 1976 to 1995. RESULTS: For a median of 4 days between blood draws, the geometric mean of the within-person standard deviation was 0.13 mmol/l (approximately 5 mg/dl, coefficient of variation = 3.0%) for total cholesterol and 0.04 mmol/l (approximately 1.5 mg/dl, coefficient of variation = 3.0%) for HDL cholesterol. In mixed-model linear regressions using within-person variance as the dependent variable and including lipid concentration and the covariates listed below, within-person variance of both total cholesterol and HDL cholesterol was higher for a greater number of days between blood draws and for a self-selected diet rather than an investigator-controlled diet. Within-person variance of total cholesterol only was higher for a non-standardized versus a standardized phlebotomy protocol and for female sex. The authors found evidence that the APOA4 -347 (12/22 genotype) and MTP -493 (11 genotype) polymorphisms may increase within-person variation in total cholesterol. CONCLUSION: Under certain study designs (self-selected diet, non-standardized phlebotomy protocol) or participant characteristics (female sex, certain polymorphisms), within-person lipid variance is increased and the required sample size will be greater. These findings may have important implications for the time and cost of such interventions.
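The within-person summaries reported above can be computed as in this sketch: a within-person standard deviation per subject from repeated draws, then a geometric mean across subjects. The cholesterol values are hypothetical.

```python
import math

def within_person_stats(draws):
    """Within-person SD and CV from repeated measurements on one person
    (sample SD, ddof = 1)."""
    n = len(draws)
    mean = sum(draws) / n
    var = sum((x - mean) ** 2 for x in draws) / (n - 1)
    return math.sqrt(var), math.sqrt(var) / mean

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical total-cholesterol pairs (mmol/l), two draws a few days apart.
subjects = [(5.2, 5.4), (4.8, 4.7), (6.1, 5.9)]
sds = [within_person_stats(list(pair))[0] for pair in subjects]
summary_sd = geometric_mean(sds)   # the study reports a geometric mean SD
```

A geometric mean is the natural summary here because within-person SDs are right-skewed across subjects.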

11.
OBJECTIVE: To determine the usefulness of severity of illness measures in explaining the variation in costs observed in economic analyses of clinical trials. METHOD: Hospital costs and three severity of illness measures (Medical Illness Severity Grouping System [MedisGroups], Acute Physiology and Chronic Health Evaluation [APACHE] II, and APACHE III) were calculated for patients undergoing surgical management of gastrointestinal malignancies. Regression models were developed to determine the predictive ability of the severity of illness measures for the total costs and length of stay of surgical patients. RESULTS: There was no significant reduction in the cost variance among patients after correcting for severity using the MedisGroups score. APACHE II scores were a better predictor of total costs, although this relationship did not reach statistical significance. As a continuous variable, APACHE III scores explained $326 of extra cost for each point on the scale and, as a categorical variable, identified those patients who were most expensive to care for and who had long lengths of stay. CONCLUSION: Neither MedisGroups nor APACHE II was found to be useful in explaining cost variations in a clinical trial. The APACHE III system was more useful in discriminating resource-intensive patients.
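The per-point cost interpretation of a continuous severity score is simply the slope from an ordinary least-squares regression of cost on the score, the form of the $326-per-point estimate quoted above. A sketch with invented data:

```python
import numpy as np

# Hypothetical (severity score, total cost) pairs. A slope of b means each
# extra severity point is associated with $b of additional cost.
scores = np.array([20, 35, 41, 55, 62, 78, 90], dtype=float)
costs = np.array([9000, 14000, 16500, 21000, 24500, 30000, 36000],
                 dtype=float)

X = np.column_stack([np.ones_like(scores), scores])     # intercept + score
beta, *_ = np.linalg.lstsq(X, costs, rcond=None)
intercept, cost_per_point = beta

# Share of cost variance explained by severity (R^2), the quantity the
# study uses to compare the three severity systems.
fitted = X @ beta
r2 = 1 - np.sum((costs - fitted) ** 2) / np.sum((costs - costs.mean()) ** 2)
```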

12.
Randomized clinical trials are typically conducted to compare the efficacy (benefits) and side effects (risks) of two or more treatments. One can use results from such trials to decide on a preferable treatment that reflects one's own evaluation of the benefits and risks. To facilitate the necessary decision making, we propose in this paper three measures for simultaneously assessing benefits and risks. All three measures use weights that reflect the relative importance of the various treatment outcomes to an individual. Two of them carry the flavour of benefit/risk ratios, while the third generalizes Hilden's measure which incorporates patients' preferences. The proposed measures and procedures are illustrated using data from a phase III clinical trial of antihypertensive compounds.
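A minimal sketch of a weighted benefit/risk ratio in the spirit of the first two measures: signed weights encode how much each outcome matters to the individual, and each treatment's weighted benefits are divided by its weighted risks. The weights and outcome probabilities are invented, and the paper's exact definitions differ.

```python
def benefit_risk_ratio(outcome_probs, weights):
    """Weighted benefit/risk ratio: positive weights mark benefits,
    negative weights mark risks; probabilities are per-outcome rates
    under one treatment. (Illustrative form only.)"""
    benefit = sum(p * w for p, w in zip(outcome_probs, weights) if w > 0)
    risk = sum(p * -w for p, w in zip(outcome_probs, weights) if w < 0)
    return benefit / risk if risk > 0 else float("inf")

# Hypothetical antihypertensive outcomes: BP controlled, dizziness, fatigue.
weights = [1.0, -0.4, -0.2]
ratio_a = benefit_risk_ratio([0.70, 0.20, 0.10], weights)
ratio_b = benefit_risk_ratio([0.55, 0.08, 0.05], weights)
preferred = "A" if ratio_a > ratio_b else "B"
```

Changing the weights changes the preferred treatment, which is exactly why such measures are parameterized by individual preferences.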

13.
Comparative studies of the accuracy of diagnostic tests often involve designs according to which each study participant is examined by two or more of the tests and the diagnostic examinations are interpreted by several readers. Tests are then compared on the basis of a summary index, such as the (full or partial) area under the receiver operating characteristic (ROC) curve, averaged over the population of readers. The design and analysis of such studies naturally need to take into account the correlated nature of the diagnostic test results and interpretations. In this paper, we describe the use of hierarchical modelling for ROC summary measures derived from multi-reader, multi-modality studies. The models allow the variance of the estimates to depend on the actual value of the index and account for the correlation in the data both explicitly via parameters and implicitly via the hierarchical structure. After showing how the hierarchical models can be employed in the analysis of data from multi-reader, multi-modality studies, we discuss the design of such studies using the simulation-based, Bayesian design approach of Wang and Gelfand (Stat. Sci. 2002; 17(2):193-208). The methodology is illustrated via the analysis of data from a study conducted to evaluate a computer-aided diagnosis tool for screen film mammography and via the development of design considerations for a multi-reader study comparing display modes for digital mammography. The hierarchical model methodology described in this paper is also applicable to the meta-analysis of ROC studies.
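One concrete example of an ROC summary whose variance depends on the value of the index itself, the feature the hierarchical models above accommodate, is the Hanley-McNeil approximation to Var(AUC):

```python
def auc_variance(auc, n_diseased, n_healthy):
    """Hanley-McNeil approximation to Var(AUC). Note that the variance is
    a function of the AUC value itself, not only of the sample sizes."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    num = (auc * (1 - auc)
           + (n_diseased - 1) * (q1 - auc**2)
           + (n_healthy - 1) * (q2 - auc**2))
    return num / (n_diseased * n_healthy)

# At a fixed sample size, a higher AUC yields a smaller variance.
v_mid = auc_variance(0.80, 50, 50)
v_high = auc_variance(0.95, 50, 50)
```

In a multi-reader study, reader-specific AUCs near the top of the scale are therefore estimated more precisely than mid-range ones, which a constant-variance model would ignore.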

15.
Methodological work on randomized trials has largely concerned pharmacological interventions, in which the effects of the attending health professional may be regarded as minor. In other clinical settings, such as surgery, talk or physical therapies, staff-specific variation may make generalization problematic, undermining the value of the trial. Such variation has been the basis of some objections to controlled trial methodology and non-acceptance of trial results. The implications of this source of variation are considered for studies in which a different type of health professional delivers the intervention in each arm of the trial. Such a trial may involve individual patient or group randomization. Whichever method is used, it is argued that variation in outcome between health professionals may lead to design effects. These issues are illustrated using data from a large trial comparing primary care services delivered by two types of medical doctor. Random-effects models are most suitable for analysing this type of trial, as they allow adjustment for patient characteristics whilst controlling for design effects. This type of model illustrates that there can be substantial variation in performance within each category of doctor.
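The design effect induced by clustering of patients within health professionals has the familiar form DEFF = 1 + (m − 1) · ICC for average cluster size m and intracluster correlation ICC. A sketch with hypothetical numbers:

```python
def design_effect(cluster_size, icc):
    """Variance inflation when patients are clustered within health
    professionals: DEFF = 1 + (m - 1) * ICC for average cluster size m."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_patients, cluster_size, icc):
    """Number of independent patients the clustered sample is worth."""
    return n_patients / design_effect(cluster_size, icc)

# Hypothetical trial: 30 patients per doctor, modest therapist variation.
deff = design_effect(cluster_size=30, icc=0.05)
n_eff = effective_sample_size(600, 30, 0.05)
```

Even a small ICC inflates the variance substantially when clusters are large, which is why ignoring clinician effects can badly overstate precision.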

16.
Ethical considerations in a cancer phase I trial require a design allowing determination of the maximum tolerated dose with a minimum number of patients treated at low ineffectual or high overly toxic doses. It would also be advantageous to complete the phase I trial in as short a period of time and with as few patients as possible to allow further resources for later studies in which patients are treated at the optimal dose. Several dose escalation schemes are compared. These are the Fibonacci, two two-stage schemes, and a proposed scheme which uses knowledge of all toxicity grades. Estimates of the maximum tolerated dose are obtained and compared using the dose escalation schemes alone, a logit model, and a proposed mean response model. Confidence intervals using the delta method are obtained from the logit and mean response models. The proposed scheme and the two-stage schemes have the advantage of requiring fewer patients, particularly at low doses. Confidence intervals obtained from the mean response model have better coverage than those from the logit model. Data from a cancer phase I trial of dipyridamole and acivicin are presented to illustrate the methods.
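A sketch of the logit-model step: fit the toxicity probability against log dose by Newton-Raphson and invert the curve at a target toxicity rate to estimate the MTD. The escalation data are invented, the target rate of 1/3 is an assumption, and the delta-method confidence interval described above is omitted.

```python
import numpy as np

def mtd_logit(doses, n_treated, n_toxic, target=1 / 3, iters=25):
    """Fit P(toxicity) = expit(a + b*log(dose)) to grouped escalation data
    by Newton-Raphson, then invert at the target toxicity rate to get an
    MTD estimate. (Illustrative two-parameter logit fit only.)"""
    x = np.log(np.asarray(doses, dtype=float))
    n = np.asarray(n_treated, dtype=float)
    y = np.asarray(n_toxic, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ beta)))
        W = n * p * (1 - p)                      # binomial IRLS weights
        grad = X.T @ (y - n * p)
        hess = X.T @ (X * W[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    a, b = beta
    logit_target = np.log(target / (1 - target))
    return float(np.exp((logit_target - a) / b))

# Hypothetical escalation data: dose, patients treated, dose-limiting
# toxicities observed at each level.
mtd = mtd_logit([10, 20, 40, 80], [3, 3, 6, 6], [0, 0, 2, 4])
```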

17.
In this paper we explore the possible reasons why medical papers reporting clinical trials sponsored by the pharmaceutical industry often analyse repeated measures data at certain key time-points instead of employing sophisticated models of repeated measures proposed by many statisticians. A survey indicated that the priority reason in the industry for having repeated measures in clinical trials is to monitor the trial and to utilize the early results for strategic decision making. We discuss what the common statistical methods do and do not offer for analysis of repeated measures in such clinical trials. We advocate the need to improve the understanding of the medical interest in conducting longitudinal trials in the industry, and to plan and analyse the repeated measures accordingly. We address the medical interest by formulating the problem and give illustrative examples for both phases II and III trials.

18.
In order to avoid certain difficulties with the conventional randomized clinical trial design, the expertise-based design has been proposed as an alternative. In the expertise-based design, patients are randomized to clinicians (e.g. surgeons), who then treat all their patients with their preferred intervention. This design recognizes individual clinical preferences and so may reduce the rates of procedural crossovers compared with the conventional design. It may also facilitate recruitment of clinicians, because they are always allowed to deliver their therapy of choice, a feature that may also be attractive to patients. The expertise-based design avoids the possibility of so-called differential expertise bias. If a standard treatment is generally more familiar to clinicians than a new experimental treatment, then in the conventional design, more patients randomized to the standard treatment will have an expert clinician, compared with patients randomized to the experimental treatment. If expertise affects the study outcome, then a biased comparison of the treatment groups will occur. We examined the relative efficiency of estimating the treatment effect in the expertise-based and conventional designs. We recognize that expected patient outcomes may be better in the expertise-based design, which in turn may modify the estimated treatment effect. In particular, a larger treatment effect in the expertise-based design can sometimes offset a higher standard error arising from the confounding of clinician effects with treatments. These concepts are illustrated with data taken from a randomized trial of two alternative surgical techniques for tibial fractures.

19.
Information technology is of increasing importance to the health service. Two main types of system have grown up, those in clinical departments and central administrative systems. It is important to consider their inter-relationships. A clinical system in the Renal Unit at Manchester Royal Infirmary is described, which is typical of many departmental systems. Further work is reported demonstrating an impressive commonality in the requirements of different clinical disciplines and supporting the view that departmental systems are a valuable source of accurate management data. The experience gained from designing departmental systems is reviewed. It is concluded that the participation of users is essential to the successful design and implementation of systems in the National Health Service and that departmental systems have a crucial role to play in the development of district information systems.

20.
Randomized clinical trial designs commonly include one or more planned interim analyses. At these times an external monitoring committee reviews the accumulated data and determines whether it is scientifically and ethically appropriate for the study to continue. With failure-time endpoints, it is common to schedule analyses at the times of occurrence of specified landmark events, such as the 50th event, the 100th event, and so on. Because interim analyses can impose considerable logistical burdens, it is worthwhile predicting their timing as accurately as possible. We describe two model-based methods for making such predictions during the course of a trial. First, we obtain a point prediction by extrapolating the cumulative mortality into the future and selecting the date when the expected number of deaths is equal to the landmark number. Second, we use a Bayesian simulation scheme to generate a predictive distribution of milestone times; prediction intervals are quantiles of this distribution. We illustrate our method with an analysis of data from a trial of immunotherapy in the treatment of chronic granulomatous disease.
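The first, point-prediction method can be sketched under a constant-hazard (exponential) assumption: estimate the event rate from the data accumulated so far, then extrapolate the expected cumulative event count forward to the landmark. The event counts, follow-up, and at-risk numbers below are hypothetical.

```python
def predict_landmark_time(n_events, followup_years, landmark, n_at_risk):
    """Point prediction of the additional follow-up time until a landmark
    event count is reached, assuming a constant hazard. followup_years is
    the total patient-years of follow-up observed to date."""
    rate = n_events / followup_years          # events per patient-year
    remaining = landmark - n_events
    # Expected years until `remaining` more events occur among the
    # n_at_risk patients still being followed.
    return remaining / (rate * n_at_risk)

# Hypothetical interim data: 60 deaths in 400 patient-years; predict the
# extra follow-up needed to reach the 100th death with 250 still at risk.
years_to_landmark = predict_landmark_time(60, 400.0, 100, 250)
```

The second method described above replaces the point estimate of the rate with draws from its posterior distribution, turning this single prediction into a predictive distribution of milestone times.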

