Similar articles
20 similar articles found.
1.
2.
Interim analyses are routinely used to monitor accumulating data in clinical trials. When the objective of the interim analysis is to stop the trial if the trial is deemed futile, it must ideally be conducted as early as possible. In trials where the clinical endpoint of interest is only observed after a long follow-up, many enrolled patients may therefore have no information on the primary endpoint available at the time of the interim analysis. To facilitate earlier decision-making, one may incorporate early response data that are predictive of the primary endpoint (e.g., an assessment of the primary endpoint at an earlier time) in the interim analysis. Most attention so far has been given to the development of interim test statistics that include such short-term endpoints, but not to decision procedures. Moreover, existing tests perform poorly when information is scarce, e.g., due to rare events, when the cohort of patients with observed primary endpoint data is small, or when the short-term endpoint is a strong but imperfect predictor. In view of this, we develop an interim decision procedure based on the conditional power approach that utilizes the short-term and long-term binary endpoints in a framework that is expected to provide reliable inferences even when the primary endpoint is available for only a few patients, and that has the added advantage of allowing the use of historical information. The operational characteristics of the proposed procedure are evaluated, using simulation studies, for the phase III clinical trial that motivated this approach.
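As a point of reference for the decision rule described above, the snippet below sketches a plain conditional-power calculation for a two-arm trial with a binary long-term endpoint, using the usual normal (Brownian-motion) approximation and a "current trend" assumption. It is not the authors' procedure, which additionally borrows information from the short-term endpoint and from historical data; the interim counts and the planned sample size are made up.

```python
# Hedged sketch: conditional power for a binary long-term endpoint under the
# current-trend assumption.  Not the paper's procedure (no short-term or
# historical borrowing); interim counts and planned size below are invented.
from scipy.stats import norm

def cond_power(x_t, n_t, x_c, n_c, n_final, alpha=0.025):
    """Conditional power of the final one-sided z-test for p_t - p_c > 0."""
    p_t, p_c = x_t / n_t, x_c / n_c
    theta = p_t - p_c                                  # interim effect estimate
    v = p_t * (1 - p_t) + p_c * (1 - p_c)              # variance contributed per pair
    i_now, i_end = min(n_t, n_c) / v, n_final / v      # interim and final information
    num = norm.ppf(1 - alpha) * i_end ** 0.5 - theta * i_now - theta * (i_end - i_now)
    return 1 - norm.cdf(num / (i_end - i_now) ** 0.5)

# Hypothetical interim data: 18/40 vs 12/40 responders, 150 patients per arm planned
print(round(cond_power(18, 40, 12, 40, 150), 3))
```

A futility rule would then stop the trial if this value falls below a pre-specified threshold (say 10-20%), which is the kind of decision boundary the proposed procedure calibrates.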

3.
This paper discusses interim analysis of randomized clinical trials for which the primary endpoint is observed at a specific long-term follow-up time. In such trials, subjects yield direct information on the primary endpoint only once they have been followed through to the long-term follow-up time, potentially eliminating a large proportion of the accrued sample from an interim analysis of the primary endpoint. We advocate more efficient interim analysis of long-term endpoints by augmenting long-term information with short-term information on subjects who have not yet been followed through to the long-term follow-up time. While retaining the long-term endpoint as the subject of the analysis, methods of jointly analysing short- and long-term data are discussed for reversible binary endpoints. It is shown theoretically and by simulation that the use of short-term information improves the efficiency with which long-term treatment differences are assessed based on interim data. Sequential analysis of treatment differences is discussed based on spending functions, and is illustrated with a numerical example from a cholesterol treatment trial.
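For readers unfamiliar with the spending-function machinery mentioned at the end of this abstract, the sketch below computes one-sided group-sequential efficacy boundaries from a Lan-DeMets O'Brien-Fleming-type spending function, using the joint normality of the interim z-statistics. The information fractions are made up, and the paper's joint short-/long-term modelling is not reproduced.

```python
# Hedged sketch: Lan-DeMets alpha-spending boundaries for one-sided testing.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def obf_spend(t, alpha=0.025):
    """Cumulative one-sided alpha spent at information fraction t (O'Brien-Fleming type)."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def boundaries(info_fracs, alpha=0.025, spend=obf_spend):
    t = np.asarray(info_fracs, dtype=float)
    crit = [norm.ppf(1 - spend(t[0], alpha))]                 # first look: closed form
    for k in range(2, len(t) + 1):
        # correlation of interim z-statistics: corr(Z_i, Z_j) = sqrt(t_i / t_j), i < j
        corr = np.sqrt(np.minimum.outer(t[:k], t[:k]) / np.maximum.outer(t[:k], t[:k]))
        target = 1 - spend(t[k - 1], alpha)                   # P(no rejection through look k | H0)
        f = lambda b: multivariate_normal(np.zeros(k), corr).cdf(crit + [b]) - target
        crit.append(brentq(f, 0.0, 8.0))
    return crit

# Four equally spaced looks (made-up schedule)
print([round(c, 3) for c in boundaries([0.25, 0.5, 0.75, 1.0])])
```

The returned critical values are compared with the interim z-statistics; the last boundary sits close to the fixed-sample 1.96 because an O'Brien-Fleming-type function spends very little alpha early.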

4.
Although traditional phase II cancer trials are usually single-arm, with tumor response as the endpoint, and phase III trials are randomized and incorporate interim analyses with progression-free survival or another failure time as the endpoint, this paper proposes a new approach that seamlessly expands a randomized phase II study of response rate into a randomized phase III study of time to failure. This approach is based on advances in group sequential designs and joint modeling of the response rate and time to event. The joint modeling is reflected in the primary and secondary objectives of the trial, and the sequential design allows the trial to adapt to increasing information on response and survival patterns during the course of the trial and to stop early, on the basis of the data collected so far, either for conclusive evidence of efficacy of the experimental treatment or for futility in continuing the trial to demonstrate it.

5.
The Radiation Therapy Oncology Group (RTOG) has embarked on seven phase II or phase III multicentre clinical trials involving a quality of life component. Each quality of life trial uses questionnaires or examinations that have been tested for reliability and validity by independent investigators. Each trial includes questionnaires that examine the patient's physical, functional, social, and emotional status, and that measure a specific quality of life issue pertinent to the patient's diagnosis or treatment. Two trial designs have been implemented for studies with quality of life endpoints. One design involves companion trials to the primary treatment study that pertain solely to the quality of life endpoint. The second design integrates the quality of life component into the primary trial design. The RTOG has found a need to educate the individuals and institutions expected to administer the questionnaires and collect the quality of life data. Once the data have been collected, several methods for analysing the quality of life data are available. However, there is no single best method for analysing quality of life data; thus, more than one method should be used in order to provide insight into the data.

6.
Assessment of health-related quality of life (QOL) has become an important endpoint in many clinical trials of cancer therapy. Most of these studies entail multiple QOL scales that are assessed repeatedly over time. As a result, the problem of multiple comparisons is a primary analytic challenge in these trials. The use of summary measures and statistics both reduces the number of hypotheses tested and facilitates the interpretation of trial results where the primary question is 'Does the overall QOL differ between treatment arms?' I present two classes of summary measures that are sensitive to consistent trends in the same direction across multiple assessment times or multiple QOL scales. Missing data strongly influence the choice between the two classes: one class handles missing data on an individual basis, while the other uses model-based strategies. I present results from a clinical trial of adjuvant therapy for breast cancer analysed using summary measures, with a focus on the practical issues that affect these analysis strategies, such as missing data and the integration of QOL with efficacy endpoints such as survival.
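To make the summary-measure idea concrete, the toy analysis below standardizes simulated scores from several QOL scales and assessment times, averages them into one summary per patient, and compares arms with a t-test. The data, scale count and effect size are invented, and the paper's model-based strategies for missing data are not shown.

```python
# Hedged sketch: a simple per-patient summary measure of repeated QOL scores.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, n_scales, n_times = 60, 3, 4
arm = np.repeat([0, 1], n // 2)
# patients x scales x times, with a small simulated benefit in arm 1
qol = rng.normal(50, 10, size=(n, n_scales, n_times)) + 3 * arm[:, None, None]

z = (qol - qol.mean(axis=0)) / qol.std(axis=0)    # standardize each scale/time combination
summary = np.nanmean(z.reshape(n, -1), axis=1)    # one summary value per patient
print(ttest_ind(summary[arm == 1], summary[arm == 0]))
```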

7.
This brief overview of outcomes in studies on the primary and secondary prevention of CHD with drugs influencing cholesterol-lipid-lipoprotein metabolism or with platelet-influencing drugs indicates the following conclusions: (1) The central finding of the European trial on primary prevention with clofibrate (significant adverse effects on long-term mortality from all causes) compels the conclusion that this drug has no place in the broad effort to reduce CHD risk by lowering plasma lipids-lipoproteins. (2) No other disease endpoint data are as yet available on lipid reduction by other drugs for primary prevention; data from a U.S. cholestyramine trial are due in 1983 or 1984. (3) Results of the Coronary Drug Project with lipid-influencing drugs for secondary prevention in middle-aged post-MI men were negative for 5.0 and 2.5 mg/day mixed conjugated equine estrogens, and for dextrothyroxine, with early termination of all three of these regimens because of possible adverse effects. At the scheduled end of the trial, clofibrate showed no evidence of benefit and some signs of adverse effects. Nicotinic acid showed no evidence of benefit with regard to the primary CDP endpoint, all-cause mortality, but it yielded a significant reduction in rates of nonfatal MI, nonfatal MI + CHD death, and stroke events. (4) Combined therapy with resin + nicotinic acid has a much greater capacity, over and above diet, than any single lipid-lowering drug to reduce markedly all atherogenic lipids-lipoproteins while simultaneously raising HDL, but no disease endpoint data are available on the benefit/risk ratio of this regimen. (5) Eight secondary prevention trials with platelet-influencing drugs indicate encouraging but not statistically significant results in five of six aspirin trials, in a trial with aspirin + dipyridamole, and in a trial with sulfinpyrazone. (6) Viewed in perspective, these discouraging data on lipid-influencing drugs for the primary and secondary prevention of CHD, and these equivocal data with platelet-influencing drugs for secondary prevention, reinforce the fundamental conclusion that the main strategic thrust for the control of epidemic premature CHD must be primary prevention by nutritional-hygienic means, i.e., avoidance and correction of the lifestyles (especially "rich" diet and cigarette smoking, as well as sedentary living and incongruent behavior) that are the cause of the mass problem.

8.
All trials use protocols to standardize practice within and between trial centres and to enable replication of an experiment across space and time. However, while 'centre effects' have been noted in the literature, the processes and mechanisms by which trial staff convert a protocol into practice, and thereby create 'evidence', remain relatively understudied. We undertook a qualitative investigation of a multi-centre, UK-based insulin trial in which differences were found between participating centres in their attainment of the trial's primary clinical endpoint (HbA1c), a measure of patients' average blood glucose control. In-depth interviews, conducted in 2009 with 12 research nurses and nine clinicians recruited from 11 centres, explored their views about trial participation and their experiences of trial delivery from inception to closeout. Staff accounts highlighted mixed agendas and/or ambivalent views about involvement in pharmaceutically funded trials, and discursive and temporal strategies by which they attempted to separate research from clinical practice and to convert commercially funded work into better patient care. Staff in different centres also reported divergent practices by which they recruited patients into the trial and 'enacted' the protocol to enhance trial outcomes and/or to individualise and improve patient care. By exploring and comparing the experiences of staff who worked on the same trial but in different centres, this study highlights the importance of understanding the enactment of protocols in ways that situate individual practices within both local (institutional) and global contexts.

9.
Treatment non-compliance and missing data are common problems in clinical trials. Non-compliance is a broad term covering any kind of deviation from the assigned treatment protocol, such as dose modification, treatment discontinuation or switching, and it often results in missing values. Missing values and treatment non-compliance may bias study results. Follow-up of all patients until the planned end of the treatment period, irrespective of their protocol adherence, may provide useful information on the effectiveness of a study drug that takes the actual compliance into account. In this paper, we consider non-compliance as discontinuation of treatment and assume that the endpoint of interest is recorded for some non-complying patients after treatment discontinuation. As a result, the patient's longitudinal profile is divisible into on- and off-treatment observations. Within the framework of depression trials, which usually show a considerably high rate of dropout, we compare different analysis strategies that include both on- and off-treatment observations, to gain insight into how the additional use of off-treatment data may affect the trial's outcome. We compare naïve strategies, which simply ignore off-treatment data or treat on- and off-treatment data in the same way, with more complex strategies based on piecewise linear mixed models, which assume different treatment effects for on- and off-treatment data. We show that naïve strategies may considerably overestimate treatment effects. It is therefore worthwhile to follow up as many patients as possible until the end of their planned treatment period, irrespective of compliance, and to include all available data in an analysis that accounts for the different treatment conditions.
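As a rough illustration of the model-based strategies mentioned above, the sketch below fits a piecewise linear mixed model with separate fixed-effect slopes for the on- and off-treatment phases to simulated long-format data. The data-generating values, visit schedule and model formula are assumptions for illustration, not the models used in the paper.

```python
# Hedged sketch: piecewise linear mixed model with on- and off-treatment slopes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(200):
    treat = pid % 2                      # 0 = control, 1 = active (alternating for simplicity)
    t_stop = rng.choice([3, 6, 6, 6])    # some patients stop treatment early
    b_i = rng.normal(0, 2)               # random intercept
    for t in range(7):                   # weekly visits 0..6
        t_on, t_off = min(t, t_stop), max(0, t - t_stop)
        y = 20 + b_i - 1.0 * t_on - 1.2 * treat * t_on - 0.3 * t_off + rng.normal(0, 1.5)
        rows.append(dict(patient=pid, treat=treat, t_on=t_on, t_off=t_off, y=y))
df = pd.DataFrame(rows)

# Separate fixed effects for the on- and off-treatment phases, random intercept per patient
m = smf.mixedlm("y ~ t_on + t_off + treat:t_on + treat:t_off", df, groups=df["patient"]).fit()
print(m.summary())
```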

10.
Regardless of whether a statistician believes in letting a data set speak for itself through nominal p-values or believes in strict alpha conservation, the interpretation of experiments that are negative for the primary endpoint but positive for secondary endpoints is the source of some angst. The purpose of this paper is to apply the notion of prospective alpha allocation in clinical trials to this difficult circumstance. An argument is presented for differentiating between the alpha for the experiment ('experimental alpha', or alpha(E)) and the alpha for the primary endpoint ('primary alpha', or alpha(P)), and notation is presented which succinctly describes the findings of a clinical trial in terms of its conclusions. Capping alpha(E) at 0.10 and alpha(P) at 0.05 conserves sample size and preserves consistency with the strength of evidence for the primary endpoint of clinical trials. In addition, a case is presented for the well-defined circumstances in which a trial that did not reject the null hypothesis for the primary endpoint but does reject the null hypothesis for at least one of the secondary endpoints may be considered positive in a manner consistent with conservative alpha management.
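A worked example of the allocation scheme, under the caps quoted above: with alpha(E) = 0.10 and alpha(P) = 0.05, the remaining 0.05 can be pre-specified for the secondary endpoints. The Bonferroni split and the endpoint names below are illustrative assumptions, not prescriptions from the paper.

```python
# Hedged sketch of prospective alpha allocation: cap experiment-wise alpha at 0.10,
# reserve 0.05 for the primary endpoint, split the rest across secondaries (Bonferroni
# here is an assumption; the endpoints are hypothetical).
alpha_E, alpha_P = 0.10, 0.05
secondaries = ["stroke", "MI", "CV death"]
alpha_S = (alpha_E - alpha_P) / len(secondaries)    # about 0.0167 each
print({s: round(alpha_S, 4) for s in secondaries})
```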

11.
Phase I dose-finding trials typically are conducted using adaptive rules that select dose levels for successive patient cohorts based on the outcomes of patients treated previously in the trial. When patient outcome cannot be observed immediately after treatment, the problem arises of how to deal with new patients while waiting to observe the current patient cohort's outcomes. We consider two alternative approaches to this problem in the context of a phase I trial conducted using the continual reassessment method. With the first approach, a patient requiring treatment before the next cohort opens is treated off protocol with standard therapy, and otherwise waits until the next cohort opens. The second approach treats each patient immediately upon arrival, at the dose recommended on the basis of currently available data. We compare these two approaches by simulation under varying dose-toxicity curves, accrual rates, cohort sizes and early stopping rules. We evaluate patient waiting time, trial duration, the number of patients treated off protocol, and the probabilities of toxicity and of selecting the correct dose. We also study three strategies for assigning patients to trials when two or more phase I trials may be ongoing simultaneously. Based on our results, we provide practical guidelines for deciding among these approaches and strategies in a given clinical setting.
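For orientation, the snippet below shows the core continual reassessment method (CRM) update on which such designs are built: a one-parameter power model with a normal prior, evaluated on a grid, recommends the dose whose posterior-mean toxicity is closest to the target. The skeleton, prior standard deviation and patient data are made up, and the paper's comparison of waiting versus immediate assignment is not reproduced.

```python
# Hedged sketch of a CRM dose-assignment update (grid-based posterior).
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])   # prior guesses of DLT rates (assumed)
target = 0.25
grid = np.linspace(-4, 4, 801)                        # grid for the model parameter a
prior = np.exp(-0.5 * (grid / 1.34) ** 2)             # N(0, 1.34^2) prior, unnormalized

def recommend(dose_idx, tox):
    """Posterior-mean DLT rate per dose and the recommended next dose."""
    p = skeleton[:, None] ** np.exp(grid)             # p_i(a) for every dose and grid point
    like = np.ones_like(grid)
    for d, y in zip(dose_idx, tox):
        like *= p[d] ** y * (1 - p[d]) ** (1 - y)
    post = like * prior
    post /= post.sum()                                # normalize on the uniform grid
    p_hat = p @ post                                  # posterior mean toxicity per dose
    return p_hat, int(np.argmin(np.abs(p_hat - target)))

# Hypothetical data: five patients treated at dose levels 0,0,1,1,1 with one DLT
p_hat, next_dose = recommend([0, 0, 1, 1, 1], [0, 0, 0, 0, 1])
print(np.round(p_hat, 3), "next dose index:", next_dose)
```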

12.
In many phase II trials in solid tumours, patients are assessed using endpoints based on the Response Evaluation Criteria in Solid Tumours (RECIST) scale. Often, analyses are based on the response rate: the proportion of patients who have an observed tumour shrinkage above a predefined level and no new tumour lesions. The augmented binary method has been proposed to improve the precision of the estimator of the response rate; it involves modelling the tumour shrinkage directly, to avoid dichotomising it. However, in many trials patients are followed until progression and their best observed RECIST outcome is used as the primary endpoint. In this paper, we propose a method that extends the augmented binary method so that it can be used when the outcome is best observed response. We show through simulated data and data from a real phase II cancer trial that this method improves power in both single-arm and randomised trials. The average gain in power compared to the traditional analysis is equivalent to approximately a 35% increase in sample size. A modified version of the method is proposed to reduce the computational effort required, and we show that this modified method retains much of the efficiency advantage.
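A much-simplified, single-arm illustration of the underlying idea follows: rather than dichotomising tumour shrinkage, model it as a continuous variable and combine the fitted model with the probability of no new lesions to estimate the response rate. The normal model, independence assumption and simulated data are illustrative only; this is not the paper's extension to best observed response.

```python
# Hedged, toy version of the "model the continuous component" idea behind the
# augmented binary method.  Single arm, normal model, simulated data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 50
shrink = rng.normal(-0.20, 0.25, n)        # simulated log ratio of tumour size
new_lesion = rng.binomial(1, 0.10, n)      # 1 = new lesion appeared

# Traditional binary analysis: responder if shrinkage <= -30% and no new lesion
resp = (shrink <= np.log(0.7)) & (new_lesion == 0)
p_binary = resp.mean()

# "Augmented" analysis: model the continuous component instead of dichotomising it
mu, sd = shrink.mean(), shrink.std(ddof=1)
p_aug = norm.cdf((np.log(0.7) - mu) / sd) * (1 - new_lesion.mean())

print(round(p_binary, 3), round(p_aug, 3))
```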

13.
Many clinical trials follow patients for several different types of survival endpoints, such as mortality, disease progression, and time until dose-limiting toxicity. Conduct of such trials often requires that the accumulating data be reviewed periodically to protect the safety of participating patients and possibly identify early treatment differences. This paper proposes a group sequential method for assessing multiple survival endpoints using repeated confidence intervals. Counting processes for each survival endpoint are used to estimate the correlations both between outcomes and between times of interim analysis. The methods are illustrated using a clinical trial comparing two treatments for PCP prevention in AIDS patients. The operating characteristics of three strategies for constructing confidence intervals are assessed and compared in a simulation study.
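For a single endpoint, a repeated confidence interval simply replaces the fixed-sample critical value with the group-sequential critical value for the current look, as in the hypothetical sketch below; the paper's counting-process estimates of the correlations between endpoints and looks are not shown.

```python
# Hedged sketch: repeated confidence interval for a hazard ratio at one interim look.
import numpy as np

def repeated_ci(log_hr, se, crit_k):
    """CI for the hazard ratio using the look-specific group-sequential critical value."""
    lo, hi = log_hr - crit_k * se, log_hr + crit_k * se
    return np.exp(lo), np.exp(hi)

# Hypothetical interim estimate HR = 0.75 with SE(log HR) = 0.18, critical value 2.96 at look 2
print(tuple(round(x, 2) for x in repeated_ci(np.log(0.75), 0.18, 2.96)))
```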

14.
Subfebrile temperatures and fever in acute stroke are associated with poor functional outcome. A 1°C rise in body temperature may double the risk of a poor outcome in patients who are admitted within 12 hours of the onset of symptoms. Two randomised double-blind clinical trials in patients with acute ischaemic stroke have shown that treatment with a daily dose of 6 g of paracetamol results in a small but rapid and potentially worthwhile reduction in body temperature of 0.3°C (95% CI: 0.1-0.5). It has been hypothesized that early antipyretic therapy reduces the risk of death or dependency in patients with acute stroke, even if they are normothermic. For this reason, a multicentre, randomized, double-blind clinical trial comparing high-dose paracetamol with placebo in 2500 patients has been launched. This study has been named 'Paracetamol (Acetaminophen) In Stroke' (PAIS). The primary outcome is death or dependency at three months. The study protocol is simple, and the amount of data to be gathered is limited. The trial will run for four years.

15.
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data.
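For contrast with the problematic rules studied in the paper, the snippet below sketches the standard blinded reassessment that the authors do not call into question: estimate the lumped variance from the blinded interim data on the primary endpoint and recompute the per-arm sample size for the originally assumed effect. The interim data and design values are made up.

```python
# Hedged sketch: blinded sample size reassessment from the pooled (one-sample) variance.
import numpy as np
from scipy.stats import norm

def reassess_n(blinded_y, delta, alpha=0.025, power=0.9):
    """Per-arm sample size recomputed from the blinded pooled variance estimate."""
    s2 = np.var(blinded_y, ddof=1)                 # lumped (blinded) variance
    return int(np.ceil(2 * s2 * (norm.ppf(1 - alpha) + norm.ppf(power)) ** 2 / delta ** 2))

rng = np.random.default_rng(3)
interim = rng.normal(0.0, 1.2, size=120)           # simulated blinded interim responses
print(reassess_n(interim, delta=0.5))
```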

16.
PURPOSE: We wanted to investigate the frequency of undisclosed changes in the outcomes of randomized controlled trials (RCTs) between trial registration and publication.
METHODS: Using a retrospective, nonrandom, cross-sectional study design, we investigated RCTs published in consecutive issues of 5 major medical journals during a 6-month period and their associated trial registry entries. Articles were excluded if they did not have an available trial registry entry, did not have analyzable outcomes, or were secondary publications. The primary outcome was the proportion of publications in which the primary outcome of the trial was, without disclosure, changed between that recorded in the trial registry and that reported in the final publication. The secondary outcome was the proportion of publications in which a secondary outcome of the trial was changed without disclosure.
RESULTS: We reviewed 158 reports of RCTs and included 110 in the analysis. In 34 (31%), a primary outcome had been changed, and in 77 (70%), a secondary outcome had been changed.
CONCLUSIONS: There are substantial and important undisclosed changes made to the outcomes of published RCTs between trial registration and publication. This finding has important implications for the interpretation of trial results. Disclosure and discussion of changes would improve transparency in the performance and reporting of trials.

17.
I present a method of sequential analysis for randomized clinical trials that allows the use of all prior data in a trial to determine the use and weighting of subsequent observations. One continues to assign subjects until one has 'used up' all the variance of the test statistic. There are many strategies for determining the weights, including Bayesian methods (though the proposal is a frequentist design). I explore further the self-designing aspect of the randomized trial to note that in some cases it makes good sense (i) to change the weighting on components of a multivariate endpoint, (ii) to add or drop treatment arms (especially in a parallel group dose ranging/efficacy/safety trial), (iii) to select sites to use as the trial goes on, (iv) to change the test statistic, and (v) even to rethink the whole drug development paradigm to shorten drug development time while keeping current standards for the level of evidence necessary for approval.
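A minimal sketch of the variance-spending idea is given below: stagewise z-statistics are combined as a weighted sum whose squared weights total one, with each weight chosen from earlier data only, so the combined statistic is standard normal under the null. The particular weighting rule and the stage results are invented for illustration.

```python
# Hedged sketch of a self-designing (variance-spending) combination test.
import numpy as np
from scipy.stats import norm

def self_designing_test(stage_z, first_weight=0.4):
    """Combine stagewise z-statistics with data-driven weights; sum of squared weights = 1."""
    weights = [first_weight]
    remaining = 1.0 - first_weight ** 2
    z_total = first_weight * stage_z[0]
    for z in stage_z[1:-1]:
        # weight for this stage depends only on earlier stages (z_total so far)
        frac = 0.8 if z_total > 1.0 else 0.4      # spend variance faster if evidence looks strong
        w = np.sqrt(remaining * frac)
        z_total += w * z
        remaining -= w ** 2
        weights.append(w)
    w_last = np.sqrt(remaining)                   # final stage uses up the remaining variance
    z_total += w_last * stage_z[-1]
    weights.append(w_last)
    return z_total, weights, z_total > norm.ppf(0.975)

# Four hypothetical stagewise z-statistics
print(self_designing_test([1.1, 0.8, 1.4, 0.9]))
```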

18.
The predictive probability of success of a future clinical trial is a key quantitative tool for decision-making in drug development. It is derived from prior knowledge and available evidence, and the latter typically comes from the accumulated data on the clinical endpoint of interest in previous clinical trials. However, a surrogate endpoint may be used as the primary endpoint in early development, in which case no or only limited data are collected on the clinical endpoint of interest. We propose a general, reliable, and broadly applicable methodology to predict the success of a future trial from surrogate endpoints, in a way that makes the best use of all the available evidence. The predictions are based on an informative prior, called the surrogate prior, derived from the results of past trials on one or several surrogate endpoints. In a Bayesian framework, this prior can be combined with any available data from past trials on the clinical endpoint of interest. Two methods are proposed to address a potential discordance between the surrogate prior and the data on the clinical endpoint. We investigate the patterns of behavior of the predictions in a comprehensive simulation study, and we present an application to the development of a drug in multiple sclerosis. The proposed methodology is expected to support decision-making in many different situations, since the use of predictive markers is important for accelerating drug development and for selecting promising drug candidates better and earlier.
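Assuming a normal approximation for the future effect estimate and a normal prior on the true effect (here standing in for the surrogate prior), the predictive probability of success reduces to a one-line calculation, sketched below with made-up numbers; the paper's methods for handling discordance with clinical-endpoint data are not shown.

```python
# Hedged sketch: predictive probability of success (assurance) with a normal prior.
import numpy as np
from scipy.stats import norm

def prob_of_success(prior_mean, prior_sd, n_per_arm, sigma, alpha=0.025):
    """P(future two-arm trial is significant), integrating over the prior on the effect."""
    se = sigma * np.sqrt(2.0 / n_per_arm)          # SE of the future effect estimate
    crit = norm.ppf(1 - alpha) * se                # estimate needed for significance
    return 1 - norm.cdf((crit - prior_mean) / np.sqrt(se ** 2 + prior_sd ** 2))

# Hypothetical surrogate-based prior: effect 0.3 (SD 0.15), outcome SD 1, 150 patients per arm
print(round(prob_of_success(0.3, 0.15, 150, 1.0), 3))
```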

19.
BACKGROUND: Recent lipid therapy clinical trials confirm the treatment recommendations of the National Cholesterol Education Program (NCEP) and extend the proven benefits of primary and secondary prevention to women, the elderly, and high-risk patients with average cholesterol levels.
OBJECTIVE: The purpose of this study was to estimate and compare the size of the primary and secondary prevention populations if lipid treatment recommendations were based on (1) the NCEP treatment guidelines, (2) the baseline characteristics of the primary prevention populations in the West of Scotland Coronary Prevention Study (WOSCOPS) and the Air Force/Texas Coronary Atherosclerosis Prevention Study (AFCAPS), or (3) the baseline characteristics of the secondary prevention populations in the Cholesterol and Recurrent Events (CARE) trial and the Scandinavian Simvastatin Survival Study (4S).
METHODS: Phase 1 data from the third National Health and Nutrition Examination Survey were used for analysis.
RESULTS: Under the current NCEP recommendations, 18.6 million and 5.8 million adults would be candidates for primary and secondary prevention intervention, respectively. If the treatment guidelines were instead based on the characteristics of the WOSCOPS or AFCAPS patients, the size of the primary prevention population would increase by 12.2 million, to 30.8 million. Extending the recommendation based on the characteristics of the CARE or 4S study patients would increase the secondary prevention population by 2 million, to 7.8 million.
CONCLUSION: Recent clinical trials suggest that more than 15% of U.S. adults would benefit from lipid-lowering therapy for primary and secondary prevention of heart disease. Cost-effectiveness evaluation of lipid-lowering strategies should be used as a guide to treatment decisions.

20.
Palliative medicine is a relatively new specialty that focuses on preventing and relieving the suffering of patients facing life-threatening illness. For cancer patients, clinical trials have been carried out to compare concurrent palliative care with usual cancer care in terms of longitudinal measurements of quality of life (QOL) until death, with overall survival usually treated as a secondary endpoint. It is known that the QOL of patients with advanced cancer decreases as death approaches; however, in previous clinical trials this association has generally not been taken into account when making inferences about the effect of an intervention on QOL or survival. We developed a new joint modeling approach, a terminal decline model, to study the trajectory of repeated measurements and survival in a recently completed palliative care study. This approach takes the association of survival and QOL into account by modeling QOL retrospectively from the time of death. Patients whose death times are censored are incorporated into the analysis through a marginal likelihood. Our approach has two submodels: a piecewise linear random-intercept model with serial correlation and measurement error for the retrospective trajectory of QOL, and a piecewise exponential model for the survival distribution. Maximum likelihood estimators of the parameters are obtained by maximizing the closed-form expression of the log-likelihood function. An explicit expression for quality-adjusted life years can also be derived from our approach. We present a detailed data analysis of our previously reported palliative care randomized clinical trial.
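A simplified sketch of the retrospective-time idea behind the QOL submodel follows: assessment times are re-expressed as time before death, and a piecewise linear random-intercept model with a knot 60 days before death is fitted. The simulated data, knot location and statsmodels fit are assumptions; the serial correlation, the piecewise exponential survival submodel and the marginal-likelihood handling of censored deaths are omitted.

```python
# Hedged sketch: QOL modelled on a backward (time-before-death) scale with two slopes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for pid in range(150):
    arm = pid % 2
    death = rng.uniform(90, 360)                      # days from enrolment to death
    b_i = rng.normal(0, 4)                            # random intercept
    for visit in np.arange(0, death, 30):             # monthly QOL assessments
        back = death - visit                          # time before death
        near, far = min(back, 60.0), max(back - 60.0, 0.0)
        qol = 40 + b_i + 3 * arm + 0.15 * near + 0.02 * far + rng.normal(0, 3)
        rows.append(dict(patient=pid, arm=arm, near=near, far=far, qol=qol))
df = pd.DataFrame(rows)

# Separate slopes for the last 60 days of life ("near") and earlier follow-up ("far")
m = smf.mixedlm("qol ~ arm + near + far", df, groups=df["patient"]).fit()
print(m.params)
```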
