Similar documents
1.
2.
Purpose: Many have argued that mathematical formulae for estimating sample size are unnecessarily complex, so much so that researchers may be reluctant to seek statistical advice.

Method: This paper reviews methods of sample size estimation, arguing that two formulae (one based on comparison of proportions of 'successes', the other on comparison of means of normally distributed data) suffice for many situations. The case is made using examples drawn mainly from clinical trials research; however, the methods outlined can also be used in epidemiology, specifically in case-control and cohort studies, with no loss of information.

Results: For the situations outlined, worked examples are provided.

Conclusions: Sample size estimation need not be a complex process. Simple techniques exist which enable the clinician and the statistician to work together. Continued dialogue between both parties is required so that good ideas do not go to waste.
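
To make the two formulae concrete, the sketch below implements the standard normal-approximation versions (per-group sample size for comparing two means and for comparing two proportions of 'successes'). These are generic textbook formulae with illustrative inputs, not necessarily the exact expressions or worked examples given in the paper.

```python
from math import ceil
from statistics import NormalDist


def n_per_group_means(sigma, delta, alpha=0.05, power=0.80):
    """Per-group n to detect a true difference in means `delta`, with common SD `sigma`."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_b = z.inv_cdf(power)           # 1 - type II error
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)


def n_per_group_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n to detect a difference between two proportions of 'successes'."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)


# Illustrative inputs: 5% two-sided alpha, 80% power
print(n_per_group_means(sigma=10, delta=5))        # 63 per group
print(n_per_group_proportions(p1=0.30, p2=0.45))   # 160 per group
```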

3.
Scand J Caring Sci 2013; 27: 487–492. The large sample size fallacy. Background: Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. Aim: The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Results: Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Conclusion: Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature.
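
The fallacy is easy to reproduce with simulated data. The sketch below (not drawn from any of the studies reviewed) contrasts a p-value with a standardized effect size (Cohen's d) when the sample is very large and the true effect is trivial.

```python
import random
from statistics import NormalDist, mean, stdev


def cohens_d(x, y):
    """Standardized mean difference (pooled-SD version)."""
    sp = (((len(x) - 1) * stdev(x) ** 2 + (len(y) - 1) * stdev(y) ** 2)
          / (len(x) + len(y) - 2)) ** 0.5
    return (mean(x) - mean(y)) / sp


def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z approximation."""
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    z = (mean(x) - mean(y)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))


random.seed(1)
n = 50_000                                          # very large sample per group
a = [random.gauss(100.0, 15.0) for _ in range(n)]
b = [random.gauss(100.5, 15.0) for _ in range(n)]   # trivially small true difference

print(f"p = {two_sample_p(a, b):.2e}, d = {cohens_d(a, b):.3f}")
# The p-value is tiny, yet d is only about 0.03, which is practically negligible.
```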

4.
Regulatory authorities require new drugs to be investigated using a so-called "thorough QT/QTc study" to identify compounds with a potential of influencing cardiac repolarization in man. Presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified allowing the size and thus the cost of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECG) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach with reconciliation of measurement differences between different cardiologists and aligning the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR data-sets was processed with eight different heart rate corrections ranging from Bazett and Fridericia corrections to the individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, standard deviation of individual mean QTc values and mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05. Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad-hoc heart rate correction still lead to requirements of >150 subjects, the combination of best data quality with most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
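
The dependence of study size on QTc variability can be illustrated with the generic normal-approximation sample size formulae below. The standard deviations used are rough illustrative values chosen to echo the abstract's figures, not the study's actual variance components.

```python
from math import ceil
from statistics import NormalDist


def n_parallel_per_group(sd_between, delta=5.0, alpha=0.05, power=0.80):
    """Subjects per group for a between-subject (parallel) comparison of mean QTc change."""
    z = NormalDist()
    k = (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
    return ceil(2 * k * sd_between ** 2 / delta ** 2)


def n_crossover(sd_within, delta=5.0, alpha=0.05, power=0.80):
    """Subjects for a within-subject (crossover) comparison; the paired difference of
    two periods with per-period SD `sd_within` is taken to have SD sd_within * sqrt(2)."""
    z = NormalDist()
    k = (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
    return ceil(2 * k * sd_within ** 2 / delta ** 2)


# Illustrative SDs only
print(n_parallel_per_group(sd_between=15.0))   # 142 per group, i.e. the ">140" scale above
print(n_crossover(sd_within=5.2))              # 17, of the same order as the 18 quoted above
```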

5.
For high-throughput screening in drug development, methods that can reduce analytical work are desirable. Pooling of plasma samples from an individual subject in the time domain to yield a single sample for analysis has been used to estimate the area under the concentration-time curve (AUC). We describe a pooling procedure for the estimation of the area under the first moment curve (AUMC). The mean residence time (MRT) and, where intravenous dosing has been used, the steady-state volume of distribution can then be determined. Plasma samples from pharmacokinetic studies in dogs and humans analyzed in our laboratory were used to validate the pooling approach. Each plasma sample containing a prokinetic macrolide and three of its metabolites was first analyzed separately, and AUCs and AUMCs were calculated using the linear trapezoidal rule. The procedures for the estimation of AUC by sample pooling have been reported by Riad et al. [Pharm. Res. (1991) vol. 8, pp. 541-543]. For the estimation of AUMC, the volume taken from each of the n samples to form a pooled sample is proportional to t_n(t_{n+1} - t_{n-1}), except at t_0, where the aliquot volume is 0, and at t_last, where the aliquot volume is proportional to t_last(t_last - t_{last-1}). AUMC to t_last is equal to C_pooled × T²/2, where T is the overall experimental time (t_last - t_0). The ratio between AUMC and AUC yields the mean residence time (MRT). Bivariate (orthogonal) regression analysis was used to assess agreement between the pooling method and the linear trapezoidal rule. Bias and root mean square error were used to validate the pooling method. Orthogonal regression analysis of the AUMC values determined by pooling (y-axis) and those estimated by the linear trapezoidal rule (x-axis) yielded a slope of 1.08 and r² of 0.994 for the dog samples; slope values ranged from 0.862 to 0.928 and r² values from 0.838 to 0.988 for the human samples. Bias, expressed as a percentage, ranged from -25.1% to 14.8% with an overall average of 1.40%. The results support the use of a pooled-sample technique in quantitating the average plasma concentration to estimate areas under the curve and areas under the first moment curve over the sampling time period. Mean residence times can then be calculated.
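
The pooling arithmetic can be checked numerically. The sketch below uses hypothetical concentration data (not the dog or human samples from the study) and verifies that a pooled sample assembled with the stated aliquot weights reproduces the linear trapezoidal AUMC exactly when sampling starts at t_0 = 0.

```python
def trapezoid(times, conc, moment=False):
    """Linear trapezoidal AUC (or AUMC when moment=True) up to the last time point."""
    y = [t * c for t, c in zip(times, conc)] if moment else conc
    return sum((y[i] + y[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))


def pooled_aumc(times, conc):
    """AUMC estimated as if a single pooled sample had been assayed.

    Aliquot volume from sample n is proportional to t_n * (t_{n+1} - t_{n-1}),
    zero at t_0 and proportional to t_last * (t_last - t_{last-1}) at the end;
    AUMC = C_pooled * T^2 / 2, where T = t_last - t_0.
    """
    w = [0.0]
    for i in range(1, len(times) - 1):
        w.append(times[i] * (times[i + 1] - times[i - 1]))
    w.append(times[-1] * (times[-1] - times[-2]))
    c_pooled = sum(wi * ci for wi, ci in zip(w, conc)) / sum(w)   # what the assay would read
    T = times[-1] - times[0]
    return c_pooled * T ** 2 / 2


# Hypothetical concentration-time profile, just to show the two estimates agree
times = [0, 0.5, 1, 2, 4, 6, 8, 12]
conc = [0.0, 8.2, 9.5, 8.1, 5.3, 3.4, 2.2, 0.9]
print(trapezoid(times, conc, moment=True), pooled_aumc(times, conc))
```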

6.
7.
BACKGROUND: In many research papers, pilot studies are only reported as a means of justifying the methods. This justification might refer to the overall research design, or simply to the validity and reliability of the research tools. It is unusual for reports of pilot studies to include practical problems faced by the researcher(s). Pilot studies are relevant to best practice in research, but their potential for other researchers appears to be ignored. OBJECTIVE: The primary aim of this study was to identify the most appropriate method for conducting a national survey of maternity care. METHODS: Pilot studies were conducted in five hospitals to establish the best of four possible methods of approaching women, distributing questionnaires and encouraging the return of these questionnaires. Variations in the pilot studies included (a) whether or not the questionnaires were anonymous, (b) the staff involved in distributing the questionnaires and (c) whether questionnaires were distributed via central or local processes. For this purpose, five maternity hospitals of different sizes in Scotland were included. RESULTS: Problems in contacting women as a result of changes in the Data Protection Act (1998) required us to rely heavily on service providers. However, this resulted in a number of difficulties. These included poor distribution rates in areas where distribution relied upon service providers, unauthorized changes to the study protocol and limited or inaccurate information regarding the numbers of questionnaires distributed. CONCLUSIONS: The pilot raised a number of fundamental issues related to the process of conducting a large-scale survey, including the method of distributing the questionnaire, gaining access to patients, and reliance on 'gatekeepers'. This paper highlights the lessons learned as well as the balancing act of using research methods in the most optimal way under the combined pressure of time, ethical considerations and the influences of stakeholders. Reporting the kinds of practical issues that occur during pilot studies might help others avoid similar pitfalls and mistakes.

8.
Scand J Caring Sci 2010; 24: 755–763.
Case managers for frail older people: a randomised controlled pilot study. Aim: The aim was to test sampling and explore sample characteristics in a pilot study using a case management intervention for older people with functional dependency and repeated contact with the healthcare services, as well as to investigate the effects of the intervention on perceived health and depressed mood after 3 months. The aim was also to explore internal consistency in the life satisfaction index Z, the activities of daily living-staircase and the Geriatric Depression Scale-20. Method: This pilot study was carried out in a randomised controlled design with repeated follow-ups. In all, 46 people were consecutively and randomly assigned to either an intervention (n = 23) or a control (n = 23) group. Two nurses worked as case managers and carried out the intervention, which consisted of four parts. Result: No differences were found between the groups at baseline. The results showed the participants had low life satisfaction (median 14 vs. 12), several health complaints (median 11) and a high score on the Geriatric Depression Scale (median 6) at baseline, indicating a risk of depression. No significant effects were observed regarding depressed mood or perceived health between or within groups at follow-up after 3 months. Cronbach's alpha showed satisfactory internal consistency for group comparisons. Conclusions: The sampling procedure led to similar groups. The life satisfaction, functional dependency and symptoms of depression measures were reliable to use. No changes in perceived health and symptoms of depression were found after 3 months, indicating that it may be too early to expect effects. The low depression score is noteworthy and requires further research.

9.
Pilot studies play an important role in health research, but they can be misused, mistreated and misrepresented. In this paper we focus on pilot studies that are used specifically to plan a randomized controlled trial (RCT). Citing examples from the literature, we provide a methodological framework in which to work, and discuss reasons why a pilot study might be undertaken. A well-conducted pilot study, giving a clear list of aims and objectives within a formal framework, will encourage methodological rigour, ensure that the work is scientifically valid and publishable, and will lead to higher-quality RCTs. It will also safeguard against pilot studies being conducted simply because of small numbers of available patients.

10.
Rationale, aims and objectives: Population-based randomized controlled trials (RCTs) often involve enormous costs and long-term follow-up to evaluate primary end points. An analytical decision-simulation model for sample size and effectiveness projections based on primary and surrogate end points is necessary before planning a population-based RCT. Method: Based on a study design similar to two previous RCTs, transition rates were estimated using a five-state natural history model [normal, preclinical detection phase (PCDP) Dukes' A/B, PCDP Dukes' C/D, clinical Dukes' A/B and clinical Dukes' C/D]. The Markov cycle tree was assigned transition parameters and variables related to screening and survival that simulated the results of 10-year follow-up in the absence of screening for a hypothetical cohort aged 45–74 years. The corresponding screened arm simulated the results after the introduction of population-based screening for colorectal cancer with the fecal occult blood test in a stop-screen design. Results: The mean sojourn times under the natural course of the five-state Markov model were estimated as 2.75 years for preclinical Dukes' A/B and 1.38 years for preclinical Dukes' C/D. The expected reductions in mortality and in Dukes' C/D cancers were 13% (95% confidence interval: 7–19%) and 26% (95% confidence interval: 20–32%), respectively, given a 70% acceptance rate and a 90% colonoscopy referral rate. The sample sizes required were 86 150 and 65 592 subjects for the primary and the surrogate end point, respectively, given an incidence rate of up to 0.0020 per year. Conclusions: The sample sizes required for the primary and surrogate end points and the projected effectiveness of fecal occult blood test screening for colorectal cancer were developed. Both are very important for planning a population-based RCT.
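
As a highly simplified illustration of a Markov cohort projection of this kind, the sketch below propagates a disease-free cohort through a five-state chain for ten one-year cycles. The onset rate, the exit rates derived from the quoted mean sojourn times, and especially the branching fractions between states are illustrative assumptions, not the transition parameters estimated in the study.

```python
import numpy as np

# Five states from the abstract: normal, preclinical (PCDP) Dukes' A/B, PCDP Dukes' C/D,
# clinical Dukes' A/B, clinical Dukes' C/D.  Annual probabilities are illustrative:
# onset ~0.002/yr, exit rates of 1/MST for the preclinical states (MST 2.75 and 1.38
# years as quoted above).  The 50/50 split of PCDP A/B exits and the absorbing clinical
# states are simplifying assumptions, not the study's estimates.
p_onset = 0.002
p_ab_exit = 1 - np.exp(-1 / 2.75)
p_cd_exit = 1 - np.exp(-1 / 1.38)

P = np.array([
    #  normal        pcdp_ab        pcdp_cd         clin_ab         clin_cd
    [1 - p_onset,    p_onset,       0.0,            0.0,            0.0],
    [0.0,            1 - p_ab_exit, p_ab_exit / 2,  p_ab_exit / 2,  0.0],
    [0.0,            0.0,           1 - p_cd_exit,  0.0,            p_cd_exit],
    [0.0,            0.0,           0.0,            1.0,            0.0],
    [0.0,            0.0,           0.0,            0.0,            1.0],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # whole cohort starts disease-free
for _ in range(10):                           # 10 one-year cycles, unscreened arm
    state = state @ P

print(dict(zip(["normal", "pcdp_ab", "pcdp_cd", "clin_ab", "clin_cd"], state.round(5))))
```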

11.
Estimating the required sample size for a study is necessary during the design phase to ensure that it will have maximal efficiency in answering the primary question of interest. Clinicians require a basic understanding of the principles underlying sample size calculation to interpret and apply research findings. This article reviews the critical components of sample size calculation, including the selection of a primary outcome, specification of the acceptable type I and type II error rates, identification of the minimal clinically important difference, and estimation of the error associated with measuring the primary outcome. The relationship among confidence intervals, precision, and study power is also discussed.
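
The relationship among power, precision and sample size mentioned above can be illustrated with a short calculation; the minimal clinically important difference and standard deviation used below are arbitrary example values.

```python
from statistics import NormalDist


def power_two_sample(n_per_group, delta, sigma, alpha=0.05):
    """Approximate power of a two-sample comparison of means (normal approximation)."""
    z = NormalDist()
    se = sigma * (2 / n_per_group) ** 0.5          # SE of the difference in means
    return 1 - z.cdf(z.inv_cdf(1 - alpha / 2) - delta / se)


def ci_half_width(n_per_group, sigma, alpha=0.05):
    """Precision: half-width of the (1 - alpha) CI for the difference in means."""
    se = sigma * (2 / n_per_group) ** 0.5
    return NormalDist().inv_cdf(1 - alpha / 2) * se


# Illustrative numbers: minimal clinically important difference of 5 units, SD of 12
for n in (30, 60, 120, 240):
    print(n, round(power_two_sample(n, delta=5, sigma=12), 2),
          round(ci_half_width(n, sigma=12), 2))
```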

12.
13.
Objectives: To produce an easily understood and accessible tool for use by researchers in diagnostic studies. Diagnostic studies should have sample size calculations performed, but in practice, they are performed infrequently. This may be due to a reluctance on the part of researchers to use mathematical formulae.

Methods: Using a spreadsheet, we derived nomograms for calculating the number of patients required to estimate a test's sensitivity or specificity with a given precision.

Results: The nomograms could be easily used to determine the sensitivity and specificity of a test.

Conclusions: In addition to being easy to use, the nomogram allows deduction of a missing parameter (number of patients, confidence intervals, prevalence, or sensitivity/specificity) if the other three are known. The nomogram can also be used retrospectively by the reader of published research as a rough estimating tool for sample size calculations.
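
For readers who prefer a formula to a nomogram, the sketch below performs the underlying calculation: the number of patients needed so that the confidence interval for sensitivity (or specificity) has a chosen half-width, given the expected prevalence. The function names and example inputs are illustrative; this is the standard binomial-precision calculation rather than the authors' spreadsheet.

```python
from math import ceil
from statistics import NormalDist


def n_for_sensitivity(sens, precision, prevalence, alpha=0.05):
    """Total patients needed so the CI for sensitivity has half-width `precision`."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return ceil(n_diseased / prevalence)        # only diseased patients inform sensitivity


def n_for_specificity(spec, precision, prevalence, alpha=0.05):
    """Total patients needed so the CI for specificity has half-width `precision`."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_healthy = z ** 2 * spec * (1 - spec) / precision ** 2
    return ceil(n_healthy / (1 - prevalence))   # only non-diseased patients inform specificity


# Example: expected sensitivity 0.90, +/-5% precision, 20% prevalence
print(n_for_sensitivity(0.90, 0.05, 0.20))   # 692 patients in total
print(n_for_specificity(0.85, 0.05, 0.20))   # 245 patients in total
```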


14.
15.
Randomised controlled trials (RCTs) that involve cost-effectiveness evaluations rarely use health economic input when undertaking sample size calculations for the trial design; however, in studies undertaken with cost-effectiveness as the primary outcome, sample size calculations should be directly related to the cost-effectiveness result rather than to the effectiveness outcome alone. This paper reports on a case in which the sample size and power calculations for a clinical trial design were determined with regard to cost-effectiveness using the net monetary benefit (NMB) approach, demonstrating the feasibility of sample size calculation for cost-effectiveness in a real-life setting. The proposed RCT of fetal fibronectin (fFN) screening for women with threatened pre-term labour is discussed, followed by the design of a preliminary model to inform the trial design calculation. The predictions from this pre-trial model indicate potential cost savings, but with a marginal detrimental impact on the effectiveness endpoint, neonatal morbidity. The NMB approach for cost-effectiveness is discussed and used to calculate the required sample sizes for different powers. The sample sizes are then recalculated using a non-inferiority margin, to ensure that the NMB sample size for the trial is also sufficient to demonstrate non-inferiority for the effectiveness endpoint. Finally, a probabilistic analysis explores uncertainty in the model parameters and its impact on sample size. Economic assessments alongside clinical trials can and should be used to guide conventional trial design. This paper demonstrates the feasibility of such calculations, whilst highlighting limitations and demonstrating the role of economic considerations in guiding non-inferiority margins.
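
A minimal sketch of an NMB-based sample size calculation is given below. It uses the common closed-form expression for the variance of patient-level net monetary benefit; the willingness-to-pay threshold, cost and effect inputs are purely illustrative and are not the fFN trial's figures.

```python
from math import ceil
from statistics import NormalDist


def n_per_arm_nmb(lam, d_effect, d_cost, sd_effect, sd_cost, rho,
                  alpha=0.05, power=0.80):
    """Per-arm sample size for a trial powered on incremental net monetary benefit.

    lam        -- willingness-to-pay per unit of effect (lambda)
    d_effect   -- expected incremental effect; d_cost -- expected incremental cost
    sd_effect, sd_cost, rho -- patient-level SDs of effect and cost and their correlation
    """
    z = NormalDist()
    k = (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
    mean_nmb = lam * d_effect - d_cost
    var_nmb = lam ** 2 * sd_effect ** 2 + sd_cost ** 2 - 2 * lam * rho * sd_effect * sd_cost
    return ceil(2 * k * var_nmb / mean_nmb ** 2)


# Purely illustrative inputs (not the fFN trial's figures)
print(n_per_arm_nmb(lam=20_000, d_effect=0.02, d_cost=-150,
                    sd_effect=0.15, sd_cost=1_800, rho=0.1))
```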

16.
This study aims to assess the preliminary efficacy and feasibility of a brief, peer-led alcohol intervention to reduce alcohol consumption in binge-drinking Spanish nursing students. A pilot randomized controlled trial was conducted with 50 first-year nursing students who were randomly assigned to either a 50-min peer-led motivational intervention with individual feedback or a control condition. Primary outcomes for testing the preliminary efficacy were alcohol use and alcohol-related consequences. Quantitative and content analyses of open-ended survey questions were performed. Participants in the intervention condition significantly reduced binge-drinking episodes, peak blood alcohol content, and consequences compared to the control group. Principal facilitators were completing the questionnaire during the academic schedule and providing tailored feedback through a graphic report. The main barrier was the unreliability of students' initial commitment. The findings suggest that a brief motivational intervention could be effective for reducing alcohol consumption and alcohol-related consequences in Spanish college students. Peer counselors and participants reported high satisfaction, indicating that the intervention is feasible. However, a full trial should be conducted taking into account the identified barriers and facilitators.

17.
Safe management of specimen collection for pneumonia surveillance in hospitalized children
Objective: To explore methods for the safe management of respiratory and blood specimen collection for pneumonia surveillance in hospitalized children. Methods: Nursing staff received training; safety management of specimen collection and of both nurses and children was strengthened; and the specimen collection success rate, specimen submission rate and result feedback rate were monitored. Results: No cross-infection occurred among nursing staff or children; the specimen collection success rate, specimen submission rate and result feedback rate were all 100.00%; nurses' theoretical knowledge and operational skills improved markedly. Conclusion: Scientific and effective nursing safety management is an effective way to ensure successful specimen collection and to safeguard the safety of nursing staff and children.

18.
Makuch and Simon gave a sample size calculation formula for historical control (HC) studies that assumed that the observed response rate in the control group is the true response rate. We dropped this assumption and computed the expected power and expected sample size to evaluate the performance of the procedure under the omniscient model. When there is uncertainty in the HC response rate but this uncertainty is not considered, Makuch and Simon's method produces a sample size that gives a considerably lower power than that specified. Even the larger sample size obtained from the randomized design formula and applied to the HC setting does not guarantee the advertised power in the HC setting. We developed a new uniform power method to search for the sample size required for the experimental group to yield an exact power without relying on the estimated HC response rate being perfectly correct. The new method produces the correct uniform predictive power for all permissible response rates. The resulting sample size is closer to the sample size needed for the randomized design than Makuch and Simon's method, especially when there is a small difference in response rates or a limited sample size in the HC group. HC design may be a viable option in clinical trials when the patient selection bias and the outcome evaluation bias can be minimized. However, the common perception of the extra sample size savings is largely unjustified without the strong assumption that the observed HC response rate is equal to the true control response rate. Generally speaking, results from HC studies need to be confirmed by studies with concurrent controls and cannot be used for making definitive decisions.
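
One simple way to see the effect described above is to average the power of a plug-in design over a posterior distribution for the true HC response rate instead of fixing it at the observed value. The sketch below does this for a one-sample comparison against a fixed historical rate; it is an illustration of the phenomenon, not the authors' omniscient-model calculation or their uniform power method, and all inputs are hypothetical.

```python
import random
from statistics import NormalDist


def plug_in_power(n_exp, p0, delta, alpha=0.05):
    """Power of a one-sample test of the experimental response rate against a fixed
    historical-control (HC) rate p0, assuming the true experimental rate is p0 + delta."""
    z = NormalDist()
    crit = p0 + z.inv_cdf(1 - alpha) * (p0 * (1 - p0) / n_exp) ** 0.5
    p1 = p0 + delta
    return 1 - z.cdf((crit - p1) / (p1 * (1 - p1) / n_exp) ** 0.5)


def expected_power(n_exp, hc_successes, hc_n, delta, alpha=0.05, draws=20_000):
    """Average the power over a Beta posterior for the true HC rate, while the critical
    value stays anchored at the observed HC rate (the uncertainty the plug-in ignores)."""
    z = NormalDist()
    p_obs = hc_successes / hc_n
    crit = p_obs + z.inv_cdf(1 - alpha) * (p_obs * (1 - p_obs) / n_exp) ** 0.5
    random.seed(0)
    total = 0.0
    for _ in range(draws):
        p_true = random.betavariate(hc_successes + 1, hc_n - hc_successes + 1)
        p1 = min(p_true + delta, 1.0)
        total += 1 - z.cdf((crit - p1) / (p1 * (1 - p1) / n_exp) ** 0.5)
    return total / draws


# Observed HC rate 20/50 = 0.40, hoped-for improvement 0.20, 60 experimental patients
print(round(plug_in_power(60, 0.40, 0.20), 3))
print(round(expected_power(60, 20, 50, 0.20), 3))   # lower once HC uncertainty is admitted
```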

19.
Determining power and sample size in neuroimaging studies is a challenging task because of the massive multiple comparisons among tens of thousands of correlated voxels. To facilitate this task, we propose a power analysis method based on random field theory (RFT) by modeling signal areas within images as a non-central random field. With this framework, power can be calculated for specific areas of anticipated signals within the brain while accounting for the 3D nature of signals. This framework can also be extended to visualize local variability in sensitivity as a power map and a sample size map. We validated our non-central RFT framework using Monte Carlo simulations. Moreover, we applied our method to a blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) data set with a small sample size in order to demonstrate its use in study planning. From the simulations, we found that our method was able to estimate power quite accurately. In the fMRI data analysis, despite the small sample size, we were able to determine power and the number of subjects required to detect signals.
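
Random field theory itself is beyond a short example, but the flavour of a neuroimaging power calculation can be conveyed by brute force: simulate the signal voxels, apply a family-wise threshold (Bonferroni here, standing in for RFT), and count how often anything survives. All parameter values below are illustrative assumptions, not figures from the study.

```python
import numpy as np
from scipy import stats


def mc_power(n_subjects, effect_size, n_voxels=50_000, signal_voxels=200,
             alpha_fwe=0.05, sims=500, seed=0):
    """Monte-Carlo power: probability that at least one true-signal voxel survives a
    Bonferroni-corrected one-sample t-test across `n_voxels` comparisons."""
    rng = np.random.default_rng(seed)
    t_crit = stats.t.ppf(1 - alpha_fwe / n_voxels, df=n_subjects - 1)
    hits = 0
    for _ in range(sims):
        data = rng.standard_normal((n_subjects, signal_voxels)) + effect_size
        t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_subjects))
        hits += bool((t > t_crit).any())
    return hits / sims


# Power curve over candidate sample sizes for an assumed voxel-level effect size
for n in (10, 15, 20, 30):
    print(n, mc_power(n, effect_size=0.8))
```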

20.
A pilot study was conducted to determine the feasibility of a longitudinal investigation of patients' coping during the early postdischarge period. Recruitment was conducted on a general medical unit and a surgical orthopedic unit. Forty-four participants were recruited with 95% retention. Demographic characteristics plus measures of discharge risk and perceived readiness (expected coping) were collected before discharge. Measures of coping (experienced) and the use of supports and services were collected on the first day postdischarge, the end of the first week, and during weeks 3 and 5. Considerable variability was evident in coping scores, and not all participants exhibited improvement over time. Four patterns of coping were identified: ongoing recovery, initial shock, bumpy road, and progressive decline. Further investigation is required to validate the observed coping patterns. A better understanding of conditions affecting patient coping during the transition from hospital to home will support efforts to reduce unplanned use of acute care services.
