1.
CONTEXT: Adapting web-based (WB) instruction to learners' individual differences may enhance learning. OBJECTIVES: This study aimed to investigate aptitude-treatment interactions between learning and cognitive styles and WB instructional methods. METHODS: We carried out a factorial, randomised, controlled, crossover, post-test-only trial involving 89 internal medicine residents, family practice residents and medical students at 2 US medical schools. Parallel versions of a WB course in complementary medicine used either active or reflective questions and different end-of-module review activities ('create and study a summary table' or 'study an instructor-created table'). Participants were matched or mismatched to question type based on active or reflective learning style. Participants used each review activity for 1 course module (crossover design). Outcome measurements included the Index of Learning Styles, the Cognitive Styles Analysis test, knowledge post-test, course rating and preference. RESULTS: Post-test scores were similar for matched (mean ± standard error of the mean 77.4 ± 1.7) and mismatched (76.9 ± 1.7) learners (95% confidence interval [CI] for difference -4.3 to 5.2, P = 0.84), as were course ratings (P = 0.16). Post-test scores did not differ between active-type questions (77.1 ± 2.1) and reflective-type questions (77.2 ± 1.4; P = 0.97). Post-test scores correlated with course ratings (r = 0.45). There was no difference in post-test subscores for modules completed using the 'construct table' format (78.1 ± 1.4) or the 'table provided' format (76.1 ± 1.4; CI -1.1 to 5.0, P = 0.21), and wholist and analytic styles had no interaction (P = 0.75) or main effect (P = 0.18). There was no association between activity preference and wholist or analytic scores (P = 0.37). CONCLUSIONS: Cognitive and learning styles had no apparent influence on learning outcomes. There were no differences in outcome between these instructional methods.

2.
BACKGROUND: Distance learning has been advocated increasingly as a modern, efficient method of teaching surgery. However, the efficiency of knowledge transfer and the validity of web-based courses have not been subjected to rigorous study to date. METHODS: An entirely web-based 5-week surgical lecture course was designed. Half of the lectures were prepared as HTML slides with voice-over; the remainder were presented in text-only form. Only the written material presented was examined. The lectures were delivered via an educational web module. The lecture series was balanced specifically to reduce pre-existing knowledge bias. Web usage was estimated using surrogates, including the number of hits and log-on timing. Face validity was assessed by a standardised questionnaire. RESULTS: Eighty-eight students took part in the lecture series and the subsequent examination and questionnaire. Median multiple choice questionnaire (MCQ) marks were significantly higher for the aural lecture-derived stems than for the non-aural (P = 0.012, Mann-Whitney U-test). There was widespread approval of web-based learning as an adjunct to conventional teaching. Usage rates were significantly augmented in the final week compared with the previous 4 weeks (mean total hits weeks 1-4 ± SEM: 100.9 ± 9.7; mean total hits week 5: 152.1 ± 13.1; P < 0.001, Kruskal-Wallis). However, total hits did not correlate with overall examination results (r(2) = 0.16). The aural lectures demonstrated higher face validity than the non-aural for content and presentation (P < 0.05, Kruskal-Wallis). CONCLUSIONS: The addition of aural files to this novel web-based lecture series is face valid and results in significantly increased examination performance.
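The Mann-Whitney U comparison of MCQ marks used above can be illustrated with a minimal pure-Python sketch (tie-aware average ranks; the sample data are invented for illustration, not taken from the study):

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U statistic (smaller of U1, U2) for two independent samples."""
    pooled = list(x) + list(y)
    r = ranks(pooled)
    r1 = sum(r[: len(x)])  # rank sum of the first sample
    u1 = len(x) * len(y) + len(x) * (len(x) + 1) / 2 - r1
    return min(u1, len(x) * len(y) - u1)
```

Converting U to a P value additionally needs the exact distribution or a normal approximation; a statistics library would normally handle that step.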

3.
Context  Self-efficacy is an important factor in many areas of medical education, including self-assessment and self-directed learning, but has been little studied in resuscitation training, possibly because of the lack of a simple measurement instrument.
Objective  We aimed to assess the validity of a visual analogue scale (VAS) linked to a single question as an instrument to measure self-efficacy with respect to resuscitation skills by comparing the VAS with a questionnaire and using known-groups comparisons.
Methods  We developed questionnaires to measure self-efficacy for a number of resuscitation tasks and for computer skills. These were compared with VASs linked to a single question per task, using a multi-trait, multi-method matrix. We also used known-groups comparisons of self-efficacy in specific professional groups.
Results  There was good correlation between the questionnaires and the VASs for self-efficacy for specific resuscitation tasks. There was a less clear correlation for self-efficacy for paediatric resuscitation overall. There was no correlation between self-efficacy for resuscitation and computer tasks. In specific professional groups, measured self-efficacy accorded with theoretical predictions.
Conclusions  A VAS linked to a single question appears to be a valid method of measuring self-efficacy with respect to specific, well-defined resuscitation tasks, but should be used with caution for multi-faceted tasks.

4.
OBJECTIVES: The clinical learning environment is an influential factor in work-based learning. Evaluation of this environment gives insight into the educational functioning of clinical departments. The Postgraduate Hospital Educational Environment Measure (PHEEM) is an evaluation tool consisting of a validated questionnaire with 3 subscales. In this paper we further investigate the psychometric properties of the PHEEM. We set out to validate the 3 subscales and test the reliability of the PHEEM for both clerks (clinical medical students) and registrars (specialists in training). METHODS: Clerks and registrars from different hospitals and specialties filled out the PHEEM. To investigate the construct validity of the 3 subscales, we used an exploratory factor analysis followed by varimax rotation, and a cluster analysis known as Mokken scale analysis. We estimated the reliability of the questionnaire by means of variance components according to generalisability theory. RESULTS: A total of 256 clerks and 339 registrars filled out the questionnaire. The exploratory factor analysis plus varimax rotation suggested a 1-dimensional scale. The Mokken scale analysis confirmed this result. The reliability analysis showed a reliable outcome for 1 department with 14 clerks or 11 registrars. For multiple departments 3 respondents combined with 10 departments provide a reliable outcome for both groups. DISCUSSION: The PHEEM is a questionnaire measuring 1 dimension instead of the hypothesised 3 dimensions. The sample size required to achieve a reliable outcome is feasible. The instrument can be used to evaluate both single and multiple departments for both clerks and registrars.

5.
Objectives Medical education instructional methods typically imply one ‘best’ management approach. Our objectives were to develop and evaluate an intervention to enhance residents’ appreciation for the diversity of acceptable approaches when managing complex patients. Methods A total of 124 internal medicine residents enrolled in a randomised, crossover trial. Residents completed four web‐based modules in ambulatory medicine during continuity clinic. For each module we developed three ‘complex cases’. Cases were intended to be complex (numerous variables, including psychosocial and economic barriers) and to suggest multiple acceptable management strategies. Several experienced faculty members described how they would manage each case. Residents reviewed each case, answered the same questions, and compared their responses with expert responses. Participants were randomly assigned to complete two modules with, and two modules without complex cases. Results A total of 76 residents completed 279 complex cases. Residents agreed that complex cases enhanced their appreciation for the diversity of ‘correct’ options (mean ± standard error of the mean 4.6 ± 0.2 [1 = strongly disagree, 6 = strongly agree]; P < 0.001). Mean preference score was neutral (3.4 ± 0.2 [1 = strongly favour no cases, 6 = strongly favour cases]; P = 0.72). Knowledge post‐test scores were similar between modules with (76.0 ± 0.9) and without (77.8 ± 0.9) complex cases (95% confidence interval for difference -4.0 to 0.3; P = 0.09). Resident comments suggested that lack of time and cognitive overload impeded learning. Conclusions Residents felt complex cases made a valuable contribution to their learning, although preference was neutral and knowledge scores were not affected. Methods to facilitate trainee comfort in managing medically complex patients should be further explored.

6.
CONTEXT: Admissions interviews are unreliable and have poor predictive validity, yet are the sole measures of non-cognitive skills used by most medical school admissions departments. The low reliability may be due in part to variation in conditional reliability across the rating scale. OBJECTIVES: To describe an empirically derived estimate of conditional reliability and use it to improve the predictive validity of interview ratings. METHODS: A set of medical school interview ratings was compared to a Monte Carlo simulated set to estimate conditional reliability controlling for range restriction, response scale bias and other artefacts. This estimate was used as a weighting function to improve the predictive validity of a second set of interview ratings for predicting non-cognitive measures (USMLE Step II residuals from Step I scores). RESULTS: Compared with the simulated set, both observed sets showed more reliability at low and high rating levels than at moderate levels. Raw interview scores did not predict USMLE Step II scores after controlling for Step I performance (additional r2 = 0.001, not significant). Weighting interview ratings by estimated conditional reliability improved predictive validity (additional r2 = 0.121, P < 0.01). CONCLUSIONS: Conditional reliability is important for understanding the psychometric properties of subjective rating scales. Weighting these measures during the admissions process would improve admissions decisions.
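The weighting step can be illustrated with a toy sketch (not the authors' actual procedure): given a hypothetical U-shaped conditional-reliability function over a 1-5 rating scale, matching the finding that extreme ratings are more reliable than moderate ones, compute a reliability-weighted composite of a candidate's interview ratings.

```python
def weighted_composite(ratings, reliability):
    """Reliability-weighted mean of interview ratings.

    `reliability` maps each point of the rating scale to an estimated
    conditional reliability; higher-reliability ratings count more.
    """
    num = sum(reliability[r] * r for r in ratings)
    den = sum(reliability[r] for r in ratings)
    return num / den

# Hypothetical U-shaped conditional reliabilities on a 1-5 scale:
# the extremes (1 and 5) are assumed more reliable than the middle.
w = {1: 0.9, 2: 0.5, 3: 0.3, 4: 0.5, 5: 0.9}
```

Because the extremes carry more weight, the composite is pulled toward confident high or low ratings rather than the raw mean: `weighted_composite([2, 5], w)` is above the unweighted mean of 3.5.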

7.
Context  Reflective practice has been suggested to be an important instrument in improving clinical judgement and developing medical expertise. Empirical evidence supporting this suggestion, however, is absent. This paper reports on an experiment conducted to study the effects of reflective practice on diagnostic accuracy.
Methods  Participants were 42 internal medicine residents in hospitals in 2 states in the northeast of Brazil. They diagnosed 16 clinical cases. The experiment employed a repeated measures design, with 2 independent variables: the complexity of clinical cases (simple or complex), and the reasoning approach induced to diagnose the case (participants were instructed to diagnose each case either through pattern recognition or reflective reasoning). The dependent variable was the accuracy of the diagnosis provided for each case. All participants participated in each of the 2 levels of both independent variables.
Results  A main effect of case complexity emerged. There was no statistically significant main effect of reflective practice. However, a significant interaction effect was found between case complexity and mode of processing (F[1,41] = 4.48, P  < 0.05), indicating that although reflective practice did not make a difference to accuracy of diagnosis in simple cases, it had a positive effect when diagnosing complex cases.
Conclusions  Reflective practice had a positive effect on diagnosis of complex, unusual cases. Non-analytical reasoning was shown to be as effective as reflective reasoning for diagnosing routine clinical cases. Findings support the idea that reflective practice may particularly improve diagnoses in situations of uncertainty and uniqueness, reducing diagnostic errors.

8.
OBJECTIVE: To determine whether items of progress tests used for inter-curriculum comparison favour students from the medical school where the items were produced (i.e. whether the origin bias of test items is a potential confounder in comparisons between curricula). METHODS: We investigated scores of students from different schools on subtests consisting of progress test items constructed by authors from the different schools. In a cross-institutional collaboration between 3 medical schools, progress tests are jointly constructed and simultaneously administered to all students at the 3 schools. Test score data for 6 consecutive progress tests were investigated. Participants consisted of approximately 5000 undergraduate medical students from 3 medical schools. The main outcome measure was the difference between the scores on subtests of items constructed by authors from 2 of the collaborating schools (subtest difference score). RESULTS: The subtest difference scores showed that students obtained better results on items produced at their own schools. This effect was more pronounced in Years 2-5 of the curriculum than in Year 1, and diminished in Year 6. CONCLUSIONS: Progress test items were subject to origin bias. As a consequence, all participating schools should contribute equal numbers of test items if tests are to be used for valid and fair inter-curriculum comparisons.

9.
Context Self‐reflection, the practice of inspecting and evaluating one’s own thoughts, feelings and behaviour, and insight, the ability to understand one’s own thoughts, feelings and behaviour, are central to the self‐regulation of behaviours. The Self‐Reflection and Insight Scale (SRIS) measures three factors in the self‐regulation cycle: need for reflection; engagement in reflection, and insight. Methods We used structural equation modelling to undertake a confirmatory factor analysis of the SRIS. We re‐specified our model to analyse all of the data to explain relationships between the SRIS, medical student characteristics, and responses to issues of teaching and learning in professionalism. Results The factorial validity of a modified SRIS showed all items loading significantly on their expected factors, with a good fit to the data. Each subscale had good internal reliability (> 0.8). There was a strong relationship between the need for reflection and engagement in reflection (r = 0.77). Insight was related to need for reflection (0.22) and age (0.21), but not to the process of engaging in reflection (0.06). Conclusions Validation of the SRIS provides researchers with a new instrument with which to measure and investigate the processes of self‐reflection and insight in the context of students’ self‐regulation of their professionalism. Insight is related to the motive or need for reflection, but the process of reflection does not lead to insight. Attending to feelings is an important and integral aspect of self‐reflection and insight. Effective strategies are needed to develop students’ insight as they reflect on their professionalism.
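Inter-factor figures such as the r = 0.77 reported above are Pearson product-moment correlations; a self-contained sketch of the computation (on made-up data, not the study's):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unscaled covariance
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```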

10.
11.
Objective High‐stakes assessments of doctors’ physical examination skills often employ standardised patients (SPs) who lack physical abnormalities. Simulation technology provides additional opportunities to assess these skills by mimicking physical abnormalities. The current study examined the relationship between internists’ cardiac physical examination competence as assessed with simulation technology compared with that assessed with real patients (RPs). Methods The cardiac physical examination skills and bedside diagnostic accuracy of 28 internists were assessed during an objective structured clinical examination (OSCE). The OSCE included 3 modalities of cardiac patients: RPs with cardiac abnormalities; SPs combined with computer‐based, audio‐video simulations of auscultatory abnormalities, and a cardiac patient simulator (CPS) manikin. Four cardiac diagnoses and their associated cardiac findings were matched across modalities. At each station, 2 examiners independently rated a participant’s physical examination technique and global clinical competence. Two investigators separately scored diagnostic accuracy. Results Inter‐rater reliability between examiners for global ratings (GRs) ranged from 0.75 to 0.78 for the different modalities. Although there was no significant difference between participants’ mean GRs for each modality, the correlations between participants’ performances on each modality were low to modest: RP versus SP, r = 0.19; RP versus CPS, r = 0.22; SP versus CPS, r = 0.57 (P < 0.01). Conclusions Methodological limitations included variability between modalities in the components contributing to examiners’ GRs, a paucity of objective outcome measures and restricted case sampling. No modality provided a clear ‘gold standard’ for the assessment of cardiac physical examination competence. These limitations need to be addressed before determining the optimal patient modality for high‐stakes assessment purposes.

12.
OBJECTIVE: In recent decades, there has been increased interest in tools for assessing and improving the communication skills of general practice trainees. Recently, experts in the field rated the older Maas Global (MG) and the newer Common Ground (CG) instruments among the better communication skills assessment tools. This report seeks to establish their cross-validity. METHODS: Eighty trainees were observed by 2 raters for each instrument in 2 standardised patient stations from the final year objective structured clinical examination for Belgian trainee general practitioners. Each instrument was assigned 6 raters. RESULTS: Trainees showed the lowest mean scores for evaluating the consultation (MG7), summarising (MG11), addressing emotions (MG9) and addressing feelings (CG5). Inter-rater kappa statistics revealed fair-to-moderate agreement for the MG and slight-to-fair agreement for the CG. Cronbach's alpha was 0.78 for the MG and 0.89 for the CG. A generalisability study was only feasible for the MG: it was more helpful to increase the number of cases than the number of raters. Agreement between the instruments was examined using kappa statistics, Bland-Altman plots and multi-level analysis. Ranking the trainees for each instrument revealed similar results for the least competent trainees. Variances between and within trainees differed between instruments, whereas case specificity was comparable. Multi-level analysis also revealed a rater-item interaction effect. CONCLUSIONS: The 2 instruments have convergent validity, but the drawbacks of the CG, which has fewer items to be scored, include lower inter-rater reliability and score variance within trainees.
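The inter-rater kappa statistics mentioned above are Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal sketch for two raters (the example ratings are invented):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each rater's marginal category frequencies
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)
```

Values near 0 indicate chance-level agreement; the conventional verbal labels used in the abstract ("slight", "fair", "moderate") correspond to ranges of kappa.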

13.
Objective  This study aimed to evaluate the effectiveness and efficiency of three short-listing methodologies for use in selecting trainees into postgraduate training in general practice in the UK.
Methods  This was an exploratory study designed to compare three short-listing methodologies. Two methodologies – a clinical problem-solving test (CPST) and structured application form questions (AFQs) – were already in use for selection purposes. The third, a new situational judgement test (SJT), was evaluated alongside the live selection process. An evaluation was conducted on a sample of 463 applicants for training posts in UK general practice. Applicants completed all three assessments and attended a selection centre that used work-related simulations at final stage selection. Applicant scores on each short-listing methodology were compared with scores at the selection centre.
Results  Results indicate that the structured AFQs, CPST and SJT were all valid short-listing methodologies. The SJT was the most effective independent predictor. Both the structured AFQs and the SJT add incremental validity over the use of the CPST alone. Results show that optimum validity and efficiency are achieved using a combination of the CPST and SJT.
Conclusions  A combination of the CPST and SJT represents the most effective and efficient battery of instruments as, unlike AFQs, these tests are machine-marked. Importantly, this is the first study to evaluate a machine-marked SJT to assess non-clinical domains for postgraduate selection. Future research should explore links with work-based assessment once trainees are in post to address long-term predictive validity.

14.
CONTEXT: Curricula about the care of homeless patients have been developed to improve stigmatising attitudes towards patients living in poverty. The Attitudes Toward Homelessness Inventory (ATHI) and the Attitudes Towards the Homeless Questionnaire (ATHQ) are both validated instruments developed to assess attitudes towards homeless patients. Although these surveys have similar goals, it is not clear which is superior for documenting attitude changes among doctors in training. METHODS: Seven cohorts of Year 2 and 3 primary care internal medicine residents at an urban public hospital in the USA completed the ATHI and ATHQ in a confidential manner before and after a 2-week rotation on health care for homeless patients (n = 25). RESULTS: Both the ATHI (P < 0.001) and the ATHQ (P = 0.050) documented changes in residents' attitudes. The magnitude of the pre/post change was 0.63 per item for the ATHI and 0.13 per item for the ATHQ. When the ATHI per-item change was standardised to reflect the change that would be expected if there were 5 response choices instead of 6, the per-item change for the ATHI was 4.1-fold greater than for the ATHQ (P = 0.001). Residents improved their responses to 1 of every 8 statements on the ATHQ and 1 of every 2 statements on the ATHI after the course. CONCLUSIONS: Both the ATHI and the ATHQ documented improvement in residents' attitudes after a 2-week homeless medicine curriculum. However, the ATHI was 4 times more responsive to change. These findings suggest that the ATHI is superior for detecting changes in attitudes after an educational intervention.

15.
CONTEXT: Self-assessment promotes reflective practice, helps students identify gaps in their learning and is used in curricular evaluations. Currently, there is a dearth of validated self-assessment tools in rheumatology. We present a new musculoskeletal self-assessment tool (MSAT) that allows students to assess their confidence in their skills in and knowledge of knee and shoulder examination. OBJECTIVES: We aimed to validate the 15-item MSAT, addressing its construct validity, internal consistency, responsiveness, repeatability and relationship with competence. METHODS: Participants were 241 Year 3 students in Newcastle upon Tyne and 113 Year 3 students at University College London, who were starting their musculoskeletal skills placement. Factor analysis explored the construct validity of the MSAT; Cronbach's alpha assessed its internal consistency; standardised response mean (SRM) evaluated its responsiveness, and test-retest, before and after a pathology lecture, assessed its repeatability. Its relationship with competence was explored by evaluating its correlation with shoulder and knee objective structured clinical examinations (OSCEs). RESULTS: The MSAT was valid in distinguishing the 5 domains it intended to measure: clinical examination of the knee; clinical examination of the shoulder; clinical anatomy of the knee and shoulder; history taking, and generic musculoskeletal anatomical and clinical terms. It was internally consistent (alpha = 0.93), responsive (SRM 0.6 in Newcastle and 2.2 in London) and repeatable (intraclass correlation coefficient 0.97). Correlations between MSAT scores and OSCE scores were weak (r < 0.2). CONCLUSIONS: The MSAT has strong psychometric properties, thereby offering a valid approach to evaluating the self-assessment of confidence in examination skills by students. Confidence does not necessarily reflect competence; future research should clarify what underpins confidence.
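The internal-consistency figure above (alpha = 0.93) is Cronbach's alpha, which compares the sum of per-item variances with the variance of the total score. A minimal stdlib-only sketch (item layout and data are illustrative):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha.

    `items` is a list of per-item score lists: one inner list per item,
    one entry per respondent (population variances used throughout).
    """
    k = len(items)
    item_var = sum(pvariance(scores) for scores in items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

When all items rank respondents identically, alpha reaches 1; values around 0.9, as reported for the MSAT, indicate high internal consistency.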

16.
INTRODUCTION: The aim of this study is to develop a new tool to assess professional behaviour in general practitioner (GP) trainees: the evaluation of professional behaviour in general practice (EPRO-GP) instrument. METHODS: Our study consisted of 4 phases: (1) development of a model of professionalism in general practice based on a literature review on professionalism, competency models of general practice and the overall educational objectives of postgraduate training for general practice; (2) development of the EPRO-GP instrument in collaboration with a sounding board; (3) establishing the content validity of the EPRO-GP instrument using a nominal group technique; and (4) establishing the feasibility of the EPRO-GP instrument in 12 general practice trainees and their general practice trainers. RESULTS: The model of professionalism in general practice encompassed 4 themes within professionalism: (a) professionalism towards the patient; (b) professionalism towards other professionals; (c) professionalism towards the public; and (d) professionalism towards oneself. These 4 themes covered 26 elements of professionalism. This model provided the framework of the EPRO-GP instrument, which we developed further by operationalising the 26 elements in 127 behavioural items. The expert ratings confirmed the content validity of the instrument with one exception: the element "altruism" was removed as a stand-alone category but it remained throughout the tool in items giving primacy to patient welfare. The results on the feasibility of the EPRO-GP instrument were very encouraging. All tutorials yielded professional behaviour learning points. DISCUSSION: Our results support the content validity of the EPRO-GP instrument as well as its feasibility as a tool to educate for professionalism in general practice.

17.
Objectives  A case-based, worked example approach was realised in a computer-based learning environment with the intention of facilitating medical students' diagnostic knowledge. In order to enhance the effectiveness of the approach, two additional measures were implemented: erroneous examples and elaborated feedback. In the context of an experimental study, the two measures were varied experimentally.
Methods  A total of 153 medical students were randomly assigned to four experimental conditions of a 2 × 2-factor design (errors versus no errors, elaborated feedback versus knowledge of correct result [KCR]). In order to verify the sustainability of the effects, a subgroup of subjects ( n  = 52) was compared with a control group of students who did not participate in the experiment ( n  = 145) on a regular multiple-choice question (MCQ) test.
Results  Results show that the acquisition of diagnostic knowledge is mainly supported by providing erroneous examples in combination with elaborated feedback. These effects were independent of differences in time-on-task and prior knowledge. Furthermore, the effects of the learning environment proved sustainable.
Conclusions  Our results demonstrate that the case-based, worked example approach is effective and efficient.

18.
Context  Although concern has been raised about the value of clinical evaluation reports for discriminating among trainees, there have been few efforts to formalise the dimensions and qualities that distinguish effective versus less useful styles of form completion.
Methods  Using brainstorming and a modified Delphi technique, a focus group determined the key features of high-quality completed evaluation reports. These features were used to create a rating scale to evaluate the quality of completed reports. The scale was pilot-tested locally; the results were psychometrically analysed and used to modify the scale. The scale was then tested on a national level. Psychometric analysis and final modification of the scale were completed.
Results  Sixteen features of high-quality reports were identified and used to develop a rating scale: the Completed Clinical Evaluation Report Rating (CCERR). The reliability of the scale after a national field test with 55 raters assessing 18 in-training evaluation reports (ITERs) was 0.82. Further revisions were made; the final version of the CCERR contains nine items rated on a 5-point scale. With this version, the mean ratings of three groups of 'gold-standard' ITERs (previously judged to be of high, average and poor quality) differed significantly ( P  < 0.05).
Discussion  The CCERR is a validated scale that can be used to help train supervisors to complete and assess the quality of evaluation reports.

19.
CONTEXT: Medical schools in the UK set their own graduating examinations and pass marks. In a previous study we examined the equivalence of passing standards using the Angoff standard-setting method. To address the limitation this imposed on that work, we undertook further research using a standard-setting method specifically designed for objective structured clinical examinations (OSCEs). METHODS: Six OSCE stations were incorporated into the graduating examinations of 3 of the medical schools that took part in the previous study. The borderline group method (BGM) or borderline regression method (BRM) was used to derive the pass marks for all stations in the OSCE. We compared passing standards at the 3 schools. We also compared the results within the schools with their previously generated Angoff pass marks. RESULTS: The pass marks derived using the BGM or BRM were consistent across 2 of the 3 schools, whereas the third school generated pass marks which were (with a single exception) much lower. Within-school comparisons of pass marks revealed that in 2 schools the pass marks generally did not significantly differ using either method, but for 1 school the Angoff mark was consistently and significantly lower than the BRM. DISCUSSION: The pass marks set using the BGM or BRM were more consistent across 2 of the 3 medical schools than pass marks set using the Angoff method. However, 1 medical school set significantly different pass marks from the other 2 schools. Although this study is small, we conclude that passing standards at different medical schools cannot be guaranteed to be equivalent.
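The borderline regression method can be sketched as an ordinary least-squares fit of station checklist scores on examiners' global ratings, reading off the predicted score at the 'borderline' grade as the pass mark. The grade scale and data below are hypothetical, not the schools' actual figures:

```python
def brm_pass_mark(global_ratings, checklist_scores, borderline_grade=2):
    """Borderline regression: least-squares line of checklist score on
    global rating, evaluated at the borderline grade."""
    n = len(global_ratings)
    mx = sum(global_ratings) / n
    my = sum(checklist_scores) / n
    sxx = sum((g - mx) ** 2 for g in global_ratings)
    sxy = sum((g - mx) * (s - my)
              for g, s in zip(global_ratings, checklist_scores))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept + slope * borderline_grade
```

Unlike the borderline group method, which averages only the borderline candidates' scores, the regression uses every candidate's data, which stabilises the pass mark at small station sizes.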

20.
The effect of testing on skills learning
Objectives  In addition to the extrinsic effects of assessment and examinations on students' study habits, testing can have an intrinsic effect on the memory of studied material. Whether this testing effect also applies to skills learning is not known. However, this is especially interesting in view of the need to maximise learning outcomes from costly simulation-based courses. This study was conducted to determine whether testing as the final activity in a skills course increases learning outcome compared with an equal amount of time spent practising the skill.
Methods  We carried out a prospective, controlled, randomised, single-blind, post-test-only intervention study, preceded by a similar pre- and post-test pilot study in order to make a power calculation. A total of 140 medical students participating in a mandatory 4-hour in-hospital resuscitation course in the seventh semester were randomised to either the intervention or control group and were invited to participate in an assessment of learning outcome. The intervention course included 3.5 hours of instruction and training followed by 30 minutes of testing. The control course included 4 hours of instruction and training. Participant learning outcomes were assessed 2 weeks after the course in a simulated scenario using a checklist. Total assessment scores were compared between the two groups.
Results  Overall, 81 of the 140 students volunteered to participate. Learning outcomes were significantly higher in the intervention group ( n  = 41; mean score 82.8%, 95% confidence interval [CI] 79.4–86.2) compared with the control group ( n  = 40; mean score 73.3%, 95% CI 70.5–76.1) ( P  < 0.001). Effect size was 0.93.
Conclusions  Testing as a final activity in a resuscitation skills course for medical students increases learning outcome compared with spending an equal amount of time practising the skills.
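The effect size reported above (0.93) is a standardised mean difference; one common form is Cohen's d with a pooled standard deviation, sketched here on invented data:

```python
from math import sqrt

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled SD (n-1 variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    pooled_sd = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd
```

By the usual rule of thumb, d around 0.8 or above counts as a large effect, consistent with the study's interpretation.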
