Similar articles
20 similar articles retrieved (search time: 78 ms)
1.
Medical Education 2011: 45: 818–826 Context The Association of American Medical Colleges' Institute for Improving Medical Education's report entitled 'Effective Use of Educational Technology' called on researchers to study the effectiveness of multimedia design principles. These principles were empirically shown to result in superior learning when used with college students in laboratory studies, but have not been studied with undergraduate medical students as participants. Methods A pre-test/post-test control group design was used, in which the traditional-design group received a lecture on shock using traditionally designed slides and the modified-design group received the same lecture using slides modified in accord with Mayer's principles of multimedia design. Participants included Year 3 medical students at a private, Midwestern medical school progressing through their surgery clerkship during the academic year 2009–2010. The medical school divides students into four groups; each group attends the surgery clerkship during one of the four quarters of the academic year. Students in the second and third quarters served as the modified-design group (n = 91) and students in the fourth-quarter clerkship served as the traditional-design group (n = 39). Results Both student cohorts had similar levels of pre-lecture knowledge. Both groups showed significant improvements in retention (p < 0.0001), transfer (p < 0.05) and total scores (p < 0.0001) between the pre- and post-tests. Repeated-measures ANOVA showed statistically significantly greater improvements in retention (F = 10.2, p = 0.0016) and total scores (F = 7.13, p = 0.0081) for students instructed using principles of multimedia design compared with those instructed using the traditional design.
Conclusions Multimedia design principles are easy to implement and result in improved short-term retention among medical students, but empirical research is still needed to determine how these principles affect transfer of learning. Further research on applying the principles of multimedia design to medical education is needed to verify their impact on the long-term learning of medical students, as well as their impact on other forms of multimedia instructional programmes used in the education of medical students.
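The group-by-time comparison reported above can be illustrated with a short computational sketch. In a two-group pre-test/post-test design, the interaction term of the repeated-measures ANOVA is equivalent to a one-way ANOVA on the gain scores (post minus pre). The data below are synthetic; only the group sizes echo the abstract, so this is an illustrative sketch, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

def gain_score_anova(pre_a, post_a, pre_b, post_b):
    """Compare the learning gains of two groups via a one-way ANOVA on
    post - pre gain scores (equivalent to the group x time interaction
    in a two-group pre/post repeated-measures design)."""
    gains_a = np.asarray(post_a) - np.asarray(pre_a)
    gains_b = np.asarray(post_b) - np.asarray(pre_b)
    return stats.f_oneway(gains_a, gains_b)

# Synthetic illustration: the modified-design group gains more on average.
rng = np.random.default_rng(0)
pre_mod = rng.normal(50, 8, 91)
pre_trad = rng.normal(50, 8, 39)
post_mod = pre_mod + rng.normal(15, 5, 91)    # larger average gain
post_trad = pre_trad + rng.normal(10, 5, 39)  # smaller average gain
f_stat, p_value = gain_score_anova(pre_mod, post_mod, pre_trad, post_trad)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With real data, a full mixed-design ANOVA (e.g. via `statsmodels` or `pingouin`) would additionally report the main effects of group and time.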

2.
Medical Education 2010: 44: 731–740 Objectives Interpretation of the electrocardiogram (ECG) is a core clinical skill that should be developed in undergraduate medical education. This study assessed whether small-group peer teaching is more effective than lectures in enhancing medical students' ECG interpretation skills. In addition, the impact of assessment format on study outcome was analysed. Methods Two consecutive cohorts of Year 4 medical students (n = 335) were randomised to receive either traditional ECG lectures or the same amount of small-group, near-peer teaching during a 6-week cardiorespiratory course. Before and after the course, written assessments of ECG interpretation skills were undertaken. Whereas this final assessment yielded a considerable number of credit points for students in the first cohort, it was merely formative in nature for the second cohort. An unannounced retention test was administered 8 weeks after the end of the course. Results A significant advantage of near-peer teaching over lectures (effect size 0.33) was noted only in the second cohort, whereas, in the setting of a summative assessment, both teaching formats appeared to be equally effective. A summative instead of a formative assessment doubled the performance increase (Cohen's d 4.9 versus 2.4), mitigating any difference between teaching formats. Within the second cohort, the significant difference between the two teaching formats was maintained in the retention test (p = 0.017). However, in both cohorts, a significant decrease in student performance was detected during the 8 weeks following the course. Conclusions Assessment format appeared to be more powerful than choice of instructional method in enhancing student learning. In the first cohort, the effect observed in the second cohort was masked by the overriding incentive generated by the summative assessment.
This masking effect should be considered in studies assessing the effectiveness of different teaching methods.
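The effect sizes quoted above (0.33; Cohen's d of 4.9 versus 2.4) can be reproduced from raw scores in a few lines. The sketch below uses the standard pooled-standard-deviation formula for two independent samples; the score lists are invented for illustration.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled
    standard deviation (ddof=1 gives the sample variance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical post-test scores (out of 25) for two teaching formats.
scores_peer = [18, 20, 19, 22, 21, 20]
scores_lecture = [17, 18, 16, 19, 18, 17]
print(f"d = {cohens_d(scores_peer, scores_lecture):.2f}")
```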

3.
Medical Education 2011: 45: 347–353 Context Teaching 12-lead electrocardiogram (ECG) interpretation to students and residents is a challenge for medical educators. To date, few studies have compared the effectiveness of different techniques used for ECG teaching. Objectives This study aimed to determine if common teaching techniques, such as those involving workshops, lectures and self-directed learning (SDL), increase medical students' ability to correctly interpret ECGs. It also aimed to compare the effectiveness of these formats. Methods This was a prospective randomised study conducted over a 28-month period. Year 4 medical students were randomised to receive teaching in ECG interpretation using one of three teaching formats: workshop, lecture or SDL. All three formats covered the same content. Students were administered three tests: a pre-test (before teaching); a post-test (immediately after teaching), and a retention test (1 week after teaching). Each tested the same content using 25 questions worth 1 point each. A mixed-model repeated-measures analysis of variance (ANOVA) with least squares post hoc analysis was conducted to determine if differences in test scores between the formats were statistically significant. Results Of the 223 students for whom data were analysed, 79 were randomised to a workshop, 82 to a lecture-based format and 62 to SDL. All three teaching formats resulted in a statistically significant improvement in individual test scores (p < 0.001). Comparison of the lecture- and workshop-based formats demonstrated no difference in test scores (marginal mean [MM] for both formats = 12.4, 95% confidence interval [95% CI] 11.7–13.2; p = 0.99). Test scores of students using SDL (MM = 10.7, 95% CI 9.8–11.5) were lower than those of students in the workshop (p = 0.003) and lecture (p = 0.002) groups.
Conclusions Compared with those taught using workshop- and lecture-based formats, medical students learning ECG interpretation by SDL had lower test scores.

4.
Medical Education 2012: 46: 689–697 Objectives This study was conducted to assess the associations between several clerkship process measures and students’ clinical and examination performance in an internal medicine clerkship. Methods We collected data from the internal medicine clerkship at one institution over a 3‐year period (classes of 2010–2012; n = 507) and conducted correlation and multiple regression analyses. We examined the associations between clerkship process measures (student‐reported number of patients evaluated, percentage of core problems encountered, total number of core problems encountered, total number of clinics attended) and four clerkship outcomes (clinical points [a weighted summation of a student’s clinical grade recommendations], ambulatory clinical points [the out‐patient portion of clinical points], examination points [a weighted summation of scores on three clerkship examinations], and National Board of Medical Examiners examination score). Results After controlling for pre‐clerkship ability and gender, percentage of core problems was significantly associated with ambulatory clinical points (b = 3.84, total model R2 = 0.14). Further, number of patients evaluated was significantly associated with clinical points (b = 0.19, total model R2 = 0.22), but only for students who undertook first‐quarter clerkships, who reported higher numbers of patients. Conclusions Notwithstanding a few positive (but small) associations, the results from this study suggest that clinical exposure is, at best, weakly associated with internal medicine clerkship performance.
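Analyses like the one above, in which a process measure is tested after controlling for pre-clerkship ability and gender, are ordinary multiple regressions. The sketch below fits one by least squares on synthetic data; all variable names and the simulated effect sizes are assumptions for illustration, not the study's data.

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with an intercept; returns the
    coefficient vector (intercept first) and the model R^2."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    return beta, r2

# Synthetic data: the outcome depends weakly on exposure after controls.
rng = np.random.default_rng(1)
n = 507
ability = rng.normal(0, 1, n)    # pre-clerkship ability (control)
gender = rng.integers(0, 2, n)   # control
exposure = rng.normal(0, 1, n)   # e.g. percentage of core problems seen
outcome = 2.0 * ability + 0.3 * gender + 0.4 * exposure + rng.normal(0, 2, n)
beta, r2 = ols_fit(np.column_stack([ability, gender, exposure]), outcome)
print(f"b(exposure) = {beta[3]:.2f}, R^2 = {r2:.2f}")
```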

5.
The prediction of professional behaviour
Objective The purpose of this study was to establish outcome measures for professionalism in medical students and to identify predictors of these outcomes. Design Retrospective cohort study. Setting A US medical school. Participants All students entering in 1995 and graduating within 5 years. Measures Outcome measures included review board identification of professionalism problems and clerkship evaluations for items pertaining to professionalism. Pre-clinical predictor variables included material from the admissions application, completion of required course evaluations, students' self-reporting of immunisation compliance, students' performance on standardised patient (SP) exercises, and students' self-assessed performance on SP exercises. Results The outcome measures of clerkship professionalism scores were found to be highly reliable (alpha 0.88–0.96). No data from the admissions material were found to be predictive of professional behaviour in the clinical years. Using multivariate regression, failing to complete required course evaluations (B = 0.23) and failing to report immunisation compliance (B = 0.29) were significant predictors of unprofessional behaviour identified by the review board in subsequent years. Immunisation non-compliance predicted low overall clerkship professionalism evaluation scores (B = −0.34). Student self-assessment accuracy (SP score minus self-assessed score) (B = 0.03) and immunisation non-compliance (B = 0.54) predicted the internal medicine clerkship professionalism score. Conclusions This study identifies a set of reliable, context-bound outcome measures of professionalism. Although we searched for predictors of behaviour in the admissions application and other domains commonly felt to be predictive of professionalism, we found significant predictors only in domains where students had had opportunities to demonstrate conscientious behaviour or humility in self-assessment.
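The reliability figures quoted above (alpha 0.88–0.96) are Cronbach's alpha values. As an illustration, the function below computes alpha from a respondents-by-items score matrix; the ratings shown are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical clerkship professionalism ratings (5 students x 4 items).
ratings = [[4, 5, 4, 5],
           [3, 3, 4, 3],
           [5, 5, 5, 4],
           [2, 3, 2, 2],
           [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

As a sanity check, a matrix of perfectly correlated items yields alpha = 1.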

6.
Medical Education 2011: 45: 849–857 Objectives This study examined the construct validity of three commonly used clerkship performance assessments, including preceptors' evaluations, objective structured clinical examination (OSCE)-type clinical performance measures, and the National Board of Medical Examiners (NBME) medicine subject examination, in order to better understand their conceptual structures and utility in the explanation of clinical competence. Methods A total of 686 students who took an in-patient medicine clerkship during the period 2003 to 2007 participated in the study. Exploratory and confirmatory factor analyses using structural equation modelling were adopted to examine the latent domains underlying various indicators assessed by these three measures and the pattern of indicator–domain relationships. Results Factor analyses found three latent constructs, labelled Clinical Performance, Interpersonal Skills and Clinical Knowledge, underlying the observed measures. The three domains were modestly correlated with one another (inter-factor correlation coefficients ranged from 0.39 to 0.54). They also tapped a common higher-order construct, Clinical Competence, in varying degrees of magnitude (0.73, 0.74 and 0.53, respectively). Conclusions The study demonstrated that although the three commonly used tools for assessing clerkship performance contributed uniquely to the understanding of clinical performance, they also attested to a shared domain of clinical competence in their assessment. The study confirmed the need for a multiple-methods approach to clinical performance assessment. Findings also revealed that clerkship preceptors need to differentiate their evaluation of students' performances, and that the OSCE did not assess a single domain of clinical competence.

7.
Computer-based clinical simulations for medical education vary widely in structure and format, yet few studies have examined which formats are optimal for particular educational settings. This study is a randomized comparison of the same simulated case in three formats: a "pedagogic" format offering explicit educational support, a "high-fidelity" format attempting to model clinical reasoning in the real world, and a "problem-solving" format that requires students to express specific diagnostic hypotheses. Data were collected from rising third-year medical students using a posttest, attitudinal questionnaire, students' write-ups of the case, and log files of students' progress through the simulation. Student performances on all measures differed significantly by format. In general, students using the pedagogic format were more proficient but less efficient. They acquired more information but were able to do proportionately less with it. The results suggest that the format of computer-based simulations is an important educational variable.

8.
9.
Background: Consistent and effective implementation of clinical clerkship objectives remains elusive. Using the behavioral principles of self assessment, active learning and learner differences, we designed an objectives checklist to ensure that all students mastered a core body of internal medicine (IM) knowledge and to facilitate self-directed learning. Methods: We developed a 54-item learning objectives checklist card in the IM clerkship. In a randomized controlled trial by clerkship site and block, students in the intervention group received the checklist card and were instructed to obtain sign off on objectives by faculty and housestaff and to seek teaching, literature, and clinical experiences to satisfy objectives unmet through routine activities. Intervention group faculty and housestaff were oriented to the use of the checklist. Both intervention and control groups received the course syllabus. We assessed learning with faculty and housestaff evaluations, student knowledge self-assessment, and a written examination. Satisfaction with the cards was assessed with written evaluations. Results: There were no significant differences in ward evaluations, examination scores or self-assessed knowledge between students using the learning objectives cards and control groups. Faculty were more likely than students to agree that objectives cards improved education. Conclusions: An intervention designed to guide students in the use of a learning objectives card did not enhance learning as assessed by ward evaluations, a written examination, and satisfaction surveys. It is possible that more sensitive outcome measures could detect differences in knowledge for students using learning objectives checklist cards. This revised version was published online in September 2006 with corrections to the Cover Date.

10.
This paper describes a study in which students were faced with a series of problems presented as patient management problems (PMPs) and as simulated patients (individuals trained to accurately portray a clinical problem). The subjects were sixty-five final-year medical students in a clinical clerkship in family medicine. Four clinical problems were used; each was developed in the PMP and simulated patient format. Each student completed one PMP and one simulated patient encounter (SPE) during the 2nd week of the 8-week clerkship, and a second PMP and SPE in the 7th week of the clerkship. Performance on the two formats was compared by determining the number of options, and the number of critical options (weighted +1 or +2 by a criterion panel), elicited in each section of the problem: history, physical examination, investigations and management. Students were found to elicit significantly more options in the PMP in all sections of the problem, an increase of 20–150%. This difference due to format was of similar magnitude to the difference between problems, and the proportion of observed variance in response due to the differences between formats and cases was consistently greater than the variance due to systematic differences between students. The findings of this study are consistent with previous studies comparing performance on PMPs to oral examinations and medical records, and raise some concern about the use of PMPs as a measure of competence in certification and licensure decisions.

11.
12.
Context The Reporter–Interpreter–Manager–Educator (RIME) evaluation framework is intuitive and reliable. Our preceptors’ frustration with using summative tools for formative feedback and the hypothesis that the RIME vocabulary might improve students’ and preceptors’ experiences with feedback prompted us to develop and pilot a RIME‐based feedback tool. Methods The tool was based on the RIME vocabulary, which has previously been used for evaluation. As interpersonal skills and professionalism are difficult areas in which to give feedback, we added these as explicit categories. We piloted the tool in a longitudinal, 5‐month, multi‐specialty clerkship. Preceptors completed pre‐ and post‐introductory workshop surveys. Students completed post‐workshop and post‐clerkship surveys. Results Preceptors (n = 14) and students (n = 8) preferred RIME‐based feedback to ‘usual feedback’ (previously given using end‐of‐clerkship evaluation forms). After the initial workshop, preceptors expected that giving feedback, including critical feedback, would be easier. After the 5‐month clerkship, students reported receiving more feedback than in previous clerkships and rated feedback given using this tool more highly (P = 0.002; effect size 1.2). Students also felt it helped them understand specifically how to improve their performance (P = 0.003; effect size 1.2). Discussion In this pilot study, preceptors and students preferred feedback with a specific RIME‐based tool. Students felt such feedback was more useful and helped them identify specifically how to improve. Whether this method can improve student performance through improved feedback remains an area for further research.

13.
Medical Education 2011: 45: 166–175 Context Academic medical centres may adopt new learning technologies with little data on their effectiveness or on how they compare with traditional methodologies. We conducted a comparative study of student reflective writings produced using either an electronic (blog) format or a traditional written (essay) format to assess differences in content, depth of reflection and student preference. Methods Students in internal medicine clerkships at two US medical schools during the 2008–2009 academic year were quasi-randomly assigned to one of two study arms: they were asked either to write a traditional reflective essay and subsequently join in a faculty-moderated, small-group discussion (n = 45), or to post two writings to a faculty-moderated group blog and provide at least one comment on a peer's posts (n = 50). Examples from a pilot block were used to refine coding methods and determine inter-rater reliability. Writings were coded for theme and level of reflection by two blinded authors; these coding processes reached inter-rater reliabilities of 91% and 80%, respectively. Anonymous pre- and post-clerkship surveys assessed student perceptions and preferences. Results Student writing addressed seven main themes: (i) being humanistic; (ii) professional behaviour; (iii) understanding caregiving relationships; (iv) being a student; (v) clinical learning; (vi) dealing with death and dying, and (vii) the health care system, quality, safety and public health. The distribution of themes was similar across institutions and study arms. The level of reflection did not differ between study arms. Post-clerkship surveys showed that student preferences for blogging or essay writing were predicted by experience, with the majority favouring the method they had used.
Conclusions Our study suggests there is no significant difference in the themes addressed or the levels of reflection achieved when students complete a similar assignment via online blogging or traditional essay writing. Given this, faculty staff should feel comfortable using the blog format for reflective exercises, and could consider offering either format to accommodate students' different learning styles.

14.
Medical Education 2012: 46: 575–585 Context Research from numerous medical schools has shown that students from ethnic minorities underperform compared with those from the ethnic majority. However, little is known about why this underperformance occurs and whether there are performance differences among ethnic minority groups. Objectives This study aimed to investigate underperformance across ethnic minority groups in undergraduate pre‐clinical and clinical training. Methods A longitudinal prospective cohort study of progress on a 6‐year undergraduate medical course was conducted in a Dutch medical school. Participants included 1661 Dutch and 696 non‐Dutch students who entered the course over a consecutive 6‐year period (2002–2007). Main outcome measures were performance in Year 1 and in the pre‐clinical and clinical courses. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated by logistic regression analysis for ethnic subgroups (Surinamese/Antillean, Turkish/Moroccan/African, Asian, Western) compared with Dutch students, adjusted for age, gender, pre‐university grade point average (pu‐GPA), additional socio‐demographic variables (first‐generation immigrant, urban background, first‐generation university student, first language, medical doctor as parent) and previous performance at medical school. Results Compared with Dutch students, Surinamese and Antillean students specifically underperformed in the Year 1 course (pass rate: 37% versus 64%; adjusted OR 0.40, 95% CI 0.27–0.60) and the pre‐clinical course (pass rate: 19% versus 41%; adjusted OR 0.57, 95% CI 0.35–0.93). On the clinical course all non‐Dutch subgroups were less likely than Dutch students to receive a grade of ≥ 8.0 (at least three of five grades: 54–77% versus 88%; adjusted ORs: 0.17–0.45). Conclusions Strong ethnic disparities exist in medical school performance even after adjusting for age, gender, pu‐GPA and socio‐demographic variables. 
The more subjective grading used in clinical training cannot be ruled out as a cause of the lower grades, but other possible explanations should also be studied so that the disparities can be mitigated.
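The odds ratios reported above come from logistic regression, but an unadjusted odds ratio with a 95% Wald confidence interval can be computed directly from a 2 × 2 table. The counts below are hypothetical, chosen only to echo the 37% versus 64% pass rates; they will not reproduce the adjusted OR of 0.40, which controls for covariates.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% Wald CI for a 2x2 table:
    comparison group pass = a, fail = b; reference group pass = c, fail = d."""
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Illustrative counts (not the study's raw data): 37% vs 64% pass rates
# in hypothetical cohorts of 100 and 1000 students respectively.
or_, (lo, hi) = odds_ratio(37, 63, 640, 360)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```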

15.
Medical Education 2010: 44: 298–305 Context Doctors have used the subjective–objective–assessment–plan (SOAP) note format to organise data about patient problems and create plans to address each of them. We retooled this into the ‘Programme Evaluation SOAP Note’, which serves to broaden the clinician faculty member’s perspective on programme evaluation to include the curriculum and the system, as well as students. Methods The SOAP Note was chosen as the method for data recording because of its familiarity to clinician‐educators and its strengths as a representation of a clinical problem‐solving process with elements analogous to educational programme evaluation. We pilot‐tested the Programme Evaluation SOAP Note to organise faculty members' interpretations of integrated student performances during the Year 3 patient care skills objective structured clinical examination (OSCE). Results Eight community clerkship directors and lead clerkship faculty members participated as observers in the 2007 gateway examination and completed the Programme Evaluation SOAP Note. Problems with the curriculum and system far outnumbered problems identified with students. Conclusions Using the Programme Evaluation SOAP Note, clerkship leaders developed expanded lists of ‘differential diagnoses’ that could explain possible learner performance inadequacies in terms of system, curriculum and learner problems. This has informed programme improvement efforts currently underway. We plan to continue using the Programme Evaluation SOAP Note for ongoing programme improvement.

16.
Medical Education 2011: 45: 688–695 Context Skill in clinical reasoning is a highly valued attribute of doctors, but instructional approaches to foster medical students' clinical reasoning skills remain scarce. Self-explanation is an instructional procedure, the positive effects of which on learning have been demonstrated in a variety of domains, but which remain largely unexplored in medical education. Objectives The purpose of this study was to investigate the effects of self-explanation on students' learning of clinical reasoning during clerkships and to examine whether these effects are affected by topic familiarity. Methods An experimental study with a training phase and an assessment phase was conducted with 36 Year 3 medical students, randomly assigned to one of two groups. In the training phase, students solved 12 clinical cases (four cases on a less familiar topic; four on a more familiar topic; four on filler topics), either generating self-explanations (n = 18) or not (n = 18). The self-explanations were generated after minimal instructions and no feedback was provided to students. One week later, in the assessment phase, students were requested to diagnose 12 different, more difficult cases, similarly distributed among the same more familiar topic, less familiar topic and filler topics, and their diagnostic performance was assessed. Results In the training phase the performance of the two groups did not differ. However, in the assessment phase 1 week later, a significant interaction was found between self-explanation and case topic familiarity (F(1,34) = 6.18, p < 0.05). Students in the self-explanation condition, compared with those in the control condition, demonstrated better diagnostic performance on subsequent clinical cases, but this effect emerged only for cases concerning the less familiar topic. Conclusions The present study shows the beneficial influence of generating self-explanations when dealing with less familiar clinical contexts.
Generating self-explanations without feedback resulted in better diagnostic performance than in the control group at 1 week after the intervention.

17.
Context Ber's Comprehensive Integrative Puzzle aims to assess analytical clinical thinking in medical students. We developed a paediatric version, the MATCH test, in which we added two irrelevant options to each question in order to reduce guessing behaviour. We tested its construct validity and studied the development of integrative skills over time. Methods We administered a test (MATCH 1) to subjects from two universities, both with a 6-year medical training course. Subjects included 30 students from university 1 who had completed a paediatric clerkship in Year 4, 23 students from university 2 who had completed a paediatric clerkship in Year 5, 13 students from both universities who had completed an advanced paediatric clerkship in Year 6, 28 paediatric residents and 17 paediatricians. We repeated this procedure using a second test with different domains in a new, comparable group of subjects (MATCH 2). Results Mean MATCH 1 scores for the respective groups were: Year 4 students: 61.2% (standard deviation [SD] 1.3); Year 5 students: 71.3% (SD 1.6); Year 6 students: 76.2% (SD 1.5); paediatric residents: 88.5% (SD 0.7), and paediatricians: 92.2% (SD 1.1) (one-way ANOVA: F = 104.00, P < 0.0001). Students of both universities had comparable scores. MATCH 1 and 2 scores were comparable. Cronbach's α-values in MATCH 1 and 2 were 0.92 and 0.91, respectively, for all subjects, and 0.82 and 0.87, respectively, for all students. Conclusions Analytical clinical thinking develops over time, independently of the factual content of the course. This implies that shortened medical training programmes could produce less skilled graduates.

18.
In 1980 a new course, called ALCO, was introduced in the Faculty of Medicine at the University of Leiden. ALCO is the Dutch abbreviation of Algemeen Coassistentschap, which means general clerkship. This course was designed to bridge the gap between the first 4 years of theoretical studies and the subsequent 2 years of clerkship rotation. Because of the multidisciplinary nature of the ALCO and the considerable manpower required for this small-scale educational programme, the Faculty appointed an Executive Board to oversee the course and the quality of instruction. At the request of this Board, the Department of Educational Research and Development set up a regular course evaluation, by having students fill out questionnaires, and reported annually on the outcomes. This article presents a reconstruction of the dynamic process of implementing a new course in a traditional curriculum over a period of 5 years: on the one hand, the impact of student ratings; on the other, the changes made by the Executive Board to adjust the contents, format and methods of instruction. Now, after 5 years of ALCO, there is evidence that the student ratings, on the basis of which most decisions were taken, have contributed substantially to the instructional improvement of the course.

19.
INTRODUCTION: Structured assessment, embedded in a training programme, with systematic observation, feedback and appropriate documentation may improve the reliability of clinical assessment. This type of assessment format is referred to as in-training assessment (ITA). The feasibility and reliability of an ITA programme in an internal medicine clerkship were evaluated. The programme comprised 4 ward-based test formats and 1 outpatient clinic-based test format. Of the 4 ward-based test formats, 3 were single-sample tests, consisting of 1 student-patient encounter, 1 critical appraisal session and 1 case presentation. The other ward-based test and the outpatient-based test were multiple-sample tests, consisting of 12 ward-based case write-ups and 4 long cases in the outpatient clinic. In all, the ITA programme consisted of 19 assessments. METHODS: During 41 months, data were collected from 119 clerks. Feasibility was defined as over two thirds of the students obtaining 19 assessments. Reliability was estimated by performing generalisability analyses with 19 assessments as items and 5 test formats as items. RESULTS: A total of 73 students (69%) completed 19 assessments. Reliability expressed by the generalisability coefficients was 0.81 for 19 assessments and 0.55 for 5 test formats. CONCLUSIONS: The ITA programme proved to be feasible. Feasibility may be improved by scheduling protected time for assessment for both students and staff. Reliability may be improved by more frequent use of some of the test formats.
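The generalisability coefficients reported above (0.81 and 0.55) come from variance-component analysis. For the simplest crossed persons × items design with one observation per cell, the relative G-coefficient can be estimated from the ANOVA mean squares as in the sketch below; the score matrix is invented for illustration, and real G-studies usually involve more facets than this.

```python
import numpy as np

def g_coefficient(scores):
    """Relative generalisability (G) coefficient for a crossed
    persons x items design with one observation per cell:
    Erho^2 = var_p / (var_p + var_res / n_i)."""
    scores = np.asarray(scores, float)
    n_p, n_i = scores.shape
    grand = scores.mean()
    ss_p = n_i * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_i = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_res) / n_i, 0.0)  # person variance component
    return var_p / (var_p + ms_res / n_i)

# Hypothetical 4 students x 3 assessments score matrix.
scores = [[8, 7, 8],
          [5, 6, 5],
          [9, 8, 9],
          [4, 5, 4]]
print(f"G coefficient = {g_coefficient(scores):.2f}")
```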

20.
Summary. Data from the first 20 periods of a long-station clinical performance examination (CPE) for a 4-week required clerkship in family medicine were examined in order to assess the reliability and validity of the examination. Data from 304 students were examined for station, case scenario and examiner effects, and the results were compared to short-station formats. A significant examiner effect was found, but there were no differences in student performance by station or case scenario. These findings reflect the examiner specificity cited in the literature for short-station examinations, but not case specificity. The source of variability for this examination appears to be primarily examiner effect. There was a significant correlation between student scores on the two cases, and raters tended to rank-order students similarly in spite of variability in mean rater score. Scores on the CPE correlated with other measures of clinical performance as well as with other methods of student evaluation for the clerkship, providing some evidence for construct and criterion-related validity. CPE cases were developed from clerkship objectives, but examination of the test blueprint revealed some gaps in the extent to which the CPE covers the course content. CPE developers are working to increase inter-rater reliability through examiner training and to further standardise case scenarios through checklists and patient training. Additional cases are being developed to increase the content validity of the examination.
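The observation that raters tended to rank-order students similarly is naturally quantified with a rank correlation. The sketch below computes Spearman's rho for two examiners; the scores are hypothetical, invented for illustration.

```python
from scipy import stats

# Hypothetical global ratings from two examiners scoring the same
# eight students on a long-station CPE case.
rater_1 = [72, 65, 80, 58, 90, 77, 61, 69]
rater_2 = [70, 68, 78, 60, 85, 80, 59, 66]
rho, p = stats.spearmanr(rater_1, rater_2)
print(f"Spearman rho = {rho:.2f} (p = {p:.4f})")
```

A high rho with differing mean scores is exactly the pattern described above: raters agree on ordering even when their absolute severity varies.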


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号