Similar Articles
20 similar articles retrieved.
1.
CONTEXT: The assessment of undergraduates' communication skills by means of objective structured clinical examinations (OSCEs) is a demanding task for examiners. Tiredness over the course of an examining session may introduce systematic error, and unsystematic error that changes over the duration of the session may also be present. AIM: To determine the strength of some sources of systematic and unsystematic error in the assessment of communication skills over the duration of an examination schedule. METHODS: The study population comprised undergraduate first-year medical students completing their initial summative assessment of communication skills (a four-station OSCE). Students from three cohorts (1996-98 intake) were included, and in all 3 years the OSCE was carried out identically. All stations lasted 5 minutes with a simulated patient. Students were assessed by an examiner (a content expert) using the Liverpool Communication Skills Assessment Scale (LCSAS) and by the simulated patient using the Global Simulated-Patient Rating Scale (GSPRS). Each student was assigned a time slot from 1 to 24, where 1 denotes the first student to enter the examination and 24 the final slot. The number of students who failed the exam was noted for each of the 24 time slots. A control set of marks from a written communication skills examination was also used to explore a possible link with time slot. Analysis was conducted using graphical display, covariate analysis and logistic regression. RESULTS: No significant relationship was found between the point in the schedule at which a student entered the OSCE and their performance. The reliability of the content-expert and simulated-patient assessments was stable throughout the session. CONCLUSION: No evidence was found that the duration of examining in a communication OSCE influenced examiners and the marks they awarded. Routine checks of this nature are recommended to confirm the absence of bias.
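The final analysis step named above, logistic regression of exam outcome on time slot, can be sketched as follows. The data are simulated and all names and values are illustrative, not taken from the study:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: 200 students, each with an OSCE time slot (1-24) and a
# pass/fail outcome; values are simulated, not the study's.
rng = np.random.default_rng(0)
time_slot = rng.integers(1, 25, size=200).astype(float)
failed = rng.integers(0, 2, size=200)  # 1 = fail, 0 = pass

# Logistic regression of failure on time slot; a non-significant slope would
# mirror the study's finding of no schedule effect.
X = sm.add_constant(time_slot)
result = sm.Logit(failed, X).fit(disp=False)
print(result.summary())
```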

2.
Evidence of clinical competence for medical students entering the clinical clerkships at the University of Kansas College of Health Sciences is established by passing two different examinations: a 100-item multiple choice examination, and a videotaped history and physical examination of a simulated patient performed by each student and rated by that patient and two examiners. In 1976 the class of 196 medical students took an average of 1.85 written examinations per student. With 70% or better constituting a passing score, 30.6% passed on the first attempt, 55.6% on the second, 11.2% on the third and 2.5% on the fourth. Each student passed the televised practical examination and had the opportunity to review his or her videotape with a critiqued database and the examiners' and simulated patient's evaluations in hand. Correlation coefficients computed for all 196 students across written examination scores and the ratings of medicine tutors, examiners and professional patients revealed weak but significant correlations between the assessments of examiners and medical tutors, and between examiner assessments and written examination scores, but not between the other evaluations. This scheme of proof of competence appears to be objective and direct, and is convenient for both students and teaching staff.
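A minimal sketch of the kind of pairwise correlation analysis described, using simulated scores rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired scores for 196 students (simulated; not the study's data).
rng = np.random.default_rng(1)
written = rng.normal(75, 8, size=196)
examiner = 0.3 * written + rng.normal(50, 10, size=196)  # weakly related by design

r, p = pearsonr(written, examiner)
print(f"r = {r:.2f}, p = {p:.4f}")  # small r with p < 0.05 is 'weak but significant'
```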

3.
BACKGROUND: In order to emphasise learning more than control, from autumn 2000 we have invited medical students to propose questions for their own written examination in family medicine. Students were guaranteed that one of the three questions in their coming written examination would be chosen from among their proposals, possibly somewhat modified. AIM: To evaluate how sixth-year medical students experienced the project, and to what extent their performance in the examination was influenced. PARTICIPANTS: Sixth-year medical students. MAIN OUTCOME MEASURES: The project was evaluated using (i) marks in the examination; (ii) scores on self-administered questionnaires; and (iii) students' free-text evaluations. RESULTS: Fifty-seven of 64 (89%) students taking their examination in autumn 2000, and 56 of 59 (95%) students taking the exam in spring 2001, responded. In autumn 2000, 34 (60%) students reported that the project had changed their learning strategies. During spring 2001, 46 of 56 students participated in producing questions, spending a mean of 2.6 hours on the work. Students scored 5-7% higher on their own questions, on a marking scale ranging from 1 to 12. The students' free-text evaluations showed that they had prepared especially thoroughly for the topics proposed by students. They found it comforting to know at least one of the questions in the examination, and the students' questions were found relevant for general practice. CONCLUSION: Encouraging students to write questions for their own examination makes them feel more confident during the examination period, and may increase their reflective learning, without seriously limiting the topics studied or violating the control function of the examination.

4.
BACKGROUND: Medical schools across Canada expend great effort in selecting students from a large pool of qualified applicants. Most schools conduct non-cognitive assessments in an effort to ensure that medical students have the personal characteristics important in the practice of medicine. We reviewed the ability of University of Toronto academic and non-academic admission assessments to predict ranking by Internal Medicine and Family Medicine residency programmes. METHODS: The study sample consisted of students who had entered the University of Toronto between 1994 and 1998 inclusive, and had then applied through the Canadian resident matching programme to positions in Family or Internal Medicine at the University of Toronto in their graduating year. The value of admissions variables in predicting medical school performance and residency ranking was assessed. RESULTS: Ranking in Internal Medicine correlated significantly with undergraduate grade point average (GPA) and the admissions non-cognitive assessment. It also correlated with the 2nd-year objective structured clinical examination (OSCE) score, the clerkship grade in Internal Medicine and the final grade in medical school. Ranking in Family Medicine correlated with the admissions interview score. It also correlated with the 2nd-year OSCE score, the clerkship grade in Family Medicine, the clerkship ward evaluation in Internal Medicine and the final grade in medical school. DISCUSSION: The results of this study suggest that both cognitive and non-cognitive factors evaluated at medical school admission are important in predicting future success in medicine. The non-cognitive assessment adds value to standard academic criteria in predicting ranking by these two residency programmes, which justifies its use as part of the admissions process.

5.
Objective  Timely intervention, based on early identification of poor performance, is likely to help weaker medical students improve their performance. We wished to determine whether poor performance in degree assessments early in the medical degree predicts later undergraduate grades. If it does, this information could be used to signpost strategically placed supportive interventions for our students.
Methods  We carried out a retrospective, observational study of anonymised databases of student assessment outcomes at the University of Aberdeen Medical School. Data were accessed for students who graduated in the years 2003-07 (n = 861). The main outcome measure was marks for summative degree assessments from the end of Year 2 to the end of Year 5.
Results  After adjustment for cohort, maturity, gender, funding source, intercalation and graduate status, poor performance (fail or borderline pass) in the Year 2 first-semester written examination Principles of Medicine II was found to be a significant predictor of poor performance in all subsequent written examinations (all P < 0.001). Poor performance in the Year 3 objective structured clinical examination (OSCE) was a significant predictor of poor performance in the Year 4 and Year 5 OSCEs. Performance in essay-based summative assessments did not significantly predict performance in later essay-based assessments. Male gender also appeared to be a significant predictor of poor performance.
Discussion  Examinations taken as early as mid-Year 2 can be used to identify medical students who would benefit from intervention and support. Strategic delivery of appropriate intervention at this time may enable poorer students to perform better in subsequent examinations. We can then monitor the impact of remedial support on subsequent performance.

6.
OBJECTIVES: This paper reports a project that assessed a series of portfolios assembled by a cohort of participants attending a course for prospective general practice trainers. DESIGN: We studied the reliability of judgements about individual 'components' of the portfolios, together with an overall global judgement about performance. SETTING: NHSE South & West, King Alfred's College, Winchester and the Institute of Community Studies, Bournemouth University. SUBJECTS: Eight experienced general practice trainers recruited from around Wessex, which incorporates Hampshire, Dorset, Wiltshire and the Isle of Wight. RESULTS: The reliability of individual assessors' judgements (i.e. their consistency) was moderate, but inter-rater reliability did not reach a level that could support a safe summative judgement. The levels of reliability reached were similar to those of other subjective assessments, perhaps reflecting the individual personal agendas of both the assessed and the assessors, and variations in portfolio structure and content. CONCLUSIONS: Suggestions for future approaches are made.

7.
OBJECTIVE: To describe the development, organization, implementation and evaluation of a yearly multicentre, identical and simultaneous objective structured clinical examination (OSCE). SUBJECTS: All fifth-year medical students in a 6-year undergraduate medical programme. SETTING: The Christchurch, Dunedin and Wellington Schools of Medicine of the University of Otago, New Zealand. METHOD: One practice and two full 18-station OSCEs have been completed over 2 years, for up to 72 students per centre, in three centres. The process of development and the logistics are described, and data are presented on validity, reliability and fairness. RESULTS: Face and content validity were established. Internal consistency was 0.83-0.86, and interexaminer reliability, as assessed by the coefficient of correlation, averaged 0.78. Students rated the OSCE highly on relevance. Of the total variance in total OSCE marks, the schools contributed 6.9% and the students 93.1% in the first year; in the second year the schools contributed 6.2% and the students 93.8%. CONCLUSION: Implementation of a psychometrically sound, multicentre, simultaneous and identical OSCE is possible with a low level of interschool variation.
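The variance partition reported above (schools versus students) can be estimated with a random-intercept model. A hedged sketch on simulated data; the school means, spreads and group sizes are invented for illustration, and estimates from only three groups are unstable, so this shows the computation only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical OSCE totals for students nested in three schools (simulated).
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "school": np.repeat(["Christchurch", "Dunedin", "Wellington"], 70),
    "score": np.concatenate([rng.normal(m, 8, 70) for m in (60, 62, 61)]),
})

# Random-intercept model: partition variance into school and student components.
fit = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
school_var = float(fit.cov_re.iloc[0, 0])   # between-school variance
student_var = fit.scale                     # residual (between-student) variance
print(f"school share of variance: {school_var / (school_var + student_var):.1%}")
```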

8.
AIM: At Dundee University, midwifery and medical students are taught obstetrics together in a 2-week intensive course. We set out to test the hypothesis that staff time and effort could be saved by using shared resources to teach a multidisciplinary group of students to an acceptable level. METHOD: To measure the knowledge gained by the two different groups of students, we tested the students before and after a timetabled computer-assisted learning (CAL) session focusing on how to interpret a cardiotocograph (CTG). In addition, half of each student group was given extra CTG teaching before the CAL session. RESULTS: The medical students (n=38) increased their median score from 9 to 17 after the CAL (P<0.001), but the midwifery students (n=13) only increased their median score from 12 to 14 (n.s.). However, when given a tutorial plus CAL, the post-test scores for both medical and midwifery students were similar and significantly higher than the pre-test scores (median score increases from 8.5 to 18 for medical students, P<0.001, n=34, and from 9 to 16 for midwifery students, P<0.01, n=11). There was no significant additional knowledge gain by the medical students who undertook the extra tutorial. CONCLUSION: We conclude that shared resources could be used by medical and midwifery students to reach equivalent levels of skill in CTG interpretation. However, to achieve equivalence, staff time and effort were wasted because the medical students were given unnecessary tuition.

9.
CONTEXT: Problem-based learning (PBL) has become an integral component of medical curricula around the world. In Ontario, Canada, PBL has been implemented in all five of the province's medical schools for several years. Although proper and timely feedback is an essential component of medical education, the types of feedback that students receive in PBL have not been systematically investigated. OBJECTIVES: In the first multischool study of PBL in Canada, we sought to determine the types of feedback (grades, written comments, group feedback from the tutor, individual feedback from the tutor, peer feedback, self-assessment, no feedback) that students receive, as well as their satisfaction with these different feedback modalities. SUBJECTS AND METHODS: We surveyed a sample of 103 final-year medical students at the five Ontario schools (University of Toronto, McMaster University, Queen's University, University of Ottawa and University of Western Ontario). Subjects were recruited via e-mail and asked to fill out a questionnaire. RESULTS: Many students felt that the most helpful type of feedback in PBL was individual feedback from the tutor, and indeed, individual feedback was one of the more common types provided. However, although students also indicated a strong preference for peer and group feedback, these forms of feedback were not widely reported. There were significant differences between schools in the use of grades, written comments, self-assessment and peer feedback, as well as in the immediacy of the feedback given. CONCLUSIONS: Across Ontario, students do receive frequent feedback in PBL. However, significant differences exist in the types of feedback students receive, as well as in its timing. Although rated highly by students at all schools, peer feedback and self-assessment are used only to a limited extent at most, but not all, medical schools.

10.
An elective course titled 'Teaching in Medicine' was given to eight third-year medical students in response to the policy of the University of British Columbia medical school to expand its elective offerings. Course objectives focused on the skills that doctors need to fulfil their role as teachers of patients, students or colleagues. Instructional methods included directed reading, group discussions, microteaching, evaluation of videotaped samples of teacher behaviour, role play, demonstration and practice in developing and using audiovisual materials, and analysis of research on teaching and learning in medicine. The course culminated in each student presenting a major teaching session, which was videotaped and assessed by the student and the course teachers. All students rated the course as excellent. This paper describes the course and the teachers' and students' perceptions of it. The experience of this medical school is that a course of this nature is extremely worthwhile.

11.
OBJECTIVES: This study was undertaken to determine whether breadth of clinical experience and student levels of confidence were indicators of competency in standardized simulator performance-based assessments. METHODS: All students (n=144) attending an educational session were asked to complete a 25-item questionnaire regarding specific clinical experiences and levels of confidence in their ability to manage patient problems. To enumerate clinical experiences, students were asked to estimate the number of times a situation had been encountered or a skill had been performed. For level of confidence, each response was given on a 5-point Likert scale where 1=novice and 5=expert. Students then participated in a standardized simulated performance test. Medians and ranges were calculated and the data analysed using Spearman rank correlations. A P-value <0.05 was considered significant. Level of confidence data were compared with performance during the clinical rotation and with marks in the anaesthesia final examination. RESULTS: A total of 144 students attended the session, completed the questionnaire and participated in the standardized test. There were wide ranges of experience and confidence across the 25 listed items. Analysis showed good correlation between clinical experience and level of confidence. There was no correlation between either clinical experience or level of confidence and performance in the standardized simulation test. Neither was there any correlation between level of confidence and clinical grades or written examination marks. CONCLUSIONS: Clinical experience and level of confidence have no predictive value for performance assessments using standardized anaesthesia simulation scenarios.
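A minimal sketch of the Spearman rank correlation analysis described, on simulated data generated with no true association, so the expected outcome mirrors the study's null result:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data for 144 students: self-rated confidence (1-5 Likert) and a
# simulator performance score, generated independently of each other.
rng = np.random.default_rng(3)
confidence = rng.integers(1, 6, size=144)
performance = rng.normal(70, 10, size=144)

rho, p = spearmanr(confidence, performance)
print(f"rho = {rho:.2f}, p = {p:.3f}")  # p >= 0.05 mirrors the reported null result
```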

12.
OBJECTIVES: Ophthalmoscopy is an essential clinical skill for medical students to master, and competency in its performance needs to be assessed objectively. DESIGN: The development of a simple, cheap eye model for the objective assessment of the ophthalmoscopic skills of medical students is described. SETTING: University of Liverpool. SUBJECTS: Undergraduate medical students. RESULTS: The model was used in 803 assessments and showed a high level of student performance, based both on checklist marking of the general approach to the examination and on objective marking of the students' ability to manipulate the light beam, focus the lens and systematically examine the model's fundus. CONCLUSIONS: The method described provides a simple, cost-effective and objective assessment of the performance of ophthalmoscopy.

13.
OBJECTIVES: (i) To design a new, quick and efficient method of assessing specific cognitive aspects of trainees' clinical communication skills, to be known as the Objective Structured Video Exam (OSVE) (Study 1); (ii) to prepare a scoring scheme for markers (Study 2); and (iii) to determine the reliability of, and evidence for the validity of, the OSVE (Study 3). METHODS: Study 1 describes how the exam was designed. The OSVE assesses the student's recognition and understanding of the consequences of various communication skills. In addition, the assessment captures the number of alternative skills that the student believes would help to improve the doctor-patient interaction. Study 2 outlines the scoring system, which is based on a range of 50 marks. Study 3 reports inter-rater consistency and presents evidence to support the validity of the new assessment by associating the marks of 607 first-year undergraduate medical students with their performance ratings in a communication skills OSCE. SETTING: Medical school, The University of Liverpool. RESULTS: Preparation of a scoring scheme for the OSVE produced consistent marking; the reliability of the marking scheme was high (ICC=0.94). Evidence for the construct validity of the OSVE was found when the predicted moderate relationship between OSVE marks and interviewing behaviour in the communication skills OSCE was shown (r=0.17, P<0.001). CONCLUSION: A new video-based written examination (the OSVE) that is efficient and quick to administer was shown to be reliable and to demonstrate some evidence of validity.
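An intraclass correlation of the kind reported (ICC=0.94) can be computed as sketched below. This uses the third-party pingouin package and simulated two-marker data, not the study's scripts:

```python
import numpy as np
import pandas as pd
import pingouin as pg  # third-party: pip install pingouin

# Hypothetical data: two markers independently scoring the same 50 OSVE scripts
# out of 50 marks (simulated for illustration).
rng = np.random.default_rng(4)
true_score = rng.normal(30, 6, size=50)
df = pd.DataFrame({
    "script": np.tile(np.arange(50), 2),
    "rater": np.repeat(["A", "B"], 50),
    "score": np.concatenate([true_score + rng.normal(0, 2, 50),
                             true_score + rng.normal(0, 2, 50)]),
})

icc = pg.intraclass_corr(data=df, targets="script", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])  # a high ICC (e.g. ~0.94) indicates consistent marking
```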

14.
PROBLEM: A perception that the reliability of our oral assessments of clinical competence was vitiated by a lack of consistency in questioning. DESIGN: A parallel-group controlled trial of a Structured Question Grid for use in clinical assessments. The Structured Question Grid required assessors to see the patient personally in advance of the student and to write down, for each case, the points they wished to examine. It limited assessors to two questions on each point: one designated a pass question and one at a higher level. Three basic science and three clinical reasoning issues were required, so a total of 12 questions was allowed. SETTING: A small (70 students/year) undergraduate medical school with an integrated, problem-based curriculum. SUBJECTS: Sixty-seven students in the fourth year of a 5-year course were assessed, each seeing one patient and being examined by a pair of assessors. Assessor pairs were allocated either to use the Structured Question Grid or to assess according to their usual practice. RESULTS: After the assessment, but before being informed of the result, the students completed a questionnaire on their experience and gave their own performance a score between 0 and 100. The questions asked were based on focus group discussions with a previous student cohort, and concerned principally the perceived fairness and subjective validity of the assessment. The assessors independently completed a similar questionnaire, gave the student's performance a score between 0 and 100, and assigned an overall pass/fail grade. CONCLUSIONS: No difference was detected between students' or assessors' views of the fairness of the assessment for assessors who had used the Structured Question Grid compared with those who had not. Students whose assessors used the Structured Question Grid considered the assessment less representative of their ability. No difference was detected in the chance of students being assessed as failing, nor in the likelihood of a discrepancy between students' and assessors' pass/fail ratings.

15.
INTRODUCTION: Assessment of medical students' clinical skills is best carried out using multiple assessment methods. A programme was developed to obtain parent evaluations of medical students' paediatric interview skills, both for feedback and to identify students at risk of poor performance in summative assessments. METHOD: A total of 130 parent evaluations were obtained for 67 students (parent participation 72%, student participation 58%). Parents completed a 13-item questionnaire, the Interpersonal Skills Rating Scale (IPS; maximum score 91, with higher scores indicating a higher student skill level). Students received their individual parent scores and de-identified class mean scores as feedback, and participants were surveyed regarding the programme. Parent evaluation scores were compared with student performance in formative and summative faculty assessments of clinical interview skills. RESULTS: Parents supported the programme and participating students valued the parent feedback. Students with a parent score more than 1 standard deviation (SD) below the class mean (low IPS score students) obtained lower faculty summative assessment scores than other students (mean ± SD, 59 ± 5% versus 64 ± 7%; P < 0.05). Obtaining one low IPS score was associated with a subsequent faculty summative assessment score below the class mean (sensitivity 0.38, specificity 0.88). Parent evaluations combined with faculty formative assessments identified 50% of the students who subsequently performed below the class mean in summative assessments. CONCLUSIONS: Parent evaluations provided useful feedback to students and identified a group of students at increased risk of weaker performance in summative assessments. They could be combined with other methods of formative assessment to enhance screening procedures for clinically weak students.
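As a back-of-envelope check, the reported sensitivity and specificity follow from a 2x2 screening table. The cell counts below are hypothetical, chosen only to be consistent with the reported figures for 67 students:

```python
# Sensitivity/specificity of 'at least one low IPS score' as a flag for later
# below-average summative performance. Counts are hypothetical, not the study's.
tp, fn = 10, 16   # weak students flagged / missed
tn, fp = 36, 5    # other students correctly not flagged / wrongly flagged

sensitivity = tp / (tp + fn)   # 10/26 ≈ 0.38
specificity = tn / (tn + fp)   # 36/41 ≈ 0.88
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```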

16.
Searle J. Medical Education 2000;34(5):363-366.
CONTEXT: The responsibility of determining who is competent to practise medicine, and at what standard, is great. Examinations at the completion of year three of the four-year Graduate Entry Medical Programme (GEMP) at Flinders University of South Australia (FUSA) are high stakes: although a period for potential remediation remains, they contain the majority of the final summative assessment for the certification of students as doctors. The medical school has therefore recently examined its methods of certification, the clinical practice standards sought in its programme, and how to determine these standards. DESIGN: For every assessment a standard was documented, and methods were employed to set these standards using specific measures of performance. A modification of the Angoff method was applied to the written examination, and the Rothman method, using two criteria, was used to determine competency in the objective structured clinical examination (OSCE). These methods were used for the first time in 1998. Both used trained 'experts' as standard setters and both used the notion of the 'borderline candidate' to determine the passing standard. This paper describes these two criterion-referenced standard-setting procedures as used in this school, together with the related examination performance. CONCLUSIONS: Whilst the use of standard-setting procedures goes part way towards defining and measuring competence, it is time-consuming and requires significant examiner training and acceptance. Using a fixed 50% mark to determine who is and is not competent is simpler, but it is neither transparent, fair nor defensible.
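The Angoff logic described, in which judges estimate, per item, the probability that a 'borderline candidate' answers correctly, reduces to a simple calculation. A sketch with invented ratings; the particular modification used at FUSA is not specified in the abstract:

```python
import numpy as np

# Hypothetical Angoff ratings: 5 judges x 20 items, each value an estimated
# probability that a 'borderline candidate' answers the item correctly.
rng = np.random.default_rng(5)
ratings = np.clip(rng.normal(0.55, 0.12, size=(5, 20)), 0.0, 1.0)

cut_score = ratings.mean(axis=0).sum()  # sum of per-item mean estimates
print(f"Angoff cut score: {cut_score:.1f} / 20")  # passing standard on a 20-item test
```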

17.
OBJECTIVES: To assess student performance during tutorial sessions in problem-based learning (PBL). DESIGN: A 24-item rating scale was developed to assess student performance during PBL tutorial sessions as conducted during the pre-clinical years of the Medical School at the National Autonomous University of Mexico. Items were divided into three categories: Independent study, Group interaction and Reasoning skills. Fourteen tutors assessed 152 first- and second-year students in 16 tutorial groups. An exploratory factor analysis with an Oblimin rotation was carried out to identify the underlying dimensions of the questionnaire. SETTING: Medical School at the National Autonomous University of Mexico. SUBJECTS: Medical students. RESULTS: Factor analysis yielded four factors (Independent study, Group interaction, Reasoning skills and Active participation) which together accounted for 76.6% of the variance. Their Cronbach reliability coefficients were 0.95, 0.83, 0.94 and 0.93, respectively, and 0.96 for the scale as a whole. CONCLUSIONS: The questionnaire provides a reliable identification of the fundamental components of the PBL method as observable in tutorial groups, and could be a useful assessment instrument for tutors wishing to monitor students' progress in each of these components.
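A sketch of the two analyses named above, exploratory factor analysis with an Oblimin rotation and Cronbach's alpha, on simulated ratings. It relies on the third-party factor_analyzer package, and the data are random, so the outputs are illustrative only:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

# Hypothetical Likert-type ratings: 152 students x 24 items (simulated data).
rng = np.random.default_rng(6)
items = pd.DataFrame(rng.normal(3, 1, size=(152, 24)))

# Exploratory factor analysis with an oblique (Oblimin) rotation, four factors.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(items)
print(fa.loadings_.shape)  # (24, 4): each item's loading on each factor

def cronbach_alpha(x: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(f"alpha for the full scale: {cronbach_alpha(items):.2f}")
```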

18.
CONTEXT: For more than two decades the Medical School in Maastricht, the Netherlands, has used simulated patients (SPs) to provide students with opportunities to practise their skills in communication and physical examination. In this educational setting, a student meets an SP in a videotaped session, and feedback from the SP to the student at the end of the session is considered an important educational feature. We found no existing instruments for assessing individual SP performance during these sessions. OBJECTIVE: To develop a valid, reliable and feasible instrument to evaluate the performance of SPs. METHODS: The content of the instrument was validated through interviews with students, teachers and experts involved with SPs, who were asked to indicate the key features of good SP performance. Based on the interviews, a written checklist was developed to measure individual SP performance. The instrument was evaluated in a regular SP session at the medical school, involving 152 students and their teachers. MAIN OUTCOMES: All interviewees considered the scale satisfactory and the instrument valid. The feasibility and reliability of the checklist were investigated using data from 398 returned checklists. Cronbach's alpha was found to be 0.73. Generalizability analysis showed that 12 completed checklists were required to obtain a reliable assessment of one SP. CONCLUSIONS: The Maastricht Assessment of Simulated Patients (MaSP) appears to be a valid, reliable and feasible tool for assessing the performance of SPs in an educational setting.
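The generalizability result, that 12 completed checklists suffice for a reliable assessment of one SP, follows the usual one-facet logic: the reliability of a mean rating grows with the number of checklists averaged. A sketch with invented variance components:

```python
# How many returned checklists are needed before the mean rating of one SP is
# dependable? Assumes a one-facet design with hypothetical variance components.
sp_var, error_var = 0.30, 0.70   # illustrative SP and residual variances

for n in (1, 6, 12, 24):
    g = sp_var / (sp_var + error_var / n)   # reliability of a mean of n checklists
    print(f"n = {n:>2}: G = {g:.2f}")
```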

19.
OBJECTIVE: To compare the alcohol-related intervention and general interactional skills of medical students from a traditional (Sydney) and a non-traditional (Newcastle) medical school, before and after participation in an alcohol education programme about brief intervention. DESIGN: In two controlled trials, students received either a didactic alcohol education programme or didactic input plus skills-based training. Before and after training, all students completed videotaped interviews with simulated patients. SETTING: The Faculties of Medicine at the University of Newcastle and the University of Sydney, Australia. SUBJECTS: Fifth-year medical students (n=154). RESULTS: Both the alcohol-related intervention and the general interactional skills scores of the Newcastle students were significantly higher than those of the Sydney students at pre-test, but not after training. Although alcohol-related interactional skills scores improved after training at both universities, they did not reach a satisfactory level. The educational approach used had no effect on post-test scores at either university. CONCLUSIONS: Significant baseline differences in interactional skills scores favouring non-traditional over traditional students were no longer evident after both groups had taken part in an alcohol education programme. Further research is required to develop more effective alcohol intervention training methods.

20.
Context  The dissemination of objective structured clinical examinations (OSCEs) is hampered by their high staffing requirements and a significantly higher workload compared with multiple-choice examinations. Senior medical students may be able to support faculty staff by assessing their peers. The aim of this study was to assess the reliability of student tutors as OSCE examiners and their acceptance by their peers.
Methods  Using a checklist and a global rating, teaching doctors (TDs) and student tutors (STs) simultaneously assessed students' basic clinical skills at 4 OSCE stations. Inter-rater agreement between TDs and STs was calculated using kappa values and paired t-tests. Students then completed a questionnaire to assess their acceptance of student peer examiners.
Results  All 214 Year 3 students at the University of Göttingen Medical School were evaluated in spring 2005. Student tutors gave slightly better average grades than TDs (differences of 0.02–0.20 on a 5-point Likert scale). Inter-rater agreement at the stations ranged from 0.41 to 0.64 for checklist assessments and global ratings; overall inter-rater agreement on the final grade was 0.66. Most students felt that assessment by STs would result in the same grades as assessment by TDs (64%) and that it would be similarly objective (69%). Nearly all students (95%) felt confident that they could themselves evaluate their peers in an OSCE.
Conclusions  On the basis of our results, STs can act as examiners in summative OSCEs to assess basic medical skills. The slightly better grades observed are of no practical concern, and students accepted assessment performed by STs.
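A minimal sketch of the agreement analysis described in the Methods, kappa for chance-corrected agreement plus a paired t-test for systematic grade differences, on simulated TD/ST grades rather than the Göttingen data:

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

# Hypothetical grades: teaching doctors (TD) and student tutors (ST) grading
# the same 214 students on a 5-point scale (simulated, mostly agreeing raters).
rng = np.random.default_rng(7)
td = rng.integers(1, 6, size=214)
st = np.clip(td + rng.integers(-1, 2, size=214), 1, 5)

kappa = cohen_kappa_score(td, st)   # chance-corrected inter-rater agreement
t, p = ttest_rel(td, st)            # systematic difference in mean grade
print(f"kappa = {kappa:.2f}, paired t-test p = {p:.3f}")
```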
