Similar articles
20 similar articles found (search time: 31 ms)
1.
OBJECTIVES: To report the use of OSCEs for both formative and summative purposes within a general practice undergraduate clinical attachment and to compare student performance in the departmental OSCEs with that of their final medical school examinations. METHODS: Twenty-eight students rotated through the attachment and undertook pre- and post-attachment OSCEs of similar format but different content. Results were analysed to determine relationships between mean scores in the two OSCEs and student performance in their final medical school MBBS examinations. RESULTS: There was a marked improvement in all OSCE station scores. Pre-attachment scores for those stations measuring physical examination and problem-solving skills were unrelated to prior clinical experience. Post-attachment OSCE mean scores were significantly correlated with final examination OSCE and total mean scores. CONCLUSION: The general practice attachment appears to improve the clinical skills measured by the pre- and post-attachment OSCE; however, there was no control group of students. Problem-solving and focused physical examination skills need to be targeted by all undergraduate clinical departments. The department's post-attachment OSCE and total assessment results are predictors of final examination OSCE and total results. The use of pre- and post-attachment OSCEs facilitates both students' formative learning processes and the department's evaluation of its educational programme.
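
For readers who want to reproduce this kind of pre/post comparison, a minimal sketch is given below. It is an editor's illustration, not the authors' code: the data are simulated and the variable names (pre, post, finals) are hypothetical; only the statistical steps (a paired comparison of pre- and post-attachment means and a correlation of post-attachment scores with finals scores) mirror the analysis described in the abstract.

```python
# Editor's sketch (hypothetical data): pre/post OSCE comparison and
# association of post-attachment scores with final examination scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 28                                        # cohort size reported in the abstract
pre = rng.normal(55, 8, n_students)                    # pre-attachment OSCE mean scores (%)
post = pre + rng.normal(10, 5, n_students)             # post-attachment scores, improved on average
finals = 0.6 * post + rng.normal(20, 6, n_students)    # final MBBS OSCE scores (simulated)

# Improvement across the attachment: paired comparison of pre vs post means
t_stat, p_paired = stats.ttest_rel(post, pre)

# Association between post-attachment OSCE and final examination OSCE scores
r, p_corr = stats.pearsonr(post, finals)

print(f"paired t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Pearson r (post-attachment vs finals) = {r:.2f}, p = {p_corr:.4f}")
```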

2.
Rees C, Sheard C. Medical Education 2004;38(2):125-128.
INTRODUCTION: To date, no studies have examined preclinical medical students' views about portfolios. Since portfolios are becoming increasingly valued in medical education, this study explores second-year medical students' views about a reflective portfolio assessment of their communication skills. METHODS: 178 second-year medical students at the University of Nottingham completed the 18-item reflective portfolio questionnaire (RPQ) (alpha = 0.716) and a personal details questionnaire three days before submitting their portfolio assessment for communication skills. Data were analysed using univariate and multivariate statistics on SPSS Version 10.0. RESULTS: Total scores on the RPQ ranged from 40 to 75 (mean 58.28, SD 7.08). Significant relationships existed between RPQ total scores and students' ratings of their reflection skills (rs = 0.322, P < 0.001), RPQ total scores and students' confidence in building another portfolio (T = 4.381, d.f. = 176, P < 0.001), and RPQ total scores and students' marks for their reflective portfolio assessment (rs = 0.167, P = 0.029). Students with more positive views about reflective portfolios were more likely to rate their reflection skills as good, receive better marks for their portfolio assessment, and be more confident in building another portfolio. DISCUSSION: This study begins to highlight preclinical medical students' views about reflective portfolios. However, further research is required using qualitative studies to explore students' views in depth. Medical educators should be encouraged to consider introducing portfolios as a method of formative and summative assessment earlier in the medical curriculum.
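
The internal consistency quoted above (alpha = 0.716 for the 18-item RPQ) is Cronbach's alpha. A minimal sketch of how such a coefficient is computed from an item-score matrix follows; the data are simulated and the function name cronbach_alpha is introduced here purely for illustration, so the printed value only roughly approaches the reported 0.716.

```python
# Editor's sketch: Cronbach's alpha for an 18-item questionnaire (simulated data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(0, 1, (178, 1))              # 178 respondents, as in the abstract
items = trait + rng.normal(0, 2.5, (178, 18))   # 18 items sharing one common factor plus noise
print(f"alpha = {cronbach_alpha(items):.3f}")
```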

3.
BACKGROUND: There is still a great deal to be learnt about teaching and assessing undergraduate communication skills, particularly as formal teaching in this area expands. One approach is to use the summative assessments of these skills in formative ways. Discourse analysis of data collected from final year examinations sheds light on the grounds for assessing students as 'good' or 'poor' communicators. This approach can feed into the teaching/learning of communication skills in the undergraduate curriculum. SETTING: A final year UK medical school objective structured clinical examination (OSCE). METHODS: Four scenarios, designed to assess communication skills in challenging contexts, were included in the OSCE. Video recordings of all interactions at these stations were screened. A sample covering a range of good, average and poor performances was transcribed and analysed. Discourse analysis methods were used to identify 'key components of communicative style'. FINDINGS: Analysis revealed important differences in communicative styles between candidates who scored highly and those who did poorly. These related to: empathetic versus 'retractive' styles of communicating; the importance of thematically staging a consultation; and the impact of values and assumptions on the outcome of a consultation. CONCLUSION: Detailed discourse analysis sheds light on patterns of communicative style and provides an analytic language for students to raise awareness of their own communication. This challenges standard approaches to teaching communication and shows the value of using summative assessments in formative ways.

4.
CONTEXT: Objective structured clinical examinations (OSCEs) can be used for formative and summative evaluation. We sought to determine the generalisability of students' summary scores aggregated from formative OSCE cases distributed across 5 clerkships during Year 3 of medical school. METHODS: Five major clerkships held OSCEs with 2-4 cases each during their rotations. All cases used 15-minute student-standardised patient encounters and performance was assessed using clinical and communication skills checklists. As not all students completed every clerkship or OSCE case, the generalisability (G) study was an unbalanced student × (case:clerkship) design. After completion of the G study, a decision (D) study was undertaken and phi (φ) values for different cut-points were calculated. RESULTS: The data for this report were collected over 2 academic years involving 262 Year 3 students. The G study found that 9.7% of the score variance originated from the student, 3.1% from the student-clerkship interaction, and 87.2% from the student-case nested within clerkship effect. Using the variance components from the G study, the D study suggested that if students completed 3 OSCE cases in each of the 5 different clerkships, the reliability of the aggregated scores would be 0.63. The φ, calculated at a cut-point 1 standard deviation below the mean, would be approximately 0.85. CONCLUSIONS: Aggregating case scores from low stakes OSCEs within clerkships results in a score set that allows for very reliable decisions about which students are performing poorly. Medical schools can use OSCE case scores collected over a clinical year for summative evaluation.
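
The decision-study projection reported above can be illustrated with the usual generalisability formula for a student × (case:clerkship) design: the student variance divided by itself plus the interaction variances scaled by the numbers of clerkships and cases sampled. The sketch below plugs in the rounded variance components quoted in the abstract, so it only approximates the published G of 0.63.

```python
# Editor's sketch of a D-study projection from the variance components quoted in
# the abstract (9.7% student, 3.1% student x clerkship, 87.2% student x case
# nested within clerkship). Because the published components are rounded
# percentages, this reproduces the reported reliability of 0.63 only approximately.
var_student, var_student_clerkship, var_case_in_clerkship = 9.7, 3.1, 87.2

def g_coefficient(n_clerkships: int, n_cases_per_clerkship: int) -> float:
    error = (var_student_clerkship / n_clerkships
             + var_case_in_clerkship / (n_clerkships * n_cases_per_clerkship))
    return var_student / (var_student + error)

print(f"G for 5 clerkships x 3 cases: {g_coefficient(5, 3):.2f}")  # approx. 0.60 with these rounded inputs
```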

5.
INTRODUCTION: Assessment of medical student clinical skills is best carried out using multiple assessment methods. A programme was developed to obtain parent evaluations of medical student paediatric interview skills for feedback and to identify students at risk of poor performance in summative assessments. METHOD: A total of 130 parent evaluations were obtained for 67 students (parent participation 72%, student participation 58%). Parents completed a 13-item questionnaire [Interpersonal Skills Rating Scale (IPS); maximum score 91; higher scores = higher student skill level]. Students received their individual parent scores and de-identified class mean scores as feedback, and participants were surveyed regarding the programme. Parent evaluation scores were compared with student performance in formative and summative faculty assessments of clinical interview skills. RESULTS: Parents supported the programme and participating students valued parent feedback. Students with a parent score that was less than 1 standard deviation (SD) below the class mean (low IPS score students) obtained lower faculty summative assessment scores than did other students (mean ± SD, 59% ± 5 versus 64% ± 7; P < 0.05). Obtaining 1 low IPS score was associated with a subsequent faculty summative assessment score below the class mean (sensitivity 0.38, specificity 0.88). Parent evaluations combined with faculty formative assessments identified 50% of students who subsequently performed below the class mean in summative assessments. CONCLUSIONS: Parent evaluations provided useful feedback to students and identified 1 group of students at increased risk of weaker performance in summative assessments. They could be combined with other methods of formative assessment to enhance screening procedures for clinically weak students.
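
The screening properties quoted above (sensitivity 0.38, specificity 0.88) follow from a 2 × 2 table of low-IPS flags against below-mean summative performance. The sketch below uses hypothetical counts chosen only to roughly reproduce those figures; they are not the study's actual tabulations.

```python
# Editor's sketch: sensitivity and specificity from a 2x2 screening table.
# The counts below are hypothetical and chosen to approximate the reported
# sensitivity of 0.38 and specificity of 0.88.
tp, fn = 8, 13    # weak students flagged / missed by a low IPS score
fp, tn = 5, 37    # non-weak students flagged / correctly not flagged

sensitivity = tp / (tp + fn)   # proportion of weak students identified
specificity = tn / (tn + fp)   # proportion of non-weak students correctly not flagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```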

6.
Objectives  Communication skills training in undergraduate medical education is considered to play an important role in medical students' formation of their professional identity. This qualitative study explores Year 1 students' perceptions of their identities when practising communication skills with real patients.
Methods  A total of 23 individual semi-structured interviews and two focus group discussions were conducted with 10 students during their first year of communication skills training. All interviews and discussions were audio-recorded, transcribed and analysed for emergent themes relating to identity.
Results  Students struggled to communicate professionally with patients because of a lack of clinical knowledge and skills. Consequently, students enacted other identities, yet patients perceived them differently, causing conversational ambiguities.
Discussion  Students' perceptions challenge educational goals, suggesting that there is limited potential for the formation of professional identity through early training. Teacher-doctors must acknowledge how students' low levels of clinical competence and patients' behaviour complicate students' identity formation.

7.
Context  The dissemination of objective structured clinical examinations (OSCEs) is hampered by requirements for high levels of staffing and a significantly higher workload compared with multiple-choice examinations. Senior medical students may be able to support faculty staff to assess their peers. The aim of this study is to assess the reliability of student tutors as OSCE examiners and their acceptance by their peers.
Methods  Using a checklist and a global rating, teaching doctors (TDs) and student tutors (STs) simultaneously assessed students in basic clinical skills at 4 OSCE stations. The inter-rater agreement between TDs and STs was calculated by kappa values and paired t-tests. Students then completed a questionnaire to assess their acceptance of student peer examiners.
Results  All 214 Year 3 students at the University of Göttingen Medical School were evaluated in spring 2005. Student tutors gave slightly better average grades than TDs (differences of 0.02–0.20 on a 5-point Likert scale). Inter-rater agreement at the stations ranged from 0.41 to 0.64 for checklist assessment and global ratings; overall inter-rater agreement on the final grade was 0.66. Most students felt that assessment by STs would result in the same grades as assessment by TDs (64%) and that it would be similarly objective (69%). Nearly all students (95%) felt confident that they could evaluate their peers themselves in an OSCE.
Conclusions  On the basis of our results, STs can act as examiners in summative OSCEs to assess basic medical skills. The slightly better grades observed are of no practical concern. Students accepted assessment performed by STs.
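
The inter-rater agreement figures above are kappa statistics. A minimal sketch of how a station-level kappa between a teaching doctor and a student tutor could be computed follows; the grades are invented 5-point values, not the Göttingen data.

```python
# Editor's sketch: Cohen's kappa for agreement between a teaching doctor (TD)
# and a student tutor (ST) grading the same candidates. Grades are hypothetical
# 5-point values; the abstract reports station-level kappas of 0.41-0.64 and
# 0.66 for the final grade.
from sklearn.metrics import cohen_kappa_score

td_grades = [1, 2, 2, 3, 1, 4, 2, 3, 5, 2, 1, 3]   # teaching-doctor grades
st_grades = [1, 2, 3, 3, 1, 4, 2, 2, 5, 2, 2, 3]   # student-tutor grades

kappa = cohen_kappa_score(td_grades, st_grades)
print(f"kappa = {kappa:.2f}")
```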

8.
CONTEXT: The teaching of clinical communication skills has become an important part of medical school curricula. Many undergraduate medical courses include communication skills training at various points in their curriculum. Very few reports have been published on the development of communication skills over the duration of undergraduate medical training. AIMS: To determine the change in communication skills between early and mid-stages of the students' 5-year curriculum, and to investigate the predictive and theoretical significance of knowledge and understanding of communication skills in relation to observed performance. PARTICIPANTS: Students entering as the first cohort to the new medical curriculum at Liverpool Medical School (n=207). Nine students withdrew, leaving 198 students who completed two summative assessments in June 1997 (level 1) and November 1998 (level 2). STATISTICAL ANALYSIS: Repeated measures multivariate ANOVAs were applied to the main study data to detect any change in performance between levels 1 and 2. RESULTS AND CONCLUSIONS: An improvement in communication skills was found in medical students over 17 months of their undergraduate teaching, that is, from the level 1 to the level 2 assessment. Knowledge and understanding of communication skills at initial assessment did not show the predicted association with performance at level 2.

9.
INTRODUCTION: As we move from standard 'long case' final examinations to new objective structured formats, we need to ensure the new is at least as good as the old. Furthermore, knowledge of which examination format best predicts medical student progression and clinical skills development would be of value. METHODS: A group of medical students sat both the standard long case examination and the new objective structured clinical examination (OSCE) to introduce this latter examination to our Medical School for final MB. At the end of their pre-registration year, the group and their supervising consultants submitted performance evaluation questionnaires. RESULTS: Thirty medical students sat both examinations and 20 returned evaluation questionnaires. Of the 72 consultants approached, 60 (83%) returned completed questionnaires. No correlation existed between self- and consultant-reported performance. The traditional finals examination was inversely associated with consultant assessment. Better-performing students were not rated as better doctors. The OSCE (and its components) was more consistent and showed positive associations with consultant ratings across the board. DISCUSSION: Major discrepancies exist between the 2 examination formats, in data interpretation and practical skills, which are explicitly tested in OSCEs but less so in traditional finals. Standardised marking schemes may reduce examiner variability and discretion and weaken correlations across the 2 examinations. This pilot provides empirical evidence that OSCEs assess different clinical domains than do traditional finals. Additionally, OSCEs improve prediction of clinical performance as assessed by independent consultants. CONCLUSION: Traditional finals and OSCEs correlate poorly with one another. Objective structured clinical examinations appear to correlate well with consultant assessment at the end of the pre-registration house officer year.

10.
OBJECTIVE: This study aimed to monitor which undergraduate students collected formative feedback on their degree essays and to quantify any correlations between gender or summative mark achieved and whether formative feedback was sought. METHODS: We carried out a study at the University of Aberdeen Medical School, involving a total of 360 Year 3 students, comprising all 177 students in the 2004 cohort and 183 in 2005. Data on gender and summative mark were routinely collected during the degree assessment processes in March 2004 and 2005. Students signed on receipt of their feedback. RESULTS: Less than half the students (46%) collected their formative feedback: 47% in 2004, and 45% in 2005. Overall, females were significantly more likely than males to seek formative feedback (P = 0.004). Higher achievers were significantly more likely than lower achievers to seek their feedback (P = 0.020). CONCLUSIONS: Our findings indicate that these medical students, particularly males and poor students, may not use assessment feedback as a learning experience. Female and better students are keener to seek out formative feedback that might be expected to help them continue to do well. We need to explore further why so many students do not access formative feedback, and develop strategies for addressing this issue effectively.
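
Whether feedback collection differs by gender or achievement can be tested with a simple contingency-table analysis. The sketch below is illustrative only: the counts are hypothetical and chosen merely to match the overall 46% collection rate, not the study's actual tabulations.

```python
# Editor's sketch: chi-square test of feedback collection by gender on a 2x2
# contingency table. Counts are hypothetical; the abstract reports only
# percentages and P values.
from scipy.stats import chi2_contingency

#            collected  not collected
table = [[110,        90],    # female students (hypothetical counts)
         [ 55,       105]]    # male students   (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```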

11.
CONTEXT: The assessment of undergraduates' communication skills by means of objective structured clinical examinations (OSCEs) is a demanding task for examiners. Tiredness over the course of an examining session may introduce systematic error. In addition, unsystematic error which changes over the duration of the OSCE session may also be present. AIM: To determine the strength of some sources of systematic and unsystematic error in the assessment of communication skills over the duration of an examination schedule. METHODS: Undergraduate first-year medical students completing their initial summative assessment of communication skills (a four-station OSCE) comprised the study population. Students from three cohorts were included (1996-98 intake). In all 3 years the OSCE was carried out identically. All stations lasted 5 minutes with a simulated patient. Students were assessed using an examiner (content expert) evaluation tool, the Liverpool Communication Skills Assessment Scale (LCSAS), and a simulated-patient evaluation tool, the Global Simulated-patient Rating Scale (GSPRS). Each student was assigned a time slot ranging from 1 to 24, where, for example, 1 denotes that the student entered the examination first and 24 denotes the final entry slot. The number of students who failed this exam was noted for each of the 24 time slots. A control set of marks from a communication skills written exam was also adopted for exploring a possible link with the time slot. Analysis was conducted using graphical display, covariate analysis and logistic regression. RESULTS: No significant relationship was found between the schedule point at which the student entered the OSCE exam and their performance. The reliability of the content expert and simulated-patient assessments was stable throughout the session. CONCLUSION: No evidence could be found that duration of examining in a communication OSCE influenced examiners and the marks they awarded. Checks of this nature are recommended for routine inspection to confirm a lack of bias.
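
The time-slot analysis described above amounts to regressing a pass/fail outcome on the entry slot. A minimal sketch follows, using simulated data with no true slot effect to mirror the null finding; the 10-students-per-slot layout and 10% fail rate are assumptions, not taken from the paper.

```python
# Editor's sketch: logistic regression of fail/pass on OSCE entry slot (1-24),
# using simulated data with no true slot effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
slots = np.tile(np.arange(1, 25), 10)          # 10 students per slot (hypothetical)
fail = rng.binomial(1, 0.1, slots.size)        # 10% fail rate, independent of slot

X = sm.add_constant(slots.astype(float))
model = sm.Logit(fail, X).fit(disp=False)
print(model.summary())                          # slot coefficient should be near zero
```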

12.
OBJECTIVES: An exercise is described which aimed to make clear to first-year undergraduate medical students the writing skills required for an essay examination in one discipline. SUBJECTS: Many students were from a non-English speaking background and over one-third of students, regardless of language background, had limited experience in this type of essay writing. PROCEDURE: For this exercise, a practice essay was written by each student for formative assessment. The essay was rated by a tutor and by the student according to well-defined criteria. This allowed for comparisons to be made in a structured and objective way between the judgements of the student and the assessor. RESULTS: Students found the exercise to be very useful, although whether essay writing skills actually improved could not be established. Students from non-English speaking backgrounds tended to be most harsh in their self-evaluations, yet tutor-evaluations generally showed these students to have better writing skills than other students. Indeed, correlations between self- and tutor-evaluations were quite low. CONCLUSIONS: It is evident that students and their educators may be unclear about each other's expectations. By making explicit the requirements of an exercise, misunderstandings may be minimized and it is possible that student performance could improve, though further research is required to verify these hypotheses. It is suggested that students should be encouraged to evaluate their own work and should be instructed in writing skills throughout their medical degree education.

13.
AIM: To test the stability of medical student communication skills over a period of 17 months as exhibited by performance in objective structured clinical examinations (OSCEs) and to determine the strength of prediction of these skills by initial levels of knowledge and understanding. DESIGN: This is a prospective study using a 2-wave cohort. PARTICIPANTS: Medical undergraduates (n = 383) from 2 years' intake (1996 and 1997) were followed through the first 3 years of a medical curriculum. PROCEDURE: The study procedure involved the objective structured video examination (OSVE) conducted at formative and summative examinations during the first year. Two OSCE measures were employed: expert examiners and simulated patients completed the Liverpool Communication Skills Assessment Scale (LCSAS) and the Global Simulated Patient Rating Scale (GSPRS), respectively. The OSCE data were collected at Level 1 and 17 months later at Level 2 examinations. RESULTS: The measurement model followed prediction. A causal model using latent variables was fitted with Level 2 OSCE performance regressed on Level 1 OSCE and OSVE marks. Expert and simulated patient OSCE data were fitted separately and combined to determine strength of model fit according to professional and patient opinion of student skills. The overall fit of the models was acceptable. Communication skills performance showed a high level of stability. Some negative effect of cognitive factors on future skills performance was found. CONCLUSION: Early development of communication skills shows stable performance following an introductory course. Knowledge of communication skills has a small but significant influence on performance, depending on the time of testing. New assessments of cognitive factors are required to include both tacit and explicit knowledge.

14.
Haq I, Higham J, Morris R, Dacre J. Medical Education 2005;39(11):1126-1128.
OBJECTIVE: To assess the effect of ethnicity and gender on medical student examination performance. DESIGN: Cohort study of Year 3 medical students in 2002 and 2003. SETTING: Royal Free and University College Medical School, Imperial College School of Medicine. SUBJECTS: A total of 1216 Year 3 medical students, of whom 528 were male and 688 female, and 737 were white European and 479 Asian. OUTCOME MEASURE: Performance in summative written and objective structured clinical examinations (OSCEs) in July 2002 and 2003. RESULTS: White females performed best in all OSCEs and in 3 out of 4 written examinations. Mean scores for each OSCE and 2 out of 4 written examinations were higher for white students than for Asian students. The overall size of the effect is relatively small, being around 1-2%. CONCLUSION: Students of Asian origin, of both genders, educated in the UK, using English as their first language, continue to perform less well in OSCEs and written assessments than their white European peers.

15.
CONTEXT: Writing is an important skill for practitioners and students, yet this is a skill rarely taught in a formal capacity at medical school. At the University of Adelaide many students are from non-English speaking backgrounds and have varying proficiencies in English. We wished to devise a method and instrument which could identify students who may benefit from formative feedback and tuition in writing. OBJECTIVES AND METHOD: Students' written account of a short clinical interview with a standardized patient was assessed using a new instrument (the Written Language Rating Scale) designed especially for this study. The assessment of writing was made by one rater with qualifications in teaching English as a second language. SUBJECTS: 127 second-year medical students enrolled at the University of Adelaide, Australia. INSTRUMENTS AND RESULTS: The scale appeared to have good internal consistency, face and construct validity, and test security was not an issue. However, it had questionable concurrent validity with a standardized language test, although this may be partly due to the period of time which had elapsed between administration of the two tests. CONCLUSIONS: This study was useful in providing a means to objectively rate students' written English language skills and to target students in need of formative feedback and tuition. However, further research is necessary for both evaluation of medical writing and interventions for its improvement.

16.
CONTEXT: Various research studies have examined the question of whether expert or non-expert raters, faculty or students, evaluators or standardized patients, give more reliable and valid summative assessments of performance on Objective Structured Clinical Examinations (OSCEs). Less studied has been the question of whether or not non-faculty raters can provide formative feedback that allows students to take advantage of the educational opportunity that OSCEs provide. This question is becoming increasingly important, however, as the strain on faculty resources increases. METHODS: A questionnaire was developed to assess the quality of feedback that medical examiners provide during OSCEs. It was pilot tested for reliability using video recordings of OSCE performances. The questionnaires were then used to evaluate the feedback given during an actual OSCE in which clinical clerks, residents, and faculty were used as examiners on two randomly selected test stations. RESULTS: The inter-rater reliability of the 19-item feedback questionnaire was 0.69 during the pilot test. The internal consistency was found to be 0.90 during pilot testing and 0.95 in the real OSCE. Using this form, the feedback ratings assigned to clinical clerks were significantly greater than those assigned to faculty evaluators. Furthermore, performance on the same OSCE stations eight months later was not impaired by having been evaluated by student examiners. DISCUSSION: While evidence of mark inflation among the clinical clerk examiners should be addressed with examiner training, the current results suggest that clerks are capable of giving adequate formative feedback to more junior colleagues.

17.
Background  Medical students' final clinical grades in internal medicine are based on the results of multiple assessments that reflect not only the students' knowledge, but also their skills and attitudes.
Objective  To examine the sources of validity evidence for internal medicine final assessment results comprising scores from 3 evaluations and 2 examinations.
Methods  The final assessment scores of 8 cohorts of Year 4 medical students in a 6-year undergraduate programme were analysed. The final assessment scores consisted of scores in ward evaluations (WEs), preceptor evaluations (PREs), outpatient clinic evaluations (OPCs), general knowledge and problem-solving multiple-choice questions (MCQs), and objective structured clinical examinations (OSCEs). Sources of validity evidence examined were content, response process, internal structure, relationship to other variables, and consequences.
Results  The median generalisability coefficient of the OSCEs was 0.62. The internal consistency reliability of the MCQs was 0.84. Scores for OSCEs correlated well with WE, PRE and MCQ scores, with observed (disattenuated) correlations of 0.36 (0.77), 0.33 (0.71) and 0.48 (0.69), respectively. Scores for WEs and PREs correlated better with OSCE scores than with MCQ scores. Sources of validity evidence including content, response process, internal structure and relationship to other variables were shown for most components.
Conclusion  There is sufficient validity evidence to support the utilisation of various types of assessment scores for final clinical grades at the end of an internal medicine rotation. Validity evidence should be examined for any final student evaluation system in order to establish the meaningfulness of the student assessment scores.
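
The observed and disattenuated correlations quoted above are linked by the standard correction for attenuation, which divides the observed correlation by the square root of the product of the two scores' reliabilities. Using the reliabilities reported in the abstract for the OSCE (median G = 0.62) and MCQ (0.84) components gives a value close to, though not exactly, the published 0.69, presumably because the reported figures are rounded:

```latex
% Correction for attenuation (standard formula; figures rounded from the abstract)
r_{\text{true}} = \frac{r_{\text{observed}}}{\sqrt{r_{xx}\, r_{yy}}},
\qquad
r_{\text{OSCE,MCQ}} \approx \frac{0.48}{\sqrt{0.62 \times 0.84}} \approx 0.67 .
```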

18.
A questionnaire, which consisted of 10 statements dealing with the attributes of effective clinical instruction, was designed for use by medical students. Three groups of trainees who followed consecutive clinical rotations in paediatrics assessed the instructional skills of their tutors using the instrument. Summary reports on students' perceptions were made available to the teachers soon after each rotation. The results showed that although individual instructors exhibited varying degrees of the desired skills, they maintained a consistent pattern through the assessments. When considered on an overall basis, teacher behaviours such as allowing the students to ask questions and giving satisfactory answers, and helping with students' learning problems by providing relevant feedback, received a higher percentage of positive ratings than emphasizing problem-solving, demonstrating and supervising physical examinations and procedures, and stimulating the students' interest in the subject. It appears that the instrument developed is suitable for obtaining feedback from the students to identify the strengths and weaknesses of the instructional skills of their clinical teachers. Such feedback would be useful when modifying programme presentation and in planning and conducting faculty development activities.

19.
Formative assessments are systematically designed instructional interventions to assess and provide feedback on students' strengths and weaknesses in the course of teaching and learning. Despite their known benefits to student attitudes and learning, medical schools have been slow to integrate such assessments into their curricula. This study investigates how two different modes of formative assessment relate to each other and to performance on summative assessments in an integrated medical-school environment. Two types of formative assessment were administered to 146 first-year medical students each week over 8 weeks: a timed, closed-book component to assess factual recall and image recognition, and an un-timed, open-book component to assess higher order reasoning including the ability to identify and access appropriate resources and to integrate and apply knowledge. Analogous summative assessments were administered in the ninth week. Models relating formative and summative assessment performance were tested using Structural Equation Modeling. Two latent variables underlying achievement on formative and summative assessments could be identified: a "formative-assessment factor" and a "summative-assessment factor," with the former predicting the latter. A latent variable underlying achievement on open-book formative assessments was highly predictive of achievement on both open- and closed-book summative assessments, whereas a latent variable underlying closed-book assessments only predicted performance on the closed-book summative assessment. Formative assessments can be used as effective predictive tools of summative performance in medical school. Open-book, un-timed assessments of higher order processes appeared to be better predictors of overall summative performance than closed-book, timed assessments of factual recall and image recognition. This research was presented at the 86th Annual Meeting of the American Educational Research Association (AERA) in Montreal, Canada, April 11–15, 2005.
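
The modelling strategy described above, latent formative and summative factors with the former predicting the latter, can be written schematically as follows; the loadings and path coefficient are generic symbols for illustration, not estimates from the study:

```latex
% Schematic latent-variable structure (editor's sketch): weekly formative scores
% x_i load on a formative factor F, summative component scores y_j load on a
% summative factor S, and S is regressed on F.
x_{i} = \lambda^{F}_{i} F + \varepsilon_{i}, \quad i = 1,\dots,8,
\qquad
y_{j} = \lambda^{S}_{j} S + \delta_{j}, \quad j = 1,2,
\qquad
S = \beta F + \zeta .
```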

20.
Objective  Timely intervention, based on early identification of poor performance, is likely to help weaker medical students improve their performance. We wished to identify if poor performance in degree assessments early in the medical degree predicts later undergraduate grades. If it does, this information could be used to signpost strategically placed supportive interventions for our students.
Methods  We carried out a retrospective, observational study of anonymised databases of student assessment outcomes at the University of Aberdeen Medical School. Data were accessed for students who graduated in the years 2003-07 (n = 861). The main outcome measure was marks for summative degree assessments from the end of Year 2 to the end of Year 5.
Results  After adjustment for cohort, maturity, gender, funding source, intercalation and graduate status, poor performance (fail and borderline pass) in the Year 2 first semester written examination Principles of Medicine II was found to be a significant predictor of poor performance in all subsequent written examinations (all P < 0.001). Poor performance in the Year 3 objective structured clinical examination (OSCE) was a significant predictor of poor performance in Year 4 and 5 OSCEs. Relationships between essay-based summative assessments were not significantly predictive. Male gender appeared to significantly predict poor performance.
Discussion  Examinations taken as early as mid-Year 2 can be used to identify medical students who would benefit from intervention and support. Strategic delivery of appropriate intervention at this time may enable poorer students to perform better in subsequent examinations. We can then monitor the impact of remedial support on subsequent performance.
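
The covariate-adjusted prediction described above can be expressed as a regression of a later mark on an early poor-performance indicator plus the adjustment variables. The sketch below uses statsmodels' formula interface with hypothetical variable names and toy data; the study's actual adjustment set was cohort, maturity, gender, funding source, intercalation and graduate status.

```python
# Editor's sketch: covariate-adjusted regression of a later written-examination
# mark on an early poor-performance flag. Variable names and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "year4_written":  [61, 72, 55, 68, 49, 75, 63, 58, 70, 52],
    "pom2_poor":      [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],   # fail/borderline in Principles of Medicine II
    "male":           [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "graduate_entry": [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
})

model = smf.ols("year4_written ~ pom2_poor + male + graduate_entry", data=df).fit()
print(model.summary())   # the pom2_poor coefficient captures the adjusted association
```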
