Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This booklet aims to provide relevant background information and guidelines for medical school teachers in clinical departments charged with assessing the clinical competence of undergraduate students. It starts by emphasizing the difference between clinical competence and clinical performance. An approach to defining what should be assessed is outlined. The technical considerations of validity, reliability and practicability are discussed with reference to the ward- or practice-based setting and to the examination setting. The various methods available to assess aspects of competence are described and their strengths and weaknesses reviewed. The paper concludes with a discussion of the important issues of scoring and standard setting. The conclusion is reached that the quality of many current assessments could be improved. To do so will require a multi-format approach using both the practice and examination settings. Some of the traditional methods will have to be abandoned or modified and new methods introduced.

2.
The assessment of clinical competence has traditionally been carried out through standard evaluations such as multiple-choice question and bedside oral examinations. The attributes which constitute clinical competence are multidimensional, and we have modified the objective structured clinical examination (OSCE) to measure these various competencies. We have evaluated the validity and reliability of the OSCE in a paediatric clinical clerkship. We divided the examination into the four components of competence (clinical skills, problem-solving, knowledge, and patient management) and evaluated the performance of 77 fourth-year medical students. The skill and content domains of the OSCE were carefully defined, agreed upon, sampled and reproduced. This qualitative evaluation of the examination was both adequate and appropriate. We achieved both acceptable interstation and intertask reliability. When correlated with concurrent methods of evaluation we found the OSCE to be an accurate measure of paediatric knowledge and patient management skills. The OSCE did not correlate, however, with traditional measures of clinical skills including history-taking and physical examination. Our OSCE, as outlined, offers an objective means of identifying weaknesses and strengths in specific areas of clinical competence and is therefore an important addition to the traditional tools of evaluation.

3.
In undergraduate medical education, a shift away from in-patient teaching towards greater use of the ambulatory care setting is occurring. This paper looks at what effect this change in emphasis might have on students' clinical competence. A log-book approach was used to study final-year orthopaedic students' opportunities to interact with patients in the wards and out-patient clinics at the University of Dundee Medical School. Students perceived that similar opportunities to interact with patients to develop and improve their clinical skills were provided by both settings. The study showed that much greater use could be made of both settings for clinical skills teaching. While students were not enthusiastic about the log-book approach, it stimulated their thinking. It is concluded that student opportunities to develop clinical skills will not be adversely affected by the trend towards ambulatory care teaching. There should be more clinical teaching in the out-patient setting.

4.
A questionnaire, which consisted of 10 statements dealing with the attributes of effective clinical instruction, was designed for use by medical students. Three groups of trainees who followed consecutive clinical rotations in paediatrics assessed the instructional skills of their tutors using the instrument. Summary reports on students' perceptions were made available to the teachers soon after each rotation. The results showed that although individual instructors exhibited varying degrees of the desired skills, they maintained a consistent pattern through the assessments. When considered on an overall basis, teacher behaviours such as allowing the students to ask questions and giving satisfactory answers, and helping students with their learning problems through relevant feedback, received a higher percentage of positive ratings than emphasizing problem-solving, demonstrating and supervising physical examinations and procedures, and stimulating the students' interest in the subject. It appears that the instrument developed is suitable for obtaining feedback from the students to identify the strengths and weaknesses of the instructional skills of their clinical teachers. Such feedback would be useful when modifying programme presentation and in planning and conducting faculty development activities.

5.
OBJECTIVES: This study examines Finnish medical students' approaches to diagnosis and investigates further how medical teachers can use information about variation in their students' approaches to diagnosis to foster teaching in medical school. DESIGN: The medical students responded to the Conceptions and Experiences of Diagnosis Inventory (CEDI). SETTING: Faculty of Medicine, University of Helsinki. SUBJECTS: Ninety medical students in their clinical years and eight clinical teachers from the same Faculty of Medicine. RESULTS: The 11 subscales of the CEDI formed two contrasting factors: the first reflecting variation in 'non-virtuously' labelled, and the second in 'virtuously' labelled, aspects of diagnosis. CONCLUSIONS: Cluster analyses revealed subgroup characteristics of students' diagnostic processes that are of potential benefit to both students and teachers. Teacher interviews indicated that, for students, the CEDI may act as a self-assessment tool to help develop their diagnostic and metacognitive skills. For teachers, the CEDI was seen to offer important information about their students' conceptions of diagnosis and diagnostic skills.

6.
BACKGROUND: The ability to self-assess one's competence is a crucial skill for all health professionals. The interactive examination is an assessment model aiming to evaluate not only students' clinical skills and competence, but also their ability to self-assess their proficiency. METHODS: The methodology utilised students' own self-assessment, an answer to a written essay question and a group discussion. Students' self-assessment was matched to the judgement of their instructors. As a final task, students compared their own essay to one written by an "expert". The differences pointed out by students in their comparison documents and the accompanying arguments were analysed and categorised. Students received individual feedback on their performance and learning needs. The model was tested on one cohort of undergraduate dental students (year 2001, n = 52) in their third semester of studies, replacing an older form of examination in the discipline of clinical periodontology. RESULTS: Students' acceptance of the methodology was very positive. Students tended to overestimate their competence in relation to the judgement of their instructors in diagnostic skills, but not in skills relevant to treatment. No gender differences were observed, although females performed better than males in the examination. Three categories of differences were observed in the students' comparison documents. The accompanying arguments may reveal students' understanding and methods of prioritising. CONCLUSIONS: Students tended to overestimate their competence in diagnostic rather than treatment skills. The interactive examination appeared to be a convenient tool for providing deeper insight into students' ability to prioritise, self-assess and steer their own learning.

7.
OBJECTIVES: Learning to perform physical examination of the abdomen is a challenge for medical students. Medical educators need to find engaging, effective tools to help students acquire competence and confidence in abdominal examination techniques. This study evaluates the added value of ultrasound training when Year 1 medical students learn abdominal examination. METHODS: The study used a randomised trial with a wait-list control condition. Year 1 medical students were randomised into 2 groups: those who were given immediate ultrasound training, and those for whom ultrasound training was delayed while they received standard instruction on abdominal examination. Standardised patients (SPs) used a clinical skills assessment (CSA) checklist to assess student abdominal examination competence on 2 occasions - CSA-1 and CSA-2 - separated by 8 weeks. Students also estimated SP liver size for comparison with gold-standard ultrasound measurements. Students completed skills confidence surveys. RESULTS: At CSA-1, boosting traditional instruction with ultrasound training conferred no advantage in abdominal examination technique. However, at CSA-2 the delayed ultrasound training group showed significant improvement. Students uniformly underestimated SP liver sizes and the estimates were not affected by ultrasound training. Student confidence in both groups improved from baseline to CSA-1 and CSA-2. CONCLUSIONS: Ultrasound training as an adjunct to traditional means of teaching abdominal examination improves students' physical examination technique after students have acquired skills with basic examination manoeuvres.

8.
This study used factor analysis to define the components of clinical competence of medical students during their undergraduate psychiatric training. Four factors were defined; factor 1 related to cognitive and psychological problem-solving; factor 2 tapped the interpersonal and observational skills students showed with patients; factor 3 was characterized by knowledge in the examination setting, and factor 4 related to students' capacity to demonstrate their ability in an interpersonal setting. These are similar to the component skills of clinical competence demonstrated by students in other areas of the medical curriculum. They also correspond to the skills which Walton (1986) has suggested should be focused upon in undergraduate psychiatric education.
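For readers unfamiliar with how such component skills are extracted, the following is a minimal Python sketch of an exploratory factor analysis on assessment-item scores. The item names and data are hypothetical and are not taken from the study; it simply illustrates the general technique of recovering a small number of factors from correlated item marks.

```python
# Hedged sketch: exploratory factor analysis of assessment-item scores.
# Item names and the synthetic data are placeholders, not the study's dataset.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students = 120
items = ["problem_solving", "case_formulation", "interview_rapport", "observation",
         "written_knowledge", "oral_knowledge", "viva_interaction", "presentation"]

# Synthetic scores with a shared latent structure (stands in for real marks).
latent = rng.normal(size=(n_students, 4))
loadings = rng.uniform(0.4, 0.9, size=(4, len(items)))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n_students, len(items)))

fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(scores)
for i, row in enumerate(fa.components_, start=1):
    top = [items[j] for j in np.argsort(-np.abs(row))[:3]]
    print(f"Factor {i}: highest loadings on {top}")
```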

9.
The objective structured clinical examination in undergraduate psychiatry
Inadequate attention has been given to verifying the psychometric attributes of the objective structured clinical examination (OSCE), yet its popularity has been increasing in recent years. Our 6 years' experience in Nigeria showed that the OSCE is practicable in undergraduate psychiatry assessment, and there is evidence over consecutive years that it has satisfactory reliability and criterion-based validity. The importance of students' feedback in assessing the quality of the examination is reinforced, and subtle, less tangible elements which determine students' performance, such as social interactional mystique and some personality traits, are worthy of evaluative research.

10.
BACKGROUND: Clinical examinations increasingly consist of composite tests to assess all aspects of the curriculum recommended by the General Medical Council. SETTING: A final undergraduate medical school examination for 214 students. AIM: To estimate the overall reliability of a composite examination, the correlations between the tests, and the effect of differences in test length, number of items and weighting of the results on the reliability. METHOD: The examination consisted of four written and two clinical tests: multiple-choice questions (MCQ) test, extended matching questions (EMQ), short-answer questions (SAQ), essays, an objective structured clinical examination (OSCE) and history-taking long cases. Multivariate generalizability theory was used to estimate the composite reliability of the examination and the effects of item weighting and test length. RESULTS: The composite reliability of the examination was 0.77, if all tests contributed equally. Correlations between examination components varied, suggesting that different theoretically interpretable parameters of competence were being tested. Weighting tests according to items per test or total test time gave improved reliabilities of 0.93 and 0.81, respectively. Double weighting of the clinical component marginally affected the reliability (0.76). CONCLUSION: This composite final examination achieved an overall reliability sufficient for high-stakes decisions on student clinical competence. However, examination structure must be carefully planned and results combined with caution. Weighting according to number of items or test length significantly affected reliability. The components testing different aspects of knowledge and clinical skills must be carefully balanced to ensure both content validity and parity between items and test length.
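The effect of weighting on composite reliability can be illustrated with a short sketch. The component reliabilities, standard deviations, intercorrelation and weights below are placeholders, not the study's estimates, and the calculation uses the standard formula for the reliability of a weighted composite rather than the multivariate generalizability analysis the authors applied.

```python
# Hedged sketch: how weighting the components changes the reliability of a composite score.
# All numbers are illustrative placeholders.
import numpy as np

components = ["MCQ", "EMQ", "SAQ", "Essay", "OSCE", "Long case"]
rel = np.array([0.85, 0.80, 0.75, 0.65, 0.70, 0.55])   # assumed per-test reliabilities
sd  = np.array([8.0, 7.0, 6.0, 5.0, 9.0, 6.0])          # assumed per-test score SDs
r   = 0.45                                               # assumed uniform intercorrelation
cov = r * np.outer(sd, sd)
np.fill_diagonal(cov, sd ** 2)

def composite_reliability(weights):
    """Reliability of sum(w_i * X_i): 1 - sum(w_i^2 * sd_i^2 * (1 - rel_i)) / Var(composite)."""
    w = np.asarray(weights, dtype=float)
    var_c = w @ cov @ w                        # variance of the weighted composite
    err   = np.sum(w**2 * sd**2 * (1 - rel))   # weighted error variance
    return 1 - err / var_c

print("equal weights:     ", round(composite_reliability(np.ones(6)), 3))
print("item-count weights:", round(composite_reliability([60, 60, 20, 4, 20, 2]), 3))
```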

11.
Limitations of the traditional final medical examination for the assessment of clinical competence led to such developments as simulated patients and the Objective Structured Clinical Examination (OSCE). An interdisciplinary OSCE incorporating simulated patients and involving nine disciplines was introduced into the final examination in the Auckland School of Medicine to supplement the written papers and the long case. Six hundred and eight students were assessed over a 6-year period. Each of the three examination modes provided good discriminatory power. Significant correlations were found between the tests, but this does not mean one or more is redundant. Principal component analysis showed that a single significant factor accounted for over half the variance in the final assessment. This factor was equally weighted to the three examinations. A variety of evaluative methods are necessary to assess a student's competence and greater emphasis should be placed on those methods which encourage the learning of clinical skills and concurrently provide an appropriate mechanism for assessing them. The changes introduced have been supported by students and teachers and have fostered the learning of important clinical skills. Efforts to standardize the single long case have not overcome the criticisms surrounding its use, particularly in summative assessment.

12.
Summary: Examinations of competence which may affect career prospects require measures which are of high reliability, and which can be demonstrated to be valid. In a New Zealand summative postgraduate examination of competence in family practice the doctor-patient communication skills of candidates were assessed by non-medically trained nominees of community organizations. The assessments were based on direct observation of the candidates' encounters with simulated patients. To estimate the reliability of the consumer examiners, after the examination the examiners re-scored a random selection of videotaped candidate encounters. The test-retest correlations of consumer scoring were demonstrated to be at a level consistent with adequate examination reliability (confidence interval 0.59-0.98). Consumers may be valuable as a resource for the training and assessment of the communication skills of medical practitioners.
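As an illustration of how a confidence interval for a test-retest correlation of this kind can be obtained, here is a minimal Python sketch using the Fisher z transformation. The paired scores are hypothetical and the method is a generic one, not necessarily the authors' exact procedure.

```python
# Hedged sketch: Fisher z confidence interval for a test-retest correlation.
# The paired scoring data are hypothetical placeholders.
import numpy as np
from scipy import stats

first_scoring  = np.array([6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 5.0, 8.5, 6.8, 7.2])
second_scoring = np.array([6.8, 6.9, 5.2, 8.2, 6.4, 7.1, 5.3, 8.6, 6.5, 7.4])

r, _ = stats.pearsonr(first_scoring, second_scoring)
z = np.arctanh(r)                         # Fisher z transform of r
se = 1 / np.sqrt(len(first_scoring) - 3)  # standard error of z
lo, hi = np.tanh([z - 1.96 * se, z + 1.96 * se])
print(f"r = {r:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```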

13.
14.
Medical educators have always recognized the need to teach and train medical graduates and undergraduates the skills of conducting a consultation. Several authors have established the efficacy of using constructive feedback on videotape of each student's interaction with a patient to teach and enhance such skills. This study reports students' perceptions of the feedback process used in the Junior Paediatric Clerkship at the Faculty of Medicine of the United Arab Emirates University. An unexpected 73% of the respondents believed that self-observation influenced the development of their clinical skills. More than 80% said that the feedback from instructors and peers helped them to improve their clinical skills, but they would have liked to have more than one of their consultations recorded and reviewed. It was found that 75% of the students felt that self-critique of their performance made them aware of their strengths and weaknesses and that their skills in analysing and evaluating consultations had been enhanced. A Kruskal-Wallis one-way ANOVA showed that the students' professional attitude, empathy and warmth towards the patients differed highly significantly (P = 0.0062, 0.0089 and 0.0007, respectively) from their self-assurance, self-confidence and competence. They were also deficient in certain areas of history-taking, interviewing skills and physical examination techniques, and perceived that they needed more training in order to be proficient.

15.
The objective structured clinical examination (OSCE) is being used increasingly to assess students' clinical competence in a variety of controlled settings. The OSCE consists of multiple stations composed of a variety of clinically relevant problems (e.g. examining simulated patients, diagnosing X-rays, etc.). Generally, three types of performance data are collected: answers to multiple choice or true/false questions, written short answers, and performance check-lists completed by observers. In most OSCEs these student performance measures are scored by hand. This is time-consuming, increases the probability of mistakes and reduces the amount of data available for analysis. This paper describes a method of computer scoring OSCEs with over 100 students using statistical and test-scoring software regularly used for multiple choice examinations. During the examination, students, markers and raters code answers and performance data directly on optical mark-sheets which are read into the computer using an optical mark reader. The resultant computer data can be efficiently scored and rescored, grouped into different types of subscales, weighted to reflect questions' relative importance, and easily printed in a variety of report formats.
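A minimal sketch of the kind of pipeline described, turning per-item mark records into weighted subscale totals, is shown below. The item identifiers, subscales, weights and sample records are hypothetical; the original work used existing test-scoring software rather than custom code.

```python
# Hedged sketch: scoring optical mark-sheet records into weighted subscale totals.
# Item identifiers, subscales, weights and the sample records are hypothetical.
from collections import defaultdict

# Each checklist item belongs to a subscale and carries a weight reflecting its importance.
ITEM_KEY = {
    "hx_opening":    ("history_taking", 1.0),
    "hx_pain_site":  ("history_taking", 2.0),
    "px_inspection": ("physical_exam",  1.0),
    "px_palpation":  ("physical_exam",  2.0),
    "xray_dx":       ("interpretation", 3.0),
}

# (student_id, item_id, mark) triples, as they might appear in a mark-reader export.
RECORDS = [
    ("s001", "hx_opening", 1), ("s001", "hx_pain_site", 1), ("s001", "xray_dx", 0),
    ("s002", "hx_opening", 1), ("s002", "px_palpation", 1), ("s002", "xray_dx", 1),
]

def score_students(records):
    """Aggregate weighted marks into per-student, per-subscale totals."""
    totals = defaultdict(lambda: defaultdict(float))
    for student_id, item_id, mark in records:
        subscale, weight = ITEM_KEY[item_id]
        totals[student_id][subscale] += weight * mark
    return {sid: dict(sub) for sid, sub in totals.items()}

for sid, sub in score_students(RECORDS).items():
    print(sid, sub)
```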

16.
17.
CONTEXT: The College of Medicine and Medical Sciences at the Arabian Gulf University, Bahrain, replaced the traditional long case/short case clinical examination on the final MD examination with a direct observation clinical encounter examination (DOCEE). Each student encountered four real patients. Two pairs of examiners from different disciplines observed the students taking history and conducting physical examinations and jointly assessed their clinical competence. OBJECTIVES: To determine the reliability and validity of the DOCEE by investigating whether examiners agree when scoring, ranking and classifying students; to determine the number of cases and examiners necessary to produce a reliable examination, and to establish whether the examination has content and concurrent validity. SUBJECTS: Fifty-six final year medical students and 22 examiners (in pairs) participated in the DOCEE in 2001. METHODS: Generalisability theory, intraclass correlation, Pearson correlation and kappa were used to study reliability and agreement between the examiners. Case content and Pearson correlation between DOCEE and other examination components were used to study validity. RESULTS: Cronbach's alpha for DOCEE was 0.85. The intraclass and Pearson correlation of scores given by specialists and non-specialists ranged from 0.82 to 0.93. Kappa scores ranged from 0.56 to 1.00. The overall intraclass correlation of students' scores was 0.86. The generalisability coefficient with four cases and two raters was 0.84. Decision studies showed that increasing the cases from one to four improved reliability to above 0.8. However, increasing the number of raters had little impact on reliability. The use of a pre-examination blueprint for selecting the cases improved the content validity. The disattenuated Pearson correlations between DOCEE and other performance measures as a measure of concurrent validity ranged from 0.67 to 0.79. CONCLUSIONS: The DOCEE was shown to have good reliability and interrater agreement between two independent specialist and non-specialist examiners on the scoring, ranking and pass/fail classification of student performance. It has adequate content and concurrent validity and provides unique information about students' clinical competence.
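To illustrate the kind of reliability figures reported, the sketch below computes Cronbach's alpha over a synthetic 56 x 4 matrix of case scores and then uses the Spearman-Brown formula to project the effect of changing the number of cases. It is a simplification of the generalisability and decision-study analyses the authors performed, and the data are simulated, not theirs.

```python
# Hedged sketch: Cronbach's alpha for a synthetic 56-student x 4-case score matrix,
# plus Spearman-Brown projections for other numbers of cases. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
true_ability = rng.normal(70, 8, size=56)                             # one latent score per student
case_scores = true_ability[:, None] + rng.normal(0, 6, size=(56, 4))  # 4 observed case scores each

def cronbach_alpha(x):
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each case across students
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def spearman_brown(rel, factor):
    """Projected reliability when the number of cases is multiplied by `factor`."""
    return factor * rel / (1 + (factor - 1) * rel)

alpha4 = cronbach_alpha(case_scores)
print(f"alpha with 4 cases:     {alpha4:.2f}")
print(f"projected with 1 case:  {spearman_brown(alpha4, 1 / 4):.2f}")
print(f"projected with 8 cases: {spearman_brown(alpha4, 2):.2f}")
```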

18.
OBJECTIVES: (i) To design a new, quick and efficient method of assessing specific cognitive aspects of trainee clinical communication skills, to be known as the Objective Structured Video Exam (OSVE) (Study 1); (ii) to prepare a scoring scheme for markers (Study 2); and (iii) to determine reliability and evidence for validity of the OSVE (Study 3). METHODS: Study 1 describes how the exam was designed. The OSVE assesses the student's recognition and understanding of the consequences of various communication skills. In addition, the assessment taps the number of alternative skills that the student believes will be of assistance in improving the patient-doctor interaction. Study 2 outlines the scoring system that is based on a range of 50 marks. Study 3 reports inter-rater consistency and presents evidence to support the validity of the new assessment by associating the marks from 607 1st year undergraduate medical students with their performance ratings in a communication skills OSCE. SETTING: Medical school, The University of Liverpool. RESULTS: Preparation of a scoring scheme for the OSVE produced consistent marking. The reliability of the marking scheme was high (ICC = 0.94). Evidence for the construct validity of the OSVE was found when a moderate predicted relationship of the OSVE to interviewing behaviour in the communication skills OSCE was shown (r = 0.17, P < 0.001). CONCLUSION: A new video-based written examination (the OSVE) that is efficient and quick to administer was shown to be reliable and to demonstrate some evidence for validity.

19.
Context: Our project investigated whether trained lay observers can reliably assess the communication skills of medical students by observing their patient encounters in an out-patient clinic.
Methods: During a paediatrics clerkship, trained lay observers (standardised observers [SOs]) assessed the communication skills of Year 3 medical students while the students interviewed patients. These observers accompanied students into examination rooms in an out-patient clinic and completed a 15-item communication skills checklist during the encounter. The reliability of the communication skills scores was calculated using generalisability analysis. Students rated the experience and the validity of the assessment. The communication skills scores recorded by the SOs in the clinic were correlated with communication skills scores on a paediatrics objective structured clinical examination (OSCE).
Results: Standardised observers accompanied a total of 51 medical students and watched 199 of their encounters with paediatric patients. The reliability of the communication skills scores from nine observed patient encounters was calculated to be 0.80. There was substantial correlation between the communication skills scores awarded by the clinic observers and students' communication skills scores on their OSCE cases (r = 0.53, P < 0.001). Following 83.8% of the encounters, students strongly agreed that the observer had not interfered with their interaction with the patient. After 95.8% of the encounters, students agreed or strongly agreed that the observers' scoring of their communication skills was valid.
Conclusions: Standardised observers can reliably assess the communication skills of medical students during clinical encounters with patients and are well accepted by students.
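Using only the reliability figure reported in the abstract (0.80 from nine observed encounters), a Spearman-Brown back-calculation can approximate how many encounters would be needed to reach a higher target reliability. This is a rough stand-in for a full generalisability decision study, shown here as a hedged sketch.

```python
# Hedged sketch: back-solving the Spearman-Brown formula using the abstract's figure
# (reliability 0.80 from nine observed encounters) to estimate the number of encounters
# needed for a higher target reliability. A simplification of a decision study.
def single_unit_reliability(rel_k, k):
    """Reliability of one encounter implied by the reliability of k encounters."""
    return rel_k / (k - (k - 1) * rel_k)

def units_needed(rel_1, target):
    """Number of encounters needed to reach the target reliability."""
    return target * (1 - rel_1) / (rel_1 * (1 - target))

rel_1 = single_unit_reliability(0.80, 9)
print(f"implied single-encounter reliability: {rel_1:.2f}")
print(f"encounters needed for 0.90:           {units_needed(rel_1, 0.90):.1f}")
```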

20.
