Similar Literature
20 similar records found (search time: 46 ms)
1.
CONTEXT: The impact of faculty development activities aimed at improving the teaching skills of clinical instructors requires elucidation. Since 2003, all instructors at our school of medicine have been required to undertake a brief workshop in basic clinical instructional skills as a prerequisite for promotion and tenure. The impact of this has, so far, remained unknown. OBJECTIVE: This study aimed to examine to what extent participation in a brief workshop can improve clinical instructors' performance in the long run, and which particular dimensions of performance are improved. METHODS: The study included a sample of 149 faculty members who undertook a required workshop in basic instructional skills. The teaching performance of these faculty members was measured by student feedback a year after the workshop. The study used a pre- and post-test design, with a comparison group of 121 faculty members. RESULTS: Student ratings for 5 dimensions of clinical instruction increased significantly, but only for the study group, who had participated in a workshop. The comparison group's ratings were unchanged. The greatest improvement in the instructors' performance related to the availability of teachers to students. CONCLUSIONS: The study supports previous findings about the added value gained by long-term improvement of instructional skills after participation in even a brief workshop. The meaningful improvement in instructor availability to students is associated with the workshops' emphasis on a learner-centred approach and the need to provide continuous feedback.

2.
The Center for Clinical Skills (CCS) at the University of Hawai‘i's John A. Burns School of Medicine (JABSOM) trains medical students in a variety of medical practice education experiences aimed at improving the patient care skills of history taking, physical examination, communication, and counseling. Increasing class sizes accentuate the need for efficient scheduling of faculty and students for clinical skills examinations. This research reports an application of a discrete-event simulation methodology, using the commercial simulation optimization software package Arena® by Rockwell Automation Inc., to model the flow of students through an objective structured clinical examination (OSCE) using the basic physical examination sequence (BPSE). The goal was to identify the most efficient scheduling of limited volunteer faculty resources to enable all student teams to complete the OSCE within the allocated 4 hours. The simulation models 11 two-person student teams using 10 examination rooms, where physical examination skills are demonstrated on fellow student subjects and assessed by volunteer faculty. Multiple faculty availability models with constrained time parameters and other resources were evaluated. The results of the discrete-event simulation suggest that there is no statistical difference between the baseline model and the alternative models with respect to faculty utilization, but that there are statistically significant differences in student wait times. Two models significantly reduced student wait times without compromising faculty utilization.
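The scheduling question in this abstract can be illustrated with a far simpler model than a full Arena® build. Below is a minimal, stdlib-only Python sketch, not the authors' simulation: the team count and room count follow the abstract, but the 20-minute exam duration and the arrival policy are invented for illustration. Each team simply claims the earliest-free examination room.

```python
import heapq

def simulate_osce(n_teams=11, n_rooms=10, exam_minutes=20, arrival_gap=0):
    """Toy discrete-event model of an OSCE: teams claim the earliest-free
    room in arrival order. Returns per-team wait times in minutes."""
    rooms = [0.0] * n_rooms          # time at which each room next becomes free
    heapq.heapify(rooms)
    waits = []
    for i in range(n_teams):
        arrival = i * arrival_gap    # teams arrive arrival_gap minutes apart
        free = heapq.heappop(rooms)  # earliest-available room
        start = max(arrival, free)
        waits.append(start - arrival)
        heapq.heappush(rooms, start + exam_minutes)
    return waits
```

With all 11 teams arriving at once, the eleventh team idles for a full exam slot; staggering arrivals by even two minutes eliminates that wait without adding rooms or faculty, which is the kind of trade-off the abstract's alternative schedules explore.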

3.
We sought to identify key qualities of healthcare that influence patient appraisal of satisfaction with primary care. An Internet survey of patients was used to collect anonymous ratings of physicians on several dimensions of healthcare experiences, as well as comments about aspects of care that were excellent and those that could be improved. Qualitative data analysis was used to discern content clusters and relate them to high and low ratings of patient satisfaction. Content analysis revealed that patients perceive and value at least seven domains of healthcare in defining outstanding quality (access, communication, personality and demeanor of provider, quality of medical care processes, care continuity, quality of the healthcare facilities, and office staff). All seven were cited as reasons for rating physicians as excellent, while four domains (communication, care coordination, interpersonal skills, and barriers to access) drove negative ratings. We conclude that patient satisfaction ratings are highly influenced by a core of communication and follow-up care. Physicians who lack these core traits are unlikely to attain high ratings, while having them does not necessarily ensure high patient satisfaction.

4.
Purpose To determine whether global ratings by patients are valid and reliable enough to be used within a major summative assessment of medical students' clinical skills. Method In 11 stations of an 18‐station objective structured clinical examination (OSCE), where a student was asked to educate or take a history from a patient, the patient was asked, ‘How likely would you be to come back and discuss your concerns with this student again?’ These 11 opinions were aggregated into a single patient opinion mark and correlated with other measures of student competence. The patients were not experienced in student assessment. Results A total of 204 students undertook the OSCE. Reliability of patient opinion across all 11 stations revealed a Cronbach alpha of 0·65. The correlation coefficient between the patient ratings and the total OSCE score was good (r = 0·74; P < 0·001) and was better than the correlation between any single OSCE station and the total OSCE score. It was also better than the correlation between the aggregated patient opinion and tests of student knowledge (r = 0·47). Conclusion It is known that patients can reliably complete checklists of clinical skills and that doctors can reliably provide global ratings of students. We have now shown that, by controlling the context, asking the right question and aggregating several opinions, untrained patients can provide a reliable and valid global opinion that contributes to the assessment of a student's clinical skills.
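The Cronbach's alpha of 0·65 reported above is straightforward to compute from a students-by-stations matrix of ratings. A minimal stdlib-only Python sketch (illustrative only; the toy data in the checks are invented, not the study's):

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha for internal consistency.
    ratings: list of students, each a list of k station scores."""
    k = len(ratings[0])
    # variance of each station's scores across students
    item_vars = [pvariance([s[i] for s in ratings]) for i in range(k)]
    # variance of each student's total score
    total_var = pvariance([sum(s) for s in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Values near 1 mean the stations rank students consistently; the study's 0·65 across 11 stations is modest, yet the aggregated patient mark still correlated well with the total OSCE score.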

5.
The purpose of this study is to investigate the content-specificity of communication skills. It investigates the reliability and dimensionality of standardized patient (SP) ratings of communication skills in an Objective Structured Clinical Examination (OSCE) for final-year medical students. An OSCE consisting of seven SP encounters was administered to final-year medical students at four medical schools that are members of the California Consortium for the Assessment of Clinical Competence (N = 567). For each case, SPs rated students' communication skills on the same seven items. Internal consistency coefficients were calculated and a two-facet generalizability study was performed to investigate the reliability of the scores. An exploratory factor analysis was conducted to examine the dimensionality of the exam. Findings indicate that communication skills across the seven-case examination demonstrate a reliable generic component that supports relative decision making, but that a significant case-by-student interaction exists. The underlying structure further supports the case-specific nature of students' ability to communicate with patients. From these findings, it is evident that individuals' communication skills vary systematically with specific cases. Implications include the need to consider the range of communication skill demands made across the OSCE to support generalization of findings, the need for instruction to provide feedback on communication skills in multiple contexts, and the need for research to further examine the student, patient, and presenting problem as sources of variation in communication skills.

6.
Medical Education 2011: 45 : 1048–1060 Objectives This study was intended to develop a conceptual framework of the factors impacting on faculty members’ judgements and ratings of resident doctors (residents) after direct observation with patients. Methods In 2009, 44 general internal medicine faculty members responsible for out‐patient resident teaching in 16 internal medicine residency programmes in a large urban area in the eastern USA watched four videotaped scenarios and two live scenarios of standardised residents engaged in clinical encounters with standardised patients. After each, faculty members rated the resident using a mini‐clinical evaluation exercise and were individually interviewed using a semi‐structured interview. Interviews were videotaped, transcribed and analysed using grounded theory methods. Results Four primary themes that provide insights into the variability of faculty assessments of residents’ performance were identified: (i) the frames of reference used by faculty members when translating observations into judgements and ratings are variable; (ii) high levels of inference are used during the direct observation process; (iii) the methods by which judgements are synthesised into numerical ratings are variable, and (iv) factors external to resident performance influence ratings. From these themes, a conceptual model was developed to describe the process of observation, interpretation, synthesis and rating. Conclusions It is likely that multiple factors account for the variability in faculty ratings of residents. Understanding these factors informs potential new approaches to faculty development to improve the accuracy, reliability and utility of clinical skills assessment.

7.
Effective education of practical skills can alter clinician behaviour, positively influence patient outcomes, and reduce the risk of patient harm. This study compares the efficacy of two innovative practical skill teaching methods against a traditional teaching method. Year three pre-clinical physiotherapy students consented to participate in a randomised controlled trial, with concealed allocation and blinded participants and outcome assessment. Each of the three randomly allocated groups was exposed to a different practical skills teaching method (traditional, pre-recorded video tutorial or student self-video) for two specific practical skills during the semester. Clinical performance was assessed using an objective structured clinical examination (OSCE). The students were also administered a questionnaire to gauge participants' level of satisfaction with the teaching method and their perceptions of its educational value. There were no significant differences in clinical performance between the three practical skill teaching methods as measured in the OSCE, or for student ratings of satisfaction. A significant difference existed between the methods for student ratings of perceived educational value, with the teaching approaches of pre-recorded video tutorial and student self-video being rated higher than ‘traditional’ live tutoring. Alternative teaching methods to traditional live tutoring can produce equivalent learning outcomes when applied to the practical skill development of undergraduate health professional students. The use of alternative practical skill teaching methods may allow for greater flexibility for both staff and infrastructure resource allocation.

8.
Previous projects (Combell I & II) to assess clinical skills were conducted in medical schools in Catalonia, in order to introduce a model of such an assessment using standardized patients (SP). The aim of this study (Combell III) was to measure selected characteristics of our model. Seventy-three medical students in the final year at the Bellvitge teaching unit of the University of Barcelona participated in a clinical skills assessment (CSA) project that used 10 SP cases. The mean group scores for the four components of clinical skills for each day of testing were studied, and ratings for each student in the 10 sequential encounters were checked. The study also compared the clinical skills scores with their academic grades. The total case mean score (mean score of history-taking, physical examination and patient notes scores) was 51.9%, and the mean score for communication skills was 63.6%. The clinical skills scores over the 8 testing days showed no day-to-day differences. The study did not find differences among the sequential encounters for each student (training effect). There was a lack of correlation between clinical skill scores and academic grades. The project demonstrated the feasibility of the method for assessing clinical skills, confirmed its reliability, and showed that there is no correlation between scores with this method and academic examinations that mainly reflect knowledge.

9.
Medical Education 2012: 46 : 267–276 Objectives We reviewed papers describing the development of instruments for assessing clinical communication in undergraduate medical students. The instruments had important limitations: most lacked a theoretical basis, and their psychometric properties were often poor or inadequately investigated and reported. We therefore describe the development of a new instrument, the Liverpool Undergraduate Communication Assessment Scale (LUCAS), which is intended to overcome some of these limitations. We designed LUCAS to reflect the theory that communication is contextually dependent, inherently creative and cannot be fully described within a conceptual framework of discrete skills. Methods We investigated the preliminary psychometric properties of LUCAS in two studies. To assess construct and external validity, we examined correlations between examiners’ LUCAS ratings and simulated patients’ ratings of their relationships with students in Year 1 formative (n = 384) and summative (n = 347) objective structured clinical examination (OSCE) samples. Item–total correlations and item difficulty analyses were also performed. The dimensionality of LUCAS was examined by confirmatory factor analysis. We also assessed inter‐rater reliability; four raters used LUCAS to rate 40 video‐recorded encounters between Year 1 students and simulated patients. Results Simulated patient ratings correlated with examiner ratings across two OSCE datasets. All items correlated with the total score. Item difficulty showed LUCAS was able to discriminate between student performances. LUCAS had a two‐dimensional factor structure: we labelled Factor 1 creative communication and Factor 2 procedural communication. The intraclass correlation coefficient was 0.73 (95% confidence interval 0.54–0.85), indicating acceptable reliability. 
Conclusions We designed LUCAS to move the primary focus of examiners away from an assessment of students’ enactment of behavioural skills to a judgement of how well students’ communication met patients’ needs. LUCAS demonstrated adequate reliability and validity. The instrument can be administered easily and efficiently and is therefore suitable for use in medical school examinations.

10.
INTRODUCTION: Yearly evaluation of academic faculty teaching is required by institutions for advancement purposes and continued employment. The method by which these evaluations are collected may influence the outcome of the evaluation. We compared the results of three different data collection methods for faculty ratings. METHODS: Diagnostic radiology residents evaluated four behaviour categories of faculty in three different ways during the 1995-96 academic year. The individual anonymous ballot was compared to two student debriefing techniques. RESULTS: Ratings in individual categories and rankings of several of the faculty changed considerably depending upon the data gathering method. Individual anonymous ballots produced a higher average rating in all four categories evaluated. The average ratings were lowest in the closed meeting group. DISCUSSION: The method by which evaluations of faculty are collected influences both the numerical value of the rating and the ranking of the teachers within the group. Evaluation outcomes are highly dependent upon the method of data collection.

11.
The purpose of this survey study was to investigate allied health faculty members' and students' ratings of the clinical educational feedback process. Faculty members and students from seven allied health programs at the University of Nebraska Medical Center, who were currently involved with clinical education, were asked to indicate their feelings on a seven-point scale for each of 22 feedback characteristics. An ANOVA and a Scheffe's test for post hoc analysis were used for data analyses. The results indicated that while both faculty members and students perceived eight feedback characteristics as equally important, they differed significantly (p < .01) in their ratings of actual feedback provided in the characteristics of specific, timely, encouraging, and recommending improvement. Other significant faculty/student discrepancies were found in the area of student reception of feedback provided. The results are useful to guide and direct improvements in the clinical education of allied health students.

12.
Student evaluation of teaching effectiveness is widely used in undergraduate institutions as one element of determining overall faculty effectiveness. The evaluation format typically consists of (1) a number of questions the student answers by indicating a numerical rating and (2) an open-ended section for written comments. Some faculty members believe that the numerical ratings are not taken seriously by the students, and that the written comments impose greater accountability on the part of students. On the other hand, numerical ratings are necessary to minimize the fear that unfavorable written comments will be taken out of context in promotion decisions. Tabulation of numerical ratings is essential if a computerized database for faculty evaluation is to be established. This study was designed to examine the relationship between students' numerically based ratings and written comments by evaluating allied health instructors using a standard, schoolwide evaluation form. Written comments were categorized according to a five-point scale and compared to mean values obtained from numerical ratings. Twenty-two faculty and 1,311 student evaluations were included. Significant positive correlations were found between the numerical student ratings and the written comments. The highest correlations were between student comments and two items related to overall teaching effectiveness. Students who evaluated instructors at either extreme on the spectrum of effectiveness were most likely to include written comments. Based on the consistency of numerical ratings and written comments we recommend that only the numerical ratings be used as part of the promotion and tenure decision-making process.(ABSTRACT TRUNCATED AT 250 WORDS)

13.
Purpose: There is a paucity of evaluation forms specifically developed and validated for outpatient settings. The purpose of this study was to develop and validate an instrument specifically for evaluating outpatient teaching, to provide reliable and valid ratings for individual and group feedback to faculty, and to identify outstanding teachers in that setting. Method: Through literature reviews and pilot studies at the Faculties of Health Sciences, McMaster University (Canada) and Aga Khan University (AKU-Pakistan), a 15-item instrument, the Student Evaluation of Teaching in Outpatient Clinics (SETOC), was created with five subscales: “Establishing Learning-Milieu, Clinical-Teaching, General-Teaching, Clinical-Competence, and Global-Rating.” Seven-point Likert-type rating scales were used. Students also nominated three “best” outpatient teachers. Participants: 87 faculty members (79%) were rated by all 224 third- to fifth-year students (clerks) at outpatient departments of the AKU hospital over a one-year period. Analyses: Repeated measures generalizability studies, correlations, and concurrent validity of SETOC scores against best-teacher nominations. Results: The inter-rater G-coefficient and internal consistency of SETOC student ratings were 0.92 and 0.98. Average inter-item and inter-subscale correlations were 0.79 and 0.86. Comparing SETOC scores against “Best Teacher” nominations, sensitivity, specificity, and positive and negative predictive values were all greater than 0.84. Student ratings ranged from unsatisfactory (fourteen instructors) to outstanding (four instructors). Mean scores for Learning-Milieu, Clinical-Teaching and General-Teaching were lower than those for Clinical-Competence and Global-Rating (p=0.000 for all). Conclusions: The SETOC elicited reliable and valid student ratings that can provide specific feedback to individual faculty with weak or outstanding teaching skills, and identify overall group shortcomings for faculty development.

14.
BACKGROUND: Doctors develop the skills needed to interview parents and children in paediatric settings by practice and by receiving feedback during their medical training. Interviewed parents are ideally placed to provide evaluations of these skills. If parents, as consumers of health care services, are to be consulted, it is important to determine whether factors other than interview skills affect their evaluations. OBJECTIVES: Our aim was to examine the relationship between maternal satisfaction ratings of student doctor interviews, and maternal and child characteristics. METHODS: Sixty mothers of children attending the paediatric medical out-patient clinic at the Women's and Children's Hospital, South Australia were allocated randomly to rate one of four video-taped final-year student doctor interviews (15 mothers per interview). The level of skills displayed by the student doctor differed in each interview. Maternal satisfaction was measured using the Medical Interview Satisfaction Scale (MISS) and the Interpersonal Skills Rating Scale (IPS), and interview ratings were compared for a number of maternal and child characteristics. RESULTS: No significant associations were observed between maternal satisfaction ratings and any maternal or child characteristics other than lower satisfaction associated with previous experience of a real student doctor interview (P <0.01). The interview seen by mothers predicted 53% (MISS) and 65% (IPS) of the variance in maternal satisfaction ratings. After controlling for the interview type, the maternal and child characteristics studied predicted 17% additional variance in MISS scores and 7% in IPS scores. CONCLUSION: The quality of the interview skills demonstrated was the principal determinant of maternal satisfaction ratings.

15.
Rater errors in a clinical skills assessment of medical students
The authors used a many-faceted Rasch measurement model to analyze rating data from a clinical skills assessment of 173 fourth-year medical students to investigate four types of rater errors: leniency, inconsistency, the halo effect, and restriction of range. Students performed six clinical tasks with 6 standardized patients (SPs) selected from a pool of 17 SPs. SPs rated the performance of each student in six skills: history taking, physical examination, interpersonal skills, communication technique, counseling skills, and physical examination etiquette. SPs showed statistically significant differences in their rating severity, indicating rater leniency error. Four SPs exhibited rating inconsistency. Four SPs restricted their ratings in high categories. Only 1 SP exhibited a halo effect. Administrators of objective structured clinical examinations should be vigilant for various types of rater errors and attempt to reduce or eliminate those errors to improve the validity of inferences based on objective structured clinical examination scores.

16.
Family medicine programs need faculty well trained in the roles of educator, administrator, researcher, and clinician. While the need for faculty development is recognized in all colleges and departments, it is a particular problem in family medicine due to the shortage of faculty, the diverse backgrounds of existing faculty, and current pressures to develop the research base for the discipline of family medicine. This study was conducted to gather information about the effectiveness of the two- to three-day workshop format for faculty development in family medicine. In a pre-post comparison and a nine-month follow-up of four faculty development workshops, significant and persistent changes were found in participants' ratings of their abilities to perform faculty-related skills. The three-day residential workshop was found to be an effective means of promoting faculty development.

17.
This study estimated the interrater reliability of medical student evaluations of clinical teaching. Data consisted of 1,570 ratings evaluating 147 faculty over a 4-year period in a 3rd-year internal medicine clerkship. The number of ratings a typical faculty member receives in a year was also calculated and used to extrapolate the standard error of measurement for data typically available to evaluate faculty at different time intervals. The data available to evaluate a faculty member after 1 year were not adequate, but improved substantially at the 5- to 7-year mark, when a faculty member is typically evaluated for promotion and tenure.
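The arithmetic behind this kind of extrapolation is the Spearman-Brown prophecy formula for the reliability of a mean of n ratings, from which a standard error of measurement (SEM) follows. A hedged Python sketch (the single-rating reliability of 0.20, the rating counts, and the scale SD used in the checks are illustrative assumptions, not this study's estimates):

```python
from math import sqrt

def spearman_brown(r_single, n):
    """Reliability of the mean of n parallel ratings,
    given the reliability of a single rating (Spearman-Brown)."""
    return n * r_single / (1 + (n - 1) * r_single)

def sem(sd, reliability):
    """Standard error of measurement on a scale with standard deviation sd."""
    return sd * sqrt(1 - reliability)
```

With an assumed single-rating reliability of 0.20, ten ratings (roughly one year) give a mean-score reliability near 0.71, while fifty ratings (five years) give about 0.93, shrinking the SEM by roughly half; this mirrors the abstract's conclusion that one year of data is inadequate but five to seven years is much more dependable.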

18.
Medical Education 2010: 44 : 298–305 Context Doctors have used the subjective–objective–assessment–plan (SOAP) note format to organise data about patient problems and create plans to address each of them. We retooled this into the ‘Programme Evaluation SOAP Note’, which serves to broaden the clinician faculty member’s perspective on programme evaluation to include the curriculum and the system, as well as students. Methods The SOAP Note was chosen as the method for data recording because of its familiarity to clinician‐educators and its strengths as a representation of a clinical problem‐solving process with elements analogous to educational programme evaluation. We pilot‐tested the Programme Evaluation SOAP Note to organise faculty members' interpretations of integrated student performances during the Year 3 patient care skills objective structured clinical examination (OSCE). Results Eight community clerkship directors and lead clerkship faculty members participated as observers in the 2007 gateway examination and completed the Programme Evaluation SOAP Note. Problems with the curriculum and system far outnumbered problems identified with students. Conclusions Using the Programme Evaluation SOAP Note, clerkship leaders developed expanded lists of ‘differential diagnoses’ that could explain possible learner performance inadequacies in terms of system, curriculum and learner problems. This has informed programme improvement efforts currently underway. We plan to continue using the Programme Evaluation SOAP Note for ongoing programme improvement.

19.
Evidence suggests that the quality and frequency of bedside clinical examination have declined. We undertook this study to (1) determine whether intensive instruction in physical examination enhances medical students' skills and (2) develop a tool to evaluate those skills using a modified objective structured clinical examination (OSCE). This was a randomized, blinded, prospective, year-long study involving 3rd-year students at the Albert Einstein College of Medicine. Students were randomized to receive intensive instruction in physical examination [study group (n = 46)] or usual instruction [control group (n = 75)] and evaluated by a modified OSCE. The OSCE consisted of 6 real-patient stations: head, ears, eyes, neck, throat; pulmonary; cardiovascular; gastrointestinal; neurology; musculoskeletal; and 2 computer imaging stations: genitourinary and dermatology. A faculty member present at each patient station evaluated student performance. Data were analyzed using t-tests for comparison of the mean scores between the two groups for each station and for average scores across stations. A total of 121 students were tested. The study group performed significantly better than the control group in the gastrointestinal station (p = 0.0004), the combined average score across the six real-patient stations (p = 0.0001), and the combined average score across all eight stations (p = 0.0014). Intensive physical diagnosis instruction enhances the physical examination skills of 3rd-year medical students. The modified OSCE is a useful tool to evaluate these skills.

20.
CONTEXT: General practice. OBJECTIVES: To compare ratings of GP registrars' communication skills by patients and GP examiners. DESIGN: A comparative study in which the communication skills of GP registrars were assessed both by patients, using a validated tool called the Doctors' Interpersonal Skills Questionnaire (DISQ), and by GP examiners as part of the Fellowship examination of the Royal Australian College of General Practitioners (RACGP). PARTICIPANTS: These included 138 GP registrars, 6075 patients, and more than 70 GP examiners. RESULTS: Spearman rank correlations were used to test the strength of the relationship between Fellowship examination and DISQ scores. Findings showed that there were several communication skills areas with mild (but significant) correlations between patient and GP examiner ratings. These areas included warmth of greeting, listening skills, respect, and concern for the patient as a person. No significant correlations were detected for explanation skills. Interestingly, the correlations between GP examiner and patient ratings were stronger for female GP registrars. CONCLUSION: There is some evidence that patients' ratings of GP registrars' communication skills are aligned with ratings made by GP examiners as part of the summative RACGP Fellowship examination. However, further work is required to assess the strength of this alignment, given that patient-doctor communication is assessed more widely through new components of the examination.
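The Spearman rank correlation used here compares how two raters *order* the same registrars, rather than comparing raw scores directly: each score list is converted to ranks and a Pearson correlation is taken on the ranks. A self-contained, stdlib-only Python sketch (the data in the checks are invented, not the study's):

```python
def rank(xs):
    """Average (1-based) ranks for xs, assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1           # mean rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A coefficient of 1 would mean examiners and patients rank registrars identically; the 'mild but significant' correlations the abstract reports indicate only partial agreement in ordering.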
