Similar Literature (20 records found)
1.
2.
INTRODUCTION: The literature on how in-training assessment (ITA) works in practice and what educational outcomes can actually be achieved is limited. One of the aims of introducing ITA is to increase trainees' clinical confidence; this relies on the assumption that assessment drives learning through its content, format and programming. The aim of this study was to investigate the effect of introducing a structured ITA programme on junior doctors' clinical confidence. The programme was aimed at first year trainees in anaesthesiology. METHODS: The study involved a nationwide survey of junior doctors' self-confidence in clinical performance before (in 2001) and 2 years after (in 2003) the introduction of an ITA programme. Respondents indicated confidence on a 155-item questionnaire related to performance of clinical skills and tasks reflecting broad aspects of competence. A total of 23 of these items related to the ITA programme. RESULTS: The response rate was 377/531 (71%) in 2001 and 344/521 (66%) in 2003. There were no statistically significant differences in mean levels of confidence before and 2 years after the introduction of the ITA programme - neither in aspects that were related to the programme nor in those that were unrelated to the programme. DISCUSSION: This study demonstrates that the introduction of a structured ITA programme did not have any significant effect on trainees' mean level of confidence on a broad range of aspects of clinical competence. The importance of timeliness and rigorousness in the application of ITA is discussed.  相似文献   

3.
OBJECTIVE: To determine whether postgraduate students are able to assess the quality of undergraduate medical examinations and to establish whether faculty can use their results to troubleshoot the curriculum in terms of its content and evaluation. SUBJECTS: First and second year family medicine postgraduate students. MATERIALS: A randomly generated sample of undergraduate medical examination questions. METHODS: Postgraduate students were given two undergraduate examinations which included questions with an item difficulty (ID) > 0.60. The students answered and then rated each question on a scale of 1-7. RESULTS: The percentage of postgraduate students answering each question correctly correlated significantly with the average perceived relevance (Examination 1: r=0.372; P < 0.05; Examination 2: r=0.458; P < 0.05). Questions plotted for average postgraduate/undergraduate performance ratio versus the average perceived relevance were significantly correlated (Examination 1: r=0.462; P < 0.01; Examination 2: r=0.458; P < 0.05). CONCLUSIONS: This study offers a method of validating question appropriateness prior to examination administration. The design has the potential to be used as a model for determining the relevancy of a medical curriculum.  相似文献   
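The key analysis here is a simple Pearson correlation between the proportion of postgraduates answering each question correctly and the question's mean perceived relevance. The sketch below shows that calculation with invented per-question numbers; the variable names and values are illustrative, not the study's data.

```python
# Sketch of the correlation reported above, using invented per-question figures.
from scipy import stats

percent_correct = [0.85, 0.62, 0.91, 0.40, 0.73, 0.55]  # fraction of postgraduates answering each question correctly
mean_relevance  = [5.8, 4.1, 6.3, 2.9, 5.2, 3.7]         # mean rating on the 1-7 relevance scale

r, p = stats.pearsonr(percent_correct, mean_relevance)
print(f"r = {r:.3f}, P = {p:.3f}")  # compare with the kind of values reported (e.g. r = 0.372, P < 0.05)
```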

4.
BACKGROUND: Recent studies raise concerns over the preparedness of newly qualified doctors for the role of the pre-registration house officer (PRHO). This study aimed to assess self-perception of preparedness, objective assessment of core clinical skills and the effect of an extended clinical induction programme prior to commencing full duties. METHODS: A group of 26 newly qualified doctors from 1 district general hospital underwent an extended 5-day, ward-based induction programme. The participants completed questionnaires on their own perceptions of their preparedness for PRHO duties and underwent an objective structured clinical examination (OSCE) of 4 core clinical skills prior to induction, on completion of induction and 1 month into working life. RESULTS: At the outset PRHOs had low perceptions of their own capabilities in all clinical scenarios and skills. Most perceptions improved after induction, although in 2 clinical areas they felt even less confident. One month into post there were significant improvements in all areas. Only 1 PRHO passed all 4 clinical skills assessments at the pre-induction assessment. Seven (26%) failed on 1 or more skills at the post-induction assessment. However, all participants were deemed competent in all skills at the 1-month assessment. CONCLUSION: Newly qualified doctors do not feel prepared for PRHO duties and objectively are not competent in basic clinical skills. An extended induction improves preparedness in some but not all clinical areas and improves performance of objectively assessed clinical skills.  相似文献   

5.
OBJECTIVE: To survey medical students' views about the purposes and fairness of assessment procedures. METHOD: The survey used a 19-item questionnaire designed for self-completion. Respondents were invited to "strongly agree", "agree", "disagree" or "strongly disagree" with a series of statements about the purposes and fairness of assessment. There was space for free text comments relating to each statement. RESULTS: A total of 312 students out of a sample of 381 completed questionnaires (82% response rate). Whilst the majority of students (> 95%) agreed that ensuring competence, providing feedback and guiding student learning were important purposes of assessment, only half (51%) felt that assessment should be used to predict performance as a doctor. A clear majority of students (81%) agreed that, on the whole, assessment at Newcastle Medical School was fair. Data interpretation papers (comprising a combination of multiple true/false, "one best answer" and short answers) were perceived to be the fairest assessment tool; the assessment of clinical rotations by supervisors was perceived as the least fair. Differences in perception about the fairness of several assessment methods emerged between junior and senior students. A large number of respondents expressed desire for the provision of more feedback on performance in order to guide future learning. CONCLUSIONS: Whilst students' views about the fairness of specific assessment tools may sometimes be at variance with published research on assessment, their perceptions will influence the acceptability of assessment. Students would welcome the introduction of methods that provide meaningful assessment feedback.  相似文献   

6.
Objectives To investigate the experiences and opinions of programme directors, clinical supervisors and trainees on an in‐training assessment (ITA) programme on a broad spectrum of competence for first year training in anaesthesiology. How does the programme work in practice and what are the benefits and barriers? What are the users' experiences and thoughts about its effect on training, teaching and learning? What are their attitudes towards this concept of assessment? Methods Semistructured interviews were conducted with programme directors, supervisors and trainees from 3 departments. Interviews were audiotaped and transcribed. The content of the interviews was analysed in a consensus process among the authors. Results The programme was of benefit in making goals and objectives clear, in structuring training, teaching and learning, and in monitoring progress and managing problem trainees. There was a generally positive attitude towards assessment. Trainees especially appreciated the coupling of theory with practice and, in general, the programme inspired an academic dialogue. Issues of uncertainty regarding standards of performance and conflict with service declined over time and experience with the programme, and departments tended to resolve practical problems through structured planning. Discussion Three interrelated factors appeared to influence the perceived value of assessment in postgraduate education: (1) the link between patient safety and individual practice when assessment is used as a licence to practise without supervision rather than as an end‐of‐training examination; (2) its benefits to educators and learners as an educational process rather than as merely a method of documenting competence, and (3) the attitude and rigour of assessment practice.  相似文献   

7.
CONTEXT: Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. PURPOSE: The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. DISCUSSION: The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. CONCLUSIONS: There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.  相似文献   

8.
OBJECTIVES: To promote safe prescribing and administration of medicines in the pre-registration house officer (PRHO) year through a programme of structured teaching and assessment for final year medical students. DESIGN: Forty final year medical students from two medical schools were randomly allocated either to participate in a pharmacist-facilitated teaching session or to receive no additional teaching. Teaching comprised five practical exercises covering seven skills, through which students rotated in small groups. One month later, a random sample of 16 taught and 16 non-taught students participated in a nine-station objective structured clinical examination (OSCE) to assess the impact of the teaching. SETTING: Manchester School of Medicine (MSM) and King's College School of Medicine and Dentistry (KCSMD). PARTICIPANTS: Final year medical student volunteers. MAIN OUTCOME MEASURES: The need for teaching as indicated by students' prior experience; questionnaire ratings of the acceptability of teaching and assessment; self-rated student confidence post-assessment; and student performance assessed by OSCE. RESULTS: The taught group achieved higher scores on eight OSCE stations; four of these differences were statistically significant (P ≤ 0.005). Taught students felt more confident performing the skills on five stations. Prior experience of the skills taught ranged from 0% to 47.5% of students. The post-teaching questionnaire evaluated the exercises positively on several criteria, including provision of new information and relevance to future work. CONCLUSIONS: Structured teaching provided an effective and acceptable method of teaching the medicines management skills needed in the PRHO year. The structured approach complemented variable pre-course clinical experience.

9.
INTRODUCTION: As we move from standard 'long case' final examinations to new objective structured formats, we need to ensure the new is at least as good as the old. Furthermore, knowledge of which examination format best predicts medical student progression and clinical skills development would be of value. METHODS: A group of medical students sat both the standard long case examination and the new objective structured clinical examination (OSCE) to introduce this latter examination to our Medical School for final MB. At the end of their pre-registration year, the group and their supervising consultants submitted performance evaluation questionnaires. RESULTS: Thirty medical students sat both examinations and 20 returned evaluation questionnaires. Of the 72 consultants approached, 60 (83%) returned completed questionnaires. No correlation existed between self- and consultant reported performance. The traditional finals examination was inversely associated with consultant assessment. Better performing students were not rated as better doctors. The OSCE (and its components) was more consistent and showed positive associations with consultant ratings across the board. DISCUSSION: Major discrepancies exist between the 2 examination formats, in data interpretation and practical skills, which are explicitly tested in OSCEs but less so in traditional finals. Standardised marking schemes may reduce examiner variability and discretion and weaken correlations across the 2 examinations. This pilot provides empirical evidence that OSCEs assess different clinical domains than do traditional finals. Additionally, OSCEs improve prediction of clinical performance as assessed by independent consultants. CONCLUSION: Traditional finals and OSCEs correlate poorly with one another. Objective structured clinical examinations appear to correlate well with consultant assessment at the end of the pre-registration house officer year.  相似文献   

10.
BACKGROUND: The intern year is a key time for the acquisition of clinical skills, both procedural and cognitive. We have previously described self-reported confidence and experience for a number of clinical skills, finding high levels of confidence among Australian junior doctors. This has never been correlated with an objective measure of competence. AIMS AND HYPOTHESIS: We aimed to determine the relationship between self-reported confidence and observed competence for a number of routine, procedural clinical skills. METHODS: A group of 30 junior medical officers in their first postgraduate year (PGY1) was studied. All subjects completed a questionnaire concerning their confidence and experience in the performance of clinical skills. A competency-based assessment instrument concerning 7 common, practical, clinical skills was developed, piloted and refined. All 30 PGY1s then completed an assessment using this instrument. Comparisons were then made between the PGY1s' self-reported levels of confidence and tutors' assessments of their competence. RESULTS: A broad range of competence levels was revealed by the clinical skills assessments. There was no correlation between the PGY1s' self-ratings of confidence and their measured competencies. CONCLUSIONS: Junior medical officers in PGY1 demonstrate a broad range of competence levels for several common, practical, clinical skills, with some performing at an inadequate level. There is no relationship between their self-reported level of confidence and their formally assessed performance. This observation raises important caveats about the use of self-assessment in this group.  相似文献   

11.
OBJECTIVES: To evaluate the development, validity and reliability of a multimodality objective structured clinical examination (OSCE) in undergraduate psychiatry, integrating interactive face-to-face and telephone history taking and communication skills stations, videotaped mental state examinations and problem-oriented written stations. METHODS: The development of the OSCE on a restricted budget is described. This study evaluates the validity and reliability of four 15- to 18-station OSCEs for 128 students over 1 year. Face and content validity were assessed by a panel of clinicians and from feedback from OSCE participants. Correlations with consultant clinical 'firm grades' were performed. Interrater reliability and internal consistency (interstation reliability) were assessed using generalisability theory. RESULTS: The OSCE was feasible to conduct and had a high level of perceived face and content validity. Consultant firm grades correlated moderately with scores on interactive stations and poorly with written and video stations. Overall reliability was moderate to good, with G-coefficients in the range 0.55-0.68 for the four OSCEs. CONCLUSIONS: Integrating a range of modalities into an OSCE in psychiatry appears to represent a feasible, generally valid and reliable method of examination on a restricted budget. Different types of stations appear to have different advantages and disadvantages, supporting the integration of both interactive and written components into the OSCE format.

12.
BACKGROUND: No method of standard setting for objective structured clinical examinations (OSCEs) is perfect. Using scores aggregated across stations risks allowing students who are incompetent in some core skills to pass an examination, which may not be acceptable for high stakes assessments. AIM: To assess the feasibility of using a factor analysis of station scores in a high stakes OSCE to derive measures of underlying competencies. METHODS: A 12-station OSCE was administered to all 192 students in the penultimate undergraduate year at the University of Aberdeen Medical School. Analysis of the correlation table of station scores was used to exclude stations performing unreliably. Factor analysis of the remaining station scores was carried out to characterise the underlying competencies being assessed. Factor scores were used to derive pass/fail cut-off scores for the examination. RESULTS: Four stations were identified as having unpredicted variations in station scores. Analysis of the content of these stations allowed the underlying problems with the station designs to be isolated. Factor analysis of the remaining 8 stations revealed 3 main underlying factors, accounting for 53% of the total variance in scores. These were labelled "examination skills", "communication skills" and "history taking skills". CONCLUSION: Factor analysis is a useful tool for characterising and quantifying the skills that are assessed in an OSCE. Standard setting procedures can be used to calculate cut-off scores for each underlying factor.  相似文献   
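A rough sketch of the procedure described above: factor-analyse a students × stations score matrix (after dropping unreliable stations), inspect the loadings to label the underlying competencies, and then set a cut-off on each factor score. The data below are simulated, and the varimax rotation and the "1 SD below the mean" cut-off are assumptions for illustration, not the paper's exact standard-setting method.

```python
# Sketch of factor analysis of OSCE station scores; data are simulated, not the Aberdeen cohort.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
station_scores = rng.normal(60, 10, size=(192, 8))        # 192 students x 8 retained stations (simulated)

fa = FactorAnalysis(n_components=3, rotation="varimax")    # 3 underlying competencies, as in the abstract
factor_scores = fa.fit_transform(station_scores)           # per-student score on each factor

print(np.round(fa.components_, 2))                         # loadings: which stations load on which factor

# Assumed standard-setting step (not the paper's procedure): require every factor score
# to clear a cut-off, here set at 1 SD below the cohort mean on that factor.
cutoffs = factor_scores.mean(axis=0) - factor_scores.std(axis=0)
passes_all = (factor_scores >= cutoffs).all(axis=1)
print(f"{passes_all.sum()} of {len(passes_all)} simulated students clear every factor cut-off")
```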

13.
AIM: This study was designed to assess medical school teachers' tacit knowledge of basic pedagogic principles and to explore the specific character of the knowledge base. METHODS: We developed a 50-item, multiple-choice question test based on important pedagogic principles, and classified all questions as requiring either declarative or procedural knowledge. A total of 72 medical teachers representing 5 different groups of clinicians and educators agreed to sit the test. RESULTS: Teachers in all 5 groups performed well on the test of tacit pedagogic knowledge but those with advanced education degrees, or local recognition as experts, performed best. All test takers performed best on questions requiring procedural knowledge. CONCLUSION: Medical teachers possess tacit knowledge of basic pedagogic principles. Superior test performance on questions requiring procedural knowledge is consistent with their working in a clinical environment characterised by repeated procedural activities.  相似文献   

14.
INTRODUCTION: Inventories to quantify approaches to studying try to determine how students approach academic tasks. Medical curricula usually aim to promote a deep approach to studying, which is associated with academic success and which may predict desirable traits postqualification. AIMS: This study aimed to validate a revised Approaches to Learning and Studying Inventory (ALSI) in medical students and to explore its relation to student characteristics and performance. METHODS: Confirmatory factor analysis was used to validate the reported constructs in a sample of 128 Year 1 medical students. Models were developed to investigate the effect of age, graduate status and gender, and the relationships between approaches to studying and assessment outcomes. RESULTS: The ALSI performed as anticipated in this population, thus validating its use in our sample, but a 4-factor solution had a better fit than the reported 5-factor one. Medical students scored highly on deep approach compared with other students in higher education. Graduate status and gender had significant effects on approach to studying and a deep approach was associated with higher academic scores. CONCLUSIONS: The ALSI is valid for use in medical students and can uncover interesting relationships between approaches to studying and student characteristics. In addition, the ALSI has potential as a tool to predict student success, both academically and beyond qualification.  相似文献   
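The study compared a 4-factor and a 5-factor solution using confirmatory factor analysis. The sketch below is only a crude stand-in for that comparison: it uses exploratory factor analysis on simulated item responses and compares average log-likelihoods for 4 versus 5 factors. A proper replication would use a CFA/SEM package and formal fit indices.

```python
# Crude stand-in for the 4- vs 5-factor comparison, on simulated (not ALSI) responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
responses = rng.normal(3.0, 1.0, size=(128, 40))   # 128 students x an arbitrary number of items

for k in (4, 5):
    avg_ll = FactorAnalysis(n_components=k).fit(responses).score(responses)  # mean log-likelihood
    print(f"{k}-factor solution: average log-likelihood = {avg_ll:.2f}")
```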

15.
INTRODUCTION: Assessment of medical student clinical skills is best carried out using multiple assessment methods. A programme was developed to obtain parent evaluations of medical students' paediatric interview skills for feedback and to identify students at risk of poor performance in summative assessments. METHOD: A total of 130 parent evaluations were obtained for 67 students (parent participation 72%, student participation 58%). Parents completed a 13-item questionnaire [Interpersonal Skills Rating Scale (IPS); maximum score 91, with higher scores indicating a higher student skill level]. Students received their individual parent scores and de-identified class mean scores as feedback, and participants were surveyed regarding the programme. Parent evaluation scores were compared with student performance in formative and summative faculty assessments of clinical interview skills. RESULTS: Parents supported the programme and participating students valued parent feedback. Students with a parent score more than 1 standard deviation (SD) below the class mean (low IPS score students) obtained lower faculty summative assessment scores than did other students (mean ± SD, 59 ± 5% versus 64 ± 7%; P < 0.05). Obtaining 1 low IPS score was associated with a subsequent faculty summative assessment score below the class mean (sensitivity 0.38, specificity 0.88). Parent evaluations combined with faculty formative assessments identified 50% of the students who subsequently performed below the class mean in summative assessments. CONCLUSIONS: Parent evaluations provided useful feedback to students and identified a group of students at increased risk of weaker performance in summative assessments. They could be combined with other methods of formative assessment to enhance screening procedures for clinically weak students.
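The sensitivity and specificity quoted above come from a 2 × 2 cross-tabulation of "obtained a low IPS score" against "summative score below the class mean". The sketch below works through that arithmetic with invented counts chosen only to land near figures of the same order; they are not the study's counts.

```python
# Worked sensitivity/specificity calculation with invented counts (not the study's data).
true_pos  = 6    # low IPS score and summative score below the class mean
false_neg = 10   # no low IPS score, but summative score below the class mean
false_pos = 5    # low IPS score, but summative score at or above the class mean
true_neg  = 36   # no low IPS score and summative score at or above the class mean

sensitivity = true_pos / (true_pos + false_neg)   # proportion of weaker students who were flagged
specificity = true_neg / (true_neg + false_pos)   # proportion of other students who were not flagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")  # roughly 0.38 and 0.88
```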

16.
CONTEXT: Reliability is defined as the extent to which a result reflects all possible measurements of the same construct. It is an essential measurement characteristic. Unfortunately, there are few objective tests for the most important aspects of the professional role because they are complex and intangible. In addition, professional performance varies markedly from setting to setting and case to case. Both these factors threaten reliability. AIM: This paper describes the classical approach to evaluating reliability and points out the limitations of this approach. It goes on to describe how generalisability theory solves many of these limitations. CONDITIONS: A G-study uses variance component analysis to measure the contributions that all relevant factors make to the result (observer, situation, case, assessee and their interactions). This information can be combined to reflect the reliability of a single observation as a reflection of all possible measurements - a true reflection of reliability. It can also be used to estimate the reliability of a combined sample of several different observations, or to predict how many observations are required with different test formats to achieve a given level of reliability. Worked examples are used to illustrate the concepts.  相似文献   
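As a concrete illustration of the variance-component logic described above, the sketch below runs a minimal one-facet G-study (assessees fully crossed with raters, one score per cell) on simulated ratings: it estimates the variance components from the two-way ANOVA mean squares and then computes the relative G-coefficient for a single observation and for an average over several observations. A real G-study would normally involve more facets (case, setting, occasion) and dedicated software; the design and numbers here are assumptions for illustration only.

```python
# Minimal one-facet G-study sketch (assessees x raters, fully crossed); ratings are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_p, n_r = 40, 4                                           # 40 assessees, each scored by the same 4 raters
ability = rng.normal(0.0, 1.0, size=(n_p, 1))
ratings = ability + rng.normal(0.0, 0.8, size=(n_p, n_r))  # observed score = ability + noise

grand = ratings.mean()
person_means = ratings.mean(axis=1, keepdims=True)
rater_means = ratings.mean(axis=0, keepdims=True)

# Mean squares from the two-way decomposition (persons, raters, residual)
ms_person = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
ms_rater = n_p * ((rater_means - grand) ** 2).sum() / (n_r - 1)
ms_resid = ((ratings - person_means - rater_means + grand) ** 2).sum() / ((n_p - 1) * (n_r - 1))

# Estimated variance components
var_resid = ms_resid                        # person-by-rater interaction confounded with error
var_person = (ms_person - ms_resid) / n_r   # between-assessee ("true score") variance
var_rater = (ms_rater - ms_resid) / n_p     # systematic rater stringency/leniency

def relative_g(n_obs):
    """Relative G-coefficient for a score averaged over n_obs observations."""
    return var_person / (var_person + var_resid / n_obs)

print(f"single observation: G = {relative_g(1):.2f}")
print(f"mean of 4 observations: G = {relative_g(4):.2f}")
# A decision study simply varies n_obs (or the test format) to find how many observations
# are needed for a target reliability, e.g. G >= 0.8; for absolute decisions var_rater
# would also enter the denominator.
```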

17.
OBJECTIVE: To examine teachers' views of the first batch of graduates of a revised medical curriculum in Asia. METHODS: A cross-sectional study using a structured questionnaire was carried out to obtain the views of all the clinical teachers involved in teaching final year students of the old curriculum in 2000-01 and the new curriculum in 2001-02 at the University of Hong Kong, which commenced curricular reform in 1997. RESULTS: Nearly 62% of respondents felt that better graduates were being produced with the new curriculum. The majority of them rated the new curriculum students better in nearly all the major goals of the new curriculum, such as self-directed learning initiative, problem solving skills, interpersonal skills and clinical performance in patient care. However, the core knowledge of the new curriculum students was of concern to some teachers. CONCLUSION: This study focused on the first complete cycle of a revised medical curriculum in Asia. Teachers' views of the new curriculum students were highly positive and they felt that better graduates were being produced.  相似文献   

18.
PURPOSE: We describe the use of standardised students (SSs) in interdisciplinary faculty development programmes to improve clinical teaching skills. Standardised students are actual health professions students who are trained to portray a prototypical teaching challenge consistently across many encounters with different faculty participants. METHODS: The faculty development programmes described focused on the skills of providing feedback and brief clinical teaching. At the beginning of each session, each participant was videotaped in encounters with 2 different SSs. Using microteaching (an instructional method in which learners view short segments of their own videotaped performance and discuss the tapes with a facilitator, consultant or other workshop participants), each group of participants and instructors reviewed the tapes and reflected on the encounters, providing immediate feedback to participants and modelling different approaches to the same teaching problem. The same process was repeated with more complicated scenarios after 2 weeks and again after 6 months offering reinforcement, further practice and more sophisticated development of the strategies learned. Participants completed post-session evaluations and a follow-up telephone survey. RESULTS: A total of 36 faculty members from the colleges of medicine, dentistry, pharmacy and nursing participated in workshops in 2000-01. The workshops were rated as highly relevant to participants' teaching, and most participants reported that they had learned a great deal. Participants most appreciated reviewing the videotaped interactions, the feedback they received, the interactions with their colleagues, the interdisciplinary nature of the groups and the practical focus of the workshops. CONCLUSIONS: Standardised students provide a high fidelity, low risk, simulated environment in which faculty can reflect on and experiment with new teaching behaviours. Such encounters can enhance the effectiveness and impact of faculty development programmes to improve clinical teaching skills.  相似文献   

19.
BACKGROUND: Knowledge is an essential component of medical competence and a major objective of medical education. Thus, the degree of acquisition of knowledge by students is one of the measures of the effectiveness of a medical curriculum. We studied the growth in student knowledge over the course of Maastricht Medical School's 6-year problem-based curriculum. METHODS: We analysed 60 491 progress test (PT) scores of 3226 undergraduate students at Maastricht Medical School. During the 6-year curriculum a student sits 24 PTs (i.e. four PTs in each year), intended to assess knowledge at graduation level. On each test occasion all students are given the same PT, which means that in year 1 a student is expected to score considerably lower than in year 6. The PT is therefore a longitudinal, objective assessment instrument. Mean scores for overall knowledge and for clinical, basic, and behavioural/social sciences knowledge were calculated and used to estimate growth curves. FINDINGS: Overall medical knowledge and clinical sciences knowledge demonstrated a steady upward growth curve. However, the curves for behavioural/social sciences and basic sciences started to level off in years 4 and 5, respectively. The increase in knowledge was greatest for clinical sciences (43%), whereas it was 32% and 25% for basic and behavioural/social sciences, respectively. INTERPRETATION: Maastricht Medical School claims to offer a problem-based, student-centred, horizontally and vertically integrated curriculum in the first 4 years, followed by clerkships in years 5 and 6. Students learn by analysing patient problems and exploring pathophysiological explanations. Originally, it was intended that students' knowledge of behavioural/social sciences would continue to increase during their clerkships. However, the results for years 5 and 6 show diminishing growth in basic and behavioural/social sciences knowledge compared to overall and clinical sciences knowledge, which appears to suggest there are discrepancies between the actual and the planned curricula. Further research is needed to explain this.  相似文献   
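The growth curves described above are, in essence, mean progress-test scores plotted against the 24 test occasions for each knowledge domain, with a curve fitted through them. The toy sketch below illustrates the fitting step and the "levelling-off" comparison with invented means; the numbers are not the Maastricht data.

```python
# Toy sketch of growth-curve fitting for mean progress-test scores; all numbers are invented.
import numpy as np

occasions = np.arange(1, 25)                         # 24 progress tests across the 6-year curriculum
clinical = 5 + 1.8 * occasions                       # roughly steady growth (invented)
behavioural = 35 * (1 - np.exp(-occasions / 9))      # growth that flattens in the later years (invented)

for name, means in (("clinical", clinical), ("behavioural/social", behavioural)):
    early_slope = np.polyfit(occasions[:12], means[:12], 1)[0]   # roughly years 1-3
    late_slope = np.polyfit(occasions[12:], means[12:], 1)[0]    # roughly years 4-6
    print(f"{name}: early slope {early_slope:.2f} vs late slope {late_slope:.2f} points per test")
```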

20.
Rogers J. Medical Education 2005;39(11):1110-1117.
OBJECTIVE: This paper explores the thesis that medical education is the cultural transmission to learners of specific values, which are increasingly expressed as graduation competencies. As testing is a powerful way to transmit cultural values to learners in a brief period of time, competency-based assessments can be an instrument of cultural compression in medical education. METHODS: The author reviewed medical literature to illustrate the concepts from educational anthropology, led the process one medical school used to develop its list of graduation competencies, and conducted a citation search about competency domains. RESULTS: There is support in the literature for viewing medical education as an example of cultural transmission and compression and for the assertion that testing influences student behaviour. The graduation competency statements developed by the school reflect traditional and emergent values. The citation search data confirmed that some competency domains reflected traditional values, while others reflected more emergent values. CONCLUSION: Concepts from educational anthropology are relevant to medical education and provide perspectives for understanding contemporary issues such as competency-based assessments.  相似文献   
