Similar Articles
1.
Abstract

Background: One popular procedure in the medical student selection process is the multiple mini-interview (MMI), which is designed to assess social skills (e.g., empathy) by means of brief interview and role-play stations. However, it remains unclear whether MMIs reliably measure the desired social skills or rather general performance differences that do not depend on specific social skills. Here, we provide a detailed investigation of the construct validity of MMIs, including the identification and quantification of performance facets (social skill-specific performance, station-specific performance, general performance) and their relations with other selection measures.

Methods: We used data from three MMI samples (N = 376 applicants, 144 raters) that included six interview and role-play stations and multiple assessed social skills.

Results: Bayesian generalizability analyses show that the largest amount of reliable MMI variance was accounted for by station-specific and general performance differences between applicants. Furthermore, there were low or no correlations with other selection measures.

Discussion: Our findings suggest that MMI ratings are less social skill-specific than originally conceptualized and are due more to general performance differences (across and within stations). Future research should focus on the development of skill-specific MMI stations and on behavioral analyses of the extent to which performance differences are based on desirable skills versus undesired aspects.

2.
Background: In the 11 years since its development at McMaster University Medical School, the multiple mini-interview (MMI) has become a popular selection tool. We aimed to systematically explore, analyze and synthesize the evidence regarding MMIs for selection to undergraduate health programs.

Methods: The review protocol was peer-reviewed and prospectively registered with the Best Evidence Medical Education (BEME) collaboration. Thirteen databases were searched using 34 terms and their Boolean combinations. Seven key journals were hand-searched from 2004 onward. The reference sections of all included studies were screened. Studies meeting the inclusion criteria were coded independently by two reviewers using a modified BEME coding sheet. Extracted data were synthesized through narrative synthesis.

Results: A total of 4338 citations were identified and screened, resulting in 41 papers that met inclusion criteria. Thirty-two studies report data for selection to medicine, six for dentistry, three for veterinary medicine, one for pharmacy, one for nursing, one for rehabilitation, and one for health science. Five studies investigated selection to more than one profession. MMIs used for selection to undergraduate health programs appear to have reasonable feasibility, acceptability, validity, and reliability. Reliability is optimized by including 7–12 stations, each with one examiner. The evidence is stronger for face validity, with more research needed to explore content validity and predictive validity. In published studies, MMIs do not appear biased against applicants on the basis of age, gender, or socio-economic status. However, applicants of certain ethnic and social backgrounds did less well in a very small number of published studies. Performance on MMIs does not correlate strongly with other measures of noncognitive attributes, such as personality inventories and measures of emotional intelligence.

Discussion: An MMI does not automatically mean a more reliable selection process, but it can if carefully designed. Effective MMIs require careful identification of the noncognitive attributes sought by the program and institution. Attention needs to be given to the number of stations, the blueprint, and examiner training.

Conclusion: More work is required on MMIs, as they may disadvantage groups of certain ethnic or social backgrounds. There is a compelling argument for multi-institutional studies to investigate areas such as the relationship of MMI content to curriculum domains, graduate outcomes, and social missions; relationships among applicants’ performance on different MMIs; bias in selecting applicants of minority groups; and the long-term outcomes appropriate for studies of predictive validity.

3.
Purpose: Professionalism is a core physician competency, and identifying students at risk for poor professional development early in their careers may allow for timely mentoring. This study identified indicators in the preclinical years associated with later professionalism concerns.

Methods: A retrospective analysis of observable indicators in the preclinical and clinical years was conducted using two classes of students (n = 226). Relationships between five potential indicators of poor professionalism in the preclinical years and observations related to professionalism concerns in the clinical years were analyzed.

Results: Fifty-three medical students were identified with at least one preclinical indicator and one professionalism concern during the clinical years. Two observable preclinical indicators were significantly correlated with unprofessional conduct during the clinical years: three or more absences from attendance-required sessions (odds ratio 4.47; p = .006) and negative peer assessment (odds ratio 3.35; p = .049).
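Odds ratios like those reported above come from 2×2 indicator-by-outcome tables. A minimal sketch of the computation, using hypothetical counts rather than the study's data:

```python
# Odds ratio for a 2x2 table: rows = preclinical indicator present/absent,
# columns = later professionalism concern yes/no.
# All counts below are hypothetical, for illustration only.

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) = (a*d) / (b*c).
    a: exposed with outcome,   b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical: 12 of 30 flagged students later had a concern,
# versus 20 of 196 non-flagged students.
print(round(odds_ratio(12, 18, 20, 176), 2))  # → 5.87
```

A significance test for such a table (e.g. Fisher's exact test) would normally accompany the odds ratio, as the p-values in the abstract suggest.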

Conclusions: We identified two significant observable preclinical indicators associated with later professionalism concerns: excessive absences and negative peer assessments. Early recognition of students at risk for future professionalism struggles would provide an opportunity for proactive professional development prior to the clinical years, when students’ permanent records may be affected. Peer assessment, coupled with attention to frequent absences, may be a method to provide early recognition.

4.
Purpose: Evaluation of non-cognitive skills has never before been used in medical student selection in Brazil. This study aims to evaluate multiple mini-interviews (MMIs) in the admission process of a School of Medicine in São Paulo, Brazil.

Methods: The study population comprised 240 applicants summoned for the interviews and 96 raters. The MMI contributed 25% of the applicants’ final grade. Eight scenarios were created with the aim of evaluating different non-cognitive skills, each with two raters. At the end of the interviews, the applicants and raters described their impressions of the MMI. The reliability of the MMI was analyzed using generalizability theory and the Many-Facet Rasch Model (MFRM).

Results: The G-study showed that the general reliability of the process was satisfactory (coefficient G = 0.743). The MMI grades were not affected by the raters’ profile, time of interview (p = 0.715), or randomization group (p = 0.353). The Rasch analysis showed no misfit effects or inconsistent stations or raters. A significant majority of the applicants (98%) and all the raters believed MMIs were important in selecting students with a more adequate profile to study medicine.
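A generalizability coefficient of the kind reported here is computed from estimated variance components. A minimal one-facet (persons × stations) sketch, with hypothetical component values that are not the study's:

```python
# One-facet G-study sketch (persons crossed with stations), assuming the
# variance components have already been estimated (e.g. via ANOVA).
# The component values below are hypothetical, for illustration only.

def g_coefficient(var_person, var_error, n_stations):
    """Generalizability coefficient for relative decisions:
    G = var_p / (var_p + var_error / n_stations)."""
    return var_person / (var_person + var_error / n_stations)

# Hypothetical components for an 8-station MMI:
print(round(g_coefficient(var_person=0.30, var_error=0.90, n_stations=8), 3))
```

Increasing `n_stations` shrinks the error term, which is why reviews of MMIs (e.g. item 2 above) find reliability is optimized with more stations.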

Conclusions: The general reliability of the selection process was excellent, and it was fully accepted by the applicants and raters.

5.
Introduction: To date, student selection for medical schools has been used merely to decide which applicants will be admitted. We investigated whether narrative information obtained during multiple mini-interviews (MMIs) can also be used to predict problematic study behavior.

Methods: A retrospective exploratory study was performed on students who were selected into a four-year research master’s program, Physician-Clinical Investigator, in 2007 and 2008 (n = 60). First, counselors were asked for the most prevalent non-cognitive problems among their students. Second, MMI notes were analyzed to identify potential indicators for these problems. Third, a case-control study was performed to investigate the association between students exhibiting the non-cognitive problems and the presence of indicators for these problems in their MMI notes.

Results: The most prevalent non-cognitive problems concerned planning and self-reflection. Potential indicators for these problems were identified in randomly chosen MMI notes. The case-control analysis demonstrated a significant association between indicators in the notes and actual planning problems (odds ratio: 9.33, p = 0.003). No such evidence was found for self-reflection-related problems (odds ratio: 1.39, p = 0.68).

Conclusions: Narrative information obtained during MMIs contains predictive indicators for planning-related problems during study. This information could be useful for early identification of students at risk, enabling focused counseling and interventions to improve their academic achievement.

6.
Abstract

Background: Instruments with validity evidence that measure students’ exposure to bullying and harassment while learning in a clinical workplace environment (CWE) are scarce. The aim of this study was to develop such a measure and provide validity evidence for its use.

Method: We started from the Negative Acts Questionnaire – Revised (NAQ-R), an instrument for detecting bullying of employees in the workplace. We adapted its items to our context of health professional students learning in a CWE and added two new factors covering sexual and ethnic harassment. This new instrument, named the Clinical Workplace Learning NAQ-R, was distributed to 540 medical and nursing undergraduate students, and we undertook a confirmatory factor analysis (CFA) to investigate its construct validity and factorial structure.

Results: The results provided support for the construct validity and factorial structure of the new scale comprising five factors: workplace learning-related bullying (WLRB), person-related bullying (PRB), physically intimidating bullying (PIB), sexual harassment (SH), and ethnic harassment (EH). The reliability estimates for all factors ranged from 0.79 to 0.94.

Conclusion: This study provides a tool to measure exposure to bullying and harassment among health professional students learning in a CWE.

7.
Background: Evaluation is an integral part of curriculum development in medical education. Given the peculiarities of bedside teaching, specific evaluation tools for this instructional format are needed. Development of these tools should be informed by appropriate frameworks. The purpose of this study was to develop a specific evaluation tool for bedside teaching based on the Stanford Faculty Development Program’s clinical teaching framework.

Methods: Based on a literature review yielding 47 evaluation items, an 18-item questionnaire was compiled and subsequently completed by undergraduate medical students at two German universities. Reliability and validity were assessed in an exploratory full information item factor analysis (study one) and a confirmatory factor analysis as well as a measurement invariance analysis (study two).

Results: The exploratory analysis involving 824 students revealed a three-factor structure. Reliability estimates of the subscales were satisfactory (α = 0.71–0.84). The model yielded satisfactory fit indices in the confirmatory factor analysis involving 1043 students.
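Subscale reliabilities like the α values reported here (and the 0.79–0.94 range in item 6) are typically Cronbach's alpha, a direct computation over the item score matrix. A minimal sketch on hypothetical Likert data, not the study's:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for one subscale.
    scores: 2-D array-like, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of scale totals
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical responses to a 3-item subscale (1-5 Likert):
data = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]]
print(round(cronbach_alpha(data), 2))
```

When all items move together perfectly, alpha reaches 1; uncorrelated items drive it toward 0, which is why it serves as an internal-consistency estimate.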

Discussion: The new questionnaire is short and yet based on a widely used framework for clinical teaching. The analyses presented here indicate good reliability and validity of the instrument. Future research needs to investigate whether feedback generated from this tool helps to improve teaching quality and student learning outcomes.

8.
Abstract

Background/purpose: There is inadequate evidence of reported validity of the results of assessment instruments used to assess clinical competence. This study aimed at combining multiple lines of quantitative and qualitative evidence to support interpretation and use of assessment results.

Method: This study is mixed-methods explanatory research set in two stages of data collection and analysis (QUAN : qual). Guided by Messick’s conceptual model, quantitative evidence such as reliability and correlation coefficients for various validity components was calculated using the scores, grades, and success rates of the whole population of students in 2012/2013 and 2013/2014 (n = 383; 326). The underlying values that scaffold the validity evidence were identified via focus group discussions (FGDs) with faculty and students; the sampling technique was purposive, and results were analyzed by content analysis.

Results: (1) Themes that resulted from content analysis aligned with the quantitative evidence. (2) Assessment results showed: (a) content validity (table of specifications and blueprinting in another study); (b) consequential validity (positive unintended consequences resulted from the new assessment approach); (c) relationships to other variables [statistically significant correlations among various assessment methods, with the combined score (0.64–0.86) and between mid-term and final exam results (r = 0.672)]; (d) internal consistency (high reliability of MCQ and OSCE: 0.81, 0.80). (3) Success rates and grade distributions alone could not provide evidence to support an argument on the validity of results.

Conclusion: The unified approach pursued in this study created a strong evidential basis for meaningful interpretation of assessment scores that could be applied in clinical assessments.

9.
Abstract

Introduction: Poor teamwork has been implicated in medical error and teamwork training has been shown to improve patient care. Simulation is an effective educational method for teamwork training. Post-simulation reflection aims to promote learning and we have previously developed a self-assessment teamwork tool (SATT) for health students to measure teamwork performance. This study aimed to evaluate the psychometric properties of a revised self-assessment teamwork tool.

Methods: The tool was tested in 257 medical and nursing students after their participation in one of several mass casualty simulations.

Results: Using exploratory and confirmatory factor analysis, the revised self-assessment teamwork tool was shown to have strong construct validity, high reliability, and the construct demonstrated invariance across groups (Medicine & Nursing).

Conclusions: The modified SATT was shown to be a reliable and valid student self-assessment tool. The SATT is a quick and practical method of guiding students’ reflection on important teamwork skills.

10.
Abstract

Introduction: The COVID-19 pandemic presented numerous, significant challenges for medical schools, including how to select the best candidates from a pool of applicants when social distancing and other measures prevented “business as usual” admissions processes. However, selection into medical school is the gateway to medicine in many countries, and it is critical to use processes which are evidence-based, valid and reliable even under challenging circumstances. Our challenge was to plan and conduct a multiple-mini interview (MMI) in a dynamic and stringent safe distancing context.

Methods: This paper reports a case study of how to plan, re-plan and conduct MMIs in an environment where substantially tighter safe distancing measures were introduced just before the MMI was due to be delivered.

Results: We report on how to design and implement a fully remote, online MMI which ensured the safety of candidates and assessors.

Discussion: We discuss the challenges of this approach and also reflect on broader issues associated with selection into medical school during a pandemic. The aim of the paper is to provide broadly generalizable guidance to other medical schools faced with the challenge of selecting future students under difficult conditions.

11.
Abstract

The medical school admissions process seeks to assess a core set of cognitive and non-cognitive competencies that reflect professional readiness and institutional mission alignment. The standardized format of multiple mini-interviews (MMIs) can enhance assessments, and thus many medical schools have switched to this for candidate interviews. However, because MMIs are resource-intensive, admissions deans use a variety of interviewers from different backgrounds/professions. Here, we analyze the MMI process for the 2018 admissions cycle at the VCU School of Medicine, where 578 applicants were interviewed by 126 raters from five distinct backgrounds: clinical faculty, basic science faculty, medical students, medical school administrative staff, and community members. We found that interviewer background did not significantly influence MMI evaluative performance scoring, which eliminates a potential concern about the consistency and reliability of assessment.

12.
Background: The Faculty of Medicine at the American University of Beirut implemented a new medical curriculum, which included 90 team-based learning (TBL) sessions in years 1 and 2 of medical school.

Methods: A validated team performance scale (TPS) and peer evaluation of communication skills, professionalism and personal development were collected at different time points during the two years. Grades on the individual and group readiness assurance tests and an evaluation form were collected after every TBL session.

Results: Students generally evaluated most TBL sessions positively as promoters of critical thinking and appreciated the self-learning experience, though they preferred, and had better individual grades on, sessions that entailed preparation of didactic lectures. There was a sustained and cumulative improvement in teamwork skills over time. Similar improvement was noted in peer evaluations of communication skills, professionalism, and personal development over time.

Conclusions: This is the first report about such a longitudinal follow-up of medical students who were exposed to a large number of TBL sessions over two years. The results support the suggestion that TBL improves medical students’ team dynamics and their perceived self-learning, problem solving and communication skills, as well as their professionalism and personal development.

13.
Introduction: Credible evaluation of the learning climate requires valid and reliable instruments in order to inform quality improvement activities. Since its initial validation the Dutch Residency Educational Climate Test (D-RECT) has been increasingly used to evaluate the learning climate, yet it has not been tested in its final form and on the actual level of use – the department.

Aim: Our aim was to re-investigate the internal validity and reliability of the D-RECT at the resident and department levels.

Methods: D-RECT evaluations collected during 2012–2013 were included. Internal validity was assessed using exploratory and confirmatory factor analyses. Reliability was assessed using generalizability theory.

Results: In total, 2306 evaluations and 291 departments were included. Exploratory factor analysis showed a 9-factor structure containing 35 items: teamwork, role of specialty tutor, coaching and assessment, formal education, resident peer collaboration, work is adapted to residents’ competence, patient sign-out, educational atmosphere, and accessibility of supervisors. Confirmatory factor analysis indicated acceptable to good fit. Three resident evaluations were needed to assess the overall learning climate reliably and eight residents to assess the subscales.

Conclusion: This study reaffirms the reliability and internal validity of the D-RECT in measuring residency training learning climate. Ongoing evaluation of the instrument remains important.

14.
15.
Abstract

Purpose: For new and emerging medical schools, developing a system to peer-review and evaluate the assessment processes through faculty development programs can be a challenge. This study evaluates the impact of peer-review practices on item analysis, reliability, and the standard error of measurement of multiple-choice questions for summative final examinations.

Methods: This study used a retrospective cohort design spanning two consecutive academic years, 2012 and 2013. Psychometric analyses of multiple-choice questions from three summative final examinations in Medicine, Pediatrics, and Surgery for sixth-year medical students at the College of Medicine, Taif University were used. Formal peer review of multiple-choice questions began in 2013, using guidelines from the National Board of Medical Examiners. Psychometric analyses of multiple-choice questions included item analysis (item difficulty and item discrimination) and calculation of internal-consistency reliability and the standard error of measurement. Data analyses were conducted using Stata.

Results: Results showed significant improvement in psychometric indices, particularly item discrimination and reliability by .14 and .12 points, respectively, following the implementation of the peer review process across the three exams. Item difficulty remained unchanged for Pediatrics and Surgery.
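The quantities named in this abstract (item difficulty, item discrimination, internal-consistency reliability, and the standard error of measurement) can all be computed directly from a 0/1 score matrix. A minimal sketch on hypothetical data, using KR-20 as the reliability estimate for dichotomous items:

```python
import numpy as np

def item_analysis(X):
    """Classic MCQ item analysis.
    X: 0/1 matrix, rows = examinees, columns = items.
    Returns (difficulty, discrimination, kr20, sem)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    difficulty = X.mean(axis=0)                    # proportion correct per item
    total = X.sum(axis=1)
    # Discrimination: correlation of each item with the total score
    # excluding that item (corrected point-biserial).
    discrimination = np.array(
        [np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(k)]
    )
    p = difficulty
    var_total = total.var(ddof=0)                  # population variance, as in KR-20
    kr20 = k / (k - 1) * (1 - (p * (1 - p)).sum() / var_total)
    sem = total.std(ddof=0) * np.sqrt(max(1.0 - kr20, 0.0))
    return difficulty, discrimination, kr20, sem

# Hypothetical 4-examinee, 3-item exam:
diff, disc, kr20, sem = item_analysis([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
print(kr20)   # → 0.75
```

A peer-review process like the one described would aim to raise the discrimination values and the KR-20, which is what the reported gains of .14 and .12 points reflect.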

Conclusion: Peer-review practices of multiple-choice questions using guidelines can lead to improved psychometric characteristics of items; these findings have implications for faculty development programs in improving item quality, particularly for medical schools in early stages of transforming assessment practices.

16.
Background: Multiple mini-interviews (MMI) are commonly used for medical school admission. This study aimed to assess if sociodemographic characteristics are associated with MMI performance, and how they may act as barriers or enablers to communication in MMI.

Methods: This mixed-method study combined data from a sociodemographic questionnaire, MMI scores, semi-structured interviews and focus groups with applicants and assessors. Quantitative and qualitative data were analyzed using multiple linear regression and a thematic framework analysis.

Results: 1099 applicants responded to the questionnaire. A regression model (R² = 0.086) demonstrated that being aged 25–29 (β = 0.11, p = 0.001), female, and a French speaker (β = 0.22, p = 0.003) was associated with better MMI scores. Having an Asian-born parent was associated with a lower score (β = −0.12).

Conclusion: Age, gender, ethnicity, socioeconomic status and language seem to be associated with applicants’ MMI scores because of perceived differences in communication skills and life experiences. Monitoring this association may provide guidance to improve the fairness of MMI stations.
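A regression model like this one, reporting an overall R² and coefficients for sociodemographic predictors of MMI score, can be fit with ordinary least squares. A minimal sketch on simulated data; the predictor names and effect sizes are illustrative only, not the study's:

```python
import numpy as np

def ols_r2(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return beta, 1 - ss_res / ss_tot

# Simulated applicants: three dummy-coded predictors (e.g. age band,
# gender, first language) with small true effects and large noise,
# yielding a small R^2 of the order reported in the abstract.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 3)).astype(float)
y = 0.11 * X[:, 0] + 0.22 * X[:, 2] + rng.normal(size=1000)
beta, r2 = ols_r2(X, y)
print(round(r2, 3))
```

A small R² here means sociodemographics explain only a fraction of score variance, even when individual coefficients are statistically significant.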

17.
Abstract

Background: Medical education has a longstanding tradition of using logbooks to record activities. The portfolio is an alternative tool to document competence and promote reflective practice. This study assessed the acceptance of portfolio use among Saudi undergraduate medical students.

Methods: Portfolios were introduced in the 2nd through 5th years at King Abdulaziz University over a two-year period (2013–2015). At the end of each academic year, students completed a mixed questionnaire that included a self-assessment of skills learned through portfolio use.

Results: The results showed a difference in focus between basic and clinical years: in basic years students’ focus was on acquiring practical skills, but in clinical years they focused more on acquiring complex skills, including identifying and managing problems. The questionnaire responses nonetheless revealed a positive trend in acceptance (belief in the educational value) of portfolios among students and their mentors, across the years of the program.

Conclusions: Using portfolios as a developmental learning and formative assessment tool in the early undergraduate years was found to contribute to students’ ability to create their own clinical skills guidelines in later years, as well as to engage in and appreciate reflective learning.

18.
Abstract

Purpose: Problem-based learning (PBL) is an instructional method widely used by medical educators that promotes an environment in which students effectively learn the foundational knowledge and skills that are prerequisites for graduation. This study evaluated medical students’ perceptions of the helpfulness of skills acquired in PBL to core clerkship rotations.

Methods: A 25-item survey was designed to assess students’ perceptions of skills learned in PBL that were helpful on core clerkships and transferable to the clinical setting. A random sample of students with at least 8 months of clerkship experience were invited to complete the survey.

Results: Of 68 students, 35 (52%) returned questionnaires. Results suggest a clustering of themes based on their perceived value. Skills learned in PBL that students rated most highly as helpful or very helpful during core clinical rotations include: comfort discussing concepts, identifying key information, presentation skills, interpersonal skills, diagnostic thinking, finding information, self-awareness, and organizing information. Other items rated highly included: forming questions, time management, primary literature (engaging with published original research articles), and leadership. The skills acquired in PBL were associated with multiple competency domains.

Conclusions: Although conditions of the pre-clerkship curriculum are substantially different from the learning environment of clerkship rotations, skills learned in PBL are perceived as applicable to authentic clinical training.

19.
Abstract

Objectives: Test anxiety is well known among medical students. However, little is known about the test anxiety produced by individual exam components. This study aimed to stratify the levels of test anxiety provoked by each exam modality and to explore students’ perceptions of confounding factors.

Methods: A self-administered questionnaire was administered to medical students. The instrument contained four main themes: lifestyle, psychological and specific factors of information needs, learning styles, and the perceived difficulty level of each assessment tool.

Results: The highest test anxiety score of 5 was assigned to “not scheduling available time” and “insufficient exercise” by 28.8% and 28.3% of students, respectively. For “irrational thoughts about the exam” and “fear of failing”, the highest test anxiety score of 5 was given by 28.8% and 25.7% of students, respectively. The highest total anxiety score of 1255 was recorded for the long case exam, followed by 975 for the examiner-based objective structured clinical examination. Excessive course load and courses not well covered by faculty were thought to be the main confounding factors.

Conclusions: The examiner-based assessment modalities induced high test anxiety. Faculty are urged to cover core content within the stipulated time and to rigorously reform and update existing curricula to prepare relevant course material.

20.
Background: Even in ordinary circumstances, the objective structured clinical examination (OSCE) is a resource-intensive assessment method; developing and implementing a multidisciplinary OSCE is undoubtedly costlier still.

Aim: This study describes a research project conducted to develop, implement, and evaluate a multidisciplinary OSCE model within limited resources.

Methods: The research project proceeded through the steps of blueprinting, station writing, resource reallocation, implementation, and finally evaluation.

Results: The developed model was implemented in the Primary Health Care (PHC) program, one of the pillars of the community-based undergraduate curriculum of the Faculty of Medicine, Suez Canal University (FOM-SCU). Data for evaluating the implemented OSCE model were derived from two sources: first, feedback from students and assessors obtained through self-administered questionnaires; second, evaluation of the OSCE psychometrics. The deliverables of this research project included a set of validated, integrated, multidisciplinary, low-cost OSCE stations with an estimated reliability index of 0.6.

Conclusion: Following this experience, we have a critical mass of faculty members trained in blueprinting and station writing, as well as a group of trained assessors, facilitators, and role players. Students are also now aware of how to proceed in this type of OSCE, which renders future implementation more feasible.
