Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination
Authors: Johnson S J, Hunt C M, Woolnough H M, Crawshaw M, Kilkenny C, Gould D A, England A, Sinha A, Villard P F
Affiliation: Manchester Business School, University of Manchester, UK. sheena.johnson@mbs.ac.uk
Abstract:

Objectives

The aim of this study was to identify and prospectively investigate performance metrics for simulated ultrasound-guided targeted liver biopsy as differentiators between levels of expertise in interventional radiology.

Methods

Task analysis produced detailed documentation of the procedural steps, allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Scores on the performance metrics were compared between consultants (n=14; 11 male, 3 female) and trainees (n=26; 19 male, 7 female). Independent t-tests and analysis of variance (ANOVA) were used to investigate differences between the groups.
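
As a rough illustration of the statistical approach described above (not the authors' code), the following Python sketch shows how an independent-samples t-test and a one-way ANOVA could be run with SciPy on per-participant metric scores. All arrays, group sizes and distributions below are hypothetical placeholders, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical targeting scores: consultants (n=14) vs. trainees (n=26); illustrative only
rng = np.random.default_rng(0)
consultant_targeting = rng.normal(2.0, 1.0, 14)
trainee_targeting = rng.normal(3.0, 1.2, 26)

# Independent-samples t-test, as used for the consultant vs. trainee comparisons
t_stat, p_value = stats.ttest_ind(consultant_targeting, trainee_targeting)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# One-way ANOVA across experience bands (0-1, 1-2, 3+ years); band sizes are assumed
band_0_1 = rng.normal(3.4, 1.0, 9)
band_1_2 = rng.normal(2.9, 1.0, 17)
band_3_plus = rng.normal(2.2, 1.0, 14)
f_stat, p_anova = stats.f_oneway(band_0_1, band_1_2, band_3_plus)
print(f"F = {f_stat:.3f}, p = {p_anova:.3f}")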

Results

Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p=0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA revealed significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better scores on all 19 performance metrics.

Conclusion

It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required.

Training in interventional radiology (IR) uses the traditional apprenticeship model despite recognised drawbacks, e.g. difficulty articulating expertise, pressure to train more rapidly [1] and a reduced number of training opportunities. Moreover, it has been described as inefficient, unpredictable and expensive [2,3], and its suitability for training has been questioned because there is no mechanism for measuring post-training skill [4]. There is an increasing need to develop alternative training methods [5]. Simulator training offers numerous benefits, including gaining experience free from risk to patients, learning from mistakes and rehearsing complex cases [6]. IR is particularly suited to simulator training because its skills, such as interpreting two-dimensional radiographs or ultrasound images, can be reproduced in a simulator in the same way as in real-life procedures.

Medical virtual reality simulators are increasingly used, and some have been validated to show improved clinical skills, e.g. in laparoscopic surgery [7], colonoscopy [8] and anaesthetics [9]. Within IR, however, no simulator has met this standard [5,6,10]: validation studies typically fail to discriminate accurately between experts and novices [11], although differences have been observed [12]. Length of time to complete a procedure on a simulator is a frequently reported discriminator of expertise [6], but there is a worrying lack of emphasis on the number of errors made or other clinically relevant parameters. A recent review [6] reported “fundamental inconsistencies” and “wide variability in results” in validation studies, concluding that assessment should focus on the analysis of errors and the quality of the end product. The authors proposed that, to fully develop and validate simulators, task analysis (TA) is needed to deconstruct individual procedural tasks, followed by metric definition and identification of critical performance indicators. This echoes previous calls for expert involvement in simulator design [13].

To the best of our knowledge, no IR simulator has been developed using TA of real-world tasks, despite the critical role of such techniques in training development and system design over the past 100 years [14]. TA identifies the knowledge and thought processes supporting task performance, and the structure and order of individual steps, and is particularly relevant for deconstructing tasks conducted by experts [15,16]. TA techniques are increasingly used as a medical educational resource, e.g. in the development of surgical training [17] and the teaching of technical skills within surgical skills laboratories [18].

Using task analysis, this research identified and prospectively investigated performance metrics for simulated ultrasound-guided targeted liver biopsy as differentiators between levels of expertise in IR.