Affiliation: | 1. Department of Electrical Engineering, University of Texas at Dallas, Richardson, USA; 2. Department of Surgery, UT Southwestern Medical Center, Dallas, USA; 3. Department of Mechanical Engineering, University of Texas at Dallas, Richardson, USA |
Abstract: | Objective: Quantitative assessment of surgical skill is an important aspect of surgical training; however, the metrics proposed to date are sometimes difficult to interpret and may not capture the stylistic characteristics that define expertise. This study proposes a methodology for evaluating surgical skill based on metrics associated with stylistic adjectives, and evaluates the ability of this method to differentiate levels of expertise. Methods: We recruited subjects at different expertise levels to perform training tasks on a surgical simulator. A lexicon of contrasting adjective pairs, based on skills important for robotic surgery and inspired by the Global Evaluative Assessment of Robotic Skills (GEARS) tool, was developed. To validate the use of stylistic adjectives for surgical skill assessment, crowd-workers rated videos of the subjects' posture while performing the task, as well as videos of the task itself. Metrics associated with each adjective were identified from kinematic and physiological measurements through correlation with the crowd-sourced adjective ratings. To evaluate the ability of the chosen metrics to distinguish expertise levels, two classifiers were trained and tested on these metrics. Results: Crowd-sourced ratings for all adjectives were significantly correlated with expertise level. The results indicate that the naive Bayes classifier performs best, with accuracies of 89 ± 12%, 94 ± 8%, 95 ± 7%, and 100 ± 0% when classifying into five, four, three, and two levels of expertise, respectively. Conclusion: The proposed method is effective at mapping understandable adjectives of expertise to the stylistic movements and physiological responses of trainees. |
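The abstract reports that a naive Bayes classifier, trained on adjective-associated kinematic and physiological metrics, best separates expertise levels. As a minimal sketch of that classification step (the feature values, labels, and function names here are hypothetical, not taken from the study), a Gaussian naive Bayes classifier can be written as:

```python
import math
from collections import defaultdict

def fit(X, y):
    """Estimate per-class feature means, variances, and priors from
    labeled metric vectors (one row per trial, one label per row)."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    model = {}
    for label, rows in groups.items():
        n = len(rows)
        cols = list(zip(*rows))
        means = [sum(c) / n for c in cols]
        # Small floor on the variance avoids division by zero.
        varis = [sum((v - m) ** 2 for v in c) / n + 1e-9
                 for c, m in zip(cols, means)]
        model[label] = (means, varis, n / len(X))
    return model

def predict(model, x):
    """Return the expertise label with the highest log-posterior for
    a new metric vector x, assuming independent Gaussian features."""
    def log_post(label):
        means, varis, prior = model[label]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=log_post)

# Toy usage with two invented expertise classes and two metrics:
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
y = ["novice", "novice", "expert", "expert"]
model = fit(X, y)
print(predict(model, [0.15, 0.15]))  # → novice
print(predict(model, [0.95, 0.95]))  # → expert
```

In the study's setting, the feature vectors would be the adjective-correlated kinematic and physiological metrics, and the labels the expertise levels; the reported accuracies come from cross-validated evaluation, not from a single fit as in this toy example.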