Evaluation of the reproducibility of non-verbal facial expressions in normal persons using a dynamic stereophotogrammetric system
Cite this article: QIU Tian-cheng, LIU Xiao-jing, XUE Zhu-lin, LI Zi-li. Evaluation of the reproducibility of non-verbal facial expressions in normal persons using a dynamic stereophotogrammetric system[J]. Journal of Peking University (Health Sciences), 2020, 52(6): 1107-1111. DOI: 10.19723/j.issn.1671-167X.2020.06.020
Authors: QIU Tian-cheng  LIU Xiao-jing  XUE Zhu-lin  LI Zi-li
Affiliation: Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
Abstract: Objective: To measure the reproducibility of facial expression movements in normal persons and to provide reference data for evaluating the effects of surgery and other interventions. Methods: Fifteen volunteers (7 males, 8 females; median age 25 years) with roughly symmetrical facial structures and no history of facial motor or sensory nerve disorders were recruited. A dynamic three-dimensional (3D) camera recorded each subject's facial expression movements (smile with lips closed, smile with lips open, lip purse, cheek puff) at an acquisition rate of 60 frames per second. For each expression, the six most characteristic frames were selected: the image at rest (T0), the midpoint between rest and maximum movement (T1), the frame just after reaching maximum movement (T2), the frame just before the end of maximum movement (T3), the midpoint between maximum movement and the return to rest (T4), and the resting image at the end of the movement (T5). Three-dimensional facial expression data were captured twice, at least one week apart. Taking the resting image (T0) as the reference, the movement frames (T1-T5) were registered and superimposed onto it, and regional analysis was used to quantify the 3D morphological difference between the corresponding key frames of the two sessions and their respective resting 3D images for each expression, expressed as the root mean square (RMS). Results: For the smile with lips closed, the smile with lips open, and the cheek puff, the RMS values obtained by registering the corresponding frames (T1-T5) of the two sessions onto their respective T0 resting images showed no statistically significant differences. For the lip purse, the RMS values obtained by registering the T2 frames of the two sessions onto their respective T0 resting images differed significantly (P < 0.05); the differences at the other time points were not statistically significant. Conclusion: Facial expressions in normal persons are reasonably reproducible, although the lip purse is less reproducible; a dynamic 3D camera can quantitatively record and analyze the 3D characteristics of facial expression movements.
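For reference, the RMS reported above follows the standard surface-deviation formulation (the abstract itself does not spell out the formula):

    \mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} d_i^{2}}

where d_i is the closest-point distance between the i-th point of a registered key-frame surface (T1-T5) and the corresponding resting surface (T0), and n is the number of points in the analyzed facial region.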

Keywords: Three-dimensional imaging  Facial expression  Reproducibility of results
Received: 2018-10-10

Evaluation of the reproducibility of non-verbal facial expressions in normal persons using a dynamic stereophotogrammetric system
Tian-cheng QIU, Xiao-jing LIU, Zhu-lin XUE, Zi-li LI. Evaluation of the reproducibility of non-verbal facial expressions in normal persons using a dynamic stereophotogrammetric system[J]. Journal of Peking University (Health Sciences), 2020, 52(6): 1107-1111. DOI: 10.19723/j.issn.1671-167X.2020.06.020
Authors:Tian-cheng QIU  Xiao-jing LIU  Zhu-lin XUE  Zi-li LI
Affiliation:Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
Abstract: Objective: To assess the reproducibility of non-verbal facial expressions (smile with lips closed, smile with lips open, lip purse, cheek puff) in normal persons using dynamic three-dimensional (3D) imaging and to provide reference data for future research. Methods: Fifteen adults (7 males and 8 females) without facial asymmetry or facial nerve dysfunction were recruited. Each participant was seated upright in front of the 3D imaging system in the natural head position so that the whole face was captured by all six cameras. The dynamic 3D system captured 60 3D images per second. Four facial expressions were included: smile with lips closed, smile with lips open, lip purse, and cheek puff. Before recording, the subjects rehearsed the facial expressions to develop muscle memory. During recording, each facial expression took about 3 to 4 seconds. At least 1 week later, the procedure was repeated. The rest position (T0) was taken as the base frame. Five key frames were selected: the first quartile of the movement (T1), the frame just after reaching the maximum state of the expression (T2), the frame just before the end of the maximum state (T3), the third quartile of the movement (T4), and the end of the motion (T5). Using stable parts of the face, such as the forehead, each key frame (T1-T5) of each expression was aligned with the corresponding frame at rest (T0). The root mean square (RMS) between each key frame and its corresponding frame at rest was calculated. The Wilcoxon signed-rank test was applied to assess differences between the corresponding frames of the two sessions for each facial expression. Results: The smile with lips closed, smile with lips open, and cheek puff were reproducible, whereas the lip purse was not: a statistically significant difference was found at the T2 frame of the repeated lip purse movement. Conclusion: Dynamic 3D imaging can be used to evaluate the reproducibility of facial expressions. Compared with qualitative and two-dimensional analyses, dynamic 3D images represent facial expressions more faithfully, which makes this type of research more reliable.
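The analysis pipeline described above (registration to T0, per-frame RMS, Wilcoxon signed-rank test across the two sessions) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the meshes have already been registered on stable regions such as the forehead, and it substitutes synthetic point clouds for real scans; all variable names are hypothetical.

    # Minimal sketch: per-frame RMS of closest-point distances to the rest frame (T0),
    # then a Wilcoxon signed-rank test pairing the two recording sessions across subjects.
    # Synthetic data stand in for registered 3D facial meshes.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.stats import wilcoxon

    def frame_rms(rest_points, frame_points):
        """RMS of closest-point distances from a key frame (T1-T5) to the rest (T0) surface."""
        distances, _ = cKDTree(rest_points).query(frame_points)
        return float(np.sqrt(np.mean(distances ** 2)))

    rng = np.random.default_rng(0)
    n_subjects = 15

    # Per subject: a rest scan plus the same key frame (e.g. T2 of the lip purse) from each session.
    rms_session1, rms_session2 = [], []
    for _ in range(n_subjects):
        rest = rng.normal(size=(500, 3))                          # stand-in for the T0 surface
        frame1 = rest + rng.normal(scale=0.5, size=rest.shape)    # session 1 key frame
        frame2 = rest + rng.normal(scale=0.5, size=rest.shape)    # session 2 key frame
        rms_session1.append(frame_rms(rest, frame1))
        rms_session2.append(frame_rms(rest, frame2))

    # Paired comparison of the two sessions at this key frame.
    stat, p_value = wilcoxon(rms_session1, rms_session2)
    print(f"Wilcoxon statistic = {stat:.3f}, p = {p_value:.3f}")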
Keywords:Three-dimensional images  Facial expression  Reproducibility of results  