Similar Articles
 17 similar articles found (search time: 171 ms)
1.
Objective: To compare the immediate processing characteristics of dynamically versus statically presented facial expressions using eye tracking. Methods: Seventeen participants took part in a 2 (presentation mode: dynamic, static) × 3 (facial expression: happy, neutral, angry) two-factor within-subject design. An ASL504 eye tracker recorded eye movements while participants viewed facial expression pictures (10 per condition); eye-movement measures were total fixation time, fixation percentage, number of fixation points, and number of gaze points. Results: Reaction time showed a main effect of facial expression (F(2,32)=5.27, P=0.011). Post-hoc comparisons (LSD) showed longer reaction times for neutral expressions than for happy and angry expressions (P<0.05). The number of fixation points showed a main effect of presentation mode (F(1,16)=14.34, P=0.002), with fewer fixation points under dynamic than under static presentation. The number of gaze points showed a main effect of facial expression (F(2,32)=5.14, P=0.012); post-hoc comparisons showed more gaze points for neutral expressions than for happy and angry expressions (P<0.05). Conclusion: Within the same viewing time, dynamically presented facial expressions are processed more efficiently, demonstrating a dynamic advantage in facial expression processing.

2.
Objective: To explore the characteristics of time-limited processing of dynamic and static facial expression pictures in college students. Methods: Eighteen college students judged the expression (angry, neutral, happy) of dynamically or statically presented facial expression pictures within a 600 ms window; reaction time and accuracy were recorded. EEG data were collected with an EGI 128-channel system, and three ERP components were analyzed: N170, P200, and the late positive component (LPC). Results: Compared with dynamic presentation, static presentation yielded shorter reaction times [(474.1±5.1) ms vs. (486.2±5.4) ms, P<0.001] and higher accuracy [(0.83±0.02) vs. (0.79±0.02), P<0.01]. Under static presentation, the N170 amplitude evoked by neutral expressions was smaller than that evoked by angry and happy expressions [(-2.7±0.6) μV vs. (-3.3±0.7) μV, (-3.2±0.7) μV; P<0.05], with no significant difference between happy and angry expressions (P>0.05). Under static presentation, the P200 amplitude evoked by neutral expressions was likewise smaller than that evoked by angry and happy expressions [(3.3±0.5) μV vs. (4.0±0.6) μV, (3.8±0.6) μV; P<0.05], again with no significant difference between happy and angry expressions (P>0.05). The LPC showed significant main effects of expression on both amplitude and latency (P<0.05): angry expressions evoked larger amplitudes than neutral expressions [(7.2±0.7) μV vs. (6.6±0.6) μV, P<0.05], and neutral expressions had longer latencies than angry and happy expressions [(534.4±9.7) ms vs. (515.2±10.4) ms, (502.8±12.1) ms; P<0.05]. Conclusion: This study suggests that under time-limited conditions, statically presented expressions hold a processing advantage at early stages; only at later stages does processing of dynamically presented expressions reach the level of static presentation.

3.
Facial expression recognition characteristics of preschool children with autism spectrum disorder
Objective: To test the ability of preschool children with autism to recognize static facial expressions and to characterize how they do so. Methods: Using a self-made set of facial photographs depicting seven expressions (happy, surprised, fearful, angry, disgusted, sad, neutral), 13 boys with autism aged 4-8 years and 23 typically developing control boys aged 3-5 years were tested; the two groups were matched on developmental age (3.66±0.44 years). Results: There was no significant group difference in overall accuracy for identifying the seven expression photographs (P>0.05). By expression category, the autism group identified sad, fearful, disgusted, and surprised expressions more accurately than controls (P<0.05). The rank order of recognition accuracy in the autism group was: happy > sad > angry > fearful > disgusted > surprised > neutral; in the control group: happy > angry > sad > fearful > neutral > disgusted > surprised. Conclusion: Children with autism did not differ significantly from typically developing children in naming the seven facial expressions, but their pattern of recognizing basic facial expressions differed.

4.
Objective: To study differences in facial expression recognition between men of different sexual orientations. Methods: Ten gay men and 15 heterosexual men were recruited by self-report. In Experiment 1, a presentation-judgment paradigm had participants judge four expressions (angry, disgusted, happy, calm); reaction time and accuracy were compared to assess group differences in processing. In Experiment 2, three expressions (angry, happy, calm) were used, and each face was divided into an eyebrow-eye region of interest and a nose-mouth region of interest; an eye tracker recorded each group's visual scanning during expression recognition to analyze their cognitive processing. Results: In Experiment 1, expression judgment showed effects of both sexual orientation and expression type. In Experiment 2, the visual scanning data showed that gay men fixated the eye region significantly longer and more often than heterosexual men, with a significant effect of expression type. Conclusion: Gay men discriminate expressions more sensitively than heterosexual men, and their expression processing relies mainly on extracting cues from the eyes. Ranked from most to least difficult, expression processing proceeded in the order angry, happy, calm.

5.
Objective: To examine the effect of facial expression manipulation on college students' attitudes. Methods: Forty-eight participants took part in a 2 (story valence: positive, negative) × 2 (facial expression: control, smiling) between-subjects design; attitudes toward stories of differing valence were analyzed after the expression manipulation. Results: The main effect of story type was significant (F=181.50, P<0.01); post-hoc comparisons (LSD) showed more positive attitudes toward positive than toward negative stories. The interaction between story type and expression group was significant (F=37.50, P<0.01): for positive stories, the smiling group scored higher than the control group, whereas for negative stories the control group scored higher than the smiling group. Conclusion: A smiling expression promotes the formation of positive attitudes in college students.

6.
Objective: To explore factors influencing facial expression recognition in children with autism aged 3-6 years. Methods: Eye tracking was used to compare how a high verbal ability group and a low verbal ability group viewed pictures of familiar and unfamiliar faces showing different types of emotional expression. Results: (1) For fixation time, number of fixation points, and fixation rate, the main effect of verbal ability was not significant (P>0.05). (2) For number of fixation points and fixation rate, the main effect of expression valence was significant (F=6.35, 3.97; P<0.05). (3) For fixation time, the interaction between face region and familiarity was significant (F=6.43, P<0.05). (4) For number of fixation points and fixation rate, the interaction between familiarity and expression valence was significant (F=4.29, P<0.05; F=6.73, P<0.01). Conclusion: Verbal ability has little influence on facial expression recognition in children with autism; these children recognize familiar faces relatively well, and they show an overall attentional preference for both positive and negative expressions.

7.
Effect of task presentation mode on false-belief understanding in children with autism
Objective: To explore a method better suited to testing false-belief understanding in children with autism. Methods: A 2 (presentation mode) × 2 (participant type) design used classic false-belief tasks, presented either as computer multimedia animations or in the traditional picture-and-object format, to test 20 children with autism and typically developing children matched on verbal ability (PPVT-R). Results: The main effect of presentation mode was highly significant (P<0.001): the animated format improved false-belief performance in both typically developing children and children with autism, with the larger improvement in the autism group. Conclusion: The false-belief understanding of children with autism may be underestimated under the traditional presentation format. Taking full account of the diversity of cognitive deficits in autism and choosing an appropriate task format may reveal their theory-of-mind development more accurately.

8.
Objective: To explore how children with autism process gaze-direction cues under static and dynamic presentation. Methods: Ten children meeting DSM-IV diagnostic criteria for autism and 10 age- and sex-matched typically developing controls were tested. Static gaze cues were formed from a single picture and dynamic gaze cues from a sequence of five pictures. A 2 (group: autism, control) × 2 (presentation: static, dynamic) × 2 (cue validity: valid, invalid) repeated-measures ANOVA compared the two groups' accuracy and reaction time in responding to gaze cues under each presentation mode. Results: Under static presentation, children with autism identified gaze direction less accurately than controls [(94.8±1.3)% vs. (99.5±1.3)%, P<0.05] and responded more slowly [(470.2±23.8) ms vs. (389.2±23.8) ms, P<0.05]. Under dynamic presentation, accuracy was higher for valid than for invalid cues [(98.8±0.5)% vs. (93.8±0.3)%, P<0.05], reaction times were shorter for valid than for invalid cues [(463.1±19.7) ms vs. (504.8±21.4) ms, P<0.01], and children with autism responded more slowly than controls [(544.6±28.4) ms vs. (423.3±28.4) ms, P<0.05]. Conclusion: This study suggests that children with autism show the same gaze-triggered attention shifts as typically developing children, with no specific impairment of gaze-cued attention, and that they may process dynamic gaze information more sensitively than static gaze information.

9.
Objective: To explore how young children with autism perceive facial emotional expressions. Methods: Children aged 18-36 months diagnosed with autism at the Child Development and Behavior Center of the Third Affiliated Hospital of Sun Yat-sen University between March 2007 and September 2008 formed the autism group; age- and sex-matched typically developing children attending routine health checks during the same period formed the control group. Children passively viewed pictures of five basic facial emotional expressions (happy, sad, surprised, angry, fearful) on a computer screen while their visual attention to each expression and their own emotional responses were observed and compared between groups. Results: Forty-five children were enrolled in each group. The duration of first fixation on each expression showed no clear group effect but a clear expression effect: both groups looked longer on first fixation at happy and angry expressions than at fearful expressions. However, the autism group looked back at the expression pictures markedly fewer times than controls and had markedly shorter total fixation times. In controls, emotional response scores clearly differed across expressions: positive-emotion scores were markedly higher for happy expressions than for the other expressions, negative-emotion scores for happy expressions were markedly lower than for angry and fearful expressions, and negative-emotion scores for sad and surprised expressions were also markedly lower than for fearful expressions. In the autism group, emotional response scores did not differ significantly across expressions. Conclusion: Young children with autism not only show reduced early visual attention to facial emotional expressions but also have deficits in perceiving them, with particular difficulty understanding negative emotional expressions.

10.
Objective: To analyze differences in emotion recognition between parents of children with autism and parents of typically developing children, and to explore whether facial emotion recognition in these parents suggests a heritable component. Methods: A computer-administered test of the six basic emotional faces and a battery of neuropsychological tests were used to compare theory-of-mind abilities between 32 parents of children with autism and 32 parents of typically developing children. Results: Parents of children with autism scored lower than control parents on overall recognition accuracy and on fear and disgust expressions (82.69±7.74 vs. 96.06±8.50; 6.56±3.41 vs. 12.50±2.74; 11.63±3.17 vs. 16.63±1.83; P<0.01), a highly significant difference. Scores for sad expressions also differed from those of control parents (15±4.10 vs. 18.25±3.26; P<0.05). Scores for the other emotions did not differ significantly. Conclusion: Parents of children with autism show deficits in emotion recognition relative to parents of typically developing children, providing cognitive-behavioral evidence that the emotion recognition impairment in children with autism may have a heritable component.

11.
Facial muscular reactions to avatars' static (neutral, happy, angry) and dynamic (morphs developing from neutral to happy or angry) facial expressions, presented for 1 s each, were investigated in 48 participants. Dynamic expressions led to better recognition rates and higher intensity and realism ratings. Angry expressions were rated as more intense than happy expressions. EMG recordings indicated emotion-specific reactions to happy avatars as reflected in increased M. zygomaticus major and decreased M. corrugator supercilii tension, with stronger reactions to dynamic as compared to static expressions. Although rated as more intense, angry expressions elicited no significant M. corrugator supercilii activation. We conclude that facial reactions to angry and to happy facial expressions hold different functions in social interactions. Further research should vary dynamics in different ways and also include additional emotional expressions.

12.
Research investigating the early development of emotional processing has focused mainly on infants' perception of static facial emotional expressions, likely restricting the amount and type of information available to infants. In particular, the question of whether dynamic information in emotional facial expressions modulates infants' neural responses has rarely been investigated. The present study aimed to fill this gap by recording 7-month-olds' event-related potentials to static (Study 1) and dynamic (Study 2) happy, angry, and neutral faces. In Study 1, happy faces evoked a faster right-lateralized negative central (Nc) component compared to angry faces. In Study 2, both happy and angry faces elicited a larger right-lateralized Nc compared to neutral faces. Irrespective of stimulus dynamicity, a larger P400 to angry faces was associated with higher scores on the Negative Affect temperamental dimension. Overall, results suggest that 7-month-olds are sensitive to facial dynamics, which might play a role in shaping the neural processing of facial emotional expressions. Results also suggest that the amount of attentional resources infants allocate to angry expressions is associated with their temperamental traits. These findings represent a promising avenue for future studies exploring the neurobiological processes involved in perceiving emotional expressions using dynamic stimuli.

13.
Objective: To develop a preliminary picture library of static Chinese facial expressions covering seven expressions (happy, angry, surprised, fearful, sad, disgusted, neutral) as material for emotion research, and to explore methods for capturing real people's facial expressions. Methods: Healthy Chinese individuals aged 5-80 years with regular facial features were recruited across age stages from preschool and school age through primary school, middle school, early adulthood, middle adulthood, and late adulthood. Participants were told the inner emotional meaning, intensity grades, and facial features of each expression, and the expressions were then elicited through conversation drawing on each individual's background and experiences. All expressions were photographed under standardized conditions by the same person using a digital camera. Pictures of the seven basic expressions were preliminarily produced; those passing an initial screening by the research group were uniformly processed into 10 cm × 15 cm expression pictures. Healthy college students then performed a second screening; pictures with high recognition agreement were assessed for test-retest reliability, and validity was evaluated against the Japanese female facial expression pictures as a reference. Results: The first screening yielded 80 expression pictures; 59 college students in the second screening retained 21 pictures (3 per expression). Twenty-eight college students completed the test-retest reliability, expression intensity, and validity assessments. Test-retest reliability of the 21 pictures was slightly better than that of the Japanese female facial expression pictures; apart from 2 angry, 3 fearful, 1 sad, and 3 disgusted pictures whose test-retest reliability fell below 70%, reliability was high for the remaining expressions. Validity was lower for the negative expressions (angry, fearful, sad, disgusted) and higher for the happy, neutral, and surprised expressions. Conclusion: The static Chinese facial expression pictures are representative and reasonably reliable and can serve as material for emotion research. Both the expression of facial emotion and the judgment of the underlying emotion are influenced by other-race effects and cultural differences.

14.
Preliminary studies have demonstrated that school-aged children (average age 9-10 years) show mimicry responses to happy and angry facial expressions. The aim of the present study was to assess the feasibility of using facial electromyography (EMG) as a method to study facial mimicry responses in younger children aged 6-7 years to emotional facial expressions of other children. Facial EMG activity to the presentation of dynamic emotional faces was recorded from the corrugator, zygomaticus, frontalis and depressor muscle in sixty-one healthy participants aged 6-7 years. Results showed that the presentation of angry faces was associated with corrugator activation and zygomaticus relaxation, happy faces with an increase in zygomaticus and a decrease in corrugator activation, fearful faces with frontalis activation, and sad faces with a combination of corrugator and frontalis activation. This study demonstrates the feasibility of measuring facial EMG response to emotional facial expressions in 6-7 year old children.

15.
Growing evidence suggests that the recognition of different emotional states involves at least partly separable neural circuits. Here we assessed the discrimination of both anger and happiness in healthy subjects receiving transcranial magnetic stimulation (TMS) over the medial-frontal cortex or over a control site (mid-line parietal cortex). We found that TMS over the medial-frontal cortex impairs the processing of angry, but not happy, facial expressions of emotion.  相似文献   

16.
The present study investigated the effects of dynamic information on the recognition of emotional facial expressions across the visual field (i.e., central or peripheral vision). Facial stimuli with three pleasant expressions (excited, happy, and relaxed) and three unpleasant expressions (fearful, angry, and sad) were selected on the basis of valence and activation. The facial stimuli were presented dynamically or statically at either the central or peripheral visual field. Participants evaluated the emotional state of the target facial expression using a forced-choice task (N=34) and an Affect Grid (Russell, Weiss, & Mendelsohn, 1989) (N=39) requiring categorical and dimensional judgments about facial expressions. The results of the forced-choice task showed that only dynamic angry faces in peripheral vision had better recognition than the equivalent faces in the static condition. The results of the Affect Grid indicated that only the pleasant expressions presented in the peripheral field were significantly rated as more strongly pleasant. These findings suggest that an effect of dynamic information is more salient in peripheral vision than in central vision for recognizing certain facial expressions.

17.
The present research investigates the effects of gaze direction on the perceived duration of the presentation of angry and happy expressions. When the facial expression was angry, a straight gaze elongated the perceived duration of the expression compared with an averted gaze. However, there was no effect of gaze direction when the facial expression was happy. These findings indicate that the subjective estimation of time is elongated when the observer encounters a socially important survival signal, considering that an angry face with a straight gaze may be perceived as a threat requiring a fight-or-flight response.  相似文献   


Copyright©北京勤云科技发展有限公司  京ICP备09084417号