Similar Articles
18 similar articles found (search time: 156 ms)
1.
Facial Expression Recognition Characteristics of Preschool Children with Autism Spectrum Disorder   Cited: 1 (self-citations: 0, others: 1)
Objective: To test the ability and characteristics of preschool children with autism in recognizing static facial expressions. Methods: Using a self-made set of photographs showing seven facial expressions (happy, surprised, fearful, angry, disgusted, sad, and neutral), 13 boys with autism aged 4-8 years and 23 typically developing control boys aged 3-5 years were tested and analyzed; the two groups were matched on developmental age (3.66±0.44 years). Results: There was no significant difference between the two groups in overall accuracy of recognizing the seven expression photographs (P>0.05). Comparing individual expression categories, the autism group recognized sad, fearful, disgusted, and surprised expressions more accurately than the controls (P<0.05). The autism group's recognition accuracy, in descending order, was: happy > sad > angry > fearful > disgusted > surprised > neutral; for the control group it was: happy > angry > sad > fearful > neutral > disgusted > surprised. Conclusion: Children with autism did not differ significantly from typically developing children in naming the seven facial expressions, but their pattern of recognizing basic facial expressions differed from that of typical children.

2.
Objective: To examine the effect of a multidimensional emotional-competence intervention on facial expression production and imitation in young boys with autism. Methods: With parental consent, 26 boys with autism aged 39-79 months were assigned to an intervention group (n=15) or a control group (n=11). The intervention group received a comprehensive multidimensional emotional-competence intervention lasting about 3 months, covering emotion recognition, understanding, experience, regulation, imitation, and expression. The program comprised 36 sessions of about 15-25 minutes each. Before and after the intervention, all children completed expression and imitation tasks for six basic emotions (happy, angry, sad, surprised, disgusted, fearful), with facial muscle movements recorded on video. Accuracy of facial expression production was coded using a facial expression coding system. Results: (1) After the intervention, the quality with which the boys imitated all six basic facial expressions, and the quality with which they expressed happy, angry, sad, and fearful expressions, improved significantly. (2) In the imitation task, production of action units (AUs) in both the mouth and eye regions improved significantly after the intervention; in the expression task, only production of eye-region AUs improved significantly. Conclusion: The multidimensional emotional-competence intervention can improve facial expression production in preschool boys with autism, and the study offers reference content and a framework for emotional-competence interventions for children with autism.

3.
Objective: To explore whether university students show different degrees of attentional bias toward various emotional pictures and facial expression pictures, and to examine possible gender differences. Methods: Forty-two undergraduates from a university in Guangzhou were recruited online (21 men, 21 women; aged 21-25 years). Emotional pictures of 3 valence categories and 5 categories of facial expression pictures served as stimuli in an emotional Stroop task, and reaction times and error rates for emotional versus neutral stimuli were compared. Results: Error rates induced by positive, negative, and neutral pictures did not differ significantly. Reaction times to negative pictures were longer than to neutral pictures [(654±92) ms vs. (636±77) ms, P<0.05], whereas reaction times to positive and neutral pictures did not differ significantly [(647±81) ms vs. (636±77) ms, P>0.05]. The error rate for disgusted expressions was higher than for neutral expressions [11.3% vs. 3.4%, P<0.001], while happy (2.6%), sad (4.2%), and angry (3.2%) expressions did not differ significantly from neutral. Reaction times did not differ significantly among the 5 expressions [sad (638±92) ms, happy (633±83) ms, disgusted (650±98) ms, angry (645±96) ms, neutral (645±90) ms, Ps>0.05]. Neither reaction times nor error rates showed clear gender differences (P>0.05). Conclusion: In the emotional Stroop task, negative pictures and disgusted expressions induced attentional bias in university students, manifested as longer reaction times and higher error rates respectively; behavioral gender differences in emotional attentional bias were not evident.
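The reaction-time contrast described above (negative vs. neutral pictures, within the same participants) is a paired comparison. A minimal sketch of that analysis step, using hypothetical per-subject mean RTs rather than the study's data:

```python
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired-samples t statistic for two equal-length lists of
    per-subject mean reaction times (one value per subject)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Hypothetical per-subject mean RTs in ms (NOT the study's data):
negative_rt = [660, 671, 648, 655, 666, 659]
neutral_rt = [640, 652, 633, 638, 649, 641]
t = paired_t(negative_rt, neutral_rt)  # positive t: negative pictures slower
```

The t value is then compared against the critical value for n-1 degrees of freedom to decide significance, mirroring the P<0.05 comparison reported in the abstract.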

4.
Objective: To develop a set of dynamic facial expression pictures of cartoon characters for children. Methods: In Study 1, a self-designed open-ended questionnaire surveyed 322 adults on the typical features of happiness, sadness, anger, fear, disgust, and surprise in the brows, eyes, nose, mouth, and whole face. In Study 2, 48 university students rated the dynamic cartoon-character expression pictures for emotion category, intensity, valence, and dominance. Results: (1) There were 104 valid animated pictures in total: 40 happy, 14 sad, 23 surprised, 8 angry, 10 fearful, and 9 disgusted. (2) The typical features identified by the questionnaire matched the dynamic pictures with high correlation. Conclusion: The study provides a picture library of cartoon-character expressions as material for improving children's facial expression recognition.

5.
Revision of the Chinese Facial Affective Picture System   Cited: 4 (self-citations: 1, others: 3)
Objective: To expand the native Chinese Facial Affective Picture System as material for emotion research. Methods: Convenience sampling was used. One hundred students from the acting and directing departments of 2 art academies in Beijing served as facial expression posers, and 100 students recruited from 2 ordinary universities in Beijing served as raters. Pictures of 7 facial expressions (anger, disgust, fear, sadness, surprise, happiness, and calm) were collected from the posers; the raters then judged the emotion category of each picture and rated its emotional intensity on a 9-point scale, expanding the number of pictures of each emotion type. Next, 100 students from drama clubs at 3 ordinary universities in Beijing, plus 10 older adults and 10 children from a Beijing community, served as posers, and another 100 students from 2 ordinary universities served as raters, further expanding the number of negative pictures (anger, disgust, fear, sadness) and adding pictures from other age groups. Results: A representative set of 870 facial expression pictures covering the 7 emotion types was obtained, each with a corresponding agreement rate and emotional intensity score: 74 angry, 47 disgusted, 64 fearful, 95 sad, 120 surprised, 248 happy, and 222 calm. Conclusion: This study preliminarily established a Chinese facial affective picture system with relatively high reliability that can serve as material for future emotion research; the system awaits further refinement.

6.
Objective: To examine the characteristics of facial expression picture recognition in university students with delayed social development. Methods: Seventy-two students were screened from 3 universities in Fuzhou: 36 with delayed social development and 36 with normal development. Fifty-four standardized pictures from the Chinese Facial Affective Picture System (CAFPS) served as experimental materials, and both groups completed a facial expression recognition task. Results: The delayed group recognized both positive and negative facial expressions less accurately than the normal group (both P<0.05). The delayed group recognized positive expressions more accurately than negative ones (P<0.05), whereas the normal group showed no significant difference between positive and negative expressions (P>0.05). The delayed group was less accurate than the normal group at recognizing happy, surprised, sad, angry, and fearful expressions (all P<0.05), with no significant difference for excited expressions (P>0.05). Conclusion: University students with delayed social development recognize facial expression pictures less well than students with normal social development.

7.
Objective: The parental brain is the collective term for the neural circuits in the human brain closely tied to caregiving feelings and behavior, and studying it helps clarify how human parent-child bonds form. To facilitate research on the parental brain and related fields, this study set out to establish a standardized Chinese infant facial expression picture system. Methods: A total of 360 facial expression pictures of 211 Chinese infants aged 3-6 months were collected and standardized. At 2 universities, one each in Chongqing and Guiyang, 196 unmarried university students with no childbearing history were recruited as raters: 100 judged the emotion category and intensity of the pictures, and the other 96 rated the selected pictures on 9-point scales of valence, arousal, and dominance. Results: A Chinese infant facial expression picture system containing 117 happy, 92 neutral, and 108 sad expressions was established, with mean agreement rates of 89%, 77%, and 92%; intensity of 5.83, 5.15, and 6.19; valence of 5.92, 4.55, and 2.70; arousal of 5.11, 4.18, and 3.77; and dominance of 5.02, 4.93, and 4.91 respectively. The system effectively evoked emotional responses in raters, and the valence, arousal, and dominance indices all differed significantly across the 3 expression types (all P<0.01). Conclusion: This study preliminarily established a Chinese infant facial expression picture system with relatively high reliability and validity that can serve as experimental material for parental brain and emotion research.
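The per-picture norms reported above (agreement rate plus mean scale ratings) come from aggregating individual rater judgments. A minimal sketch of that aggregation step, with made-up judgments for a single hypothetical picture:

```python
from statistics import mean

def summarize_picture(intended, judgments, intensity_ratings):
    """Per-picture norms: identification (agreement) rate = share of raters
    choosing the intended emotion; intensity = mean 9-point rating."""
    hits = sum(1 for j in judgments if j == intended)
    return {
        "identification_rate": hits / len(judgments),
        "mean_intensity": mean(intensity_ratings),
    }

# Hypothetical judgments for one 'happy' infant picture (100 raters):
judgments = ["happy"] * 89 + ["neutral"] * 8 + ["sad"] * 3
norms = summarize_picture("happy", judgments, [6, 5, 6, 7, 5])
```

The same pattern extends to the valence, arousal, and dominance scales: each is just a mean over the relevant raters' 9-point scores.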

8.
Objective: To evaluate the reliability and validity of the Chinese Morph emotional face recognition test. Methods: Fifty-seven university students completed the Chinese Morph emotional face recognition test and the Calder emotional face recognition test; 39 of them were retested. Results: The test's internal-consistency coefficient (Cronbach's α) was 0.728; test-retest reliabilities were 0.715 for the total score, 0.609 for happiness, 0.815 for surprise, 0.777 for fear, 0.742 for sadness, 0.552 for disgust, and 0.782 for anger. The test's correct recognition rates correlated significantly with those of the Calder test. Conclusion: The test has acceptable reliability and validity among Chinese university students.
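The internal-consistency coefficient reported above is Cronbach's alpha, which compares the sum of item variances with the variance of total scores. A minimal sketch with a small hypothetical data set (not the test's actual scores):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of per-item score lists (all items
    answered by the same respondents, in the same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    def var(xs):  # population variance; using it consistently cancels n/(n-1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(items)
    n_resp = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n_resp)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item, 4-respondent example:
alpha = cronbach_alpha([[2, 4, 3, 5], [3, 5, 2, 5], [2, 4, 3, 4]])
```

Alpha rises when items covary strongly relative to their individual variances, which is why it is read as an index of internal consistency.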

9.
Objective: To explore, through two behavioral experiments, how people explain gender stereotypes about emotion through attribution. Methods: Using the spontaneous trait inference paradigm as modified by Barrett et al., facial expression pictures of two intensities (44% and 88%) and situational sentences obtained from a questionnaire served as experimental materials, with university students as participants tested on computers. Randomly paired target emotion pictures and situational sentences were first presented on screen; participants then made attribution judgments for the target emotion pictures at 88% expression intensity, next performed recognition of the target emotion pictures, and finally recalled the situational sentences. Results: Perceivers made more dispositional attributions for women's positive and negative facial expressions alike, but not for men's. Conclusion: Perceivers show more correspondence bias toward women's facial expressions, whereas men's emotional reactions are attributed to the situation.

10.
Objective: To revise the Ambivalence over Emotional Expressiveness Questionnaire (AEQ) among Chinese university students and examine its reliability and validity. Methods: A sample of 467 university students (Sample 1) was used for item analysis and exploratory factor analysis; another 377 students (Sample 2) were used for confirmatory factor analysis and tests of convergent validity, discriminant validity, composite reliability, and internal consistency. In Sample 1, 150 students also completed the Berkeley Expressivity Questionnaire (BEQ), the Toronto Alexithymia Scale (TAS-20), and the AEQ-G28 to assess criterion validity; two weeks later, 100 students randomly selected from Sample 2 were retested. Results: Exploratory factor analysis yielded 5 factors with 23 items in total (regret over expression, desire to be understood, emotion myths, suppression of positive emotional expression, and suppression of negative emotional expression), together explaining 54.53% of total variance. Confirmatory factor analysis indicated good model fit (χ²/df=2.07, CFI=0.92, TLI=0.91, GFI=0.90, RMSEA=0.05). The revised questionnaire's total score correlated significantly with BEQ and TAS scores (r=-0.32 and 0.40, P<0.01). The questionnaire's overall Cronbach's α was 0.91, with test-retest reliability of 0.80; the 5 factors' internal consistencies ranged from 0.68 to 0.77, their test-retest reliabilities from 0.44 to 0.80, and their composite reliabilities from 0.75 to 0.83. Conclusion: The revised Chinese version of the AEQ has good reliability and validity and can serve as a tool for measuring and assessing ambivalence over emotional expression in Chinese university students.
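The composite reliabilities quoted above (0.75-0.83 per factor) are typically computed from the standardized factor loadings of a confirmatory factor model. A minimal sketch of that formula, with hypothetical loadings rather than the questionnaire's actual ones:

```python
def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings:
    CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)),
    where each (1 - lambda^2) term is an item's error variance."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Hypothetical standardized loadings for one 5-item factor:
cr = composite_reliability([0.72, 0.65, 0.70, 0.58, 0.66])
```

Unlike Cronbach's alpha, this index weights items by their estimated loadings instead of assuming they contribute equally.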

11.
Sixty subjects were exposed for 40 s each to 48 imagery situations designed to reflect happy, sad, angry and fearful conditions. Facial electromyographic (EMG) activity from zygomatic, corrugator, masseter and lateral frontalis muscle regions was recorded, and experienced emotion was measured on a scale tapping these four emotions. Results showed that: (1) zygomatic activity reliably differentiated happy imagery, corrugator activity reliably differentiated sad imagery, but masseter activity did not differentiate angry imagery and lateral frontalis activity did not differentiate fearful imagery; (2) different intensities of specific emotional imagery situations evoked the expected differential patterns of self-report and EMG; (3) higher correlations between self-report and EMG for ‘present’, rather than ‘future’ ratings of experienced emotion emerged for positive affect only; and (4) the use of a standardized imagery scale, rather than the self-generated, personally-relevant affective situations used in previous studies, allowed for more sensitive measurement of the relationship between facial muscle activity and subjective experience of emotion during affective imagery.

12.
Startle reflex modulation by affective pictures is a well-established effect in human emotion research. However, much less is known about startle modulation by affective faces, despite the growing evidence that facial expressions robustly activate emotion-related brain circuits. In this study, acoustic startle probes were administered to 37 young adult participants (20 women) during the viewing of slides from the Pictures of Facial Affect set including neutral, happy, angry, and fearful faces. The effect of expression valence (happy, neutral, and negative) on startle magnitude was highly significant (p < .001). Startle reflex was strongly potentiated by negative expressions (fearful and angry), however, no attenuation by happy faces was observed. A significant valence by gender interaction suggests stronger startle potentiation effects in females. These results demonstrate that affective facial expressions can produce significant modulation of the startle reflex.

13.
Facial electromyographic (EMG) activity was recorded from the zygomatic, corrugator, masseter and frontalis muscle regions in 30 male and 30 female subjects. Forty-eight items were selected to reflect happy, sad, angry and fearful situations. Subjects imagined each of the items for 40 sec and rated how they felt on a scale tapping the four emotions. The results indicated that for certain emotions, muscle regions and ratings, females (as compared to males): 1) generated facial EMG patterns of greater magnitude (relative to rest) during affective imagery, 2) reported a stronger experience of emotion to the imagery, 3) showed greater within-subject correlations between the experience of emotions and facial EMG, 4) evidenced somewhat higher corrugator and significantly lower masseter EMG activity during rest, and 5) generated greater facial EMG changes during a post-imagery, voluntary facial expression condition. Cultural and biological interpretations of the data are considered. The importance of evaluating gender in psychophysiological studies of emotion is stressed.

14.
Objective: To explore how young children with autism perceive facial emotional expressions. Methods: Children aged 18-36 months diagnosed with autism at the Child Development and Behavior Center of the Third Affiliated Hospital of Sun Yat-sen University between March 2007 and September 2008 formed the autism group; typically developing children matched to the autism group for age and sex, attending routine health checks during the same period, formed the control group. The children passively viewed pictures of 5 basic facial emotional expressions (happy, sad, surprised, angry, and fearful) on a computer screen, and the two groups' visual attention to each expression and their own emotional reactions were observed and compared. Results: Forty-five children were enrolled in each group during the study period. First-fixation durations showed no clear group effect but a clear expression effect: both groups looked longer at happy and angry expressions on first fixation than at fearful expressions. However, the autism group looked back at the expression pictures markedly fewer times than the controls, and their total fixation time was also markedly shorter. The control group's own emotional reaction scores clearly differed across expressions: their positive-emotion scores for happy expressions were markedly higher than for the other expressions, their negative-emotion scores for happy expressions were markedly lower than for angry and fearful expressions, and their negative-emotion scores for sad and surprised expressions were also markedly lower than for fearful expressions. In contrast, the autism group's emotional reaction scores did not differ significantly across expressions. Conclusion: Young children with autism not only show reduced early visual attention to facial emotional expressions but also deficits in perceiving them, with particular difficulty understanding negative expressions.

15.
We examined genetic and environmental influences on recognition of facial expressions in 250 pairs of 10-year-old monozygotic (83 pairs) and dizygotic (167 pairs) twins. Angry, fearful, sad, disgusted, and happy faces varying in intensity (15%–100%), head orientation, and eye gaze were presented in random order across 160 trials. Total correct recognition responses to each facial expression comprised the dependent variables. Twin data examined genetic and environmental contributions to variables and their correlations. Results support a common psychometric factor influenced primarily by additive genetic influences across expressions with discrimination of specific expressions due largely to non-shared environmental influences.

16.
A considerable body of research has focused on neural responses evoked by emotional facial expressions, but little is known about mother-specific brain responses to infant facial emotions. We used near-infrared spectroscopy to investigate prefrontal activity while 14 mothers and 14 age-matched females who had never been pregnant (non-mothers) discriminated happy, angry, sad, fearful, surprised, and neutral facial expressions of unfamiliar infants and unfamiliar adults. Our results revealed that discriminating infant facial emotions increased the relative oxyHb concentration in mothers' right prefrontal cortex but not in their left prefrontal cortex, compared with each side of the prefrontal cortices of non-mothers. However, there was no difference between mothers and non-mothers in right or left prefrontal cortex activation while viewing adult facial expressions. These results suggest that the right prefrontal cortex is involved in human maternal behavior concerning infant facial emotion discrimination.

17.
The ability to distinguish facial emotions emerges in infancy. Although this ability has been shown to emerge between 5 and 7 months of age, the literature is less clear regarding the extent to which neural correlates of perception and attention play a role in processing of specific emotions. This study's main goal was to examine this question among infants. To this end, we presented angry, fearful, and happy faces to 7-month-old infants (N = 107, 51% female) while recording event-related brain potentials. The perceptual N290 component showed a heightened response for fearful and happy relative to angry faces. Attentional processing, indexed by the P400, showed some evidence of a heightened response for fearful relative to happy and angry faces. We did not observe robust differences by emotion in the negative central (Nc) component, although trends were consistent with previous work suggesting a heightened response to negatively valenced expressions. Results suggest that perceptual (N290) and attentional (P400) processing is sensitive to emotions in faces, but these processes do not provide evidence for a fear-specific bias across components.
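ERP components such as the N290 and P400 are typically quantified as the mean voltage of the averaged epoch inside a fixed post-stimulus time window. A minimal sketch of that windowing step, with a toy epoch and illustrative window bounds (the study's actual windows and sampling rate are assumptions here):

```python
def mean_amplitude(samples, srate_hz, t0_ms, t1_ms, epoch_start_ms=-100):
    """Mean voltage of one ERP epoch inside [t0_ms, t1_ms).
    samples: evenly spaced voltages at srate_hz, epoch starting
    at epoch_start_ms relative to stimulus onset."""
    def idx(t_ms):
        return int(round((t_ms - epoch_start_ms) * srate_hz / 1000))
    window = samples[idx(t0_ms):idx(t1_ms)]
    return sum(window) / len(window)

# Toy epoch: 100 ms pre-stimulus baseline at 0 uV, then 400 ms at 1 uV,
# sampled at 1000 Hz. The 290-350 ms window below is illustrative only.
epoch = [0.0] * 100 + [1.0] * 400
n290_amp = mean_amplitude(epoch, 1000, 290, 350)
```

Comparing such per-condition mean amplitudes across emotions is what underlies statements like "a heightened N290 response for fearful relative to angry faces."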

18.
Preliminary studies have demonstrated that school-aged children (average age 9-10 years) show mimicry responses to happy and angry facial expressions. The aim of the present study was to assess the feasibility of using facial electromyography (EMG) as a method to study facial mimicry responses in younger children aged 6-7 years to emotional facial expressions of other children. Facial EMG activity to the presentation of dynamic emotional faces was recorded from the corrugator, zygomaticus, frontalis and depressor muscles in sixty-one healthy participants aged 6-7 years. Results showed that the presentation of angry faces was associated with corrugator activation and zygomaticus relaxation, happy faces with an increase in zygomaticus and a decrease in corrugator activation, fearful faces with frontalis activation, and sad faces with a combination of corrugator and frontalis activation. This study demonstrates the feasibility of measuring facial EMG responses to emotional facial expressions in 6-7-year-old children.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号