Similar Documents
20 similar documents retrieved.
1.
Objective: Patients with depression show a negative cognitive bias. This study explored the neural basis of the emotional bias shown by male patients with depression when recognizing dynamic facial expressions. Methods: A 1.5T functional magnetic resonance imaging (fMRI) system was used to record brain responses in 12 male patients with depression and 12 matched healthy men while they identified video clips of sad, happy, and neutral facial expressions. Raw functional MRI images were format-converted, processed with SPM2, and analyzed with paired t-tests. Results: Compared with healthy controls, when patients explicitly identified happy expressions (happy-expression minus neutral-expression contrast), regions with increased activation included the right middle occipital gyrus (BA37), left middle frontal gyrus (BA6), left inferior parietal lobule (BA40), left precentral gyrus (BA6), and left postcentral gyrus (BA2), while regions with decreased activation included the left superior temporal gyrus (BA38), left middle temporal gyrus (BA21), and right inferior frontal gyrus (BA10). When patients explicitly identified sad expressions (sad-expression minus neutral-expression contrast), regions with increased activation included the right middle frontal gyrus (BA6), left cingulate gyrus (BA31), right fusiform gyrus (BA20), and right postcentral gyrus (BA2); no region showed significantly stronger activation in healthy controls than in patients during sad-expression recognition. Conclusion: The neural basis of explicit recognition of dynamic facial expressions differs between patients with depression and healthy controls: patients recruit more brain regions when identifying emotional stimuli, and the increase in activity of emotion-related regions is especially marked during recognition of sadness.
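The contrasts above (happy minus neutral, sad minus neutral) were computed in SPM2, a MATLAB toolbox, and compared between matched groups with paired t-tests. Purely as a hedged illustration of that contrast logic, not of the authors' actual pipeline, a voxelwise paired comparison between two matched groups of contrast maps might be sketched in Python as follows (array shapes and data are hypothetical placeholders):

    import numpy as np
    from scipy import stats

    # Hypothetical per-subject contrast maps (e.g., happy minus neutral betas),
    # flattened to 1-D voxel vectors: shape = (n_subjects, n_voxels).
    rng = np.random.default_rng(0)
    patients = rng.normal(size=(12, 10000))   # 12 patients (placeholder data)
    controls = rng.normal(size=(12, 10000))   # 12 matched controls (placeholder data)

    # Voxelwise paired t-test across the 12 matched patient-control pairs.
    t_map, p_map = stats.ttest_rel(patients, controls, axis=0)

    # Voxels surviving an illustrative, uncorrected threshold.
    significant = p_map < 0.001
    print(f"{significant.sum()} voxels exceed the illustrative threshold")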

2.
Objective: To establish a Chinese facial expression picture library with graded emotional intensity as material for emotion research. Methods: Under the guidance of a professional director, 22 stage actors performed seven facial expressions: anger, disgust, fear, sadness, surprise, happiness, and calm. The performances were recorded with a high-speed camera, and screenshots were taken at multiple points according to the intensity of the performed emotion. After non-facial features were removed, 814 black-and-white photographs of identical size and grayscale were produced, with six intensity levels for each expression of each performer. A total of 112 healthy volunteers rated the emotional intensity, valence, and arousal of all pictures on a 0-100 scale. Results: The resulting Chinese graded-intensity facial expression picture library contains two sub-libraries: sub-library 1, graded-intensity expression pictures from different performers (436 pictures selected), and sub-library 2, pictures of the same performer at different emotional intensities (640 pictures selected). Each sub-library has corresponding intensity, valence, and arousal ratings, and reliability analysis showed high internal consistency (all Cronbach's α > 0.9). Conclusion: This study established a Chinese graded-intensity facial expression picture library with good recognizability and sound psychometric properties, which can serve as material for facial expression research in China.
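The internal consistency reported above is Cronbach's α computed over the volunteers' ratings. A minimal sketch of the computation, assuming a hypothetical (raters × pictures) matrix of intensity ratings (the matrix orientation and the data are assumptions, not the authors' procedure):

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (observations x items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                            # number of items
        item_var = scores.var(axis=0, ddof=1).sum()    # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Hypothetical example: 112 raters x 20 pictures, intensity rated 0-100
    # (random placeholder data; real ratings would be correlated, giving high alpha).
    rng = np.random.default_rng(1)
    ratings = rng.integers(0, 101, size=(112, 20)).astype(float)
    print(round(cronbach_alpha(ratings), 3))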

3.
Objective: To use event-related functional MRI to examine how healthy Han Chinese women recognize dynamic facial expressions and to explore the underlying neural basis. Methods: A 1.5T fMRI system recorded brain responses in 13 healthy female participants while they identified video clips of sad, happy, and neutral dynamic expressions. Image data were processed and statistically analyzed with SPM2 to obtain activation maps. Results: Compared with identifying a fixation cross, identifying neutral expressions activated the left middle frontal gyrus, bilateral precentral gyri, right amygdala, left inferior parietal lobule, right postcentral gyrus, and thalamus, among other regions. Compared with neutral expressions, identifying happy expressions activated the right medial frontal gyrus, right superior frontal gyrus, right middle frontal gyrus, right anterior cingulate gyrus, left subcallosal gyrus, right superior occipital gyrus, right inferior occipital gyrus, left middle occipital gyrus, and right superior temporal gyrus, whereas identifying sad expressions activated the left medial frontal gyrus, right middle frontal gyrus, left inferior temporal gyrus, and left superior temporal gyrus. Conclusion: Face processing and the recognition of dynamic expressions are governed by a distributed neural network; within it, the medial frontal gyrus participates in the processing of multiple emotions and may be a common pathway for emotion processing, while the superior temporal gyrus is mainly responsible for processing dynamic facial features.

4.
马婧  刘灵  彭思敏 《校园心理》2023,(4):258-262
Objective: To provide material for building and refining a picture library for recognizing children's expressions of shame. Methods: An open-ended questionnaire administered to 344 participants was used to summarize the characteristic nonverbal actions expressing shame in children; 67 undergraduates then rated the shame pictures on nine-point scales for emotion type, intensity, valence, and dominance. Results: (1) Seven shame action units reached an endorsement rate of at least 60%; (2) five valid shame pictures were obtained, and the three emotion dimensions were positively correlated with one another in pairs (r = 0.677, 0.695, 0.819; P < 0.01). Conclusion: "Lowered head, drooping eyelids, downward gaze, tightly pressed lips, an overall sorrowful facial expression, hunched shoulders and a sunken chest" constitutes a representative action coding system for children's nonverbal expression of shame.

5.
Objective: To establish a standardized virtual (computer-simulated) dynamic facial expression library of Chinese university students. Methods: A total of 240 clips of six dynamic expressions were collected and screened from 40 university students and then standardized. Twenty-three students recruited from four universities in Beijing rated the clips for expression type, valence, and arousal, and internal consistency and inter-rater reliability were examined. Results: The resulting library contains 9 anger, 1 disgust, 1 fear, 38 happiness, 19 sadness, and 39 surprise clips, each with a recognition agreement rate above 60%. Cronbach's α was 0.96 for valence and 0.76 for arousal; inter-rater reliability indicated consistent ratings of valence and arousal (W = 0.54 and 0.21, both P < 0.001). Conclusion: The virtual dynamic facial expression library of Chinese university students established in this study has good reliability and can serve as experimental material for virtual reality and emotion research.
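The inter-rater statistic W reported above is presumably Kendall's coefficient of concordance (an assumption; the abstract gives only the symbol). A minimal sketch of how W could be computed from a hypothetical (raters × clips) matrix of ratings, ignoring the correction for ties:

    import numpy as np
    from scipy import stats

    def kendalls_w(ratings: np.ndarray) -> float:
        """Kendall's W for an (m raters x n items) rating matrix, no tie correction."""
        m, n = ratings.shape
        ranks = np.apply_along_axis(stats.rankdata, 1, ratings)  # rank items per rater
        rank_sums = ranks.sum(axis=0)                             # rank sum per item
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()           # spread of rank sums
        return 12.0 * s / (m ** 2 * (n ** 3 - n))

    # Hypothetical example: 23 raters x 107 clips, valence on a 9-point scale
    # (random placeholder data).
    rng = np.random.default_rng(2)
    ratings = rng.integers(1, 10, size=(23, 107)).astype(float)
    print(round(kendalls_w(ratings), 3))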

6.
Objective: To compile a preliminary library of static Chinese facial expression photographs covering seven expressions (happiness, anger, surprise, fear, sadness, disgust, and neutral) as material for emotion research, and to explore methods for collecting genuine facial expressions. Methods: Healthy Chinese individuals aged 5-80 years with regular facial features were recruited, spanning preschool, school age, primary school, secondary school, early adulthood, middle adulthood, and late adulthood. Participants were informed of the underlying emotional meaning, intensity levels, and facial characteristics of each expression, and expressions were then elicited through interviews tied to each person's background and experiences. All expressions were photographed by the same person with a digital camera under identical conditions. A preliminary set of pictures of the seven basic expressions was produced; pictures passing an initial screening by the research group were processed uniformly into 10 cm × 15 cm pictures. Healthy undergraduates then performed a second screening, the pictures with high recognition agreement were assessed for test-retest reliability, and the Japanese female facial expression pictures were used as a reference for validity. Results: The first screening retained 80 pictures; 59 undergraduates retained 21 pictures in the second screening (3 per expression). Twenty-eight undergraduates completed the test-retest reliability, intensity, and validity assessments. The test-retest reliability of the 21 pictures was slightly better than that of the Japanese female facial expression pictures; apart from 2 anger, 3 fear, 1 sadness, and 3 disgust pictures with test-retest reliability below 70%, the remaining expressions showed high reliability. Validity was lower for the negative expressions (anger, fear, sadness, and disgust) and higher for the happy, neutral, and surprised pictures. Conclusion: The static Chinese facial expression pictures are representative and reasonably reliable and can serve as material for emotion research. The expression of facial emotion and the judgment of the underlying emotion are influenced by the other-race effect and cultural differences.

7.
Characteristics of facial expression recognition in preschool children with autism spectrum disorder
Objective: To assess the ability and characteristics of facial expression recognition of static faces in preschool children with autism. Methods: Using a self-made set of facial pictures showing seven expressions (happiness, surprise, fear, anger, disgust, sadness, and neutral), 13 boys with autism aged 4-8 years and 23 typically developing control boys aged 3-5 years were tested; the two groups were matched on developmental age (3.66 ± 0.44 years). Results: The two groups did not differ significantly in overall accuracy for identifying the seven expressions (P > 0.05). By expression category, the autism group was more accurate than controls in identifying sadness, fear, disgust, and surprise (P < 0.05). The autism group's accuracy ranked, from highest to lowest: happiness > sadness > anger > fear > disgust > surprise > neutral; for controls the order was happiness > anger > sadness > fear > neutral > disgust > surprise. Conclusion: Children with autism do not differ significantly from typically developing children in naming the seven facial expressions, but their pattern of recognizing basic facial expressions differs.

8.
Objective: To develop a picture library of spontaneous multiple expressions from the same faces, providing emotional stimulus material for facial expression research. Methods: Photographs of spontaneous multiple expressions were collected from 68 infants aged 3-12 months (31 boys, 37 girls) and 61 adults aged 18-27 years (27 men, 34 women); 310 healthy undergraduates were recruited to rate the pictures on recognition agreement, emotional intensity, attractiveness, valence, arousal, and dominance. Results: The resulting library contains 269 infant pictures (85 happy, 85 neutral, 99 sad) and 420 adult pictures (190 happy, 77 neutral, 153 sad), with at least one picture per expression for every infant or adult. Cronbach's α exceeded 0.87 for all six indices; cluster analysis showed statistically significant differences between face and expression categories (all P < 0.001). Conclusion: This library covers spontaneous multiple expressions from the same infant and adult faces; its indices were established following psychometric standards, and it can serve as standardized experimental material for facial expression research.
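The abstract above reports a cluster analysis over the rated pictures without specifying the algorithm; the following is only a hypothetical illustration of one way picture rating profiles could be clustered (the variables, cluster count, and method are assumptions, not the authors' procedure):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # Hypothetical data: one row per picture, columns = mean ratings on the six
    # indices (agreement, intensity, attractiveness, valence, arousal, dominance).
    rng = np.random.default_rng(3)
    happy = rng.normal(loc=[80, 60, 55, 70, 60, 55], scale=5, size=(50, 6))
    sad = rng.normal(loc=[78, 55, 45, 30, 55, 45], scale=5, size=(50, 6))
    profiles = np.vstack([happy, sad])

    # Agglomerative (Ward) clustering of the rating profiles into two clusters.
    tree = linkage(profiles, method="ward")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print(np.bincount(labels)[1:])   # number of pictures in each cluster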

9.
Objective: To compare the recognition of static versus dynamic facial expressions in children with autism. Methods: Forty-five participants took part in a 3 (group: ASD, ID, TD) × 2 (presentation mode: static, dynamic) × 3 (facial expression: happy, neutral, angry) repeated-measures experiment examining how children with autism, children with intellectual disability, and typically developing children recognize happy, neutral, and angry expressions presented statically or dynamically. Results: For accuracy, the group × presentation mode × expression interaction was significant (F = 2.02, P < 0.01); children with autism recognized expressions less accurately than both control groups under either presentation mode. For reaction time, the three-way interaction was also significant (F = 5.96, P < 0.01); children with autism responded more slowly than controls under either presentation mode. Conclusion: Children with autism recognize positive expressions better than negative ones, and static expressions better than dynamic ones.

10.
Objective: To use eye-tracking to compare the immediate processing of dynamically versus statically presented facial expressions. Methods: Seventeen participants completed a 2 (presentation mode: dynamic, static) × 3 (facial expression: happy, neutral, angry) within-subject design. An ASL504 eye tracker recorded eye movements while participants viewed facial expression pictures (10 per condition); indices included total fixation time, fixation percentage, number of fixation points, and number of gaze points. Results: Reaction time showed a main effect of facial expression (F(2,32) = 5.27, P = 0.011); post-hoc (LSD) comparisons showed longer reaction times for neutral than for happy and angry expressions (P < 0.05). The number of fixation points showed a main effect of presentation mode (F(1,16) = 14.34, P = 0.002), with fewer fixation points under dynamic than static presentation. The number of gaze points showed a main effect of facial expression (F(2,32) = 5.14, P = 0.012); post-hoc comparisons showed more gaze points for neutral than for happy and angry expressions (P < 0.05). Conclusion: Within the same viewing time, dynamically presented facial expressions are processed more efficiently, demonstrating a dynamic advantage in facial expression processing.
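The main effects above come from a 2 × 3 within-subject (repeated-measures) design with LSD post-hoc comparisons. As a minimal sketch of such an analysis, assuming a hypothetical long-format table with one mean fixation count per subject and condition (column names and data are invented for illustration), the omnibus ANOVA could be run with statsmodels:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical data: 17 subjects x 2 presentation modes x 3 expressions,
    # one mean fixation count per cell.
    rng = np.random.default_rng(4)
    rows = []
    for subj in range(1, 18):
        for mode in ("dynamic", "static"):
            for expr in ("happy", "neutral", "angry"):
                base = 6.5 if mode == "dynamic" else 8.0   # fewer fixations when dynamic
                rows.append({"subject": subj, "mode": mode, "expression": expr,
                             "fixations": base + rng.normal(0, 1)})
    data = pd.DataFrame(rows)

    # Two-way repeated-measures ANOVA with both factors within-subject.
    result = AnovaRM(data, depvar="fixations", subject="subject",
                     within=["mode", "expression"]).fit()
    print(result)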

11.
Startle reflex modulation by affective pictures is a well-established effect in human emotion research. However, much less is known about startle modulation by affective faces, despite the growing evidence that facial expressions robustly activate emotion-related brain circuits. In this study, acoustic startle probes were administered to 37 young adult participants (20 women) during the viewing of slides from the Pictures of Facial Affect set including neutral, happy, angry, and fearful faces. The effect of expression valence (happy, neutral, and negative) on startle magnitude was highly significant (p < .001). Startle reflex was strongly potentiated by negative expressions (fearful and angry); however, no attenuation by happy faces was observed. A significant valence by gender interaction suggests stronger startle potentiation effects in females. These results demonstrate that affective facial expressions can produce significant modulation of the startle reflex.

12.
Human vocalizations (HV), as well as environmental sounds, convey a wide range of information, including emotional expressions. The latter have been relatively rarely investigated, and, in particular, it is unclear if duration-controlled non-linguistic HV sequences can reliably convey both positive and negative emotional information. The aims of the present psychophysical study were: (i) to generate a battery of duration-controlled and acoustically controlled extreme valence stimuli, and (ii) to compare the emotional impact of HV with that of other environmental sounds. A set of 144 HV and other environmental sounds was selected to cover emotionally positive, negative, and neutral values. Sequences of 2 s duration were rated on Likert scales by 16 listeners along three emotional dimensions (arousal, intensity, and valence) and two non-emotional dimensions (confidence in identifying the sound source and perceived loudness). The 2 s stimuli were reliably perceived as emotionally positive, negative or neutral. We observed a linear relationship between intensity and arousal ratings and a "boomerang-shaped" intensity-valence distribution, as previously reported for longer, duration-variable stimuli. In addition, the emotional intensity ratings for HV were higher than for other environmental sounds, suggesting that HV constitute a characteristic class of emotional auditory stimuli. Moreover, emotionally positive HV were more readily identified than other sounds, and emotionally negative stimuli, irrespective of their source, were perceived as louder than their positive and neutral counterparts. In conclusion, HV are a distinct emotional category of environmental sounds and they retain this emotional pre-eminence even when presented for brief periods. Electronic Supplementary Material The online version of this article (doi:) contains supplementary material, which is available to authorized users.

13.
Gender differences in responses to emotional pictures
Objective: To examine gender differences among university students in the emotional responses elicited by the International Affective Picture System (IAPS). Methods: A total of 265 university students rated the valence, arousal, and dominance of 140 emotional pictures on 9-point scales. Results: Women rated positive pictures significantly higher than men on valence and dominance; for negative pictures, women's valence and dominance ratings were significantly lower than men's, while their arousal ratings were significantly higher; for neutral pictures, there were no significant gender differences on any of the three dimensions. Conclusion: Compared with men, women are more sensitive and respond more strongly to both positive and negative pictures, and this gender difference is especially pronounced in responses to negative pictures.
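The gender comparisons above are differences in mean 9-point ratings between male and female raters. The specific statistical test is not stated in the abstract; as a hedged illustration only, a simple independent-samples comparison of, say, valence ratings of negative pictures might be sketched as follows (all data are placeholders):

    import numpy as np
    from scipy import stats

    # Hypothetical mean valence ratings (1-9) of negative pictures, one value per rater.
    rng = np.random.default_rng(5)
    male_ratings = rng.normal(loc=3.2, scale=1.0, size=120).clip(1, 9)
    female_ratings = rng.normal(loc=2.7, scale=1.0, size=145).clip(1, 9)

    # Independent-samples t-test on the group means.
    t, p = stats.ttest_ind(female_ratings, male_ratings)
    print(f"women: {female_ratings.mean():.2f}, men: {male_ratings.mean():.2f}, "
          f"t = {t:.2f}, p = {p:.4f}")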

14.
Modulatory effects of emotion and sleep on recollection and familiarity
Growing evidence suggests that declarative memory benefits from the modulatory effects of emotion and sleep. The primary goal of the present study was to determine whether these two factors interact to enhance memory or act independently of each other. Twenty-eight volunteers participated in the study. Half of them were sleep deprived the night immediately following the exposure to emotional and non-emotional images, whereas the control group slept at home. Their memory for images was tested 1 week later along the valence and arousal dimensions of emotion with the remember–know procedure. As emotional events appear to gain preference during encoding, via the modulatory effect of amygdala on prefrontal and medial temporal lobe regions, conscious retrieval of emotional pictures (relative to neutral ones) was expected to be less disrupted by sleep loss. Results indicated that emotional images were more richly experienced in memory than neutral, particularly those with high arousal and positive valence. Even though sleep deprivation resulted in behavioral impairment at retrieval of both emotional and neutral images, results revealed that remember-based recognition accuracy and its underlying process of recollection for emotional images were less influenced by the lack of sleep (the mean difference between control and sleep-deprived subjects was around 40% higher for neutral images than for emotional images). Familiarity, however, was affected by neither emotion nor sleep. Taken together, these results suggest that emotion and sleep influence differentially the subjective experience of remembering and knowing and the underlying processes of recollection and familiarity through brain mechanisms probably involving amygdala- and hippocampo-neocortical networks respectively.

15.
《Biological psychology》2012,89(2-3):196-203
A major problem in recent neuroscience research on the processing of loved familiar faces is the absence of evidence concerning the elicitation of a genuine positive emotional response (love). These studies have two confounds: familiarity and arousal. The present investigation controlled for both factors in female university students. Two categories of loved faces were chosen: one with higher familiarity but lower emotionality (fathers) and the other with higher emotionality but lower familiarity (romantic partners). Unfamiliar and baby faces were used as control faces. Central and peripheral electrophysiological measures as well as subjective indices of valence, arousal, and dominance were recorded. Results support the conclusion that viewing loved familiar faces elicits an intense positive emotional reaction that cannot be explained either by familiarity or arousal. The differences between romantic and filial love appeared in the magnitude of some peripheral and subjective indices of emotionality (zygomatic activity, valence, arousal, and dominance), which were higher for images of the romantic partners, and one central index of familiarity (the P3 amplitude), which was higher for images of the fathers.

16.
A major problem in recent neuroscience research on the processing of loved familiar faces is the absence of evidence concerning the elicitation of a genuine positive emotional response (love). These studies have two confounds: familiarity and arousal. The present investigation controlled for both factors in female university students. Two categories of loved faces were chosen: one with higher familiarity but lower emotionality (fathers) and the other with higher emotionality but lower familiarity (romantic partners). Unfamiliar and baby faces were used as control faces. Central and peripheral electrophysiological measures as well as subjective indices of valence, arousal, and dominance were recorded. Results support the conclusion that viewing loved familiar faces elicits an intense positive emotional reaction that cannot be explained either by familiarity or arousal. The differences between romantic and filial love appeared in the magnitude of some peripheral and subjective indices of emotionality (zygomatic activity, valence, arousal, and dominance), which were higher for images of the romantic partners, and one central index of familiarity (the P3 amplitude), which was higher for images of the fathers.

17.
Startle modulation was investigated as participants first anticipated and then viewed affective pictures in order to determine whether affective modulation of the startle reflex is similar in these different task contexts. During a 6-s anticipation period, a neutral light cue signaled whether the upcoming picture would portray snakes, erotica, or household objects; at the end of the anticipatory period, a picture in the signaled category was viewed for 6 s. Male participants highly fearful of snakes were recruited to maximize emotional arousal during anticipation and perception. Results indicated that the startle reflex was potentiated when anticipating either unpleasant (phobic) or pleasant (erotic) pictures, compared to neutral stimuli, whereas during perception, reflexes were potentiated when viewing unpleasant stimuli, and reduced when viewing pleasant pictures. The startle reflex is modulated by hedonic valence in picture perception, and by emotional arousal in a task context involving picture anticipation.

18.
Developing measures of socioaffective processing is important for understanding the mechanisms underlying emotional-interpersonal traits relevant to health, such as hostility. In this study, cigarette smokers with low (LH; n = 49) and high (HH; n = 43) trait hostility completed the Emotional Interference Gender Identification Task (EIGIT), a newly developed behavioral measure of socioaffective processing biases toward facial affect. The EIGIT involves quickly categorizing the gender of facial pictures that vary in affective valence (angry, happy, neutral, sad). Results showed that participants were slower and less accurate in categorizing the gender of angry faces in comparison to happy, neutral, and sad faces (which did not differ), signifying interference indicative of a socioaffective processing bias toward angry faces. Compared to LH individuals, HH participants exhibited diminished biases toward angry faces on error-based (but not speed-based) measures of emotional interference, suggesting impaired socioaffective processing. The EIGIT may be useful for future research on the role of socioaffective processing in traits linked with poor health.

19.
Development of the Chinese Affective Picture System: a trial in 46 Chinese university students
Objective: To establish a native Chinese Affective Picture System (CAPS) to meet the needs of emotion research. Methods: 852 pictures were selected to form the CAPS; 46 Chinese university students rated the valence, arousal, and dominance of the CAPS pictures on 9-point self-report scales. Results: Arousal ratings showed the highest agreement, and the standard deviations of the valence and dominance ratings were larger than that of arousal. Scatter plots showed that the CAPS ratings were broadly distributed across valence and arousal. Conclusion: The International Affective Picture System (IAPS) has good cross-national applicability, but differences remain due to culture, personality, and other factors, so developing a native Chinese Affective Picture System is necessary.

20.
Objective: To trial the International Affective Picture System (IAPS), developed by the US NIMH Center for Emotion and Attention Research, in elderly Chinese people and to compare the results with the NIMH norms. Methods: 116 elderly people (51 men, 65 women) aged 60-80 years were recruited from senior activity centers in three communities in Dalian. They rated 60 pictures selected from the IAPS (23 negative, 12 neutral, 25 positive) for valence, arousal, and dominance on 9-point scales, and the results were compared with the NIMH norms. Results: The correlations between the elderly Chinese ratings and the NIMH norms on the three dimensions were 0.92, 0.54, and 0.88, respectively (all P < 0.001). The elderly Chinese participants rated arousal and dominance higher than the NIMH norms (5.33 ± 0.93 vs. 4.83 ± 1.25; 5.60 ± 1.20 vs. 5.19 ± 1.21, P < 0.001) and valence lower (4.99 ± 2.28 vs. 5.28 ± 1.85, P = 0.020). Elderly men and women had similar emotional responses to most pictures, but women's valence ratings were higher than men's (5.05 ± 2.33 vs. 4.93 ± 2.24, P = 0.002). In the two-dimensional valence-arousal affective space, the 60 pictures showed a "<"-shaped distribution. Valence and arousal were linearly correlated for positive pictures (r = 0.71, P < 0.001), but the correlation was not significant for negative pictures (r = -0.35, P > 0.05). Conclusion: The IAPS has good cross-national applicability, but because elderly Chinese people differ from the NIMH normative sample in culture, lifestyle, and age, their emotional responses to some pictures differ, suggesting that a locally adapted revision is needed before applying the IAPS to the elderly Chinese population.
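The agreement with the NIMH norms above is expressed as correlations of per-picture ratings across the 60 pictures, and the positive-picture valence-arousal relationship as a linear correlation. A minimal sketch, assuming hypothetical arrays of per-picture mean ratings and assuming Pearson's r (the abstract does not name the coefficient):

    import numpy as np
    from scipy import stats

    # Hypothetical per-picture mean valence ratings for the same 60 IAPS pictures:
    # one array from the elderly Chinese sample, one from the NIMH norms.
    rng = np.random.default_rng(6)
    nimh_valence = rng.uniform(1, 9, size=60)
    chinese_valence = nimh_valence + rng.normal(0, 0.8, size=60)

    # Pearson correlation between the two sets of picture means.
    r, p = stats.pearsonr(chinese_valence, nimh_valence)
    print(f"r = {r:.2f}, p = {p:.4g}")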
