Similar Articles
20 similar articles found.
1.
Social Neuroscience, 2013, 8(2): 101-120
Abstract

Many studies provide support for the role of the fusiform gyrus in face recognition and its sensitivity to emotional expressions. Recently, category-specific representation was also observed for neutral human bodies in the middle temporal/middle occipital gyrus (extrastriate body area), but it is not clear whether this area is also sensitive to emotional bodily expressions. Besides these areas, other regions that process the affective information carried by the face and the body may be common and/or specific to the face or the body. To clarify these issues we performed a systematic comparison of how the whole brain processes faces and bodies and how their affective information is represented. Participants categorized emotional facial and bodily expressions while brain activity was measured using functional magnetic resonance imaging. Our results show that, first, the amygdala and the fusiform gyrus are sensitive to recognition of facial and bodily fear signals. Secondly, the extrastriate body area–area V5/MT is specifically involved in processing bodies without being sensitive to the emotion displayed. Thirdly, other important areas such as the superior temporal sulcus, the parietal lobe and subcortical structures selectively represent facial and bodily expressions. Finally, some face/body differences in activation are a function of the emotion expressed.

2.
Brain imaging studies in humans have shown that face processing in several areas is modulated by the affective significance of faces, particularly with fearful expressions, but also with other social signals such as gaze direction. Here we review haemodynamic and electrical neuroimaging results indicating that activity in the face-selective fusiform cortex may be enhanced by emotional (fearful) expressions, without explicit voluntary control, and presumably through direct feedback connections from the amygdala. fMRI studies show that these increased responses in fusiform cortex to fearful faces are abolished by amygdala damage in the ipsilateral hemisphere, despite preserved effects of voluntary attention on fusiform cortex; whereas emotional increases can still arise despite deficits in attention or awareness following parietal damage, and appear relatively unaffected by pharmacological increases in cholinergic stimulation. Fear-related modulations of face processing driven by amygdala signals may implicate not only fusiform cortex, but also earlier visual areas in occipital cortex (e.g., V1) and other distant regions involved in social, cognitive, or somatic responses (e.g., superior temporal sulcus, cingulate, or parietal areas). In the temporal domain, evoked potentials show a widespread time-course of emotional face perception, with some increases in the amplitude of responses recorded over both occipital and frontal regions for fearful relative to neutral faces (as well as in the amygdala and orbitofrontal cortex, when using intracranial recordings), but with different latencies post-stimulus onset. Early emotional responses may arise around 120 ms, prior to a full visual categorization stage indexed by the face-selective N170 component, possibly reflecting rapid emotion processing based on crude visual cues in faces. Other electrical components arise at later latencies and involve more sustained activities, probably generated in associative or supramodal brain areas, and resulting in part from the modulatory signals received from the amygdala. Altogether, these fMRI and ERP results demonstrate that emotional face perception is a complex process that cannot be related to a single neural event taking place in a single brain region, but rather implicates an interactive network with distributed activity in time and space. Moreover, although traditional models in cognitive neuropsychology have often considered that facial expression and facial identity are processed along two separate pathways, evidence from fMRI and ERPs suggests instead that emotional processing can strongly affect brain systems responsible for face recognition and memory. The functional implications of these interactions remain to be fully explored, but might play an important role in the normal development of face processing skills and in some neuropsychiatric disorders.

3.
4.
Whether a single perceptual process or separate and possibly independent processes support facial identity and expression recognition is unclear. We used a morphed-face discrimination test to examine sensitivity to facial expression and identity information in patients with occipital or temporal lobe damage, and structural and functional MRI to correlate behavioral deficits with damage to the core regions of the face-processing network. We found selective impairments of identity perception in two patients with right inferotemporal lesions and two prosopagnosic patients with damage limited to the anterior temporal lobes. Of these four patients, one exhibited damage to the right fusiform and occipital face areas, while the remaining three showed sparing of these regions. Thus, impaired identity perception can occur with damage not only to the fusiform and occipital face areas, but also to other medial occipitotemporal structures that likely form part of a face-recognition network. Impaired expression perception was seen in the fifth patient, with damage affecting the face-related portion of the posterior superior temporal sulcus. This subject also had difficulty discriminating identity when irrelevant variations in expression needed to be discounted. These neuropsychological and neuroimaging data provide evidence to complement models that address the separation of expression and identity perception within the face-processing network.

5.
A parallel neural network has been proposed for processing the various types of information conveyed by faces, including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of explicit attention to the emotional expression of faces on the neuronal activity of face-responsive regions. A delayed match-to-sample procedure was adopted. Subjects were required to match visually presented pictures with regard to the contour of the face pictures, facial identity, and emotional expression by valence (happy and fearful expressions) and arousal (fearful and sad expressions). Contour matching of non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activities of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.

6.
Recognition of emotional facial expressions is universal for all humans, but signed language users must also recognize certain non-affective facial expressions as linguistic markers. fMRI was used to investigate the neural systems underlying recognition of these functionally distinct expressions, comparing deaf ASL signers and hearing nonsigners. Within the superior temporal sulcus (STS), activation for emotional expressions was right lateralized for the hearing group and bilateral for the deaf group. In contrast, activation within STS for linguistic facial expressions was left lateralized only for signers and only when linguistic facial expressions co-occurred with verbs. Within the fusiform gyrus (FG), activation was left lateralized for ASL signers for both expression types, whereas activation was bilateral for both expression types for nonsigners. We propose that left lateralization in FG may be due to continuous analysis of local facial features during on-line sign language processing. The results indicate that function in part drives the lateralization of neural systems that process human facial expressions.

7.
OBJECTIVE: To test the hypothesis that fear recognition deficits in neurologic patients reflect damage to an emotion-specific neural network. BACKGROUND: Previous studies have suggested that the perception of fear in facial expressions is mediated by a specialized neural system that includes the amygdala and certain posterior right-hemisphere cortical regions. However, the neuropsychological findings in patients with amygdala damage are inconclusive, and the contribution of distinct cortical regions to fear perception has only been examined in one study. METHODS: We studied the recognition of six basic facial expressions by asking subjects to match these emotions with the appropriate verbal labels. RESULTS: Both normal control subjects (n = 80) and patients with focal brain damage (n = 63) performed significantly worse in recognizing fear than in recognizing any other facial emotion, with errors consisting primarily of mistaking fear for surprise. Although patients were impaired relative to control subjects in recognizing fear, we could not obtain convincing evidence that left, right, or bilateral lesions were associated with disproportionate impairments of fear perception once we adjusted for differences in overall recognition performance for the other five facial emotion categories. The proposed special role of the amygdala and posterior right-hemisphere cortical regions in fear perception was also not supported. CONCLUSIONS: Fear recognition deficits in neurologic patients may be attributable to task difficulty factors rather than damage to putative neural systems dedicated to fear perception.

8.
In this study, we describe a 58-year-old male patient (FZ) with a right amygdala lesion after temporal lobe infarction. FZ is unable to recognize fearful facial expressions. Instead, he consistently misinterprets expressions of fear as expressions of surprise. Employing EEG/ERP measures, we investigated whether presentation of fearful and surprised facial expressions would lead to different response patterns. We also measured ERPs to aversively conditioned and unconditioned fearful faces.

We compared ERPs elicited by supraliminally and subliminally presented conditioned fearful faces (CS+), unconditioned fearful faces (CS–) and surprised faces. Despite FZ's inability to recognize fearful facial expressions in emotion recognition tasks, ERP components showed different response patterns to pictures of surprised and fearful facial expressions, indicating that covert or implicit recognition of fear is still intact.

Differences between ERPs to CS+ and CS– were found only when these stimuli were presented subliminally. This indicates that intact right amygdala function is not necessary for aversive conditioning.

Previous studies have stressed the importance of the right amygdala for discriminating facial emotional expressions and for classical conditioning. Our study suggests that the right amygdala is necessary for explicit recognition of fear, while implicit recognition of fear and classical conditioning may still occur following a lesion of the right amygdala.
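
For readers unfamiliar with the ERP comparison logic used in this study, the following minimal sketch illustrates the general approach: epoched EEG trials are averaged within each condition and a CS+ minus CS– difference wave is inspected. This is an illustrative assumption, not the authors' EEG pipeline; the sampling rate, epoch window, and synthetic data are invented for the demo.

# Minimal sketch of the ERP comparison logic: average epoched EEG trials
# within each condition, then inspect the difference wave. Sampling rate,
# epoch length, and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
FS = 250                          # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / FS)  # epoch: 200 ms pre- to 800 ms post-stimulus

def synthetic_trials(n_trials, amp):
    """Noisy trials with a condition-dependent peak around 170 ms."""
    component = amp * np.exp(-((t - 0.17) ** 2) / (2 * 0.03 ** 2))
    return component + rng.normal(0, 2.0, (n_trials, t.size))

epochs = {                        # trials x time, one array per condition
    "CS+": synthetic_trials(60, amp=5.0),
    "CS-": synthetic_trials(60, amp=3.0),
    "surprise": synthetic_trials(60, amp=3.5),
}

# Condition-averaged ERPs: averaging cancels noise that is not
# time-locked to stimulus onset, leaving the evoked response.
erps = {name: ep.mean(axis=0) for name, ep in epochs.items()}

# Difference wave indexing the conditioning effect (CS+ minus CS-).
diff = erps["CS+"] - erps["CS-"]
peak_idx = np.argmax(np.abs(diff))
print(f"largest CS+/CS- difference at {t[peak_idx] * 1000:.0f} ms: "
      f"{diff[peak_idx]:.2f} microvolts")

The same averaging step underlies both the supraliminal and subliminal comparisons described above; only the trial sets entering each average change.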

9.
Faces are multidimensional stimuli that convey information for complex social and emotional functions. Separate neural systems have been implicated in the recognition of facial identity (mainly extrastriate visual cortex) and emotional expression (limbic areas and the superior temporal sulcus). Working-memory (WM) studies with faces have shown different but partly overlapping activation patterns in comparison to spatial WM in parietal and prefrontal areas. However, little is known about the neural representations of the different facial dimensions during WM. In the present study, 22 subjects performed a face-identity or face-emotion WM task at different load levels during functional magnetic resonance imaging. We found a fronto-parietal-visual WM network for both tasks during maintenance, including the fusiform gyrus. Limbic areas in the amygdala and parahippocampal gyrus demonstrated stronger activation for the identity than for the emotion condition. One explanation for this finding is that the repetitive presentation of faces with different identities but the same emotional expression during the identity task is responsible for the stronger increase in BOLD signal in the amygdala. These results raise the question of how different emotional expressions are coded in WM. Our findings suggest that emotional expressions are re-coded into an abstract representation that is supported at the neural level by the canonical fronto-parietal WM network.

10.
Social Phobia (SP) is a marked and persistent fear of social or performance situations in which the person is exposed to unfamiliar people or to possible scrutiny by others. Faces of others are perceived as threatening by social phobic patients (SPP). To investigate how face processing is altered in the distributed neural system for face perception in Social Phobia, we designed an event-related fMRI study in which healthy controls (HC) and SPP were presented with angry, fearful, disgusted, happy, and neutral faces and scrambled pictures (visual baseline). Compared with HC, SPP showed increased neural activity not only in regions involved in emotional processing, including the left amygdala and insula, as expected from previous reports, but also in the bilateral superior temporal sulcus (STS), a part of the core system for face perception that is involved in the evaluation of expression and personal traits. In addition, SPP showed significantly weaker activation in the left fusiform gyrus, left dorsolateral prefrontal cortex, and bilateral intraparietal sulcus compared with HC. These effects were found not only in response to emotional faces but also to neutral faces as compared with scrambled pictures. Thus, SPP showed enhanced activity in brain areas related to the processing of information about emotional expression and personality traits. In contrast, brain activity was decreased in areas for attention and for processing other information from the face, perhaps as a result of a feeling of wariness. These results indicate a differential modulation of neural activity throughout the different parts of the distributed neural system for face perception in SPP as compared with HC.

11.
EMPATH: a neural network that categorizes facial expressions (cited 2 times: 0 self-citations, 2 by others)
There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
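
The kind of model this abstract describes lends itself to a compact illustration. Below is a minimal sketch, not the authors' EMPATH implementation: a fixed nonlinear "perceptual" feature stage followed by a softmax readout trained to assign faces to the six basic emotions. The input size, feature count, random-projection features, synthetic data, and training loop are all illustrative assumptions.

# Minimal sketch (not the authors' EMPATH code): a fixed nonlinear feature
# layer plus a trained softmax readout over six basic-emotion categories.
# All sizes and the synthetic training data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_PIXELS, N_FEATURES, N_EMOTIONS = 64 * 64, 200, 6   # assumed sizes
EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

# Fixed random projection standing in for an untrained perceptual layer
# (e.g., Gabor-like filters); it is not learned.
W_perc = rng.normal(0.0, 1.0 / np.sqrt(N_PIXELS), (N_FEATURES, N_PIXELS))

def features(x):
    """Nonlinear feature code for a flattened grayscale face image x."""
    return np.tanh(W_perc @ x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Synthetic stand-in data: one noisy prototype image per emotion.
prototypes = rng.normal(size=(N_EMOTIONS, N_PIXELS))
def sample(label):
    return prototypes[label] + 0.3 * rng.normal(size=N_PIXELS)

# Train the softmax readout by gradient descent on the cross-entropy loss.
W_out = np.zeros((N_EMOTIONS, N_FEATURES))
for step in range(2000):
    y = int(rng.integers(N_EMOTIONS))
    f = features(sample(y))
    p = softmax(W_out @ f)
    grad = p.copy()
    grad[y] -= 1.0                     # dL/dz for softmax + cross-entropy
    W_out -= 0.05 * np.outer(grad, f)

# Graded output: a probability over categories, not just a hard label.
p = softmax(W_out @ features(sample(2)))
print("predicted:", EMOTIONS[int(np.argmax(p))], "p =", np.round(p, 2))

Because the readout produces a graded probability vector, a model of this form can speak to both theories at once: the argmax yields a discrete category judgment, while the vector itself is a point in a continuous, low-dimensional emotion space.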

12.
BACKGROUND: Considerable literature has focused on neural responses evoked by face viewing. We extend that literature and explore the neural correlates of maternal attachment with an fMRI study in which mothers view photographs of their own children. METHOD: Seven mothers performed a one-back repetition detection task while viewing photographs of their own child, friends of their child, unfamiliar children, and unfamiliar adults. RESULTS: Viewing one's own child versus a familiar child was associated with activation in the amygdala, insula, anterior paracingulate cortex, and posterior superior temporal sulcus (STS). Viewing familiar versus unfamiliar children elicited increased activation in regions associated with familiarity in adults. Viewing unfamiliar children versus unfamiliar adults was associated with activation in the fusiform gyrus, intraparietal sulcus, precuneus, and posterior STS. CONCLUSIONS: The sight of one's own child versus that of a familiar child activates regions that mediate emotional responses (amygdala, insula) and are associated with theory of mind functions (anterior paracingulate cortex, posterior superior temporal sulcus). These activations may reflect the intense attachment, vigilant protectiveness, and empathy that characterize normal maternal attachment. The sight of an unfamiliar child's face compared with that of an unfamiliar adult engages areas associated with attention as well as face perception.

13.
14.
Neural substrates of facial emotion processing using fMRI (cited 9 times: 0 self-citations, 9 by others)
We identified human brain regions involved in the perception of sad, frightened, happy, angry, and neutral facial expressions using functional magnetic resonance imaging (fMRI). Twenty-one healthy right-handed adult volunteers (11 men, 10 women; aged 18-45; mean age 21.6 years) participated in four separate runs, one for each of the four emotions. Participants viewed blocks of emotionally expressive faces alternating with blocks of neutral faces and scrambled images. In comparison with scrambled images, neutral faces activated the fusiform gyri, the right lateral occipital gyrus, the right superior temporal sulcus, the inferior frontal gyri, and the amygdala/entorhinal cortex. In comparisons of emotional and neutral faces, we found that (1) emotional faces elicit increased activation in a subset of cortical regions involved in neutral face processing and in areas not activated by neutral faces; (2) differences in activation as a function of emotion category were most evident in the frontal lobes; and (3) men showed a differential neural response depending upon the emotion expressed, but women did not.
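
To make the block-design contrast logic concrete, here is a minimal general-linear-model sketch for a single voxel. It is a hedged illustration, not the authors' analysis pipeline: the block timings, the synthetic signal, and the omission of HRF convolution and drift regressors are simplifying assumptions.

# Minimal sketch of a block-design GLM contrast (emotional vs. neutral
# faces) for one voxel. Timings and the synthetic signal are invented.
import numpy as np

rng = np.random.default_rng(1)
n_scans, block_len = 120, 10            # assumed: 10-scan blocks

# Boxcar regressors: alternating emotional / neutral / scrambled blocks.
conditions = np.tile(np.repeat([0, 1, 2], block_len),
                     n_scans // (3 * block_len))
X = np.column_stack([
    (conditions == 0).astype(float),    # emotional faces
    (conditions == 1).astype(float),    # neutral faces
    np.ones(n_scans),                   # baseline / intercept
])

# Synthetic voxel time series: responds more to emotional than neutral.
y = X @ np.array([2.0, 1.0, 100.0]) + rng.normal(0, 1.0, n_scans)

# Ordinary least-squares fit and the emotional-minus-neutral contrast.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([1.0, -1.0, 0.0])
residual = y - X @ beta
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = residual @ residual / dof
t_stat = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
print(f"contrast estimate {c @ beta:.2f}, t = {t_stat:.2f}")

In a whole-brain analysis this fit is repeated at every voxel, and the resulting statistic map is thresholded to yield activation clusters like those reported above.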

15.
A widely adopted neural model of face perception (Haxby, Hoffman, & Gobbini, 2000) proposes that the posterior superior temporal sulcus (STS) represents the changeable features of a face, while the face-responsive fusiform face area (FFA) encodes invariant aspects of facial structure. ‘Changeable features’ of a face can include rigid and non-rigid movements. The current study investigated neural responses to rigid, moving faces displaying shifts in social attention. Both functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) were used to investigate neural responses elicited when participants viewed video clips in which actors made a rigid shift of attention, signalled congruently by both the eyes and head. These responses were compared to those elicited by viewing static faces displaying stationary social attention information or a scrambled video displaying directional motion. Both the fMRI and MEG analyses demonstrated heightened responses along the STS to turning heads compared with static faces or scrambled-movement conditions. The FFA responded to both turning heads and static faces, showing only a slight increase in response to the dynamic stimuli. These results establish the applicability of the Haxby model to the perception of rigid face motions expressing changes in social attention direction. Furthermore, the MEG beamforming analyses found an STS response in an upper frequency band (30-80 Hz) that peaked in the right anterior region. These findings, derived from two complementary neuroimaging techniques, clarify the contribution of the STS during the encoding of rigid facial action patterns of social attention, emphasising the role of anterior sulcal regions alongside previously observed posterior areas.

16.
The human amygdala responds selectively to consciously and unconsciously perceived facial expressions, particularly those that convey potential threat such as fear and anger. In many social situations, multiple faces with varying expressions confront observers, yet little is known about the neural mechanisms involved in encoding several faces simultaneously. Here we used event-related fMRI to measure neural activity in pre-defined regions of interest as participants searched multi-face arrays for a designated target expression (fearful or happy). We conducted separate analyses to examine activations associated with each of the four multi-face arrays independent of target expression (stimulus-driven effects), and activations arising from the search for each of the target expressions, independent of the display type (strategic effects). Comparisons across display types, reflecting stimulus-driven influences on visual search, revealed activity in the amygdala and superior temporal sulcus (STS). By contrast, strategic demands of the task did not modulate activity in either the amygdala or STS. These results imply an interactive threat-detection system involving several neural regions. Crucially, activity in the amygdala increased significantly when participants correctly detected the target expression, compared with trials in which the identical target was missed, suggesting that the amygdala has a limited capacity for extracting affective facial expressions.

17.
BACKGROUND: Individuals with social phobia (SP) have altered behavioral and neural responses to emotional faces and are hypothesized to have deficits in inhibiting emotion-related amygdala responses. We tested for such amygdala deficits to emotional faces in a sample of individuals with SP. METHOD: We used functional magnetic resonance imaging (fMRI) to examine the neural substrates of emotional face processing in 14 generalized SP (gSP) and 14 healthy comparison (HC) participants. Analyses focused on the temporal dynamics of the amygdala, prefrontal cortex (PFC), and fusiform face area (FFA) across blocks of neutral, fear, contempt, anger, and happy faces in gSP versus HC participants. RESULTS: Amygdala responses to fear, angry, and happy faces occurred later in participants with gSP than in HC participants. Parallel PFC responses were found for happy and fear faces. There were no group differences in temporal response patterns in the FFA. CONCLUSIONS: This finding might reflect a neural correlate of atypical orienting responses among individuals with gSP. Commonly reported SP deficits in habituation might reflect neural regions associated with emotional self-evaluations rather than the amygdala. This study highlights the importance of considering time-varying modulation when examining emotion-related processing in individuals with gSP.

18.
Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

19.
The neural basis of face processing has been extensively studied in the past two decades. The current dominant neural model, proposed by Haxby et al. (2000) and Gobbini and Haxby (2007), suggests a division of labor between the fusiform face area (FFA), which processes invariant facial aspects, such as identity, and the posterior superior temporal sulcus (pSTS), which processes changeable facial aspects, such as expression. An extension of this model to the processing of dynamic faces, proposed by O’Toole et al. (2002), highlights the role of the pSTS in the processing of identity from dynamic familiar faces. To evaluate these models, we reviewed recent neuroimaging studies that examined the processing of identity and expression with static and dynamic faces. Based on the accumulated data, we propose an updated model emphasizing the dissociation between form and motion as the primary functional division, carried by a ventral stream through the FFA and a dorsal stream through the STS, respectively. We also encourage future studies to expand their research to the processing of dynamic faces.

20.
Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice and face information is combined only at a supramodal stage (Bruce and Young, 1986; Burton et al., 1990; Ellis et al., 1997). An alternative model posits that areas encoding voice and face information also interact directly and that this direct interaction is behaviorally relevant for optimizing person recognition (von Kriegstein et al., 2005; von Kriegstein and Giraud, 2006). To distinguish between the two models, we tested for evidence of direct structural connections between voice- and face-processing cortical areas by combining functional and diffusion magnetic resonance imaging. We localized, at the individual-subject level, three voice-sensitive areas in the anterior, middle, and posterior superior temporal sulcus (STS), and face-sensitive areas in the fusiform gyrus [fusiform face area (FFA)]. Using probabilistic tractography, we show evidence that the FFA is structurally connected with voice-sensitive areas in the STS. In particular, our results suggest that the FFA is more strongly connected to middle and anterior than to posterior areas of the voice-sensitive STS. This specific structural connectivity pattern indicates that direct links between face- and voice-recognition areas could be used to optimize human person recognition.
