Similar Articles
20 similar articles found (search time: 31 ms)
1.
We investigated the neural correlates of facial processing changes in healthy aging using fMRI and an adaptation paradigm. In the scanner, participants were successively presented with faces that varied in identity, viewpoint, both, or neither, and performed a head-size detection task independent of identity or viewpoint. In the right fusiform face area (FFA), older adults failed to show adaptation to the same face repeatedly presented in the same view, the condition that elicited the most adaptation in young adults. We also performed a multivariate analysis to examine correlations between whole-brain activation patterns and behavioral performance in a face-matching task tested outside the scanner. Despite poor neural adaptation in the right FFA, high-performing older adults engaged the same face-processing network as high-performing young adults across conditions, except the condition presenting the same facial identity across different viewpoints. Low-performing older adults used this network to a lesser extent. Additionally, high-performing older adults uniquely recruited a set of areas related to better performance across all conditions, indicating age-specific involvement of this added network. This network did not include the core ventral face-processing areas but involved the left inferior occipital gyrus and frontal and parietal regions. Although our adaptation results show that the neuronal representations of the core face-preferring areas become less selective with age, our multivariate analysis indicates that older adults utilize a distinct network of regions associated with better face-matching performance, suggesting that engaging this network may compensate for deficiencies in ventral face-processing regions.
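The adaptation effect described above is commonly quantified as the proportional reduction in a region's response to repeated relative to novel stimuli. The sketch below is a hypothetical illustration of that computation, not the study's actual analysis pipeline; the function name and example beta values are invented.

```python
import numpy as np

def adaptation_index(novel_betas, repeated_betas):
    """Proportional response reduction for repeated vs. novel stimuli.

    A common (hypothetical here) formulation: positive values indicate
    adaptation, values near zero indicate none -- as reported for the
    right FFA in older adults.
    """
    novel = np.mean(novel_betas)
    repeated = np.mean(repeated_betas)
    return (novel - repeated) / novel

# Toy example: a region responding 1.2 (a.u.) to novel faces, 0.9 to repeats
idx = adaptation_index([1.1, 1.3, 1.2], [0.9, 0.85, 0.95])
```

With these toy values the index is 0.25, i.e., a 25% response reduction; an older adult's right FFA in this study would be expected to yield an index near zero.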

2.
A parallel neural network has been proposed for processing the various types of information conveyed by faces, including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of explicit attention to the emotional expression of faces on the neuronal activity of face-responsive regions. A delayed match-to-sample procedure was adopted. Subjects were required to match visually presented pictures with regard to the contour of the face picture, facial identity, or emotional expression, the latter by valence (happy and fearful expressions) or arousal (fearful and sad expressions). Contour matching of non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activity of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.

3.
Abnormal activation of the social brain during face perception in autism
Autism spectrum disorder (ASD) involves a fundamental impairment in processing social-communicative information from faces. Several recent studies have challenged earlier findings that individuals with ASD show no activation of the fusiform gyrus (fusiform face area, FFA) when viewing faces. In this study, we examined activation to faces in the broader network of face-processing modules that comprise what is known as the social brain. Using 3T functional magnetic resonance imaging, we measured BOLD signal changes in 10 ASD subjects and 7 healthy controls passively viewing nonemotional faces. We replicated our original findings of significant activation of face identity-processing areas (FFA and inferior occipital gyrus, IOG) in ASD. However, in addition, we identified hypoactivation in a more widely distributed network of brain areas involved in face processing [including the right amygdala, inferior frontal cortex (IFC), superior temporal sulcus (STS), and face-related somatosensory and premotor cortex]. In ASD, we found functional correlations between a subgroup of areas in the social brain that belong to the mirror neuron system (IFC, STS) and other face-processing areas. The severity of the social symptoms measured by the Autism Diagnostic Observation Schedule was correlated with right IFC cortical thickness and with functional activation in that area. When viewing faces, adults with ASD show atypical patterns of activation in regions forming the broader face-processing network and social brain, outside the core FFA and IOG regions. These patterns suggest that areas belonging to the mirror neuron system are involved in the face-processing disturbances in ASD.

4.
Brain imaging studies in humans have shown that face processing in several areas is modulated by the affective significance of faces, particularly fearful expressions, but also by other social signals such as gaze direction. Here we review haemodynamic and electrical neuroimaging results indicating that activity in the face-selective fusiform cortex may be enhanced by emotional (fearful) expressions, without explicit voluntary control, and presumably through direct feedback connections from the amygdala. fMRI studies show that these increased responses in fusiform cortex to fearful faces are abolished by amygdala damage in the ipsilateral hemisphere, despite preserved effects of voluntary attention on the fusiform; whereas emotional increases can still arise despite deficits in attention or awareness following parietal damage, and appear relatively unaffected by pharmacological increases in cholinergic stimulation. Fear-related modulations of face processing driven by amygdala signals may implicate not only the fusiform cortex, but also earlier visual areas in occipital cortex (e.g., V1) and other distant regions involved in social, cognitive, or somatic responses (e.g., superior temporal sulcus, cingulate, or parietal areas). In the temporal domain, evoked potentials show a widespread time-course of emotional face perception, with increases in the amplitude of responses recorded over both occipital and frontal regions for fearful relative to neutral faces (as well as in the amygdala and orbitofrontal cortex when using intracranial recordings), but with different latencies post-stimulus onset. Early emotional responses may arise around 120 ms, prior to the full visual categorization stage indexed by the face-selective N170 component, possibly reflecting rapid emotion processing based on crude visual cues in faces.
Other electrical components arise at later latencies and involve more sustained activity, probably generated in associative or supramodal brain areas and resulting in part from modulatory signals received from the amygdala. Altogether, these fMRI and ERP results demonstrate that emotional face perception is a complex process that cannot be reduced to a single neural event taking place in a single brain region, but rather implicates an interactive network with activity distributed in time and space. Moreover, although traditional models in cognitive neuropsychology have often held that facial expression and facial identity are processed along two separate pathways, evidence from fMRI and ERPs suggests instead that emotional processing can strongly affect the brain systems responsible for face recognition and memory. The functional implications of these interactions remain to be fully explored, but they might play an important role in the normal development of face-processing skills and in some neuropsychiatric disorders.

5.
Within the neural face-processing network, the right occipital face area (rOFA) plays a prominent role, and it has been suggested that it receives both feed-forward and re-entrant feedback from other face sensitive areas. Its functional role is less well understood and whether the rOFA is involved in the initial analysis of a face stimulus or in the detailed integration of different face properties remains an open question. The present study investigated the functional role of the rOFA with regard to different face properties (identity, expression, and gaze) using transcranial magnetic stimulation (TMS). Experiment 1 showed that the rOFA integrates information across different face properties: performance for the combined processing of identity and expression decreased after TMS to the rOFA, while no impairment was seen in gaze processing. In Experiment 2 we examined the temporal dynamics of this effect. We pinpointed the impaired integrative computation to 170 ms post stimulus presentation. Together the results suggest that TMS to the rOFA affects the integrative processing of facial identity and expression at a mid-latency processing stage.

6.
We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a central role in facial discrimination and identification. However, whether the FFA (fusiform face area) is truly a specialized area for facial processing remains controversial; some researchers argue that the FFA is related to 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are closely relevant to this issue. Second, the amygdala appears to be deeply involved in the recognition of facial expressions, especially fear. The amygdala, which is connected with the superior temporal sulcus and the orbitofrontal cortex, appears to coordinate these cortical functions. The amygdala and the superior temporal sulcus are also related to gaze recognition, which may explain why a patient with bilateral amygdala damage failed to recognize only fearful expressions: information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may also underlie the covert recognition that prosopagnosic patients retain.

7.
More than a decade of research has demonstrated that faces evoke prioritized processing in a ‘core face network’ of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. We show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog), while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.
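A representational similarity space of the kind described above is typically built from pairwise pattern dissimilarities, often 1 − Pearson correlation between voxel response patterns. The sketch below illustrates that generic computation with invented toy patterns; the condition names and values are illustrative, not the study's data.

```python
import numpy as np

def representational_dissimilarity(patterns):
    """Build a representational dissimilarity matrix (RDM).

    patterns: dict mapping condition name -> 1-D voxel response pattern.
    Dissimilarity = 1 - Pearson r, a common choice in multivariate
    pattern analysis (real pipelines usually cross-validate across runs).
    """
    names = list(patterns)
    n = len(names)
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            r = np.corrcoef(patterns[names[i]], patterns[names[j]])[0, 1]
            rdm[i, j] = 1.0 - r
    return names, rdm

# Hypothetical voxel patterns for three of the four face categories
names, rdm = representational_dissimilarity({
    "human":     [1.0, 2.0, 3.0, 4.0],
    "mannequin": [1.0, 2.0, 3.0, 5.0],   # similar form to "human"
    "dog":       [4.0, 3.0, 2.0, 1.0],   # different global form
})
```

In a form-organized region (like the inferior occipital gyrus in this study), human and mannequin patterns would sit close together in this space while dog patterns sit far away, as the toy RDM shows.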

8.
Face perception is mediated by a distributed cortical network
The neural system associated with face perception in the human brain was investigated using functional magnetic resonance imaging (fMRI). In contrast to many studies that focused on discrete face-responsive regions, the objective of the current study was to demonstrate that regardless of stimulus format, emotional valence, or task demands, face perception evokes activation in a distributed cortical network. Subjects viewed various stimuli (line drawings of unfamiliar faces and photographs of unfamiliar, famous, and emotional faces) and their phase-scrambled versions. A network of face-responsive regions was identified that included the inferior occipital gyrus, fusiform gyrus, superior temporal sulcus, hippocampus, amygdala, inferior frontal gyrus, and orbitofrontal cortex. Although bilateral activation was found in all regions, the response in the right hemisphere was stronger. This hemispheric asymmetry was manifested by larger and more significant clusters of activation and a larger number of subjects who showed the effect. A region-of-interest analysis revealed that while all face stimuli evoked activation within all regions, viewing famous and emotional faces resulted in larger spatial extents of activation and higher amplitudes of the fMRI signal. These results indicate that a mere percept of a face is sufficient to localize activation within the distributed cortical network that mediates the visual analysis of facial identity and expression.

9.
The N170 waveform is larger over posterior temporal cortex when healthy subjects view faces than when they view other objects. Source analyses have produced mixed results regarding whether this effect originates in the fusiform face area (FFA), lateral occipital cortex, or superior temporal sulcus (STS), components of the core face network. In a complementary approach, we assessed the face-selectivity of the right N170 in five patients with acquired prosopagnosia, who also underwent structural and functional magnetic resonance imaging. We used a non-parametric bootstrap procedure to perform single-subject analyses, which reliably confirmed N170 face-selectivity in each of 10 control subjects. Anterior temporal lesions that spared the core face network did not affect the face-selectivity of the N170. A face-selective N170 was also present in another subject who had lost only the right FFA. However, face-selectivity was absent in two patients with lesions that eliminated the occipital face area (OFA) and FFA, sparing only the STS. Thus, while the right FFA is not necessary for the face-selectivity of the N170, neither is the STS sufficient. We conclude that the face-selective N170 in prosopagnosia requires residual function of at least two components of the core face-processing network.
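A non-parametric bootstrap test of single-subject face-selectivity can be sketched as follows. This illustrates the general idea (resample trials with replacement and ask how often the face mean fails to exceed the object mean); it is not the authors' exact procedure, and the function name and amplitudes are hypothetical.

```python
import numpy as np

def bootstrap_selectivity(face_amps, object_amps, n_boot=5000, seed=0):
    """Bootstrap test that N170 amplitude is more negative for faces.

    Returns the fraction of resamples in which the face mean is NOT
    more negative than the object mean (a p-value-like quantity).
    """
    rng = np.random.default_rng(seed)
    faces = np.asarray(face_amps, dtype=float)
    objects = np.asarray(object_amps, dtype=float)
    count = 0
    for _ in range(n_boot):
        f = rng.choice(faces, size=faces.size, replace=True).mean()
        o = rng.choice(objects, size=objects.size, replace=True).mean()
        if f >= o:  # the N170 is negative: faces should be MORE negative
            count += 1
    return count / n_boot

# Toy single-subject amplitudes (microvolts), clearly face-selective
p = bootstrap_selectivity([-8.0, -7.5, -8.2, -7.8, -8.1, -7.9],
                          [-4.0, -4.2, -3.8, -4.1, -3.9, -4.3])
```

With the clearly separated toy amplitudes above, essentially no resample violates the face-more-negative direction, so the test declares face-selectivity; a patient lacking both OFA and FFA would be expected to fail it.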

10.
11.
Most studies assessing facial affect recognition in patients with temporal lobe epilepsy (TLE) have reported emotional disturbances. Results from the few fMRI studies assessing the neural correlates of affective face processing in patients with TLE are divergent. Some, but not all, found asymmetrical mesiotemporal activations, i.e., stronger activations within the hemisphere contralateral to seizure onset. Little is known about the association between the neural correlates of affect processing and subjective evaluation of the stimuli presented. Therefore, we investigated the neural correlates of processing dynamic fearful faces in 37 patients with mesial TLE (18 with left-sided TLE (lTLE), 19 with right-sided TLE (rTLE)) and 20 healthy subjects. We additionally assessed individual ratings of the fear intensity and arousal perception of the fMRI stimuli and correlated these data with the activations induced by the fearful face paradigm and with activation lateralization within the mesiotemporal structures (in terms of individual lateralization indices, LIs). In healthy subjects, whole-brain analysis showed bilateral activations within a widespread network of mesial and lateral temporal, occipital, and frontal areas. The patient groups activated different parts of this network. In patients with lTLE, we found predominantly right-sided activations within the mesial and lateral temporal cortices and the superior frontal gyrus. In patients with rTLE, we observed bilateral activations in the posterior regions of the lateral temporal lobe and within the occipital cortex. Mesiotemporal region-of-interest analysis showed bilateral symmetric activations associated with watching fearful faces in healthy subjects. According to the region-of-interest and LI analyses, mesiotemporal activations in the patients with lTLE were lateralized to the right hemisphere. In the patients with rTLE, we found left-sided mesiotemporal activations.
In patients with lTLE, fear ratings were comparable to those of healthy subjects and were correlated with relatively stronger activations in the right compared to the left amygdala. Patients with rTLE showed significantly reduced fear ratings compared to healthy subjects, and we did not find associations with amygdala lateralization. Although we found stronger activations within the contralateral mesial temporal lobe in the majority of all patients, our results suggest that only in the event of left-sided mesiotemporal damage is the right mesial temporal lobe able to preserve intact facial fear recognition. In the event of right-sided mesiotemporal damage, fear recognition is disturbed. This underlines the hypothesis that the right amygdala is biologically predisposed to processing fear, and that its function cannot be fully compensated in the event of right-sided mesiotemporal damage.
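Lateralization indices (LIs) of the kind used above are conventionally computed as (L − R) / (L + R) over some activation measure, such as suprathreshold voxel counts within homologous regions of interest. The study's exact definition may differ; the sketch below shows only the common textbook form with invented numbers.

```python
def lateralization_index(left_activation, right_activation):
    """Conventional lateralization index LI = (L - R) / (L + R).

    Positive values indicate left-lateralized activation, negative
    values right-lateralized, zero symmetric. Inputs might be
    suprathreshold voxel counts or summed beta values (assumption).
    """
    return (left_activation - right_activation) / (left_activation + right_activation)

# Hypothetical lTLE patient: 20 suprathreshold mesiotemporal voxels on the
# left, 60 on the right -> activation lateralized to the right hemisphere
li = lateralization_index(20, 60)
```

Here `li` is −0.5, i.e., right-lateralized, matching the pattern the study reports for lTLE patients; by convention |LI| above some cutoff (often 0.2) counts as lateralized.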

12.
Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in the anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.

13.
Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding of how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the “Reading the Mind in the Eyes” Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and the right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS was positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS in facial expression recognition. Hum Brain Mapp 37:1930–1940, 2016. © 2016 Wiley Periodicals, Inc.
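The within-network connectivity (WNC) measure described above can be sketched as each voxel's mean resting-state correlation with every other voxel in the network. This is a generic illustration of voxel-based global brain connectivity restricted to a network, with toy time series; it is not the authors' exact implementation.

```python
import numpy as np

def within_network_connectivity(ts):
    """WNC of each voxel in a network.

    ts: (n_voxels, n_timepoints) resting-state time series for the
    face-network voxels. Each voxel's WNC is its mean Pearson
    correlation with all other network voxels (self excluded).
    """
    ts = np.asarray(ts, dtype=float)
    r = np.corrcoef(ts)              # voxel-by-voxel correlation matrix
    np.fill_diagonal(r, np.nan)      # drop self-correlations
    return np.nanmean(r, axis=1)     # one WNC value per voxel

# Toy network of three "voxels": the first two fluctuate together,
# the third is anticorrelated with both
wnc = within_network_connectivity([[1, 2, 3, 4],
                                   [1, 2, 3, 4],
                                   [4, 3, 2, 1]])
```

In this toy case the third voxel's WNC is −1, and the first voxel's is 0 (its +1 with voxel 2 cancels its −1 with voxel 3); in the study, voxels with high WNC in the rpSTS were the ones predicting expression-recognition ability.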

14.
We report a functional magnetic resonance imaging (fMRI) adaptation study of two well-described patients, DF and PS, who present face identity recognition impairments (prosopagnosia) following brain damage. Comparing faces to non-face objects elicited activation in all visual areas of the cortical face-processing network that were spared subsequent to brain damage. The common brain lesion in the two patients was in the right inferior occipital cortex, in the territory of the right “occipital face area” (‘OFA’), which reinforces the critical role of this region in processing faces. Despite the lesion to the right ‘OFA’, there was a normal range of sensitivity to faces in the right “fusiform face area” (‘FFA’) in both patients, supporting a non-hierarchical model of face processing at the cortical level. At the same time, however, sensitivity to individual face representations, as indicated by release from adaptation to identity, was abnormal in the right ‘FFA’ of both patients. This suggests that the right ‘OFA’ is necessary to individualize faces, perhaps through reentrant interactions with other cortical face-sensitive areas. The lateral occipital area (LO) is damaged bilaterally in patient DF, who also shows visual object agnosia. However, in patient PS, in whom LO was spared, sensitivity to individual representations of non-face objects was still found in this region, as in the normal brain, consistent with her preserved object recognition abilities. Taken together, these observations, which fruitfully combine functional imaging and neuropsychology, place strong constraints on the possible functional organization of the cortical areas mediating face processing in the human brain.

15.
Right temporal lobe structures are involved in face and facial expression processing and in mnestic functions. Face and facial expression memory was investigated in 15 patients with left (LTLE) and 18 patients with right (RTLE) temporal lobe epilepsy as well as 13 healthy controls. Pairs of pictures combining four faces and four emotions had to be matched according to face identity or facial expression. In the memory tasks, the two pictures of a pair were separated by a memory interval of 2000 milliseconds, whereas in the perception tasks (control condition) both pictures were presented simultaneously. RTLE patients had significantly lower scores than healthy controls in face memory. LTLE patients had significantly lower scores than healthy controls in face and facial expression memory. The data confirm impaired face memory in RTLE patients and show that LTLE patients display deficits in face as well as in facial expression memory. Results are discussed with respect to functional reorganization, memory strategies, perception performance, naming problems, and group characteristics.

16.
Functional magnetic resonance imaging was used during emotion recognition to identify changes in functional brain activation in 21 first-episode, treatment-naive major depressive disorder patients before and after antidepressant treatment. Following escitalopram oxalate treatment, patients exhibited decreased activation in bilateral precentral gyrus, bilateral middle frontal gyrus, left middle temporal gyrus, bilateral postcentral gyrus, left cingulate and right parahippocampal gyrus, and increased activation in right superior frontal gyrus, bilateral superior parietal lobule and left occipital gyrus during sad facial expression recognition. After antidepressant treatment, patients also exhibited decreased activation in the bilateral middle frontal gyrus, bilateral cingulate and right parahippocampal gyrus, and increased activation in the right inferior frontal gyrus, left fusiform gyrus and right precuneus during happy facial expression recognition. Our experimental findings indicate that the limbic-cortical network might be a key target region for antidepressant treatment in major depressive disorder.

17.
Engell AD, Haxby JV. Neuropsychologia. 2007;45(14):3234–3241.
The perception of facial expression and gaze-direction are important aspects of non-verbal communication. Expressions communicate the internal emotional state of others while gaze-direction offers clues to their attentional focus and future intentions. Cortical regions in the superior temporal sulcus (STS) play a central role in the perception of expression and gaze, but the extent to which the neural representations of these facial gestures are overlapping is unknown. In the current study 12 subjects observed neutral faces with direct-gaze, neutral faces with averted-gaze, or emotionally expressive faces with direct-gaze while we scanned their brains with functional magnetic resonance imaging (fMRI), allowing a comparison of the hemodynamic responses evoked by perception of expression and averted-gaze. The inferior occipital gyri, fusiform gyri, STS and inferior frontal gyrus were more strongly activated when subjects saw facial expressions than when they saw neutral faces. The right STS was more strongly activated by the perception of averted-gaze than direct-gaze faces. A comparison of the responses within right STS revealed that expression and averted-gaze activated distinct, though overlapping, regions of cortex. We propose that gaze-direction and expression are represented by dissociable overlapping neural systems.

18.
Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

19.
Humans can identify individual faces under different viewpoints, even after a single encounter. We determined brain regions responsible for processing face identity across view changes after variable delays with several intervening stimuli, using event-related functional magnetic resonance imaging during a long-term repetition priming paradigm. Unfamiliar faces were presented sequentially either in a frontal or three-quarter view. Each face identity was repeated once after an unpredictable lag, with either the same or another viewpoint. Behavioral data showed significant priming in response time, irrespective of view changes. Brain imaging results revealed a reduced response in the lateral occipital and fusiform cortex with face repetition. Bilateral face-selective fusiform areas showed view-sensitive repetition effects, generalizing only from three-quarter to front-views. More medial regions in the left (but not in the right) fusiform showed repetition effects across all types of viewpoint changes. These results reveal that distinct regions within the fusiform cortex hold view-sensitive or view-invariant traces of novel faces, and that face identity is represented in a view-sensitive manner in the functionally defined face-selective areas of both hemispheres. In addition, our finding of a better generalization after exposure to a 3/4-view than to a front-view demonstrates for the first time a neural substrate in the fusiform cortex for the common recognition advantage of three-quarter faces. This pattern provides new insights into the nature of face representation in the human visual system.

20.
Understanding the neurobiological substrates of self-recognition yields important insight into socially and clinically critical cognitive functions such as theory of mind. Experimental evidence suggests that the right frontal and parietal lobes preferentially process self-referent information. Recognition of one's own face is an important parameter of self-recognition, but well-controlled experimental data on the brain substrates of self-face recognition are limited. The goal of this study was to characterize the activation specific to self-face in comparison with control conditions at two levels of familiarity: an unknown, unfamiliar face and the more stringent control of a personally familiar face. We studied 12 healthy volunteers who made "unknown," "familiar," and "self" judgments about photographs of three types of faces: six different novel faces, a personally familiar face (the participant's fraternity brother), and their own face during an event-related functional MRI (fMRI) experiment. Contrasting unknown faces with baseline showed activation of the inferior occipital lobe, which supports previous findings suggesting the presence of a generalized face-processing area within the inferior occipital-temporal region. Activation in response to a familiar face, when contrasted with an unknown face, invoked insula, middle temporal, inferior parietal, and medial frontal lobe activation, which is consistent with an existing hypothesis suggesting that familiar face recognition taps neural substrates different from those involved in general facial processing. Brain response to self-face, when contrasted with familiar face, revealed activation in the right superior frontal gyrus, medial frontal and inferior parietal lobes, and left middle temporal gyrus. The contrast familiar vs. self produced activation only in the anterior cingulate gyrus.
Our results support the existence of a bilateral network for both perceptual and executive aspects of self-face processing that cannot be accounted for by a simple hemispheric dominance model. This network is similar to those implicated in social cognition, mirror neuron matching, and face-name matching. Our findings also show that some regions of the medial frontal and parietal lobes are specifically activated by familiar faces but not unknown or self-faces, indicating that these regions may serve as markers of face familiarity and that the differences between activation associated with self-face recognition and familiar face recognition are subtle and appear to be localized to lateral frontal, parietal, and temporal regions.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)