Similar Literature
Found 20 similar records (search time: 46 ms)
1.
The neural basis of face processing has been extensively studied in the past two decades. The current dominant neural model, proposed by Haxby et al. (2000) and Gobbini and Haxby (2007), suggests a division of labor between the fusiform face area (FFA), which processes invariant facial aspects, such as identity, and the posterior superior temporal sulcus (pSTS), which processes changeable facial aspects, such as expression. An extension to this model for the processing of dynamic faces, proposed by O'Toole et al. (2002), highlights the role of the pSTS in the processing of identity from dynamic familiar faces. To evaluate these models, we reviewed recent neuroimaging studies that examined the processing of identity and expression with static and dynamic faces. Based on the accumulated data, we propose an updated model, emphasizing the dissociation between form and motion as the primary functional division between a ventral stream that goes through the FFA and a dorsal stream that goes through the STS, respectively. We also encourage future studies to expand their research to the processing of dynamic faces.

2.
《Brain stimulation》2020,13(4):1008-1013
Background: Neuroimaging studies suggest that facial expression recognition is processed in the bilateral posterior superior temporal sulcus (pSTS). Our recent repetitive transcranial magnetic stimulation (rTMS) study demonstrates that the bilateral pSTS is causally involved in expression recognition, although involvement of the right pSTS is greater than involvement of the left pSTS. Objective/Hypothesis: In this study, we used dual-site TMS to investigate whether the left pSTS is functionally connected to the right pSTS during expression recognition. We predicted that if this connection exists, simultaneous TMS disruption of the bilateral pSTS would impair expression recognition to a greater extent than unilateral stimulation of the right pSTS alone. Methods: Participants attended two TMS sessions. In Session 1, participants performed an expression recognition task while rTMS was delivered to the face-sensitive right pSTS (experimental site), the object-sensitive right lateral occipital complex (control site), or no rTMS was delivered (behavioural control). In Session 2, the same experimental design was used, except that continuous theta-burst stimulation (cTBS) was delivered to the left pSTS immediately before behavioural testing commenced. Session order was counter-balanced across participants. Results: In Session 1, rTMS to the right pSTS impaired performance accuracy compared to the control conditions. Crucially, in Session 2 the size of this impairment effect doubled after cTBS was delivered to the left pSTS. Conclusions: Our results provide evidence for a causal functional connection between the left and right pSTS during expression recognition. In addition, this study further demonstrates the utility of dual-site TMS for investigating causal functional links between brain regions.

3.
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition has remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113–3125, 2017. © 2017 Wiley Periodicals, Inc.
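As a rough illustration of the MVPA logic described in this entry, the sketch below trains a linear classifier to decode six emotion categories from simulated voxel patterns. The data shapes, noise levels, and choice of classifier are assumptions for demonstration only, not the study's actual pipeline.

```python
# Illustrative MVPA decoding sketch: classify six emotion categories from
# simulated multi-voxel response patterns within a region of interest.
# All shapes and parameters are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_blocks_per_emotion, n_voxels, n_emotions = 12, 100, 6

# Simulated block-wise patterns: each emotion has a weak, consistent
# voxel signature buried in stronger trial-by-trial noise.
signatures = rng.normal(0, 1, (n_emotions, n_voxels))
X = np.vstack([
    signatures[e] + rng.normal(0, 3, (n_blocks_per_emotion, n_voxels))
    for e in range(n_emotions)
])
y = np.repeat(np.arange(n_emotions), n_blocks_per_emotion)

# Cross-validated decoding (real analyses would leave out whole runs).
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=6)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/n_emotions:.2f})")
```

Above-chance accuracy is then taken as evidence that the region carries expression information; the same logic is applied separately to face-selective and motion-sensitive areas.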

4.
Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal–ventral interactions (assessed via functional connectivity) might also exist for visual-speech processing. We then examined whether altered dorsal–ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT—right OFA, left TVSA—left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.

5.
This study investigated how changes of functional connectivity over time accompany consolidation of face memories. Based on previous research, it was hypothesized that connectivity changes in networks initially active during face perception and face encoding would be associated with individual recognition memory performance. Resting-state functional connectivity was examined shortly before, shortly after and about 40 min after incidental learning of faces. Memory performance was assessed in a surprise recognition test shortly after the last resting-state session. Results reveal that memory performance-related connectivity between the left fusiform face area and other brain areas gradually changed over the course of the experiment. Specifically, the increase in connectivity with the contralateral fusiform gyrus, the hippocampus, the amygdala and the inferior frontal gyrus correlated with recognition memory performance. As the increase in connectivity in the two final resting-state sessions was associated with memory performance, the present results demonstrate that memory formation is not restricted to the incidental learning phase but continues and increases in the following 40 min. We discuss the delayed increase in inter-hemispheric connectivity between the left and right fusiform gyrus as an indicator of memory formation and consolidation processes.
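The analysis logic of this entry can be sketched as follows: seed-based connectivity is computed per subject before and after learning, and the across-subject connectivity increase is related to memory performance. Everything below is an illustrative assumption (two sessions instead of three, synthetic time series, a hypothetical coupling model), not the study's data or pipeline.

```python
# Illustrative sketch: seed-based resting-state connectivity from a
# (hypothetical) left-FFA seed before and after learning, then correlating
# the post-learning connectivity increase with memory across subjects.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 20, 200

def seed_connectivity(seed_ts, target_ts):
    """Pearson correlation between a seed and a target time series."""
    return np.corrcoef(seed_ts, target_ts)[0, 1]

memory = rng.normal(0.75, 0.1, n_subjects)  # hypothetical hit rates
pre, post = [], []
for s in range(n_subjects):
    seed = rng.normal(size=n_timepoints)  # seed region time series
    # Assumed model: post-learning, the target couples to the seed more
    # strongly in subjects with better memory.
    coupling = 2.0 * memory[s]
    target_pre = rng.normal(size=n_timepoints)
    target_post = coupling * seed + rng.normal(size=n_timepoints)
    pre.append(seed_connectivity(seed, target_pre))
    post.append(seed_connectivity(seed, target_post))

delta = np.array(post) - np.array(pre)          # connectivity increase
r = np.corrcoef(delta, memory)[0, 1]            # brain-behaviour correlation
print(f"connectivity increase vs. memory: r = {r:.2f}")
```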

6.
Within the neural face-processing network, the right occipital face area (rOFA) plays a prominent role, and it has been suggested that it receives both feed-forward input and re-entrant feedback from other face-sensitive areas. Its functional role is less well understood, and whether the rOFA is involved in the initial analysis of a face stimulus or in the detailed integration of different face properties remains an open question. The present study investigated the functional role of the rOFA with regard to different face properties (identity, expression, and gaze) using transcranial magnetic stimulation (TMS). Experiment 1 showed that the rOFA integrates information across different face properties: performance for the combined processing of identity and expression decreased after TMS to the rOFA, while no impairment was seen in gaze processing. In Experiment 2 we examined the temporal dynamics of this effect and pinpointed the impaired integrative computation to 170 ms post-stimulus presentation. Together the results suggest that TMS to the rOFA affects the integrative processing of facial identity and expression at a mid-latency processing stage.

7.
Time sensitivity is affected by emotional stimuli such as fearful faces. The effect of threatening stimuli on time perception depends on numerous factors, including task type and duration range. We applied a two-interval forced-choice task using face stimuli to healthy volunteers to evaluate the interaction of time perception and emotion using functional magnetic resonance imaging. We conducted finite impulse response analysis to examine time series for the significantly activated brain areas and psychophysiological interaction (PPI) analysis to investigate the connectivity between selected regions. Time perception engaged a right-lateralised frontoparietal network, while a face discrimination task activated the amygdala and fusiform face area (FFA). No voxels were active with regard to the effect of expression (fearful versus neutral). In parallel with this, our behavioural results showed that attending to the fearful faces did not cause duration overestimation. Finally, connectivity of the amygdala and FFA to the middle frontal gyrus increased during the face processing condition compared to the timing task. Overall, our results suggest that prefrontal–amygdala connectivity might be required for the emotional processing of facial stimuli. On the other hand, attentional load, task type and task difficulty are discussed as possible factors that influence the effects of emotion on time perception.

8.
The ability to process and respond to emotional facial expressions is a critical skill for healthy social and emotional development. There has been growing interest in understanding the neural circuitry underlying development of emotional processing, with previous research implicating functional connectivity between amygdala and frontal regions. However, existing work has focused on threatening emotional faces, raising questions regarding the extent to which these developmental patterns are specific to threat or to emotional face processing more broadly. In the current study, we examined age-related changes in brain activity and amygdala functional connectivity during an fMRI emotional face matching task (including angry, fearful, and happy faces) in 61 healthy subjects aged 7–25 years. We found age-related decreases in ventral medial prefrontal cortex activity in response to happy faces but not to angry or fearful faces, and an age-related change (shifting from positive to negative correlation) in amygdala–anterior cingulate cortex/medial prefrontal cortex (ACC/mPFC) functional connectivity to all emotional faces. Specifically, positive correlations between amygdala and ACC/mPFC in children changed to negative correlations in adults, which may suggest early emergence of bottom-up amygdala excitatory signaling to ACC/mPFC in children and later development of top-down inhibitory control of ACC/mPFC over amygdala in adults. Age-related changes in amygdala–ACC/mPFC connectivity did not vary for processing of different facial emotions, suggesting changes in amygdala–ACC/mPFC connectivity may underlie development of broad emotional processing, rather than threat-specific processing. Hum Brain Mapp 37:1684–1695, 2016. © 2016 Wiley Periodicals, Inc.

9.
Facial expression and sex recognition in schizophrenia and depression.
BACKGROUND: Impaired facial expression recognition in schizophrenia patients contributes to abnormal social functioning and may predict functional outcome in these patients. Facial expression processing involves individual neural networks that have been shown to malfunction in schizophrenia. Whether these patients have a selective deficit in facial expression recognition or a more global impairment in face processing remains controversial. OBJECTIVE: To investigate whether patients with schizophrenia exhibit a selective impairment in facial emotional expression recognition, compared with patients with major depression and healthy control subjects. METHODS: We studied performance in facial expression recognition and facial sex recognition paradigms, using original morphed faces, in a population with schizophrenia (n=29) and compared their scores with those of depression patients (n=20) and control subjects (n=20). RESULTS: Schizophrenia patients achieved lower scores than both other groups in the expression recognition task, particularly in fear and disgust recognition. Sex recognition was unimpaired. CONCLUSION: Facial expression recognition is impaired in schizophrenia, whereas sex recognition is preserved, which strongly suggests abnormal processing of changeable facial features in this disease. A dysfunction of the top-down retrograde modulation from limbic and paralimbic structures on visual areas is hypothesized.

10.
To recognize individuals, the brain often integrates audiovisual information from familiar or unfamiliar faces, voices, and auditory names. To date, the effects of the semantic familiarity of stimuli on audiovisual integration remain unknown. In this functional magnetic resonance imaging (fMRI) study, we used familiar/unfamiliar facial images, auditory names, and audiovisual face-name pairs as stimuli to determine the influence of semantic familiarity on audiovisual integration. First, we performed a general linear model analysis using fMRI data and found that audiovisual integration occurred for familiar congruent and unfamiliar face-name pairs but not for familiar incongruent pairs. Second, we decoded the familiarity categories of the stimuli (familiar vs. unfamiliar) from the fMRI data and calculated the reproducibility indices of the brain patterns that corresponded to familiar and unfamiliar stimuli. The decoding accuracy rate was significantly higher for familiar congruent versus unfamiliar face-name pairs (83.2%) than for familiar versus unfamiliar faces (63.9%) and for familiar versus unfamiliar names (60.4%). This increase in decoding accuracy was not observed for familiar incongruent versus unfamiliar pairs. Furthermore, compared with the brain patterns associated with facial images or auditory names, the reproducibility index was significantly improved for the brain patterns of familiar congruent face-name pairs but not those of familiar incongruent or unfamiliar pairs. Our results indicate the modulatory effect that semantic familiarity has on audiovisual integration. Specifically, neural representations were enhanced for familiar congruent face-name pairs compared with visual-only faces and auditory-only names, whereas this enhancement effect was not observed for familiar incongruent or unfamiliar pairs. Hum Brain Mapp 37:4333–4348, 2016. © 2016 Wiley Periodicals, Inc.

11.
Previous research suggests a role of the dorsomedial prefrontal cortex (dmPFC) in metacognitive representation of social information, while the right posterior superior temporal sulcus (pSTS) has been linked to social perception. This study targeted these functional roles in the context of spontaneous mentalizing. An animated shapes task was presented to 46 subjects during functional magnetic resonance imaging. Stimuli consisted of video clips depicting animated shapes whose movement patterns prompt spontaneous mentalizing or simple intention attribution. Based on their differential response during spontaneous mentalizing, both regions were characterized with respect to their task-dependent connectivity profiles and their associations with autistic traits. Functional network analyses revealed highly localized coupling of the right pSTS with visual areas in the lateral occipital cortex, while the dmPFC showed extensive coupling with instances of large-scale control networks and temporal areas including the right pSTS. Autistic traits were related to mentalizing-specific activation of the dmPFC and to the strength of connectivity between the dmPFC and posterior temporal regions. These results are in good agreement with the hypothesized roles of the dmPFC and right pSTS for metacognitive representation and perception-based processing of social information, respectively, and further inform their implication in social behavior linked to autism. Hum Brain Mapp 38:3791–3803, 2017. © 2017 Wiley Periodicals, Inc.

12.
Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face.

13.
Human object recognition is dependent on occipito-temporal cortex (OTC), but a complete understanding of the complex functional architecture of this area must account for how it is connected to the wider brain. Converging functional magnetic resonance imaging evidence shows that univariate responses to different categories of information (e.g., faces, bodies, and nonhuman objects) are strongly related to, and potentially shaped by, functional and structural connectivity to the wider brain. However, to date, there have been no systematic attempts to determine how distal connectivity and complex local high-level responses in occipito-temporal cortex (i.e., multivoxel response patterns) are related. Here, we show that distal functional connectivity is related to, and can reliably index, high-level representations for several visual categories (i.e., tools, faces, and places) within occipito-temporal cortex; that is, voxel sets that are strongly connected to distal brain areas show higher pattern discriminability than less well-connected sets do. We further show that in several cases, pattern discriminability is higher in sets of well-connected voxels than in sets defined by local activation (e.g., strong amplitude responses to faces in fusiform face area). Together, these findings demonstrate the important relationship between the complex functional organization of occipito-temporal cortex and wider brain connectivity. Significance Statement: Human object recognition relies strongly on OTC, yet responses in this broad area are often considered in relative isolation from the rest of the brain. We employ a novel connectivity-guided voxel selection approach with functional magnetic resonance imaging data to show higher sensitivity to information (i.e., higher multivoxel pattern discriminability) in voxel sets that share strong connectivity to distal brain areas, relative to (1) voxel sets that are less strongly connected, and in several cases, (2) voxel sets that are defined by strong local response amplitude. These findings underscore the importance of distal contributions to local processing in OTC.
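A minimal sketch of the connectivity-guided voxel selection idea described in this entry, under synthetic assumptions (two stimulus categories, a single distal seed, simulated time series): voxels are ranked by seed connectivity, and pattern discriminability is then compared between the best- and worst-connected voxel sets.

```python
# Illustrative sketch: rank simulated OTC voxels by functional connectivity
# to a distal seed, then compare multivoxel pattern discriminability between
# the top-k and bottom-k voxel sets. All data and shapes are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_time, n_voxels, k = 300, 200, 50

# Resting/task time series: half of the voxels track the distal seed.
seed = rng.normal(size=n_time)
connected = np.arange(n_voxels) < n_voxels // 2
ts = rng.normal(size=(n_time, n_voxels))
ts[:, connected] += 0.8 * seed[:, None]

# Rank voxels by seed connectivity; take top-k / bottom-k sets.
conn = np.array([np.corrcoef(seed, ts[:, v])[0, 1] for v in range(n_voxels)])
top_k, bottom_k = np.argsort(conn)[-k:], np.argsort(conn)[:k]

# Category response patterns (e.g., faces vs. tools): under this toy model,
# category signal lives in the well-connected voxels.
n_trials = 80
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[np.ix_(y == 1, connected)] += 0.5

acc_top = cross_val_score(LinearSVC(dual=False), X[:, top_k], y, cv=5).mean()
acc_bot = cross_val_score(LinearSVC(dual=False), X[:, bottom_k], y, cv=5).mean()
print(f"well-connected: {acc_top:.2f}, poorly connected: {acc_bot:.2f}")
```

The study's claim corresponds to `acc_top` exceeding `acc_bot`; in the real analysis, connectivity and discriminability would of course be estimated from independent data.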

14.
Activity in category-selective regions of the temporal and parietal lobes during encoding has been associated with subsequent memory for face and scene stimuli. Reactivation theories of memory consolidation predict that after encoding, connectivity between these category-selective regions and the hippocampus should be modulated and predict recognition memory. However, support for this proposal has been limited in humans. Here, participants completed a resting-state functional MRI (fMRI) scan, followed by face- and place-encoding tasks, followed by another resting-state fMRI scan during which they were asked to think about the stimuli they had previously encountered. Individual differences in face recognition memory were predicted by the degree to which connectivity between face-responsive regions of the fusiform gyrus and perirhinal cortex increased following the face-encoding task. In contrast, individual differences in scene recognition were predicted by connectivity between the hippocampus and a scene-selective region of the retrosplenial cortex before and after the place-encoding task. Our results provide novel evidence for category specificity in the neural mechanisms supporting memory consolidation.

15.
There is increasing appreciation that network-level interactions among regions produce components of face processing previously ascribed to individual regions. Our goals were to use an exhaustive data-driven approach to derive and quantify the topology of directed functional connections within a priori defined nodes of the face processing network and evaluate whether the topology is category-specific. Young adults were scanned with fMRI as they viewed movies of faces, objects, and scenes. We employed GIMME to model effective connectivity among core and extended face processing regions, which allowed us to evaluate all possible directional connections under each viewing condition (face, object, place). During face processing, we observed directional connections from the right posterior superior temporal sulcus to both the right occipital face area and right fusiform face area (FFA), which does not reflect the topology reported in prior studies. We observed connectivity between core and extended regions during face processing, but this was limited to a feed-forward connection from the FFA to the amygdala. Finally, the topology of connections was unique to face processing. These findings suggest that the pattern of directed functional connections within the face processing network, particularly in the right core regions, may not be as hierarchical and feed-forward as described previously. Our findings support the notion that topologies of network connections are specialized, emergent, and dynamically responsive to task demands.

16.
Within the object recognition-related ventral visual stream, the human fusiform gyrus (FG), which topographically connects the striate cortex to the inferior temporal lobe, plays a pivotal role in high-level visual/cognitive functions. However, though there are many previous investigations of distinct functional modules within the FG, the functional organization of the whole FG in its full functional heterogeneity has not yet been established. In the current study, a replicable functional organization of the FG based on distinct anatomical connectivity patterns was identified. The FG was parcellated into medial (FGm), lateral (FGl), and anterior (FGa) regions using diffusion tensor imaging. We validated the reasonability of such an organizational scheme from the perspective of resting-state whole brain functional connectivity patterns and the involvement of functional subnetworks. We found corroborating support for these three distinct modules, and suggest that the FGm serves as a transition region that combines multiple stimuli, the FGl is responsible for categorical recognition, and the FGa is involved in semantic understanding. These findings support two organizational functional transitions of the ventral temporal gyrus: a posterior/anterior direction of visual/semantic processing, and a medial/lateral direction of high-level visual processing. Our results may facilitate a more detailed study of the human FG in the future. Hum Brain Mapp 37:3003–3016, 2016. © 2016 Wiley Periodicals, Inc.

17.
Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice-face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face-sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a predictive coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals. Hum Brain Mapp, 36:324–339, 2015. © 2014 Wiley Periodicals, Inc.

18.
Functional magnetic resonance imaging (fMRI) is increasingly used to characterize functional connectivity between brain regions. Given the vast number of between‐voxel interactions in high‐dimensional fMRI data, it is an ongoing challenge to detect stable and generalizable functional connectivity in the brain among groups of subjects. Component models can be used to define subspace representations of functional connectivity that are more interpretable. It is, however, unclear which component model provides the optimal representation of functional networks for multi‐subject fMRI datasets. A flexible cross‐validation approach that assesses the ability of the models to predict voxel‐wise covariance in new data, using three different measures of generalization was proposed. This framework is used to compare a range of component models with varying degrees of flexibility in their representation of functional connectivity, evaluated on both simulated and experimental resting‐state fMRI data. It was demonstrated that highly flexible subject‐specific component subspaces, as well as very constrained average models, are poor predictors of whole‐brain functional connectivity, whereas the best‐generalizing models account for subject variability within a common spatial subspace. Within this set of models, spatial Independent Component Analysis (sICA) on concatenated data provides more interpretable brain patterns, whereas a consistent‐covariance model that accounts for subject‐specific network scaling (PARAFAC2) provides greater stability in functional connectivity relationships between components and their spatial representations. The proposed evaluation framework is a promising quantitative approach to evaluating component models, and reveals important differences between subspace models in terms of predictability, robustness, characterization of subject variability, and interpretability of the model parameters. Hum Brain Mapp 38:882–899, 2017. © 2016 Wiley Periodicals, Inc.  
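One of the component models compared in this entry, spatial ICA on temporally concatenated multi-subject data, can be sketched as follows. Dimensions, sources, and noise levels are synthetic assumptions; real group-ICA pipelines add preprocessing and PCA dimensionality reduction before the ICA step.

```python
# Illustrative sketch of spatial ICA (sICA) on temporally concatenated data:
# recover shared, non-Gaussian spatial maps from stacked subject time series.
# All dimensions and signal models are hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_subjects, n_time, n_voxels, n_components = 5, 100, 500, 4

# Ground-truth spatial maps (sparse/non-Gaussian) shared across subjects,
# each subject contributing its own random time courses.
maps = rng.laplace(size=(n_components, n_voxels))
data = np.vstack([
    rng.normal(size=(n_time, n_components)) @ maps
    + 0.1 * rng.normal(size=(n_time, n_voxels))
    for _ in range(n_subjects)
])  # temporal concatenation: (n_subjects * n_time, n_voxels)

# Spatial ICA: voxels are the samples over which sources are independent,
# so the model is fit on the transposed (voxels x time) matrix.
ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
est_maps = ica.fit_transform(data.T).T  # (n_components, n_voxels)

# Match recovered maps to ground truth by absolute spatial correlation
# (sign and order of ICA components are arbitrary).
corr = np.abs(np.corrcoef(est_maps, maps)[:n_components, n_components:])
print("best match per true map:", corr.max(axis=0))
```

A PARAFAC2-style model, by contrast, would additionally constrain how each subject's loadings relate across the stacked blocks, which is what the entry credits for its greater stability.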

19.
Our ability to process complex social cues presented by faces improves during adolescence. Using multivariate analyses of neuroimaging data collected longitudinally from a sample of 38 adolescents (17 males) when they were 10, 11.5, 13 and 15 years old, we tested the possibility that there exist parallel variations in the structural and functional development of neural systems supporting face processing. By combining measures of task-related functional connectivity and brain morphology, we reveal that both the structural covariance and functional connectivity among 'distal' nodes of the face-processing network engaged by ambiguous faces increase during this age range. Furthermore, we show that the trajectory of increasing functional connectivity between the distal nodes occurs in tandem with the development of their structural covariance. This demonstrates a tight coupling between functional and structural maturation within the face-processing network. Finally, we demonstrate that increased functional connectivity is associated with age-related improvements of face-processing performance, particularly in females. We suggest that our findings reflect greater integration among distal elements of the neural systems supporting the processing of facial expressions. This, in turn, might facilitate an enhanced extraction of social information from faces during a time when greater importance is placed on social interactions.

20.
Post-task resting state dynamics can be viewed as a task-driven state where behavioral performance is improved through endogenous, non-explicit learning. Tasks that have intrinsic value for individuals are hypothesized to produce post-task resting state dynamics that promote learning. We measured simultaneous fMRI/EEG and DTI in Division-1 collegiate baseball players and compared to a group of controls, examining differences in both functional and structural connectivity. Participants performed a surrogate baseball pitch Go/No-Go task before a resting state scan, and we compared post-task resting state connectivity using a seed-based analysis from the supplementary motor area (SMA), an area whose activity discriminated players and controls in our previous results using this task. Although both groups were equally trained on the task, the experts showed differential activity in their post-task resting state consistent with motor learning. Specifically, we found (1) differences in bilateral SMA–L Insula functional connectivity between experts and controls that may reflect group differences in motor learning, (2) differences in BOLD-alpha oscillation correlations between groups, suggesting variability in modulatory attention in the post-task state, and (3) group differences in BOLD-beta oscillations that may indicate cognitive processing of motor inhibition. Structural connectivity analysis identified group differences in portions of the functionally derived network, suggesting that functional differences may also partially arise from variability in the underlying white matter pathways. Generally, we find that brain dynamics in the post-task resting state differ as a function of subject expertise and potentially result from differences in both functional and structural connectivity. Hum Brain Mapp 37:4454–4471, 2016. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
