Similar Documents
20 similar documents found (search time: 15 ms)
1.
Materna S  Dicke PW  Thier P 《Neuropsychologia》2008,46(11):2759-2765
Neuroimaging and lesion studies suggest that the superior temporal sulcus (STS) region is involved in eye gaze processing. Hence, the STS region is suggested to be the location of the "eye-direction detector", a key element in the "mindreading model" proposed by Baron-Cohen [Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge: The MIT Press]. Not only the eyes but also the pointing finger of another person can inform us about the direction of that person's attention. In an event-related functional magnetic resonance imaging experiment, healthy human subjects actively followed a directional cue provided either by the eyes or, alternatively, the pointing finger of another person to make an eye movement toward an object in space. Our results show clearly that the posterior STS region is equally involved in processing directional information from either source. The only difference between the two cues was found in the lingual gyrus, in which a stronger blood-oxygen-level-dependent (BOLD) response was observed during the finger pointing task compared to the gaze following task. We suggest that different structures might be involved in the initial processing of directional information coming from the eyes or the pointing finger. These different streams of information may then converge in the posterior STS region, orchestrating the usage of a wider range of socially relevant directional cues able to inform us about the direction of attention and the intentions of another person.

2.
It has been suggested that in humans the mirror neuron system provides a neural substrate for imitation behaviour, but the relative contributions of different brain regions to the imitation of manual actions are still a matter of debate. To investigate the role of the mirror neuron system in imitation we used fMRI to examine patterns of neural activity under four different conditions: (1) passive observation of a pantomimed action (e.g., hammering a nail); (2) imitation of an observed action; (3) execution of an action in response to a word cue; and (4) self‐selected execution of an action. A network of cortical areas, including the left supramarginal gyrus, left superior parietal lobule, left dorsal premotor area and bilateral superior temporal sulcus (STS), was significantly active across all four conditions. Crucially, within this network the STS bilaterally was the only region in which activity was significantly greater for action imitation than for the passive observation and execution conditions. We suggest that the role of the STS in imitation is not merely to passively register observed biological motion, but rather to actively represent visuomotor correspondences between one's own actions and the actions of others. Hum Brain Mapp, 2010. © 2010 Wiley‐Liss, Inc.

3.
The superior temporal sulcus (STS) plays an important role in the perception of biological motion and in the representation of higher order information about others' goals and intentions. Using a rapid event-related functional magnetic resonance imaging (fMRI) paradigm, children (n=37, mean age 11.0) and adults (n=17, mean age 25.3) viewed congruent or incongruent actions. Congruency (and incongruency) of a reach toward an object was a function of whether the object had just previously received positive or negative regard. Relative to congruent trials, both children and adults showed an increase in activation in the posterior STS bilaterally, in response to incongruent trials. In children, these STS regions exhibited developmental changes. Specifically, the differential response to incongruent trials relative to congruent trials was larger in older children in both hemispheres.

4.
The right posterior superior temporal sulcus (pSTS) is a neural region involved in assessing the goals and intentions underlying the motion of social agents. Recent research has identified visual cues, such as chasing, that trigger animacy detection and intention attribution. When readily available in a visual display, these cues reliably activate the pSTS. Here, using functional magnetic resonance imaging, we examined whether attributing intentions to random motion would likewise engage the pSTS. Participants viewed displays of four moving circles and were instructed to search for chasing or mirror-correlated motion. On chasing trials, one circle chased another circle, invoking the percept of an intentional agent; on correlated motion trials, one circle’s motion was mirror reflected by another. On the remaining trials, all circles moved randomly. As expected, pSTS activation was greater when participants searched for chasing vs correlated motion when these cues were present in the displays. Of critical importance, pSTS activation was also greater when participants searched for chasing compared to mirror-correlated motion when the displays in both search conditions were statistically identical random motion. We conclude that pSTS activity associated with intention attribution can be invoked by top–down processes in the absence of reliable visual cues for intentionality.

5.
A test movie consisting of 13 different silent movie scenes, each 10 s in duration, was developed to test in patients the elementary abilities of perception and recognition of mimic and gestural expression. Each scene was subjected to 10 (2 × 5) verbal or non-verbal multiple-choice tests. Quantitative analysis of normal control group results is described. All sub-tests were very easy for normals and resulted in error scores below 5%. Thus the test is not designed to differentiate within a group of normal subjects but to characterize a pathological reduction in mimic, gesture and person recognition in schizophrenic and brain-lesioned patients. By measuring the dependency of correct recognition of the different movie scenes on the inspection duration, it was shown that the projection time of 10 s applied in the full test led to a fairly high amount of informational redundancy. This was intentional so that stimulus material could be well perceived and recognized even by patients with somewhat fluctuating attentiveness.

6.
Behavioral evidence and theory suggest gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates that both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions on an individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while viewing videos of an experimenter producing communicative, Participant‐Directed Gestures (PDG) (e.g., “Hello, come here”), noncommunicative Self‐adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant‐Directed Sentences (PDS), matched in content to PDG, (2) Third‐person Sentences (3PS), describing a character's actions from a third‐person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface‐based conjunction and individual functional region of interest analyses identified shared neural activation between gesture (PDGvsSG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDGvsSG) and Third‐person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant‐Directed Sentences to Third‐person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of stimuli revealed sensitivity of left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant‐directed communicative intent through gesture and language. Hum Brain Mapp 37:3444–3461, 2016. © 2016 Wiley Periodicals, Inc.

7.
Categorical perception (CP) is a mechanism whereby non-identical stimuli that have the same underlying meaning become invariantly represented in the brain. Through behavioral identification and discrimination tasks, CP has been demonstrated to occur broadly across the auditory modality, including in perception of speech (e.g. phonemes) and music (e.g. chords) stimuli. Several functional imaging studies have linked CP of speech with activity in multiple regions of the left superior temporal sulcus (STS). As language processing is generally left-hemisphere dominant and, conversely, fine-grained spectral processing shows a right hemispheric bias, we hypothesized that CP of musical stimuli would be associated with right STS activity. Here, we used functional magnetic resonance imaging (fMRI) to test healthy, musically-trained volunteers as they (a) underwent a musical chord adaptation/habituation paradigm and (b) performed an active discrimination task on within- and between-category chord pairs, as well as an acoustically-matched, more continuously-perceived orthogonal sound set. As predicted, greater right STS activity was linked to categorical processing in both experimental paradigms. The results suggest that the left and right STS are functionally specialized and that the right STS may take on a key role in CP of spectrally complex sounds.

8.
The strength of brain responses to others' pain has been shown to depend on the intensity of the observed pain. To investigate the temporal profile of such modulation, we recorded neuromagnetic brain responses of healthy subjects to facial expressions of pain. The subjects observed grayscale photos of the faces of genuine chronic pain patients when the patients were suffering from their ordinary pain (Chronic) and when the patients' pain was transiently intensified (Provoked). The cortical activation sequence during observation of the facial expressions of pain advanced from occipital to temporo‐occipital areas, and it differed between Provoked and Chronic pain expressions in the right middle superior temporal sulcus (STS) at 300–500 ms: the responses were about a third stronger for Provoked than Chronic pain faces. Furthermore, the responses to Provoked pain faces were about 40% stronger in the right than the left STS, and they decreased from the first to the second measurement session by one‐fourth, whereas no similar decrease in responses was found for Chronic pain faces. Thus, the STS responses to the pain expressions were modulated by the intensity of the observed pain and by stimulus repetition; the location and latency of the responses suggest close similarities between processing of pain and other affective facial expressions. Hum Brain Mapp, 2009. © 2009 Wiley‐Liss, Inc.

9.
A parallel neural network has been proposed for processing various types of information conveyed by faces including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of explicit attention to the emotional expression of the faces on the neuronal activity of the face-responsive regions. A delayed match-to-sample procedure was adopted. Subjects were required to match the visually presented pictures with regard to the contour of the face pictures, facial identity, and emotional expressions by valence (happy and fearful expressions) and arousal (fearful and sad expressions). Contour matching of the non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activities of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.

10.
Previous research suggests a role of the dorsomedial prefrontal cortex (dmPFC) in metacognitive representation of social information, while the right posterior superior temporal sulcus (pSTS) has been linked to social perception. This study targeted these functional roles in the context of spontaneous mentalizing. An animated shapes task was presented to 46 subjects during functional magnetic resonance imaging. Stimuli consisted of video clips depicting animated shapes whose movement patterns prompt spontaneous mentalizing or simple intention attribution. Based on their differential response during spontaneous mentalizing, both regions were characterized with respect to their task‐dependent connectivity profiles and their associations with autistic traits. Functional network analyses revealed highly localized coupling of the right pSTS with visual areas in the lateral occipital cortex, while the dmPFC showed extensive coupling with instances of large‐scale control networks and temporal areas including the right pSTS. Autistic traits were related to mentalizing‐specific activation of the dmPFC and to the strength of connectivity between the dmPFC and posterior temporal regions. These results are in good agreement with the hypothesized roles of the dmPFC and right pSTS for metacognitive representation and perception‐based processing of social information, respectively, and further inform their implication in social behavior linked to autism. Hum Brain Mapp 38:3791–3803, 2017. © 2017 Wiley Periodicals, Inc.

11.
A widely adopted neural model of face perception (Haxby, Hoffman, & Gobbini, 2000) proposes that the posterior superior temporal sulcus (STS) represents the changeable features of a face, while the face-responsive fusiform gyrus (FFA) encodes invariant aspects of facial structure. ‘Changeable features’ of a face can include rigid and non-rigid movements. The current study investigated neural responses to rigid, moving faces displaying shifts in social attention. Both functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) were used to investigate neural responses elicited when participants viewed video clips in which actors made a rigid shift of attention, signalled congruently from both the eyes and head. These responses were compared to those elicited by viewing static faces displaying stationary social attention information or a scrambled video displaying directional motion. Both the fMRI and MEG analyses demonstrated heightened responses along the STS to turning heads compared to static faces or scrambled movement conditions. The FFA responded to both turning heads and static faces, showing only a slight increase in response to the dynamic stimuli. These results establish the applicability of the Haxby model to the perception of rigid face motions expressing changes in social attention direction. Furthermore, the MEG beamforming analyses found an STS response in an upper frequency band (30-80 Hz) which peaked in the right anterior region. These findings, derived from two complementary neuroimaging techniques, clarify the contribution of the STS during the encoding of rigid facial action patterns of social attention, emphasising the role of anterior sulcal regions alongside previously observed posterior areas.

12.
Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel‐based global brain connectivity method based on resting‐state fMRI to characterize the within‐network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the “Reading the Mind in the Eyes” Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting‐state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC‐pSTS and OFA‐pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub‐like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930–1940, 2016. © 2016 Wiley Periodicals, Inc.

13.
Whether a single perceptual process or separate and possibly independent processes support facial identity and expression recognition is unclear. We used a morphed-face discrimination test to examine sensitivity to facial expression and identity information in patients with occipital or temporal lobe damage, and structural and functional MRI to correlate behavioral deficits with damage to the core regions of the face-processing network. We found selective impairments of identity perception in two patients with right inferotemporal lesions and two prosopagnosic patients with damage limited to the anterior temporal lobes. Of these four patients, one exhibited damage to the right fusiform and occipital face areas, while the remaining three showed sparing of these regions. Thus, impaired identity perception can occur with damage not only to the fusiform and occipital face areas, but also to other medial occipitotemporal structures that likely form part of a face recognition network. Impaired expression perception was seen in the fifth patient, whose damage affected the face-related portion of the posterior superior temporal sulcus. This subject also had difficulty in discriminating identity when irrelevant variations in expression needed to be discounted. These neuropsychological and neuroimaging data provide evidence to complement models which address the separation of expression and identity perception within the face-processing network.

14.
We demonstrated that neonatal isolation (1-h pup isolation; postnatal days 2-9) impairs context-induced fear conditioning in adult male rats and tends to enhance this effect and foot shock sensitivity in females. In this study, we examine the effects of brief (i.e., handling; 15 min) and prolonged (3 h) maternal separations (postnatal days 1-21) on fear conditioning and foot shock sensitivity in adult male and female rats. Identical training and test conditions from our prior study were employed so comparisons of the three early life stressors could be made. Context- and cue-elicited freezing and ultrasonic vocalizations (USVs; 22 kHz) were measured after 10 tone-shock training trials in Experiment 1. In Experiment 2, foot shock responses (flinch, jump, sonic vocalizations) to escalating shock levels were assessed. Brief maternal separation impaired context- and cue-conditioned fear in rats of both sexes as assessed by USVs. Prolonged maternal separation only impaired context fear in female rats. There were no effects on foot shock sensitivity. Results of this and other studies suggest that early life stress impairs fear conditioning in adult rats whereas stress experienced in adulthood has the opposite effect. These opposing effects may reflect developmental differences on stress-induced alterations on hippocampal regulation of the hypothalamic-pituitary-adrenal axis.

15.
People with autism spectrum disorder (ASD) often have difficulty comprehending social situations in the complex, dynamic contexts encountered in the real world. To study the social brain under conditions which approximate naturalistic situations, we measured brain activity with functional magnetic resonance imaging while participants watched a full-length episode of the sitcom The Office. Having quantified the degree of social awkwardness at each moment of the episode, as judged by an independent sample of controls, we found that both individuals with ASD and control participants showed reliable activation of several brain regions commonly associated with social perception and cognition (e.g. those comprising the ‘mentalizing network’) during the more awkward moments. However, individuals with ASD showed less activity than controls in a region near right temporo-parietal junction (RTPJ) extending into the posterior end of the right superior temporal sulcus (RSTS). Further analyses suggested that, despite the free-form nature of the experimental design, this group difference was specific to this RTPJ/RSTS area of the mentalizing network; other regions of interest showed similar activity across groups with respect to both location and magnitude. These findings add support to a body of evidence suggesting that RTPJ/RSTS plays a special role in social processes across modalities and may function atypically in individuals with ASD navigating the social world.

16.
The human superior temporal sulcus (STS) has been suggested to be involved in gaze processing, but temporal data regarding this issue are lacking. We investigated this topic by combining fMRI and MEG in four normal subjects. Photographs of faces with either averted or straight eye gazes were presented and subjects passively viewed the stimuli. First, we analyzed the brain areas involved using fMRI. A group analysis revealed activation of the STS for averted compared to straight gazes, which was confirmed in all subjects. We then measured brain activity using MEG, and conducted a 3D spatial filter analysis. The STS showed higher activity in response to averted versus straight gazes during the 150–200 ms period after stimulus onset, peaking at around 170 ms. In contrast, the fusiform gyrus, which was detected by the main effect of stimulus presentations in the fMRI analysis, exhibited comparable activity across straight and averted gazes at about 170 ms. These results indicate involvement of the human STS in rapid processing of the eye gaze of another individual.

17.
The present investigation was designed to determine the origins in the temporal lobe, and terminations in the pons, of the temporopontine pathway. Injections of tritiated amino acids were placed in multimodal regions in the upper bank of the superior temporal sulcus (STS), and in unimodal visual, somatosensory, and auditory areas in different sectors of the lower bank of the STS, the superior temporal gyrus (STG), and the supratemporal plane (STP). The distribution of terminal label in the nuclei of the basis pontis was studied using the autoradiographic technique. Following injections of isotope into the multimodal areas (TPO and PGa) in the upper bank of the STS, intense aggregations of label were observed in the extreme dorsolateral, dorsolateral, and lateral nuclei of the pons, and modest amounts of label were seen in the peripeduncular nucleus. The caudalmost area TPO projected in addition to the ventral and intrapeduncular pontine nuclei. The second auditory area, AII, and the adjacent auditory association areas of the STG and STP contributed modest projections to the dorsolateral, lateral, and peripeduncular nuclei, but generally spared the extreme dorsolateral nucleus. The lower bank of the STS, which subserves central vision, the somatosensory associated region at the fundus of the rostral STS, and the primary auditory area did not project to the pons. The higher order, multimodal STS contribution to the corticopontocerebellar circuit may provide a partial anatomical substrate for the hypothesis that the cerebellum contributes to the modulation of nonmotor functions.

18.
AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender OR sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy and latency and emotional intensity. The selection of articles was performed according to parameters set by Cochrane. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous with respect to the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. CONCLUSION: The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer’s gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation.

19.
Electrophysiological experiments in monkeys, and more recent functional magnetic resonance imaging (fMRI) studies in human subjects, have shown that the superior temporal sulcus (STS) plays a role in face processing. The various roles of the STS in human cognition, including face processing, language, audio-visual integration, and motion perception, are expected to be subserved by the widespread neural connectivity of the STS with other brain regions, such as the primary sensory, limbic, and prefrontal areas. Among the multiple components involved in face processing, fMRI studies have shown that the STS is predominantly involved in the processing of gaze direction and emotional expression. Peak coordinates reported in previous fMRI studies were plotted on the Montreal Neurological Institute's standard brain template. A large cluster was located along the posterior to middle part of the STS; activation in this region was not selective for a particular task or type of cognitive processing. The author focused on 3 aspects of the functional role of the STS: (i) the STS performs regionally specific functions; the posterior STS has low face-selectivity, and the anterior STS is relatively selective for face stimuli, (ii) the STS not only detects gaze direction and facial expression, but also detects intention, as indicated by the experimental stimuli, and (iii) the STS response increases when the intention and the subsequent results do not match. Based on these findings, the author speculates that the role of the STS in human cognition and emotion is to process "social attention," which is a crucial human skill for making inferences with respect to others' goals, intentions, and actions.

20.
We investigated the functional neuroanatomy of vowel processing. We compared attentive auditory perception of natural German vowels to perception of nonspeech band-passed noise stimuli using functional magnetic resonance imaging (fMRI). More specifically, the mapping in auditory cortex of first and second formants was considered, which spectrally characterize vowels and are linked closely to phonological features. Multiple exemplars of natural German vowels were presented in sequences alternating either mainly along the first formant (e.g., [u]-[o], [i]-[e]) or along the second formant (e.g., [u]-[i], [o]-[e]). In fixed-effects and random-effects analyses, vowel sequences elicited more activation than did nonspeech noise in the anterior superior temporal cortex (aST) bilaterally. Partial segregation of different vowel categories was observed within the activated regions, suggestive of a speech sound mapping across the cortical surface. Our results add to the growing evidence that speech sounds, as one of the behaviorally most relevant classes of auditory objects, are analyzed and categorized in aST. These findings also support the notion of an auditory "what" stream, with highly object-specialized areas anterior to primary auditory cortex.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号