Similar Documents
A total of 20 similar documents were found.
1.
2.
The face and voice can independently convey the same information about emotion. When we see an angry face or hear an angry voice, we can perceive a person's anger; in this sense, the two sensory cues are interchangeable. However, it remains unclear whether the same group of neurons processes signals for facial and vocal emotions. We recorded neuronal activity in the amygdala of monkeys while they watched nine video clips of species-specific emotional expressions: three monkeys each showing three emotional expressions (aggressive threat, scream, and coo). Of the 227 amygdala neurons tested, 116 (51%) responded to at least one of the emotional expressions. These "monkey-responsive" neurons (i.e., neurons that responded to species-specific emotional expressions) preferred the scream to the other expressions irrespective of stimulus identity. To determine which element was crucial to the neuronal responses, the activity of 79 monkey-responsive neurons was recorded while the facial or vocal element of a stimulus was presented alone. Although most neurons (61/79, 77%) responded strongly to the visual but not to the auditory element, about one fifth (16/79, 20%) maintained a good response when either the facial or vocal element was presented. Moreover, these neurons maintained their stimulus-preference profiles under both facial and vocal conditions. They were found in the central nucleus of the amygdala, the nucleus that receives inputs from the other amygdala nuclei and in turn sends outputs to other emotion-related brain areas. Such supramodal responses to emotion would be useful for generating appropriate responses to information about either facial or vocal emotion.
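As a purely illustrative sketch of how the preserved stimulus-preference profiles could be quantified (the abstract does not specify the statistic used; the firing rates and the choice of a rank correlation are assumptions), one could correlate a neuron's responses to the nine stimuli across the face-only and voice-only conditions:

import numpy as np
from scipy.stats import spearmanr

# Hypothetical mean firing rates (spikes/s) of one "supramodal" neuron to the
# nine stimuli (3 identities x 3 expressions), under the two single-element conditions.
face_only  = np.array([12.1, 30.5, 8.3, 10.9, 27.8, 7.5, 13.4, 33.0, 9.1])
voice_only = np.array([ 9.8, 25.2, 6.9,  8.7, 22.4, 6.1, 11.0, 26.9, 7.7])

# A rank correlation close to 1 indicates that the neuron ranks the stimuli
# similarly whether only the facial or only the vocal element is presented.
rho, p = spearmanr(face_only, voice_only)
print(f"profile similarity: rho = {rho:.2f}, p = {p:.3f}")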

3.
Human neuropsychological studies suggest that the amygdala is implicated in social cognition, in which perception of gaze direction, especially direct gaze, is essential, and that perceived gaze direction is modulated by the head orientation of the facial stimulus. However, the neural correlates of these observations remain unknown. In the present study, neuronal activity was recorded from the macaque monkey amygdala during performance of a sequential delayed non-matching-to-sample task based on gaze direction. The facial stimuli comprised two head orientations (frontal: facing the monkey directly; profile: rotated 30 degrees rightward from frontal) combined with different gaze directions (directed toward the monkey, or averted to its left or right). Of the 1091 neurons recorded, 61 responded to more than one facial stimulus. Of these face-responsive neurons, 44 displayed responses selective to the facial stimuli (face neurons). Most amygdalar face neurons discriminated both gaze direction and head orientation, and exhibited a significant interaction between these two types of information. Furthermore, factor analysis of the face neurons' response magnitudes to the facial stimuli revealed that the two factors derived from these stimuli corresponded to the two head orientations. Overall responses of the face neurons to direct gaze in both the profile and frontal faces were significantly larger than those to averted gaze. The results suggest that information about gaze direction and head orientation is integrated in the amygdala, and that the amygdala is implicated in the detection of direct gaze.
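A minimal, hypothetical sketch of the kind of factor analysis described above, applied to a simulated neurons-by-stimuli response matrix (the matrix values, the 44 x 6 layout, and the use of scikit-learn's FactorAnalysis are assumptions, not the study's actual procedure):

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical response matrix: rows = face neurons, columns = facial stimuli.
# Columns 0-2: frontal head orientation, columns 3-5: profile orientation
# (each with direct / left-averted / right-averted gaze).
n_neurons, n_stimuli = 44, 6
responses = rng.normal(10.0, 2.0, size=(n_neurons, n_stimuli))
responses[:, :3] += rng.normal(0.0, 3.0, size=(n_neurons, 1))  # shared frontal component
responses[:, 3:] += rng.normal(0.0, 3.0, size=(n_neurons, 1))  # shared profile component

# Extract two factors; in the study, the two factors corresponded to the two
# head orientations. Here the loadings simply recover the simulated structure.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
print(np.round(fa.components_, 2))  # 2 x 6 loading matrix (factors x stimuli)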

4.
5.
Summary. Single neurons were recorded in the inferotemporal cortex (IT) of a monkey trained to discriminate three selected human faces from a large number of other faces. Neurons that were unresponsive to the non-face visual stimuli used in the task but responsive to certain sets of faces were found in the inferotemporal gyrus. Correlation analysis between quantified facial features and the neuronal responses revealed that face neurons detect combinations of distances between facial parts such as the eyes, mouth, eyebrows, and hair. One face neuron, for example, detected the combination of the extent to which the forehead above the left eye was covered by hair and the distance between the eyes and the mouth. These results justify referring to such neurons as face neurons.
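The correlation analysis described above can be illustrated with a small, hypothetical example: correlating each quantified facial feature with a neuron's response magnitudes across a set of faces. All numbers below are simulated, and the two features are chosen only to echo the example in the abstract:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical data for one inferotemporal "face neuron": responses to 20 faces
# and two quantified features measured on the same faces (arbitrary units).
rng = np.random.default_rng(1)
hair_coverage = rng.uniform(0.0, 1.0, 20)           # forehead covered by hair
eye_mouth_dist = rng.uniform(3.0, 6.0, 20)           # eye-to-mouth distance
response = 5.0 + 8.0 * hair_coverage + 3.0 * eye_mouth_dist + rng.normal(0, 1, 20)

# Correlate each quantified feature with the response magnitude, in the spirit
# of the abstract's correlation analysis (simulated values, not the study's).
for name, feature in [("hair coverage", hair_coverage),
                      ("eye-mouth distance", eye_mouth_dist)]:
    r, p = pearsonr(feature, response)
    print(f"{name}: r = {r:.2f}, p = {p:.4f}")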

6.
Three experiments investigated how the facial expression and identity of two successively presented faces affect face recognition. In each experiment, a familiar or unfamiliar person was presented with a neutral or smiling expression. Four groups of 8 or 12 undergraduates judged either the facial expression or the familiarity of the second face. Reaction times in both judgments were shorter when the same person appeared twice than when two different people were presented. For expression judgments, this repetition effect was unaffected by the facial expression of the stimulus, whereas for familiarity judgments it depended on the expression and familiarity of both the first and second faces. In expression judgments, reaction times to smiling faces were shorter than those to neutral faces, but only when the faces were familiar. In familiarity judgments, this facial expression effect appeared only when the same familiar, smiling person was presented twice. These results suggest an interdependency between the analysis of facial expression and the analysis of person identity in face recognition.

7.
The initial learning and subsequent behavioral expression of fear are often viewed as independent processes with potentially distinct neural substrates. Laboratory animal studies of Pavlovian fear conditioning suggest that the amygdala is important both for forming stimulus associations and for subsequently expressing learned behavioral responses. In the present article, human amygdala activity was studied during the autonomic expression of conditional fear in two differential conditioning experiments using event-related functional magnetic resonance imaging with concurrent recording of skin conductance responses (SCRs). Trials were classified on the basis of individual participants' SCRs. Significant amygdala responding was detected only during trials on which a signal both predicted shock and elicited a significant conditional SCR; conditional stimulus presentation or autonomic activity alone was not sufficient. These results indicate that amygdala activity may specifically reflect the expression of learned fear responses and support the position that this region plays a central role in the expression of emotional reactions.
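A toy sketch of the trial-classification logic described above, assuming hypothetical single-participant data and an arbitrary SCR response criterion (0.05 microsiemens); it is not the study's analysis pipeline:

import numpy as np

# Hypothetical single-participant trial data (not the study's): trial type
# (True = the cue that predicted shock), the peak SCR on each trial, and the
# event-related amygdala signal estimate for that trial.
rng = np.random.default_rng(4)
is_csplus = rng.random(40) < 0.5
scr = np.where(is_csplus, rng.normal(0.12, 0.08, 40), rng.normal(0.02, 0.05, 40))
amygdala = np.where(is_csplus & (scr > 0.05), rng.normal(0.6, 0.2, 40),
                    rng.normal(0.1, 0.2, 40))

# Classify trials by whether the cue predicted shock AND elicited an SCR above
# the assumed criterion, then compare amygdala signal across the two classes.
responder = is_csplus & (scr > 0.05)
print("CS+ trials with SCR:", amygdala[responder].mean().round(2))
print("all other trials:   ", amygdala[~responder].mean().round(2))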

8.
Introduction: Identifying individuals from their faces is crucial for social functioning. In schizophrenia, previous studies have yielded mixed results as to whether face identity discrimination is compromised, and how social category factors such as gender and race affect patients' face identity discrimination remains unclear.

Methods: Using psychophysics, we examined perceptual performance on within- and between-category face identity discrimination tasks in patients (n = 51) and controls (n = 31). Face images from each of six pairs of individuals (two White females; two White males; two Black males; two Asian females; one Black male and one White male; one White female and one White male) were morphed to create additional images along a continuum of dissimilarity in facial morphology.

Results: Patients underperformed for five out of the six face pairs (the Black/White male pair was the exception). Perceptual performance was correlated with morphological changes in face images being discriminated, to a greater extent in patients than in controls.

Conclusions: Face identity discrimination in schizophrenia was most impaired for faces to which patients presumably have had extensive social exposure (such as White male faces). Patients' perceptual performance also appears to depend more heavily on physical feature changes in the faces being discriminated.


9.
1. We studied how neurons in the middle temporal visual area (MT) of anesthetized macaque monkeys responded to textured and nontextured visual stimuli. Stimuli contained a central rectangular "figure" that was either uniform in luminance or consisted of an array of oriented line segments. The figure moved at constant velocity in one of four orthogonal directions. The region surrounding the figure was either uniform in luminance or contained a texture array (whose elements were identical or orthogonal in orientation to those of the figure), and it either was stationary or moved along with the figure. 2. A textured figure moving across a stationary textured background ("texture bar" stimulus) often elicited vigorous neural responses, but, on average, the responses to texture bars were significantly smaller than to solid (uniform luminance) bars. 3. Many cells showed direction selectivity that was similar for both texture bars and solid bars. However, on average, the direction selectivity measured when texture bars were used was significantly smaller than that for solid bars, and many cells lost significant direction selectivity altogether. The reduction in direction selectivity for texture bars generally reflected a combination of decreased responsiveness in the preferred direction and increased responsiveness in the null (opposite to preferred) direction. 4. Responses to a texture bar in the absence of a texture background ("texture bar alone") were very similar to the responses to solid bars both in the magnitude of response and in the degree of direction selectivity. Conversely, adding a static texture surround to a moving solid bar reduced direction selectivity on average without a reduction in response magnitude. These results indicate that the static surround is largely responsible for the differences in direction selectivity for texture bars versus solid bars. 5. In the majority of MT cells studied, responses to a moving texture bar were largely independent of whether the elements in the bar were of the same orientation as the background elements or of the orthogonal orientation. Thus, for the class of stimuli we used, orientation contrast does not markedly affect the responses of MT neurons to moving texture patterns. 6. The optimum figure length and the shapes of the length tuning curves determined with the use of solid bars and texture bars differed significantly in most of the cells examined. Thus neurons in MT are not simply selective for a particular figure shape independent of whatever cues are used to delineate the figure.
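For context, direction selectivity in such studies is often summarized with an index of the form (preferred - null) / (preferred + null); whether this exact index was used here is an assumption, and the firing rates below are invented only to mirror the qualitative pattern in the abstract:

def direction_index(pref: float, null: float) -> float:
    """A commonly used direction-selectivity index: (pref - null) / (pref + null).
    Values near 1 indicate strong selectivity; values near 0 indicate none."""
    return (pref - null) / (pref + null)

# Hypothetical mean responses (spikes/s) of one MT cell: a weaker preferred-direction
# response and a stronger null-direction response for texture bars reduce the index
# relative to solid bars, as described in the abstract.
print(direction_index(pref=40.0, null=5.0))   # solid bar: ~0.78
print(direction_index(pref=28.0, null=12.0))  # texture bar on texture background: 0.40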

10.
Cognition is a higher function of the brain, and the mechanisms by which it arises are one of the frontiers of brain science. The brain's cognitive functions involve widespread cortical and subcortical regions, with different regions contributing to different types of cognition. The amygdala is an important structure involved in emotional learning and cognition. The Laboratory of System Emotional Science in the graduate school of Toyama Medical and Pharmaceutical University in Japan (now the Faculty of Medicine, University of Toyama), where the author once worked, has long studied the cognitive functions of the amygdala and has achieved notable results. This article reviews that laboratory's work on the social-cognitive functions of the monkey amygdala and its related recent findings.

11.
Alexithymic individuals have difficulties in identifying and verbalizing their emotions. The amygdala is known to play a central role in processing emotional stimuli and in generating emotional experience. In the present study, automatic amygdala reactivity to facial emotion was investigated as a function of alexithymia (assessed by the 20-Item Toronto Alexithymia Scale). The Beck Depression Inventory (BDI) and the State-Trait Anxiety Inventory (STAI) were administered to measure participants' depressivity and trait anxiety. During 3T fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions, masked by neutral faces, were presented to 21 healthy volunteers. The amygdala was selected as the region of interest (ROI); voxel values within the ROI were extracted, summarized by their mean, and compared among conditions. A detection task was used to assess participants' awareness of the masked emotional faces shown in the fMRI experiment. Masked sad and happy facial emotions led to greater right amygdala activation than masked neutral faces. The alexithymia facet "difficulty identifying feelings" was negatively correlated with the neural response of the right amygdala to masked sad faces, even when controlling for depressivity and anxiety. Reduced automatic amygdala responsivity may contribute to problems in identifying one's emotions in everyday life. Low spontaneous reactivity of the amygdala to sad faces could reflect less engagement in the encoding of negative emotional stimuli.
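A minimal sketch of the partial-correlation analysis described above (correlating difficulty-identifying-feelings scores with right-amygdala responses while regressing out depressivity and trait anxiety); all participant values are simulated, and the residualization approach is an assumption rather than the study's exact method:

import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covariates):
    """Correlate x and y after regressing out the covariates from both
    (ordinary least squares with an intercept)."""
    design = np.column_stack([np.ones(len(x)), covariates])
    x_res = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    y_res = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return pearsonr(x_res, y_res)

# Hypothetical scores for 21 participants (not the study's data): TAS-20
# difficulty-identifying-feelings, right-amygdala response to masked sad faces,
# and the two covariates (BDI depressivity, STAI trait anxiety).
rng = np.random.default_rng(2)
dif = rng.normal(15, 4, 21)
amygdala = 1.0 - 0.05 * dif + rng.normal(0, 0.2, 21)
bdi = rng.normal(5, 3, 21)
stai = rng.normal(35, 8, 21)

r, p = partial_corr(dif, amygdala, np.column_stack([bdi, stai]))
print(f"partial r = {r:.2f}, p = {p:.3f}")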

12.
Previous psychometric studies using a visual search task have suggested that interpersonal fear in individuals with social anxiety disorder (SAD) may be processed by unconscious, preattentive mechanisms. However, little is known about the relationship between social anxiety and preattentive emotional responses. We explored whether social anxiety is associated with preattentive emotional responses to facial expressions. Groups with high and low social anxiety were selected from 125 healthy volunteers according to scores on the Social Phobia Inventory. Fearful and happy faces were presented subliminally using backward masking, and skin conductance responses (SCRs) were measured as an autonomic index of emotional responses. SCRs to the two facial expressions were compared between groups. The high social anxiety group showed a significantly greater difference in SCRs between masked fearful and happy faces than the low social anxiety group. Social anxiety was thus associated with unconscious autonomic responses to fearful faces. A preattentive interpersonal-threat evaluation system may be an important factor in SAD.
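A hypothetical sketch of the group comparison described above: computing each participant's fearful-minus-happy SCR difference and comparing the two anxiety groups with an independent-samples t-test (the numbers and the specific test are assumptions, not the study's reported statistics):

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical SCR difference scores (masked fearful minus masked happy, in
# microsiemens) for high- and low-social-anxiety groups; not the study's data.
rng = np.random.default_rng(3)
high_anxiety = rng.normal(0.15, 0.10, 20)
low_anxiety  = rng.normal(0.03, 0.10, 20)

# A larger fearful-minus-happy difference in the high-anxiety group would mirror
# the group effect described in the abstract.
t, p = ttest_ind(high_anxiety, low_anxiety)
print(f"t = {t:.2f}, p = {p:.3f}")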

13.
A neural model is presented that explains how outcome-specific learning modulates affect, decision-making, and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: what are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex.

14.
The distributed model of face processing proposes an anatomical dissociation between brain regions that encode invariant aspects of faces, such as identity, and those that encode changeable aspects of faces, such as expression. We tested for a neuroanatomical dissociation for identity and expression in face perception using a functional MRI (fMRI) adaptation paradigm. Repeating identity across face pairs led to reduced fMRI signal in fusiform cortex and posterior superior temporal sulcus (STS), whereas repeating emotional expression across pairs led to reduced signal in a more anterior region of STS. These results provide neuroanatomical evidence for the distributed model of face processing and highlight a dissociation within right STS between a caudal segment coding identity and a more rostral region coding emotional expression.

15.
16.
Connections between the insular cortex and the amygdala were investigated with tritiated amino acid autoradiography and horseradish peroxidase histochemistry in the rhesus monkey. Our findings revealed widespread reciprocal connections between the insular cortex and almost all subnuclei of the amygdaloid complex. The posterior insula projects predominantly to the dorsal aspect of the lateral nucleus and to the central amygdaloid nucleus. In contrast, the anterior insula projects to the anterior amygdaloid area as well as to the medial, cortical, accessory basal magnocellular, medial basal, and lateral amygdaloid nuclei. Approximately 90% of the cells of origin of the insulo-amygdala projections are located in layers 2 and 3, with fewer labeled cells in layer 5. Amygdalo-insular projections arise predominantly from medial and anterior parts of the amygdala and reach mostly layers 2 and 5 of the insula. These results demonstrate that the insula and the amygdala have widespread reciprocal connections, which may explain their functional similarities.

17.
Event-related potentials provide evidence for face-distinctive neural activity that peaks at about 170 ms after the onset of a face stimulus (the N170 effect). We investigated the role of the perceptual mechanism reflected by the N170 by comparing the adaptation of the N170 amplitude when target faces were preceded either by identical face images or by different faces, relative to when they were preceded by objects. In two experiments, we demonstrate that the N170 is adapted equally by repetition of the same face and by repetition of different faces. Thus, our findings show that the N170 is sensitive to the category rather than the identity of a face. This outcome supports the hypothesis that the N170 effect reflects the activity of a perceptual mechanism that discriminates faces from objects and streams face stimuli to dedicated circuits specialized in encoding and decoding information about faces.

18.
19.
Connections between the amygdala and auditory cortical area TC, and the rostral, intermediate, and caudal regions of area TA (TAr, TAi, and TAc, respectively), were investigated in the macaque monkey (Macaca fuscata and Macaca nemestrina) following cortical deposits of wheat germ agglutinin-conjugated horseradish peroxidase (WGA-HRP). Areas TC and TAc received weak projections, and these derived only from the lateral basal nucleus. Areas TAi and TAr received projections from the lateral, lateral basal, and accessory basal nuclei. In contrast, projections from the cortex to the amygdala originated in areas TAi and TAr, but never in TAc or TC. The projections from areas TAi and TAr terminated only in the lateral nucleus, in particular in the lateral part of the middle and caudal portions of the amygdala. Thus, the amygdalofugal projections to the auditory cortices are more widespread and more complex than the projections from the auditory cortices to the amygdala. Experiments in which deposits were made at different sites along the rostrocaudal axis of the auditory cortex showed a progressive increase in the density of amygdala connections with more anteriorly placed injection sites.

20.
Differences in face processing have been observed between individuals with AS and control subjects at both the behavioural and neurological levels. The purpose of the present study was to investigate the neurophysiological basis of processing faces and facial features (eyes and mouths) in adults with AS relative to age- and gender-matched typically developing controls. These results were compared with ERPs elicited by objects in both groups to determine whether any differences were specific to facial stimuli. Although both groups showed earlier N170 latencies to faces than to face parts, and to eyes relative to mouths, adults with AS exhibited delayed N170 latencies to faces and face parts relative to controls. This difference was not observed for objects. Together these findings suggest that adults with AS may be slower to process facial information.

