Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
How do native listeners process grammatical errors that are frequent in non-native speech? We investigated whether the neural correlates of syntactic processing are modulated by speaker identity. ERPs to gender agreement errors in sentences spoken by a native speaker were compared with ERPs to the same errors spoken by a non-native speaker. In line with previous research, gender violations in native speech resulted in a P600 effect (a larger P600 for violations than for correct sentences), but when the same violations were produced by the non-native speaker with a foreign accent, no P600 effect was observed. Control sentences with semantic violations elicited comparable N400 effects for both the native and the non-native speaker, confirming that there was no general integration problem for foreign-accented speech. The results demonstrate that the P600 is modulated by speaker identity, extending our knowledge of how speaker characteristics shape the neural correlates of speech processing.
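For readers unfamiliar with how effects such as the P600 are quantified, the sketch below shows the usual logic: average the voltage over trials within a late time window and subtract the correct-sentence condition from the violation condition. The 500-800 ms window, the simulated data, and the function names are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def mean_amplitude(epochs, times, t_start, t_end):
    """Mean voltage in a time window, averaged over trials and samples.

    epochs: (n_trials, n_samples) for one electrode and condition
    times:  (n_samples,) epoch time axis in seconds
    """
    window = (times >= t_start) & (times < t_end)
    return epochs[:, window].mean()

# Hypothetical data: 40 trials x 1000 samples (-200 to 800 ms at 1 kHz)
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 1000)
correct = rng.normal(0.0, 2.0, (40, 1000))
violation = rng.normal(0.0, 2.0, (40, 1000))
violation[:, times >= 0.5] += 3.0  # simulated late positivity

# P600 effect = violation minus correct in the 500-800 ms window
effect = (mean_amplitude(violation, times, 0.5, 0.8)
          - mean_amplitude(correct, times, 0.5, 0.8))
print(f"P600 effect: {effect:.2f} microvolts")
```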

2.
Williams syndrome (WS), a neurodevelopmental genetic disorder caused by a microdeletion on chromosome 7, is described as displaying an intriguing socio-cognitive phenotype. Deficits in prosody production and comprehension have been consistently reported in behavioral studies; the neurobiological processes underlying prosody processing in WS, however, remain to be clarified. This study aimed to characterize the electrophysiological response to neutral, happy, and angry prosody in WS, and to examine whether this response depended on the semantic content of the utterance. A group of 12 participants (5 female, 7 male) diagnosed with WS, aged 9 to 31 years, was compared with a group of typically developing participants individually matched for chronological age, gender, and laterality. After inspection for EEG artifacts, data from 9 participants with WS and 10 controls were included in the ERP analyses. Participants were presented with neutral, positive, and negative sentences in two conditions: (1) with intelligible semantic and syntactic information; (2) with unintelligible semantic and syntactic information (‘pure prosody’ condition). They were asked to decide which emotion underlay each auditory sentence. Atypical event-related potential (ERP) components related to prosodic processing (N100, P200, N300) were found in WS: a reduced N100 for prosodic sentences with semantic content; a more positive P200 for sentences with semantic content, particularly for happy and angry intonations; and a reduced N300 in both sentence conditions. These findings suggest abnormalities in early auditory processing, indicating a bottom-up contribution to the impairment in emotional prosody processing and comprehension. At least for the N100 and P200, they also suggest top-down contributions of semantic processes to the sensory processing of speech. This study showed, for the first time, that abnormalities in ERP measures of early auditory processing in WS are also present during the processing of emotional vocal information, which may represent a physiological signature of impaired on-line language and socio-emotional processing.

3.
Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been based primarily on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope-based cocktail party decoding in two multisensory cocktail party situations: (a) congruent AV: facing the attended speaker while ignoring another speaker represented by an audio-only stream; and (b) incongruent AV (eavesdropping): attending to the audio-only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we could successfully decode attention to congruent audiovisual speech, and could also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition, we found alpha power to be a reliable measure of attention to visual speech: using parieto-occipital alpha power, we could distinguish whether subjects were attending to or ignoring the speaker's face. Considering the practical applications of these methods, we demonstrate that the attended speech can be determined successfully with only six near-ear electrodes. This work extends the current framework for decoding attention to speech to more naturalistic scenarios and, in doing so, provides additional neural measures that may be incorporated to improve decoding accuracy.
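The stimulus-reconstruction approach these decoders build on can be sketched compactly: regress time-lagged EEG onto the attended speech envelope with ridge regression, then classify attention by which speaker's envelope the reconstruction correlates with; a separate band-power measure captures the parieto-occipital alpha effect. This is a generic sketch of the method family (lag count, ridge strength, and the alpha band are assumptions), not the study's code.

```python
import numpy as np
from scipy.signal import welch

def lagged(eeg, max_lag):
    """Stack time-lagged copies of EEG (n_samples, n_channels)
    into a design matrix (n_samples, n_channels * max_lag)."""
    n, c = eeg.shape
    X = np.zeros((n, c * max_lag))
    for lag in range(max_lag):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(eeg, envelope, max_lag=32, ridge=1e3):
    """Ridge regression from lagged EEG to the attended envelope."""
    X = lagged(eeg, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, max_lag=32):
    """Reconstruct the envelope; the attended speaker is the one
    whose envelope correlates best with the reconstruction."""
    rec = lagged(eeg, max_lag) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return ("A", r_a) if r_a > r_b else ("B", r_b)

def alpha_power(eeg, fs, band=(8, 12)):
    """Per-channel power in the alpha band (visual-attention marker)."""
    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=0)
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].mean(axis=0)
```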

4.
Social Neuroscience, 2013, 8(1), 59–75
Abstract

The orbitofrontal cortex (OFC) is functionally linked to a variety of cognitive and emotional functions. In particular, lesions of the human OFC lead to large-scale changes in social and emotional behavior. For example, patients with OFC lesions are reported to suffer from deficits in affective decision-making, including impaired recognition of emotional face and voice expressions (e.g., Hornak et al., 1996, 2003). However, previous studies have failed to acknowledge that emotional processing is a multistage process: different stages of emotional processing (e.g., early vs. late) could be affected in qualitatively different ways in the same patient group. The present study investigated this possibility, testing implicit emotional speech processing in an ERP experiment followed by an explicit behavioral emotion recognition task. OFC patients listened to vocal expressions of anger, fear, disgust, and happiness, compared against a neutral baseline, spoken either with or without lexical content. In line with previous evidence (Paulmann & Kotz, 2008b), both patients and healthy controls differentiated emotional from neutral prosody within 200 ms (P200). However, recognition of emotional vocal expressions was impaired in OFC patients relative to healthy controls. The current data serve as the first evidence that emotional prosody processing in OFC patients is impaired only at a late, and not at an early, processing stage.

5.
This study examines entrainment of speech timing and rhythm with a model speaker in healthy persons and individuals with Parkinson’s. We asked whether participants coordinate their speech initiation and rhythm with the model speaker, and whether the regularity of the sentences’ metrical structure influences this behaviour. Ten native German speakers with hypokinetic dysarthria following Parkinson’s and ten healthy controls heard a sentence (‘prime’) and subsequently read aloud another sentence (‘target’). The speech material comprised 32 metrically regular and 32 metrically irregular sentences. Turn-taking delays and alignment of speech rhythm were measured using speech wave analyses. Results showed that healthy participants initiated speech more closely in rhythm with the model speaker than patients did. Metrically regular prime sentences induced anticipatory responses relative to metrically irregular primes. Entrainment of speech rhythm was greater for metrically regular targets, especially in individuals with Parkinson’s. We conclude that individuals with Parkinson’s may exploit metrically regular cues in speech.

6.
Emotional manipulations have been demonstrated to produce leftward shifts in perceptual asymmetries. However, much of this research has used linguistic tasks to assess perceptual asymmetry, so there are two interpretations of the leftward shift: it may reflect a leftward shift in the spatial distribution of attention as a consequence of emotional activation of the right hemisphere, or it may reflect emotional facilitation of right-hemisphere linguistic processing. The current study used two non-linguistic attention tasks to determine whether emotional prosody influences the spatial distribution of visual attention. In a dual-task paradigm, participants listened to semantically neutral sentences spoken with neutral, happy, or sad prosody while completing a target discrimination task (Experiment 1) or a target detection task (Experiments 2 and 3). Only one condition in one experiment yielded perceptual asymmetries that interacted with emotional prosody, suggesting that task-irrelevant emotional prosody only rarely directs attention. A more likely cause of the leftward perceptual shift for comprehension of emotional speech is therefore facilitation of right-hemisphere linguistic processing.

7.
BACKGROUND: Oxytocin dysfunction might contribute to the development of social deficits in autism, a core symptom domain and potential target for intervention. This study explored the effect of intravenous oxytocin administration on the retention of social information in autism. METHODS: Oxytocin and placebo challenges were administered to 15 adult subjects diagnosed with autism or Asperger's disorder, and comprehension of affective speech (happy, indifferent, angry, and sad) in neutral-content sentences was tested. RESULTS: All subjects showed improvements in affective speech comprehension from pre- to post-infusion; however, whereas those who received placebo first tended to revert to baseline after a delay, those who received oxytocin first retained the ability to accurately assign emotional significance to speech intonation on the speech comprehension task. CONCLUSIONS: These results are consistent with studies linking oxytocin to social recognition in rodents, as well as with studies linking oxytocin to prosocial behavior in humans, and suggest that oxytocin might facilitate social information processing in those with autism. These findings also provide preliminary support for the use of oxytocin in the treatment of autism.

8.
The interaction of information derived from the voice and facial expression of a speaker contributes to the interpretation of the speaker's emotional state and to the formation of inferences about information that may have been merely implied in the verbal communication. We therefore investigated the brain processes responsible for the integration of emotional information originating from different sources. Although several studies have reported possible sites of integration, further investigation using a neutral emotional condition is required to isolate emotion-specific networks. Using functional magnetic resonance imaging (fMRI), we explored the brain regions involved in integrating emotional information from different modalities, compared with those involved in integrating emotionally neutral information. There was significant activation in the superior temporal gyrus (STG), inferior frontal gyrus (IFG), and parahippocampal gyrus, including the amygdala, under the bimodal versus the unimodal condition, irrespective of emotional content. We confirmed the results of previous studies by finding that the bimodal emotional condition elicited strong activation in the left middle temporal gyrus (MTG), and we extended this finding by using a neutral condition to localize the effects of emotional factors. We found anger-specific activation in the posterior cingulate, fusiform gyrus, and cerebellum, and happiness-specific activation in the MTG, parahippocampal gyrus, hippocampus, claustrum, inferior parietal lobule, cuneus, middle frontal gyrus (MFG), IFG, and anterior cingulate. These emotion-specific activations suggest that each emotion uses a separate network to integrate bimodal information while sharing a common network for cross-modal integration.

9.
Two studies were conducted to determine whether the poor performance of patients with right-hemisphere damage (RHD) on emotional prosody tasks is attributable to a defect in perceiving/categorizing emotional prosody (processing defect) or to distraction by the semantic content of affectively intoned sentences (distraction defect). In one study, patients with RHD, left-hemisphere damage (LHD), or no hemispheric damage (NHD) listened to affectively intoned sentences in which the semantic content was congruent or incongruent with the emotional prosody. In a second study, the patients listened to affectively intoned sentences that had been speech-filtered or left unfiltered. Findings from these studies indicate that both processing and distraction defects are present in RHD patients.

10.
Hughlings Jackson noted that, although some aphasic patients were unable to use propositional speech, affective speech appeared to be spared. The purpose of this experiment was to study patients with unilateral hemispheric disease in order to ascertain whether there are hemispheric asymmetries in the comprehension of affective speech. Six subjects had right temporoparietal lesions (left unilateral neglect) and six subjects had left temporoparietal lesions (fluent aphasias). These subjects were presented with 32 tape-recorded sentences. In 16 trials the patients were asked to judge the emotional mood of the speaker (happy, sad, angry, indifferent), and in 16 trials the patients were asked to judge the content. Line drawings containing facial expressions of the four emotions, or line drawings corresponding to the four basic contents, were displayed with each sentence, and the patient responded by pointing. All 12 subjects made perfect scores on the content portion of the test. On the emotional portion, the right-hemisphere patients scored a mean of 4.17 and the left-hemisphere group scored a mean of 10.17. The difference between these means is significant (P < 0.01) and suggests that patients with right-hemisphere dysfunction and neglect have a defect in the comprehension of affective speech.

11.
Deficits in emotional prosodic processing, the expression of emotion in the voice, have been widely reported in patients with schizophrenia, not only in comprehending emotional prosody but also in expressing it. Given that prosodic cues are important in memory for voice and speaker identity, Cutting has proposed that prosodic deficits may contribute to the misattribution that appears to occur in auditory hallucinations in psychosis. The present study compared hallucinating patients with schizophrenia, non-hallucinating patients, and normal controls on an emotional prosodic processing task. It was hypothesised that hallucinators would demonstrate greater deficits in emotional prosodic processing than non-hallucinators and normal controls. Participants were 67 patients with a diagnosis of schizophrenia or schizoaffective disorder (38 hallucinating, 29 non-hallucinating) and 31 normal controls. The prosodic processing task comprised a series of semantically neutral sentences expressed in happy, sad, and neutral voices, which participants rated on a 7-point Likert scale from sad (−3) through neutral (0) to happy (+3). Significant deficits on the task were found in hallucinating patients compared to non-hallucinating patients and normal controls; no significant differences were observed between non-hallucinating patients and normal controls. Patients experiencing auditory hallucinations were thus not as successful in recognising and using prosodic cues as non-hallucinating patients. These results are consistent with Cutting's hypothesis that prosodic dysfunction may mediate the misattribution of auditory hallucinations.

12.
13.
Patients with Parkinson's disease (PD) tend to speak monotonously, with little modulation of pitch and intensity. The goal of this study was to determine whether these speech changes can be explained mainly by motor impairment, i.e., akinesia and rigidity of the articulatory apparatus, or whether alterations of emotional processing play an additional role. Sixteen patients with mild PD and 16 healthy controls (HC) were compared. Fundamental frequencies (pitch) and intensities (loudness) were determined as (1) the maximal upper and lower values achieved in non-emotional speech (phonation capacity), (2) the upper and lower values used when speaking “Anna” with a requested emotional intonation (neutral, sad, happy) (production task), or (3) the values used when imitating a professional speaker (imitation task). Although the groups did not differ significantly in phonation capacity, patients showed a significantly smaller pitch and intensity range than HC in the production task. In the imitation task, however, the ranges were again similar. These results suggest that, in addition to motor impairment, alterations of emotional processing contribute to speech changes in PD, especially regarding emotional prosody. © 2008 Movement Disorder Society
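The pitch and intensity ranges at issue can be approximated from a recording in a few lines. The sketch below uses librosa's pYIN pitch tracker and frame-level RMS energy; the F0 search bounds and the min/max definition of 'range' are illustrative assumptions, not the study's measurement protocol.

```python
import numpy as np
import librosa

def pitch_and_intensity_range(path):
    """Return (pitch range in semitones, intensity range in dB)."""
    y, sr = librosa.load(path, sr=None)
    # F0 track via probabilistic YIN; unvoiced frames come back as NaN
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[~np.isnan(f0)]
    pitch_range = 12 * np.log2(f0.max() / f0.min())   # semitones
    # Intensity as frame RMS converted to dB
    rms = librosa.feature.rms(y=y)[0]
    rms_db = 20 * np.log10(np.maximum(rms, 1e-10))
    return pitch_range, rms_db.max() - rms_db.min()
```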

14.
Prior research has revealed sex differences in the processing of unattended changes in speaker prosody. The present study aimed to investigate the role of estrogen in mediating these effects. To this end, the electroencephalogram (EEG) was recorded while participants watched a silent movie with subtitles and passively listened to a syllable sequence containing occasional changes in speaker prosody. In one block these changes were neutral, whereas in another block they were emotional. Estrogen values were obtained for each participant and correlated with the mismatch negativity (MMN) amplitude elicited in the EEG. As predicted, female listeners had higher estrogen values than male listeners and showed reduced MMN amplitudes to neutral as compared to emotional changes in speaker prosody. Moreover, in both male and female listeners, MMN amplitudes were negatively correlated with estrogen when the change in speaker prosody was neutral, but not when it was emotional. This suggests that estrogen is associated with reduced distractibility by neutral, but not emotional, events; emotional events are spared from this reduction in distractibility and are more likely to penetrate voluntary attention directed elsewhere. Taken together, the present findings provide evidence for a role of estrogen in human cognition and emotion.
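The key analysis here is an across-participant correlation between hormone level and ERP amplitude. A minimal sketch on fabricated numbers, chosen only to mimic the reported direction of the effect (values, units, and sample size are invented):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 20                              # hypothetical sample size
estrogen = rng.uniform(20, 200, n)  # pg/ml, illustrative values
# Simulated MMN magnitudes (microvolts) that shrink as estrogen rises,
# matching the reported negative correlation in the neutral condition
mmn_neutral = 3.0 - 0.01 * estrogen + rng.normal(0, 0.3, n)

r, p = pearsonr(estrogen, mmn_neutral)
print(f"r = {r:.2f}, p = {p:.3f}")  # expect r < 0
```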

15.
To evaluate the right hemisphere's role in encoding speech prosody, an acoustic investigation of timing characteristics was undertaken in speakers with and without focal right-hemisphere damage (RHD) following cerebrovascular accident. Utterances varying along different prosodic dimensions (emphasis, emotion) were elicited from each speaker using a story completion paradigm, and measures of utterance rate and vowel duration were computed. Results demonstrated parallelism in how RHD and healthy individuals encoded the temporal correlates of emphasis in most experimental conditions. Differences were observed, however, in how RHD speakers employed temporal cues to specify some aspects of prosodic meaning (especially emotional content), and these corresponded to reduced perceptibility of the prosodic meanings conveyed by the RHD speakers. The findings indicate that RHD individuals are most disturbed when expressing prosodic representations that vary in a graded (rather than categorical) manner in the speech signal (Blonder, Pickering, Heath et al., 1995; Pell, 1999a).

16.
An acoustic-perceptual investigation was performed on various aspects of timing in the speech of a 21-year-old adult speaker of Thai who reportedly did not start speaking until the age of 7. Selected aspects of timing included: (1) the voicing contrast in Thai homorganic word-initial stops; (2) the duration contrast between Thai short and long vowels; and (3) the duration patterns of phrases and sentences in Thai connected speech. Measures of stop consonant voicing and vowel length were taken from monosyllabic citation forms; measures of syllables, phrases, and sentences were taken from an oral reading of a paragraph-sized passage. Findings indicated that speech timing skills related to stop consonant voicing, vowel length, and rhythm can be differentially impaired and, moreover, that the pattern of impairment appears to be related to the size of the temporal planning unit.
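The two segmental measures named above, voice onset time and the short/long vowel duration contrast, reduce to simple arithmetic over labelled intervals. A toy sketch assuming a hypothetical hand-annotation format (the Segment class and its labels are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    label: str    # e.g. "release", "voicing", "vowel:short", "vowel:long"
    start: float  # seconds
    end: float

def vot_ms(release: Segment, voicing: Segment) -> float:
    """Voice onset time: voicing onset minus stop release, in ms."""
    return (voicing.start - release.start) * 1000

def duration_ratio(short_vowels, long_vowels) -> float:
    """Mean long-vowel duration over mean short-vowel duration;
    a ratio well above 1 means the length contrast is maintained."""
    mean = lambda segs: sum(s.end - s.start for s in segs) / len(segs)
    return mean(long_vowels) / mean(short_vowels)

# Hypothetical measurements from citation forms
print(vot_ms(Segment("release", 0.100, 0.105),
             Segment("voicing", 0.160, 0.300)))             # 60.0 ms
print(duration_ratio([Segment("vowel:short", 0.20, 0.28)],
                     [Segment("vowel:long", 0.50, 0.66)]))  # 2.0
```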

17.
Background: Studies of aphasic sentence production have identified a number of promising approaches to improving performance at the single-sentence level, but these studies have typically failed to show measurable effects on multi-sentence productions (spontaneous or narrative speech). The difficulty aphasic speakers have in producing connected speech during therapy likely contributes to this effect. Computer software that allows patients to record, replay, and concatenate partial utterances has shown promise in allowing narrative-level practice during treatment of even severely non-fluent patients.

Aims: This single-case study continues research using SentenceShaper®, a computer program that supports speakers' productions while they are being formulated. The goal is to investigate the utility of a two-step treatment that supplements improvements achieved from use of the software alone with explicit structural treatment of multi-clause sentences.

Methods & Procedures: We describe an aphasic speaker (CI) with severely non‐fluent, fragmented, and agrammatic speech who participated in two treatment phases. Initially, as in previous studies, CI practised producing narratives (based on wordless picture books or silent videos) while using SentenceShaper, with no explicit focus on specific syntactic elements. This phase produced marked structural improvement, so a second treatment, focused on the production of multi‐clause sentences, was designed to exploit his success using the system. Following a period of targeted treatment on such structures, CI practiced producing narratives that incorporated these structures with the help of SentenceShaper. Structural analyses based on the Quantitative Production Analysis system compared Baseline and Post‐treatment 1 performance, and then compared improvements Post‐treatment 1 with those shown after treatment 2.

Outcomes & Results: Structural measures (including mean sentence length, proportion of words in sentences and sentence well‐formedness) improved significantly from Baseline following Treatment 1, and improved significantly again following Treatment 2, such that sentence length and well‐formedness moved into the normal range.

Conclusions: The results indicate that this combined approach may help improve the connected speech of even chronic and severely non-fluent speakers. The characteristics of this aphasic speaker that might have contributed to this outcome, and the limitations of this study, are considered.

18.
Previous research has shown that it is possible to predict which speaker is attended in a multispeaker scene by analyzing a listener's electroencephalography (EEG) activity. In this study, existing linear models that learn the mapping from neural activity to an attended speech envelope are replaced by a non-linear neural network (NN). The proposed architecture takes into account the temporal context of the estimated envelope and is evaluated using EEG data obtained from 20 normal-hearing listeners who focused on one speaker in a two-speaker setting. The network is optimized with respect to the frequency range and the temporal segmentation of the EEG input, as well as the cost function used to estimate the model parameters. To identify the salient cues involved in auditory attention, a relevance algorithm is applied that highlights the electrode signals most important for attention decoding. In contrast to linear approaches, the NN profits from a wider EEG frequency range (1–32 Hz) and achieves a decoding performance seven times higher than the linear baseline. Relevant EEG activations were found at physiologically plausible locations 170 ms after the speech stimulus; this was not observed when the model was trained on the unattended speaker. Our findings therefore indicate that non-linear NNs can provide insight into physiological processes by analyzing EEG activity.
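To make the linear-to-non-linear step concrete, here is a minimal sketch of such a decoder: a small feedforward network mapping a window of band-passed EEG to one sample of the attended envelope. The layer sizes, activation, and mean-squared-error cost are placeholder assumptions; the study's actual architecture, input segmentation, and cost function were tuned and are not reproduced here.

```python
import torch
import torch.nn as nn

class EnvelopeNet(nn.Module):
    """Non-linear decoder: a window of EEG (channels x lags)
    mapped to one attended-envelope sample."""
    def __init__(self, n_channels=64, n_lags=50, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_lags, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, n_channels, n_lags)
        return self.net(x).squeeze(-1)

model = EnvelopeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # simplest stand-in for the tuned cost function

# One hypothetical training step on random tensors
x = torch.randn(32, 64, 50)  # 32 EEG windows (1-32 Hz band assumed)
y = torch.randn(32)          # corresponding envelope samples
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```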

19.
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry, and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset, only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200–500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role such a system must have played in the development of human social interactions.
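Time-resolved sensor-level decoding of this kind fits an independent classifier at every time point and traces decoding accuracy across the epoch. A generic sketch on simulated data (the classifier choice, trial counts, and injected effect latency are illustrative, not the study's pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def timepoint_decoding(X, y, cv=5):
    """Cross-validated decoding accuracy at each time point.

    X: (n_trials, n_sensors, n_times) sensor data
    y: (n_trials,) condition labels, e.g. angry vs. other faces
    """
    scores = np.empty(X.shape[2])
    for t in range(X.shape[2]):
        clf = LogisticRegression(max_iter=1000)
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores

# Simulated data: 100 trials, 30 sensors, 120 time points
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30, 120))
y = rng.integers(0, 2, 100)
X[y == 1, :, 40:] += 0.5  # inject a condition difference late in the epoch
acc = timepoint_decoding(X, y)
print("peak accuracy:", acc.max().round(2))
```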

20.
Whereas most previous studies of emotion in language have focused on single words, we investigated the influence of a word's emotional valence on the syntactic and semantic processes unfolding during sentence comprehension, by means of event-related brain potentials (ERPs). Experiment 1 assessed how positive, negative, and neutral adjectives that could be either syntactically correct or incorrect (violation of number agreement) modulate syntax-sensitive ERP components. The amplitude of the left anterior negativity (LAN) to morphosyntactic violations increased for negative words and decreased for positive words relative to neutral words. In Experiment 2, the same sentences were presented, but the positive, negative, and neutral adjectives could be either semantically correct or anomalous given the sentence context. The N400 to semantic anomalies was not significantly affected by the valence of the violating word. However, positive words in a sentence seemed to influence semantic correctness decisions, also triggering an apparent N400 reduction irrespective of the correctness of the word. Later linguistic processes, as reflected in the P600 component, were unaffected in either experiment. Overall, our results indicate that the emotional valence of a word affects the syntactic and semantic processing of sentences, with differential effects as a function of valence and domain.
