Similar Documents
20 similar documents found (search took 15 ms)
1.
In a recent issue of this journal, Turnbull and Bryson (2001) examined a possible relation between left ear (right hemisphere) advantage for perception of emotional speech and the universal preference of mothers to cradle infants on the left side, referring to a hypothesis that we had previously suggested (Sieratzki & Woll, 1996). Although they concluded that their data do not support our theory, reanalysis does suggest a connection between the hemispheric asymmetry for speech prosody and the leftward cradling bias.

2.
Most women prefer to cradle an infant to the left side. It has been suggested that this bias is due to the specialisation of the right hemisphere for emotion, but investigations of visual asymmetries found no empirical support for this proposal. In a recent article, Sieratzki and Woll (1996) suggested that more emphasis should be placed on the auditory, rather than the visual, modality. Using a dichotic listening procedure we investigated whether ear preference for the perception of emotion in speech was related to the lateral cradling bias. Although the findings of both a leftward lateral cradling bias and a left ear emotion perception advantage were replicated, we found no association between the two variables, and thus fail to support the recent suggestions of a possible cause for the lateral cradling bias.

3.
The functional organization of the human auditory cortex is still not well understood with respect to speech perception and language lateralization. In particular, comparatively little data are available in the brain imaging literature on the timing of phonetic processing. We recorded auditory-evoked potentials (AEP) from 27 scalp and additional EOG channels in 12 healthy volunteers performing a free report dichotic listening task with simple speech sounds (CV syllables: [ba], [da], [ga], [pa], [ta], [ka]). ERP analysis employed independent component analysis (ICA) and wavelet denoising for artifact reduction and improvement of the SNR. The main finding was a 15-ms shorter average latency of the N1-AEP recorded from the scalp approximately overlying the left supratemporal cortical plane compared to the N1-AEP over the homologous right side. Corresponding N1 amplitudes did not differ between these sites. The individual AEP latency differences significantly correlated with the ear advantage as an index of speech/language lateralization. The behaviorally relevant difference in N1 latency between the hemispheres indicates that an important key to understanding speech perception is to consider the functional implications of neuronal event timing.

4.
Recent evidence indicates that emotional stimuli may be accorded special priority in information processing. Extending that research, this study tested the hypothesis that communication between the left and right hemispheres would be facilitated for emotional compared to non-emotional faces. Sixty-eight participants matched angry, happy, and neutral face photographs either within a single visual field (i.e., within one hemisphere) or across opposite visual fields (i.e., between the two hemispheres). An overall performance advantage favoring across-field trials was modulated by the emotionality of the face. Specifically, the across-field advantage was significantly greater for angry and happy faces compared to neutral faces, a pattern evident for both accuracy and reaction time data. Possible interpretations of the enhanced interhemispheric processing advantage include increased computational complexity or subcortical transfer of emotionally salient information.

5.
There are multiple reasons to expect that recognising the verbal content of emotional speech will be a difficult problem, and recognition rates reported in the literature are in fact low. Including information about prosody improves recognition rate for emotions simulated by actors, but its relevance to the freer patterns of spontaneous speech is unproven. This paper shows that recognition rate for spontaneous emotionally coloured speech can be improved by using a language model based on increased representation of emotional utterances. The models are derived by adapting an already existing corpus, the British National Corpus (BNC). An emotional lexicon is used to identify emotionally coloured words, and sentences containing these words are recombined with the BNC to form a corpus with a raised proportion of emotional material. Using a language model based on that technique improves recognition rate by about 20%.
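The corpus-adaptation step described above can be sketched minimally. The lexicon and sentences below are illustrative placeholders, not the BNC or the authors' actual lexicon:

```python
# Minimal sketch of the corpus-adaptation idea: identify emotionally
# coloured sentences via a lexicon, then recombine them with the base
# corpus to raise the proportion of emotional material.
emotional_lexicon = {"angry", "happy", "sad", "afraid"}

corpus = [
    "the train arrives at noon",
    "i am so happy today",
    "he sounded angry on the phone",
    "please close the window",
]

def is_emotional(sentence, lexicon):
    # A sentence counts as emotionally coloured if it contains
    # any word from the emotional lexicon.
    return any(word in lexicon for word in sentence.split())

emotional = [s for s in corpus if is_emotional(s, emotional_lexicon)]

# Recombine: append the emotional sentences to the base corpus so their
# proportion rises before the language model is (re)trained on it.
adapted_corpus = corpus + emotional
print(len(emotional), len(adapted_corpus))  # 2 6
```

Here the emotional sentences go from 2 of 4 (50%) in the base corpus to 4 of 6 (67%) in the adapted one; a language model trained on the adapted corpus assigns higher probability to emotional word sequences.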

6.
Images of individuals posing with the left cheek toward the camera are rated as more emotionally expressive than images with the right cheek toward the camera, which is theorized to be due to right hemisphere specialization for emotion processing. Liberals are stereotyped as being more emotional than conservatives. In the present study, we presented images of people displaying either leftward or rightward posing biases in an online task, and asked participants to rate people’s perceived political orientation. Participants rated individuals portrayed with a leftward posing bias as significantly more liberal than those presented with a rightward bias. These findings support the idea that posing direction is related to perceived emotionality of an individual, and that liberals are stereotyped as more emotional than conservatives. Our results differ from those of a previous study, which found conservative politicians are more often portrayed with a leftward posing bias, suggesting differences between posing output for political parties and perceived political orientation. Future research should investigate this effect in other countries, and the effect of posing bias on perceptions of politicians.

7.
Conventional dichotic listening techniques can unquestionably be used for nominal (left vs. right) categorization of ear or hemispheric differences. These techniques cannot, however, be used for ordinal comparisons of the size of ear advantages among different subjects or different tasks if the measure of the size of the lateral asymmetry is confounded with effects of varying overall performance, attentional bias, or choice of laterality index. A psychophysical procedure is described which is designed to avoid these confounds by measuring discrimination ability, in decibels, as the interaural intensity difference (IID) required for a specific accuracy. Results from two experiments with right-handed subjects showed an average right-ear advantage of about 3 dB for phoneme discrimination and an average left-ear advantage of the same size for intonation discrimination.
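The performance confound described above is easy to demonstrate. The sketch below (ours, not the authors') uses the common (R − L)/(R + L) laterality index: two subjects with the same absolute ear difference but different overall accuracy receive very different index magnitudes, which is exactly why the abstract argues for a psychophysical measure instead:

```python
# Hypothetical sketch (not from the paper): the conventional laterality
# index whose magnitude is confounded with overall performance level.
def laterality_index(right_correct, left_correct):
    """Common (R - L) / (R + L) index over correct-report counts."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct responses")
    return (right_correct - left_correct) / total

# Two subjects with the same 10-item ear difference but different
# overall accuracy get very different index magnitudes:
print(round(laterality_index(40, 30), 3))  # 0.143
print(round(laterality_index(80, 70), 3))  # 0.067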

8.
Tonal sequences differing in emotional quality were presented dichotically. Subjects listened to a specified ear and judged the emotional tone of the stimulus heard at that ear. Accuracy was better for identifying the emotional tone of stimuli presented to the left ear. This left ear advantage was greatest where the target and competing stimuli were of different affect. The findings provide further evidence for the role of the right hemisphere in processing emotional information.

9.
P. Eling, Neuropsychologia, 1983, 21(4), 419-423
In this study data are reported on the reliability of scores of ear advantages obtained in two dichotic tasks, viz. category monitoring and rhyme monitoring. Stimuli were presented at a much higher rate than in earlier reported experiments. This procedure appears to be effective in producing acceptably reliable results with respect to both the direction and the magnitude of ear advantage scores. Furthermore, the data on responses to individual target words are discussed. It appears that neither stimulus characteristics of the target words alone nor strategies of the subjects can explain the major part of variation in reaction times and ear differences. It is suggested that the combination of the stimulus characteristics of the target word and of the word simultaneously presented to the other ear is a major factor determining response latencies.

10.
The typical finding in dichotic listening with verbal stimuli is the right ear advantage (REA), indicating a left hemisphere processing superiority, thus making this an effective tool in studying hemispheric asymmetry. It has been shown that the amplitude of the REA can be modulated by instructions to direct attention to left or right side. The current study attempted to modulate the REA by changing the dichotic listening stimulus situation. In Experiment 1, a consonant vowel (CV) syllable prime was presented binaurally briefly before the dichotic stimuli (consisting of two CVs). The prime could be the same as either the left or right ear dichotic stimulus, or it could be a different stimulus. Participants were instructed to report the CV they heard best from the dichotic syllable pair. The traditional REA was found when the prime was different from both dichotic stimuli. When the prime matched the CV in the left half of the subsequent dichotic pair, the REA was increased, while if the prime matched the right half, the REA was reduced. In order to see at which perceptual stage the modulation takes place, in Experiment 2 the prime was visual, presented on a PC screen. The same effect was seen, although the modulation of the REA was weaker. We propose that the memory trace of the prime is a source of interference, and causes cognitive control of attention to inhibit recognition of stimuli similar to recent distractors. Based on previous studies we propose that this inhibition of attention is performed by prefrontal cortical areas. Similarities to the mechanisms involved in negative priming and implications for auditory laterality studies are pointed out.

11.
12.
Slow brain potentials were recorded in left-handers and right-handers during: (i) processing of language and mental arithmetic tasks, without vocalization, and (ii) subsequent writing down of the answers with either the right or left hand. Left-sided laterality of negative potentials was taken as evidence of hemispheric dominance. It appeared during the processing of words and numbers in 26 of the 30 left-handers and was localized mainly in the left frontal and temporo-parietal regions. Similar results were found with the right-handers. This electrophysiological evidence indicates that the left hemisphere is dominant for language and calculation in the vast majority of left-handers. Only when writing with either their left or right hand do left-handers show less left-sided laterality than right-handers.

13.
An experiment is reported in which subjects were required to make a same-different judgment about two sequentially presented stimuli, the first of which was centrally presented and the second displaced either to the left or right of fixation. Three categories of stimuli were used: photographs of hands, faces, and silhouettes of unfamiliar aeroplanes. The results showed a left visual field advantage for both hands and faces but no hemifield effect for aeroplane silhouettes. Implications of these data for future research are briefly considered.

14.
In three experiments, 5- to 8-year-old children reported digit names from one ear for 30 trials before shifting attention to the other ear. Stimuli were presented dichotically in Experiment 1 and monaurally in Experiments 2 and 3. Dichotic stimulation yielded not only a right-ear advantage but also a priming effect that reflects difficulty in shifting attention in either direction. With monaural stimulation, however, performance with the second ear to be monitored was superior to performance with the first ear. The priming effect thus depends on interaural competition and cannot be attributed to general factors such as fatigue or motivation.

15.
Cortical auditory systems: speech and other complex sounds
The neural systems that mediate human perception of speech and other complex sounds are currently the focus of considerable research. Brain mapping studies have provided new insights into the cortical processing of complex sounds. Findings from three lines of brain mapping research (stroke-lesion, neuroimaging, and electrocortical mapping studies) are reviewed. Unresolved questions regarding the relative contributions of cortical and subcortical auditory processing and the existence of separate, functionally specialized, cortical auditory systems for processing speech and nonspeech sounds are discussed. An integrated approach is proposed for future research on the neural bases of complex sound processing.

16.
17.
Hemispheric asymmetries for processing duration of non-verbal and verbal sounds were investigated in 60 right-handed subjects. Two dichotic tests with attention directed to one ear were used, one with complex tones and one with consonant-vowel syllables. Stimuli had three possible durations: 350, 500, and 650 ms. Subjects judged whether the duration of a probe was same or different compared to the duration of the target presented before it. Target and probe were part of two dichotic pairs presented with a 1-s interstimulus interval and occurred on the same side. Dependent variables were reaction time and accuracy. Results showed a significant right ear advantage for both dependent variables with both complex tones and consonant-vowel syllables. This study provides behavioural evidence of a left hemisphere specialization for duration perception of both musical and speech sounds, in line with the current view based on a parameter-specific, rather than domain-specific, structuring of hemispheric perceptual asymmetries.

18.
C. J. Jackson, Laterality, 2005, 10(4), 305-320
Two studies investigate how cognitions of aurally presented information interact with aural preference (self-reported preferred ear for listening) in the prediction of personality. In Study 1, participants provided attractiveness cognitions of various statements after listening to aurally presented material. Aural preference × attractiveness interactions significantly predicted Extraversion and Neuroticism. In Study 2, participants provided cognitions of pleasantness from various scenarios. An aural preference × pleasantness interaction significantly predicted Neuroticism. Although other interpretations are possible, I conclude that these findings support the idea of aural preference as a useful measure of hemispheric asymmetry, such that the right hemisphere (left aural preference) provides facilitation of emotional expression, whereas the left hemisphere (right aural preference) provides suppression. My findings support a more historical view of emotional asymmetry than the more modern approach-avoidance perspective and suggest that moderating effects of hemispheric asymmetry are important to include in studies investigating emotions associated with personality.

19.
C. Jackson, Laterality, 2013, 18(4), 305-320

20.
Two experiments are reported which were aimed at testing the effect of phonetic similarity and order of report on the right ear advantage in consonant-vowel dichotic tasks. The stimuli were consonant-vowel syllables which comprised the six stop consonants: /ba/, /da/, /ga/, /pa/, /ta/ and /ka/. In Experiment 1 the syllables were contrasted on place of articulation (condition 1) or on voicing (condition 2). In Experiment 2 the stimuli were contrasted either on one feature (place or voicing) or on both features (place and voicing). The results showed that the right ear advantage did not depend on phonetic similarity, whereas it depended on the order of report, being stronger when the perceptual channel was considered.
