Similar Documents
1.
Two experiments were carried out involving the measurement of simple reaction-time when subjects responded to speech and to non-speech stimuli. In the first, subjects were required to make a speech response (uttering the vowel [a:]) to one speech stimulus (the vowel [a:]) and three non-speech stimuli (a complex tone, a telephone bell and a click). The click stimulus gave significantly longer reaction-times than the other three stimuli; since all stimuli were equated for peak intensity delivered to the subjects' ears, this was attributed to the short duration of the click (25 msec). There was no evidence that compatibility between the speech stimulus and the speech response had any influence on reaction-time. The second experiment employed a 2 × 2 design with two stimuli and two response modes. The stimuli were the vowel [a:] and the telephone bell; the response modes were key-pressing and uttering the vowel [a:]. The speech stimulus and the speech response gave significantly longer reaction-times than the non-speech stimulus and response. The minimum time for a reaction requiring speech reception is of the order of 180 msec, and the use of the motor speech mechanism adds about 30 msec to reaction-time. Again no interaction was found between stimulus and response, probably because of the extremely simple nature of the speech tasks imposed.
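The component estimates above follow subtractive logic: the cost of the motor speech mechanism is the spoken-response reaction-time minus the key-press reaction-time for the same stimulus. A minimal sketch, with hypothetical mean reaction-times standing in for the measured values:

```python
# Subtractive estimate of reaction-time components. All values are
# hypothetical placeholders, not the data reported in the study.
mean_rt_msec = {
    ("bell",  "key"):   175.0,   # non-speech stimulus, manual response
    ("bell",  "vowel"): 205.0,   # non-speech stimulus, spoken response
    ("vowel", "key"):   215.0,   # speech stimulus, manual response
    ("vowel", "vowel"): 245.0,   # speech stimulus, spoken response
}

# Motor-speech cost: spoken minus key-press response, averaged over
# the two stimulus types (the study reports a cost of about 30 msec).
motor_speech_cost = (
    (mean_rt_msec[("bell", "vowel")] - mean_rt_msec[("bell", "key")])
    + (mean_rt_msec[("vowel", "vowel")] - mean_rt_msec[("vowel", "key")])
) / 2
print(f"motor speech cost ~ {motor_speech_cost:.0f} msec")
```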

2.
OBJECTIVE: To investigate age-related changes in speech perception by measuring event-related potentials (ERPs) elicited by auditory stimuli varying in their linguistic characteristics from pure tones to words. METHODS: ERPs were recorded from 64 subjects in three age groups (young, middle-aged and elderly) to auditory target stimuli, using an oddball paradigm. Three different tasks and stimulus types were used: tonal, phonological and semantic. RESULTS: N100 latency to tonal targets was significantly shorter than to both types of speech targets. P300 latency to tonal targets was significantly shorter than to phonological targets, which in turn was shorter than to semantic targets. P300 amplitude to the speech targets was significantly larger over the left hemisphere than over the right hemisphere in the young subjects. However, the reverse pattern of asymmetry, favoring the right hemisphere, was found in the elderly subjects. The hemispheric distribution in the middle-aged group was intermediate between those of the young and elderly groups. CONCLUSIONS: The data indicate possible progressive changes in the left-right asymmetry of language processing with aging. SIGNIFICANCE: The findings may indicate an increased use of compensatory mechanisms for speech processing or, alternatively, the recruitment of different neural generators as individuals age.
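For illustration, here is a minimal sketch of how P300 latency and amplitude are typically extracted from an averaged ERP; the search window and electrode names (P3/P4) are conventional assumptions, not details given in the abstract:

```python
import numpy as np

def p300_peak(erp, times, window=(0.25, 0.6)):
    """Return (amplitude, latency) of the largest positivity in a window.

    erp    : 1-D voltage array (one channel, averaged over target trials)
    times  : sample times in seconds, same length as erp
    window : P300 search window in seconds (a typical choice, assumed here)
    """
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmax(erp[mask])          # P300 is a positive deflection
    return erp[mask][i], times[mask][i]

# Hemispheric asymmetry as compared in the study: P300 amplitude over a
# left vs. a right electrode (e.g. P3 vs. P4) for the same targets.
# amp_l, _ = p300_peak(erp_p3, times)
# amp_r, _ = p300_peak(erp_p4, times)
# asymmetry = amp_l - amp_r   # > 0 reproduces the young-adult pattern
```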

3.
Seventy-two female subjects memorized two photographed faces and subsequently discriminated these "target" faces from two "non-target" faces. The faces were presented unilaterally for 150 msec, and manual reaction times for the discriminations served as the dependent variable. The face stimuli were either "neutral" or "emotional" in facial expression, these attributes having been shown by a preliminary study to be highly reliable. Faster reaction times were obtained for left visual field than for right visual field presentation. Subjects (N = 36) who memorized emotional faces showed significantly faster discrimination of faces presented in the left than in the right visual field (25.7 msec); subjects (N = 36) who memorized faces lacking emotional expression also showed a significant left visual field superiority (11.6 msec), but this left-field superiority was significantly smaller than that of subjects memorizing emotional faces. Results are consistent with previous tachistoscopic evidence of right-hemisphere superiority in face recognition speed and with diverse non-tachistoscopic evidence of preferential memory storage of affective material. The pattern of latencies for the different visual field-response hand conditions supported a model of lateral specialization in which the specialized hemisphere normally processes both directly received and interhemispherically transferred stimuli.

4.
Left- and right-handed subjects, selected on the basis of degree of hand preference and the presence or absence of familial sinistrality, responded to monaurally presented tonal stimuli (a 440-Hz note played on four different instruments) using their right and left hands on separate occasions. It was found that in both the strong left-handers and the inconsistent strong right-handers, motor control of the hands was related to familial sinistrality (FS). Specifically, strong left-handers and inconsistent strong right-handers with FS show a difference in the motor control of the hands in the left hemisphere, with a left hemisphere-left hand advantage. Strong left-handers and inconsistent strong right-handers with no FS show a difference in the motor control of the hands in the right hemisphere, with a right hemisphere-left hand advantage.

5.
In two experiments, subjects pressed a key labeled Red or Green in response to a 100-msec stimulus presented to the left or right visual field. In Experiment I, subjects responded to the meaning of Stroop words; the stimulus was the word Red or Green printed in red, green, or white ink. In Experiment II, subjects responded to ink color; the stimulus was either the word Red or Green printed in red or green ink or a red or green color patch. In each experiment, there were 20 strongly right-handed subjects and 20 strongly left-handed subjects. Half the subjects in each handedness group were male and half were female. In both experiments, RT was faster when words were presented to the right visual field than to the left visual field, suggesting that both the meaning and the ink color of Stroop words were processed more quickly in the left hemisphere. Both experiments also revealed faster reactions when the meaning and ink color of the Stroop words were congruent than when they were not. A comparison with baseline trials indicated that the RT difference between responses to congruent and incongruent Stroop words was due to the incongruent cue interfering with information processing rather than to the congruent cue facilitating processing. Hypothesized interactions among stimulus position, congruence, handedness, and sex were not significant.
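The baseline comparison above decomposes the congruence effect into facilitation and interference components. A worked sketch with hypothetical mean RTs:

```python
# Hypothetical mean RTs in msec (for illustration only).
rt_congruent, rt_baseline, rt_incongruent = 520.0, 530.0, 590.0

facilitation = rt_baseline - rt_congruent     # benefit of a congruent cue
interference = rt_incongruent - rt_baseline   # cost of an incongruent cue

# The pattern reported above corresponds to interference >> facilitation:
# the congruence effect is driven by incongruent cues slowing processing.
print(f"facilitation = {facilitation:.0f} msec, "
      f"interference = {interference:.0f} msec")
```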

6.
In dichotic pursuit auditory tracking (PAT) tasks, subjects match a continuously varying pure tone presented to one ear with a second tone presented to the other ear and controlled by unidimensional movements of part of their motor system. In previous studies in which tonal frequency was varied, performance was significantly better when the tone controlled by a speech articulator (tongue, jaw) was presented to the right ear rather than the left, but not if the tone was hand-controlled. This study examined a visual analog of the PAT in which subjects matched the vertical position of a continually moving horizontal line (target), presented on one side of their point of fixation, with a second line (cursor) presented on the other side of their fixation point. Two predictions were confirmed for 12 right-handed subjects: that there would be no significant laterality effect for articulatory (jaw) tracking, because the previous auditory tracking findings were speech-related; and that there would be a significant laterality effect (cursor right field-left hemisphere) for right-hand tracking, because of the development of a specialized sensorimotor integration mechanism for eye-right hand coordination in the left hemisphere. Alternative explanations for the right-hand tracking results, and for the nonsignificant trend toward a laterality effect (cursor left field-right hemisphere) for left-hand tracking, are discussed.
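Tracking performance in tasks like the PAT is commonly scored as the root-mean-square mismatch between the target and the subject-controlled signal; the abstract does not specify the study's scoring method, so the sketch below is one plausible measure:

```python
import numpy as np

def tracking_error(target, response):
    """RMS mismatch between the target trace and the controlled trace.

    A common pursuit-tracking performance measure; assumed here, since
    the abstract does not state the study's actual scoring method.
    """
    target = np.asarray(target, dtype=float)
    response = np.asarray(response, dtype=float)
    return float(np.sqrt(np.mean((target - response) ** 2)))
```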

7.
Analysis of a word's acoustic structure must precede identification of its meaning. These aspects of speech processing might therefore be associated with event-related potential (ERP) components that differ in their timing. To identify electrophysiologic indices of the cortical processing of acoustic and semantic features of speech, we recorded ERPs to the random presentation of nonsense or real words in four conditions designed to manipulate the extent to which the speech sounds were processed. In one condition, subjects responded to all stimuli; in a second and third, to a designated nonsense or real word; and in the final condition, to words within a specified semantic category. To define the cortical activity associated with acoustic processing, ERPs obtained when no discrimination was required were subtracted from those recorded during the identification of a specified speech target. The difference waveforms exhibited a negative potential that began about 50 msec after stimulus onset and lasted about 200 msec. Difference waveforms obtained by subtracting the non-discrimination ERPs from those obtained during semantic discrimination exhibited a negative potential with similar onset timing. We concluded that the early negative potential indexes acoustic processes necessary for stimulus identification. To identify potentials associated with determination of a word's meaning, we subtracted the verbal-discrimination ERPs from the semantic-discrimination ERPs. This difference waveform exhibited a later negativity beginning at 150 msec and lasting about 250 msec. This potential may be related to the semantic processing of speech.
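The subtraction logic described above is a plain element-wise difference of condition-averaged ERPs. A minimal sketch (the array names are hypothetical):

```python
import numpy as np

def difference_wave(erp_task, erp_reference):
    """Subtract a reference ERP to isolate task-specific activity.

    Both inputs are condition-averaged ERPs of shape
    (n_channels, n_samples) on the same time base.
    """
    return np.asarray(erp_task) - np.asarray(erp_reference)

# Acoustic processing (early negativity, ~50-250 msec in the study):
# acoustic = difference_wave(erp_verbal_discrim, erp_no_discrim)
# Semantic processing (later negativity, from ~150 msec):
# semantic = difference_wave(erp_semantic_discrim, erp_verbal_discrim)
```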

8.
Forty schizophrenic and 40 non-schizophrenic male psychiatric inpatients, matched for age, intelligence, and social competence, were administered equivalent-form nonsense-syllable discrimination tasks under each of four conditions: an initial baseline information-feedback condition, two response-contingent reinforcement conditions, and a final baseline information-feedback condition. Subjects of both diagnostic groups were further divided into eight groups of ten patients each. Half the groups received praise and half received censure during the response-contingent conditions. In counterbalanced order, the reinforcing agent was the experimenter in one condition and a recorded voice of one of the parents in the other; half received the mother's voice and half the father's. Schizophrenics responded faster under experimenter than under parent reinforcement. Poor-premorbid schizophrenics, who made more errors than good-premorbid schizophrenics, responded faster under experimenter than under parent reinforcement. Among non-schizophrenics, education, social competence, and I.Q. were all negatively correlated with response latency and number of errors. Among schizophrenics, the higher the I.Q., the fewer the errors and the shorter the response latency, except when subjects were reinforced by their parents, in which case no significant relationship existed. Marital status, however, was significantly related to performance, with unmarried status associated with slower responses to parents. All subjects responded more quickly and accurately in the final than in the first baseline condition, regardless of diagnosis or interpolated reinforcement experiences. The data were interpreted as providing limited support for the social-censure hypothesis of Garmezy and his colleagues, but as consistent with recent formulations concerning the schizophrenic patient in relation to his family.

9.
In Mandarin Chinese, a tonal language, pitch level and pitch contour are two dimensions of lexical tones, defined by their acoustic features (i.e., pitch patterns). A change in pitch level features a step change, whereas a change in pitch contour features a continuous variation in voice pitch. Relatively little is known about the hemispheric lateralization of the processing of each dimension. To address this issue, we made whole-head electrical recordings of the mismatch negativity in native Chinese speakers in response to contrasts of Chinese lexical tones in each dimension. We found that pre-attentive auditory processing of pitch level was clearly lateralized to the right hemisphere, whereas that of pitch contour tended to be lateralized to the left. We also found that the brain responded faster to pitch level than to pitch contour at a pre-attentive stage. These results indicate that the hemispheric lateralization of early auditory processing of lexical tones depends on pitch level and pitch contour, and suggest an underlying interhemispheric interactive mechanism for this processing.
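The mismatch negativity is itself a difference wave (deviant minus standard), and lateralization is usually quantified by comparing its amplitude across homologous left and right sites. A minimal sketch; the normalized index below is a common convention, not necessarily the one used in the paper:

```python
import numpy as np

def mmn(deviant_erp, standard_erp):
    """Mismatch response: deviant-minus-standard difference wave."""
    return np.asarray(deviant_erp) - np.asarray(standard_erp)

def lateralization_index(amp_left, amp_right):
    """(|L| - |R|) / (|L| + |R|): negative values indicate a
    right-lateralized response, as reported here for pitch level."""
    l, r = abs(amp_left), abs(amp_right)
    return (l - r) / (l + r)
```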

10.
11.
The effects of handedness, sex, and hand placement in extrapersonal space on temporal information processing were investigated by measuring thresholds for perceiving the simultaneity of pairs of tactile stimuli. Simultaneity thresholds of preferred right-handed and left-handed university students with left-hemisphere speech representation were compared using unimanual and bimanual stimulation at three hand placements (midline, lateral, and crossed). In unimanual conditions, two fingers of one hand were stimulated (single hemisphere), whereas in the bimanual conditions one finger of each hand was stimulated (across hemispheres). Bimanual minus unimanual thresholds provided an estimate of interhemispheric transmission time (IHTT) regardless of hand placement. The effects of hemispace varied with the type of stimulation: with unimanual stimulation, overall thresholds were longer at the midline placement, whereas with bimanual stimulation thresholds were longer when the hands were spatially separated (crossed and/or uncrossed). Left-handers' IHTTs were 8 ms faster than those of right-handers. IHTTs in males were faster than those in females with the hands placed in lateral (by 10.8 ms) or crossed (by 9.8 ms), but not midline, positions. It was concluded that the cerebral hemispheres are equally capable of discriminating temporal intervals, but that the left hemisphere predominates when there is uncertainty about the location of stimulation.
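The IHTT estimate in this design is simple arithmetic on the two thresholds, sketched below with hypothetical values:

```python
# IHTT as estimated in the study: bimanual (cross-hemisphere) minus
# unimanual (single-hemisphere) simultaneity threshold.
# The thresholds below are hypothetical placeholders.
unimanual_threshold_ms = 42.0   # two fingers of one hand
bimanual_threshold_ms = 55.0    # one finger of each hand

ihtt_ms = bimanual_threshold_ms - unimanual_threshold_ms
print(f"estimated IHTT = {ihtt_ms:.1f} ms")
```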

12.
Clinical Neurophysiology, 2014, 125(8): 1568-1575
OBJECTIVE: This study examined the impact of spectral resolution on the processing of lexical tones and the number of frequency channels required for a cochlear implant (CI) to transmit Chinese tonal information to the brain. METHODS: ERPs were recorded in an auditory oddball task. Normal-hearing participants listened to speech sounds of two tones and to their CI simulations in 1, 4, 8, or 32 channels. The mismatch responses elicited by speech sounds and by CI simulations with different numbers of channels were compared. RESULTS: The mismatch negativity (MMN) was observed for speech sounds. For the 1-channel CI simulations, deviants elicited a more positive waveform than standard stimuli. No MMN response was observed with the 4-channel simulations. A reliable MMN response was observed for the 8- and 32-channel simulations. The MMN responses elicited by the 8- and 32-channel simulations were equivalent in magnitude and smaller than that elicited by speech sounds. CONCLUSIONS: More than eight frequency channels are required for a CI to transmit Chinese tonal information. The presence of both positive and negative mismatch responses suggests multiple mechanisms underlying auditory mismatch responses. SIGNIFICANCE: The spectral-resolution constraints on the transmission of tonal information reported here should be taken into account in the design of CI devices.
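CI simulations of this kind are typically produced with a noise vocoder: the signal is split into frequency bands, each band's temporal envelope is extracted, and the envelopes modulate corresponding noise bands. The sketch below implements that generic scheme; the band edges, filter orders, and envelope cutoff are assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Generic n-channel noise vocoder (CI simulation sketch)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced bands
    carrier = np.random.default_rng(0).standard_normal(len(signal))
    env_sos = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, signal)
        envelope = sosfilt(env_sos, np.abs(hilbert(band)))  # smoothed envelope
        envelope = np.maximum(envelope, 0.0)                # clip undershoot
        out += envelope * sosfilt(band_sos, carrier)        # modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)              # peak-normalize
```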

13.
The amplitude changes in steady-state visual evoked potentials (VEPs) from pattern-reversal stimulation were examined in 10 treated schizophrenics 30 min after an intramuscular injection of haloperidol. The VEP amplitudes to hemi-field or full-field pattern-reversal stimulation with a standard check size were almost unchanged after the haloperidol injection compared with those before the injection. However, while the VEP amplitudes to full-field stimulation were significantly higher in the midline occipital region than in the right occipital region before the injection, no significant difference was observed after the injection. Further, the VEPs responded similarly to changes in the check sizes used in full-field stimulation before and after the injection, both showing significantly lower amplitudes at the large check size of 4.1° of visual angle than at 1.0°. These results indicate that acutely administered haloperidol has little effect on steady-state VEPs from pattern-reversal stimulation in treated schizophrenics.

14.
By means of fMRI measurements, the present study identifies brain regions in left and right peri-sylvian areas that subserve grammatical or prosodic processing. Normal volunteers heard 1) normal sentences; 2) so-called syntactic sentences comprising syntactic, but no lexical-semantic, information; and 3) manipulated speech signals comprising only prosodic information, i.e., speech melody. For all conditions, significant blood oxygenation signals were recorded bilaterally from the supratemporal plane. Left-hemisphere areas surrounding Heschl's gyrus responded more strongly during the two sentence conditions than to speech melody. This finding suggests that the anterior and posterior portions of the superior temporal region (STR) support lexical-semantic and syntactic aspects of sentence processing. In contrast, the right superior temporal region, in particular the planum temporale, responded more strongly to speech melody. Significant brain activation in the fronto-opercular cortices was observed when participants heard the pseudo sentences and was strongest during the speech-melody condition. In contrast, the fronto-opercular area is not prominently involved in listening to normal sentences. Thus, functional activation in fronto-opercular regions increases as the grammatical information available in the sentence decreases. Generally, brain responses to speech melody were stronger at right- than left-hemisphere sites, suggesting a particular role of right cortical areas in the processing of slow prosodic modulations.

16.
Using functional neuroimaging, we studied the cortical response to auditory sentences, comparing two recognition tasks that targeted either the speaker's voice or the verbal content. The right anterior superior temporal sulcus responded during the voice task but not during the verbal-content task. This response was therefore specifically related to the analysis of nonverbal features of speech. However, the dissociation between verbal and nonverbal analysis was only partial. Left middle temporal regions previously implicated in semantic processing responded in both tasks. This indicates that implicit semantic processing occurred even when the task directed attention to nonverbal input analysis. The verbal task yielded greater bilateral activation in the fusiform/lingual region, presumably reflecting an implicit translation of auditory sentences into visual representations. This result confirms the participation of visual cortical regions in the verbal analysis of speech.

17.
Healthy, right-handed volunteers (six male, six female) either saw or imagined the hands of a clock set to a particular time. In both conditions, they then judged whether the angle between the clock hands was greater or less than 90 degrees. Subjects pressed one of two response keys to indicate their decision, and the hand of response (left/right) was counterbalanced within and between subjects. Subjects had significantly longer reaction times and made significantly more errors when the imaginary angles formed by the clock hands were located in left hemispace (e.g., 8:30) than in right hemispace (e.g., 4:30). With visible hands, there was no reaction-time difference between visual hemifields, although significantly more errors were made when the angle formed by the hands fell within the left visual field. In the perceptual task (visible hands), reaction times and error rates increased monotonically as the distance between the hands approached 90 degrees. This psychophysical relationship was not found in the representational task (imaginary hands). Rather, there was a significant positive correlation between reaction times/error rates and the magnitude of the number indicating the position of the minute hand. The latter finding is consistent with the hypothesis that the lateral asymmetry in the representational task (reaction times and error rates are higher in left hemispace) is due to the time taken to mentally rotate the imaginary minute hand in a clockwise direction. No such operation is required in the perceptual condition, where the hands are clearly visible.
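The angle judgment itself is simple clock arithmetic: the hour hand moves 30° per hour plus 0.5° per minute, and the minute hand 6° per minute. A sketch reproducing the judgment for the two hemispace examples in the abstract:

```python
def clock_hand_angle(hour, minute):
    """Smaller angle in degrees between the hour and minute hands."""
    hour_angle = (hour % 12) * 30.0 + minute * 0.5
    minute_angle = minute * 6.0
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360.0 - diff)

for h, m in [(8, 30), (4, 30)]:   # left- vs. right-hemispace examples
    angle = clock_hand_angle(h, m)
    print(f"{h}:{m:02d} -> {angle:.0f} deg, "
          f"{'greater' if angle > 90 else 'less'} than 90 deg")
```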

18.
Lexical-decision vocal reaction times of a group of English-proficient Mandarin Chinese speakers and a group of monolingual English speakers were measured to unilaterally presented concrete and abstract English words. An ANOVA showed a significant group x visual field x stimulus type interaction. Post-hoc analysis showed a significant right visual field advantage for the Chinese subjects. For the English-speaking subjects, right visual field stimulation yielded significantly faster vocal reaction times to the abstract words than to the concrete ones, while the opposite occurred for left visual field inputs. Also, a correlation between the two lateral conditions was significant for the Chinese subjects but not for the English speakers. These findings suggest that the English speakers showed dissociated left- and right-hemispheric linguistic processing, whereas in the Chinese subjects the left hemisphere was responsible for the final phonological stages of linguistic analysis. Such findings support a phonological "monitor-user" hypothesis for cerebral-dominance characteristics in bilingual Chinese speakers.

19.
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have mainly been explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception, and recognising speech-in-noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway (the inferior colliculus [IC]). The right IC responded less in the ASD group than in the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD group than in the control group during passive listening to vocal as opposed to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech-in-noise than when recognising speech without additional noise. In the ASD group, this was the case only in the left, not the right, IC. The results show that communication-signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. They highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.

20.
Contrary to the classical view, recent neuroimaging studies claim that phonological processing, as part of auditory speech perception, is subserved by both the left and right temporal lobes and not by the left temporal lobe alone. This study explores whether the lateralization of responses to verbal and nonverbal sounds varies with the spectral complexity of those sounds. White noise was gradually transformed into either speech or music sounds using a "sound morphing" procedure. The stimuli were presented in an event-related design, and the evoked brain responses were measured using fMRI. The results demonstrated that the left temporal lobe was predominantly sensitive to the gradual manipulation of the speech sounds, while the right temporal lobe responded to all sounds and manipulations. This effect was especially pronounced within the middle region of the left superior temporal sulcus (mid-STS). This area could be further subdivided into a more posterior area, which showed a linear response to the manipulation of speech sounds, and an anteriorly adjacent area, which showed the strongest interaction between the speech and music sound manipulations. Such a differential and selective response was not seen in other brain areas, nor when the sound was morphed into a music stimulus. This provides further experimental evidence for the assumption of a posterior-anterior processing stream in the left temporal lobe. In addition, the present findings support the notion that the left mid-STS area is more sensitive to speech signals than its right homologue. Hum Brain Mapp, 2009.
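The abstract does not detail the morphing algorithm; one plausible reading is an interpolation between the spectra of white noise and the target sound, as sketched below (all specifics are assumptions of this sketch, not the authors' procedure):

```python
import numpy as np

def morph_noise_to_sound(sound, morph, seed=0):
    """Morph from white noise (morph=0) toward a target sound (morph=1).

    Interpolates magnitude spectra and takes the phase of the dominant
    source; a simplified stand-in for the study's actual procedure.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(sound))
    noise_spec = np.fft.rfft(noise)
    sound_spec = np.fft.rfft(sound)
    mag = (1.0 - morph) * np.abs(noise_spec) + morph * np.abs(sound_spec)
    phase = np.angle(sound_spec if morph >= 0.5 else noise_spec)
    morphed = np.fft.irfft(mag * np.exp(1j * phase), n=len(sound))
    return morphed / (np.max(np.abs(morphed)) + 1e-12)  # peak-normalize
```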
