Similar Documents (20 results)
1.
Auditory-evoked mismatch fields (MMFs) elicited by vowel contrasts and plosive stop consonant place-of-articulation contrasts were recorded over the left hemisphere of neurologically and audiologically normal subjects. Two experiments were conducted: vowels were presented in isolation in experiment 1 and embedded in consonant-vowel syllables in experiment 2. Best-fit equivalent MMF sources were obtained using the model of a single, spatiotemporal current dipole in a sphere. In both experiments, MMF sources activated by place-of-articulation contrasts were later in latency and smaller in dipole moment amplitude than MMF sources excited by vowel contrasts. There was evidence, albeit not unambiguous, for the vowel-contrast MMF sources being located more posteriorly than the consonant-contrast MMF sources in experiment 1 and more laterally in experiment 2. In both experiments, the MMF source excited by the contrast between /da/ and /ga/ was more anterior than the MMF source excited by the contrast between /da/ and /ba/. The effects on latency and dipole moment may be interpreted to mirror differences in perceptual discriminability and auditory memory decay between consonantal place-of-articulation contrasts and vowel contrasts. Similarly, the effects on location may be interpreted to reflect featural specificity of the mismatch response. Interestingly, the dipole source analysis results show a correspondence to the pattern of preservation and loss of the mismatch response to vowel and consonant place-of-articulation contrasts recently observed in Wernicke's aphasia. Received: 14 June 1996 / Accepted: 6 February 1997

2.
Neuromagnetic responses in the human auditory cortex evoked by various burst stimuli of pure tones and monosyllabic speech sounds were measured separately from the two hemispheres. Both the pure tones and the speech sounds elicited clear responses with two main peaks. The field patterns over the scalp at the peak latencies indicated a single current dipole as the equivalent field generator. The current dipoles computed from the mapped field data lay deeper beneath the scalp for higher-frequency tone stimuli, confirming the tonotopic organization of the auditory cortex. A difference was found in the dipole locations in the horizontal plane for the speech stimuli of a vowel /a/ and a consonant-vowel /ka/, suggesting that the magnetic responses are sensitive to the acoustic structure of the speech sound.

3.
Two tone stimuli, one frequent (standard) and the other infrequent (a slightly higher, deviant tone), were presented in random order and at short intervals to subjects reading texts they had selected. In different blocks, standards were either 250, 1,000, or 4,000 Hz, with the deviants always being 10% higher in frequency than the standards of the same blocks. Magnetic responses elicited by the standard and deviant tones included N1m, the magnetoencephalographic equivalent of the electrical N1 (its supratemporal component). In addition, deviant stimuli elicited MMNm, the magnetic equivalent of the electrical mismatch negativity, MMN. The equivalent dipole sources of the two responses were located in supratemporal auditory cortex, with the MMNm source being anterior to that of N1m. The dipole orientations of both sources in the sagittal plane depended on stimulus frequency, suggesting that the responses are generated by tonotopically organized neuronal populations. The tonotopy reflected by the frequency dependence of the MMNm source might be that of the neural trace system underlying frequency representation of auditory stimuli in sensory memory.
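The standard/deviant construction described above can be sketched in a few lines. The deviant is fixed at 10% above the standard, as stated; the 10% deviant *probability* and the random seed are illustrative assumptions, since the abstract does not report the deviant rate:

```python
import random

def oddball_sequence(n_trials, standard_hz, deviant_prob=0.1, seed=0):
    """Pseudorandom oddball sequence: the deviant tone is always 10% higher
    in frequency than the standard. The deviant probability and seed are
    illustrative assumptions, not values taken from the study."""
    rng = random.Random(seed)
    deviant_hz = standard_hz * 1.1
    return [deviant_hz if rng.random() < deviant_prob else standard_hz
            for _ in range(n_trials)]

# One block with a 250-Hz standard; deviants are 10% higher (275 Hz)
seq = oddball_sequence(1000, 250.0)
```

In practice such sequences often also exclude consecutive deviants; that constraint is omitted here for brevity.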

4.
We studied the effects of sleep on auditory evoked magnetic fields following pure-tone stimulation of the right ear in 10 healthy volunteers, to investigate how auditory processing in the primary auditory cortex changes with sleep. Dual 37-channel biomagnetometers recorded auditory evoked magnetic fields over the bilateral temporal lobes in response to the presented tones. Fields were compared across three stimulus frequencies (250, 1000 and 4000 Hz) and three vigilance states (awake, sleep stage 1 and sleep stage 2). Four main components, M50, M100, M150 and M200, were identified with latencies of approximately 50, 100, 150 and 200 ms, respectively. The latency of each component tended to lengthen with deepening sleep at all frequencies. The amplitude ratios of the early-latency components (M50 and M100) tended to be reduced relative to the awake state, whereas those of the long-latency components (M150 and M200) were significantly enhanced with increasing sleep depth. The equivalent current dipoles of all components in all conditions were located in the superior temporal cortex (the primary auditory cortex). Regarding dipole location, responses to the higher-frequency tones (1000 and 4000 Hz) yielded dipoles in a more posterior and medial region than responses to the 250-Hz tone. Although the dipoles of the early-latency components (M50 and M100) lay in more anterior and superior regions during sleep than in the awake state, no consistent change in dipole location across sleep stages was seen for the late-latency components (M150 and M200). These findings are probably due to a difference in generating mechanisms between the early- and late-latency components.

5.
We made a detailed source analysis of the magnetic field responses elicited in the human brain by different monosyllabic speech sounds, including vowels, plosives, fricatives, and nasals. The magnetic field responses were recorded from a lateral area of the left hemisphere using a multichannel SQUID magnetometer with 37 field-sensing coils. A single equivalent current dipole source was estimated from the spatial distribution of the evoked responses. The estimated sources of the N1m wave, occurring about 100 ms after stimulus onset, were located close to one another within a cube of 10-mm sides in the three-dimensional space of the brain. Registered on magnetic resonance images, these sources indicated a restricted area of the auditory cortex, including Heschl's gyri in the superior temporal plane. In the spatiotemporal domain the sources exhibited apparent movements, among which an anterior shift with increasing latency on the anteroposterior axis and an inferior shift on the inferosuperior axis were common to the responses to all monosyllables. Selective movements that depended on the type of consonant were observed on the mediolateral axis, however: the sources of the plosive and fricative responses shifted laterally with increasing latency, whereas the source of the vowel response shifted medially. These spatiotemporal movements of the sources are discussed in terms of dynamic excitation of cortical neurons in multiple areas of the human auditory cortex.

6.
Intracerebral sources of human auditory steady-state responses
The objective of this study was to localize the intracerebral generators of auditory steady-state responses. The stimulus was a continuous 1000-Hz tone presented to the right or left ear at 70 dB SPL. The tone was sinusoidally amplitude-modulated to a depth of 100% at 12, 39, or 88 Hz. Responses recorded from 47 electrodes on the head were transformed into the frequency domain. Brain electrical source analysis treated the real and imaginary components of the response in the frequency domain as independent samples. The latency of the source activity was estimated from the phase of the source waveform. The main source model contained a midline brainstem generator with two components (one vertical and one lateral) and cortical sources in the left and right supratemporal plane, each containing tangential and radial components. At 88 Hz, the largest activity occurred in the brainstem and subsequent cortical activity was minor. At 39 Hz, the initial brainstem component remained and significant activity also occurred in the cortical sources, with the tangential activity being larger than the radial. The 12-Hz responses were small, but suggested combined activation of both brainstem and cortical sources. Estimated latencies decreased for all source waveforms as modulation frequency increased and were shorter for the brainstem than for the cortical sources. These results suggest that the whole auditory nervous system is activated by modulated tones, with the cortex being more sensitive to slower modulation frequencies.
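As an illustration of the stimulus construction, the sketch below generates a 100%-depth sinusoidally amplitude-modulated 1000-Hz tone at the 39-Hz modulation rate and confirms the expected spectral structure: a carrier line at fc flanked by sidebands at fc ± fm. The sample rate and duration are arbitrary choices, not taken from the study:

```python
import numpy as np

# Carrier and modulation rate as in the study; fs and dur are illustrative.
fs = 8000                 # sample rate (Hz)
dur = 1.0                 # duration (s)
t = np.arange(int(fs * dur)) / fs

fc, fm = 1000.0, 39.0     # carrier (Hz), modulation rate (Hz)
depth = 1.0               # 100% modulation depth

# Sinusoidal AM: envelope (1 + depth*sin(2*pi*fm*t)) times the carrier
am = (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Magnitude spectrum: carrier line at fc, sidebands at fc +/- fm,
# each sideband at half the carrier amplitude for 100% depth
spec = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(am.size, 1 / fs)
```

With a 1-s window the FFT bins fall exactly on 1-Hz multiples, so the carrier appears at bin 1000 and the sidebands at bins 961 and 1039.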

7.
ERPs for infrequent omissions and inclusions of stimulus elements
A negative event-related potential (ERP) wave called mismatch negativity (MMN) is elicited by an infrequent deviant stimulus in a sequence of frequent standard stimuli. Omission of a stimulus in a sequence of stimuli, however, has been considered to elicit a negativity different from MMN. The present study addressed this issue by examining ERPs for infrequent omissions and inclusions of compound stimuli or their elements. Three kinds of stimuli were presented: a 1000-Hz sine wave tone (Sine), white noise with the 1000-Hz frequency sharply filtered out (Noise), and a composite of the pure tone and the filtered white noise (SiNoise). All stimuli had 50 ms duration and were presented with a regular interstimulus interval of 650 ms. Intensities were 75 dB SPL for the tone and noise stimuli and slightly higher for the composite stimulus. The three kinds of stimuli were presented on separate runs, either as the frequent stimulus or one of two infrequent stimuli, each with 10% probability. Infrequent omission of the large stimulus element (Sine deviant to SiNoise) tended to elicit later MMN than inclusion of the same element (SiNoise deviant to Sine). Omission of the small stimulus element (Noise deviant to SiNoise) elicited a smaller and later MMN than did inclusion of the same element (SiNoise deviant to Noise). These data suggest that MMNs are also elicited by partial stimulus omissions, although they seem to be more sensitive to other kinds of stimulus deviances.

8.
The effects of salient foreground stimuli on evoked potentials to weak background probe stimuli were examined in situations requiring passive observation or discriminative judgments of foreground tone stimuli. The background probe stimuli consisted of a continual train of weak acoustic stimuli presented at a rate of about 40 stimuli per second. Under such conditions, a 40-Hz steady-state rhythm (SSR) is established, which has been proposed to consist of the algebraic summation of individual middle-latency components evoked by stimuli in the train. The 40-Hz SSR was averaged over trials and extracted from the composite event-related potential signal using narrow-band digital filtering, for continuous examination of latency and amplitude during the period immediately preceding and following the foreground stimulus. The foreground stimulus was followed by a brief period (peaking at about 200 ms) during which the latency of the response to the background probe stimuli was reduced. The extent of this latency reduction was in proportion to the magnitude of the simultaneous slow-wave ERP responses and, to a lesser extent, heart rate responses. It is proposed that the results may reflect a transient period of sensitization during orienting, at a presumably early level in the auditory system, and that the method thus offers a means for determining the extent and temporal course of such effects.
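The narrow-band extraction step can be sketched as follows: a zero-phase bandpass around 40 Hz followed by analytic-signal demodulation to track instantaneous amplitude. This is a generic sketch on synthetic data; the 35-45 Hz passband, filter order, and test signal are assumptions, not the authors' exact filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # sample rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)

# Synthetic composite ERP: a 40-Hz steady-state rhythm (amplitude 0.5)
# plus a slow wave and broadband noise
rng = np.random.default_rng(0)
x = (0.5 * np.sin(2 * np.pi * 40 * t)
     + np.sin(2 * np.pi * 3 * t)
     + 0.2 * rng.standard_normal(t.size))

# Narrow-band digital filter around 40 Hz; filtfilt gives zero phase shift,
# so the extracted rhythm's latency is not biased by the filter
b, a = butter(4, [35, 45], btype='band', fs=fs)
ssr = filtfilt(b, a, x)

# Instantaneous amplitude via the analytic signal (Hilbert transform)
amp = np.abs(hilbert(ssr))
```

The instantaneous phase of the same analytic signal, `np.angle(hilbert(ssr))`, is what a continuous latency measure of the 40-Hz rhythm would be derived from.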

9.
The first two formant frequencies (F1 and F2) are important cues for vowel identification. In the categorization of naturally spoken vowels, however, the vowels overlap in the F1-F2 plane. The fundamental frequency (F0), the third formant frequency (F3) and the spectral envelope have been proposed as additional cues. In the present study, to identify the spectral regions essential for vowel identification, untrained subjects performed a forced-choice identification task on isolated Japanese vowels (/a, o, u, e, i/) from which some spectral regions had been deleted. The minimum spectral regions needed for correct vowel identification were the two regions containing F1 and F2 (the first and fourth of the quadrisected F1-F2 regions on the Bark scale). This held even when phonetically different vowels had similar combinations of F1 and F2 frequency components. The F0 and F3 cues were not necessarily needed. We conclude that the spectral regions are not equally important: the weight falls on the two critical spectral regions. The auditory system may identify vowels by analyzing the spectral shapes and the formant frequencies (F1 and F2) within these critical regions.
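A Bark-scale quadrisection like the one described can be sketched with a standard Hz-to-Bark conversion. The Traunmüller (1990) approximation used below is an assumption (the abstract does not state which Bark formula was used), and the 200-4000 Hz band limits are illustrative:

```python
def hz_to_bark(f):
    """Traunmueller (1990) approximation of the Bark scale; one common
    formula, assumed here since the study does not specify one."""
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(z):
    """Inverse of the Traunmueller approximation."""
    return 1960.0 * (z + 0.53) / (26.81 - (z + 0.53))

def quadrisect_bark(f_lo, f_hi):
    """Split [f_lo, f_hi] Hz into four bands of equal Bark width,
    mirroring the quadrisected F1-F2 region described above."""
    z_lo, z_hi = hz_to_bark(f_lo), hz_to_bark(f_hi)
    return [bark_to_hz(z_lo + i * (z_hi - z_lo) / 4) for i in range(5)]

# Five band edges (Hz) over an illustrative 200-4000 Hz F1-F2 span
edges = quadrisect_bark(200.0, 4000.0)
```

Because the Bark scale compresses high frequencies, the four bands are progressively wider in Hz even though they are equal in Bark.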

10.
Slow acoustic modulations below 20 Hz, of varying bandwidths, are dominant components of speech and many other natural sounds. The dynamic neural representations of these modulations are difficult to study through noninvasive neural-recording methods, however, because of the omnipresent background of slow neural oscillations throughout the brain. We recorded the auditory steady-state responses (aSSR) to slow amplitude modulations (AM) from 14 human subjects using magnetoencephalography. The responses to five AM rates (1.5, 3.5, 7.5, 15.5, and 31.5 Hz) and four types of carrier (pure tone and 1/3-, 2-, and 5-octave pink noise) were investigated. The phase-locked aSSR was detected reliably in all conditions. The response power generally decreases with increasing modulation rate, and the response latency is between 100 and 150 ms for all but the highest rates. Response properties depend only weakly on the bandwidth. Analysis of the complex-valued aSSR magnetic fields in the Fourier domain reveals several neural sources with different response phases. These neural sources of the aSSR, when approximated by a single equivalent current dipole (ECD), are distinct from and medial to the ECD location of the N1m response. These results demonstrate that the globally synchronized activity in the human auditory cortex is phase locked to slow temporal modulations below 30 Hz, and the neural sensitivity decreases with an increasing AM rate, with relative insensitivity to bandwidth.
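Detecting a phase-locked steady-state component amounts to reading out the complex Fourier coefficient at the modulation rate: its magnitude indexes phase locking, and its phase can be converted into an apparent latency. The sketch below demonstrates this on a simulated response with a known 120-ms delay; the simulation parameters are illustrative, not the study's analysis pipeline:

```python
import numpy as np

def assr_component(signal, fs, fm):
    """Complex Fourier component of `signal` at modulation frequency fm,
    obtained by projecting onto a complex exponential (a single DFT bin
    when fm falls exactly on a bin)."""
    t = np.arange(signal.size) / fs
    return np.mean(signal * np.exp(-2j * np.pi * fm * t))

fs, fm = 500, 3.5          # sample rate and AM rate (one of the study's rates)
t = np.arange(4 * fs) / fs  # 4 s of data -> fm falls exactly on a DFT bin
latency = 0.12              # simulate a 120-ms response delay

resp = np.cos(2 * np.pi * fm * (t - latency))
c = assr_component(resp, fs, fm)

# Phase-to-latency conversion; only defined modulo 1/fm due to phase wrapping
est_latency = -np.angle(c) / (2 * np.pi * fm)
```

For a real cosine of unit amplitude the coefficient magnitude is 0.5, and the recovered latency matches the simulated 120 ms; with multiple modulation rates, latency is usually estimated more robustly from the slope of phase versus frequency.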

11.
The auditory evoked magnetic fields to very high frequency tones
We studied auditory evoked magnetic fields (AEFs) in response to pure tones at very high frequencies (4000 Hz to 40,000 Hz). This is the first systematic study of AEFs using tones between 5000 Hz and the upper limit of the human audible range, as well as ultrasound. We performed two experiments. In the first, AEFs were recorded in 12 subjects from both hemispheres under binaural listening conditions, using six auditory stimuli: pure tones of five frequencies (4000, 8000, 10,000, 12,000 and 14,000 Hz) and a click as the target stimulus. In the second experiment, we used 1000-Hz and 15,000-Hz tones and two ultrasounds at 20,000 Hz and 40,000 Hz. The subjects could detect all stimuli in the first experiment but not the ultrasounds in the second. We analyzed N1m, the main response with a peak latency of approximately 100 ms, and made the following findings. (1) N1m responses to tones up to 12,000 Hz were clearly recorded from at least one hemisphere in all 12 subjects. N1m for 14,000 Hz was identified in at least one hemisphere in 10 subjects, and in both hemispheres in six. No significant response could be identified for ultrasounds of 20,000 Hz or above. (2) The N1m amplitude for tones above 8000 Hz was significantly smaller than that for 4000 Hz in both hemispheres. The N1m peak latency tended to be longer for higher-frequency tones, but the change was not significant. (3) The equivalent current dipole (ECD) of N1m was located in the auditory cortex. The ECDs for higher-frequency tones tended to lie in more medial and posterior areas, but the change was not significant. (4) Regarding the interhemispheric difference, the N1m amplitude was significantly larger, and the ECDs were estimated to lie more anterior and medial, in the right hemisphere than in the left for all tone frequencies; this right-hemisphere dominance (larger amplitude) was confirmed for very-high-frequency tones. (5) The orientation of the ECD in the left hemisphere became significantly more vertical for higher tones, consistent with previous studies showing the left hemisphere's sensitivity to frequency differences. From these findings we suggest that tonotopy in the auditory cortex extends up to the upper limit of the audible range within a small area, in which directly air-conducted ultrasounds are not represented.

12.
Acoustic complexity of a stimulus has been shown to modulate the electromagnetic N1 (latency ∼110 ms) and P2 (latency 190 ms) auditory evoked responses. We compared the relative sensitivity of electroencephalography (EEG) and magnetoencephalography (MEG) to these neural correlates of sensation. Simultaneous EEG and MEG were recorded while participants listened to three variants of a piano tone. The piano stimuli differed in their number of harmonics: the fundamental frequency (f0) only, or f0 and the first two or eight harmonics. The root mean square (RMS) of the amplitude of P2 but not N1 increased with spectral complexity of the piano tones in EEG and MEG. The RMS increase for P2 was more prominent in EEG than MEG, suggesting important radial sources contributing to the P2 only in EEG. Source analysis revealing contributions from radial and tangential sources was conducted to test this hypothesis. Source waveforms revealed a significant increase in the P2 radial source amplitude in EEG with increased spectral complexity of piano tones. The P2 of the tangential source waveforms also increased in amplitude with increased spectral complexity in EEG and MEG. The P2 auditory evoked response is thus represented by both tangential (gyri) and radial (sulci) activities. The radial contribution is expressed preferentially in EEG, highlighting the importance of combining EEG with MEG where complex source configurations are suspected.

13.
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information.

14.
The ability of an auditory stimulus to facilitate the amplitude and latency of the unconditioned nictitating membrane (NM) response in rabbits was investigated over a wide range of interstimulus intervals (ISIs) for both delay (Experiments 1-4) and trace (Experiments 3 and 4) procedures. The auditory stimulus was a 1000-Hz tone (T) at either 85 or 95 dB, and the reflex-eliciting stimulus was a 2.0 psi (pounds per square inch) corneal air puff (AP). The results indicate that (a) robust facilitation of the NM response, as measured by an increased amplitude and a reduced latency, can be obtained at long ISIs (2,000-32,000 ms); (b) increasing the tone intensity can increase reflex facilitation of the peak amplitude; (c) at comparable ISIs, delay procedures produce more facilitation of both amplitude and latency than do trace procedures; and (d) when trace procedures are used, amplitude and latency facilitation by a 125-ms tone follows an inverted U-shaped ISI function in which facilitation peaks between 125 and 500 ms, rapidly decreases between 1,000 and 2,000 ms, and disappears by 4,000 ms.

15.
The medial division of the ventral nucleus of the lateral lemniscus (VNLLm) contains a specialized population of neurons that is sensitive to interaural temporal disparities (ITDs), a potent cue for sound localization along the azimuth. Unlike many ITD-sensitive neurons elsewhere in the auditory system, neurons in the VNLLm respond only at the onset of tones. An onset response may be significant for behavior because, under echoic conditions, tones require sharp onsets for accurate localization. In contrast, noise can generally be localized even with gradual onsets, presumably because transients occur at random intervals in noise. We recorded responses of neurons in the VNLLm to tones and noise in unanesthetized rabbits. We found that although tones elicited a transient response, noise elicited a sustained response, as if it were a sequence of transients. The responses to tones indicate that these neurons represent a secondary stage in the processing of ITDs. The onset response to tones was only weakly synchronized to the phase of the tone, indicating that neurons in the VNLLm inherit their sensitivity to ITDs from their inputs. The latencies were short (~8 ms), implying that the ITD sensitivity is derived from ascending inputs. Most neurons in the VNLLm discharged maximally at the same ITD at all frequencies, a characteristic shared with neurons of the medial superior olive. However, the latency of neurons in the VNLLm to interaurally delayed stimuli is linked strongly to the timing of the contralateral stimulus. This suggests that these neurons receive a suprathreshold, contralateral input that is modulated by a subthreshold input conveying information about ITDs. Other stations in the auditory pathway contain a subset of neurons that respond transiently to tones and are sensitive to ITDs. These neurons may represent a novel pathway that assists in localizing sounds in the presence of reflections.

16.
Integration of cues from multiple sensory channels improves our ability to sense and respond to stimuli. Cues arising from a single event may arrive at the brain asynchronously, requiring them to be "bound" in time. The perceptual asynchrony between vestibular and auditory stimuli has been reported to be several times greater than that of other stimulus pairs. However, these data were collected using electrically evoked vestibular stimuli, which may not provide similar results to those obtained using actual head rotations. Here, we tested whether auditory stimuli and vestibular stimuli consisting of physiologically relevant mechanical rotations are perceived with asynchronies consistent with other sensory systems. We rotated 14 normal subjects about the earth-vertical axis over a raised-cosine trajectory (0.5 Hz, peak velocity 10 deg/s) while isolated from external noise and light. This trajectory minimized any input from extravestibular sources such as proprioception. An 800-Hz, 10-ms auditory tone was presented at stimulus onset asynchronies ranging from 200 ms before to 700 ms after the onset of motion. After each trial, subjects reported whether the stimuli were "simultaneous" or "not simultaneous." The experiment was repeated, with subjects reporting whether the tone or rotation came first. After correction for the time the rotational stimulus took to reach vestibular perceptual threshold, asynchronies spanned from −41 ms (auditory stimulus leading vestibular) to 91 ms (vestibular stimulus leading auditory). These values are significantly lower than those previously reported for stimulus pairs involving electrically evoked vestibular stimuli and are more consistent with timing relationships between pairs of non-vestibular stimuli.

17.
Auditory event-related brain potentials (ERPs) were recorded during auditory and visual selective attention tasks. Auditory stimuli consisted of frequent standard tones (1000 Hz) and infrequent deviant tones (1050 Hz and 1300 Hz) delivered randomly to the left and right ears. Visual stimuli were vertical line gratings randomly presented on a video monitor at mean intervals of 6 s. During auditory attention, the subject attended to the stimuli in a designated ear and responded to the 1300-Hz deviants occurring among the attended tones. During visual attention, the subject responded to the occasional visual stimuli. ERPs for tones delivered to the attended ear were negatively displaced relative to ERPs elicited by tones delivered to the unattended ear and to ERPs elicited by auditory stimuli during visual attention. This attention effect consisted of negative difference waves with early and late components. Mismatch negativities (MMNs) were elicited by 1300-Hz and 1050-Hz deviants irrespective of whether they occurred among attended or unattended tones. MMN amplitudes were unaffected by attention, supporting the proposal that the MMN is generated by an automatic cerebral discrimination process.

18.
Visual performance is better in response to vertical and horizontal stimuli than oblique ones in many visual tasks; this is called the orientation effect. In order to elucidate the electrophysiological basis of this psychophysical effect, we studied the effects of stimulus orientation on the amplitudes and latencies of visual evoked potentials (VEPs) over different spatial frequencies of the visual stimulation. VEPs to sinusoidal gratings at four orientations (vertical, horizontal, and oblique at 45 degrees and 135 degrees) with eight spatial frequencies (0.5-10.7 cycles/deg) at reversal rates of 1 Hz and 4 Hz were recorded in nine subjects. At 1-Hz stimulation, the amplitude and latency of P100 were measured. At 4-Hz stimulation, VEPs were Fourier-analyzed to obtain phase and amplitude of the second harmonic response (2F). At 1-Hz stimulation, P100 latencies were decreased for oblique stimuli compared with those for horizontal and vertical stimuli at lower spatial frequencies. Conversely, those for oblique stimuli were increased compared with those for horizontal and vertical stimuli at higher spatial frequencies. At 4-Hz stimulation, spatial tuning observed in 2F amplitude of the oblique gratings shifted to lower spatial frequencies when compared with those of vertical stimulation. The alteration of the VEP spatial frequency function caused by the oblique stimuli was in good agreement with the orientation effect observed in psychophysical studies. Our study may have a clinical implication in that VEP testing with stimuli in more than one orientation at slow and fast temporal modulations can be useful in evaluating neurological disease affecting the visual system.

19.
This study investigated the relationship between the distinctness of vowels in speech and impressions of the speaker's personality and speech style. Vowel sounds are considered to carry mainly phonetic information. For the experiment, formant frequencies of vowel sounds in original speech were altered to synthesize speech stimuli at four levels of formant contrast among different vowels. In Experiment 1, 36 university students listened to the speech stimuli and evaluated the speaker's personality using the Big Five scale. In Experiment 2, 35 participants evaluated the speech style. As the phonetic contrast between vowels grew larger, ratings of "conscientiousness" increased asymptotically. "Agreeableness" was rated highest when the vowel contrast was somewhat larger than the original, and decreased thereafter. Regarding speech styles, "naturalness" and "fluency" were rated highest when vowel contrasts were somewhat larger. "Pleasantness" was rated equally high for the original and somewhat larger contrasts, but lowest for the smallest contrast. In conclusion, vowel distinctness conveys not only phonetic information but also contributes systematically to impressions of speech style and the speaker's personality.

20.
Auditory event-related brain potentials (ERPs) were recorded for 250- and 4,000-Hz tone bursts in an intermodal selective attention task. Tonotopic changes were evident in the scalp distribution of the rising phase of the auditory N1 (mean peak latency 116 ms); the N1 was more frontally distributed following the 4,000-Hz than following the 250-Hz tone bursts, and it included a contralateral P90 component that was absent following 250-Hz tones. ERPs related to intermodal selective attention were isolated as negative and positive auditory difference waves (Nda and Pda). Neither the Nda nor the Pda showed changes in distribution with tone frequency, but both showed Ear × Frequency changes in distribution. ERPs for deviant tones included mismatch negativities (MMNs) and, in attend-auditory conditions, N2b and P3 components. These components did not change in scalp distribution with tone frequency. One possible explanation is that tonotopic displacements of ERP distributions on the scalp surface depend on angular displacements of generator fields on gyral convexities. The results are consistent with the possibility that auditory processing radiates outward with increasing latency from tonotopic fields on Heschl's gyri to more gyrus-free regions of the planum temporale and anterior superior temporal plane.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) 京ICP备09084417号