Similar Documents
20 similar documents found (search time: 31 ms)
1.
Auditory hallucinations are generally defined as false perceptions. Recent developments in auditory neuroscience have rapidly increased our understanding of normal auditory perception, revealing (partially) separate pathways for the identification ("what") and localization ("where") of auditory objects. The current review offers a reexamination of the nature of auditory hallucinations in schizophrenia using this object-based framework. First, the structural and functional organization of auditory what and where pathways is briefly described. Then, using recent functional neuroimaging data from healthy subjects and patients with schizophrenia, key phenomenological features of hallucinations are linked to abnormal processing both within and between these pathways. Finally, current cognitive explanations of hallucinations, based on intrusive cognitions and impaired source memory, are briefly outlined and set within this framework to provide an integrated cognitive neuropsychological model of auditory hallucinations.

2.
Averaging (in statistical terms, estimation of the location of data) is one of the most commonly used procedures in neuroscience and the basic procedure for obtaining event-related potentials (ERPs). Only the arithmetic mean is routinely used in the current practice of ERP research, even though its sensitivity to outliers is well known. Weighted averaging is sometimes used as a more robust procedure; however, it may not be appropriate when the signal is nonstationary within a trial. Trimmed estimators provide an alternative way to average data. In this paper, a number of such location estimators (the trimmed mean, the Winsorized mean and the recently introduced trimmed L-mean) are reviewed, together with the arithmetic mean and the median. A new robust location estimator, tanh, which allows data-dependent optimization, is proposed for averaging small numbers of trials. The potential to improve the signal-to-noise ratio (SNR) of averaged waveforms using trimmed location estimators is demonstrated for epochs randomly drawn from a set of real auditory evoked potential data.
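To make the robust averaging idea above concrete, here is a minimal Python sketch comparing the arithmetic mean, median, trimmed mean and Winsorized mean on synthetic trials; the data, trim fraction and SNR definition are illustrative assumptions, not the paper's tanh estimator or its evoked-potential dataset.

```python
import numpy as np
from scipy import stats
from scipy.stats import mstats

# Illustrative synthetic data: 40 trials x 600 samples of an "evoked"
# response buried in noise, with a few artifact-contaminated trials.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.6, 600)
signal = 5e-6 * np.exp(-((t - 0.3) ** 2) / 0.002)    # idealized ERP peak
trials = signal + 2e-5 * rng.standard_normal((40, 600))
trials[:3] += 2e-4 * rng.standard_normal((3, 600))   # outlier trials

# Different location estimators applied sample-by-sample across trials.
avg_mean = trials.mean(axis=0)
avg_median = np.median(trials, axis=0)
avg_trimmed = stats.trim_mean(trials, proportiontocut=0.1, axis=0)
avg_winsor = mstats.winsorize(trials, limits=(0.1, 0.1), axis=0).mean(axis=0)

def snr_db(avg, true=signal):
    """Crude SNR: power of the true waveform over power of the residual."""
    return 10 * np.log10(np.sum(true ** 2) / np.sum((avg - true) ** 2))

for name, avg in [("mean", avg_mean), ("median", avg_median),
                  ("trimmed mean", avg_trimmed), ("Winsorized mean", avg_winsor)]:
    print(f"{name:16s} SNR = {snr_db(avg):5.1f} dB")
```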

3.
OBJECTIVES: The locations of electrical sources in the brain can be calculated using EEG data. However, the accuracy of these calculations is not well known because it is usually not possible to compare calculated source locations with actual locations, since little accurate location information is available about most sources in the brain. METHODS: In this study, sources at known locations are created by injecting current into electrodes implanted in the brains of human subjects. The locations of the implanted and scalp EEG electrodes are determined from CTs. The EEG signals produced by these dipolar sources are used to calculate source locations in spherical head models containing brain, skull, and scalp layers. The brain and scalp layers have the same electrical conductivity, while 3 different skull conductivity ratios of 1/80th, 1/40th, and 1/20th of brain and scalp conductivity are used. Localization errors have been determined for 177 sources in 13 subjects. RESULTS: An average localization error of 10.6 (SD=5.5) mm for all 177 sources was obtained for a skull conductivity ratio of 1/40th. The average errors for the other ratios are only a few millimeters larger. The average localization error for 108 sources at superior locations in the brain is 9.2 (4.4) mm. The average error for 69 sources at inferior locations is 12.8 (6.2) mm. There are no significant differences in localization accuracy between deep and superficial sources. CONCLUSIONS: These results indicate that the best average localization accuracy that can be achieved using a spherical head model is approximately 10 mm. More realistic head models will be required for greater localization accuracy.
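For readers unfamiliar with how source locations are "calculated using EEG data", the sketch below illustrates dipole localization as a nonlinear least-squares fit of position to measured potentials. The forward model here is a simple unbounded-medium placeholder, not the layered spherical model of the study, and the electrode geometry, moment and noise level are invented.

```python
import numpy as np
from scipy.optimize import minimize

def forward_potentials(dipole_pos, dipole_moment, electrode_pos):
    """Placeholder forward model: potential of a current dipole in an
    unbounded homogeneous medium (NOT the layered sphere used in the study)."""
    r = electrode_pos - dipole_pos                 # (n_electrodes, 3)
    dist = np.linalg.norm(r, axis=1)
    return (r @ dipole_moment) / (4 * np.pi * dist ** 3)

def fit_dipole_position(measured, electrode_pos, moment):
    """Find the dipole position whose modelled potentials best match 'measured'."""
    def cost(pos):
        model = forward_potentials(pos, moment, electrode_pos)
        return float(np.sum((measured - model) ** 2) / np.sum(measured ** 2))
    return minimize(cost, x0=np.zeros(3), method="Nelder-Mead").x

# Illustrative synthetic example (positions in metres).
rng = np.random.default_rng(1)
electrodes = rng.standard_normal((31, 3))
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)
electrodes *= 0.09                                 # electrodes on a 9 cm "scalp"
true_pos = np.array([0.02, 0.01, 0.04])
moment = np.array([0.0, 0.0, 1e-8])
v = forward_potentials(true_pos, moment, electrodes)
v = v + 0.05 * v.std() * rng.standard_normal(v.shape)   # measurement noise
estimated = fit_dipole_position(v, electrodes, moment)
print("localization error (mm):", 1e3 * np.linalg.norm(estimated - true_pos))
```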

4.
OBJECTIVES: To determine the accuracy with which electrical sources in the human brain can be located using realistically shaped boundary element models of the head, and to compare this accuracy with that obtained using spherical head models. METHODS: In a previous study, electroencephalograms (EEGs) produced by sources at known locations in the brains of human subjects were recorded. The sources were created by injecting current into implanted depth electrodes. The locations of the implanted depth and scalp EEG electrodes and the head shape were determined from computerized tomography images. The EEGs were used to calculate source locations in spherical head models, and localization accuracy was determined by comparing the calculated and actual locations. In this study, these same EEGs are used to determine localization accuracy in realistically shaped head models. RESULTS: An average localization error of 10.5 (SD=5.4) mm was obtained in the realistically shaped models for all 176 sources in 13 subjects. This compares with 10.6 (5.5) mm in the spherical models. The average localization error for 105 sources at superior locations in the brain is 9.1 (4.2) mm. The average error for 71 sources at inferior locations is 12.4 (6.4) mm. The corresponding values for the spherical models are 9.2 (4.4) and 12.8 (6.2) mm. CONCLUSIONS: The realistically shaped boundary element head models used in this study produced very nearly the same localization accuracy as the spherical models.

5.
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour.
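As a rough illustration of the race model inequality analysis mentioned above (in the spirit of Miller's test), the sketch below compares the empirical distribution of multisensory reaction times against the bound given by summing the two unisensory distributions; the reaction-time samples are invented for illustration.

```python
import numpy as np

def ecdf(rt, grid):
    """Empirical cumulative distribution of reaction times, evaluated on a grid."""
    rt = np.sort(np.asarray(rt))
    return np.searchsorted(rt, grid, side="right") / rt.size

# Invented reaction-time samples (ms) for the three stimulus conditions.
rng = np.random.default_rng(2)
rt_auditory = rng.normal(260, 40, 200)
rt_visual = rng.normal(290, 45, 200)
rt_audiovisual = rng.normal(235, 35, 200)

grid = np.linspace(150, 450, 61)
race_bound = np.minimum(ecdf(rt_auditory, grid) + ecdf(rt_visual, grid), 1.0)
violation = ecdf(rt_audiovisual, grid) - race_bound
print("maximum race model violation:", violation.max())
```

Where the violation is positive, multisensory responses are faster than probability summation alone allows, which is usually taken as evidence for neural integration.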

6.
This paper presents a study of the intrinsic localization error bias due to the use of a spherical geometry model on EEG data simulated with realistically shaped models. About 2000 dipoles were randomly chosen on the segmented cortex surface of a particular subject. Forward calculations were performed using a uniformly meshed model for each dipole located at a depth greater than 20 mm below the brain surface, and locally refined models were used for shallower dipoles. Inverse calculations were performed using four different spherical models and another uniformly meshed model. It was found that the best spherical model led to localization errors of 5–6 mm in the upper part of the head and of 15–25 mm in the lower part. The influence of the number of electrodes on this intrinsic bias was also studied. It was found that using 32 electrodes instead of 19 improves localization by 2.7 mm on average, while using 63 electrodes instead of 32 led to improvements of less than 1 mm. Finally, simulations involving two simultaneously active dipoles (one in the vicinity of each auditory cortex) showed localization errors increasing by about 2–3 mm.

7.
Vision during early life plays an important role in calibrating sound localization behavior. This study investigates the effects of visual deprivation on sound localization and on the neural representation of auditory space. Nine barn owls were raised with eyelids sutured closed; one owl was congenitally anophthalmic. Data from these birds were compared with data from owls raised with normal visual experience. Sound localization behavior was significantly less precise in blind-reared owls than in normal owls. The scatter of localization errors was particularly large in elevation, though it was abnormally large in both dimensions. However, there was no systematic bias to the localization errors measured over a range of source locations. This indicates that the representation of auditory space is degraded in some way for blind-reared owls, but on average is properly calibrated. The spatial tuning of auditory neurons in the optic tectum was studied in seven of the blind-reared owls to assess the effects of early visual deprivation on the neural representation of auditory space. In normal owls, units in the optic tectum are sharply tuned for sound source location and are organized systematically according to the locations of their receptive fields to form a map of auditory space. In blind-reared owls, the following auditory properties were abnormal: (1) auditory tuning for source elevation was abnormally broad, (2) the progression of the azimuths and elevations of auditory receptive fields across the tectum was erratic, and (3) in five of the seven owls, the auditory representation of elevation was systematically stretched, and in the two others large portions of the representation of elevation were flipped upside down. The following unit properties were apparently unaffected by blind rearing: (1) the sharpness of tuning for sound source azimuth, (2) the orientation of the auditory representation of azimuth, and (3) the mutual alignment of the auditory and visual receptive fields in the region of the tectum representing the area of space directly in front of the animal. The data demonstrate that the brain is capable of generating an auditory map of space without vision, but that the normal precision and topography of the map depend on visual experience. The space map results from the tuning of tectal units for interaural intensity differences (IIDs) and interaural time differences (ITDs; Olsen et al., 1989). (ABSTRACT TRUNCATED AT 400 WORDS)

8.
Evaluation of L1 and L2 minimum norm performances on EEG localizations.
OBJECTIVE: In this work we study the performance of minimum norm methods in estimating the location of brain electrical activity. These methods are based on the simplest forms of the L1 and L2 norm estimates and are applied to simulated EEG data. The influence of several factors, such as the number of electrodes, grid density, head model, the number and depth of the sources, and noise levels, was taken into account. The main objective of the study is to characterize how source localization depends on these factors, to allow for proper interpretation of data obtained in real EEG records. METHODS: For the tests we used simulated dipoles and compared the localizations predicted by the L1 and L2 norms with the locations of these point-like sources. We varied each parameter separately and evaluated the results. RESULTS: From this work we conclude that the grid should be constructed with approximately 650 points, so that the information about the orientation of the sources is preserved, especially for L2 norm estimates; in favorable noise conditions, both the L1 and L2 norm approaches are able to distinguish between multiple point-like sources. CONCLUSIONS: The critical dependence of the results on the noise level and source depth indicates that regularized and weighted solutions should be used. Finally, all these results are valid both for spherical and for realistic head models.
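A minimal sketch of the two estimators being compared, assuming a known lead-field matrix L (here simply random, for illustration): the L2 minimum norm solution has a closed form, while an L1-penalized solution can be obtained with an off-the-shelf sparse solver. Grid size, regularization strengths and source indices are made-up values, not those of the study.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_electrodes, n_sources = 32, 650                      # grid of ~650 points, as above
L = rng.standard_normal((n_electrodes, n_sources))     # stand-in lead field

# Simulated measurement from two point-like sources plus noise.
j_true = np.zeros(n_sources)
j_true[[100, 480]] = [1.0, -0.8]
v = L @ j_true + 0.01 * rng.standard_normal(n_electrodes)

# L2 (Tikhonov-regularized) minimum norm estimate: closed form.
lam = 1e-2
j_l2 = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), v)

# L1-penalized estimate, which favours sparse (point-like) solutions.
j_l1 = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(L, v).coef_

print("largest |j| entries, L2:", np.argsort(np.abs(j_l2))[-2:])
print("largest |j| entries, L1:", np.argsort(np.abs(j_l1))[-2:])
```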

9.
A number of procedures have been employed to decompose recorded scalp potential waveforms into their hypothesized constituent elements. The shortcomings of the various decomposition methods (principal components analysis, topographic components modeling, inverse dipole localization and spatio-temporal dipole modeling) are reviewed and a new dipole components model, which incorporates the strengths of the topographic components model and the spatio-temporal dipole model, is presented. This model decomposes ERPs into subcomponents reflecting the activity of dipole sources with location and orientation fixed across subjects and with the temporal activity of each dipole modeled as a decaying sinusoid. The requirement that the equivalent dipole generators be the same across subjects and experimental conditions permits analysis of inter-group differences and of the effects of experimental variables. An application of the model to data from a 3-tone auditory target detection task is presented, and equivalent dipole sources of the components of the auditory evoked potential are described. Assumptions inherent in the model, as well as practical obstacles to its widespread implementation, are discussed.
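To make the "decaying sinusoid" temporal model concrete, here is a sketch of fitting such a component to one synthetic waveform; the parameterization, starting values and data are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def decaying_sinusoid(t, amplitude, decay, freq, phase):
    """Temporal activity of one dipole component modelled as a decaying sinusoid."""
    return amplitude * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t + phase)

# Illustrative synthetic component waveform (time in seconds).
rng = np.random.default_rng(4)
t = np.linspace(0.0, 0.8, 400)
clean = decaying_sinusoid(t, 4.0, 0.15, 5.0, 0.3)
observed = clean + 0.5 * rng.standard_normal(t.size)

params, _ = curve_fit(decaying_sinusoid, t, observed, p0=[2.0, 0.1, 5.0, 0.0])
print("fitted amplitude, decay (s), frequency (Hz), phase:", np.round(params, 3))
```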

10.
A problem frequently facing researchers examining the abundance of expression of a given antigen is measurement. When the antigen is confined to the nucleus, absolute numbers of nuclei or a percentage of nuclei expressing the antigen in a given region can be estimated. When the antigen is localized to the cytoplasm, cytoplasmic organelles, processes or membranes, the assessment becomes more difficult. In these settings, an observer/experimenter may assign a density score, but intra- and inter-observer agreement using a three-tiered system, let alone finer resolution, is unlikely to be reproducible. Digital image analysis provides an opportunity to minimize observer bias in the quantification of immunohistochemical staining. Previously reported digital methods have mostly employed chromogen-staining methods and often report mean image brightness. We report a method for quantitatively assessing and expressing the abundance of expression of an antigen in neural tissue stained with immunofluorescent methods by determining the brightness-area-product (BAP). The described protocol utilizes simple-to-use, commercially available software and calculates BAP, rather than mean brightness, as a measure more representative of antigen abundance and visual interpretation. Accordingly, we propose this protocol as a useful adjunct to observer interpretation of fluorescent immunohistochemistry and its application to the assessment of antigen abundance for varying patterns of antigen localization.
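A minimal sketch of a brightness-area-product computation, under the assumption that BAP is the above-threshold area multiplied by the mean brightness of that area (equivalently, the summed above-threshold intensity); the synthetic image and threshold are illustrative, not the commercial software's procedure.

```python
import numpy as np

def brightness_area_product(image, threshold):
    """BAP = (number of above-threshold pixels) x (their mean brightness),
    which reduces to the sum of above-threshold pixel intensities."""
    mask = image > threshold
    area = mask.sum()
    if area == 0:
        return 0.0
    return float(area * image[mask].mean())

# Illustrative synthetic immunofluorescence field: dim background plus a
# brighter labelled region.
rng = np.random.default_rng(5)
field = rng.normal(10, 2, (512, 512)).clip(0, 255)
field[200:260, 300:380] += rng.normal(60, 10, (60, 80))

print("brightness-area-product:", round(brightness_area_product(field, threshold=30), 1))
```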

11.
Visual localization ability influences cross-modal bias
The ability of a visual signal to influence the localization of an auditory target (i.e., "cross-modal bias") was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were examined: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multisensory (visual-auditory) targets is related to cross-modal bias; and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially "unified" (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness, for although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the midline. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.

12.
Hyperexcitability and an imbalance of excitation and inhibition are among the leading causes of abnormal sensory processing in Fragile X syndrome (FXS). The precise timing and distribution of excitation and inhibition are crucial for auditory processing at the level of the auditory brainstem, which is responsible for sound localization ability. Sound localization is one of the sensory abilities disrupted by loss of the Fragile X Mental Retardation 1 (Fmr1) gene. Using triple immunofluorescence staining, we tested whether there were alterations in the number and size of presynaptic structures for the three primary neurotransmitters (glutamate, glycine, and GABA) in the auditory brainstem of Fmr1 knockout mice. We found decreases in either glycinergic or GABAergic inhibition to the medial nucleus of the trapezoid body (MNTB) specific to the tonotopic location within the nucleus. MNTB is one of the primary inhibitory nuclei in the auditory brainstem and participates in the sound localization process with fast and well-timed inhibition. Thus, a decrease in inhibitory afferents to MNTB neurons should lead to greater inhibitory output from this nucleus to its projection targets. In contrast, we did not see any other significant alterations in the balance of excitation/inhibition in any of the other auditory brainstem nuclei measured, suggesting that the alterations observed in the MNTB are both nucleus and frequency specific. We furthermore show that glycinergic inhibition may be an important contributor to imbalances in excitation and inhibition in FXS and that the auditory brainstem is a useful circuit for testing these imbalances.

13.
In order to study auditory spatial localization in subjects with posterior damage involving the parietal lobe, we investigated their manual pointing performance to linguistic and white noise signals distributed over six sound sources situated in the anterior auditory field at ear level. The results showed: (1) A striking difference between patterns of deficits associated with right and left damage. In subjects with right damage, auditory localization deficits occurred in the horizontal plane, were manifested as restrictions in the peripheral left auditory hemifield and tended to be related to left visual neglect. In subjects with left damage, auditory localization deficits occurred in the entire auditory field in the horizontal as well as vertical planes, and they were particularly strong in the antero-frontal region. (2) One subject with right damage and visual neglect but no left auditory spatial restriction showed deficits in the right hemifield where sound source location tended to be overestimated. This subject also showed a better discrimination of the origin of a white noise than of a linguistic signal. Results are discussed in terms of hemispheric asymmetries of function.

14.
15.
Després O, Candas V, Dufour A. Neuropsychologia 2005, 43(13): 1955-1962
Several studies have reported that early-blind individuals display higher auditory spatial abilities than sighted individuals. Although many studies have attempted to delineate the cortical structures that undergo functional reorganization in blind people, few have tried to determine which auditory or non-auditory processes mediate these increased auditory spatial abilities. The aim of this paper is to investigate the role of eye movements and the orienting of attention in auditory localization in blind humans. Although we found, in a first experiment, that the influence of eye movements on auditory spatial localization is preserved in spite of congenital visual deprivation, the influence of saccades on spatial hearing is not more pronounced in the blind than in the sighted. In a second experiment, early-blind and sighted subjects undertook a sound-elevation discrimination task in which auditory targets followed uninformative auditory cues presented on either side at an intermediate elevation. When sounds were emitted from the frontal hemifield, both groups showed similar auditory localization performance. Although the auditory cue did not affect discrimination accuracy in either group, early-blind subjects exhibited shorter reaction times than sighted subjects when sound sources were placed at far-lateral locations. Attentional cues, however, had similar effects on both groups of subjects, suggesting that improved auditory spatial abilities are not mediated by attention-orienting mechanisms.

16.
OBJECTIVE: The estimation of cortical current activity from scalp-recorded potentials is a complicated mathematical problem that requires fairly precise knowledge of the location of the scalp electrodes. It is expected that spatial mislocalization of electrodes will introduce errors in this estimation. The present study uses simulated and real data to quantify these errors for dipole current sources in a spherical head model. METHODS: A 3-dimensional digitizer was used to locate the positions of 31 scalp electrodes placed on the head according to the 10-20 system in 10 normal subjects. Dipole localizations were performed on auditory evoked potentials (AEPs) collected from these subjects. RESULTS: Computer simulations with several dipole source configurations suggest that errors in locations and orientations on the order of 5 mm and 5 degrees, respectively, are possible for electrode mislocalizations of about 5 degrees. In actual experimental settings, digitized electrode positions were typically mislocalized by an average of about 4 degrees from their standard 10-20 positions on a spherical model. These differences in electrode positions translated to mean differences of about 8 mm in dipole locations and 5 degrees in dipole orientations. CONCLUSIONS: Dipole estimation errors due to electrode mislocalizations are within the limits of errors due to other modeling approximations and noise.
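To put the angular figures above into spatial terms, the short sketch below converts an angular electrode mislocalization into arc length along the scalp, assuming a 9 cm scalp radius (an illustrative value typical of spherical head models, not taken from the study).

```python
import numpy as np

# Convert angular electrode mislocalization into displacement along the scalp.
# Assumes a spherical head model with a 9 cm scalp radius (illustrative value).
scalp_radius_mm = 90.0
for mislocalization_deg in (2.0, 4.0, 5.0, 10.0):
    arc_mm = scalp_radius_mm * np.deg2rad(mislocalization_deg)
    print(f"{mislocalization_deg:4.1f} deg  ->  {arc_mm:4.1f} mm along the scalp")
```

On this assumption, the roughly 4-degree mislocalizations reported above correspond to about 6 mm of scalp displacement, on the same order as the ~8 mm dipole location shifts.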

17.
The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks and global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near-identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences. SIGNIFICANCE STATEMENT: The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
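A minimal sketch of the kind of sequential surprisal computation described above, for a single acoustic feature modelled as a Gaussian with unknown mean; this conjugate-Gaussian toy model is a simplified stand-in for the paper's predictive Bayesian inference model, and all parameters and the tone sequence are invented.

```python
import numpy as np
from scipy import stats

def sequential_surprisal(x, prior_mean=0.0, prior_var=4.0, obs_var=1.0):
    """Online Gaussian mean estimation; the surprisal of each observation is the
    negative log of its posterior-predictive probability before updating."""
    mean, var = prior_mean, prior_var
    surprisal = []
    for value in x:
        pred_var = var + obs_var
        surprisal.append(-stats.norm.logpdf(value, loc=mean, scale=np.sqrt(pred_var)))
        # Conjugate Gaussian update of the belief about the underlying mean.
        gain = var / pred_var
        mean = mean + gain * (value - mean)
        var = var * obs_var / pred_var
    return np.array(surprisal)

# Illustrative tone sequence whose generative mean (e.g. pitch relative to some
# reference) jumps halfway through.
rng = np.random.default_rng(6)
tones = np.concatenate([rng.normal(0, 1, 30), rng.normal(4, 1, 30)])
s = sequential_surprisal(tones)
print("mean surprisal before / after the change:", s[:30].mean(), s[30:].mean())
```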

18.
Spence C, Driver J. Neuroreport 2000, 11(9): 2057-2061
Sound localization can be affected by vision; in the ventriloquism effect, sounds that are hard to localize by hearing alone become mislocalized toward the location of concurrent visual events. Here we tested whether spatial attention is drawn to the illusory location of a ventriloquized sound. The study exploited our previous finding that visual cues do not attract auditory attention. We report an important exception to this rule: auditory attention can be drawn to the location of a visual cue when it is paired with a concurrent unlocalizable sound, to produce ventriloquism. This demonstrates that crossmodal integration can precede reflexive shifts of attention, with such shifts taking place toward the crossmodally determined illusory location of a sound. It also shows that ventriloquism arises automatically, with objective as well as subjective consequences.

19.
The orienting of attention to the spatial location of sensory stimuli in one modality based on sensory stimuli presented in another modality (i.e., cross-modal orienting) is a common mechanism for controlling attentional shifts. The neuronal mechanisms of top-down cross-modal orienting have been studied extensively. However, the neuronal substrates of bottom-up audio-visual cross-modal spatial orienting remain to be elucidated. Therefore, behavioral and event-related functional magnetic resonance imaging (fMRI) data were collected while healthy volunteers (N = 26) performed a spatial cross-modal localization task modeled after the Posner cuing paradigm. Behavioral results indicated that although both visual and auditory cues were effective in producing bottom-up shifts of cross-modal spatial attention, reorienting effects were greater for the visual cue condition. Statistically significant evidence of inhibition of return was not observed for either condition. Functional results also indicated that visual cues with auditory targets resulted in greater activation within ventral and dorsal frontoparietal attention networks, visual and auditory "where" streams, primary auditory cortex, and thalamus during reorienting across both short and long stimulus onset asynchronies. In contrast, no areas of unique activation were associated with reorienting following auditory cues with visual targets. In summary, the current results question whether audio-visual cross-modal orienting is supramodal in nature, suggesting rather that the initial modality of cue presentation heavily influences both behavioral and functional results. In the context of localization tasks, reorienting effects accompanied by the activation of the frontoparietal reorienting network are more robust for visual cues with auditory targets than for auditory cues with visual targets. Hum Brain Mapp 35:964–974, 2014. © 2013 Wiley Periodicals, Inc.

20.
Spatial attention mediates the selection of information from different parts of space. When a brief cue is presented shortly before a target at the same location, i.e. at a short cue-to-target onset asynchrony (CTOA), behavioral responses are facilitated, a process called attention capture. At longer CTOAs, responses to targets presented in the same location are inhibited; this is called inhibition of return (IOR). In the visual modality, these processes have been demonstrated in both humans and non-human primates, the latter allowing for the study of the underlying neural mechanisms. In audition, the effects of attention have only been shown in humans when the experimental task requires sound localization. Studies in monkeys using similar cues but without a sound localization requirement have produced negative results. We have studied the effects of predictive acoustic cues on the latency of gaze shifts to visual and auditory targets in monkeys experienced in localizing sound sources in the laboratory with the head unrestrained. Both attention capture and IOR were demonstrated with acoustic cues, although with a faster time course than with visual cues. Additionally, the effect was observed across sensory modalities (acoustic cue to visual target), suggesting that the underlying neural mechanisms of these effects may be mediated within the superior colliculus, a center where inputs from both vision and audition converge.
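As a rough illustration of how attention capture and IOR are typically quantified from such data, the sketch below compares cued and uncued gaze-shift latencies at several CTOAs; the latency values are invented and do not come from the study.

```python
import numpy as np

# Invented median gaze-shift latencies (ms) for cued vs. uncued target
# locations at several cue-to-target onset asynchronies (CTOAs, ms).
ctoa = np.array([50, 100, 200, 400, 800])
rt_cued = np.array([205, 210, 240, 265, 270])
rt_uncued = np.array([230, 235, 238, 245, 248])

# Cueing effect: uncued minus cued latency. Positive values indicate
# facilitation (attention capture); negative values indicate IOR,
# which typically emerges at the longer CTOAs.
effect = rt_uncued - rt_cued
for c, e in zip(ctoa, effect):
    label = "facilitation" if e > 0 else "IOR"
    print(f"CTOA {c:4d} ms: cueing effect {e:+4d} ms ({label})")
```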
