Similar Articles
A total of 20 similar articles were found (search time: 31 ms).
1.
Presynaptic mechanisms influencing the probability of neurotransmitter release from an axon terminal, such as facilitation, augmentation, and presynaptic feedback inhibition, are fundamental features of biological neurons and are cardinal physiological properties of synaptic connections in the hippocampus. The consequence of these presynaptic mechanisms is that the probability of release becomes a function of the temporal pattern of action potential occurrence, and hence the strength of a given synapse varies upon the arrival of each action potential invading the terminal region. From the perspective of neural information processing, the capability of dynamically tuning synaptic strength as a function of the level of neuronal activation gives synapses significant power to represent and process temporal spike patterns. Furthermore, such computational power grows exponentially when the specific dynamics of presynaptic mechanisms vary quantitatively across the axon terminals of a single neuron, a recently established characteristic of hippocampal synapses. During learning, alterations in the presynaptic mechanisms lead to different pattern-transformation functions, whereas changes in the postsynaptic mechanisms determine how the synaptic signals are to be combined. We demonstrate the computational capability of dynamic synapses by performing speech recognition from unprocessed, noisy raw waveforms of words spoken by multiple speakers, using a simple neural network consisting of a small number of neurons connected by synapses with dynamically determined probability of release. The dynamics included in the model are consistent with available experimental data on hippocampal neurons: parameter values were chosen to match the time constants of the facilitative and inhibitory processes that govern hippocampal synaptic transmission, as characterized using nonlinear systems-analytic procedures.
© 1997 Wiley-Liss, Inc.
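The release-probability dynamics described in this abstract can be sketched in a few lines. This is an illustrative toy model, not the authors' fitted nonlinear-systems model; the function name, increments, and time constants are assumptions chosen only to show a fast facilitative process outweighing a slower inhibitory one at short inter-spike intervals:

```python
import math

def release_probability(spike_times, p0=0.2, f_inc=0.15, tau_f=0.1, tau_d=0.5):
    """Per-spike release probability with facilitation (tau_f) and a slower
    depressing/inhibitory process (tau_d). All constants are illustrative,
    not fitted to hippocampal data."""
    p = []
    fac, dep = 0.0, 0.0
    last_t = None
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            fac *= math.exp(-dt / tau_f)   # facilitation decays between spikes
            dep *= math.exp(-dt / tau_d)   # inhibition decays more slowly
        prob = min(1.0, max(0.0, p0 + fac - dep))
        p.append(prob)
        fac += f_inc                       # each spike adds facilitation...
        dep += f_inc * prob                # ...and release-dependent depression
        last_t = t
    return p
```

With closely spaced spikes the computed probability climbs (facilitation dominates); after a long pause it relaxes back toward the baseline `p0`.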

2.
On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine-learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, the dynamic eSNN, that utilises both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model called deSNN, which utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to dynamically evolve the network, changing the connection weights that capture spatio-temporal spike-data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied to two case-study applications: (1) moving-object recognition using address-event representation (AER), with data collected using a silicon retina device; (2) EEG SSTD recognition for brain–computer interfaces. The deSNN models achieved superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning.
The reason is that the deSNN exploits both the information contained in the order of the first input spikes (which is explicitly present in the input data streams and can be crucial in some tasks) and the information contained in the timing of the subsequent spikes, which the dynamic synapses learn as a whole spatio-temporal pattern.

3.
The Liquid State Machine (LSM) is a biologically plausible computational neural network model for real-time computing on time-varying inputs, whose structure and function were inspired by the properties of neocortical columns in the central nervous system of mammals. The LSM uses spiking neurons connected by dynamic synapses to project inputs into a high-dimensional feature space, allowing classification of inputs by linear separation, similar to the approach used in support vector machines (SVMs). The performance of an LSM neural network model on pattern recognition tasks mainly depends on its parameter settings. Two parameters are of particular interest: the distribution of synaptic strengths and synaptic connectivity. To design an efficient liquid filter that performs desired kernel functions, these parameters need to be optimized. We have studied performance as a function of these parameters for several models of synaptic connectivity. The results show that in order to achieve good performance, large synaptic weights are required to compensate for a small number of synapses in the liquid filter, and vice versa. In addition, a larger variance of the synaptic weights results in better performance on LSM benchmark problems. We also propose a genetic algorithm-based approach to evolve the liquid filter from a minimum structure with no connections to an optimized kernel with a minimal number of synapses and high classification accuracy. This approach facilitates the design of an optimal LSM with reduced computational complexity. Results obtained using this genetic programming approach show that the synaptic weight distribution after evolution is similar in shape to that found in cortical circuitry.
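The liquid-filter idea above, projecting an input stream into a high-dimensional state that a linear readout can classify, can be caricatured with a rate-based reservoir. This sketch replaces the spiking neurons and dynamic synapses with tanh units, and the `gain` parameter crudely stands in for the synaptic-weight magnitude the abstract discusses; all names, sizes, and constants are illustrative:

```python
import math
import random

def reservoir_states(inputs, n=50, gain=0.9, seed=1):
    """Rate-based caricature of a liquid filter: a fixed random recurrent
    network projects a 1-D input stream into an n-dimensional state vector
    from which a linear readout could separate classes. The 1/sqrt(n)
    scaling keeps the recurrent dynamics roughly contractive."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    w = [[rng.uniform(-1.0, 1.0) * gain / math.sqrt(n) for _ in range(n)]
         for _ in range(n)]
    x = [0.0] * n
    for u in inputs:
        # new state = tanh(input drive + recurrent drive from previous state)
        x = [math.tanh(w_in[i] * u + sum(w[i][j] * x[j] for j in range(n)))
             for i in range(n)]
    return x
```

Different input streams land on different reservoir states (separation), while identical streams reproduce the same state exactly, which is what makes a fixed random liquid usable as a kernel.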

4.
Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them lack either neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and the decision-making architectures found in the brain.
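The STDP rule this model relies on is, in its simplest pair-based form, an exponential learning window (a standard textbook formulation, not this paper's exact rule with its additional synaptic dynamics; the constants are illustrative):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    """Weight change for one pre/post spike pair.
    dt = t_post - t_pre (seconds): positive dt (pre before post) gives
    potentiation, negative dt gives depression. Constants are illustrative."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

Summing `stdp_dw` over all pre/post spike pairs, and clipping the weight to an allowed range, yields the classic additive pair-based STDP update.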

5.
J Storck  F Jäkel  G Deco 《Neural networks》2001,14(3):275-285
We apply spiking neurons with dynamic synapses to detect temporal patterns in a multi-dimensional signal. We use a network of integrate-and-fire neurons, fully connected via dynamic synapses, each of which is given by a biologically plausible dynamical model based on the exact pre- and post-synaptic spike timing. Depending on their adaptable configuration (learning), the synapses automatically implement specific delays. Hence, each output neuron, with its set of incoming synapses, works as a detector for a specific temporal pattern. The whole network functions as a temporal clustering mechanism with one output per input cluster. The classification capability is demonstrated by illustrative examples, including patterns from Poisson processes and the analysis of speech data.
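The integrate-and-fire neurons in such a network can be sketched with a basic leaky integrate-and-fire update (the dynamic-synapse and delay-learning machinery of the paper is omitted; constants and units are illustrative):

```python
def lif_spike_times(input_current, dt=0.001, tau_m=0.020, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-v + I) / tau_m, with a
    spike and reset whenever v crosses v_th. Returns spike times (s)."""
    v, spikes = 0.0, []
    for i, I in enumerate(input_current):
        v += dt * (-v + I) / tau_m     # forward-Euler membrane update
        if v >= v_th:                  # threshold crossing -> spike
            spikes.append(i * dt)
            v = v_reset                # reset after each spike
    return spikes
```

Subthreshold input (steady-state voltage below `v_th`) produces no spikes, while stronger input drives the neuron to fire at a higher rate.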

6.
According to dendritic cable theory, proximal synapses give rise to inputs with short delay, high amplitude, and short duration. In contrast, inputs from distal synapses have long delays, low amplitude, and long duration. Nevertheless, large-scale neural networks are seldom built with realistically layered synaptic architectures and corresponding electrotonic parameters. Here, we use a simple model to investigate the spike response dynamics of networks with different electrotonic structures. The networks consist of a layer of neurons receiving a sparse feedforward projection from a set of inputs, as well as sparse recurrent connections from within the layer. Firing patterns are set in the inputs, and recorded from the neuron (output) layer. The feedforward and recurrent synapses are independently set as proximal or distal, representing dendritic connections near or far from the soma, respectively. Analyses of firing dynamics indicate that recurrent distal synapses tend to concentrate network activity in fewer neurons, while proximal recurrent synapses result in a more homogeneous activity distribution. In addition, when the feedforward input is regular (spiking or bursting) and asynchronous, the output is regular if recurrent synapses are more distal than feedforward ones, and irregular in the opposite configuration. Finally, the amplitude of network fluctuations in response to asynchronous input is lower if feedforward and recurrent synapses are electrotonically distant from one another (in either configuration). In conclusion, electrotonic effects reflecting different dendritic positions of synaptic inputs significantly influence network dynamics.

7.
《Brain stimulation》2019,12(6):1402-1409
Background: Deep brain stimulation (DBS) is a successful clinical therapy for a wide range of neurological disorders; however, the physiological mechanisms of DBS remain unresolved. While many different hypotheses currently exist, our analyses suggest that high-frequency (∼100 Hz) stimulation-induced synaptic suppression represents the most basic concept that can be directly reconciled with experimental recordings of spiking activity in neurons that are being driven by DBS inputs.
Objective: The goal of this project was to develop a simple model system to characterize the excitatory post-synaptic currents (EPSCs) and action potential signaling generated in a neuron that is strongly connected to pre-synaptic glutamatergic inputs that are being directly activated by DBS.
Methods: We used the Tsodyks-Markram (TM) phenomenological synapse model to represent depressing, facilitating, and pseudo-linear synapses driven by DBS over a wide range of stimulation frequencies. The EPSCs were then used as inputs to a leaky integrate-and-fire neuron model, and we measured the DBS-triggered post-synaptic spiking activity.
Results: Synaptic suppression was a robust feature of high-frequency stimulation, independent of the synapse type. As such, the TM equations were used to define alternative DBS pulsing strategies that maximized synaptic suppression with the minimum number of stimuli.
Conclusions: Synaptic suppression provides a biophysical explanation for the intermittent, but still time-locked, post-synaptic firing characteristics commonly seen in DBS experimental recordings. Therefore, network models attempting to analyze or predict the effects of DBS on neural activity patterns should integrate synaptic suppression into their simulations.
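The Tsodyks-Markram model named in the Methods can be written in event-driven form, with a utilisation variable u (facilitation) and a resource variable x (depression). This is the commonly used formulation; the parameter values below describe a depressing synapse and are illustrative, not the study's fitted values:

```python
import math

def tm_epsc_amplitudes(isi, n_spikes, U=0.5, tau_d=0.8, tau_f=0.02):
    """Tsodyks-Markram synapse driven at a fixed inter-spike interval
    `isi` (s). Returns the relative EPSC amplitude (u * x) per spike."""
    u, x = 0.0, 1.0
    amps = []
    for _ in range(n_spikes):
        u = u * math.exp(-isi / tau_f)                 # facilitation decays
        x = 1.0 - (1.0 - x) * math.exp(-isi / tau_d)   # resources recover
        u += U * (1.0 - u)                             # spike boosts utilisation
        amps.append(u * x)                             # released fraction
        x -= u * x                                     # spike depletes resources
    return amps
```

Driving this synapse at roughly 100 Hz (`isi = 0.01`) makes the per-spike EPSC amplitude collapse within a few pulses, which is the synaptic-suppression effect the study builds on; at low rates the amplitude stays near its resting value.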

8.
《Neural networks》1999,12(6):825-836
We present a cellular-type oscillatory neural network for the temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators, with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network, which consists of local connections for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and each neuron has a piecewise-linear activation characteristic. The dynamics of traditional oscillatory models is simplified by using only one plastic synapse, and the overall complexity for hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, which opens up tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses, it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the proposed learning rule circumvents the problem of component mismatches, and hence facilitates large-scale integration.

9.
N Levy  D Horn  I Meilijson  E Ruppin 《Neural networks》2001,14(6-7):815-824
We investigate the formation of a Hebbian cell assembly of spiking neurons, using a temporal synaptic learning curve that is based on recent experimental findings. It includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order. The coupling between the dynamics of synaptic learning and that of neuronal activation leads to interesting results. One possible mode of activity is distributed synchrony, implying spontaneous division of the Hebbian cell assembly into groups, or subassemblies, of cells that fire in a cyclic manner. The behavior of distributed synchrony is investigated both by simulations and by analytic calculations of the resulting synaptic distributions.

10.
Szabo TM  Zoran MJ 《Brain research》2007,1129(1):63-71
Electrical synapses are abundant before and during developmental windows of intense chemical synapse formation, and might therefore contribute to the establishment of neuronal networks. Transient electrical coupling develops and is then eliminated between regenerating Helisoma motoneurons 110 and 19 during a period of 48-72 h in vivo and in vitro following nerve injury. An inverse relationship exists between electrical coupling and chemical synaptic transmission at these synapses, such that the decline in electrical coupling is coincident with the emergence of cholinergic synaptic transmission. In this study, we have generated two- and three-cell neuronal networks to test whether predicted synaptogenic capabilities were affected by previous synaptic interactions. Electrophysiological analyses demonstrated that synapses formed in three-cell neuronal networks were not those predicted based on synaptogenic outcomes in two-cell networks. Thus, new electrical and chemical synapse formation within a neuronal network is dependent on existing connectivity of that network. In addition, new contacts formed with established networks have little impact on these existing connections. These results suggest that network-dependent mechanisms, particularly those mediated by gap junctional coupling, regulate synapse formation within simple neural networks.

11.
Synapses are essential elements for computation and information storage in both real and artificial neural systems. An artificial synapse needs to remember its past dynamical history, store a continuous set of states, and be “plastic” according to the pre-synaptic and post-synaptic neuronal activity. Here we show that all this can be accomplished by a memory–resistor (memristor for short). In particular, by using simple and inexpensive off-the-shelf components we have built a memristor emulator which realizes all required synaptic properties. Most importantly, we have demonstrated experimentally the formation of associative memory in a simple neural network consisting of three electronic neurons connected by two memristor–emulator synapses. This experimental demonstration opens up new possibilities in the understanding of neural processes using memory devices, an important step towards reproducing complex learning, adaptive and spontaneous behavior with electronic neural networks.
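The memristive-synapse idea can be caricatured with an ideal current-controlled memristor, whose resistance depends on the history of charge that has flowed through it. This is not the authors' emulator circuit; the state equation, clamping scheme, and all constants are illustrative assumptions:

```python
def memristor_drive(currents, dt=1e-3, r_on=100.0, r_off=16e3, k=1e4):
    """Ideal current-controlled memristor: internal state w in [0, 1]
    integrates the applied current (i.e. tracks charge), and the
    memristance interpolates between r_on and r_off as w changes."""
    w = 0.5
    resistances = []
    for i in currents:
        w = min(1.0, max(0.0, w + k * i * dt))   # state follows charge flow
        resistances.append(r_off + (r_on - r_off) * w)
    return resistances
```

Sustained current of one polarity drives the resistance down (a "potentiated" synapse); reversing the current drives it back up, and the device retains its state between pulses, which is what makes it behave like a plastic synaptic weight.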

12.
Activity-dependent synaptic plasticity has important implications for network function. The previously developed model of the hippocampal CA1 area, which contained pyramidal cells (PC) and two types of interneurons involved in feed-forward and recurrent inhibition, respectively, and received synaptic inputs from CA3 neurons via the Schaffer collaterals, was enhanced by incorporating dynamic synaptic connections capable of changing their weights depending on presynaptic activation history. The model output was presented as field potentials, which were compared with those derived experimentally. The parameters of the Schaffer collateral-PC excitatory model synapse were determined, with which the model successfully reproduced the complicated dynamics of train-stimulation sequential potentiation/depression observed in experimentally recorded field responses. It was found that the model better reproduces the time course of experimental field potentials if the inhibitory synapses on PC are also made dynamic, with pronounced frequency-dependent depression. This finding supports experimental evidence that these synapses are subject to activity-dependent depression. The model field potentials in response to various randomly generated and real (derived from recorded CA3 unit activity) long stimulating trains were calculated, illustrating that short-term plasticity with the observed characteristics could play specific roles in frequency processing in the hippocampus, thus providing a new tool for the theoretical study of activity-dependent synaptic plasticity.

13.
A van Schaik 《Neural networks》2001,14(6-7):617-628
We present an electronic circuit modelling the spike generation process in the biological neuron. This simple circuit is capable of simulating the spiking behaviour of several different types of biological neurons. At the same time, the circuit is small so that many neurons can be implemented on a single silicon chip. This is important, as neural computation obtains its power not from a single neuron, but from the interaction between a large number of neurons. Circuits that model these interactions are also presented in this paper. They include the circuits for excitatory, inhibitory and shunting inhibitory synapses, a circuit which models the regeneration of spikes on the axon, and a circuit which models the reduction of input strength with the distance of the synapse to the cell body on the dendrite of the cell. Together these building blocks allow the implementation of electronic spiking neural networks.

14.
Forty-five years ago the surprising discovery was made, in a Melbourne University laboratory, that peripheral synapses exist that release neither noradrenaline nor acetylcholine. The same laboratory went on to show that one of these then-novel transmitters is adenosine 5'-triphosphate (ATP), for which a class of receptors has been dubbed P2X7. Recent linkage studies have shown that the P2X7 gene is associated with major depression and bipolar disorder. This speculative paper considers possible mechanisms that could link polymorphisms in the P2X7 gene with the functioning of neural networks, especially in the hippocampus. A selective review of the neurobiological literature on the location and function of the P2X7 receptor at synapses and on astrocytes as well as microglial cells was performed in the context of determining viable hypotheses as to the function of these receptors during synaptic transmission in the neural networks of the hippocampus. It is suggested that P2X7 receptors participate in a regenerative loop at central glutamatergic synapses. In this loop, glutamate-evoked release of ATP from both astrocytes and microglial cells, as well as ATP derived from autocatalytic release from astrocytes, provides purines that can act on presynaptic P2X7 purinergic receptors. This increases glutamate release, raising the amount of ATP at the synapse and leading to a new functional state of the neural network in which the synapse participates. This synaptic ATP can also act on microglial P2X7 receptors to release the cytokine tumour necrosis factor-alpha (TNF-alpha), as can glutamate, with this TNF-alpha acting on the post-synaptic neuronal membrane to increase glutamate alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) receptors there.
As synaptic ATP and glutamate are maintained by the regenerative loop, they provide a sustained release of TNF-alpha, and therefore of AMPA receptor enhancement, increasing synaptic efficacy and so contributing to the new functional state of the neural network. Infections can change this state by activating toll-like (TOL) receptors on the microglia concomitantly with their P2X7 receptor activation by the regenerative loop, thereby releasing the cytokine interleukin-1beta, which decreases the AMPA receptors in the neural membrane, so decreasing synaptic efficacy and changing the functional state of the neural network in which the synapse resides. Polymorphisms in the P2X7 gene that modify the operation of the regenerative loop or the release of cytokines can, like infections, change the functional state of neural networks, which may then lead to vulnerability to mood disorders.

15.
Within the nervous system, neurons are organized into different neural networks through synaptic connections. Two fundamental components interact dynamically in these functional units. The first is the neurons themselves: far from being simple action potential generators, they are capable of complex electrical integration owing to the various types, numbers, distributions, and modulation of their voltage-gated ionic channels. The second is the synapses, where similar complexity and plasticity are found. Identifying both cellular and synaptic intrinsic properties is necessary to understand the links between neural network behavior and physiological function, and is a useful step towards better control of neurological diseases.

16.
A new type of synaptic contact has been found in Aplysia californica, in which a post-synaptic spine extensively invaginates the pre-synaptic element. The post-synaptic spine, usually less than 0.25 micrometers in diameter, may protrude up to 2 micrometers into the pre-synaptic element. In some instances a larger post-synaptic element indents and forms multiple thin projections into the pre-synaptic varicosity. Along or at the end of these projections a zone occurs at which the surface membranes of the two apposed synaptic elements are rigidly parallel, and the extracellular gap is approximately 60% greater than normal and contains a small amount of electron-dense material. Synaptic vesicles are concentrated against the pre-synaptic membrane in these regions. There are twice as many vesicles per unit area positioned against the membrane at these zones as at similar active zones in the alternative type of synapse, which has a flat, rather than indented, geometry. Single pre-synaptic varicosities have been found to form both flat and indented synapses. These findings raise the possibility that the two forms of synapse may be dynamic transformations of each other, having differing synaptic effectiveness.

17.
During learning and development, neural circuitry is refined, in part, through changes in the number and strength of synapses. Most studies of long-term changes in synaptic strength have concentrated on Hebbian mechanisms, where these changes occur in a synapse-specific manner. While Hebbian mechanisms are important for modifying neuronal circuitry selectively, they might not be sufficient because they tend to destabilize the activity of neuronal networks. Recently, several forms of homeostatic plasticity that stabilize the properties of neural circuits have been identified. These include mechanisms that regulate neuronal excitability, stabilize total synaptic strength, and influence the rate and extent of synapse formation. These forms of homeostatic plasticity are likely to go 'hand-in-glove' with Hebbian mechanisms to allow experience to modify the properties of neuronal networks selectively.

18.
Changes in human and animal behaviour, and in the neural functions involved, are characterized by structural alterations in the brain circuitry. These changes comprise the formation of new synapses and the elimination of existing ones, alongside the modulation of connection properties in others. The mechanisms of neuronal branching and cell contacting regulate and prepare for the processes of synapse formation. In this study, we present a set of methods to detect, describe, and analyse the dynamics attributed to the process of cell contacting in cell cultures in vitro. This involves the dynamics of branching and of searching for synaptic partners. The proposed technique formally distinguishes between actual formed synapses and potential synaptic sites, i.e. where cell contacts are likely. The study investigates the dynamic behaviour of these potential synaptic sites during the search for contacts. The introduced tools use morphological image processing algorithms to automatically detect the sites of interest. Results indicate that the introduced tools can reliably describe the experimentally observed dynamics of branching and contact seeking. Being straightforward in terms of implementation and analysis, our framework represents a solid method for studying the neural preparation phases of synapse formation via cell contacting in random networks using standard phase-contrast microscopy.

19.
Daniel Durstewitz   《Neural networks》2009,22(8):1189-1200
In cortical networks, synaptic excitation is mediated by AMPA- and NMDA-type receptors. NMDA synaptic potentials differ from AMPA potentials with regard to peak current, time course, and a strong voltage-dependent nonlinearity. Here we illustrate, based on empirical and computational findings, that these specific biophysical properties may have profound implications for the dynamics of cortical networks and, via those dynamics, for cognitive functions like active memory. The discussion is organized around a minimal set of neural equations introduced to capture the essential dynamics of the various phenomena described. NMDA currents could establish cortical bistability and may provide the relatively constant synaptic drive needed to robustly maintain enhanced levels of activity during working memory epochs, freeing fast AMPA currents for other computational purposes. Perhaps more importantly, variations in NMDA synaptic input, owing to their biophysical particularities, control the dynamical regime within which single neurons and networks reside. By provoking bursting, chaotic irregularity, and coherent oscillations, their major effect may be on the temporal pattern of spiking activity rather than on average firing rate. During active memory, neurons may thus be pushed into a spiking regime that harbors complex temporal structure, potentially optimal for the encoding and processing of temporal sequence information. These observations provide a qualitatively different view of the role of synaptic excitation in neocortical dynamics than that entailed by many more abstract models. In this sense, this article is a plea for taking the specific biophysics of real neurons and synapses seriously when trying to account for the neurobiology of cognition.
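The voltage-dependent nonlinearity of NMDA receptors mentioned above is commonly modelled with the Jahr-Stevens magnesium-block factor; whether this article's equations use exactly this form is not stated in the abstract:

```python
import math

def nmda_mg_block(v_mv, mg_mM=1.0):
    """Fraction of NMDA conductance left unblocked by Mg2+ at membrane
    potential v_mv (mV), using the Jahr & Stevens (1990) formula widely
    used in cortical network models."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mv))
```

Near rest (around -70 mV) the channel is almost fully blocked, and the unblocked fraction grows steeply with depolarization; it is this voltage dependence that lets NMDA currents support bistability and sustained activity.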

20.
《Neural networks》2002,15(2):155-161
This article throws new light on the possible role of synapses in information transmission, through theoretical analysis and computer simulations. We show that the internal dynamic state of a synapse may serve as a transient memory buffer that stores information about the most recent segment of the spike train previously sent to this synapse. This information is transmitted to the postsynaptic neuron through the amplitudes of the postsynaptic responses to the next few spikes. In fact, we show that most of this information about the preceding spike train is already contained in the postsynaptic response to just two additional spikes. It is demonstrated that the postsynaptic neuron simultaneously receives information about the specific type of synapse that has transmitted these pulses. In view of recent findings by Gupta et al. [Science, 287 (2000) 273] that different types of synapses are characteristic of specific types of presynaptic neurons, the postsynaptic neuron receives in this way partial knowledge about the identity of the presynaptic neuron from which it has received information. Our simulations are based on recent data about the dynamics of GABAergic synapses. We show that the relatively large number of synaptic release sites that make up a GABAergic synaptic connection makes these connections suitable for such complex information-transmission processes.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号