Local dendritic balance enables learning of efficient representations in networks of spiking neurons
Authors: Fabian A. Mikulasch, Lucas Rudelt, Viola Priesemann
Institution: Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, 37077 Göttingen, Germany
Abstract: How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, recurrent connections learn to locally balance feedforward input in individual dendritic compartments and thereby can modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory–inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo and suggest that both are crucial for representation learning.
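In schematic form (our own notation for intuition, not the paper's exact derivation), this amounts to feedforward plasticity that is gated by the local dendritic potential, i.e., by how far excitation and inhibition in that compartment are from balance:

    \Delta F_{ij} \;\propto\; x_j \, u_{ij}, \qquad u_{ij} = F_{ij} x_j - \sum_k W_{ikj} r_k,

where F_{ij} is the feedforward weight from input x_j onto a dendritic compartment of neuron i, W_{ikj} is the recurrent (inhibitory) weight from coding neuron k onto that compartment, and r_k is the activity of neuron k. Once recurrent inhibition locally cancels the feedforward drive (u_{ij} → 0), plasticity of that synapse stops.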

Many neural systems have to encode high-dimensional and complex input signals in their activity. It has long been hypothesized that these encodings are highly efficient; that is, neural activity faithfully represents the input while also obeying energy and information constraints (1–3). For populations of spiking neurons, such an efficient code requires two central features: First, neural activity in the population has to be coordinated, such that no spike is fired superfluously (4); second, individual neural activity should represent reoccurring patterns in the input signal, which reflect the statistics of the sensory stimuli (2, 3). How can this coordination and these efficient representations emerge through local plasticity rules?

To coordinate neural spiking, choosing the right recurrent connections between coding neurons is crucial. In particular, recurrent connections have to ensure that neurons do not spike redundantly to encode the same input. An early result was that in unstructured networks the redundancy of spiking is minimized when excitatory and inhibitory currents cancel on average in the network (5–7), which is also termed loose global excitatory–inhibitory (E-I) balance (8). To reach this state, recurrent connections can be chosen randomly with the correct average magnitude, leading to asynchronous and irregular neural activity (5) as observed in vivo (4, 9). More recently, it became clear that recurrent connections can ensure a much more efficient encoding when E-I currents cancel not only on average, but also on fast timescales and in individual neurons (4), which is also termed tight detailed E-I balance (8). Here, recurrent connections have to be finely tuned to ensure that the network response to inputs is precisely distributed over the population. To achieve this intricate recurrent connectivity, different local plasticity rules have been proposed, which robustly converge to a tight balance and thereby ensure that networks spike efficiently in response to input signals (10, 11).

To efficiently encode high-dimensional input signals, it is additionally important that neural representations are adapted to the statistics of the input. Often, high-dimensional signals contain redundancies in the form of reoccurring spatiotemporal patterns, and coding neurons can reduce activity by representing these patterns in their activity. For example, in an efficient code of natural images, the activity of neurons should represent the presence of edges, which are ubiquitous in these images (3). Early studies of recurrent networks showed that such efficient representations can be found through Hebbian-like learning of feedforward weights (12, 13). With Hebbian learning the repeated occurrence of patterns in the input is associated with postsynaptic activity, causing neurons to become detectors of these patterns. Recurrent connections indirectly guide this learning process by forcing neurons to fire for distinct patterns in the input. Recent efforts rigorously formalized this idea for models of spiking neurons in balanced networks (11) and spiking neurons sampling from generative models (14–17). The great strength of these approaches is that the learning rules can be derived from first principles and turn out to be similar to spike-timing–dependent plasticity (STDP) curves that have been measured experimentally.
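To make the classical scheme concrete, the following is a minimal, rate-based sketch of Hebbian-like feedforward learning with fixed lateral inhibition. It is our own simplification for intuition, not a model from the paper: all names, dimensions, and parameter values are illustrative, and an Oja-style decay term stands in for the various weight-normalization mechanisms used in the literature.

    # Hebbian-like learning of feedforward weights F in a toy rate network.
    # Fixed lateral inhibition W decorrelates responses; F is updated with a
    # pairwise Hebbian rule (pre * post) plus an Oja-style decay term.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 16, 8
    F = 0.1 * rng.standard_normal((n_out, n_in))          # feedforward weights
    W = 0.2 * (np.ones((n_out, n_out)) - np.eye(n_out))   # fixed lateral inhibition
    eta = 1e-3                                             # learning rate

    for step in range(5000):
        x = rng.standard_normal(n_in)                      # stand-in for an input pattern
        r = np.zeros(n_out)
        for _ in range(30):                                # relax the recurrent dynamics
            r = 0.7 * r + 0.3 * np.maximum(0.0, F @ x - W @ r)
        # pairwise Hebbian-like update: uses only pre- and postsynaptic activity
        F += eta * (np.outer(r, x) - r[:, None] ** 2 * F)

Note that the weight update for F depends only on the activity of the pre- and postsynaptic neuron; the rest of the population enters only indirectly, through the inhibition shaping r. This locality is exactly what breaks down when responses remain correlated, as discussed next.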
However, to enable the learning of efficient representations, these models have strict requirements on network dynamics. Most crucially, recurrent inhibition has to ensure that neural responses are sufficiently decorrelated. In the neural sampling approaches, learning therefore relies on strong winner-take-all dynamics (14–17). In the framework of balanced networks, transmission of inhibition has to be nearly instantaneous to ensure strong decorrelation (18). These requirements are likely not met in realistic situations, where neural activity often shows positive correlations (19–22).

We here derive a learning scheme that overcomes these limitations. First, we show that when neural activity is correlated, learning of feedforward connections has to incorporate nonlocal information about the activity of other neurons. Second, we show that recurrent connections can provide this nonlocal information by learning to locally balance specific feedforward inputs on the dendrites. In simulations of spiking neural networks we demonstrate the benefits of learning with dendritic balance over Hebbian-like learning for the efficient encoding of high-dimensional signals. Hence, we extend the idea that tightly balancing inhibition provides information about the population code locally and show that it can be used not only to distribute neural responses over a population, but also for an improved learning of feedforward weights.
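The following sketch illustrates the dendritic-balance idea in the same rate-based toy setting as above. It is again our own simplification (the paper derives the rules for spiking neurons): each coding neuron is given one dendritic "compartment" per feedforward input, recurrent weights W learn to cancel the feedforward drive in each compartment, and the residual compartment potential u gates plasticity of the corresponding feedforward weight, so F is only updated for the part of the input that the rest of the population does not already explain.

    # Schematic, rate-based sketch of learning with local dendritic balance.
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out = 16, 8
    F = 0.1 * rng.standard_normal((n_out, n_in))   # feedforward weights
    W = np.zeros((n_out, n_out, n_in))             # W[i, k, j]: neuron k -> compartment (i, j)
    eta_F, eta_W = 1e-3, 1e-2

    for step in range(5000):
        x = rng.standard_normal(n_in)
        r = np.zeros(n_out)
        for _ in range(30):                        # relax the recurrent dynamics
            # compartment potentials: local feedforward drive minus recurrent inhibition
            u = F * x[None, :] - np.einsum('ikj,k->ij', W, r)
            # somatic drive = summed compartment potentials
            r = 0.7 * r + 0.3 * np.maximum(0.0, u.sum(axis=1))
        # recurrent plasticity: drive each compartment potential toward zero (local E-I balance)
        W += eta_W * np.einsum('ij,k->ikj', u, r)
        # feedforward plasticity: voltage-dependent, gated by the local balance error u
        F += eta_F * u * x[None, :]

Compared with the pairwise Hebbian rule above, the feedforward update here depends on u, which carries information about what the rest of the population already encodes; this is the nonlocal information that recurrent connections make available locally by balancing specific inputs on the dendrite.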
Keywords: efficient coding; synaptic plasticity; balanced state; neural sampling; dendritic computation