Nonlinear convergence boosts information coding in circuits with parallel outputs
Authors: Gabrielle J. Gutierrez, Fred Rieke, and Eric T. Shea-Brown
Affiliations: Department of Applied Mathematics, University of Washington, Seattle, WA 98195; and Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
Abstract: Neural circuits are structured with layers of converging and diverging connectivity, and with selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or the connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions on coding efficiency. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions among these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered separately.

Keywords: neural computation | efficient coding | retina | sensory processing | information theory

Sensory systems, by necessity, compress a wealth of information gathered by receptors into the smaller amount of information needed to guide behavior. In many systems, this compression occurs via common circuit motifs: convergence of multiple inputs onto a single neuron, and divergence of inputs to multiple parallel pathways (1). Selective nonlinear circuit elements transform inputs, passing some parts of the signal while discarding others. Here, we investigate how these motifs work together to determine how much information is retained in compressive neural circuits.

These issues are highly relevant to signaling in the retina, because the bottleneck produced by the optic nerve ensures that considerable feed-forward convergence occurs before signals are transmitted to central targets. This convergence reduces the dimension of signals as they traverse the retina: in total, signals from 100 million photoreceptors modulate the output of 1 million ganglion cells (2, 3). If the dynamic range of the ganglion cell is not sufficiently expanded beyond that of the photoreceptors and bipolar cells, this convergent architecture could compress input signals in a way that loses information or stimulus resolution, resulting in ambiguously encoded stimuli. The population of ganglion cells is estimated to collectively transmit far less information (3–5) than the amount available to the photoreceptors (2). However, little is known about how neuron properties interact with a convergent circuit structure to drive, or to mitigate, a loss of information.

Receptive field subunits are a key feature of the retina's convergent circuitry. Multiple bipolar cells converge onto a single ganglion cell, forming functional subunits within the receptive field of the ganglion cell (6, 7). Ganglion-cell responses can often be modeled as a linear sum of a population of nonlinear subunits. Such subunit models have been used to investigate center-surround interactions (8–12) and to explain the nonlinear integration of signals across space (7, 10, 13–15).

While it is clear that subunits have the potential to compress inputs, it is not known whether this architecture subserves an efficient code in which inputs are encoded with minimal ambiguity. For decades, information theory (16, 17) has been used to quantify the amount of information that neurons encode (3, 5, 18–27).
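To make these quantities concrete, here is a minimal toy sketch in Python/NumPy. It is not the authors' model: the stimulus levels, subunit count, and sample size are all assumptions made for illustration. Two subunits converge onto one output cell; because this toy circuit is deterministic, the mutual information between stimulus and response equals the entropy of the response distribution, which is estimated empirically.

```python
# Minimal toy sketch, not the paper's model: two subunits converge onto one
# output cell, with either linear (identity) or rectifying (ReLU) subunits.
# The circuit is deterministic, so I(S; R) = H(R): we compare response entropies.
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(samples):
    """Shannon entropy (bits) of the empirical distribution of rows of `samples`."""
    _, counts = np.unique(samples, return_counts=True, axis=0)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Assumed stimulus ensemble: each subunit independently takes one of 7 levels.
levels = np.arange(-3, 4)
stim = rng.choice(levels, size=(200_000, 2))          # shape: (samples, subunits)

def converge(stim, nonlinear):
    sub = np.maximum(stim, 0) if nonlinear else stim  # subunit nonlinearity
    return sub.sum(axis=1)                            # convergence onto one cell

print("H(S)         =", round(entropy_bits(stim), 2), "bits")           # ~5.6
print("H(R), linear =", round(entropy_bits(converge(stim, False)), 2))  # ~3.5
print("H(R), ReLU   =", round(entropy_bits(converge(stim, True)), 2))   # ~2.4
```

In this toy, rectification collapses all non-positive subunit inputs to zero, so the convergent sum carries less entropy than its linear counterpart, consistent with the point above that convergence and nonlinearities are each individually lossy.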
The efficient-coding hypothesis proposes that the distribution of neural responses should be the one that is maximally informative about the inputs (21, 22, 28). Take the example of a stimulus variable, such as luminance, whose brightness level is encoded by the number of spikes in the response. An input/output mapping in which most of the possible luminance levels are encoded by the same response (i.e., the same number of spikes or firing rate) makes many bright and dim inputs ambiguous and therefore conveys very little information.

Information can be maximized at the level of a single neuron by distributing the responses such that they optimally disambiguate inputs (23). A nonlinear response function optimized for the distribution of inputs can make the most of the neuron's dynamic range. Adaptive rescaling of the response nonlinearity to changes in the input statistics can maintain maximal information in the output (29–31). Alternatively, information can be maximized by optimizing the connection weights in the circuit, perhaps in combination with optimizing the nonlinearities (19, 32, 33). These past works, however, have not made explicit how the set of motifs found in most neural circuits, and in the retina in particular, combine to collectively influence coding efficiency.

Our contribution here is to dissect a canonical neural circuit in silico and to investigate how much each of its components contributes to, or detracts from, the information that the circuit encodes about stimuli. These circuit components, considered separately, each have the potential to discard information. We begin with the simplest motif, convergence of inputs onto single neurons, and analyze the role of rectifying nonlinear subunits applied to each of these inputs. We then add a diverging motif that splits the response into two opposing pathways. We find that rectifying nonlinear subunits mitigate the loss of information from convergence when compared with circuits with linear subunits. This is despite the fact that the rectifying nonlinear subunits, considered in isolation, lead to a loss of information. Moreover, the ability of nonlinear subunits to retain information stems from a reformatting of the inputs: relative to their linear counterparts, nonlinear subunits encode distinct stimulus features. Our study contributes to a better understanding of how biologically inspired circuit structures and neuron properties combine to shape coding efficiency in neural circuits.
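As a companion to the earlier sketch, the toy below (again with assumed stimulus statistics, not the paper's actual model or information estimates) adds the diverging motif: the subunits are read out by two opposing pathways, an ON cell and an OFF cell.

```python
# Toy diverging motif (companion to the sketch above; self-contained): the
# circuit now diverges into two opposing pathways before being read out.
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(samples):
    """Shannon entropy (bits) of the empirical distribution of rows of `samples`."""
    _, counts = np.unique(samples, return_counts=True, axis=0)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

stim = rng.choice(np.arange(-3, 4), size=(200_000, 2))  # same assumed ensemble

def on_off(stim, nonlinear):
    """Joint (ON, OFF) response of the divergent circuit."""
    if nonlinear:
        on = np.maximum(stim, 0).sum(axis=1)    # ON pathway: rectified subunits
        off = np.maximum(-stim, 0).sum(axis=1)  # OFF pathway: sign-flipped, rectified
    else:
        on = stim.sum(axis=1)                   # linear pathways are redundant:
        off = -on                               # OFF is just the negative of ON
    return np.stack([on, off], axis=1)

print("H(R), linear ON/OFF =", round(entropy_bits(on_off(stim, False)), 2))  # ~3.5
print("H(R), ReLU   ON/OFF =", round(entropy_bits(on_off(stim, True)), 2))   # ~4.4
```

In this toy, rectification plus divergence recovers the sign information that a single rectified pathway discards, so the joint ON/OFF response with nonlinear subunits carries more entropy than the linear circuit, whose second pathway is fully redundant. This mirrors the qualitative finding described above, though the paper's actual circuits, stimuli, and information estimates differ.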