Similar Articles
1.
Our perception of the environment relies on the efficient propagation of neural signals across cortical networks. Over the course of a day, neural responses fluctuate dramatically as brain state changes, potentially influencing how electrical signals propagate across neural circuits. Despite the importance of this issue, how patterns of spiking activity propagate within neuronal circuits in different brain states remains unknown. Here, we used multielectrode laminar arrays to reveal that brain state strongly modulates the propagation of neural activity across the layers of early visual cortex (V1). We optogenetically induced synchronized state transitions within a group of neurons and examined how far electrical signals travel during wakefulness and rest. Although optogenetic stimulation elicits stronger neural responses during wakefulness relative to rest, signals propagate only weakly across the cortical column during wakefulness, and the extent of spread is inversely related to arousal level. In contrast, the light-induced population activity vigorously propagates throughout the entire cortical column during rest, even when neurons are in a desynchronized wake-like state prior to light stimulation. Mechanistically, the influence of global brain state on the propagation of spiking activity across laminar circuits can be explained by state-dependent changes in the coupling between neurons. Our results impose constraints on the conclusions of causal manipulation studies attempting to influence neural function and behavior, as well as on previous computational models of perception assuming robust signal propagation across cortical layers and areas.

The extent and accuracy with which neural signals propagate within and across neural circuits play a critical role in shaping behavior and cognition. One key variable that could potentially influence signal propagation across neural networks is global brain state (1–4). Indeed, during the time course of a day, the state of the brain undergoes dramatic changes from wakefulness to drowsiness and sleep (3, 5–7). Multiple lines of evidence in rodents and monkeys have shown that distinct brain states are associated with specific changes in neural responses (2–4, 8). Neurons strongly respond during wakefulness when animals are in an aroused state, and responses diminish during drowsiness and sleep (6, 9–11). However, despite significant progress in our understanding of state-dependent sensory coding across neural circuits (2–5, 8, 12, 13), the influence of brain state on the propagation of electrical signals remains unknown.

The cortical column constitutes an ideal locus to examine the propagation of neural signals. For over a century, neuroscientists have observed remarkable regularity in the cortical microarchitecture: Clusters of cells are synaptically connected to form small columns orthogonal to the cortical surface (14, 15). These microcolumns constitute the elementary functional units of cortical circuitry (16) and consist of distinct layers that each contain a characteristic distribution of cell types and connections with other layers (15, 17–19). Understanding how neural signals propagate across laminar circuits would greatly contribute to deciphering the functional principles of cortical column operation.

In principle, the strong intracortical connections within and between cortical layers (17–20) imply that signals emitted by individual neurons would vigorously propagate across the entire microcolumn.
Indeed, during wakefulness, the input granular (G) cortical layers relay stimulus information to the output supragranular (SG) layers, which send feedforward projections to downstream areas (18, 20). Furthermore, neurons in SG layers project back to infragranular (IG) layers, which in turn project to granular layers; hence, signals are circulated across the entire microcolumn (17, 18). Thus, from a theoretical standpoint, it can be argued that electrical signals are robustly transmitted during wakefulness across cortical layers to contribute to perception and cognition. In reality, how robustly signals travel across layers in different states of wakefulness, and especially when the state of the brain undergoes dramatic changes, such as during drowsiness and sleep, remains unknown.

Previous studies were unable to address these issues due to inherent restrictions of techniques such as in vitro slice recordings [e.g., (21)] and in vivo recordings during anesthesia (6, 10, 22) that severely limit the behavioral repertoire and hence the interpretation of cortical dynamics across laminar circuits. Even studies focused on in vivo laminar recordings did not investigate state-dependent signal propagation across cortical layers (23–25). Here, we examined the propagation of neural signals across the cortical column in different brain states using multielectrode laminar arrays. We discovered that global brain state strongly modulates the propagation of neural activity across the layers of early visual cortex (area V1). We optogenetically activated specific cell populations during wakefulness and found that even though the elicited neural signals were stronger than those during rest, they propagated to other layers only weakly. Further, arousal was inversely related to the extent of signal spread.
In contrast, the light-induced activity of the same neural population robustly propagated throughout the entire cortical column during rest, even when neurons were in a desynchronized wake-like state prior to light stimulation. The differential propagation of electrical signals in different brain states can be explained by state-dependent changes in the degree of coupling between individual neurons and their local population. Our results impose constraints on the conclusions of causal manipulation studies attempting to influence neural function and behavior, as well as on previous computational models of perception assuming robust signal propagation across cortical layers and areas.
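The state-dependent "coupling between individual neurons and their local population" invoked above is often quantified as population coupling: the correlation between one neuron's binned spike counts and the summed counts of the rest of the recorded population. A minimal pure-Python sketch follows; the function name and data layout are illustrative assumptions, not taken from the study:

```python
def population_coupling(spike_counts, i):
    """Pearson correlation between neuron i's binned spike counts and the
    summed counts of all other neurons (its 'population rate').
    spike_counts: list of per-neuron lists, one count per time bin."""
    n_neurons = len(spike_counts)
    n_bins = len(spike_counts[0])
    # population rate excluding neuron i
    pop = [sum(spike_counts[j][t] for j in range(n_neurons) if j != i)
           for t in range(n_bins)]
    x, y = spike_counts[i], pop
    mx, my = sum(x) / n_bins, sum(y) / n_bins
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

On this measure, a neuron embedded in a synchronized column tracks its neighbors (values near 1), while a desynchronized, wake-like population yields low or negative values, consistent with the weaker propagation reported during wakefulness.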

2.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in postexposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.

A key property of cortical circuits is their capacity to reorganize structurally and functionally with experience (1–3). In primary visual cortex, adaptive reorganization is well documented during development (4–7), and growing evidence indicates that sensory responses continue to adapt in adulthood (8–13). The continual refinement of sensory neurons based on the statistics of the sensory environment is at odds with the traditional view of the primary visual cortex as a collection of static filters or feature detectors, passively converting sensory input into a sparse code for further feedforward processing across the visual hierarchy (14). In fact, considerable evidence suggests that primary visual cortex does not statically encode the environment but has rich spatial and temporal dynamics. For example, sensory-evoked activity propagates through the local network in wavelike patterns (15–17), displays a high degree of temporal structure (18), and can persist long after the cessation of stimulation (19–22). These rich dynamic properties exhibited by early visual neurons suggest an active involvement of primary visual cortex populations in the coordinated representation of visual stimuli. Most strikingly, repetitive visual exposure can alter the strength and selectivity of neuronal responses in the primary visual cortex, leaving a lasting mark on postexposure activity in both awake and anesthetized animals (23, 24). Yet, it remains unclear how such changes affect the joint encoding of stimuli across neuronal populations and, ultimately, the information transmitted to downstream areas.

Given that neurons in primary visual cortex adapt their responses as a function of repeated exposure, one compelling hypothesis is that exposure-driven changes are coordinated across neuronal populations to collectively improve the representation and maintenance of recently experienced stimuli.
Here, we test this hypothesis by investigating the impact of visual exposure on the persistent population response of neurons in cat area 17 to brief, structured stimulation. We employ a large set of abstract stimuli (letters of the Latin alphabet and Arabic numerals) that provide a rich variety of spatial conjunctions across low-level features and are well suited to capture aspects of distributed coding. We find five main signatures of functional reorganization. First, visual exposure optimizes stimulus maintenance in primary visual cortex by increasing the magnitude and decreasing the variability of neuronal responses that persist after stimulus offset. Second, these changes are associated with neural recruitment, a broadening of the dynamic range neurons employ to respond to stimuli, and an enhancement of stimulus-specific tiling of neuronal responses. Third, refinement of individual responses results in increased stimulus encoding at the population level; i.e., a simple hypothetical downstream decoder increases its accuracy in identifying recent stimuli from brief snippets of population activity. Fourth, the exposure-driven enhancements in stimulus persistence maintain the representational structure of stimuli, resulting in improved stimulus reconstruction. Fifth, exposure strengthens patterns in postexposure spontaneous activity. Finally, modeling demonstrates that exposure-driven enhancements in stimulus persistence can arise from recurrent network interactions via local, unsupervised plasticity mechanisms.
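The "low-dimensional projection methods and simple classifiers" used to demonstrate cluster segregation can be sketched generically: project a trials-by-neurons response matrix onto a few principal components, then decode stimulus identity with a nearest-centroid rule. This is a stand-in pipeline under assumed data shapes, not the authors' exact analysis:

```python
import numpy as np

def pca_project(X, k=2):
    """Project trials (rows of X, one column per neuron) onto the
    top-k principal components of the trial-by-trial covariance."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_centroid_accuracy(Z, labels):
    """Toy decoder: classify each projected trial by the nearest
    class centroid and return the fraction classified correctly."""
    classes = sorted(set(labels))
    centroids = {c: Z[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
                 for c in classes}
    pred = [min(classes, key=lambda c: np.linalg.norm(z - centroids[c]))
            for z in Z]
    return sum(p == l for p, l in zip(pred, labels)) / len(labels)
```

Well-segregated stimulus-specific clusters, as reported after exposure, would show up here as high nearest-centroid accuracy even in a two-dimensional projection.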

3.
Sharp-wave ripples (SWRs) are highly synchronous neuronal activity events. They have been observed predominantly in the hippocampus during offline states such as pauses in exploration, slow-wave sleep, and quiescent wakefulness. SWRs have been linked to memory consolidation, spatial navigation, and spatial decision-making. Recently, SWRs have been reported during visual search, a form of remote spatial exploration, in macaque hippocampus. However, the association between SWRs and multiple forms of awake, conscious, goal-directed behavior is unknown. We report that ripple activity occurs in macaque visual areas V1 and V4 during focused spatial attention. The occurrence of ripples is modulated by stimulus characteristics and by the size of the attentional focus, and is increased by attention toward the receptive field. When attention was cued to the receptive field, ripples reduced the monkey’s reaction time in detecting behaviorally relevant events. These results show that ripple activity is not limited to the hippocampus during offline states; rather, it also occurs in the neocortex during active attentive states and vigilance behaviors.

Hippocampal sharp-wave ripples (SWR, ripples) are large-amplitude deflections (sharp waves) of the local field potential (LFP) in the hippocampus of rodents, humans, and nonhuman primates, associated with a brief fast oscillatory pattern (ripple). Ripple oscillations vary in frequency from 140 to 200 Hz in rodents and 80 to 180 Hz in nonhuman primates and humans (1–6). SWRs occur at ~0.5 Hz in the hippocampus, at 0.1 to 0.5 Hz in the posterior parietal, retrosplenial, and cingulate cortices, and at 0.05 Hz in somatosensory, motor, and visual cortices during nonrapid eye movement (NREM) sleep (7). During hippocampal SWRs, 15% of hippocampal pyramidal cells discharge synchronously, which triggers activation in cortical areas but suppression in midbrain and brainstem regions (2, 3).

Ripples support memory consolidation by transferring information acquired during waking to cortical networks during sleep and quiescence (7–9). Consolidation occurs through temporal replay of event-related activity in the hippocampus during ripples (10–15). SWRs are also predictive of future trajectory and performance during spatial navigation tasks (16–18). Finally, they are implicated in the correct temporal sequencing of place cell activity preceding novel spatial experiences (preplay) (19). Memory consolidation in the visual cortex requires NREM sleep spindle activity (20), which is coordinated with hippocampal ripples (21), increasing hippocampal–neocortical coupling (22) and associated information transfer.

In rodents, hippocampal SWRs are pronounced during offline states (23, 24), but they occur during awake states in humans (25), as well as in nonhuman primates during visual search and goal-directed visual exploration, termed exploratory SWRs (26, 27). In monkeys, hippocampal SWR occurrence increases when the subject’s gaze is focused near a target object during search, or when patients observe familiar pictures of scenes or faces (5, 26, 28).
Here, ripple rates also increased during free recall, along with high-frequency band activation around the time of the ripple in the visual cortex, suggesting a role of SWRs in activating the visual cortex during episodic and semantic memory retrieval (5, 19, 26, 28–31). Memory and attention are intertwined, and working memory and attention affect neural activity in striate and extrastriate cortex similarly (32, 33). Given that ripples are strongly linked to memory, it is tempting to speculate that they are also linked to attention.

Attention affects components of neural circuits that, in the hippocampus, drive ripples. Sharp waves are generated by excitatory afferents from CA3 to CA1 (34, 35), but ripples are evoked by parvalbumin-positive (PV+) interneurons inside CA1 (36). Optogenetic activation of PV+ cells induces hippocampal ripples (37), and PV+ cells can be active during each ripple cycle (38, 39). Narrow-spiking cells, often associated with PV+ cells (40), are more affected by spatial attention than broad-spiking cells (41). Hence, we hypothesize that ripples increase in the visual cortex when animals are cued to attend to the receptive field (RF; referred to as cue RF conditions), relative to when animals are cued to attend to the opposite hemifield (cue away conditions), as PV+ drive is likely increased.

Attention might also increase ripple rates through cholinergic mechanisms. Ripples during offline states coincide with reduced septal acetylcholine (ACh) release into the hippocampus, and cholinergic suppression of hippocampal SWRs impairs spatial working memory (42). ACh plays an important role in working memory and spatial attention, and hence SWRs should decrease during cue RF conditions if ACh levels are increased. However, cholinergic receptor distribution differs between the hippocampus and the primate visual cortex.
In the human hippocampus, M1 receptors are predominantly expressed in excitatory pyramidal cells (43), while in the (primate) visual cortex they are predominantly expressed on inhibitory interneurons, especially on PV+ cells (44). Hence, increased ACh levels with attention (45) might trigger higher ripple rates in the visual cortex. SWRs have been suggested to be an alternative working memory system to the proposed systems of “delay activity” or “neuronal chaining” and theta activity (46, 47). If SWRs help refocus by “reminding” the system about current task demands, then we hypothesize that ripples increase when animals are required to attend to the RF under cue RF conditions.

In rodents, somatostatin-positive (SOM+) interneurons are suppressed during ripple episodes (39), and SOM+ cell activity is linked to surround suppression (48). Whether similar mechanisms occur in the primate neocortex, where SOM+ is not a good marker for interneurons (49, 50), is unknown. However, primate calbindin-positive (CB+) cells may be homologous to rodent SOM+ cells and may serve a similar function. If so, spatial attention, which causes surround “exclusion” (51), might do so through reduced CB+ activity (51, 52), and thereby increase ripple rates. The link between increased SOM+ activity and reduced ripple rates (in rodents) further suggests that larger stimuli, which induce surround suppression and higher SOM+ (CB+) cell activity, should result in reduced ripple rates.

To examine these predictions, we recorded LFPs and spiking activity in visual areas V1 and V4 of two male macaque monkeys performing a cued spatial attention task. Ripple activity was detected in both regions: ripples occurred more often when monkeys were cued to deploy attention to the RF of the recorded neurons, smaller stimuli resulted in higher ripple rates than larger stimuli, and ripple occurrence was predictive of better behavioral performance.
Thus, ripples occur in cortical visual areas and are involved in cognitive functions beyond memory consolidation and retrieval.
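Ripple detection from the LFP is typically done by band-pass filtering in the ripple band (80 to 180 Hz in primates, per the text above) and thresholding the smoothed ripple-band power. A rough sketch under those assumptions; the FFT-mask filter and the mean-plus-SD threshold are illustrative simplifications, not the study's actual detector:

```python
import numpy as np

def detect_ripples(lfp, fs, band=(80.0, 180.0), n_sd=2.0):
    """Flag samples whose ripple-band power exceeds mean + n_sd * SD.
    lfp: 1-D signal, fs: sampling rate in Hz. Returns a boolean array."""
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    spec = np.fft.rfft(lfp)
    # crude band-pass: zero everything outside the ripple band
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
    ripple = np.fft.irfft(spec, n=len(lfp))
    # smooth the instantaneous power over ~10 ms
    w = max(1, int(0.01 * fs))
    power = np.convolve(ripple ** 2, np.ones(w) / w, mode="same")
    thresh = power.mean() + n_sd * power.std()
    return power > thresh
```

Real detectors additionally impose minimum event durations and merge nearby threshold crossings; this sketch only captures the band-power idea.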

4.
To explore how neural circuits represent novel versus familiar inputs, we presented mice with repeated sets of images with novel images sparsely substituted. Using two-photon calcium imaging to record from layer 2/3 neurons in the mouse primary visual cortex, we found that novel images evoked excess activity in the majority of neurons. This novelty response rapidly emerged, arising with a time constant of 2.6 ± 0.9 s. When a new image set was repeatedly presented, a majority of neurons had similarly elevated activity for the first few presentations, which decayed to steady state with a time constant of 1.4 ± 0.4 s. When we increased the number of images in the set, the novelty response’s amplitude decreased, defining a capacity to store ∼15 familiar images under our conditions. These results could be explained quantitatively using an adaptive subunit model in which presynaptic neurons have individual tuning and gain control. This result shows that local neural circuits can create different representations for novel versus familiar inputs using generic, widely available mechanisms.

Because the behavioral consequences of a sensory stimulus can depend on whether that stimulus is novel or familiar, sensory systems can benefit from employing different representations of novel versus familiar stimuli. At the level of human psychophysics, stimulus novelty can enhance salience and capture attention (1–3), while familiarity can speed visual search (4). Novelty also affects aversive conditioning (5–7) and fear conditioning (8, 9). In human brain imaging, novel stimuli have been shown to generate the mismatch negativity (MMN) (10, 11), while repeated stimuli lead to repetition suppression (12). Explicit representation of novelty has been shown at higher stages of the sensory hierarchy, such as in the hippocampus (13) and inferotemporal cortex (14–16), and has been interpreted as a possible substrate of recognition memory (17). Lower in sensory hierarchies, the representation of novelty can be enhanced by stimulus-specific adaptation (SSA) (18–21) as well as by gain control (22, 23). Novelty signals are also prominently present in midbrain dopamine neurons (24).

Explicit representation of stimulus novelty is also related to theories of predictive coding, in which neural circuits carry out computations that emphasize novel or surprising information. Theories of predictive coding have had a long history, starting with ideas about how the receptive field structure of retinal ganglion cells more efficiently encodes natural visual scenes by removing redundant data (25–28) and including the idea that active adaptation may aid in this process (18). Theories of predictive coding in the neocortex have typically focused on the idea that feedback from higher cortical areas encodes a prediction about lower-level sensory data (29) that is subtracted from the lower-level representation, so that the signals traveling up the cortical hierarchy represent surprise or novelty (30, 31).
However, a recent study failed to find these signatures of predictive coding (32).

Here, we investigate novelty processing in the mouse primary visual cortex. We repeatedly presented a set of images, each composed of a random superposition of Gabor functions, and then occasionally presented novel images drawn from the same ensemble. Using two-photon imaging of the Ca2+ sensor GCaMP6f to measure neural activity in layer 2/3 of awake, head-fixed mice (33), we found that the majority of neurons exhibited excess activity in response to a novel image. This distinction between novel and familiar images was quickly reached, emerging with a time constant of 2.6 ± 0.9 s. Similarly, when we began presenting a new set of images, a majority of the neurons exhibited elevated firing that relaxed to a steady state with a time constant of 1.4 ± 0.4 s. When we presented novel images within larger image sets, the amplitude of the novelty response decreased, defining a capacity of the system to encode ∼15 familiar images. All of these findings could be explained qualitatively using an adaptive subunit model in which neurons presynaptic to a recorded neuron have both individual tuning to visual stimuli and adaptive gain control.
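The adaptive subunit idea can be caricatured in a few lines: each stimulus identity drives its own subunit, whose gain is depleted by use and recovers between presentations. The toy model below is a sketch, not the paper's fitted model; all parameter values, names, and the choice of tau are hypothetical. It reproduces the qualitative signature that novel images evoke full responses while familiar images evoke progressively weaker ones:

```python
import math

def adaptive_subunit_response(stimuli, tau_adapt=1.4, dt=0.3, depletion=0.5):
    """Toy adaptive-gain model (hypothetical units). Each stimulus identity
    drives its own subunit; presenting it depletes the subunit's gain, and
    the gain recovers toward 1 with time constant tau_adapt between uses.
    Returns one response value (the current gain) per presentation."""
    gain, last_seen, responses = {}, {}, []
    t = 0.0
    for s in stimuli:
        g = gain.get(s, 1.0)
        if s in last_seen:
            # exponential recovery toward full gain since the last use
            g = 1.0 - (1.0 - g) * math.exp(-(t - last_seen[s]) / tau_adapt)
        responses.append(g)          # response scales with the current gain
        gain[s] = g * depletion      # presentation depletes the gain
        last_seen[s] = t
        t += dt
    return responses
```

In this caricature, a novel image inserted into a familiar stream evokes the full response simply because its subunit has never been depleted, with no explicit novelty detector anywhere in the circuit.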

5.
A feature of early postnatal neocortical development is a transient peak in signaling via metabotropic glutamate receptor 5 (mGluR5). In visual cortex, this change coincides with increased sensitivity of excitatory synapses to monocular deprivation (MD). However, loss of visual responsiveness after MD occurs via mechanisms revealed by the study of long-term depression (LTD) of synaptic transmission, which in layer 4 is induced by acute activation of NMDA receptors (NMDARs) rather than mGluR5. Here we report that chronic postnatal down-regulation of mGluR5 signaling produces coordinated impairments in both NMDAR-dependent LTD in vitro and ocular dominance plasticity in vivo. The data suggest that ongoing mGluR5 signaling during a critical period of postnatal development establishes the biochemical conditions that are permissive for activity-dependent sculpting of excitatory synapses via the mechanism of NMDAR-dependent LTD.

Temporary monocular deprivation (MD) sets in motion synaptic changes in visual cortex that result in impaired vision through the deprived eye. The primary cause of visual impairment is depression of excitatory thalamocortical synaptic transmission in layer 4 of visual cortex (1–3). The study of long-term depression (LTD) of synapses, elicited in vitro by electrical or chemical stimulation, has revealed many of the mechanisms involved in deprived-eye depression (4). In slices of visual cortex, LTD in layer 4 is induced by NMDA receptor (NMDAR) activation and expressed by posttranslational modification and internalization of AMPA receptors (AMPARs) (5, 6). MD induces identical NMDAR-dependent changes in AMPARs, and synaptic depression induced by deprivation in vivo occludes LTD in visual cortex ex vivo (6–8).
Manipulations of NMDARs and AMPAR trafficking that interfere with LTD also prevent the effects of MD (7, 9–11).

Although NMDAR-dependent LTD is widely expressed in the brain (12, 13), it is now understood that different circuits use different mechanisms for long-term homosynaptic depression (14). For example, in the CA1 region of hippocampus, synaptic activation of either NMDARs or metabotropic glutamate receptor 5 (mGluR5) induces LTD. In both cases, depression is expressed postsynaptically as a reduction in AMPARs, but these forms of LTD are not mutually occluding and have distinct signaling requirements (15). A defining feature of mGluR5-dependent postsynaptic LTD in CA1 is a requirement for the immediate translation of synaptic mRNAs (16). In visual cortex, there is evidence that induction of LTD in layers 2–4 requires NMDAR activation, whereas induction of LTD in layer 6 requires activation of mGluR5 (17, 18).

The hypothesis that mGluRs, in addition to NMDARs, play a key role in visual cortical plasticity can be traced back more than 25 y to observations that glutamate-stimulated phosphoinositide turnover, mediated in visual cortex by mGluR5 coupled to phospholipase C, is elevated during the postnatal period of heightened sensitivity to MD (19). Early attempts to test this hypothesis were inconclusive owing to the use of weak and nonselective orthosteric compounds (20–22); however, subsequent experiments did confirm that NMDAR-dependent LTD occurs normally in layers 2/3 of visual cortex in Grm5 knockout mice (23).

The idea that mGluR5 is critically involved in visual cortical plasticity in vivo was rekindled with the finding that deprived-eye depression fails to occur in layer 4 of Grm5+/− mutant mice (24). This finding was unexpected because, as reviewed above, a considerable body of evidence has implicated the mechanism of NMDAR-dependent LTD in deprived-eye depression.
In the present study, we reexamined the role of mGluR5 in LTD and ocular dominance plasticity in layer 4, using the Grm5+/− mouse and a highly specific negative allosteric modulator, 2-chloro-4-((2,5-dimethyl-1-(4-(trifluoromethoxy)phenyl)-1H-imidazol-4-yl)ethynyl)pyridine (CTEP), that has proven suitable for chronic inhibition of mGluR5 (25, 26). Our data show that NMDAR-dependent LTD and deprived-eye depression in layer 4 require mGluR5 signaling during postnatal development.

6.
Subplate neurons are early-born cortical neurons that transiently form neural circuits during perinatal development and guide cortical maturation. Thereafter, most subplate neurons undergo cell death, while some survive and establish new target areas for synaptic connections. However, the functional properties of the surviving subplate neurons remain largely unknown. This study aimed to characterize the visual responses and experience-dependent functional plasticity of layer 6b (L6b) neurons, the remnants of subplate neurons, in the primary visual cortex (V1). Two-photon Ca2+ imaging was performed in V1 of awake juvenile mice. L6b neurons showed broader tunings for orientation, direction, and spatial frequency than did layer 2/3 (L2/3) and L6a neurons. In addition, L6b neurons showed lower matching of preferred orientation between the left and right eyes compared with other layers. Post hoc 3D immunohistochemistry confirmed that the majority of recorded L6b neurons expressed connective tissue growth factor (CTGF), a subplate neuron marker. Moreover, chronic two-photon imaging showed that L6b neurons exhibited ocular dominance (OD) plasticity induced by monocular deprivation during the critical period. The OD shift toward the open eye depended on how strongly a neuron responded, before deprivation, to stimulation of the eye to be deprived. There were no significant differences in visual response selectivity prior to monocular deprivation between the OD-changed and OD-unchanged neuron groups, suggesting that OD plasticity can occur in L6b neurons regardless of their response features. In conclusion, our results provide strong evidence that surviving subplate neurons exhibit sensory responses and experience-dependent plasticity at a relatively late stage of cortical development.

The mammalian cerebral cortex consists of six layers, with distinct roles in information processing (1, 2). At the bottom of the neocortex, on the boundary between the gray matter and white matter, there is a thin sheet of neurons called layer 6b (L6b) (3). Layer 6b neurons are thought to be remnants of subplate neurons based on their location and cell-type marker expression (4). During prenatal and early postnatal periods, subplate neurons form transient neuronal circuits that play key roles in cortical maturation (5–7). In the embryonic cortex, subplate neurons form short-lived synapses with early immature neurons to regulate radial migration (8). During perinatal development, subplate neurons transiently receive inputs from ingrowing thalamic axons and innervate layer 4 (L4) to guide thalamic inputs to their eventual target, L4 (5, 6). Thus, the circuits formed by subplate neurons at the perinatal developmental stage are essential for establishing basic neuronal circuits before experience-dependent refinements begin (5–7). Subsequently, subplate neurons largely disappear due to programmed cell death, but some survive and reside in L6b (5, 6). In the adult cortex, L6b neurons form neuronal circuits with local and long-distance neurons, which are different from those formed during early development (9–12). Therefore, surviving subplate neurons may acquire a role in information processing after remodeling of neuronal connections. A recent study using three-photon Ca2+ imaging demonstrated that L6b neurons show visual responses with broad orientation/direction tuning in the adult mouse primary visual cortex (V1) (13). However, a direct comparison of L6b response properties with those of neurons in other V1 layers is lacking (14–20). Moreover, L6b neurons have diverse morphology and molecular expression (21–24). Neurons born during subplate neurogenesis show different expression patterns of subplate markers in postnatal L6b (4).
However, the response properties of each subtype of L6b neurons remain unknown.

The sensory responsiveness of cortical neurons is considerably refined by sensory experience relatively late in development, during what is referred to as the critical period (25, 26). Previous studies have demonstrated that sensory activity before the onset of the critical period affects the arrangement of subplate neuron neurites in the barrel cortex and local subplate circuits in the auditory cortex (27, 28). However, there is no direct evidence that the sensory responses of surviving subplate neurons are modified by sensory experience during the critical period. If experience-dependent plasticity occurs in subplate neuron responses, it would contribute to the experience-dependent development of sensory functions and possibly to functions in the mature cortex. Ocular dominance (OD) plasticity in V1 is a canonical model used to examine experience-dependent refinement of sensory responses (25, 26, 29, 30). If one eye is occluded for several days during the critical period, neurons in V1 lose their responsiveness to the deprived eye. OD plasticity is robustly preserved across species and cell types. Therefore, OD plasticity is suitable for evaluating experience-dependent plasticity in L6b neurons.

This study aimed to characterize the visual responses and OD plasticity of L6b neurons in V1. Toward this goal, two-photon Ca2+ imaging was performed in awake juvenile mice, followed by 3D immunohistochemistry with a subplate neuronal marker, connective tissue growth factor (CTGF) (4, 31). L6b neurons showed broader tuning to visual stimuli and lower binocular matching of orientation preference than did layer 2/3 (L2/3) and L6a neurons. Chronic two-photon imaging revealed significant OD plasticity in individual L6b neurons during the critical period.
Our results provide strong evidence that L6b neurons, presumed to be subplate neuron remnants, exhibit sensory responses and experience-dependent functional plasticity at a relatively late stage of cortical development.
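The ocular dominance shift described above is conventionally quantified per neuron with a normalized index comparing responses through the two eyes. A minimal sketch of that bookkeeping, assuming the standard contralateral/ipsilateral formulation; the study's exact metric and the response values here are illustrative, not taken from the paper:

```python
def od_index(r_contra, r_ipsi):
    """Ocular dominance index: +1 = purely contralateral, -1 = purely ipsilateral."""
    total = r_contra + r_ipsi
    if total == 0:
        return 0.0
    return (r_contra - r_ipsi) / total

# Hypothetical responses (arbitrary units) before and after depriving the contralateral eye
before = od_index(r_contra=8.0, r_ipsi=2.0)   # contralateral-dominated neuron
after = od_index(r_contra=3.0, r_ipsi=2.0)    # shifted toward the open (ipsilateral) eye
shift = before - after                        # positive shift = loss of deprived-eye drive
```

Across a population, OD plasticity would then appear as a systematic decrease of this index in neurons whose dominant eye was deprived.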

7.
Like other sensory systems, the visual system is topographically organized: Its sensory neurons, the photoreceptors, and their targets maintain point-to-point correspondence in physical space, forming a retinotopic map. The iterative wiring of circuits in the visual system conveniently facilitates the study of its development. Over the past few decades, experiments in Drosophila have shed light on the principles that guide the specification and connectivity of visual system neurons. In this review, we describe the main findings unearthed by the study of the Drosophila visual system and compare them with similar events in mammals. We focus on how temporal and spatial patterning generates diverse cell types, how guidance molecules distribute the axons and dendrites of neurons within the correct target regions, how vertebrates and invertebrates generate their retinotopic map, and the molecules and mechanisms required for neuronal migration. We suggest that basic principles used to wire the fly visual system are broadly applicable to other systems and highlight its importance as a model to study nervous system development.

The visual system is integral to the detection and processing of environmental stimuli such as food, mates, or predators. Like auditory and somatosensory neurons, the visual system maintains point-to-point correspondence in space between sensory receptors and downstream processing centers, a phenomenon known in the visual system as retinotopy (1). Retinotopy facilitates the study of neural circuit development and function, as it allows one to extrapolate general principles about visual system assembly by focusing on a single subunit. This, combined with the visual system’s accessibility, has made it one of the best-studied sensory modalities.

Drosophila has a long history as a genetic model organism used to study visual system development. Immunohistochemistry and molecular genetic experiments performed in Drosophila have unearthed some of the developmental mechanisms that build the visual system. Methodologies such as Golgi staining (2), highly specific enhancer trap lines (3, 4), single-cell RNA sequencing (5–8), and electron microscopy reconstructions (9–13) have allowed for careful morphological and molecular characterization of visual system components, providing both the roster of neuronal types and the identity of their synaptic partners (i.e., the connectome). Importantly, the neural stem cells of the Drosophila optic lobe (called neuroblasts) are generated as a wave of differentiation sweeps over a neuroepithelium, allowing one to simultaneously observe and compare neuroblasts of different ages at a single time point (14–16). Many concepts underlying Drosophila visual system development are readily applicable to other systems, such as the vertebrate visual system (17) and cortex (18).

Nervous system development follows a number of reproducible steps. Cell types must be specified and generated in the correct proportions. Neurons must target the correct (optic) ganglia and segregate their axons/dendrites into the correct target regions.
Neurons of the same type must distribute their arbors across the topographic map. They must assemble themselves with stereotypic/columnar organization and project to the correct layers, and finally, upon reaching proximity to their synaptic partners, neurons must make the correct connections. All of these steps must be coordinated under precise spatiotemporal control.

Below, we describe the major concepts that have emerged from the study of the Drosophila visual system, with a specific emphasis on how the medulla optic ganglion is generated. The development of the retina (19–22), the formation of the optic neuroepithelia (23, 24), and the formation of neuropil layers (25, 26) have been discussed at length in numerous reviews and therefore will not be mentioned further.

8.
Visual development depends on sensory input during an early developmental critical period. Deviation of the pointing direction of the two eyes (strabismus) or chronic optical blur (anisometropia), separately and together, can disrupt the formation of normal binocular interactions and the development of spatial processing, leading to a loss of stereopsis and visual acuity known as amblyopia. To shed new light on how these two different forms of visual deprivation affect the development of visual cortex, we used event-related potentials (ERPs) to study the temporal evolution of visual responses in patients who had experienced either strabismus or anisometropia early in life. To make a specific statement about the locus of deprivation effects, we took advantage of a stimulation paradigm in which we could measure deprivation effects that arise either before or after a configuration-specific response to illusory contours (ICs). Extraction of ICs is known to first occur in extrastriate visual areas. Our ERP measurements indicate that deprivation via strabismus affects both the early part of the evoked response that occurs before ICs are formed and the later IC-selective response. Importantly, these effects are found in the normal-acuity nonamblyopic eyes of strabismic amblyopes and in both eyes of strabismic patients without amblyopia. The nonamblyopic eyes of anisometropic amblyopes, by contrast, are normal. Our results indicate that, beyond its well-known effects on the development of normal binocularity, strabismus also affects the early stages of monocular feature processing in an acuity-independent fashion.

Over 50 y of research on experimental animal models has indicated that deprivation of normal visual experience during a developmental critical period perturbs both the structure and function of primary visual cortex (1–4).
The animal models were developed to understand the underlying neural mechanisms of amblyopia, a common human developmental disorder of spatial vision associated with the presence of strabismus, anisometropia, or form deprivation during early life (5). Amblyopia is classically defined on the basis of poor visual acuity, but many other visual functions are known to be affected (6–8).

The earliest experimental studies of visual deprivation focused on the effects of monocular lid suture, and these studies showed devastating effects on the ability of the deprived eye to drive neural responses, retain synaptic connections, and guide visual behavior (9–11). Later work studied less extreme forms of deprivation that are common in humans, such as the effects of strabismus (deviation of the pointing direction of the two eyes) (12, 13) or anisometropia (chronic optical blur) (14, 15). More recent studies (16, 17) have found that losses in cell responses in primary visual cortex appear to be insufficient to explain the magnitude of behaviorally measured deficits. Based on these results, a hypothesis has been put forward that these forms of deprivation have their primary effects in extrastriate cortex (16).

Motivated by this idea, psychophysicists have sought evidence that extrastriate cortex is particularly impaired in human amblyopia. This work has used tasks whose execution is fundamentally limited by processing resources that single-cell physiology suggests are located in extrastriate cortex. As a second step, these studies have scaled stimuli based on visual acuity and compensated for contrast sensitivity losses to equate the output of early visual cortex from the amblyopic eye to that of normal-vision participants. Despite a nominal match at the level of early visual cortex outputs, patients with amblyopia still show deficits on illusory tilt perception (18), contour integration (19–23), global motion sensitivity (8, 24–28), object enumeration (29), and object tracking (7, 30).
The impairments listed above have been interpreted to indicate that amblyopia may involve abnormalities in “higher-level” (e.g., extrastriate) neural processing that occur independent of any deficits in early processing stages (e.g., in striate cortex). A limitation of the existing psychophysical approaches has been the need to assume that the stimulus scaling used to equate stimuli for visibility fully equilibrates the activity of early visual cortex. It would be preferable to take an approach that allows one to measure neural responses directly from both early and later stages of visual processing. Here we use event-related potentials (ERPs) and a stimulation paradigm that allow us to record responses from both early visual cortex and higher-level, extrastriate areas.

Our approach is similar in spirit to existing psychophysical approaches: We use a stimulus configuration—illusory contours (ICs)—that previous single-unit studies have shown to be first extracted in extrastriate cortex (31–34). ICs, also referred to as subjective contours, render object borders that are perceptually vivid but that are created in the absence of luminance contrast or chrominance gradients (35). ICs have been widely used to study mechanisms of scene segmentation and grouping operations that are among the most fundamental tasks the visual system has to perform (36). ICs have garnered considerable interest because of their “inferential” nature—despite the lack of luminance edges, the visual system uses implicit configural cues to infer the presence of a contour.
Finally, behavioral investigations in macaque suggest that IC perception is strongly dependent on higher visual areas, including V4 (37, 38) and inferotemporal (IT) cortex (39, 40). Instead of attempting to equate the visibility of stimuli in the amblyopic eye to that of normal control eyes, as has been typical practice in the study of amblyopia, we base a close analysis of the effects of deprivation on ERP responses from the nonamblyopic eyes of patients with anisometropic or strabismic amblyopia. These eyes have normal visual acuity and normal or even supernormal contrast sensitivity (41), making the stimuli nominally equivisible without the need for scaling. We then measure evoked responses at early latencies, before the time that IC selectivity arises, to assess the integrity of early visual cortex, and compare these responses to those measured at longer latencies, after robust IC selectivity has been established. Previous single-unit studies that have used ICs of the type used in the present study indicate that they are first extracted no later than V2 (31, 42, 43) or V4 (34). Given the difference in species and stimuli, we will refer in the following to evoked responses that lack IC sensitivity as having arisen in “early” visual cortex, rather than in specific visual areas. To further specify the site of deprivation effects, we also study a group of stereo-blind patients with strabismus who do not have amblyopia (normal visual acuity in each eye).

A second goal of our study is to compare the effects of deprivation from unilateral blur (anisometropia) to those caused by strabismus. The human psychophysical literature has made a distinction between the pattern of visual loss associated with strabismus and that associated with anisometropia (44). At least some of the differences in performance between these two types of deprivation can be explained at the level of residual stereopsis, which typically differs between these two populations (41).
Whenever these two types of deprivation have been compared in terms of their effects on the monocular cell properties of V1, there has been little to differentiate the effects of the two types of deprivation (16, 45, 46). Unfortunately, there are relatively few studies of the effects of critical period deprivation on the cell-tuning properties in extrastriate cortex of any species (15, 17, 47), and there has been no comparison of the effects of strabismus vs. anisometropia in extrastriate cortex. The implication of the existing animal literature is that strabismus and anisometropia have comparable effects on early visual cortex and thus the divergence in their behavioral phenotype, as well as the major effects of deprivation, will lie in extrastriate cortex. Here we show that these two types of deprivation have differential effects very early in visual cortex, possibly as early as the transfer of information from V1 to V2.

9.
Humans make sense of the world by organizing things into categories. When and how does this process begin? We investigated whether real-world object categories that spontaneously emerge in the first months of life match categorical representations of objects in the human visual cortex. Using eye tracking, we measured the differential looking time of 4-, 10-, and 19-mo-olds as they looked at pairs of pictures belonging to eight animate or inanimate categories (human/nonhuman, faces/bodies, real-world size big/small, natural/artificial). Taking infants’ looking times as a measure of similarity, for each age group, we defined a representational space where each object was defined in relation to others of the same or of a different category. This space was compared with hypothesis-based and functional MRI-based models of visual object categorization in the adults’ visual cortex. Analyses across different age groups showed that, as infants grow older, their looking behavior matches neural representations in ever-larger portions of the adult visual cortex, suggesting progressive recruitment and integration of more and more feature spaces distributed over the visual cortex. Moreover, the results characterize infants’ visual categorization as an incremental process with two milestones. Between 4 and 10 mo, visual exploration guided by saliency gives way to an organization according to the animate–inanimate distinction. Between 10 and 19 mo, a category spurt leads toward a mature organization. We propose that these changes underlie the coupling between seeing and thinking in the developing mind.

Objects are the units of attention and perception; categories are the units of thought. We see objects (e.g., a rounded spongy red and white-dotted shape on an elongated support), but we think about objects primarily in terms of categories (e.g., the mushroom Amanita muscaria). By recognizing an object as a member of a category, we understand what that object is and retrieve its visible (e.g., it is red with white spots) as well as its invisible properties (e.g., it is hallucinogenic). Categorization is thus the basis of inference and decision, although not all inferences and decisions require categorization.

Objects can be categorized according to a virtually infinite number of perceptual and nonperceptual dimensions (1, 2). Insight on the most basic and general dimensions for object categorization in humans has been gained by studying how information is organized in the vast brain territory for visual object representation, which forms the occipitotemporal visual ventral stream. Here, categories emerge from the topography of responses to visual objects, resolving into a large-scale organization that distinguishes between animate and inanimate objects and breaking down further into finer-grained distinctions between human vs. nonhuman animals, small vs. big (in terms of real-world size) (3, 4), and natural vs. artificial objects (3, 5–11). Underneath this organization lies a mosaic of local hot spots of strong selectivity for stimuli such as faces, bodies, and scenes (12–15). Because of its organization and role in object recognition, the visual ventral stream is regarded as the interface between perception and cognition, forming the backbone for semantic categorization and representation of object and action knowledge in the rest of the brain (16).

Besides the topography, categorical distinctions in the visual cortex also emerge from dissimilarity between distributed patterns of neural activity evoked by individual objects (7, 17, 18).
Thus, in visual areas, activity patterns recorded with functional MRI (fMRI) are more similar (i.e., less discriminable) for two animate objects (e.g., parrot and camel) than for an animate and an inanimate object (e.g., parrot and car). Visual object categories represented in the visual cortex prove behaviorally relevant, predicting the way in which individuals parse the visual world. For example, in a visual search for a target object among a set of distractors, people are faster to discriminate and find a target among objects of a different visual category (e.g., a cat among artificial objects) than among objects of the same visual category: search times increase as neural similarity between target and distractors increases (5).

The organization of the human visual cortex by object categories appears to be a hallmark in the evolution of the primate brain: it is replicated in the visual cortex of monkeys (7, 19) and is resistant to variations of individual visual experience (20–23). A similar organization across species and across conspecifics with different environments and life-long visual experience suggests a neural code optimized by evolution. This line of thinking encourages the hypothesis that object representation in the visual cortex reflects biological constraints and dispositions (24); as such, it would emerge early in life or even be present at birth.

There is initial evidence for signatures, or precursors, of neural specialization to object categories (faces, bodies, animals, and scenes) in the visual cortex of newborns or young infants, based on electroencephalography (25–30) or fMRI (31, 32).
Behavioral counterparts of those neural effects include early preferences for faces or face-like stimuli over inverted faces (33–35), for biological over nonbiological motion (36, 37), and for canonical over distorted bodies (38–40).

While preference implies discrimination between two objects, visual categorization entails the ability to use the visual properties of a category (e.g., shape) to identify its members and keep them separate from other categories. By 4 mo, infants are already able to do so: exposed to various exemplars of a category (e.g., cats), they exhibit a novelty effect, looking longer at an object of a new category than at a novel object of the same category (41–43).

But when do infants begin to see the visual world as adults do? Here, we investigate whether the categorical dimensions that drive the large-scale organization of the human visual cortex could account for the spontaneous emergence and development of real-world object categories in infancy. In particular, under the hypothesis that the structuring of visual object information toward an adult-like organization begins at birth (27, 31, 32), we asked when such organization becomes functional, so as to account for how infants explore the visual world.

We examined the development of visual object categorization in infancy, considering, in one experimental design, objects that have highlighted categorical representations in the visual cortex of human adults (and monkeys): animate vs. inanimate, human vs. nonhuman (animate), faces vs. bodies, natural vs. artificial (inanimate), and real-world big vs. small (inanimate) (7). Each of the above distinctions defines a categorization model, whereby a given (behavioral or physiological) correlate of object perception would be more similar for two objects of the same category than for two objects of different categories.

Using eye tracking, we recorded the most reliable and informative measure of infants’ cognition available thus far: looking behavior (44, 45).
Infants of 4, 10, and 19 mo viewed two objects at a time on a screen while we measured the looking time toward each object. We took the looking-time difference between two stimuli as a measure of dissimilarity, under the assumption that looking times for two objects seen for the first time would be more similar the closer their visual representations are (see also ref. 46). Since two stimuli of the same visual category are normally more similar than two stimuli from different categories, we expected the variations in differential looking times (DLTs) to reflect variations in representational similarity, uncovering categorical distinctions.

In classic categorization studies, infants’ looking times are used to capture differences in novelty/familiarity created ad hoc within the experimental session [e.g., through the presentation of multiple exemplars of a category during familiarization (41–43, 47, 48)]. Thus, a methodological challenge (and innovation) of the current work was to use looking times to capture differences in the perceived (dis)similarity between two objects, in the absence of any controlled imbalance in the exposure to a given category (at least within the experimental session). As a result, this approach defined a model in which each object was represented in relation to the others (i.e., how similar/dissimilar it was to exemplars of the same and different categories). A model based on a relative measurement can be quantitatively compared with any model based on another relative measurement, whatever the source of the measurements (e.g., reaction times, neural activity) (49). We compared the model of visual object representation emerging from the infants’ looking behavior with synthetic (i.e., hypothesis-driven) and data-driven (i.e., fMRI-based) models reflecting visual object representation in the mature visual cortex.
This approach had previously allowed connecting data from brain-activity recordings, behavioral measurements in adults, and computational modeling (49). Here, by studying the relationship between the infants’ looking behavior and the organization of visual object information in the adults’ brain, we connected a further branch, taking another step toward a unified theory of the origin and development of functional organization in the human brain.
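The looking-time logic can be sketched concretely: pairwise differences in looking time form a behavioral representational dissimilarity matrix (RDM), whose off-diagonal entries can be correlated with a hypothesis model such as the animate–inanimate distinction. The stimuli, values, and plain Pearson comparison below are illustrative assumptions; the study's actual pipeline is far richer:

```python
import numpy as np

# Hypothetical mean looking times (seconds) for 4 objects: 2 animate, 2 inanimate
looking = np.array([2.1, 2.3, 3.0, 3.2])       # e.g., face, cat, car, chair
animate = np.array([1, 1, 0, 0])               # category labels

n = len(looking)
# Behavioral RDM: dissimilarity = absolute looking-time difference per object pair
rdm = np.abs(looking[:, None] - looking[None, :])

# Hypothesis RDM: 1 if the two objects belong to different categories, else 0
model = (animate[:, None] != animate[None, :]).astype(float)

# Compare the off-diagonal pairs (upper triangle) with a Pearson correlation
iu = np.triu_indices(n, k=1)
r = np.corrcoef(rdm[iu], model[iu])[0, 1]      # high r = behavior matches the model
```

A high correlation would indicate that looking behavior is organized by the hypothesized categorical distinction; the same comparison can be run against fMRI-derived RDMs instead of a binary model.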

10.
Transcranial magnetic stimulation (TMS) is widely used in clinical interventions and basic neuroscience. Additionally, it has become a powerful tool to drive plastic changes in neuronal networks. However, highly resolved recordings of the immediate TMS effects have remained scarce, because existing recording techniques are limited in spatial or temporal resolution or are interfered with by the strong TMS-induced electric field. To circumvent these constraints, we performed optical imaging with voltage-sensitive dye (VSD) in an animal experimental setting using anaesthetized cats. The dye signals reflect gradual changes in the cells’ membrane potential across several square millimeters of cortical tissue, thus enabling direct visualization of TMS-induced neuronal population dynamics. After application of a single TMS pulse across visual cortex, brief focal activation was immediately followed by synchronous suppression of a large pool of neurons. With consecutive magnetic pulses (10 Hz), widespread activity within this “basin of suppression” increased stepwise to suprathreshold levels and spontaneous activity was enhanced. Visual stimulation after repetitive TMS revealed long-term potentiation of evoked activity. Furthermore, loss of the “deceleration–acceleration” notch during the rising phase of the response, as a signature of fast intracortical inhibition detectable with VSD imaging, indicated weakened inhibition as an important driving force of increasing cortical excitability. In summary, our data show that high-frequency TMS changes the balance between excitation and inhibition in favor of an excitatory cortical state.
VSD imaging may thus be a promising technique to trace TMS-induced changes in excitability and resulting plastic processes across cortical maps with high spatial and temporal resolution.

Over recent decades, transcranial magnetic stimulation (TMS) (1) has become a frequently used method for noninvasive diagnostics, therapeutic treatment, and intervention in the neurorehabilitation of neurological disorders (2–8). Additionally, TMS has proved a valuable tool in basic brain research, as its perturbative effects allow area-selective manipulation of immediate cortical function (9–11), as well as its long-lasting alteration through plasticity and learning protocols (12, 13). However, direct measurements of the TMS-induced cortical dynamics at highly resolved spatiotemporal scales are missing, because “online approaches” (14) using modern neuroimaging techniques such as functional MRI (fMRI) (15–18), magnetoencephalography (19), EEG (20), and near-infrared (21) or intrinsic optical imaging (22) are limited in either spatial or temporal resolution, or both.

Here we overcame these limitations using optical imaging with voltage-sensitive dyes (VSD), which exploits the dyes’ property of transducing gradual changes in voltage across neuronal membranes into fluorescent light signals. In contrast to imaging methods applicable in humans, this method is invasive, but it avoids the commonly experienced contamination of signals by artifacts from the strong TMS-induced electric field. In combination with a tandem-lens system of large numerical aperture (23) and a fast CCD camera as detector, VSD imaging captures several square millimeters of cortex with an emphasis on superficial layers (24–32), allowing us to record activity changes within milliseconds across millions of neurons at once, with a spatial resolution of ∼50 μm (for review, see ref. 33).
We measured activity in cat primary visual cortex (V1) upon repetitive TMS (rTMS) (0.15 Hz, 1 Hz, and 10 Hz) and describe its effects on fundamental processing characteristics during subsequent visual stimulation.

11.
Recurrent loops in the visual cortex play a critical role in visual perception, which is likely not mediated by purely feed-forward pathways. However, the development of recurrent loops is poorly understood. The role of recurrent processing has been studied using visual backward masking, a perceptual phenomenon in which a visual stimulus is rendered invisible by a following mask, possibly because of the disruption of recurrent processing. Anatomical studies have reported that recurrent pathways are immature in early infancy. This raises the possibility that younger infants process visual information mainly in a feed-forward manner, and thus, they might be able to perceive visual stimuli that adults cannot see because of backward masking. Here, we show that infants under 7 mo of age are immune to visual backward masking and that masked stimuli remain visible to younger infants while older infants cannot perceive them. These results suggest that recurrent processing is immature in infants under 7 mo and that they are able to perceive objects even without recurrent processing. Our findings indicate that the algorithm for visual perception drastically changes in the second half of the first year of life.

The standard view of cortical visual processing is that visual information is hierarchically processed along feed-forward pathways, with more complex representations created serially by relaying information from lower to higher visual areas (1). However, recurrent processing, which is mediated by corticocortical feedback and intra-areal horizontal pathways (2, 3), also contributes to fundamental visual functions (4–6). Although the role of recurrent processing is not yet well understood, many studies have proposed that visual perception is not mediated by a purely feed-forward system but rather by a system incorporating recurrent loops (7–12).

Anatomical studies of the infant brain have reported that feedback and horizontal connections develop later than feed-forward connections (13–15). Anatomical data from postmortem brains of human infants have shown that the adult-like laminar termination pattern of forward connections between V1 and V2 emerges by 4 mo of age (14), but the feedback (14) and long-range horizontal (15) connections are immature at that age. These findings imply that until at least around the second half of the first year of life, recurrent processing is immature, and visual information may be processed mainly in a feed-forward manner. However, this possibility has so far not been examined. Although neuroimaging studies of human infants have revealed functional and structural brain development (16, 17), it is difficult to determine whether the observed activities or structures are derived from feed-forward or recurrent pathways using imaging techniques in human infant studies. Thus, in the present study, we examined recurrent processing in early infancy using visual backward masking.

Visual backward masking is a perceptual phenomenon in which a stimulus is rendered invisible by a mask presented after the target stimulus.
We used object substitution masking (OSM) (18), a type of backward masking which is thought to arise from a disruption of recurrent processing (18–23). In OSM, target perception is impaired when a target is briefly presented with a sparse mask that surrounds the target and the mask remains on screen after the target disappears, while target perception remains intact when the target and the mask disappear simultaneously. OSM has been proposed to occur because the temporally trailing mask disrupts the recurrent signals related to the target (18). Although some studies have questioned the recurrent explanation (24, 25), evidence from psychophysical studies suggests that OSM can be plausibly explained by the recurrent theory rather than the feed-forward theory (18, 20, 22, 23). Indeed, a neuroimaging study has shown that recurrent activities in the early visual cortex are modulated when target perception is impaired by OSM (19).

Although an infant study has suggested that OSM occurs in 6-mo-old infants (26), its mechanism and development remain poorly understood. If visual processing is performed without recurrent processing in early infancy, as suggested by the anatomical studies (14, 15), OSM may not occur in younger infants. In other words, younger infants may be able to perceive a masked stimulus that older infants cannot.

12.
13.
Neurons throughout the primate inferior temporal (IT) cortex respond selectively to visual images of faces and other complex objects. The response magnitude of neurons to a given image often depends on the size at which the image is presented, usually on a flat display at a fixed distance. While such size sensitivity might simply reflect the angular subtense of retinal image stimulation in degrees, one unexplored possibility is that it tracks the real-world geometry of physical objects, such as their size and distance to the observer in centimeters. This distinction bears fundamentally on the nature of object representation in IT and on the scope of visual operations supported by the ventral visual pathway. To address this question, we assessed the response dependency of neurons in the macaque anterior fundus (AF) face patch to the angular versus physical size of faces. We employed a macaque avatar to stereoscopically render three-dimensional (3D) photorealistic faces at multiple sizes and distances, including a subset of size/distance combinations designed to cast the same size retinal image projection. We found that most AF neurons were modulated principally by the 3D physical size of the face rather than its two-dimensional (2D) angular size on the retina. Further, most neurons responded strongest to extremely large and small faces, rather than to those of normal size. Together, these findings reveal a graded encoding of physical size among face patch neurons, providing evidence that category-selective regions of the primate ventral visual pathway participate in a geometric analysis of real-world objects.

We experience the world in three-dimensional (3D) space, perceiving and interacting with objects and individuals in a scene. For humans and other primates, much of this experience is served by vision, with broad stretches of the cerebral cortex ostensibly devoted to making visual sense of the world. For example, individual neurons throughout the inferior temporal (IT) cortex of the macaque respond selectively to meaningful objects, with neurons of similar response properties often aggregated in functional clusters (1–3). One striking finding about the object selectivity of IT neurons is its tolerance to natural image transformations, such as scaling and translation (4–12). Namely, if stimuli are ranked based on the responses they elicit from a given neuron, this ranking often remains unchanged when stimuli are translated on the screen or scaled up or down several-fold in size. Scale tolerance in object selectivity is thought to reflect the capacity of the brain to compute a conceptual or abstracted representation of the retinal image separate from its metric details. While the mechanism underlying this apparently intrinsic feature of ventral stream visual processing is poorly understood, it is thought to be critical for image-based object recognition (4, 13–17).

At the same time, scaling an image up or down can greatly change the responses of IT neurons to stimuli, even as the rank-order selectivity to stimuli is preserved (18–20). This size-dependent rate modulation is poorly understood and seldom considered explicitly. One relatively unexplored possibility is that some IT neurons encode the physical dimensions of objects, in addition to their shape and their featural and semantic properties. The explicit coding of parameters such as absolute object size and distance from the observer might facilitate visual operations concerned with the perception of scene geometry and interaction with the local environment.
Additionally, the brain may benefit by retaining internal metric information about the typical sizes of objects (21, 22), as this information could be applied to subsequent perceptual judgments about objects and individuals in the context of natural visual behaviors (23).

The visual encoding of 3D space is usually associated with parietal cortex in the dorsal visual pathway, where coordinate transformations are thought to convert retinal signals to 3D information about objects and the environment that can be used to guide effector actions (24). However, a few studies have demonstrated that neurons in the ventral pathway also exhibit signals related to 3D spatial perception. For example, in area V4 neural responses to a given retinal image are modulated based on the physical distance at which that image is presented (25) as well as volumetric 3D shape parameters (26). At later ventral pathway processing stages, the superior temporal sulcus (STS) is marked by selectivity to 3D object shape, potentially reflecting its interplay with intraparietal areas concerned with 3D visual geometry (27–32). While these findings demonstrate that 3D information influences responses across the ventral visual pathway, little is known about whether these areas explicitly encode the physical dimensions of objects, such as their size or distance from the observer.

Here we explicitly investigate how a population of category-selective neurons in macaque IT encodes the physical dimensions of objects. We recorded from the anterior fundus (AF) face patch (33), a well-studied face-selective region of the STS where neurons are known to be both selective for faces and sensitive to their spatial scale (34). We asked whether such scale sensitivity primarily reflects the 2D image of a face on the retina or the 3D physical geometry of the face and head in the real world.
In most visual electrophysiology experiments, the retinal and physical geometry of an image are yoked: scaling an image on a display alters both its physical size and its retinal subtense. Moreover, absent other explicit depth cues, an image has ambiguous depth and thus cannot be uniquely mapped to the 3D world. In the present study, we used a recently developed macaque avatar model (35) to stereoscopically render photorealistic 3D faces of unambiguous physical size and distance. We found that the size sensitivity of most AF neurons was dictated primarily by the physical dimensions of a face rather than by its angular subtense on the retina. We further discovered that neural responses were strongest to extreme-sized faces rather than normal-sized faces, contrary to intuition but consistent with ideas of predictive coding. We discuss how object-selective IT neurons might contribute to important and conserved elements of natural visual behavior through their encoding of real-world geometric parameters.
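The size/distance pairings used in this design follow from simple viewing geometry: doubling both the physical size of a face and its distance leaves the angular (retinal) size unchanged. A minimal Python illustration of that geometry (the specific values here are arbitrary, not the study's stimulus parameters):

```python
import math

def angular_size_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by an object of a given
    physical size viewed at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Two faces with the same size/distance ratio cast identical retinal
# images, even though their physical sizes differ two-fold.
near_face = angular_size_deg(10, 50)    # hypothetical 10 cm face at 50 cm
far_face = angular_size_deg(20, 100)    # hypothetical 20 cm face at 100 cm
assert abs(near_face - far_face) < 1e-9
```

Stereoscopic rendering is what disambiguates which member of such a pair is actually present, allowing angular and physical size to be dissociated.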

14.
Darkness and brightness are very different perceptually. To understand the neural basis for the visual difference, we studied the dynamical states of populations of neurons in macaque primary visual cortex when a spatially uniform area (8° × 8°) of the visual field alternated between black and white. Darkness evoked sustained nerve-impulse spiking in primary visual cortex neurons, but bright stimuli evoked only a transient response. A peak in the local field potential (LFP) γ band (30–80 Hz) occurred during darkness; white-induced LFP fluctuations were of lower amplitude, peaking at 25 Hz. However, the sustained response to white in the evoked LFP was larger than for black. Together with the results on spiking, the LFP results imply that, throughout the stimulus period, bright fields evoked strong net sustained inhibition. Such cortical brightness adaptation can explain many perceptual phenomena: interocular speeding up of dark adaptation, tonic interocular suppression, and interocular masking.

Light adaptation is a vitally important visual function for enabling a stable perception of the visual world when background luminance levels can be as different as night and day. Previous psychophysical studies suggested that light adaptation was caused mainly by gain control mechanisms in the retina (1–3) that have been well studied (4). However, some psychophysical results suggested that there might also be a cortical contribution to light adaptation (5), but the nature of the cortical contribution is much less well understood. Here, we report our studies of cortical adaptation to brightness and darkness in macaque primary visual cortex (V1) and the implications for visual perception.

We asked the following question: How does macaque V1 cortex respond to large dark and bright regions like those that would comprise the background of a visual scene during the night or the day, respectively? The experiments reported here focused on two cortical layers, 4C and 2/3.
The layers of V1 are distinct stages of processing of visual signals (6, 7). The input layer 4C is the first cortical stage where the cortex could distinguish between blackness and whiteness (8). Layer 2/3 comprises one of the main visual outputs of V1 to extrastriate visual cortex (9).

To obtain a comprehensive view of the response to black and white in cortical layers 4C and 2/3, we used measurements of population activity: multiunit spike rate, termed multiunit activity (MUA), and local field potential (LFP) (10–12).

Cortical brightness adaptation was evident in the qualitatively different dynamics of neural population activity in layers 4C and 2/3 when the monkeys viewed black and white regions. Both black and white large-area stimuli evoked transient excitatory responses in MUA, but in response to a white region, there was a slowly developing but much stronger inhibition of spike activity. Such suppression of sustained spiking in cortical neurons by white backgrounds would increase the signal-to-noise ratio of targets on white backgrounds. Such cortical brightness adaptation is likely the explanation for many previously observed perceptual phenomena such as tonic interocular suppression, dichoptic effects in light and dark adaptation, and interocular masking (5, 13–16).
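The band-limited LFP measures described above (a γ-band peak during darkness versus a lower-frequency peak for white) come down to estimating spectral power within a frequency band. A schematic numpy sketch on synthetic data, not the authors' analysis pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of `signal` within [f_lo, f_hi) Hz from an FFT periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

# Synthetic "dark-condition" LFP: a 40 Hz gamma oscillation plus noise.
rng = np.random.default_rng(0)
fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of data
lfp = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

gamma = band_power(lfp, fs, 30, 80)   # gamma band (30-80 Hz)
beta = band_power(lfp, fs, 13, 30)    # beta band, for comparison
assert gamma > beta                   # the 40 Hz component dominates
```

In practice, laminar LFP analyses would use tapered spectral estimators and trial averaging, but the band-power comparison has this basic form.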

15.
Illusory figures demonstrate the visual system’s ability to infer surfaces under conditions of fragmented sensory input. To investigate the role of midlevel visual area V4 in visual surface completion, we used multielectrode arrays to measure spiking responses to two types of visual stimuli: Kanizsa patterns that induce the perception of an illusory surface and physically similar control stimuli that do not. Neurons in V4 exhibited stronger and sometimes rhythmic spiking responses for the illusion-promoting configurations compared with controls. Moreover, this elevated response depended on the precise alignment of the neuron’s peak visual field sensitivity (receptive field focus) with the illusory surface itself. Neurons whose receptive field focus was over adjacent inducing elements, less than 1.5° away, did not show response enhancement to the illusion. Neither receptive field sizes nor fixational eye movements could account for this effect, which was present in both single-unit signals and multiunit activity. These results suggest that the active perceptual completion of surfaces and shapes, which is a fundamental problem in natural visual experience, draws upon the selective enhancement of activity within a distinct subpopulation of neurons in cortical area V4.

Visual illusions are valuable stimuli for studying the neural basis of visual processing because they reveal the brain’s internal mechanisms for interpreting sensory input. Illusory figures, for example, exploit the visual system’s capacity to construct contours, shapes, and surfaces despite the lack of a continuous physical border (1, 2). Illusory figures are perceived by a range of phylogenetically diverse species, including monkeys, cats, owls, and bees, pointing to perceptual completion as a fundamental aspect of natural vision (3).

Neural correlates of illusory figures have been found in a wide range of brain areas.
Recordings in monkeys revealed that illusory figures evoke spiking responses from neurons in visual areas as early as V1 and V2 and as late as the inferotemporal cortex (4–9). Neuroimaging studies in humans similarly found responses to illusory figures throughout visual cortex (10–13).

Several theoretical models postulate mechanisms of illusory figure perception (14–19). A common feature of these models is spatial integration of the inducing elements combined with an active interpolation to complete the surface. These processes are frequently assigned to neurons in midlevel areas, whose receptive fields are large enough to cover separate elements yet sensitive enough to distinguish between local features such as orientation, curvature, and colinearity (20, 21). A range of evidence suggests that visual area V4 in particular may play an active role in surface completion. First, the receptive fields of V4 neurons are large by comparison with V1 and V2 receptive fields and are therefore able to integrate information across spatially separated stimulus components (22). Second, psychophysical studies demonstrate that the perception of certain similar illusory figures varies over visual space in a manner consistent with the retinotopy of V4 (23, 24). Third, both human (10–13) and nonhuman primate (25) functional imaging studies reveal responses to illusory contours and surfaces in area V4. Fourth, ablation of area V4 in the macaque selectively impairs performance on discrimination tasks that involve illusory contours (26).

Here we investigate the neural representation of illusory surfaces in macaque area V4 using Kanizsa patterns known to give rise to the perception of illusory surfaces. Illusion-promoting patterns elicited electrophysiological responses that were often rhythmic and were significantly enhanced in their firing rate compared with physically similar control patterns that did not promote the illusion.
This enhancement depended critically on the spatial alignment of the illusory surface with the point of peak V4 receptive field sensitivity, or “RF focus.” Only neurons with receptive fields focused on the illusory surface showed elevated responses to the illusory surface, whereas those with receptive fields focused on the inducing elements did not. This effect was observed for neurons whose receptive fields, as defined by conventional mapping techniques, were several degrees in size and overlapped with both the illusory surface and the inducer elements. These observations suggest that V4 neurons play an active role in the representation of illusory surfaces and are sensitive to stimulus details much finer than would be predicted based on receptive field size alone.

16.
During critical periods, all cortical neural circuits are refined to optimize their functional properties. The prevailing notion is that the balance between excitation and inhibition determines the onset and closure of critical periods. In contrast, we show that maturation of silent glutamatergic synapses onto principal neurons was sufficient to govern the duration of the critical period for ocular dominance plasticity in the visual cortex of mice. Specifically, postsynaptic density protein-95 (PSD-95) was absolutely required for experience-dependent maturation of silent synapses, and its absence before the onset of critical periods resulted in lifelong juvenile ocular dominance plasticity. Loss of PSD-95 in the visual cortex after the closure of the critical period reinstated silent synapses, resulting in reopening of juvenile-like ocular dominance plasticity. Additionally, silent synapse-based ocular dominance plasticity was largely independent of the inhibitory tone, whose developmental maturation was independent of PSD-95. Moreover, glutamatergic synaptic transmission onto parvalbumin-positive interneurons was unaltered in PSD-95 KO mice. These findings not only reveal that PSD-95–dependent silent synapse maturation in visual cortical principal neurons terminates the critical period for ocular dominance plasticity but also indicate that, in general, once silent synapses are consolidated in any neural circuit, initial experience-dependent functional optimization and critical periods end.

Immature cortical neural networks, which are formed primarily under genetic control (1), require experience and training to shape and optimize their functional properties. This experience-dependent refinement is considered to be a general developmental process for all functional cortical domains and typically peaks during their respective critical periods (CPs) (2, 3).
Known examples for CPs span functional domains as diverse as filial imprinting and courtship song learning in birds (4, 5); cognitive functions, such as linguistic or musical skills in humans (6, 7); and, likely best studied, the different features of sensory modalities (3). CPs are characterized by the absolute requirement for experience in a restricted time window for neural network optimization. Lack of visual experience during the CP for visual cortex refinements can, for example, cause irreversible visual impairment (8). Refinements during the CP thus play an essential role (9). Although some functions can be substantially ameliorated after the CP, they are rarely optimally restored.

It is believed that the neural network refinement is based on synapse stabilization and elimination (10–12) and includes forms of long-term synaptic plasticity to remodel excitatory synapses of principal neurons (13, 14). Although long-term plasticity at these excitatory synapses is instructive for shaping neural networks for functional output and its expression coincides with CPs, it is not known whether the remodeling itself governs the duration of CPs. In contrast, only permissive mechanisms have been shown to terminate CPs. Among these, the developmental increase of local inhibition appears to be the dominating mechanism to regulate cortical plasticity and CPs (15–17). Additionally, extracellular matrix remodeling is involved, as well as receptors of immune signaling, such as paired Ig-like receptor B (PirB), or axon pathfinding, such as Nogo (18–21). However, a specific function to directly regulate synapse remodeling during initial neural network optimization is not known, and a potential instructive function of PirB was described for adult cortical plasticity but not for plasticity of the initial synapse remodeling during CPs (22).

AMPA receptor-silent synapses have been proposed to be efficient plasticity substrates during early cortical network refinements (13, 23, 24).
Silent synapses are thought to be immature, still-developing excitatory synapses containing only NMDA receptors (NMDARs) but lacking AMPA receptors (AMPARs) (23, 24). They are functionally dormant but can evolve into fully transmitting synapses by experience-dependent insertion of AMPARs, a plasticity process thought to occur frequently in developing cortices (10). Although they appear to be the ideal synaptic substrate for CP plasticity and their maturation correlates with sensory experience (10, 25), it has not been experimentally tested whether maturation of silent synapses indeed causes the termination of critical periods. This conceptual model contrasts with the current view that increased local inhibition and the expression of plasticity brakes end critical periods (18–20, 26). We hypothesize that experience-dependent unsilencing of silent synapses, which results in strengthening and maturation of excitatory synapses, governs network stabilization and refinement during critical periods, and that the progressive decrease of silent synapses leads to the closure of critical periods.

Experience-dependent cortical plasticity is classically tested with ocular dominance (OD) plasticity (ODP) in the primary visual cortex (V1), induced by monocular deprivation (MD). In the binocular region of mouse V1, neurons respond to sensory inputs from both eyes, but activity is dominated by afferents from the contralateral eye. During the critical period, a brief MD induces an OD shift of visually evoked responses in V1 toward the open eye (27–29). This juvenile ODP is mediated by a reduction of deprived-eye responses in V1 and is temporally confined to a critical period (30, 31).

A molecular candidate regulating the cellular basis of critical period plasticity is postsynaptic density protein-95 (PSD-95), whose expression in the visual cortex increases on eye opening and thus with the onset of visual experience (32).
PSD-95 promotes the maturation of AMPA receptor-silent excitatory synapses in hippocampal neurons and is required for activity-driven synapse stabilization (33–35). In juvenile PSD-95 KO mice, ODP displays the same features as in WT mice (36). However, as adult PSD-95 KO mice have not yet been analyzed, it is unknown whether PSD-95 is essential for the closure of critical periods. Thus, PSD-95 appeared to be the ideal molecular candidate to test our conceptual model that progressive silent synapse maturation marks the closure of critical periods.

17.
Inferotemporal (IT) cortex in humans and other primates is topographically organized, containing multiple hierarchically organized areas selective for particular domains, such as faces and scenes. This organization is commonly viewed in terms of evolved domain-specific visual mechanisms. Here, we develop an alternative, domain-general and developmental account of IT cortical organization. The account is instantiated in interactive topographic networks (ITNs), a class of computational models in which a hierarchy of model IT areas, subject to biologically plausible connectivity-based constraints, learns high-level visual representations optimized for multiple domains. We find that minimizing a wiring cost on spatially organized feedforward and lateral connections, alongside realistic constraints on the sign of neuronal connectivity within model IT, results in a hierarchical, topographic organization. This organization replicates a number of key properties of primate IT cortex, including the presence of domain-selective spatial clusters preferentially involved in the representation of faces, objects, and scenes; columnar responses across separate excitatory and inhibitory units; and generic spatial organization whereby the response correlation of pairs of units falls off with their distance. We thus argue that topographic domain selectivity is an emergent property of a visual system optimized to maximize behavioral performance under generic connectivity-based constraints.

Inferotemporal (IT) cortex subserves higher-order visual abilities in primates, including the visual recognition of objects and faces. By adulthood in humans, IT cortex, and ventral temporal cortex more generally, contains substantial functional topographic organization, including the presence of domain-selective spatial clusters in reliable spatial locations, with clusters for faces (1–3), objects (4), buildings and scenes (5, 6), and words (7). Similar domain-level topographic properties have been found in rhesus macaque monkeys, including multiple regions of clustered face selectivity (8–10). Intriguingly, this selectivity is encompassed in a larger-scale “mosaic” of category selectivity, in which areas of category selectivity themselves have further columnar clustering within them (11–13), and moreover, category selectivity appears to exist as clusters within general dimensions of object space (14) spatially organized to smoothly map neuronal correlations over space (15), pointing to more general principles of organization beyond the domain level. In line with this idea, human IT cortex also exhibits larger-scale organization for properties such as animacy and real-world size (16, 17), and midlevel features characteristic of these properties and domains have been shown to account well for patterns of high-level visual selectivity (18). How these domain-level and more general facets of functional organization arise, how they are related, and whether and in what ways they rely on innate specification and/or experience-based developmental processes remain contentious.

Recent work has demonstrated that the neural basis of face recognition depends crucially on experience, given that deprivation of face viewing in juvenile macaque monkeys prevents the emergence of face-selective regions (19). Relatedly, the absence of exposure to written forms through reading acquisition precludes the emergence of word-selective regions (20, 21).
That there exists clustered neural response selectivity for evolutionarily new visual categories such as written words offers further evidence that the topographic development of the human visual system has a critical experience-dependent component (22, 23). In contrast with a system in which innate mechanisms are determined through natural selection, this experiential plasticity permits the tuning of the visual system based on the most frequent and important visual stimuli that are actually encountered, thereby enabling greater flexibility for ongoing adaptation across the lifespan.

There is considerable computational evidence that experience-dependent neural plasticity can account for the response properties of the visual system at the single-neuron level. Classic work demonstrated that the statistics of natural images are sufficient for learning V1-like localized edge tuning within a sparse coding framework (24, 25). More recently, deep convolutional neural networks (DCNNs) trained on image classification have been successful in accounting for the tuning of neurons in V1, V2, V4, and IT in a hierarchically consistent manner, where deeper layers of the DCNN map onto later layers of the anatomical hierarchy (26, 27).

Above the single-neuron level, considerable prior work has demonstrated that topographic organization in V1 may emerge from self-organizing, input-driven mechanisms (28–34) (for review, see ref. 35).
For example, the pinwheel architecture of spatially repeating smooth orientation selectivity overlaid with global retinotopy has been shown to be well accounted for by self-organizing maps (SOMs) (31, 32, 36).

One notable application of an SOM to modeling high-level visual cortex by Cowell and Cottrell (37) demonstrated stronger topographic clustering for faces compared to other object categories (e.g., chairs, shoes), suggesting that the greater topographic clustering of faces in IT is due to greater within-category similarity among faces compared to these other categories. This work provides a strong case for domain-general developmental principles underlying cortical topography in IT, but at least two important issues remain unaddressed. First, rather than supporting only discrimination of face from nonface categories (as in ref. 37), face representations in humans (and likely nonhuman primates, although see ref. 38) must support the more difficult and fine-grained task of individuation; this task requires a “spreading transformation” of representations for different face identities (39, 40), which could alter the feature space and its topographic mapping and necessitate a more domain-specialized representation than that examined by ref. 37. Second, rather than a single face-selective area, IT cortex actually contains multiple hierarchically organized face-selective regions with preferential interconnectivity (41). Generally, SOMs are not well equipped to explain such hierarchical topographic interactions, as they are designed to map a feature space into a topographic embedding, but not to transform the feature space hierarchically in the way needed to untangle invariant visual object representation from the statistics of natural images (42).
This suggests that SOMs are an incomplete model of topographic development in cortical networks.

An alternative approach to studying topographic organization involves incorporating distance-dependent constraints on neural computation within more general neural network models (43–46). Of particular interest is a hierarchical neural network developed by Jacobs and Jordan (45) in which error-driven learning was augmented with a spatial loss function penalizing large weights to a greater degree on longer versus shorter connections. This model was shown to develop topographic organization for “what” versus “where” information when trained with spatially segregated output units for the two tasks. Closely related work by Plaut and Behrmann (47) demonstrated that a similarly spatially constrained model with biased demands on input (e.g., retinotopy) and output (e.g., left-lateralized language) could account for the organization of domain-specific areas in IT cortex, such as the foveal bias for words and faces, leftward lateralization of words, and rightward lateralization of faces (48–50).

However, to date, none of these structurally biased neural network models have been applied to large-scale sets of naturalistic images, the statistics of which are thought to organize high-level visual representations in IT cortex (51), and the topography in these models (45, 47) has been analyzed at a relatively coarse level. Nonetheless, this early work raises the possibility that the application of distance-dependent constraints in a deep neural architecture trained on natural images might provide a more comprehensive account of topographic organization in IT.

Along these lines, Lee et al.
(15) have recently modeled the topography of IT cortex with topographic deep artificial neural networks (TDANNs) that are trained on a large set of natural images using a correlation-based layout that explicitly encourages units within a layer of the network to be spatially nearer to units with correlated responses and farther from units with uncorrelated or anticorrelated responses. As a result, the TDANN developed face-selective topography that corresponded well with data from macaque monkeys. However, this approach imposes topographic functional organization on the network based on measured functional responses, rather than deriving it from realistic principles of cortical structure and function, such as constraints on connectivity. Moreover, like the SOM, the TDANN can explain only within-area topographic organization and not spatial relationships between areas, such as the stream-like organization of multiple stages of IT cortex (3, 52) and their embedding in a network coupled with upstream and downstream cortical areas (48). Thus, the question remains whether such basic structural principles can account for the topographic organization of IT.

In the current work, we combined the approaches of task-optimized DCNN modeling (15, 51) with flexible connectivity-constrained architectures (45, 47) to develop a hierarchical model of topographic organization in IT cortex. We implemented a bias toward local connectivity through minimization of an explicit wiring cost function (45) alongside a task performance cost function. Intriguingly, we observed that this pressure toward local connectivity was, on its own, insufficient to drive substantial topographic organization in our model.
This led us to explore two neurobiological constraints on the sign of connectivity—strictly excitatory feedforward connectivity and the separation of excitation and inhibition—with the result that both constraints, and particularly excitatory feedforward connectivity, provided a powerful further inductive bias for developing topographic organization when combined with a bias toward local connectivity. Our results begin to shed light on the factors underlying hierarchical topographic organization in the primate visual system.
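The wiring cost referred to above can be read as a penalty on each connection proportional to both its weight magnitude and the distance it spans on the simulated cortical sheet, added to the task loss during training. A schematic numpy version, with illustrative function and variable names that are not taken from the paper:

```python
import numpy as np

def wiring_cost(weights, coords_pre, coords_post, lam=0.1):
    """Connection penalty scaling with weight magnitude and with the
    distance between pre- and postsynaptic unit positions on a 2D sheet,
    discouraging strong long-range connections (cf. Jacobs & Jordan)."""
    # Pairwise Euclidean distances between unit positions: shape (n_post, n_pre)
    d = np.linalg.norm(coords_post[:, None, :] - coords_pre[None, :, :], axis=-1)
    return lam * np.sum(d * np.abs(weights))

rng = np.random.default_rng(1)
pre = rng.uniform(0, 1, size=(8, 2))    # 2D positions of 8 input units
post = rng.uniform(0, 1, size=(4, 2))   # 2D positions of 4 output units
w = rng.standard_normal((4, 8))         # feedforward weight matrix
cost = wiring_cost(w, pre, post)
assert cost > 0
```

Minimizing such a term jointly with task performance favors strong connections between nearby units, which is the locality bias the model combines with sign constraints on connectivity.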

18.
As we comprehend narratives, our attentional engagement fluctuates over time. Despite theoretical conceptions of narrative engagement as emotion-laden attention, little empirical work has characterized the cognitive and neural processes that comprise subjective engagement in naturalistic contexts or its consequences for memory. Here, we relate fluctuations in narrative engagement to patterns of brain coactivation and test whether neural signatures of engagement predict subsequent memory. In behavioral studies, participants continuously rated how engaged they were as they watched a television episode or listened to a story. Self-reported engagement was synchronized across individuals and driven by the emotional content of the narratives. In functional MRI datasets collected as different individuals watched the same show or listened to the same story, engagement drove neural synchrony, such that default mode network activity was more synchronized across individuals during more engaging moments of the narratives. Furthermore, models based on time-varying functional brain connectivity predicted evolving states of engagement across participants and independent datasets. The functional connections that predicted engagement overlapped with a validated neuromarker of sustained attention and predicted recall of narrative events. Together, our findings characterize the neural signatures of attentional engagement in naturalistic contexts and elucidate relationships among narrative engagement, sustained attention, and event memory.

We engage with the world and construct memories by attending to information in our external environment (1). However, the degree to which we pay attention waxes and wanes over time (2, 3). Such fluctuations of attention not only influence our ongoing perceptual experience but can also have consequences for what we later remember (4).

Changes in attentional states are typically studied with continuous performance tasks (CPTs), which require participants to respond to rare targets in a constant stream of stimuli or respond to every presented stimulus except the rare target (5–7). Paying attention to taxing CPTs, however, often feels different from paying attention in other everyday situations. For example, when we listen to the radio, watch a television show, or have a conversation with family and friends, sustaining focus can feel comparatively effortless. Psychology research has characterized feelings of effortless attention in other contexts, such as flow states of complete absorption in an activity (8). When comprehending narratives, our attention may be naturally captured by the story, causing us to become engaged in the experience. Narrative engagement has been defined as an experience of being deeply immersed in a story with heightened emotional arousal and attentional focus (9, 10). Building on this theoretical definition, we characterize how subjective engagement fluctuates as narratives unfold and test the hypothesis that engagement scales with a story’s emotional content as well as an individual’s sustained attentional state.

Functional neuroimaging studies have used naturalistic, narrative stimuli to examine how we perceive (11, 12) and remember (13) structured events based on memory of contexts (14–16), prior knowledge or beliefs (17, 18), and emotional and social reasoning (19–21). However, strikingly few neuroimaging studies have directly probed attention during naturalistic paradigms. Among these, Regev et al.
(22) examined how selectively attending to narrative inputs from a particular sensory modality (e.g., auditory) while suppressing the other modality (e.g., visual) enhances stimulus-locked brain responses to the attended inputs. With nonnarrative movies, Çukur et al. (23) found that semantic representations were warped toward attended object categories during visual search. However, both of these studies relied on experimental manipulations of attention to elucidate its relationship to brain activity. Work has also reported that intersubject synchrony of functional MRI (fMRI) and electroencephalography (EEG) activity, a measure of neural reliability, relates to heightened attention to stimuli (24–28) and subsequent memory (29, 30). However, the neural signatures of dynamically changing attentional states in naturalistic contexts and their consequences for event memory remain poorly understood.

Previous work suggests that goal-directed focus and attentional control rely on activity in frontoparietal cortical regions comprising a large-scale attention network (31, 32). Regions of the default mode network (DMN) are thought to activate antagonistically to the attention network (33), showing increased activity during off-task thought and mind wandering while deactivating during external attention (34, 35). Interestingly, however, other work provides seemingly counterintuitive evidence that DMN activity characterizes moments of stable and optimal attentional task performance (7, 36–38) and is centrally involved in narrative comprehension and memory (11–13, 39, 40).
Given conceptualizations of frontoparietal regions as task positive and DMN regions as task negative in certain contexts—but DMN as task positive in others—what roles do these networks play when our attention waxes and wanes during real-world narratives?Beyond the canonical DMN and frontoparietal networks, studies have shown that synchrony between a widely distributed set of brain regions reflects changes in attentional states within and across individuals (41, 42). Recent literature showed that functional connectivity (FC), a statistical measure of neural synchrony between pairwise brain regions, predicts attentional state changes during task performance (43, 44). Since these studies characterized attentional states in controlled task conditions, we further ask whether naturalistic attentional engagement is reflected in functional brain connectivity.The current study characterizes attentional states in real-world settings by tracking subjective engagement during movie watching and story listening. In doing so, we address three primary aims: testing the theoretical conception of engagement as emotion-laden attention, examining how engagement is reflected in large-scale brain dynamics, and elucidating the consequences of engagement during encoding for subsequent memory. We first measured self-reported engagement as behavioral participants watched an episode of television series Sherlock or listened to an audio-narrated story, Paranoia. Providing empirical support for its theoretical definition, changes in engagement were driven by the emotional contents of narratives and related to fluctuations of a validated FC index of sustained attention during psychological tasks (43). We next related group-average engagement time courses to fMRI activity observed as a separate pool of participants watched Sherlock (13) or listened to Paranoia (18). 
Dynamic intersubject correlation analysis (ISC) (45) revealed that activity in large-scale functional networks, especially the DMN, was more synchronized across individuals during more engaging periods of the narratives. Furthermore, using time-resolved predictive modeling (46), we found that patterns of time-resolved FC predicted engagement dynamics and that these same patterns predicted later event recall. Thus, we provide evidence for engagement as emotion-laden sustained attention, elucidate the role of brain network dynamics in engagement, and demonstrate relationships between engagement and episodic memory.  相似文献   
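The dynamic ISC measure described above correlates each participant's regional time course with the average of all other participants inside a sliding window. A minimal sketch of that idea (the window length, toy data, and leave-one-out averaging scheme are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def dynamic_isc(data, win=30):
    """Sliding-window, leave-one-out intersubject correlation.

    data : array of shape (n_subjects, n_timepoints) for one brain region.
    Returns one ISC value per window (mean over left-out subjects).
    """
    n_subj, n_tp = data.shape
    iscs = []
    for start in range(n_tp - win + 1):
        seg = data[:, start:start + win]
        vals = []
        for s in range(n_subj):
            left_out = seg[s]
            # Average time course of all *other* subjects in this window
            others = seg[np.arange(n_subj) != s].mean(axis=0)
            vals.append(np.corrcoef(left_out, others)[0, 1])
        iscs.append(np.mean(vals))
    return np.array(iscs)

# Toy demonstration: a shared stimulus-driven signal plus subject noise
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 8 * np.pi, 200))
data = shared + 0.5 * rng.standard_normal((10, 200))
isc = dynamic_isc(data, win=30)  # one value per 30-timepoint window
```

Periods in which the shared signal dominates subject-specific noise yield higher windowed ISC, which is the sense in which engaging narrative moments drive synchronized activity across viewers.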

19.
The dendrites of neocortical pyramidal neurons are excitable. However, it is unknown how synaptic inputs engage nonlinear dendritic mechanisms during sensory processing in vivo, and how these mechanisms in turn influence action potential output. Here, we provide a quantitative account of the relationship between synaptic inputs, nonlinear dendritic events, and action potential output. We developed a detailed pyramidal neuron model constrained by in vivo dendritic recordings and drove it with realistic input patterns constrained by sensory responses measured in vivo and connectivity measured in vitro. We show mechanistically that, under realistic conditions, dendritic Na+ and NMDA spikes are the major determinants of neuronal output in vivo. We demonstrate that these dendritic spikes can be triggered by a surprisingly small number of strong synaptic inputs, in some cases even by single synapses. We predict that dendritic excitability allows the 1% strongest synaptic inputs of a neuron to control the tuning of its output. Active dendrites therefore allow small subcircuits consisting of only a few strongly connected neurons to achieve selectivity for specific sensory features.

There is longstanding evidence from in vitro experiments that dendrites of mammalian neurons are electrically excitable (1, 2), and theoretical work has demonstrated that these active properties can be exploited for computations, so that single neurons can perform functions that could otherwise only be performed by a network (3–7). Recently, technical breakthroughs have enabled dendritic integration to be studied in vivo using both imaging and electrophysiological techniques (8, 9). These experiments have revealed that the integration of synaptic events in vivo can be highly nonlinear and that this process influences the response properties of single neurons and neuronal populations (10–16). For example, patch-clamp recordings from dendrites in mouse primary visual cortex (V1) have demonstrated that dendritic spikes are triggered by visual input and that they may contribute to the orientation selectivity of the somatic membrane potential (17).

However, important mechanistic questions remain unanswered. How many synaptic inputs must be locally coactive on a dendrite to recruit dendritic spikes? What is the contribution of individual dendritic spikes to somatic action potential (AP) output and its orientation selectivity? How do the answers to these questions depend on the type of dendritic spike? Finally, how do active dendrites, by supporting dendritic spikes, influence which synaptic inputs control AP output and its tuning?

These issues are extremely challenging to address experimentally. We have therefore taken a modeling approach, constrained by in vitro and in vivo experimental data, in order to provide a quantitative understanding of the relationship between synaptic input, dendritic spikes, and AP output during sensory processing in V1. We constructed a detailed active model of a layer (L) 2/3 pyramidal neuron in mouse V1 and combined it with a model of the presynaptic inputs it receives during visual stimulation with drifting gratings in vivo (17–21).

Our model reproduces key features of the experimental data on dendritic and somatic responses to visual stimulation observed in vivo and allows us to identify the synaptic inputs that trigger dendritic Na+ spikes and NMDA spikes. We also provide a quantitative explanation of how these dendritic spikes determine neuronal output in vivo. Our results show that dendritic spikes can be triggered by a surprisingly small number of synaptic inputs, in some cases even by single synapses. We also find that during sensory processing, even a few dendritic spikes are effective at driving somatic output. Overall, this strategy allows a remarkably small number of strong synaptic inputs to dominate neural output, which may reduce the number of neurons required to represent a given sensory feature.
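The central claim, that a handful of strong synapses can gate an all-or-none dendritic event, can be illustrated with a toy two-stage neuron in which a dendritic branch applies a sigmoidal, NMDA-spike-like nonlinearity to its summed synaptic drive. This is a schematic stand-in for the detailed compartmental model described above; the weights, threshold, and gain are invented for illustration only:

```python
import numpy as np

def branch_output(inputs, weights, threshold=2.0, gain=8.0):
    """Sigmoidal dendritic nonlinearity: local drive above `threshold`
    produces a large, regenerative (NMDA-spike-like) branch response,
    while subthreshold drive yields almost nothing."""
    drive = np.dot(inputs, weights)
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))

# One strong synapse (w = 2.5) vs. ten weak synapses (w = 0.15 each):
# total weak drive (1.5) stays below the branch threshold, so the
# single strong input dominates the branch's output.
strong = branch_output(np.array([1.0]), np.array([2.5]))
weak = branch_output(np.ones(10), np.full(10, 0.15))
print(round(strong, 3), round(weak, 3))  # → 0.982 0.018
```

The design point this sketch captures is that a thresholded branch nonlinearity makes output selectivity inherit the tuning of the few inputs strong enough to cross threshold, rather than the average tuning of the full synaptic population.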

20.
Attention alters perception across the visual field. Typically, endogenous (voluntary) and exogenous (involuntary) attention similarly improve performance in many visual tasks, but they have differential effects in some tasks. Extant models of visual attention assume that the effects of these two types of attention are identical and consequently do not explain differences between them. Here, we develop a model of spatial resolution and attention that distinguishes between endogenous and exogenous attention. We focus on texture-based segmentation as a model system because it has revealed a clear dissociation between both attention types. For a texture for which performance peaks at parafoveal locations, endogenous attention improves performance across eccentricity, whereas exogenous attention improves performance where the resolution is low (peripheral locations) but impairs it where the resolution is high (foveal locations) for the scale of the texture. Our model emulates sensory encoding to segment figures from their background and predict behavioral performance. To explain attentional effects, endogenous and exogenous attention require separate operating regimes across visual detail (spatial frequency). Our model reproduces behavioral performance across several experiments and simultaneously resolves three unexplained phenomena: 1) the parafoveal advantage in segmentation, 2) the uniform improvements across eccentricity by endogenous attention, and 3) the peripheral improvements and foveal impairments by exogenous attention. Overall, we unveil a computational dissociation between each attention type and provide a generalizable framework for predicting their effects on perception across the visual field.

Endogenous and exogenous spatial attention prioritize subsets of visual information and facilitate their processing without concurrent eye movements (1–3). Selection by endogenous attention is goal-driven and adapts to task demands, whereas exogenous attention transiently and automatically orients to salient stimuli (1–3). In most visual tasks, both types of attention typically improve visual perception similarly [e.g., acuity (4–6), visual search (7, 8), perceived contrast (9–11)]. Consequently, models of visual attention do not distinguish between endogenous and exogenous attention (e.g., refs. 12–19). However, stark differences also exist. Each attention type differentially modulates neural responses (20, 21) and fundamental properties of visual processing, including temporal resolution (22, 23), texture sensitivity (24), sensory tuning (25), contrast sensitivity (26), and spatial resolution (27–34).

The effects of endogenous and exogenous attention are dissociable during texture segmentation, a visual task constrained by spatial resolution [reviews (1–3)]. Whereas endogenous attention optimizes spatial resolution to improve the detection of an attended texture (32–34), exogenous attention reflexively enhances resolution even when detrimental to perception (27–31, 34). Extant models of attention do not explain these well-established effects.

Two main hypotheses have been proposed to explain how attention alters spatial resolution. Psychophysical studies ascribe attentional effects to modulations of spatial frequency (SF) sensitivity (30, 33). Neurophysiological (1–3, 35, 36) and neuroimaging (37, 38) studies bolster the idea that attention modifies spatial profiles of neural receptive fields (RFs) (2). Both hypotheses provide qualitative predictions of attentional effects but do not specify their underlying neural computations.

Differences between endogenous and exogenous attention are well established in segmentation tasks and thus provide an ideal model system for uncovering their separate roles in altering perception. Texture-based segmentation is a fundamental process of midlevel vision that isolates regions of local structure to extract figures from their background (39–41). Successful segmentation hinges on the overlap between the visual system's spatial resolution and the levels of detail (i.e., SF) encompassed by the texture (39, 41, 42). Consequently, the ability to distinguish between adjacent textures varies as resolution declines toward the periphery (43–46). Each attention type differentially alters texture segmentation, demonstrating that their effects shape spatial resolution [reviews (1–3)].

Current models of texture segmentation do not explain performance across eccentricity and the distinct modulations by attention. Conventional models treat segmentation as a feedforward process that encodes the elementary features of an image (e.g., SF and orientation), transforms them to reflect the local structure (e.g., regions of similarly oriented bars), and then pools across space to emphasize texture-defined contours (39, 41, 47). Few of these models account for variations in resolution across eccentricity (46, 48, 49) or for endogenous (but not exogenous) attentional modulations (18, 50). All others postulate that segmentation is a "preattentive" (42) operation whose underlying neural processing is impervious to attention (39, 41, 46–49).

Here, we develop a computational model in which feedforward processing and attentional gain jointly contribute to segmentation performance. We augment a conventional model of texture processing (39, 41, 47). Our model varies with eccentricity and includes contextual modulation within local regions of the stimulus via normalization (51), a canonical neural computation (52). The defining characteristic of normalization is that an individual neuron is (divisively) suppressed by the summed activity of neighboring neurons responsive to different aspects of a stimulus. We model attention as multiplicative gains [attentional gain factors (15)] that vary with eccentricity and SF. Attention shifts sensitivity toward fine or coarse spatial scales depending on the range of SFs enhanced.

Our model is image-computable, which allowed us to reproduce behavior directly from the grayscale images used in psychophysical experiments (6, 26, 27, 29–33). The model explains three signatures of texture segmentation hitherto unexplained within a single computational framework (Fig. 1): 1) the central performance drop (CPD) (27–34, 43–46) (Fig. 1A), that is, the parafoveal advantage of segmentation over the fovea; 2) the improvements in the periphery and impairments at foveal locations induced by exogenous attention (27–32, 34) (Fig. 1B); and 3) the equivalent improvements across eccentricity by endogenous attention (32–34) (Fig. 1C).

Fig. 1. Signatures of texture segmentation. (A) CPD: the shaded region depicts the magnitude of the CPD; identical axis labels are omitted in B and C. (B) Exogenous attention modulation: exogenous attention improves segmentation performance in the periphery and impairs it near the fovea. (C) Endogenous attention modulation: endogenous attention improves segmentation performance across eccentricity.

Whereas our analyses focus on texture segmentation, our model is general and can be applied to other visual phenomena. We show that the model predicts the effects of attention on contrast sensitivity and acuity, that is, in tasks in which endogenous and exogenous attention have similar or differential effects on performance. To preview our results, model comparisons revealed that normalization is necessary to elicit the CPD and that separate profiles of gain enhancement across SF (26) generate the effects of exogenous and endogenous attention on texture segmentation. A preferential high-SF enhancement reproduces the impairments by exogenous attention, owing to a shift in visual sensitivity toward details too fine to distinguish the target at foveal locations. The transition from impairments to improvements in the periphery results from exogenous attentional gain gradually shifting to lower SFs that are more amenable to target detection. Improvements by endogenous attention result from a uniform enhancement of the SFs that encompass the target, optimizing visual sensitivity for the attended stimulus across eccentricity.
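The model's core computation, divisive normalization combined with SF-dependent attentional gain, can be sketched in a few lines. The SF channels, gain profiles, and semisaturation constant below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def normalize(drive, attn_gain, sigma=0.1):
    """Divisive normalization with multiplicative attentional gain.

    drive     : feedforward responses of SF-tuned channels.
    attn_gain : attentional gain factor applied to each channel.
    Each channel is divided by the summed (gain-modulated) activity
    of all channels plus a semisaturation constant `sigma`.
    """
    excitatory = attn_gain * drive
    suppressive = excitatory.sum() + sigma
    return excitatory / suppressive

sfs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # channel SFs in c/deg (illustrative)
drive = np.exp(-np.log2(sfs / 2.0) ** 2)     # stimulus drive peaked at 2 c/deg

neutral = normalize(drive, np.ones_like(sfs))
# Exogenous-like gain: preferential enhancement of high SFs
exo = normalize(drive, np.array([1.0, 1.0, 1.2, 1.6, 2.0]))
# Endogenous-like gain: uniform enhancement across SFs
endo = normalize(drive, np.full_like(sfs, 1.5))
```

In this sketch the exogenous-like profile shifts the balance of the normalized response toward high-SF channels (too fine for a coarse target at the fovea), whereas the uniform endogenous-like gain boosts all channels while nearly preserving the shape of the response profile, mirroring the dissociation the model is built to explain.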
