Similar Literature
20 similar documents found.
1.
The link between mind, brain, and behavior has mystified philosophers and scientists for millennia. Recent progress has been made by forming statistical associations between manifest variables of the brain (e.g., electroencephalogram [EEG], functional MRI [fMRI]) and manifest variables of behavior (e.g., response times, accuracy) through hierarchical latent variable models. Within this framework, one can make inferences about the mind in a statistically principled way, such that complex patterns of brain–behavior associations drive the inference procedure. However, previous approaches were limited in the flexibility of the linking function, which has proved prohibitive for understanding the complex dynamics exhibited by the brain. In this article, we propose a data-driven, nonparametric approach that allows complex linking functions to emerge from fitting a hierarchical latent representation of the mind to multivariate, multimodal data. Furthermore, to enforce biological plausibility, we impose both spatial and temporal structure so that the types of realizable system dynamics are constrained. To illustrate the benefits of our approach, we investigate the model’s performance in a simulation study and apply it to experimental data. In the simulation study, we verify that the model can be accurately fitted to simulated data, and latent dynamics can be well recovered. In an experimental application, we simultaneously fit the model to fMRI and behavioral data from a continuous motion tracking task. We show that the model accurately recovers both neural and behavioral data and reveals interesting latent cognitive dynamics, the topology of which can be contrasted with several aspects of the experiment.

2.
As sensory stimuli and behavioral demands change, the attentive brain quickly identifies task-relevant stimuli and associates them with appropriate motor responses. The effects of attention on sensory processing vary across task paradigms, suggesting that the brain may use multiple strategies and mechanisms to highlight attended stimuli and link them to motor action. To better understand factors that contribute to these variable effects, we studied sensory representations in primary auditory cortex (A1) during two instrumental tasks that shared the same auditory discrimination but required different behavioral responses, either approach or avoidance. In the approach task, ferrets were rewarded for licking a spout when they heard a target tone amid a sequence of reference noise sounds. In the avoidance task, they were punished unless they inhibited licking to the target. To explore how these changes in task reward structure influenced attention-driven rapid plasticity in A1, we measured changes in sensory neural responses during behavior. Responses to the target changed selectively during both tasks but did so with opposite sign. Despite the differences in sign, both effects were consistent with a general neural coding strategy that maximizes discriminability between sound classes. The dependence of the direction of plasticity on task suggests that representations in A1 change not only to sharpen representations of task-relevant stimuli but also to amplify responses to stimuli that signal aversive outcomes and lead to behavioral inhibition. Thus, top-down control of sensory processing can be shaped by task reward structure in addition to the required sensory discrimination.

3.
How to compute initially unknown reward values is one of the key problems in reinforcement learning theory, and two basic approaches are used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect, beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest that human decision making is rational and model-based, and not consistent with model-free learning.
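The contrast between a max decision rule and probability matching can be illustrated with a toy simulation. This is not the authors' Bayesian model; the payoff probability of 0.7 and the trial count are arbitrary illustrative choices:

```python
import random

def hit_rate(choose_a_prob, p=0.7, n=200_000, seed=0):
    """Simulate a two-option task: option A is rewarded with probability
    p, option B otherwise. The agent picks A with probability
    choose_a_prob; return the fraction of rewarded choices."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        rewarded_a = rng.random() < p       # which option pays this trial
        chose_a = rng.random() < choose_a_prob
        if chose_a == rewarded_a:
            hits += 1
    return hits / n

# Max decision rule: always choose the more probable option -> reward ~ p.
acc_max = hit_rate(1.0)
# Probability matching: choose A with probability p -> ~ p**2 + (1 - p)**2.
acc_match = hit_rate(0.7)
```

Matching earns roughly 0.58 of the rewards where maximizing earns 0.70, which is why matching counts as suboptimal under outcome statistics alone.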

4.
Decisions are based on the subjective values of choice options. However, subjective value is a theoretical construct and not directly observable. Strikingly, distinct theoretical models competing to explain how subjective values are assigned to choice options often make very similar behavioral predictions, which poses a major difficulty for establishing a mechanistic, biologically plausible explanation of decision-making based on behavior alone. Here, we demonstrate that model comparison at the neural level provides insights into model implementation during subjective value computation even though the distinct models parametrically identify common brain regions as computing subjective value. We show that frontal cortical regions implement a model based on the statistical distributions of available rewards, whereas intraparietal cortex and striatum compute subjective value signals according to a model based on distortions in the representations of probabilities. Thus, better mechanistic understanding of how cognitive processes are implemented arises from model comparisons at the neural level, over and above the traditional approach of comparing models at the behavioral level alone.

Psychology, economics, and other social sciences develop competing models and theories to explain how we make choices. However, the capacity to select the one model that explains behavior best is limited by the fact that different models often make similar predictions about behavior. Therefore, it can be difficult to distinguish between models based on behavioral data alone. To illustrate, one landmark study applied 11 different models of subjective valuation to participants’ choices between different lotteries in an attempt to identify the model that best represented each participant’s preferences (1). Even though different models best explained behavior for different individuals, on average the models made the same predictions on over 90% of decisions across two experimental datasets, suggesting that the prediction similarity of competing models is pervasive. The prediction similarity is particularly striking given that the models make vastly different assumptions about underlying processes (see last paragraph of the Introduction). Moreover, these difficulties of model selection are not limited to the realm of value-based decision-making but emerge in various areas of behavioral research on, for example, learning (2), memory (3), and perception (4). Informing the model selection process by neuroscientific data is one possible solution for this problem.

Computational and decision neurosciences aim to characterize the neural mechanisms implementing cognitive functions and test whether existing behavioral models accurately describe neural processes. Typically, competing models are fitted to behavioral data and their likelihoods or amounts of explained variance are compared. The winning model is then used to generate estimates of unobservable, i.e., latent, variables (e.g., subjective values or prediction errors), and activity correlating with these variables at the neural level is used to conclude that the brain implements that model (5, 6).
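The likelihood-based comparison just described is often summarized with an information criterion. A minimal sketch follows; the likelihood values and parameter counts are hypothetical, chosen only to show how a small likelihood advantage can be erased by a complexity penalty:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better. Extra free
    parameters are penalized more heavily as the dataset grows."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of two competing models to the same 200 choices:
# model B fits slightly better but uses twice as many parameters.
bic_a = bic(log_likelihood=-120.0, n_params=2, n_obs=200)
bic_b = bic(log_likelihood=-119.0, n_params=4, n_obs=200)
# Model A wins on BIC despite the lower raw likelihood.
```

When the winning margin is this thin across participants, behavioral model selection becomes fragile, which motivates the move to model comparison at the neural level.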
However, the insights that can be gained with this approach are severely limited if different models predict similar behavior or correlated latent variables at the neural level.

Here, we addressed this problem by testing whether brain activity can be directly used to compare and select between competing theories with similar behavioral predictions. That is, even if competing theories make similar behavioral predictions and identify the same brain regions, are the neural computations in these regions differentially captured by the different theories? By asking this question and performing model comparison at the neural level, we deviate from the standard practice in model-based neuroscience of first fitting different models to behavioral data and then using the behaviorally best-fitting model to analyze the neural data. This procedure, if successful, would also be applicable to other areas of behavioral, cognitive, and clinical neuroscience, which use computational models to explain phenomena such as reinforcement learning, perceptual decision-making, and psychiatric disorders.

We performed model comparison in the context of value-based decision-making and scanned participants while they chose between lotteries with a wide range of magnitudes and probabilities. These lotteries were specifically designed to differentiate between different models of choice preference (7). We compared three major decision theories (expected utility [EU] theory, prospect theory [PT], and the mean-variance-skewness [MVS] model), which are all consistent with the idea that risky decisions are based on assigning subjective values to risky choice alternatives (8–11). EU essentially proposes that choice preferences can be represented by summing up probability-weighted subjective values (utilities) of outcomes, using objective probability (see Experimental Procedures for details of all three models).
Like EU, PT also employs a mechanism of weighting subjective values of outcomes with probabilities. It additionally assumes that probability is processed subjectively (typically overweighting small and underweighting large probabilities), that outcomes become gains or losses in relation to a reference point, and that losses weigh more heavily than gains of equal absolute size. In contrast, the MVS model suggests that choice preferences can be represented by a linear combination of individually weighted summary statistics of outcome distributions (12–16). Thus, computations of subjective value and implied processes differ between these models. At the formal level, they are either nested or equivalent under specific conditions (17). Moreover, all of the models have been successfully used to explain behavior (18), and attempts to adjudicate between models based on behavior alone yielded conflicting results (1, 19).
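The three valuation rules can be sketched as follows. The functional forms are standard textbook choices, and the parameter values (α, λ, γ and the MVS weights) are illustrative stand-ins, not the values fitted in the study:

```python
def eu(outcomes, probs, alpha=0.8):
    """Expected utility: objective-probability-weighted sum of
    utilities, here with a power utility u(x) = x**alpha."""
    return sum(p * x ** alpha for x, p in zip(outcomes, probs))

def pt(outcomes, probs, alpha=0.8, lam=2.25, gamma=0.61):
    """Prospect theory: subjective probability weighting w(p), gains and
    losses coded relative to a zero reference point, loss aversion lam."""
    def w(p):  # overweights small p, underweights large p
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    def v(x):  # losses loom larger than equal-sized gains
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha
    return sum(w(p) * v(x) for x, p in zip(outcomes, probs))

def mvs(outcomes, probs, w_var=-0.1, w_skew=0.01):
    """Mean-variance-skewness: weighted linear combination of summary
    statistics (central moments) of the outcome distribution."""
    m = sum(p * x for x, p in zip(outcomes, probs))
    var = sum(p * (x - m) ** 2 for x, p in zip(outcomes, probs))
    skew = sum(p * (x - m) ** 3 for x, p in zip(outcomes, probs))
    return m + w_var * var + w_skew * skew
```

Despite the very different internal computations, all three assign a single subjective value to each lottery, which is why their choice predictions so often coincide.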

5.
Contrary to the widespread belief that people are positively motivated by reward incentives, some studies have shown that performance-based extrinsic reward can actually undermine a person's intrinsic motivation to engage in a task. This "undermining effect" has timely practical implications, given the burgeoning of performance-based incentive systems in contemporary society. It also presents a theoretical challenge for economic and reinforcement learning theories, which tend to assume that monetary incentives monotonically increase motivation. Despite the practical and theoretical importance of this provocative phenomenon, however, little is known about its neural basis. Herein we induced the behavioral undermining effect using a newly developed task, and we tracked its neural correlates using functional MRI. Our results show that performance-based monetary reward indeed undermines intrinsic motivation, as assessed by the number of voluntary engagements in the task. We found that activity in the anterior striatum and the prefrontal areas decreased along with this behavioral undermining effect. These findings suggest that the corticobasal ganglia valuation system underlies the undermining effect through the integration of extrinsic reward value and intrinsic task value.

6.
A great deal of research focuses on how humans and animals learn from trial-and-error interactions with the environment. This research has established the viability of reinforcement learning as a model of behavioral adaptation and neural reward valuation. Error-driven learning is inefficient and dangerous, however. Fortunately, humans learn from nonexperiential sources of information as well. In the present study, we focused on one such form of information, instruction. We recorded event-related potentials as participants performed a probabilistic learning task. In one experiment condition, participants received feedback only about whether their responses were rewarded. In the other condition, they also received instruction about reward probabilities before performing the task. We found that instruction eliminated participants' reliance on feedback, as evidenced by their immediate asymptotic performance in the instruction condition. In striking contrast, the feedback-related negativity, an event-related potential component thought to reflect neural reward prediction error, continued to adapt with experience in both conditions. These results show that, whereas instruction may immediately control behavior, certain neural responses must be learned from experience.

7.
In the laboratory, animals’ motivation to work tends to be positively correlated with reward magnitude. But in nature, rewards earned by work are essential to survival (e.g., working to find water), and the payoff of that work can vary on long timescales (e.g., seasonally). Under these constraints, the strategy of working less when rewards are small could be fatal. We found that instead, rats in a closed economy did more work for water rewards when the rewards were stably smaller, a phenomenon also observed in human labor supply curves. Like human consumers, rats showed elasticity of demand, consuming far more water per day when its price in effort was lower. The neural mechanisms underlying such “rational” market behaviors remain largely unexplored. We propose a dynamic utility maximization model that can account for the dependence of rat labor supply (trials/day) on the wage rate (milliliter/trial) and also predict the temporal dynamics of when rats work. Based on data from mice, we hypothesize that glutamatergic neurons in the subfornical organ in lamina terminalis continuously compute the instantaneous marginal utility of voluntary work for water reward and causally determine the amount and timing of work.

When animals have two ways to get a resource like water, they tend to choose the way that gets them more water for less work. Neural mechanisms underlying choices involving value comparisons are well studied (1). The reward literature has focused on how the relative subjective value or “utility” of each option is determined by weighing benefits (such as reward magnitude or quality) against costs (such as delay, risk, or effort). The identified neural mechanisms for utility computation mostly involve striatal and limbic reward circuits and dopamine.

Much less is known about how animals assess the absolute value of a single, available option to decide whether or not to attempt to harvest a potential reward. In one of the few such studies, when mice were offered only one way to get water at a time, they worked harder during the time blocks when the water reward was larger (2). This makes sense—save energy for when the work will pay off most—but it can’t be the whole story. If motivation were driven entirely by expected reward, animals would be less motivated to work for water during a drought (because they would expect less reward per unit of effort) and might die of thirst. This problem is partly offset by the fact that the perceived value of a reward is normalized according to recent experience, such that rewards that would have been considered small in a rich environment are perceived as large relative to a lean environment (3–5). But normalization would at best equalize motivation between rich and lean environments. If the difficulty of getting water changes slowly compared to the timescale of physiological necessity, animals must invest the most effort to gain it precisely when the reward for that effort is least.

To explore how animals adapt to this kind of challenge, we maintained rats in a live-in environment where all their water was earned by performing a difficult sensory task.
We varied the reward magnitude and measured rats’ effort output and water consumption. As expected, rats did more trials per day when the reward per trial was smaller, thus maintaining healthy hydration levels regardless of reward size. More surprisingly, however, rats worked for more water per day (and far more than they needed) when it was easier to earn. This suggests that they can regulate their consumption dramatically (up to threefold) to conserve effort when times are lean or cash in during times of abundance. In economic terms, rats show a strong elasticity of demand for water, even though essential commodities without substitutes are expected to be inelastic.

Classic animal behavior studies noted both these effects in experiments designed to validate economic utility maximization theory (6–8). Here, we revisit and extend that theoretical framework with the goal of relating utility maximization to behavioral dynamics and candidate neural mechanisms. This study differs from the recent literature on utility maximization in choice behavior in two ways. Behaviorally, we focus here on the choice between action and inaction under a closed economy with closed-loop feedback on value (in which state changes as a function of past choices). Mechanistically, we implicate lamina terminalis, a forebrain circuit which has not been previously linked to utility computations.
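The elasticity claim above can be made concrete with the standard arc (midpoint) formula. The quantities and effort prices below are hypothetical numbers in the spirit of the result (intake rising up to threefold when water is cheaper in effort), not the measured data:

```python
def arc_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) elasticity of demand: percentage change in
    quantity consumed divided by percentage change in price."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

# Hypothetical closed-economy numbers: when the effort price of water
# (trials per mL) halves, daily intake triples.
e = arc_elasticity(q1=15.0, q2=45.0, p1=40.0, p2=20.0)
# |e| > 1: elastic demand, contrary to the inelasticity usually
# expected for an essential commodity without substitutes.
```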

8.
During perceptual decision-making, the brain encodes the upcoming decision and the stimulus information in a mixed representation. Paradigms suitable for studying decision computations in isolation rely on stimulus comparisons, with choices depending on relative rather than absolute properties of the stimuli. The adoption of tasks requiring relative perceptual judgments in mice would be advantageous in view of the powerful tools available for the dissection of brain circuits. However, whether and how mice can perform a relative visual discrimination task has not yet been fully established. Here, we show that mice can solve a complex orientation discrimination task in which the choices are decoupled from the orientation of individual stimuli. Moreover, we demonstrate a typical discrimination acuity of 9°, challenging the common belief that mice are poor visual discriminators. We reached these conclusions by introducing a probabilistic choice model that explained behavioral strategies in 40 mice and demonstrated that the circularity of the stimulus space is an additional source of choice variability for trials with fixed difficulty. Furthermore, history biases in the model changed with task engagement, demonstrating behavioral sensitivity to the availability of cognitive resources. In conclusion, our results reveal that mice adopt a diverse set of strategies in a task that decouples decision-relevant information from stimulus-specific information, thus demonstrating their usefulness as an animal model for studying neural representations of relative categories in perceptual decision-making research.

The focus of perceptual decision-making research is to reveal the processes by which sensory information is used to inform decisions and guide behavior (1). When considering the neural underpinnings of these processes, both sensory and decision information are often found to be encoded by the same neural populations, making the identification of unique neural signatures of decision-making challenging (2–4). To overcome this problem, behavioral tasks that rely on relative rather than absolute values of stimulus properties can be advantageous (5–8). In these tasks, the same amount of information about the correct choice can be given by many combinations of stimuli, which allows the separation of the sensory and decision components of neural activity. Effectively, these tasks introduce invariance of choice categories with respect to specific stimuli.

In visual decision-making, a task with these characteristics is an orientation discrimination task featuring invariance with respect to specific orientations, which requires a subject to make relative orientation comparisons of stimuli. The convenience of this task relates to the well-characterized neural encoding of stimulus orientations in the striate visual cortex of all mammalian species (9, 10). The mouse, which features an unmatched, abundant set of experimental tools for the dissection of neural circuits (11–13), could therefore be a promising study system for examining the neural mechanisms underlying sensory decision-making when performing a relative orientation discrimination task. However, whether mice can be trained in a relative orientation discrimination task, and which strategies they may adopt in this task, have been unknown.

Here, we implemented a two-alternative forced-choice (2AFC) discrimination task for mice in which they had to report the more vertical orientation of two simultaneously presented grating stimuli.
Importantly, the vertical orientation was not shown in the majority of trials, and the same value of “relative verticality” was given by many pairs of oriented gratings. Animals could adopt similar, though not optimal, choice strategies, albeit at the cost of water reward, which allowed us to explore a continuum of naturally arising strategies. To characterize these strategies, we designed a probabilistic choice model that quantified how animals combined information from the two stimuli. We expanded the model to account for trial history–induced biases and analyzed the dependence of these biases on the engagement state of the animal. Finally, with the help of the model, we estimated orientation discrimination acuity and showed that mice perform this task with high levels of accuracy and sensitivity to small differences in orientation.

While the use of complex visual discrimination tasks in mice can be challenging because of the difficulty in training animals, modeling their choice strategies, parameterizing visual objects, and finding their neural representations, our complex discrimination task addresses these problems by extending the existing orientation discrimination protocols (14–18). We suggest that our task will allow for the exploration of links between neural and behavioral variability (19) in the context of heuristics and suboptimal choice strategies in rodent perceptual decision-making (20).
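A probabilistic choice model of this kind can be sketched as a logistic function of the verticality difference between the two gratings. This is a simplified stand-in for the paper's model: the sensitivity and bias parameters are hypothetical, and real history biases would make `bias` trial-dependent:

```python
import math

def p_choose_right(theta_left, theta_right, sensitivity=0.15, bias=0.0):
    """Probability of reporting the right grating as more vertical.
    Verticality is the angular distance to vertical (90 deg), folded
    because orientation is circular with period 180 deg; choice
    probability is a logistic function of the verticality difference."""
    def verticality(theta):
        d = (theta - 90) % 180          # circular distance to vertical
        return 90 - min(d, 180 - d)     # 90 = perfectly vertical
    dv = verticality(theta_right) - verticality(theta_left)
    return 1 / (1 + math.exp(-(sensitivity * dv + bias)))
```

Note the circularity: a 10° and a 170° grating are equally vertical, so the model predicts chance performance for that pair even though the stimuli differ, one source of choice variability at fixed difficulty.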

9.
Sensitivity to satiety constitutes a basic requirement for neuronal coding of subjective reward value. Satiety from natural ongoing consumption affects reward functions in learning and approach behavior. More specifically, satiety reduces the subjective economic value of individual rewards during choice between options that typically contain multiple reward components. The unconfounded assessment of economic reward value requires tests at choice indifference between two options, which is difficult to achieve with sated rewards. By conceptualizing choices between options with multiple reward components (“bundles”), Revealed Preference Theory may offer a solution. Despite satiety, choices against an unaltered reference bundle may remain indifferent when the reduced value of a sated bundle reward is compensated by larger amounts of an unsated reward of the same bundle, and then the value loss of the sated reward is indicated by the amount of the added unsated reward. Here, we show psychophysically titrated choice indifference in monkeys between bundles of differently sated rewards. Neuronal chosen value signals in the orbitofrontal cortex (OFC) followed closely the subjective value change within recording periods of individual neurons. A neuronal classifier distinguishing the bundles and predicting choice substantiated the subjective value change. The choice between conventional single rewards confirmed the neuronal changes seen with two-reward bundles. Thus, reward-specific satiety reduces subjective reward value signals in OFC. With satiety being an important factor of subjective reward value, these results extend the notion of subjective economic reward value coding in OFC neurons.

There are no specific sensory receptors for rewards, and their value is determined by the needs of individual decision makers. Thus, rewards have subjective value rather than being solely characterized by physical measures such as molecular concentrations, milliliters of juice, or units of money. Accordingly, neuronal signals in prime reward structures, such as orbitofrontal cortex (OFC) and dopamine neurons, code reward value on a subjective basis (1–3). One of the key factors determining subjective value is satiety that arises from ongoing reward consumption. After drinking a cup of coffee, we may desire a glass of water. We feel sated on coffee while still seeking liquid. Apparently, the coffee has lost more value for us than water. Such value loss is often referred to as sensory-specific satiety (or, more appropriately here, reward-specific satiety) and contrasts with general satiety that refers indiscriminately to all rewards (4, 5). Thus, reward-specific satiety is a key factor of subjective reward value, and any claim toward neuronal subjective reward value coding should include sensitivity to reward-specific satiety.

Reward-specific satiety in humans, monkeys, and rodents affects approach behavior, goal-directed behavior, operant responding, learning, and pleasantness associated with the specific reward. Lesioning, inactivation, and neural studies demonstrate the involvement of frontal cortex, and in particular OFC, in behavioral changes induced by general and reward-specific satiety. The studies assessed alterations of associative strength, cognitive representations for learning, approach behavior, and goal-directed behavior (6–18) but did not address the appreciation and maximization of subjective economic reward value that constitute the central interest of economic decision theory (18–20) and current neuroeconomics research (1, 21–24). Economic reward value cannot be measured directly but is inferred from observable choice (19–21).
Value estimations are made at choice indifference between a test option and a reference option, which renders them immune to unselective general satiety and controls for confounding slope changes of choice functions (25). While choice indifference is possible with milder value differences arising from reward type, reward delay, reward risk, or spontaneous fluctuations (1, 3, 26), it may fail with substantial satiety when animals categorically prefer nonsated alternatives (15–17). By contrast, choice indifference becomes feasible when an added amount of unsated reward can compensate for the value loss of the sated reward. Such tests require reward options with two reward components (“bundles”). Indeed, all choice options constitute bundles; they are either single rewards with multiple components, like the taste and fluid of a cup of coffee, or contain multiple rewards, like meat and vegetables of a meal we choose.

The rationale of our experiment rests on the tenet that candidate neuronal signals for subjective economic value need to be sensitive to reward-specific satiety. We used bundles whose multiple rewards sated differentially and allowed testing at choice indifference. Using strict concepts of Revealed Preference Theory (19, 27, 28), we had demonstrated that monkeys chose rationally between two-reward bundles by satisfying completeness, transitivity, and independence of irrelevant alternatives (24). Using these validations, we now estimated the loss of economic value from reward consumption during ongoing task performance. Using two-reward bundle options, instead of single-reward options, we tested choice indifference at specifically set, constant levels. We used multiple, equally preferred indifference points (IPs) for constructing two-dimensional graphic indifference curves (IC) on which all bundles had by definition the same subjective value. The slopes of these ICs demonstrated subjective value relationships (“currency”) between two bundle rewards.
Ongoing reward consumption changed the IC slopes in characteristic ways that suggested reward-specific subjective value reduction. During full recording periods of individual OFC neurons, chosen value responses tracked the IC slope changes in a systematic way that suggested a neuronal correlate for reward-specific satiety. These data support and extend the claim of subjective economic reward value coding by OFC neurons.
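The indifference-curve logic above can be sketched numerically. The indifference points below are hypothetical, chosen only to show how satiety on one bundle reward steepens the IC slope (the "currency" between the two rewards):

```python
def ic_slope(points):
    """Least-squares slope through indifference points (x, y) =
    (amount of reward A, amount of reward B) that the animal treats as
    equally preferred. The slope is the exchange rate between rewards."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    var = sum((x - mx) ** 2 for x, _ in points)
    return cov / var

# Hypothetical IPs before satiety on reward B: giving up one unit of A
# is compensated by two units of B (slope = -2).
pre = ic_slope([(0, 8), (1, 6), (2, 4), (3, 2)])
# After satiety on B, each unit of B is worth less, so more B is needed
# per unit of A and the curve steepens (slope = -3).
post = ic_slope([(0, 12), (1, 9), (2, 6), (3, 3)])
```

The change in slope, rather than any single choice, is what indexes the reward-specific value loss.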

10.
Mast cells are resident in the brain and contain numerous mediators, including neurotransmitters, cytokines, and chemokines, that are released in response to a variety of natural and pharmacological triggers. The number of mast cells in the brain fluctuates with stress and various behavioral and endocrine states. These properties suggest that mast cells are poised to influence neural systems underlying behavior. Using genetic and pharmacological loss-of-function models we performed a behavioral screen for arousal responses including emotionality, locomotor, and sensory components. We found that mast cell deficient KitW−sh/W−sh (sash−/−) mice had a greater anxiety-like phenotype than WT and heterozygote littermate control animals in the open field arena and elevated plus maze. Second, we show that blockade of brain, but not peripheral, mast cell activation increased anxiety-like behavior. Taken together, the data implicate brain mast cells in the modulation of anxiety-like behavior and provide evidence for the behavioral importance of neuroimmune links.

11.
Animals learn both whether and when a reward will occur. Neural models of timing posit that animals learn the mean time until reward perturbed by a fixed relative uncertainty. Nonetheless, animals can learn to perform actions for reward even in highly variable natural environments. Optimal inference in the presence of variable information requires probabilistic models, yet it is unclear whether animals can infer such models for reward timing. Here, we develop a behavioral paradigm in which optimal performance required knowledge of the distribution from which reward delays were chosen. We found that mice were able to accurately adjust their behavior to the SD of the reward delay distribution. Importantly, mice were able to flexibly adjust the amount of prior information used for inference according to the moment-by-moment demands of the task. The ability to infer probabilistic models for timing may allow mice to adapt to complex and dynamic natural environments.

Animals learn the delay until a reward will be delivered following either an animal’s own action or the presentation of a conditioned stimulus (1). The ability of animals to correctly infer reward delays is thought to be critical for a range of adaptive behaviors (2, 3) from operant and classical conditioning (4) to optimal foraging (5). Rodents asked to reproduce a particular time interval do so with a variance that scales in proportion to the mean (4, 6). Based largely upon this observation, neural models of timing propose that rodents learn the time interval between an action and its outcome as a mean interval perturbed by a constant coefficient of variation (∼0.15) (4, 7–10). The constant variability with which the mean time is known (“scalar timing”) is conceived of as the uncertainty with which a rodent knows the expected reward delay interval (11, 12).
In the case of a constant reward delay, such models will suffice to estimate an expected reward delay for a future action.

By contrast to the reliable timing of most operant conditioning paradigms, in a natural environment the variability in the timing with which events occur can be arbitrarily large (exceeding the variance of scalar timing) and dynamic. Consider, for example, the timing of responses from conspecifics in a social setting. Or consider a foraging animal that must decide how long to persist searching a patch for food (13). In the presence of variable information, optimal decisions require an agent to infer probability distributions that incorporate uncertainty learned through repeated experience (14, 15). Financial decision theory posits that knowledge of both the mean and variance of expected returns is necessary to select a portfolio optimally. Consistent with these predictions, recent studies have shown that human subjects are capable of tracking uncertainty (16, 17). Moreover, neural correlates of uncertainty about future rewards have been observed in midbrain dopamine neurons of nonhuman primates (18) and in dopamine-recipient brain regions in human subjects (17). Although financial decision theory has largely considered uncertainty in the magnitude of returns after a fixed period, an agent may also be subject to uncertainty about the time until a positive return is realized, as we described above. Optimal decisions about the amount of time one should persist in waiting for a positive return likewise require information about the average delay and the uncertainty (19).

Thus, action in the presence of uncertainty requires probabilistic information, and optimal performance often requires knowledge of detailed probability distributions or their parameters. This raises the question of whether agents can infer the necessary probabilistic models.
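The scalar-timing premise described above can be sketched as a simple generative model: reproduced intervals are the learned mean plus noise whose SD is a fixed fraction of that mean. The Gaussian form and the exact CV of 0.15 are the usual textbook idealization, used here only for illustration:

```python
import random
import statistics

def reproduce_intervals(mean_s, cv=0.15, n=20_000, seed=1):
    """Scalar timing sketch: each reproduced interval is the learned
    mean perturbed by Gaussian noise whose SD is a fixed fraction
    (the coefficient of variation, ~0.15) of that mean."""
    rng = random.Random(seed)
    return [rng.gauss(mean_s, cv * mean_s) for _ in range(n)]

short = reproduce_intervals(2.0)    # SD ~ 0.3 s
longer = reproduce_intervals(20.0)  # SD ~ 3.0 s: scales with the mean
# The scalar property: the measured CV is the same at 2 s and at 20 s.
cv_short = statistics.stdev(short) / statistics.fmean(short)
cv_long = statistics.stdev(longer) / statistics.fmean(longer)
```

Under this model the animal carries only a mean and a fixed relative uncertainty, which is exactly what fails when the true delay distribution itself varies.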
Several lines of evidence suggest that primates can infer probabilistic information about reward timing and that these inferred distributions are used to guide behavior. Human subjects asked to reproduce precise time intervals showed sensitivity to the distribution from which individual intervals were selected (20, 21). Human subjects given the option to wait for delayed rewards adjust their behavior optimally as a function of the probability distribution from which reward delays were drawn (19). Moreover, nonhuman primates allocate attention according to arbitrary variability in timing (22, 23). Rodents can learn several discrete reward delays (1, 7, 24); however, it has been less clear whether rodents can adapt optimally to changes in uncertainty. A recent behavioral study demonstrated that mice can rapidly learn to switch between two expected reward delays (25), consistent with an inferred, probabilistic model of the task structure. Nonetheless, it remains unclear whether mice can infer probabilistic models of reward timing. Moreover, the dynamics by which a probabilistic model is constructed from recent experience remain poorly understood.

Here, we develop a switching interval variance (SIV) operant conditioning task for mice. Optimal performance of the SIV task required mice to adapt their behavior to both the mean and the SD of reward delays. We find that mice adjust their behavior to the SD of reward delays across an order-of-magnitude change in variability. Quantitative analysis of the behavior was consistent with a process of statistical inference but not with switches among a small number of well-learned strategies. Our data were well fit by a model in which mice inferred a probabilistic model of reward delays from many tens of previous trials. Thus, our data suggest that the ability to infer probabilistic models for timing is not the privilege of primates, but rather arose much earlier in evolution.
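One simple way an animal could build the kind of probabilistic model the SIV task demands is to track a running estimate of the mean and SD of recent reward delays, weighting the last few tens of trials most heavily. The exponentially weighted estimator and the 40-trial time constant below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)

def ewma_delay_stats(delays, tau=40.0):
    """Exponentially weighted running estimates of the mean and SD of recent
    reward delays. tau (in trials) sets how many past trials dominate the
    estimate; 40 is an illustrative choice."""
    alpha = 1.0 / tau
    mean, var = delays[0], 0.0
    means, sds = [], []
    for d in delays:
        err = d - mean
        mean += alpha * err              # update mean toward latest delay
        var += alpha * (err**2 - var)    # update variance toward latest squared error
        means.append(mean)
        sds.append(np.sqrt(var))
    return np.array(means), np.array(sds)

# A block of low-variance delays followed by a block of high-variance delays
delays = np.concatenate([rng.normal(6.0, 0.5, 300),
                         rng.normal(6.0, 4.0, 300).clip(0.5)])
means, sds = ewma_delay_stats(delays)
print(f"estimated SD late in block 1: {sds[290]:.2f}, late in block 2: {sds[-1]:.2f}")
```

After the variance switch, the estimated SD climbs over a few tens of trials while the estimated mean stays near 6 s, mirroring the kind of behavioral adjustment the task probes.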

12.
Daily life requires transitions between performance of well-practiced, automatized behaviors reliant upon internalized representations and behaviors requiring external focus. Such transitions involve differential activation of the default mode network (DMN), a group of brain areas associated with inward focus. We asked how optogenetic modulation of the ventral pallidum (VP), a subcortical DMN node, impacts task switching from internally to externally guided lever-pressing behavior in the rat. Excitation of the VP dramatically compromised acquisition of an auditory discrimination task, trapping animals in a DMN state of automatized, internally focused behavior and impairing their ability to direct attention to external sensory stimuli. VP inhibition, on the other hand, facilitated task acquisition, expediting escape from the DMN brain state and thereby allowing rats to incorporate the contingency changes associated with the auditory stimuli. We suggest that the VP regulates the DMN from instant to instant and plays a deterministic role in transitions between internally and externally guided behaviors.

A considerable amount of our time is spent performing automatic or habitual behaviors that are based on acquired knowledge about our environment. For example, we cross the road upon a green traffic light, we open the fridge to get a drink, or we shake hands when meeting a friend. These acquired action patterns are appropriate as long as the behavioral context remains stable, and they are beneficial because they permit fast, effective responses and free up cognitive resources for other purposes. However, when contingencies change, learned behaviors need to be modified. For example, the potential transmission of viruses makes physical contact undesirable, such that hand shaking must be suppressed. This necessitates adapting our routine response patterns, which in turn requires cognitive flexibility. Successful control of behavior thus involves both the maintenance of appropriate learned action patterns under stable environmental conditions and their flexible modification in response to changed environmental demands.

Recently, it has been suggested that the rapid performance of learned responses in a stable behavioral context is associated with activation of the default mode network (DMN) (1, 2). The DMN was originally described in humans as an interconnected set of brain regions that are active during the resting state in the absence of behavioral tasks (3), and early work emphasized that internally directed processes, such as accessing autobiographical information or memory retrieval (4, 5), are associated with DMN activation. The work of Vatansever and colleagues adds a new perspective, as it implicates the DMN in the generation of automatized behaviors that, as the authors note, resemble an “autopilot” type of behavioral control.
DMN-associated behaviors thus encompass not only internally oriented mental operations but also tasks involving actions that rely on internalized representations and for which feedback from the environment is not a priority.

The DMN encompasses significant portions of the medial frontal and medial parietal cortex, but recent tractography work in humans has revealed that some subcortical brain structures also represent important DMN nodes (6). One of these noncortical DMN nodes is the basal forebrain (BF), which contains a collection of nuclei with substantial GABAergic and cholinergic corticopetal projections to DMN structures (7, 8). In primates, it has indeed been shown that BF inactivation leads to changes in global functional MRI signals involving the DMN (9), and we have recently provided evidence suggesting that the BF represents a subcortical DMN node in the rat (10), an idea that has gained recent support (6, 11, 12). In particular, we documented pronounced gamma oscillations in the ventral pallidum (VP) region of the BF during quiet wakefulness and self-grooming, behaviors compatible with DMN activation. Additionally, we demonstrated a directional influence of VP activity on the anterior cingulate cortex (ACC), an important cortical DMN region in rodents (10, 12–14), as well as a tight coupling between local field potential activity in the ACC and retrosplenial cortex (15), another node of the DMN, suggesting that the VP may be crucial for DMN-related transitions and state maintenance. Based on these considerations, we hypothesized that VP activation, by triggering a DMN-dominated brain state, should promote the execution of routine, learned responses in a stable behavioral context. We thus used an operant conditioning paradigm in which rats lever-press on a variable interval (VI) reward schedule, as this is known to be particularly effective in producing highly stereotyped behavior driven by an acquired internal model (16, 17).
Our task can be considered analogous to an “autopilot” mode of task performance in humans, as it has been used to formalize a dissociation between actions directed at achieving an outcome and behavioral responses elicited by a triggering stimulus (S-R). Early in training, lever pressing is variable, relatively inefficient, and sensitive to changes in reward value or contingency. That is, the animals’ actions are flexible to environmental exigencies and directed at a particular outcome. As training proceeds, the animals become experts at the task, and, as is the case for humans, this is evidenced by decreasing variability and increasing efficiency in behavioral performance (18, 19). At the same time, the behavior becomes internalized and inflexible, in that lever-press rates are now insensitive to environmental feedback such as changes to reward value or contingency (16, 17).

Convergent literature has already implicated the VP as an important regulator of behavioral participation in laboratory tasks (20–25). For example, pharmacological silencing of the VP leads to decreased lever pressing for preferred food in rats (25), an effect that the authors ascribe to the animals’ decreased willingness to expend effort to obtain a reward. More recently, it has been shown that task engagement depends on GABAergic VP neurons (26), with optogenetic silencing of this population decreasing response rates in a classical conditioning paradigm. However, these results are also compatible with an alternative interpretation, in which decreases in responding result not from motivational or appetitive effects but rather from disengagement from automatic task performance and a redirection of attention to the external environment. A similar logic may also apply to studies showing that activation of GABAergic VP neurons leads to an increase in behavioral response rate, such that the relative activity in these neurons may continuously modulate task participation.
Consistent with the above findings, animals nose poke and exhibit place preference for optogenetic activation of GABAergic VP neurons (27). This latter study emphasizes a link between task participation and reward anticipation, supported also by synaptic coupling between the VP and the nucleus accumbens (NAcc) (28), where dopamine (DA) circuits play a major role in reward processing. The robust reciprocal connections between the NAcc and VP, involving GABAergic and glutamatergic projections, underscore the importance of the VP in reward processing through modulation of mesolimbic DA activity in the ventral tegmental area (29). In addition to this role in reward processing, the VP also projects to the cerebral cortex. A prominent indirect projection involves GABAergic and cholinergic projections to the medial dorsal nucleus of the thalamus (28), which in turn targets medial prefrontal cortical areas, including the cingulate and prelimbic cortex in both rodents and primates (30, 31), that today are considered part of the DMN (32). The VP also harbors a direct cholinergic corticopetal projection that targets medial prefrontal cortical areas (28), although cholinergic neurons in the VP are far less numerous than in other BF nuclei such as the nearby nucleus basalis (33). In terms of connectivity, the VP could thus exert an influence on reward-related processing by acting on the mesolimbic DA system, as well as contribute to global brain state regulation by acting on cortical DMN nodes. Convergent evidence thus suggests that the VP, particularly its GABAergic neuronal circuit, is a regulator of task participation or willingness to work for reward, and that appropriate VP activity levels are important for successful adaptive behavior.
These findings are compatible with a potential involvement of the VP in DMN regulation and indeed provide a neural pathway by which the VP could influence cortical state across the multiple interconnected regions making up the DMN.

While maintenance of action patterns is adaptive in a stable environment, learned actions must be modifiable when contingencies change, during which the DMN is thought to deactivate, allowing the dorsal attention network to assume control (34, 35). This leads to a transition from internally guided to externally guided behavior that is crucial for adapting behavioral strategies to sensory input. In other words, the brain must leave “autopilot” mode in order to integrate information from sensory and reward systems. Recent evidence suggests that the DMN may also be involved in these sorts of state transitions (36, 37). In our study, we addressed this aspect of cognitive flexibility by employing task switching, transferring the rat from the VI operant schedule to an auditory discrimination paradigm. Here, lever presses during the presentation of specific auditory stimuli (positive stimulus conditions, S+) are rewarded, whereas rewards become unavailable for lever presses during the presentation of different auditory stimuli (negative stimulus conditions, S−). Normally, rats stop lever pressing during the S− stimuli, adapting to the modified environmental contingency. We formulated two hypotheses concerning the influence of VP activity on the success of auditory discrimination learning following task switching: We hypothesized that up-regulation of VP activity upon task switching should promote a DMN-dominated brain state, keeping the animals in a state of executing learned lever presses according to the VI schedule and failing to incorporate relevant auditory information from the environment.
On the other hand, we anticipated that VP down-regulation upon task switching would tend to reduce the influence of DMN circuits on behavior, reducing the execution of learned responses while promoting acquisition of the auditory discrimination task. In the present study, we address both of these hypotheses using optogenetic activation and silencing of VP circuits.

13.
14.
Dopamine is widely observed to signal anticipation of future rewards and is thus thought to be a key contributor to affectively charged decision making. However, the experiments supporting this view have not dissociated rewards from the actions that lead to, or are occasioned by, them. Here, we manipulated dopamine pharmacologically and examined the effect on a task that explicitly dissociates action and reward value. We show that dopamine enhanced the neural representation of rewarding actions, without significantly affecting the representation of reward value as such. Thus, increasing dopamine levels with levodopa selectively boosted striatal and substantia nigra/ventral tegmental representations associated with actions leading to reward, but not with actions leading to the avoidance of punishment. These findings highlight a key role for dopamine in the generation of appetitively motivated actions.

15.
A fundamental question in neuroscience is what type of internal representation leads to complex, adaptive behavior. When faced with a deadline, individuals’ behavior suggests that they represent the mean and the uncertainty of an internal timer to make near-optimal, time-dependent decisions. Whether this ability relies on simple trial-and-error adjustments or whether it involves richer representations is unknown. Richer representations suggest the possibility of error monitoring, that is, the ability of an individual to assess its internal representation of the world and estimate discrepancy in the absence of external feedback. While rodents show timing behavior, whether they can represent and report temporal errors in their own produced durations on a single-trial basis is unknown. We designed a paradigm requiring rats to produce a target time interval and, subsequently, evaluate its error. Rats received a reward in a given location depending on the magnitude of their timing errors. During test trials, rats had to choose the port corresponding to the error magnitude of their just-produced duration to receive a reward. High choice accuracy demonstrates that rats kept track of the values of the timing variables on which they based their decision. Additionally, the rats kept a representation of the mapping between those timing values and the target value, as well as of the history of reinforcements. These findings demonstrate error-monitoring abilities in evaluating self-generated timing in rodents. Together, they suggest an explicit representation of produced duration and the possibility of evaluating its relation to the desired target duration.

In neuroscience, a fundamental question is how rich the internal representation of an individual’s experience must be to yield adaptive behavior. Consider a hungry individual in need of finding food fast: the individual may adopt a trial-and-error foraging strategy to maximize reward but may also, to maximize its efficiency, represent rich experiential variables, such as how much time it takes to reach a source of food. Both representing elapsed time and monitoring its inherent uncertainty play an important role in adaptive behavior, learning, and decision making (1). When representing these variables, the sources of uncertainty are both exogenous (stimulus driven) and endogenous (neural implementation). The mapping of exogenous sources of temporal uncertainty has been well described in timing behavior: for instance, mice can adjust their behavior to the width of the distribution of temporal intervals provided through external stimuli (2). On the other hand, the endogenous sources of uncertainty for time perception are less understood and more difficult to address.

Evidence that animals are sensitive and have access to the internal uncertainty of elapsed time comes from tasks in which the individual must produce a required target duration using a lever press or a key press (1, 3, 4). In a task in which individuals must produce an interval of fixed duration to obtain a reward (Fig. 1A), a plausible strategy to maximize reward would be to set the produced duration to be longer than the required target duration so as to allow a margin of error [internal target duration; (5)]. This is because the larger an individual’s representational uncertainty, the larger the margin of error needed to maximize reward. Consistent with this, studies have shown that the magnitude of error in produced intervals varies with the magnitude of temporal uncertainties (6, 7), and participants with larger temporal uncertainty set larger margins of error [Fig. 1B and SI Appendix, Fig. S2; (1, 7)]. The observed optimization of timing behavior begs the question of how rich the representation of elapsed time must be.

Fig. 1. The TP task and error-monitoring protocol. (A) Schematic of a box arrangement with a lever available in the middle of the panel and reward ports on the left and right side of the lever. Reward availability was signaled by the port light, depicted by the lightbulbs. Reward delivery was triggered by the rat’s nose poke in the reward port. Depending on the group assignment, rats had to either hold the lever pressed for a minimum of 3.2 s (HOLD group) or press the lever twice with a minimal delay (3.2 s) between the two presses (PRESS group). (B) TP performance, in error-monitoring test sessions, follows Weber’s law for both groups, with signatures of optimality. (Upper) Probability density functions over TPs for each individual rat in the HOLD (blue) and PRESS (red) groups. Thresholds Θ (blue and red dashed lines for the HOLD and PRESS groups, respectively) are plotted for each individual. (Bottom Left) Average probability density functions over TPs for the HOLD and PRESS groups superimposed. Note the distribution shift and width shrinkage for the HOLD group. (Bottom Right) For each rat, µ(TP) is plotted against σ(TP). Both at the individual and at the group level, the PRESS rats showed larger µ(TP) and σ(TP), visible as an upward right shift of the red curve. This pattern indicates that rats make their choices optimally, taking into account their level of TP variability. The results hold within each rat and across sessions (SI Appendix, Fig. S3). (C) Schematic depiction of how rewards were assigned to specific parts of the TP distribution. Green indicates “small error” (SE) trials and orange “large error” (LE) trials. Red indicates TPs that were out of reward range. The arrows indicate the probabilistic assignment of TP type (SE or LE) to left and right ports on training trials.
On test trials, the food–port assignments remained, but both ports were available, and thus the amount of reward was driven by the rat’s choice. (D) Schematic of the trial structure. From top to bottom, the succession of task events is depicted. They alternate along the TP axis (color bar with red, green, and orange) and show different scenarios determined by the rat’s TP performance in single trials. The ITI is the last event in a single-trial sequence.

A trial-and-error strategy would predict that near-optimal behavior can be parsimoniously explained by adaptation, so that timing behavior would fluctuate around the required duration. The representational view would predict that uncertainty and trial-to-trial errors are experiential variables used by the animals to monitor their timing behavior.

To settle the question of whether rodents can monitor their timing errors relative to their target on a trial-by-trial basis, we developed a task inspired by human work. Humans required to generate a time interval can reliably report both the magnitude and the sign of their errors (8) (i.e., they can evaluate by how much [magnitude] their generated duration was too short or too long [sign] with respect to the target duration). Humans can also report how confident they are in their timing behavior (9). Here we tested these temporal cognitive abilities in rats, which were required to produce a time interval and, on some test trials, correctly report the magnitude of their timing errors in order to obtain a reward. We show that rats correctly reported the magnitude of their timing errors, suggesting that their timing behavior uses explicit representations of time intervals together with their uncertainty around the internal target duration.
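The margin-of-error argument can be made quantitative: under scalar timing (production SD proportional to the internal target), the reward-maximizing internal target lies above the required 3.2-s threshold, and the optimal margin grows with timing uncertainty. The reward-rate objective (successes per unit of produced time) is an illustrative assumption in this sketch, not the analysis from the study:

```python
import math

REQUIRED = 3.2  # s, minimum duration from the task description

def p_success(target, cv):
    """P(produced duration >= REQUIRED) when production is Gaussian around
    the internal target with scalar noise (SD = cv * target)."""
    z = (REQUIRED - target) / (cv * target)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def optimal_target(cv):
    """Internal target maximizing successes per unit of produced time
    (an assumed, illustrative objective), found on a coarse grid."""
    grid = [REQUIRED + 0.01 * i for i in range(1, 500)]
    return max(grid, key=lambda t: p_success(t, cv) / t)

for cv in (0.1, 0.2, 0.3):
    t = optimal_target(cv)
    print(f"CV={cv:.1f} -> optimal internal target {t:.2f} s (margin {t - REQUIRED:.2f} s)")
```

The optimal internal target always exceeds 3.2 s, and the margin widens as the CV grows, matching the observation that individuals with larger temporal uncertainty set larger margins of error.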

16.
The ability to make choices and carry out appropriate actions is critical for individual survival and well-being. Choice behaviors, from hard-wired to experience-dependent, have been observed across the animal kingdom. Although differential engagement of sensory neuronal pathways is a known mechanism, neurobiological substrates in the brain that underlie choice making downstream of sensory perception are not well understood. Here, we report a behavioral paradigm in zebrafish in which a half-light/half-dark visual image evokes an innate choice behavior, light avoidance. Neuronal activity mapping using the immediate early gene c-fos reveals the engagement of distinct brain regions, including the medial zone of the dorsal telencephalic region (Dm) and the dorsal nucleus of the ventral telencephalic area (Vd), the teleost anatomical homologs of the mammalian amygdala and striatum, respectively. In animals that were subjected to the identical sensory stimulus but displayed little or no avoidance, strikingly, the Dm and Vd were not engaged, despite similar levels of activation in the brain nuclei involved in visual processing. Based on these findings and previous connectivity data, we propose a neural circuitry model in which the Dm serves as a brain center, the activity of which predicates this choice behavior in zebrafish.

17.
As it becomes possible to simulate increasingly complex neural networks, it becomes correspondingly important to model the sensory information that animals actively acquire: the biomechanics of sensory acquisition directly determines the sensory input and therefore neural processing. Here, we exploit the tractable mechanics of the well-studied rodent vibrissal (“whisker”) system to present a model that can simulate the signals acquired by a full sensor array actively sampling the environment. Rodents actively “whisk” ∼60 vibrissae (whiskers) to obtain tactile information, and this system is therefore ideal to study closed-loop sensorimotor processing. The simulation framework presented here, WHISKiT Physics, incorporates realistic morphology of the rat whisker array to predict the time-varying mechanical signals generated at each whisker base during sensory acquisition. Single-whisker dynamics were optimized based on experimental data and then validated against free tip oscillations and dynamic responses to collisions. The model is then extrapolated to include all whiskers in the array, incorporating each whisker’s individual geometry. Simulation examples in laboratory and natural environments demonstrate that WHISKiT Physics can predict input signals during various behaviors, currently impossible in the biological animal. In one exemplary use of the model, the results suggest that active whisking increases in-plane whisker bending compared to passive stimulation and that principal component analysis can reveal the relative contributions of whisker identity and mechanics at each whisker base to the vibrissotactile response. These results highlight how interactions between array morphology and individual whisker geometry and dynamics shape the signals that the brain must process.

The nervous system of an animal species coevolves with its sensory and motor systems, which are in continuous interaction with the environment. Because these sensorimotor and environmental feedback loops are so tightly linked to neural function, there has been increasing effort to study neural processing within the context of the animal’s body and environment (e.g., in the fields of neuromechanics and embodied cognition). However, it is challenging to collect neurophysiological data under naturalistic conditions, and thus simulations have become an increasingly important component of neuroscience. A wide variety of software platforms have been developed to enable simulations of neural populations and circuits (1–3), the biomechanics of motor systems (4), and the responses of sensory receptors (5–9). To date, however, no system has been able to fully account for the physical constraints imposed during active sensory acquisition behavior in a natural environment.

Here, we describe a simulation framework (WHISKiT Physics) that can model the dynamics of a complete sensory system—the rodent vibrissal array—operating under ethologically relevant conditions. The rat vibrissal array is one of the most widely used models in neuroscience to study active sensing and cortical processing. Although its biomechanics are relatively simple, the vibrissal array subserves a rich and complex repertoire of tactile sensing behavior. As nocturnal animals, rats are experts at using tactile cues from their whiskers to extract information from the environment, such as object distance (10, 11), orientation (12), shape (11, 13), and texture (14, 15). During tactile exploration, rats often use active, coordinated oscillatory movements of their whiskers (whisking) to sample the immediate space at frequencies between 5 and 25 Hz (16).
These unique properties make this sensory system ideal for examining the dynamic relationship between motor control and sensory input during goal-directed and exploratory behavior.

A total of 30 whiskers are regularly arranged on each side of the rat’s face (mystacial pad) (11). Each whisker is embedded in a follicle, where the mechanical signals generated at the whisker base are transduced by a variety of mechanoreceptors before they enter the sensory (trigeminal) pathway (17). Thanks to the clear whisker-based topographic maps reflected in central structures (18) and its parallels to human touch (dorsal column–medial lemniscal pathway), the entire pathway—from the primary sensory neurons, through brainstem and thalamus, up to primary somatosensory cortex—has been subject to extensive research. Nonetheless, to date, the field has lacked the ability to simulate such a system operating under naturalistic conditions (i.e., the full whisker array, active control of whiskers, etc.).

WHISKiT Physics incorporates a three-dimensional (3D) dynamical model of the rat vibrissal array to allow researchers to simulate the complete mechanosensory input during active whisking behavior. The model incorporates the typical shape and curvature of each individual whisker as well as the morphology of the rodent’s face and the arrangement of the whiskers on the mystacial pad. Each whisker can be actuated either according to typical equations of motion for whisking (19) or to directly match behavioral data. Because it permits direct control of whisker motion and simultaneous readout of mechanosensory feedback, WHISKiT Physics enables closed-loop simulations of the entire somatosensory modality in the rat, the first of its kind in any sensory modality.
After validating models of individual whiskers in the array against several independent datasets, we use the full-array model to simulate vibrissotactile sensory input in four typical exploratory scenarios, both in the laboratory and in the natural environment, and discuss its use in future neural simulation systems. Each of the four scenarios generates unique patterns of data, illustrating that the model could be used to reveal the mechanisms that allow animals to extract relevant information about their environment. Although results are presented for the rat, the model is easily extended to the mouse.
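The PCA analysis mentioned above can be sketched with toy data: if every whisker's base signal mixes a shared whisking-locked component with whisker-specific gain and noise, PCA separates the shared drive (first component) from identity-specific variation. The signal model below is purely illustrative; WHISKiT Physics itself derives the signals from whisker dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
n_whiskers, n_samples = 60, 2000  # ~60 whiskers across both sides of the face

# Toy stand-in for whisker-base bending signals: a shared whisking-locked
# component plus per-whisker mechanics (identity-specific gain and noise).
shared = np.sin(np.linspace(0, 40 * np.pi, n_samples))   # whisking-locked drive
gains = rng.uniform(0.2, 1.0, size=(n_whiskers, 1))      # whisker-specific gain
signals = gains * shared + 0.3 * rng.normal(size=(n_whiskers, n_samples))

# PCA via SVD of the mean-centered signal matrix (whiskers x time)
X = signals - signals.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print(f"PC1 explains {var_explained[0]:.0%} of variance across the array")
```

The first component captures the shared whisking drive while the remaining components absorb whisker-specific noise, a minimal version of separating "whisker identity" from common mechanics.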

18.
Deciphering the information that eyes, ears, and other sensory organs transmit to the brain is important for understanding the neural basis of behavior. Recordings from single sensory nerve cells have yielded useful insights, but single neurons generally do not mediate behavior; networks of neurons do. Monitoring the activity of all cells in a neural network of a behaving animal, however, is not yet possible. Taking an alternative approach, we used a realistic cell-based model to compute the ensemble of neural activity generated by one sensory organ, the lateral eye of the horseshoe crab, Limulus polyphemus. We studied how the neural network of this eye encodes natural scenes by presenting to the model movies recorded with a video camera mounted above the eye of an animal that was exploring its underwater habitat. Model predictions were confirmed by simultaneously recording responses from single optic nerve fibers of the same animal. We report here that the eye transmits to the brain robust “neural images” of objects having the size, contrast, and motion of potential mates. The neural code for such objects is not found in ambiguous messages of individual optic nerve fibers but rather in patterns of coherent activity that extend over small ensembles of nerve fibers and are bound together by stimulus motion. Integrative properties of neurons in the first synaptic layer of the brain appear well suited to detecting the patterns of coherent activity. Neural coding by this relatively simple eye helps explain how horseshoe crabs find mates and may lead to a better understanding of how more complex sensory organs process information.

19.
Neuronal responses to sensory stimuli are not only driven by feedforward sensory pathways but also depend upon intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation. To understand how these factors together regulate cortical dynamics, we simultaneously recorded spontaneous and somatosensory-evoked multiunit activity from primary somatosensory cortex and from the locus coeruleus (LC), the neuromodulatory nucleus releasing norepinephrine, in urethane-anesthetized rats. We found that bursts of ipsilateral LC firing preceded increases in cortical excitability by a few tens of milliseconds, and that the 1- to 10-Hz rhythmicity of LC discharge appeared to increase the power of delta-band (1–4 Hz) cortical synchronization. To investigate quantitatively how LC firing might causally influence spontaneous and stimulus-driven cortical dynamics, we then constructed and fitted to these data a model describing the dynamical interaction of stimulus drive, ongoing synchronized cortical activity, and noradrenergic neuromodulation. The model proposes a coupling between LC and cortex that can amplify delta-range cortical fluctuations, and it shows how suitably timed phasic LC bursts can lead to enhanced cortical responses to weaker stimuli and increased temporal precision of cortical stimulus-evoked responses. Thus, the temporal structure of noradrenergic modulation may selectively and dynamically enhance or attenuate cortical responses to stimuli. Finally, using the model’s prediction of single-trial cortical stimulus-evoked responses to discount single-trial state-dependent variability increased the sensory information extracted from cortical responses by ∼70%. This suggests that downstream circuits may extract information more effectively after estimating the state of the circuit transmitting the sensory message.

Responsiveness of cortical sensory neurons is state dependent.
In other words, the neural responses to a sensory stimulus depend not only on the features of extrinsic sensory inputs but also on intrinsic network variables that can be collectively defined as the “network state” (1). Cortical sensory neurons receive information about the external world from peripheral receptors via feedforward sensory pathways. However, the abundance of recurrent and feedback connectivity (2) may generate ongoing activity that shapes the background on which the afferent information is processed (3). In addition, neuromodulatory inputs from neurochemically specialized brain nuclei that are not part of the direct spino-thalamo-cortical pathway can modulate the dynamics of cortical networks (4), as well as control the animal’s behavioral state. The concurrent integration of information about the external world and about internal states is likely to be central for computational operations of cortical circuits and for the production of complex behavior, yet its mechanisms and implications for neural information processing are still poorly understood.

The locus coeruleus (LC) is a brainstem neuromodulatory nucleus that likely plays a prominent role in shaping cortical states via a highly distributed noradrenaline release in the forebrain (5). In particular, the LC contributes to regulation of arousal and sleep; it is involved in cognitive functions such as vigilance, attention, and selective sensory processing (5–7); and it modulates cortical sensory responses and cortical excitability (8).

Here, we investigated how LC firing influences spontaneous and stimulus-evoked cortical activity by performing simultaneous extracellular recordings of spontaneous and somatosensory-stimulation–evoked neural activity in the primary somatosensory cortex (S1) and in both ipsilateral LC (i-LC) and contralateral LC (c-LC) in urethane-anesthetized rats. 
We first use these data to investigate the statistical relationships between the temporal structure of LC firing and the changes in cortical excitability, and we then construct a dynamical systems model of the temporal variations of cortical multiunit activity (MUA) that describes quantitatively the dynamic relationships between LC firing, spontaneous cortical activity dynamics, and cortical sensory-evoked responses. We use this model to study how a specific neuromodulatory input may influence the information content and the readout of cortical information representations of sensory stimuli.
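The fitted model itself is not reproduced in this abstract, but its central qualitative claim, that a suitably timed phasic LC burst enhances the cortical response to a stimulus, can be illustrated with a toy simulation. In the sketch below, a leaky cortical variable is driven by an impulse stimulus whose gain is transiently boosted after an LC burst; all parameters (gain boost, decay time constants) are invented for illustration and are not the study's fitted values.

```python
import numpy as np

def peak_response(lc_burst_t, stim_t, dt=1e-3, T=1.0, tau=0.05,
                  boost=1.5, tau_lc=0.1):
    """Peak of a leaky cortical variable x driven by an impulse stimulus.
    A phasic LC burst at lc_burst_t transiently multiplies the stimulus
    gain, decaying with time constant tau_lc (toy model, assumed values)."""
    t = np.arange(0.0, T, dt)
    # gain = baseline 1 plus a noradrenergic boost decaying after the burst
    gain = 1.0 + boost * np.where(t >= lc_burst_t,
                                  np.exp(-(t - lc_burst_t) / tau_lc), 0.0)
    stim = np.zeros_like(t)
    stim[int(stim_t / dt)] = 1.0  # brief sensory impulse
    x = np.zeros_like(t)
    for i in range(1, len(t)):
        x[i] = x[i - 1] + dt * (-x[i - 1] / tau) + gain[i - 1] * stim[i - 1]
    return x.max()

# An LC burst 50 ms before the stimulus amplifies the evoked response;
# a burst 450 ms earlier has decayed away and barely helps.
well_timed = peak_response(lc_burst_t=0.50, stim_t=0.55)
poorly_timed = peak_response(lc_burst_t=0.10, stim_t=0.55)
```

The same logic motivates the state-discounting result: if a downstream reader knows the gain state at stimulus time, it can separate state-driven variability from stimulus-driven signal.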

20.
Proinflammatory cytokines, such as IL-1β, have been implicated in the cellular and behavioral effects of stress and in mood disorders, although the downstream signaling pathways underlying these effects have not been determined. In the present study, we demonstrate a critical role for NF-κB signaling in the actions of IL-1β and stress. Stress inhibition of neurogenesis in the adult hippocampus, which has been implicated in the prodepressive effects of stress, is blocked by administration of an inhibitor of NF-κB. Further analysis reveals that stress activates NF-κB signaling and decreases proliferation of neural stem-like cells but not early neural progenitor cells in the adult hippocampus. We also find that depressive-like behaviors caused by exposure to chronic stress are mediated by NF-κB signaling. Together, these data identify NF-κB signaling as a critical mediator of the antineurogenic and behavioral actions of stress and suggest previously undescribed therapeutic targets for depression.
