Similar Articles
20 similar articles found (search time: 31 ms)
1.
While it is clear that many brain areas process mnemonic information, understanding how their interactions result in continuously adaptive behaviors has been a challenge. A homeostatic-regulated prediction model of memory is presented that posits a single memory system based on a multilevel, coordinated, and integrated network (from cells to neural systems) that determines the extent to which events and outcomes occur as predicted. The "multiple memory systems of the brain" have in common an output that signals errors in the prediction of events and/or their outcomes, although these signals differ in what the error represents (e.g., hippocampus: context prediction errors vs. midbrain/striatum: reward prediction errors). The prefrontal cortex likely plays a pivotal role in coordinating prediction analysis within and across prediction brain areas. By virtue of its widespread control and influence, and its intrinsic working memory mechanisms, the prefrontal cortex supports the flexible processing needed to generate adaptive behaviors and predict future outcomes. It is proposed that the prefrontal cortex continually and automatically produces adaptive responses according to homeostatic regulatory principles: it may serve as a controller that is intrinsically driven to maintain, in prediction areas, an experience-dependent firing rate set point that ensures adaptive, temporally and spatially resolved neural responses to future prediction errors. The same drive may also restore set point firing rates after deviations (i.e., prediction errors) are detected. In this way, the prefrontal cortex contributes to reducing uncertainty in prediction systems. An emergent outcome of this homeostatic view may be the flexible and adaptive control that the prefrontal cortex is known to implement (i.e., working memory) in the most challenging of situations.
Compromise to any of the prediction circuits should result in rigid and suboptimal decision making and memory, as seen in addiction and neurological disease. © 2013 The Authors. Hippocampus Published by Wiley Periodicals, Inc.

2.
Animal approach-avoidance conflict paradigms have been used extensively to operationalize anxiety, quantify the effects of anxiolytic agents, and probe the neural basis of fear and anxiety. Results from human neuroimaging studies suggest that a frontal-striatal-amygdala neural circuitry is important for approach-avoidance learning. However, the neural basis of decision-making is much less clear in this context. Thus, we combined a recently developed human approach-avoidance paradigm with functional magnetic resonance imaging (fMRI) to identify neural substrates underlying approach-avoidance conflict decision-making. Fifteen healthy adults completed the approach-avoidance conflict (AAC) paradigm during fMRI. Analyses of variance were used to compare conflict to nonconflict (avoid-threat and approach-reward) conditions and to compare levels of reward points offered during the decision phase. Trial-by-trial amplitude modulation analyses were used to delineate brain areas underlying decision-making in the context of approach/avoidance behavior. Conflict trials, as compared to nonconflict trials, elicited greater activation within bilateral anterior cingulate cortex, anterior insula, and caudate, as well as right dorsolateral prefrontal cortex (PFC). Right caudate and lateral PFC activation was modulated by the level of reward offered. Individuals who showed greater caudate activation exhibited less approach behavior. On a trial-by-trial basis, greater right lateral PFC activation was related to less approach behavior. Taken together, the results suggest that the degree of activation within prefrontal-striatal-insula circuitry determines the degree of approach versus avoidance decision-making, and that caudate and lateral PFC activation relate to individual differences in approach-avoidance decision-making. The approach-avoidance conflict paradigm is therefore well suited to probe anxiety-related processing differences during approach-avoidance decision-making.
Hum Brain Mapp 36:449–462, 2015. © 2014 Wiley Periodicals, Inc.

3.
Alcohol and cannabis are two of the substances most commonly used by adolescents. Alcohol use disorder (AUD) and cannabis use disorder (CUD) are associated with poorer decision-making in adolescents, and level of AUD symptomatology has been negatively associated with striatal reward responsivity. However, little work has explored the relationship with striatal reward prediction error (RPE) representation, or the extent to which any augmentation of RPE by novel stimuli is affected. One hundred fifty-one adolescents performed the Novelty Task while undergoing functional magnetic resonance imaging (fMRI). In this task, participants learn to choose novel or non-novel stimuli to gain monetary reward. Level of AUD symptomatology was negatively associated with both optimal decision-making and BOLD response modulation by RPE within the striatum and regions of prefrontal cortex. These neural alterations in RPE representation were particularly pronounced when participants were exploring novel stimuli. Level of CUD symptomatology moderated the relationship between novelty propensity and RPE representation within the inferior parietal lobule and dorsomedial prefrontal cortex. These data expand an emerging literature on the distinct associations of AUD versus CUD symptomatology levels with RPE representation during reinforcement processing, and provide insight into the neuro-computational processes underlying reinforcement learning and decision-making in adolescents.

4.
The predicted reward of different behavioral options plays an important role in guiding decisions. Previous research has identified reward predictions in prefrontal and striatal brain regions, and it has been shown that the neural representation of a predicted reward is similar to the neural representation of the actual reward outcome. However, it has remained unknown how these representations emerge over the course of learning and how they relate to decision making. Here, we sought to investigate learning of predicted reward representations using functional magnetic resonance imaging and multivariate pattern classification. Using a Pavlovian conditioning procedure, human subjects learned multiple novel cue-outcome associations in each scanning run. We demonstrate that, across learning, activity patterns in the orbitofrontal cortex, the dorsolateral prefrontal cortex (DLPFC), and the dorsal striatum coding the value of predicted rewards become similar to the patterns coding the value of actual reward outcomes. Furthermore, we provide evidence that predicted reward representations in the striatum precede those in prefrontal regions and that representations in the DLPFC are linked to subsequent value-based choices. Our results show that different brain regions represent outcome predictions by eliciting the neural representation of the actual outcome, and they suggest that reward predictions in the DLPFC are directly related to value-based choices.

5.
Experimental work in animals has identified numerous neural structures involved in reward processing and reward-dependent learning. Until recently, this work provided the primary basis for speculations about the neural substrates of human reward processing. The widespread use of neuroimaging technology has changed this situation dramatically over the past decade through the use of PET and fMRI. Here, the authors focus on the role played by fMRI studies, where recent work has replicated the animal results in human subjects and has extended the view of putative reward-processing neural structures. In particular, fMRI work has identified a set of reward-related brain structures including the orbitofrontal cortex, amygdala, ventral striatum, and medial prefrontal cortex. Moreover, the human experiments have probed the dependence of human reward responses on learned expectations, context, timing, and the reward dimension. Current experiments aim to assess the function of human reward-processing structures to determine how they allow us to predict, assess, and act in response to rewards. The authors review current accomplishments in the study of human reward processing and focus their discussion on explanations directed particularly at the role played by the ventral striatum. They discuss how these findings may contribute to a better understanding of deficits associated with Parkinson's disease.

6.
Learning to make choices that yield rewarding outcomes requires the computation of three distinct signals: stimulus values that guide choices at the time of decision making, experienced utility signals that evaluate the outcomes of those decisions, and prediction errors that update the values assigned to stimuli during reward learning. Here we investigated whether monetary and social rewards involve overlapping neural substrates during these computations. Subjects engaged in two probabilistic reward learning tasks that were identical except that rewards were either social (pictures of smiling or angry people) or monetary (gaining or losing money). We found substantial overlap between the two types of rewards for all components of the learning process: a common area of ventromedial prefrontal cortex (vmPFC) correlated with stimulus value at the time of choice, another common area of vmPFC correlated with reward magnitude, and common areas in the striatum correlated with prediction errors. Taken together, the findings support the hypothesis that shared anatomical substrates are involved in the computation of both monetary and social rewards.
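The three signals described above can be sketched with a minimal Rescorla-Wagner-style update (an illustrative sketch, not the authors' actual analysis model; parameter values are arbitrary):

```python
# Illustrative sketch of the three learning signals: stimulus value
# (guides choice), experienced utility (the outcome), and prediction
# error (updates the value). Not the study's fitted model.

def update_value(value, outcome_utility, learning_rate=0.1):
    """One Rescorla-Wagner update: value moves toward outcome utility."""
    prediction_error = outcome_utility - value   # better/worse than expected
    return value + learning_rate * prediction_error, prediction_error

# Example: a stimulus starts neutral and is repeatedly rewarded (utility = 1).
v = 0.0
for _ in range(20):
    v, pe = update_value(v, outcome_utility=1.0)
assert 0 < v < 1  # value approaches, but never exceeds, the outcome utility
```

The same update applies whether the outcome utility is monetary or social; only the source of the utility signal differs, which is the parallel the study exploits.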

7.
Humans and animals often must choose between rewards that differ in their qualities, magnitudes, immediacy, and likelihood, and must estimate these multiple reward parameters from their experience. However, the neural basis for such complex decision making is not well understood. To understand the role of the primate prefrontal cortex in determining the subjective value of delayed or uncertain reward, we examined the activity of individual prefrontal neurons during an inter-temporal choice task and a computer-simulated competitive game. Consistent with the findings from previous studies in humans and other animals, the monkey’s behaviors during inter-temporal choice were well accounted for by a hyperbolic discount function. In addition, the activity of many neurons in the lateral prefrontal cortex reflected the signals related to the magnitude and delay of the reward expected from a particular action, and often encoded the difference in temporally discounted values that predicted the animal’s choice. During a computerized matching pennies game, the animals approximated the optimal strategy, known as Nash equilibrium, using a reinforcement learning algorithm. We also found that many neurons in the lateral prefrontal cortex conveyed the signals related to the animal’s previous choices and their outcomes, suggesting that this cortical area might play an important role in forming associations between actions and their outcomes. These results show that the primate lateral prefrontal cortex plays a central role in estimating the values of alternative actions based on multiple sources of information.
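The hyperbolic discount function mentioned above, V = A / (1 + kD), can be sketched as follows (illustrative Python; the discount parameter k is arbitrary, not fitted to the monkeys' behavior):

```python
# Hedged sketch of hyperbolic temporal discounting: the subjective
# value of a reward of size `amount` delivered after `delay` falls off
# as V = amount / (1 + k * delay). The k value here is arbitrary.

def discounted_value(amount, delay, k=0.2):
    """Hyperbolic temporal discounting of a delayed reward."""
    return amount / (1.0 + k * delay)

# A small immediate reward vs. a larger delayed one: the preference
# reverses as both delays grow, a hallmark of hyperbolic discounting.
assert discounted_value(2, delay=0) > discounted_value(3, delay=5)
assert discounted_value(2, delay=10) < discounted_value(3, delay=15)
```

The difference in temporally discounted values between the two options is the quantity the prefrontal neurons in the study were reported to encode.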

8.
The midbrain lies deep within the brain and has an important role in reward, motivation, movement, and the pathophysiology of various neuropsychiatric disorders such as Parkinson's disease, schizophrenia, depression, and addiction. To date, the primary means of acting on this region has been pharmacological intervention or implanted electrodes. Here we introduce a noninvasive brain stimulation technique that exploits the highly interconnected nature of the midbrain and prefrontal cortex to stimulate deep brain regions. Using transcranial direct current stimulation (tDCS) of the prefrontal cortex, we were able to remotely activate the interconnected midbrain and cause increases in participants' appraisals of facial attractiveness. Participants with more enhanced prefrontal/midbrain connectivity following stimulation exhibited greater increases in attractiveness ratings. These results illustrate that noninvasive direct stimulation of the prefrontal cortex can induce neural activity in the distally connected midbrain, which directly affects behavior. Furthermore, they suggest that this tDCS protocol could provide a promising approach to modulating midbrain functions that are disrupted in neuropsychiatric disorders.

9.
To survive under changing circumstances, we have to make appropriate behavioral decisions. For this purpose, the brain must recognize reward information from objects under given circumstances. Recent experimental and theoretical studies have suggested that primates, including humans, have at least two brain processes that calculate the reward value of objects. One is the process coding a specific reward value of a stimulus or response, depending on direct experience (e.g., classical conditioning and temporal-difference [TD] learning). The other enables us to predict reward based on an internal model of the given circumstances, without direct experience (e.g., categorization and inference). To clarify the neuronal correlates of these multiple processes of reward prediction, we conducted four experiments: (1) single-unit recording from the caudate and lateral prefrontal cortex of a monkey performing a memory-guided saccade task with an asymmetric reward schedule; (2) human fMRI imaging during random-dot discrimination with an asymmetric reward condition; (3) single-unit recording from monkey dopamine neurons in the random-dot discrimination task with an asymmetric reward schedule; and (4) simultaneous single-unit recording from the striatum and lateral prefrontal cortex of monkeys performing a reward inference task. The results suggest that the nigrostriatal network and the prefrontal network have different functional roles in reward prediction (value generation): the former applies the model-free method (temporal-difference learning), while the latter uses the model-based method (category-based learning).
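The model-free versus model-based distinction drawn above can be illustrated with a toy sketch (assumed parameter values; this is not the recorded data or any fitted model from the experiments):

```python
# Toy contrast of the two valuation processes: a model-free (TD-style)
# cached value learned from direct experience vs. a model-based value
# computed from an internal model of the situation. Illustrative only.

def td_update(cached_value, reward, alpha=0.1):
    """Model-free: nudge a cached value toward each experienced reward."""
    return cached_value + alpha * (reward - cached_value)

def model_based_value(transition_probs, outcome_values):
    """Model-based: infer value from a learned model, no direct experience."""
    return sum(p * v for p, v in zip(transition_probs, outcome_values))

# Model-free values require repeated direct experience to converge...
v_mf = 0.0
for _ in range(50):
    v_mf = td_update(v_mf, reward=1.0)

# ...whereas a model-based estimate is available immediately once the
# structure is known (e.g., an 80% chance of a reward worth 1).
v_mb = model_based_value([0.8, 0.2], [1.0, 0.0])
assert abs(v_mb - 0.8) < 1e-9
assert v_mf > 0.9  # converges toward 1 only with experience
```

This immediacy of model-based valuation is what the reward inference task probes: the prefrontal network can assign value to options never directly rewarded.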

10.
Abler B, Walter H, Erk S. Neuroreport 2005, 16(7): 669-672
Psychological considerations suggest that the omission of rewards in humans comprises two effects: first, an allocentric effect triggering learning and behavioural changes potentially processed by dopaminergic neurons according to the prediction error theory; second, an egocentric effect representing the individual's emotional reaction, commonly called frustration. We investigated this second effect in the context of omission of monetary reward with functional magnetic resonance imaging. As expected, the contrast omission relative to receipt of reward led to a decrease in ventral striatal activation consistent with prediction error theory. Increased activation for this contrast was found in areas previously related to emotional pain: the right anterior insula and the right ventral prefrontal cortex. We interpreted this as a neural correlate of the egocentric effect.

11.
Learning occurs when an outcome differs from expectations, generating a reward prediction error signal (RPE). The RPE signal has been hypothesized to simultaneously embody the valence of an outcome (better or worse than expected) and its surprise (how far from expectations). Nonetheless, growing evidence suggests that separate representations of the two RPE components exist in the human brain. Meta-analyses provide an opportunity to test this hypothesis and directly probe the extent to which the valence and surprise of the error signal are encoded in separate or overlapping networks. We carried out several meta-analyses on a large set of fMRI studies investigating the neural basis of RPE, locked at decision outcome. We identified two valence learning systems by pooling studies searching for differential neural activity in response to categorical positive-versus-negative outcomes. The first valence network (negative > positive) involved areas regulating alertness and switching behaviours such as the midcingulate cortex, the thalamus and the dorsolateral prefrontal cortex whereas the second valence network (positive > negative) encompassed regions of the human reward circuitry such as the ventral striatum and the ventromedial prefrontal cortex. We also found evidence of a largely distinct surprise-encoding network including the anterior cingulate cortex, anterior insula and dorsal striatum. Together with recent animal and electrophysiological evidence this meta-analysis points to a sequential and distributed encoding of different components of the RPE signal, with potentially distinct functional roles.
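The decomposition of the RPE into valence and surprise discussed above can be written compactly (a minimal illustrative sketch, not the meta-analytic procedure itself):

```python
# Sketch of splitting a signed RPE into its two hypothesized components:
# valence (better or worse than expected) and surprise (unsigned distance
# from expectation), which the meta-analysis maps onto separate networks.

def decompose_rpe(outcome, expectation):
    rpe = outcome - expectation
    valence = 1 if rpe > 0 else (-1 if rpe < 0 else 0)  # sign of the error
    surprise = abs(rpe)                                  # unsigned magnitude
    return valence, surprise

# A large unexpected loss and a large unexpected win share high surprise
# but carry opposite valence.
assert decompose_rpe(0.0, 1.0) == (-1, 1.0)
assert decompose_rpe(2.0, 1.0) == (1, 1.0)
```

Regions tracking the first component should differentiate wins from losses; regions tracking the second should respond to either, provided it is unexpected.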

12.
Computational models of reward processing suggest that foregone, or fictive, outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g., reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error, signals on investment behavior; this behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with higher-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with these individual differences, a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive and reward prediction errors) represents one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. They also expose important possibilities for understanding the lack of control in addiction in terms of foregone rewarding outcomes. Hum Brain Mapp 35:3738–3749, 2014. © 2013 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.

13.
We present a neural network model in which the spatial and temporal components of a task are merged and learned in the hippocampus as chains of associations between sensory events. The prefrontal cortex integrates this information to build a cognitive map representing the environment. After latent learning, the cognitive map can be used to select optimal actions to fulfill the goals of the animal. The architecture is simulated and applied to learning and solving tasks that involve both spatial and temporal knowledge. We show how the model can solve the continuous place navigation task, in which a rat has to navigate to an unmarked goal and wait for 2 seconds without moving to receive a reward. The results emphasize the role of the hippocampus in both spatial and timing prediction, and of the prefrontal cortex in learning the goals of the task.

14.
Catechol-O-methyltransferase (COMT) catabolises dopamine and is important for regulating dopamine levels in the prefrontal cortex. Consistent with its regulation of prefrontal cortex dopamine, COMT modulates working memory and executive function; however, its significance for other cognitive domains, and in other brain regions, remains relatively unexplored. One such example is reward processing, for which dopamine is a critical mediator, and in which the striatum and corticostriatal circuitry are implicated. Here, we discuss emerging data which links COMT to reward processing, review what is known of the underlying neural substrates, and consider whether COMT is a good therapeutic target for treating addiction. Although a limited number of studies have investigated COMT and reward processing, common findings are beginning to emerge. COMT appears to modulate cortical and striatal activation during both reward anticipation and delivery, and to impact on reward-related learning and its underlying neural circuitry. COMT has been studied as a candidate gene for numerous reward-related phenotypes and there is some preliminary evidence linking it with certain aspects of addiction. However, additional studies are required before these associations can be considered robust. It is premature to consider COMT a good therapeutic target for addiction, but this hypothesis should be revisited as further information emerges. In particular, it will be critical to reveal the precise neurobiological mechanisms underlying links between COMT and reward processing, and the extent to which these relate to the putative associations with addiction.

15.
Reality monitoring refers to the process of discriminating between internally and externally generated information. Two different tasks have often been used to assess this ability: (a) memory for perceived versus imagined stimuli; and (b) memory for participant- versus experimenter-performed operations. However, it is not known whether these two reality monitoring tasks share neural substrates. The present study involved use of a within-subjects functional magnetic resonance imaging design to examine common and distinct brain mechanisms associated with the two reality monitoring conditions. The sole difference between the two lay in greater activation in the medial anterior prefrontal cortex when recollecting whether the participant or the experimenter had carried out an operation during prior encoding as compared to recollecting whether an item had been perceived or imagined. This region has previously been linked with attending to mental states. Task differences were also reflected in the nature of functional connectivity relationships between the medial anterior and right lateral prefrontal cortex: There was a stronger correlation in activity between the two regions during recollection of self/experimenter context. This indicates a role for the medial anterior prefrontal cortex in the monitoring of retrieved information relating to internal or external aspects of context. Finally, given the importance of reality monitoring to understanding psychotic symptoms, brain activity was related to measures of proneness to psychosis and schizotypal traits. The observation of significant correlations between reduced medial anterior prefrontal signal and scores on such measures corroborates these theoretical links.

16.
Reward expectation and reward prediction errors are thought to be critical for dynamic adjustments in decision-making and reward-seeking behavior, but little is known about their representation in the brain during uncertainty and risk-taking. Furthermore, little is known about what role individual differences might play in such reinforcement processes. In this study, it is shown that behavioral and neural responses during a decision-making task can be characterized by a computational reinforcement learning model and that individual differences in learning parameters in the model are critical for elucidating these processes. In the fMRI experiment, subjects chose between high- and low-risk rewards. A computational reinforcement learning model computed expected values and prediction errors that each subject might experience on each trial. These outputs predicted subjects’ trial-to-trial choice strategies and neural activity in several limbic and prefrontal regions during the task. Individual differences in estimated reinforcement learning parameters proved critical for characterizing these processes, because models that incorporated individual learning parameters explained significantly more variance in the fMRI data than did a model using fixed learning parameters. These findings suggest that the brain engages a reinforcement learning process during risk-taking and that individual differences play a crucial role in modeling this process.
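The kind of model described above, with individually varying learning parameters, can be sketched as a Rescorla-Wagner update plus a softmax choice rule (illustrative alpha and beta values, not the parameters fitted in the study):

```python
import math

# Minimal sketch of a reinforcement learning model whose learning rate
# (alpha) and inverse temperature (beta) vary across individuals.
# Parameter values are illustrative, not fitted to any data.

def softmax_choice_prob(values, beta):
    """Probability of choosing option 0 given learned values."""
    exps = [math.exp(beta * v) for v in values]
    return exps[0] / sum(exps)

def simulate_learning(rewards, alpha):
    """Update a single option's value over a sequence of rewards."""
    v = 0.0
    for r in rewards:
        v += alpha * (r - v)  # prediction-error-driven update
    return v

# Two "subjects" with different learning rates end up with different
# values after identical experience, which is why fitting individual
# parameters can explain more variance in choices (and fMRI signal)
# than a single fixed-parameter model.
rewards = [1.0] * 10
fast = simulate_learning(rewards, alpha=0.5)
slow = simulate_learning(rewards, alpha=0.05)
assert fast > slow
assert 0.99 < softmax_choice_prob([5.0, 0.0], beta=2.0) < 1.0
```

The trial-by-trial expected values and prediction errors produced by such a model are the regressors correlated against the fMRI signal in studies of this type.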

17.
OBJECTIVE: Deficits in motor inhibition may contribute to impulsivity and irritability in children with bipolar disorder. Studies of the neural circuitry engaged during failed motor inhibition in pediatric bipolar disorder may increase our understanding of the pathophysiology of the illness. The authors tested the hypothesis that children with bipolar disorder and comparison subjects would differ in ventral prefrontal cortex, striatal, and anterior cingulate activation during unsuccessful motor inhibition. They also compared activation in medicated versus unmedicated children with bipolar disorder and in children with bipolar disorder and attention deficit hyperactivity disorder (ADHD) versus those with bipolar disorder without ADHD. METHOD: The authors conducted an event-related functional magnetic resonance imaging study comparing neural activation in children with bipolar disorder and healthy comparison subjects while they performed a motor inhibition task. The study group included 26 children with bipolar disorder (13 unmedicated and 15 with ADHD) and 17 comparison subjects matched by age, gender, and IQ. RESULTS: On failed inhibitory trials, comparison subjects showed greater bilateral striatal and right ventral prefrontal cortex activation than did patients. These deficits were present in unmedicated patients, but the role of ADHD in mediating them was unclear. CONCLUSIONS: In relation to comparison subjects, children with bipolar disorder may have deficits in their ability to engage striatal structures and the right ventral prefrontal cortex during unsuccessful inhibition. Further research should ascertain the contribution of ADHD to these deficits and the role that such deficits may play in the emotional and behavioral dysregulation characteristic of bipolar disorder.

18.
BACKGROUND: The mesolimbic dopaminergic reward system seems to play a crucial role in reinforcing effects of nicotine. Recently, acute high-frequency repetitive transcranial magnetic stimulation (rTMS) of frontal brain regions has been shown to efficiently modulate the mesostriatal and mesolimbic dopaminergic system in both animals and humans. For this reason, we investigated whether high-frequency rTMS would be able to influence nicotine-related behavior by studying rTMS effects on craving and cigarette smoking. METHOD: Fourteen treatment-seeking smokers were included in a double-blind crossover trial, conducted in 2002, comparing single days of active versus sham stimulation. Outcome measures were rTMS effects on number of cigarettes smoked during an ad libitum smoking period and effects on craving after a period of acute abstinence. RESULTS: High-frequency (20-Hz) rTMS of left dorsolateral prefrontal cortex reduced cigarette smoking significantly (p <.01) in an active stimulation compared with sham stimulation. Levels of craving did not change significantly. CONCLUSION: High-frequency rTMS may be useful for treatment in smoking cessation.

19.
The past several years have seen a resurgence of interest in understanding the psychological and neural bases of what are often referred to as “negative symptoms” in schizophrenia. These aspects of schizophrenia include constructs such as asociality, avolition (a reduction in the motivation to initiate or persist in goal-directed behavior), and anhedonia (a reduction in the ability to experience pleasure). We believe that these dimensions of impairment in individuals with schizophrenia reflect difficulties using internal representations of emotional experiences, previous rewards, and motivational goals to drive current and future behavior in a way that would allow them to obtain desired outcomes, a deficit that has major clinical significance in terms of functional capacity. In this article, we review the major components of the systems that link experienced and anticipated rewards with motivated behavior that could potentially be impaired in schizophrenia. We conclude that the existing evidence suggests relatively intact hedonics in schizophrenia, but impairments in some aspects of reinforcement learning, reward prediction, and prediction error processing, consistent with an impairment in “wanting.” As of yet, there is only indirect evidence of impairment in anterior cingulate and orbital frontal function that may support value and effort computations. However, there are intriguing hints that individuals with schizophrenia may not be able to use reward information to modulate cognitive control and dorsolateral prefrontal cortex function, suggesting a potentially important role for cortical–striatal interactions in mediating impairment in motivated and goal-directed behavior in schizophrenia.

20.
Self-control gives humans the patience necessary to maximize reward attainment in the future, yet it remains elusive when and how the preference for self-controlled choices is formed. We measured brain activity while female and male humans performed an intertemporal choice task in which they first received delayed real liquid rewards (forced-choice trial) and then chose between the reward options based on those experiences (free-choice trial). We found that, while subjects were awaiting an upcoming reward in the forced-choice trial, the anterior prefrontal cortex (aPFC) tracked a dynamic signal reflecting the pleasure of anticipating the future reward. Importantly, this prefrontal signal was specifically observed in self-controlled individuals; moreover, interregional negative coupling between the prefrontal region and the ventral striatum (VS) became stronger in those individuals. During consumption of the liquid rewards, reduced ventral striatal activity predicted self-controlled choices in the subsequent free-choice trials. These results suggest that a well-coordinated prefrontal-striatal mechanism during the reward experience shapes preferences regarding future self-controlled choice.
SIGNIFICANCE STATEMENT: Anticipating future desirable events is a critical mental function that guides self-controlled behavior in humans. When and how are self-controlled choices formed in the brain? We monitored brain activity while humans awaited a real liquid reward that became available in tens of seconds. We found that the frontopolar cortex tracked temporally evolving signals reflecting the pleasure of anticipating the future reward, and that this signal was enhanced in self-controlled individuals. Our results highlight the contribution of the frontopolar cortex to the formation of self-controlled preferences and further suggest that future prospection in the prefrontal cortex (PFC) plays an important role in shaping future choice behavior.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号