Similar articles
1.
1. This study investigated neuronal activity in the striatum preceding predictable environmental events and behavioral reactions. Monkeys performed a delayed go-nogo task that included separate time periods during which animals expected signals of behavioral significance, prepared for execution or inhibition of arm reaching movements, and expected the delivery of reward. In the task, animals were instructed by a green light cue to perform an arm reaching movement when a trigger stimulus came on approximately 3 s later (go situation). Movement was withheld after the same trigger light when the instruction cue had been red (nogo situation). Liquid reward was delivered on correct performance in both situations. 2. A total of 1,173 neurons were studied in the striatum (caudate nucleus and putamen) of 3 animals, of which 615 (52%) showed some change in activity during task performance. This report describes how the activity of 193 task-related neurons increased in advance of at least 1 component of the task, namely the instruction cue, the trigger stimulus, or the delivery of liquid reward. These neurons were found in dorsal and anterior parts of caudate and putamen and were slightly more frequent in the proximity of the internal capsule. 3. The activity of 16 neurons increased in both go and nogo trials before the onset of the instruction and subsided shortly after this signal. These activations may be related to the expectation of the instruction as the first signal in each trial. 4. The activity of 15 neurons increased between the instruction and the trigger stimulus in both go and nogo trials. These activations may be related to the expectation of the trigger stimulus independent of an arm movement. A further 56 neurons showed sustained activations only when the instruction requested a movement reaction. Activations were absent in trials in which the movement was withheld.
Twenty-one of these neurons were tested with 2 different movement targets, 5 of which showed activity related to the direction of movement. These activations may be related to the preparation of movement or expectation of the specific movement-triggering signal. The activity of an additional 20 neurons was unmodulated before the trigger stimulus in movement trials but increased in the interval between the no-movement instruction and the trigger stimulus for withholding the movement. These activations may be related to the preparation of movement inhibition as a specific nogo reaction. (ABSTRACT TRUNCATED AT 400 WORDS)

2.
Sensation, memories, and predictions contribute to choices in everyday life, and their relative impact should change with task constraints. To investigate how the impact from sensory cortex on decision making varies with task constraints, we trained macaque monkeys in a direction discrimination task where they could maximize reward by waiting for sensory visual information early in a trial, while focusing on memory and reward prediction as a trial progressed. The task constraints caused animals to indicate decisions in the complete absence of visual motion stimuli (stimulus-independent decisions), as 25% of the trials were ‘no stimulus’ trials. On ‘no stimulus’ trials reward delivery depended on the current decision in relation to the decision history. Stimulus-independent decisions occurred during an epoch when a stimulus could in principle have been presented, or afterwards when stimuli could not occur anymore. Stimulus-independent decisions were significantly different during these two periods. Reward exploitation was more efficient late in the trial, but it was not associated with systematic activity changes in directionally selective neurons in area MT. Conversely, systematic changes of neuronal activity and firing rate correlation in directionally selective middle temporal area (MT) neurons were restricted to a short time period before early decisions. Changing task constraints in the course of a single trial thus determines how neurons in sensory areas contribute to decision making. Electronic supplementary material  The online version of this article (doi:) contains supplementary material, which is available to authorized users.

3.
This study investigated how different expected rewards influence behavior-related neuronal activity in the anterior striatum. In a spatial delayed-response task, monkeys reached for a left or right target and obtained a small quantity of one of two juices (apple, grenadine, orange, lemon, black currant, or raspberry). In each trial, an initial instruction picture indicated the behavioral target and predicted the reward. Nonmovement trials served as controls for movement relationships. Consistent preferences in special reward choice trials and differences in anticipatory licks, performance errors, and reaction times indicated that animals differentially expected the rewards predicted by the instructions. About 600 of >2,500 neurons in anterior parts of caudate nucleus, putamen, and ventral striatum showed five forms of task-related activations, comprising responses to instructions, spatial or nonspatial activations during the preparation or execution of the movement, and activations preceding or following the rewards. About one-third of the neurons showed different levels of task-related activity depending on which liquid reward was predicted at trial end. Activations were either higher or lower for rewards that were preferred by the animals as compared with nonpreferred rewards. These data suggest that the expectation of an upcoming liquid reward may influence a fraction of task-related neurons in the anterior striatum. Apparently the information about the expected reward is incorporated into the neuronal activity related to the behavioral reaction leading to the reward. The results of this study are in general agreement with an account of goal-directed behavior according to which the outcome should be represented already at the time at which the behavior toward the outcome is performed.

4.
The orbitofrontal cortex appears to be involved in the control of voluntary, goal-directed behavior by motivational outcomes. This study investigated how orbitofrontal neurons process information about rewards in a task that depends on intact orbitofrontal functions. In a delayed go-nogo task, animals executed or withheld a reaching movement and obtained liquid or a conditioned sound as reinforcement. An initial instruction picture indicated the behavioral reaction to be performed (movement vs. nonmovement) and the reinforcer to be obtained (liquid vs. sound) after a subsequent trigger stimulus. We found task-related activations in 188 of 505 neurons in rostral orbitofrontal area 13, entire area 11, and lateral area 14. The principal task-related activations consisted of responses to instructions, activations preceding reinforcers, or responses to reinforcers. Most activations reflected the reinforcing event rather than other task components. Instruction responses occurred either in liquid- or sound-reinforced trials but rarely distinguished between movement and nonmovement reactions. These instruction responses reflected the predicted motivational outcome rather than the behavioral reaction necessary for obtaining that outcome. Activations preceding the reinforcer began slowly and terminated immediately after the reinforcer, even when the reinforcer occurred earlier or later than usual. These activations usually preceded the liquid reward but rarely the conditioned auditory reinforcer. The activations also preceded expected drops of liquid delivered outside the task, suggesting a primary appetitive rather than a task-reinforcing relationship that apparently was related to the expectation of reward. Responses after the reinforcer occurred in liquid- but rarely in sound-reinforced trials. Reward-preceding activations and reward responses were unrelated temporally to licking movements.
Several neurons showed reward responses outside the task but instruction responses during the task, indicating a response transfer from primary reward to the reward-predicting instruction, possibly reflecting the temporal unpredictability of reward. In conclusion, orbitofrontal neurons report stimuli associated with reinforcers, are concerned with the expectation of reward, and detect reward delivery at trial end. These activities may contribute to the processing of reward information for the motivational control of goal-directed behavior.

5.
Responses of monkey dopamine neurons during learning of behavioral reactions.
1. Previous studies have shown that dopamine (DA) neurons respond to stimuli of behavioral significance, such as primary reward and conditioned stimuli predicting reward and eliciting behavioral reactions. The present study investigated how these responses develop and vary when the behavioral significance of stimuli changes during different stages of learning. Impulses from DA neurons were recorded with movable microelectrodes from areas A8, A9, and A10 in two awake monkeys during the successive acquisition of two behavioral tasks. Impulses of DA neurons were distinguished from other neurons by their long duration (1.8-5.0 ms) and low spontaneous frequency (0.5-7.0 imp/s). 2. In the first task, animals learned to reach in a small box in front of them when it opened visibly and audibly. Before conditioning, DA neurons were activated the first few times that the empty box opened and animals reacted with saccadic eye movements. Neuronal and behavioral responses disappeared on repeated stimulus presentation. Thus neuronal responses were related to the novelty of an unexpected stimulus eliciting orienting behavior. 3. Subsequently, the box contained a small morsel of apple in one out of six trials. Animals reacted with ocular saccades to nearly every box opening and reached out when the morsel was present. One-third of 49 neurons were phasically activated by every door opening. The response was stronger when food was present. Thus DA neurons responded simultaneously to the sight of primary food reward and to the conditioned stimulus associated with reward. 4. When the box contained a morsel of apple on every trial, animals regularly reacted with target-directed eye and arm movements, and the majority of 76 DA neurons responded to door opening. 
The same neurons lacked responses to a light not associated with task performance that was illuminated at the position of the food box in alternate sessions, thus demonstrating specificity for the behavioral significance of stimuli. 5. The second task employed the operant conditioning of a reaction time situation in which animals reached from a resting key toward a lever when a small light was illuminated. DA neurons lacked responses to the unconditioned light. During task acquisition lasting 2-3 days, one-half of 25 DA neurons were phasically activated when a drop of liquid reward was delivered for reinforcing the reaching movement. In contrast, neurons were not activated when reward was delivered at regular intervals (2.5-3.5 s) but a task was not performed. (ABSTRACT TRUNCATED AT 400 WORDS)

6.
In the primate striatum, the tonically discharging neurons respond to conditioned stimuli associated with reward. We investigated whether these neurons respond to the reward itself and how changes in the behavioral context in which the reward is delivered might influence their responsiveness. A total of 286 neurons in the caudate nucleus and putamen were studied in two awake macaque monkeys while liquid reward was delivered in three behavioral situations: (1) an instrumental task, in which reward was delivered upon execution of a visually triggered arm movement; (2) a classically conditioned task, in which reward was delivered 1 s after a visual signal; (3) a free reward situation, in which reward was delivered at irregular time intervals outside of any conditioning task. The monkeys' uncertainty about the time at which reward would be delivered was assessed by monitoring their mouth movements. A larger proportion of neurons responsive to reward was observed in the free reward situation (86%) than in the classically conditioned (57%) and instrumental tasks (37%). Among the neurons tested in all situations (n = 78), 24% responded to reward regardless of the situation and 65% in only one or two situations. Responses selective for one particular situation occurred exclusively in the free reward situation. When the reward was delivered immediately after the visual signal in the classically conditioned task, most of the neurons reduced or completely lost their responses to reward, and other neurons remained responsive. Conversely, neuronal responses invariably persisted when reward was delivered later than 1 s after the visual signal. This is the first report that tonic striatal neurons might display responses directly to primary rewards. The neuronal responses were strongly influenced by the behavioral context in which the animals received the reward. An important factor appears to be the timing of reward.
These neurons might therefore contribute to a general aspect of behavioral reactivity of the subject to relevant stimuli. Received: 16 September 1996 / Accepted: 1 April 1997

7.
Animals optimize behaviors by predicting future critical events based on histories of actions and their outcomes. When behavioral outcomes like reward and aversion are signaled by current external cues, actions are directed to acquire the reward and avoid the aversion. The basal ganglia are thought to be the brain locus for reward-based adaptive action planning and learning. To understand the role of striatum in coding outcomes of forthcoming behavioral responses, we addressed two specific questions. First, how are the histories of reward and aversion used for encoding forthcoming outcomes in the striatum during a series of instructed behavioral responses? Second, how are the behavioral responses and their instructed outcomes represented in the striatum? We recorded discharges of 163 presumed projection neurons in the striatum while monkeys performed a visually instructed lever-release task for reward, aversion, and sound outcomes, whose occurrences could be estimated by their histories. Before outcome instruction, discharge rates of a subset of neurons activated in this epoch showed positive or negative regression slopes with reward history (24/44), that is, to the number of trials since the last reward trial, which changed in parallel with reward probability of current trials. The history effect was also observed for the aversion outcome but in far fewer neurons (3/44). Once outcomes were instructed in the same task, neurons selectively encoded the outcomes before and after behavioral responses (reward, 46/70; aversion, 6/70; sound, 6/70). The history- and current instruction-based coding of forthcoming behavioral outcomes in the striatum might underlie outcome-oriented behavioral modulation.
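The history regression described above — discharge rate regressed on the number of trials since the last reward — can be sketched as a simple least-squares fit. This is an illustrative reconstruction, not the authors' analysis code; the model neuron, its rates, and the single-predictor regression are hypothetical.

```python
def reward_history_slope(trials_since_reward, firing_rates):
    """Least-squares slope of firing rate against reward history.

    A positive slope means the neuron fires more strongly as unrewarded
    trials accumulate, tracking the rising reward probability of the
    current trial.
    """
    n = len(trials_since_reward)
    mx = sum(trials_since_reward) / n
    my = sum(firing_rates) / n
    sxx = sum((x - mx) ** 2 for x in trials_since_reward)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(trials_since_reward, firing_rates))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical neuron whose pre-instruction rate grows by 2 spikes/s for
# each trial elapsed since the last rewarded trial.
history = [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
rates = [5.0 + 2.0 * h for h in history]
slope, intercept = reward_history_slope(history, rates)
# slope is 2.0 and intercept is 5.0 for this noiseless example
```

In the study itself, a neuron would be classed as history-coding when such a slope differed significantly from zero; the sign distinguishes the positive- and negative-slope subsets reported above.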

8.
Learning theory emphasizes the importance of expectations in the control of instrumental action. This study investigated the variation of behavioral reactions toward different rewards as an expression of differential expectations of outcomes in primates. We employed several versions of two basic behavioral paradigms, the spatial delayed response task and the delayed reaction task. These tasks are commonly used in neurobiological studies of working memory, movement preparation, and event expectation involving the frontal cortex and basal ganglia. An initial visual instruction stimulus indicated to the animal which one of several food or liquid rewards would be delivered after each correct behavioral response, or whether or not a reward could be obtained. We measured the reaction times of the operantly conditioned arm movement necessary for obtaining the reward, and the durations of anticipatory licking prior to liquid reward delivery as a Pavlovian conditioned response. The results showed that both measures varied depending on the reward predicted by the initial instruction. Arm movements were performed with significantly shorter reaction times for foods or liquids that were more preferred by the animal than for less preferred ones. Still larger differences were observed between rewarded and unrewarded trials. An interesting effect was found in unrewarded trials, in which reaction times were significantly shorter when a highly preferred reward was delivered in the alternative rewarded trials of the same trial block as compared to a less preferred reward. Anticipatory licks preceding the reward were significantly longer when highly preferred rather than less preferred rewards, or no rewards, were predicted. These results demonstrate that behavioral reactions preceding rewards may vary depending on the predicted future reward and suggest that monkeys differentially expect particular outcomes in the presently investigated tasks.  相似文献   

9.
R. E. Suri, W. Schultz, Neuroscience, 1999, 91(3):871-890
This study investigated how the simulated response of dopamine neurons to reward-related stimuli could be used as reinforcement signal for learning a spatial delayed response task. Spatial delayed response tasks assess the functions of frontal cortex and basal ganglia in short-term memory, movement preparation and expectation of environmental events. In these tasks, a stimulus appears for a short period at a particular location, and after a delay the subject moves to the location indicated. Dopamine neurons are activated by unpredicted rewards and reward-predicting stimuli, are not influenced by fully predicted rewards, and are depressed by omitted rewards. Thus, they appear to report an error in the prediction of reward, which is the crucial reinforcement term in formal learning theories. Theoretical studies on reinforcement learning have shown that signals similar to dopamine responses can be used as effective teaching signals for learning. A neural network model implementing the temporal difference algorithm was trained to perform a simulated spatial delayed response task. The reinforcement signal was modeled according to the basic characteristics of dopamine responses to novel stimuli, primary rewards and reward-predicting stimuli. A Critic component analogous to dopamine neurons computed a temporal error in the prediction of reinforcement and emitted this signal to an Actor component which mediated the behavioral output. The spatial delayed response task was learned via two subtasks introducing spatial choices and temporal delays, in the same manner as monkeys in the laboratory. In all three tasks, the reinforcement signal of the Critic developed in a similar manner to the responses of natural dopamine neurons in comparable learning situations, and the learning curves of the Actor replicated the progress of learning observed in the animals. 
Several manipulations further demonstrated the efficacy of the particular characteristics of the dopamine-like reinforcement signal. Omission of reward induced a phasic reduction of the reinforcement signal at the time of the reward and led to extinction of learned actions. A reinforcement signal without prediction error resulted in impaired learning because of perseverative errors. Loss of learned behavior was seen with sustained reductions of the reinforcement signal, a situation in general comparable to the loss of dopamine innervation in Parkinsonian patients and experimentally lesioned animals. The striking similarities in teaching signals and learning behavior between the computational and biological results suggest that dopamine-like reward responses may serve as effective teaching signals for learning behavioral tasks that are typical for primate cognitive behavior, such as spatial delayed responding.
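The Critic's teaching signal in this class of model is the temporal-difference error δ = r + γV(s') − V(s). A minimal sketch of how that error migrates from the reward to the reward-predicting cue over repeated trials (the three-state trial structure, learning rate, and pinned inter-trial value are simplifying assumptions, not the published implementation):

```python
def simulate(n_trials=200, alpha=0.2, gamma=1.0):
    """Tabular TD(0) over a trial: ITI -> cue -> delay -> reward.

    V[0] is the inter-trial state, pinned at 0 because the cue arrives at
    an unpredictable time, so no earlier event can come to predict it.
    """
    V = [0.0, 0.0, 0.0]  # values of ITI, cue, delay states
    trace = []
    for _ in range(n_trials):
        d_cue = gamma * V[1] - V[0]       # prediction error at cue onset
        d_mid = gamma * V[2] - V[1]       # error at the cue -> delay step
        V[1] += alpha * d_mid
        d_rew = 1.0 + gamma * 0.0 - V[2]  # error at reward delivery (terminal)
        V[2] += alpha * d_rew
        trace.append((d_cue, d_rew))
    return trace

trace = simulate()
# On trial 1 the full error (1.0) sits at the reward; after learning it has
# transferred to the cue, mirroring the dopamine responses described above.
```

Omitting the reward after learning would make d_rew negative (0 − V[2] ≈ −1), the phasic depression the abstract links to extinction.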

10.
The cholinergic pedunculopontine tegmental nucleus (PPTN) is one of the major ascending arousal systems in the brain stem and is linked to motor, limbic, and sensory systems. Based on previous studies, we hypothesized that PPTN would be related to the integrative control of movement, reinforcement, and performance of tasks in behaving animals. To investigate how PPTN contributes to the behavioral control, we analyzed the activity of PPTN neurons during visually guided saccade tasks in three monkeys in relation to saccade preparation, execution, reward, and performance of the task. During visually guided saccades, we observed saccade-related burst (26/70) and pause neurons (19/70), indicating that a subset of PPTN neurons are related to both saccade execution and fixation. Burst neurons exhibited greater selectivity for saccade direction than pause neurons. The preferred directions for both burst and pause neurons were not aligned with either horizontal or vertical axes, nor biased strongly in either the ipsilateral or the contralateral direction. The spatial representation of the saccade-related activity of PPTN neurons is different from other brain stem saccade systems and may therefore relay saccade-related activity from different areas. Increasing discharges were observed around reward onset in a subset of neurons (22/70). These neurons responded to the freely delivered rewards within ~140 ms. However, during the saccade task, the latencies of the responses around reward onset ranged between 100 ms before and 200 ms after the reward onset. These results suggest that the activity observed after appropriate saccades during the task may include responses associated with reward. We found that the reaction time to the appearance of the fixation point (FP) was longer when the animal tended to fail in the ensuing task. This reaction time to FP appearance (RTFP) served as an index of motivation.
The RTFP could be predicted by the neuronal activity of a subset of PPTN neurons (13/70) that varied their activity levels with task performance, discharging at a higher rate in successful versus error trials. A combination of responses related to saccade execution, reward delivery, and task performance was observed in PPTN neurons. We conclude from the multimodality of responses in PPTN neurons that PPTN may serve as an integrative interface between the various signals required for performing purposive behaviors.

11.
Anatomic and behavioral evidence shows that TE and perirhinal cortices are two directly connected but distinct inferior temporal areas. Despite this distinctness, the physiological properties of neurons in these two areas have generally appeared similar, with neurons in both areas showing selectivity for complex visual patterns and response modulations related to behavioral context in sequential delayed match-to-sample (DMS) trials, attention, and stimulus familiarity. Here we identify physiological differences in the neuronal activity of these two areas. We recorded single neurons from area TE and perirhinal cortex while the monkeys performed a simple behavioral task using randomly interleaved visually cued reward schedules of one, two, or three DMS trials. The monkeys used the cue's relation to the reward schedule (indicated by the brightness) to adjust their behavioral performance. They performed most quickly and most accurately in trials in which reward was immediately forthcoming and progressively less well as more intermediate trials remained. Thus the monkeys appeared more motivated as they progressed through the trial schedule. Neurons in both TE and perirhinal cortex responded to both the visual cues related to the reward schedules and the stimulus patterns used in the DMS trials. As expected, neurons in both areas showed response selectivity to the DMS patterns, and significant, but small, modulations related to the behavioral context in the DMS trial. However, TE and perirhinal neurons showed strikingly different response properties. The latency distribution of perirhinal responses was centered 66 ms later than the distribution of TE responses, a larger difference than the 10-15 ms usually found in sequentially connected visual cortical areas. In TE, cue-related responses were related to the cue's brightness. In perirhinal cortex, cue-related responses were related to the trial schedules independently of the cue's brightness.
For example, some perirhinal neurons responded in the first trial of any reward schedule including the one-trial schedule, whereas other neurons failed to respond in the first trial but responded in the last trial of any schedule. The majority of perirhinal neurons had more complicated relations to the schedule. The cue-related activity of TE neurons is interpreted most parsimoniously as a response to the stimulus brightness, whereas the cue-related activity of perirhinal neurons is interpreted most parsimoniously as carrying associative information about the animal's progress through the reward schedule. Perirhinal cortex may be part of a system gauging the relation between work schedules and rewards.

12.
The behavioral and motivational changes that result from use of abused substances depend upon activation of neuronal populations in the reward centers of the brain, located primarily in the corpus striatum in primates. To gain insight into the cellular mechanisms through which abused drugs reinforce behavior in the primate brain, changes in firing of neurons in the ventral (VStr, nucleus accumbens) and dorsal (DStr, caudate-putamen) striatum to “natural” (juice) vs. drug (i.v. cocaine) rewards were examined in four rhesus monkeys performing a visual Go-Nogo decision task. Task-related striatal neurons increased firing to one or more of the specific events that occurred within a trial represented by (1) Target stimuli (Go trials) or (2) Nogo target stimuli (Nogo trials), and (3) Reward delivery for correct performance. These three cell populations were further subdivided into categories that reflected firing exclusively on one or the other type of signaled reward (juice or cocaine) trial (20%–30% of all cells), or, a second subpopulation that fired on both (cocaine and juice) types of rewarded trial (50%). Results show that neurons in the primate striatum encoded cocaine-rewarded trials similar to juice-rewarded trials, except for (1) increased firing on cocaine-rewarded trials, (2) prolonged activation during delivery of i.v. cocaine infusion, and (3) differential firing in ventral (VStr cells) vs. dorsal (DStr cells) striatum on cocaine-rewarded trials. Reciprocal activations of antithetic subpopulations of cells during different temporal intervals within the same trial suggest a functional interaction between processes that encode drug and natural rewards in the primate brain.

13.
It has been reported that neurons in the orbitofrontal cortex (OFC) respond to emotionally significant events such as reward-predicting cues and/or the reward itself. The responses to reward-predicting cues are considered to carry the information of the predicted reward. However, few studies have focused on the relationship of the neuronal activity during a cue period with that during a reward period. We can infer that the cue responses of OFC neurons are correlated with the reward responses if they carry the information of the predicted reward. In this study, we focused on neurons that showed responses during both the cue and reward periods, and compared the response characteristics between these periods. We found that 94 of 369 OFC neurons showed significant responses during both the cue and reward periods, of which 43 preserved their selectivity between these periods. Furthermore, population analysis showed that stronger cue responses corresponded to stronger reward responses, and stronger reward responses corresponded to stronger cue responses. These results suggest that individual neurons in the OFC associate visual information with reward information, and contribute to the prediction of future rewards by forming reward representations.

14.
This study investigated how neuronal activity in orbitofrontal cortex related to the expectation of reward changed while monkeys repeatedly learned to associate new instruction pictures with known behavioral reactions and reinforcers. In a delayed go-nogo task with several trial types, an initial picture instructed the animal to execute or withhold a reaching movement and to expect a liquid reward or a conditioned auditory reinforcer. When novel instruction pictures were presented, animals learned according to a trial-and-error strategy. After experience with a large number of novel pictures, learning occurred in a few trials, and correct performance usually exceeded 70% in the first 60-90 trials. About 150 task-related neurons in orbitofrontal cortex were studied in both familiar and learning conditions and showed two major forms of changes during learning. Quantitative changes of responses to the initial instruction were seen as appearance of new responses, increase of existing responses, or decrease or complete disappearance of responses. The changes usually outlasted initial learning trials and persisted during subsequent consolidation. They often modified the trial selectivities of activations. Increases might reflect the increased attention during learning and induce neuronal changes underlying the behavioral adaptations. Decreases might be related to the unreliable reward-predicting value of frequently changing learning instructions. The second form of changes reflected the adaptation of reward expectations during learning. In initial learning trials, animals reacted as if they expected liquid reward in every trial type, although only two of the three trial types were rewarded with liquid. In close correspondence, neuronal activations related to the expectation of reward occurred initially in every trial type. 
The behavioral indices for reward expectation and their neuronal correlates adapted in parallel during the course of learning and became restricted to rewarded trials. In conclusion, these data support the notion that neurons in orbitofrontal cortex code reward information in a flexible and adaptive manner during behavioral changes after novel stimuli.

15.
We analyzed the activity of 51 trigeminothalamic neurons in the medullary dorsal horn (trigeminal nucleus caudalis) of monkeys during the performance of behavioral tasks requiring the monkeys to discriminate innocuous and noxious thermal stimuli applied to the face and to detect the onset of visual stimuli. Static properties of trigeminothalamic neurons in behaving monkeys were similar to those in anesthetized monkeys. Responses to passively presented mechanical and thermal stimuli, receptive-field properties, and conduction velocities did not differ in the awake and anesthetized states. For most wide dynamic range and nociceptive-specific trigeminothalamic neurons, there was a negative correlation between the magnitude of thermally evoked activity and behavioral latencies to discriminate 47 and 49 degrees C stimuli. Thus, both groups of neurons provide information that could be used by the monkey to discriminate noxious thermal stimuli. The magnitude of thermal responses of trigeminothalamic neurons was modulated by the behavioral significance of the stimulus. Behaviorally relevant thermal stimuli presented during the thermal discrimination task produced a greater neuronal response than equivalent irrelevant thermal stimuli presented between behavioral trials or presented while the monkey performed the visual detection task. Neurons whose activity is modulated by behavioral state are likely to be involved in discrimination of thermal stimuli, since the activity of these neurons correlates with the behavioral response to the stimuli and information from the modulated neurons is sent to the thalamus. Some trigeminothalamic neurons that exhibited somatosensory responses also responded to behaviorally relevant stimuli and events associated with trial initiation and receipt of reward in the behavioral tasks. Similar events outside a behavioral task evoked no neuronal responses. 
These task-related responses were similar to those described previously for medullary dorsal horn neurons not identified as to projection sites (14).(ABSTRACT TRUNCATED AT 400 WORDS)

16.
Dopamine (DA) neurons respond to sensory stimuli that predict reward. To understand how DA neurons acquire this ability, we trained monkeys on a one-direction-rewarded version of the memory-guided saccade task (1DR), which they performed only while we recorded from single DA neurons. In 1DR, position-reward mapping was changed across blocks of trials. In the early stage of training on 1DR, DA neurons responded to reward delivery; in later stages, they responded predominantly, and differentially, to the visual cue that predicted reward or no reward (reward predictor). We found that such a shift of activity from reward to reward predictor also occurred within a block of trials after the position-reward mapping was altered. A main effect of long-term training was to accelerate this within-block reward-to-predictor shift of DA neuronal responses. The within-block shift appeared first in the intermediate stage, but was slow, and DA neurons often responded to the cue that had indicated reward in the preceding block. In the advanced stage, the reward-to-predictor shift occurred quickly, such that the DA neurons' responses to visual cues faithfully matched the current position-reward mapping. Changes in the DA neuronal responses co-varied with the reward-predictive differentiation of saccade latency in both short-term (within-block) and long-term adaptation. DA neurons' response to the fixation point also underwent long-term changes until it occurred predominantly in the first trial within a block; this might trigger a switch between the learned sets. These results suggest that midbrain DA neurons play an essential role in adapting oculomotor behavior to frequent switches in position-reward mapping.
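The reward-to-predictor shift described in this abstract is the signature commonly modeled with temporal-difference (TD) learning. A minimal sketch (illustrative only, not the study's model; the single-cue setup, learning rate, and reward size are assumptions) shows how a DA-like prediction-error signal moves from reward delivery to the reward-predicting cue as the cue's value is learned:

```python
# Illustrative TD/Rescorla-Wagner sketch of the reward-to-predictor shift.
# Assumptions (not from the paper): one cue fully predicts a unit reward,
# the cue itself arrives unpredictably (pre-cue expected value is 0),
# and the learning rate alpha is arbitrary.
alpha = 0.2
v_cue = 0.0  # learned value of the reward-predicting cue
history = []
for trial in range(60):
    resp_cue = v_cue           # DA-like response at cue onset: unexpected value jump
    resp_reward = 1.0 - v_cue  # DA-like response at reward: prediction error
    history.append((resp_cue, resp_reward))
    v_cue += alpha * (1.0 - v_cue)  # value update toward the delivered reward

print(history[0])   # (0.0, 1.0): early training, response at reward only
print(history[-1])  # (~1.0, ~0.0): response has shifted to the predictor
```

In this toy setting the within-block re-learning after a mapping switch corresponds to resetting `v_cue` and watching the error migrate back to the cue over subsequent trials.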

17.
Decoding of temporal intervals from cortical ensemble activity
Neurophysiological, neuroimaging, and lesion studies point to a highly distributed processing of temporal information by cortico-basal ganglia-thalamic networks. However, there are virtually no experimental data on the encoding of behavioral time by simultaneously recorded cortical ensembles. We predicted temporal intervals from the activity of hundreds of neurons recorded in motor and premotor cortex as rhesus monkeys performed self-timed hand movements. During the delay periods, when animals had to estimate temporal intervals and prepare hand movements, neuronal ensemble activity encoded both the time that had elapsed since the previous hand movement and the time remaining until the onset of the next. The neurons that were most informative of these temporal intervals increased or decreased their rates throughout the delay until reaching a threshold value, at which point a movement was initiated. Variability in the self-timed delays was explainable by the variability of neuronal rates, but not of the threshold. In addition to predicting temporal intervals, the same neuronal ensemble activity was informative for generating predictions that dissociated the delay periods of the task from the movement periods. Left-hemisphere areas were the best source of predictions in one bilaterally implanted monkey overtrained to perform the task with the right hand. However, after that monkey learned to perform the task with the left hand, its left hemisphere continued to contribute to the predictions and its right hemisphere began contributing as well. We suggest that decoding of temporal intervals from bilaterally recorded cortical ensembles could improve the performance of neural prostheses for restoration of motor function.
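The decoding approach in this abstract, reading elapsed time out of ensemble firing rates, can be sketched with a linear decoder on simulated ramping neurons. Everything below is a hedged illustration: the neuron counts, ramping model, noise level, and least-squares decoder are assumptions for demonstration, not the study's actual methods or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Simulated delay-period data (hypothetical): each neuron's rate ramps up or
# down linearly with elapsed time since the last movement, plus Gaussian noise.
elapsed = rng.uniform(0.5, 2.5, n_trials)           # seconds since last movement
slopes = rng.normal(0.0, 4.0, n_neurons)            # per-neuron ramping slope (Hz/s)
base = rng.uniform(5.0, 20.0, n_neurons)            # per-neuron baseline rate (Hz)
rates = base + np.outer(elapsed, slopes) + rng.normal(0.0, 1.0, (n_trials, n_neurons))

# Linear decoder: least-squares fit of elapsed time on ensemble rates,
# trained on the first half of trials and tested on the held-out second half.
X = np.hstack([rates, np.ones((n_trials, 1))])      # append intercept column
w, *_ = np.linalg.lstsq(X[:100], elapsed[:100], rcond=None)
pred = X[100:] @ w

r = np.corrcoef(pred, elapsed[100:])[0, 1]
print(round(r, 2))  # correlation between decoded and true intervals, near 1
```

With ramping activity of this kind, even a plain linear readout recovers elapsed time accurately, which is the intuition behind using such ensembles for timing in neural prostheses.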

18.
Goal-directed behaviors require the consideration and expenditure of physical effort. The anterior cingulate cortex (ACC) appears to play an important role in evaluating effort and reward and in organizing goal-directed actions. Despite agreement regarding the involvement of the ACC in these processes, the way in which effort-, reward-, and motor-related information is registered by networks of ACC neurons is poorly understood. To contrast ACC responses to effort, reward, and motor behaviors, we trained rats on a reversal task in which the selected paths on a track determined the level of effort or reward. Effort was presented in the form of an obstacle that was climbed to obtain reward. We used single-unit recordings to identify neural correlates of effort- and reward-guided behaviors. During periods of outcome anticipation, 52% of recorded ACC neurons responded to the specific route taken to the reward while 21% responded prospectively to effort and 12% responded prospectively to reward. In addition, effort- and reward-selective neurons typically responded to the route, suggesting that these cells integrated motor-related activity with expectations of future outcomes. Furthermore, the activity of ACC neurons did not discriminate between choice and forced trials or respond to a more generalized measure of outcome value. Nearly all neural responses to effort and reward occurred after path selection and were restricted to discrete temporal/spatial stages of the task. Together, these findings support a role for the ACC in integrating route-specific actions, effort, and reward in the service of sustaining discrete movements through an effortful series of goal-directed actions.

19.
The sources of input and the behavioral effects of lesions and drug administration suggest that the striatum participates in motivational processes. We investigated the activity of single striatal neurons of monkeys in response to reward delivered for performing in a go-nogo task. A drop of liquid was given each time the animal correctly executed or withheld an arm movement in reaction to a visual stimulus. Of 1593 neurons, 115 showed increased activity in response to delivery of liquid reward in both go and nogo trials. Responding neurons were predominantly located in dorsal and ventromedial parts of anterior putamen, in dorsal and ventral caudate, and in nucleus accumbens. They were twice as frequent in ventral as compared to dorsal striatal areas. Responses occurred at a median latency of 337 ms and lasted for 525 ms, with insignificant differences between dorsal and ventral striatum. Reward responses differed from activity recorded in the face area of posterior putamen which varied synchronously with individual mouth movements. Responses were directly related to delivery of primary liquid reward and not to auditory stimuli associated with it. Most of them also occurred when reward was delivered outside of the task. These results demonstrate that neurons of dorsal and particularly ventral striatum are involved in processing information concerning the attribution of primary reward.

20.
In behavioral science, it is well known that humans and nonhuman animals are highly sensitive to differences in reward magnitude when choosing an outcome from a set of alternatives. A range of behavioral reactions is altered when animals come to expect different levels of reward outcome. Our present aim was to investigate how the expectation of different magnitudes of reward influences behavior-related neurophysiology in the anterior striatum. In a spatial delayed response task, different instruction pictures were presented to the monkey, each representing a different magnitude of juice. By reaching to the spatial location where an instruction picture had been presented, animals could receive the particular liquid amount designated by the stimulus. Reliable preferences in reward choice trials and differences in anticipatory licks, performance errors, and reaction times indicated that the animals differentially expected the various reward amounts predicted by the instruction cues. A total of 374 of 2,000 neurons in the anterior parts of the caudate nucleus, putamen, and ventral striatum showed five forms of task-related activation during the preparation or execution of movement, as well as activations preceding or following delivery of the liquid drop. Approximately one-half of these striatal neurons showed differing response levels depending on the magnitude of liquid to be received. A linear regression analysis showed that reward magnitude and single-cell discharge rate were related by a monotonic positive or negative relationship in a subset of neurons. Overall, these data support the idea that the striatum utilizes expectancies containing precise information about the predicted, forthcoming level of reward in directing general behavioral reactions.
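The linear-regression analysis mentioned in this abstract, relating single-cell discharge rate to reward magnitude, can be illustrated with simulated data. The reward magnitudes, baseline rate, slope, and noise level below are all hypothetical values chosen for demonstration; this is not the study's analysis code or recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trials for one hypothetical "positive monotonic" neuron:
# three reward magnitudes (assumed values, in ml), 30 trials each, with the
# firing rate increasing linearly with magnitude plus trial-to-trial noise.
magnitudes = np.repeat([0.1, 0.2, 0.4], 30)
rates = 8.0 + 25.0 * magnitudes + rng.normal(0.0, 1.5, magnitudes.size)

# Fit rate = b1 * magnitude + b0; the sign of b1 classifies the neuron as
# positively or negatively related to expected reward magnitude.
b1, b0 = np.polyfit(magnitudes, rates, 1)
print(b1 > 0)  # True: discharge rate rises monotonically with reward size
```

A neuron with a reliably negative `b1` would correspond to the monotonically decreasing subset the abstract describes.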


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号