Similar articles
20 similar articles found (search time: 265 ms)
1.
Responses of monkey dopamine neurons during learning of behavioral reactions. (Total citations: 24; self-citations: 0; citations by others: 24)
1. Previous studies have shown that dopamine (DA) neurons respond to stimuli of behavioral significance, such as primary reward and conditioned stimuli predicting reward and eliciting behavioral reactions. The present study investigated how these responses develop and vary when the behavioral significance of stimuli changes during different stages of learning. Impulses from DA neurons were recorded with movable microelectrodes from areas A8, A9, and A10 in two awake monkeys during the successive acquisition of two behavioral tasks. Impulses of DA neurons were distinguished from other neurons by their long duration (1.8-5.0 ms) and low spontaneous frequency (0.5-7.0 imp/s). 2. In the first task, animals learned to reach in a small box in front of them when it opened visibly and audibly. Before conditioning, DA neurons were activated the first few times that the empty box opened and animals reacted with saccadic eye movements. Neuronal and behavioral responses disappeared on repeated stimulus presentation. Thus neuronal responses were related to the novelty of an unexpected stimulus eliciting orienting behavior. 3. Subsequently, the box contained a small morsel of apple in one out of six trials. Animals reacted with ocular saccades to nearly every box opening and reached out when the morsel was present. One-third of 49 neurons were phasically activated by every door opening. The response was stronger when food was present. Thus DA neurons responded simultaneously to the sight of primary food reward and to the conditioned stimulus associated with reward. 4. When the box contained a morsel of apple on every trial, animals regularly reacted with target-directed eye and arm movements, and the majority of 76 DA neurons responded to door opening. The same neurons lacked responses to a light not associated with task performance that was illuminated at the position of the food box in alternate sessions, thus demonstrating specificity for the behavioral significance of stimuli. 5. The second task employed the operant conditioning of a reaction time situation in which animals reached from a resting key toward a lever when a small light was illuminated. DA neurons lacked responses to the unconditioned light. During task acquisition lasting 2-3 days, one-half of 25 DA neurons were phasically activated when a drop of liquid reward was delivered for reinforcing the reaching movement. In contrast, neurons were not activated when reward was delivered at regular intervals (2.5-3.5 s) but a task was not performed.(ABSTRACT TRUNCATED AT 400 WORDS)  相似文献   
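The waveform criteria quoted above (impulse duration 1.8-5.0 ms, spontaneous rate 0.5-7.0 imp/s) lend themselves to a simple screening rule. The Python sketch below is only an illustration of such a criterion; the function name, the treatment of the ranges as hard cut-offs, and the unit conventions are assumptions, not code from the study.

def is_putative_da_neuron(spike_width_ms, mean_rate_imp_s):
    # Screen a recorded unit against the duration/rate criteria reported for
    # midbrain dopamine neurons (long impulses, low spontaneous frequency).
    return (1.8 <= spike_width_ms <= 5.0) and (0.5 <= mean_rate_imp_s <= 7.0)

# Example: a unit with a 2.4 ms impulse firing at 3.1 imp/s would pass.
print(is_putative_da_neuron(2.4, 3.1))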

2.
The nucleus accumbens (NAc) is necessary for the expression of Pavlovian-conditioned approach behavior but not for the expression of instrumental behavior conditioned in sessions that set a low response requirement. Although numerous studies have characterized firing patterns of NAc neurons in relation to instrumental behavior, very little is known about how NAc neurons encode information in Pavlovian tasks. In the present study, recordings of accumbal firing patterns were made during sessions in which rats performed a Pavlovian-conditioned approach task. Most of the recorded neurons (74/83, 89%) exhibited significant responses during the conditioned stimulus (CS) presentation and/or the reward exposure. The reward responses were prevalent, predominantly inhibitory, and comparable to reward responses observed in various types of behavioral paradigms, including instrumental tasks. The CS responses could be segregated into multiple subtypes on the basis of directionality, onset latency, and duration. Several characteristics of the CS firing patterns were unique relative to cue responses observed previously during alternative types of conditioning sessions. It is possible that the novel firing patterns correspond to the differential contributions of the accumbens to Pavlovian-conditioned approach behavior and instrumentally conditioned behavior. Regardless, the novel patterns of firing add to existing evidence that characterization of accumbal firing patterns in Pavlovian tasks may provide additional information about the neurophysiological mechanisms that mediate accumbal contributions to behavior.  相似文献   

3.
Asahi T  Uwano T  Eifuku S  Tamura R  Endo S  Ono T  Nishijo H 《Neuroscience》2006,143(2):627-639
Anatomical connections of the insular cortex suggest its involvement in cognition, emotion, memory, and behavioral manifestation. However, there have been few neurophysiological studies on the insular cortex in primates, in relation to such higher cognitive functions. In the present study, neural activity was recorded from the monkey insular cortex during performance of a delayed-response delayed-reward go/nogo task. In this task, visual stimuli indicating go or nogo responses associated with reward (reward trials) and with no reward (no-reward trials) were presented after eye fixation. In the reward trials, the monkey was required to release a button during presentation of the 2nd visual stimulus after a delay period (delay 1). Then, a juice reward was delivered after another delay (delay 2). The results indicated that the neurons responding in each epoch of the task were topographically localized within the insular cortex, consistent with previous anatomical studies indicating topographical distributions of afferent inputs from other subcortical and cortical sensory areas. Furthermore, some insular neurons 1) nonspecifically responded to the visual cues and during fixation; 2) responded to the visual cues predicting reward and during the delay period before reward delivery; 3) responded differentially in go/nogo trials during delay 2; and 4) responded around button manipulation. The observed patterns of insular-neuron responses and the correspondence of their topographical localization to those in previous anatomical studies suggest that the insular cortex is involved in attention- and reward-related functions and might monitor and integrate activities of other brain regions during cognition and behavioral manifestation.

4.
Neuronal activity was recorded from the anterior cingulate cortex of behaving rats during discrimination and learning of conditioned stimuli associated with or without reinforcements. The rats were trained to lick a protruding spout just after a conditioned stimulus to obtain reward (intracranial self-stimulation or sucrose solution) or to avoid aversion. The conditioned stimuli included both elemental (auditory or visual stimuli) and configural (simultaneous presentation of auditory and visual stimuli predicting reward outcome opposite to that predicted by each stimulus presented alone) stimuli. Of the 62 anterior cingulate neurons responding during the task, 38 and four responded differentially and non-differentially to the conditioned stimuli (conditioned stimulus-related neurons), respectively. Of the 38 differential conditioned stimulus-related neurons, 33 displayed excitatory (n = 10) and inhibitory (n = 23) responses selectively to the conditioned stimuli predicting reward. These excitatory and inhibitory differential conditioned stimulus-related neurons were located mainly in the cingulate cortex areas 1 and 3 of the rostral and ventral parts of the anterior cingulate cortex, respectively. The remaining 20 neurons responded mainly during intracranial self-stimulation and/or ingestion of sucrose (ingestion/intracranial self-stimulation-related neurons). Increase in activity of the ingestion/intracranial self-stimulation-related neurons was correlated to the first lick to obtain rewards during the task, suggesting that the activity reflected some aspects of motor functions for learned instrumental behaviors. These ingestion/intracranial self-stimulation-related neurons were located sparsely in cingulate cortex area 1 of the rostral part of the anterior cingulate cortex and densely in frontal area 2 of the caudal and dorsal parts of the anterior cingulate cortex. Analysis by the multidimensional scaling of responses of 38 differential conditioned stimulus-related neurons indicated that the anterior cingulate cortex categorized the conditioned stimuli into three groups based on reward contingency, regardless of the physical characteristics of the stimuli, in a two-dimensional space; the three conditioned (two elemental and one configural) stimuli predicting sucrose solution, the three conditioned (two elemental and one configural) stimuli predicting no reward, and the lone conditioned stimulus predicting intracranial self-stimulation. The results suggest that the anterior cingulate cortex is organized topographically; stimulus attributes predicting reward or no reward are represented in the rostral and ventral parts of the anterior cingulate cortex, while the caudal and dorsal parts of the anterior cingulate cortex are related to execution of learned instrumental behaviors. These results are in line with recent neuropsychological studies suggesting that the rostral part of the anterior cingulate cortex plays a crucial role in socio-emotional behaviors by assigning a positive or negative value to future outcomes.  相似文献   

5.
Learning theory emphasizes the importance of expectations in the control of instrumental action. This study investigated the variation of behavioral reactions toward different rewards as an expression of differential expectations of outcomes in primates. We employed several versions of two basic behavioral paradigms, the spatial delayed response task and the delayed reaction task. These tasks are commonly used in neurobiological studies of working memory, movement preparation, and event expectation involving the frontal cortex and basal ganglia. An initial visual instruction stimulus indicated to the animal which one of several food or liquid rewards would be delivered after each correct behavioral response, or whether or not a reward could be obtained. We measured the reaction times of the operantly conditioned arm movement necessary for obtaining the reward, and the durations of anticipatory licking prior to liquid reward delivery as a Pavlovian conditioned response. The results showed that both measures varied depending on the reward predicted by the initial instruction. Arm movements were performed with significantly shorter reaction times for foods or liquids that were more preferred by the animal than for less preferred ones. Still larger differences were observed between rewarded and unrewarded trials. An interesting effect was found in unrewarded trials, in which reaction times were significantly shorter when a highly preferred reward was delivered in the alternative rewarded trials of the same trial block as compared to a less preferred reward. Anticipatory licks preceding the reward were significantly longer when highly preferred rather than less preferred rewards, or no rewards, were predicted. These results demonstrate that behavioral reactions preceding rewards may vary depending on the predicted future reward and suggest that monkeys differentially expect particular outcomes in the presently investigated tasks.  相似文献   

6.
The orbitofrontal cortex appears to be involved in the control of voluntary, goal-directed behavior by motivational outcomes. This study investigated how orbitofrontal neurons process information about rewards in a task that depends on intact orbitofrontal functions. In a delayed go-nogo task, animals executed or withheld a reaching movement and obtained liquid or a conditioned sound as reinforcement. An initial instruction picture indicated the behavioral reaction to be performed (movement vs. nonmovement) and the reinforcer to be obtained (liquid vs. sound) after a subsequent trigger stimulus. We found task-related activations in 188 of 505 neurons in rostral orbitofrontal area 13, entire area 11, and lateral area 14. The principal task-related activations consisted of responses to instructions, activations preceding reinforcers, or responses to reinforcers. Most activations reflected the reinforcing event rather than other task components. Instruction responses occurred either in liquid- or sound-reinforced trials but rarely distinguished between movement and nonmovement reactions. These instruction responses reflected the predicted motivational outcome rather than the behavioral reaction necessary for obtaining that outcome. Activations preceding the reinforcer began slowly and terminated immediately after the reinforcer, even when the reinforcer occurred earlier or later than usual. These activations usually preceded the liquid reward but rarely the conditioned auditory reinforcer. The activations also preceded expected drops of liquid delivered outside the task, suggesting a primary appetitive rather than a task-reinforcing relationship that apparently was related to the expectation of reward. Responses after the reinforcer occurred in liquid- but rarely in sound-reinforced trials. Reward-preceding activations and reward responses were unrelated temporally to licking movements. Several neurons showed reward responses outside the task but instruction responses during the task, indicating a response transfer from primary reward to the reward-predicting instruction, possibly reflecting the temporal unpredictability of reward. In conclusion, orbitofrontal neurons report stimuli associated with reinforcers, are concerned with the expectation of reward, and detect reward delivery at trial end. These activities may contribute to the processing of reward information for the motivational control of goal-directed behavior.

7.
1. This study investigated neuronal activity in the striatum preceding predictable environmental events and behavioral reactions. Monkeys performed in a delayed go-nogo task that included separate time periods during which animals expected signals of behavioral significance, prepared for execution or inhibition of arm reaching movements, and expected the delivery of reward. In the task, animals were instructed by a green light cue to perform an arm reaching movement when a trigger stimulus came on approximately 3 s later (go situation). Movement was withheld after the same trigger light when the instruction cue had been red (nogo situation). Liquid reward was delivered on correct performance in both situations. 2. A total of 1,173 neurons were studied in the striatum (caudate nucleus and putamen) of 3 animals, of which 615 (52%) showed some change in activity during task performance. This report describes how the activity of 193 task-related neurons increased in advance of at least 1 component of the task, namely the instruction cue, the trigger stimulus, or the delivery of liquid reward. These neurons were found in dorsal and anterior parts of caudate and putamen and were slightly more frequent in the proximity of the internal capsule. 3. The activity of 16 neurons increased in both go and nogo trials before the onset of the instruction and subsided shortly after this signal. These activations may be related to the expectation of the instruction as the first signal in each trial. 4. The activity of 15 neurons increased between the instruction and the trigger stimulus in both go and nogo trials. These activations may be related to the expectation of the trigger stimulus independent of an arm movement. Further 56 neurons showed sustained activations only when the instruction requested a movement reaction. Activations were absent in trials in which the movement was withheld. Twenty-one of these neurons were tested with 2 different movement targets, 5 of which showed activity related to the direction of movement. These activations may be related to the preparation of movement or expectation of the specific movement triggering signal. The activity of an additional 20 neurons was unmodulated before the trigger stimulus in movement trials but increased in the interval between the no-movement instruction and the trigger stimulus for withholding the movement. These activations may be related to the preparation of movement inhibition as specific nogo reaction.(ABSTRACT TRUNCATED AT 400 WORDS)  相似文献   

8.
R E Suri  W Schultz 《Neuroscience》1999,91(3):871-890
This study investigated how the simulated response of dopamine neurons to reward-related stimuli could be used as reinforcement signal for learning a spatial delayed response task. Spatial delayed response tasks assess the functions of frontal cortex and basal ganglia in short-term memory, movement preparation and expectation of environmental events. In these tasks, a stimulus appears for a short period at a particular location, and after a delay the subject moves to the location indicated. Dopamine neurons are activated by unpredicted rewards and reward-predicting stimuli, are not influenced by fully predicted rewards, and are depressed by omitted rewards. Thus, they appear to report an error in the prediction of reward, which is the crucial reinforcement term in formal learning theories. Theoretical studies on reinforcement learning have shown that signals similar to dopamine responses can be used as effective teaching signals for learning. A neural network model implementing the temporal difference algorithm was trained to perform a simulated spatial delayed response task. The reinforcement signal was modeled according to the basic characteristics of dopamine responses to novel stimuli, primary rewards and reward-predicting stimuli. A Critic component analogous to dopamine neurons computed a temporal error in the prediction of reinforcement and emitted this signal to an Actor component which mediated the behavioral output. The spatial delayed response task was learned via two subtasks introducing spatial choices and temporal delays, in the same manner as monkeys in the laboratory. In all three tasks, the reinforcement signal of the Critic developed in a similar manner to the responses of natural dopamine neurons in comparable learning situations, and the learning curves of the Actor replicated the progress of learning observed in the animals. Several manipulations demonstrated further the efficacy of the particular characteristics of the dopamine-like reinforcement signal. Omission of reward induced a phasic reduction of the reinforcement signal at the time of the reward and led to extinction of learned actions. A reinforcement signal without prediction error resulted in impaired learning because of perseverative errors. Loss of learned behavior was seen with sustained reductions of the reinforcement signal, a situation in general comparable to the loss of dopamine innervation in Parkinsonian patients and experimentally lesioned animals. The striking similarities in teaching signals and learning behavior between the computational and biological results suggest that dopamine-like reward responses may serve as effective teaching signals for learning behavioral tasks that are typical for primate cognitive behavior, such as spatial delayed responding.  相似文献   
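As a rough illustration of the Critic/Actor architecture described above, the sketch below implements a tabular actor-critic with a TD(0) prediction-error signal on a toy sequential task. The environment, the state and action sizes, the learning rates, and all variable names are assumptions chosen for illustration; they do not reproduce the authors' network or the spatial delayed response task.

import numpy as np

n_states, n_actions = 5, 2
V = np.zeros(n_states)                   # Critic: state-value estimates
prefs = np.zeros((n_states, n_actions))  # Actor: action preferences
alpha_v, alpha_a, gamma = 0.1, 0.1, 0.9  # learning rates and discount factor

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(state, action):
    # Toy environment: choosing action 1 in the final state yields reward.
    reward = 1.0 if (state == n_states - 1 and action == 1) else 0.0
    done = state == n_states - 1
    next_state = min(state + 1, n_states - 1)
    return next_state, reward, done

rng = np.random.default_rng(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        a = rng.choice(n_actions, p=softmax(prefs[s]))
        s_next, r, done = step(s, a)
        # TD error: positive for unpredicted reward, negative for omitted
        # reward, near zero once reward is fully predicted (dopamine-like).
        delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
        V[s] += alpha_v * delta         # Critic update
        prefs[s, a] += alpha_a * delta  # Actor update
        s = s_next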

9.
The sources of input and the behavioral effects of lesions and drug administration suggest that the striatum participates in motivational processes. We investigated the activity of single striatal neurons of monkeys in response to reward delivered for performing in a go-nogo task. A drop of liquid was given each time the animal correctly executed or withheld an arm movement in reaction to a visual stimulus. Of 1593 neurons, 115 showed increased activity in response to delivery of liquid reward in both go and nogo trials. Responding neurons were predominantly located in dorsal and ventromedial parts of anterior putamen, in dorsal and ventral caudate, and in nucleus accumbens. They were twice as frequent in ventral as compared to dorsal striatal areas. Responses occurred at a median latency of 337 ms and lasted for 525 ms, with insignificant differences between dorsal and ventral striatum. Reward responses differed from activity recorded in the face area of posterior putamen which varied synchronously with individual mouth movements. Responses were directly related to delivery of primary liquid reward and not to auditory stimuli associated with it. Most of them also occurred when reward was delivered outside of the task. These results demonstrate that neurons of dorsal and particularly ventral striatum are involved in processing information concerning the attribution of primary reward.

10.
The role of the substantia nigra pars reticulata (SNpr) has been studied in the head-free monkey during orienting behaviour in response to visual instruction signals triggering head positioning and conditioned arm movement. During the behavioural responses we recorded the electromyographic activities of neck muscles and triceps brachii, head movement, horizontal electrooculogram and single unit activity of SNpr neurons. Activity of 38 neurons located in the medial part of SNpr was analysed during the visuo-motor task. Forty percent of these units showed a moderate decrease in tonic firing rate during postural preparation preceding the orientation toward the eccentric visual signal. This decrease, unrelated to saccadic eye movements per se, was followed by a marked pause observed when the rewarded stimulus was switched on and the conditioned arm movement was executed to get the reward. These data suggest that the pause in discharge of these SNpr neurons is time-locked to behaviourally relevant visual stimuli and/or appropriate motor responses.

11.
Animals optimize behaviors by predicting future critical events based on histories of actions and their outcomes. When behavioral outcomes like reward and aversion are signaled by current external cues, actions are directed to acquire the reward and avoid the aversion. The basal ganglia are thought to be the brain locus for reward-based adaptive action planning and learning. To understand the role of striatum in coding outcomes of forthcoming behavioral responses, we addressed two specific questions. First, how are the histories of reward and aversion used for encoding forthcoming outcomes in the striatum during a series of instructed behavioral responses? Second, how are the behavioral responses and their instructed outcomes represented in the striatum? We recorded discharges of 163 presumed projection neurons in the striatum while monkeys performed a visually instructed lever-release task for reward, aversion, and sound outcomes, whose occurrences could be estimated by their histories. Before outcome instruction, discharge rates of a subset of neurons activated in this epoch showed positive or negative regression slopes with reward history (24/44), that is, to the number of trials since the last reward trial, which changed in parallel with reward probability of current trials. The history effect was also observed for the aversion outcome but in far fewer neurons (3/44). Once outcomes were instructed in the same task, neurons selectively encoded the outcomes before and after behavioral responses (reward, 46/70; aversion, 6/70; sound, 6/70). The history- and current instruction-based coding of forthcoming behavioral outcomes in the striatum might underlie outcome-oriented behavioral modulation.  相似文献   
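A minimal sketch of the reward-history analysis described above: count the trials elapsed since the last rewarded trial and regress the pre-instruction discharge rate on that count. The simulated data, the variable names, and the simple least-squares fit are assumptions for illustration only; they are not the recording data or the authors' analysis code.

import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
rewarded = rng.random(n_trials) < 0.4          # simulated reward outcomes

# History regressor: number of trials since the last rewarded trial.
since_reward = np.zeros(n_trials)
count = 0
for t in range(n_trials):
    since_reward[t] = count
    count = 0 if rewarded[t] else count + 1

# Simulated pre-instruction firing rate that grows with reward history.
rate = 5.0 + 0.8 * since_reward + rng.normal(0.0, 1.0, n_trials)

# Least-squares regression; a positive slope indicates a history effect.
slope, intercept = np.polyfit(since_reward, rate, 1)
print(f"regression slope: {slope:.2f} spikes/s per trial since last reward")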

12.
The nucleus accumbens (NAc) plays an important role in both appetitive and consummatory behavior. To examine how NAc neurons encode information during reward consumption, we recorded the firing activity of rat NAc neurons during the performance of a discriminative stimulus task. In this task, the animal must make an operant response to an intermittently presented cue to obtain a sucrose reward delivered in a reward receptacle. Uncued entries to the receptacle were not rewarded. Both excitations and inhibitions during reward consumption were observed, but substantially more neurons were inhibited than excited. These excitations and inhibitions began when the animal entered the reward receptacle and ended when the animal exited the receptacle. Both excitations and inhibitions were much smaller or nonexistent when the animal made uncued entries into the reward receptacle. In one set of experiments, we randomly withheld the reward in some cued trials that would otherwise have been rewarded. Excitations and inhibitions were of similar magnitude whether or not the reward was delivered. This indicates that the sensory stimulus of reward does not drive these phasic responses; instead, the reward-associated responses may be driven by the conditioned stimuli associated with reward, or they may encode information about consummatory motor activity. Another population of NAc neurons was excited on exit from the reward receptacle. Many of these excitations persisted for tens of seconds after the receptacle exit and showed a significant inverse correlation with the rate of uncued operant responding. These findings are consistent with a contribution of NAc neurons to both reward consummatory and reward seeking behavior.  相似文献   

13.
This study investigated how neuronal activity in orbitofrontal cortex related to the expectation of reward changed while monkeys repeatedly learned to associate new instruction pictures with known behavioral reactions and reinforcers. In a delayed go-nogo task with several trial types, an initial picture instructed the animal to execute or withhold a reaching movement and to expect a liquid reward or a conditioned auditory reinforcer. When novel instruction pictures were presented, animals learned according to a trial-and-error strategy. After experience with a large number of novel pictures, learning occurred in a few trials, and correct performance usually exceeded 70% in the first 60-90 trials. About 150 task-related neurons in orbitofrontal cortex were studied in both familiar and learning conditions and showed two major forms of changes during learning. Quantitative changes of responses to the initial instruction were seen as appearance of new responses, increase of existing responses, or decrease or complete disappearance of responses. The changes usually outlasted initial learning trials and persisted during subsequent consolidation. They often modified the trial selectivities of activations. Increases might reflect the increased attention during learning and induce neuronal changes underlying the behavioral adaptations. Decreases might be related to the unreliable reward-predicting value of frequently changing learning instructions. The second form of changes reflected the adaptation of reward expectations during learning. In initial learning trials, animals reacted as if they expected liquid reward in every trial type, although only two of the three trial types were rewarded with liquid. In close correspondence, neuronal activations related to the expectation of reward occurred initially in every trial type. The behavioral indices for reward expectation and their neuronal correlates adapted in parallel during the course of learning and became restricted to rewarded trials. In conclusion, these data support the notion that neurons in orbitofrontal cortex code reward information in a flexible and adaptive manner during behavioral changes after novel stimuli.  相似文献   

14.
It has been proposed that nucleus accumbens neurons respond to outcome (reward and punishment) and outcome-predictive information. Alternatively, it has been suggested that these neurons respond to salient stimuli, regardless of their outcome-predictive properties, to facilitate a switch in ongoing behavior. We recorded the activity of 82 single nucleus accumbens neurons in thirsty rats responding within a modified go/no-go task. The task design allowed us to analyze whether neurons responded to conditioned stimuli that predicted rewarding (saccharin) or aversive (quinine) outcomes, and whether the neural responses correlated with behavioral switching. Approximately one third (28/82) of nucleus accumbens neurons exhibited 35 responses to conditioned stimuli. Over two-thirds of these responses encoded the nature of the upcoming rewarding (19/35) or aversive (5/35) outcome. No response was selective solely for the switching of the rat's behavior, although approximately one third of the responses (11/35) predicted the upcoming outcome and were correlated with the presence or absence of a subsequent behavioral switch. Our data suggest a primary functional role for the nucleus accumbens in encoding outcome-predictive information and a more limited role in behavioral switching.

15.
The dynamics of interneuronal functional connections were studied in the prefrontal cortex of dogs performing a task consisting of unforeseen remodeling of movement conditioned reflexes. An original method was used to find and classify the temporal patterns of linked spikes coming from several simultaneously recorded neurons. This procedure showed that in 33 pairs of neurons (87% of the total number of pairs showing interneuronal functional connections), parts of the conditioned reflex program were associated with behaviorally significant changes in the functional relationship between the neurons. In different behavioral situations, linked activation of a given pair of cells was restricted to different stages of the performance of the conditioned reflex task. In 17 pairs of neurons, the periods of switching of interneuronal functional connections, i.e., the intervals in which linked spikes were absent during all stages of task performance, were seen only in a particular behavioral context different from the situation in which these same cells generated linked discharges. The specific characteristics of different methods of analyzing the dynamics of interneuronal functional connections in conditions of a dynamic behavioral context and multifactorial determination of conditioned reflex activity are discussed.  相似文献   

16.
The behavioral relationships of 396 striatum neurons with regular, tonically elevated discharge rates were studied. While monkeys performed a delayed go-nogo task, neurons predominantly located in medial putamen responded with phasic depressions (n = 30) or activations (n = 5) to task-specific stimuli. Particularly effective was an instruction light preparing for movement or no-movement reactions, and an auditory signal associated with reward delivery. Stimuli triggering arm or mouth movements were less effective. The data demonstrate that these usually poorly modulated neurons display context-dependent phasic activity in specific behavioral situations.

17.
The projection from the thalamic centre médian-parafascicular (CM-Pf) complex to the caudate nucleus and putamen forms a massive striatal input system in primates. We examined the activity of 118 neurons in the CM and 62 neurons in the Pf nuclei of the thalamus and 310 tonically active neurons (TANs) in the striatum in awake behaving macaque monkeys and analyzed the effects of pharmacologic inactivation of the CM-Pf on the sensory responsiveness of the striatal TANs. A large proportion of CM and Pf neurons responded to visual (53%) and/or auditory beep (61%) or click (91%) stimuli presented in behavioral tasks, and many responded to unexpected auditory, visual, or somatosensory stimuli presented outside the task context. The neurons fell into two classes: those having short-latency facilitatory responses (SLF neurons, predominantly in the Pf) and those having long-latency facilitatory responses (LLF neurons, predominantly in the CM). Responses of both types of neuron appeared regardless of whether or not the sensory stimuli were associated with reward. These response characteristics of CM-Pf neurons sharply contrasted with those of TANs in the striatum, which under the same conditions responded preferentially to stimuli associated with reward. Many CM-Pf neurons responded to alerting stimuli such as unexpected handclaps and noises only for the first few times that they occurred; after that, the identical stimuli gradually became ineffective in evoking responses. Habituation of sensory responses was particularly common for the LLF neurons. Inactivation of neuronal activity in the CM and Pf by local infusion of the GABA(A) receptor agonist, muscimol, almost completely abolished the pause and rebound facilitatory responses of TANs in the striatum. Such injections also diminished behavioral responses to stimuli associated with reward. We suggest that neurons in the CM and Pf supply striatal neurons with information about behaviorally significant sensory events that can activate conditional responses of striatal neurons in combination with dopamine-mediated nigrostriatal inputs having motivational value.  相似文献   

18.
Anatomic and behavioral evidence shows that TE and perirhinal cortices are two directly connected but distinct inferior temporal areas. Despite this distinctness, physiological properties of neurons in these two areas generally have been similar with neurons in both areas showing selectivity for complex visual patterns and showing response modulations related to behavioral context in the sequential delayed match-to-sample (DMS) trials, attention, and stimulus familiarity. Here we identify physiological differences in the neuronal activity of these two areas. We recorded single neurons from area TE and perirhinal cortex while the monkeys performed a simple behavioral task using randomly interleaved visually cued reward schedules of one, two, or three DMS trials. The monkeys used the cue's relation to the reward schedule (indicated by the brightness) to adjust their behavioral performance. They performed most quickly and most accurately in trials in which reward was immediately forthcoming and progressively less well as more intermediate trials remained. Thus the monkeys appeared more motivated as they progressed through the trial schedule. Neurons in both TE and perirhinal cortex responded to both the visual cues related to the reward schedules and the stimulus patterns used in the DMS trials. As expected, neurons in both areas showed response selectivity to the DMS patterns, and significant, but small, modulations related to the behavioral context in the DMS trial. However, TE and perirhinal neurons showed strikingly different response properties. The latency distribution of perirhinal responses was centered 66 ms later than the distribution of TE responses, a larger difference than the 10-15 ms usually found in sequentially connected visual cortical areas. In TE, cue-related responses were related to the cue's brightness. In perirhinal cortex, cue-related responses were related to the trial schedules independently of the cue's brightness. For example, some perirhinal neurons responded in the first trial of any reward schedule including the one trial schedule, whereas other neurons failed to respond in the first trial but respond in the last trial of any schedule. The majority of perirhinal neurons had more complicated relations to the schedule. The cue-related activity of TE neurons is interpreted most parsimoniously as a response to the stimulus brightness, whereas the cue-related activity of perirhinal neurons is interpreted most parsimoniously as carrying associative information about the animal's progress through the reward schedule. Perirhinal cortex may be part of a system gauging the relation between work schedules and rewards.  相似文献   

19.
The intermediate cerebellum (the intermediate cerebellar cortex and interposed nuclei) and associated brainstem circuits are essential for the acquisition and expression of classically conditioned eyeblinks in the rabbit. The purpose of the present experiment was to determine whether these circuits are also involved in adaptive eyelid closure learned in an instrumental paradigm. For that purpose, rabbits with unrestrained eyelids were trained in two tasks: (1) classical conditioning of the eyeblink; and (2) a new instrumental task in which they avoided delivery of an aversive stimulus by maintaining tonic eyelid closure. To examine the involvement of the intermediate cerebellum in these two types of learned behavior, the cerebellar interposed nuclei were injected with the GABAA agonist muscimol and with the GABAA antagonist picrotoxin. Inactivating the interposed nuclei with muscimol abolished classically conditioned eyeblinks and severely affected the rabbit's capacity to maintain tonic eyelid closure. On the other hand, reducing inhibition with picrotoxin failed to interrupt the learned responses and increased the amplitude of eyelid closure. These data indicate that the cerebellar interposed nuclei control both phasic classically conditioned eyeblinks and tonic instrumental eyelid closure. To account for this new finding, a "hybrid" hypothesis combining the cerebellar learning hypothesis and the performance hypothesis is proposed.  相似文献   

20.
Dworkin and Dworkin (1990) reported that conditioned responses of the tibial nerve (a putative measure of skeletal motor activity) were uncorrelated with conditioned responses of the plantar vasculature during discriminative Pavlovian conditioning in the chronically paralyzed rat. On the basis of this finding, Dworkin and Dworkin concluded that the vasomotor response had not been mediated by skeletal motor processes. This commentary presents neuroanatomical, physiological, and behavioral evidence that suggests that sudomotor (sweat gland) and not skeletal motor efference might have been responsible for the classically conditioned tibial nerve response of Dworkin and Dworkin (1990). If this interpretation is correct, then Dworkin and Dworkin have documented an autonomic-autonomic dissociation, not a skeletal motor-autonomic dissociation. Response mechanisms in Pavlovian and instrumental autonomic conditioning are discussed.  相似文献   
