Similar Documents
20 similar documents found (search time: 31 ms)
1.
Self-organizing neural projections. (Cited 2 times: 0 self-citations, 2 by others)
Teuvo Kohonen, Neural Networks, 2006, 19(6-7): 723-733
The Self-Organizing Map (SOM) algorithm was developed for the creation of abstract-feature maps. It has been accepted widely as a data-mining tool, and the principle underlying it may also explain how the feature maps of the brain are formed. However, it is not correct to use this algorithm for a model of pointwise neural projections such as the somatotopic maps or the maps of the visual field, first of all, because the SOM does not transfer signal patterns: the winner-take-all function at its output only defines a singular response. Neither can the original SOM produce superimposed responses to superimposed stimulus patterns. This presentation introduces a new self-organizing system model related to the SOM that has a linear transfer function for patterns and combinations of patterns all the time. Starting from a randomly interconnected pair of neural layers, and using random mixtures of patterns for training, it creates a pointwise-ordered projection from the input layer to the output layer. If the input layer consists of feature detectors, the output layer forms a feature map of the inputs.
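Below is a minimal sketch of the conventional SOM update that the abstract argues against for modeling pointwise projections: a winner-take-all step picks a single responding unit, then a neighborhood-weighted move pulls nearby units toward the input. Grid size, learning rate, and neighborhood width are illustrative choices, and this is the standard algorithm, not the paper's new linear-transfer model.

```python
import numpy as np

# Standard SOM step: winner-take-all response plus neighborhood-weighted move.
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(10) for j in range(10)])  # 10x10 map
W = rng.random((100, 2))                 # one weight vector per map unit

def som_step(x, W, eta=0.1, sigma=2.0):
    # Winner-take-all: only the best-matching unit defines the response.
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Gaussian neighborhood on the map grid around the winner.
    d2 = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    return W + eta * h[:, None] * (x - W)

for _ in range(2000):
    W = som_step(rng.random(2), W)
```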

2.
Recent efforts to develop large-scale brain and neurocognitive architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason for this is that most conventional SOMs use a static encoding representation: each input pattern or sequence is effectively represented as a fixed point activation pattern in the map layer, something that is inconsistent with the rhythmic oscillatory activity observed in the brain. Here we develop and study an alternative encoding scheme that instead uses sparsely-coded limit cycles to represent external input patterns/sequences. We establish conditions under which learned limit cycle representations arise reliably and dominate the dynamics in a SOM. These limit cycles tend to be relatively unique for different inputs, robust to perturbations, and fairly insensitive to timing. In spite of the continually changing activity in the map layer when a limit cycle representation is used, map formation continues to occur reliably. In a two-SOM architecture where each SOM represents a different sensory modality, we also show that after learning, limit cycles in one SOM can correctly evoke corresponding limit cycles in the other, and thus there is the potential for multi-SOM systems using limit cycles to work effectively as hetero-associative memories. While the results presented here are only first steps, they establish the viability of SOM models based on limit cycle activity patterns, and suggest that such models merit further study.
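As a didactic stand-in for the encoding idea above (not the paper's SOM-based model), the sketch below shows how a fixed random recurrent network with sparse top-k activation, seeded by an input pattern, settles into a repeating sequence of sparse states, a limit cycle, whose period can be detected by remembering visited states. All sizes and constants are arbitrary assumptions.

```python
import numpy as np

# Sparse deterministic recurrent dynamics always end up on a limit cycle,
# since the state space is finite; the cycle can serve as the input's code.
rng = np.random.default_rng(11)
n, k = 50, 5
J = rng.normal(0, 1, (n, n))                  # fixed recurrent weights

def step(state):
    drive = J @ state
    nxt = np.zeros(n)
    nxt[np.argsort(drive)[-k:]] = 1.0         # sparse top-k activation
    return nxt

def limit_cycle(x0, max_steps=200):
    state, seen = x0, {}
    for t in range(max_steps):
        key = tuple(state)
        if key in seen:
            return t - seen[key]              # cycle length
        seen[key] = t
        state = step(state)
    return None                               # no cycle found within budget

x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = 1.0
print("cycle length:", limit_cycle(x0))
```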

3.
4.
This paper explores the combination of self-organizing map (SOM) and feedback, in order to represent sequences of inputs. In general, neural networks with time-delayed feedback represent time implicitly, by combining current inputs and past activities. It has been difficult to apply this approach to SOM, because feedback generates instability during learning. We demonstrate a solution to this problem, based on a nonlinearity. The result is a generalization of SOM that learns to represent sequences recursively. We demonstrate that the resulting representations are adapted to the temporal statistics of the input series.
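One published formulation in this spirit is the recursive SOM, where each unit matches both the current input and the previous map activity, and an exponential squashing of distances is the bounded nonlinearity that keeps the feedback loop stable. The sketch below follows that recipe with illustrative constants alpha, beta, and eta, and a trivial neighborhood for brevity; it should be read as an approximation of the approach, not the paper's exact algorithm.

```python
import numpy as np

# Recursive SOM sketch: units carry input weights W and context weights C
# that match the previous map activity; exp(-d) bounds the fed-back activity.
rng = np.random.default_rng(1)
n_units, dim = 64, 3
W = rng.random((n_units, dim))        # input weights
C = rng.random((n_units, n_units))    # context weights (previous activity)
y_prev = np.zeros(n_units)

def recsom_step(x, W, C, y_prev, alpha=1.0, beta=0.5, eta=0.05):
    d = alpha * np.sum((W - x) ** 2, axis=1) \
        + beta * np.sum((C - y_prev) ** 2, axis=1)
    y = np.exp(-d)                    # bounded activity: the nonlinearity
    winner = np.argmax(y)
    h = np.zeros(n_units); h[winner] = 1.0   # trivial neighborhood for brevity
    W += eta * h[:, None] * (x - W)
    C += eta * h[:, None] * (y_prev - C)
    return W, C, y

for _ in range(500):
    W, C, y_prev = recsom_step(rng.random(dim), W, C, y_prev)
```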

5.
Autism is a developmental disorder with possibly multiple pathophysiologies. It has been theorized that cortical feature maps in individuals with autism are inadequate for forming abstract codes and representations. Cortical feature maps make it possible to classify stimuli, such as phonemes of speech, disregarding incidental detail. Hierarchies of such maps are instrumental in creating abstract codes and representations of objects and events. Self-Organizing Maps (SOMs) are artificial neural networks that offer insights into the development of cortical feature maps. Attentional impairment is prevalent in autism, but whether it is caused by attention-shift impairment, strong familiarity preference, or a negative response to novelty is a matter of debate. We model attention shift during self-organization by presenting a SOM with stimuli from two sources in four different modes: novelty seeking (regarded as normal learning), attention-shift impairment (shifts are made with a low probability), familiarity preference (shifts toward whichever of the two sources is less familiar to the SOM are made with a lower probability), and familiarity preference in conjunction with attention-shift impairment. The resulting feature maps from learning with novelty seeking and with attention-shift impairment are much the same, except that learning with attention-shift impairment often yields maps with somewhat better discrimination capacity than learning with novelty seeking. In contrast, the maps resulting from learning with strong familiarity preference are adapted to one of the sources at the expense of the other, and if one of the sources has a set of stimuli with smaller variability, the resulting maps are adapted to stimuli from that source. When familiarity preference is less pronounced, the resulting maps may become normal or fully restricted to one of the sources, in the latter case always the source with smaller variability, if such a source is present. Such learning, in a system with many different maps, will result in very uneven capacities. Surprisingly, learning with familiarity preference in conjunction with attention-shift impairment has a higher probability of developing normal maps than learning with familiarity preference alone.
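A hedged sketch of how the presentation modes could be wired into a training loop: two stimulus sources with different variability, and a mode-dependent probability of shifting attention between them. The probabilities, the familiarity counter, and the source statistics are placeholders, not the paper's settings; the fourth mode would combine the familiarity rule with the low baseline shift probability.

```python
import numpy as np

# Mode-dependent attention shifting between two stimulus sources feeding a SOM.
rng = np.random.default_rng(2)
sources = [lambda: rng.normal(0.3, 0.05, 2),   # source A: small variability
           lambda: rng.normal(0.7, 0.20, 2)]   # source B: large variability

def shift_probability(mode, current, familiarity):
    if mode == "novelty_seeking":
        return 0.5                                  # free shifting
    if mode == "shift_impairment":
        return 0.05                                 # shifts are rare
    if mode == "familiarity_preference":
        other = 1 - current
        # Shift less often toward the source the map is less familiar with.
        return 0.5 if familiarity[other] >= familiarity[current] else 0.05
    raise ValueError(mode)

current, familiarity = 0, [0.0, 0.0]
for _ in range(1000):
    if rng.random() < shift_probability("familiarity_preference",
                                        current, familiarity):
        current = 1 - current
    x = sources[current]()
    familiarity[current] += 1          # crude familiarity counter
    # ... present x to the SOM update here ...
```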

6.
Various forms of the self-organizing map (SOM) have been proposed as models of cortical development [Choe Y., Miikkulainen R., (2004). Contour integration and segmentation with self-organized lateral connections. Biological Cybernetics, 90, 75-88; Kohonen T., (2001). Self-organizing maps (3rd ed.). Springer; Sirosh J., Miikkulainen R., (1997). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation, 9(3), 577-594]. Typically, these models use weight normalization to contain the weight growth associated with Hebbian learning. A more plausible mechanism for controlling the Hebbian process has recently emerged. Turrigiano and Nelson [Turrigiano G.G., Nelson S.B., (2004). Homeostatic plasticity in the developing nervous system. Nature Reviews Neuroscience, 5, 97-107] have shown that neurons in the cortex actively maintain an average firing rate by scaling their incoming weights. In this work, it is shown that this type of homeostatic synaptic scaling can replace the common, but unsupported, standard weight normalization. Organized maps still form and the output neurons are able to maintain an unsaturated firing rate, even in the face of large-scale cell proliferation or die-off. In addition, it is shown that in some cases synaptic scaling leads to networks that more accurately reflect the probability distribution of the input data.
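A minimal sketch of the homeostatic idea, assuming a plain rate network: Hebbian updates grow the weights without bound, and each output neuron then multiplicatively scales its incoming weights so its running-average firing rate drifts toward a target, in place of explicit weight normalization. Rates, constants, and the network shape are illustrative assumptions.

```python
import numpy as np

# Hebbian growth contained by multiplicative homeostatic synaptic scaling.
rng = np.random.default_rng(3)
W = rng.random((20, 10)) * 0.1        # 10 inputs -> 20 output neurons
avg_rate = np.zeros(20)
target, tau, eta = 0.2, 0.01, 0.05

for _ in range(5000):
    x = rng.random(10)
    y = np.tanh(W @ x)                           # output firing rates
    W += eta * np.outer(y, x)                    # unbounded Hebbian growth
    avg_rate += tau * (y - avg_rate)             # running average rate
    # Multiplicative scaling: rows grow or shrink toward the target rate.
    W *= (1 + tau * (target - avg_rate))[:, None]
```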

7.
The self-organizing ARTMAP rule discovery (SOARD) system derives relationships among recognition classes during online learning. SOARD training on input/output pairs produces the basic competence of direct recognition of individual class labels for new test inputs. As a typical supervised system, it learns many-to-one maps, which recognize different inputs (Spot, Rex) as belonging to one class (dog). As an ARTMAP system, it also learns one-to-many maps, allowing a given input (Spot) to learn a new class (animal) without forgetting its previously learned output (dog), even as it corrects erroneous predictions (cat). As it learns individual input/output class predictions, SOARD employs distributed code representations that support online rule discovery. When the input Spot activates the classes dog and animal, confidence in the rule dog→animal begins to grow. When other inputs simultaneously activate classes cat and animal, confidence in the converse rule, animal→dog, decreases. Confidence in a self-organized rule is encoded as the weight in a path from one class node to the other. An experience-based mechanism modulates the rate of rule learning, to keep inaccurate predictions from creating false rules during early learning. Rules may be excitatory or inhibitory so that rule-based activation can add missing classes and remove incorrect ones. SOARD rule activation also enables inputs to learn to make direct predictions of output classes that they have never experienced during supervised training. When input Rex activates its learned class dog, the rule dog→animal indirectly activates the output class animal. The newly activated class serves as a teaching signal which allows input Rex to learn direct activation of the output class animal. Simulations using small-scale and large-scale datasets demonstrate functional properties of the SOARD system in both spatial and time-series domains.
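A loose sketch of the rule-confidence bookkeeping described above: confidence in "A implies B" grows when A and B are co-active and shrinks when A fires without B. The update constant and the toy class set are illustrative, and the real SOARD dynamics (distributed codes, experience-modulated learning rates, excitatory/inhibitory rules) are much richer than this.

```python
import numpy as np

# Rule confidence as a weight from one class node to another, nudged by
# co-activation statistics.
classes = ["dog", "cat", "animal"]
idx = {c: i for i, c in enumerate(classes)}
conf = np.zeros((3, 3))               # conf[a, b] ~ confidence in a -> b

def observe(active, conf, eta=0.1):
    for a in active:
        for b in classes:
            if b == a:
                continue
            target = 1.0 if b in active else 0.0
            conf[idx[a], idx[b]] += eta * (target - conf[idx[a], idx[b]])
    return conf

conf = observe({"dog", "animal"}, conf)   # dog -> animal gains confidence
conf = observe({"cat", "animal"}, conf)   # animal -> dog loses confidence
```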

8.
Neural Networks, 1999, 12(6): 803-823
Topographic map algorithms that are aimed at building “faithful representations” also yield maps that transfer the maximum amount of information available about the distribution from which they receive input. The weight density (magnification factor) of these maps is proportional to the input density, or the neurons of these maps have an equal probability to be active (equiprobabilistic map). As MSE minimization is not compatible with equiprobabilistic map formation in general, a number of heuristics have been devised in order to compensate for this discrepancy in competitive learning schemes, e.g. by adding a “conscience” to the neurons’ firing behavior. However, rather than minimizing a modified MSE criterion, we introduce a new unsupervised competitive learning rule, called the kernel-based Maximum Entropy learning Rule (kMER), for topographic map formation, that optimizes an information-theoretic criterion directly. To each neuron a radially symmetric kernel is associated, with a given center and radius, and the two are updated in such a way that the (unconditional) information-theoretic entropy of the neurons’ outputs is maximized. We review a number of competitive learning rules for building equiprobabilistic maps. As benchmark tests for the faithfulness of the representations, we consider two types of distributions and compare the performances of these rules and kMER, for batch and incremental learning. As a first example application, we consider non-parametric density estimation where the maps are used for generating “pilot” estimates in kernel-based density estimation. The second application we envisage for kMER is “on-line” adaptive filtering of speech signals, using Gabor functions as wavelet filters. The topographic feature maps that are developed in this way differ in several respects from those obtained with Kohonen's Adaptive-Subspace SOM algorithm.
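For context, the “conscience” heuristic mentioned above can be sketched as follows (a DeSieno-style variant, with illustrative constants): each unit tracks its win frequency, and a bias handicaps frequent winners so firing becomes roughly equiprobable. kMER itself goes further by adapting a kernel radius per neuron to maximize output entropy directly, which is not shown here.

```python
import numpy as np

# Conscience mechanism: frequent winners are handicapped so all units fire
# with roughly equal probability.
rng = np.random.default_rng(4)
n_units = 25
W = rng.random((n_units, 2))
p = np.full(n_units, 1.0 / n_units)   # running win frequencies

def conscience_step(x, W, p, eta=0.05, beta=1e-3, gamma=10.0):
    d = np.linalg.norm(W - x, axis=1)
    bias = gamma * (1.0 / n_units - p)   # handicap frequent winners
    winner = np.argmin(d - bias)
    W[winner] += eta * (x - W[winner])
    p += beta * ((np.arange(n_units) == winner) - p)
    return W, p

for _ in range(3000):
    W, p = conscience_step(rng.random(2), W, p)
```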

9.
The self-organizing map (SOM) is a nonlinear unsupervised method for vector quantization. In the context of classification and data analysis, the SOM technique highlights the neighbourhood structure between clusters. The correspondence between this clustering and the input proximity is called topology preservation. We present here a stochastic method based on bootstrapping in order to increase the reliability of the induced neighbourhood structure. Considering the property of topology preservation, a local approach to variability (at an individual level) is preferred to a global one. The resulting (robust) map, called the R-map, is more stable with respect to the choice of the sampling method and to the learning options of the SOM algorithm (initialization and order of data presentation). The method consists of selecting one map from a group of several solutions resulting from the same self-organizing map algorithm, but obtained with various inputs. The R-map can be thought of as the map, among the group of solutions, corresponding to the most common interpretation of the data set structure. The R-map is then the representative of a given SOM network, and the R-map's ability to adjust to the data structure indicates the relevance of the chosen network.
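A simplified sketch of the selection idea, under assumed stand-ins for the paper's variability measures: train several SOMs on bootstrap resamples, score how consistently each map groups pairs of observations relative to the others, and keep the most central map as the R-map.

```python
import numpy as np

# Train several bootstrap SOMs, then pick the map whose grouping of the data
# agrees best, on average, with the rest of the group.
rng = np.random.default_rng(5)
data = rng.random((200, 2))

def train_som(X, n_units=16, epochs=20, eta=0.2, sigma=1.5):
    grid = np.array([(i, j) for i in range(4) for j in range(4)])
    W = rng.random((n_units, 2))
    for _ in range(epochs):
        for x in rng.permutation(X):
            w = np.argmin(np.linalg.norm(W - x, axis=1))
            h = np.exp(-np.sum((grid - grid[w]) ** 2, axis=1) / (2 * sigma**2))
            W += eta * h[:, None] * (x - W)
    return W

def bmus(W, X):
    return np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])

maps = [train_som(data[rng.integers(0, 200, 200)]) for _ in range(5)]
labels = [bmus(W, data) for W in maps]

def agreement(a, b):          # fraction of point pairs grouped consistently
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    return (same_a == same_b).mean()

scores = [np.mean([agreement(la, lb) for lb in labels]) for la in labels]
r_map = maps[int(np.argmax(scores))]   # the most representative map
```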

10.
Sohrab, Cornelius, Jochen, Neural Networks, 2009, 22(5-6): 586-592
The brain is able to perform actions based on an adequate internal representation of the world, where task-irrelevant features are ignored and incomplete sensory data are estimated. Traditionally, it is assumed that such abstract state representations are obtained purely from the statistics of sensory input, for example by unsupervised learning methods. However, more recent findings suggest an influence of the dopaminergic system, which can be modeled by a reinforcement learning approach. Standard reinforcement learning algorithms act on a single-layer network connecting the state space to the action space. Here, we add a feature detection stage and a memory layer, which together construct the state space for a learning agent. The memory layer consists of the state activation at the previous time step as well as the previously chosen action. We present a temporal difference based learning rule for training the weights from these additional inputs to the state layer. As a result, the performance of the network is maintained both in the presence of task-irrelevant features and at randomly occurring time steps during which the input is invisible. Interestingly, a goal-directed forward model emerges from the memory weights, which only covers the state–action pairs that are relevant to the task. The model presents a link between reinforcement learning, feature detection and forward models and may help to explain how reward systems recruit cortical circuits for goal-directed feature detection and prediction.
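A generic sketch of the ingredient the abstract builds on, assuming a linear value function: the state vector is augmented with a memory of the previous features and the previous action, and a TD(0) error drives the weight updates. The environment stub, encoding, and dimensions are illustrative, not the paper's architecture.

```python
import numpy as np

# TD(0) learning on a state vector augmented with memory of the previous
# activation and previous action.
rng = np.random.default_rng(6)
n_feat, n_actions = 8, 3
state_dim = n_feat + n_feat + n_actions       # features + memory + last action
v = np.zeros(state_dim)                        # linear value weights

def make_state(features, prev_features, prev_action):
    a = np.zeros(n_actions); a[prev_action] = 1.0
    return np.concatenate([features, prev_features, a])

gamma, alpha = 0.9, 0.05
prev_feat, prev_act = np.zeros(n_feat), 0
s = make_state(rng.random(n_feat), prev_feat, prev_act)
for _ in range(1000):
    feat = rng.random(n_feat)                  # stand-in for sensory input
    act = rng.integers(n_actions)              # stand-in for the policy
    reward = float(feat[0] > 0.8)              # toy reward
    s_next = make_state(feat, prev_feat, prev_act)
    td_error = reward + gamma * v @ s_next - v @ s   # TD(0) error
    v += alpha * td_error * s
    s, prev_feat, prev_act = s_next, feat, act
```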

11.
Granule cells of the dentate gyrus (DG) generally have multiple place fields, whereas CA3 cells, which are second order, have only a single place field. Here, we explore the mechanisms by which the high selectivity of CA3 cells is achieved. Previous work showed that the multiple place fields of DG neurons could be quantitatively accounted for by a model based on the number and strength of grid cell inputs and a competitive network interaction in the DG that is mediated by gamma frequency feedback inhibition. We have now built a model of CA3 based on similar principles. CA3 cells receive input from an average of one active DG cell and from 1,400 cortical grid cells. Based on experimental findings, we have assumed a linear interaction of the two pathways. The results show that simulated CA3 cells generally have a single place field, as observed experimentally. Thus, a two-step process based on simple rules (and one that can occur without learning) is able to explain how grid cell inputs to the hippocampus give rise to cells having ultimate spatial selectivity. The CA3 processes that produce a single place field depend critically on the competitive network processes and do not require the direct cortical inputs to CA3, which are therefore likely to perform some other, unknown function.
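A toy sketch of the two-pathway account, with all counts and weight scales as stand-ins rather than the paper's fitted values: CA3 drive is a linear sum of a sparse, strong DG input and many weak grid-cell inputs, and the gamma-frequency feedback inhibition is reduced to a k-winner-take-all step.

```python
import numpy as np

# Linear two-pathway drive to CA3 followed by competitive (k-WTA) inhibition.
rng = np.random.default_rng(7)
n_ca3, n_dg, n_grid = 100, 50, 1400
W_dg = (rng.random((n_ca3, n_dg)) < 0.02) * 1.0      # sparse, strong DG input
W_grid = rng.random((n_ca3, n_grid)) * 0.001         # many weak grid inputs

def ca3_response(dg_act, grid_act, k=5):
    drive = W_dg @ dg_act + W_grid @ grid_act        # linear interaction
    out = np.zeros(n_ca3)
    winners = np.argsort(drive)[-k:]                 # competitive inhibition
    out[winners] = drive[winners]
    return out

resp = ca3_response(rng.random(n_dg) < 0.02, rng.random(n_grid))
```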

12.
The plasticity of sensorimotor systems in mammals underlies the capacity for motor learning as well as the ability to relearn following injury. Spinal cord injury, which both deprives afferent input and interrupts efferent output, results in a disruption of cortical somatotopy. While changes in corticospinal axons proximal to the lesion are proposed to support the reorganization of cortical motor maps after spinal cord injury, intracortical horizontal connections are also likely to be critical substrates for rehabilitation-mediated recovery. Intrinsic connections have been shown to dictate the reorganization of cortical maps that occurs in response to skilled motor learning as well as after peripheral injury. Cortical networks incorporate changes in motor and sensory circuits at subcortical or spinal levels to induce map remodeling in the neocortex. This review focuses on the reorganization of cortical networks observed after injury and posits a role for intracortical circuits in recovery.

13.
Results of neural network learning are always subject to some variability, due to the sensitivity to initial conditions, to convergence to local minima, and, sometimes more dramatically, to sampling variability. This paper presents a set of tools designed to assess the reliability of the results of self-organizing maps (SOM), i.e. to test on a statistical basis the confidence we can have in the result of a specific SOM. The tools concern the quantization error in a SOM, and the neighborhood relations (both at the level of a specific pair of observations and globally on the map). As a by-product, these measures also make it possible to assess the adequacy of the number of units chosen for a map. The tools may also be used to measure objectively the extent to which SOMs are less sensitive to non-linear optimization problems (local minima, convergence, etc.) than other neural network models.
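The simplest of these measures, the quantization error, together with a bootstrap estimate of its sampling variability, can be sketched as follows; `W` stands for any trained SOM codebook, and the resampling count is an illustrative choice.

```python
import numpy as np

# Quantization error of a trained map, plus its bootstrap variability.
rng = np.random.default_rng(8)

def quantization_error(W, X):
    # Mean distance from each observation to its best-matching unit.
    return np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in X])

def bootstrap_qe(W, X, n_boot=200):
    n = len(X)
    qes = [quantization_error(W, X[rng.integers(0, n, n)])
           for _ in range(n_boot)]
    return np.mean(qes), np.std(qes)     # point estimate and variability

W = rng.random((16, 2))                  # stand-in for a trained codebook
X = rng.random((300, 2))
mean_qe, std_qe = bootstrap_qe(W, X)
```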

14.
This paper presents two novel neural networks based on snap-drift in the context of self-organisation and sequence learning. The snap-drift neural network employs modal learning, a combination of two modes: fuzzy AND learning (snap) and Learning Vector Quantisation (drift). We present the snap-drift self-organising map (SDSOM) and the recurrent snap-drift neural network (RSDNN). The SDSOM uses the standard SOM architecture, where a layer of input nodes connects to the self-organising map layer and the weight update consists of either snap (min of input and weight) or drift (LVQ, as in SOM). The RSDNN uses a simple recurrent network (SRN) architecture, with the hidden layer values copied back to the input layer. A form of reinforcement learning is deployed in which the mode is swapped between snap and drift when performance drops, and in which adaptation is probabilistic, whereby the probability of a neuron being adapted is reduced as performance increases. The algorithms are evaluated on several well-known data sets, and these networks are found to exhibit effective learning that is faster than alternative neural network methods.
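The modal weight update is specified in the abstract itself and can be sketched directly: "snap" sets the winner's weights to the fuzzy AND (element-wise minimum) of input and weights, while "drift" is the usual LVQ-style move toward the input. The performance-driven, probabilistic mode swap is reduced here to a periodic flag, which is a simplification.

```python
import numpy as np

# Snap-drift modal update: snap = element-wise min, drift = LVQ-style move.
rng = np.random.default_rng(9)
W = rng.random((25, 8))                 # 25 units, 8-dimensional inputs

def snap_drift_step(x, W, snap_mode, eta=0.1):
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    if snap_mode:
        W[winner] = np.minimum(W[winner], x)      # snap: fuzzy AND
    else:
        W[winner] += eta * (x - W[winner])        # drift: LVQ-style move
    return W

snap_mode = True
for t in range(2000):
    W = snap_drift_step(rng.random(8), W, snap_mode)
    if t % 500 == 499:
        snap_mode = not snap_mode       # stand-in for the reinforcement swap
```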

15.
16.
In this work, we focus on the problem of training ensembles or, more generally, a set of self-organizing maps (SOMs). In the light of new theory behind ensemble learning, in particular negative correlation learning (NCL), the question arises whether SOM ensemble learning can benefit from non-independent learning when the individual learning stages are interlinked by a term penalizing correlation in errors. We show that SOMs are well suited as weak ensemble components with a small number of neurons. Using our approach, we obtain efficiently trained SOM ensembles outperforming other reference learners. Due to the transparency of SOMs, we can give insights into the interrelation between diversity and sublocal accuracy inside SOMs. We are able to shed light on the diversity arising from a combination of several factors: explicit versus implicit as well as inter-diversities versus intra-diversities. NCL fully exploits the potential of SOM ensemble learning when the individual neural networks co-operate at the highest level and stability is satisfied. The reported quantified diversities exhibit high correlations with the prediction performance.
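For reference, the negative-correlation penalty that NCL adds can be sketched on a toy regression ensemble (Liu and Yao's formulation, with plain linear members for brevity): each member descends its own error minus a lambda-weighted pull away from the ensemble mean, which decorrelates member errors. How this penalty is wired into SOM training is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Negative correlation learning: members are penalized for agreeing with the
# ensemble mean, which pushes their errors to become negatively correlated.
rng = np.random.default_rng(12)
n_members, dim, lam, eta = 5, 3, 0.5, 0.01
W = rng.normal(0, 0.1, (n_members, dim))      # one linear model per member

for _ in range(2000):
    x = rng.random(dim)
    y = 2.0 * x[0] - x[1]                      # toy target function
    f = W @ x                                  # member outputs
    f_bar = f.mean()
    # NCL gradient: own error minus lambda times deviation from the mean.
    grad = (f - y) - lam * (f - f_bar)
    W -= eta * np.outer(grad, x)
```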

17.
Since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content addressable memories. However, the mechanisms underlying their formation are poorly understood and, so far, there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. We addressed this question in a previous paper [Salihoglu, U., Bersini, H., Yamaguchi, Y., Molter, C., (2009). A model for the cognitive map formation: Application of the retroaxonal theory. In Proc. IEEE international joint conference on neural networks], where, based on biologically plausible mechanisms, a novel unsupervised algorithm for online cell-assembly creation was developed. The procedure involved, simultaneously, fast Hebbian/anti-Hebbian learning of the network’s recurrent connections for the creation of new cell assemblies, and a slower feedback signal that stabilized the cell assemblies by learning the feedforward input connections. Here, we first quantify the role played by the retroaxonal feedback mechanism. Then, we show how multiple cognitive maps, each composed of a set of orthogonal input stimuli, can be encoded in the network. As a result, when facing a previously learned input, the system is able to retrieve the cognitive map it belongs to. As a consequence, ambiguous inputs that could belong to multiple cognitive maps can be disambiguated by knowledge of the context, i.e. the cognitive map.
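A loose sketch of the fast Hebbian/anti-Hebbian component, assuming binary patterns and illustrative rates: connections between co-active units are strengthened and connections where exactly one unit fires are weakened, so a presented stimulus carves out a candidate assembly. The slower retroaxonal feedback that stabilizes assemblies in the paper is not modeled here.

```python
import numpy as np

# Fast Hebbian/anti-Hebbian shaping of recurrent weights by a binary stimulus.
rng = np.random.default_rng(13)
n = 40
J = np.zeros((n, n))

def imprint(pattern, J, eta=0.1):
    x = pattern.astype(float)
    hebb = np.outer(x, x)                            # both units co-active
    anti = np.outer(x, 1 - x) + np.outer(1 - x, x)   # exactly one active
    J += eta * (hebb - anti)
    np.fill_diagonal(J, 0.0)
    return J

for _ in range(20):
    p = rng.random(n) < 0.2                   # a sparse external stimulus
    J = imprint(p, J)
```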

18.
This is a simulation-based contribution exploring a novel approach to the open-ended formation of multimodal representations in autonomous agents. In particular, we address the issue of transferring (“bootstrapping”) feature selectivities between two modalities, from a previously learned or innate reference representation to a new induced representation. We demonstrate the potential of this algorithm by several experiments with synthetic inputs modeled after a robotics scenario where multimodal object representations are “bootstrapped” from a (reference) representation of object affordances. We focus on typical challenges in autonomous agents: absence of human supervision, changing environment statistics and limited computing power. We propose an autonomous and local neural learning algorithm termed PROPRE (projection–prediction) that updates induced representations based on predictability: competitive advantages are given to those feature-sensitive elements that are inferable from activities in the reference representation. PROPRE implements a bi-directional interaction of clustering (“projection”) and inference (“prediction”), the key ingredient being an efficient online measure of predictability controlling learning in the projection step. We show that the proposed method is computationally efficient and stable, and that the multimodal transfer of feature selectivity is successful and robust under resource constraints. Furthermore, we successfully demonstrate robustness to noisy reference representations, non-stationary input statistics and uninformative inputs.
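A very loose sketch of the projection-prediction loop, with all measures and update rules as didactic stand-ins for PROPRE's actual ones: a clustering step assigns the new modality's input to a unit, a predictor trained from the reference representation guesses that assignment, and a running predictability score gates how strongly the clustering step learns.

```python
import numpy as np

# Projection (clustering) gated by prediction (inference from the reference
# representation); the gate is a running measure of predictability.
rng = np.random.default_rng(14)
n_ref, n_ind, dim = 10, 8, 4
W = rng.random((n_ind, dim))                  # induced representation (units)
P = np.zeros((n_ind, n_ref))                  # predictor: reference -> induced
predictability, eta, rho = 0.0, 0.1, 0.02

for _ in range(3000):
    x = rng.random(dim)                       # new-modality input
    r = np.zeros(n_ref); r[rng.integers(n_ref)] = 1.0   # reference activity
    bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # projection step
    guess = int(np.argmax(P @ r))                        # prediction step
    predictability += rho * ((guess == bmu) - predictability)
    # Predictable assignments earn the competitive advantage: gated learning.
    W[bmu] += eta * predictability * (x - W[bmu])
    P[bmu] += eta * (r - P[bmu])              # train the predictor
```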

19.
Brain capacity depends not so much on the number of neurons as on the number of synaptic connections, functional connections that develop over a lifetime of genetic programming and life experiences. In the uninjured human brain, cortical reorganization that occurs in response to learning and experience is referred to as brain plasticity. Motor learning and complex environments result in a greater number of synapses and an increase in dendritic branching, whereas repetitive movements alone, in the absence of motor learning, do not. Learning and experience lead to an expansion of cortical representation, while failure to maintain training results in a contraction of cortical representation. In animals, loss of sensory peripheral afferent input results in an expansion of the forelimb representation of the intact adjacent cortex. Prolonged periods of peripheral nerve stimulation in both animals and humans can lead to reorganization of related sensorimotor cortical maps.

20.
A self-organising network that grows when required. (Cited 6 times: 0 self-citations, 6 by others)
The ability to grow extra nodes is a potentially useful facility for a self-organising neural network. A network that can add nodes into its map space can approximate the input space more accurately, and often more parsimoniously, than a network with predefined structure and size, such as the Self-Organising Map. In addition, a growing network can deal with dynamic input distributions. Most of the growing networks that have been proposed in the literature add new nodes to support the node that has accumulated the highest error during previous iterations or to support topological structures. This usually means that new nodes are added only when the number of iterations is an integer multiple of some pre-defined constant, A. This paper suggests a way in which the learning algorithm can add nodes whenever the network in its current state does not sufficiently match the input. In this way the network grows very quickly when new data is presented, but stops growing once the network has matched the data. This is particularly important when we consider dynamic data sets, where the distribution of inputs can change to a new regime after some time. We also demonstrate the preservation of neighbourhood relations in the data by the network. The new network is compared to an existing growing network, the Growing Neural Gas (GNG), on an artificial dataset, showing how the network deals with a change in input distribution after some time. Finally, the new network is applied to several novelty detection tasks and is compared with both the GNG and an unsupervised form of the Reduced Coulomb Energy network on a robotic inspection task and with a Support Vector Machine on two benchmark novelty detection tasks.
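A sketch of the grow-when-required criterion described above, with illustrative thresholds: instead of inserting nodes on a fixed schedule, a new node is added whenever the best-matching node is both a poor match for the current input (low activity) and already well trained (low habituation); otherwise the winner moves toward the input and habituates. Edge maintenance and neighbor updates from the full algorithm are omitted.

```python
import numpy as np

# Grow-when-required: add a node only when the best match is poor AND the
# winner is already well trained (habituated), so growth stops once matched.
rng = np.random.default_rng(10)
nodes = [rng.random(2), rng.random(2)]            # weight vectors
habit = [1.0, 1.0]                                # 1 = fresh, decays with use
ACT_T, HAB_T, eta, tau = 0.8, 0.3, 0.1, 0.05

for _ in range(1000):
    x = rng.random(2)
    d = [np.linalg.norm(w - x) for w in nodes]
    best = int(np.argmin(d))
    activity = np.exp(-d[best])                   # 1 when the match is perfect
    if activity < ACT_T and habit[best] < HAB_T:
        nodes.append((nodes[best] + x) / 2)       # new node between the two
        habit.append(1.0)
    else:
        nodes[best] += eta * habit[best] * (x - nodes[best])
        habit[best] -= tau * habit[best]          # habituate with firing
```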
