Similar Documents (20 results)
1.
Data splitting is an important consideration during artificial neural network (ANN) development, where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between datasets. Of these approaches, DUPLEX is found to provide benchmark performance, with good model accuracy and no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets.
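To make the splitting scheme concrete, the following is a minimal Python sketch (not the authors' code) of SOM-based stratified splitting with Neyman allocation, using the third-party minisom package; the dataset, map size, and allocation rule are illustrative assumptions.

```python
# Sketch: SOM-based stratified data splitting (illustrative, not the paper's exact setup).
# SOM cells act as strata and test samples are drawn from each cell with Neyman
# allocation (proportional to N_h * S_h, the stratum size times its target spread).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # hypothetical inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)   # hypothetical target

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 5000)

# Assign every sample to its best-matching unit (stratum).
strata = {}
for i, x in enumerate(X):
    strata.setdefault(som.winner(x), []).append(i)

def neyman_split(strata, y, n_test):
    """Draw about n_test indices, allocating to each stratum ~ N_h * std_h (Neyman)."""
    sizes = {c: len(idx) for c, idx in strata.items()}
    stds = {c: np.std(y[idx]) + 1e-12 for c, idx in strata.items()}
    weights = {c: sizes[c] * stds[c] for c in strata}
    total = sum(weights.values())
    test_idx = []
    for c, idx in strata.items():
        n_c = min(len(idx), round(n_test * weights[c] / total))
        test_idx.extend(rng.choice(idx, size=n_c, replace=False))
    return np.array(test_idx)

test_idx = neyman_split(strata, y, n_test=200)
train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
print(len(train_idx), len(test_idx))
```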

2.
Standard image segmentation methods may not be able to segment astronomical images because of their special nature. We present an algorithm for astronomical image segmentation based on self-organizing neural networks and wavelets. We begin by performing a wavelet decomposition of the image. The segmentation process has two steps. In the first, we separate the stars and other prominent objects using the second plane (w2) of the wavelet decomposition, which has little noise but retains enough signal to represent those objects. This method was at least as effective as traditional source extraction methods in isolating bright objects both from the background and from extended sources. In the second step, the rest of the image (extended sources and background) is segmented using a self-organizing neural network. The result is a predetermined number of clusters, which we associate with extended regions, plus a small region for each star or bright object. We have applied the algorithm to segment images of both galaxies and planets. The results show that the simultaneous use of all the scales in the self-organizing neural network helps the segmentation process, since it takes into account not only the intensity level but also both the high and low frequencies present in the image. The connectivity of the regions obtained also shows that the algorithm is robust in the presence of noise. The method can also be applied to restored images.
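A simplified sketch of the two-step idea is given below; it approximates the à trous wavelet planes with differences of Gaussian-smoothed images and uses the minisom package for the clustering step, so the decomposition, thresholds, and synthetic image are assumptions rather than the paper's implementation.

```python
# Sketch: threshold the second detail plane to mask bright point sources, then cluster
# the remaining pixels with a SOM on their multiscale feature vectors.
import numpy as np
from scipy.ndimage import gaussian_filter
from minisom import MiniSom

rng = np.random.default_rng(1)
img = gaussian_filter(rng.normal(size=(128, 128)), 4)      # fake "extended" emission
for r, c in rng.integers(10, 118, size=(20, 2)):           # fake stars
    img[r, c] += 5.0

# Approximate wavelet planes w_j = smooth_{j-1} - smooth_j (a-trous-style decomposition).
smooth = [img] + [gaussian_filter(img, 2 ** j) for j in range(1, 4)]
planes = [smooth[j - 1] - smooth[j] for j in range(1, 4)]

w2 = planes[1]
star_mask = w2 > w2.mean() + 3 * w2.std()                  # bright, compact objects

# Cluster the remaining pixels with a SOM on all scales plus the smooth residual.
features = np.stack(planes + [smooth[-1]], axis=-1)[~star_mask]
som = MiniSom(1, 4, features.shape[1], sigma=0.5, learning_rate=0.5, random_seed=1)
som.train_random(features, 10000)
labels = np.array([som.winner(f)[1] for f in features])    # 4 extended-region classes
print(star_mask.sum(), np.bincount(labels))
```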

3.
In this paper we present a self-organizing neural network model of early lexical development called DevLex. The network consists of two self-organizing maps (a growing semantic map and a growing phonological map) that are connected via associative links trained by Hebbian learning. The model captures a number of important phenomena that occur in early lexical acquisition by children, as it allows for the representation of a dynamically changing linguistic environment in language learning. In our simulations, DevLex develops topographically organized representations for linguistic categories over time, models lexical confusion as a function of word density and semantic similarity, and shows age-of-acquisition effects in the course of learning a growing lexicon. These results match up with patterns from empirical research on lexical development, and have significant implications for models of language acquisition based on self-organizing neural networks.

4.
This paper presents a neural model of similarity perception in identification tasks. It is based on self-organizing maps and population coding and is examined through five different identification experiments. Simulating an identification task, the neural model generates a confusion matrix that can be compared directly with that of human subjects. The model achieves a fairly accurate match with the corresponding experimental data both during training and thereafter. To achieve this fit, we find that the entire activity in the network should decline while learning the identification task, and that the population encoding of the specific stimuli should become sparse as the network organizes. Our results thus suggest that a self-organizing neural model employing population coding can account for identification processing while suggesting computational constraints on the underlying cortical networks.

5.
Medical image segmentation based on multifractal spectra and self-organizing neural networks
BACKGROUND: Image segmentation based on the multifractal spectrum alone has clear advantages in distinguishing edges and textures, but the choice of measure and threshold strongly affects the segmentation result, and correctly selecting the optimal measure is difficult. OBJECTIVE: To process medical images by combining multifractal-spectrum-based segmentation with a self-organizing feature map (SOM) neural network. METHODS: The mean and variance of each pixel and its neighborhood were taken as basic features and combined with four different multifractal spectra as texture features to train a self-organizing feature map network. RESULTS AND CONCLUSION: Different measures yield different segmentation results for the same image, and a given measure performs differently on different images, showing that choosing a suitable measure is a key issue in multifractal-spectrum-based medical image segmentation. Combining the multifractal spectra with a self-organizing feature map removes the measure-selection step: the four multifractal spectra are used directly as features and, together with the two basic features, fed to the self-organizing network, which learns from them and segments the image automatically. Experimental results show that the method segments complex images effectively while being automatic and adaptive. Keywords: multifractal; self-organizing feature map neural network; medical image segmentation; texture; digital imaging technology
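The sketch below illustrates the feature-plus-SOM segmentation pipeline only in outline: the per-pixel local mean and variance are computed as in the abstract, but the four multifractal spectra are replaced by stand-in gradient-magnitude texture channels, since estimating a multifractal spectrum is beyond a short example.

```python
# Simplified sketch: per-pixel local mean/variance plus stand-in texture channels,
# clustered by a small SOM into a fixed number of tissue classes.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_gradient_magnitude
from minisom import MiniSom

rng = np.random.default_rng(2)
img = rng.normal(size=(96, 96))
img[32:64, 32:64] += 2.0                                   # hypothetical "lesion" region

local_mean = uniform_filter(img, size=5)
local_var = uniform_filter(img ** 2, size=5) - local_mean ** 2
tex1 = gaussian_gradient_magnitude(img, sigma=1)           # stand-in texture features
tex2 = gaussian_gradient_magnitude(img, sigma=3)

feats = np.stack([local_mean, local_var, tex1, tex2], axis=-1).reshape(-1, 4)
feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-12)

som = MiniSom(1, 3, 4, sigma=0.5, learning_rate=0.5, random_seed=2)  # 3 pixel classes
som.train_random(feats, 10000)
segmentation = np.array([som.winner(f)[1] for f in feats]).reshape(img.shape)
print(np.bincount(segmentation.ravel()))
```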

6.
The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. This model is capable of solving any two-class classification problem, as opposed to the single-layer perceptron, which can only solve classification problems dealing with linearly separable sets. For all classification problems, the construction of an RDP is done automatically and convergence is always guaranteed. Three methods for constructing RDP neural networks exist: Batch, Incremental, and Modular. The Batch method has been extensively tested and has been shown to produce results comparable with those obtained with other neural network methods such as Back Propagation, Cascade Correlation, Rulex, and Ruleneg. However, the Incremental and Modular methods have not been tested before. Contrary to the Batch method, the complexity of these two methods is not NP-complete. For the first time, a study of the three methods is presented. This study highlights the main advantages and disadvantages of each method by comparing the results obtained while building RDP neural networks with the three methods in terms of convergence time, level of generalisation, and topology size. The networks were trained and tested using the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer. The results show that the Incremental and Modular methods are as effective as the NP-complete Batch method but with a much lower complexity level. The results obtained with the RDP are comparable to those obtained with the backpropagation and Cascade Correlation algorithms.

7.
A pruning method for the recursive least squared algorithm.
The recursive least squared (RLS) algorithm is an effective online training method for neural networks. However, its combination with weight decay and pruning has not been well studied. This paper elucidates how generalization ability can be improved by selecting an appropriate initial value of the error covariance matrix in the RLS algorithm. Moreover, we investigate how the pruning of neural networks can benefit from the final value of the error covariance matrix. Our study found that the RLS algorithm is implicitly a weight decay method, where the weight decay effect is controlled by the initial value of the error covariance matrix, and that the inverse of the error covariance matrix is approximately equal to the Hessian matrix of the network being trained. We propose that neural networks are first trained by the RLS algorithm and that some unimportant weights are then removed based on the approximate Hessian matrix. Simulation results show that our approach is an effective training and pruning method for neural networks.
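As a hedged illustration of the RLS-then-prune procedure, the sketch below trains a linear-in-the-parameters model (a fixed Gaussian basis standing in for a trained network's output layer) with RLS and then removes low-saliency weights using the final error covariance matrix as an approximate inverse Hessian; the basis, thresholds, and data are assumptions.

```python
# Sketch: per-sample RLS updates of the weights and error covariance P, followed by
# OBD-style pruning with the Hessian approximated by inv(P).
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=400)
t = np.sin(x) + 0.05 * rng.normal(size=400)

centers = np.linspace(-3, 3, 15)
def phi(x):                                        # fixed Gaussian basis functions
    return np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2)

PHI = phi(x)                                       # design matrix, one row per sample
w = np.zeros(centers.size)
P = np.eye(centers.size) * 100.0                   # initial error covariance; its scale
lam = 1.0                                          # acts like an implicit weight decay
for h, target in zip(PHI, t):                      # one RLS update per sample
    k = P @ h / (lam + h @ P @ h)
    w += k * (target - h @ w)
    P = (P - np.outer(k, h @ P)) / lam

# Prune: saliency of weight i is 0.5 * H_ii * w_i^2 with H approximated by inv(P).
hess_diag = np.diag(np.linalg.inv(P))
saliency = 0.5 * hess_diag * w ** 2
keep = saliency > np.quantile(saliency, 0.3)       # drop the 30% least salient weights
w_pruned = np.where(keep, w, 0.0)
print("training MSE after pruning:", np.mean((PHI @ w_pruned - t) ** 2))
```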

8.
Electronic implementation of a class of neural networks whose short-term memory equation is governed by multiplicative, rather than additive, inhibition is proposed. The network models can be derived from ionic flow in nerve membranes and multiplicative terms result from control of conductive paths by voltages of other cells in the network. Since Field Effect Transistors (FETs) are voltage controlled conductances when operated below pinch-off, these networks can be readily implemented in FET technology using this physical property. This class of neural networks appears in many areas of the brain as well as the sensory system and has been used as a basic building block for the multilayer self-organizing architecture of Adaptive Resonance Theory (ART). The model has been especially useful for explaining a wide range of peripheral visual phenomena. The implementation is intended to specifically demonstrate desirable front-end image processing properties of contrast enhancement, edge detection, dynamic range compression, and adaptation of dynamics to mean intensity levels. Since the network can be mathematically described, its dynamics and stability may be examined. Compatibility of the network with higher level processing allows for its inclusion in multilayer self-organizing neural network architectures.

9.
In this paper we provide an in-depth evaluation of the SOM as a feasible tool for nonlinear adaptive filtering. A comprehensive survey of existing SOM-based and related architectures for learning input-output mappings is carried out and the application of these architectures to nonlinear adaptive filtering is formulated. Then, we introduce two simple procedures for building RBF-based nonlinear filters using the Vector-Quantized Temporal Associative Memory (VQTAM), a recently proposed method for learning dynamical input-output mappings using the SOM. The aforementioned SOM-based adaptive filters are compared with standard FIR/LMS and FIR/LMS-Newton linear transversal filters, as well as with powerful MLP-based filters in nonlinear channel equalization and inverse modeling tasks. The obtained results in both tasks indicate that SOM-based filters can consistently outperform powerful MLP-based ones.
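A minimal VQTAM-style sketch (assuming a simple synthetic nonlinear channel, not the paper's benchmarks) is given below: the SOM codebook stores an input part and an output part, the winner is selected on the input part only, and filtering reads out the output part of the winning unit.

```python
# Sketch: 1-D SOM with split codebook (input part / output part) trained VQTAM-style.
import numpy as np

rng = np.random.default_rng(4)
n, order = 3000, 4
u = rng.uniform(-1, 1, size=n + order)                  # channel input
d = np.tanh(0.8 * u[order:] + 0.3 * u[order - 1:-1])    # hypothetical nonlinear channel output

X_in = np.stack([u[order - k: n + order - k] for k in range(order)], axis=1)
X_out = d[:, None]

m = 50                                                  # number of SOM units (1-D map)
W_in = rng.normal(scale=0.1, size=(m, order))
W_out = np.zeros((m, 1))
pos = np.arange(m)

for t in range(20000):
    i = rng.integers(n)
    bmu = np.argmin(np.sum((W_in - X_in[i]) ** 2, axis=1))   # winner on input part only
    lr = 0.5 * (0.01 / 0.5) ** (t / 20000)                   # decaying learning rate
    sig = 5.0 * (0.5 / 5.0) ** (t / 20000)                   # decaying neighborhood width
    h = np.exp(-0.5 * ((pos - bmu) / sig) ** 2)[:, None]
    W_in += lr * h * (X_in[i] - W_in)
    W_out += lr * h * (X_out[i] - W_out)

# Filtering: the estimated output is the output part of the winning unit.
pred = np.array([W_out[np.argmin(np.sum((W_in - x) ** 2, axis=1)), 0] for x in X_in])
print("MSE:", np.mean((pred - d) ** 2))
```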

10.
Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [convolutional auto-encoder (CAE) and generative adversarial network (GAN)] that produce adaptive best PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space and the label was the spatially normalized 3D PET image using the transformation parameters obtained from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between the slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research.
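The sketch below shows only the general shape of such an image-to-image network: a tiny 3D convolutional auto-encoder in PyTorch trained to map a native-space volume to its MRI-derived spatially normalized counterpart; the architecture, volume size, and random tensors are placeholders, not the published model.

```python
# Tiny 3-D convolutional auto-encoder sketch of the template-generation idea.
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = CAE3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
native = torch.rand(2, 1, 64, 64, 64)       # fake native-space PET volumes
normalized = torch.rand(2, 1, 64, 64, 64)   # fake MRI-based spatially normalized labels
for _ in range(3):                          # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(native), normalized)
    loss.backward()
    opt.step()
print(loss.item())
```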

11.
Neural synchrony in schizophrenia: from networks to new treatments
Evidence is accumulating that brain regions communicate with each other in the temporal domain, relying on coincidence of neural activity to detect phasic relationships among neurons and neural assemblies. This coordination between neural populations has been described as "self-organizing," an "emergent property" of neural networks arising from the temporal synchrony between synaptic transmission and firing of distinct neuronal populations. Evidence is also accumulating that communication and coordination failures between different brain regions may account for a wide range of problems in schizophrenia, from psychosis to cognitive dysfunction. We review current knowledge of the functional neuroanatomy and neurochemistry of neural oscillations and of oscillation abnormalities in schizophrenia. Based on this, we argue that we can begin to use oscillations, across frequencies, to do translational studies to understand the neural basis of schizophrenia.

12.

Background

The fatty acid profile has been extensively investigated in the plasma and erythrocytes of patients suffering from neuropsychiatric disorders. In this paper we investigate, for the first time, whether the study of the platelet fatty acids from such patients may be facilitated by means of artificial neural networks.

Methods

Venous blood samples were taken from 84 patients with a DSM-IV-TR diagnosis of major depressive disorder and from 60 normal control subjects without a history of clinical depression. Platelet levels of the following 11 fatty acids were analyzed using one-way analysis of variance: C14:0, C16:0, C16:1, C18:0, C18:1 n-9, C18:1 n-7, C18:2 n-6, C18:3 n-3, C20:3 n-3, C20:4 n-6 and C22:6 n-3. The results were then entered into a wide variety of different artificial neural networks.

Results

All the artificial neural networks tested gave essentially the same result. However, one type of artificial neural network, the self-organizing map, gave superior information by allowing the results to be described in a two-dimensional plane with potentially informative border areas. A series of repeated and independent self-organizing map simulations, with the input parameters being changed each time, led to the finding that the best discriminant map was that obtained by inclusion of just three fatty acids.

Conclusion

Our results confirm that artificial neural networks may be used to analyze platelet fatty acids in neuropsychiatric disorders. Furthermore, they show that the self-organizing map, an unsupervised competitive-learning network algorithm which forms a nonlinear projection of a high-dimensional data manifold onto a regular, low-dimensional grid, is an optimal type of artificial neural network to use for this task.

13.
The problem of understanding how ensembles of neurons code for somatosensory information has been defined as a classification problem: given the response of a population of neurons to a set of stimuli, which stimulus generated the response on a single-trial basis? Multivariate statistical techniques such as linear discriminant analysis (LDA) and artificial neural networks (ANNs), and different types of preprocessing stages, such as principal and independent component analysis, have been used to solve this classification problem, with surprisingly small performance differences. Therefore, the goal of this project was to design a new method to maximize computational efficiency rather than classification performance. We developed a peri-stimulus time histogram (PSTH)-based method, which consists of creating a set of templates based on the average neural responses to stimuli and classifying each single trial by assigning it to the stimulus with the 'closest' template in the Euclidean distance sense. The PSTH-based method is computationally more efficient than methods as simple as linear discriminant analysis, performs significantly better than discriminant analyses (linear, quadratic or Mahalanobis) when small binsizes are used (1 ms) and as well as LDA with any other binsize, is optimal among other minimum-distance classifiers and can be optimally applied on raw neural data without a previous stage of dimension reduction. We conclude that the PSTH-based method is an efficient alternative to more sophisticated methods such as LDA and ANNs to study how ensembles of neurons code for discrete sensory stimuli, especially when datasets with many variables are used and when the time resolution of the neural code is one of the factors of interest.
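Because the PSTH-template classifier is fully specified by the abstract, a short sketch is easy to give; the synthetic Poisson spike counts and the even/odd trial split below are illustrative assumptions.

```python
# Sketch: build per-stimulus templates from averaged binned responses, then assign each
# single trial to the stimulus whose template is nearest in Euclidean distance.
import numpy as np

rng = np.random.default_rng(5)
n_stim, n_trials, n_neurons, n_bins = 4, 50, 10, 20
rates = rng.uniform(1, 10, size=(n_stim, n_neurons, n_bins))         # per-stimulus rates
trials = rng.poisson(rates[:, None], size=(n_stim, n_trials, n_neurons, n_bins))

X = trials.reshape(n_stim * n_trials, -1).astype(float)              # one row per trial
y = np.repeat(np.arange(n_stim), n_trials)

# Keep the split simple: templates from even trials, evaluation on odd trials.
train = np.arange(len(y)) % 2 == 0
test = ~train
templates = np.stack([X[train & (y == s)].mean(axis=0) for s in range(n_stim)])

Xt = X[test]
dists = ((Xt[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
print("accuracy:", (pred == y[test]).mean())
```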

14.
We present a new self-organizing neural network model that has two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches (e.g., the Kohonen feature map) is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process that also includes occasional removal of units. The second variant of the model is a supervised learning method that results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model, in contrast to earlier approaches, it is possible to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks that generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented that are better than any results previously published.
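A hedged sketch of the "insert an RBF unit where the classification error is largest" idea follows; it uses a fixed kernel width, a simple least-squares refit of the output weights, and a toy two-class problem, so it captures the growth heuristic rather than the paper's full growing-cell model.

```python
# Sketch: grow an RBF network by inserting centers at the currently worst-classified sample.
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(400, 2))
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) < 0.5).astype(float)   # a circular class boundary

def design(X, centers, width=0.3):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

centers = X[rng.choice(len(X), size=2, replace=False)]    # start with two units
for _ in range(15):                                       # grow up to 15 more units
    Phi = design(X, centers)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # refit output weights
    err = np.abs(Phi @ w - y)
    wrong = (Phi @ w > 0.5) != (y > 0.5)
    if not wrong.any():
        break
    worst = np.argmax(err * wrong)                        # insert a unit at the worst error
    centers = np.vstack([centers, X[worst]])

Phi = design(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(len(centers), "units, training accuracy:", (((Phi @ w) > 0.5) == (y > 0.5)).mean())
```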

15.
A fast prototype-based nearest neighbor classifier is introduced. The proposed Adjusted SOINN Classifier (ASC) is based on SOINN (self-organizing incremental neural network); it automatically learns the number of prototypes needed to determine the decision boundary, and learns new information without destroying old learned information. It is robust to noisy training data, and it realizes very fast classification. In the experiments, we use artificial datasets and real-world datasets to illustrate ASC. We also compare ASC with other prototype-based classifiers with regard to its classification error, compression ratio, and speed-up ratio. The results show that ASC has the best performance and is a very efficient classifier.

16.
Functional holography of recorded neuronal networks activity
We present a new approach for analyzing multi-channel recordings, such as ECoG (electrocorticograph) recordings of cortical brain activity and of individual neuron dynamics in cultured networks. The latter are used here to illustrate the method and its ability to discover hidden functional connectivity motifs in the recorded activity. The cultured networks are formed from dissociated mixtures of cortical neurons and glia cells that are homogeneously spread over a multi-electrode array for recording of the neuronal activity. Rich, spontaneous dynamical behavior is detected, marked by the formation of temporal sequences of synchronized bursting events (SBEs), partitioned into statistically distinguishable subgroups, each with its own characteristic spatio-temporal pattern of activity. In analogy with coherence connectivity networks for multi-location cortical recordings, we evaluated the inter-neuron correlation matrix for each subgroup. Ordinarily such matrices are mapped onto a connectivity network between neuron positions in real space. In our functional holography, the correlations are normalized by the correlation distances, that is, the Euclidean distances between the matrix columns. Then, we project the N-dimensional (for N channels) space spanned by the matrix of the normalized correlations, or correlation affinities, onto a corresponding 3D manifold (a 3D Cartesian space constructed from the three leading principal vectors of the principal component algorithm). The neurons are located by their principal eigenvalues and linked by their original (not normalized) correlations. These holograms reveal hidden causal motifs: each SBE subgroup generates its characteristic connectivity diagram (network) in the 3D manifold, where the neuron locations and their links form simple structures. Moreover, the computed temporal ordering of neuron activity, when projected onto the connectivity diagrams, also exhibits simple patterns of causal propagation. We show that the method can expose functional connectivity motifs like the co-existence of sub-neuronal functional networks in the space of affinities. The method can be directly utilized to construct similar causal holograms for recorded brain activity. We expect that by doing so, hidden functional connectivity motifs with relevance to the understanding of brain activity might be discovered.
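The core projection step can be sketched as follows, under the assumption that "normalizing by the correlation distances" means dividing each correlation by the Euclidean distance between the corresponding correlation-matrix columns; the surrogate multichannel data and the PCA details are illustrative.

```python
# Sketch: correlation matrix -> correlation affinities -> 3-D principal-component projection.
import numpy as np

rng = np.random.default_rng(7)
n_channels, n_samples = 12, 2000
common = rng.normal(size=n_samples)
data = 0.6 * common + rng.normal(size=(n_channels, n_samples))   # weakly coupled channels

C = np.corrcoef(data)                                            # inter-channel correlations
col_dist = np.sqrt(((C[:, :, None] - C[:, None, :]) ** 2).sum(axis=0))
affinity = C / (col_dist + np.eye(n_channels))                   # eye() avoids dividing by
                                                                 # zero on the diagonal
# Project onto the three leading principal components of the affinity matrix.
A = affinity - affinity.mean(axis=0)
_, _, Vt = np.linalg.svd(A, full_matrices=False)
coords_3d = A @ Vt[:3].T                                         # one 3-D point per channel
print(coords_3d.shape)
```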

17.
Cohen and Grossberg proved that a large class of neural networks with symmetric interaction coefficients admits a global Liapunov function guaranteeing that their trajectories approach equilibrium points. Such networks function as content-addressable memories, and the equilibria are the stored memories. Cohen and Grossberg also conjectured, based upon substantial computational evidence, that networks within a class of mixed cooperative-competitive networks with symmetric interaction coefficients also have this property. This conjecture is here disproved. In particular, a class of homogeneous, distance-dependent, on-center off-surround neural networks is constructed which supports persistent oscillations for appropriate initial data. Such a class is constructed in each even dimension. Similar systems, which have been used to model the dynamics of the hippocampus, are compared to this class of networks to clarify the origins of oscillatory behavior in this class of systems.
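For reference, the symmetric system and global Liapunov function at issue have the standard Cohen-Grossberg form (the notation below follows the usual statement of the 1983 result and is not necessarily this paper's):

$$\dot{x}_i = a_i(x_i)\Big[b_i(x_i) - \sum_{k=1}^{n} c_{ik}\, d_k(x_k)\Big], \qquad c_{ik} = c_{ki},$$

$$V(x) = -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(s)\, d_i'(s)\, ds \;+\; \frac{1}{2}\sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k),$$

for which $\dot{V} = -\sum_i a_i(x_i)\, d_i'(x_i)\big[b_i(x_i) - \sum_k c_{ik}\, d_k(x_k)\big]^2 \le 0$ whenever $a_i \ge 0$ and $d_i' \ge 0$; the constructions in this paper show that the conjectured extension to the mixed cooperative-competitive case fails.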

18.
This paper is concerned with the state estimation problem for a class of Markovian neural networks with discrete and distributed time-delays. The neural networks have a finite number of modes, and the modes may jump from one to another according to a Markov chain. The main purpose is to estimate the neuron states, through available output measurements, such that for all admissible time-delays, the dynamics of the estimation error is globally asymptotically stable in the mean square. An effective linear matrix inequality approach is developed to solve the neuron state estimation problem. Both the existence conditions and the explicit characterization of the desired estimator are derived. Furthermore, it is shown that the traditional stability analysis issue for delayed neural networks with Markovian jumping parameters can be included as a special case of our main results. Finally, numerical examples are given to illustrate the applicability of the proposed design method.
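A generic model of the type treated in this literature (a hedged reconstruction, not necessarily the paper's exact equations) is

$$\dot{x}(t) = -A(r_t)\,x(t) + W_0(r_t)\,g(x(t)) + W_1(r_t)\,g(x(t-\tau)) + W_2(r_t)\!\int_{t-d}^{t}\! g(x(s))\,ds + J, \qquad y(t) = C(r_t)\,x(t),$$

where $r_t$ is the Markov chain of modes, and a mode-dependent Luenberger-type estimator takes the form

$$\dot{\hat{x}}(t) = -A(r_t)\,\hat{x}(t) + W_0(r_t)\,g(\hat{x}(t)) + W_1(r_t)\,g(\hat{x}(t-\tau)) + W_2(r_t)\!\int_{t-d}^{t}\! g(\hat{x}(s))\,ds + J + K(r_t)\big[y(t) - C(r_t)\,\hat{x}(t)\big],$$

with the gains $K(i)$ obtained from mode-dependent linear matrix inequalities that guarantee global asymptotic stability of the estimation error in the mean square.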

19.
It has been shown extensively that the dynamic behavior of a neural system is strongly influenced by the network architecture and learning process. To establish an artificial neural network (ANN) with a self-organizing architecture and a suitable learning algorithm for nonlinear system modeling, an automatic axon–neural network (AANN) is investigated in the following respects. First, the network architecture is constructed automatically, changing both the number of hidden neurons and the topology of the neural network during the training process. The approach, introduced as the adaptive connecting-and-pruning algorithm (ACP), is a type of mixed-mode operation, equivalent to pruning or adding connections between neurons, as well as inserting required neurons directly. Second, the weights are adjusted using a feedforward computation (FC) to obtain the gradient information during learning. Unlike most previous studies, the AANN is able to self-organize its architecture and weights, and to improve network performance. The proposed AANN has been tested on a number of benchmark problems, ranging from nonlinear function approximation to nonlinear system modeling. The experimental results show that the AANN can achieve better performance than some existing neural networks.

20.
Wavelet networks (WNs) are a new class of networks that have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework for applying WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods, and finally methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey–Glass equation, and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York, and breast cancer classification. Our results show that the proposed algorithms produce stable and robust results, indicating that our proposed framework can be applied in various applications.
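A minimal wavelet-network sketch follows: a single hidden layer of dilated and translated "Mexican hat" wavelons plus a direct linear term, with the output weights fitted by least squares on a 1D toy function; the initialization, wavelet choice, and fitting procedure are simplified assumptions rather than the framework proposed in the paper.

```python
# Sketch: wavelet network y_hat = sum_j w_j * psi((x - b_j) / a_j) + linear + bias terms.
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(-4, 4, 400)
y = np.sin(2 * x) * np.exp(-0.1 * x ** 2) + 0.05 * rng.normal(size=x.size)

def mexican_hat(u):
    return (1 - u ** 2) * np.exp(-0.5 * u ** 2)

# Wavelons on a grid of translations b_j with a few dilations a_j.
b = np.linspace(-4, 4, 12)
a = np.array([0.5, 1.0, 2.0])
B, A = np.meshgrid(b, a)
Phi = mexican_hat((x[:, None] - B.ravel()[None, :]) / A.ravel()[None, :])
Phi = np.hstack([Phi, x[:, None], np.ones_like(x)[:, None]])   # direct linear + bias terms

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training MSE:", np.mean((Phi @ w - y) ** 2))
```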
