Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
This paper proposes a hybrid model combining an Artificial Neural Network (ANN) with a Genetic Algorithm procedure for selecting diagnostic risk factors in medicine. Medical disease prediction may be viewed as a pattern classification problem based on a set of clinical and laboratory parameters. Probabilistic Neural Network models were assessed in terms of their classification accuracy for medical disease prediction. A Genetic Algorithm search was performed to examine potential redundancy among the diagnostic factors. This search led to a pruned ANN architecture, minimizing the number of diagnostic factors used during the training phase, and therefore the number of nodes in the ANN input and hidden layers, as well as the Mean Square Error of the trained ANN at the testing phase. In conclusion, a number of diagnostic factors in a patient's data record can be omitted without loss of fidelity in the diagnosis procedure.
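The wrapper scheme described above, a GA searching over feature bitmasks with a classifier's accuracy as fitness, can be sketched as follows. All data, the nearest-centroid stand-in for the Probabilistic Neural Network, and every parameter value here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy diagnostic data: 200 patients, 8 candidate factors; only the first
# three factors carry class information (by construction of y).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Accuracy of a nearest-centroid classifier on the selected factors,
    minus a small penalty per factor (stands in for the PNN assessment)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - 0.01 * mask.sum()

# Plain generational GA over bitmasks: truncation selection,
# uniform crossover, bit-flip mutation.
pop = rng.random((30, 8)) < 0.5
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(8) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(8) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)   # pruned factor mask
```

The penalty term is what drives the pruning: two masks with equal accuracy are ranked by how few factors they use, mirroring the paper's goal of minimizing the ANN input layer.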

3.
Automatic target recognition (ATR) is a domain in which neural network technology has been applied with limited success. The domain is characterized by large training sets with dissimilar target images carrying conflicting information. This paper presents a novel method for quantifying the degree of non-cooperation among the target members of the training set. Both the network architecture and the training algorithm are considered in computing the non-cooperation measures. Based on these measures, the self-partitioning neural network (SPNN) approach partitions the target vectors into an appropriate number of groups and trains one subnetwork to recognize the targets in each group. A fusion network combines the outputs of the subnetworks to produce the final response. The method automatically determines the number of subnetworks needed without excessive computation. The subnetworks are simple, with only one hidden layer and one unit in the output layer, and are topologically identical to one another. Simulation results indicate that the method is robust and capable of self-organization, overcoming the ill effects of non-cooperating targets in the training set. The self-partitioning approach improves classification accuracy and significantly reduces the training time of neural networks. It is also shown that a trained self-partitioning neural network can learn new training vectors without retraining on the combined training set (i.e., the set consisting of the previous and newly acquired training vectors).

4.
5.
Machine learning methods that can handle variable-size structured data such as sequences and graphs include Bayesian networks (BNs) and Recursive Neural Networks (RNNs). In both classes of models, the data is modeled using a set of observed and hidden variables associated with the nodes of a directed acyclic graph. In BNs, the conditional relationships between parent and child variables are probabilistic, whereas in RNNs they are deterministic and parameterized by neural networks. Here, we study the formal relationship between the two classes of models and show that when the source node variables are observed, RNNs can be viewed as limits, both in distribution and in probability, of BNs whose local conditional distributions have vanishing covariance matrices and converge to delta functions. Conditions for uniform convergence are also given, together with an analysis of the behavior and exactness of Belief Propagation (BP) in 'deterministic' BNs. Implications for the design of mixed architectures and the corresponding inference algorithms are briefly discussed.
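One way to make the limiting construction concrete is with Gaussian local conditionals whose covariance is driven to zero; the Gaussian form is our illustrative choice, since the abstract only requires vanishing covariance:

```latex
% Local conditional at node i with parents pa(i): a Gaussian centered on
% the neural-network map f_i, with covariance sigma^2 I.
p_\sigma\!\left(x_i \mid x_{\mathrm{pa}(i)}\right)
  = \mathcal{N}\!\left(x_i \,;\, f_i\!\left(x_{\mathrm{pa}(i)}\right),\, \sigma^2 I\right)
% As sigma -> 0 this converges to the delta function
%   \delta\!\left(x_i - f_i(x_{\mathrm{pa}(i)})\right),
% and ancestral sampling in the BN reduces to the deterministic forward
% propagation x_i = f_i(x_{\mathrm{pa}(i)}) of the recursive neural network.
```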

6.
A back-propagation network was trained to recognize high-voltage spike-wave spindle (HVS) patterns in the rat, a rodent model of human petit mal epilepsy. The spontaneously occurring HVSs were examined in 137 rats of the Fisher 344 and Brown Norway strains and their F1, F2 and backcross hybrids. Neocortical EEG and movement of the rat were recorded for 12 night hours in each animal, and the analog data were filtered (low cut: 1 Hz; high cut: 50 Hz) and sampled at 100 Hz with 12-bit precision. A training data set was generated by manually marking the durations of HVS epochs in 16 representative animals selected from each group. Training data were presented to back-propagation networks with variable numbers of input, hidden and output cells. The performance of different types of networks was first examined on the training samples, and the best configuration was then tested on novel sets of EEG data. FFT transformation of the EEG significantly improved the pattern recognition ability of the network. With the most effective configuration (16 input, 19 hidden, and 1 output cells), the summed squared error dropped by 80% compared with that of the initial random weights. When the network was tested on new patterns, the manual and automatic evaluations were compared quantitatively. HVSs correctly detected by the network reached 93–99% of the manually marked HVS patterns, while falsely detected events (non-HVS, artifacts) varied between 18% and 40%. These findings demonstrate the utility of back-propagation networks in automatic recognition of EEG patterns.
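The FFT pre-processing step that improved recognition can be sketched as a band-power feature extractor feeding the 16-input network. The 100 Hz sampling rate matches the study; the 1-second window, Hann taper, and band layout are our illustrative assumptions:

```python
import numpy as np

fs = 100.0                      # sampling rate used in the study (100 Hz)
t = np.arange(0, 1.0, 1 / fs)  # one 1-second EEG window (window length assumed)

# Synthetic segment: an 8 Hz spike-wave-like rhythm plus noise.
rng = np.random.default_rng(1)
segment = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.normal(size=t.size)

def spectral_features(x, n_features=16):
    """Compress the magnitude spectrum into n_features band powers,
    matching the 16-input configuration reported as most effective."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size))) ** 2
    bands = np.array_split(spec[1:], n_features)   # drop DC, split into bands
    return np.array([b.mean() for b in bands])

feats = spectral_features(segment)   # 16 inputs for the back-prop network
```

For the synthetic segment, the band covering the 8 Hz rhythm dominates the feature vector, which is the kind of spectral cue a small network can separate far more easily than raw samples.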

7.
We show that a neural network with Hebbian learning and transmission delays will automatically perform invariant pattern recognition for a one-parameter transformation group. To do this it has to experience a learning phase in which static objects are presented as well as objects that are continuously undergoing small transformations. Our network is fully connected and starts with zero initial synapses so it does not require any a priori knowledge of the transformation group. From the information contained in the “moving” input, the network creates its internal representation of the transformation connecting the moving states. If the network cannot perform this transformation exactly, we show that in general the network representation will be a sensible approximation in terms of state overlaps. The limitation of our model is that we can implement only one-parameter transformation groups.

8.
This study investigates fractional Fourier transform pre-processing of input signals to neural networks. The fractional Fourier transform is a generalization of the ordinary Fourier transform with an order parameter a. Judicious choice of this parameter can lead to overall improvement of the neural network performance. As an illustrative example, we consider recognition and position estimation of different types of objects based on their sonar returns. Raw amplitude and time-of-flight patterns acquired from a real sonar system are processed, demonstrating reduced error in both recognition and position estimation of objects.

9.
Information complexity of neural networks. (Total citations: 1; self-citations: 0; citations by others: 1)
This paper studies lower bounds on the number of neurons and examples necessary to program a given task into feedforward neural networks. We introduce the notion of the information complexity of a network to complement that of neural complexity. Neural complexity deals with lower bounds for the neural resources (number of neurons) needed by a network to perform a given task within a given tolerance. Information complexity measures lower bounds for the information (i.e., number of examples) needed about the desired input-output function. We study the interaction of the two complexities, and thus lower bounds for the complexity of building and then programming feedforward nets for given tasks. We show something unexpected a priori: the interaction of the two can be simply bounded, so that they can be studied essentially independently. We construct radial basis function (RBF) algorithms of order n^3 that are information-optimal, and give example applications.
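An exact-interpolation RBF network illustrates where the n^3 order comes from: with one Gaussian unit per example, fitting reduces to solving an n x n linear system, which costs O(n^3). The target function, kernel width, and example count below are illustrative choices, not the paper's constructions:

```python
import numpy as np

# Target function to "program" into the network, sampled at n examples.
n = 20
x = np.linspace(-1, 1, n)
y = np.sin(np.pi * x)

def rbf_fit(x, y, width=0.2):
    """Exact-interpolation RBF network: one Gaussian unit per example.
    Solving the n x n system for the output weights costs O(n^3)."""
    G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * width ** 2))
    w = np.linalg.solve(G, y)
    return lambda q: np.exp(
        -((q[:, None] - x[None, :]) ** 2) / (2 * width ** 2)
    ) @ w

f = rbf_fit(x, y)
err = np.max(np.abs(f(x) - y))   # interpolates the training examples
```

The network reproduces all n training examples, making explicit the "number of examples" resource that information complexity counts.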

10.
On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been used mainly for fast frame-based image and speech recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilise both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model, called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to dynamically evolve the network's connection weights, which capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied to two case studies: (1) moving-object recognition using address-event representation (AER), with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models achieved superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes (information that is explicitly present in input data streams and is crucial to consider in some tasks) and the information contained in the timing of the following spikes, which is learned by the dynamic synapses as a whole spatio-temporal pattern.
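The two ingredients named above, rank-order initialisation from the first spikes plus spike-driven drift for the following spikes, can be sketched as follows. The `mod` and `drift` values are illustrative parameter choices, and the drift rule is a coarse caricature of SDSP, not its published form:

```python
# Rank-order weight initialisation as used in eSNN-style models: the earlier
# an input spikes, the larger its initial weight (w = mod ** rank).
def rank_order_weights(spike_times, mod=0.8):
    order = sorted(range(len(spike_times)), key=lambda i: spike_times[i])
    w = [0.0] * len(spike_times)
    for rank, i in enumerate(order):
        w[i] = mod ** rank
    return w

# SDSP-style drift on the following spikes: synapses that keep receiving
# spikes are strengthened, silent ones decay toward zero.
def sdsp_update(w, spiked, drift=0.05):
    return [wi + drift if s else max(wi - drift, 0.0)
            for wi, s in zip(w, spiked)]

w0 = rank_order_weights([3.0, 1.0, 2.0])   # input 1 spikes first
w1 = sdsp_update(w0, [True, False, True])  # inputs 0 and 2 spike again
```

After initialisation, input 1 (the earliest spike) has the largest weight; the drift step then strengthens the synapses that continued spiking and weakens the silent one, which is the combination the deSNN exploits.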

11.
On impulsive autoassociative neural networks. (Total citations: 5; self-citations: 0; citations by others: 5)
Z. H. Guan, J. Lam, G. Chen. Neural Networks, 2000, 13(1): 63-69.
Many systems in physics, chemistry, biology, engineering, and information science can be characterized by impulsive dynamics caused by abrupt jumps at certain instants during the process. Such complex dynamical behaviors can be modeled by impulsive differential systems or impulsive neural networks. This paper formulates and studies a new model of impulsive autoassociative neural networks. Several fundamental issues, such as global exponential stability and the existence and uniqueness of equilibria of such neural networks, are established.
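A minimal simulation shows what "continuous flow plus abrupt jumps" means for such a network. The two-neuron system, all parameter values, and the contractive jump map are illustrative; the paper treats the general model analytically:

```python
import numpy as np

# Euler simulation of a 2-neuron impulsive Hopfield-type network:
# continuous dynamics  x' = -x + W tanh(x) + I  between impulse instants,
# with state jumps  x -> x + J x  at t = 1, 2, 3, ...
W = np.array([[0.0, -0.5], [0.5, 0.0]])
I = np.array([0.1, -0.1])
J = np.array([[-0.3, 0.0], [0.0, -0.3]])   # impulses shrink the state

dt, T = 0.001, 10.0
x = np.array([2.0, -2.0])
impulse_steps = {int(k / dt) for k in range(1, int(T))}

trajectory = []
for step in range(int(T / dt)):
    x = x + dt * (-x + W @ np.tanh(x) + I)   # smooth flow
    if step in impulse_steps:
        x = x + J @ x                        # abrupt jump at the instant
    trajectory.append(x.copy())

final = trajectory[-1]
```

With the leak term dominating and contractive jumps, the trajectory settles near a unique equilibrium despite the discontinuities, which is the qualitative behaviour (global exponential stability) that the paper's conditions guarantee.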

12.
Electroencephalogram processing using neural networks. (Total citations: 2; self-citations: 0; citations by others: 2)
The electroencephalogram (EEG), a highly complex signal, is one of the most common sources of information used to study brain function and neurological disorders. More than 100 current neural network applications dedicated to EEG processing are presented. Works are categorized according to their objective (sleep analysis, monitoring anesthesia depth, brain-computer interfaces, EEG artifact detection, EEG source localization, etc.). Each application involves a specific approach (long-term analysis or short-term EEG segment analysis, real-time or time-delayed processing, single- or multiple-channel EEG analysis, etc.), for which neural networks were generally successful. The promising performances observed demonstrate the efficiency and efficacy of the systems developed. This review can help researchers, clinicians and implementors understand current interest in neural network tools for EEG processing. The extended bibliography provides a database to assist in the development of new concepts and ideas.

13.
Dynamics of periodic delayed neural networks. (Total citations: 9; self-citations: 0; citations by others: 9)
This paper formulates and studies a model of periodic delayed neural networks. The model describes many practical architectures of delayed neural networks; it is a generalization of additive delayed neural networks, such as delayed Hopfield neural networks and delayed cellular neural networks, to a time-varying environment, particularly when the network parameters and input stimuli vary periodically with time. Without assuming smoothness, monotonicity or boundedness of the activation functions, two functional issues in the neuronal dynamics of these periodic networks, i.e. the existence and global exponential stability of periodic solutions, are investigated. Some explicit and conclusive results are established, which are natural extensions and generalizations of corresponding results in the literature. Furthermore, examples and simulations are presented to illustrate the practical nature of the new results.
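The existence of a stable periodic solution can be illustrated numerically on a one-neuron delayed network driven by a periodic input. The scalar model, its parameters, and the convergence check are all our illustrative choices, not examples from the paper:

```python
import math
from collections import deque

# Euler simulation of  x'(t) = -x(t) + a*tanh(x(t - tau)) + sin(omega*t):
# a delayed network under a periodic stimulus of period 1.
a, tau, omega = 0.5, 1.0, 2 * math.pi
dt = 0.001
delay_steps = int(tau / dt)
hist = deque([0.0] * (delay_steps + 1), maxlen=delay_steps + 1)

xs = []
for step in range(int(40 / dt)):           # simulate 40 time units
    x, x_delayed = hist[-1], hist[0]
    x_new = x + dt * (-x + a * math.tanh(x_delayed)
                      + math.sin(omega * step * dt))
    hist.append(x_new)
    xs.append(x_new)

# A globally exponentially stable periodic solution repeats itself:
# compare the last two input periods of the trajectory.
period = int(1 / dt)
last, prev = xs[-period:], xs[-2 * period:-period]
mismatch = max(abs(u - v) for u, v in zip(last, prev))
```

Because the delayed feedback gain (0.5) is smaller than the leak rate, the transient dies out and consecutive periods of the trajectory coincide, which is the qualitative content of the stability results.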

14.
Exponential stability of Cohen-Grossberg neural networks. (Total citations: 13; self-citations: 0; citations by others: 13)
Exponential stability of the Cohen-Grossberg neural network, with and without delays, is analyzed. Using Liapunov functions/functionals, sufficient conditions are obtained for general exponential stability, while componentwise exponential stability is discussed using a comparison result from the theory of monotone dynamical systems. All results are established without assuming any symmetry of the connection matrix, or the differentiability and monotonicity of the activation functions.

15.
In models of associative memory composed of pulse neurons, chaotic pattern transitions, in which the pattern retrieved by the network changes chaotically, were found. The network is composed of multiple modules of pulse neurons; when the inter-module connection strength decreased, pattern retrieval changed from stable to chaotic. It was found that mixed patterns of the stored patterns play an important role in the chaotic pattern transitions.

16.
We present a neurobiologically-inspired stochastic cellular automaton whose state jumps with time between the attractors corresponding to a series of stored patterns. The jumping varies from regular to chaotic as the model parameters are modified. The resulting irregular behavior, which mimics the state of attention in which a system shows a great adaptability to changing stimulus, is a consequence in the model of short-time presynaptic noise which induces synaptic depression. We discuss results from both a mean-field analysis and Monte Carlo simulations.

17.
The Sensor Exploitation Group of MIT Lincoln Laboratory incorporated an early version of the ARTMAP neural network as the recognition engine of a hierarchical system for fusion and data mining of registered geospatial images. The Lincoln Lab system has been successfully fielded, but is limited to target/non-target identifications and does not produce whole maps. Procedures defined here extend these capabilities by means of a mapping method that learns to identify and distribute arbitrarily many target classes. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of canonical algorithms and a benchmark testbed has enabled the evaluation of candidate recognition networks as well as pre- and post-processing and feature selection options. The resulting mapping methodology sets a standard for a variety of spatial data mining tasks. In particular, training pixels are drawn from a region that is spatially distinct from the mapped region, which could feature an output class mix that is substantially different from that of the training set. The system recognition component, default ARTMAP, with its fully specified set of canonical parameter values, has become the a priori system of choice among this family of neural networks for a wide variety of applications.

18.
Abduction is the process of proceeding from data describing a set of observations or events, to a set of hypotheses which best explains or accounts for the data. Cost-based abduction (CBA) is a formalism in which evidence to be explained is treated as a goal to be proven, proofs have costs based on how much needs to be assumed to complete the proof, and the set of assumptions needed to complete the least-cost proof are taken as the best explanation for the given evidence. In previous work, we presented a method for using high order recurrent networks to find least cost proofs for CBA instances. Here, we present a method that significantly reduces the size of the neural network that is produced for a given CBA instance. We present experimental results describing the performance of this method and comparing its performance to that of the previous method.
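The CBA problem itself, find the cheapest set of assumptions that completes a proof of the evidence, can be stated as a tiny brute-force search. The instance below (hypotheses, costs, rules) is entirely made up for illustration; the recurrent networks in the paper approximate exactly this minimisation on much larger instances:

```python
from itertools import chain, combinations

# Tiny cost-based abduction instance: assumable hypotheses with costs,
# and Horn-style rules  body -> head.
costs = {"rain": 4.0, "sprinkler": 3.0, "broken_pipe": 9.0}
rules = [
    ({"rain"}, "wet_grass"),
    ({"sprinkler"}, "wet_grass"),
    ({"broken_pipe"}, "sprinkler_area_wet"),
]
evidence = "wet_grass"

def provable(assumed, goal):
    """Forward-chain from the assumed hypotheses and check the goal."""
    known = set(assumed)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return goal in known

def best_explanation():
    hyps = list(costs)
    subsets = chain.from_iterable(
        combinations(hyps, r) for r in range(len(hyps) + 1)
    )
    feasible = [s for s in subsets if provable(s, evidence)]
    return min(feasible, key=lambda s: sum(costs[h] for h in s))

best = best_explanation()
```

Here the single assumption `sprinkler` (cost 3.0) beats `rain` (cost 4.0), so it is returned as the least-cost proof's assumption set; the exhaustive subset search is exponential, which is why network-based approximations are of interest.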

19.
Data pruning and ordered training are two methods derived from a small theory that attempts to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to them, assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the 'axiomatic' basis of our theory. Both methods have been applied to a time-delay neural network, which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper, along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian.
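A data-pruning pass in the spirit described above can be sketched as a neighbour-agreement filter: drop samples whose label disagrees with the majority of their nearest neighbours, treating them as label noise. The criterion, the 1-D toy data, and the parameter `k` are our illustrative choices, not the paper's method:

```python
# Prune samples whose label disagrees with the majority of their
# k nearest neighbours (simple stand-in for "removing noisy data").
def prune(points, labels, k=3):
    keep = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        dists = sorted(
            (abs(p - q), labels[j]) for j, q in enumerate(points) if j != i
        )
        neighbour_labels = [l for _, l in dists[:k]]
        if neighbour_labels.count(lab) * 2 > k:   # majority agrees -> keep
            keep.append(i)
    return keep

points = [0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 5.2, 0.15]
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]  # last sample mislabelled
kept = prune(points, labels)   # the mislabelled sample is dropped
```

The mislabelled point at 0.15 sits inside the "a" cluster, so all of its neighbours outvote its "b" label and it is removed before training.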

20.
H. K. Lee. Neural Networks, 2000, 13(6): 629-642.
In this paper we show that the posterior distribution for feedforward neural networks is asymptotically consistent. This extends earlier results on the universal approximation properties of neural networks to the Bayesian setting. The proof of consistency embeds the problem in a density estimation problem, uses bounds on the bracketing entropy to show that the posterior is consistent over Hellinger neighborhoods, and then relates this result back to the regression setting. We show consistency both when the number of hidden nodes grows with the sample size and when the number of hidden nodes is treated as a parameter. Thus we provide a theoretical justification for using neural networks for nonparametric regression in a Bayesian framework.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号