Similar Literature
1.
The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. This model can solve any two-class classification problem, as opposed to the single-layer perceptron, which can only solve classification problems involving linearly separable sets. For every classification problem, the construction of an RDP is performed automatically and convergence is always guaranteed. Three methods exist for constructing RDP neural networks: Batch, Incremental, and Modular. The Batch method has been extensively tested and shown to produce results comparable with those obtained with other neural network methods such as Back Propagation, Cascade Correlation, Rulex, and Ruleneg. The Incremental and Modular methods, however, had not previously been tested. Unlike the Batch method, the complexity of these two methods is not NP-Complete. For the first time, a study of all three methods is presented. This study highlights the main advantages and disadvantages of each method by comparing the results obtained when building RDP neural networks with the three methods in terms of convergence time, level of generalisation, and topology size. The networks were trained and tested on the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer. The results show the effectiveness of the Incremental and Modular methods, which are as good as the NP-Complete Batch method but with a much lower complexity level. The results obtained with the RDP are comparable to those obtained with the backpropagation and Cascade Correlation algorithms.
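The base case that RDP generalises is the classical single-layer perceptron, which only converges on linearly separable data. As a minimal illustration (this is the standard perceptron learning rule, not the RDP construction itself, and `train_perceptron` is a name introduced here):

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Single-layer perceptron, the base case RDP generalises.
    X: (n, d) inputs, y: labels in {-1, +1}. Converges only when
    the classes are linearly separable, the limitation RDP removes."""
    w = np.zeros(X.shape[1] + 1)                  # weights + bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))]) # append bias input
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                # misclassified
                w += lr * yi * xi                 # perceptron update
                errors += 1
        if errors == 0:                           # converged
            break
    return w

# AND gate: linearly separable, so the perceptron converges
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

On a non-separable problem such as XOR this loop never reaches zero errors, which is the motivation for adding intermediate neurons as the RDP construction does.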

2.
Stability analysis of delayed cellular neural networks
In this paper, the problem of stability in a class of delayed cellular neural networks (DCNN) is studied; some new stability criteria are obtained using the Lyapunov functional method and some analysis techniques. These criteria can be used to design globally stable networks and are thus significant in both theory and application.
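The abstract does not state the model, but stability work of this kind typically concerns the standard delayed cellular neural network (a Hopfield-type form, assumed here for concreteness):

```latex
\dot{x}_i(t) = -c_i x_i(t)
  + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_j)\bigr) + I_i,
\qquad i = 1, \dots, n,
```

where \(c_i > 0\) are the self-feedback rates, \(a_{ij}\) and \(b_{ij}\) the instantaneous and delayed connection weights, \(\tau_j \ge 0\) the transmission delays, and \(I_i\) the external inputs. Lyapunov functionals built on such a model yield delay-independent global stability criteria in terms of \(c_i\), \(a_{ij}\), \(b_{ij}\), and the Lipschitz constants of the activations \(f_j\).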

3.
The current literature indicates that olfactory bulbar input projects throughout layer IA of the entire olfactory tubercle, with apparently more fibers in the lateral part than in the medial part of the tubercle. In addition, olfactory cortical association fibers project to layers IB, II, and III in all regions of the tubercle. This study exploited the phenomenon of transsynaptic transfer of WGA-HRP after injection into the olfactory bulb of rats to explore the degree of olfactory-related input to the tubercle. A computerized image analysis system was employed to quantify the amount of tracer transferred to layer II neurons of the tubercle. Qualitative analysis of the data indicates that the lateral tubercle consists of areas that receive little olfactory-related input. Nonparametric statistical tests and a novel application of artificial neural networks indicate regionally heterogeneous labeling across the tubercle and broad connections between homologous regions of the bulb and tubercle. These results have implications for understanding how olfactory sensory information is integrated into limbic-motor circuits by the olfactory tubercle.

4.
M. Kimura & R. Nakano, Neural Networks, 1998, 11(9): 1589-1599
This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as one extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS, and, as a first step toward the generalization problem for RNNs, we also investigate whether a DS produced by some RNN can be identified from several observed orbits of the DS. First, it is proved that RNNs without hidden units uniquely produce a certain class of DS. Next, neural dynamical systems (NDSs) are proposed as DSs produced by RNNs with hidden units. Moreover, affine neural dynamical systems (A-NDSs) are provided as nontrivial examples of NDSs, and it is proved that any DS can be finitely approximated by an A-NDS with any precision. We propose an A-NDS as a DS that an RNN can actually produce on the visible state space to approximate the target DS. For the generalization problem of RNNs, a geometric criterion is derived in the case of RNNs without hidden units. This theory is also extended to the case of RNNs with hidden units for learning A-NDSs.

5.
王欣萍, 孙昕, 《中国神经再生研究》, 2011, 15(35): 6592-6595
Background: Electronic medical records contain a large amount of medical information that can support clinical diagnosis and decision-making. Objective: To use a BP (back-propagation) artificial neural network for data mining of electronic medical records. Methods: The principles and algorithm of the BP artificial neural network were analysed, and six steps for building a BP artificial neural network model were proposed: determining the training dataset, preparing the data, building the network model, performing the data mining, evaluating the results obtained by the BP network, and applying the predictions. Related applications of BP artificial neural networks to electronic medical records were also analysed. Results and conclusion: A BP artificial neural network can be used to analyse electronic medical records, make predictions, and identify existing risk factors, confirming that BP artificial neural networks have practical value in the analysis of data from electronic medical record systems.
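A BP (back-propagation) network of the kind this abstract applies to medical-record mining can be sketched minimally. The toy data and all names below are illustrative (XOR stands in for record features; the abstract gives no architecture details):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for record data: XOR, which no single-layer net can fit
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# one hidden layer of 4 sigmoid units
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def mse():
    h = sigmoid(X @ W1 + b1)
    return float(np.mean((sigmoid(h @ W2 + b2) - y) ** 2))

loss_before = mse()
lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backward pass (MSE gradient)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
loss_after = mse()
```

The six modelling steps in the abstract map onto this loop: the dataset (`X`, `y`), data preparation, model construction (`W1`, `W2`), the mining run (training loop), evaluation (`mse`), and application of the trained predictions.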

6.
Background: No literature, domestic or international, has reported combining artificial intelligence and artificial neural networks in the mental health field, let alone combining them to simulate the diagnostic reasoning of human medical experts for diagnosing childhood psychological disorders. Objective: To simulate the brain's diagnostic reasoning by computer and build an artificial-intelligence expert system, based on artificial neural networks and an expert system, for the standardised diagnosis, prevention, and treatment of childhood psychological disorders. Methods: The system involves child psychology, child psychiatry, psychometrics, psychotherapy, computer science, and other disciplines. The diagnostic system combines the ICD-10, DSM-IV, and CCMD-2 diagnostic criteria, large-scale epidemiological survey data, and the rich clinical experience and clinical records of senior psychiatric experts. The clinical data came from epidemiological surveys and outpatient cases collected at 14 hospitals nationwide; 1 125 valid records were obtained, and the intelligent diagnostic system was built by combining neural networks with an expert system. Results and conclusion: The system can diagnose 61 childhood psychological disorders, covering more than 95% of such disorders, and after diagnosis the computer provides a treatment recommendation. In a double-blind comparison of 195 cases, the agreement between the system's diagnoses and those of senior child psychiatry experts was 99%. The system helps young physicians learn from the clinical experience of senior experts and can assist children with psychological disorders nationwide, better serving children's mental health care.

7.
An on-line identification scheme using Volterra polynomial basis function (VPBF) neural networks is considered for nonlinear control systems. This comprises a structure selection procedure and a recursive weight learning algorithm. The orthogonal least-squares algorithm is introduced for off-line structure selection and the growing network technique is used for on-line structure selection. An on-line recursive weight learning algorithm is developed to adjust the weights so that the identified model can adapt to variations of the characteristics and operating points in nonlinear systems. The convergence of both the weights and the estimation errors is established using a Lyapunov technique. The identification procedure is illustrated using simulated examples.
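A basis-function network of this kind is linear in its output weights, so recursive on-line weight updating can be illustrated with a standard recursive least-squares step (the paper's own learning algorithm is Lyapunov-based and differs; this is a generic stand-in, and `rls_step` with its toy polynomial system is introduced here for illustration):

```python
import numpy as np

def rls_step(w, P, phi, y, lam=0.99):
    """One recursive least-squares update for a model y ~ phi @ w that
    is linear in its weights, as a VPBF network is in its output layer.
    lam < 1 is a forgetting factor that lets the estimate track drift."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)       # gain vector
    w = w + k * (y - phi @ w)           # correct weights on the error
    P = (P - np.outer(k, Pphi)) / lam   # covariance update
    return w, P

# identify y = 2*u + 3*u^2 on-line from noisy samples
rng = np.random.default_rng(1)
w = np.zeros(2)
P = np.eye(2) * 1000.0                  # large P = uninformative prior
for _ in range(500):
    u = rng.uniform(-1.0, 1.0)
    phi = np.array([u, u * u])          # polynomial basis [u, u^2]
    y = 2 * u + 3 * u * u + rng.normal(0.0, 0.01)
    w, P = rls_step(w, P, phi, y)
```

After a few hundred samples `w` settles near the true coefficients (2, 3), showing how a recursive scheme adapts weight estimates sample by sample rather than in batch.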

8.
Here we study the multivariate quantitative constructive approximation of real and complex valued continuous multivariate functions on a box or on ℝ^N, N ∈ ℕ, by the multivariate quasi-interpolation sigmoidal neural network operators. The "right" operators for our goal are fully and precisely described. This approximation is derived by establishing multidimensional Jackson-type inequalities involving the multivariate modulus of continuity of the engaged function or its high-order partial derivatives. Our multivariate operators are defined using a multidimensional density function induced by the logarithmic sigmoidal function. The approximations are pointwise and uniform. The related feed-forward neural network has one hidden layer.

9.
The nervous system is the most complex object we know of. It is a spatially distributed, functionally differentiated network formed by axonal connections between defined neuron populations and effector cells. Computer science provides exciting new tools for archiving, analyzing, synthesizing, and modeling on the Web vast amounts of frequently conflicting and incomplete qualitative and quantitative data about the organization and molecular mechanisms of neural networks. To optimize conceptual advances in systems neuroscience, it is important for the research and publishing communities to embrace three exercises: using defined nomenclatures; populating databases; and providing feedback to developers about improved design, performance, and functionality of knowledge management systems and associated visualization tools.

10.
The migration and network formation of neural stem cells has been one of the hot topics in neuroscience for nearly a century. Relatively mature theories now explain the complex mechanisms of neural stem cell migration and network formation during nervous system growth and development, and these have been further confirmed and applied in the neural stem cell transplantation techniques that have emerged in recent years. This article reviews the phenomenon of neural stem cell migration and the possible mechanisms of its directional migration, research progress on neural networks, and the significance of neural stem cell migration and network formation.

11.
12.
This paper presents two novel neural networks based on snap-drift in the context of self-organisation and sequence learning. The snap-drift neural network employs modal learning that combines two modes: fuzzy AND learning (snap) and Learning Vector Quantisation (drift). We present the snap-drift self-organising map (SDSOM) and the recurrent snap-drift neural network (RSDNN). The SDSOM uses the standard SOM architecture, where a layer of input nodes connects to the self-organising map layer and the weight update consists of either snap (min of input and weight) or drift (LVQ, as in SOM). The RSDNN uses a simple recurrent network (SRN) architecture, with the hidden layer values copied back to the input layer. A form of reinforcement learning is deployed in which the mode is swapped between snap and drift when performance drops, and in which adaptation is probabilistic, whereby the probability of a neuron being adapted is reduced as performance increases. The algorithms are evaluated on several well-known datasets, and these networks are found to exhibit effective learning that is faster than that of alternative neural network methods.
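The two weight-update modes named in the abstract can be sketched directly from their descriptions there: snap is an elementwise min of input and weight (fuzzy AND), drift an LVQ-style move toward the input. The function name and the drift rate `beta` are illustrative assumptions:

```python
import numpy as np

def snap_drift_update(w, x, mode, beta=0.3):
    """Modal update sketched from the abstract: 'snap' takes the
    elementwise min of input and weight (fuzzy AND); 'drift' moves
    the winning weight vector a fraction beta toward the input."""
    if mode == "snap":
        return np.minimum(w, x)
    return w + beta * (x - w)       # LVQ-style drift

w = np.array([0.8, 0.2, 0.6])       # current weight vector
x = np.array([0.5, 0.9, 0.1])       # input pattern

w_snap = snap_drift_update(w, x, "snap")
w_drift = snap_drift_update(w, x, "drift")
```

Snap can only shrink weights toward the common features of the inputs, while drift tracks the input mean; alternating the modes on a performance signal is what gives the method its modal character.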

13.
In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. To minimize sidelobe interference, the problem is formulated as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weights within the feasible region, which is derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under an array-mismatch situation. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints.

14.
In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. Many feature selectors are described in the literature, each with various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing them to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector, and the actual features to select, to be identified for large and high-dimensional datasets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop.

15.
In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush–Kuhn–Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.
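The core mechanism, a gradient flow on an exact penalty function whose trajectories enter the feasible region and then settle at a KKT point, can be illustrated on a one-dimensional toy problem. This scalar Euler-discretised flow is an assumption-laden sketch, not the paper's actual network:

```python
def penalty_flow(x0, sigma=10.0, dt=0.001, steps=20000):
    """Euler-discretised gradient flow for: minimize x^2 s.t. x >= 1,
    via the exact penalty E(x) = x^2 + sigma * max(0, 1 - x).
    With sigma large enough, the flow reaches the feasible set x >= 1
    in finite time and then settles near the optimum x = 1."""
    x = x0
    for _ in range(steps):
        # subgradient of E: 2x outside the penalty, 2x - sigma inside
        grad = 2.0 * x + (-sigma if x < 1.0 else 0.0)
        x -= dt * grad
    return x

x_star = penalty_flow(-2.0)   # start infeasible, well below x = 1
```

Starting from the infeasible point x = -2, the penalty term dominates and drives the state into the feasible region; once there, the objective gradient holds it at the constrained minimum x = 1, mirroring the finite-time feasibility result stated in the abstract.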

16.
Alzheimer’s disease patients diagnosed with the Chinese Classification of Mental Disorders diagnostic criteria were selected from the community through on-site sampling. Levels of macro and trace elements were measured in blood samples using an atomic absorption method, and neurotransmitters were measured using a radioimmunoassay method. SPSS 13.0 was used to establish a database, and a back propagation artificial neural network for Alzheimer’s disease prediction was simulated using Clementine 12.0 software. With scores of activities of daily living, creatinine, 5-hydroxytryptamine, age, dopamine and aluminum as input variables, the results revealed that the area under the curve in our back propagation artificial neural network was 0.929 (95% confidence interval: 0.868-0.968), sensitivity was 90.00%, specificity was 95.00%, and accuracy was 92.50%. The findings indicated that the results of the back propagation artificial neural network established based on the above six variables were satisfactory for screening and diagnosis of Alzheimer’s disease in patients selected from the community.

17.
A deterministic neural network concept for a “universal approximator” is proposed. The network has two hidden layers; only the synapses of the output layer are required to be plastic, and only those depend on the function to be approximated. It is shown that a DEterministic Function Approximation Network (DEFAnet) allows one to approximate an arbitrary continuous function from the finite-dimensional unit interval into the finite-dimensional real space with arbitrary accuracy; arbitrary Boolean functions may be implemented exactly in a simple subset of DEFAnets. In a supervised learning scheme, convergence to the desired function is guaranteed; back propagation of errors is not required. The concept is also open to reinforcement learning. In addition, when the topology of the network is determined according to the DEFAnet concept, it is possible to calculate all plastic synaptic weights in closed form, reducing the training considerably or replacing it altogether. Efficient algorithms for the calculation of synapse weights are given.

18.
A fast prototype-based nearest neighbor classifier is introduced. The proposed Adjusted SOINN Classifier (ASC) is based on SOINN (self-organizing incremental neural network); it automatically learns the number of prototypes needed to determine the decision boundary, and learns new information without destroying previously learned information. It is robust to noisy training data, and it performs very fast classification. In the experiments, we use artificial and real-world datasets to illustrate ASC, and we compare ASC with other prototype-based classifiers with regard to classification error, compression ratio, and speed-up ratio. The results show that ASC has the best performance and is a very efficient classifier.
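The decision rule such a prototype-based classifier ends up with is a nearest-prototype lookup, which is what makes classification fast once the (few) prototypes are learned. The SOINN part that learns the prototypes is not sketched; the prototypes, labels, and function name below are illustrative:

```python
import numpy as np

def nearest_prototype(prototypes, labels, x):
    """Classify x by the label of its nearest prototype, the decision
    rule of a prototype-based classifier. Cost is O(#prototypes),
    typically far fewer than the training points they summarise."""
    d = np.linalg.norm(prototypes - x, axis=1)   # distances to prototypes
    return labels[int(np.argmin(d))]

# three learned prototypes, one per class (hypothetical values)
protos = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
labels = ["a", "b", "c"]

pred = nearest_prototype(protos, labels, np.array([0.9, 0.8]))
```

The compression ratio the abstract compares is precisely the ratio of training points to prototypes retained by this rule.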

19.
Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network.

20.
In this paper, we study the convergence behavior of delayed discrete cellular neural networks without periodic coefficients. By applying mathematical analysis techniques and the properties of inequalities, some sufficient conditions are derived to ensure that all solutions of a delayed discrete cellular neural network without periodic coefficients converge to a periodic function. Finally, some examples showing the effectiveness of the proposed criterion are given.

