  Paid full text   16
  Free   0
Basic medicine   4
Internal medicine   1
Neurology   10
Special medicine   1
  2022   1
  2016   3
  2014   1
  2011   1
  2005   1
  2003   3
  2001   1
  2000   1
  1999   1
  1995   2
  1988   1
Sort order: 16 results found (search time: 15 ms)
1.
Isomura Y  Kato N 《Brain research》2000,883(1):26-124
The amplitude of backpropagating action potentials (BAPs) is attenuated, in either an activity- or a neurotransmitter-dependent manner, in the apical dendrite of hippocampal pyramidal neurons. To test the possibility that this BAP attenuation may contribute to regulating the inducibility of long-term potentiation (LTP), BAPs evoked by theta-burst stimulation (TBS, a standard protocol for LTP induction) delivered to apical dendrite synapses were perturbed by conditioning stimuli delivered to basal dendrite synapses. During this conditioned TBS (cTBS), the amplitude of the BAPs was noticeably attenuated, whereas that of somatic action potentials was not. In the distal dendritic region, cTBS-induced LTP was much smaller than TBS-induced LTP. By contrast, no difference was observed between TBS- and cTBS-induced LTP in the proximal dendritic region. These findings suggest that the activity-dependent attenuation of BAPs propagating along the apical dendrite may serve to regulate hippocampal synaptic plasticity.
2.
The purpose of this investigation is to establish a practical method for predicting and reconstructing the surface profile of bone defects with a well-trained 3-D orthogonal neural network. First, the coordinates of the skeletal positions around the boundary of the bone defect are input into the 3-D orthogonal neural network so that it learns the scattering characteristics of the boundary. The 3-D orthogonal neural network avoids local minima and converges rapidly. After the neural network has been well trained, the mathematical model of the bone defect surface is generated and the pixel positions are derived. To verify its performance, the proposed method is applied to a patient with a craniofacial defect.
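The abstract does not spell out the orthogonal network's internals, so the sketch below only illustrates the overall idea: fit a smooth surface to scattered boundary coordinates and then evaluate it on a pixel grid. A plain one-hidden-layer network trained by backpropagation stands in for the 3-D orthogonal network, and the data, network size, and training settings are hypothetical.

    import numpy as np

    # Hypothetical scattered boundary points (x, y, z); in practice these would be
    # the skeletal coordinates measured around the patient's defect.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    z = np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])   # stand-in surface heights

    # Generic one-hidden-layer network trained by backpropagation on squared error.
    H, lr = 32, 0.05
    W1 = rng.normal(0, 0.5, (H, 2)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (1, H)); b2 = np.zeros(1)
    for _ in range(2000):
        h = np.tanh(xy @ W1.T + b1)                  # hidden activations, (N, H)
        err = (h @ W2.T + b2)[:, 0] - z              # prediction error, (N,)
        gW2 = err[None, :] @ h / len(z); gb2 = err.mean(keepdims=True)
        dh = (err[:, None] @ W2) * (1.0 - h ** 2)    # backpropagated hidden error
        gW1 = dh.T @ xy / len(z); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    # Evaluate the fitted surface on a pixel grid covering the defect region.
    gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    surface = (np.tanh(grid @ W1.T + b1) @ W2.T + b2).reshape(50, 50)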
3.
Backpropagation is often viewed as a method for adapting artificial neural networks to classify patterns. Based on parts of the book by Rumelhart and colleagues, many authors equate backpropagation with the generalized delta rule applied to fully-connected feedforward networks. This paper will summarize a more general formulation of backpropagation, developed in 1974, which does more justice to the roots of the method in numerical analysis and statistics, and also does more justice to creative approaches expressed by neural modelers in the past year or two. It will discuss applications of backpropagation to forecasting over time (where errors have been halved by using methods other than least squares), to optimization, to sensitivity analysis, and to brain research.

This paper will go on to derive a generalization of backpropagation to recurrent systems (which input their own output), such as hybrids of perceptron-style networks and Grossberg/Hopfield networks. Unlike the proposal of Rumelhart, Hinton, and Williams, this generalization does not require the storage of intermediate iterations to deal with continuous recurrence. This generalization was applied in 1981 to a model of natural gas markets, where it located sources of forecast uncertainty related to the use of least squares to estimate the model parameters in the first place.

4.
Parkinson’s disease (PD) is a movement disorder that affects the patient’s nervous system, and health-care applications mostly use wearable sensors to collect the relevant data. Since these sensors generate time-stamped data, analyzing gait disturbances in PD becomes a challenging task. The objective of this paper is to develop an effective clinical decision-making system (CDMS) that aids the physician in diagnosing the severity of gait disturbances in PD-affected patients. This paper presents a Q-backpropagated time delay neural network (Q-BTDNN) classifier that builds a temporal classification model, which performs the tasks of classification and prediction in the CDMS. The proposed Q-learning induced backpropagation (Q-BP) training algorithm trains the Q-BTDNN by generating a reinforced error signal, and the network’s weights are adjusted by backpropagating this error signal. For experimentation, the proposed work uses a PD gait database containing gait measures collected through wearable sensors in three different PD research studies. The experimental results demonstrate the efficiency of Q-BP, with improved classification accuracies of 91.49%, 92.19% and 90.91% on the three datasets respectively, compared with other neural network training algorithms.
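As background for the time-delay architecture mentioned above, the following minimal sketch shows how a time-stamped sensor signal can be cut into fixed-length delay windows that serve as the network's inputs; the signal, window length, and step size are illustrative placeholders, and the Q-BP training procedure itself is not reproduced here.

    import numpy as np

    # Hypothetical single-channel, time-stamped gait signal from a wearable sensor.
    rng = np.random.default_rng(1)
    signal = rng.normal(size=500)

    def delay_windows(x, width=20, step=5):
        """Slice a 1-D time series into overlapping fixed-length delay windows,
        the input representation used by time-delay neural networks."""
        starts = range(0, len(x) - width + 1, step)
        return np.stack([x[s:s + width] for s in starts])

    X = delay_windows(signal)   # shape (num_windows, 20); each row feeds the classifier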
5.
This paper considers a class of online gradient learning methods for backpropagation (BP) neural networks with a single hidden layer. We assume that in each training cycle, each sample in the training set is supplied in a stochastic order to the network exactly once. It is interesting that these stochastic learning methods can be shown to be deterministically convergent. This paper presents some weak and strong convergence results for the learning methods, indicating that the gradient of the error function goes to zero and the weight sequence goes to a fixed point, respectively. The conditions on the activation function and the learning rate to guarantee the convergence are relaxed compared with the existing results. Our convergence results are valid for not only S-S type neural networks (both the output and hidden neurons are Sigmoid functions), but also for P-P, P-S and S-P type neural networks, where S and P represent Sigmoid and polynomial functions, respectively.
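A minimal sketch of the sampling scheme just described: every cycle presents each training sample exactly once, in a freshly randomized order, and the weights are updated immediately after each sample. The single sigmoid unit and toy data are placeholders, not the S-S or P-P networks analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = (X.sum(axis=1) > 0).astype(float)
    w, b, lr = np.zeros(3), 0.0, 0.1

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    for cycle in range(50):
        order = rng.permutation(len(X))        # stochastic order, each sample once
        for i in order:
            p = sigmoid(X[i] @ w + b)
            err = p - y[i]                     # error gradient w.r.t. the pre-activation
            w -= lr * err * X[i]               # immediate (online) weight update
            b -= lr * err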
6.
Automatic target recognition (ATR) is a domain in which neural network technology has been applied with limited success. The domain is characterized by large training sets with dissimilar target images carrying conflicting information. This paper presents a novel method for quantifying the degree of non-cooperation that exists among the target members of the training set. Both the network architecture and the training algorithm are considered in the computation of the non-cooperation measures. Based on these measures, the self partitioning neural network (SPNN) approach partitions the target vectors into an appropriate number of groups and trains one subnetwork to recognize the targets in each group. A fusion network combines the outputs of the subnetworks to produce the final response. This method automatically determines the number of subnetworks needed without excessive computation. The subnetworks are simple, with only one hidden layer and one unit in the output layer, and they are topologically identical to one another. The simulation results indicate that the method is robust and capable of self-organization to overcome the ill effects of the non-cooperating targets in the training set. The self partitioning approach improves the classification accuracy and significantly reduces the training time of neural networks. It is also shown that a trained self partitioning neural network is capable of learning new training vectors without retraining on the combined training set (i.e., the training set consisting of the previous and newly acquired training vectors).
7.
In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework also enables the systematic discovery of new learning rules and the exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input–output functions, even when targets are available for the top layer. Learning complex input–output functions requires local deep learning, where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far.
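For concreteness, a plain Hebbian update is one of the simplest low-degree polynomial local rules referred to above: each weight change uses only the locally available pre- and post-synaptic activities. The sketch below is illustrative only (the sizes, rate, and tanh unit are assumptions), and in practice a decay or normalization term such as Oja's rule is needed to keep the weights bounded.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, size=(5, 8))        # 8 pre-synaptic inputs -> 5 units
    eta = 0.01

    def hebbian_step(W, x):
        y = np.tanh(W @ x)                     # post-synaptic activities
        return W + eta * np.outer(y, x)        # local rule: dW[i, j] = eta * y[i] * x[j]

    for _ in range(100):
        W = hebbian_step(W, rng.normal(size=8))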
8.
Stability analysis of a three-term backpropagation algorithm   (Total citations: 1; self-citations: 0; citations by others: 1)
Efficient learning by the backpropagation (BP) algorithm is required for many practical applications. The BP algorithm calculates the weight changes of artificial neural networks, and a common approach is to use a two-term algorithm consisting of a learning rate (LR) and a momentum factor (MF). The major drawbacks of the two-term BP learning algorithm are the problems of local minima and slow convergence, which limit its scope for real-time applications. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm. However, the PF term can also impair the convergence of the BP algorithm, so criteria for evaluating convergence are required to facilitate the application of the three-term BP algorithm. This paper analyzes the convergence of the new three-term backpropagation algorithm. If the learning parameters of the three-term BP algorithm satisfy the conditions given in this paper, then it is guaranteed that the system is stable and will converge to a local minimum. It is proved that if at least one of the eigenvalues of the matrix F (composed of the Hessian of the cost function and the Jacobian of the error vector at each iteration) is negative, then the system becomes unstable. The paper also shows that all the local minima of the three-term BP cost function are stable. Relationships between the learning parameters are established such that the stability conditions are met.
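Spelled out, the two-term update combines a learning-rate step on the gradient with a momentum term on the previous weight change, and the three-term version adds the proportional-factor term. The form of the PF term in the sketch below, a term proportional to the current output error in the spirit of a PID controller, is an assumption for illustration rather than the paper's exact formulation, and the parameter values are arbitrary.

    import numpy as np

    def three_term_step(w, dw_prev, grad, output_error, alpha=0.1, beta=0.5, gamma=0.01):
        """One weight update of a three-term BP scheme:
        dw = -alpha * grad(E)          (learning rate, LR)
             + beta  * dw_prev         (momentum factor, MF)
             + gamma * output_error    (proportional factor, PF -- assumed form)."""
        dw = -alpha * grad + beta * dw_prev + gamma * output_error
        return w + dw, dw

    # Toy usage: a single weight vector, a fabricated gradient, and a scalar error.
    w, dw = np.zeros(4), np.zeros(4)
    w, dw = three_term_step(w, dw, grad=np.array([0.2, -0.1, 0.0, 0.4]), output_error=0.3)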
9.
Gradient descent training of neural networks can be done in either a batch or an on-line manner. A widely held myth in the neural network community is that batch training is as fast as or faster than, and/or more ‘correct’ than, on-line training because it supposedly uses a better approximation of the true gradient for its weight updates. This paper explains why batch training is almost always slower than on-line training, often by orders of magnitude, especially on large training sets. The main reason is that on-line training can follow curves in the error surface throughout each epoch, which allows it to safely use a larger learning rate and thus converge in fewer passes through the training data. Empirical results on a large (20,000-instance) speech recognition task and on 26 other learning tasks demonstrate that convergence can be reached significantly faster using on-line training than batch training, with no apparent difference in accuracy.
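A minimal sketch of the two schedules on the same toy least-squares problem: the batch version accumulates one gradient over the whole set per epoch, while the on-line version updates after every example in a shuffled order. The data, model, and learning rates are placeholders chosen only so the snippet runs.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)

    def batch_epoch(w, lr=0.1):
        grad = X.T @ (X @ w - y) / len(y)           # one gradient for the whole set
        return w - lr * grad                        # a single update per epoch

    def online_epoch(w, lr=0.01):
        for i in rng.permutation(len(y)):
            w = w - lr * (X[i] @ w - y[i]) * X[i]   # update after every example
        return w

    wb = wo = np.zeros(5)
    for _ in range(20):
        wb, wo = batch_epoch(wb), online_epoch(wo)
    print(np.round(wb, 2), np.round(wo, 2))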
10.
Neural networks (NNs) in general, and the multi-layer perceptron (MLP) in particular, are among the most efficient classifiers in machine learning (ML). Inspired by the stimulus-sampling paradigm, it is plausible to assume that associating stimuli with the neurons in the output layer of an MLP can increase its performance. The stimulus-sampling process is assumed memoryless (Markovian), in the sense that the choice of a particular stimulus at a certain step, conditioned on the whole prior evolution of the learning process, depends only on the network’s answer at the previous step. This paper proposes a novel learning technique that enhances the performance of the standard backpropagation algorithm with a stimulus-sampling procedure applied to the output neurons. The network uses the observable behavior, which varies throughout the training process, by stimulating correct answers through corresponding rewards/penalties assigned to the output neurons. The proposed model has been applied to computer-aided medical diagnosis using five real-life databases: breast cancer, colon cancer, diabetes, thyroid, and fetal heartbeat. A statistical comparison with well-established ML algorithms confirmed its efficiency and robustness.