Similar Articles
20 similar articles found
1.
Williams and Zipser (1989) proposed two analogue learning algorithms for fully recurrent networks. The first is an exact gradient-following algorithm for problems in which the data are organized into epochs. The second, the Real-Time Recurrent Learning (RTRL) algorithm, operates on a continuous temporal stream of inputs and outputs, without time marks or epochs. In this paper we describe a new implementation of the RTRL algorithm. The improved implementation increases the performance of the learning algorithm during the training phase by exploiting a priori knowledge about the temporal requirements of the problem. The resulting reduction in computational expense makes the algorithm usable for more complex problems. Simulations of a process-control task demonstrate the properties of the algorithm.
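For readers unfamiliar with RTRL, the sketch below shows the core of the Williams–Zipser update in plain NumPy: a sensitivity tensor holds the derivative of every unit's output with respect to every weight and is advanced alongside the network state at each time step. This is a minimal, unoptimized illustration of standard RTRL, not the improved implementation described in the abstract, and the function and variable names are our own.

```python
import numpy as np

def rtrl_step(W, p, x, y, target, lr=0.1):
    """One RTRL update for a fully recurrent tanh network.

    W      : (n, n + m) weight matrix (n units, m external inputs)
    p      : (n, n, n + m) sensitivities p[k, i, j] = d y_k / d W[i, j]
    x      : (m,) external input at this time step
    y      : (n,) unit outputs from the previous time step
    target : (n,) desired outputs (np.nan for unconstrained units)
    """
    n, total = W.shape
    z = np.concatenate([y, x])            # previous outputs plus current input
    s = W @ z                             # net input to each unit
    y_new = np.tanh(s)
    fprime = 1.0 - y_new ** 2             # tanh derivative

    # Sensitivity recursion; only the first n columns of z depend on the weights.
    p_new = np.zeros_like(p)
    for i in range(n):
        for j in range(total):
            recur = W[:, :n] @ p[:, i, j]   # propagate old sensitivities
            recur[i] += z[j]                # direct effect of W[i, j] on unit i
            p_new[:, i, j] = fprime * recur

    # Inject error only for units that have a target at this step.
    e = np.where(np.isnan(target), 0.0, target - y_new)
    W = W + lr * np.einsum('k,kij->ij', e, p_new)
    return W, p_new, y_new

# Example: a 3-unit network driven by 2 external inputs.
n, m = 3, 2
W, p, y = 0.05 * np.random.randn(n, n + m), np.zeros((n, n, n + m)), np.zeros(n)
W, p, y = rtrl_step(W, p, x=np.array([1.0, 0.0]), y=y,
                    target=np.array([np.nan, np.nan, 0.5]))
```

Updating the full sensitivity tensor at every time step is what makes RTRL expensive (roughly fourth-order in the number of units), which is why the reduction of computational cost discussed in the abstract matters.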

2.
Clinical Neurophysiology, 2021, 132(7): 1433–1443
The electroencephalogram (EEG) is a fundamental tool in the diagnosis and classification of epilepsy. In particular, Interictal Epileptiform Discharges (IEDs) reflect an increased likelihood of seizures and are routinely assessed by visual analysis of the EEG. Visual assessment is, however, time consuming and prone to subjectivity, leading to a high misdiagnosis rate and motivating the development of automated approaches. Research towards automating IED detection started 45 years ago. Approaches range from mimetic methods to deep learning techniques. We review different approaches to IED detection, discussing their performance and limitations. Traditional machine learning and deep learning methods have yielded the best results so far and their application in the field is still growing. Standardization of datasets and outcome measures is necessary to compare models more objectively and decide which should be implemented in a clinical setting.

3.
A new artificial neural model for unsupervised learning is proposed. Consider first a two-class pattern recognition problem. We use one neuron (possibly higher order) with a sigmoid output in the range from −1 to 1. Positive output means class 1 and negative output means class 2. The main idea of the method is to iterate the weights in such a way as to move the decision boundary to a place of low pattern density. With the length of the weight vector constrained, if the neuron output is mostly near 1 or −1, the patterns are mostly far away from the decision boundary and we probably have a good classifier. We define a function that measures how close the output is to 1 or −1, and training maximizes it by steepest ascent on the weights. The method is extended to the multiclass case by applying the previous procedure hierarchically (i.e., by partitioning the patterns into two groups, then considering each group separately and partitioning it further, and so on, until we end up with the final classifier).
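A minimal sketch of the single-neuron case, under explicit assumptions: a tanh unit, the mean of the squared output as the function measuring closeness to ±1 (the paper defines its own criterion), and renormalization of the weight vector after every steepest-ascent step. Names are illustrative.

```python
import numpy as np

def train_boundary_neuron(X, epochs=200, lr=0.05, seed=0):
    """Unsupervised two-class split: push the tanh output towards -1/+1 so the
    decision boundary settles in a region of low pattern density.
    X: (num_patterns, num_features). Returns unit-length weights and a bias."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)                  # constrain the weight-vector length
    b = 0.0
    for _ in range(epochs):
        y = np.tanh(X @ w + b)              # outputs in (-1, 1)
        # Criterion: mean(y**2), large when patterns lie far from the boundary.
        g = 2 * y * (1 - y ** 2)            # d(y^2)/d(net input)
        w += lr * (X.T @ g) / len(X)        # steepest ascent on the criterion
        b += lr * g.mean()
        w /= np.linalg.norm(w)              # re-impose the length constraint
    return w, b

# Class label = sign of the output: positive -> class 1, negative -> class 2.
X = np.vstack([np.random.randn(100, 2) + 3, np.random.randn(100, 2) - 3])
w, b = train_boundary_neuron(X)
labels = np.sign(np.tanh(X @ w + b))
```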

4.

Objective

The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in the input (statistical learning).

Methods

Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained children and age-matched untrained children aged 9–11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks.

Results

Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children’s music skills were associated with performance on auditory and visual behavioural statistical learning tasks.

Conclusion

Our data suggest that individual differences in musical skills are associated with children’s ability to detect regularities.

Significance

The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments.

5.
The generalized classifier neural network (GCNN) has been introduced as an efficient classifier. Unless the initial smoothing parameter is close to its optimal value, however, the GCNN suffers from convergence problems and requires a long time to converge. In this work, a logarithmic learning approach is proposed to overcome this problem. The proposed method uses a logarithmic cost function instead of the squared error; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning GCNN is compared with that of the standard one. Because of the operating range of the radial basis function used in the GCNN, the proposed logarithmic cost and its derivative remain continuous, which allows the fast convergence of the logarithmic cost to be fully exploited. Owing to this fast convergence, training time is reduced by up to 99.2%, and classification performance may also be improved by up to 60%. According to the test results, the proposed method solves the training-time problem of the GCNN and may also improve its classification accuracy.
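The abstract does not give the exact cost function, so the sketch below uses the familiar cross-entropy form as a stand-in logarithmic cost; it only illustrates why such costs converge faster than squared error for a sigmoid-style output: the squared-error gradient is damped by the output's own slope and nearly vanishes when the unit saturates, while the logarithmic gradient stays proportional to the error.

```python
import numpy as np

def squared_error(target, output):
    return 0.5 * (target - output) ** 2

def log_cost(target, output, eps=1e-12):
    # Cross-entropy-style logarithmic cost for outputs in (0, 1); a stand-in,
    # not the exact cost used in the paper.
    output = np.clip(output, eps, 1 - eps)
    return -(target * np.log(output) + (1 - target) * np.log(1 - output))

# Gradients w.r.t. the pre-activation of a sigmoid output unit:
#   squared error : (output - target) * output * (1 - output)  -> vanishes at saturation
#   logarithmic   : (output - target)                           -> stays proportional to the error
outputs, target = np.array([0.01, 0.5, 0.99]), 1.0
print("squared-error gradients:", (outputs - target) * outputs * (1 - outputs))
print("log-cost gradients     :", outputs - target)
```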

6.
Glioma is the most common primary intraparenchymal tumor of the brain, and the 5-year survival rate of high-grade glioma is poor. Magnetic resonance imaging (MRI) is essential for detecting, characterizing and monitoring brain tumors, but definitive diagnosis still relies on surgical pathology. Machine learning has been applied to the analysis of MRI data in glioma research and has the potential to change clinical practice and improve patient outcomes. This systematic review synthesizes and analyzes the current state of machine learning applications to glioma MRI data and explores the use of machine learning for systematic review automation. Various datapoints were extracted from the 153 studies that met inclusion criteria and were analyzed. Natural language processing (NLP) analysis involved keyword extraction, topic modeling and document classification. Machine learning has been applied to tumor grading and diagnosis, tumor segmentation, non-invasive genomic biomarker identification, detection of progression and patient survival prediction. Model performance was generally strong (AUC = 0.87 ± 0.09; sensitivity = 0.87 ± 0.10; specificity = 0.86 ± 0.10; precision = 0.88 ± 0.11). Convolutional neural network, support vector machine and random forest algorithms were the top performers. Deep learning document classifiers yielded acceptable performance (mean 5-fold cross-validation AUC = 0.71). Machine learning tools and data resources were synthesized and summarized to facilitate future research. Machine learning has been widely applied to the processing of MRI data in glioma research and has demonstrated substantial utility. NLP and transfer learning resources enabled the successful development of a replicable method for automating the systematic review article screening process, which has potential for shortening the time from discovery to clinical application in medicine.
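As a concrete (and deliberately simple) illustration of the review-automation idea, the sketch below screens candidate abstracts with a TF-IDF bag-of-words representation and logistic regression, reporting 5-fold cross-validated AUC. This is a shallow baseline, not the deep document classifier evaluated in the study, and the abstracts and labels are toy placeholders for real annotated screening data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy placeholder corpus: 1 = relevant to "machine learning on glioma MRI", 0 = not.
abstracts = [
    "deep learning segmentation of glioma on MRI",
    "radiomics features predict IDH mutation status in glioma MRI",
    "machine learning survival prediction for glioblastoma patients",
    "convolutional neural network grading of brain tumours on MRI",
    "support vector machine classification of glioma imaging features",
    "random forest detection of tumour progression on serial MRI",
    "dietary habits and cardiovascular risk in older adults",
    "a survey of bird migration patterns in northern europe",
    "effects of exercise on knee osteoarthritis pain",
    "numerical weather forecasting model comparison",
    "consumer sentiment analysis of online product reviews",
    "optimization of bridge maintenance scheduling",
]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

screener = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(screener, abstracts, labels, cv=5, scoring="roc_auc")
print(f"5-fold cross-validated AUC: {scores.mean():.2f}")
```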

7.
8.
Previous research has shown that it is possible to predict which speaker is attended in a multi-speaker scene by analyzing a listener's electroencephalography (EEG) activity. In this study, existing linear models that learn the mapping from neural activity to an attended speech envelope are replaced by a non-linear neural network (NN). The proposed architecture takes into account the temporal context of the estimated envelope and is evaluated using EEG data obtained from 20 normal-hearing listeners who focused on one speaker in a two-speaker setting. The network is optimized with respect to the frequency range and the temporal segmentation of the EEG input, as well as the cost function used to estimate the model parameters. To identify the salient cues involved in auditory attention, a relevance algorithm is applied that highlights the electrode signals most important for attention decoding. In contrast to linear approaches, the NN profits from a wider EEG frequency range (1–32 Hz) and achieves a performance seven times higher than the linear baseline. Relevant EEG activations at physiologically plausible locations were found approximately 170 ms after the speech stimulus; this was not observed when the model was trained on the unattended speaker. Our findings therefore indicate that non-linear NNs can provide insight into physiological processes by analyzing EEG activity.
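A minimal sketch of the non-linear envelope-reconstruction idea: a small feed-forward network maps a short window of multi-channel EEG to one sample of the attended speech envelope, and attention is decoded by correlating the reconstruction with each speaker's envelope. The layer sizes, window length and channel count are illustrative assumptions, not the architecture from the study, and the training loop (e.g. minimizing a mean-squared or correlation-based cost) is omitted.

```python
import torch
import torch.nn as nn

n_channels, context = 64, 32            # EEG channels and temporal context (samples), assumed

# One envelope sample is predicted from a (channels x context) window of EEG.
model = nn.Sequential(
    nn.Flatten(),                        # (batch, channels, context) -> (batch, channels * context)
    nn.Linear(n_channels * context, 128),
    nn.Tanh(),
    nn.Linear(128, 1),
)

def decode_attention(eeg_windows, env_a, env_b):
    """Reconstruct the envelope from EEG and report which speaker it matches best."""
    with torch.no_grad():
        rec = model(eeg_windows).squeeze(-1)
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a * b).sum() / (a.norm() * b.norm() + 1e-8)
    return "speaker A" if corr(rec, env_a) > corr(rec, env_b) else "speaker B"

# Example with random placeholder tensors (500 windows).
windows = torch.randn(500, n_channels, context)
print(decode_attention(windows, torch.randn(500), torch.randn(500)))
```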

9.
10.

Objective

Visual assessment of the EEG still outperforms current computer algorithms in detecting epileptiform discharges. Deep learning is a promising novel approach, being able to learn from large datasets. Here, we show pilot results of detecting epileptiform discharges using deep neural networks.

Methods

We selected 50 EEGs from focal epilepsy patients. All epileptiform discharges (n = 1815) were annotated by an experienced neurophysiologist and extracted as 2-s epochs. In addition, 50 normal EEGs were divided into 2-s epochs. All epochs were divided into a training set (n = 41,381) and a test set (n = 8775). We implemented several combinations of convolutional and recurrent neural networks, providing the probability for the presence of epileptiform discharges (a minimal sketch of such a network follows this abstract). The network with the largest area under the ROC curve (AUC) in the test set was validated on seven independent EEGs with focal epileptiform discharges and twelve normal EEGs.

Results

The final network had an AUC of 0.94 for the test set. Validation allowed detection of epileptiform discharges with 47.4% sensitivity and 98.0% specificity (FPR: 0.6/min). For the normal EEGs in the validation set, the specificity was 99.9% (FPR: 0.03/min).

Conclusions

Deep neural networks can accurately detect epileptiform discharges from scalp EEG recordings.

Significance

Deep learning may result in a fundamental shift in clinical EEG analysis.
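A minimal sketch of the kind of convolutional-plus-recurrent detector the Methods describe: a 1-D CNN extracts local waveform features from a 2-s multi-channel epoch, a GRU summarizes them over time, and a sigmoid outputs the probability of an epileptiform discharge. The channel count, sampling rate, filter sizes and layer widths are illustrative assumptions, not the study's exact architecture.

```python
import torch
import torch.nn as nn

class IEDDetector(nn.Module):
    def __init__(self, n_channels=19, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, channels, samples)
        feats = self.conv(x)                     # (batch, 64, samples / 16)
        feats = feats.transpose(1, 2)            # (batch, time, features) for the GRU
        _, h = self.gru(feats)                   # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # probability of an epileptiform discharge

# Example: a batch of 2-s epochs from a 19-channel EEG sampled at 256 Hz (assumed).
epochs = torch.randn(8, 19, 512)
probs = IEDDetector()(epochs)                    # shape (8, 1)
```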

11.
Clinical Neurophysiology, 2021, 132(6): 1234–1240
Objective

Automating the detection of Interictal Epileptiform Discharges (IEDs) in electroencephalogram (EEG) recordings can reduce the time spent on visual analysis for the diagnosis of epilepsy. Deep learning has shown potential for this purpose, but the scarceness of expert-annotated data creates a bottleneck in the process.

Methods

We used EEGs from 50 patients with focal epilepsy, 49 patients with generalized epilepsy (IEDs were visually labeled by experts) and 67 controls. The data were filtered, downsampled and cut into two-second epochs. We increased the number of input samples containing IEDs through temporal shifting and the use of different montages. A VGG C convolutional neural network was trained to detect IEDs.

Results

Using the dataset with more samples, we reduced the false positive rate from 2.11 to 0.73 detections per minute at the intersection of sensitivity and specificity. Sensitivity increased from 63% to 96% at 99% specificity. The model became less sensitive to the position of the IED in the epoch and to the montage.

Conclusions

Temporal shifting and the use of different EEG montages improve the performance of deep neural networks in IED detection.

Significance

Dataset augmentation can reduce the need for expert annotation, facilitating the training of neural networks and potentially leading to a fundamental shift in EEG analysis.
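A minimal sketch of the two augmentation ideas described above: cut several epochs around each annotated discharge so it appears at different positions in the two-second window (temporal shifting), and re-derive the signal in another montage (here a generic bipolar re-referencing). The sampling rate and channel pairs are illustrative assumptions.

```python
import numpy as np

FS = 256                         # sampling rate in Hz (assumed)
EPOCH = 2 * FS                   # two-second epoch length in samples

def shifted_epochs(recording, ied_sample, shifts_s=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """Cut several epochs around one annotated IED, each placing the discharge
    at a different position in the window. recording: (channels, samples)."""
    epochs = []
    for shift in shifts_s:
        start = int(ied_sample - EPOCH // 2 + shift * FS)
        if 0 <= start and start + EPOCH <= recording.shape[1]:
            epochs.append(recording[:, start:start + EPOCH])
    return np.stack(epochs)

def bipolar_montage(epoch, pairs):
    """Re-reference a referential epoch to a bipolar montage.
    pairs: list of (i, j) channel-index tuples, e.g. a longitudinal chain."""
    return np.stack([epoch[i] - epoch[j] for i, j in pairs])

# Example: five shifted copies of one IED, then a three-derivation bipolar view.
eeg = np.random.randn(19, 60 * FS)                 # placeholder recording
augmented = shifted_epochs(eeg, ied_sample=30 * FS)
bipolar = bipolar_montage(augmented[0], pairs=[(0, 1), (1, 2), (2, 3)])
```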

12.
Post-stroke discharge planning may be aided by accurate early prognostication, and machine learning may be able to assist with such prognostication. The study’s primary aim was to evaluate the performance of machine learning models using admission data to predict the likely length of stay (LOS) for patients admitted with stroke. Secondary aims included the prediction of discharge modified Rankin Scale (mRS), in-hospital mortality, and discharge destination. In this study, a retrospective dataset was used to develop and test a variety of machine learning models. The patients included in the study were all stroke admissions (both ischaemic stroke and intracerebral haemorrhage) at a single tertiary hospital between December 2016 and September 2019. The machine learning models developed and tested (75%/25% train/test split) included logistic regression, random forests, decision trees and artificial neural networks. The study included 2840 patients. For LOS prediction, the highest area under the receiver operating characteristic curve (AUC) on the unseen test dataset was achieved by an artificial neural network at 0.67. Higher AUCs were achieved using logistic regression models in the prediction of discharge functional independence (mRS ≤2) (AUC 0.90) and in the prediction of in-hospital mortality (AUC 0.90). Logistic regression was also the best-performing model for predicting home vs non-home discharge destination (AUC 0.81). This study indicates that machine learning may aid in the prognostication of factors relevant to post-stroke discharge planning. Further prospective and external validation is required, as well as assessment of the impact of subsequent implementation.
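A minimal sketch of the modelling workflow for one of the binary endpoints (discharge functional independence, mRS ≤ 2): a 75%/25% train/test split and a logistic-regression classifier evaluated by AUC, as in the study. The file name and feature columns are placeholders, not the study's actual variables.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

admissions = pd.read_csv("stroke_admissions.csv")                  # placeholder file
X = admissions[["age", "nihss_on_admission", "premorbid_mrs"]]     # assumed admission features
y = (admissions["discharge_mrs"] <= 2).astype(int)                 # functional independence

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```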

13.
Background

Individuals with obstructive sleep apnoea (OSA) experience a higher burden of atrial fibrillation (AF) than the general population, and many cases of AF remain undetected. We tested the feasibility of an artificial intelligence (AI) approach to opportunistic detection of AF from single-lead electrocardiograms (ECGs), which are routinely recorded during in-laboratory polysomnographic sleep studies.

Methods

Using transfer learning, an existing ECG AI model was applied to 1839 single-lead ECG traces recorded during in-laboratory sleep studies, without any training of the algorithm. Manual review of all traces was performed by two trained clinicians who were blinded to each other's review. Discrepancies between the two investigators were resolved by two cardiologists who were also unaware of each other's scoring. The diagnostic accuracy of the AI algorithm was calculated against the results of the manual ECG review, which was considered the gold standard.

Results

Manual review identified AF in 144 of the 1839 single-lead ECGs (7.8%). The AI detected all cases of manually confirmed AF (sensitivity = 100%, 95% CI: 97.5%–100.0%). The AI model misclassified many ECGs with artefacts as AF, resulting in a specificity of 76.0% (95% CI: 73.9%–78.0%) and an overall diagnostic accuracy of 77.9% (95% CI: 75.9%–97.8%).

Conclusion

Transfer learning AI, without additional training, can be successfully applied to disparate ECG signals, with excellent negative predictive values, and can exclude AF among patients undergoing evaluation for suspected OSA. Further signal-specific training is likely to improve the AI's specificity and decrease the need for manual verification.
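A minimal sketch of the evaluation step: compare the AI model's per-trace AF calls with the manually adjudicated labels and compute sensitivity, specificity and overall accuracy. The label arrays are tiny placeholders (1 = AF, 0 = no AF), not study data.

```python
import numpy as np

manual = np.array([1, 0, 0, 1, 0, 0, 0, 1])   # gold standard from manual review
ai     = np.array([1, 0, 1, 1, 0, 0, 1, 1])   # AI model output per ECG trace

tp = np.sum((ai == 1) & (manual == 1))
tn = np.sum((ai == 0) & (manual == 0))
fp = np.sum((ai == 1) & (manual == 0))
fn = np.sum((ai == 0) & (manual == 1))

print("sensitivity:", tp / (tp + fn))           # did the AI find all manually confirmed AF?
print("specificity:", tn / (tn + fp))           # artefacts misread as AF lower this value
print("accuracy   :", (tp + tn) / manual.size)
```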

14.
Automatic target recognition (ATR) is a domain in which neural network technology has been applied with limited success. The domain is characterized by large training sets with dissimilar target images carrying conflicting information. This paper presents a novel method for quantifying the degree of non-cooperation that exists among the target members of the training set. Both the network architecture and the training algorithm are considered in the computation of the non-cooperation measures. Based on these measures, the self-partitioning neural network (SPNN) approach partitions the target vectors into an appropriate number of groups and trains one subnetwork to recognize the targets in each group. A fusion network combines the outputs of the subnetworks to produce the final response. The method automatically determines the number of subnetworks needed without excessive computation. The subnetworks are simple, with only one hidden layer and a single output unit, and are topologically identical to one another. The simulation results indicate that the method is robust and capable of self-organization, overcoming the ill effects of non-cooperating targets in the training set. The self-partitioning approach improves classification accuracy and significantly reduces the training time of neural networks. It is also shown that a trained self-partitioning neural network is capable of learning new training vectors without retraining on the combined training set (i.e., the set consisting of the previous and newly acquired training vectors).

15.
The brain functions as a spatio-temporal information processing machine. Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has already been collected, including brain structural and functional data under different conditions, molecular and genetic data, in an attempt to make progress in medicine, health, cognitive science, engineering, education, neuro-economics, Brain–Computer Interfaces (BCI), and games. Yet there is no unifying computational framework to deal with all these types of data in order to better understand the data and the processes that generated it. Standard machine learning techniques have only partially succeeded, as they were not designed in the first instance to deal with such complex data. Therefore, there is a need for a new paradigm to deal with STBD. This paper reviews some methods of spiking neural networks (SNN) and argues that SNN are suitable for the creation of a unifying computational framework for learning and understanding of various STBD, such as EEG, fMRI, genetic, DTI, MEG, and NIRS, in their integration and interaction. One of the reasons is that SNN use the same computational principle that generates STBD, namely spiking information processing. This paper introduces a new SNN architecture, called NeuCube, for the creation of concrete models to map, learn and understand STBD. A NeuCube model is based on a 3D evolving SNN that is an approximate map of the structural and functional areas of interest of the brain related to the modelled STBD. Gene information is included optionally, in the form of gene regulatory networks (GRN), if this is relevant to the problem and the data. A NeuCube model learns from STBD and creates connections between clusters of neurons that manifest chains (trajectories) of neuronal activity. Once learning is applied, a NeuCube model can reproduce these trajectories even if only part of the input STBD or the stimuli data is presented, thus acting as an associative memory. The NeuCube framework can be used not only to discover functional pathways from data, but also as a predictive system of brain activities, to predict and possibly prevent certain events. Analysis of the internal structure of a model after training can reveal important spatio-temporal relationships ‘hidden’ in the data. NeuCube allows the integration, in one model, of various brain data, information and knowledge related to a single subject (personalized modeling) or to a population of subjects. The use of NeuCube for classification of STBD is illustrated in a case study problem of EEG data. NeuCube models achieve better STBD classification accuracy than standard machine learning techniques. They are robust to noise (so typical in brain data) and facilitate a better interpretation of the results and understanding of the STBD and the brain conditions under which the data were collected. Future directions for the use of SNN for STBD are discussed.
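NeuCube itself is a large framework (a 3D evolving SNN mapped to brain areas, with input encoding, learning and classification on top); the sketch below only illustrates its elementary building block, a leaky integrate-and-fire neuron driven by an input spike train. The time constant, threshold and input statistics are illustrative assumptions.

```python
import numpy as np

def lif_neuron(input_spikes, weight=0.5, tau=20.0, v_thresh=1.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.
    input_spikes: binary array (one entry per time step of dt milliseconds).
    Returns the output spike train and the membrane-potential trace."""
    v, out_spikes, v_trace = 0.0, [], []
    for s in input_spikes:
        v += dt * (-v / tau) + weight * s     # leak plus weighted input spike
        if v >= v_thresh:                     # threshold crossing emits a spike
            out_spikes.append(1)
            v = 0.0                           # reset after spiking
        else:
            out_spikes.append(0)
        v_trace.append(v)
    return np.array(out_spikes), np.array(v_trace)

# Example: one second of Poisson-like input at roughly 100 Hz.
rng = np.random.default_rng(0)
spikes, v = lif_neuron(rng.random(1000) < 0.1)
```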

16.
Machine learning techniques provide new methods to predict diagnosis and clinical outcomes at an individual level. We aimed to review the existing literature on the use of machine learning techniques in the assessment of subjects with bipolar disorder. We systematically searched PubMed, Embase and Web of Science for articles published in any language up to January 2017. We found 757 abstracts and included 51 studies in our review. Most of the included studies used multiple levels of biological data to distinguish bipolar disorder from other psychiatric disorders or from healthy controls. We also found studies that assessed the prediction of clinical outcomes, and studies using unsupervised machine learning to build more consistent clinical phenotypes of bipolar disorder. We concluded that, given the clinical heterogeneity of samples of patients with bipolar disorder, machine learning techniques may provide clinicians and researchers with important insights in fields such as diagnosis, personalized treatment and prognosis orientation.

17.
18.
19.
20.