Neocortical layer 4 as a pluripotent function linearizer
Authors: Oleg V. Favorov, Olcay Kursun
Affiliation: Department of Biomedical Engineering, University of North Carolina School of Medicine, Chapel Hill, NC 27599-7545, USA. favorov@bme.unc.edu
Abstract:
A highly effective kernel-based strategy used in machine learning is to transform the input space into a new "feature" space where nonlinear problems become linear and more readily solvable with efficient linear techniques. We propose that a similar "problem-linearization" strategy is used by the neocortical input layer 4 to reduce the difficulty of learning nonlinear relations between the afferent inputs to a cortical column and its to-be-learned upper-layer outputs. The key to this strategy is the presence of broadly tuned feed-forward inhibition in layer 4: it turns local layer 4 domains into functional analogs of radial basis function networks, which are known for their universal function approximation capabilities. Using a computational model of layer 4 with feed-forward inhibition and Hebbian afferent connections, self-organized on natural images to closely match the structural and functional properties of layer 4 of the cat primary visual cortex, we show that such layer-4-like networks have a strong intrinsic tendency to perform input transforms that automatically linearize a broad repertoire of potential nonlinear functions over the afferent inputs. This capacity for pluripotent function linearization, which is highly robust to variations in network parameters, suggests that layer 4 might contribute importantly to sensory information processing as a pluripotent function linearizer: it transforms a cortical column's afferent inputs in a way that enables neurons in the column's upper layers to learn and perform their complex functions using primarily linear operations.
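The "problem-linearization" idea in the abstract can be illustrated with a minimal toy sketch (not the paper's layer 4 model): the XOR labels below are not a linear function of the 2-D inputs, but after a Gaussian radial-basis-function (RBF) feature transform they become exactly solvable by a linear readout. The centers and width used here are illustrative choices, not taken from the paper.

```python
import numpy as np

# XOR: the canonical function that no linear readout of the raw inputs can fit.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def rbf_features(X, centers, width=0.5):
    """Gaussian RBF response of each input to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Illustrative choice: place one RBF center on each input pattern.
Phi = rbf_features(X, X)

# Least-squares linear fit in the raw input space (with a bias column)...
A = np.c_[X, np.ones(len(X))]
w_lin, *_ = np.linalg.lstsq(A, y, rcond=None)
err_lin = np.abs(A @ w_lin - y).max()

# ...versus the same linear fit in the RBF feature space.
w_rbf, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err_rbf = np.abs(Phi @ w_rbf - y).max()

print(err_lin)  # 0.5: best linear fit just predicts the mean
print(err_rbf)  # ~0: XOR is linear in the RBF feature space
```

The same pattern underlies kernel methods generally: a fixed nonlinear expansion (here the RBF layer, in the paper's proposal the inhibition-shaped layer 4) does the hard nonlinear work once, so that downstream learning can stay linear.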
This article is indexed in PubMed and other databases.