b The Institute for Advanced Studies, The Hebrew University of Jerusalem, Israel
Abstract:
Learning can be considered an optimization problem, solved either with a linear programming algorithm or with a relaxation algorithm. Both approaches rely on the same hypotheses. The linear programming algorithm was introduced by Krauth and Mézard. Here we study learning rules based on relaxation algorithms that do not impose a strict constraint on the norm of the efficacies but only boundary constraints. The algorithm takes the form of a Hebbian rule weighted by a factor that hinders the memorization of patterns that are already well imprinted. We show that the norm either increases steadily up to its limit if M < 2·N (the Venkatesh limit) or remains finite if M > 2·N. This property ensures that the system is never stuck in overcrowding conditions, making it reusable for long-term memory even if it was overcrowded at some moment in its history. It also accounts for palimpsest effects. Possible biological implementations are discussed. Finally, the algorithm generalizes easily to systems comprising hidden units.
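The abstract does not give the rule explicitly, but the class of algorithms it describes can be sketched as follows. The sketch below is a minimal illustration, not the paper's algorithm: it assumes a weighting factor max(0, κ − h), where h is the local stability of a pattern, so that well-imprinted patterns (h ≥ κ) contribute nothing further; the margin KAPPA, the per-efficacy bound BOUND (the "boundary constraint" replacing a norm constraint), and the step size EPS are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 100, 50        # neurons, patterns; M < 2N, below the Venkatesh limit
KAPPA = 1.0           # target stability margin (hypothetical value)
BOUND = 5.0           # boundary constraint on each efficacy: |J_ij| <= BOUND
EPS = 0.01            # learning-rate (hypothetical value)

xi = rng.choice([-1.0, 1.0], size=(M, N))   # random +/-1 patterns
J = np.zeros((N, N))                         # synaptic efficacies

for sweep in range(200):
    updated = False
    for mu in range(M):
        # local stabilities h_i = xi_i * (J @ xi)_i / sqrt(N)
        h = xi[mu] * (J @ xi[mu]) / np.sqrt(N)
        # weighting factor: zero for already well-imprinted components
        w = np.maximum(0.0, KAPPA - h)
        if np.any(w > 0):
            updated = True
            # Hebbian update weighted by w, then clipped to the boundary
            J += EPS * np.outer(w * xi[mu], xi[mu])
            np.clip(J, -BOUND, BOUND, out=J)
    np.fill_diagonal(J, 0.0)                 # no self-coupling
    if not updated:                          # all patterns stable: converged
        break

h_all = xi * (xi @ J.T) / np.sqrt(N)         # stabilities of all M patterns
print(f"min stability: {h_all.min():.2f}")
```

Because the update vanishes for stable patterns, learning effort concentrates on the poorly imprinted ones; with only boundary (not norm) constraints, old memories can be overwritten when the system is overloaded, which is one way the palimpsest behavior mentioned above can arise.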