Sigmoidal Weight Constraint in a Recurrent Neural Network

Authors
  1. Barton, S.A.
Corporate Authors
Defence R&D Canada - Suffield, Ralston ALTA (CAN)
Abstract
When training a recurrently connected neural network (RNN), the magnitude of the connection strengths (weights) must be limited in some way. The weights are normally constrained either by renormalizing them after each learning step or by using a decay term proportional to the weight. We show that, over large numbers of training cycles, the RNN output can become unstable with these previously used weight-adjustment methods. We introduce a technique that constrains weight values to move on a smooth sigmoidal curve. Our RNNs then produce stable output without the need for renormalization or a parametric decay term, and performance improves in other ways as well: for example, an associative memory RNN is shown to converge much faster and to more accurate values than with previous methods.
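
The memorandum itself is not reproduced in this record, but the constraint described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' exact algorithm: it assumes the sigmoidal constraint amounts to reparameterizing each weight as a bounded sigmoidal function (here tanh) of an unconstrained variable and applying the learning updates to that variable, so the weights stay on a smooth bounded curve without renormalization or a decay term. All names (w_max, eta, n_units) are illustrative assumptions.

    # Sketch: weights constrained to a sigmoidal curve via reparameterization.
    # Each weight is w = w_max * tanh(u); learning updates act on the
    # unconstrained u, so |w| < w_max holds automatically at every step.
    import numpy as np

    rng = np.random.default_rng(0)

    n_units = 8     # size of the recurrent layer (illustrative)
    w_max = 1.0     # bound on |w|; weights live in (-w_max, w_max)
    eta = 0.1       # learning rate on the underlying parameters u

    u = rng.normal(scale=0.1, size=(n_units, n_units))

    def weights(u):
        """Map unconstrained u onto the bounded sigmoidal curve."""
        return w_max * np.tanh(u)

    # Hebbian-style update toward storing a pattern p (associative-memory
    # flavor). The chain-rule factor dw/du = w_max * (1 - tanh(u)**2)
    # smoothly shrinks steps as weights approach their bound, which is what
    # removes the need for renormalization or a decay term.
    p = rng.choice([-1.0, 1.0], size=n_units)
    for _ in range(1000):
        grad_w = np.outer(p, p) - weights(u)   # pull W toward outer(p, p)
        u += eta * grad_w * w_max * (1.0 - np.tanh(u) ** 2)

    W = weights(u)
    assert np.all(np.abs(W) < w_max)           # weights remain strictly bounded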

A French-language résumé is also available.

Report Number
DRDC-SUFFIELD-TM-2004-261 — Technical Memorandum
Date of publication
01 Dec 2004
Number of Pages
28
DSTKIM No
CA025554
CANDIS No
523355
Format(s):
CD ROM

