Recurrent cascade correlation
Fahlman (1991) has recently proposed a learning algorithm that grows a DTRNN during training by adding hidden state units; each unit is trained separately, so that its output does not affect the operation of the DTRNN while it is being trained. Training starts with an architecture that has no hidden state units:
\begin{displaymath}
  y_i[t] = g\left( \sum_{j=1}^{n_U} W^{yu}_{ij}\, u_j[t] + W^y_i \right)
  \qquad (4.29)
\end{displaymath}
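As a minimal illustration (all identifiers such as W_yu and b_y are ours, and the logistic sigmoid is assumed for $g$; neither is specified here), the hidden-unit-free network is just a single layer from input to output:

\begin{verbatim}
import numpy as np

def g(z):
    # Activation function; the logistic sigmoid is assumed here.
    return 1.0 / (1.0 + np.exp(-z))

def output_no_hidden(W_yu, b_y, u_t):
    """Output of the initial RCC network (eq. 4.29): with no hidden
    state units yet, y[t] depends only on the current input u[t]."""
    return g(W_yu @ u_t + b_y)
\end{verbatim}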
In addition, a pool of candidate hidden units with local feedback, connected to the inputs, is trained to follow the residual error of the network:
\begin{displaymath}
  x_c[t] = g\left( W^{xx}_{cc}\, x_c[t-1] + \sum_{j=1}^{n_U} W^{xu}_{cj}\, u_j[t] + W^x_c \right)
  \qquad (4.30)
\end{displaymath}
with initial state $x_c[0]=0$. Training adds the best candidate unit to the network in a process called tenure. If there are already $n$ tenured hidden units, the state of candidate $c$ is
\begin{displaymath}
  x_c[t] = g\left( W^{xx}_{cc}\, x_c[t-1] + \sum_{j=1}^{n} W'^{xx}_{cj}\, x_j[t] + \sum_{j=1}^{n_U} W^{xu}_{cj}\, u_j[t] + W^x_c \right)
  \qquad (4.31)
\end{displaymath}
(the prime in $W'^{xx}_{cj}$ meaning that it weights state values at time $t$, not at time $t-1$ as usual).
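Continuing the same illustrative sketch (every identifier remains a hypothetical name), the state update of a candidate that sees $n$ tenured units could be written as follows; note that the tenured states are taken at the current time step:

\begin{verbatim}
import numpy as np

def g(z):
    # Logistic sigmoid, as in the first sketch (assumed).
    return 1.0 / (1.0 + np.exp(-z))

def candidate_state(w_self, w_prime, w_xu, b_x, x_c_prev, x_tenured_t, u_t):
    """State of one candidate unit (eq. 4.31): w_self is the local
    feedback weight (own state at t-1), w_prime holds the 'primed'
    weights on the n tenured states at time t, and w_xu, b_x weight
    the current input.  Only these incoming weights are trained while
    the unit is a candidate; its output is not yet connected."""
    return g(w_self * x_c_prev
             + np.dot(w_prime, x_tenured_t)
             + np.dot(w_xu, u_t)
             + b_x)
\end{verbatim}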
Tenure adds the best of the candidates to the network as a hidden unit labeled $n+1$ (where $n$ is the number of existing hidden units); its incoming weights are frozen, and connections to the output units are established and subsequently trained.
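The bookkeeping of tenure might be sketched as below (the dict-of-lists layout is purely an assumption for illustration): the winner's incoming weights are stored and never updated again, while its newly created output connections remain trainable.

\begin{verbatim}
import numpy as np

def tenure(net, cand, n_outputs):
    """Add the winning candidate as hidden unit n+1 (hypothetical
    data layout).  Incoming weights are frozen at tenure; only the
    new output-side weights are trained afterwards."""
    net["w_self"].append(cand["w_self"])    # local feedback, frozen
    net["w_prime"].append(cand["w_prime"])  # weights to units 1..n, frozen
    net["w_xu"].append(cand["w_xu"])        # input weights, frozen
    net["b_x"].append(cand["b_x"])          # bias, frozen
    # One new (trainable) output weight per output unit:
    net["W_yx"] = np.column_stack([net["W_yx"], np.zeros(n_outputs)])
\end{verbatim}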
Therefore, the hidden units form a lower-triangular structure in which each unit receives feedback only from itself (local feedback), and the output is computed from the input and each of the hidden units:
\begin{displaymath}
  y_i[t] = g\left( \sum_{j=1}^{n_X} W^{yx}_{ij}\, x_j[t] + \sum_{j=1}^{n_U} W^{yu}_{ij}\, u_j[t] + W^y_i \right)
  \qquad (4.32)
\end{displaymath}
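Putting the pieces together, a single time step of a tenured recurrent cascade correlation network can be sketched as follows (same hypothetical layout as the sketches above): hidden states are computed in tenure order, so each unit can use the time-$t$ states of the units tenured before it, and the output then reads the input and all hidden states.

\begin{verbatim}
import numpy as np

def g(z):
    # Logistic sigmoid, as in the earlier sketches (assumed).
    return 1.0 / (1.0 + np.exp(-z))

def rcc_step(net, x_prev, u_t):
    """One time step of a tenured RCC network (eqs. 4.31 and 4.32).
    Hidden unit j sees its own state at t-1 (local feedback), the
    time-t states of units tenured before it (the frozen 'primed'
    weights, forming the lower-triangular part), and the input u[t]."""
    n = len(net["w_self"])
    x_t = np.zeros(n)
    for j in range(n):                      # tenure order
        s = (net["w_self"][j] * x_prev[j]
             + np.dot(net["w_prime"][j], x_t[:j])
             + np.dot(net["w_xu"][j], u_t)
             + net["b_x"][j])
        x_t[j] = g(s)
    # Output computed from the input and every hidden unit:
    y_t = g(net["W_yx"] @ x_t + net["W_yu"] @ u_t + net["b_y"])
    return x_t, y_t
\end{verbatim}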
Recurrent cascade correlation networks have recently been shown to be
incapable of recognizing certain classes of regular languages (see
section 4.2.3).