Horne and Hush (1996) (http://www.dlsi.ua.es/~mlf/nnafmc/papers/horne96bounds.pdf) improve on these bounds by using a different DTRNN architecture: instead of a single-layer first-order feedforward neural network for the next-state function, they use a lower-triangular network, that is, a network in which a unit receives inputs only from units with lower labels; lower-triangular networks include layered feedforward networks as a special case. If no restrictions are imposed on the weights, the lower and upper bounds coincide, and the resulting number of units improves on the bound obtained by Alon et al. (1991). When thresholds and weights are restricted to a small discrete set, the lower and upper bounds again coincide and are better still. When limits are imposed on the fan-in, the result by Alon et al. (1991) is recovered.
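To make the lower-triangular constraint concrete, here is a minimal sketch (not taken from the paper; the function and variable names are illustrative) of one forward pass of a lower-triangular threshold network: the weight matrix is strictly lower triangular, so a single sweep over the units in increasing label order suffices, since each unit only reads the already-computed outputs of lower-labeled units.

```python
def step(x, W, theta):
    """One sweep over units 0..n-1 with threshold (Heaviside) activations.

    x     : list of external inputs, one per unit
    W     : n x n weight matrix; only the strictly lower-triangular
            entries W[i][j] with j < i are ever used
    theta : list of unit thresholds
    """
    n = len(theta)
    s = [0] * n
    for i in range(n):  # units in increasing label order
        # unit i receives inputs only from units with lower labels
        a = x[i] + sum(W[i][j] * s[j] for j in range(i))
        s[i] = 1 if a >= theta[i] else 0  # threshold unit
    return s

# A layered feedforward net is a special case: units 0 and 1 form the
# first layer, and unit 2 (second layer) reads only from them.
W = [[0, 0, 0],
     [0, 0, 0],
     [1, 1, 0]]
out = step([1, 0, 0], W, [1, 1, 2])  # -> [1, 0, 0]
```

Because every connection goes from a lower to a higher label, the update needs no iteration to a fixed point; one ordered sweep computes all activations, which is what makes layered feedforward networks a special case of this architecture.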