Elman (1990)'s simple recurrent net, a
widely-used Moore NSM, is described by a
next-state function identical to that of Robinson and Fallside (1991), eq. (3.10),
and an output function
whose $i$-th component ($i = 1, \ldots, n_Y$) is given by
\[
y_i[t] = g\!\left( \sum_{j=1}^{n_X} W^{yx}_{ij}\, x_j[t] + W^{y}_{i} \right).
\]
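For concreteness, here is a minimal Python/NumPy sketch of one time step of a simple recurrent net viewed as a Moore NSM: the next-state function combines the previous state and the current input, and the output function reads the new state alone. The weight names (W_xx, W_xu, b_x for the next-state function; W_yx, b_y for the output function), the logistic activation, and the layer sizes are illustrative assumptions, not a transcription of the paper's equations.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation g(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def elman_step(x_prev, u, W_xx, W_xu, b_x, W_yx, b_y):
    """One time step of an Elman-style simple recurrent net (Moore NSM).

    x_prev : previous state vector, shape (n_X,)
    u      : current input vector,  shape (n_U,)
    Next state: x[t] = g(W_xx x[t-1] + W_xu u[t] + b_x)
    Output:     y[t] = g(W_yx x[t]   + b_y)   -- a function of the state only
    """
    x = sigmoid(W_xx @ x_prev + W_xu @ u + b_x)   # next-state function
    y = sigmoid(W_yx @ x + b_y)                   # output function (Moore)
    return x, y

# Example usage with random weights (hypothetical sizes)
n_X, n_U, n_Y = 4, 3, 2
rng = np.random.default_rng(0)
W_xx, W_xu = rng.normal(size=(n_X, n_X)), rng.normal(size=(n_X, n_U))
W_yx = rng.normal(size=(n_Y, n_X))
b_x, b_y = rng.normal(size=n_X), rng.normal(size=n_Y)
x, y = elman_step(np.zeros(n_X), rng.normal(size=n_U), W_xx, W_xu, b_x, W_yx, b_y)
```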
The second-order
counterpart of Elman (1990)'s simple
recurrent net has been used
by Blair and Pollack (1997) and
Carrasco et al. (1996). In that case, the $i$-th
coordinate of the next-state function is
identical to eq. (3.8), and the output
function is identical to
eq. (3.15).
The second-order DTRNN used by Giles et al. (1992),
Watrous and Kuhn (1992), Pollack (1991),
Forcada and Carrasco (1995), and
Zeng et al. (1993) may be formulated as a Moore NSM in which the output vector is simply a projection
of the state vector, $y_i[t] = x_i[t]$
for $i = 1, \ldots, n_Y$
with $n_Y \le n_X$, as in the case of Williams and Zipser (1989c) and
Williams and Zipser (1989a). The classification of these second-order
networks as Mealy or Moore NSM depends on the actual configuration of feedback
weights used by the
authors. For example, Giles et al. (1992) use one
of the units of the state vector
as an
output unit; this would be a neural Moore
machine in which $n_Y = 1$ (this unit
is part of the state vector because its value is also fed back to form
the state for the next cycle).
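As a sketch of the Moore formulation of these second-order networks, the code below assumes the usual bilinear next-state function (a sigmoid of a weighted sum of products of previous-state and input components) and takes the output to be the first $n_Y$ components of the state; with n_Y = 1, one state unit doubles as the output unit, as in Giles et al. (1992). The tensor name W, the bias b, and the activation are illustrative assumptions, not the notation of eqs. (3.8) and (3.15).

```python
import numpy as np

def second_order_step(x_prev, u, W, b, n_Y=1):
    """One time step of a second-order DTRNN viewed as a Moore NSM.

    Next state: x_i[t] = g( sum_{j,k} W[i, j, k] * x_j[t-1] * u_k[t] + b_i )
    Output:     y[t]   = first n_Y components of x[t]  (projection of the state)
    """
    g = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = g(np.einsum('ijk,j,k->i', W, x_prev, u) + b)  # bilinear next-state function
    y = x[:n_Y]                                       # output unit(s) also fed back as state
    return x, y

# Example usage: one state unit doubles as the output unit (n_Y = 1)
n_X, n_U = 3, 2
rng = np.random.default_rng(1)
W = rng.normal(size=(n_X, n_X, n_U))
b = rng.normal(size=n_X)
x, y = second_order_step(np.full(n_X, 0.5), np.array([1.0, 0.0]), W, b, n_Y=1)
```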