Next: Stability and generalization
Up: Representing and learning
Previous: Representing and learning
  Contents
  Index
Open questions on grammatical inference with DTRNN
In summary, these are some of the open questions that remain when training DTRNNs to perform string-processing tasks:
- How does one choose the number of state units in the DTRNN?
This choice imposes an inductive bias: only automata representable with
that many state units can be learned, and perhaps not even all of those.
- Will the DTRNN exhibit a behavior that can easily be interpreted
in terms of automaton transitions or grammar rules?
There is no bias toward a symbolic internal representation:
the number of available states in the continuous state space is infinite.
- Will it learn at all? Even if a DTRNN can represent an FSM
compatible with the learning set, learning algorithms do not
guarantee that a solution will be found.
Learning a task is harder than programming that
task on a DTRNN.
- There is also the problem of multiple minima: most learning algorithms may
get trapped in undesirable local minima of the error function.
- If the task exhibits long-term
dependencies along the strings, it
may be very hard to learn (see section 3.5).
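The state-space point above can be made concrete with a minimal sketch of DTRNN dynamics. This is an illustrative toy, not a specific model from the text: the names `n_X`, `W_xx`, and `W_xu`, the sigmoid activation, and the one-hot input encoding are all assumptions. It shows that after each input symbol the state is a point in the continuum (0, 1)^n_X, so nothing forces the network toward a finite, symbolic set of automaton states.

```python
import numpy as np

def dtrnn_step(x, u, W_xx, W_xu, b):
    """One state transition: x[t] = g(W_xx x[t-1] + W_xu u[t] + b),
    with g the logistic sigmoid (an assumed, common choice)."""
    return 1.0 / (1.0 + np.exp(-(W_xx @ x + W_xu @ u + b)))

def run(string, n_X=3, seed=0):
    """Process a binary string symbol by symbol; return the final state.

    n_X is the number of state units -- the design choice discussed
    above.  Weights are random here (an untrained network), since the
    sketch only illustrates the state dynamics, not learning.
    """
    rng = np.random.default_rng(seed)
    W_xx = rng.normal(size=(n_X, n_X))   # state-to-state weights
    W_xu = rng.normal(size=(n_X, 2))     # input-to-state weights
    b = rng.normal(size=n_X)             # biases
    x = np.zeros(n_X)                    # initial state
    for s in string:
        u = np.eye(2)[int(s)]            # one-hot encoding of symbol
        x = dtrnn_step(x, u, W_xx, W_xu, b)
    return x

x = run("0110")
```

Every component of the final state lies strictly between 0 and 1, so the reachable states form an uncountable subset of (0, 1)^n_X rather than a finite set of automaton states; an interpretation in terms of transitions or rules only emerges if training happens to cluster these points.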