Application of DTRNN to sequence processing
DTRNN have been applied to a wide variety of sequence-processing tasks;
here is a survey of some of them:
- Channel equalization:
  In digital communications, the channel over which a series of symbols is transmitted may distort the signal so severely that it cannot be decoded without compensating for, or reversing, these effects at the receiver side. This sequence transduction task, which converts the garbled received sequence into something as similar as possible to the transmitted signal, is usually known as equalization. A number of researchers have studied DTRNN for channel equalization purposes (Ortiz-Fuentes and Forcada, 1997; Bradley and Mars, 1995; Parisi et al., 1997; Cid-Sueiro and Figueiras-Vidal, 1993; Kechriotis et al., 1994; Cid-Sueiro et al., 1994); a toy sketch of the setup follows.
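  The sketch below is a minimal illustration in Python. The three-tap channel, the noise level, and the small Elman-style network are assumptions made for the example, not taken from the cited papers, and the weights are left untrained: fitting them is the subject of the learning algorithms discussed in the next section.

    import numpy as np

    rng = np.random.default_rng(1)
    symbols = rng.choice([-1.0, 1.0], size=100)              # transmitted BPSK symbols
    channel = np.array([1.0, 0.5, 0.25])                     # assumed three-tap dispersive channel
    received = np.convolve(symbols, channel)[:100] + 0.05 * rng.normal(size=100)

    n_hidden = 8                                             # illustrative network size
    W_in  = rng.normal(0.0, 0.5, (n_hidden,))
    W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
    W_out = rng.normal(0.0, 0.5, (n_hidden,))

    def equalize(received):
        """Forward pass of the (untrained) recurrent equalizer."""
        h, decisions = np.zeros(n_hidden), []
        for r in received:
            h = np.tanh(W_in * r + W_rec @ h)                # state carries channel memory
            decisions.append(np.sign(W_out @ h))             # hard symbol decision
        return np.array(decisions)

    print(equalize(received)[:10])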
- Speech recognition:
  Speech recognition may be formulated either as a sequence transduction task (for example, continuous speech recognition systems aim at obtaining a sequence of phonemes from a sequence of acoustic vectors derived from a digitized speech sample) or as a sequence recognition task (for example, isolated-word recognition, which assigns a word in a vocabulary to a sequence of acoustic vectors). Discrete-time recurrent neural networks have been used extensively in speech recognition tasks (Watrous et al., 1990; Chiu and Shanblatt, 1995; Robinson, 1994; Kuhn et al., 1990; Bridle, 1990; Chen et al., 1995; Robinson and Fallside, 1991); the sequence recognition formulation is sketched below.
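  A minimal sketch of the recognition formulation: the network consumes an utterance, one acoustic vector per time step, and a softmax over its final state assigns a vocabulary word. The layer sizes, random weights, and stand-in acoustic frames are illustrative assumptions, not a reconstruction of any cited system.

    import numpy as np

    rng = np.random.default_rng(2)
    n_feat, n_hidden, n_words = 13, 16, 10    # e.g. 13 cepstral coefficients per frame

    W_in  = rng.normal(0.0, 0.3, (n_hidden, n_feat))
    W_rec = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
    W_out = rng.normal(0.0, 0.3, (n_words, n_hidden))

    def recognize(frames):
        """Map one utterance (T x n_feat acoustic vectors) to word probabilities."""
        h = np.zeros(n_hidden)
        for x in frames:                      # run the DTRNN over the whole utterance
            h = np.tanh(W_in @ x + W_rec @ h)
        scores = W_out @ h                    # classify from the final state
        p = np.exp(scores - scores.max())     # softmax over the vocabulary
        return p / p.sum()

    utterance = rng.normal(size=(50, n_feat)) # 50 frames of stand-in acoustic vectors
    print(recognize(utterance).argmax())      # index of the recognized word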
- Speech coding:
  Speech coding aims at obtaining a compressed representation of a speech signal so that it may be sent at the lowest possible bit rate. A family of speech coders is based on the concept of predictive coding: if the speech signal at time $t$ may be predicted from the values of the signal at earlier times, then the transmitter may simply send the prediction error instead of the actual value of the signal, and the receiver may use an identical predictor to reconstruct the signal; in particular, a DTRNN may be used as a predictor. The transmission of the prediction error may be arranged so that it needs far fewer bits than sending the actual signal at the same reception quality (Sluijter et al., 1995). Haykin and Li (1995), Baltersee and Chambers (1997), and Wu et al. (1994) have used DTRNN predictors for speech coding; a predictive-coding roundtrip is sketched below.
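  The sketch below illustrates the roundtrip with a toy recurrent predictor; its form and the quantizer step are assumptions made for the example. The point is that transmitter and receiver run identical predictors on the reconstructed signal, so only the quantized prediction error needs to be sent, and the reconstruction error stays within half a quantizer step.

    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden = 4                                   # toy predictor size
    W_in  = rng.normal(0.0, 0.3, (n_hidden,))
    W_rec = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
    W_out = rng.normal(0.0, 0.3, (n_hidden,))

    def predict(x_prev, h):
        """One step of a toy recurrent predictor: estimate the next sample."""
        h = np.tanh(W_in * x_prev + W_rec @ h)
        return W_out @ h, h

    signal = np.sin(0.2 * np.arange(200))          # stand-in for a speech frame
    step = 0.05                                    # quantizer step for the residual

    # Transmitter: send only the quantized prediction error.
    h, x_rec, codes = np.zeros(n_hidden), 0.0, []
    for x in signal:
        x_hat, h = predict(x_rec, h)
        codes.append(round((x - x_hat) / step))    # the only thing transmitted
        x_rec = x_hat + codes[-1] * step           # track the receiver's reconstruction

    # Receiver: identical predictor, driven by the residuals alone.
    h, x_rec, recon = np.zeros(n_hidden), 0.0, []
    for c in codes:
        x_hat, h = predict(x_rec, h)
        x_rec = x_hat + c * step
        recon.append(x_rec)

    print(np.max(np.abs(signal - np.array(recon))))  # bounded by step / 2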
- System identification and control:
  DTRNN may be trained to act as models of time-dependent processes such as a continuous stirred-tank chemical reactor: this is usually referred to as system identification. Control goes a step further: a DTRNN may be trained to drive a real system (a "plant") so that its output follows a desired temporal pattern. Many researchers have used DTRNN in system identification (Nerrand et al., 1994; Werbos, 1990; Adali et al., 1997; Cheng et al., 1995; Dreider et al., 1995) and control (Chovan et al., 1996; Narendra and Parthasarathy, 1990; Chovan et al., 1994; Puskorius and Feldkamp, 1994; Zbikowski and Dzielinski, 1995; Wang and Wu, 1995; Li et al., 1995; Wang and Wu, 1996); the identification setup is sketched below.
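  A sketch of the identification setup: a toy nonlinear plant, whose dynamics are assumed here purely for illustration, and a parallel DTRNN model driven by the same input. The mean squared difference between the two outputs is the quantity a learning algorithm would minimize.

    import numpy as np

    rng = np.random.default_rng(3)

    def plant(u):
        """Toy nonlinear plant standing in for, e.g., a chemical reactor."""
        y, out = 0.0, []
        for ut in u:
            y = 0.6 * y + 0.3 * np.tanh(y) + 0.4 * ut   # assumed dynamics
            out.append(y)
        return np.array(out)

    n_hidden = 6
    W_in  = rng.normal(0.0, 0.4, (n_hidden,))
    W_rec = rng.normal(0.0, 0.4, (n_hidden, n_hidden))
    W_out = rng.normal(0.0, 0.4, (n_hidden,))

    def model(u):
        """Parallel DTRNN model: sees only the input u[t], never the plant output."""
        h, out = np.zeros(n_hidden), []
        for ut in u:
            h = np.tanh(W_in * ut + W_rec @ h)
            out.append(W_out @ h)
        return np.array(out)

    u = rng.uniform(-1.0, 1.0, 200)                     # excitation input
    print(np.mean((plant(u) - model(u)) ** 2))          # identification error to be minimized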
- Time series prediction:
  The prediction of the next item in a sequence may be interesting in many applications besides speech coding. For example, short-term electrical load forecasting is important to control electrical power generation and distribution. Time series prediction is a classical sequence prediction application of DTRNN; see, for example, Draye et al. (1995), Connor and Martin (1994), Aussem et al. (1995), and Dreider et al. (1995). A one-step-ahead prediction sketch follows.
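  In the sketch below, only the linear readout of the recurrent network is fitted, by least squares over the recorded hidden states; this is a deliberate shortcut (an echo-state-style simplification rather than full gradient training of the recurrent weights), and the noisy sinusoid is a stand-in for, say, a load curve.

    import numpy as np

    rng = np.random.default_rng(4)
    series = np.sin(0.1 * np.arange(500)) + 0.05 * rng.normal(size=500)  # stand-in load curve

    n_hidden = 30
    W_in  = rng.normal(0.0, 0.5, (n_hidden,))
    W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))   # small recurrent weights keep states stable

    # Drive the recurrent network over the series and record its states.
    states, h = [], np.zeros(n_hidden)
    for x in series[:-1]:                                # state at t summarizes x[0..t]
        h = np.tanh(W_in * x + W_rec @ h)
        states.append(h)
    H = np.array(states)                                 # shape (T - 1, n_hidden)

    # Fit only the readout by least squares: predict x[t + 1] from the state at t.
    W_out, *_ = np.linalg.lstsq(H, series[1:], rcond=None)
    pred = H @ W_out
    print(np.mean((pred - series[1:]) ** 2))             # one-step-ahead prediction error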
- Natural language processing:
  The processing of sentences written in any natural (human) language may itself be seen as a sequence processing task, and has also been approached with DTRNN. Examples include discovering grammatical and semantic classes of words when predicting the next word in a sentence (Elman, 1991), learning to assign thematic roles to parts of Chinese sentences (Chen et al., 1997), and training a DTRNN to judge the grammaticality of natural language sentences (Lawrence et al., 1996). A next-word prediction sketch follows.
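  A sketch of Elman-style next-word prediction over a toy vocabulary; the vocabulary, layer sizes, and random weights are illustrative. With training, the hidden states of such a network come to cluster by the grammatical and semantic classes mentioned above.

    import numpy as np

    rng = np.random.default_rng(5)
    vocab = ["the", "boy", "dog", "sees", "runs"]        # toy vocabulary
    V, n_hidden = len(vocab), 12

    W_in  = rng.normal(0.0, 0.3, (n_hidden, V))
    W_rec = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
    W_out = rng.normal(0.0, 0.3, (V, n_hidden))

    def next_word_probs(sentence):
        """Feed a word sequence; return P(next word) after the last word."""
        h = np.zeros(n_hidden)
        for w in sentence:
            x = np.eye(V)[vocab.index(w)]                # one-hot word input
            h = np.tanh(W_in @ x + W_rec @ h)
        scores = W_out @ h
        p = np.exp(scores - scores.max())                # softmax over the vocabulary
        return dict(zip(vocab, p / p.sum()))

    print(next_word_probs(["the", "boy"]))               # untrained: near-uniform probabilities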