Neural network topologies and bottle-neck features in speech recognition
Karafiát Martin, Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
neural networks, topologies, speech recognition, bottle-neck features
Different neural net topologies for estimating features for speech recognition are presented. We introduced a bottle-neck structure into the previously proposed Split Context architecture. This was done mainly to reduce the size of the resulting neural net, which serves as a feature estimator. When the bottle-neck outputs are also used as the final outputs of the neural network, instead of probability estimates, a reduction of word error rate is achieved as well.
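To illustrate the idea, the following is a minimal sketch of a bottle-neck network used as a feature estimator: a five-layer perceptron whose narrow middle layer replaces the class posteriors as the output features. The layer sizes (39-dimensional input, 500-unit hidden layers, 30-unit bottle-neck, 45 targets) and the random weights are hypothetical placeholders, not the trained networks from the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Randomly initialised affine layer -- a stand-in for a trained one.
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# Hypothetical dimensions: 39-dim input frames, 500-unit hidden layers,
# a 30-unit bottle-neck, and 45 phoneme-state targets.
W1, b1 = layer(39, 500)
W2, b2 = layer(500, 30)    # the bottle-neck layer
W3, b3 = layer(30, 500)
W4, b4 = layer(500, 45)

def bottleneck_features(x):
    # Forward pass stops at the bottle-neck; its activations are the features,
    # so the layers above it can be discarded after training.
    h1 = np.tanh(x @ W1 + b1)
    return np.tanh(h1 @ W2 + b2)

frames = rng.standard_normal((10, 39))   # 10 dummy input frames
feats = bottleneck_features(frames)
print(feats.shape)                       # compact 30-dim features per frame
```

Because only the layers up to the bottle-neck are kept at run time, the feature estimator is considerably smaller than the full classifier, which is the size reduction the abstract refers to.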
This poster overviews the newly proposed bottle-neck features and then
examines the possibility of using a neural net structure with a
bottle-neck in a hierarchical neural net classifier such as Split
Context.
First, the neural net with a bottle-neck is used in place of the merger to
see whether the advantage observed for a single neural net also holds
for the hierarchical classifier. Then we use bottle-neck neural nets
in place of the context classifiers, using their bottle-neck outputs as
input to a merger classifier. Finally, bottle-neck neural nets are used
in both stages of the Split Context classifier. This improved Split
Context structure has several advantages: the use of bottle-necks
implies a size reduction of the resulting classifier, processing of the
classifier output is cheaper compared to probabilistic features, and a
reduction of word error rate is achieved as well.
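The final configuration described above can be sketched as a two-stage pipeline: two context classifiers with bottle-necks whose bottle-neck outputs are concatenated and fed to a merger that also has a bottle-neck. All dimensions below (5 frames of 23 filter-bank coefficients per context, 500-unit hidden layers, 30-unit bottle-necks) are illustrative assumptions, and biases are omitted for brevity; the real system uses trained networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_to_bottleneck(n_in, n_hid, n_bn):
    # Build a forward function mapping the input to bottle-neck activations;
    # weights are random placeholders for trained parameters.
    W1 = rng.standard_normal((n_in, n_hid)) * 0.1
    W2 = rng.standard_normal((n_hid, n_bn)) * 0.1
    def forward(x):
        return np.tanh(np.tanh(x @ W1) @ W2)
    return forward

# Stage 1: left- and right-context classifiers, each with a 30-unit
# bottle-neck (input: 5 frames x 23 filter-bank coefficients = 115 dims).
left_net  = mlp_to_bottleneck(115, 500, 30)
right_net = mlp_to_bottleneck(115, 500, 30)

# Stage 2: the merger, with its own bottle-neck producing the final features.
merger = mlp_to_bottleneck(60, 500, 30)

left_ctx  = rng.standard_normal((10, 115))   # 10 dummy left-context windows
right_ctx = rng.standard_normal((10, 115))   # 10 dummy right-context windows

stage1 = np.concatenate([left_net(left_ctx), right_net(right_ctx)], axis=1)
features = merger(stage1)
print(features.shape)   # 30-dim features per frame for the recogniser
```

Using bottle-neck outputs between the stages keeps the merger input small (60 dims here instead of two full posterior vectors), which is why the improved structure reduces both the classifier size and the cost of post-processing its output.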