Result Details
Neural network topologies and bottle neck features in speech recognition
Karafiát Martin, Ing., Ph.D., FIT (FIT), DCGM (FIT)
Černocký Jan, prof. Dr. Ing., DCGM (FIT)
Different neural net topologies for estimating features for speech recognition were presented. We introduced a bottle-neck structure into the previously proposed Split Context architecture. This was done mainly to reduce the size of the resulting neural net, which serves as a feature estimator. When the bottle-neck outputs are also used as the final outputs of the neural network, instead of probability estimates, a reduction in word error rate is achieved as well.
neural networks, topologies, speech recognition, bottle-neck features
This poster overviews the newly proposed bottle-neck features and then examines the use of a neural net structure with a bottle-neck in a hierarchical neural net classifier such as the Split Context classifier.

First, a neural net with a bottle-neck is used in place of the merger, to see whether the advantage seen for a single neural net also holds for the hierarchical classifier. Then we use bottle-neck neural nets in place of the context classifiers, feeding their bottle-neck outputs as input to a merger classifier. Finally, bottle-neck neural nets are used in both stages of the Split Context classifier. This improved Split Context structure has several advantages: the use of a bottle-neck reduces the size of the resulting classifier, processing the classifier outputs is cheaper than for probabilistic features, and a WER reduction was achieved as well.
@misc{BUT63689,
author="František {Grézl} and Martin {Karafiát} and Jan {Černocký}",
title="Neural network topologies and bottle neck features in speech recognition",
year="2007",
pages="78--82",
address="Brno",
url="http://www.fit.vutbr.cz/~grezl/publi/mlmi2007.pdf"
}