Publication Details
Delayed Fusion: Integrating Large Language Models into First-Pass Decoding in End-to-end Speech Recognition
speech recognition, large language model, decoding, delayed fusion
This paper presents an efficient decoding approach for end-to-end automatic
speech recognition (E2E-ASR) with large language models (LLMs). Although shallow
fusion is the most common approach to incorporate language models into E2E-ASR
decoding, we face two practical problems with LLMs. (1) LLM inference is
computationally costly. (2) There may be a vocabulary mismatch between the ASR
model and the LLM. To resolve this mismatch, we need to retrain the ASR model
and/or the LLM, which is at best time-consuming and in many cases not feasible.
We propose delayed fusion, which applies LLM scores to ASR hypotheses with
a delay during decoding and enables easier use of pre-trained LLMs in ASR tasks.
This method can reduce not only the number of hypotheses scored by the LLM but
also the number of LLM inference calls. It also allows re-tokenization of ASR
hypotheses during decoding if the ASR model and the LLM employ different tokenizations. We
demonstrate that delayed fusion provides improved decoding speed and accuracy
compared to shallow fusion and N-best rescoring using the LibriHeavy ASR corpus
and three public LLMs, OpenLLaMA 3B & 7B and Mistral 7B.
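
To make the decoding idea concrete, the sketch below illustrates one way delayed fusion could be realized inside a beam-search step: hypotheses are expanded and scored by the ASR model at every step, but the LLM is queried only for hypotheses that have just completed a word, after re-tokenizing the hypothesis text with the LLM's own tokenizer. This is an illustrative Python sketch, not the authors' implementation; the interfaces asr_topk (ASR candidate expansion), llm.batched_logprobs (batched LLM prefix scoring), and the fusion weight are hypothetical placeholders.

# Illustrative sketch of delayed LLM fusion in beam-search decoding.
# All model interfaces below are hypothetical stand-ins, not the paper's code.

from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    asr_tokens: list = field(default_factory=list)   # ASR-vocabulary tokens
    text: str = ""                                    # decoded text so far
    asr_score: float = 0.0                            # cumulative ASR log-prob
    llm_score: float = 0.0                            # cumulative LLM log-prob
    llm_scored_upto: int = 0                          # text length already scored by the LLM

def delayed_fusion_step(beams, asr_topk, llm, lm_weight=0.5, word_end=" "):
    """Expand each beam with ASR candidates; query the LLM only for
    hypotheses that have just completed a word (the 'delay')."""
    expanded = []
    for hyp in beams:
        # asr_topk is a hypothetical callable yielding (token_id, token_text, log-prob).
        for token, token_text, logp in asr_topk(hyp):
            expanded.append(Hypothesis(
                asr_tokens=hyp.asr_tokens + [token],
                text=hyp.text + token_text,
                asr_score=hyp.asr_score + logp,
                llm_score=hyp.llm_score,
                llm_scored_upto=hyp.llm_scored_upto,
            ))

    # Delayed LLM scoring: only hypotheses ending on a word boundary are
    # re-tokenized with the LLM's own tokenizer and scored in one batch,
    # which reduces both the number of scored hypotheses and LLM calls.
    to_score = [h for h in expanded
                if h.text.endswith(word_end) and len(h.text) > h.llm_scored_upto]
    if to_score:
        texts = [h.text[:-1] for h in to_score]       # strip the trailing boundary
        for h, logp in zip(to_score, llm.batched_logprobs(texts)):
            h.llm_score = logp                        # LLM log-prob of the full prefix
            h.llm_scored_upto = len(h.text)

    # Rank by the fused score and keep the best beams.
    expanded.sort(key=lambda h: h.asr_score + lm_weight * h.llm_score, reverse=True)
    return expanded[:len(beams)]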
@inproceedings{BUT198053,
author="HORI, T. and KOCOUR, M. and HAIDER, A. and MCDERMOTT, E. and ZHUANG, X.",
title="Delayed Fusion: Integrating Large Language Models into First-Pass Decoding in End-to-end Speech Recognition",
booktitle="Proceedings of ICASSP 2025",
year="2025",
pages="1--5",
publisher="IEEE Biometric Council",
address="Hyderabad",
doi="10.1109/ICASSP49660.2025.10890391",
isbn="979-8-3503-6874-1",
url="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10890391"
}