Abstract To address the uncertainty and semantic complexity of speech signals in real-time interactive scenarios and to achieve more efficient and accurate speech recognition, this study proposes a Dynamic Adaptive Transformer for Real-Time Speech Recognition (DATR-SR) model. Extensive experiments and analyses are carried out on public datasets covering diverse contexts, including Aishell-1, HKUST, LibriSpeech, CommonVoice, and a Chinese TV-series dataset. The results show that DATR-SR exhibits excellent adaptability and robust performance across different language environments and dynamic scenes. As the data volume increases, the character error rate decreases from 5.2% to 2.7%, inference latency consistently remains within 15 ms, and resource utilization exceeds 75%, demonstrating efficient computation. On the two kinds of datasets, the word error rate is as low as 4.3% and the accuracy exceeds 91%. In complex contexts in particular, the semantic coherence rate reaches 92.3% and the speech-event recall rate reaches 91.3%. Compared with other state-of-the-art models, DATR-SR achieves significant improvements in recognizing diverse speech events and in responding to dynamic scene switching. This study aims to provide an efficient speech recognition solution for developers and service providers in real-time interactive fields such as emotional social networking, online education, and intelligent customer service, enhancing the user experience and supporting the intelligent development of industrial applications.