Investigating the Impact of Automated Transcripts for English Listening Comprehension

Abstract

Real-time transcripts generated by automatic speech recognition (ASR) technologies hold potential to facilitate non-native speakers’ (NNSs’) listening comprehension. While introducing another modality (i.e., ASR transcripts) provides NNSs with supplemental information for understanding speech, it also runs the risk of overwhelming them with excessive information. The goal of our research is to provide design guidelines for presenting ASR transcripts to NNSs in a way that effectively supports their listening comprehension. To this end, we used an eye-tracker to investigate the advantages and disadvantages of presenting ASR transcripts to NNSs, and explored how the transcripts affect NNSs’ listening experiences.

Industrial Applications

More and more global companies and organizations are forming multinational teams so that people from different language and cultural backgrounds can work together to generate new ideas, solve problems, and make decisions. To communicate and collaborate, multinational teams often adopt a common language (i.e., English). However, a common language does not necessarily ensure effective communication. NNSs often face comprehension difficulties when listening to native speakers’ (NSs’) speech. For example, in meetings with NSs of English, or when listening to an English talk, NNSs of English often get left behind and sometimes even miss the key points of the meeting or talk.
Real-time transcripts generated by ASR technologies hold potential to help NNSs improve their listening comprehension. If such a technology were installed on portable devices such as smartphones, tablets, or laptops, NNSs could view the automatically generated transcripts on the screen while listening to the speech. Our findings have implications for enhancing ASR technologies to better support NNSs.
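As a hypothetical illustration of the client side of such a system (not the system studied in this research), streaming ASR services typically emit volatile "partial" hypotheses that are later replaced by a "final" result, and a caption display must merge the two. A minimal sketch, assuming only this partial/final result model:

```python
# Hypothetical sketch: rendering real-time ASR captions, assuming the
# recognizer streams partial hypotheses followed by final results.
from dataclasses import dataclass, field


@dataclass
class CaptionView:
    """Keeps a stable finalized transcript plus one volatile partial line."""
    finalized: list[str] = field(default_factory=list)
    partial: str = ""

    def on_result(self, text: str, is_final: bool) -> None:
        if is_final:
            self.finalized.append(text)  # commit to the stable transcript
            self.partial = ""            # the final text supersedes the partial
        else:
            self.partial = text          # overwrite the previous partial guess

    def render(self) -> str:
        # What the NNS would see on screen at this moment.
        tail = [self.partial] if self.partial else []
        return " ".join(self.finalized + tail)


view = CaptionView()
view.on_result("good", is_final=False)
view.on_result("good morning every", is_final=False)
view.on_result("Good morning, everyone.", is_final=True)
print(view.render())  # Good morning, everyone.
```

The key design point is that partial hypotheses are overwritten rather than appended, so recognition errors that the ASR engine later corrects never accumulate in the transcript the reader sees.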

Researchers

Name	Department	Laboratory	Position/Year
Xun Cao	Department of Social Informatics	Ishida & Matsubara Laboratory	3rd-year Ph.D. student
Naomi Yamashita	Other affiliation	NTT Communication Science Laboratories	Researcher
Toru Ishida	Department of Social Informatics	Ishida & Matsubara Laboratory	Professor
