Details of Research Outputs

Title: A Scope Sensitive and Result Attentive Model for Multi-Intent Spoken Language Understanding
Creator: Cheng, Lizhi; Yang, Wenmian; Jia, Weijia
Date Issued: 2023-06-27
Source Publication: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Volume: 37
Pages: 12691-12699
Abstract: Multi-Intent Spoken Language Understanding (SLU), a novel and more complex SLU scenario, is attracting increasing attention. Unlike traditional SLU, each intent in this scenario has its own specific scope. Semantic information outside that scope can even hinder the prediction, which tremendously increases the difficulty of intent detection. More seriously, guiding slot filling with these inaccurate intent labels suffers from error propagation, resulting in unsatisfactory overall performance. To address these challenges, in this paper we propose a novel Scope-Sensitive Result Attention Network (SSRAN) based on the Transformer, which contains a Scope Recognizer (SR) and a Result Attention Network (RAN). The Scope Recognizer assigns scope information to each token, reducing the distraction of out-of-scope tokens. The Result Attention Network effectively utilizes the bidirectional interaction between the results of slot filling and intent detection, mitigating the error propagation problem. Experiments on two public datasets indicate that our model significantly improves SLU performance (by 5.4% and 2.1% in overall accuracy) over the state-of-the-art baseline.
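The abstract only describes the architecture at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of how a scope-sensitive, result-attentive multi-intent SLU model along these lines could be wired together; the module names, layer sizes, per-token scope gating, and the cross-attention used for result interaction are illustrative assumptions inferred from the abstract, not the authors' published SSRAN implementation.

# Hypothetical sketch of a scope-sensitive, result-attentive multi-intent SLU model,
# based only on the abstract above. All design details are assumptions.
import torch
import torch.nn as nn


class SSRANSketch(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots,
                 d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Scope Recognizer (SR): assigns scope information to each token so that
        # out-of-scope tokens distract intent detection less (assumption: modeled
        # here as a per-token sigmoid gate).
        self.scope_head = nn.Linear(d_model, 1)
        # Preliminary task heads producing "results" for both tasks.
        self.intent_head = nn.Linear(d_model, num_intents)  # multi-label intents
        self.slot_head = nn.Linear(d_model, num_slots)      # per-token slot labels
        # Result Attention Network (RAN): bidirectional interaction between the two
        # tasks' result representations (assumption: plain cross-attention).
        self.intent_emb = nn.Linear(num_intents, d_model)
        self.slot_emb = nn.Linear(num_slots, d_model)
        self.slot_to_intent = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.intent_to_slot = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.intent_out = nn.Linear(d_model, num_intents)
        self.slot_out = nn.Linear(d_model, num_slots)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))              # (B, T, d)
        scope_gate = torch.sigmoid(self.scope_head(h))       # (B, T, 1), in-scope weight
        h_scoped = h * scope_gate                             # damp out-of-scope tokens
        # Preliminary results for both tasks.
        intent_logits = self.intent_head(h_scoped.mean(dim=1))  # (B, num_intents)
        slot_logits = self.slot_head(h)                           # (B, T, num_slots)
        # Embed the preliminary results and let each task attend to the other's result.
        intent_q = self.intent_emb(intent_logits).unsqueeze(1)   # (B, 1, d)
        slot_kv = self.slot_emb(slot_logits)                       # (B, T, d)
        intent_ctx, _ = self.slot_to_intent(intent_q, slot_kv, slot_kv)
        slot_ctx, _ = self.intent_to_slot(slot_kv, intent_q, intent_q)
        # Refined predictions informed by the other task's results.
        final_intents = self.intent_out(intent_ctx.squeeze(1))
        final_slots = self.slot_out(slot_kv + slot_ctx)
        return final_intents, final_slots, scope_gate

For example, calling model(torch.randint(0, vocab_size, (2, 10))) on a toy batch would return refined multi-label intent logits, per-token slot logits, and the per-token scope gates; the two-pass "preliminary results then mutual attention" structure is the assumed mechanism for mitigating error propagation described in the abstract.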
Language: English
Scopus ID: 2-s2.0-85167973707
Document Type: Conference paper
Identifier: http://repository.uic.edu.cn/handle/39GCC9TT/11573
Collection: Beijing Normal-Hong Kong Baptist University
Corresponding Author: Yang, Wenmian
Affiliation:
1. Shanghai Jiao Tong University, Shanghai, China
2. Nanyang Technological University, Singapore, Singapore
3. BNU-UIC Institute of Artificial Intelligence and Future Networks, Beijing Normal University (Zhuhai), Guangdong Key Lab of AI and Multi-Modal Data Processing, BNU-HKBU United International College, Zhuhai, Guangdong, China
Recommended Citation
GB/T 7714
Cheng, Lizhi, Yang, Wenmian, Jia, Weijia. A Scope Sensitive and Result Attentive Model for Multi-Intent Spoken Language Understanding[C], 2023: 12691-12699.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.