Details of Research Outputs

Status: Published
Title: Time-sync video tag extraction using semantic association graph
Creator: Yang, Wenmian; Wang, Kun; Ruan, Na; et al.
Date Issued: 2019
Source Publication: ACM Transactions on Knowledge Discovery from Data
ISSN: 1556-4681 / 1556-472X
Volume: 13, Issue: 4
Abstract

Time-sync comments (TSCs) reveal a new way of extracting online video tags. However, TSCs contain substantial noise due to users' diverse comments, posing great challenges for accurate and fast video tag extraction. In this article, we propose an unsupervised video tag extraction algorithm named Semantic Weight-Inverse Document Frequency (SW-IDF). Specifically, we first generate a semantic association graph (SAG) using the semantic similarities and timestamps of the TSCs. Second, we propose two graph clustering algorithms, a dialogue-based algorithm and a topic-center-based algorithm, to handle videos with different comment densities. Third, we design a graph iteration algorithm that assigns a weight to each comment based on the degrees of the clustered subgraphs, which differentiates meaningful comments from noise. Finally, we obtain the weight of each word by combining Semantic Weight (SW) and Inverse Document Frequency (IDF). In this way, video tags are extracted automatically in an unsupervised manner. Extensive experiments show that SW-IDF (dialogue-based algorithm) achieves 0.4210 F1-score and 0.4932 MAP (Mean Average Precision) on high-density comments, and 0.4267 F1-score and 0.3623 MAP on low-density comments; SW-IDF (topic-center-based algorithm) achieves 0.4444 F1-score and 0.5122 MAP on high-density comments, and 0.4207 F1-score and 0.3522 MAP on low-density comments. It outperforms the state-of-the-art unsupervised algorithms in both F1-score and MAP. © 2019 Association for Computing Machinery.
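The pipeline in the abstract (similarity/timestamp graph → clustering → degree-based comment weights → SW × IDF word scores) can be sketched roughly as below. This is a hypothetical minimal illustration, not the paper's actual algorithm: the real SW-IDF uses proper semantic similarities and the dialogue-based or topic-center-based clustering, whereas here edges come from a crude Jaccard word overlap within a timestamp window, and a comment's semantic weight is approximated by its normalized degree.

```python
# Hypothetical sketch of SW-IDF-style tag weighting (assumptions: Jaccard
# similarity stands in for semantic similarity; normalized graph degree
# stands in for the paper's iteratively computed semantic weight).
import math
from collections import defaultdict

def jaccard(a, b):
    # crude stand-in for the paper's semantic similarity measure
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def sw_idf_tags(comments, sim_thresh=0.2, time_thresh=30, top_k=3):
    """comments: list of (timestamp_in_seconds, text) time-sync comments."""
    n = len(comments)
    degree = [0] * n
    # build the semantic association graph: edge when two comments are
    # close in time AND sufficiently similar
    for i in range(n):
        for j in range(i + 1, n):
            ti, si = comments[i]
            tj, sj = comments[j]
            if abs(ti - tj) <= time_thresh and jaccard(si, sj) >= sim_thresh:
                degree[i] += 1
                degree[j] += 1
    # semantic weight (SW) of a comment: normalized degree, so isolated
    # noise comments receive weight 0
    max_deg = max(degree) or 1
    sw = [d / max_deg for d in degree]
    # document frequency per word across comments
    df = defaultdict(int)
    for _, text in comments:
        for w in set(text.split()):
            df[w] += 1
    # word score = sum over comments of SW * IDF
    score = defaultdict(float)
    for (_, text), weight in zip(comments, sw):
        for w in text.split():
            score[w] += weight * math.log(n / df[w] + 1)
    return [w for w, _ in sorted(score.items(), key=lambda kv: -kv[1])[:top_k]]
```

For example, three mutually similar comments around the same timestamp ("great fight scene", "epic fight scene", "fight scene amazing") form a dense subgraph, so "fight" and "scene" rank as tags, while an isolated "lol" comment far away in time gets zero semantic weight and is filtered out.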

Keyword: Extraction
DOI: 10.1145/3332932
Indexed By: SCIE; SSCI
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Information Systems; Computer Science, Software Engineering
WOS ID: WOS:000496747400002
Citation Statistics
Cited Times (WOS): 6
Document Type: Journal article
Identifier: http://repository.uic.edu.cn/handle/39GCC9TT/1856
Collection: Research outside affiliated institution
Affiliation
1.Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
2.State Key Lab of IoT for Smart City, FST, University of Macau, Macau, 999078, China
3.Department of Electrical and Computer Engineering, University of California, Los Angeles, 90095, CA, United States
4.Department of Computer Science and Engineering, American University of Sharjah, Sharjah, United Arab Emirates
5.China Unicom Research Institute, Economic-Technological Development Area, Bldg. 2, No. 1 Beihuan East Road, Beijing, 100176, China
Recommended Citation
GB/T 7714
Yang, Wenmian, Wang, Kun, Ruan, Na, et al. Time-sync video tag extraction using semantic association graph[J]. ACM Transactions on Knowledge Discovery from Data, 2019, 13(4).
APA Yang, Wenmian, Wang, Kun, Ruan, Na, Gao, Wenyuan, Jia, Weijia, ... & Zhang, Yunyong. (2019). Time-sync video tag extraction using semantic association graph. ACM Transactions on Knowledge Discovery from Data, 13(4).
MLA Yang, Wenmian, et al. "Time-sync video tag extraction using semantic association graph." ACM Transactions on Knowledge Discovery from Data 13.4 (2019).
Files in This Item:
There are no files associated with this item.
Related Services
Usage statistics
Google Scholar
Similar articles in Google Scholar
[Yang, Wenmian]'s Articles
[Wang, Kun]'s Articles
[Ruan, Na]'s Articles
Baidu Academic
Similar articles in Baidu Academic
[Yang, Wenmian]'s Articles
[Wang, Kun]'s Articles
[Ruan, Na]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Yang, Wenmian]'s Articles
[Wang, Kun]'s Articles
[Ruan, Na]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.