Title | DiGAN: Directional Generative Adversarial Network for Object Transfiguration |
Authors | Luo, Zhen; Zhang, Yingfang; Zhong, Peihao; et al. |
Publication Date | 2022-06-27 |
Conference Name | 2022 International Conference on Multimedia Retrieval, ICMR 2022 |
Proceedings Title | ICMR 2022 - Proceedings of the 2022 International Conference on Multimedia Retrieval |
ISBN | 9781450392389 |
Pages | 471-479 |
Conference Dates | June 27-30, 2022 |
Conference Location | Newark |
Abstract | The concept of cycle consistency in couple mapping has enabled CycleGAN to achieve remarkable performance in image-to-image translation. However, its limitations in object transfiguration have not yet been fully resolved. To alleviate the previously reported problems of incorrect transformation position, degeneration, and artifacts, this work presents a new approach, the Directional Generative Adversarial Network (DiGAN), for object transfiguration. The major contribution of this work is threefold. First, paired directional generators are designed for both intra-domain and inter-domain generation. Second, a segmentation network based on Mask R-CNN is introduced to build conditional inputs for both the generators and the discriminators. Third, a feature loss and a segmentation loss are added to optimize the model. Experimental results on horse-to-zebra mapping indicate that, compared with CycleGAN and AttentionGAN respectively, DiGAN achieves an Inception Score 17.2% and 60.9% higher, a Fréchet Inception Distance 15.5% and 2.05% lower, and a VGG distance 14.2% and 15.6% lower. (A hedged code sketch of this segment-conditional objective follows the record below.) |
Keywords | cycle consistency; feature consistency; generative adversarial network; object transfiguration; segment-conditional generation |
DOI | 10.1145/3512527.3531400 |
Language | English |
Scopus Accession Number | 2-s2.0-85134068668 |
Document Type | Conference paper |
Item Identifier | https://repository.uic.edu.cn/handle/39GCC9TT/9834 |
Collection | Faculty of Science and Technology |
Corresponding Author | Chen, Donglong |
Author Affiliations | 1. Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College, Zhuhai, China; 2. School of Computer Science, Fudan University, Shanghai, China |
First Author Affiliation | BNU-HKBU United International College |
Corresponding Author Affiliation | BNU-HKBU United International College |
Recommended Citation (GB/T 7714) | Luo, Zhen, Zhang, Yingfang, Zhong, Peihao, et al. DiGAN: Directional Generative Adversarial Network for Object Transfiguration[C], 2022: 471-479. |
Files in This Item | No files are associated with this item. |
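
The abstract describes a segment-conditional adversarial setup with added feature and segmentation losses. The following is a minimal sketch, not the authors' implementation, assuming PyTorch and illustrative names (`G_xy`, `G_yx`, `D_y`, `feat_extractor`, `mask_x`, and the `lambda_*` weights are assumptions); it shows one plausible way such an objective could be composed, with an object mask from a Mask R-CNN-style segmenter concatenated to the generator and discriminator inputs.

```python
# Hedged sketch (not the paper's code): a segment-conditional generator objective
# in the spirit of the DiGAN abstract. All module and weight names are illustrative.
import torch
import torch.nn.functional as F


def digan_style_generator_loss(G_xy, G_yx, D_y, feat_extractor,
                               x, mask_x,
                               lambda_cyc=10.0, lambda_feat=1.0, lambda_seg=1.0):
    """Toy generator objective: adversarial + cycle + feature + segmentation terms.

    x       : source-domain images, shape (N, 3, H, W)
    mask_x  : binary object masks from a segmentation network such as Mask R-CNN,
              shape (N, 1, H, W); used to condition the networks and to keep the
              transformation confined to the object region.
    """
    # Segment-conditional input: concatenate the image with its object mask.
    x_cond = torch.cat([x, mask_x], dim=1)

    # Inter-domain translation X -> Y, then back Y -> X for cycle consistency.
    fake_y = G_xy(x_cond)
    rec_x = G_yx(torch.cat([fake_y, mask_x], dim=1))

    # Adversarial term (least-squares flavour), also conditioned on the mask.
    pred = D_y(torch.cat([fake_y, mask_x], dim=1))
    adv = F.mse_loss(pred, torch.ones_like(pred))

    # Cycle-consistency term, as in CycleGAN.
    cyc = F.l1_loss(rec_x, x)

    # Feature-consistency term: keep deep features of input and translation close.
    feat = F.l1_loss(feat_extractor(fake_y), feat_extractor(x))

    # Segmentation term: leave the background (outside the mask) untouched.
    seg = F.l1_loss(fake_y * (1 - mask_x), x * (1 - mask_x))

    return adv + lambda_cyc * cyc + lambda_feat * feat + lambda_seg * seg
```

The exact loss weights, the pairing of the directional generators, and the precise form of the feature and segmentation terms in the paper may differ; the sketch only mirrors the components named in the abstract.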