I2I translation model based on CondConv and spectral-domain realness measurement
BCS-StarGAN
Written by: Li, Yuqiang; Shangguan, Xinyi; Liu, Chun; Meng, Haochen
Abstract: In recent years, research on Image-to-Image (I2I) translation based on Generative Adversarial Networks has received extensive attention from both industry and academia, and relevant results continue to emerge. As a typical representative, StarGAN v2 has achieved good results in the field of I2I translation, but it still suffers from insufficient feature extraction in some cases, which degrades translation quality. The conventional remedy is to increase the depth and width of the model, but this raises model complexity, making the already difficult-to-train StarGAN v2 even harder to train and thus hindering its application. To this end, this paper proposes BCS-StarGAN, an improved model based on conditionally parameterized convolution (CondConv) and spectral-domain realness measurement. The method significantly improves I2I translation quality while adding only a small amount of computation. We first replace the conventional convolutions in the Bottleneck module of the StarGAN v2 generator with CondConv. Furthermore, to better capture the high-frequency data distribution of real images, a lightweight spectral classifier is added to the discriminator; it enables the discriminator to judge whether an image contains realistic high-frequency content, motivating the generator to learn the high-frequency information of real images. Finally, we conduct qualitative and quantitative comparisons on three public datasets. The comparison with mainstream models shows that BCS-StarGAN achieves the best results.
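The core CondConv idea referenced in the abstract — computing per-example routing weights and mixing several expert kernels into one input-dependent kernel before a single convolution — can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation; all names, shapes, and the sigmoid routing function are illustrative assumptions.

```python
import numpy as np

def condconv2d(x, experts, routing_w, stride=1):
    """Sketch of a conditionally parameterized convolution (CondConv).

    x         : (C_in, H, W) one input feature map
    experts   : (K, C_out, C_in, kh, kw) K expert kernels
    routing_w : (K, C_in) routing matrix (illustrative: routing weights
                are a sigmoid over globally pooled input features)
    """
    K, C_out, C_in, kh, kw = experts.shape
    # Per-example routing: global average pool, then sigmoid.
    pooled = x.mean(axis=(1, 2))                       # (C_in,)
    alpha = 1.0 / (1.0 + np.exp(-(routing_w @ pooled)))  # (K,)
    # Mix the experts into a single input-dependent kernel.
    kernel = np.tensordot(alpha, experts, axes=1)      # (C_out, C_in, kh, kw)
    # Plain "valid" cross-correlation with the mixed kernel.
    H, W = x.shape[1:]
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    out = np.zeros((C_out, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[:, i, j] = np.tensordot(kernel, patch, axes=3)
    return out
```

Because only the small routing computation and the kernel mixing are added on top of one ordinary convolution, the extra cost stays low even with several experts — which matches the abstract's claim of "only a small amount of computation".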
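The spectral realness measurement can likewise be sketched: take the 2-D FFT of an image, mask out the low-frequency band, and feed a statistic of the remaining high-frequency spectrum to a simple classifier. The radial cutoff, the mean log-magnitude feature, and the threshold rule below are illustrative choices, not the paper's exact design.

```python
import numpy as np

def highfreq_feature(img, cutoff=0.25):
    """Mean log-magnitude of the high-frequency FFT band of a
    grayscale image (H, W). `cutoff` is the fraction of the spectrum
    radius treated as low frequency and masked out (illustrative)."""
    H, W = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    mag = np.log1p(np.abs(spec))
    # Radial mask: keep only frequencies beyond the cutoff radius.
    yy, xx = np.ogrid[:H, :W]
    r = np.hypot(yy - H / 2, xx - W / 2)
    high = r >= cutoff * min(H, W) / 2
    return mag[high].mean()

def spectral_realness(img, threshold):
    """Tiny 'spectral classifier' stand-in: real images tend to carry
    more high-frequency energy than overly smooth generated ones."""
    return highfreq_feature(img) > threshold
```

A discriminator equipped with such a signal penalizes generated images whose spectra lack high-frequency content, pushing the generator to reproduce fine detail rather than smooth approximations.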
Language:
English