Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
- Tianlong Chen,
- Yu Cheng,
- Zhe Gan,
- Jingjing Liu,
- Zhangyang Wang
35th Annual Conference on Neural Information Processing Systems (NeurIPS 2021)
Treating the lottery ticket as an inductive prior, we provide a brand-new angle on data-hungry GAN training that is orthogonal to augmentation-based methods.

Abstract: Training generative adversarial networks (GANs) with limited real image data generally results in deteriorated performance and collapsed models. To conquer this challenge, we draw inspiration from recent observations that one can discover independently trainable and highly sparse subnetworks (a.k.a. lottery tickets) in GANs. Treating this as an inductive prior, we suggest a brand-new angle on data-efficient GAN training: first identify the lottery ticket from the original GAN using the small training set of real images, then focus on training that sparse subnetwork by reusing the same set. Both steps have lower complexity and are more data-efficient to train. We find that our coordinated framework offers gains orthogonal to existing real-image data augmentation methods, and we additionally offer a new feature-level augmentation that can be applied together with them. Comprehensive experiments endorse the effectiveness of our proposed framework across various GAN architectures (SNGAN, BigGAN, and StyleGAN-V2) and diverse datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet). Our training framework also displays strong few-shot generalization, i.e., generating high-fidelity images by training from scratch with just 100 real images, without any pre-training.
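To make the two-step pipeline in the abstract concrete, below is a minimal sketch in PyTorch: globally prune the lowest-magnitude weights of a briefly trained GAN to expose a sparse subnetwork, rewind the surviving weights to their initialization, and retrain on the same small dataset. The tiny generator/discriminator, the 80% sparsity level, and the elided training loops are placeholder assumptions for illustration, not the paper's actual architectures or schedule (ticket finding in the lottery ticket literature is typically iterative rather than one-shot).

```python
# Minimal sketch, assuming PyTorch; models and sparsity level are placeholders.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyG(nn.Module):  # stand-in generator, not the paper's architecture
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class TinyD(nn.Module):  # stand-in discriminator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

def prune_global(model: nn.Module, sparsity: float) -> None:
    """Remove the lowest-magnitude weights across all layers at once."""
    layers = [(m, "weight") for m in model.modules()
              if isinstance(m, nn.Linear)]
    prune.global_unstructured(layers, pruning_method=prune.L1Unstructured,
                              amount=sparsity)

def rewind(model: nn.Module, init_state: dict) -> None:
    """Reset surviving weights to their initial values; the pruning masks
    stay in place (pruned layers keep raw weights under '<name>_orig')."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(init_state[name.replace("_orig", "")])

gen, disc = TinyG(), TinyD()
g0 = copy.deepcopy(gen.state_dict())
d0 = copy.deepcopy(disc.state_dict())

# Step 1: briefly train on the small real set (loop elided here), then prune.
prune_global(gen, 0.8)   # keep ~20% of generator weights
prune_global(disc, 0.8)
rewind(gen, g0)
rewind(disc, d0)

# Step 2: retrain only the sparse subnetworks, reusing the same small dataset
# (the same adversarial training loop runs again, now on the tickets).
```

The abstract also mentions a new feature-level augmentation but does not specify its mechanism. As a generic stand-in (not the paper's technique), the sketch below perturbs intermediate discriminator features with Gaussian noise rather than transforming input pixels:

```python
# Generic feature-space augmentation sketch; the noise perturbation is an
# assumed placeholder, not the paper's specific method.
import torch
import torch.nn as nn

class FeatureAugDisc(nn.Module):
    def __init__(self, noise_std: float = 0.1):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2))
        self.head = nn.Linear(256, 1)
        self.noise_std = noise_std

    def forward(self, x):
        h = self.features(x)
        if self.training:  # augment in feature space during training only
            h = h + self.noise_std * torch.randn_like(h)
        return self.head(h)
```

Because this operates on features rather than pixels, it composes naturally with pixel-level augmentation methods, consistent with the abstract's claim that the two can be applied together.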