Casual GAN Papers
Easy-to-read summaries of popular AI papers.
Improving Inversion and Generation Diversity in StyleGAN using a Gaussianized Latent Space by Wulff et al. explained in 5 minutes.
Exploiting Spatial Dimensions of Latent in GAN for Real-time Image Editing by Hyunsu Kim et al. explained in 5 minutes.
MLP-Mixer: An all-MLP Architecture for Vision by Tolstikhin et al. explained in 5 minutes.
StyleGAN2 Distillation for Feed-forward Image Manipulation by Viazovetskyi et al. explained in 5 minutes.
EigenGAN: Layer-Wise Eigen-Learning for GANs by Zhenliang He et al. explained in 5 minutes.
Generating Diverse High-Fidelity Images with VQ-VAE-2 by Razavi et al. explained in 5 minutes.
Training Generative Adversarial Networks with Limited Data by Karras et al. explained in 5 minutes.
Spatially-Adaptive Pixelwise Networks for Fast Image Translation by Shaham et al. explained in 5 minutes.
Designing an Encoder for StyleGAN Image Manipulation by Tov et al. explained in 5 minutes.
ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement by Alaluf et al. explained in 5 minutes.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Mildenhall et al. explained in 5 minutes.
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery by Patashnik et al. explained in 5 minutes.