Casual GAN Papers
Easy to read summaries of popular AI papers.
Plenoxels: Radiance Fields without Neural Networks by Alex Yu, Sara Fridovich-Keil, et al. explained in 5 minutes
Zero-Shot Text-Guided Object Generation with Dream Fields by Ajay Jain, et al. explained in 5 minutes
HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing by Yuval Alaluf, Omer Tov, et al. explained in 5 minutes
I GAN Explain: AI-assisted Image Editing, Part 2. How to edit images without Photoshop.
Masked Autoencoders Are Scalable Vision Learners by Kaiming He et al. explained in 5 minutes
I GAN Explain: 8 years of GAN evolution and the intuition behind it.
Projected GANs Converge Faster by Axel Sauer et al. explained in 5 minutes
Adaptive Convolutions for Structure-Aware Style Transfer by Prashanth Chandran et al. explained in 5 minutes
I GAN Explain: So you want to learn GANs? A generative AI primer by Casual GAN Papers
I GAN Explain: VQGAN + CLIP Tutorial and Colab Walkthrough
Image-Based CLIP-Guided Essence Transfer by Hila Chefer et al. explained in 5 minutes
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis by Peng Zhou et al. explained in 5 minutes