1. Normalize the inputs to [-1, +1] (sketch below)
2. Modified loss function: instead of having G minimize log(1 - D(G(z))), have it maximize log D(G(z)) (sketch below)
3. Use a spherical Z rather than a uniform one (see "Sampling Generative Networks"; sketch below)
4. Do not mix real and fake data in the same BatchNorm batch (sketch below)
5. Avoid sparse gradients (ReLU to LeakyReLU, MaxPool to average pooling)
6. Label smoothing (sketch below)
7. DCGAN / hybrid models (KL + GAN, VAE + GAN)
8. Use RL stochastic tricks
9. Adam (sketch below)
10. Track failures early: check the losses
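A minimal PyTorch sketch of tips 1, 2, and 6 (the function names and the 0.9 smoothing value are illustrative, not from the talk):

```python
import torch
import torch.nn.functional as F

# Tip 1: scale images from [0, 255] into [-1, +1], matching a tanh
# output layer on the generator.
def normalize(images):
    return images.float() / 127.5 - 1.0

# Tip 2: the non-saturating generator loss. min log(1 - D(G(z)))
# saturates when D is confident, so instead maximize log D(G(z)),
# i.e. train G against "real" targets on fake samples.
def generator_loss(fake_logits):
    return F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))

# Tip 6: one-sided label smoothing; use 0.9 rather than 1.0 as the
# real target when training the discriminator.
def discriminator_loss(real_logits, fake_logits):
    return (F.binary_cross_entropy_with_logits(
                real_logits, torch.full_like(real_logits, 0.9))
            + F.binary_cross_entropy_with_logits(
                fake_logits, torch.zeros_like(fake_logits)))
```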
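A matching discriminator update for tips 4 and 9, reusing discriminator_loss from the sketch above (G and D are assumed to be existing nn.Module networks; the Adam settings are the DCGAN defaults, not a prescription):

```python
import torch

# Tip 9: Adam; lr=2e-4 with beta1=0.5 is the setting popularized by
# the DCGAN paper (illustrative values, tune for your model).
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))

def d_step(real_batch, z):
    d_opt.zero_grad()
    fake_batch = G(z).detach()   # block gradients into G on D's update
    # Tip 4: two separate forward passes, so any BatchNorm inside D
    # computes statistics over an all-real batch and an all-fake
    # batch, never a mixed one.
    loss = discriminator_loss(D(real_batch), D(fake_batch))
    loss.backward()
    d_opt.step()
    return loss.item()
```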
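For tip 3, sample z from a Gaussian (whose mass concentrates near a sphere) and interpolate between latents along the great circle, as suggested in White's "Sampling Generative Networks" (a sketch; slerp here is the standard formula, not code from the paper):

```python
import torch

def sample_z(batch_size, z_dim):
    # Tip 3: Gaussian rather than uniform latents.
    return torch.randn(batch_size, z_dim)

def slerp(t, z0, z1):
    # Spherical linear interpolation between two latent vectors.
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos(torch.clamp(torch.dot(z0n, z1n), -1.0, 1.0))
    if omega.abs() < 1e-6:
        return (1.0 - t) * z0 + t * z1   # nearly parallel: plain lerp
    return (torch.sin((1.0 - t) * omega) * z0
            + torch.sin(t * omega) * z1) / torch.sin(omega)
```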
@vkvaibhavkumarvk · 5 years ago
github.com/soumith/ganhacks
@FireSonix · 5 years ago
For those who want to go a little further:
11. Don't balance via loss statistics (don't use the D/G losses for hyperparameter tuning)
12. If you have labels, use them (additional supervision, as in AC-GAN, CGAN, etc.; sketch after this list)
13. Add noise to the inputs, decaying it over time (sketch after this list)
14. Train the discriminator more, sometimes (sketch after this list)
15. Minibatch discrimination
16. Discrete variables (feed in additional conditioning information beyond the labels)
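A sketch of tips 13 and 14, chaining the d_step, sample_z, and generator_loss helpers from the sketches above (the noise schedule, the 2:1 update ratio, and the names loader and z_dim are all assumptions):

```python
import torch

def noisy(x, step, start_sigma=0.1, decay_steps=10_000):
    # Tip 13: Gaussian noise on D's inputs, annealed to zero over
    # training (this schedule is illustrative). Noising the fake
    # side as well would go inside d_step.
    sigma = start_sigma * max(0.0, 1.0 - step / decay_steps)
    return x + sigma * torch.randn_like(x)

def g_step(z):
    g_opt.zero_grad()
    loss = generator_loss(D(G(z)))   # non-saturating loss (tip 2)
    loss.backward()
    g_opt.step()

D_STEPS = 2   # Tip 14: extra D updates per G update; the ratio is
              # a knob to watch, not a fixed rule.
for step, (real_batch, _) in enumerate(loader):
    z = sample_z(real_batch.size(0), z_dim)
    for _ in range(D_STEPS):
        d_step(noisy(real_batch, step), z)
    g_step(z)
```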
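For tips 12 and 16, a common pattern in the spirit of CGAN/AC-GAN is an Embedding layer that turns a discrete label into a dense vector fed to both networks (a toy example, not the architecture from any particular paper):

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: an Embedding maps a class label to
    a dense vector that is concatenated with the latent z."""
    def __init__(self, z_dim=100, n_classes=10, embed_dim=50):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + embed_dim, 256),
            nn.LeakyReLU(0.2),   # tip 5: avoid sparse gradients
            nn.Linear(256, 784),
            nn.Tanh(),           # tip 1: outputs in [-1, +1]
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

# Usage: fake = ConditionalGenerator()(torch.randn(8, 100),
#                                      torch.randint(0, 10, (8,)))
```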