I found a really good answer from user ajmooch on reddit and decided to post it here in case someone had the misconceptions I had:

There's several things to keep in mind here.

The first thing is that the BCE objective for the Generator can more accurately be stated as "the images output by the generator should be assigned a high probability by the Discriminator." It's not BCE as you might see in a binary reconstruction loss, which would be BCE(G(Z), X) where G(Z) is a generated image and X is a sample; it's BCE(D(G(Z)), 1), where D(G(Z)) is the probability assigned to the generated image by the Discriminator. Given a "perfect" generator which always has photorealistic outputs, the D(G(Z)) values should always be close to 1. Obviously in practice there's difficulties getting this kind of convergence (the training is sort of inherently unstable), but that is the ideal.

The second is that in the standard GAN algorithm, the latent vector (the "random noise" which the generator receives as input and has to turn into an image) is sampled independently of training data. Were you to use the MSE between the outputs of the GAN and a single image, you'd effectively be saying "given this (random) Z, produce this specific X," and you'd be implicitly forcing the generator to learn a nonsensical embedding. You might get some sort of result out, but if you think of the Z vector as a high-level description of …
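To make the distinction above concrete, here is a minimal sketch of the generator objective BCE(D(G(Z)), 1) with Z sampled independently of the data. The functions `G` and `D` below are toy stand-ins of my own (a linear map plus nonlinearity and a logistic score), not any particular GAN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def bce(p, target):
    # Binary cross-entropy between a predicted probability p and a target label.
    eps = 1e-12
    return -(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

# Toy stand-ins (assumptions, not a real GAN): G maps a latent vector to an
# "image" vector; D maps an image to a probability that it is real.
def G(z, w):
    return np.tanh(w @ z)

def D(x, v):
    return 1.0 / (1.0 + np.exp(-(v @ x)))

latent_dim, img_dim = 8, 16
w = rng.normal(size=(img_dim, latent_dim))  # generator "weights"
v = rng.normal(size=img_dim)                # discriminator "weights"

# Key point from the text: Z is sampled independently of the training data,
# and the generator's loss is BCE(D(G(Z)), 1) -- "fool the discriminator" --
# not a reconstruction loss BCE(G(Z), X) against one specific image X.
z = rng.normal(size=latent_dim)
g_loss = bce(D(G(z, w), v), 1.0)

# A perfect generator would drive D(G(Z)) toward 1, sending this loss to 0.
print(g_loss)
```

Note that nothing in `g_loss` ever compares `G(z, w)` pixel-wise to a training image; the only supervision signal is the discriminator's probability.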