Hey. I am considering using GANs for data augmentation to tackle imbalanced classes in the task of Facial Emotion Recognition. I am planning to use Google Colab to train my GAN model with datasets containing specific emotions. Which GAN model would do best at creating the most realistic results? CycleGAN?
@morancium (1 day ago)
Hi Aladdin, I have a query: how are you able to calculate the IoU scores from just the length and breadth of the bounding boxes? Can you please explain that?
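As a quick aside: IoU is usually computed from the box corner coordinates rather than just width and height; a minimal plain-Python sketch, assuming (x1, y1, x2, y2) corner-format boxes (not the video's exact code):

```python
def iou(box1, box2):
    # boxes are (x1, y1, x2, y2) corner coordinates
    ix1 = max(box1[0], box2[0])
    iy1 = max(box1[1], box2[1])
    ix2 = min(box1[2], box2[2])
    iy2 = min(box1[3], box2[3])
    # clamp to zero so non-overlapping boxes give zero intersection
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)
```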
@jacky2476 (3 days ago)
Thx.
@JuanGabrielOyolaCardona (4 days ago)
Thanks for sharing 😀👍 greetings from Colombia.
@algorithmo134 (5 days ago)
At 54:02, why do we need to pass the truncated trg[:, :-1]? I thought we already applied the mask to prevent the model from looking into the future?
@danieldare2640 (6 days ago)
I think the confusion people may have is that the bounding boxes can be different sizes, and you might be comparing an excessively big predicted bounding box to a small actual one.
@ayaabdalsalam7074 (9 days ago)
I am new to deep learning. Should I watch these videos or go to the deep learning playlist, and what is the next step I should follow?
@Ismail_Qayyum (11 days ago)
Which color theme is it?
@hwuM927udq (11 days ago)
Thanks for sharing , that's really helpful!
@nobertnghoboko4325 (15 days ago)
I recoded this in C++. It was interesting.
@dilanprakash3097 (16 days ago)
There is a simpler way to reduce the LR: you can import ReduceLROnPlateau.
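For anyone curious, a minimal sketch of the PyTorch version (torch.optim.lr_scheduler.ReduceLROnPlateau); the factor/patience values here are just illustrative:

```python
import torch

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
# halve the LR once the validation metric fails to improve for 2 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2
)

for epoch in range(6):
    val_loss = 1.0  # pretend the metric has plateaued
    scheduler.step(val_loss)

print(optimizer.param_groups[0]["lr"])  # reduced from 0.1 to 0.05
```

Keras has a callback with the same name (keras.callbacks.ReduceLROnPlateau) that you pass to model.fit instead.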
@RicardoMlu-tw2ig (17 days ago)
Also love your voice btw, feels calming 👍
@RicardoMlu-tw2ig (17 days ago)
Thanks so much!🎉
@JoaoPecorella (18 days ago)
After 17 submissions, I'll watch your video, sir!
@JoaoPecorella (18 days ago)
Around 75% accurate, best so far is 78%!
@NewNew-qn7kh (18 days ago)
I love the way your IDE looks. What are you using / what settings?
@oceanwave4502 (18 days ago)
10:03 I think the sigma here, which is the output of "self.hid_2sigma", should be interpreted as "log of variance". Why? Because the Linear layer can somehow output negative values while Variance (of a distribution) can't be negative. By interpreting as "log of variance", we can get the variance inside by using exp(). As a result, we have two needed code changes: doing the exp() against this "log of variance" when calculating z as well as calculating the Loss.
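To illustrate the point above, a minimal sketch of treating the second encoder head's output as log-variance; the function and variable names here are illustrative, not the video's exact code:

```python
import torch

def reparameterize(mu, log_var):
    # exp() maps any real-valued network output to a strictly positive std,
    # which is why interpreting the head as log(variance) is safe
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + eps * std

# the KL term in the loss can then use log_var directly:
# kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())

z = reparameterize(torch.zeros(4, 20), torch.full((4, 20), -3.0))
```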
@claudiosaponaro4565 (21 days ago)
How does the discriminator, at the start, know how to distinguish, for instance, a dollar bill?
@Mrroot-nr8xk (23 days ago)
Hi! Thanks for your explanations. Could it be done in TensorFlow as well? Thanks!
@malikayeshasiddiqa (24 days ago)
I want to learn TensorFlow with C++, please make tutorials on that too.
@emotionblur7214 (25 days ago)
I'm trying with TensorFlow 2.17.0, and the line inputs = keras.Input(shape=(28*28)) produces the error "cannot convert '784' to a shape". Solution: apparently shape has to be explicitly a tuple, so shape=(28*28,) (with the comma) does it.
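The fix works because (28*28) is just the integer 784; only the trailing comma makes it a tuple. A quick plain-Python illustration:

```python
# parentheses alone don't create a tuple; the comma does
assert (28 * 28) == 784           # plain int
assert (28 * 28,) == (784,)       # 1-element tuple
assert isinstance((28 * 28,), tuple)

# so with newer Keras versions the input would be declared as:
# inputs = keras.Input(shape=(28 * 28,))
```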
@vatsalkhetan4046 (26 days ago)
You are doing such an incredible job, man.
@jaytrocio2075 (26 days ago)
thanks for this! your guide is clear and makes sense
@donfeto7636 (29 days ago)
15:26 I don't think it is good to shuffle the test data; if you want to compare models based on the test set, you should not shuffle it.
@salehaabdulbaseer649 (a month ago)
Please work on the scratch implementation.
@nohanael-c4f (a month ago)
What is this IDE?
@WhereTheCash (a month ago)
Why would you ever undervolt lol, that's like getting a Lambo and putting it in eco mode.
@donstraight4175 (4 days ago)
Spoken like a true turd. Pretty simple maths: lower temps = longer lifespan and slightly better performance.
@vaibhavpujari124 (a month ago)
Can you please share the code?
@enesfehmimanan (a month ago)
Wow man, it was really an impressive experience.
@matthewwolcott5984 (a month ago)
If you get the error 'ValueError: Argument(s) not recognized: {'lr': 0.001}', the issue is because 'lr' has been deprecated in favor of 'learning_rate'. Thus the code should now be: optimizer=keras.optimizers.Adam(learning_rate=0.001),
@KenneithTV (a month ago)
Very high temperatures can damage your GPU.
@novinnouri764 (a month ago)
thanks
@nobertnghoboko4325 (a month ago)
Could you please do a video on compute hardware, like NVIDIA or AMD graphics cards, where you estimate model-parameter training capability (provided the model is straightforward, like a transformer), or something like that?
@junaiddooast7435 (a month ago)
Your voice is very cool and soothing.
@kaanguler9492 (a month ago)
For some reason some of the graphs are totally fine and some show up with very limited steps.
@QunFengDai (a month ago)
Thanks for your video!
@frankrobert6867 (a month ago)
hope you feel great
@quakenxt (a month ago)
On the Apple Watch you can enable atrial fibrillation tracking, which gives automatic HRV readings, I think hourly, during the day as well.
@AladdinPersson (a month ago)
Hey, thanks for mentioning this. Yeah, I have enabled this, and it does provide more HRV measurements throughout the night and day, which is valuable (although at the cost of slightly worse battery life).
@akagamishanks7991 (a month ago)
Omg it's so funny, I was literally about to watch a review of the Whoop and now you dropped one.
@khunii2188 (a month ago)
Hi. First of all, I really appreciate the great guide you've uploaded for undervolting the GPU. However, I have a question running in my mind. Hope you aren't too busy to reply, but why did you lower the core clock by -250 before proceeding to change the voltage curve? Is there a specific reason for this?
@PrzemyslawDolata (a month ago)
14:05 is this even correct though? Shouldn't you project Q, K and V from the `embed_size` to `embed_size` and split the resulting tensors? Your implementation seems to treat parts of the embedding dimension separately (instead of jointly attending to the entire embedding space) and using the same weights (instead of having separate weights for each head).
@AladdinPersson (a month ago)
Hey, it's been a while since I looked at this, but I believe I made a mistake in this video; I remember going through the implementation a while later and modifying this line, if I recall correctly. Check the latest one on GitHub.
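For reference, the formulation @PrzemyslawDolata describes (project the full embed_size, then split the result into heads) can be sketched like this; the sizes are illustrative, and only the query projection is shown:

```python
import torch
import torch.nn as nn

batch, seq_len, embed_size, heads = 2, 10, 256, 8
head_dim = embed_size // heads

# one full-size projection per Q/K/V (only Q shown here),
# so every head gets its own learned slice of a joint projection
W_q = nn.Linear(embed_size, embed_size, bias=False)

x = torch.randn(batch, seq_len, embed_size)
q = W_q(x)  # project in the full embedding space first
q = q.reshape(batch, seq_len, heads, head_dim).transpose(1, 2)
# q is now (batch, heads, seq_len, head_dim), ready for per-head attention
```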
@IsaacTian-n7f (a month ago)
Doesn't line 68 loss_disc.backward() cause the generator to train at the same time unless you freeze the weights? I was under the impression the generator is not supposed to train at the same time as the discriminator, because then you are making the gen worse to improve the disc.
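For what it's worth, one common pattern that sidesteps this is to detach the fake batch before the discriminator's backward pass, so no gradients are computed for the generator at all; a toy sketch (the tiny Linear layers are just stand-ins for the real networks):

```python
import torch
import torch.nn as nn

gen = nn.Linear(4, 4)   # stand-in for the generator
disc = nn.Linear(4, 1)  # stand-in for the discriminator

fake = gen(torch.randn(3, 4))
# .detach() cuts the autograd graph, so backward() only reaches the disc
loss_disc = disc(fake.detach()).mean()
loss_disc.backward()

assert gen.weight.grad is None         # generator received no gradients
assert disc.weight.grad is not None    # discriminator did
```

Note that even without detach(), the generator's weights only actually change when its own optimizer.step() runs; detaching just avoids computing (and accumulating) those unwanted gradients.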
@RaviTeja-zk4lb (a month ago)
Does this support writing to remote path directly?
@jawadislam3743 (a month ago)
At 9:30 he says the discriminator wants to maximize the loss? Shouldn't it want to minimize it? Can someone help me understand? TIA