First, I wanted to say I really appreciate you putting all this content out. I'm incredibly relieved that at 4:30 you broke the run_cbs() function out into multiple lines. If I had to offer a criticism of the coding so far, it would be that you really emphasize being terse. Combined with heavy use of Python-specific language features, this can make things tough to follow. Starting with simple but verbose code might be better from a learning perspective, then rewriting it afterwards. Again, I really like the content!
@myfolder4561 5 days ago
A huge thank you - this is really useful and insightful. Not many people out there talk about ways to look inside the model while it's being trained, let alone give practical advice on how and when to intervene in training when neurons are dying or going haywire. I wish I had seen this one by Jeremy earlier; it would have saved me a lot of trouble.
@michaelmuller136 a month ago
Awesome, this helps with getting a better understanding of Python, PyTorch, and fastai, thank you very much!
@grekiki 6 months ago
At 1:23:20, the only reason things seem to be improving is that you are plotting activations of validation batches.
@giorda77 13 days ago
This lesson is so good. It was hard at first, but after watching it several times, experimenting with the code, and creating Anki cards for the penny-drop moments, I feel much more confident continuing with Part 2. Thank you all for the amazing lectures.
@howardjeremyp 11 days ago
Great to hear!
@VolodymyrBilyachat a month ago
Another way to debug code is to run notebooks inside VSCode and just use its debug-cell feature.
@ekbastu a year ago
GOD mode activated
@ankithsavio2328 4 days ago
At 43:00, can you explain how this particular implementation of momentum applies the other weighted-average factor, (1 - self.mom), to the next set of gradients?
@giorda77 3 days ago
Because, by default, PyTorch accumulates gradients across backward() calls, and calling zero_grad() purges them. Jeremy overrides the default method so that a fraction self.mom (default 0.85) of the gradient is retained across batches. For reference, Lesson 18 has a very good introduction to momentum using an Excel spreadsheet.
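To make the accumulation trick concrete, here is a minimal sketch (not the course's exact code; the class name MomentumSGD is made up for illustration) of an optimizer whose zero_grad() scales the stored gradients by mom instead of clearing them, so the next backward() adds the fresh gradient on top:

```python
import torch

class MomentumSGD:
    def __init__(self, params, lr=0.01, mom=0.85):
        self.params, self.lr, self.mom = list(params), lr, mom

    def step(self):
        # plain SGD update using the (accumulated) gradient
        with torch.no_grad():
            for p in self.params:
                p -= p.grad * self.lr

    def zero_grad(self):
        # keep a fraction `mom` of the old gradient rather than zeroing it;
        # since backward() adds onto p.grad, after several batches we get
        # grad_t = g_t + mom*g_{t-1} + mom^2*g_{t-2} + ...
        with torch.no_grad():
            for p in self.params:
                p.grad *= self.mom
```

Note there is no explicit (1 - self.mom) factor on the new gradient in this formulation; its effect is absorbed into the learning rate.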
@myfolder4561 3 days ago
Under the section on Hook, Hooks, and HookCallback, it seems a bit complicated how the hook function (such as append_stats) gets wrapped in a nested partial structure, first in HookCallback and then in Hook, before it's passed to the torch API register_forward_hook(). Is this a common approach, or is there a way to simplify/refactor away the nesting? What's best practice?
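For comparison, one way to avoid the double nesting is to bind the extra state with a single functools.partial and register the result directly: register_forward_hook expects a callable of the form hook(module, input, output). A sketch under that assumption (this append_stats is a simplified stand-in, not the course's exact function):

```python
import torch
from torch import nn
from functools import partial

def append_stats(stats, mod, inp, outp):
    # record mean/std of this layer's activations into a shared list
    stats.append((outp.mean().item(), outp.std().item()))

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
stats = []
# one partial per layer pre-binds the stats list; no nesting needed
handles = [m.register_forward_hook(partial(append_stats, stats))
           for m in model if isinstance(m, nn.Linear)]
model(torch.randn(16, 4))
for h in handles:
    h.remove()  # always remove hooks when done to avoid leaks
```

The class-based Hook/Hooks wrappers mainly add lifecycle management (storing per-layer state and removing hooks automatically), which this flat version handles by hand.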