One thing I love about this course is that you go through each line one by one, starting with what a novice would write based on intuition and then refactoring it into production-quality code that could be used in actual libraries. Even in some "from scratch" courses, instructors don't start with the naive intuition a beginner would have, and a lot of the code just looks like magic that we should somehow know how to write. I love that you show it as a process that doesn't need to be perfect on the first go. It's really hard to get into an iteration mindset like that sometimes, because nowadays we strive for perfection too early, but learning isn't like that: it has to be messy when you start. On the opposite end of the spectrum, some courses go too simple and only include toy examples that would never be used in real libraries, so you have to go to another source to build on that initial intuition. You've struck the perfect balance of simple micro-level to complex macro-level all in one course. I'll be honest, I got lost a few lessons back because I was trying to speed through the videos. I'll definitely go back to the parts I missed, but I want to finish one full pass of the course quickly so that I can go back and fill in any gaps.
@myfolder4561 4 months ago
A huge thank you - this is really useful and insightful. Not a lot of people out there talk about ways to look inside the model while it's being trained, let alone give practical advice on how and when to intervene in training when neurons are becoming dead or going haywire. Wish I had seen this one by Jeremy earlier; it would have saved me a lot of trouble.
@giorda77 4 months ago
This lesson is so good. It was hard at first, but after watching it several times, experimenting with the code, and creating Anki cards for the penny-drop moments, I feel much more confident continuing with P2. Thank you all for the amazing lectures.
@howardjeremyp 4 months ago
Great to hear!
@useless_deno 3 months ago
The lecture is great! It provides clear intuition on how to apply callbacks and gain insights into the model. Great work!
@JohnSmith-he5xg a year ago
First, I wanted to say I really appreciate you putting all this content out. I'm incredibly relieved that at 4:30 you've broken the run_cbs() function out into multiple lines. If I had to offer one criticism of the coding so far, it would be that you really emphasize being terse. Combined with heavy use of Python-specific language features, this can make things tough to follow. Starting with simple but verbose code might be better from a learning perspective, then rewriting it afterwards. Again, I really like the content!
@michaelmuller136 5 months ago
Awesome, this helps with getting a better understanding of Python, PyTorch, and fastai - thank you very much!
@kyledavelaar455 2 months ago
Fantastic stuff, Jeremy. I really appreciate your willingness to put this information online for us to learn from. On a side note, is no one going to comment on the momentum diagram at 42:10 and how awesome it is?
@myfolder4561 4 months ago
Under the section on Hook, Hooks, and HookCallback, it seems a bit complicated how the hook function (such as append_stats) gets wrapped in a nested partial structure, first in HookCallback, then in Hook, before it's passed to the torch API register_forward_hook(). Is this a common approach, or is there a way to simplify/refactor away the nesting? What's best practice?
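For anyone puzzling over the same nesting: here is a minimal sketch (my own illustrative code, not the course's exact implementation) of the innermost layer. A `Hook` object uses `functools.partial` to bind itself as the first argument of the stats function, so the callable handed to PyTorch's `register_forward_hook` has the required `(module, input, output)` signature while still having somewhere (the hook object) to stash its results. The names `Hook` and `append_stats` follow the lecture; the specific stats collected here are just an example.

```python
import torch
from torch import nn
from functools import partial

def append_stats(hook, mod, inp, outp):
    # record the mean/std of this layer's activations on each forward pass,
    # storing them on the Hook object bound in via partial
    if not hasattr(hook, 'stats'): hook.stats = ([], [])
    means, stds = hook.stats
    means.append(outp.detach().mean().item())
    stds.append(outp.detach().std().item())

class Hook:
    def __init__(self, mod, func):
        # partial(func, self) turns func(hook, mod, inp, outp) into the
        # (mod, inp, outp) callable that register_forward_hook expects
        self.hook = mod.register_forward_hook(partial(func, self))
    def remove(self): self.hook.remove()

layer = nn.Linear(4, 4)
h = Hook(layer, append_stats)
layer(torch.randn(8, 4))   # triggers append_stats once
h.remove()
print(len(h.stats[0]))     # 1
```

The nesting is fairly idiomatic: one partial adapts the function signature to PyTorch's hook API, and the outer callback level just manages creating and removing these hooks for many layers at once.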
@grekiki 11 months ago
At 1:23:20, the only reason things seem to be improving is that you are plotting activations of validation batches.
@VolodymyrBilyachat 5 months ago
Another way to debug the code is to run the notebooks inside VS Code and use its debug-cell feature.
@ekbastu a year ago
GOD mode activated
@ankithsavio2328 4 months ago
At 43:00, can you explain how this particular implementation of momentum applies the other weight, (1 - self.mom), to the next set of gradients?
@giorda77 4 months ago
Because, by default, PyTorch accumulates gradients: each backward() adds to p.grad, and zero_grad() normally purges them. Jeremy here overrides zero_grad to retain a self.mom fraction (default 0.85) of the gradients across batches. For reference, Lesson 18 has a very good intro to momentum using an Excel spreadsheet.
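That explanation can be sketched as a tiny optimizer (my own illustrative code, not the lecture's exact implementation). Because backward() adds to whatever is already in p.grad, replacing the purge in zero_grad with a multiply by mom leaves a decayed running sum of past gradients: g_t = grad_t + mom * g_{t-1}. Note there is no explicit (1 - mom) factor in this scheme; the new gradient enters with weight 1, and the learning rate absorbs the overall scale.

```python
import torch

class MomentumSGD:
    # illustrative momentum via gradient retention (hypothetical class name)
    def __init__(self, params, lr=0.1, mom=0.85):
        self.params, self.lr, self.mom = list(params), lr, mom
    def step(self):
        with torch.no_grad():
            for p in self.params:
                p -= self.lr * p.grad
    def zero_grad(self):
        # keep a `mom` fraction of the old gradient instead of purging it;
        # the next backward() adds the new batch's gradient on top
        with torch.no_grad():
            for p in self.params:
                p.grad *= self.mom

w = torch.tensor([1.0], requires_grad=True)
opt = MomentumSGD([w], lr=0.1, mom=0.85)
loss = (2 * w).sum()
loss.backward()      # w.grad == 2.0
opt.step()           # w: 1.0 - 0.1 * 2.0 ≈ 0.8
opt.zero_grad()      # w.grad: 2.0 * 0.85 ≈ 1.7 (retained, not zeroed)
print(w.item(), w.grad.item())
```

A second backward() here would leave w.grad ≈ 1.7 + 2.0, showing the new gradient accumulating on top of the decayed old one.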