I enjoy watching your videos. I watch them over and over again and learn something new every time. I especially like how you do your "passes" throughout your videos, even though you've already made a video on those passes, instead of just referring people elsewhere without a high-level overview. Best explanation I've ever seen. Great job! 😊
@CodeEmporium A year ago
Thanks so much! And I am happy you are enjoying this !
@nerassurdo A year ago
I enjoy your videos. You are really good at this. Keep it up! Your channel will grow for sure.
@CodeEmporium A year ago
Thanks for the words of encouragement! I certainly hope so! Keep an eye out for daily content (videos, shorts, community posts) :)
@charlesje1966 A year ago
I play these videos in the background while learning python and it's starting to sink in.
@paull923 A year ago
Great content. I especially like the ChatGPT series, very clearly explained. I'm already excited about the upcoming video on GPT.
@CodeEmporium A year ago
It’ll be up very soon (probably next Monday). Thank you for watching !
@jeff__w A year ago
13:24 "…and eventually it's going to generate a vector which is going to be of the size [VOCAB SIZE + 1]" If that's the number of possible tokens in a given language, why is it "generating" anything at all? Isn't that size always the same for a given language? Why doesn't it just "look up" that vector?
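A toy sketch of why the vector is generated rather than looked up (all names and sizes here are illustrative, not the video's actual code): the vector's *size* is fixed at the vocabulary size, but its *values* are a probability distribution that depends on the context, so it must be computed at every step.

```python
import math
import random

vocab_size = 10  # toy vocabulary; real models use tens of thousands of tokens

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A final linear layer would produce one logit per vocabulary token from the
# context's hidden state; random logits stand in for that computation here.
logits = [random.gauss(0, 1) for _ in range(vocab_size)]
probs = softmax(logits)

# The size is always vocab_size, but the values change with the context,
# so the vector is computed each step rather than looked up in a table.
assert len(probs) == vocab_size
assert abs(sum(probs) - 1.0) < 1e-9
```

Different contexts produce different logits, hence different distributions over the same fixed vocabulary.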
@HazemAzim A year ago
Well done .. great sharp explanation
@CodeEmporium A year ago
Thanks so much for the compliment! :)
@grownupgaming A year ago
11:40 I can't understand why we want to further fine-tune the original GPT parameters. Instead, why not freeze them and restrict the learning to the newly added final linear layer? Wouldn't that be enough?
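A minimal sketch of the freezing scheme the comment describes (a toy one-weight model with made-up numbers, not the video's setup; in a framework like PyTorch you would instead set `requires_grad=False` on the backbone parameters):

```python
import random

# Toy model: a frozen "backbone" weight and a trainable "head" weight.
backbone_w = 0.7          # pretrained, kept frozen
head_w = random.random()  # newly added final linear layer, trainable

def forward(x):
    return head_w * (backbone_w * x)  # head on top of frozen features

lr = 0.1
x, target = 1.0, 2.0
for _ in range(200):
    pred = forward(x)
    # Gradient of squared error with respect to head_w only.
    grad_head = 2 * (pred - target) * (backbone_w * x)
    head_w -= lr * grad_head  # only the head learns
    # backbone_w is never updated: it stays frozen throughout training

assert abs(forward(x) - target) < 1e-3
assert backbone_w == 0.7
```

This shows freezing alone can fit the head to the task; fine-tuning the backbone as well is a separate design choice that lets the pretrained features themselves adapt.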
@vtrandal A year ago
Great channel. Love your channel. Questions: Lacurt scale? What is that? Likert scale? Got it. And DALL-E? What is that? When I search videos on your YouTube channel it pulls up a few videos, but I cannot find DALL-E in the video transcripts.
@addkik A year ago
GPT in Kannada 😉
@Dhirajkumar-ls1ws A year ago
Please club all these videos together into one playlist.
@CodeEmporium A year ago
I believe I have a “ChatGPT” playlist. And right now, I am making a playlist to code transformers from scratch
@josephpareti9156 A year ago
Is the softmax layer only used for SFT? That is what I would expect.
@pariotourpariotour9459 A year ago
I'm looking for how attention layers are trained. 😊
@CodeEmporium A year ago
I discuss the entire process of building a “transformer from scratch” in the playlist with that name. Hope that is helpful
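As a toy illustration of the computation those attention layers perform (the vectors below are made-up stand-ins; in a real transformer the Q, K, V projections are learned weights updated by backpropagation like any other layer):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Scaled dot-product attention: one query attending over 3 keys/values (d_k = 2).
q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
d_k = len(q)

# Similarity of the query to each key, scaled by sqrt(d_k).
scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
weights = softmax(scores)                      # attention weights sum to 1
output = [sum(w * v[j] for w, v in zip(weights, values)) for j in range(2)]

assert abs(sum(weights) - 1.0) < 1e-9
assert len(output) == 2
```

Training never touches this formula directly; it adjusts the projection weights that produce q, keys, and values so that the resulting attention weights are useful for the task.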
@prashantlawhatre7007 A year ago
The videos in this playlist are by far the best I have seen on this topic, especially the parts where Ajay discusses the reward model and PPO.