Llama - EXPLAINED!

36,972 views

CodeEmporium

A day ago

Comments: 67
@CodeEmporium A year ago
Would you like to see more videos on Llama? Let me know. Have a wonderful day :)
@paisanareeprasertkul1950 A year ago
Yes, definitely. One of the best explanations of the topic!
@manusrivastava2047 A year ago
Great video; I love how well structured and informative they are. Would love to see how to use word embeddings from Llama 2 or another language model for transfer learning. Thanks, and keep up the good work!
@ozne_2358 A year ago
Yes, please. More details on the code: how the parameters are initialized from the parameter file and used in the various stages.
@scitechtalktv9742 A year ago
I am struggling to get Llama 2 to work reliably with the Dutch language, so that you can pose questions in Dutch and have Llama 2 answer in Dutch. (This is likely because Llama 2 is trained on data that contains very little Dutch.) I have had some success using special prompts, but sometimes it unexpectedly switches back to English. What technique(s) can I use to solve this? My use case: I have Dutch texts that I want to be able to query in Dutch by means of Retrieval Augmented Generation (RAG), using a Llama 2 LLM, and get answers in correct Dutch.
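One common workaround, shown as a hedged sketch below (not a guaranteed fix, since Llama 2 saw little Dutch in pretraining): put the instruction to answer in Dutch into the system prompt of the RAG template, and write that system prompt in Dutch itself. The prompt format approximates the Llama-2-chat convention; the helper name is hypothetical.

```python
# Hypothetical RAG prompt template that pins the answer language to Dutch.
# Writing the system prompt itself in Dutch tends to reduce switches to English.
SYSTEM_NL = (
    "Je bent een behulpzame assistent. Beantwoord de vraag uitsluitend "
    "in het Nederlands, op basis van de onderstaande context."
)  # "You are a helpful assistant. Answer only in Dutch, based on the context below."

def build_prompt(context: str, question: str) -> str:
    # Approximate Llama-2-chat format: [INST] <<SYS>> system <</SYS>> user [/INST]
    return (
        f"[INST] <<SYS>>\n{SYSTEM_NL}\n<</SYS>>\n\n"
        f"Context:\n{context}\n\nVraag: {question} [/INST]"
    )
```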
@라마-f7b A year ago
I'm waiting for the video!
@jeswer9 A year ago
Yes, please do a deeper dive into the code! Super valuable video because of that part.
@dan1ar A year ago
Great video! Looking forward to a deep dive into the Llama code.
@CodeEmporium A year ago
Sure thing. It's slated on my TODO list :) Thank you for watching.
@aurkom A year ago
Would love a deep dive into stuff like LoRA and quantization (the bitsandbytes library) as well. Perhaps doing it from scratch in PyTorch!
@CodeEmporium A year ago
Perfect. I have coded out the transformer from scratch using PyTorch. Maybe I'll think of a similar series for Llama :)
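For reference, a minimal sketch of the LoRA idea in PyTorch (an illustration, not the peft/bitsandbytes API): freeze the pretrained weight and learn a low-rank update, so the layer computes h = Wx + (alpha/r) * BAx.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)    # freeze the bias as well
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))        # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction (B @ A) x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Only A and B are trained, which is why LoRA fine-tuning fits in far less memory than full fine-tuning.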
@share4713 A year ago
The more videos I watch, the more I understand a subject; probably because I can now see the subject from different angles and perspectives. I now have a better intuition for transformer architectures and can code one from scratch. Thank you.
@pipinstallyp A year ago
Hey, thanks a lot for your videos. Your video on the transformer ("Attention Is All You Need") helped me build an intuition back before transformers were really cool. It's lovely to see your video on Llama, as I actively get to fine-tune Llama on a day-to-day basis :) Much love.
@CodeEmporium A year ago
Super happy to hear! Thanks so much for watching :)
@dollarscholar2956 A year ago
Clear, informative, well presented. Great video!
@CodeEmporium A year ago
Thanks so much for commenting :)
@abhijitnayak1639 A year ago
Thank you for such an insightful video. Would definitely love a deep-dive video on the architecture and code of Llama 2. Could you please also do an implementation of BERT or RoBERTa fine-tuning (with the training process optimized via DeepSpeed)? Thanks again!!
@steel-r_ua 9 months ago
Thanks for the great video and a GREAT way of presenting data and showing the code!
@dinoscheidt A year ago
Commenting for the algorithm. Very well explained. You have a talent!
@CodeEmporium A year ago
Much appreciated! Thank you!
@prasadraavi390 A year ago
Beautifully explained. Thank you. Yes, I want to know more about its architecture too.
@naevan1 A year ago
Amazing work, man. One of my favourite deep learning creators!
@gopalakrishna9651 11 months ago
Yes, please: a deep dive into the architecture and a code walkthrough if possible. Thanks a lot for the video. May God's blessings be with you.
@prasadraavi390 A year ago
Beautifully explained. Thank you.
@YashVerma-ii8lx 11 months ago
Thank you so much for explaining, brother! It would be really great if you could do a code walkthrough video as well!
@popamaji A year ago
I have not implemented the code for a decoder-only model, so I have 3 questions: 1. Does it use the triangular mask? I have heard from 2 sources that it does, but I don't get it; since we only feed inputs and not outputs (unlike the original transformer), how does a triangular mask on input data make sense? 2. Why is it called "decoder-only"? The architecture seems much closer to the encoder part of the original transformer than to its decoder part, especially when the mask is no different from the original encoder's. 3. Is it autoregressive, or can it still be an autoencoder that produces the outputs in one pass?
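For context, a minimal PyTorch sketch of the triangular (causal) mask being asked about. In a decoder-only model the input sequence is also the training target shifted by one position, so every position predicts the next token and must be blocked from attending to future tokens; "decoder-only" refers to keeping the decoder's masked self-attention while dropping cross-attention, and generation is autoregressive.

```python
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)  # raw attention scores (query x key)

# Boolean upper-triangular mask: position i must not attend to positions j > i.
future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(future, float("-inf"))

# After softmax, each row distributes weight only over current and past tokens.
weights = torch.softmax(scores, dim=-1)
```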
@НухкадиевБагаутдин 3 months ago
Great explanation!
@spydeyftw A year ago
Good explanation with proper understanding!
@dikshyakasaju7541 A year ago
Very informative!! Would be sick if you could dive deeper.
@CodeEmporium A year ago
Yes! Thanks for watching! Will think about it as a future video / series.
@andresg297 A year ago
Excellent explanation. Thank you
@NicholasRenotte A year ago
1.8k and closing in my boi!!!!
@CodeEmporium A year ago
Ma guy. I will join the ranks of the 6 digit sub counts
@naevan1 A year ago
Would you be interested in making a guide to fine-tuning Llama 2, or do you think it's oversaturated?
@jiaxingyu8300 A year ago
Nice explanation!
@popamaji A year ago
Please make a video about how the generative behaviour works and how reinforcement learning is used in language models.
@popamaji A year ago
Is this a decoder in simplified form?! Or is it an encoder with a decoder mask?
@DaTruAndi A year ago
I think you didn't describe RLHF fully. What you described was more SFT; you seemingly skipped mentioning the reward model explicitly. Maybe you meant it implicitly, but it could help to clarify this part of the reinforcement learning.
@rogermenezes A year ago
He has a very good series called "ChatGPT explained" where he goes into a detailed explanation of RLHF: kzbin.info/www/bejne/lX6ze2Z5rqmiobc
@CodeEmporium A year ago
Yeah, that's true. I described it as "humans determining what is a better answer" when I probably should have said "humans determine the better answer to train the reward model(s), and this in turn is used with the original fine-tuned model to further fine-tune it, via some proximal policy optimization" ~ or something along these lines. Thanks for pointing it out. I'll clarify this in some follow-up videos in the near future too.
@abzs5811 10 months ago
@CodeEmporium Lost me, fam.
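To make the reward-model step described above concrete, here is a hedged sketch of the pairwise preference loss commonly used to train RLHF reward models (the function and model names are hypothetical): humans rank two answers, and the reward model is trained to score the preferred one higher; PPO then fine-tunes the LLM against that reward.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    """Bradley-Terry style loss: the preferred answer should out-score the other."""
    r_chosen = reward_model(prompt, chosen)      # scalar reward for the preferred answer
    r_rejected = reward_model(prompt, rejected)  # scalar reward for the other answer
    # Minimizing this pushes r_chosen above r_rejected.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```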
@adarshsaurabh7871 A year ago
Can you please help me? I have multiple doubts. Since all of these models are LLMs that generate the next word based on the previous words, can I fine-tune them on any type of data? For example, I'd like to make a model that can write poems and shayari for me; can I train them for this task? Also, since Llama doesn't have an encoder, isn't that a disadvantage? And can you please make a video on the encoder and decoder and their specific details? Please 🤓🤓
@ajaytaneja111 A year ago
Hi Ajay, would love to hear your insights on PEFT - the theoretical aspects, of course. I have seen a lot of videos on PEFT and done some reading too, but the theoretical aspects are not well explained.
@CodeEmporium A year ago
Ajay! Yea for sure. I am interested to learn more about this too. I’ll read more and make some content on this soon :)
@tunkskabulungana46 8 months ago
You said Llama is an 8-language model; which programming languages are they? 😮
@ruksharalam173 A year ago
It'd be great if you could please dig deeper into the Llama code and architecture.
@younessamih3188 A year ago
Very helpful! That would be great...
@CodeEmporium A year ago
Thanks so much! I’ll think of a deep dive as a future video / series
@StrangeMemes52 A year ago
Wow, amazing video 😁. So how is a language model fine-tuned after training? I mean, how does this fine-tuning work?
@CodeEmporium A year ago
Fine-tuning is done depending on the specific task you want. In Llama-chat's and ChatGPT's case, we want fine-tuning for question answering. So we feed the model a bunch of question + answer pairs, and the model parameters are "fine-tuned". Hope this helps.
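As a rough illustration of those question + answer pairs (the data format and field names here are hypothetical), supervised fine-tuning turns each pair into one training sequence and trains with the usual next-token loss, often masked so only the answer tokens contribute:

```python
# Hypothetical supervised fine-tuning data: each pair becomes one training text.
pairs = [
    {"question": "What is Llama?", "answer": "A family of large language models from Meta."},
    {"question": "Is it decoder-only?", "answer": "Yes, it uses masked self-attention only."},
]

def to_training_text(pair: dict) -> str:
    # One flattened sequence; the model learns to predict each next token in it.
    return f"Question: {pair['question']}\nAnswer: {pair['answer']}"

for pair in pairs:
    print(to_training_text(pair))
```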
@ajaytaneja111 A year ago
Hi Ajay, I believe they use grouped-query attention, not multi-head attention.
@CodeEmporium A year ago
I'll need to check the fine-grained details. Thanks for the heads up. If so, I'll address this in that future video.
@ajaytaneja111 A year ago
Thanks for the response, Ajay. As always, great video.
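For reference, a hedged PyTorch sketch of the grouped-query attention mentioned above (shapes chosen purely for illustration): several query heads share one key/value head, which shrinks the KV cache while keeping most of multi-head attention's quality.

```python
import torch

batch, seq, n_q_heads, n_kv_heads, head_dim = 1, 4, 8, 2, 16
group = n_q_heads // n_kv_heads  # 4 query heads share each K/V head

q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Broadcast each K/V head across its group of query heads, then attend as usual.
k = k.repeat_interleave(group, dim=1)  # -> (batch, n_q_heads, seq, head_dim)
v = v.repeat_interleave(group, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
out = torch.softmax(scores, dim=-1) @ v  # (batch, n_q_heads, seq, head_dim)
```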
@ajaytaneja111 A year ago
Hi Ajay, I have been reading the Llama 2 research paper. They talk a lot about safety during pre-training, as you might have seen. Do you think they score over GPT in this aspect?
@CodeEmporium A year ago
Yeah. That 77-page dissertation on Llama 2 definitely makes the claim that it is safer; they have sections and infographics dedicated to showing this as well. That said, I would need to check how much of this safety is incorporated in the pre-training; I didn't think there would be much in this phase. But I haven't read the entire dissertation, so I may be wrong.
@shreyojitdas9333 4 months ago
Please, we need a deep dive, sir.
@lakshman587 4 months ago
Please, a detailed explanation!!
@alexandertakele7528 11 months ago
Thank you so much
@xinyaoyin2238 4 months ago
It is just a nerfed but faster transformer.
@azai.online A year ago
I do like Llama 2 and found it easy to use. I am using it in my own multi-application platform and it's great.