LLMs meet Robot Operating System
40:28
RLHF and its missing component
57:04
Transformers and Matrices
24:58
10 months ago
Comments
@micahdelaurentis6551 · a month ago
Doesn't get any better than this, folks.
@ganeshsubramanian6217 · a month ago
Sir, thank you so much. I watched the full video and could digest most of the information well. You are amazing.
@harshpatil7684 · 2 months ago
Does your statistical model also imply that LLMs can reason across the conversational history using the same principles? Say I have prompt 1, response 1, prompt 2, response 2, ..., prompt (n-1), response (n-1), and that this is the current state of my conversation with a voice assistant. If I now send prompt n, can the model use it as an intention, measure it against all previous prompts (intentions) and responses (messages), factor in the whole history of the conversation, and generate response n?
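For concreteness, the setup this question describes is the standard chat-transcript conditioning, where the model generates response n given the concatenation of all earlier turns, i.e. p(response_n | prompt_1, response_1, ..., prompt_n). A minimal sketch follows; all function and variable names are illustrative, not from the video:

```python
# Minimal sketch of multi-turn conditioning: the new prompt is appended to the
# full transcript, and the model conditions on everything that came before.
def build_context(prompts, responses, new_prompt):
    turns = []
    for p, r in zip(prompts, responses):      # turns 1 .. n-1
        turns.append(f"User: {p}")
        turns.append(f"Assistant: {r}")
    turns.append(f"User: {new_prompt}")       # prompt n, awaiting response n
    return "\n".join(turns)

print(build_context(["hi"], ["hello!"], "what's the weather?"))
```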
@areenreddy · 3 months ago
Your content is useful and I hope you keep this up.
@mlandaiacademy · 3 months ago
Thank you!!
@SuvradipDasPhotographyOfficial · 3 months ago
Excellent❤❤❤
@mlandaiacademy · 3 months ago
Glad it is useful 🤗
@SuvradipDasPhotographyOfficial · 3 months ago
@mlandaiacademy Trust me, it's really good, and I will be completing the remaining two tomorrow. I really hope you make videos on GANs, CNNs, and RNNs too.
@mlandaiacademy · 3 months ago
@SuvradipDasPhotographyOfficial Thank you. We will certainly try if time allows :)
@brendanclarke1302 · 4 months ago
This is very cool, thanks for the presentation.
@mlandaiacademy · 4 months ago
Glad you find it useful.
@ramanandr7562 · 4 months ago
Nice
@tawfickraad7707 · 6 months ago
Excellent explanation, bravo!
@mlandaiacademy · 6 months ago
Glad it helped!
@eigd · 7 months ago
Is there a paper associated with this talk? And is there a ROS package on GitHub somewhere?
@mlandaiacademy · 7 months ago
We are planning to put the paper out soon; we are just writing it now. Stay tuned.
@eigd · 7 months ago
Never mind. He mentions the paper is coming, and they are working out how to share the code.
@mlandaiacademy · 7 months ago
He is me btw hehe
@eigd · 7 months ago
@mlandaiacademy Then let me thank you for a very interesting video! I am working on my PhD project proposal right now, and I am looking to work on the embodiment of robots and navigation in agriculture. This domain is exploding right now!
@mlandaiacademy · 7 months ago
We should definitely have a chat; write to me at [email protected] if interested 😀
@amelieschreiber6502 · 8 months ago
kzbin.info/www/bejne/gIrXXp-aj5h3p68si=lZ8_2fsIH11WyLng
@samiloom8565 · 8 months ago
Very important subject, but the slides are not clear and a higher video resolution is not available.
@mlandaiacademy · 8 months ago
Yes, YouTube has two qualities, SD and HD. SD processing finishes first, and HD processing typically takes more time. If you refresh in a couple of hours, it should be in HD quality 😀 it's YouTube's processing time 😀😀
@mlandaiacademy · 8 months ago
It is amazing that NLI models have no clue about semantics! Let us know if you have any questions!
@mlandaiacademy · 9 months ago
Feel free to leave any questions for Souradip. I will convey them!
@mlandaiacademy · 9 months ago
Please feel free to leave any questions for Leo! I can then convey them.
@tarekshaalan3257 · 9 months ago
Dr Bou Ammar, another great video. To be honest, yours come very highly recommended; the clarity and quality are unique. Your efforts on these lectures are very much appreciated.
@mlandaiacademy · 9 months ago
Thanks a lot, Tarek! You are the best. I am glad you find them useful! Appreciate the kind words as well 🙏🤗
@tarekshaalan3257 · 9 months ago
Thanks for briefly explaining this paper; it is a very interesting one which, as you mentioned, is worth checking to understand why the X* cluster is actually forming. Keep up the awesome work, homie 👍🏻
@mlandaiacademy · 9 months ago
Thank you so much!! Glad you like it 😀
@ChiragAhuja1 · 9 months ago
This is the best tutorial. I used REINFORCE a few years back for finding the best sequence of data augmentations, and then even for recommender problems. Good to see it making a comeback.
@mlandaiacademy · 9 months ago
Thank you so much !!!
@tarekshaalan3257 · 9 months ago
The L0 "norm" violates a fundamental property of norms: a vector norm must assign a positive length or size, and since a vector has no meaningful length under L0, there is some confusion there, so it cannot be considered a norm. That's why, regarding your question, I guess the sparsity "norm" cannot actually be a norm.
@mlandaiacademy · 9 months ago
Think about what happens if we multiply a vector by a scalar alpha and take the L0 norm of the result. Then compare that with what the axioms of norms require for the norm of alpha*x versus the norm of x.
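A quick numerical check of that hint, as a minimal sketch that takes NumPy's count_nonzero as the L0 "norm":

```python
import numpy as np

x = np.array([0.0, 2.0, -1.0])   # two non-zero entries, so ||x||_0 = 2
alpha = 5.0

l0 = np.count_nonzero            # the L0 "norm": number of non-zero entries
print(l0(alpha * x))             # 2: scaling by alpha leaves the count unchanged
print(abs(alpha) * l0(x))        # 10.0: what homogeneity ||alpha x|| = |alpha| ||x|| would require
```

Since 2 differs from 10, absolute homogeneity fails, which is exactly the axiom the exercise points at.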
@tarekshaalan3257 · 9 months ago
Thank you, Dr Ammar. Very useful lectures; they are helping me a lot in my journey. Much appreciated.
@mlandaiacademy · 9 months ago
Thanks a lot! Glad you find them useful!
@Janamejaya.Channegowda · 10 months ago
Thank you for sharing.
@mlandaiacademy · 10 months ago
Of course :)
@YouTubeMed · 10 months ago
Thank you!
@mlandaiacademy · 10 months ago
You are very welcome!!
@theneuralmancer · 10 months ago
First! And amazing work, thank you for this :)
@mlandaiacademy · 10 months ago
Thank you so much!!
@mohammeddarras9719 · 10 months ago
Tremendous work, we are proud of you, brother Haitham. Keep spreading real knowledge.
@mlandaiacademy · 10 months ago
Thank you, my brother Mohammed ☺️
@JATINKUMAR-qu4vi · 10 months ago
👍👍
@drsoumyabanerjee5020 · 10 months ago
Excellent! Thanks for sharing.
@mlandaiacademy · 10 months ago
Glad you enjoyed it!
@marohs5606 · 10 months ago
Eureka gives me an automated curriculum learning feeling, where GPT is the one deciding the reward function for each stage of learning.
@mlandaiacademy · 10 months ago
This is an interesting way to see it. The way I see it, it is trying to design denser rewards from a sparse signal. Based on fitness, it is trying to make the problem easier to solve by giving easier rewards at each stage. It's something similar to what you mention, yes.
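To make the "denser rewards from a sparse signal" point concrete, here is a toy hand-written contrast; note that in Eureka an LLM writes such reward functions, so this sketch is purely illustrative and not its output:

```python
# Illustrative only: a sparse task reward vs. a denser, shaped reward.
def sparse_reward(dist_to_goal: float) -> float:
    return 1.0 if dist_to_goal < 0.05 else 0.0  # signal only at success

def dense_reward(dist_to_goal: float) -> float:
    return -dist_to_goal  # informative signal at every step

print(sparse_reward(0.5), dense_reward(0.5))  # 0.0 -0.5
```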
@NoThing-ec9km · 11 months ago
This makes me feel real stupid. Maybe I am.
@mlandaiacademy · 11 months ago
No, you aren't. It is a short teaser for a much longer paper; it is mostly saying go read that paper, it is awesome.
@directorsudar4108 · 11 months ago
It’s really interesting and awesome
@mlandaiacademy · 11 months ago
I like it too, yes 😃
@ChrisConsultant · 11 months ago
Quality content.
@mlandaiacademy · 11 months ago
Thank you so much!!
@Chandra_Sekhar_T · 11 months ago
The L0 norm does not satisfy absolute homogeneity, and that is why it is not a norm.
@Janamejaya.Channegowda · 11 months ago
Thank you for sharing, greatly appreciated.
@mlandaiacademy · 11 months ago
Glad you find it useful!!
@KetutArtayasa · 11 months ago
Thank you 🙏
@mlandaiacademy · 11 months ago
You are very welcome!!
@36nibs · 11 months ago
I think it's a problem that I know what he's talking about.
@36nibs · 11 months ago
Computers are no good at keeping context.
@anupamghoshh · 11 months ago
@Neural-Awakening · 11 months ago
Fantastic video, thank you for pushing out valuable diagrams and explanations. AI acceleration is a go!
@mlandaiacademy · 11 months ago
Glad it was helpful!
@jasonkena · 11 months ago
At 10:00, what's preventing the NN from learning a degenerate representation which merely memorizes the (x, y) pairs?
@jasonkena · 11 months ago
Thanks for the great series! I've heard that BO typically scales very poorly with the number of variables to optimize over. What's the maximum number of parameters HEBO can feasibly work on? And how is it that it works well on the protein problem considering there are ~(protein length)x(number of amino acid types) variables?
@mlandaiacademy · 11 months ago
First, I would make a distinction between the dimensions of the problem and the values each dimension can take. In terms of dimensions, you are right that standard BO can suffer; maybe up to 80 dims, I'd say, if you try hard. However, there are high-dimensional BO algorithms devised to tackle higher dimensions, including TuRBO (arxiv.org/pdf/1910.01739.pdf), latent-space Bayesian optimisation (arxiv.org/abs/2201.11872), and decomposition techniques (arxiv.org/abs/2301.12844). The second question relates to the combinatorial nature of the problem, for which combinatorial Bayesian optimisation solvers have been developed; the trick is in the kernel and in how you maximise your acquisitions to better explore the space. If interested, we presented a new library at NeurIPS that solves those problems and gives you SOTA algorithms. Please find the paper here: arxiv.org/pdf/2306.09803.pdf, the code here: github.com/huawei-noah/HEBO/tree/master/MCBO, and a blog about the results here: medium.com/@haitham.bouammar71/introducing-mcbo-a-modular-framework-for-comprehensive-benchmarking-and-evaluation-in-5baacab71fc6. Hope this helps :)
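For anyone wanting to try the library mentioned above, here is a minimal HEBO suggest/observe loop on a toy objective, adapted from memory of the repository's README; treat the exact module paths and call signatures as assumptions and double-check against github.com/huawei-noah/HEBO:

```python
import numpy as np
import pandas as pd
from hebo.design_space.design_space import DesignSpace
from hebo.optimizers.hebo import HEBO

def obj(params: pd.DataFrame) -> np.ndarray:
    # Toy objective to minimise; HEBO expects an (n, 1) array back.
    return ((params[['x']].values - 0.37) ** 2).sum(axis=1).reshape(-1, 1)

space = DesignSpace().parse([{'name': 'x', 'type': 'num', 'lb': 0.0, 'ub': 1.0}])
opt = HEBO(space)
for i in range(10):
    rec = opt.suggest(n_suggestions=4)  # batch of candidate configurations
    opt.observe(rec, obj(rec))          # report evaluated objective values
    print(f'iter {i}: best objective so far = {opt.y.min():.4f}')
```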
@jayeshkurdekar126 · a year ago
Very organic way of explaining from first principles... thanks a ton.
@SalAlab-r4h · a year ago
Great, thanks!
@timverbarg · a year ago
Why is it better to have the gradient_theta of log p_theta(x)?
@mlandaiacademy · 11 months ago
It is not better per se; it is more tractable, though. Think about it: p(tau) = p(s_0) \prod_t p(s_{t+1}|s_t, a_t) pi_theta(a_t|s_t). Computing grad_theta of this directly is hard for several reasons: the product, not knowing p(s_{t+1}|s_t, a_t), and it does not necessarily lead to a tractable algorithm you can sample from to get a Monte Carlo estimate of the expectation. The log helps with all of these: it turns the product into a sum, and the gradient with respect to p(s_{t+1}|s_t, a_t) vanishes since that term does not depend on theta. You are then left with the gradient over pi_theta only. Hope this helps!
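Writing out the step that reply sketches, in the same notation:

```latex
\log p_\theta(\tau) = \log p(s_0)
  + \sum_t \log p(s_{t+1} \mid s_t, a_t)
  + \sum_t \log \pi_\theta(a_t \mid s_t),
\qquad
\nabla_\theta \log p_\theta(\tau) = \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)
```

since only the policy terms depend on theta; the unknown dynamics drop out of the gradient.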
@adamli213 · a year ago
But why?
@YouTubeMed · a year ago
Thanks for the new video; one of the best channels on the topic of artificial intelligence ❤
@YouTubeMed · a year ago
Great lecture, Haitham. Good luck to you and Rasul!
@lennarts.2488 · a year ago
Thanks for the video! I kinda like the concept of your slides. Normally one would say "please don't put too much stuff on one slide", but this way it is much easier to comprehend the concepts (especially with the color-coded information). Kudos!
@orangethemeow · a year ago
This video is very relaxing, thanks.
@JTan-fq6vy · a year ago
Thanks for the great video! I have a question regarding the RHS of the last equation at the bottom of the screen: can \pi(a|s) be inside the inner summation? Specifically, can we write the double summation as "\sum_{a}\sum_{s'} \pi(a|s) \Pr(s'|s,a) (...)" instead of "\sum_{a} \pi(a|s) \sum_{s'} \Pr(s'|s,a) (...)"? Thank you!
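For reference, since \pi(a|s) does not depend on the inner summation variable s', the two orderings in the question are equal by distributivity:

```latex
\sum_{a} \pi(a \mid s) \sum_{s'} \Pr(s' \mid s, a)\, f(s, a, s')
= \sum_{a} \sum_{s'} \pi(a \mid s)\, \Pr(s' \mid s, a)\, f(s, a, s')
```

where f(s, a, s') stands for the bracketed term in the video's equation.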