Sir, thank you so much. I watched the full video and could digest most of the information well. You are amazing.
@harshpatil76842 ай бұрын
Does your statistical model also imply that LLMs can reason across contextual history using the same principles? Say I have prompt 1, response 1, prompt 2, response 2, ..., prompt (n-1), response (n-1), and that this is my current state of conversation with a voice assistant. If I now send prompt n, can it treat that as an intention, measure it against all previous prompts (intentions) and responses (messages), factor in the whole history of the conversation, and generate response n?
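For intuition on the mechanism this question is about, here is a minimal sketch, with a hypothetical helper `build_context` and an illustrative "User:/Assistant:" format, of how a multi-turn history is typically flattened into one context that an autoregressive model conditions on when generating response n (this is not the assistant's actual pipeline):

```python
from typing import List, Tuple

def build_context(history: List[Tuple[str, str]], new_prompt: str) -> str:
    """Flatten (prompt, response) pairs plus the new prompt into a single string.

    An autoregressive LLM then conditions on this whole string (up to its
    context-window limit), so every earlier turn can influence response n.
    """
    lines = []
    for user_msg, assistant_msg in history:
        lines.append(f"User: {user_msg}")
        lines.append(f"Assistant: {assistant_msg}")
    lines.append(f"User: {new_prompt}")
    lines.append("Assistant:")  # the model generates from this point onward
    return "\n".join(lines)

# Example: two earlier turns plus the new prompt n.
history = [("Turn the lights on", "Done."), ("Set a timer for 10 minutes", "Timer set.")]
print(build_context(history, "Cancel the timer"))
```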
@areenreddy3 ай бұрын
Your content is useful and I hope you keep this up.
@mlandaiacademy3 ай бұрын
thank you!!
@SuvradipDasPhotographyOfficial3 ай бұрын
Excellent❤❤❤
@mlandaiacademy3 ай бұрын
glad it is useful 🤗
@SuvradipDasPhotographyOfficial3 ай бұрын
@mlandaiacademy Trust me, it's too good, and I will be completing the remaining two tomorrow. I really hope you make videos on GANs, CNNs, and RNNs too.
@mlandaiacademy3 ай бұрын
@SuvradipDasPhotographyOfficial Thank you. We will certainly try if time allows :)
@brendanclarke13024 ай бұрын
This is very cool, thanks for the presentation.
@mlandaiacademy4 ай бұрын
glad you find it useful.
@ramanandr75624 ай бұрын
Nice
@tawfickraad77076 ай бұрын
excellent explanation, Bravo
@mlandaiacademy6 ай бұрын
Glad it helped!
@eigd7 ай бұрын
Is there a paper associated with this talk? And is there a ROS package on some github somewhere?
@mlandaiacademy7 ай бұрын
We are planning to put the paper out soon; we are just writing it now - stay tuned.
@eigd7 ай бұрын
Never mind. He mentions the paper is coming, and they are working out how to share the code.
@mlandaiacademy7 ай бұрын
He is me btw hehe
@eigd7 ай бұрын
@mlandaiacademy Then let me thank you for a very interesting video! I am working on my PhD project proposal right now, and I am looking to work on the embodiment of robots and navigation in agriculture. This domain is exploding right now!
@mlandaiacademy7 ай бұрын
We should deffo have a chat - write me at [email protected] if you're interested 😀
Very important subject, but the slides are not clear and a higher video resolution is not available.
@mlandaiacademy8 ай бұрын
Yes, YouTube has two quality tiers, SD and HD. SD processing is done, and HD processing typically takes more time. If you refresh in a couple of hours it should be HD quality 😀 it's YouTube's processing time 😀😀
@mlandaiacademy8 ай бұрын
It is amazing that NLI models have no clue about semantics! Let us know if you have any questions!
@mlandaiacademy9 ай бұрын
Feel free to leave any questions for Souradip. I will convey them!
@mlandaiacademy9 ай бұрын
Please feel free to leave any questions for Leo! I can then convey them.
@tarekshaalan32579 ай бұрын
Dr Bou Ammar, another great video. To be honest, yours come very highly recommended, and the clarity and quality are unique. The effort you put into these lectures is very much appreciated.
@mlandaiacademy9 ай бұрын
Thanks a lot, Tarek! You are the best. I am glad you find them useful! Appreciate the kind words as well 🙏🤗
@tarekshaalan32579 ай бұрын
Thanks for briefly explaining this paper; it is a very interesting one which, as you mentioned, is worth checking to understand why the X* cluster is actually forming. Keep up the awesome work, homie 👍🏻
@mlandaiacademy9 ай бұрын
Thank you so much!! Glad you like it 😀
@ChiragAhuja19 ай бұрын
This is the best tutorial. I used REINFORCE a few years back for finding the best sequence of data augmentations, and then even for recommender problems. Good to see it making a comeback.
@mlandaiacademy9 ай бұрын
Thank you so much !!!
@tarekshaalan32579 ай бұрын
For the L0 norm, it violates a fundamental property of norms - a vector norm must usually have a positive length or size - and since a vector has no real notion of length under L0, there is some confusion there, so it cannot be considered a norm. That is why, for your question, I guess the sparsity "norm" cannot actually be a norm.
@mlandaiacademy9 ай бұрын
Think about what would happen if we multiply a vector by a scalar alpha and take the L0 norm of that. Then compare it with what the axioms of norms say we should get: the norm of the vector alpha*x versus the norm of the vector x.
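As a worked check of that hint (the standard argument, not quoted from the thread): the L0 "norm" counts non-zero entries, so scaling a vector does not scale it, which breaks absolute homogeneity:

$$\|\alpha x\|_0 \;=\; \#\{i : \alpha x_i \neq 0\} \;=\; \#\{i : x_i \neq 0\} \;=\; \|x\|_0 \quad \text{for any } \alpha \neq 0,$$

whereas a true norm must satisfy $\|\alpha x\| = |\alpha|\,\|x\|$. The two only agree when $|\alpha| = 1$, so $\|\cdot\|_0$ is not a norm, matching the homogeneity point made later in the thread.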
@tarekshaalan32579 ай бұрын
Thank you, Dr Ammar - very useful lectures that are helping me a lot in my journey. Much appreciated.
@mlandaiacademy9 ай бұрын
Thanks a lot, glad you find them useful!
@Janamejaya.Channegowda10 ай бұрын
Thank you for sharing.
@mlandaiacademy10 ай бұрын
Of course :)
@YouTubeMed10 ай бұрын
Thank you !
@mlandaiacademy10 ай бұрын
You are very welcome !!
@theneuralmancer10 ай бұрын
First! And amazing work thank you for this :)
@mlandaiacademy10 ай бұрын
Thank you so much!!
@mohammeddarras971910 ай бұрын
Tremendous work, we are proud of you, brother Haitham. Keep spreading real knowledge.
@mlandaiacademy10 ай бұрын
Thank you, my brother Mohammed ☺️
@JATINKUMAR-qu4vi10 ай бұрын
👍👍
@drsoumyabanerjee502010 ай бұрын
Excellent! Thanks for sharing.
@mlandaiacademy10 ай бұрын
Glad you enjoyed it!
@marohs560610 ай бұрын
Eureka is giving me an automated curriculum learning feeling, where GPT is the one deciding the reward function for each stage of learning.
@mlandaiacademy10 ай бұрын
This is an interesting way to see it. The way I see it, it is trying to design denser rewards from a sparse signal. Based on fitness, it is trying to make the problem easier to solve by giving easier rewards at each stage. It's similar to what you mention, yes.
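To make the sparse-versus-dense point concrete, here is a purely illustrative toy sketch (the goal point and both reward functions are made up for this example, not the Eureka-generated rewards):

```python
import numpy as np

GOAL = np.array([1.0, 1.0])  # hypothetical target state

def sparse_reward(state):
    # Original signal: reward only when the goal is actually reached.
    return 1.0 if np.linalg.norm(state - GOAL) < 0.05 else 0.0

def dense_reward(state):
    # Densified signal: progress toward the goal is rewarded at every step,
    # the kind of easier intermediate reward discussed above.
    return -np.linalg.norm(state - GOAL)

state = np.array([0.4, 0.7])
print(sparse_reward(state), dense_reward(state))
```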
@NoThing-ec9km11 ай бұрын
This makes me feel real stupid. Maybe I am.
@mlandaiacademy11 ай бұрын
No, you aren't. It is a short teaser for a much longer paper. It is more like saying: go read that paper, it is awesome.
@directorsudar410811 ай бұрын
It’s really interesting and awesome
@mlandaiacademy11 ай бұрын
I like it too yes 😃
@ChrisConsultant11 ай бұрын
Quality content.
@mlandaiacademy11 ай бұрын
Thank you so much!!
@Chandra_Sekhar_T11 ай бұрын
The L0 norm does not satisfy absolute homogeneity, and that is why it is not a norm.
@Janamejaya.Channegowda11 ай бұрын
Thank you for sharing, greatly appreciated.
@mlandaiacademy11 ай бұрын
Glad you find it useful!!
@KetutArtayasa11 ай бұрын
Thank you🙏
@mlandaiacademy11 ай бұрын
You are very welcome !!
@36nibs11 ай бұрын
I think it's a problem that I know what he's talking about.
@36nibs11 ай бұрын
Computers are no good at keeping context.
@anupamghoshh11 ай бұрын
❤
@Neural-Awakening11 ай бұрын
Fantastic video, Thank you for pushing out valuable diagrams and explanations. AI acceleration is a go!
@mlandaiacademy11 ай бұрын
Glad it was helpful!
@jasonkena11 ай бұрын
At 10:00, what's preventing the NN from learning a degenerate representation which merely memorizes the (x, y) pairs?
@jasonkena11 ай бұрын
Thanks for the great series! I've heard that BO typically scales very poorly with the number of variables to optimize over. What's the maximum number of parameters HEBO can feasibly work on? And how is it that it works well on the protein problem considering there are ~(protein length)x(number of amino acid types) variables?
@mlandaiacademy11 ай бұрын
First, I would make a distinction between the dimensions of the problem and the values each dimension could take. In terms of dimensions, you are right that standard BO could suffer - maybe up to 80 dims I'd say if you try hard. However, there are high-d BO algorithms devised to tackle higher dimensions, some of which include: TuRBO (arxiv.org/pdf/1910.01739.pdf), latent space Bayesian Optimisation (arxiv.org/abs/2201.11872) and decomposition techniques (arxiv.org/abs/2301.12844). In terms of the second question, that relates to the combinatorial nature of the problem, for which combinatorial Bayesian optimisation solvers have been developed. The trick is in the kernel and how you maximise your acquisitions to better explore the space. If interested, we had a new library at NeurIPS that solves those problems and gives you SOTA algorithms. Please find the paper here: arxiv.org/pdf/2306.09803.pdf, the code here: github.com/huawei-noah/HEBO/tree/master/MCBO, and a blog about the results here: medium.com/@haitham.bouammar71/introducing-mcbo-a-modular-framework-for-comprehensive-benchmarking-and-evaluation-in-5baacab71fc6. Hope this helps :)
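For readers who want a concrete picture of what such solvers build on, here is a minimal sketch of a vanilla BO loop (a GP surrogate plus an expected-improvement acquisition maximised over random candidates). It is not the HEBO or MCBO implementation; the toy `objective` and all settings are stand-ins for a real black-box problem:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy black-box function to minimise (stand-in for the real experiment).
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
dim = 5
X = rng.uniform(0, 1, size=(5, dim))            # initial design
y = objective(X)

for _ in range(20):                              # BO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                                 # fit the GP surrogate to observations

    cand = rng.uniform(0, 1, size=(2000, dim))   # random candidate pool
    mu, std = gp.predict(cand, return_std=True)
    best = y.min()

    # Expected improvement (for minimisation).
    imp = best - mu
    z = imp / np.clip(std, 1e-9, None)
    ei = imp * norm.cdf(z) + std * norm.pdf(z)

    x_next = cand[np.argmax(ei)]                 # maximise the acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best value found:", y.min())
```

The high-dimensional and combinatorial variants mentioned above mostly change the surrogate/kernel and how the acquisition is maximised; the outer fit-acquire-evaluate loop stays the same.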
@jayeshkurdekar126 Жыл бұрын
Very organic way of explaining from first principles... thanks a ton.
@SalAlab-r4h Жыл бұрын
great thanks
@timverbarg Жыл бұрын
Why is it better to have the gradient w.r.t. theta of log p_theta(x)?
@mlandaiacademy11 ай бұрын
It is not better per se; it is more tractable, though. Think about it: p_theta(tau) = p(s_0) \prod_t p(s_{t+1}|s_t, a_t) pi_theta(a_t|s_t). If you want to compute grad_theta of this directly, it is hard for many reasons: the product, not knowing p(s_{t+1}|s_t, a_t), and it does not necessarily lead to a tractable algorithm you can sample from to get a Monte Carlo estimate of the expectation. The log helps with all of these: when you take the log, the product becomes a sum, and the gradient with respect to p(s_{t+1}|s_t, a_t) vanishes since it does not depend on theta. Then you are left with the gradient over pi_theta. Hope this helps!
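Written out, the standard algebra behind that reply (no claims beyond what it states) is:

$$
\nabla_\theta \log p_\theta(\tau) \;=\; \nabla_\theta\Big[\log p(s_0) + \sum_t \log p(s_{t+1}\mid s_t, a_t) + \sum_t \log \pi_\theta(a_t\mid s_t)\Big] \;=\; \sum_t \nabla_\theta \log \pi_\theta(a_t\mid s_t),
$$

so the policy-gradient estimator $\nabla_\theta \mathbb{E}_{\tau \sim p_\theta}[R(\tau)] = \mathbb{E}_{\tau \sim p_\theta}\big[R(\tau) \sum_t \nabla_\theta \log \pi_\theta(a_t\mid s_t)\big]$ needs only sampled trajectories and the policy's own log-probabilities, never the unknown dynamics.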
@adamli213 Жыл бұрын
but why?
@YouTubeMed Жыл бұрын
Thanks for the new video, one of the best channels on the topic of artificial intelligence❤
@YouTubeMed Жыл бұрын
Great lecture Haitham. Good luck to you and Rasul!
@lennarts.2488 Жыл бұрын
Thanks for the video! I kinda like the concept of your slides. Normally one would say “please don't put too much stuff on one slide” but this way it is much easier to comprehend the concepts (especially with the color coded information). Kudos!
@orangethemeow Жыл бұрын
this video is very relaxing, thanks
@JTan-fq6vy Жыл бұрын
Thanks for the great video! I have a question regarding the RHS of the last equation at the bottom of the screen: can the pi(a|s) be inside the inner summation? Specifically, can we write the double summation as "\sum_{a}\sum_{s'} \pi(a|s) \Pr(s'|s,a) (...)" instead of "\sum_{a} \pi(a|s) \sum_{s'} \Pr(s'|s,a) (...)"? Thank you!
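For what it's worth (just the standard algebra, not a reply from the channel): since $\pi(a\mid s)$ does not depend on $s'$, it can be moved inside or outside the inner sum freely,

$$\sum_a \pi(a\mid s) \sum_{s'} \Pr(s'\mid s, a)\, f(s, a, s') \;=\; \sum_a \sum_{s'} \pi(a\mid s)\, \Pr(s'\mid s, a)\, f(s, a, s'),$$

with $f$ standing in for the bracketed term, so the two ways of writing the double summation are equivalent.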