Thanks for letting us know! I was kind of confused
@vuxminhan · 3 months ago
Please improve the audio quality next time! Otherwise great lecture. Thanks Professors!
@cyoung-s2m · 1 month ago
Excellent lecture! It builds a groundbreaking approach rooted in solid first principles. It's not only about LLM agents but also profound wisdom for life.
@arnabbiswas1 · 2 months ago
Listening to this lecture after OpenAI's o1 release. The lecture is helping me to understand what is possibly happening under the hood of o1. Thanks for the course.
@VishalSachdev · 3 months ago
Need better audio capture setup for next lecture
@sheldonlai1650 · 3 months ago
I agree with your opinion.
@jeffreyhao1343 · 2 months ago
The audio quality is good enough, mate, but this is Chinglish, requiring better listening skills.
@OlivierNayraguet · 2 months ago
@jeffreyhao1343 You mean you need good reading skills. Otherwise, by the time I manage the form, I lose the content.
@haodeng9639 · 5 days ago
Thank you, professor! The best course for beginners.
@Shiv19790416 · 1 month ago
Excellent lecture. Thanks for the first-principles approach to learning agents.
@7of934 · 3 months ago
Please make the captions match the speaker's timing (currently they are about 2-3 seconds late).
@claymorton6401 · 2 months ago
Use the YouTube embedded captions.
@garibaldiarnold · 2 months ago
I don't get it... at 49:50: What's the difference between "LLM generate multiple responses" vs "sampling multiple times"?
@aryanpandey7835 · 2 months ago
Generating multiple responses can lead to better consistency and quality by allowing a self-selection process among diverse outputs, while sampling multiple times may provide a more straightforward but less nuanced approach.
@faiqkhan7545 · 1 month ago
@aryanpandey7835 I think you have shuffled the concepts here. Sampling multiple times can enhance self-consistency within LLMs; generating multiple responses just produces different pathways, and some might be wrong. It doesn't lead to better consistency.
@FanxuMin · 1 month ago
@faiqkhan7545 I agree with this; the reasoning path is an irrelevant variable for the training of LLMs.
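For readers following this exchange, here is a minimal sketch of the self-consistency idea under discussion: sample several reasoning paths at nonzero temperature, then majority-vote on the final answers. The `sample_response` helper below is a hypothetical stand-in for an LLM call (canned completions are used here so the sketch runs end to end); it is not code from the lecture.

```python
# Minimal self-consistency sketch: sample several chain-of-thought responses,
# extract each final answer, and majority-vote. Individual reasoning paths are
# discarded; only the distribution over final answers matters.
import random
import re
from collections import Counter

def sample_response(prompt: str) -> str:
    """Placeholder for an LLM sampled at temperature > 0 (hypothetical)."""
    return random.choice([
        "She has 9 - 4 = 5 left, then buys 3, so 5 + 3 = 8. The answer is 8.",
        "9 - 4 + 3 = 8. The answer is 8.",
        "9 - 4 = 5, 5 - 3 = 2. The answer is 2.",   # a wrong reasoning path
    ])

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    answers = []
    for _ in range(n_samples):
        completion = sample_response(prompt)                  # one sampled reasoning path
        match = re.search(r"The answer is (.+?)\.", completion)
        if match:
            answers.append(match.group(1).strip())
    # Majority vote over the final answers.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("Q: ... (some reasoning question) A:"))
```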
@jteichma · 3 months ago
Agreed, especially for the second speaker. The sound quality is muffled. Thanks 🙏
@yaswanthkumargothireddy6591 · 1 month ago
What I did to reduce the echo noise: download the MP3 of this lecture, open it with Microsoft Clipchamp (luckily I had it), and apply the noise reduction filter (media players like VLC also have one if you don't have Clipchamp). Finally, I synced and played the video and audio separately. :)
@ZHANGChenhao-x7v · 2 months ago
awesome lecture!
@deeplearning7097 · 3 months ago
It's worth repeating: the audio is terrible. You really need some determination to stick through this. Shame, really. These presenters deserve better, and so do the people who signed up for this. Thanks though.
@sahil0094 · 3 months ago
It’s definitely some Indian commenting this
@TheDjpenza · 3 months ago
I'm not sure this needs to be said, but I am going to say it because the language used in this presentation concerns me. LLMs are not using reason. They operate in the domain of mimicking language. Reason happens outside of the domain of language. For example, if you have blocks sorted by colors and hand a child a block, they will be able to put it into the correct sorted pile even before they have language skills.

What you are demonstrating is a longstanding principle of all machine learning problems: the more you constrain your search space, the more predictable your outcome. In the first moves of a chess game the model is less certain of which move leads to a win than it will be later in the game. This is not because it is reasoning throughout the game; it is because the search space has collapsed.

You have found clever ways to collapse an LLM's search space such that it will find output that mimics reasoning. You have not created a way to do reasoning with LLMs.
@user-pt1kj5uw3b · 3 months ago
Wow you really figured it all out. I doubt anyone has thought of this before.
@JTan-fq6vy · 2 months ago
What is your definition of reasoning? And how does it fit into the paradigm of machine learning (learning from data)?
@romarsit1795 · 2 months ago
Underrated comment
@datatalkswithchandranshu2028 · 2 months ago
What you refer to can be done via vision models: color identification via vision, sorting via a basic model. Reasoning means adding logic to the model's steps rather than asking for a direct answer. The answer is in the statement: maximize P(response | question) = Σ_paths P(response, path | question).
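Written out in standard notation (my reading of the formula above, with the reasoning path treated as a latent variable that gets marginalized out):

```latex
P(\text{response} \mid \text{question})
  = \sum_{\text{path}} P(\text{response}, \text{path} \mid \text{question})
  = \sum_{\text{path}} P(\text{path} \mid \text{question})\,
                       P(\text{response} \mid \text{path}, \text{question})
```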
@Andre-mi6fk · 1 month ago
This is not quite true. If you anchor reasoning to whatever your acceptable level of reasoning is, then you might have a point. However, reasoning and reason are distinct and should be called out. An LLM can tell you exactly why it chose the answer or path it did, sometimes wrongly, yes, but it gives you its thought process. That is learned from the patterns in the training data.
@Pingu_astrocat21 · 3 months ago
thank you for uploading :)
@akirasakai-ws4eu · 2 months ago
thanks for sharing❤❤ love this course
@arjunraghunandanan · 1 month ago
This was very useful.
@arjunraghunandanan · 1 month ago
09:40 What do you expect for AI? I hope that going forward, AI can help reduce or remove the workload of menial tasks such as data entry, idea prototyping, onboarding, scheduling, calculations, and knowledge localization and transformation, so that we humans can focus on better tasks such as tackling climate change, exploring space, faster and safer transportation, preventing poverty and diseases, etc. (AI can help us with those too.) Offloading operational overhead to an AI feels like the best thing that could happen. But the digital divide and the lack of uniform access to the latest tech across different parts of the world is the biggest problem I see here.
@lucasxu2087 · 2 months ago
Great lecture. One question on the example mentioned:
Q: "Elon Musk"
A: The last letter of "Elon" is "n". The last letter of "Musk" is "k". Concatenating "n", "k" leads to "nk". So the output is "nk".
Q: "Bill Gates"
A: The last letter of "Bill" is "l". The last letter of "Gates" is "s". Concatenating "l", "s" leads to "ls". So the output is "ls".
Q: "Barack Obama"
A:
Since an LLM works by predicting the next token with the highest probability, how can an LLM with reasoning ability predict "ka", which might not even be a valid token in the training corpus, and how can it be the one with the highest probability given the prompt?
@IaZu-o5t · 2 months ago
You can learn about attention. Search for "Attention Is All You Need"; you can find some popular-science videos about this paper.
@datatalkswithchandranshu2028 · 2 months ago
Due to the 2 examples, the LLM picks up the steps to follow to get the answer, rather than just stating the answers "nk" and "ls". So it increases P(correct answer | question).
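On the "ka" question specifically, here is a minimal sketch assuming the Hugging Face transformers library and the gpt2 tokenizer (not necessarily the tokenizer of the models discussed in the lecture): the answer does not have to exist as a single vocabulary token, since the model can emit it as several tokens decoded one step at a time.

```python
# Sketch: "ka" need not be one token; the model can produce it token by token.
# Assumes `pip install transformers`; gpt2 is only an illustrative tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = (
    'Q: "Elon Musk"\n'
    'A: the last letter of "Elon" is "n". the last letter of "Musk" is "k". '
    'Concatenating "n", "k" leads to "nk". so the output is "nk".\n'
    'Q: "Bill Gates"\n'
    'A: the last letter of "Bill" is "l". the last letter of "Gates" is "s". '
    'Concatenating "l", "s" leads to "ls". so the output is "ls".\n'
    'Q: "Barack Obama"\n'
    'A:'
)

# How the string "ka" splits under this tokenizer (may differ for other models).
print(tokenizer.tokenize(' "ka"'))
# The two worked examples raise the probability of a step-by-step continuation
# whose final tokens spell out the answer, which is the point of the few-shot
# chain-of-thought demonstrations.
print(len(tokenizer(prompt)["input_ids"]), "prompt tokens")
```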
@sanjaylalwani1 · 2 months ago
Great lecture. The audio could be improved in the next lecture.
@wzyjoseph7317 · 2 months ago
lidangzzz sent me here; I will finish this amazing lecture.
@user-cy1ot8ge4n · 2 months ago
Haha, me too. Such an awesome lecture!
@jianyangdeng1341 · 2 months ago
same bro
@JKlupi · 1 month ago
😂same
@wzyjoseph7317 · 1 month ago
@JKlupi Good luck, bro!
@wzyjoseph7317 · 1 month ago
@user-cy1ot8ge4n Good luck, bro!
@victorespiritusantiago8664 · 3 months ago
Thank you for sharing the slides!
@SanskarBhargava-g5u · 2 months ago
Can I have access to the Discord?
@faizrazadec · 2 months ago
Kindly improve the audio; it's barely audible!
@yevonli-s5c · 2 months ago
Please improve the audio quality, great lecture tho!
@MUHAMMADAMINNADIM-q4u · 1 month ago
Great sessions!
@achris7 · 1 month ago
The audio quality should be improved; it's very difficult to understand.
@tianyushi2787 · 3 months ago
32:35
@sahil0094 · 2 months ago
Even the subtitles are all wrong; the AI can't recognize this person's English, hahaha.
@MrVoronoi · 1 month ago
Great content, but the accent and audio are tedious. Please make an effort to improve that. Look at the great Andrew Ng. Being a Chinese speaker is not an excuse for being incomprehensible; he's clear and articulate and delivers some of the most useful content on AI.
@AdrianTorrie · 20 days ago
💯 agree. Fantastic content, but poor delivery on all fronts, making it harder to take in the actual content.
@haweiiliang3311 · 1 month ago
Sorry, but the accent of the lady from the beginning drives me crazy. 😅 Typical Chinglish style.
@sahil0094 · 2 months ago
waste of time
@qianfanguo6511 · 10 days ago
I am confused by the apple example at 39:41. What does "token" mean in this example? Where do the top-1 ("5"), top-2 ("I"), ... words come from?
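A minimal sketch of where such top-k candidates come from, assuming the Hugging Face transformers library and the small gpt2 checkpoint (not the model used in the lecture), with the apple question paraphrased rather than quoted: the "top-1", "top-2", ... words are simply the tokens with the highest next-token probability at that decoding step.

```python
# Sketch: inspect the top-5 next-token candidates for a prompt.
# Assumes `pip install torch transformers`; gpt2 is only illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Paraphrase of the lecture's apple question (assumption, not a quote).
prompt = "Q: I have 3 apples, my dad has 2 more apples than me. How many apples do we have in total? A:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits               # shape: (1, seq_len, vocab_size)

next_token_probs = logits[0, -1].softmax(dim=-1)  # distribution over the next token
top = next_token_probs.topk(5)                    # the "top-1" ... "top-5" candidates
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob:.3f}")
```

Greedy decoding always continues with the top-1 token; sampling or inspecting the lower-ranked candidates is what surfaces the alternative continuations shown on the slide.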