CS 194/294-196 (LLM Agents) - Lecture 1, Denny Zhou

38,759 views

Berkeley RDI Center on Decentralization & AI

1 day ago

Comments: 56
@prakashpvss · 3 months ago
Lecture starts at 14:31
@elizabnet3245 · 3 months ago
Thanks for letting us know! I was kind of confused
@vuxminhan · 3 months ago
Please improve the audio quality next time! Otherwise great lecture. Thanks Professors!
@cyoung-s2m · 1 month ago
Excellent lecture! It builds a groundbreaking approach rooted in solid first principles. It's not only about LLM agents but also profound wisdom for life.
@arnabbiswas1 · 2 months ago
Listening to this lecture after OpenAI's o1 release. The lecture is helping me to understand what is possibly happening under the hood of o1. Thanks for the course.
@VishalSachdev · 3 months ago
Need better audio capture setup for next lecture
@sheldonlai1650 · 3 months ago
I agree with your opinion.
@jeffreyhao1343 · 2 months ago
The audio quality is good enough, mate, but this is Chinglish, requiring better listening skills.
@OlivierNayraguet · 2 months ago
@@jeffreyhao1343 You mean you need good reading skills. Otherwise, by the time I manage the form, I lose the content.
@haodeng9639 · 5 days ago
Thank you, professor! The best course for beginners.
@Shiv19790416 · 1 month ago
Excellent lecture. Thanks for first-principles approach to learning agents.
@7of934 · 3 months ago
Please make the captions match the speaker's timing (currently they are about 2-3 seconds late).
@claymorton6401 · 2 months ago
Use the YouTube embedded captions.
@garibaldiarnold · 2 months ago
I don't get it... at 49:50: What's the difference between "LLM generate multiple responses" vs "sampling multiple times"?
@aryanpandey7835 · 2 months ago
Generating multiple responses can lead to better consistency and quality by allowing a self-selection process among diverse outputs, while sampling multiple times may be a more straightforward but less nuanced approach.
@faiqkhan7545 · 1 month ago
@@aryanpandey7835 I think you have shuffled the concepts here. Sampling multiple times can enhance self-consistency within LLMs; generating multiple responses just generates different pathways, some of which might be wrong. It doesn't by itself lead to better consistency.
@FanxuMin · 1 month ago
@@faiqkhan7545 I agree with this; the reasoning path is an irrelevant variable for the training of LLMs.
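The self-consistency idea being debated above can be sketched in a few lines: sample several reasoning paths at temperature > 0, then majority-vote over the final answers only (individual paths may disagree or be wrong). This is a minimal sketch; `sample_response` is a hypothetical stand-in for an LLM API call, not a real library function.

```python
from collections import Counter

def self_consistency(sample_response, question, n=5):
    # Sample n reasoning paths and keep only each path's final answer;
    # the majority answer wins, even if some individual paths were wrong.
    answers = [sample_response(question) for _ in range(n)]
    best_answer, _votes = Counter(answers).most_common(1)[0]
    return best_answer

# Toy stand-in "model": three sampled paths end in 18, two in 26.
fake_paths = iter(["18", "26", "18", "18", "26"])
result = self_consistency(lambda q: next(fake_paths), "toy question", n=5)
print(result)  # → 18
```

Note the vote is over final answers, not reasoning text, which is why diverse (and partly wrong) pathways can still yield a consistent output.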
@jteichma · 3 months ago
Agreed, especially for the second speaker; the sound quality is muffled. Thanks 🙏
@yaswanthkumargothireddy6591 · 1 month ago
What I did to reduce the echo was download the MP3 of this lecture, open it in Microsoft Clipchamp, and apply a noise-reduction filter (media players like VLC also have one if you don't have Clipchamp). Finally I synced and played the video and audio separately. :)
@ZHANGChenhao-x7v · 2 months ago
Awesome lecture!
@deeplearning7097 · 3 months ago
It's worth repeating: the audio is terrible. You really need some determination to stick through this. A shame, really; these presenters deserve better, and so do the people who signed up for this. Thanks though.
@sahil0094 · 3 months ago
It’s definitely some Indian commenting this
@TheDjpenza · 3 months ago
I'm not sure this needs to be said, but I am going to say it because the language used in this presentation concerns me. LLMs are not using reason. They operate in the domain of mimicking language. Reason happens outside the domain of language. For example, if you have blocks sorted by color and hand a child a block, they will be able to put it into the correct sorted pile even before they have language skills.

What you are demonstrating is a longstanding principle of all machine learning problems: the more you constrain your search space, the more predictable your outcome. In the first moves of a chess game the model is less certain of which move leads to a win than it will be later in the game. This is not because it is reasoning throughout the game; it is because the search space has collapsed.

You have found clever ways to collapse an LLM search space such that it will find output that mimics reasoning. You have not created a way to do reasoning with LLMs.
@user-pt1kj5uw3b · 3 months ago
Wow you really figured it all out. I doubt anyone has thought of this before.
@JTan-fq6vy · 2 months ago
What is your definition of reasoning? And how does it fit into the paradigm of machine learning (learning from data)?
@romarsit1795 · 2 months ago
Underrated comment
@datatalkswithchandranshu2028 · 2 months ago
What you refer to can be done via vision models: color identification via vision, sorting via a basic model. Reasoning means adding logic to the model's steps rather than giving a direct answer. The answer is in the stated maximization: P(response|question) = Σ_paths P(response, path|question).
@Andre-mi6fk · 1 month ago
This is not quite true. If you anchor reasoning to what your acceptable level of reasoning is, then you might have a point. However, reasoning and reason are distinct and should be called out. An LLM can tell you exactly why it chose the answer or path it did (sometimes wrongly, yes), but it gave you its thought process. That is, it LEARNED from the data patterns in the training data.
@Pingu_astrocat21 · 3 months ago
Thank you for uploading :)
@akirasakai-ws4eu · 2 months ago
Thanks for sharing ❤❤ Love this course
@arjunraghunandanan · 1 month ago
This was very useful.
@arjunraghunandanan · 1 month ago
09:40 What do you expect for AI? I hope that going forward, AI can help reduce or remove the workload of menial tasks such as data entry, idea prototyping, onboarding, scheduling, calculations, and knowledge localization & transformation, so that we humans can focus on better tasks such as tackling climate change, exploring space, faster & safer transportation, and preventing poverty and disease. (AI can help us with those too.) Offloading operational overhead to an AI feels like the best thing that could happen. But the digital divide and the lack of uniform access to the latest tech across different parts of the world is the biggest problem I see here.
@lucasxu2087 · 2 months ago
Great lecture. One question on the example mentioned:
Q: "Elon Musk" A: the last letter of "Elon" is "n". the last letter of "Musk" is "k". Concatenating "n", "k" leads to "nk". so the output is "nk".
Q: "Bill Gates" A: the last letter of "Bill" is "l". the last letter of "Gates" is "s". Concatenating "l", "s" leads to "ls". so the output is "ls".
Q: "Barack Obama" A:
Since an LLM works by predicting the next token with the highest probability, how can an LLM with reasoning ability predict "ka", which might not even be a valid token in the training corpus, and how can it be the highest probability given the prompt?
@IaZu-o5t · 2 months ago
You can learn about attention. Search for "Attention Is All You Need"; you can find popular-science videos about that paper.
@datatalkswithchandranshu2028 · 2 months ago
Due to the two examples, the LLM understands the steps to follow to get the answer, rather than just stating the answers "nk" and "ls". So it increases P(correct answer|question).
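On the tokenization question above: "nk" or "ka" need not be a single vocabulary token; the model emits the answer one token at a time (e.g. "k" then "a"), and the two worked examples steer it toward spelling out the intermediate steps first. A minimal sketch of the task's ground truth and of how such a few-shot prompt could be assembled (the exact prompt format here is illustrative, not the lecture's verbatim prompt):

```python
def last_letter_concat(name: str) -> str:
    # Ground truth: concatenate the last letter of each word.
    return "".join(word[-1] for word in name.split())

assert last_letter_concat("Elon Musk") == "nk"
assert last_letter_concat("Barack Obama") == "ka"

def cot_demo(name: str) -> str:
    # Render one worked example in the style quoted in the comment above,
    # with the intermediate steps spelled out before the final answer.
    words = name.split()
    steps = " ".join(f'the last letter of "{w}" is "{w[-1]}".' for w in words)
    letters = ", ".join(f'"{w[-1]}"' for w in words)
    answer = last_letter_concat(name)
    return (f'Q: "{name}" A: {steps} '
            f'Concatenating {letters} leads to "{answer}". so the output is "{answer}".')

# Two demonstrations, then the unanswered query the model must continue.
prompt = "\n".join(cot_demo(n) for n in ["Elon Musk", "Bill Gates"]) + '\nQ: "Barack Obama" A:'
print(prompt)
```

The demonstrations raise P(correct answer|question) by making the step-by-step procedure, not just the final string, part of what the model imitates.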
@sanjaylalwani1 · 2 months ago
Great lecture. The audio could be improved in the next lecture.
@wzyjoseph7317 · 2 months ago
lidangzzz sent me here; I will finish this amazing lecture.
@user-cy1ot8ge4n · 2 months ago
haha, me too. Awesome lecture!
@jianyangdeng1341 · 2 months ago
same bro
@JKlupi · 1 month ago
😂same
@wzyjoseph7317 · 1 month ago
@@JKlupi good luck! bro
@wzyjoseph7317 · 1 month ago
@@user-cy1ot8ge4n good luck bro
@victorespiritusantiago8664 · 3 months ago
Thank you for sharing the slides!
@SanskarBhargava-g5u · 2 months ago
Can I have access to the Discord?
@faizrazadec · 2 months ago
Kindly improve the audio; it's barely audible!
@yevonli-s5c · 2 months ago
Please improve the audio quality, great lecture tho!
@MUHAMMADAMINNADIM-q4u · 1 month ago
Great sessions
@achris7 · 1 month ago
The audio quality should be improved; it's very difficult to understand.
@tianyushi2787 · 3 months ago
32:35
@sahil0094 · 2 months ago
Even the subtitles are all wrong; the AI can't recognize this person's English hahaha
@MrVoronoi · 1 month ago
Great content, but the accent and audio are tedious. Please make an effort to improve that. Look at the great Andrew Ng: being a Chinese speaker is no excuse for being incomprehensible. He's clear and articulate and delivers some of the most useful content on AI.
@AdrianTorrie · 20 days ago
💯 agree. Fantastic content, poor delivery on all fronts making it harder to take in the actual content.
@haweiiliang3311 · 1 month ago
Sorry, but the accent of the lady at the beginning drives me crazy. 😅 Typical Chinglish style.
@sahil0094 · 2 months ago
waste of time
@qianfanguo6511 · 10 days ago
I am confused by the apple example at 39:41. What does "token" mean in this example? Where do the "top-1: 5", "top-2: I"... words come from?
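On the top-1/top-2 notation: at the first decoding step the model assigns a probability to every vocabulary token, and "top-1: 5", "top-2: I" are simply the highest- and second-highest-probability candidates for that first token. Greedy decoding always takes top-1, while branching on the top-k candidates can surface alternative continuations. A toy sketch with invented logits (the numbers and token set are made up for illustration):

```python
import math

def softmax(logits: dict) -> dict:
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Invented first-step logits for a question like the apples example.
logits = {"5": 2.0, "I": 1.5, "We": 0.3, "The": 0.1}
probs = softmax(logits)
ranked = sorted(probs, key=probs.get, reverse=True)
print(ranked[:2])  # → ['5', 'I']  (top-1 and top-2 candidate first tokens)
```

A "token" here is one entry of the model's vocabulary (often a word piece, not a whole word), so the candidates come from ranking this distribution, not from the training corpus directly.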