Stanford CS25: V4 I Jason Wei & Hyung Won Chung of OpenAI

173,737 views

Stanford Online

Comments: 76
@yoesemiat 7 months ago
The fact that giving the model more freedom and fewer inductive biases shaped by human subjectivity actually improves performance is really illuminating. Thanks.
@jean-pierrecoffe6666 7 months ago
Nothing new under the sun, this is just the Bitter Lesson
@chriswang2464 6 months ago
Moreover, it is inspired by Occam's Razor.
@michaelbernaski7337 8 months ago
Excellent. First talk is practical. Second is profound. Thank you.
@TrishanPanch 8 months ago
Outstanding. I teach an AI class and there are loads of great pedagogical nuggets here that I am going to borrow.
@ankitthawal1313 7 months ago
Can you explain what those are?
@lugia8888 7 months ago
Nice, a fake class.
@irshviralvideo 7 months ago
@@anshuraj4277 why bother going to college to learn?
@calm694 7 months ago
@@anshuraj4277 learn english first before making going to AI CS
@packsw9243 7 months ago
@@calm694 "before making going" yeah you're a real genius
@zyxbody 7 months ago
I don't understand anything, but I like how these people teach. May everyone get to understand the concepts; that's my only prayer.
@flavioferlin3127 4 months ago
I could listen to these gentlemen talk about this stuff all day. Thanks and kudos for making such a fascinating topic relatable.
@sanesanyo 8 months ago
One of my favourite talks in recent times... learnt so much from this.
@ricopags 8 months ago
Really grateful for this being uploaded! Thank you to both speakers and to Stanford for the generosity. The highlight of the video for me is Hyung's sheepish refusal to get into predictions on the staying power/relevance of MoE or any specific architecture. It felt like a wasted question, since the premise of his talk is "tl;dr Sutton's Bitter Lesson".
@sady01 7 months ago
What an amazing lecture. It was simple, yet groundbreaking.
@ariG23498 8 months ago
He has his slides in his head! Loved the content.
@inforoundup9826 8 months ago
Great talks by both speakers
@jasonmeyer495 7 months ago
Amazing content. His use of simple examples to explain deep concepts is extraordinary. So lucky to be living in a world where content like this is so easily discoverable and accessible.
@Aditya-ri7em 7 months ago
He came in and started teaching like a teacher.
@atdt01410x 8 months ago
This lecture is super useful. Really appreciate it.
@indylawi5021 6 months ago
Great lecture and insights on LLMs.
@JiayiHe-fs2rh 4 months ago
Thanks for the great talk!
@JasonKendra 7 months ago
Don't let this setback define your trading journey. Keep working hard and striving for success.
@izumskee 8 months ago
Really great talk. Thank you.
@gmccreight2 8 months ago
Thanks for the talk! Really interesting stuff. I had one question. At 1:04:00 Hyung suggests that uni-directional attention is preferable to bidirectional attention in turn-taking scenarios because it allows the reuse of calculated information in the KV cache. I'm trying to understand how this fits into his broader thesis that we should be moving towards more generic approaches. On the surface the use of the KV cache doesn't feel particularly generic. Does it make sense because masked self-attention is necessary for next token generation, anyhow, so using a causal attention mask universally makes sense?
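For what it's worth, here is a minimal numpy sketch of that point, assuming a single attention head, random vectors, and Q = K = V for brevity (none of it is from the lecture): under a causal mask, appending a new token leaves the earlier positions' attention outputs unchanged, which is exactly what makes the KV cache reusable across turns.
```python
import numpy as np

def causal_attention(q, k, v):
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (T, T) similarity scores
    mask = np.triu(np.ones_like(scores), k=1)      # 1s above the diagonal = future positions
    scores = np.where(mask == 1, -np.inf, scores)  # causal mask: no attending to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
T, d = 4, 8
x_old = rng.normal(size=(T, d))   # tokens from earlier turns (already processed and cached)
x_new = rng.normal(size=(1, d))   # a new token arriving in the next turn

# For brevity pretend Q = K = V = x; a real model applies learned projections first.
out_old = causal_attention(x_old, x_old, x_old)
x_all = np.vstack([x_old, x_new])
out_all = causal_attention(x_all, x_all, x_all)

# With a causal mask, the outputs for the old positions are identical, so their
# keys/values can simply be reused from the cache. With bidirectional attention
# the new token would change every old position's output, forcing recomputation.
assert np.allclose(out_old, out_all[:T])
```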
@the_wisecrack9472 6 months ago
This is really great! Thank you
@jwPick 6 months ago
Thanks so much for the precious video.
@itsaugbog 7 months ago
Hilariously, Jensen Huang from NVIDIA just spoke in a fireside chat recently about how they're already dependent on AI models for designing chips, so that last comment is already happening. Great talk.
@simonfranco644 7 months ago
Can you support this with a doc or link? I'm very keen on exploring this. Also, it was hilarious to me that the attendees laughed at the doctor for explaining that, and I giggled when he mentioned that it might just be official in two years or so.
@adamlin120 8 months ago
Great and inspiring talks
@jooholee_ 7 months ago
Great talk. Very inspiring.
@CrazyFoxMovies 8 months ago
Great lecture!
@lugia8888 7 months ago
All of this is BS 😂
@zacharykosove9048 8 months ago
The students were asking some great questions, no wonder I don't go to Stanford
@roro5179 7 months ago
I'm the dude at the end (don't go to Stanford xd)
@mprone 7 months ago
Questions looked pretty naive to me. What's "great" about them to you?
@laalbujhakkar 8 months ago
Thanks for all the extra popping into the mic during the intro brrrruh!
@doinitlive3015 7 months ago
Leadership styles are a good analogy for getting higher performance with less structure. A leader with an authoritarian style increases a team's productivity but decreases its creativity, whereas a team under democratic leadership solves problems with more creativity, leading to innovative ideas.
@Faustordz 7 months ago
Very intriguing!
@lc.sin. 7 months ago
Besides compute, I guess exponentially cheaper network bandwidth, data storage, and sensors to capture real-world input should also be part of the driving forces.
@aliwaheed906 7 months ago
Maybe emergent behavior happens because, for that task to be learned, there is a set of prerequisite tasks that need to be learned first. Just brainstorming here.
@boybro624 5 months ago
I don't quite understand how the overall loss is divided into many sub-losses. Is it true that LLM training only uses cross-entropy, as Karpathy said? Sorry, I'm new to this field.
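For what it's worth, a minimal numpy sketch of the answer, using random logits and an invented three-way "task" grouping (purely illustrative, not from the lecture): training really does optimize a single next-token cross-entropy, and the "sub-losses" are just that same sum grouped by kind of token or task.
```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, vocab = 1_000, 50
logits = rng.normal(size=(num_tokens, vocab))      # model outputs for each position
targets = rng.integers(0, vocab, size=num_tokens)  # the actual next tokens
task_id = rng.integers(0, 3, size=num_tokens)      # pretend each token belongs to one of 3 "tasks"

# Per-token cross-entropy: -log p(correct next token)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
per_token_loss = -log_probs[np.arange(num_tokens), targets]

overall = per_token_loss.mean()  # the single loss that is actually optimized

# The same number, rewritten as a weighted sum of per-"task" average losses.
decomposed = sum(
    (task_id == t).mean() * per_token_loss[task_id == t].mean()
    for t in range(3)
)
assert np.isclose(overall, decomposed)
```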
@Arcticwhir 7 months ago
I'm more curious about the 22% of tasks with completely flat scaling curves and what the solutions are to change that. Also, given that less structure is generally better for larger models but needs more compute, does that mean the model will need less RLHF to be desirable for humans?
@heyitsjoshd 8 months ago
How do we know what counts as small vs. large? For example, with emergent tasks, the talk highlights that more data could lead to more accuracy with enough compute: the small LM would not have seen accuracy improvements, but the large LM did. For the tasks currently shown as flat, couldn't it simply be that we don't have enough compute yet to know whether they would get more accurate?
@DanBillings 8 months ago
Please put the subject of the talk in the title. You can then market the OpenAI speakers.
@Lalala_1701 8 months ago
Andrew Ng also used the same kind of example to explain LMs.
@dkierans 7 months ago
Yeah, this is a pretty great talk. It is quite hard to figure out at what technical level to hit the widest audience. This is nice. Not as nice as those flaxen locks though.
@robertwilsoniii2048 7 months ago
Something that always bothered me is that adding random terms increases predictive power, holding sample size constant (scaling compute without increasing data size). The problem is that it decreases explanatory power and the ability to understand the individual contributions of each variable. It's like pop-astrology star signs (Libra, Gemini, Leo, etc.): adding extra variables improves predictability as you scale compute, but does it add anything to clarity? I suppose clarity doesn't matter if all you want is predictions. That always annoyed me.
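A minimal numpy illustration of that statistical point, assuming ordinary least squares and pure-noise extra regressors (nothing from the talk): in-sample fit never gets worse when junk variables are added, even though they explain nothing.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 1))
y = 2.0 * x[:, 0] + rng.normal(size=n)  # the outcome truly depends only on x

def r_squared(features, y):
    X = np.column_stack([np.ones(len(y)), features])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares fit
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

noise = rng.normal(size=(n, 20))  # 20 "star sign" regressors: pure noise
print(r_squared(x, y))                                        # fit with the real predictor only
print(r_squared(np.column_stack([x, noise]), y))              # in-sample R^2 goes up; clarity doesn't
```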
@xiaoxiandong7382 3 months ago
wow!!! So good
@MatijaGrcic 7 months ago
Amazing!
@hedu5303 8 months ago
Strange world. This dude is almost a kid and he's giving a lecture.
8 months ago
I am happy to learn from any kid :)
@shairuno 8 months ago
His intuition is older than me.
@vireyes1595 7 months ago
nah man gotta recognize game when you see it. dude’s a future titan of the industry and we’re out here getting his guest lecture for free. pretty solid win for all parties involved in my book
@SuperHeromindNsoul 7 months ago
True, we can all learn from each other, and the speakers here also learned from someone.
@MrAmgadHasan 7 months ago
Indeed. Many of the recent breakthroughs in ML were achieved by people in their 20s, mostly during or shortly after their PhDs.
@erebi8386 7 months ago
Hang in there, Hyung Won!
@Umarbit 7 months ago
Please remove the noise from the audio.
@akashdeb9823 7 months ago
Jason can do 18 pull ups no breaks
@hh0686 7 months ago
Please... why can't the presentation be done on a projector instead of a whiteboard? The visuals are really hard to see.
@primedanny417 5 months ago
It was intentional, from Hyung Won Chung's tweet: "Jason walked into the classroom without anything (no laptop, no notes) and gave a lecture out of memory."
@rasen84 8 months ago
The second half is 100% wrong about the idea that scaling is what matters and that adding complexity to the model, i.e., inductive biases, bites you later. It ignores the considerable amount of human labor allocated to data curation and hand-written instruction-tuning data. That labor is necessary precisely because the model is too simple and too dumb: it lacks the inductive biases needed to learn intelligently from arbitrary data. You need to add more inductive biases in order to obviate the need for human labor on data curation and creation.
@김성주-h1b 8 months ago
He is not talking about the immediate moment. He is discussing what kind of model would be preferable when there is an abundance of data and computing resources. He mentioned that due to the current limitations in computing resources, it's necessary to use models with some degree of inductive bias. Although he didn't say it explicitly, he probably thinks that models with inductive bias are also needed due to limitations in data. However, in the future, as more computing and data resources become available, models with less inductive bias will be better.
@rasen84 8 months ago
@@김성주-h1b What I'm saying is that the data collection, creation, and curation process should count toward model complexity and the scaling hypothesis. You could be removing complexity from the model and offloading that complexity onto human data curators and creators.
@김성주-h1b 8 months ago
@rasen84 I believe we are on the same page. I agree with your point that "you could be removing complexity from the model and offloading that complexity to human data curators and creators." However, I think he is talking about the trends and the distant future, perhaps 10 years from now. Yes, if we remove complexity from the model and training methods, we will need more resources to compensate for the trade-off in data preparation. However, in the future there may be a vast array of open-source data available, plus synthetic data generated through self-play approaches. Then our goal will be to reduce assumptions in the model, give it more freedom, and make it bigger. I believe this is what he intended.
@hang_8169 7 months ago
@@rasen84 I would argue that even if you use an older method with more structure in it, you still need to spend the same amount of effort on data, if not more, to adhere to the structure you impose on the model, because your model now has MORE assumptions about the data it expects, not fewer.
@rasen84 7 months ago
@@hang_8169 then it’s time to add more inductive biases.
@wenhanzhou5826 7 months ago
Dude just learned how to manually classify lung cancer to better understand the neural network he is building 💀
@Gracie1121-g9d a month ago
Easy to understand.
@elcanmhmmdli3305 8 months ago
Azerbaijan❤
Stanford CS25: V4 I Aligning Open Language Models
1:16:21
Stanford Online
25K views
Stanford CS25: V4 I Hyung Won Chung of OpenAI
36:31
Stanford Online
197K views
Jason Wei: Scaling Paradigms for Large Language Models
40:10
Mayur Naik
4.2K views
Jeff Dean (Google): Exciting Trends in Machine Learning
1:12:30
Rice Ken Kennedy Institute
176K views
This is why Deep Learning is really weird.
2:06:38
Machine Learning Street Talk
417K views
Visualizing transformers and attention | Talk for TNG Big Tech Day '24
57:45
Stanford CS25: V3 I Retrieval Augmented Language Models
1:19:27
Stanford Online
176K views
MIT EI seminar, Hyung Won Chung from OpenAI. "Don't teach. Incentivize."
35:56