Stanford CS25: V4 I Overview of Transformers

55,386 views

Stanford Online

April 4, 2024
Steven Feng, Stanford University [styfeng.github...]
Div Garg, Stanford University [divyanshgarg.com/]
Emily Bunnapradist, Stanford University [ / ebunnapradist ]
Seonghee Lee, Stanford University [shljessie.gith...]
A brief introduction to and overview of the history of NLP, how Transformers work, and their impact. Discussion of recent trends, breakthroughs, applications, and remaining challenges/weaknesses, plus a discussion of AI agents. Slides here: docs.google.co...
More about the course can be found here: web.stanford.e...
View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...

Comments: 40
@3ilm_yanfa3 5 months ago
Can't believe it... Just today we started the part about LSTMs and Transformers in my ML course, and here it comes. Thank you, guys!
@fatemehmousavi402 5 months ago
Awesome, thank you Stanford Online for sharing this amazing video series
@Drazcmd 5 months ago
Very cool! Thanks for posting this publicly, it's really awesome to be able to audit the course :)
@mjavadrajabi7401 5 months ago
Great!! Finally it's time for CS25 V4🔥
@benjaminy. 4 months ago
Hello everyone! Thank you very much for uploading these materials. Cheers
@marcinkrupinski 4 months ago
Amazing stuff! Thank you for publishing this valuable material!
@styfeng 5 months ago
it's finally released! hope y'all enjoy(ed) the lecture 😁
@laalbujhakkar 5 months ago
Don't hold the mic so close, bro. The lecture was really good though :)
@gemini22581 5 months ago
What is a good course to learn NLP?
@siiilversurfffeeer 4 months ago
Hi Feng! Will there be more CS25 V4 lectures uploaded on this channel?
@styfeng 4 months ago
@@siiilversurfffeeer yes! should be a new video out every week, approx. 2-3 weeks after each lecture :)
@JJGhostHunters 4 months ago
I recently started to explore using transformers for time-series classification as opposed to NLP. Very excited about this content!
@lebesguegilmar1 5 months ago
Thanks for sharing this course and lecture, Stanford. Congratulations. Greetings from Brazil!
@liangqunlu1553 4 months ago
Very interesting summary
@GerardSans 5 months ago
Be careful using anthropomorphic language when talking about LLMs, e.g. thoughts, ideas, reasoning. Transformers don't "reason" or have "thoughts" or even "knowledge". They extract existing patterns in the training data and use stochastic distributions to generate outputs.
@ehza 5 months ago
That's a pretty important observation imo
@junyuzheng5282 4 months ago
Then what are "reason", "thoughts", "knowledge"?
@DrakenRS78 4 months ago
Do individual neurons have thoughts, reason, or knowledge, or is it once again the collective that we should be assessing?
@TheNewton 4 months ago
This mis-anthropomorphism problem will only grow, because each end of the field/industry is being sloppy with it, so calls for sanity will just get derided as time goes on. On one side we have academics title-baiting, as they did with "attention," so papers get attention, instead of coining a new word or phrase like 'correlation network', 'word window', or 'hyper hyper-networks', or at least not overloading existing terms like 'backtracking' and 'backpropagation'. And on the other end of the collective full-court press, corporations keep passing assistants (tools) off as human-like, with names such as Cortana and Siri, for the sake of branding and marketing.
@TheNewton 4 months ago
@@junyuzheng5282 "Then what are 'reason', 'thoughts', 'knowledge'?" Reason, thoughts, knowledge, etc. are more than what gets hallucinated in your linear algebra formulas.
@RishiKaura 4 months ago
Sincere and smart students
@GeorgeMonsour 4 months ago
I want to know more about 'filters.' Are they human or computer processes, or mathematical models? The filters are a reflection I'd like to understand more about; I hope they are not an inflection, which would be an unconscious pathway. This is a really sweet dip into the currency of knowledge, and these students are to be commended. However, in the common world there is a tendency developing towards a 'tower of Babel', and greed may have an influence that we must be wary of. I heard some warnings in the presentation that consider this tendency. I'm impressed by these students. I hope they aren't influenced by the silo system of capitalism, and that they remain at the front of the generalization and commonality needed to keep bad actors off the playing field.
@egonkirchof 3 months ago
In summary, Transformers mean using tons of weight matrices, leading to way better results.
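For readers wondering what "tons of weight matrices" looks like concretely, below is a minimal numpy sketch of scaled dot-product attention, the core matrix computation in a Transformer layer. It is illustrative only, not from the lecture; the names X, W_q, W_k, W_v and the toy sizes are assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core Transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

# Toy example: 4 tokens with 8-dim embeddings; W_q/W_k/W_v stand in for
# the learned weight matrices the comment above alludes to (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # token embeddings
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)                                     # (4, 8)
```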
@IamPotato_007 5 months ago
Where are the professors?
@riju1956 5 months ago
So they stand for 1 hour?
@rockokechukwu3343 5 months ago
Is it okay to cheat in an exam if you have the opportunity to do so?
@TV19933 5 months ago
future artificial intelligence i was into talk this probability challenge Gemini ai talking ability rapid talk i suppose so it's splendid
@Anbu_Sampath 5 months ago
It would be great if CS25 V4 got its own playlist on YouTube.
@hussienalsafi1149 5 months ago
☺️☺️☺️🥰🥰🥰
@ramsever5087 4 months ago
What is said at 13:47 is incorrect. Large language models like ChatGPT and other state-of-the-art language models do not only have a decoder in their architecture. They employ the standard transformer encoder-decoder architecture, which consists of two main components:

The Encoder: encodes the input sequence (prompt, instructions, etc.) into vector representations, using self-attention mechanisms to capture contextual information within the input sequence.

The Decoder: takes in the encoded representations from the encoder and generates the output sequence (text) autoregressively, one token at a time, using self-attention over the already generated output as well as cross-attention over the encoder's output to predict the next token.

So both the encoder and decoder are critical components. The encoder allows understanding and representing the input, while the decoder enables powerful sequence generation by predictively modeling one token at a time while attending to the encoder representations and past output. Having only a decoder without an encoder would mean the model can generate text but not condition on or understand any input instructions/prompts, which would severely limit its capabilities. The encoder-decoder design, with each component's self-attention and cross-attention, is what allows large language models to understand inputs flexibly and then generate relevant, coherent, and contextual outputs.
@gleelantern 3 months ago
ChatGPT, Gemini, etc. are decoder-only models. Read their tech reports.
@primedanny417 1 month ago
You should really read the GPT-1 paper. Otherwise, please source your claim that ChatGPT's models are encoder-decoder architecture.
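To make the decoder-only point in these replies concrete, here is a minimal numpy sketch of causal (masked) self-attention, the mechanism GPT-style models use to condition on the prompt and the tokens generated so far without a separate encoder. Illustrative only; the names and toy sizes are assumptions.

```python
import numpy as np

def causal_self_attention(X, W_q, W_k, W_v):
    """GPT-style masked self-attention: each position attends only to
    itself and earlier positions, so one decoder stack handles both the
    prompt and the generated continuation -- no separate encoder."""
    d_k = W_q.shape[-1]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_k)
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)         # hide future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                         # 5 tokens, d_model = 16
W_q, W_k, W_v = (rng.normal(size=(16, 16)) for _ in range(3))
print(causal_self_attention(X, W_q, W_k, W_v).shape) # (5, 16)
```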
@laalbujhakkar 5 months ago
Stanford's struggles with microphones continue.
@jeesantony5308 5 months ago
it is cool to see some negative comments in between lots of pos... ✌🏼✌🏼
@laalbujhakkar 5 months ago
@@jeesantony5308 I love the content, which makes me h8 the lack of thought and preparation that went into the delivery of all that knowledge even more. Just trying to reduce the loss as it were.
@si4009 1 month ago
This is not what I expected. What a completely terrible explanation. I was expecting a complete history of Transformers: the fall of the Decepticons, or how Optimus Prime came to be. A very misleading title indeed.