Stanford CS229 I Machine Learning I Building Large Language Models (LLMs)

792,508 views

Stanford Online

Days ago

Comments: 298
@CG-hj1cu
@CG-hj1cu 4 ай бұрын
I'm a student for life....approaching 40.....never had the privilege of attending a university like Stanford. To get access to these quality lectures is amazing. Thank you
@Fracasse-0x13
@Fracasse-0x13 4 ай бұрын
This is a quality lecture?
@KevinLanahan0
@KevinLanahan0 4 ай бұрын
@@Fracasse-0x13 For people who don't have access to education, yes, it is a quality lecture.
@darrondavis5848
@darrondavis5848 3 ай бұрын
i am living my dreams
@shaohongchen1063
@shaohongchen1063 3 ай бұрын
@@Fracasse-0x13 Why is this not a quality lecture?
@MyLordaizen
@MyLordaizen 3 ай бұрын
They're all the same. Everything is on the web; you don't need a certification to tell the world you know it. Build the best.
@paolacastillootoya8904
@paolacastillootoya8904 2 ай бұрын
He is doing his part to encourage women in STEM.
@ProgrammingWIthRiley
@ProgrammingWIthRiley Ай бұрын
Women have always been in STEM. We all know about Grace Hopper. Please let this go.
@ProgrammingWIthRiley
@ProgrammingWIthRiley Ай бұрын
Look up Ruth David. She worked at the CIA, redid all of their tech infrastructure, and she's still alive!
@fan82209
@fan82209 Ай бұрын
haha absolutely
@astrolillo
@astrolillo Ай бұрын
You just want a STEM husband, nothing more.
@Originalimoc
@Originalimoc Ай бұрын
😮​@@astrolillo
@nothing12392
@nothing12392 5 ай бұрын
It is one thing to be a great research institution, but to be a great research institution that is full of talented and kind lecturers is extremely impressive. I've been impressed by every single Stanford course and lecture I have participated in through SCPD and YouTube, and this lecturer is no exception.
@stanfordonline
@stanfordonline 5 ай бұрын
Thank you for sharing your positive experiences with our courses and lectures!
@a2ashraf
@a2ashraf Ай бұрын
Wow, big words. Thank you for the comment, your words encouraged me to watch the whole thing and I don't regret it at all. Best decision!
@EduardoLima
@EduardoLima 3 ай бұрын
We live in a tremendous moment in time. Free access to the best lectures on the most relevant topic from the best university
@stanfordonline
@stanfordonline 2 ай бұрын
Thanks for your comment, we love to hear this feedback!
@devanshmishra-ez1tn
@devanshmishra-ez1tn 3 ай бұрын
00:10 Building Large Language Models overview
02:21 Focus on data evaluation and systems in industry over architecture
06:25 Autoregressive language models predict the next word in a sentence
08:26 Tokenizing text is crucial for language models
12:38 Training a large language model involves using a large corpus of text
14:49 Tokenization process considerations
18:40 Tokenization improvement in GPT-4 for code understanding
20:31 Perplexity measures model hesitation between tokens
24:18 Comparing outputs and model prompting
26:15 Evaluation of language models can yield different results
30:15 Challenges in training large language models
32:06 Challenges in building large language models
35:57 Collecting real-world data is crucial for large language models
37:53 Challenges in building large language models
41:38 Scaling laws predict performance improvement with more data and larger models
43:33 Relationship between data, parameters, and compute
47:21 Importance of scaling laws in model performance
49:12 Quality of data matters more than architecture and losses in scaling laws
52:54 Inference for large language models is very expensive
54:54 Training large language models is costly
59:12 Post-training aligns language models for AI assistant use
1:01:05 Supervised fine-tuning for large language models
1:04:50 Leveraging large language models for data generation and synthesis
1:06:49 Balancing data generation and human input for effective learning
1:10:23 Limitations of human abilities in generating large language models
1:12:12 Training language models to maximize human preference instead of cloning human behaviors
1:16:06 Training a reward model using softmax logits for human preferences
1:18:02 Modeling optimization and challenges in large language models (LLMs)
1:21:49 Reinforcement learning models and potential benefits
1:23:44 Challenges with using humans for data annotation
1:27:21 LLMs are cost-effective and have better agreement with humans than humans themselves
1:29:12 Perplexity is not calibrated for large language models
1:33:00 Variance in performance of GPT-4 based on prompt specificity
1:34:51 Pre-training data plays a vital role in model initialization
1:38:32 Utilize GPUs efficiently with matrix multiplication
1:40:21 Utilizing 16 bits for faster training in deep learning
1:44:08 Building Large Language Models from scratch
Crafted by Merlin AI.
@mz2317
@mz2317 16 күн бұрын
helpful
@bp3016
@bp3016 4 ай бұрын
If my teachers in school had looked this good, I wouldn't have missed a single class. He's handsome af.
@wecretion3504
@wecretion3504 4 ай бұрын
🤣🤣🤣😂
@thedelicatehand
@thedelicatehand 3 ай бұрын
No because fr
@MrC0MPUT3R
@MrC0MPUT3R 3 ай бұрын
Came for the speaker; stayed for the knowledge.
@sevendoubleodex
@sevendoubleodex 3 ай бұрын
… strange
@kellymoses8566
@kellymoses8566 3 ай бұрын
I'm a straight dude and even I'm like "DAMN!"
@yanndubois3914
@yanndubois3914 5 ай бұрын
Slides: drive.google.com/file/d/1B46VFrqFAPAEj3kaCrBAtQqeh2_Ztawl/view?usp=sharing
@Imperfectly_perfect_007
@Imperfectly_perfect_007 4 ай бұрын
Thank you sir... I heartily appreciate it 😊... the lecture was awesome 🤌
@junnishere00
@junnishere00 4 ай бұрын
Thank you so much. I really appreciate it.
@helloadventureworld
@helloadventureworld 4 ай бұрын
The lecture was perfect. Is there a playlist for the whole CS229 class from the same semester as this video? All I found was from before 2022, which left me wondering.
@yanndubois3914
@yanndubois3914 4 ай бұрын
@@helloadventureworld no, the rest of CS229 has not been released and I don't know if it will. This is only the guest lecture.
@helloadventureworld
@helloadventureworld 4 ай бұрын
@@yanndubois3914 Thanks for the response and information you have shared :)
@wop130
@wop130 Ай бұрын
Damn. That lecturer is fineeee. 😍
@thedelicatehand
@thedelicatehand 3 ай бұрын
Suddenly I am interested in LLMS
@meelijah5474
@meelijah5474 2 ай бұрын
I might not know what you are saying but I have the same feeling as you lol.
@SimonaVermiglio
@SimonaVermiglio Ай бұрын
😂😂😂
@접니다-q6y
@접니다-q6y Ай бұрын
🤣🤣
@DonTiagoDonato
@DonTiagoDonato Ай бұрын
Why the picture of Zé Pequeno ?
@emilycooper500
@emilycooper500 Ай бұрын
😂
@SudipBishwakarma
@SudipBishwakarma 5 ай бұрын
This is really a great lecture: super dense but still digestible. It's not even been 2 years since ChatGPT was released to the public, and seeing the rapid pace of research around LLMs, and how quickly they keep getting better, is really interesting. Thank you so much; now I have some papers to read to further my understanding.
@adamm2e
@adamm2e 19 күн бұрын
As someone who has worked in corporations at the D- and C-level, and as a lifelong learner (I studied at Harvard University, where one professor, Malan, changed my life in terms of CS), I am always impressed by the technical knowledge of the lecturers and the way difficult information is made understandable through Stanford CS, the GSB, and the associated schools. Quite grateful that you are sharing the next chapter in our paradigm shift (agentic AI, et al.) with our future leaders. 🎉🎉🎉
@anshdeshraj
@anshdeshraj 4 ай бұрын
Finally, someone said Machine Learning instead of slapping AI on everything!
@duartesilva7907
@duartesilva7907 3 ай бұрын
I feel that whenever someone talks about AI a lot it means that they know nothing about it
@paolacastillootoya8904
@paolacastillootoya8904 2 ай бұрын
Right? And a lot of people believe in Yuval Harari because of it.
@ReflectionOcean
@ReflectionOcean 5 ай бұрын
Insights By "YouSum Live"
00:00:05 Building large language models (LLMs)
00:00:59 Overview of LLM components
00:01:21 Importance of data in LLM training
00:02:59 Pre-training models on internet data
00:04:48 Language models predict word sequences
00:06:02 Auto-regressive models generate text
00:10:48 Tokenization is crucial for LLMs
00:19:12 Evaluation using perplexity
00:22:07 Challenges in evaluating LLMs
00:29:00 Data collection is a significant challenge
00:41:08 Scaling laws improve model performance
01:00:01 Post-training aligns models with user intent
01:02:26 Supervised fine-tuning enhances model responses
01:10:00 Reinforcement learning from human feedback
01:19:01 DPO simplifies reinforcement learning process
01:28:01 Evaluation of post-training models
01:37:20 System optimization for LLM training
01:39:05 Low precision improves GPU efficiency
01:41:38 Operator fusion enhances computational speed
01:44:23 Future considerations for LLM development
@dr.mikeybee
@dr.mikeybee 5 ай бұрын
This is very well done. It's super easy to understand. I think your students should learn a lot. It's a great skill to be able to present complex material in a simple fashion. It means you really understand both the material and your audience.
@megharajpoot9930
@megharajpoot9930 2 ай бұрын
This course has so many insights and gives a quick summary view of LLMs. I have also gone through a paid Coursera course; this one is equally good and free. Thanks for the video.
@mukammedalimbet2351
@mukammedalimbet2351 3 ай бұрын
Great, thanks for sharing! One thing I would suggest is to transcribe or subtitle the questions asked by the students. That way we could better understand the lecturer's answers.
@majidmehmood3780
@majidmehmood3780 3 ай бұрын
People should first learn about basic language models like bigrams and unigrams. These were the first language models, and Stanford really has good lectures on them.
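A bigram model is small enough to sketch in a few lines. Here is a minimal, illustrative Python version; the toy corpus and function name are invented for this example:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count bigram frequencies and normalize them into
    next-word probability distributions P(next | prev)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        # Add start/end markers so the model also learns sentence boundaries.
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    # Normalize counts into probabilities for each context word.
    return {prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for prev, nxt in counts.items()}

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram(corpus)
print(model["the"])  # {'cat': 0.666..., 'dog': 0.333...}
print(model["cat"])  # {'sat': 0.5, 'ran': 0.5}
```

The same counting-and-normalizing idea underlies all n-gram models; neural language models replace the count table with a learned function but predict the same conditional distribution.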
@BMoRideNGrind
@BMoRideNGrind 3 ай бұрын
Really incredible delivery of complicated information. ❤
@김진혁-l4l
@김진혁-l4l 2 ай бұрын
What a wonderful lecture... these 1.75 hours were among the most valuable of my life.
@SerhiiFedorov-v1l
@SerhiiFedorov-v1l 3 ай бұрын
Thank you for the video! I am glad that we live in this time and can witness the development of AI technologies.
@ludwingdb
@ludwingdb 5 күн бұрын
Excellent Lecture. Thanks to my former colleagues at SCPD!
@Qxxliu
@Qxxliu 3 ай бұрын
One good point from the discussion of the difference between PPO and DPO: a reward model can reduce the dependency on labeled preference data.
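For context, DPO skips the separately trained reward model of PPO-style RLHF and computes a loss directly from policy and reference log-probabilities of the chosen and rejected responses. A minimal sketch of that per-pair loss; the log-probability values below are invented for illustration:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))).
    logp_w / logp_l are the policy's log-probs of the chosen (winner)
    and rejected (loser) responses; ref_* come from the frozen reference."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that already prefers the chosen response...
loss_good = dpo_loss(logp_w=-5.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-6.0)
# ...versus one that prefers the rejected response.
loss_bad = dpo_loss(logp_w=-9.0, logp_l=-5.0, ref_logp_w=-6.0, ref_logp_l=-6.0)
print(loss_good < loss_bad)  # True: agreeing with the preference gives lower loss
```

The beta hyperparameter controls how far the policy is allowed to drift from the reference model, playing the role of the KL penalty in PPO-based RLHF.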
@for-ever-22
@for-ever-22 5 ай бұрын
This is an amazing high-level overview of LLMs. Every aspect of an LLM was mentioned. Thank you for this amazing video; I'll come back here often.
@NeerajSharma-yf4ih
@NeerajSharma-yf4ih 2 ай бұрын
I had the privilege of attending an insightful 90-minute lecture by Stanford faculty, which greatly boosted my confidence in completing my thesis. The approach they shared aligns closely with my own research methodology, reinforcing the direction of my work. Grateful for this inspiring experience!
@sucim
@sucim 5 ай бұрын
Fabulous lecture! Goes into all important concepts and also highlights the interesting details that are commonly glossed over, thanks for recording!
@PratikBhavsar1
@PratikBhavsar1 5 ай бұрын
Very informative, updated and crisp~ keep them coming..don't stop now!
@RaushanKumar-qb3de
@RaushanKumar-qb3de 3 ай бұрын
Best explanation.. I'm watching at 3 am. Thanks
@sonudixit-h3w
@sonudixit-h3w 5 ай бұрын
Thanks a lot for sharing this. I would like to point out a correction at 20:28: consider the case prob(true_token)
@yanndubois3914
@yanndubois3914 5 ай бұрын
Yes that's correct, it's the baseline performance of a very bad language model.
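To make that baseline concrete: perplexity is the exponential of the average negative log-probability assigned to the true tokens, so a model that guesses uniformly over a vocabulary of V tokens has perplexity V. A small sketch; the probabilities and vocabulary size are invented for illustration:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each true token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

vocab_size = 50_000
# A model that always guesses uniformly over the vocabulary:
uniform = [1.0 / vocab_size] * 10
print(perplexity(uniform))    # ~50000: the "very bad model" baseline

# A model that gives each true token probability 0.5:
confident = [0.5] * 10
print(perplexity(confident))  # ~2.0: hesitating between about 2 tokens
```

This is why perplexity is often read as the effective number of tokens the model is hesitating between: 1 would be a perfect model, and the vocabulary size is the worst sensible case.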
@KelvinMeeks
@KelvinMeeks 3 ай бұрын
Great talk. Loved the level of detail, the insights, the pacing.
@minhatvo82
@minhatvo82 4 ай бұрын
fantastic, wonderful, significant, magnificent, outstanding, class of titans, world-class🎉
@Nightsd01
@Nightsd01 3 ай бұрын
What an awesome video. Data quality is a real issue, and even more interestingly, LLMs learn a lot like humans: introduce the simpler concepts first (training data prompts) and then the more complex subjects, and the LLMs learn more, just like humans.
@pkprasadtube
@pkprasadtube 2 ай бұрын
I love the way you answered the questions, very clear and precise.
@boeingpameesha9550
@boeingpameesha9550 4 ай бұрын
My sincere thanks for sharing it.
@samratsakya
@samratsakya 3 ай бұрын
Thank you for the gem, Stanford Online. A great starter; time to read more papers on LLMs.
@davemas70
@davemas70 18 күн бұрын
Very informative. Thank you for sharing!
@AnupSingh-kt5yn
@AnupSingh-kt5yn 3 ай бұрын
Great & Comprehensive Presentation 🎉
@goldentime11
@goldentime11 2 ай бұрын
Thanks for sharing this. It is a great introduction to LLM systems.
@brindaswayamprakasham2102
@brindaswayamprakasham2102 2 ай бұрын
this was genuinely interesting and easy to follow through, thanks!
@Mawfox_be_ite
@Mawfox_be_ite 13 күн бұрын
Hey buddy, I hope you kept playing squash after you left Vancouver. Good too see you got into Stanford as you had hoped. Cheers, AK
@ProgrammingWIthRiley
@ProgrammingWIthRiley Ай бұрын
Amazing lecture. Great job
@thunderbirdk
@thunderbirdk 3 ай бұрын
Wow! Such a wonderful presentation! Thanks so much!
@mohammedosman4902
@mohammedosman4902 5 ай бұрын
great lecture, wish the speaker had more time to go over the full presentation
@carvalhoribeiro
@carvalhoribeiro 4 ай бұрын
Great presentation and very helpful. Thanks for sharing this
@squidwardswift
@squidwardswift 3 ай бұрын
Dayum he’s fine
@cui_1152
@cui_1152 3 ай бұрын
Please give this dude 15 more minutes for tiling, Flash Attention, and data and model parallelism!!
@jay_wright_thats_right
@jay_wright_thats_right 3 ай бұрын
If you know all of that, you don't need 15 more minutes.
@maximshaposhnikov7970
@maximshaposhnikov7970 4 ай бұрын
What an amazing lecture, now want a part 2 about the topics that haven’t been touched upon 🤩
@bhoicebychoice5435
@bhoicebychoice5435 Ай бұрын
The scaling behavior of LLM fine-tuning emphasizes the importance of model size, task-specific considerations, and the trade-offs between different fine-tuning approaches. It highlights the need for practitioners to make informed decisions based on their specific needs and resources. As the field of LLMs continues to evolve, further research is needed to fully understand the complex interplay between model architecture, data, and fine-tuning strategies, especially at even larger scales. My research significantly contributes to the ongoing effort to develop more efficient and effective methods for adapting powerful LLMs to a wide range of downstream tasks.
@SuperLano98
@SuperLano98 4 ай бұрын
When will the other lectures be uploaded? This was so good!
@Joeystumbo
@Joeystumbo 25 күн бұрын
He is an alien: such a brilliant and young human being. Impressed.
@luxbran532
@luxbran532 4 ай бұрын
Great lecture
@xiaoxiandong7382
@xiaoxiandong7382 3 ай бұрын
would love to see the other recordings of cs25!
@sahejagarwal801
@sahejagarwal801 5 ай бұрын
Most amazing video ever
@meer.sohrab
@meer.sohrab 5 ай бұрын
The best one; we want more.
@zeep14dabs
@zeep14dabs 4 ай бұрын
This is amazing. Can you guys make a playlist for beginners? Thank you!
@nomi6761
@nomi6761 5 ай бұрын
How do people know that "adding more data" is not just increasing likelihood of training on something from the benchmarks, while "adding more parameters" is not just increasing the recall abilities (parametric memory capacity) of the model to retrieve benchmark stuff during evaluation? Really curious about that point.
@hamzadata
@hamzadata 5 ай бұрын
man this is amazing!
@danieleneh3193
@danieleneh3193 3 ай бұрын
This is a gold mine
@beansforbrain
@beansforbrain 3 ай бұрын
Looking forward to doing a postdoc at SU.
@futurecharacteristics
@futurecharacteristics 2 ай бұрын
It's never too late to start learning.
@imalive404
@imalive404 4 ай бұрын
@5:55 There is an approximation, and it lies in the axioms: probabilities should sum to 1. Second, the approximation is that the distribution only comes from the given corpora, and the given corpora are themselves an approximation of the total population, which we all know has its own biases.
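The first point, that the next-word distribution must sum to 1, is enforced by the softmax at the model's output. A minimal sketch with invented logit values:

```python
import math

def softmax(logits):
    """Turn arbitrary real-valued scores into a probability
    distribution: non-negative entries that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, -3.0, 0.5]  # invented next-token scores
probs = softmax(logits)
print(sum(probs))  # 1.0, up to floating-point rounding
```

The corpus point is separate: softmax guarantees a valid distribution, but the distribution is only as representative as the text it was estimated from.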
@cristovaoiglesias523
@cristovaoiglesias523 Ай бұрын
The Chinchilla paper demonstrated that for a fixed FLOPs budget, smaller models trained on more data perform better than larger models trained on less data.
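That trade-off can be sketched with the common approximation C ≈ 6·N·D (training FLOPs ≈ 6 × parameters × tokens). The numbers below mirror Chinchilla's reported scale but are purely illustrative:

```python
def flops(params, tokens):
    """Common rule of thumb: training compute C ≈ 6 * N * D FLOPs."""
    return 6 * params * tokens

# Roughly Chinchilla's setup: 70B parameters trained on 1.4T tokens.
budget = flops(70e9, 1.4e12)

# Under the same FLOPs budget, a much larger model sees far less data:
big_model_tokens = budget / (6 * 280e9)
print(f"{big_model_tokens:.2e}")  # 3.50e+11 -> only 0.35T tokens for a 280B model

# Chinchilla's rule of thumb: about 20 training tokens per parameter.
print(1.4e12 / 70e9)  # 20.0
```

The point of the paper is that at fixed compute, the smaller model trained on 4x the data ends up with lower loss than the 4x-larger model trained on a quarter of the data.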
@AlphaVisionPro
@AlphaVisionPro 3 ай бұрын
You can build my ❤️
@kartikeychhipa3813
@kartikeychhipa3813 5 ай бұрын
Just Amazing!
@balajinadar1503
@balajinadar1503 4 ай бұрын
Ignore this comment
Day 1: 19:05
Day 2: 28:38
Day 3: 41:05
Day 4: 1:00:00
@namazbekbekzhan
@namazbekbekzhan 3 ай бұрын
00:10 Overview of building large language models
02:21 Focus on data evaluation and systems in practice
06:25 Autoregressive language models predict the next word
08:26 Text tokenization and vocabulary size are crucial for language models
12:38 Tokenization and training tokenizers
14:49 Optimizing the tokenization process and token-merging decisions
18:40 GPT-4 improved tokenization for better code understanding
20:31 Perplexity measures the model's hesitation between words
24:18 Evaluating open-ended questions is a difficult task
26:15 Different ways to evaluate large language models
30:15 Steps for preprocessing web data for large language models
32:06 Problems with handling duplicates and filtering low-quality documents at scale
35:57 Collecting real-world data is crucial for practical large language models
37:53 Challenges in pre-training large language models
41:38 Scaling laws predict performance improvements with more data and larger models
43:33 Compute is determined by data and parameters
47:21 Understanding the importance of scaling laws in building large language models
49:12 Good data is crucial for better scaling
52:54 Inference for large language models is expensive
54:54 Training large language models requires high compute costs
59:12 Large language models (LLMs) need alignment fine-tuning to become AI assistants
1:01:05 Building large language models (LLMs) involves fine-tuning pre-trained models on the desired data
1:04:50 Pre-trained language models are optimized for specific user types during fine-tuning
1:06:49 Balancing synthetic data generation with human input is crucial for effective training
1:10:23 Challenges in creating content that exceeds human abilities
1:12:12 Generating ideal answers using preference maximization
1:16:06 Training a reward model using logits for continuous preferences
1:18:02 Training large language models with PPO and challenges in reinforcement learning
1:21:49 Discussion of reinforcement learning methods and the benefits of reward models
1:23:44 Problems with using humans as data annotators
1:27:21 LLMs are more cost-effective and offer better agreement than humans
1:29:12 Problems with perplexity and calibration in language models
1:33:00 Variability in GPT-4 performance depending on prompts
1:34:51 The importance of pre-training in large language models
1:38:32 Using GPUs for matrix multiplication can be 10x faster, but communication and memory are key
1:40:21 Reduced precision for faster matrix multiplication
1:44:08 Building large language models (LLMs)
Crafted by Merlin AI.
@sanjayg1728
@sanjayg1728 3 ай бұрын
Could you please share the link to the lecture on Transformers that you were referring to in the video?
@keshmesh123
@keshmesh123 3 ай бұрын
thank you! great lecture.
@sagemantaena
@sagemantaena 11 күн бұрын
i’d never skip his class.
@FemiAdigun
@FemiAdigun 21 күн бұрын
Thank you for this.
@F3lp1s
@F3lp1s 5 ай бұрын
So Amazing!
@Neilblaze
@Neilblaze 5 ай бұрын
Great content, thanks!
@esamyakIndore
@esamyakIndore 3 ай бұрын
Please share more Machine Learning lectures.
@enzoluispenagallegos5440
@enzoluispenagallegos5440 5 ай бұрын
Thank you for this
@web3global
@web3global 4 ай бұрын
Thank you! 🚀
@nataliatenoriomaia1635
@nataliatenoriomaia1635 3 ай бұрын
Can we please have access to the previous lecture about Transformers?
@MitatEfeÜnal-e3b
@MitatEfeÜnal-e3b 2 ай бұрын
I don’t know what the guy is talking about but imma watch HIM
@RaushanKumar-qb3de
@RaushanKumar-qb3de 2 ай бұрын
I like his teaching style and that laughter in between 😂😁🤙. Last one be careful heavyone
@perrystalsis1818
@perrystalsis1818 Ай бұрын
I'm just trying to get started in ML. Good god, this is really good. Do a YouTube channel already, or at least some blog updates.
@SyedShayanAliShah
@SyedShayanAliShah 3 ай бұрын
The reason Stanford graduates rule the world.
@njabulonzimande2893
@njabulonzimande2893 3 ай бұрын
LLM = chatbots:
- Architecture (neural networks)
- Training algorithm
- Data
- Evaluation
- Systems
@shoaibyehya3600
@shoaibyehya3600 5 ай бұрын
Impressive
@DonTiagoDonato
@DonTiagoDonato Ай бұрын
From Brazil 🇧🇷
@alexmoonrock
@alexmoonrock 3 ай бұрын
This interests me, but I have no coding experience. Any tips on where to start? Surely Stanford lectures? Coding 101, I guess. Anything helps :)
@doomed5206
@doomed5206 3 ай бұрын
Suddenly I'm interested in LLMs 😗😗😗
@jdk997
@jdk997 3 ай бұрын
Whoever records these videos needs to leave the slides up longer for the viewers to read as the speaker explains the concepts.
@jsherdiana
@jsherdiana 24 күн бұрын
Thank You
@SettimiTommaso
@SettimiTommaso 5 ай бұрын
Yes!
@E.T.S.
@E.T.S. Ай бұрын
Thank you.
@cherryfan9987
@cherryfan9987 5 ай бұрын
Thank u
@Zoronoa01
@Zoronoa01 3 ай бұрын
Where can we find the rest of the videos for CS229 summer 2024?
@my_mother168
@my_mother168 2 ай бұрын
so good ,
@hajrawaheed9636
@hajrawaheed9636 18 күн бұрын
Are the slides available online?
@Pl15604
@Pl15604 5 ай бұрын
The training algorithm is actually the key... It is because of RLHF that we have GPT-4
@not_amanullah
@not_amanullah 3 ай бұрын
thanks ❤️🤍
@AzharAli-n5c
@AzharAli-n5c 2 ай бұрын
great
@chrisj2841
@chrisj2841 4 ай бұрын
Has anyone here taken the class in which this lecture was given (CS229, summer 2024)?
@aminekhelifkhelif7306
@aminekhelifkhelif7306 3 ай бұрын
is there a way to add sections so we can return to specific parts later?
14 күн бұрын
Can someone please share the complete series link?
@mudassiria
@mudassiria 3 ай бұрын
The lecture is good, but the one thing I dislike is the frequent switching between the slides and the lecturer camera. The video should keep the slides on screen the whole time, with the lecturer camera as a mini-player in the bottom corner. The constant switching irritated me throughout the lecture and made my focus fluctuate.
@sokhibtukhtaev9693
@sokhibtukhtaev9693 Ай бұрын
What is the paper from last year that he mentions at 1:27:25, which is 50x cheaper and has better agreement than human annotators?