Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5

78,030 views

Lex Fridman

1 day ago

Comments: 122
@cgrxi 6 years ago
0:00 Introduction by Prof. Lex
1:04 Fundamental nature of reality: does God play dice? (refers to Albert Einstein)
1:54 Philosophy of science: instrumentalism and realism
4:08 The unreasonable effectiveness of mathematics [1][2]
6:08 Math and simple underlying principles of reality
7:26 Human intuition and ingenuity
8:56 Role of imagination (refers to Einstein's special relativity)
10:00 Do we / will we have tools to describe the process of learning mathematically? (refers to Hooke's microscope) [3][4][5]
12:16 From a mathematical point of view: what is a great teacher?
13:48 Mechanism in learning and the essence of a duck (bumper-sticker material. Quack quack!!)
16:58 How far are we from integrating the predicates? (refer to the duck content to understand this question)
18:17 Admissible set of functions and predicates (talks about VC theory [6])
23:01 What do you think about deep learning? (mentions Churchill's book "The Second World War" [7], shallow learning [8])
27:57 AlphaGo and the effectiveness of neural networks [9]
30:46 Human intelligence and Alan Turing
33:34 Big-O complexity and worst-case analysis
38:49 Opinion on how AI is treated as coding to imitate a human being
39:44 Learning and intelligence
42:09 Interesting problems in statistical learning (mentions the digit-recognition problem and the importance of intelligence)
48:48 Poetry, philosophy, and mathematics
50:40 Happiest moment as a researcher
References:
[1] Wigner, Eugene P. "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." In Mathematics and Science, pp. 291-306. 1990.
[2] www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableEffectiveness.pdf
[3] kzbin.info/www/bejne/aJjXo56uqdiEaM0
[4] books.google.com/books?hl=en&lr=&id=ISP_gRwuz94C&oi=fnd&pg=PR1&dq=Micrographia+hook&ots=LF1VWdxjQg&sig=Qca7QzxkynZXc4AGy0YldNdQP_k
[5] Hooke, Robert. "Micrographia: Or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses with Observations and Inquiries Thereupon." Royal Society: London, UK, 1665.
[6] www.cs.cmu.edu/~bapoczos/Classes/ML10715_2015Fall/slides/VCdimension.pdf
[7] www.goodreads.com/book/show/25587.The_Second_World_War
[8] files.meetup.com/18405165/DLmeetup.pdf
[9] www.imdb.com/title/tt6700846/
@nands4410 6 years ago
Thanks
@mithunchandramohan2452 5 years ago
You're awesome !
@tanishshrivastava2442 3 years ago
Thanks.
@JohnGFisher 5 years ago
Keep revisiting this and slowly understanding more. This may be the best podcast on the channel.
@SassePhoto 8 months ago
This is the most underrated interview I have ever come across. It deserves MILLIONS of Views. A genius who had a brilliant idea 30 years before he was appreciated
@AZTECMAN 6 years ago
I appreciate you sharing this with us all Lex. Gratitude.
@GodofStories 1 year ago
Invariance - 43:53: "When mathematicians first created deep learning, they immediately recognized that they use way more training data than humans need. How to decrease training data by 100x and still have a high enough success rate? That is the real question of learning-intelligence." - Vapnik
@douglasholman6300 6 years ago
Hey Lex, Thanks for making this content free and accessible online! Very generous and much appreciated.
@kentthedev 6 years ago
I can't remember the last time I really enjoyed a great conversation like this one. These are good questions by Lex, and I am so excited and thrilled by the intelligence of Vladimir Vapnik.
@GodofStories 1 year ago
I wish Lex would've asked the meaning-of-life question to Mr. Vapnik; that is always my favorite part of every pod. Lex, round 2 please! Glad to know Mr. Vapnik is still alive.
@GodofStories 1 year ago
Oooh wow, I didn't realize how lucky we were: podcast #71 is the 2nd round. Awesome, let's gooo!
@kodee2 6 years ago
I have to express my gratitude for uploading stuff like this. Thanks so much, Lex, and thanks to Dr. Vapnik for taking the time to share some of the insights he has gained throughout his life.
@rashidskh 3 years ago
Thank you, Lex. It was very interesting to listen to Professor Vladimir Vapnik.
@calebmunuru3598 1 day ago
Three years later
@GodofStories 1 year ago
Fascinating. Mr. Vapnik's pure-mathematics arguments are very much a sharp contrast and a welcome viewpoint on learning. Maybe a round 2 of many of your earlier pods, including Mr. Vapnik here?
@GodofStories 1 year ago
This is fascinating. I had to pay more attention to appreciate the detail in Mr. Vapnik's arguments. I feel Lex was outmatched by just the pure mathematical arguments of Mr. Vapnik, which is fair. It would be hard for anyone who isn't a pure mathematician to contest him and have a debate. It would be astonishing, though, to see a debate or discussion between mathematicians of this level. Maybe Lex can do way more technical podcasts than the general, abstract, and cultural pods he is doing more of these days. Though I still love that he is still doing technical pods on various scientific topics. Maybe a round 2 of many of your earlier pods, including Mr. Vapnik here?
@ericgoldstein4734 4 years ago
Hi Lex, I have enjoyed many of your podcasts and was very happy and very interested to see you did these interviews with Vladimir Vapnik. It would be extremely interesting if you would interview Hava Siegelmann. She was, among many other things, the co-inventor of Support Vector Clustering with Vapnik; she, in fact, improved his labeled clustering to an unlabeled clustering algorithm - becoming one of the most widely used in industry. She is the inventor of Super-Turing computation, the only functional alternative to Turing computation. She was the founder and director of DARPA’s Lifelong Learning program for the past four years. Lifelong Learning is the most advanced program for AI capable of learning in real time and applying learned experience to previously not experienced circumstances. I would love to see an interview! Thanks, Eric
@volotat 6 years ago
Another great video. Thanks for that amazing content, Lex.
@itsalljustimages 6 years ago
Just the other day I was thinking about how ideas are generated in different parts of the world within a definite time period, simultaneously. Glad to hear that a prominent mathematician thinks the same way (31:34). It's Platonic and poetic. And I have heard many mathematicians say this sort of thing. Ramanujan is also a great example that makes this theory interesting.
@dljve 3 years ago
I think in 7:13 he says "residuals", not "details" (as in the subtitles). That's an important difference for the meaning of what he's saying.
@mlliarm 5 years ago
The legendary Vapnik !!! Thank you Lex !
@goodlack9093 6 months ago
This is so good, wish there were more guests like the one in this vid nowadays too
@teegnas 3 years ago
Based on the number of views ... this podcast with Vapnik is greatly underrated
@343clement 6 years ago
I can't help but wonder if professor Vapnik could have expressed his thoughts a bit better if the interview was done in Russian.
@artlenski8115 6 years ago
I am pretty sure the answer is yes. He's got lots of knowledge and wisdom in him; unfortunately, the communication bottleneck is language.
@343clement 6 years ago
@@artlenski8115 If the answer is yes, then it's a shame. I'm sure Lex could have done the interview in Russian and then translated it in the subtitles, although that would be much more time-consuming to prepare the video. I guess you can't have the best of both worlds.
@lexfridman 6 years ago
@@343clement The answer is absolutely yes. I think about this a lot. A lot of brilliant minds are lost to history due to this language bottleneck. Perhaps the best approach for Russian speakers, I think, is to mix Russian and English together as I feel based on topic and then later translate, but I haven't tried that yet. It would be tough on many levels. But you've inspired me to at least try.
@343clement 6 years ago
@@lexfridman I cannot thank you enough for taking the time to edit and upload these videos — thank you very much! By all means, please experiment with the format of the interviews. By the way, you playing the guitar while riding in Black Betty is the coolest thing ever :)
@volotat 6 years ago
@@lexfridman That would be awesome, considering how many Russian speakers are here. :)
@mauriciopereira4824 6 years ago
That was a really good interview, thanks for sharing.
@srh80 2 years ago
What a spectacularly intelligent person. A very different perspective than mainstream machine learning media.
@gavinlin6636 5 years ago
1. "I'm not sure that intelligence is just inside of us. It may also be outside of us." 2. "I know for sure that you must know something more than digits." 3. Invariance theory might be the hope of understanding intelligence?
@williamramseyer9121 4 years ago
Wow! Incredible. What an interview. Like a series of Zen koans in mathematical form. I especially loved Dr. Vapnik's discussion of what a great teacher does. Two questions: 1) as physics drives deeper into the nature of reality will we find that math is not just a model but can fully represent, i.e. is, reality; and 2) if other universes exist do they have the same mathematics? Thanks!
@qorod123 6 years ago
Wow, what an interesting conversation. Thank you so much, Lex, for the video. I really appreciate it and am looking forward to more videos like this. Cheers!
@dynamicgecko1213 4 years ago
The duck conversation was very intriguing and enjoyable.
@manosangelis9826 5 years ago
25:56 The representer theorem says the optimal solution ... is a shallow network, not deep learning. I cannot understand why this holds. Can somebody explain or give me a reference? Thanks
@3cheeseup 4 years ago
The representer theorem says the optimal solution of a regularized learning problem in a reproducing-kernel space can be written as a finite kernel expansion over the training points, i.e. as a one-layer network. Deep learning, however, uses more than one layer.
@colouredlaundry1165 4 years ago
@@3cheeseup This is a very interesting point. But if we consider that deep learning is (1) able to discover hidden structure in the data (feature learning) and (2) model a nested hierarchy of concepts, does this mean that you should manually translate points (1) and (2) into a shallow network? In other words, you can approximate a DL model using a finite 1-layer neural network, BUT in doing so you need to manually introduce concepts (1) and (2) into the shallow network.
@3cheeseup 4 years ago
@@colouredlaundry1165 The representer theorem doesn't say anything about the number of neurons you need. It could be 1, 100, a googol, or a googolplex to represent your target function. As we only have limited resources, I don't think the theorem is of any practical importance to us.
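The theorem this thread is debating can be made concrete in its home setting, kernel ridge regression: the minimizer of a regularized empirical risk in a reproducing-kernel space is a finite weighted sum of kernels centered at the training points — structurally a one-layer ("shallow") expansion. The sketch below is illustrative only; the data, kernel width, and regularizer are arbitrary choices, not anything from the podcast.

```python
import numpy as np

# Representer theorem, sketched for kernel ridge regression:
# the minimizer of  sum_i (f(x_i) - y_i)^2 + lam * ||f||^2
# has the form  f(x) = sum_i alpha_i * k(x_i, x),
# a finite expansion over the training points alone.

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))           # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)    # noisy targets

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

lam = 1e-2                                         # ridge regularizer
K = rbf_kernel(X, X)
# Dual coefficients solve (K + lam*I) alpha = y
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_new):
    # The fitted function lives entirely in the span of kernels
    # centered at the 40 training points.
    return rbf_kernel(X_new, X) @ alpha

X_test = np.linspace(-3.0, 3.0, 100)[:, None]
mse = np.mean((predict(X_test) - np.sin(X_test[:, 0])) ** 2)
```

Note the contrast the reply above draws: the theorem guarantees this shallow form only for kernel-style regularized problems, and the number of expansion terms equals the number of training points, which is exactly the resource objection raised.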
@NoNTr1v1aL 1 year ago
Absolutely amazing video!
@3145mimosa 6 years ago
Thank you for uploading such a beautiful interview! I enjoyed this video so much!
@sreramk1494 6 years ago
Great conversation! But I beg to differ with Vladimir Vapnik on the role of imagination in discoveries. Imagination and human intuition play an active role in extending existing laws and axioms and in constructing theories to fit observations. What he worked on might not have required imagination and intuition, but when it comes to theorizing and extending existing laws, or the language of mathematics itself (or physics), human intuition and imagination are essential. Every sub-domain people specialize in has its own unique demands.
@colouredlaundry1165 4 years ago
Well, I believe it is clear that Vladimir is simply talking about his personal life experience; of course, every person has a different life experience. Maybe in Einstein's discoveries imagination played a great role.
@Dondlo46 2 years ago
This is a really good mental and hearing workout, his accent is really hard to listen to, but I like it
@Hungry_Ham 6 years ago
Very insightful. Learned a lot about ducks
@sebastianavalos2055 6 years ago
Beauty and poetry! Again, thanks Lex!
@channel-ug9gt 6 years ago
exactly !!! this is like "... songs, paintings, writings, dance, drama, photography, carpentry, crafts, love, and love ..."
@TennisNeedsMore 5 years ago
Thanks Lex, this talk was amazeballs! Anyone got what the MIT guy's name was @52:27? Dodley? Or something
@martisl 6 years ago
With each interview, I'm getting more interested in the subject. Thank you for the great content!
@mehmetaliozer2403 3 years ago
7:12 He said, "We are looking only residuals".
@6pat 3 years ago
Thanks a lot for the podcast, it was very interesting to listen to.
@colouredlaundry1165 4 years ago
26:23 He is definitely right, but we cannot wait 20 years for some brilliant mathematician to discover that. In the meantime, I think it is good to use DL, which is not perfect but gets the job done.
@GodsNode 5 years ago
24:29 Lays the smackdown on the dilettante and mathematically deficient.
@GuillermoValleCosmos 6 years ago
Haha, I liked his response to the AlphaGo question! On the other hand, I think it's misleading. Just like in math, a problem's difficulty should be gauged by how hard it seems before solving it, not how hard it is in hindsight.
@koozdra 6 years ago
really interesting conversation, thank you!
@s25412 3 years ago
For folks who are having a hard time understanding, turning on the video's captions should help.
@JoshTechBytes 6 years ago
This was incredible.
@channel-ug9gt 6 years ago
What does he say at 3:49: "the GOD or GOAL of ML is to learn about conditional probability"? I think it's "goal", but then the next sentence is about God playing dice. I think he says "goal" first and then "God" in the following sentence, but they sound so similar and are very close to each other in the dialogue.
@FlyingOctopus0 6 years ago
I am not sure we can derive a theory of intelligence purely from math. In physics the problems are easier, because we can create meaningful equations which can guide us. Examples include Max Planck's quantization of energy, Albert Einstein's relativity theory, Dirac's antiparticles, or currently string theory. On the other hand, in biology, chemistry, etc., there is less insight from equations. For example, the effects of protein folding are very difficult to deduce from equations, and we have to use computation instead. The same could be true of intelligence: it may have a mathematical description, but one that is very messy and does not adhere to our sense of mathematical beauty. This could of course change as we find more connections and build a consistent theory, so that initially messy ideas become more and more intuitive and beautiful, but the core does not change.

Using the beauty and elegance of math as a heuristic is a little dangerous. For example, the geocentric theory at the time had a nicer description than the heliocentric theory. The reason was that more correction terms had to be added to the heliocentric theory to match the precision of the geocentric one: ellipses were not yet used to describe motion, but compositions of circular motions instead. Only after the empirical findings of Kepler did we switch to ellipses. Another, more anecdotal example is the dynamo theory of Walter M. Elsasser, which describes why planets have magnetic fields. He told his theory to Albert Einstein, but "he didn't much believe it. He simply could not believe that something so beautiful could have such a complicated explanation", in the words of Einstein's assistant (Einstein preferred not to state his opinion). The theory was correct; Einstein's intuition was wrong. (Source: top of the 3rd page of the PDF: www.geosociety.org/documents/gsa/memorials/v24/Elsasser-WM.pdf)

Also, string theory is currently getting some backlash because of the lack of results despite a decades-long effort. The theory has some promising connections and seems a perfect fit for the missing element in our understanding of physics, but there are also some ugly parts, like the need for more dimensions or too many possible universes. So we have to be careful not to be too focused on mathematical beauty; nature can just be messy, or we might not have the mathematical tools to appreciate its beauty.
@g1org1dalaka1 3 years ago
Einstein discovered relativity from equations, by the way; he saw that time was not constant from those derived equations.
@williamscott1697 5 years ago
Gold. This is gold. Very nice to hear other perspectives. This guy is stubborn, lol.
@channel-ug9gt 6 years ago
Is he saying "setting" at 3:11?
@gorkhajankalyan29 2 years ago
Great work, thanks to you both.
@channel-ug9gt 6 years ago
What is he saying at 1:35? "It is ???? described". What is the missing word?
@anas.2k866 3 years ago
Hi, what did he mean by "predicate", please? I googled it but found different definitions.
@GodofStories 1 year ago
Think it's just kind of like a qualitative description or sentence.
@edobroserdov 1 year ago
podcasts.google.com/feed/aHR0cHM6Ly9saXN0ZW5ib3guYXBwL2Yvc3M2Y1NjQ3phSy0/episode/c3M2Y1NjQ3phSy06ZEdlaFp0YnJSc0g?ep=14. from 59:28
@RalphDratman 4 years ago
But there are no simple invariants for any complicated real-world classification task. If there were, machine learning would not be necessary; we could just use straight computer code.
@jigarkdoshi888 6 years ago
Great stuff, though the editing somewhat breaks the flow. Why not put the whole conversation as is? I like the stutters and misunderstandings-of-questions type of conversation :-) There is something there as well.
@cppmsg 6 years ago
True; also, when interviewers interrupt and talk over the smart person, it only does damage.
@Alp09111 6 years ago
Thanks Lex, that was great!
@bradleyedwards4604 6 years ago
Another interesting interview, but I think all of the interviews would be better with fewer leading questions and professing by the interviewer.
@Quarky_ 6 years ago
His comment about music is similar to the ideas in GEB!
@SaveriusTianhui 5 years ago
God bless Vladimir Vapnik
@AbhijeetSaxenaIN 6 years ago
Ground Truths guide us all
@1674-q4o 5 years ago
I understand some of what's being said here.
@javidq 5 years ago
Mathematical explication of implicit invariants can be at least partially done for some senses and particular problems; in a general sense, encoding homeomorphisms. But how do we discover invariants when even a human observer doesn't see them, or perceives incorrect invariants? ))
@Katharina643 1 year ago
What a brilliant mind !
@ilya1kravchenko468 6 years ago
So in a way, the problem of intelligence or at least the basis regarding the concept of a good teacher hinges on metaphorical truth and linguistic precision.
@slideai243 2 years ago
Dear Mr. Fridman, this is a good video. I am researching SVMs and have a paper to introduce to you and Dr. Vapnik. Could you please let me know Dr. Vapnik's contact information? Thank you.
@HooliganSadikson 3 years ago
I love a lot of blah blah blah!! Great podcast!!!!
@torsteinsrnes4872 5 years ago
Those subtitles should probably say "weak and strong convergence", not "big and strong ...".
@bornroller6603 5 years ago
Wonderful. Thank you.
@Arifi070 1 year ago
very wise man!
@sarbajitg 5 months ago
24:28
@GodofStories 1 year ago
The secret character of the podcast - the Duck.
@GodsNode 5 years ago
He shot down neural networks even for a hypothetical scenario, lol
@Kareem-hl8hj 6 years ago
very interesting person
@alo1236546 4 years ago
Google suggested the Moonlight Sonata when I played this video. Respect, sir.
@macromak4158 1 month ago
I understand one thing from this conversation. AI will not take over humans just because AI is missing intelligence.
@timothymuldoon8473 6 years ago
Very sad that this only gets 455 views
@oybekrustamov5640 6 years ago
But, 455 relevant viewers.
@cppmsg 6 years ago
It only came out today. Give it another day or two.
@hintergedankee 1 year ago
Everything out of Ramanujan's mind came out of his intuition.
@pumba6099 2 years ago
"Fuck like a duck" - I swear that's what I heard, lol. Then he sheepishly smiles. Love it.
@GodofStories 1 year ago
lol. that's an interesting example he just quacked out. quack like a duck
@AZTECMAN 6 years ago
AGI should make games and enjoy music.
@Andre_Foreman 3 years ago
BACK UP IN THE INTRO HERE ALEXANDER
@JohanKarlsson 11 months ago
Brilliant
@douglasholman6300 6 years ago
I strongly disagree with Vapnik on his opinion about intuition. He seems dogmatic in his dismissal of the idea, however, through history we have seen a number of human phenotypes that produce significant intellectual achievement. One such phenotype that appears to be convergent in many individuals who have made tremendous achievements and cracked open entire academic disciplines (e.g. Einstein) is that of the visionary. Someone who is able to intimately understand a problem so that they may sufficiently abstract it to allow for giant leaps of progress by using intuition or visualization rather than iterative logical steps. I feel like Vapnik may be more of the literal, autistic type of individual who is very good at specializing and using brute force logic to iterate from axioms to a model within his discipline. I would not be too quick to discount the role of intuition particularly in the more demanding, technical fields such as pure mathematics and theoretical physics as opposed to machine learning and statistics.
@JoseAyerdis 2 years ago
Lex sounds a bit nervous while interviewing Vapnik, although it's hard not to be, face to face with him!
@GodofStories 1 year ago
Pure mathematical genius. Would love more intense pods on math, probably the hardest subject in the universe (as it describes it, quite literally).
@kparag01 6 years ago
Thnx
@urasigal9359 7 months ago
A vocational-school student talking with a scientist.
@nishanagarwal6068 1 year ago
Couldn't grasp this one..
@15997359 11 months ago
I can hypothesize: even though God knows all conditional probabilities, he still needs to consider all the outcomes without bias, which is impossible for any observer...
@____uncompetative 2 years ago
*does God play dice?* God is to our universe what Gary Gygax is to _Dungeons & Dragons_. God doesn't necessarily play with dice, but defines what kinds of dice (d6, d10, d20, etc.) should be the basis of his loose adventures, which others play under a DM who follows the D&D rules, which had an intelligent designer in Gary Gygax. There are other games which use dice, such as _Monopoly_, and therefore you can logically infer the existence of exouniverses that support alien life.
@jessicahardesty3358 6 years ago
💖
@GodsNode 5 years ago
NO IMAGINATION!!! lol
@GodsNode 5 years ago
@George Hatoutsidis Agreed. I think imagination is very important for finding or creating something valuable with math. Perhaps he views imagination as working back from fantasy or thinking in terms of beauty. But imagination can be as simple as manipulating equations in a creative way to discover/uncover some valuable insight.
@GodsNode 5 years ago
@George Hatoutsidis Also, I would say you do not need knowledge for imagination; rather, you need knowledge to increase the chances that you will be able to manifest your imagination into reality.
@charlesrump5771 2 years ago
What?
@PhumlaniNxumalo 6 months ago
1.25x