12a: Neural Nets

536,557 views

MIT OpenCourseWare

Comments: 282
@tommytan8571 4 years ago
Rest in peace, professor. He died in 2019; let us remember him by watching this again and again.
@omgcyanide4642 3 years ago
No way
@jeffreyanderson5333 3 years ago
Let's push this to a million views
@marcogelsomini7655 3 years ago
The great explainer
@avibrarbrar 3 years ago
Maybe understanding it and doing something worthwhile with it.
@sshaikh8104 2 years ago
What shocking news I just read. It feels like losing one of my own professors 😕 I am extremely sad
@OttoFazzl 8 years ago
This professor is amazing! His explanation of SVMs was one of the best and clearest I could find on the Internet.
@gaurav63105 8 years ago
I also started with SVMs and then decided to see his other lectures; he's so crisp
@alexm5914 8 years ago
I'm watching SVMs right now, and I think I might do that too...
@binoruv 7 years ago
Me too!!!
@magnumalba 4 years ago
It is not "this professor". It is one of the fathers of AI.
@ankitasahoo668 4 years ago
I agree too
@ahmedmoneim9964 8 years ago
Thanks MIT for making these lectures publicly available, it is simply great!!
@vinayreddy8683 7 years ago
Ahmed AbdelMounem don't build a bomb on the basis of this lecture
@StingBolt 3 years ago
@@vinayreddy8683 I wonder how idiots like you came here
@muhammadhamzahm1204 5 years ago
May you live in peace, professor Patrick! You're a giant in the field of machine learning. These lectures of yours are the biggest asset that beginners can use to climb. Thanks
@pandatobi5897 4 years ago
rest in peace* now. he's dead.
@NeuralxAi 5 years ago
I am from a village in Kashmir. We don't have teachers who can explain things at this level, and I totally depend on these great teachers at MIT. Lots of love, sir. I wish I could get you subscribers from my whole university. I can only say thank you so much for the quality education.
@chakibafraoucene397 5 years ago
he passed away :( last month
@NeuralxAi 5 years ago
@@chakibafraoucene397 RIP 😓
@sozejigar1326 3 years ago
Koshur here too. Your comment is at the top of the list.
@juliogodel 8 years ago
This is just great, MIT. How I wish you could upload all classes from Prof. Winston... I could keep watching them for days. Clarity and straight to the point. Marvelous!
@dr.mikeybee 7 years ago
I really like this course. When a professor understands the material, it can be clearly explained, and Professor Winston really understands the material.
@maffixwilliam5471 2 years ago
very true
@sharifk9860 3 years ago
What an amazing lecture! I have seen many neural network lectures. This one is by far the most comprehensive and easy to understand. I instantly fell in love with Prof. Winston. I hope he is now teaching God and his angels.
@willroman3595 2 years ago
We live in such an awesome time that this information is available to everyone, free of charge.
@soulysouly7253 4 years ago
Holy shit, everything is so clear. I also frickin' love when he explains very simply why we use that one specific function, why we square this, why we divide that, where that coefficient comes from, etc... and it all makes so much more sense than the gibberish written on the slides that I have to decipher every lecture.
@balllaktomas 7 years ago
It's sad that at our school we had a lecture on this and I was lost, but I think the teacher was too. And then this guy comes along with all elegance and no arrogance, providing this information and sharing it with people around the world. WELL PLAYED.
@adityanarendra5886 3 years ago
Prof. Winston, your explanations of AI have always fascinated me and inspired me into the field. Rest in peace, professor.
@RobBarter 3 years ago
Just happened upon this YouTube video and began watching it, as I have a passing interest in neural networks... then realised I recognised his name. Looked up and pulled down a book I bought back in 1992 (not opened in years): Artificial Intelligence by Patrick Henry Winston. Sorry to hear we've lost him.
@OhhBabyATriple 8 years ago
Winston is the best AI lecturer
@OttoFazzl 8 years ago
This.
@Schroeder2424able24 7 years ago
wow, had me fooled, he's so lifelike
@guywithaname5408 6 years ago
thelastphysician underrated reply
@jvanrs4928 4 years ago
Thanks MIT, initiatives like this can truly spark innovation
@ryanalopez 3 years ago
Good in-depth mathematical explanation of neural net components. If new to neural nets, I'd recommend first watching a few other videos that cover the overall design goals of neural nets, how they work at a high level, and the outputs they are trying to achieve, before jumping into the mathematical models used to describe errors and performance.
@jvwdigital 7 years ago
2 years later and this is still a great lecture. Amazing instructor. I actually watched the whole thing. Simple ideas only take a quarter century to find. We humans need to make more observations, put them together, and see what shakes out.
@kutilkol 5 years ago
8:55 disclaimer: there also exist neurons connected directly, without synaptic gaps, as proposed by Camillo Golgi, so both Cajal and Golgi were right. RIP Prof. Winston. Beautiful classes, thank you, sir.
@bohusb.6879 4 years ago
This professor is amazing. His lectures are so clear, and at the same time he goes really deep. Very well structured lectures.
@xXxBladeStormxXx 8 years ago
To think that as recently as 2010 they thought neural nets weren't worth spending much time on, and now the instructor, I'm guessing, felt compelled to update even the OCW playlist to include these videos, should give everyone an idea of how good a time it is to be studying these topics. In the course of just a few years, deep neural nets have become extremely relevant again. It's indeed a great time to be studying artificial neural networks.
@limitless1692 7 years ago
We are at the start of the AI age; being first here is an edge
@user-ol2gx6of4g 7 years ago
We have been at the start of the AI age since the 1950s.
@MrAlipatik 6 years ago
Wake me up when they create tiny computers on a chip that can calculate simultaneously, and all hell breaks loose.
@maffixwilliam5471 2 years ago
Thanks MIT for making this lecture public. The lecturer explained the concepts in a way that makes them crystal clear. Thanks. BTW, RIP to the lecturer; he did an honorable thing for the world, and I am benefiting from his work. Thanks again to him and MIT. Please keep up the great work.
@Nestorghh 7 years ago
World-class professor and lecture.
@bradjones2071 4 years ago
I agree. Everyone always assumes MIT professors will just leave you behind with their intelligence and not be able to connect with the average layperson, but that is an incorrect assumption. I can understand a lot of what he's talking about and am glad for the video.
@psrajoria 3 years ago
"All great ideas are simple. How come there aren't more of them? Well, because frequently, that simplicity involves finding a couple of tricks and making a couple of observations. So usually, we humans hardly ever go beyond one trick or one observation. But if you cascade a few together, sometimes something miraculous falls out that looks in retrospect extremely simple." - Prof. Winston
@mathforai-j5y 2 months ago
Well, this is the meaning of multi-head attention.
@danielfernandes1010 6 months ago
Oh my, that ending! That's the most beautiful thing I've heard today.
@EranM 3 years ago
Patrick writing on the blackboard is ASMR to my ears :>
@backpropalgo 1 year ago
Amazing content. I miss real blackboards like this. I have to admit that the prof looked to be struggling a bit. I heard he passed away, so I would just say thank you for a really great session, which I have shared with everyone in my own circle who had questions about the foundations/basics of modern AI.
@irazt 4 years ago
I wish I could have taken these courses in person. Thank you for sharing your knowledge with the world, professor
@JG_1998 2 years ago
Rest in Power, Dr. Winston.
@reda9877 4 years ago
Thank you, professor Patrick! You had extraordinarily simple explanations for complex principles! Thank you, MIT, for sharing this incredible content.
@nikre 4 years ago
What a privilege to be a student in this class.
@OnionKnight541 2 years ago
That was fantastic. At the end, he says this miracle was a consequence of two tricks plus an observation, and that all great ideas are simple and easy to overlook.
@bidhanmajhi 5 years ago
He explained it very well. Sadly he's no more. RIP
@ragy1986 1 year ago
It's the best video on NN on YouTube, bar none!
@jonelya 3 years ago
29:26, the best ever explanation of the chain rule. Thank you so much
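For readers who want that step written out: here is a sketch of the chain-rule decomposition for a two-sigmoid chain like the one on the board. The variable names (x input, p1/p2 pre-activation products, y hidden output, z final output, P performance, d desired output) are assumed labels, not necessarily the professor's exact notation:

```latex
\frac{\partial P}{\partial w_1}
= \frac{\partial P}{\partial z}\,
  \frac{\partial z}{\partial p_2}\,
  \frac{\partial p_2}{\partial y}\,
  \frac{\partial y}{\partial p_1}\,
  \frac{\partial p_1}{\partial w_1}
= (d - z)\; z(1-z)\; w_2\; y(1-y)\; x
```

Here p1 = w1·x, y = σ(p1), p2 = w2·y, z = σ(p2), and P = -1/2 (d - z)², so each factor is a local derivative of one stage of the chain.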
@Ludiusvox 5 years ago
Right now I am studying a Lexus ES350 air conditioning system, and neural networks are part of the A/C controls. Since I couldn't find any resources on it at school, this lecture is very useful. I might add, the MATLAB deep learning toolbox is useful also.
@tuha3524 3 years ago
Yes, yes, I absolutely agree with the professor: we "hardly ever go beyond one trick or one observation."
@radsimu 8 years ago
This nicely explains some of the mathematical decisions behind NN models. Really good stuff!
@AdrianVrabie 8 years ago
Hey Radu! I don't know what you are referring to when you say "mathematical decisions", but I agree that it's awesome stuff! BTW, you've also done some nice stuff with NLP in Romanian! :) You should contact me and give me the Java code; maybe I can continue doing some things in my free time too! Kudos to you in advance! :) (what a small world!)
@radsimu 8 years ago
Haha :). Will upload it all to GitHub some day. Need to make it tidier first. Will keep you posted
@AdrianVrabie 8 years ago
Radu Simionescu, please add me on Facebook, as I can't find you. Adrian Vrabie
@qzorn4440 8 years ago
A very relaxing lecture; this makes me think of deep learning programs. Thanks.
@NisseOhlsen 8 years ago
q zorn, or maybe deep sleep?
@AlwaniAkber 6 years ago
Though I am not good at math, a few of the explanations really made sense... great professor and video
@rustycherkas8229 2 years ago
Great lecture! Lucid, with moments of humour and humanity. Thanks MIT.
@yusuferoglu9287 5 years ago
RIP Sir!
@mikeschmit6474 7 years ago
Just a minor correction at 4 minutes: that is a ring-tailed lemur, not a Madagascar cat.
@monyettenyom2540 4 months ago
yeeees, I found this comment in 2024 :D
@maoqiutong 6 years ago
Between 46:00 and 49:00: dynamic programming also uses a similar concept to avoid exponential blowup. Maybe backpropagation is also a kind of dynamic programming.
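That reading holds up: backpropagation reuses downstream partials exactly the way dynamic programming reuses subproblem solutions. A sketch in assumed notation (δ_l for the shared partials at layer l, σ for the activation, W_l for the weight matrix into layer l, y_l for layer outputs, P for performance):

```latex
\delta_L = \frac{\partial P}{\partial z} \odot \sigma'(p_L),
\qquad
\delta_l = \left(W_{l+1}^{\top}\,\delta_{l+1}\right) \odot \sigma'(p_l),
\qquad
\frac{\partial P}{\partial W_l} = \delta_l\, y_{l-1}^{\top}
```

Each δ_l is computed once per layer and shared by every weight feeding that layer, which is what turns the naive path-by-path computation into work proportional to the number of weights.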
@thevirginmarty9738 8 years ago
Awesome course. Someday I will use this to build a robot girlfriend. Thank you!
@robl4836 8 years ago
You need a robot first before you can build it a girlfriend ;)
@23Ather 8 years ago
You need both the robot and the girlfriend to find the minimum of the cost function. (robot - girlfriend)^2 ;)
@koushik7604 8 years ago
:)
@vijayd8634 7 years ago
Funny, the cost will be half of it!
@peterkay7458 7 years ago
When you get it working please make the CAD files available online. PLEAZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ jk
@daniyalali6016 7 years ago
Learned a lot about neural nets from this video course.
@devonallary5251 8 years ago
@24:30, shouldn't the weight for w0 be 1 instead of -1? Then, as long as the sum of the other inputs is greater than 0, they will always pass the threshold, since w0 + SUM(w≠0) >= T --> SUM(w≠0) >= T - w0 --> SUM(w≠0) >= 0.
@andrii5054 4 years ago
I agree, thought the same thing
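For reference, here is a common way to write the threshold trick this thread is discussing (sign conventions vary, so this may not match the board exactly): add a constant input x0 so that the test against the threshold T becomes a test against zero:

```latex
\sum_{i \ge 1} w_i x_i \ge T
\quad\Longleftrightarrow\quad
\sum_{i \ge 1} w_i x_i + w_0 x_0 \ge 0,
\qquad x_0 = -1,\; w_0 = T
```

Choosing x0 = 1 with w0 = -T works equally well; all that matters is that the product w0·x0 contributes -T to the sum, so the -1 vs 1 question is a matter of convention.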
@nitinsiwach1989 7 years ago
At 41:00: starting off with the weights being the same would not necessarily mean they remain the same. It would if they were in the same layer, but here the neurons are not... am I missing something?
@drewlaino 6 years ago
That P at 16:35 was amazing...
@montserratcano2389 7 years ago
Thanks for sharing, MIT! Excellent teacher!
@alv2648 7 years ago
At 4:10 it seems he misspoke about the examples misclassified by Geoffrey Hinton's U Toronto NN. It appears the right answers (aka labels) are shaded red (the second choice for the first two photos). Labels are set by the researchers for the training set, so they chose cherry instead of dalmatian in picture #3.
@perrydeng6960 6 years ago
Backpropagation starts at 26:25
@SharathPunreddy 5 years ago
Loved it. Thank you very much for making complex things so simple.
@hassananwer3674 4 years ago
50:02 "All great ideas are simple"
@Yomama4536 3 years ago
But not all simple ideas are great...
@shatandv 8 years ago
I'm loving this course
@fraollemecha 3 years ago
Awesome course. Someday I will use this to build a program that writes programs.
@mathhack8647 3 years ago
@26:05, I like this philosophy. RIP, dear Winston. Your courses are still used by students and perpetual learners like me all over the world. May God have mercy on you and reward you as much as you benefited your students and people everywhere.
@benjaminhardisty66 8 years ago
Sweet lecture! This stuff finally makes some good intuitive sense ;)
@mdtowhidurrahman8406 3 years ago
I am not sure if it's just me or if others feel the same after the pandemic: I feel disturbed and lose focus as soon as the students start coughing in the background. The pandemic left us with a mental phobia.
@MICKEYISLOWD 3 years ago
Go look at climate change if you really want mental phobias! It's shocking. The acceleration of change is scary as fuck. Just 10 yrs from now and economies will begin falling.
@michaelredenti2054 6 years ago
The fact that the derivative of the sigmoid can be written purely in terms of the sigmoid's own output is not that surprising, since the sigmoid is built from the exponential function, whose derivative is itself.
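Written out, the identity behind that comment (a standard derivation, not specific to this lecture):

```latex
\sigma(x) = \frac{1}{1 + e^{-x}},
\qquad
\sigma'(x) = \frac{e^{-x}}{\left(1 + e^{-x}\right)^2}
           = \frac{1}{1 + e^{-x}} \cdot \frac{e^{-x}}{1 + e^{-x}}
           = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)
```

So once the forward pass has produced σ(x), the derivative comes for free, with no extra exponentials to evaluate.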
@chetjuall2269 7 years ago
Great ending, beginning at 50:00
@MinusBrain 7 years ago
Thanks for uploading such an awesome lecture. One point I did not get, though: could anyone please explain what i and j are in the function for calculating the delta of the weights at 21:24? Did I miss where the professor explains where this comes from?
@aidenigelson9826 3 years ago
I assume i and j stand for the x and y directions respectively; 2i + 3j is the point with x equal to 2 and y equal to 3.
@kjyu 3 years ago
@@aidenigelson9826 I am pretty sure i and j are unit vectors; that space has only two dimensions, w1 and w2, so the unit vectors look like i = [1,0] and j = [0,1]. So yes, you are correct
@aidenigelson9826 3 years ago
@@kjyu That's a pretty long text for saying yep. I imagine you thought I was wrong, wanted to say something, then read it again, found out it's correct, but were too lazy to delete it hehehe
@myroseaccount 4 years ago
This wasn't overlooked, but buried by Marvin Minsky in 1969 with his book Perceptrons
@somaprasadsahoo2446 1 year ago
How did the performance function become -1/2 (d-z)^2? 28:08
@chuvaca189 3 years ago
Thanks MIT, with the collaboration of Finis Terrae :D
@ShinningDarkness 5 years ago
It seems like in the biological model the hill climbing is done by the physical architecture and the pull on the axon path by the surrounding associated stimuli; the added advantage of this pull is that it lets us know where to head when the solution isn't fitting the question.
@zekeanthony 6 years ago
Superb, Prof. Winston
@SuperMaDBrothers 2 years ago
Amazing lecture; good points at the end on simplicity
@5hawnK3lly 4 years ago
Really impressive drawing skills, I must say
@KaiyuZheng 8 years ago
I don't quite get the last point: the computation with respect to width is w^2 (width squared). Can someone explain?
@Dennis4Videos 6 years ago
1 year late, but to whom it may concern: it is because you can cross-link the neurons, hence w^2
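Spelled out: if adjacent columns of neurons are fully cross-linked, a column of width w feeding another column of width w has w × w weights, so for depth d the work per pass scales roughly as

```latex
\text{cost} \sim O\!\left(d \cdot w^2\right)
```

linear in depth and quadratic in width, rather than anything exponential in the number of input-to-output paths.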
@tuha3524 3 years ago
I love this course so, so much. Excellent!!
@tsvisabo731 2 years ago
What an awesome teacher
@TheZudork 7 years ago
Thank you for this amazing class!
@dostoguven 8 years ago
Amazing teacher.
@gianluke 7 years ago
Some clarifications:
1) It's not true that, prior to the 2012 ImageNet success, neural nets had not been used in practice. As an example, LeNet-5 was deployed in the late 90s to recognize ZIP codes.
2) The 2012 ImageNet ConvNet paper is authored (in order) by two students of Hinton, Krizhevsky and Sutskever, and Hinton himself. It was Alex Krizhevsky who implemented and trained the network (in his room). Maybe we should stop attributing all the credit to the famous professors involved.
3) The problem with the step function is not the non-differentiability at 0. That's practically irrelevant. Indeed, even the most common activation function of today (the rectifier, aka ReLU) is non-differentiable at 0. The problem with step functions is that the derivative is equal to 0 everywhere (except at 0, where it's not differentiable), so gradient descent cannot be used.
4) Nobody was getting rid of the thresholds; it's just rewriting the same function in a different form. In modern terms, the threshold is called the "bias", and the so-called "bias trick" that "hides" the bias inside the matrix multiplication is just a notational convenience. The point here is just replacing the step activation function with another one that is differentiable almost everywhere AND has non-zero derivatives in some parts of the domain. (Edited after a comment pointed out a mistake)
@asdfasdfuhf 6 years ago
Wtf, this lecture is based on a lie
@An-wd9kk 6 years ago
Umm, just one point in your argument: the ReLU IS continuous but NOT differentiable at one point, while the step function IS BOTH discontinuous and non-differentiable at that same point.
@gianluke 6 years ago
@@An-wd9kk Right. I will update the comment. Thank you :)
@Briefklammer1 6 years ago
Hi sboby, you seem pretty familiar with neural nets. I have a question about backprop. I understand that we want to minimize our error function, and therefore we calculate the partial derivatives with respect to the weights W_1, ..., W_n. My question is: how do we use stochastic gradient descent to find the best weights? Is it like you explained at 21:23?
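Roughly, yes: each step nudges every weight along its partial derivative. As a concrete illustration, here is a minimal Python sketch of repeated gradient steps on the lecture's "world's smallest neural net" (one input, two weights, two sigmoid neurons). The variable names, learning rate, and the single training example are invented for the demo; stochastic gradient descent would simply apply this kind of update per randomly drawn training example rather than over the whole training set at once:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Chain: x --w1--> sigmoid --w2--> sigmoid --> z
# Performance P = -1/2 (d - z)^2, improved by gradient ascent
# (equivalently, gradient descent on the squared error).
def train(x, d, w1=0.5, w2=0.5, rate=0.5, steps=2000):
    for _ in range(steps):
        # forward pass
        y = sigmoid(w1 * x)   # output of the first neuron
        z = sigmoid(w2 * y)   # output of the second neuron
        # backward pass: delta2 is reused inside delta1 (the chain rule)
        delta2 = (d - z) * z * (1 - z)
        delta1 = delta2 * w2 * y * (1 - y)
        # move each weight along its partial derivative of P
        w2 += rate * delta2 * y
        w1 += rate * delta1 * x
    return w1, w2

w1, w2 = train(x=1.0, d=0.9)
print(w1, w2)  # weights that drive z toward the desired output 0.9
```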
@fulliculli 7 years ago
Awesome video content. Just make the sound louder, please.
@bendev6807 5 years ago
Great lecture. Enjoyed it a lot. RIP Prof. Winston.
@heri_prieto 7 years ago
This was beautiful.
@cagmz 8 years ago
Does anyone know where the 1/2 comes from at 28:00?
@ThomasFauskanger 8 years ago
I think it's just to make the derivative nicer. He uses the derivative at 33:30, and it is just (d-z) and not 2(d-z) as it would've been otherwise. I think one of his points in other videos is that it's about mathematical convenience: the performance function is arbitrary and can be adjusted to "be nice".
@PullingEnterprises 7 years ago
It's however long you want your approximation step length to be. That is, if the optimization function has -1/2, then every step you'll reduce how far off you were (d-z) by half. If it was 1/3, then the approximation would divide the off-distance by three and travel just that far. The (d-z) term is how much you were off from the right result, and the -1/2 is just the step size to adjust (iteratively) until your gradient descent is within a threshold that gives you the outputs you want while training your network.
@WhoForgot2Flush 6 years ago
It makes taking the derivative easier. You don't need it; you'll get the same result, it just makes the math easier.
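Concretely, the 1/2 just cancels the 2 that the power rule produces (using the lecture's convention of a performance function P to be maximized):

```latex
P = -\tfrac{1}{2}(d - z)^2
\quad\Longrightarrow\quad
\frac{\partial P}{\partial z} = -\tfrac{1}{2} \cdot 2\,(d - z) \cdot (-1) = d - z
```

Any positive constant in front would leave the optimum unchanged; it only rescales the gradient, which the learning rate can absorb.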
@Velvels 8 years ago
Excellent lecture by Prof. Winston. Can someone share a link to the tool he uses to demonstrate a neural net in action (what he calls the "world's smallest neural net in action")?
@prinzrainerbuyo3234 8 years ago
It's 'Fall 2105' in the description
@aoweishen3496 8 years ago
Can you please build a full playlist of this course? Because it's really good, but I don't know how to find the rest of the course. Thank you!
@mitocw 8 years ago
Here is the complete playlist: kzbin.info/aero/PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi
@ibadurrahman5954 6 years ago
Thanks for this lecture; it was amazing.
@samlaf92 7 years ago
@41:05 Why does he say that if they start with the same weights, they would stay the same? They won't have the same derivative, since one of them goes through an extra sigmoid function.
@i890ola 3 years ago
Thanks from Syria 🇸🇾
@kabal127 5 years ago
Best course ever
@avawinters6184 3 years ago
OK, amazing lesson and all, but where do I get one of these chalkboards?
@michaelsu4253 1 year ago
22:55 "Sadly, in Harvard" in 1974 gave us the answers. This makes my day 😂
@keskinaytac 7 years ago
Thank you for the subtitles.
@tthtlc 8 years ago
You mentioned 2010 as the year when NNs were nearly dumped. I took an AI course in 1990, and by the end of 1990 I had convinced myself that the whole idea was too probabilistic and unlikely to show much superiority in intelligence, preferring the algorithmic approach instead, and I subsequently gave up the subject totally. Well, I was wrong. :-)!!!
@rustycherkas8229 2 years ago
You think you've got problems? I was the sysadmin at U of T during the late '80s who set up Geoffrey Hinton's terminal in his office, and, not knowing any better, turned and asked if he needed any 'training' on how to send/receive emails... How was I to know that he'd become the "grandfather of AI"??? *sob*
@dnyaneshwardarade6120 4 years ago
I only dream of sitting there and watching the professor
@_bobbejaan 7 years ago
The problem I have is that if in = 0, then the weight on that input does not change, because its weight change depends on its input: (∂ sigmoid-input / ∂ w) = in, where in = 0. I think weights should change if there is an error, but if out = 1 and in = 0, then w1 does not change.
@Jirayu.Kaewprateep 3 years ago
From his example, how much initial random value create BETTER results since too wide create time approx because approach algorithms or because time widely scope⁉️
@brambeer5591 7 years ago
Cool guy, awesome lecture!
@monyettenyom2540 4 months ago
Just for fun: I thought the last identified picture of an animal was a lemur, not a Madagascar cat/fossa. Isn't it?
@gauravstud 6 years ago
Can someone post the pre-reading and prerequisites for this course?
@mitocw 6 years ago
For course information and materials, see the course on MIT OpenCourseWare at: ocw.mit.edu/6-034F10.
@Anand_Agrawal 2 years ago
This is art
@trevorjones2095 3 years ago
Is Conway's Game of Life hard to do with neural nets?
@neurolife77 3 years ago
13:45 As a neuroscience student, I confirm this statement ;)
@abhi1092 8 years ago
Is this a graduate-level or an undergraduate-level course?
@mitocw 8 years ago
+abhi1092 This is an undergraduate-level course. See the course on MIT OpenCourseWare for more information and materials at ocw.mit.edu/6-034F10