12b: Deep Neural Nets

181,547 views

MIT OpenCourseWare

8 years ago

*NOTE: These videos were recorded in Fall 2015 to update the Neural Nets portion of the class.
MIT 6.034 Artificial Intelligence, Fall 2010
View the complete course: ocw.mit.edu/6-034F10
Instructor: Patrick Winston
In this lecture, Prof. Winston discusses deep neural nets and modern breakthroughs in neural net research.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Comments: 87
@asdfasdfasdf383 2 years ago
I really enjoyed this. "And no one quite knows how it works, except that when you throw an immense amount of computation into this kind of arrangement, it's possible to get performance that no one expected would be possible." (at 15:28)
@hectormoreira4316 2 years ago
This professor is a master of his art. Simply hypnotizing. Thanks for sharing.
@ayarasheed114 6 years ago
Well done, Prof. Patrick H. Winston, for providing us with these great videos.
@AntonPanchishin 7 years ago
Nice update, Prof. Winston. It is a challenge to keep videos up to date with the changing times. Just 7 years ago you stated that people who used neural nets were overly fascinated with that toolset and that NNs weren't going anywhere. The future is certainly hard to predict.
@keffbarn 6 years ago
For some weird reason, the way he acts and talks makes him really funny and interesting to listen to. I have no idea why but it's awesome!
@jonnyhaca5999 5 years ago
It's because he exposes his vulnerabilities. We are really defined by our vulnerabilities, not our strengths. Or the two combine to produce something better, more likeable.
@samarthsingal2072 7 years ago
Great lecture! Haven't seen a more concise explanation of NNs anywhere.
@rsd2dcc 4 years ago
Wow, very well illustrated examples for grasping deep NN terminology and its building blocks. R.I.P.
@raketenrambo4860 6 years ago
His gesture and voice at 15:28, when it comes to the question of why exactly this works, are just amazing, hence inspiring :) Great lecture!
@akhilesh84 6 years ago
One of the best and simplest explanations of neural nets... Excellent.
@onlinemath3072 2 years ago
Prof we miss you.
@pedropeter_ 6 years ago
Thanks for the class!
@DisdainforPlebs 5 years ago
About the end of the video, I find it really cool that people are so good at recognizing predators from very incomplete data. Really tells you something about how we have evolved! Even the rabbit, people see the predatory bird before the benign rodent. Very cool stuff, and great lecture!
@WepixGames 4 years ago
R.I.P Patrick Winston
@AmitaKapoor 7 years ago
Nice video and good information, but every time Prof. Winston breathes, I get concerned about his heart health...
@Lord_Ferman 6 years ago
Yeah! I just hope that good Prof. Winston is doing alright...
@Lord_Ferman 6 years ago
Thanks! Glad to know!!!
@rbp365 5 years ago
He should go vegan and do some exercise! He'd be on top of it in no time.
@sakibmahmud3621 4 years ago
He died recently...
@jarvishan5615 4 years ago
@sakibmahmud3621 That was sad. RIP.
@Tilex1990 7 years ago
For those looking for the first lecture (12a: Neural Nets): kzbin.info/www/bejne/q4nXaaR8Z7-tnNE (Please add this link in the description here. It makes it easier for people who accidentally click this video instead of the first.)
@LunnarisLP 7 years ago
check out the playlist or the complete course ;-)
@Jeffmaistro 7 years ago
Wonderful!
@allyourcode 3 years ago
A sample of moments that show that we do not really understand WHY these things work: @22:20 @44:18
@tuseroni6085 5 years ago
This man is very knowledgeable about neural networks, not so much about vampires... vampires have shadows; they don't have reflections. (Though, as a bit of historicity, vampires were thought to lack reflections in mirrors, with no mention of a lack of reflections in general. This is because mirrors of the time were made with silver, and silver is a holy metal that will not reflect something unholy like a vampire. So it's unknown if, within that context, a vampire would cast a reflection on, say, a body of water. But the myth has since been abstracted beyond the connection with silver, to vampires not casting reflections at all.)

One of the things he mentioned in the previous part is something I have noticed as a big failing in modern neural networks: they don't take into account timing. The human brain is a huge timing machine; timing plays a massive part in how the neurons function, and of course timing is important if you are dealing with the physical world. Perhaps the reason AI has done as well as it has is related to why robotics hasn't done nearly as well: most AI today is working in a virtual world where time is... well, not irrelevant, but certainly more abstract. If I send an audio signal to an AI program, that AI will not be working with pressure waves; it will be given intensity as an integer, or maybe even the result of a fast Fourier transform.

The brain, however, will be given something LIKE the result of an FFT, but not exactly. (The cochlea has hairs with various resonant frequencies and neurons attached to them, so the brain will be given a series of pulses from a series of neurons, each corresponding to a hair keyed to a given range of resonant frequencies. I need to look it up, but I expect there is some overlap in the ranges, like how the cones in the eye overlap in the frequencies they respond to. This would make sense, as it would allow for detection of frequencies between certain pure frequencies. Say, 22.65 Hz: if you only resonate with 22 and 23 you would miss that .65, but if you have a strong resonance around 22 and a weaker resonance from 20 to 25, and the other hairs nearby have similar normal distributions, you can work out the .65 from the overlap.)

The most important part here, though, is that it will be sending a bunch of pulses. The neurons lose charge over time, or gain charge... I'm not sure of the right terminology... I think gain is correct here: the electronegativity of the membrane increases over time until it reaches a certain point. When it receives a pulse, its electronegativity goes down; if it drops enough, it will fire, but if it doesn't meet the threshold it will start to go back to homeostasis, and the electronegativity will go up over time. So if you hit it with a pulse while it's going up, you can get it over its threshold; you can use this kind of... almost a neuronal resonant frequency... to turn a series of pulses into a waveform.

Another advantage of the kind of NN he describes is that they are totally mathematical rather than mechanical. It's an abstraction, and it works well in the abstract world of software, but I think that's why these things tend to struggle with the physical world: trying to use these neural networks to train a robot how to move its body, something VERY timing-based, tends to lead to heartache... and property damage.

It's also the opposite of how evolution did it. Evolution started with neurons to move muscles, then worked up to neurons to move groups of muscles, then up to neurons to use sensors to best move muscles, then up to neurons to coordinate multiple sensors to figure out how to best move muscles, then up to higher levels of abstraction beyond just how to move your muscles. And we've been going the opposite direction, working on these higher levels of abstraction and trying to work them down into how to move mechanical muscles. And it's so much easier to move UP abstractions than down.
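The pulse-timing behavior this comment describes can be made concrete with a leaky integrate-and-fire neuron. A minimal sketch, assuming illustrative constants (tau, bump, and threshold are made up, not from the lecture): pulses that arrive close together push the potential over threshold and fire; the same pulses spread out in time simply leak away.

```python
def simulate(pulse_times, t_max=100, dt=1.0, tau=10.0, bump=0.6, threshold=1.0):
    """Leaky integrate-and-fire: potential v leaks toward rest, jumps on pulses."""
    v, spikes, pulses = 0.0, [], set(pulse_times)
    for step in range(int(t_max / dt)):
        t = step * dt
        v += (-v / tau) * dt      # leak toward resting potential 0
        if t in pulses:
            v += bump             # incoming pulse raises the potential
        if v >= threshold:
            spikes.append(t)      # fire...
            v = 0.0               # ...and reset
    return spikes

print(simulate([10, 12, 14]))  # clustered pulses -> fires (timing matters)
print(simulate([10, 40, 70]))  # same pulses spread out -> never fires
```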
@mat650 5 years ago
3 years of studying and living in Venice, and I recognized the gondola instantly, but I can't tell you exactly how. Well-trained neurons...
@abhi1092 8 years ago
What software is used for demonstrating the neural network?
@mitocw 8 years ago
+abhi1092 Java is used for the demonstrations. See the Demonstrations section of the course on MIT OpenCourseWare for more information at ocw.mit.edu/6-034F10
@alexanderher7692 7 years ago
I guess speed is not a priority... ironic :)
@molomono9795 7 years ago
Java allows fast graphic demonstrations to be developed. I like to rip on Java as much as the next person, but in what world is "speed" a design requirement for a demonstration of neural networks and their structure in a class setting? Answer: not this one. So as a lecturer, why waste your time developing something that does nothing? Java was the right choice, and it's not ironic.
@Joe-jc5ol 6 years ago
If you wanted speed, you wouldn't waste your resources' time making them display their progress every step of the way... If this is a ready-to-use tool, it can be extremely helpful for knowing whether you are on the right track before firing the logic on your main machine and leaving it to process for days.
@Xaminn 5 years ago
Brain.io
@gcgrabodan 8 years ago
The visualization shown at 40:40 is extremely useful. Is it also available somewhere? On the course website I could download something that only includes the curve fitting...
@jonnyhaca5999 5 years ago
You can download the whole lecture (telechargerunevideo.com/en) and cut out the piece you want.
@samanrahbar8088 5 years ago
LOVE THIS Prof. Hope he is doing alright.
@TheCanon03 4 years ago
Sadly he passed away this month. Lucky to have been one of the students to study under him.
@GleiryAgustin 7 years ago
I laughed so hard with the initial song haha.
@qzorn4440 7 years ago
Very nice AI information; this will help with the Raspberry Pi 3's limited deep learning results. Thanks.
@fwtaras2012 4 years ago
A question: what does he mean by positive and negative examples?
@seanmchughinfo 6 years ago
When it guesses the wrong thing (school bus on black and yellow stripes) isn't the "real problem" there that it doesn't have enough data or good enough data?
@rad6014 4 years ago
RIP :(
@rodrigoloza4263 7 years ago
Haha, they had to upgrade it! Is it just me, or does he not like CNNs very much?
@JohnTrustworthy 4 years ago
So I get that at 44:25 the left picture is a school bus but can someone explain to me what's in the right picture?
@DrLewPerren 3 months ago
The right picture is the original picture of the school bus, with a tiny amount of changes that were enough to fool the neural network.
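For readers wondering how such images are made: a minimal sketch of the usual gradient-sign recipe (FGSM), not the specific method or network from the lecture. The model, input tensor, and label names here are assumptions for illustration.

```python
import torch

def fgsm(model, x, label, epsilon=0.007):
    """Nudge every pixel by epsilon in the direction that increases the loss.

    The result looks unchanged to a human but can flip the model's prediction.
    """
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Hypothetical usage: x_adv = fgsm(net, image_batch, labels)
```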
@stv3qbhxjnmmqbw835 3 years ago
46:28 that certainly looks like someone's refrigerator door, btw.
@hex9973 6 years ago
wonderful
@nikhiljay 6 years ago
Can someone explain what he is doing with -1 and the threshold value at 25:49? I watched his previous lecture 12a, but I still don't really understand how he can get rid of thresholds by doing that.
@jakezhang6806 6 years ago
Multiply -1 with the threshold T, and add to the sum. Without this, the result has to exceed T to trigger the threshold, but now it only has to exceed 0, which is more convenient and succinct. No matter what your T is, the curve looks the same.
@EranM 6 years ago
Just accept the magic.
@alliedtoasters 6 years ago
Without the -1, the minimum possible value of the "summer" is zero. So, you subtract the threshold from that sum to bring your threshold to zero. This happens to make your minimum possible value -T.
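A tiny numeric sketch of the trick these replies describe (the weights, inputs, and threshold values are made up): add a constant input of -1 with weight T, and the test against T becomes a test against 0.

```python
w = [0.8, -0.4]   # made-up weights
x = [1.0, 0.5]    # made-up inputs
T = 0.3           # made-up threshold

s = sum(wi * xi for wi, xi in zip(w, x))
fires_with_threshold = s > T          # original test: weighted sum exceeds T

# Bias trick: a constant input of -1 with weight T folds -T into the sum.
w2, x2 = w + [T], x + [-1.0]
s2 = sum(wi * xi for wi, xi in zip(w2, x2))
fires_with_bias = s2 > 0              # same decision, threshold now 0

assert fires_with_threshold == fires_with_bias
```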
@suppresswarnings2030 7 years ago
I wonder why it still works when shutting down some of the neurons and leaving only 2 of them.
@tuseroni6085 5 years ago
Imagine a bunch of ants looking for food. Starting out, all the ants go off in random directions, because none of them knows where the food is. If an ant finds food, it leaves a trail and goes back the way it came (likely a long and winding path). When it, or another ant, comes out of the hill, it will be drawn to follow the trail (but it's probabilistic; it might not), and if another ant finds the same food, or different food, it will do the same as that previous ant. So what happens next is that a bunch of trails will be made to that food (because a lot of ants will have come across it and left a trail back to the anthill). Ants will follow one of those trails, whichever is strongest; the shorter the trail to the food, the more ants will go to it and back in a given time, and the stronger the pheromone trail will be. After a while the trail will be very strong and very straight.

Now how many ants are needed to get to this food? Likely just one, to go to the food and leave a trail back. What would the trail look like if you only had the one ant from the start? Likely very long and winding, because developing the optimal trail required multiple ants, and the more ants there are to work on the problem (finding the best path to the food), the better the result. But once the problem is solved, you don't need a lot of ants to keep it going. This is what he meant by "local optima". This is also a process the brain goes through: it will prune connections that are no longer needed to save energy. Once the problem is solved, you can reduce the number of connections without sacrificing accuracy or precision.
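A minimal sketch of that pruning idea (the "trained" layer here is simulated with a few strong weights among many weak ones, an assumption for illustration): zero out most of the weights and the output barely changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "trained" layer: a few strong weights carry the solution.
W = rng.normal(0, 0.05, size=(10, 10))        # many small weights
W[:, :2] += rng.normal(0, 1.0, size=(10, 2))  # two strong input columns
x = rng.normal(size=10)

def prune(W, keep_fraction=0.2):
    """Keep only the largest-magnitude weights; zero out the rest."""
    cutoff = np.quantile(np.abs(W), 1 - keep_fraction)
    return np.where(np.abs(W) >= cutoff, W, 0.0)

full, pruned = W @ x, prune(W) @ x
print(np.corrcoef(full, pruned)[0, 1])  # close to 1: the few kept weights suffice
```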
@shpluk 7 years ago
no vampire involved here ))
@Gvozd111 7 years ago
I was feeling bad for the curve at 31:45.
@tuha3524 2 years ago
Humans created NNs which see things differently from our brains. Awesome!!!
@etherous 5 years ago
No audio
@Dhirajkumar-ls1ws 3 years ago
What is the name of the software the prof. is using?
@mitocw 3 years ago
Some of the software demonstrations use Java. See ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/demonstrations for more details. Best wishes on your studies!
@Dhirajkumar-ls1ws 3 years ago
@mitocw Thank you, MIT OCW.
@dohyun0047 4 years ago
At 43:36, can anyone explain why a local maximum can turn into a saddle point?
@quangho8120 4 years ago
No formal proof, but I can give you an intuition. In 2 dimensions, there is only a local maximum or a local minimum for the network to get stuck in. In 3 dimensions, there is a cone and a saddle. If you slice any 2D plane from the 3D surface, you will still get only a local maximum or a local minimum, but now that more dimensions have opened up, the network can just "go around" when the local minimum in one 2D slice is actually a local maximum in another 2D slice. As the dimension grows, there are more possible ways for the network to go around.
@dohyun0047 4 years ago
@quangho8120 Thanks, that makes sense.
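A standard textbook way to make this intuition concrete (not from the lecture): classify the critical point by the signs of the Hessian's eigenvalues.

```latex
% f(x,y) = x^2 - y^2 has a critical point at the origin:
% a minimum along the x-axis, a maximum along the y-axis, i.e. a saddle.
\[
  f(x,y) = x^{2} - y^{2}, \qquad
  \nabla^{2} f = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}.
\]
% In n dimensions a critical point is a local maximum only if all n
% eigenvalues of the Hessian are negative. Under a coin-flip heuristic
% for the eigenvalue signs, that happens with probability 2^{-n}, so in
% the very high-dimensional loss surfaces of deep nets nearly every
% critical point is a saddle rather than a trap.
```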
@TheNishant1980 5 years ago
How did the professor train the neural net... any idea??
@danielbertak1539 5 years ago
Any UF CS/CE/EE students here, the Schwartz 37 special has taken over MIT too at 7:56
@mesielepush2124 3 years ago
Like if you're from 2020 and feel the terror: 23:36 to 23:40
@_bobbejaan 7 years ago
I can't get autocoding working. kzbin.info/www/bejne/jKOweXRprr2Sh6sm21s I keep getting an RMS of 2+ on 8 inputs/outputs and 3 hidden after training it 5000 times. I use 256 values to train it. Logic gates are trained correctly and quickly, with an RMS of 0.02 in around 100 training samples, so I do think my neural net works. I am puzzled.
@_bobbejaan 7 years ago
If you are interested: I think I figured it out. I think it is overfitting. It can be solved with regularization. It means some weights have too much influence, and regularization keeps the weights in check.
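A minimal sketch of the fix this comment describes, assuming a hand-rolled gradient-descent loop (the lr and lam values are illustrative): L2 regularization adds lam * W to the gradient, which keeps any single weight from growing too influential.

```python
import numpy as np

def train_step(W, grad_loss, lr=0.1, lam=0.01):
    """One gradient step with L2 regularization (weight decay).

    The lam * W term constantly pulls weights toward zero, so no weight
    can dominate: exactly the "keeps weights in check" effect above.
    """
    return W - lr * (grad_loss + lam * W)

W = np.array([[3.0, -0.2], [0.1, 4.0]])  # made-up weights
grad = np.zeros_like(W)                  # pretend the data loss is flat here
for _ in range(1000):
    W = train_step(W, grad)
print(W)  # the large entries have decayed noticeably toward zero
```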
@dmitrinosovicki3399 7 years ago
Autocoding: but it doesn't have to look familiar to be a valid representation. What's important is that there exists an encoder and a decoder that can compress further input into that representation, and decompress it back with acceptable loss. Fascinating! The compression rate can be extremely high, practically arbitrary. It is a language, not an entropy coding.
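In that spirit, a minimal sketch of the classic 8-3-8 autocoder (layer sizes, learning rate, and iteration count are assumptions; with no bias terms it may need more iterations to converge cleanly): eight one-hot patterns are squeezed through a 3-unit bottleneck and reconstructed, and the learned hidden code need not look familiar at all.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.eye(8)                       # eight one-hot input patterns
W1 = rng.normal(0, 0.5, (8, 3))     # encoder weights (8 -> 3 bottleneck)
W2 = rng.normal(0, 0.5, (3, 8))     # decoder weights (3 -> 8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(5000):
    H = sigmoid(X @ W1)             # hidden code for every pattern
    Y = sigmoid(H @ W2)             # reconstruction
    dY = (Y - X) * Y * (1 - Y)      # backprop through output sigmoid
    dH = (dY @ W2.T) * H * (1 - H)  # backprop into the bottleneck
    W2 -= lr * (H.T @ dY)
    W1 -= lr * (X.T @ dH)

print(np.round(sigmoid(X @ W1), 2))  # the learned code: compressed, not familiar
```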
@EranM 2 years ago
Mmhmm... actually, using convolutional nets reduces computation. Fully connected layers are much more computationally expensive. 14:30
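A quick back-of-the-envelope check of that comparison (the 32x32x3 input and 5x5 kernel are made-up sizes): a convolutional layer reuses one small kernel at every position, while a fully connected layer wires every input unit to every output unit.

```python
H = W = 32                 # image height and width
c_in, c_out, k = 3, 32, 5  # channels in/out, kernel size

conv_params = k * k * c_in * c_out            # one shared kernel per output channel
fc_params = (H * W * c_in) * (H * W * c_out)  # every input unit to every output unit

print(f"conv: {conv_params:,} weights")  # 2,400
print(f"fc:   {fc_params:,} weights")    # 100,663,296
```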
@fernandolk4536 11 months ago
It is a pity he is no longer among us; it would be extremely remarkable to engage him in developing legal AI for an utterly advanced jurisprudential framework system, capable of fairly deploying a solution for the most heinous crimes committed against humanity, and humane solvency for the utmost failure in the known universe (US + A).
@claushellsing 5 years ago
15:27 So the cutting-edge tech on which we are building the AI future is something that *no one knows how it works!!!* LOL.
@seohyeongjeong 6 years ago
Thanks for the lecture. Notational nightmare lol
@fishface343673 6 years ago
Why does MIT have chalkboards in 2015?
@quangho8120 5 years ago
On the contrary, their boards move automatically :D
@judgeomega 4 years ago
the chalkboards are stuck in a local maximum
@jeffschlarb4965 4 years ago
Chalkboards are way cool... my undergrad school used to leave classrooms open at night. They got me through differential equations and combinatorics. Whiteboards are not the same, nor is a PowerPoint slide projected on a screen.
@bachersaid8214 7 years ago
Amazing visualization! Though I don't know whether to find his *sigh*s funny or to worry about his health.
@MintSodaPop 5 years ago
tag yourself, i'm lesser panda
@BJTUIronway 6 years ago
The first ten lectures were fairly easy to follow, but I can't seem to understand this neural networks part.
@shpazhist 6 years ago
Can't wait till some robot gets so smart that it decides to build its own army of similar robots and drones to take over the world.
@TripedalTroductions 7 years ago
6:15 mother of God....
@spityousomefacts 1 year ago
The person coughing was really rude and should have stayed home sick.