Neural Network Architectures & Deep Learning

803,923 views

Steve Brunton

A day ago

Comments: 404
@mickmickymick6927
@mickmickymick6927 3 жыл бұрын
Does anyone else feel weird when he says Thank You at the end? He just gave me a free, high-quality, understandable lecture on neural networks. Man, thank *you*!
@Eigensteve
@Eigensteve 3 жыл бұрын
:) People watching and enjoying these videos makes it so much more fun to make them. So indeed, thanks for watching!
@antoniofirenze
@antoniofirenze 3 жыл бұрын
@@Eigensteve ..being happy to see other people making progress. Man, you have a great heart..!
@carol-lo
@carol-lo 3 жыл бұрын
Steve, we should be thanking "you"
@oncedidactic
@oncedidactic 2 жыл бұрын
Presenter with true class 👏
@Forever._.curious..
@Forever._.curious.. 2 жыл бұрын
😁😍
@teslamotorsx
@teslamotorsx 5 жыл бұрын
YouTube's recommendation algorithm is becoming self-aware...
@florisr9
@florisr9 5 жыл бұрын
It was YouTube's turn in the introduction round
@GowthamRaghavanR
@GowthamRaghavanR 5 жыл бұрын
I hope it's just ReLU and sigmoid
@Xaminn
@Xaminn 5 жыл бұрын
@@GowthamRaghavanR those are the safe ones
@resinsmp
@resinsmp 5 жыл бұрын
Imagine for a second also what the algorithm never recommended to you, because it already knew you were aware.
@Xaminn
@Xaminn 5 жыл бұрын
@@resinsmp Now that's an interesting thought haha. "Since user searched this type of topic, it must already be aware of some other certain type of topics." Simply marvelous!
@farabor7382
@farabor7382 5 жыл бұрын
I don't know why YouTube decided I needed that little course, but I'm glad now that it did.
@brockborrmann2931
@brockborrmann2931 5 жыл бұрын
This video has common variables with other videos you watch!
@TonyGiannetti
@TonyGiannetti 5 жыл бұрын
Sounds like you’ve been autoencoded
@fitokay
@fitokay 5 жыл бұрын
That's what the CF (collaborative filtering) algorithm did
@Kucherenko90
@Kucherenko90 5 жыл бұрын
same thing
@РусланДиниц
@РусланДиниц 5 жыл бұрын
YouTube also uses neural networks
@Savedbygrace952
@Savedbygrace952 Жыл бұрын
I have been addicted to your series of lectures for the last three months. Your "welcome back" intro sounds like a chorus to me. Thank you!
@PhoebeJCPSkunccMDsImagitorium
@PhoebeJCPSkunccMDsImagitorium 5 жыл бұрын
Steve Brunton, I didn't know who you were before watching this, but this presentation style of a glass whiteboard with images superimposed is the best way I've ever seen someone teach, tbh. Thank you at least for that. More importantly, this actually helped me understand the beast of neural nets a little more, and hopefully I'll be more prepared when our new AI overlords enslave us; at least we will know how they think.
@dantescanline
@dantescanline 4 жыл бұрын
This was massively helpful as an intro! When my question is just "yes, but how does this ACTUALLY work", you usually get either pointlessly high-level metaphors about it being like your brain, or a jump straight into gradient descent and all the math behind training. A+ video, thanks.
@elverman
@elverman 4 жыл бұрын
This is the best short intro to this topic I've seen. Thanks!
@culperat
@culperat 5 жыл бұрын
Important note about the function operating on a node: if the functions of two adjacent layers are linear, then they can be equivalently represented as a single layer (a composition of linear transformations is itself a linear transformation, and thus could just be its own layer). So nonlinear transformations are *necessary* for deep networks (not just neural networks) to gain anything from depth. That isn't to say you can't compose linear transformations into an overall linear transformation, if there are nonlinear constraints on each operator.
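(For illustration, a quick NumPy sketch of this point, added here and not part of the original comment: without an activation function, two stacked linear layers collapse exactly into one linear layer, so the extra depth buys nothing.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" that are purely linear (no activation function)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)   # layer 1: R^3 -> R^5
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)   # layer 2: R^5 -> R^2

x = rng.normal(size=3)

# Composing the two linear layers...
two_layer = W2 @ (W1 @ x + b1) + b2

# ...equals one linear layer with collapsed weights and bias
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layer, one_layer))  # True: the depth added nothing

# With a nonlinearity (e.g. ReLU) in between, no single linear layer can match it
relu = lambda z: np.maximum(z, 0)
nonlinear = W2 @ relu(W1 @ x + b1) + b2
```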
@XecutionStyle
@XecutionStyle 3 жыл бұрын
Sir, your deep learning videos are the only ones on YouTube I take seriously.
@theunityofthejust-justifyi7951
@theunityofthejust-justifyi7951 4 жыл бұрын
You really simplify the material in a way that makes me feel enthusiastic about learning it. Thank you.
@johnwilson4909
@johnwilson4909 5 жыл бұрын
Steve, you are the first person I have ever seen describe an overview of neural networks without paralyzing the consciousness of the average person. I look forward to more of your lectures, focused in depth on particular aspects of deep learning. It is not hard to get an AI toolkit for experimentation. It is hard to get a toolkit and know what to do with it. My personal interest is in NLR (natural language recognition) and NLP (natural language programming) as applied to formal language sources such as dictionaries and encyclopedias. I look forward to lectures covering extant NLP AI toolkits. Sincerely, John
@pb25193
@pb25193 4 жыл бұрын
John, I recommend Stanford's course on recurrent neural networks. Free on YouTube. It's a playlist with over 20 lectures.
@pb25193
@pb25193 4 жыл бұрын
kzbin.info/aero/PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z
@PiercingSight
@PiercingSight 5 жыл бұрын
This is a perfectly compressed overview of neural networks. What autoencoder did you use to write this?
@bunderbah
@bunderbah 5 жыл бұрын
Human brain
@MilaPronto
@MilaPronto 4 жыл бұрын
@@bunderbah Bruman hain
@3snoW_
@3snoW_ 4 жыл бұрын
@@MilaPronto Humain bran
@mbonuchinedu2420
@mbonuchinedu2420 4 жыл бұрын
one hot encoder. lols
@mjafar
@mjafar 4 жыл бұрын
@@mbonuchinedu2420 That's like a robot trying to be funny
@RolandoLopezNieto
@RolandoLopezNieto 8 ай бұрын
I just found your channel as a suggestion from a 3Blue1Brown video. I subscribed instantly, easily explained, thanks.
@Eigensteve
@Eigensteve 8 ай бұрын
So cool! Which video?
@RolandoLopezNieto
@RolandoLopezNieto 6 ай бұрын
@@Eigensteve I was watching the playlist on NNs from 3Blue1Brown, and then your video appeared in my suggestions. Very glad, superb content, thanks.
@-SUM1-
@-SUM1- 5 жыл бұрын
YouTube is trying to teach us about itself.
@FriendlyPerson-zb4gv
@FriendlyPerson-zb4gv 5 жыл бұрын
Hahaha. Good.
@ImaginaryMdA
@ImaginaryMdA 4 жыл бұрын
It's becoming sentient! Even worse, it's a teenager who just wants to be understood. XD
@KeenyNewton
@KeenyNewton 4 жыл бұрын
These were the most productive 9 minutes. Great explanation of the architectures.
@chris_jorge
@chris_jorge 4 жыл бұрын
Forget neural networks, this guy figured out that it's better if you stand behind what you're presenting instead of in front of it. Mind blown.
@MikaelMurstam
@MikaelMurstam 5 жыл бұрын
Very nice. I like the autoencoders. That is basically just understanding. Intelligence is basically just a compression algorithm. The more you understand the less data you have to save. You can extract information from your understanding. That's basically what the autoencoder is about. For instance, if you want to save an image of a circle you can store all the pixels in the image, or store the radius, position and color of it. Which one takes up more space? Well, storing the pixels. We can use our understanding of the image containing a circle in order to compress it. Our understanding IS the compression. The compression IS the understanding. It's the same.
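(For illustration, a toy NumPy sketch of the circle example above, added here and not part of the original comment: the four numbers play the role of an autoencoder's latent code, and the hand-written rendering function plays the role of the decoder; a real autoencoder learns both the encoder and the decoder from data instead.)

```python
import numpy as np

def decode(cx, cy, r, shade, size=64):
    """'Decoder': reconstruct a 64x64 image from a 4-number code."""
    yy, xx = np.mgrid[0:size, 0:size]
    return np.where((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2, shade, 0.0)

code = (32, 32, 10, 0.8)   # 4 numbers: center x, center y, radius, gray level
image = decode(*code)      # 64 * 64 = 4096 pixel values

print(image.size, "pixels vs", len(code), "latent values")
# An autoencoder learns both the encoder (image -> code) and the
# decoder (code -> image) from examples, rather than being hand-written.
```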
@TheMagicmagic290
@TheMagicmagic290 5 жыл бұрын
shut up
@dizzydtv
@dizzydtv 5 жыл бұрын
profound observation
@bdi_vd3677
@bdi_vd3677 5 жыл бұрын
Thank you for your comment, excellent observance!
@SirTravelMuffin
@SirTravelMuffin 5 жыл бұрын
I dig that perspective. I do think that compression can have some downsides. I feel like my emotional reactions to things are a sort of "compression". I can't keep track of everything I've read about a potentially political topic, but I can remember how it made me feel.
@PerfectlyNormalBeast
@PerfectlyNormalBeast 5 жыл бұрын
I like to think of autoencoder as an architect outputting a blueprint, then a construction company building that building
@kevintacheny1211
@kevintacheny1211 5 жыл бұрын
One of the best introductions to AI I have seen.
@bensmith9253
@bensmith9253 4 жыл бұрын
YES. ☝️this
@saysoy1
@saysoy1 2 жыл бұрын
Once you get the hang of backpropagation and how to do the chain-rule derivatives, you realize that was not the goal! You merely opened the door, and this video is the way to your goal!
@brian_c_park
@brian_c_park 4 жыл бұрын
Thank you, I've always seen the term neural networks generalized and always thought of it as probably a bunch of matrix operations. But now I know that there are diverse variations and use cases for them
@kennjank9335
@kennjank9335 Жыл бұрын
One of the most effective and useful introductory lectures on neural networks you can attend. It provides basic terminology and a good foundation for other lectures. HIGHLY RECOMMENDED. It would be helpful, Mr. Brunton, to say a little bit more about neurons. Is a neuron strictly a LOGICAL function point in a process (my simple Excel cell doing a logical function would qualify as a neuron under your definition), is it a PHYSICAL function point like a server, or is it both? Was there a reason you did not mention restricted Boltzmann machines? Thank you again, Sir, for the quality of this lecture.
@JorgeMartinez-xb2ks
@JorgeMartinez-xb2ks Жыл бұрын
A neuron is pure software, a computational unit that mimics the basic functions of a biological neuron. While software relies on specific hardware for execution, a neuron is not a simple server. Unlike an Excel cell, which takes a single input and produces a straightforward output, a neuron receives multiple inputs from other neurons, processes them, and generates an output based on the combined information. Each input to a neuron is multiplied by a weight, a numerical value that represents the strength of the connection between the neurons. These weighted inputs are then summed together, and a bias value, representing an inherent offset, is added to the result. The resulting value is then passed through an activation function, which introduces non-linearity into the network's decision-making process. Activation functions, such as sigmoid and ReLU, transform the weighted input into the neuron's output, allowing the network to capture complex patterns and relationships in the data. ReLU is often used as an activation function because it requires less computational power compared to other activation functions, such as the sigmoid function. Through a process called learning, artificial neurons adjust their weights over time, enabling the network to improve its performance on a given task. Algorithms like back propagation guide this learning process, allowing the network to minimize errors and optimize its decision-making capabilities. Hope this helps.
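(For illustration, a minimal Python sketch of the neuron just described, added here and not part of the original comment: a weighted sum of the inputs, plus a bias, passed through a ReLU activation. The names and example numbers are made up.)

```python
import numpy as np

def relu(z):
    # ReLU activation: cheap to compute, returns max(0, z)
    return np.maximum(z, 0.0)

def neuron(inputs, weights, bias, activation=relu):
    """One artificial neuron: activation(w . x + b)."""
    z = np.dot(weights, inputs) + bias   # weighted sum plus bias
    return activation(z)

x = np.array([0.5, -1.2, 3.0])           # inputs from upstream neurons
w = np.array([0.8, 0.1, -0.4])           # connection strengths (learned)
b = 0.2                                  # bias (learned)

print(neuron(x, w, b))                   # this neuron's output
```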
@Sumpydumpert
@Sumpydumpert 6 ай бұрын
Thank you too, great video. Would they be building a quantum computer to be a single one of those dots, to read internet transaction logs based on web page dynamics, to filter and feed data across apps?
@akirak1871
@akirak1871 4 ай бұрын
I've been studying machine learning models and got to neural networks, and it was a bit intimidating. This excellent lecture took the "scary" right out of it.
@lightspeedlion
@lightspeedlion 9 ай бұрын
Amazing time spent to understand the Networks a little more.
@mariasolandresMD
@mariasolandresMD 6 ай бұрын
Hi! I am a medical doctor with little background in computing or mathematics, but great interest in data and its use for medical research and patient care. I am now drafting a booklet on machine learning for health care workers with no previous coding background and found this video extremely clear and helpful. Would you allow me to add a link to this video in the booklet?
@Eigensteve
@Eigensteve 6 ай бұрын
Absolutely, that would be great!
@tottiegod8021
@tottiegod8021 3 жыл бұрын
Great content for existing developers. Wow. Incredible. To say the least I am speechless. You didn’t waste my time and I appreciate that!!
@hurricane31415
@hurricane31415 2 жыл бұрын
I need to watch all the videos of this channel.
@DanWilan
@DanWilan 3 жыл бұрын
Finally a good presentation
@Eigensteve
@Eigensteve 3 жыл бұрын
Thanks!
@Illu07
@Illu07 4 жыл бұрын
Gosh i needed this intro at the start of my seminar paper...
@carnivalwrestler
@carnivalwrestler 4 жыл бұрын
Clear and concise. Thanks for posting.
@robertschlesinger1342
@robertschlesinger1342 5 жыл бұрын
Excellent overview on neural network architecture. Very interesting and worthwhile video.
@SaidakbarP
@SaidakbarP 5 жыл бұрын
Thank you for a good explanation. This is the quality of content we want to see! Tenfold better than Siraj Raval's channel, in my opinion.
@fzigunov
@fzigunov 5 жыл бұрын
Well, that makes sense given he's a renowned professor =)
@amegatron07
@amegatron07 5 жыл бұрын
I started to learn NNs in the good old early 2000s. No internet, no colleagues, not even friends to share my excitement about NNs with. But even then it was obvious that the future lay with them, though I had to concentrate on more essential skills for my living. Only now, after so many years have passed, am I coming back to NNs, because I'm still very excited about them and it is much, much easier now at least to play with them (much more powerful computers, an extensive online knowledge base, a community, whatever), not to mention career opportunities. I'm glad YT somehow guessed I'm interested in NNs, though I haven't searched for it, AFAIR. It gives me another impetus to start learning them again. Thanks for the video! Liked and subbed.
@SimulationSeries
@SimulationSeries 4 жыл бұрын
Adore this free online schooling, thanks so much Steve!!
@Eigensteve
@Eigensteve 3 жыл бұрын
Glad you enjoy it! Thanks!
@AllTypeGaming6596
@AllTypeGaming6596 4 жыл бұрын
So YouTube knows that I am currently learning neural networks, and this video appears in my recommendations. Great.
@reallynotadatascientist
@reallynotadatascientist 2 жыл бұрын
"...a smiley face, I took this from Wikipedia." You know he's an academic when he cites EVERYTHING. He cites a smiley face image.
@husane2161
@husane2161 4 жыл бұрын
Awesome concise high level explanation! Thank you
@parvezshahamed370
@parvezshahamed370 4 жыл бұрын
I have been looking for this content a really long time. Thanks so much.
@mathiasfantoni2458
@mathiasfantoni2458 3 жыл бұрын
I guess neurones can be thought of as functions that call other functions if a certain variable has a sufficient value. And the main difference between an ANN and our biological neural network is that an ANN has a fixed set of functions with fixed connections, only changing the conditions triggering the next callback, whereas brains can grow new neurones and even disconnect and rewire connections. The question then becomes: can we write a function that writes a new function? Or a function that modifies the contents of an existing function so as to change its callback to call a different function? If this holds true, we could get even closer to natural neural networks. I'm also debating with myself when to use “artificial” vs. “synthetic”. I guess an [A]NN can't rewire/reprogram itself, whereas a real one can? In which case, if we produce a neural network that indeed can change its own inner structure, we could promote it from “artificial” to “synthetic”? Great video. Definitely earned yourself a subscriber. :)
@mathiasfantoni2458
@mathiasfantoni2458 3 жыл бұрын
I was actually actively looking for a video like this - it wasn’t just the Algorithm™️ 😂
@YASHSHARMA-bf2mm
@YASHSHARMA-bf2mm 2 жыл бұрын
Thank you so much for the video! The way you teach makes learning so much fun :) If you had been born in ancient times, you alone would have pushed the literacy rate up by over 20%.
@ko-prometheus
@ko-prometheus Жыл бұрын
Can I use your mathematical apparatus, to investigate the physical processes of Metaphysics?? I am looking for a mathematical apparatus capable of working with metaphysical phenomena, i.e. metamathematics!!
@Radictor44
@Radictor44 4 жыл бұрын
Me: Why am I watching a video on neural network architectures? YouTube: Start learning, bitch
@josephyoung6749
@josephyoung6749 5 жыл бұрын
Amazing program... I love the thing he's drawing on that projects his diagrams.
@arnolddalby5552
@arnolddalby5552 5 жыл бұрын
Loved neural nets since 1998 when I read a book which showed how 3 layer nets can solve difficult problems. In the 21st century the neural nets are magnificent and a credit to the brains of the human race. I am using a 21st century neural net myself and it's great. Hahahaha. Great video
@mrknarf4438
@mrknarf4438 5 жыл бұрын
Clear, simple, effective. Thank you!
@mrknarf4438
@mrknarf4438 5 жыл бұрын
Also loved the graphic style. Were the images projected on a screen in front of you? Great result, I wish more people showed info this way.
@jaredbeckwith
@jaredbeckwith 4 жыл бұрын
Good overall neural net explanation!
@namhyeongtaek4653
@namhyeongtaek4653 3 жыл бұрын
I love this man. You are my role model.
@Eigensteve
@Eigensteve 3 жыл бұрын
Thanks so much!
@namhyeongtaek4653
@namhyeongtaek4653 3 жыл бұрын
@@Eigensteve OMG it's my honor😯. I didn't expect you would read my comment lol. I hope I could get in to UW this fall so that I could be in your class in person.
@jimparsons6803
@jimparsons6803 Жыл бұрын
Liked that the approach was direct and simple; and of course you can write your code in this manner too, so that you're not overwhelmed. Say four or five layers being coded; then you have outboard functions that handle the input and output arrays. That last part might take up most of the landscape of a program. Isn't this fellow clever? Dang. He's gotta be a professor somewhere. Many thanks. The computer training that I had gotten was very rudimentary, first in the 60s and then another drop in the mid 90s. Luckily there's YT where you can catch up. And after a while the 'training' starts to remind you of subliminal sorts of stuff. Maybe?
@Jorpl_
@Jorpl_ 4 жыл бұрын
Hey, I just wanted to say thank you for making this video. I found it really helpful! I particularly enjoyed your presentation format and the digestible length. About to watch a whole bunch more of your videos! :)
@BenHutchison
@BenHutchison 3 жыл бұрын
Oh wow I've been educated by your channel for a while now but did not realise you have published a textbook until your remark. Only A$80 here in Aus. Done! purchased..
@VulpeculaJoy
@VulpeculaJoy 4 жыл бұрын
Would it be possible to have the structure itself evolve over the learning process?
@garlxx
@garlxx 4 жыл бұрын
Yes, that's what genetic machine learning is for: basically survival of the fittest. This is what your YT algorithm is built upon.
@VulpeculaJoy
@VulpeculaJoy 4 жыл бұрын
@@garlxx Well, yes and no. Genetic machine learning can just mean that you take two different best-performing NNs that have the same structure and splice their propagation values (weights). That won't change anything about their structure, though.
@nias2631
@nias2631 4 жыл бұрын
The framework might be an issue too. Static computation graphs can be a problem; maybe it's doable with a dynamic graph.
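(For illustration, a toy Python sketch of the idea in this thread, added here and not part of the original comments: a hill-climbing loop that mutates the hidden-layer sizes themselves, not just the weights, and keeps whichever architecture scores best. The fitness function is a stand-in; in practice it would train and evaluate a network with those layer sizes. Real neuroevolution methods such as NEAT are considerably more sophisticated.)

```python
import random

def fitness(layer_sizes):
    # Placeholder objective: in practice, build and train a network with
    # these hidden-layer sizes and return its validation accuracy.
    return -sum(abs(s - 12) for s in layer_sizes)

def mutate(layer_sizes):
    """Randomly resize, add, or drop a hidden layer."""
    sizes = list(layer_sizes)
    op = random.choice(["resize", "add", "remove"])
    if op == "resize":
        i = random.randrange(len(sizes))
        sizes[i] = max(1, sizes[i] + random.choice([-2, -1, 1, 2]))
    elif op == "add":
        sizes.insert(random.randrange(len(sizes) + 1), random.randint(1, 16))
    elif op == "remove" and len(sizes) > 1:
        sizes.pop(random.randrange(len(sizes)))
    return sizes

best = [4, 4]                      # starting architecture
for generation in range(200):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):
        best = candidate           # the structure itself evolves over time

print("evolved hidden layers:", best)
```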
@hanyanglee9018
@hanyanglee9018 2 жыл бұрын
A question. 3:20, what are f,g and h? I didn't see anything similar to these.
@its_me_kirankumar
@its_me_kirankumar 4 жыл бұрын
YouTube recommended it, but I love it.
@FederationStarShip
@FederationStarShip 2 жыл бұрын
4:00 How come some of those don't have output nodes?
@raoofnaushad4318
@raoofnaushad4318 4 жыл бұрын
Thanks for sharing Steve
@antonioverdiglione1663
@antonioverdiglione1663 5 жыл бұрын
Hi Steve, very cool video, and you are a very good teacher. What kind of software did you use to do this lecture, with these images on the screen? Thanks a lot.
@turjoturjo7422
@turjoturjo7422 5 жыл бұрын
My question too. How did you draw on that screen?
@_modiX
@_modiX 5 жыл бұрын
The moment he started to draw on that screen I got lost and couldn't follow the topic anymore, because it's so amazing. I also like to know how this is done, please.
@zill150
@zill150 5 жыл бұрын
It's done using a lightboard; they also call it a learning glass.
@_modiX
@_modiX 5 жыл бұрын
@@zill150 Thank you, there are good behind the scenes videos regarding the lightboard on other learning channels. However, in this video he even projects an image on the glass. It cannot be post production, because he draws something related to the projected image. How is that possible?
@punitpatel5494
@punitpatel5494 5 жыл бұрын
@@_modiX Try searching for "smart mirror"; he is standing in front of a smart mirror and recording the mirror.
@youcanlearnallthethingstec1176
@youcanlearnallthethingstec1176 4 жыл бұрын
I like the way of explaining by projecting on glass board....very very nice...
@nghetruyenradio
@nghetruyenradio 4 жыл бұрын
Best. I love your lecture. It explains the problem in a simple way. Thank you so much.
@mr1enrollment
@mr1enrollment 4 жыл бұрын
Steve: nice talk... many questions come up; I'll ask a few. 1) Do you distinguish planar vs. non-planar networks? 2) Do RNNs become unstable? They look like time-dependent control-system processes. 3) Has anyone applied Monte Carlo methods to the selection of a NN's topology, or to activation function selection? Fascinating area to study.
@ArneBab
@ArneBab 4 жыл бұрын
Thank you for your video! Seeing your example of the singular value decomposition made neural networks much clearer to me than anything else I had seen till now. It allowed me to connect this to SVD-based linear modeling I used almost 10 years ago to create simplified models of visual features seen in fluid dynamics. I did not expect how much easier this would suddenly seem once it connected to what I already knew.
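(For illustration, a NumPy sketch of that connection, added here and not part of the original comment: a truncated SVD behaves like a linear autoencoder, with the top right singular vectors acting as the encoder and their transpose as the decoder. A nonlinear autoencoder generalizes this by replacing the two linear maps with multi-layer networks learned by gradient descent.)

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples, 50 features, with built-in rank-10 structure
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 50))

# Truncated SVD as a *linear* autoencoder
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 10                                # size of the "bottleneck"
encode = Vt[:r].T                     # 50 -> r  (project onto top singular vectors)
decode = Vt[:r]                       # r  -> 50 (map the code back)

Z = X @ encode                        # low-dimensional codes
X_hat = Z @ decode                    # reconstruction

print(np.linalg.norm(X - X_hat))      # ~0: rank-10 data is captured exactly
```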
@JordanMetroidManiac
@JordanMetroidManiac 5 жыл бұрын
This video is brought to you by YouTube's great Neural M. Network.
@satoshinakamoto171
@satoshinakamoto171 5 жыл бұрын
thank you. i somehow get inspiration from videos like these.
@tianz4710
@tianz4710 5 жыл бұрын
YouTube's recommendation system (powered by a neural network?) brought us here...
@matt-stam
@matt-stam 5 жыл бұрын
"Thanksgiving? Nah, neural network time" -YouTube
@Vasharan
@Vasharan 5 жыл бұрын
AI using humans to improve AI. Clever girl.
@klodianelshani7708
@klodianelshani7708 5 жыл бұрын
@@Vasharan they have become sneakily clever xD
@user-cf2pl9uy5k
@user-cf2pl9uy5k 5 жыл бұрын
How are you able to draw on your presentation in real time? What is this type of presentation called?
@nicolasfiore
@nicolasfiore 5 жыл бұрын
I'm scratching my head about that too. Please someone enlighten me before I start bleeding!
@nicolasfiore
@nicolasfiore 5 жыл бұрын
@Mwaniki Mwaniki it's not. I found the explanation and shared it in another comment. It's something called Lightboard (look it up, it's quite interesting) plus a monitor with the slides that were added to the video in post later on. Probably.
@mohamedmoustafa8924
@mohamedmoustafa8924 4 жыл бұрын
YouTube recommender: "oh sht, dat's me"
@tsylpyf6od404
@tsylpyf6od404 Жыл бұрын
7:45 Can it be combined with a Decision Tree? I think it would be a good idea, and I have found some research that has a similar idea
@goodlack9093
@goodlack9093 Жыл бұрын
Love your videos and your book! Can't wait to start working through it actually!
@smilefaxxe2557
@smilefaxxe2557 5 жыл бұрын
So YouTube decided to make this 5-month-old video famous? :D All the comments are at most 2 hours old...
@jvsonyt
@jvsonyt 5 жыл бұрын
2 days later and I'm here haha
@cyberneticbutterfly8506
@cyberneticbutterfly8506 5 жыл бұрын
It could easily be that some person with a lot of followers shared the video. Then it has more views, which makes it a more recommended video.
@jvsonyt
@jvsonyt 5 жыл бұрын
@@cyberneticbutterfly8506 so the WHOLE system is self aware?
@cyberneticbutterfly8506
@cyberneticbutterfly8506 5 жыл бұрын
@@jvsonyt Hardly. It's just a trigger. Person A with a high number of followers shares a video -> their followers go watch the video -> the video's view count increases -> IF the video gains X views THEN bump its ranking in recommendations by Y amount -> you now get it in your recommendations.
@jvsonyt
@jvsonyt 5 жыл бұрын
@@cyberneticbutterfly8506 aliens
@neiltucker1355
@neiltucker1355 Жыл бұрын
a fantastic overview thanks!!♥
@tw0ey3dm4n
@tw0ey3dm4n 5 жыл бұрын
Strangely enough. I needed this vid. Thank you YT ALGO
@ts.nathan7786
@ts.nathan7786 Жыл бұрын
Very good explanation. 🎉
@luiscordovadsgn
@luiscordovadsgn 4 жыл бұрын
Recommended gang, where you at?
@easylearn9350
@easylearn9350 5 жыл бұрын
Simple, perfect, enjoyable explaining of DNNs. Thanks for sharing!
@userou-ig1ze
@userou-ig1ze 4 жыл бұрын
simply great, thanks for this intro video
@karemabuowda2695
@karemabuowda2695 3 жыл бұрын
Thank you very much for this extraordinary way of teaching.
@lucasb.2410
@lucasb.2410 5 жыл бұрын
Amazing video and explanation; focusing on key points is very valuable for such sciences. Thank you a lot and keep doing that!
@CognitiveArchitectures
@CognitiveArchitectures 5 жыл бұрын
I'd submit that your architecture diagrams are missing a box for the process acting upon the network. It's great to show the data, but the process should be shown as well. For example, what if you have two processes acting upon the same neural network graph simultaneously? Where would those processes be depicted?
@randythamrin5976
@randythamrin5976 4 жыл бұрын
Amazingly good explanation and simple words for a non-native English speaker like me.
@ankitbhurane5130
@ankitbhurane5130 4 жыл бұрын
How did you get the computer screen on the glass? Please let me know, I need it for my classroom.
@_jikkujose
@_jikkujose 4 жыл бұрын
Same here!! I thought he was approximating looking at a different screen, till he started drawing on the glass.
@navinbondade5365
@navinbondade5365 4 жыл бұрын
kzbin.info/www/bejne/fIraiYKCipmHgc0 thank me later
@toonheylen4707
@toonheylen4707 4 жыл бұрын
Amazing video, thanks for the information
@abhaythakur8572
@abhaythakur8572 4 жыл бұрын
Thanks for this explanation
@GlobalOffense
@GlobalOffense 5 жыл бұрын
Great explanation. Thank you.
@vesperide598
@vesperide598 4 жыл бұрын
3:38 What is the difference between the Memory Cell's color and the Output Cell's color? ;-;
@garfieldbart
@garfieldbart 4 жыл бұрын
I think there is no difference, but if they are at the edge (right side) they are probably output cells; if they are somewhere in the middle, they are probably memory cells.
@sitrakaforler8696
@sitrakaforler8696 Жыл бұрын
Really clear. Thanks for the vidéo !
@JohannesSchmitz
@JohannesSchmitz 5 жыл бұрын
Could you please do a follow up on this? I basically came here for the "many many more" you mentioned towards the end. LSTMs and other architectures that are useful for time series processing. It would be nice if you could do an overview video about that class of networks.
@charimuvilla8693
@charimuvilla8693 4 жыл бұрын
Didn't know about that encoder thingy. Got me thinking about stuff. Is the right side mirrored so it does the opposite of the left side or are they trained separately?
@nias2631
@nias2631 4 жыл бұрын
They are trained together. The encoder half mirrors the decoder in structure. You feed in an image and it passes through encoder to decoder and the decoder produces a reconstruction of the original. The error is calculated between the two. The gradients of the error are then backpropped to adjust the network weights and improve reconstruction. For me, I see the encoder as a series of projections onto lower dimensional affine spaces, the network is finding the least important null space to throw away. The bottleneck or latent space size has to be played with to figure out the minimal size for a good reconstruction. In an information theory sense the smallest bottleneck for a good reconstruction is kind of like an optimal message encoding. When trained you can split the encoder and decoder and use them separately if you freeze the weights.
@charimuvilla8693
@charimuvilla8693 4 жыл бұрын
@@nias2631 Thanks for the detailed answer!
@nias2631
@nias2631 4 жыл бұрын
@@charimuvilla8693 no prob
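(For illustration, a minimal PyTorch sketch of the training setup described above, added here and not part of the original comments: encoder and decoder are trained together by backpropagating the reconstruction error, with the input itself as the target. The layer sizes, 784-pixel inputs, and random batch are made up for concreteness; the bottleneck size is the knob to tune.)

```python
import torch
import torch.nn as nn

bottleneck = 8                         # latent / bottleneck dimension to tune

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, bottleneck))
decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(), nn.Linear(128, 784))

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                # stand-in for a batch of flattened images

for step in range(100):
    z = encoder(x)                     # compress to the latent code
    x_hat = decoder(z)                 # reconstruct from the code
    loss = loss_fn(x_hat, x)           # reconstruction error: the input is the target
    optimizer.zero_grad()
    loss.backward()                    # backprop through decoder AND encoder together
    optimizer.step()

# After training, the encoder and decoder can be split and used separately.
```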
@doctorshadow2482
@doctorshadow2482 Жыл бұрын
Hey Steve, thank you a lot for all your brilliant videos! One request on the topic: could you please cover how all this works with shift/rotation/scale of the image? Nobody on YouTube covers this tricky part of the neural networks used for image recognition. I keep my fingers crossed that you're the one who can clarify this.
@IamWillMatos
@IamWillMatos 5 жыл бұрын
Great work on this video!
@wangjing8574
@wangjing8574 5 жыл бұрын
Why doesn't your DAE have an encoding process? There should be fewer neurons in the hidden layer. And GANs should be inferring an image from a vector, so there should be more output neurons than input neurons.
@radhikasece2374
@radhikasece2374 Жыл бұрын
Thanks for your explanation in the video; I have learned a lot. I am doing research in speech emotion recognition. Can you please tell me which deep learning algorithms will work best?
@moaazkhaled6653
@moaazkhaled6653 4 жыл бұрын
Do you have more videos about ANNs and CNNs? I looked at your channel but could not find any.
@okboing
@okboing 3 жыл бұрын
Finally, now I know what CaryKneesHurt is actually rambling on about!
@mahamatissa1711
@mahamatissa1711 Жыл бұрын
How did you do the video editing? What software did you use? I am very interested to know how you made this video.
@AbeDillon
@AbeDillon 5 жыл бұрын
Autoencoders are awesome because they don't require labeled data. The data is the label.
@TheRaxxy1
@TheRaxxy1 5 жыл бұрын
how does he write with marker on correct places if the images on the desk are virtual???
@Luciencooper
@Luciencooper 5 жыл бұрын
Stupid question, but what did you use to write on the screen at around 6m in?
@Macatho
@Macatho 5 жыл бұрын
I was wondering the same thing. I'm guessing he is standing in front of a glass screen and the animations are displayed on a monitor which he is watching in real-time. Just to use as a space reference where to draw. Then, of course, the video is mirrored in post-production.
@aloka1997
@aloka1997 4 жыл бұрын
Excuse me, I have a question not about neural networks but about the video itself: how did you shoot this video?! I don't think you added these images and the presentation in post; I think you can see them and where they are on the screen. I think there is a glass in front of you with the presentation displayed on it. How did you make this?! Thanks.
@hahe3598
@hahe3598 2 жыл бұрын
Dear Sir, would you mind advising which book talks in particular about each of the architectures illustrated in the neural network zoo? Thanks.
@aminnima6145
@aminnima6145 3 жыл бұрын
Thank you for this beautiful explanation.. I really enjoy it.
@tigerroar6071
@tigerroar6071 4 жыл бұрын
Wow! How do you visualize this information? Do you have Iron Man technology?