Adversarial Attacks on Neural Networks - Bug or Feature?

93,812 views

Two Minute Papers

1 day ago

❤️ Support us on Patreon: / twominutepapers
📝 The paper "Adversarial Examples Are Not Bugs, They Are Features" is available here:
gradientscience...
The Distill discussion article is available here:
distill.pub/20...
If you wish to play with some of these Distill articles, look here:
- distill.pub/20...
- distill.pub/20...
Andrej Karpathy’s image classifier - you can run this in your web browser: cs.stanford.ed...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil.
/ twominutepapers
Thumbnail background image credit: pixabay.com/im...
Splash screen/thumbnail design: Felícia Fehér - felicia.hu
Károly Zsolnai-Fehér's links:
Instagram: / twominutepapers
Twitter: / karoly_zsolnai
Web: cg.tuwien.ac.a...

Comments: 174
@user-kw9cu 5 years ago
What people think the war against AI looks like: humans killed by robots. What it actually looks like: humans attacking AI by changing pixels.
@KuraIthys 5 years ago
*is about to be attacked by a combat robot*
*holds up a piece of cardboard with a strange pattern on it*
AI: Tree detected. No target found.
Yeah... XD
@odw32 5 years ago
American/Russian/Chinese/European AIs all trying to fool the other ones by screaming various kinds of noise at each other, then carefully trying to figure out whether the noise fooled the other AIs, or whether the other AIs are just pretending to be fooled as part of a counter-adversarial attack.
@enormousmaggot 5 years ago
Humans hiding from robots by drawing pixels on themselves, thus being classified as airplanes.
@abyteuser6297 5 years ago
@enormousmaggot Pixels hiding from planes by drawing themselves on humans, thus being classified as robots
@vannoo67 5 years ago
@enormousmaggot Would you also need to hold your arms out?
@atomscott425 5 years ago
I really wish more papers were on Distill, it's really amazing.
@MaxwellMcKinnon 3 years ago
Can you elaborate? I'm both curious whether I missed something about distillation and thinking maybe I can offer insight. I've studied and used distillation in an application before.
@StickerWyck 5 years ago
Look, all I'm saying is that the bus did look a little bit like an ostrich.
@immortaldiscoveries3038 5 years ago
ikr, their algorithm is the problem... the bus looks like a bus to me! No ostrich, BARELY, 0.001%
@ChrisD__ 5 years ago
YES FELLOW HUMAN, THAT BUS WAS ALMOST CERTAINLY AN OSTRICH IN DISGUISE.
@nicolasfiore 5 years ago
Why do you guys keep calling that ostrich "a bus"?
@larryng1 4 years ago
@nicolasfiore Agreed! I only saw two ostriches.
@barjuandavis 5 years ago
Bug or feature? *YES.*
@claxvii177th6 5 years ago
I see you are a man of logic as well.
@martiddy 5 years ago
Bethesda in a nutshell
@RubsNL 5 years ago
Quantum logic unlocked
@my3bikaht88 5 years ago
Maybe
@danielschwegler5220 4 years ago
No, Marcel Davis
@moth.monster 4 years ago
*speck of dust lands on stop sign* AI: Yeah, I think that's a green light, go ahead
@odw32 5 years ago
The question is: who uploads noisy cat videos to YouTube to trick the algorithm into recommending me a strange documentary about the history of toilets every few months?
@JM-us3fr 5 years ago
Orian de Wit Actually, this sounds like an ingenious attack on YouTube's algorithm
@vinayreddy8683 5 years ago
This is what I'm thinking
@cube2fox 4 years ago
@JM-us3fr Does YouTube even analyze videos? I thought analyzing the video title, description, and comments would be much simpler and accurate enough.
@circuit10 4 years ago
@cube2fox I heard it automatically flagged a video of robot dogs for animal abuse
@ShubhamYadav-xr8tw 5 years ago
In my opinion Distill needs more publicity. Thanks for highlighting them!
@isg9106 5 years ago
That's really interesting; I've never heard of a discussion article thread before. I thoroughly enjoyed this, and hope to hear more about it!
@warmpianist 5 years ago
Every time I see this, I have my fun analogy: my key has got a very small scratch. It won't open my house anymore, but it worked on someone else's car instead! What happens if someone else's key with small scratches actually unlocks my house?! We should have the unlocking system fixed!
@MrMysticphantom 5 years ago
Okay... did not know about Distill... great... there goes my free time
@claxvii177th6 5 years ago
I LOVE ALL YOUR VIDEOS. No matter how flashy the articles you share are, they are consistently informative and they ALWAYS provide a good read. (Granted, I often don't read the whole of them, but that's on me XD)
@TwoMinutePapers 5 years ago
You are very kind, thank you so much! :)
@ronnetgrazer362 5 years ago
I shouldn't be drunk-commenting, prolly gonna regret this but... the passion, the sheer relentlessness with which this guy engages every single facet of the discipline... brings a tear to me eye. I'll shut up now. Don't do ethanol, kids. Thanks Károly.
@TwoMinutePapers 5 years ago
🙏
@noergelstein 5 years ago
If the adversarial features arise from the dataset and can be eliminated after being found, wouldn't it also be possible to do the reverse and poison a dataset with a sort of backdoor?
@Pfaeff 5 years ago
One pixel in a 32x32 image is roughly the same relative area as a hundred pixels in a 224x224 image, though.
@Meg_A_Byte 5 years ago
100 pixels is only an area of 10x10 pixels; it's still nothing if you look at where those pixels were added.
@StormBurnX 5 years ago
@Kipras.Skeirys The main difference is that a 10x10 pixel chunk, which has the same relative area, is quite noticeable, while 100 random pixels scattered throughout the image would simply look like a very tiny bit of noise, if noticeable at all.
@therogerrogergang8517 5 years ago
So you only need to change 0.1% of an image to fool it
@UnitSe7en 5 years ago
@happyfase Did you look at the examples? I'd posit that your assertion is 100% not true.
@WilliamBoothClibborn 5 years ago
Keep going please! I need these updates to keep me in the loop on the research.
@aaronvr_ 3 years ago
He should get a narrator, seeing as the channel's grown pretty big over the past two years alone.
@NetherFX 5 years ago
AI takes over: "Just change 1 pixel"
@kebakent 5 years ago
I'm sure most social networks have an aggressive NSFW filter that provides fast feedback. It would be fun to see if it could be cheated using these methods.
@kebakent 5 years ago
@Ahmed Nader CNNs tend to have some preprocessing, often starting out with some kind of cropping and scaling. Throwing different manipulated images at it might reveal their process. In any case, this sort of attack is not new, as previous work revealed how structured noise could change the classification. I believe some spam uses noise to cheat the filters, possibly for this reason.
@ArnoSelhorst 5 years ago
You don't need "visual fireworks" to get me in here every time. You do splendid work nonetheless. Keep it up! You are an intriguing source of new insights.
@MobyMotion 5 years ago
Karoly, please keep making videos that interest you and your viewers - I don't care if it's lacking the visual "fireworks", this topic is important
@keco185 5 years ago
Adversarial attacks like this seem like a great way to train neural nets. It's like a specialized version of a GAN
@funkybob7772 5 years ago
Fool me once, shame on you. Fool me 100,000,000 times, shame on me ;)
@Henrix1998 5 years ago
How about just training with random noise added? That could get rid of the noise dependency
@warmpianist 5 years ago
Henrix98 It could work similarly to data augmentation, but I don't think we can cover all variations of noise. If we train with (img + noise 1) and (img + noise 2), we might not get the same result if we test (img + noise 3), or even (img + noise 1 + noise 2). I would think of it like this: define new img = old img + noise 1. If the new img is trained on, we can find a carefully crafted noise 2 such that (new img + noise 2) generates a wrong result. And if we have N noises to try, we would need N times more time to train the model.
@buttonasas 5 years ago
This has been tried and it doesn't work. The network still tends to learn patterns instead of objects, so giving a wolf a sheep's fur will likely fool it anyway. It will be harder than 1 pixel, though. No source, sorry!
@bukovelby 4 years ago
There is "style transfer augmentation", which I believe does the trick
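The random-noise augmentation discussed in this thread can be sketched in a few lines of NumPy (all names here are illustrative, not from the paper); as the thread notes, random noise covers only a vanishing fraction of possible perturbations, so an attacker can still search for a pattern the network has never seen:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_noise(images, n_copies=2, sigma=0.05):
    """Return the original batch plus n_copies noisy variants of each image.

    This is plain Gaussian-noise data augmentation; it does not defend
    against perturbations optimized specifically against the network.
    """
    out = [images]
    for _ in range(n_copies):
        noisy = images + rng.normal(0.0, sigma, size=images.shape)
        out.append(np.clip(noisy, 0.0, 1.0))  # keep pixel values in [0, 1]
    return np.concatenate(out, axis=0)

batch = rng.random((8, 32, 32, 3))  # a toy batch of 32x32 RGB images
augmented = augment_with_noise(batch)
print(augmented.shape)              # (24, 32, 32, 3): originals + 2 noisy copies each
```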
@MatiasPoggini 5 years ago
Interesting take on peer reviewing and cross-examining a paper. Do you (or any commenters) know if this happens in the humanities as well?
@enormousmaggot 5 years ago
LOVE this content. Your title is what made me view this particular one, actually.
@odw32 5 years ago
Content-wise: I love the mix you bring. Sometimes ice cream for the eyes, sometimes ice cream for the mind. I think it's also important to cover AI security, ethics, and implications for society. My absolute favorite videos, though, are when you cover projects where I can download the Python code and put my graphics card to work 😁
@TwoMinutePapers 5 years ago
Thank you so much for the kind feedback Orian! Ice cream for the mind... damn, I wish I had come up with this one. Mind if I use it? 🙂
@odw32 5 years ago
@TwoMinutePapers Not at all! Human communication is the most beautiful neural net; ideas that work well should propagate freely 😄
@TwoMinutePapers 5 years ago
Noted, thank you!
@roua. 5 years ago
I wonder if we could reduce the chance of a network getting tricked by these types of attacks by adding our own white noise on top of the image before feeding it into the network. I guess that might also reduce the overall accuracy of the network in some cases
@maxinealexander9709 5 years ago
Fascinating topic, as always. Keep up the good work!
@mickmickymick6927 5 years ago
Very nice pepper today, I should get your recipe some time.
@MichaelSHartman 5 years ago
It answered some thoughts I had on the brittleness of image recognition. I was surprised that a single pixel can do this at this stage of development
@leafhappy 4 years ago
More discussions, rebuttals, and replicability in science!
@JamesHUN 5 years ago
Why would you call it noise when it is computed specifically to reach a goal, not just randomly drawn?
@phillipotey9736 5 years ago
This idea the paper has about creating mini discussions is crazy awesome! I need to look more into it, but it could solve a lot of replication issues
@davidmartin1628 4 years ago
I would love to see more journals with a discussion section where other experts can publicly discuss research. There are so many unreplicable studies that make it into peer-reviewed journals and deserve to be publicly scrutinized, since flawed research papers waste other researchers' time when they try to build on said research!
@georhodiumgeo9827 5 years ago
I am upvoting this so hard; hopefully it gets you some more views.
@Flowtail 4 years ago
God, this channel is so pure
@0x0404 4 years ago
That is interesting, and it shows how important a proper training set is, since the algorithm will go with whatever is most consistent, even if that thing has nothing to do with the actual material.
@AsmageddonPrince 5 years ago
If you think about it, a big part of human cognition is those exact non-robust features. All our cognitive and memory biases and a good chunk of our behavior are basically quick hacks our brains have that get in the way of properly abstract reasoning.
@Flowtail 4 years ago
Dude, you would've got way more views on this if you had made the title something like "One Weird Pixel Makes This AI Think Everything Is an Ostrich"
@cube2fox 4 years ago
Google's reCAPTCHA apparently sometimes uses adversarial attacks on their images of cars, traffic lights, etc. I noticed some very artificial-looking noise on some of the images.
@FrazerKirkman 5 years ago
Discussion articles are a great idea.
@jlnrdeep 5 years ago
This is a modern and awesome way to enhance conversation about a topic. Nice 👌
@skyacaniadev2229 5 years ago
Just add a new kernel that decides which pixel will be chosen for pooling instead of pooling directly. The CNNs before were not designed to prevent this trick; if they want, they can easily come up with some mechanism to deny this attack...
@ophello 5 years ago
Then these networks are NOT "seeing" at all. We need to make a system that cannot be fooled this way.
@ophello 5 years ago
Hopi Ng A system that can be fooled this way is not seeing like we see. Your analogy doesn't make sense. Even without color, we can still accurately identify objects in a photo without being tricked by one pixel being changed.
@eduardoachach4099 5 years ago
@ophello I would put a big asterisk on that "accurately". I mean, was the dress white and gold or black and purple? And how about all the optical illusions out there? We may not be fooled in the same way, but our perception can easily be tricked as well. If you've been following this channel, there was a paper showcased in which they even applied the same noise technique targeted at humans and AI: /watch?v=AbxPbfODGcs
@ophello 5 years ago
Eduardo Achach Dude, those examples are so completely far away from this system that it's laughable. You can't trick a human into seeing something completely different by changing a tiny part of the image. That's not how we see. We see by generalizing the whole image. You can't trick the eye into seeing a photo of a cat when it's actually a dog by changing the color of one pixel of the image. Get it? Finally??
@williamrichards5241 5 years ago
This paper style is worth exploring more.
@nonameplsno8828 5 years ago
Wasn't there a paper about how adversarial neural networks encode information in the noise so that they could cheat? Something about satellite images to maps? Because it looks like that got modified in the noise attack.
@SymEof 5 years ago
It's definitely a more interesting format, even though the normal format is great in other regards.
@kalebbruwer 4 years ago
What if you have two independently trained classifiers (identical except for their initial state before training)? How hard would it be to fool both with the same alteration?
@kfftfuftur 5 years ago
But if you have two independent networks that are trained to classify images, would they fall for the same wrong pixel, or would you need to fool them independently? If so, can you come up with a noise pattern that fools both networks?
@TheSolarScience 4 years ago
Could one apply a textured/"pixelated" makeup to avoid facial recognition?
@kebakent 5 years ago
Well, if it's somehow recognizing DNA, that dog is probably 99.9% cat.
@donotlike4anonymus594 5 years ago
While I understand how it works, it still feels amazing... the point we've reached with AI... and how easy it is to manipulate...
@skr_8489 5 years ago
Károly, how do these noise patterns perform if the image is grayscale and pre-processed for better contrast between lines and surfaces? I noticed that in all these examples, the neural networks work on color images. But human perception has a split between color and shape.
@oddsandexabytes 5 years ago
Thanks! You always give me something interesting to think about
@Alekosssvr 5 years ago
Excellent overview!
@larryng1 4 years ago
wow, great discussion!
@jaydeepvipradas8606 5 years ago
This problem might be fixed by varying the pixel size of an image. Merging, say, every 3x3 block of pixels into 1 pixel across the entire image could help the neural network classify correctly; or 4x4 pixels into 1 pixel. Usually the things we want to classify in an image are bigger than 8x8 pixels. Multiple training sets would have to be created: the original image, an image where each pixel is a 3x3 block of the original, and another where each pixel is, say, a 5x5 block of the original.
@Guztav1337 5 years ago
I feel like if it were that easy, the researchers would have already done it
@MegaKakaruto 5 years ago
Isn't this the main idea of CNNs?
@jaydeepvipradas8606 5 years ago
@MegaKakaruto A CNN looks for hierarchical patterns, like a door knob pattern inside a door pattern. This is more like pre-processing the data to create a better training set: a human-eye-like zoom-out for better visualization before the data goes to the neural network. The downside is that after training, for run-time usage of the network, an image again has to be translated into 3 images for pattern matching. Some noise removal techniques could also help here. Or we could train multiple networks on the same data, where each network uses a different approach - e.g., one network for edge-detected shapes, one CNN-like network, etc. - and then combine the outputs from each network to reach a final conclusion.
@MegaKakaruto 5 years ago
@jaydeepvipradas8606 Wow, thanks for the detailed answers! There's so much stuff I need to learn.
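The block-merging idea in this thread amounts to average downsampling. A minimal NumPy sketch (a hypothetical helper, not the paper's method) shows why it only dilutes, rather than prevents, a single-pixel perturbation:

```python
import numpy as np

def block_average(img, k):
    """Downsample a grayscale image by averaging non-overlapping k x k blocks.

    A one-pixel change gets spread over a k*k block, so its effect is
    diluted by a factor of k*k - reduced, but not removed.
    """
    h, w = img.shape
    h2, w2 = h - h % k, w - w % k  # crop so the dimensions divide evenly
    blocks = img[:h2, :w2].reshape(h2 // k, k, w2 // k, k)
    return blocks.mean(axis=(1, 3))

img = np.zeros((32, 32))
img[5, 7] = 1.0                    # a one-pixel "attack" on a blank image
small = block_average(img, 4)
print(small.shape)                 # (8, 8)
print(small.max())                 # 0.0625, i.e. the spike diluted 16x
```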
@FiguraCinque 5 years ago
ty sir
@Teluri 5 years ago
1. Sooo, could this be used in a similar way to a captcha? (stopping advanced bots from spamming and stuff) 2. What about an AI with the goal of fooling another generic image recognition AI while making the fewest changes possible?
@vd.se.17 4 years ago
Thanks for this. I need your help: where can I find all the machine learning papers from the last 3 years? Please reply. Thank you.
@badhombre4942 5 years ago
More interesting would be to learn why the AI thinks a horse with a hole is a bus.
@kylebowles9820 5 years ago
This one is very interesting! Could they have naively corrupted the dataset with salt-and-pepper noise to address that weakness? That'd probably be more inefficient in training resources and only move the goalposts slightly
@JoaoVitor-mf8iq 5 years ago
Sometimes a paper is not the best way to pass on knowledge; the structure is very important. It's pretty bad to create something good and have no visualizations, or to create something not that great and be famous. Most machine learning papers should have a link to GitHub or something like that, for example.
@AtulLonkar 5 years ago
One pixel attack! Sounds like good news :-)
@johnniefujita 5 years ago
excellent man!! keep up the good work!
@paulgarcia2887 5 years ago
1 pixel: I'm about to end this whole neural network's career
@terner1234 5 years ago
this "one pixel attack" isn't fair, because those pictures are very low-res
@VictorCaldo 5 years ago
Amazing, thank you.
@robertweekes5783 4 years ago
It sounds like some AIs took major shortcuts with image classification
@frankx8739 5 years ago
The man who thought his wife was a hat.
@noviian3092 4 years ago
I don't think a lot of people know what Frank is referencing, so I'll link it here. Super interesting stuff: it's about neurological disabilities and illnesses, leading up to one person who mistook his wife for a hat. en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_for_a_Hat
@nopethisisnotreal1434 5 years ago
Make more of this!
@sermuns 5 years ago
This is how we will fight the AI revolution
@Wecoc1 5 years ago
1:20 Wait... Always an ostrich? When it has no idea what it could be, it simply goes "Must be an ostrich"? I love that AI 😂
@Villfuk02 5 years ago
No, it was specifically tricked into thinking it was an ostrich
@npip99 5 years ago
The idea behind the adversarial attack is that you write an algorithm that, given a neural net and a photo of a bus, manipulates the photo only slightly to trick the neural net into thinking it's an ostrich. They specifically forced it to be an ostrich; they could have forced it to be a car, because they're cherry-picking exactly the pixel manipulations needed to trick it. If you change a random pixel of a bus, it'll almost always still be a bus.
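The targeted search npip99 describes can be sketched against a toy linear "classifier" (everything below is illustrative; real one-pixel attacks search against a deep network, e.g. with differential evolution, rather than this brute-force loop):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy linear classifier over flattened 8x8 grayscale images, 3 classes.
W = rng.normal(size=(3, 64))

def predict(img):
    """Predicted class index: argmax of the linear scores."""
    return int(np.argmax(W @ img.ravel()))

def one_pixel_attack(img, target, step=5.0):
    """Greedily try bumping each single pixel up or down to push the
    prediction toward `target`; return the perturbed image, or None."""
    for i in range(img.size):
        for delta in (step, -step):
            candidate = img.copy()
            candidate.flat[i] += delta
            if predict(candidate) == target:
                return candidate
    return None

img = rng.random((8, 8))
source = predict(img)
target = (source + 1) % 3          # force any class other than the original
adv = one_pixel_attack(img, target)
if adv is not None:
    changed = int(np.sum(adv != img))
    print(source, "->", predict(adv), "by changing", changed, "pixel(s)")
```

The point of the sketch is that the perturbation is optimized against the classifier, not random: the loop keeps only the rare single-pixel change that flips the prediction to the chosen target.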
@SuperVfxpro 5 years ago
Could this overlay be used to get into someone's facial recognition security?
@SuperVfxpro 5 years ago
Identifying as someone else, that is
@adammartin6347 5 years ago
there's a puppy pic in the thumbnail - obviously it's gonna go viral
@werty7098 5 years ago
This work is brilliant
@kyleanderson1613 5 years ago
This is interesting stuff.
@jeffGordon852 3 years ago
So you're basically saying that I can wear a cloak in the future robot war so they think I'm a friend? Cool
@bernardvantonder7291 5 years ago
Another awesome episode!
@googacct 5 years ago
I wonder what GPT-2 could come up with as an argument: is it a feature or is it a bug?
@souravjha2146 4 years ago
Is it a bug or a feature of ML?
@cipherxen2 5 years ago
Can it be termed a "lacuna"?
@linusklocker2890 5 years ago
I always thought you were saying in the intro: "Dear Fellow Scholars, this is Two Minute Papers with [name] here." But it is "...this is Two Minute Papers with Károly Zsolnai-Fehér!" Thanks to the comments
@gorgolyt 5 years ago
I prefer interesting conceptual videos like these over "visual fireworks" videos, and I'd be very happy if the channel shifted its balance a bit more in this direction... anyone else agree?
@TwoMinutePapers 5 years ago
Noted - thank you so much for the feedback!
@darksidegirl 5 years ago
I'm a Patreon supporter. Join, guys!
@TwoMinutePapers 5 years ago
Thank you so much for your support! 🙏
@andybaldman 5 years ago
It's a frog. You can tell by the pixel.
@madcio 4 years ago
One pixel in an extremely low-res image? Am I supposed to be impressed by that?
@marverickbin 5 years ago
I am still on the single-pixel camera.
@ellenajt1027 4 years ago
This gives a pretty convincing explanation of why one-pixel attacks are (perhaps) not too surprising: arxiv.org/abs/1901.10861
@eleos5 4 years ago
Standard training is very effective
@clapton79 5 years ago
Wow, this is serious. I would not say it is a bug, but it's definitely something that scientists want to fix to avoid serious vulnerabilities.
@puppy3908 4 years ago
I love these
@Flowtail 4 years ago
!?!? I feel feelings of joy???
@o1ecypher 5 years ago
YOU HAVE CREATED A PARADOX IN THIS VIDEO
@Omar-bi9zn 5 years ago
?
@NICK.... 5 years ago
Aren't all neural networks technically bugs? Bug: "An error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or *unexpected result*, or to behave in unintended ways."
@shagster1970 5 years ago
It makes you wonder how we can identify it, though.
@Flowtail 4 years ago
Holy shit, I've seen that thing at 2:44! There's a great website called Explorable Explanations that may be of interest to you
@OpreanMircea 5 years ago
I love this
@hazzard77 5 years ago
Has anyone taken a Monte Carlo approach to sampling machine learning inputs?
@Serthys1 5 years ago
What about making a Two Minute Papers podcast out of the ones that don't have visual fireworks?