'How neural networks learn' - Part II: Adversarial Examples

54,447 views

Arxiv Insights

1 day ago

Comments: 93
@nimasanjabi626 · 6 years ago
This is a super-expert view on neural networks. It discusses fooling (in other words, hacking) NNs, while NNs themselves are still highly complicated structures for most people, even experts. Precious content, I say.
@nishparadox · 6 years ago
Just discovered this channel. Awesome content. Better than most of the hyped channels around regarding ML and Deep Learning. Cheers. :)
@dheten4462 · 6 years ago
True..
@JonesDTaylor · 4 years ago
He stopped maintaining this channel, I'm afraid :(
@revimfadli4666 · 4 years ago
@@JonesDTaylor nooo, why?
@TopGunMan · 6 years ago
All of your videos are incredibly well made and presented. Perfect mix of detail and generality and excellent visuals. So hooked.
@Niteshmc · 6 years ago
This video deserves so much more attention! Great Job!
@iiternalfire · 6 years ago
Great content. One suggestion: can you please provide links to the research shown in the video description?
@WonderSilverstrand · 4 years ago
Yes, citations please.
@pial2461 · 6 years ago
Your vids are just gold. Please do more videos on other nets like RNN, CNN, ladder net, DBN, seq2seq, etc. I think you can make people understand better than anyone. Best of luck. Really a big fan of your content.
@sushil-bharati · 4 years ago
@Arxiv Insights - The paper you showed does not claim that the sunglasses fool 'every' facial recognition system. In fact, they are personally curated and would work for a particular recognition neural net.
@hikingnerd5470 · 6 years ago
Great video! One suggestion is to include links to all relevant papers.
@MartinLichtblau · 6 years ago
They are a blessing! They show that NNs are getting something fundamentally wrong, and we can gain insight from those wrong classifications to understand what's really going wrong. I think we humans don't use any chroma or hue information for object detection; we only detect structural patterns (at first).
@williamjames6842 · 6 years ago
9:48 that comment was pretty deft. The neural network's sarcasm is showing. I'd give that comment a positive rating too.
@hackercop · 3 years ago
This was absolutely fascinating. Have liked and subscribed!
@kareldumon808 · 6 years ago
Nice! Didn't know yet about cross-model generalization. Also nice to have your take on how to avoid and even exploit these attacks. Keep up the video-making & posting :-)
@maximgospodinko · 6 years ago
You deserve many more subscribers. Keep up the good work.
@bjbodner3097 · 6 years ago
Super cool video! Love the depth of the content! Please keep making more videos:)
@inkwhir · 6 years ago
Wow, your videos are fantastic! The format is great, the content is awesome... Please post more videos :D
@xianxuhou4012 · 5 years ago
Thanks for posting the awesome video. Could you please provide the reference (paper) at 14:12?
@ScottJFox · 6 years ago
Just subscribed for part III! :D
@dacrampus2656 · 3 years ago
Really great videos, thanks!
@jc-wh9mq · 4 years ago
Love your videos, keep it up.
@substance1 · 4 years ago
Humans also have adversarial examples in the form of pareidolia, seeing faces in inanimate objects. It's an evolutionary thing that helps humans detect predators. An example is people who scour the images taken on Mars and claim certain rocks are the heads of broken statues, when it's really just a rock photographed at a particular angle.
@MithiSevilla · 6 years ago
Thanks for making this video. I hope you also link to your references for this video in the description box like you did in part one. Thanks, again!
@ArturVegas · 6 years ago
great work! keep developing your great channel! 💎
@yusun5722 · 4 years ago
Great video. Perhaps humans are robust to the adversarial examples of computers, and vice versa. In the end it comes down to aligning the two adversarial distributions.
@nikab1852 · 4 years ago
Great videos! What are your sources for this video? I'm trying to find the ostrich/temple confusion in a paper!
@threeMetreJim · 5 years ago
Some image pre-processing might help stop imperceptible pixel changes from giving wrong results. If you extract the detail from the image, then blur the image to spread out any noise before adding the detail back in, you'll at least have the large, low-variation areas mostly noise free (detail mask --> blur detail mask --> use it to mask the original image to get the detail). Recognition would probably be best done in 2 steps: use a greyscale image (just the detail?) for one channel, and the colours from the same but heavily blurred image for a second channel. Most objects can be identified in black and white; colour only adds a smaller amount of information (like whether an animal/insect is likely poisonous). Using a reduced resolution for the colour may also help; there's little point in letting a neural network distinguish between each of 16.7 million colours for object recognition; less to learn and less opportunity for small variations (that a human couldn't even see) to cause upsets. Is that really Keanu Reeves? Looks more like Sylvester Stallone :->
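The blur-plus-detail-mask idea sketched in this comment can be prototyped in a few lines of numpy. The kernel size and detail threshold below are illustrative assumptions, not values from any tested defense:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur via a sliding-window mean (edges use a padded copy)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def denoise_keep_detail(img, k=3, detail_thresh=0.1):
    """Blur flat regions (where imperceptible noise hides) but keep detail pixels."""
    blurred = box_blur(img, k)
    detail = np.abs(img - blurred)      # detail mask: where the image varies locally
    mask = detail > detail_thresh       # True where real structure lives
    return np.where(mask, img, blurred) # flat areas get the denoised value
```

On a flat region, a tiny adversarial-style spike gets averaged toward zero, while a genuine edge (large local variation) passes through untouched.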
@DontbtmeplaysGo · 6 years ago
Thanks a lot for your videos. At the beginning of this series, you announced a third part on "Memorization vs Generalization", but I couldn't find it even though you posted other videos after that. Was it deleted for some reason, or is it still a work in progress?
@ArxivInsights · 6 years ago
@Dontbtme Haven't made part III yet, it's coming though.. some day :p Gonna finish the series on Reinforcement Learning first, I think :)
@DontbtmeplaysGo · 6 years ago
Good to know! Thanks for your reply and all your hard work! :)
@infoman6500 · 10 months ago
Glad to see that the human biological neural network is still much more efficient than machines with artificial neural networks.
@abdoumerabet9874 · 5 years ago
Your explanation is awesome, keep going.
@absolute___zero · 4 years ago
This just proves a few points: 1) The problems of overfitting and the inability of deep nets to converge are not due to the complexity of the network, but due to the missing *complement of a set* ( en.wikipedia.org/wiki/Complement_(set_theory) ) of training data. When you train a network you have to provide the *NOTs* of the train/test data too: these would be pictures of mammals, birds, humans, not just digits on a black background if we're speaking about the MNIST dataset. It's like believing only in matter and forgetting about the dark matter of the universe; some day, because you didn't consider the whole universe, that dark matter is going to eat your planet. 2) Adversarial examples are not really adversarial examples; they are just pointing out an inconsistency in the training method we currently use. We need to modify our methods to include the *complement of a set*. This will increase training time by orders of magnitude, but you'll get real generalization, just as it was originally conceptualized in the middle of the last century.
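The "complement of a set" idea above can be prototyped by reserving one extra class for out-of-distribution negatives. A minimal numpy sketch on toy 2-D data (all hyperparameters are illustrative assumptions), not a claim about how MNIST should actually be trained:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_with_reject_class(x, y, n_classes, lr=0.5, steps=500):
    """Linear softmax classifier with one extra 'none of the above' class.
    Class index n_classes is reserved for complement (NOT) training data."""
    w = np.zeros((x.shape[1], n_classes + 1))
    onehot = np.eye(n_classes + 1)[y]
    for _ in range(steps):
        p = softmax(x @ w)
        w -= lr * x.T @ (p - onehot) / len(y)   # cross-entropy gradient step
    return w
```

Trained this way, an input that looks like the complement data lands in the reject class instead of being forced into one of the "real" classes.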
@srgsrg762 · 2 years ago
Amazing content, keep it up.
@septimusseverus252 · 3 years ago
This video is amazing
@doviende · 6 years ago
Great content. I'd like it if the sound quality were a bit better, particularly due to the echoes in your room. It sounds like you have some bare walls that are really reflective. Without changing your mic setup, you might be able to do something like hang up some towels to absorb the echoed sound.
@ArxivInsights · 6 years ago
I know, there's a ton of echo in the room I'm filming, need to find a fix for that! Thought the clip on mic would help, but it's still not ideal, I know :p
@mjayanthvarma6125 · 5 years ago
Bro, would love to see more and more content coming to your channel.
@Kram1032 · 6 years ago
Ok so, "simple" solution? (I'm sure actually implementing this is a different story.) A GAN-like network that takes the input and applies all kinds of transforms to it (noise, rotation, scaling, changing single letters/words, what have you), optimizing for the minimal transforms necessary to get the network to classify any image as any given class. Basically, the network is supposed to learn to generate and then overcome its own failure cases: "just" protect against all possible sources of adversarial examples. (Because obviously it's super easy to know that you've covered all your bases and haven't overlooked any problem. *cough*) Would that work? Perhaps something like IMPALA could be used to make it work on multiple possible variants of breaking a network at once?
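Training a model against its own failure cases, as proposed above, is a well-studied idea known as adversarial training (the one-step FGSM attack comes from Goodfellow et al.). A toy numpy sketch on logistic regression; the epsilon, learning rate, and step count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method: one-step attack that nudges x to raise the loss."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w        # d(logistic loss)/dx for each sample
    return x + eps * np.sign(grad_x)     # worst-case step within an L-inf ball

def train(x, y, eps=0.1, lr=0.5, steps=300):
    """Adversarial training: at every step, fit the *attacked* batch, not the clean one."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        x_adv = fgsm(x, y, w, b, eps)    # generate the model's own failure cases
        p = sigmoid(x_adv @ w + b)
        w -= lr * (x_adv.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b
```

The resulting model is fit to the perturbed inputs, so the same one-step attack moves its predictions much less than it would for a model trained only on clean data.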
@wvg. · 6 years ago
Keep making videos, great job!
@haroldsu1696 · 6 years ago
Very good lectures, thank you!
@drdeath2667 · 6 years ago
Great job, man, and thanks a lot for this awesome content.
@chizzlemo3094 · 4 years ago
This was so cool. MASSIVE SUBSCRIBE!
@CodeEmporium · 6 years ago
This is really good content. Subscribed! So I'm a Machine Learning guy as well (I make similar videos on my channel) but I don't have a decent face cam. What camera do you use?
@ArxivInsights · 6 years ago
Thx!! I use my GoPro Hero 5 for filming and a clip on mic for audio which I sync afterwards while editing! Also bought a green screen + studio lights rig from Amazon :p
@igorcherepanov4765 · 6 years ago
Regarding the optimization of the car frame, where we consequently end up with an adversarial example, as you said: can you point to some papers on this subject? Thanks.
@geraldkenneth119 · 2 years ago
Things like this have made me appreciate the "artificial" in artificial intelligence, as it shows that the way AI works is very different from that of naturally evolved organisms, for better and for worse.
@vegnagunL · 3 years ago
If we look at a NN as a program (a huge function with many inputs), it becomes clear why adversarial examples work.
@AlvaroGomezGrowth · 6 years ago
SUPER GOOD CHANNEL. I have never seen such good ML videos for learning. Thank you very much. One question: can we defend against an adversarial attack on image recognition by adding a first step that simplifies the input image (e.g. the panda) into a vectorized version with smoothed edges and fewer colors, just for checking the form? The noise would disappear in this process, but the fundamentals of the image would stay the same. Then you can compare the result of the normal input and the simplified input.
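Reducing color depth along the lines this comment suggests is one of the squeezes used in the "feature squeezing" defense. A minimal numpy sketch of just the color-reduction step (the bin count is an assumed parameter):

```python
import numpy as np

def quantize_colors(img, levels=8):
    """Reduce color depth: snap each channel value in [0, 1] to one of `levels` bins.
    Adversarial perturbations smaller than a bin width are erased entirely."""
    bins = np.floor(img * levels) / levels + 0.5 / levels  # map to bin centers
    return np.clip(bins, 0.0, 1.0)
```

Comparing the classifier's output on the original versus the squeezed input, as the comment proposes, then gives a signal that the input may have been tampered with when the two disagree.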
@dufferinmall2250 · 6 years ago
Dude, that was awesome. THANKS!
@tarsmorel9898 · 6 years ago
Awesome! Keep them coming ;-)
@yajieli8933 · 6 years ago
Very good videos! How often do you upload new vids?
@ArxivInsights · 6 years ago
@Yajie Li Well, I don't always have a lot of spare time besides work etc., so I try to do something like one vid every three weeks :p Would love to do more, but currently that's kinda hard :)
@wiiiiktor · 6 years ago
Maybe you could make a video on how to follow the white papers in the field of AI (where to find them, how to create alerts, whether there are any such tools, whether there are good websites that track newly published papers, etc.). I don't know if this is a good topic, but just an idea. Greets! :)
@astrofpv3631 · 6 years ago
Dude, nice vid. Do you have an educational background in AI?
@ArxivInsights · 6 years ago
Well, I studied engineering, so I got the mathematical background from there, then took one course plus a thesis in Machine Learning at university; everything else is self-learned (online MOOCs, blogs, papers, ...).
@christinealderson7357 · 6 years ago
Where is part III?
@ArxivInsights · 6 years ago
@christine alderson Still need to make that one :p But it's coming! Someday.. ;)
@christinealderson7357 · 6 years ago
Thanks, I look forward to more great vids in the future.
@SantoshGupta-jn1wn · 6 years ago
Great vid!
@binaryfallout · 6 years ago
This is so cool!
@abdellahsellam912 · 5 years ago
A great video
@hfkssadfrew · 6 years ago
Do adversarial attacks work for regression?
@ArxivInsights · 6 years ago
Yes! Adversarial examples have been shown to exist for many different types of ML algorithms. There is a great talk by Ian Goodfellow on YouTube where he dives into this; can't find the title right now though..
@hfkssadfrew · 6 years ago
@Arxiv Insights This is great. Intuitively, adversarial examples should work best (or be most significant) for black-box models with: 1. very high-dimensional input, and 2. classification with a softmax. The reason, I think, is that we lack enough data to cover the whole extremely high-dimensional volume in the vicinity of our training data. In the 1- or 2-dimensional case we can graphically find where the adversarial example should be, but it wouldn't be that similar to the original input. For regression, I think you can just do an SVD on the Jacobian of the neural network w.r.t. the input; the first direction should be the optimal adversarial perturbation in the vicinity limit. But optimization allows the optimal adversarial example to go far away, so I think it should be very interesting.
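The Jacobian-SVD intuition in this reply can be checked numerically: the top right-singular vector of the Jacobian is the unit input direction that maximizes the first-order change in the output. A sketch with a finite-difference Jacobian on a toy regression function (the function and step size are assumptions for illustration):

```python
import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    """Finite-difference Jacobian of f: R^n -> R^m at point x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        J[:, i] = (f(xp) - fx) / h
    return J

def worst_case_direction(f, x):
    """Top right-singular vector of the Jacobian: the unit input perturbation
    that (to first order) changes the output the most."""
    J = numerical_jacobian(f, x)
    _, _, vt = np.linalg.svd(J)
    return vt[0]
```

For a function that is very sensitive to one input and nearly insensitive to another, the recovered direction points along the sensitive axis, and perturbing along it moves the output far more than an equal-sized step elsewhere.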
@chrismorris5241 · 5 years ago
9:16 column 7, row 3 predicts Sylvester Stallone.
@gorgolyt · 3 years ago
How do we know that classification adversarial examples don't exist for the human brain?
@user-or7ji5hv8y · 5 years ago
Wow, doesn't the fact that adversarial examples exist indicate that our algorithms haven't extracted the essence of what makes an image such an image? And that the models, despite appearing to work most of the time, are not really working?
@ArxivInsights · 5 years ago
Yes, I would agree. It basically points out that they fail to fully exploit all the structure in natural images and are instead such general learning architectures that locations just outside the natural data manifold (e.g. adversarial images) can trigger arbitrary responses from the network, since there was never any guided gradient descent in those regions.
@obadajabassini3552 · 6 years ago
There is a really cool paper about using adversarial attacks on humans. Do you think that being fooled by an adversarial example is a fundamental property of our visual system?
@ArxivInsights · 6 years ago
@Obada Jabassini Well, our visual system evolved through evolutionary selection in the natural world. It only had to provide useful features in the physical world that surrounded us. I think it's very likely that any complex system will inevitably show signs of adversarial vulnerabilities outside of its evolved domain, including our own visual system. With the advent of digital technologies and machine learning, I think it's quite likely we can discover a whole range of 'optical illusions' and other kinds of adversarial tricks our brains did not evolve to manage. Super interesting to think about the relation between subjective perception and objective 'reality' (if such a thing exists) in the context of adversarial examples.. If you're really interested, I suggest you check out Donald Hoffman's TED talk, super interesting stuff!
@joshuascholar3220 · 3 years ago
What marketing departments and political consultants and propagandists do is generate adversarial examples for human cognition and emotion.
@ultraderek · 6 years ago
The problem is over-generalization.
@revimfadli4666 · 4 years ago
Is this the machine learning equivalent of pareidolia?
@Zeropadd · 1 year ago
😎
@monstercolorfunco4391 · 5 years ago
Neural networks are not using proper logic for colors and uniformity; they are perhaps 1000 times simpler than brain neurons, so they can be tricked using simple stuff that doesn't even involve color or uniformity. Things will come along though. It's amazing that they are already this capable, and it suggests that TensorFlow version 17.0 will be very awesome, even if it does require 0.01nm processors!
@seijurouhiko · 6 years ago
Keanu Reeves or Sylvester Stallone?
@pooorman-diy1104 · 4 years ago
Bikini pics are adversarial examples that flip the attention focus of humans (male category)...
@aronhighgrove4100 · 4 years ago
Good presentation, but you are a bit too much in the foreground and moving a lot, which is highly distracting.
@vijayabhaskar-j · 6 years ago
If we trained a GAN, threw away the generator, and kept only the discriminator, would it be more robust to adversarial attacks than normal image classifiers?
@ArxivInsights · 6 years ago
@Vijayabhaskar J Well, the discriminator itself can only perform binary classification (real/fake), so you can't just use it as an image classifier. But what people have done is train a discriminator to distinguish between normal and adversarial images and put that network in front of a normal classifier. So if an adversarial image is sent to the API, it simply gets rejected by the discriminator.
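The reject-before-classify pipeline described in this reply is easy to sketch. The detector and classifier below are stand-in lambdas (a crude noise score and a sign test), not real trained networks:

```python
import numpy as np

def make_guarded_classifier(detector, classifier, reject_thresh=0.5):
    """Pipeline: a detector screens inputs first; only inputs it deems
    'clean' (score below threshold) reach the actual classifier."""
    def predict(x):
        if detector(x) > reject_thresh:   # detector scores P(adversarial)-like
            return None                   # reject: refuse to classify the input
        return classifier(x)
    return predict
```

A smooth input passes through to the classifier; an input the detector flags as suspicious is rejected before the classifier ever sees it.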
@vijayabhaskar-j · 6 years ago
Thank you for the reply! What if the discriminator were modified not only to tell whether the image given by the generator is fake, but also to classify the image? Something like an output array of length 1001, where array[0] tells whether the image is fake and array[1:] are the class probabilities. During training, the misclassification losses are only added if array[0] == 0 (0 means original image, 1 means generated image).