This is an expert-level take on neural networks. It discusses fooling (or, in other words, hacking) NNs, while NNs themselves are still highly complicated structures for most people, and even for experts. Precious content, I say.
@nishparadox 6 years ago
Just discovered this channel. Awesome content. Better than most of the hyped channels around regarding ML and Deep Learning. Cheers. :)
@dheten4462 6 years ago
True..
@JonesDTaylor 4 years ago
He stopped maintaining this channel, I'm afraid :(
@revimfadli4666 4 years ago
@@JonesDTaylor nooo, why?
@TopGunMan 6 years ago
All of your videos are incredibly well made and presented. Perfect mix of detail and generality and excellent visuals. So hooked.
@Niteshmc 6 years ago
This video deserves so much more attention! Great Job!
@iiternalfire 6 years ago
Great content. One suggestion: can you please provide links to the research shown in the video description?
@WonderSilverstrand 4 years ago
Yes, citations please.
@pial2461 6 years ago
Your vids are just gold. Please do more videos on other nets like RNNs, CNNs, ladder nets, DBNs, seq2seq, etc. I think you can make people understand better than anyone. Best of luck. Really a big fan of your content.
@sushil-bharati 4 years ago
@Arxiv Insights - The paper you showed does not claim that the sunglasses fool 'every' facial recognition system. In fact, they are personally curated and would only work against a particular recognition neural net.
@hikingnerd5470 6 years ago
Great video! One suggestion is to include links to all relevant papers.
@MartinLichtblau 6 years ago
They are a blessing! They show that NNs are getting something fundamentally wrong, and we can gain insight from those wrong classifications to understand what's really going wrong. I think we humans don't use any chroma or hue information for object detection - we only detect structural patterns (at first).
@williamjames6842 6 years ago
9:48 that comment was pretty deft. The neural network's sarcasm is showing. I'd give that comment a positive rating too.
@hackercop 3 years ago
This was absolutely fascinating. Have liked and subscribed!
@kareldumon808 6 years ago
Nice! Didn't know yet about cross-model generalization. Also nice to have your take on how to avoid and even exploit these attacks. Keep up the video-making & posting :-)
@maximgospodinko 6 years ago
You deserve many more subscribers. Keep up the good work.
@bjbodner3097 6 years ago
Super cool video! Love the depth of the content! Please keep making more videos:)
@inkwhir 6 years ago
Wow, your videos are fantastic! The format is great, the content is awesome... Please post more videos :D
@xianxuhou4012 5 years ago
Thanks for posting the awesome video. Could you please provide the reference (paper) at 14:12?
@ScottJFox 6 years ago
Just subscribed for part III! :D
@dacrampus2656 3 years ago
Really great videos, thanks!
@jc-wh9mq 4 years ago
Love your videos, keep it up.
@substance1 4 years ago
Humans also have adversarial examples, in the form of pareidolia: seeing faces in inanimate objects. It's an evolutionary thing that helps humans detect predators. An example is people who scour the images taken on Mars and claim certain rocks are the heads of broken statues, when it's really just a rock photographed at a particular angle.
@MithiSevilla 6 years ago
Thanks for making this video. I hope you also link to your references for this video in the description box like you did in part one. Thanks, again!
@ArturVegas 6 years ago
great work! keep developing your great channel! 💎
@yusun5722 4 years ago
Great video. Perhaps humans are robust to the adversarial examples of computers, and vice versa. In the end it comes down to how to align both adversarial distributions.
@nikab1852 4 years ago
Great videos! What are your sources for this video? Trying to find the ostrich/temple confusion in a paper!
@threeMetreJim 5 years ago
Some image pre-processing might help stop imperceptible pixel changes from giving wrong results. If you extract the detail from the image, then blur the image to spread out any noise before adding the detail back in, you'll at least have the large areas without much variation mostly noise-free (detail mask --> blur detail mask --> use it to mask the original image to get the detail). Recognition would probably be best done in two steps: use a greyscale image (just the detail?) for one channel, and the colours from the same but heavily blurred image for a second channel. Most objects can be identified in black and white; colour only adds a smaller amount of information (like whether an animal/insect is likely poisonous). Using a reduced resolution for the colour may also help; there's little point in letting a neural network distinguish between each of 16.7 million colours for object recognition - less to learn and less opportunity for small variations (that a human couldn't even see) to cause upsets. Is that really Keanu Reeves? Looks more like Sylvester Stallone :->
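A rough sketch of that two-step idea (my own illustration with OpenCV/NumPy, not from the video; the threshold, kernel sizes and function names are made up):

```python
import cv2
import numpy as np

def denoise_keep_detail(img_bgr, blur_ksize=7):
    """Blur away imperceptible pixel noise in flat regions, then put the
    original detail (edges/texture) back on top via a detail mask."""
    blurred = cv2.GaussianBlur(img_bgr, (blur_ksize, blur_ksize), 0)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Detail mask: strong gradients mark regions whose original pixels we keep.
    edges = cv2.Laplacian(gray, cv2.CV_32F)
    mask = (np.abs(edges) > 10).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (blur_ksize, blur_ksize), 0)[..., None]
    return (mask * img_bgr + (1.0 - mask) * blurred).astype(np.uint8)

def two_channel_input(img_bgr):
    """Channel 1: full-resolution greyscale detail.
    Channel 2: heavily blurred, low-resolution colour (coarse colour only)."""
    detail = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    coarse_colour = cv2.resize(cv2.GaussianBlur(img_bgr, (31, 31), 0),
                               (img_bgr.shape[1] // 8, img_bgr.shape[0] // 8))
    return detail, coarse_colour
```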
@DontbtmeplaysGo 6 years ago
Thanks a lot for your videos. At the beginning of this series, you announced a third part on "Memorization vs Generalization", but I couldn't find it even though you posted other videos after that. Was it deleted for some reason, or is it still a work in progress?
@ArxivInsights 6 years ago
Dontbtme Haven't made part III yet, it's coming though.. some day :p Gonna finish the series on Reinforcement Learning first I think :)
@DontbtmeplaysGo 6 years ago
Good to know! Thanks for your reply and all your hard work! :)
@infoman6500 10 months ago
Glad to see that the human biological neural network is still much more efficient than a machine with an artificial neural network.
@abdoumerabet9874 5 years ago
Your explanation is awesome, keep going.
@absolute___zero 4 years ago
This just proves a few points: 1) The problems of overfitting and the inability of deep nets to converge are not due to the complexity of the network, but due to the missing *complement of a set* ( en.wikipedia.org/wiki/Complement_(set_theory) ) of the training data. When you train a network you have to provide the *NOTs* of the train/test data too: these would be pictures of mammals, birds, humans - not just digits on a black background, if we are talking about the MNIST dataset. It's like only believing in matter and forgetting about the dark matter of the universe. Well, some day, because you didn't consider the whole universe, that dark matter is going to eat your planet. 2) The adversarial examples are not really adversarial examples; they just point out an inconsistency in the training method we currently use. We need to modify our methods to include the *complement of a set*. This will increase training time by orders of magnitude, but you will get real generalization, just like it was originally conceptualized in the middle of the last century.
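If I read the proposal correctly, a minimal sketch of training on the "complement" (my own illustration in PyTorch, assuming an extra made-up "none of the above" class fed with non-digit images):

```python
import torch
import torch.nn.functional as F

NUM_DIGITS = 10
NONE_CLASS = NUM_DIGITS            # extra 11th class: "not a digit at all"

def training_step(model, digits, digit_labels, non_digits, optimizer):
    """One step that also shows the model the complement of the digit set:
    random non-digit images (animals, faces, ...) all labelled NONE_CLASS.
    Assumes the model outputs 11 logits instead of 10."""
    x = torch.cat([digits, non_digits], dim=0)
    y = torch.cat([digit_labels,
                   torch.full((non_digits.size(0),), NONE_CLASS,
                              dtype=torch.long)], dim=0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```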
@srgsrg762 2 years ago
Amazing content. Keep it up.
@septimusseverus252 3 years ago
This video is amazing
@doviende 6 years ago
Great content. I'd like it if the sound quality were a bit better, particularly due to the echoes in your room. It sounds like you have some bare walls that are really reflective. Without changing your mic setup, you might be able to do something like hang up some towels to absorb the echoed sound.
@ArxivInsights 6 years ago
I know, there's a ton of echo in the room I'm filming in, need to find a fix for that! Thought the clip-on mic would help, but it's still not ideal, I know :p
@mjayanthvarma6125 5 years ago
Bro, would love to see more and more content coming on your channel
@Kram1032 6 years ago
Ok so, "simple" solution? (I'm sure actually implementing this is a different story.) A GAN-like network, but it takes the input and applies all kinds of transforms to it (noise, rotation, scaling, changing single letters/words, what have you), optimizing for the minimal transforms needed to get the network to classify any image as any given class. Basically the network is supposed to learn to generate, and then overcome, its own failure cases. "Just" protect against all kinds of sources of adversarial examples. (Because obviously it's super easy to know that you've covered all your bases and haven't overlooked any problem *cough*.) Would that work? Perhaps something like IMPALA could be used to make it work on multiple possible ways of breaking a network at once?
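Something close to this already exists as adversarial training: generate worst-case perturbations on the fly and train against them. A minimal FGSM-style sketch in PyTorch (my own illustration, not the IMPALA-based variant suggested above):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Find the small perturbation of x that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, x, y, optimizer, eps=0.03):
    """Train on clean and self-generated adversarial examples together."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stronger attacks than the single-step FGSM (e.g. iterated versions) are usually needed for real robustness, but the training loop looks the same.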
@wvg. 6 years ago
Keep making videos, great job!
@haroldsu1696 6 years ago
Very good lectures, and thank you!
@drdeath2667 6 years ago
Great Job man and thanks a lot for this awesome content
@chizzlemo3094 4 years ago
This was so cool. MASSIVE SUBSCRIBE!
@CodeEmporium 6 years ago
This is really good content. Subscribed! So I'm a Machine Learning guy as well (I make similar videos on my channel) but I don't have a decent face cam. What camera do you use?
@ArxivInsights 6 years ago
Thx!! I use my GoPro Hero 5 for filming and a clip-on mic for audio, which I sync afterwards while editing! Also bought a green screen + studio lights rig from Amazon :p
@igorcherepanov4765 6 years ago
Regarding the optimization of the car frame, where we consequently end up with an adversarial example, as you said: can you give some papers on this subject? Thanks
@geraldkenneth119 2 years ago
Things like this have made me appreciate the “artificial” in artificial intelligence, as it shows that the way AI works is very different from that of naturally evolved organisms, for better and for worse.
@vegnagunL 3 years ago
If we look at a NN as a program (a huge function with many inputs), it becomes clear why adversarial examples work.
@AlvaroGomezGrowth 6 years ago
SUPER GOOD CHANNEL. I have never seen such good ML videos for learning. Thank you very much. One question: can we defend against an adversarial attack on image recognition by adding a first step that simplifies the input image (e.g. the panda) into a vectorized version with smoothed edges and fewer colours, just to check the shape? The noise would disappear in this process, but the fundamentals of the image would stay the same. Then you can compare the result for the normal input and the simplified input.
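A rough sketch of that simplify-and-compare check (my own illustration in PyTorch; the quantisation levels, pooling factor and threshold are arbitrary):

```python
import torch
import torch.nn.functional as F

def simplify(x, levels=8, pool=4):
    """Crude 'vectorised' version of the input: fewer colours, smoothed detail."""
    x = torch.round(x * (levels - 1)) / (levels - 1)   # colour quantisation
    x = F.avg_pool2d(x, pool)                          # smooth / downsample
    return F.interpolate(x, scale_factor=pool, mode="bilinear", align_corners=False)

def predict_with_consistency_check(model, x, threshold=0.5):
    """Flag inputs whose prediction changes once the image is simplified."""
    with torch.no_grad():
        p_full = model(x).softmax(dim=1)
        p_simple = model(simplify(x)).softmax(dim=1)
    agree = p_full.argmax(1) == p_simple.argmax(1)
    suspicious = (~agree) | ((p_full - p_simple).abs().max(dim=1).values > threshold)
    return p_full.argmax(1), suspicious
```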
@dufferinmall2250 6 years ago
Dude, that was awesome. THANKS
@tarsmorel9898 6 years ago
Awesome! Keep them coming ;-)
@yajieli8933 6 years ago
Very good videos! What is the frequency of uploading new vids?
@ArxivInsights 6 years ago
Yajie LI well I don't always have a lot of spare time besides work etc.. so I try to do something like one vid every three weeks :p Would love to do more, but currently that's kinda hard :)
@wiiiiktor 6 years ago
Maybe you could make a video on how to follow papers in the field of AI (where to find them, how to create alerts, whether there are tools for this, whether there are good websites that track newly published papers, etc.) - I don't know if this is a good topic, but just an idea. Greets! :)
@astrofpv3631 6 years ago
Dude, nice vid. Do you have an educational background in AI?
@ArxivInsights 6 years ago
Well, I studied engineering, so I got the mathematical background from there, then took one course + a thesis in Machine Learning at university; everything else is self-learned (online MOOCs, blogs, papers, ...)
@christinealderson7357 6 years ago
Where is part III?
@ArxivInsights 6 years ago
christine alderson Still need to make that one :p But it's coming! Someday.. ;)
@christinealderson7357 6 years ago
Thanks, I look forward to more great vids in the future.
@SantoshGupta-jn1wn 6 years ago
Great vid!
@binaryfallout 6 years ago
This is so cool!
@abdellahsellam912 5 years ago
A great video
@hfkssadfrew 6 years ago
Do adversarial attacks work for regression?
@ArxivInsights 6 years ago
Yes! Adversarial examples have been shown to exist for many different types of ML algorithms. There is a great talk by Ian Goodfellow on YouTube where he dives into this, can't find the title right now though..
@hfkssadfrew 6 years ago
Arxiv Insights this is great. Intuitively, adversarial examples should work best - or be most significant - for black-box fitted models with: 1. very, very high-dimensional input; 2. classification with softmax. The reason, I think, is that we lack enough data to cover the whole, extremely high-dimensional vicinity volume near our training data. If the input were 1- or 2-dimensional, we could graphically find where the adversarial example should be, but it wouldn't be that similar to the original input. For regression, I think you can just do an SVD on the Jacobian of the neural network w.r.t. the input; the first direction should be the optimal adversarial perturbation in the vicinity limit. But optimization allows the optimal adversarial example to go far away, so I think it should be very interesting.
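A sketch of that Jacobian-SVD idea for a regression net (my own illustration in PyTorch, names are made up):

```python
import torch
from torch.autograd.functional import jacobian

def worst_case_direction(model, x):
    """Direction in input space that most changes a regression output in the
    small-perturbation (linearised) limit: the top right-singular vector of
    the Jacobian d(output)/d(input) for a single example x."""
    x = x.detach()
    J = jacobian(lambda inp: model(inp.unsqueeze(0)).squeeze(0), x)  # (out_dim, *x.shape)
    J = J.reshape(J.shape[0], -1)
    _, s, Vh = torch.linalg.svd(J, full_matrices=False)
    direction = Vh[0].reshape(x.shape)   # unit-norm worst-case perturbation
    return direction, s[0]               # s[0] = local sensitivity (gain)

# x_adv = x + eps * direction moves the output by roughly eps * s[0]
```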
@chrismorris5241 5 years ago
9:16 column 7 row 3 predicts Sylvester Stallone
@gorgolyt 3 years ago
How do we know that classification adversarial examples don't exist for the human brain?
@user-or7ji5hv8y 5 years ago
Wow, doesn't the fact that adversarial examples exist indicate that our algorithm hasn't captured the essence of what makes an image such an image? And that the models, despite appearing to work most of the time, are not really working?
@ArxivInsights 5 years ago
Yes, I would agree. It basically points out that they fail to fully exploit all the structure in natural images and are instead such general learning architectures that locations just outside the natural data manifold (e.g. adversarial images) can trigger arbitrary responses from the network, since there was never any guided gradient descent in those regions.
@obadajabassini3552 6 years ago
There is a really cool paper about using adversarial attacks on humans. Do you think that being fooled by an adversarial example is a fundamental property of our visual system?
@ArxivInsights 6 years ago
Obada Jabassini Well, our visual system evolved through evolutionary selection in the natural world. It only had to provide useful features in the physical world that surrounded us. I think it's very likely that any complex system will inevitably show signs of adversarial vulnerabilities outside of its evolved domain, including our own visual system. With the advent of digital technologies and machine learning, I think it's quite likely we can discover a whole range of 'optical illusions' and other kinds of adversarial tricks our brains did not evolve to manage. Super interesting to think about the relation between subjective perception and objective 'reality' (if such a thing exists) in the context of adversarial examples.. If you're really interested, I suggest you check out Donald Hoffman's TED talk, super interesting stuff!
@joshuascholar3220 3 years ago
What marketing departments and political consultants and propagandists do is generate adversarial examples for human cognition and emotion.
@ultraderek 6 years ago
The problem is over-generalization.
@revimfadli4666 4 years ago
Is this the machine learning equivalent of pareidolia?
@Zeropadd 1 year ago
😎
@monstercolorfunco4391 5 years ago
Neural networks are not using proper logic for colour and uniformity; they are perhaps 1000 times simpler than brain ones, so they can be tricked using simple stuff which doesn't even contain colour or uniformity. Things will come along though. It's amazing that they are already efficient, and it shows that TensorFlow version 17.0 will be very awesome, even if it does require 0.01nm processors!
@seijurouhiko 6 years ago
Keanu Reeves or Sylvester Stallone?
@pooorman-diy1104 4 years ago
Bikini pics are adversarial examples that flip humans' (male category) attention focus......
@aronhighgrove4100 4 years ago
Good presentation, but you are a bit too much in the foreground and moving a lot, which is highly distracting.
@vijayabhaskar-j 6 years ago
If we trained a GAN, threw away the generator, and took only the discriminator, would it be more robust to adversarial attacks than normal image classifiers?
@ArxivInsights 6 years ago
Vijayabhaskar J Well, the discriminator itself can only perform binary classification (Real/Fake) so you can't just use it as an image classifier. But what people have done is train a discriminator to distinguish between normal / adversarial images and put that network in front of a normal classifier. So if an adversarial image is sent to the API, it simply gets rejected by the discriminator.
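That detector-in-front-of-the-classifier setup might look roughly like this (a sketch in PyTorch, assuming a binary adversarial/clean detector has already been trained; names are made up):

```python
import torch

def guarded_predict(detector, classifier, x, reject_threshold=0.5):
    """Reject inputs the detector flags as adversarial; classify the rest."""
    with torch.no_grad():
        p_adv = torch.sigmoid(detector(x)).squeeze(-1)   # prob. input is adversarial
        preds = classifier(x).argmax(dim=1)
    preds = preds.clone()
    preds[p_adv > reject_threshold] = -1                 # -1 = "rejected"
    return preds, p_adv
```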
@vijayabhaskar-j 6 years ago
Thank you for the reply. What if the discriminator were modified to tell not only whether the image given by the generator is fake, but also to classify the image? Something like an output array of length 1001, where array[0] tells whether the image is fake and array[1:] are the class probabilities? During training, the misclassification loss is only added if array[0] == 0 (0 means original image, 1 means generated image).
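A sketch of that 1001-way output head with the masked classification loss (my own illustration, close in spirit to semi-supervised GAN discriminators; names are made up):

```python
import torch
import torch.nn.functional as F

def disc_loss(logits, class_labels, is_fake):
    """logits: (B, 1001). logits[:, 0] = real/fake score,
    logits[:, 1:] = 1000 class logits. The classification loss is only
    applied to real images (is_fake == 0), as proposed above."""
    fake_loss = F.binary_cross_entropy_with_logits(logits[:, 0], is_fake.float())
    real_mask = is_fake == 0
    cls_loss = (F.cross_entropy(logits[real_mask, 1:], class_labels[real_mask])
                if real_mask.any() else logits.new_zeros(()))
    return fake_loss + cls_loss
```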