Is DeepFake Really All That? - Computerphile

  131,779 views

Computerphile

3 years ago

How much of a problem is DeepFake, the ability to swap people's faces around? Dr Mike Pound decided to try it with colleague Dr Steve Bagley.
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 505
@bagandtag4391 · 3 years ago
The solution is to require all videos featuring a politician to be recorded while they dance and there are disco lights everywhere.
@sollybrown8217 · 3 years ago
Top comment here please hahaha
@kubokubo722 · 3 years ago
Hahah I thought about a swinging lightbulb when he mentioned that light changes cause problems :D
@tristandeniet · 3 years ago
Yes please
@tristandeniet · 3 years ago
"About my plan for national security.." 🕺🕺🕺🕺
@user-rx3ny9ji8i · 3 years ago
Add in the confetti to mess up compression algorithm and we're ready for a perpetual 80's party
@flurki · 3 years ago
I was just thinking: You probably filmed this in the garden because of covid, but it also just gives such a nice atmosphere compared to the videos filmed in windowless rooms. Maybe you could keep doing that even if it's no longer necessary?
@du42bz · 3 years ago
I agree
@codycast · 3 years ago
Doubt it’s due to covid. It’s 1/2 way into 2021. Most have been vaccinated (or had it). And most are going about their normal lives again.
@Anvilshock · 3 years ago
It's also pleasantly void of pointless, artsy-fartsy, nauseating, ram-the-camera-up-the-guy's-nostrils-style whip pans and zooms, so that's a nice change.
@positronalpha · 3 years ago
@@codycast Having had it absolutely does not mean it's safe to go back to pre-pandemic behavior, nor does a part of the world being vaccinated. Look at current infection rates worldwide. We're definitely not out of the woods yet. In the UK, for example, it's increasing again.
@omegashoutmonful · 3 years ago
It’s probably because of the bad weather
@EDoyl · 3 years ago
It's interesting that emergency broadcasts from world leaders, surely among the videos you most badly want never to be faked, are the easiest type to fake. One face looking straight towards a camera with minimal emotion.
@B0bb217 · 3 years ago
And because they are public figures they have tons of training data available
@veggiet2009 · 3 years ago
By that same token, videos that purport to be from a leader should be treated as the most suspect. This is also why deepfake-detector networks are being built, so that videos can be scanned and analyzed to determine whether they are legitimate.
@victos-vertex · 3 years ago
@@veggiet2009 Well, that's fine until you realize that someone could just as well use this deepfake detector as a discriminator and thus train a network that produces undetectable fakes. If you had the best deepfake detector in the world, it would also be the best training environment ever: as soon as a network beats that detector, there is no detector left to find the deepfake, because yours was the best to begin with. So you would have to make sure that no one ever has your detector, and that they can't make sufficiently many requests to it either (or else they could just use it remotely). In the end I think a detector is simply not enough; one has to add something else.
@y.h.w.h. · 3 years ago
@@victos-vertex right. Either it continues forever or you add something that GANs can't fake (for now.) It's still an infinite loop. But at least now it's not nested.
@pkramer962 · 3 years ago
Would a video platform that only allows videos linked to a blockchain be a possible solution?
@SlopedOtter · 3 years ago
It's 100 degrees and Mike is still flexing on us with the sweater. Mad lad.
@AboveEmAllProduction · 3 years ago
He is in Nottingham, UK.. And they use Celsius..
@rabsputin · 3 years ago
Not in England it isn’t.
@SlopedOtter · 3 years ago
@@rabsputin Are you in the same England as me, I'm melting
@neurocidesakiwi · 3 years ago
It's because he's that cool 😎
@ricecake1228 · 3 years ago
Celsius
@doougle · 3 years ago
We should deep fake that dirt off of the paper facing camera lens!
@mwgondim · 3 years ago
I tried to wipe my phone screen clean one too many times. Came here for this comment
@miskee11 · 3 years ago
timestamp
@muhammadsiddiqui2244 · 3 years ago
It doesn't need deep fake, it needs deep cleaning ...
@maidanorgua · 3 years ago
Doesn't look like dirt to me (too sharp to be on the lens). More like sensor damage from accidental prolonged exposure to the sun or a laser.
@toohardtowatch · 3 years ago
@@miskee11 @3:06 first time it's shown
@notoriouskiller1 · 3 years ago
If Dr. Mike had his own channel, I would watch every minute of it
@genentropy · 3 years ago
9:30, I like how he briefly considered mentioning the most common use case and then clearly decided against it.
@hebl47 · 3 years ago
I find it quite funny how he's avoiding the elephant in the room.
@carlosmspk · 3 years ago
I don't follow. What would that be?
@pedroalvesvalentim7652 · 3 years ago
What would that be?
@JeremiahMC100 · 3 years ago
I like how @genentropy considered writing the most common use case in his comment and then clearly decided against it.
@VideoAulaslo · 3 years ago
why don't you guys make a guess? =)
@zenithparsec · 3 years ago
One of the problems of using cryptographic signing for image verification is 'what are you signing?' You're taking the input stream from the camera and signing that, but how do you know it's coming from the camera? Because the library calling the 'sign this please' function told you it was. A software vulnerability could let someone call the method with arbitrary content, and you'd have no way to know. Even if you move the cryptography into hardware, there is still going to be a place between the camera and writing the signed data to the network or storage device which could be attacked. (How are you securing/validating the key material? How are you authenticating the date/time info? Was it the front camera, the back camera, or a USB camera which was used? I have a USB display adapter which appears as a USB camera on another computer... whatever program I want to display on that will show up on the USB camera.)
@globalincident694 · 3 years ago
The only thing I can think of is setting up an internet connection from the camera to a trusted third party and asking them to hold the videos, with accurate timestamps. That way the courts or whoever would have a record of what time the video was recorded, at least. But that does nothing to prevent people from sending faked videos at the correct time, and it does raise the question of who would serve as the third party.
@ruben307 · 3 years ago
There could always be a source attack, and that is probably never fully solvable. But preventing a man-in-the-middle attack after a video leaves, let's say, CNN headquarters would be useful: something like a public-key watermark on every video, so that providers like YouTube and Facebook only allow videos with a valid key.
@msn3wolf · 3 years ago
I'm not a lawyer, but what you describe would not be a problem of cryptography itself but a topic more in the realm of security as a whole. Cryptography is the toolbox, and HMAC signing (what Dr Mike Pound was referring to) is a tool that only "proves" whether the message has been tampered with since the signer generated it. That is why, when speaking of security generally, you have to consider the concept of rings of trust. If you haven't established a trust relationship with the source, beyond reasonable doubt, there is very little value in the tools for verifying the content's source or legitimacy.
@trogper · 3 years ago
You could have a tamper-proof security camera which signs the footage by itself. If the camera is tampered with, the crypto key would be destroyed.
@drdca8263 · 3 years ago
This is probably not feasible (I'm guessing it would make the messages too long to fit in the short time windows needed), but: what if GPS satellites signed the signals they send? Then, if the hardware-secured key signed the combination of the GPS signals it received along with what was recorded, and there was an additional timestamp certifying that the whole thing was signed shortly afterwards (to avoid replay attacks with the GPS satellite signals), then we could maybe be highly confident not only that what the camera recorded was actually the light going into the camera, but also that the camera was at a particular time and place when it was recorded. So, to fake a recording, you would have to be near where the event was supposed to be, and present the input to it there.
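A minimal sketch of the per-frame signing idea discussed in this thread. As an assumption for illustration, an HMAC with a camera-held shared secret stands in for the real hardware-backed asymmetric signature such a camera would use, and the key material name is hypothetical; the point is only how a signature can be bound to the frame bytes and the frame order.

```python
# Sketch: sign each captured frame so later tampering is detectable.
# Assumption: a shared secret provisioned into the camera; a real design
# would keep an asymmetric key in secure hardware, but the binding idea
# is the same.
import hashlib
import hmac

CAMERA_SECRET = b"provisioned-at-manufacture"  # hypothetical key material

def sign_frame(frame_bytes: bytes, frame_index: int) -> str:
    # Include the frame index so frames can't be silently reordered or dropped
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    return hmac.new(CAMERA_SECRET, msg, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, frame_index: int, tag: str) -> bool:
    return hmac.compare_digest(sign_frame(frame_bytes, frame_index), tag)

frame = b"\x00\x01\x02"        # stand-in for raw pixel data
tag = sign_frame(frame, 0)
assert verify_frame(frame, 0, tag)
assert not verify_frame(frame + b"tampered", 0, tag)
```

As the thread notes, this only authenticates whatever bytes reach the signing function; it cannot prove those bytes came from the sensor.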
@stannone7272 · 3 years ago
6:25 "return a nice picture of me [...] nice is relative" aaaand zoom on the face. well done!
@henryzhang7873 · 3 years ago
The solution is something that already exists in PDF signing: the timestamping server. You have to get all your frames timestamped within x seconds of recording
@TheWilyx · 3 years ago
This is a great idea! Is anyone working on this?
@carlosmspk · 3 years ago
I don't really know how signatures are made, but I doubt getting a signature for every frame would be computationally viable.
@ethansimmons82 · 3 years ago
The fake could just put timestamps on the video too, or are you recommending having something else (server?) track the timestamps?
@TheWilyx · 3 years ago
@@carlosmspk Signing is rather quick: with something like an AES-based MAC you can authenticate more than a GB per second. On top of that, you could do something like signing 2 of every 3 frames, finding the minimum number of signatures needed to make altering a video in a significant way impossible.
@henryzhang7873 · 3 years ago
@@ethansimmons82 Another server signs your frames. All it is certifying is the current time: you calculate a hash of the frame, send it to the timestamper, and they sign it together with the current time using their private key.
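The scheme described in this thread (hash the frame, have a third party sign the hash plus the current time) can be sketched as follows. This is only an illustration: real timestamping services in the RFC 3161 mold use asymmetric signatures, and the HMAC key here is a hypothetical stand-in so the example stays dependency-free.

```python
# Sketch of a trusted timestamping service: the client sends only a
# hash (the server never sees the frame), and the server signs
# (hash, time) as a token anyone can later verify.
import hashlib
import hmac
import json
import time

SERVER_KEY = b"timestamp-authority-key"  # hypothetical stand-in for a real signing key

def timestamp_request(frame_bytes: bytes) -> str:
    return hashlib.sha256(frame_bytes).hexdigest()

def issue_token(frame_hash: str, now: float) -> dict:
    payload = json.dumps({"hash": frame_hash, "time": now}, sort_keys=True)
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_token(token: dict) -> bool:
    expect = hmac.new(SERVER_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, token["sig"])

token = issue_token(timestamp_request(b"frame-0"), time.time())
assert verify_token(token)
tampered = {"payload": token["payload"] + " ", "sig": token["sig"]}
assert not verify_token(tampered)
```

As the thread points out, this proves *when* a hash existed, not that the content behind the hash is genuine.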
@BlueTJLP · 3 years ago
All the videos with Michael are my favorite Computerphile videos!
@theophrastusbombastus8019 · 3 years ago
If arm waving makes the process more difficult, I reckon Italians are safe from deepfakes for an extra couple of years.
@anshumansingh5088 · 2 years ago
Just watched your 5-year-old steganography video, it was amazing. I am happy that you are still active and spreading knowledge. Thank you, sir.
@CathyInBlue · 3 years ago
I think Mike is selling himself short about whether Tom Cruise's face on his body would look realistic.
@JohnYoungblood1337 · 3 years ago
Incoming deep fake with Tom cruise's face on Mike's in this interview
@U014B · 3 years ago
Visually maybe, but the accent is a bit off.
@stevenbergom3415 · 3 years ago
Actually, Corridor Crew did a deep-fake experiment using a Tom Cruise impersonator as the base and then overlaid with Tom Cruise's face. It was quite impressive!
@kubokubo722 · 3 years ago
my thoughts exactly, I think they look similar...ish :D
@mm49439 · 3 years ago
I really enjoy listening to Mike on these flixes!
@marklonergan3898 · 3 years ago
"Sort that out in post for me". I was really hoping it would change over to animation and just scribble the legs out. 😀
@TheGreatAtario · 3 years ago
I was hoping there'd be a half-second "No." superimposed
@boggless2771 · 3 years ago
Maybe you could have CCTV cameras that, when they store the footage after compression and the like, use a private/public key pair to make a signature over each frame with SHA-256 (if it's fast enough). Then anyone can use the public key to check whether the file is genuine. Basically this would make each frame of the video verifiable. If that's too slow, you could probably do the same thing on segments of video a minute long or so.
@RandomNullpointer · 2 years ago
It won't be too slow: MPEG (or similar) compression is way heavier on the processor or compression chip.
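One hedged sketch of the segment-signing idea from this thread, with the segments additionally hash-chained so that deletions and reorderings are caught as well as edits. The HMAC key is a hypothetical stand-in for a camera-held signing key.

```python
# Sketch: hash-chained signatures over one-minute video segments. Each
# segment's digest folds in the previous digest, so a verifier detects
# edited segments AND deleted/reordered ones.
import hashlib
import hmac

KEY = b"camera-key"  # hypothetical stand-in for a hardware-held key

def sign_segments(segments):
    prev = b"\x00" * 32                    # genesis value for the chain
    tags = []
    for seg in segments:
        digest = hashlib.sha256(prev + seg).digest()  # chain in the previous link
        tags.append(hmac.new(KEY, digest, hashlib.sha256).hexdigest())
        prev = digest
    return tags

def verify_segments(segments, tags):
    return sign_segments(segments) == tags

video = [b"segment-0", b"segment-1", b"segment-2"]
tags = sign_segments(video)
assert verify_segments(video, tags)
assert not verify_segments([video[0], video[2], video[1]], tags)  # reorder caught
```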
@ProfJonatan · 3 years ago
For the discussion at the end: it would be really great to have a video on data provenance.
@Hyraethian · 3 years ago
I really appreciate the bits of jargon tossed in. It's difficult to learn more about something without a jumping-off point.
@Twisted_Code · 3 years ago
Dear Computerphile (and by extension, U of N), thanks for another great upload. I agree cryptography might have a role to play here, though we would need something more secure than RSA because of Shor's algorithm (in regard to quantum computing). So far we haven't had to worry about that, because we don't have quantum computers large enough to make use of it, but it's theoretically possible, and that's an issue; I think we really need to start migrating to post-quantum schemes. Still, all concerns like that aside, the idea of digital certificates built into security systems seems like it would work about as well as TLS does, right? Every camera would need to sign with a certificate belonging to the manufacturer, and if you wanted to use machine learning to tamper with the image, I expect you'd also need to tamper with it in a way that causes a collision. At some point it becomes no longer worth the effort to deceive. Sincerely yours, a very much intrigued IT student from the US :-)
@Veptis · 2 years ago
An autoencoder is such a magical tool. You can use massive compression factors and get back an almost perfect image. But you have to train a decoder for every specific subject, so you can't easily make a general-purpose autoencoder for, say, arbitrary video. But we might one day. I am really looking forward to getting back to university in person next semester (October) to do one deep learning course and perhaps an additional one.
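A toy numpy sketch of the autoencoder idea described above: a linear encoder squeezes 16 "pixels" down to a 4-value bottleneck, a decoder reconstructs them, and the reconstruction error falls as it trains. Real deepfake autoencoders are deep convolutional networks; every size and name here is illustrative only.

```python
# Toy autoencoder: compress through a narrow bottleneck, learn to
# reconstruct the input with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                # 200 "images" with 16 "pixels"
W_enc = rng.normal(scale=0.1, size=(16, 4))   # encoder: 16 -> 4 features
W_dec = rng.normal(scale=0.1, size=(4, 16))   # decoder: 4 -> 16 pixels
lr = 0.01

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

before = loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                       # encode to the 4-d feature vector
    recon = Z @ W_dec                   # decode back to "pixels"
    err = (recon - X) / X.shape[0]
    W_dec -= lr * Z.T @ err             # gradient step through decoder
    W_enc -= lr * X.T @ (err @ W_dec.T) # ...and through encoder
after = loss(X, W_enc, W_dec)
assert after < before                   # reconstruction improves with training
```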
@xtieburn · 3 years ago
I'm glad that people are worried about this. I wouldn't put money on it not becoming a major problem in a few years' time... but AI has a tendency to plateau. It has followed the same pattern over and over, pretty much since its inception. From the Turing Test, to voice recognition, to driving cars, the tools of AI are brought to bear, there is a burst of activity as they rapidly ramp up to accomplish things that were previously considered impossible, but almost as rapidly they run into walls all over the place, and it all grinds down, leaving problems frustratingly far from the finish line for literally decades (and counting).

In the case of deep learning* these walls seem inherent to the black-box nature of the thing. You can tweak the dials, set it going, and get some impressive results, but the closer you want to get to perfection, the more you have to fine-tune the whole thing. That becomes increasingly intractable, because ultimately you don't know what the AI is doing under the hood. (People train AIs with AIs and use other such tricks to try to get around these issues.)

You could argue that you just need to increase the volume of data it's learning from, but that also gets harder and harder. Compilations of hundreds of images have become compilations of thousands, tens of thousands, hundreds of thousands, and yes the AI improves, but by smaller and smaller increments. You end up in exactly the same trap: how do I get the AI to work better with these image sets? You have to fine-tune it, and how do you fine-tune something that is fundamentally opaque? With great difficulty...

I'll reiterate: I wouldn't put money on this. I've studied AI but it certainly wasn't the central aspect of my degree, and I could be proven dramatically and horrifically wrong tomorrow. Hope for the best, prepare for the worst and all that. But I also won't lose sleep over the coming fake-video frenzy, especially as other means of manipulating the populace have proven incredibly effective, are considerably simpler (assuming you have the money and influence to steer them), are already doing inestimable damage, and consequently are presently far more terrifying to me than the possibility of truly convincing deepfakes at some point in the future.

*I say in the case of deep learning, but it's a heuristic problem that extends to all the forms of AI I'm aware of; there is just nuance in exactly how.
@RhysTuck · 3 years ago
How do we not know what the AI is doing? Can we not program them to output human readable code of whatever it is they do?
@aviralsood8141 · 3 years ago
@@RhysTuck The AI has far too much data and does far too many operations for any human to track.
@CyberNancy · 3 years ago
I really enjoy your work. Thank you.
@bilboswaggings · 3 years ago
One way you could detect face swaps is by finding the original video on the internet: even if you don't count the face, other things in the video will still be the same.
@bahtiyarozdere9303 · 2 years ago
Thank you for an amazing video again. Can I ask which library you used to create your own model? It would be very nice to see a demonstration of this with the code.
@TheChipMcDonald · 3 years ago
The only way to encrypt is to pre-raster the key, steganography-style, into each frame: in a discreet portion of the image, as a moving watermark revealed by the video decompression algorithm. Tampering would wreck the watermark. At some point we will be locked in a constant arms race between pre-authentication steganographic watermark-morphing algorithms and automatic image-manipulation algorithms; the manipulation would have to be constantly updated to stay aware of active watermarking.
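A minimal sketch of the fragile-watermark idea above: hide a hash of the image's content bits in the pixels' least-significant bits, so that any edit to the content breaks the hidden mark. Robust video watermarks actually live in transform domains and survive compression; this bit-level version is only an illustration of the principle.

```python
# Sketch: embed a SHA-256 of the high (content) bits into the LSBs.
# Editing the content changes the expected mark, so the check fails.
import hashlib

def embed_watermark(pixels):
    high = bytes(p & 0xFE for p in pixels)        # content without LSBs
    mark = hashlib.sha256(high).digest()          # 256-bit watermark
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    return [(p & 0xFE) | bits[i % len(bits)] for i, p in enumerate(pixels)]

def watermark_intact(pixels):
    high = bytes(p & 0xFE for p in pixels)
    mark = hashlib.sha256(high).digest()
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    return all((p & 1) == bits[i % len(bits)] for i, p in enumerate(pixels))

frame = list(range(256)) * 4                      # fake 1024-"pixel" frame
marked = embed_watermark(frame)
assert watermark_intact(marked)
tampered = marked.copy()
tampered[10] ^= 0x02                              # flip one content bit
assert not watermark_intact(tampered)
```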
@masansr · 3 years ago
Also, deepvoice is so, so much easier, and already 100% believable. So you can fake a recording, and quite a few people can do it.
@durnsidh6483 · 3 years ago
Is there a link to a paper or repo with working code?
@shaider1982 · 3 years ago
I watched a video on that where they had Homer Simpson tell the story of Darth Plagueis.
@masansr · 3 years ago
@@durnsidh6483 I don't think there is, but look up Speaking of AI on YouTube. He does it for fun, and clearly writes that they are fakes, but others might replicate that for malicious purposes.
@Thermophobe · 3 years ago
Mike covering all the interesting topics!
@sdHansy · 3 years ago
I just realised that I have a year of Computerphile videos to catch up to o/
@masansr · 3 years ago
I just know there will be a perfect deepfake of Steve doing this video in a year or two, just to prove his "5-10 years" timeline wrong.
@nunyobiznez875 · 3 years ago
He didn't say that it wouldn't be possible for another 5-10 years. He said he thinks that they will be convincing in 5-10 years, which would prove to be a correct statement, if it only took a year or two.
@seanriokifarrell · 3 years ago
With that software, I have had the experience that you get the best results if you use two short video clips. This goes against the general guidance of having a diverse set of inputs and probably does not generalize, but if you are trying to get one face onto (only) one clip, this is the best approach.
2 years ago
I really like Dr Mike's sensible approach to the topic of banning the technology. Instant like 🙂
@GabrielPettier · 3 years ago
I was also wondering about video frames being signed by the hardware/software that produces them, so that even if you extract them, you could still trace the origin and evolution of the video. That would be a lot of metadata to add (though I guess in a world where 4K is becoming the norm…), and having all devices carry a verifiable ID would be a major hurdle, but at least sources could decide to make the effort to be "trustable" by using devices and software that allow it.
@115410055644 · 3 years ago
03:30 onwards looks like the hand is deepfaked or overlaid on top of the scene (ignore the dirt spots on the lens/sensor for a bit).
@stoopidgenious · 3 years ago
For CCTV video, most of the software includes a proprietary output format and video players with digital signatures to ensure the video displayed has not been tampered with. Simple MPEG or AVI files are generally not admissible in court because of potential tampering.
@Gahanun · 3 years ago
I was thinking about something quite similar to what you describe at the end. As an artist I often struggle with seeing great work reuploaded somewhere like Pinterest, but because the image file doesn't carry any meta information, it is impossible to trace it back to the creator without the use of tools. I wish digital signing of files and metadata were more established and integrated into the standards. I feel that the rapid data exchange online ran away from the standards that used to be satisfactory back in the 90s, when all those consortiums gathered to determine what a PNG looks like.
@DanielSavageOnGooglePlus · 3 years ago
I'm not ready for the phishing companies to start making deepfakes of my parents in distress so I'll send them Visa gift cards.
@nicolaiveliki1409 · 3 years ago
I am. I'm installing gpg encryption and verification on their EVERYTHING
@jesuszamora6949 · 3 years ago
I doubt there's anywhere near enough quality data for training.
@minhuang8848 · 3 years ago
jokes on them, my parents don't expect a call from me, they'll smell bs right away
@ethansimmons82 · 3 years ago
Jokes on them, I don't have money
@johannesfalk8146 · 3 years ago
I think it's worth considering that a lot of the manual work that goes into this kind of image reconstruction could be automated. A lot of people have loads of pictures tagged on social media, sorted and catalogued in a bot-friendly register. I think it's not unreasonable to imagine that within 10 years someone could make a user-friendly tool that autonomously gathers its source material and trains a decoder. If something like that got widespread, this could turn weird.
@B3ennie · 3 years ago
Could you reduce the deepfake back down to the feature information that you got from the original information? Or would it be altered so much you couldn’t get the original compressed information?
@antonios4553 · 3 years ago
The last few minutes of the Computing Limit video (Nov. 2017) talk about how far we still have to go...
@M4pster · 3 years ago
I'm curious why, when encoding, you wouldn't take all the previous and next frames into consideration. You could get a more reliable feature set for the frame, to see which direction the head is facing when obstructed.
@HordrissTheConfuser · 3 years ago
When filming left-handed people while they write and draw, maybe you should put the camera on their right?
@DLinton · 3 years ago
I'll have to come back in 5y or 10y to see how well this video ages. XD
@olivierpelvin · 3 years ago
For audio, is it a similar process, or is it much more complicated? A voice also has its own characteristics, but is it possible to reproduce something like a regional accent?
@ckameron9959 · 3 years ago
In general there is a chain of custody established, starting at the latest when a triage of the data is collected with the intent of it being used in court. However, the idea of a camera cryptographically signing the video is interesting. A possible algorithm could be a hash of the video signed with the camera's private key, which is stored on dedicated hardware.
@scwfan08 · 3 years ago
Dr. Mike is my favourite presenter
@notsafeforpbs3408 · 3 years ago
Are we able to train neural networks to learn how to compress and decompress data, in order to reduce the amount of data needed? I feel like natural language processing is dependent on neural networks being able to compress and decompress information through the structure of the network itself; a close approximation of human "inference..."
@longdarkrideatnight · 3 years ago
Have a bit of jewellery that displays a matrix of information from the scene, signed with the person's private key. Audio -> convert into a low-bitrate but sufficient-quality compressed file -> sign with private key -> convert to QR-type codes -> display on pendant. Anyone can take the displayed QR codes, check that they are correctly signed, and compare them to the audio of the video. You could also include other information, like accelerations of the pendant, or information from a time source or GPS data.
@TechyBen · 3 years ago
If it's not the entire image, it's not going to work. But anything that makes it *harder* to deepfake helps. Higher resolution original, more colours, higher FPS, all add to the compute requirements of the deepfake. So you could at least make sure the originals are out before any "deepfakes". A Deepfake of the president telling everyone to evacuate, is useless if it's released 10 years after the president leaves office to retire. :P
@BytebroUK · 3 years ago
Love the idea of home CCTV cams that sign their footage securely in a verifiable way..
@antopolskiy · 3 years ago
How does the network match the encoder of one person to the decoder of another person? The autoencoders aren't guaranteed to create identical compressed feature spaces by default, right? If the feature in position A encodes, I don't know, some nose-related shape in one autoencoder, the feature in the same position in the output of another person's encoder may encode head orientation. How is this solved?
@Hexanitrobenzene · 3 years ago
Note that both "encoders" are really one shared encoder, i.e., both faces go through the same transformation and the same features are extracted.
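A shapes-only sketch of that wiring: one encoder shared by both identities forces a common feature space, each identity gets its own decoder, and "swapping" is encoding a frame of A and decoding with B's decoder. The weights here are random and untrained, purely to show the structure; all sizes are illustrative.

```python
# Structural sketch of the face-swap autoencoder: ONE shared encoder,
# two per-identity decoders.
import numpy as np

rng = np.random.default_rng(1)
ENC = rng.normal(size=(64, 8))        # shared encoder: 64 "pixels" -> 8 features
DEC_A = rng.normal(size=(8, 64))      # decoder trained (in reality) on face A
DEC_B = rng.normal(size=(8, 64))      # decoder trained (in reality) on face B

def encode(frame):
    return frame @ ENC                # same feature space whoever is in the frame

def swap_to_b(frame_of_a):
    return encode(frame_of_a) @ DEC_B # reconstruct the frame "as" face B

frame = rng.normal(size=(64,))
features = encode(frame)
assert features.shape == (8,)         # pose/expression-style feature vector
assert swap_to_b(frame).shape == (64,)
```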
@lolerskates876 · 3 years ago
Does this work for 3D? Are there Deep Fakes for Lidar images and video? Or is it limited to 2d video?
@AndreasZetterlund · 2 years ago
Seeing the research advances in this area that Two Minute Papers is regularly posting the example approach in this video seems outdated by a couple of years already. Stuff shown on TMP can generate pretty convincing video and voice of a person talking just from text, and doing face transfers with even a single photo.
@jeremygreer4039 · 2 years ago
Same sweater!
@marklonergan3898 · 3 years ago
2nd comment because I couldn't resist. You wanted us to tell you why you were wrong. I haven't thought this through either, but I don't think you are wrong. Towards the end of the video I thought the same thing, and right as you mentioned it, I was in the middle of thinking "how could you integrate a private key into a person". Probably a moot point given that quantum computing will eventually beat the current public/private paradigm, but whatever we end up using in computers in the future for reliable auth, I reckon the solution will be to somehow use the same principle in real life. Only problem is, it would only work for things you want to pre-authenticate. If someone is caught robbing a bank on CCTV, they aren't going to willingly authenticate themselves to the camera. 😀
@TechyBen · 3 years ago
Keys are already being generated that are "hard" to compute with quantum computing. Older methods are certainly easier to break with quantum computing, but there are some types of tasks quantum computing finds difficult, and those can sometimes be easy for a Turing (normal) computer. So those algorithms are being planned for use against the foreseeable size of quantum computers being built. :)
@syedalimehdi-english · 2 years ago
How about calculating a hash for a specific section of the "face" in any video? For example, from any of my videos, take a frame, find the nose in my face and calculate its hash; then crop the nose from the deepfake video in each frame and calculate its hash. It's going to be expensive, but wouldn't it work?
@vladpuha · 3 years ago
having certificates applied to images and moving images - really neat idea.
@jesuszamora6949 · 3 years ago
I suspect that's going to be a thing very soon. Cameras that sign images with unique IDs would probably help. The camera itself can then be confiscated to check for conflicts.
@AndreasZetterlund · 2 years ago
But that doesn't solve anything. In the best case it can confirm that a camera recorded something and that the recording was not altered, but it doesn't say anything about how real or altered what was recorded is. For example, the camera could have been recording a display in front of it.
@maxcl3474 · 3 years ago
I'm a simple man, I see Dr Mike Pound I click.
@zenithparsec · 3 years ago
So would you click on the Deep-Faked Steve parts?
@maxcl3474 · 3 years ago
@@zenithparsec most likely.
@vesk4000 · 3 years ago
Yes! Mike, along with Professor Brailsford, are my favourite hosts on the channel.
@EDoyl · 3 years ago
The point where well-made fake photographs became indistinguishable from real ones was a while ago now. I don't know exactly when. Anyway we've dealt with it pretty well, which has me hopeful that we'll adapt to the presence of fake video (and fake audio which is also getting better) in the same way.
@adityakhanna113 · 3 years ago
That's what I believe too. But often I see my relatives actually falling for fake photos, so I am sure that videos will be worse
@iivin4233 · 3 years ago
Could you do verification of a stream blockchain style? Looking up at the data lines strung across my street, I'm thinking we're gonna need bigger lines.
@drdca8263 · 3 years ago
The only way I see blockchain being helpful for this is for timestamping, and while it does that fairly well (you can aggregate the things you want to timestamp into a Merkle tree, and therefore provide a timestamp for many many things in a single transaction), there are also other timestamping methods if you are willing to allow for just a little trust. Also, if you want to use bitcoin for the timestamping, you can't really get it to a precision of more than 10 minutes. I guess other blockchains wouldn't have that issue, but still. I think the most precise + cheapest timestamping solutions would probably be the ones that involve a little bit of trust, but not much.

And timestamping isn't enough to solve this problem. You need to authenticate the *content* of the image as well, and this is not something blockchain helps with. I think for that, you probably want physical security and signing, as well as some way to establish physical location. And blockchains do not help with this.

And if you already have a machine running whose software you trust, and which holds a private key in a way such that any attempt to physically access the key or otherwise tamper with the device immediately deletes the key, then as long as that device is constantly running and had the right time initially, you might as well use that device for your timestamping solution?

In short, I don't think blockchain is particularly helpful for this problem.
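The Merkle-tree aggregation mentioned in this thread can be sketched in a few lines: many frame hashes collapse to one root, so publishing (or transacting) a single 32-byte value timestamps them all at once.

```python
# Sketch: Merkle root over a batch of frame hashes. Any change to any
# frame changes the root, so one published root timestamps every frame.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

frames = [f"frame-{i}".encode() for i in range(8)]
root = merkle_root(frames)
assert root == merkle_root(frames)                   # deterministic
assert root != merkle_root(frames[:-1] + [b"faked"]) # any change moves the root
```

In a full scheme each frame would also keep its short inclusion proof (the sibling hashes up to the root), which is omitted here for brevity.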
@NESMASTER14 · 1 year ago
This video has the funniest cover image hahaha. I hope that portrait is hanging up somewhere...
@klyanadkmorr
@klyanadkmorr 3 жыл бұрын
See Mike, POUND LIKE!
@Jet-Pack
@Jet-Pack 3 жыл бұрын
Could a system be created that essentially 3D scans the two faces using photogrammetry, mapping them onto an animated model using facial expression features and renders it like a CGI video and layers it on top of the original video?
@RhysTuck
@RhysTuck 3 жыл бұрын
If you have the two people to be photographed for the photogrammetry, yes probably. Doing it from already existing photos/videos? I dunno
@JohnDlugosz
@JohnDlugosz 3 жыл бұрын
That's how movie characters are animated now. You're just saying that we don't need the marker dots anymore as the AI can figure out the face just from a normal photo.
@myownsite
@myownsite 3 жыл бұрын
What is Mike's graphics card
@mark..
@mark.. 3 жыл бұрын
Could videos be authenticated by comparing to a sample video, made on the same camera after the fact?
@shards1627
@shards1627 3 жыл бұрын
Most likely not: there is a detectable amount of noise in every recording, so that would probably be too inconsistent to completely verify any video, and it's fairly easy to just add a noise filter over a deepfake to make it more convincing.
@mark..
@mark.. 3 жыл бұрын
@@shards1627 you’re probably right. Plus the faker has the original footage, so they could make sure the fake passes any comparison tests.
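For what it's worth, the noise-filter trick mentioned above is trivial to approximate. A minimal sketch, assuming NumPy, with Gaussian noise as a crude stand-in for real camera sensor noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(frame: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Overlay Gaussian noise (a crude stand-in for camera sensor noise)."""
    noisy = frame.astype(np.float64) + rng.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

frame = np.full((4, 4, 3), 128, dtype=np.uint8)   # flat grey test frame
noisy = add_sensor_noise(frame)
```

Real sensor noise is signal-dependent (shot noise) rather than purely Gaussian, but even this rough overlay is enough to defeat a verifier that relies on matching a camera's noise fingerprint.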
@avi12
@avi12 3 жыл бұрын
3:06 Why is the lens dirty?
@franziscoschmidt
@franziscoschmidt 3 жыл бұрын
A very pleasant Friday afternoon surprise
@TheWilyx
@TheWilyx 3 жыл бұрын
Did you guys check "One-Shot Free-View Neural Talking Head Synthesis for Video Conferencing"? The samples looked amazing.
@billybob-uz6wz
@billybob-uz6wz 3 жыл бұрын
Could be they'll have something similar to the difference between word/txt files and PDFs/Docusign for official/government stuff for authentication. So it would be something like uneditable video files (mp4Sec?) that is maybe hardware locked. You could imagine only being able to play the video on the device that recorded it, so the device itself would act more like "Physical Evidence" like DNA or a weapon from the crime scene.
@TheAstronomyDude
@TheAstronomyDude 3 жыл бұрын
Can't you embed cryptographic information in the video pixel data - a hard-to-replicate watermark?
@drdca8263
@drdca8263 3 жыл бұрын
I don't think embedding it in the image itself helps any. What does that buy you that including it separately doesn't get you?
@ZT1ST
@ZT1ST 3 жыл бұрын
@@drdca8263 Embedding it in the image allows for additional information to avoid a collision attack.
@TheAstronomyDude
@TheAstronomyDude 3 жыл бұрын
@@drdca8263 I don't know what you mean. I was just thinking of ways the physical camera doing the recording can make the recording unique and traceable to the source. Like those black blobs that appear on movie theatre screens to deter Kramers with cameras, since they are unique to each theater room.
@drdca8263
@drdca8263 3 жыл бұрын
@@ZT1ST Elaborate? Taken literally, embedding it in the image rather than appended at the end actually implies using less information (fewer bits). Not sure what you mean by avoiding a collision attack. Do you mean like, finding a hash collision or something, some modified image that would also have the same hash or digest or whatever the thing is represented in the signature and the MAC? If you accept the cryptographic assumptions behind the signing and MAC functions, this shouldn't be necessary, and I'm still not sure how it would help.
@drdca8263
@drdca8263 3 жыл бұрын
@@TheAstronomyDude Ok, sure, that could kind of work to some degree, but if it is meant to be secure, then you probably want actual cryptography, not just steganography ("the adversary knows the system"), especially if this is supposed to be a standardized system. And if you are using a cryptographic signature, then hiding it in the pixel values doesn't give you anything. At most it makes it so that you can use existing file formats and upload the image to places that use existing file formats, in a way that, with the existing ways people interact with images, doesn't result in accidentally losing the signature. I guess that could be useful, if you want the image to be potentially verifiable after being passed along by a bunch of people who don't know or care about keeping whatever signature is on it.
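As a concrete illustration of why a watermark hidden in pixel values is steganography rather than a signature, here is a minimal least-significant-bit embedding sketch (assuming NumPy; the bit layout is purely hypothetical):

```python
import numpy as np

def embed_bits(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide bits in the least significant bit of the first len(bits) pixels."""
    out = pixels.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b      # clear LSB, then set it to the bit
    return out.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n: int) -> list[int]:
    return [int(v & 1) for v in pixels.ravel()[:n]]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_bits(img, mark)
# Any re-encode (e.g. JPEG compression) scrambles these low-order bits, which
# is why a detached cryptographic signature is the robust option.
```

The embedding is visually invisible (each pixel changes by at most 1), but it survives only lossless handling of the file.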
@stefanmatei3827
@stefanmatei3827 3 жыл бұрын
What kind of paper is that?
@jonathan-._.-
@jonathan-._.- 3 жыл бұрын
how do you make sure that both encoders encode the features in the right order ?
@drdca8263
@drdca8263 3 жыл бұрын
You only have one encoder. You use the same encoder for both. It is only the decoders that differ. The encoder therefore, has to work the same way for both, in ways that are sufficient for the two decoders to decode to the correct thing.
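The shared-encoder/two-decoder wiring can be sketched with plain matrices. The weights below are random and untrained, and the image and latent sizes are arbitrary: this shows only the data flow that makes the swap possible, not a working face swap.

```python
import numpy as np

rng = np.random.default_rng(42)
PIX, LATENT = 64 * 64, 128             # hypothetical image and latent sizes

# One shared encoder; two person-specific decoders.
W_enc = rng.normal(size=(LATENT, PIX))
W_dec_a = rng.normal(size=(PIX, LATENT))   # would be trained only on faces of A
W_dec_b = rng.normal(size=(PIX, LATENT))   # would be trained only on faces of B

def encode(face):          # the same encoder serves both identities,
    return W_enc @ face    # so the latent "feature space" is shared

def decode(latent, W_dec):
    return W_dec @ latent

face_a = rng.normal(size=PIX)
latent = encode(face_a)                # pose/expression features of A
swapped = decode(latent, W_dec_b)      # rendered with B's appearance
```

Because both decoders are trained against the same encoder's latent space, feeding A's latent code into B's decoder is what produces the face swap.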
@ChoChan776
@ChoChan776 2 жыл бұрын
8:58 I wish more people understood this about all technologies.
@sergodobro2569
@sergodobro2569 2 жыл бұрын
For real cameras there could be a new format where pixels are coded like RGBAN, where N is some other wavelength (ultraviolet or something else) or some extra information derived from other info in the video, in a way that a deepfake couldn't reproduce.
@PKTraceur
@PKTraceur 2 жыл бұрын
Someone created a modders tool for Bethesda games that allows realistic recreations of character voices based on things the voice actors had previously said. I can only imagine the damage this technology will do in the coming years.
@Manmax75
@Manmax75 3 жыл бұрын
I think a valid solution to this would be to establish a cryptographic identity chain from hardware to software. A camera would have its own hardware signature. Video editing software can use a company's NFT to sign the video and so forth until a chain is formed and can be verified by the end users player.
@nicolaiveliki1409
@nicolaiveliki1409 3 жыл бұрын
Like a chain of custody that is required for evidence. I like it. Since there are ~10^50 ec keypairs we'll also probably never have to worry about key collisions
@ItsOnlyLogixal
@ItsOnlyLogixal 3 жыл бұрын
Need some sort of signing system that can't be broken even with direct access to the hardware. That's a tall order. Maybe some creative use of RSA?
@maninalift
@maninalift 2 жыл бұрын
I'm afraid that if a piece of hardware contains the private key that enables it to sign the videos it creates, then it is only a matter of resources to extract that key. More generally, if you have hardware that can do X in your possession, you can examine it to find the process that allows you to do X.
@user-ue1vw6iv3s
@user-ue1vw6iv3s 2 жыл бұрын
Thanks for commenting , I'll advise you look up to investing and making huge profit in Bitcoin with Samanthaleeward she's currently managing my crypto portfolio and making great return's*
@user-ue1vw6iv3s
@user-ue1vw6iv3s 2 жыл бұрын
Contact OnTelegram @Samanthaleeward
@boltez6507
@boltez6507 3 жыл бұрын
Please, can you make a video on searx...
@Fatone85
@Fatone85 3 жыл бұрын
Someone send this to Corridor Crew
@positronalpha
@positronalpha 3 жыл бұрын
Turning A to B isn't the main problem at the moment, it's replacing mouth movements and voice in video of B, creating statements that were never made.
@jesuszamora6949
@jesuszamora6949 3 жыл бұрын
Which I imagine will be a problem for a while yet, for video anyway. For still shots (faking a politician in a 'salacious' position, for example) we still have the problem.
@rchandraonline
@rchandraonline 3 жыл бұрын
Digitally sign the video would be the way to go I would think
@123TeeMee
@123TeeMee 3 жыл бұрын
Sounds like he's onto something in regards to encryption being the solution.
@jeffreyblack666
@jeffreyblack666 3 жыл бұрын
While it appears to be, it really isn't. This only works when you have private keys held by "trusted" individuals/entities. And that means they can't be in the cameras, which means the cameras can't sign the footage. As soon as you make it so everyone can have the private key to sign the video in their device, you have the risk of the private key being extracted and used to sign anything.
@123TeeMee
@123TeeMee 3 жыл бұрын
@@jeffreyblack666 I suppose some cameras might be reasonably "trusted" though so it could have some applications, despite being imperfect and situational.
@jasonburbank2047
@jasonburbank2047 3 жыл бұрын
@@jeffreyblack666 The cameras can create a random private key when booted for the first time, burning the key to ROM. While not impossible to tamper with, this scheme would reduce the number of people with the necessary skills to fake a video to a pretty small group. Furthermore, all members of that group will be fairly identifiable as people with the necessary technical skills.
@jeffreyblack666
@jeffreyblack666 3 жыл бұрын
@@123TeeMee The issue then is what part of the camera is trusted, and how can you know it is actually the camera that signed it? Presumably the people providing the footage from the camera have the camera and could potentially get the key from it.
@jeffreyblack666
@jeffreyblack666 3 жыл бұрын
@@jasonburbank2047 You still have the issue of authenticating the video. Do you need to present the camera with the video to be able to use it as evidence? What if the camera is damaged? Can you extract the key from the camera, and what are the technical requirements to do that?
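Setting aside the key-extraction concerns raised in this thread, the basic sign-and-verify flow for a per-device key might look like this sketch. HMAC from the Python standard library stands in for a real asymmetric signature scheme; with HMAC the verifier would need the same secret, which is exactly why real designs use public-key signatures with the private half sealed in the camera.

```python
import hashlib
import hmac
import secrets

# Key generated once at first boot and burned to (tamper-resistant) ROM.
CAMERA_KEY = secrets.token_bytes(32)

def sign_frame(frame: bytes) -> bytes:
    """Tag a frame so later modification is detectable."""
    return hmac.new(CAMERA_KEY, frame, hashlib.sha256).digest()

def verify_frame(frame: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_frame(frame), tag)

frame = b"\x00" * 1024                 # stand-in for raw frame data
tag = sign_frame(frame)
```

Any single flipped byte in the frame invalidates the tag, but the whole scheme is only as trustworthy as the hardware keeping the key secret.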
@JonathanOsborneAU
@JonathanOsborneAU 2 жыл бұрын
Maybe something like a digital signature embedded with steganography over sensitive parts of the video, such as the faces.
@JonathanOsborneAU
@JonathanOsborneAU 2 жыл бұрын
Oh in the case of spoken words, signed subtitles could be a verification method. Potentially it just comes down to not trusting sources that aren't verified, social media already has the concept of verification for public figures.
@0MoTheG
@0MoTheG 3 жыл бұрын
Wait, why does one decoder understand the output of the other encoder? Why is the feature space the same?
@shards1627
@shards1627 3 жыл бұрын
because they use the same encoder for both, so the output format will always be the same
@afaulconbridge
@afaulconbridge 3 жыл бұрын
because the encoder is trained on a mixture of _both_ faces, but each decoder is only trained on _one_ face.
@MeppyMan
@MeppyMan 3 жыл бұрын
“It wasn’t gonna work with me as the base”. How long before someone redoes this video with Tom Cruise’s face :)
@yassinelakbir2515
@yassinelakbir2515 3 жыл бұрын
Can you post the source code, please?
@Euruzilys
@Euruzilys 2 жыл бұрын
That dirt on the camera caused me to wipe my screen many more times than I would like to admit.
@f16madlion
@f16madlion 3 жыл бұрын
The cryptographic aspect is interesting, each frame of a CCTV video could be signed as originating from the trusted entity.
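One way to make per-frame signing cheap is to chain the frame hashes and sign only the final link. A minimal sketch (SHA-256 and the all-zero genesis value are assumptions, not a specific CCTV standard):

```python
import hashlib

def chain_frames(frames):
    """Hash each frame together with the previous link, blockchain-style."""
    link = b"\x00" * 32                 # genesis value
    links = []
    for frame in frames:
        link = hashlib.sha256(link + frame).digest()
        links.append(link)
    return links

frames = [b"frame-0", b"frame-1", b"frame-2"]
links = chain_frames(frames)
# Signing just links[-1] commits to every frame and their order: editing,
# dropping, or reordering any earlier frame changes every later link.
tampered = chain_frames([b"frame-0", b"FAKED", b"frame-2"])
```

This way the trusted entity produces one signature per clip rather than one per frame, while still covering the whole stream.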
@ESTEBANTMAN
@ESTEBANTMAN 2 жыл бұрын
3:15 editor: sorry not sorry
@CH4NNELZERO
@CH4NNELZERO 3 жыл бұрын
Yes I had the same idea about how cryptographic signing of the original image by the camera hardware is the solution to deepfake verification. If you include date and location information in the hash it becomes harder and harder to fake.
@brunomartel4639
@brunomartel4639 3 жыл бұрын
love this guy
@andrez76
@andrez76 3 жыл бұрын
I reckon adding professor Brailsford's voice to that deep fake would take it to the next level of bizarre.
@verybasedguy
@verybasedguy 3 жыл бұрын
a few months back when the nft for meme jpgs craze was going on, I thought it was idiotic like most reasonable people did. But I did think that if it had any kind of practical application at all, it would be as a counter to deepfake technology. Hearing someone else posit a similar concept is a relief. Maybe I'm not so crazy. Or at least not the only one.
@drdca8263
@drdca8263 3 жыл бұрын
What if instead of two different decoders, you trained one decoder which had an extra input encoding "which person is this supposed to be", and I guess, uh, you could have a learned embedding from ids to this extra input? And then maybe you could have like, a network that does "image of person -> what this extra input should be to produce images of this person" thing?
@Parisneo
@Parisneo 3 жыл бұрын
What about using the blockchain to authenticate the source of videos? You could sign the hash of a video, publish it, and store that on the blockchain; then if someone wants to tamper with the file, people can verify the authenticity on the blockchain. Any little change in the content will change the hash, so it would be pretty difficult to fake a blockchain-protected video.
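The hash-then-publish idea reduces to a few lines; where the digest is stored (a blockchain or any append-only log) is orthogonal to the hashing itself. A minimal sketch:

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """SHA-256 digest to publish (e.g. on a blockchain) at release time."""
    return hashlib.sha256(video_bytes).hexdigest()

original = b"...raw video bytes..."     # stand-in for the real file contents
published = fingerprint(original)       # stored somewhere immutable

# Later: anyone can re-hash a copy and compare against the published value.
matches = fingerprint(original) == published
edited_matches = fingerprint(original + b"\x00") == published
```

Note this only proves the file is unchanged since publication; it says nothing about whether the content was genuine when the hash was published.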
@grisoe
@grisoe 3 жыл бұрын
What fields of mathematics and programming languages should a person study to begin building stuff like this? I've been a computer engineer for two years, and I've been interested in AI... I even want to study a masters degree in computer science next year, but I feel scared of not having the necessary skills or knowledge.
@MrCmon113
@MrCmon113 3 жыл бұрын
As a computer engineer you should have most of the maths background already. What comes up over and over again is probability theory and linear algebra.
@UserUnknown07
@UserUnknown07 3 жыл бұрын
Why is the table made like that ?
@SteinGauslaaStrindhaug
@SteinGauslaaStrindhaug 3 жыл бұрын
3:43 I kept wanting to brush the dust off my screen...
@drdca8263
@drdca8263 3 жыл бұрын
I think I might have a bit of face blindness. I could barely see a difference between the versions with the face swapped and with the face not swapped.
@sandeepbanik
@sandeepbanik 3 жыл бұрын
Shouldn't one be replacing the whole head as opposed to just the face? It might provide better results.