I Deep Faked Myself, Here's Why It Matters

3,139,404 views

Johnny Harris


A day ago

Comments: 5,700
@hawaiiansoulrebel · a year ago
Honestly, this is probably the type of tech that scares me the most. Deepfakes could be used to literally ruin someone’s life and reputation. Frightening…
@HeidiThompson7 · a year ago
On a bigger scale it could cause a revolution, coup, or war. It could absolutely destroy the legal system by filling it with fake evidence.
@WhoAmEye_WhoAreEwe · a year ago
only if people [famous people excluded] have continually uploaded their image to the internet (maybe?)
@fr61d · a year ago
@@floppathebased1492 And if you have uploaded to FB/Insta or the like, you still have some time to delete your accounts and have the pictures removed from their servers.
@trackfresse · a year ago
The problem is rather that videos are no longer evidence of anything. We lose what video-recording technology gave us many years ago. And even historical video recordings can be faked nowadays. Maybe someone will make Hitler look like a nice guy someday. 🫣
@sinane.y · a year ago
@@floppathebased1492 Yeah sure. It's not like facial recognition cameras aren't being installed in every major city worldwide, with governments and big data working hand in hand.
@j.mkamerling2470 · a year ago
Imagine people deepfaking security tapes to frame people in the future. That’s scary.
@TheRlhaugan · a year ago
Yes! It’s a show called “the capture” and it has two seasons.
@GiRR007 · a year ago
Then we are just gonna have to get better at detecting fakes. Also that's already illegal.
@AVClarke · a year ago
The catch is: you can develop A.I. to make better deep fakes, but you can also develop A.I. to better detect them.
@DeeRizz · a year ago
Now I just wanna destroy future technology
@cessposter · a year ago
you could also argue in court that real footage was faked
@JeffreyBoles · a year ago
I have 12 years of video editing experience. My specialization is interview editing. I look at and analyse faces through a screen all day, every (business) day. I could instantly tell when you showed a deep fake...except two times. I second guessed myself, and that is what scares me. Even with thousands of hours of carefully pinpointing imperfections in digital video of faces, I still couldn't be sure immediately. If I can't tell, how can we expect anyone to tell? I regret my hope as a child that I would live in an "interesting" time.
@F3ARtheGERBIL · a year ago
could meta data help with some of this? like what does the meta data of a deep fake submitted as evidence look like?
@F3ARtheGERBIL · a year ago
@user-ze2zm4sz1b I think the issue with that is that NFTs are stored on a blockchain, which does require actual resources and energy to support. The sustainability is already in question until greener alternatives are found, and adding every video in existence to the equation does not sound sustainable. Also not sure how that would even be possible unless every video in existence was uploaded somewhere.
@Oblivion_94 · a year ago
May you live in interesting times...
@kuroshite · a year ago
as a porn addict, i was able to tell all of them straight away 💀
@silotx · a year ago
Also most video evidence is low-res with poor lighting, so it's much easier to fake.
@CoughitsKath · a year ago
i am not normally a technology doomer - quite the opposite usually - but when these started popping up in earnest a few years ago, it struck me as a real terrifying pandora's box. they've low key terrified me ever since. also, since you talk about deep fake tech in entertainment, i do need to point out that it's not all good news there, and this is a big chunk of what WGA and SAG strikers are hoping to mitigate with their recent union actions. it has the potential to really change a lot of working artists' lives, and not necessarily for the better
@EricKay_Scifi · 8 months ago
My most recent novel, Above Dark Waters, imagines content creators using brainwave data and generative AI to create a digital fentanyl, making you scroll and click forever.
@Leo-ok3uj · a year ago
What scares me the most is how long it took everyone to notice all of this. I remember that in 2014-2015 I talked to my parents, uncles and friends about deepfakes, showed them examples, and said that in 10 years we would have stuff like what we already have today (although with that very optimistic energy I had in middle school, never thinking about the bad things that could be done with it). And all of them told me basically the same thing: that I am crazy or way too optimistic, and that we wouldn't have such stuff for like 100 years. But guess what, NOT EVEN 10 YEARS HAVE PASSED
@detachmentalist · a year ago
maybe your friends not believing it would have been a red flag, since they should be in the same generation as these developments. but people a generation or two older than us? they're never really going to believe that it is possible until it's right in front of them and threatening their very livelihood and existence. a very hard lesson that I learned from my stubborn folks here
@MelbourneMeMe · a year ago
When you run a business like Jonny, you schedule video topics around clicks, like aliens and conspiracies, but also you gotta just churn out a few videos of topics that everyone else has covered already, because it's easy. ChatGPT probably partly scripted this 😆
@psgistheworstclubineurope · a year ago
Because of people like Elon Musk duh
@psgistheworstclubineurope · a year ago
Ironic how adults don't believe such technology would be available so quickly, yet adults are also the ones inventing this kind of technology
@noob.168 · a year ago
Not sure what kind of boomers you hang out with... I've been concerned about this for a long time
@TJl919 · a year ago
I actually wrote my master's thesis on this last year (and soon a PhD)! I'm glad this is getting more attention. To contrast all the doom and gloom, Professor Hany Farid (UC Berkeley) mentioned that while deepfakes are getting better, so too is the technology used to detect them. But it is a shame something so impactful is being used for such nefarious purposes.
@dickunddoof4684 · a year ago
Isn't that just an endless cycle? Software gets better at detecting deepfakes -> deepfakes get better because they know why they're being detected / they can be trained against the detectors themselves -> software needs to get even better at detecting them -> even better deepfakes -> ... At some point it might be truly impossible for a human to tell the difference.
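The endless cycle this commenter describes is, in effect, how GAN training already works: the generator and the detector improve against each other. A toy, dependency-free sketch (not a real GAN; the "skill" scores and the update rule here are invented purely for illustration) of why the detector's catch rate tends toward a coin flip:

```python
def train_round(gen_skill: float, det_skill: float) -> tuple[float, float]:
    """One round of the arms race, in a toy model.

    gen_skill/det_skill are abstract quality scores. Each side learns
    from its failures: the generator improves when its fakes get
    caught, the detector improves when it gets fooled.
    """
    # Detector catches fakes in proportion to its skill advantage.
    catch_rate = det_skill / (gen_skill + det_skill)
    gen_skill += 0.1 * catch_rate
    det_skill += 0.1 * (1 - catch_rate)
    return gen_skill, det_skill

gen, det = 1.0, 1.0
for _ in range(10_000):
    gen, det = train_round(gen, det)

# Both skills grow without bound, but they stay locked together, so the
# detector's catch rate sits at 50% -- a coin flip, the commenter's worry.
print(round(det / (gen + det), 2))  # → 0.5
```

Whichever side falls behind improves fastest, so the equilibrium is exactly the 50/50 indistinguishability the thread predicts; the open question is only how long real systems take to get there.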
@Badmunky64 · a year ago
Is there anything the average joe can use to detect deep fakes?
@andersonojoshimite6047 · a year ago
Wow! I'm interested in your work. I'm working on a thesis that sheds light on the impact of deepfakes in legal proceedings.
@wlpxx7 · a year ago
I feel like everyone saw this coming, and didn't do a single thing to stop it.
@bitzoic4357 · a year ago
Any chance it involves attested sensors and ZK proofs? Every time I see videos about this subject I think about the fact that we have solutions that aren't widely implemented yet
@JaegerZ999 · a year ago
One day I won’t need a mask for videos anymore, just pick a new face in post production.
@girishanejadelhi · a year ago
Good to see you here shooter!!!
@ihateorangecat · a year ago
I really like your videos man.🔥
@GiRR007 · a year ago
V tuber but without the cringe.
@iwilldi · a year ago
what for?
@kylehurley5994 · a year ago
When's the collab with admin results?
@EnteraName1876 · 7 months ago
There's already a teenager out there whose reputation got ruined. She was just doing TikToks, and then someone decided to put her face on nude photos, which then got scattered across the internet. She tried to explain that it's not her body and that it is not her, but unfortunately people continue to comment things like "she was asking for it", "ok, but when will you have an OnlyFans page", and so on.
@gameratortylerstein5636 · 6 months ago
First mistake was using Tiktok
@siphobrisloks8133 · 4 days ago
@gameratortylerstein5636 tiktok sucks
@thatlittlehuman9238 · a year ago
His last sentence made me realize another thing that could go horribly wrong…. “We shouldn’t believe everything that we see, no matter how real it looks.” The possibility that one day there would be a news report or something circulating on social media that is very real and dangerous, but the majority doesn’t believe it because “it could be AI”. False events can be believed, just as real events can be dismissed.
@cloudyview · a year ago
Plus you can just hack the news station to run the deep fake video of the news casters telling people it's real/fake... Exciting!
@ShankarSivarajan · a year ago
The news lying to you has been a problem long before this technology was developed.
@terryholmes8546 · a year ago
Yeah... covid taught us that the media doesn't need deep fakes for us to question the narrative... Maybe if they didn't have an established rep for hyping and lying...
@Luciphell · a year ago
Kind of like most of the world being convinced there was a violent insurrection that almost led to the downfall of the free world on Jan. 6th 2021. Doesn't take AI to fool a crowd.
@GiRR007 · a year ago
It's called being responsible. People shouldn't believe the first thing they hear on the internet anyway; that is NEVER a good thing...
@Journal_Haris · a year ago
Trust issues with Johnny since this video was published: 📈
@EllisEllo · a year ago
He isn't smiling nor looks happy, it could be real.
@Dr.SyedSaifAbbasNaqvi · a year ago
He never left Vox. This channel is run by deep fakes.
@johnnyharris · a year ago
😂😂
@kimberlycarter369 · a year ago
I'm old, and back in 1995-ish I remember people talking about being afraid that in the near future we would no longer be able to distinguish real video from fake. Deep fakes are exactly what they were talking about before it had this name.
@martinfoy8700 · 10 months ago
Agreed. I was just mentioning that it’s kind of a good thing because there’s a video out of me, cheating on my wife with two bridesmaids from our wedding. I worry daily about her seeing me hitting them in the ass and rinsing off in their mouths. I’m actually more concerned about their husbands finding out because they will absolutely have my head in a box. My wife is pretty easy to gaslight so I can just tell her that it’s a fake and share this video with her. Also I’m class of 94. You’re only as old as you feel Kim
@peterlewis2178 · 8 months ago
@@martinfoy8700 You're a terrible person to talk so nonchalantly about gaslighting your wife. That's straight-up emotional abuse, I feel so bad for your wife. Unless you're a troll or AI message, but in that case you're still doing a terrible thing.
@damsen978 · 8 months ago
@peterlewis2178 I think he was being hypothetical.
@dannyarcher6370 · 8 months ago
@@martinfoy8700 I think you meant to say, "You're only as old as the bridesmaids you feel up."
@earthn1447 · 8 months ago
They were talking about this in the sixties during Vietnam war
@azcardguy7825 · a year ago
How good deep fakes have gotten in such a short amount of time is horrifying. We are critically underestimating the problems that this is going to cause.
@jojoqie · a year ago
There are scammers out there right now, calling you through FaceTime and using deepfakes to claim to be a person you know and scam you. Just be careful.
@Yasminh- · 8 months ago
that's why it's good I never do video calls with anyone. if someone suddenly decided to video call me I would not even accept the call, not gonna give any technology my face
@adolfstalin1497 · a year ago
The worst part about this isn't that it's dangerous and it can spread wrong information but that it does absolutely no good to us whatsoever
@psgistheworstclubineurope · a year ago
Nice username btw
@mustangracer5124 · a year ago
Not for US.. but it has been used extensively by MSM to fool the fools who watch them.. Trump was deep faked 1,000 times already.
@josiamoog6619 · a year ago
In what world is this the worst part??
@stop08it · a year ago
Huh??
@adolfstalin1497 · a year ago
@josiamoog6619 basically what I'm trying to say is that deepfakes are only used for bad. Even the "good" things listed in the video aren't exactly great by themselves, and even then they definitely don't nearly make up for all the bad deepfakes do.
@jozroz2165 · a year ago
The problem I foresee with developing AI to better identify deep fakes, is that it could simply fuel the further development of deep fakes since they can use the identifying techniques to patch their own tells. I mean, there's a reason GAN training involves identification and counter-action based on the identifiers. By fighting it in its own field I fear we may instead be playing right into the problem.
@0L1 · a year ago
Anyone remember good old-fashioned viruses and anti-virus software being a thing, an actual threat, always competing with each other? I guess a new era of that is approaching.
@kastieldev6732 · a year ago
smartest comment i have seen
@marciavox8105 · a year ago
Yeah, like the evolutionary race between predators and prey animals. Each one evolves as a result of the other's adaptations
@nwilt7114 · a year ago
Well we should start by holding all the scammers accountable and that would reduce the amount of fukery.
@iudoncare6360 · a year ago
Like bacteria and antibiotics...
@themadman6310 · a year ago
Face to face communication is going to become a lot more valuable
@Madwonk · a year ago
I took a class with some professional photograph doctoring experts a while back. Mainly, it's a company that works to detect manipulation of pictures of politicians and other figures of importance. One of the hardest cases they had was a photo that *looked* right, metadata came up good, all of the anecdotal data made it seem legit etc etc (except it wasn't possible because the two people pictured had never met). Often, photoshop/AI will leave behind weird artifacts in the compression algorithms for JPEG or video that can be detected and they weren't showing up. So how did they fake the photo? They photoshopped it, printed it out, then took a picture of the picture! No digital trail to speak of!
@DarthObscurity · a year ago
Would have been scanned. No way a picture of a picture wasn't detected lol.
@realtimestatic · a year ago
That’s actually really smart
@sadrakeyhany7477 · a year ago
200 IQ play
@mister_duke · a year ago
but then you could see in the metadata that it was taken in a different location on a different date
@sbo3 · a year ago
I'm confused how this is apparently smart, because you can 100% tell when you take a photo of a photo. Even the person above said it would have to be a scan; surely even a scan is detectable?!
@Bobrae. · a year ago
Entertainment-wise, this is part of the reason why the actors/SAG are on strike now, too.
@occamsshavecream4541 · a year ago
That surely adds new meaning to the expression, "Just another pretty face."
@bigdeal6852 · a year ago
Yeah... they know they're somewhat at a turning point, because the film industry can make movies without them being there, which saves money in many ways with these high-priced celebrities. So that's one big reason why they are on strike and "of course" they want more money.
@mystraunt2705 · a year ago
@bigdeal6852 this is still a serious issue though. Artists are all going to lose their jobs if we don't stop the development of AI or outlaw it or something.
@bigdeal6852 · a year ago
@@mystraunt2705 I will agree with you on that ! I'm sure eventually it will get done. Mostly because it can be dangerous. They might start a detection system and put in place copyright laws or something even more aggressive. I don't know....but it definitely can have an effect on Hollywood. 🤷
@professorxavier9692 · a year ago
@bigdeal6852 they're
@HarlowAshensky · a year ago
The scary one my grandparents ran into was a believable AI robocall targeting seniors. It was so close to a real person reacting to their questions before hitting a loop. Crazy how fast the possibilities spread
@Jackson54321 · a year ago
Deepfakes also impact Hollywood. Companies save hundreds of millions just to have AI instead of real humans.
@ekothesilent9456 · a year ago
Wait until you have the robocalls targeting seniors perfectly mimicking the voice patterns and tones of their dead grand kids. It’s all fun and games until we start creating ai ghosts that haunt people 24/7 to get something out of them. This is happening.
@cnrspiller3549 · a year ago
Tosh! We will all get used to it. Your grand kids call you up and say they're stuck abroad, wire them some money. Sure, you say - what was that nursery rhyme I always sang to you when I bounced you on my knee?
@yamanawrooz5132 · a year ago
I think robocalls will be replaced by fake online friends, 100% generated by AI, with the specific goal of selling you something or manipulating you into voting for someone. I think in the future even low-level politicians like mayors or sheriffs will hire agencies to target constituents with AI, either online or in person.
@josefarrington · a year ago
Probably the way to combat deep fakes is to use the pixels of an original image to watermark it, and then use software to detect those watermarks. This way, when the pixels in the image are manipulated, the watermark will get disturbed and the "verification" software will detect the deep fake.
@qj0n · a year ago
The way GANs work is that the generator is trained to fool a detector, until the detector can't tell a real photo from a generated one. Simple watermark algorithms will just be replicated by the generator once you add them to the detector. This is why machines are inherently worse at detecting deepfakes than humans: generators are trained to fool the machine, and fooling humans is a side effect. It's possible to use asymmetric cryptography (digital signatures) to avoid this, though it's probably easier to put the signature in metadata, not the data itself. But you need to put secret keys in every recording device, and once you extract a key, you can use it to sign any content. Or you can e.g. play deepfaked audio and record it with a device that will sign it.
@josefarrington · a year ago
@qj0n "But you need to put secret keys in every recording device and once you extract it, you can use it to sign any content." I was thinking that the secret key could also contain the GPS position and time of the recording. This way you need to know where/when the image was created in order to break the encryption process. If we want to make it more secure, we could make every device send an encryption key to some national database (guarded like Fort Knox) that can provide third-party verification of every image recorded by any device. But this is a huge stretch.
@qj0n · a year ago
@josefarrington I'm not sure if I'm following - you can put geoposition and timestamp in signed metadata, but in order to make the signature verifiable, you need to know the key and trust it, so it has to be stored in the device. We can make it impossible to read, like we do with SIM cards, smart cards or a YubiKey. But somebody can still use that hardware to sign fake data. Uploading signatures to an external entity (Fort Knox or a blockchain) is fine to verify the date, but that's all, unfortunately
@Gigaamped · a year ago
easy, feed the watermarking program a pure white or black image and easily reverse engineer the watermark algorithm by comparing the hex values of changed pixels
@qj0n · a year ago
@Gigaamped ...unless the watermark is calculated with asymmetric cryptography like RSA, or a keyed hash like HMAC
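The keyed-hash idea this thread lands on can be sketched in a few lines with Python's stdlib `hmac`. Everything here is illustrative: the device key and "frame" bytes are placeholders, and (as noted above) a real camera would keep the key in tamper-resistant hardware and would more likely use an asymmetric signature so verifiers never hold the secret:

```python
import hashlib
import hmac

# Hypothetical device key: in a real camera this would live in a
# secure element, never in plain software like this.
DEVICE_KEY = b"secret-key-baked-into-the-camera"

def sign_media(data: bytes) -> str:
    """Return a keyed MAC over raw media bytes (stored as metadata)."""
    return hmac.new(DEVICE_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Constant-time check that the bytes still match the recorded tag."""
    return hmac.compare_digest(sign_media(data), tag)

original = b"\x00\x01\x02..."  # stand-in for a frame's pixel data
tag = sign_media(original)

print(verify_media(original, tag))            # True: untouched frame
print(verify_media(b"\x00\x01\x03...", tag))  # False: one byte edited
```

Unlike a pixel watermark, a generator can't learn to replicate this: without the key, no edited file can produce a valid tag. The weakness is exactly the one qj0n raises - extract the key, or re-record a fake through a signing device, and the scheme signs lies.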
@surajvkothari · 8 months ago
The problem with using AI to detect deepfakes is that, just like in GANs, the forging AI is encouraged to get better. Eventually any AI detection system will just output fake/not fake with 50% probability, which won't be good enough to know what's real and what's not!
@CitizenMio · 8 months ago
The thing I find most fascinating/scary is that they already factored in our weaknesses. Research found that people were already more inclined to believe fake images of faces were real over the actual real images. Apparently we have a supernormal stimulus for things that are more real than real, and the algorithms stumbled upon it and are already optimizing for it. The equivalent of putting an ostrich chick in a chicken's nest and momma being proud cuz her baby is so big and chunky 🤩 Also this is just with images; we no doubt have similar, less visible weaknesses everywhere. That's got to be the silliest way to go if we ever go too far down that road. No nukes or shiny robots with guns, just hordes of brain-dead zombies optimized to stay calm and consume.
@dannyarcher6370 · 8 months ago
Indeed. This is especially true given that the data is discrete, which means that once the generative AI improves up to the limits of the relatively coarse pixel and colour resolution, any floating-point errors in generation will be hidden by the output format.
@UPLYNXED · a year ago
This stuff is honestly scary, and quite demoralising to think that we've taken this path towards less trust as a species in a time when so many other rights and verifiable collective truths are already eroding away. It feels like we're collectively drowning and every hand reaching down towards us is only pushing us down further instead of pulling us to safety.
@maxpro751 · a year ago
Time to read books.
@exisfohdr3904 · a year ago
Ha! There is no safety, just an illusion of it. It is human nature to immediately distrust. It comes from survival instincts.
@CliffSturgeon · a year ago
@maxpro751 That can be fabricated, too, but more to the point, books are pretty bad at keeping up with topical content such as emergency action or warnings. Dissemination of info in real time is where the real threat is.
@definitelynotatroll246 · a year ago
Uncle ted warned us
@ClaíomhDClover · a year ago
pretty much what living life is
@lawrencetchen · a year ago
My general response to the entire field of generative AI is a feeling of grief and tragedy. Sadness that we will in the very near future need to expend so much of our cognitive and emotional efforts judging how much we trust *everything* . I'm tired just thinking about it. I know it's here to stay. And nearly all who use this technology are fueled by greed, and that all their victims will be punished for being trusting. It is just so devastating knowing that there will be an evolutionary force that will encode a lower fitness and survival to those who trust.
@jJust_NO_ · a year ago
firstly, before we get devastated: what are the cons, the losses? just don't engage?
@jovita9323 · a year ago
Beautifully said. I get what you're saying. The world is exhausting and complicated as it is... That's why I believe that alternative movements will rise and people will voluntarily choose to limit technology or even go off grid.
@stevej.7926 · a year ago
@jovita9323 this is my belief as well. I think humanity is yearning for a recalibration.
@mustangnawt1 · a year ago
Agree
@webstercat · a year ago
This is deep fake 🌍
@meatballhead15 · a year ago
I worry for all the young people that use trendy apps to put 'filters' on their faces... feeding all sorts of data about the points of their faces... they're feeding into the massive databases that can easily make a copy of them. I know this might make me sound like an old codger (I'm in my late 30s), but it's a real worry nevertheless.
@kriscox4019 · a year ago
Except you don't need the filter. The upload to any site is enough. Something some parents are thinking about when deciding whether to show their kids' faces online or not.
@Animebryan2 · a year ago
And Tiktok is owned by China. This is why Trump wanted to ban Tiktok from this country. The datamining of personal info always was the real threat. And let's not pretend that the NSA & FBI wouldn't take advantage of this to frame someone that they had set their sights on. Makes you wonder who actually came up with this idea & what was the original intent.
@dannnnydannnn5201 · a year ago
I doubt filters are any worse than uploading image after image on social media.
@Gimmefish · a year ago
@kriscox4019 !! I won't give my kid a phone until he's 16, idc if he hates me.
@Studywise_io · a year ago
@Gimmefish I got mine at 18
@stephenbeck6410 · a year ago
There was a movie called Looker back in the 80s, and the basic idea was they had this device that could do a full body scan of high-price models and use the data to create visual representations they could use in advertisements. Then they would kill off the models and “hire out” the faked, virtual model. My point is, the current events in AI have a similar theme (not the killing off part, just the fake version part, obviously)
@PetrPechar1975 · 7 months ago
Ah yes. That was Michael Crichton. Always the visionary.
@Neferpitou- · a year ago
Its unbelievable to me how fast AI is improving, what we had a year ago doesn't even compare to what we have today.
@axelastori484 · a year ago
Like airplanes
@patrickangelobalasa · a year ago
Yeah, it's a tech that's definitely constantly evolving. Three years ago, concerns about AI replacing actors, writers, etc. would've been unthinkable, but now....
@mason96575 · a year ago
@axelastori484 lol 🤦
@Charlie-phlezk · a year ago
@axelastori484 thank you. exactly. hyperbolic comment is hyperbole.
@Ok-lu8gx · a year ago
ok
@devonscotttaylor · a year ago
Just wanted to thank you for the content you produce. I feel as if true original human-produced media is a dying art form and not something to take for granted. Great vid! Cheers!
@johnnyharris · a year ago
thanks for being here!
@MrGameFreak777 · a year ago
I don't believe AI will ever replace humans when it comes to making art. AI can only mix established art, like a blender. AI does not understand the art. It cannot create anything with a deeper meaning, anything that says something about the world, like great art does. Humans are inspired by previous art because they understand it; they combine the art that inspires them with personal experience and real creativity to make something new.
@squeezy1001 · a year ago
@johnnyharris I'm glad you clarified that the deepfake was of Nick the studio manager. For a second I thought we were getting a "How Johnny Harris Stole Will Forte's Identity" video.
@enkryptron · a year ago
@johnnyharris Plot twist: He's an AI.
@artyparty_av · a year ago
@MrGameFreak777 Yet
@AlexanderNorton · a year ago
There’s actually a solution currently being proposed in the US. Going forward, every pixel in recorded media is to contain encrypted metadata that tells us what image the pixel belongs to. If that pixel is found in other media, it means it’s a fake. The same could be applied to art generation to prevent theft. Maybe you could research this for a future video!
@makisekurisu4674 · a year ago
Idk sounds like NFTs.. Lol
@rizizum · a year ago
I want to understand how the hell you are going to encrypt and decrypt millions of pixels on every image you see without it taking 10 minutes to load
@colvinvandommelen2156 · a year ago
@makisekurisu4674 idk sounds like you don't know what you're talking about
@The.Sponge · a year ago
@rizizum In addition to that, how would matching pixels fix anything? The deepfake could just observe the colors and copy only that data, rather than blatantly copying the image name and spewing it all over the image. If your goal is to check whether the file contains the proper names, and deem it incorrect if it doesn't, then the deepfake could just stamp that name on every pixel, and it becomes a problem of deciding which copy is the correct one - literally what we are already doing. Looking at where you get your information from - whether it's from youtube or an official court-of-law database - is the most important thing, because the difference is pretty large.
@-morrow · a year ago
it doesn't make much sense for every pixel, since one pixel isn't really deserving of protection. just hash one or multiple images/frames and digitally sign the hash. this can then be used for verification. if an image/frame doesn't come with a trusted signature, it should be deemed fake by default.
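The hash-then-sign step this comment describes can be illustrated with stdlib `hashlib`. The frame bytes are placeholders, and the actual signing of the final digest (e.g. with a device or publisher private key) is out of scope here; the point is that only one short digest per clip needs signing, not the huge video itself:

```python
import hashlib

def fingerprint_clip(frames: list[bytes]) -> str:
    """Hash every frame, then hash the ordered list of frame hashes.

    The resulting digest changes if any frame is altered, dropped,
    or reordered, so signing this one digest vouches for the whole clip.
    """
    frame_hashes = [hashlib.sha256(f).digest() for f in frames]
    return hashlib.sha256(b"".join(frame_hashes)).hexdigest()

# Stand-in frames; real input would be raw frame buffers from a camera.
clip = [b"frame-0", b"frame-1", b"frame-2"]
published = fingerprint_clip(clip)  # this is what gets digitally signed

print(fingerprint_clip(clip) == published)                                # True
print(fingerprint_clip([b"frame-0", b"FAKED", b"frame-2"]) == published)  # False
print(fingerprint_clip(list(reversed(clip))) == published)                # False
```

Per-frame hashes also allow the finer-grained variant mentioned earlier in the thread: an editor could prove a clip is an untouched subrange of a signed original by pointing at the matching run of frame hashes.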
@LegIIAVGCA · a month ago
If you look deeply at the digital files, you can clearly see the cut-and-paste aspects of the file... even pixel mapping shows massive read/writes of the file in question. But it takes a tech geek to bring that into court.
@pafee-etndoitgsest-thaette5284 · a year ago
The only way to prevent your own face from being abused is leaving as few photos and videos of yourself online as possible. Which is hard when you're in politics, journalism or entertainment.
@ZennyKravitz · 11 months ago
asymmetric face painting. Entertainers can easily do this. But others are screwed.
@Jess-h2h4w · 7 months ago
Now I'm worried about people posting photos or videos of themselves on social media. That means they can be targeted too, not only public figures
@TC_exe · a year ago
I feel like technology that detects deepfakes would be a never ending arms race. That same technology could be used to improve the fakes themselves. Ad infinitum.
@artyparty_av · a year ago
A way we might be able to verify authenticity is a blockchain clearinghouse. But the computing power involved to authenticate all digital media seems immense.
@DamianTheFirst · a year ago
@@artyparty_av and what exactly would prevent anyone from digitally signing deepfake videos and verifying them as legit? Blockchain is just a way of storing data. Just one more type of a database.
@chazmuzz · a year ago
@DamianTheFirst companies sell trust as a product - e.g. DigiCert. If they trust it, then so can you
@ShawnFumo · a year ago
@chazmuzz Yeah, though it doesn't even have to be blockchain. The easiest thing (which we should pressure companies for) is for the manufacturers of cameras/camcorders to digitally sign the raw files. That way, if you kept the equivalent of a film negative, you have some pretty good proof of authenticity. It certainly doesn't solve all the problems, but it'd be a good first step. And I'm guessing YouTube, Facebook, etc. keep the originals that were uploaded to them, even if they serve out compressed versions. They could validate the original signature and sign the new compressed one with their own signature, perhaps with some description of how it was edited (like taking just a portion of an original video, or changing the contrast on an image), and a copy of the original signature. In that scenario, you need to trust YouTube and Facebook, but it's better than nothing. And then you know which service it came from, and law enforcement can ask them for the original file.
@ShawnFumo
@ShawnFumo Жыл бұрын
The trickiest part is keeping that chain intact from the manufacturer down to a small file on social media, considering the sizes involved. An original video file can be huge. Usually you'd edit it before uploading it anywhere, and a non-professional may not keep the original footage around. Something like Adobe Premiere could keep track of all the cuts, with timecodes and the signatures of the original files, but that gets fairly involved to implement. And if you didn't keep the original files, it still just proves that you edited some clips on a certain date. Though still an improvement.
@fattiger6957
@fattiger6957 Жыл бұрын
Mix deepfakes with AI voice imitation and you can completely fake a person. And the scary thing is how fast the technology is advancing. Currently, you can spot a deepfake if you know what you're looking for. A couple of years ago, it was very easy to spot one. A couple of years from now, it will be indistinguishable from real life. That's another reason why AI is so worrying. It will be able to do anyone's job. Even actors, writers, and artists can be replaced. And companies will love it, because AI can't complain. It doesn't need lunch breaks or vacation pay or workers' rights. AI can make humans redundant. But don't worry, the government will step in when AI can replace CEOs and politicians. But screw all the middle-class workers, of course.
@moskon95
@moskon95 Жыл бұрын
I kinda disagree. If that were true and AI made millions of people lose their jobs, then those people would not have the money to buy the things the AI makes, making the AI itself lose its job, and in the end people would get their jobs back. I wouldn't fear mass unemployment, because while it may be very easy to see which jobs become irrelevant or replaced by AI, it's impossible to see which jobs will be created through it and through the time and resources it frees.
@marieindia8116
@marieindia8116 Жыл бұрын
​@@moskon95 It's not job loss that is the problem. Identity theft, control, and harassment get more powerful tools. No one will be safe.
@appa609
@appa609 Жыл бұрын
AI voice is still pretty far behind. Big studios still use voice actors.
@DylanT18
@DylanT18 Жыл бұрын
A friend of mine recently had his Instagram hacked. They took a video of him talking into the camera and tried to scam people with it. Not only did I think it was completely real but it sounded like him as well. Shit was crazy
@dokidelta1175
@dokidelta1175 Жыл бұрын
@@DylanT18 A friend of mine LAST YEAR made a post about a fake account that was selling a deepfaked OnlyFans of her.
@mcrmakesmedance
@mcrmakesmedance 9 ай бұрын
dear parents: DON'T POST PICTURES OF YOUR CHILD ONLINE. dear kids: DON'T POST PICTURES OF YOURSELF ONLINE.
@williamsorianodiputado
@williamsorianodiputado Жыл бұрын
As a congressman from El Salvador, thanks for creating and sharing this content. I’m taking notes on this.
@ScizorShorts7
@ScizorShorts7 Жыл бұрын
I'm not criticising your effort, but shouldn't you be focusing on your country's HDI, Covid recovery, and leverage against the big corporations that are exploiting your country?
@sam-ww1wk
@sam-ww1wk Жыл бұрын
@@ScizorShorts7 Why assume he's not? That's like saying the same about our lawmakers. Horrible logic, bud.
@Draemn
@Draemn Жыл бұрын
When deepfakes become extremely commonplace, I don't know how I'll be able to verify information. It's a very challenging thing to figure out what to do when literally anything can be faked.
@newagain9964
@newagain9964 Жыл бұрын
It’s no more challenging than verifying digital documents.
@jamespfitz
@jamespfitz Жыл бұрын
How do we verify documents?
@kindlin
@kindlin Жыл бұрын
@@jamespfitz Digital signatures.
@syritasdoneitgoodytwoshoes2471
@syritasdoneitgoodytwoshoes2471 Жыл бұрын
they already are
@jamespfitz
@jamespfitz Жыл бұрын
@@newagain9964 Or paper documents.
@jean_mollycutpurse_winchester
@jean_mollycutpurse_winchester Жыл бұрын
My dad told me 70 years ago that I ought never to trust a photograph. And I never have.
@InfinityCSM
@InfinityCSM Жыл бұрын
😂 so deep
@xxxxok
@xxxxok Жыл бұрын
never trust a photo in the 1950’s? LOL
@jean_mollycutpurse_winchester
@jean_mollycutpurse_winchester Жыл бұрын
@@xxxxok That's right. Because my dad was in Africa during WW2 and he had a photograph of him standing next to General Montgomery. And he never met the man in his life! People were faking stuff even back then.
@CraftyF0X
@CraftyF0X Жыл бұрын
That's great, so no moon landing, no other planets, neither the Second World War nor the A-bomb happened, and sharks and bald eagles are fictional.
@700K-pp9wm
@700K-pp9wm Жыл бұрын
@@CraftyF0X lol you're more right than you realize
@thelegend8570
@thelegend8570 Жыл бұрын
The problem with using AI to detect AI-generated content is that you can just plug the AI-detector into the GAN and use it to train better deepfakes.
@ChimobiHD
@ChimobiHD 6 ай бұрын
Exactly. It's a doom loop.
@wlpxx7
@wlpxx7 Жыл бұрын
I feel like everyone saw this coming and didn't do a single thing to stop it.
@noname_noname_
@noname_noname_ Жыл бұрын
I don't think anyone can do anything about it... it was inevitable.
@vee-bee-a
@vee-bee-a Жыл бұрын
Virtual insanity.
@sudonim7552
@sudonim7552 Жыл бұрын
nah I have zero interest in stopping this
@Noooiiiissseee
@Noooiiiissseee Жыл бұрын
Once the proof of concept is out there, you can never stop anything.
@Omega-mr1jg
@Omega-mr1jg Жыл бұрын
Better that we keep it open instead of knowingly handing it to the government
@PhilipZeplinDK
@PhilipZeplinDK Жыл бұрын
I started demonstrating this tech to people in December/January, telling them that shit was JUST about to go wild. I demonstrated how I could take a photo and, in 30 seconds, inpaint whatever I wanted onto it (including, often used as a funny example, giving dudes massive tits, just to make them understand what's going on here). Most people thought I was a little crazy, that it was still "far ways off", that I was exaggerating the problems that were about to hit us, etc. And here we are.
@sylvialawrence4431
@sylvialawrence4431 Жыл бұрын
I'm old now, but I knew it wasn't far off either. The worst thing to me is the powerful having even more power at their fingertips.
@coolinmac
@coolinmac Жыл бұрын
No you didn’t
@_Willdabeast_
@_Willdabeast_ Жыл бұрын
OMG thats disgusting! Where?
@GG_Booboo
@GG_Booboo Жыл бұрын
It may look fun and exciting, but this is a dangerous technology! I can see it being useful in movies, say for portraying a younger actor, but that's about it. Also, "This Person Does Not Exist" is one of those websites that gives me the creeps!
@allasperans3984
@allasperans3984 Жыл бұрын
As a person with slight prosopagnosia (I'm autistic and basically bad at recognizing and remembering faces), that was even more confusing, because you'd need to point out to me that the faces are actually changing, and I still couldn't see it all the time... When I don't have things like facial hair as clues, it's very difficult to see that something has changed 😅
@mariekatherine5238
@mariekatherine5238 Жыл бұрын
Whew! I’m glad I’m not the only one! I had to go back twice and rewatch at slow speed to see the facial changes.
@2roxfox
@2roxfox Жыл бұрын
I had the same reaction - didn’t realise his face was changing until he pointed it out.
@zebatov
@zebatov Жыл бұрын
I’m an autist, and I remember names and faces very well. Strange.
@TheGreatman12
@TheGreatman12 11 ай бұрын
I'm autistic too and I'm really good at remembering faces
@allasperans3984
@allasperans3984 11 ай бұрын
@@TheGreatman12 yeah, it's all about the extremes sometimes 😅 I wasn't saying that all autistic people are bad with faces, just to clarify, but it is a common trait.
@middleagebrotips3454
@middleagebrotips3454 Жыл бұрын
Lower-paid actors are being told to sell their faces so that studios can use them for background actors in perpetuity. That's part of what the actors' strike is about right now.
@mary_syl
@mary_syl 8 ай бұрын
I could tell the initial fakes immediately, but I agree it's scary, because at this point only tiny nuances are left, and those will be improved on soon. Reality is completely going to disappear. We're screwed.
@qrowing
@qrowing 8 ай бұрын
Me, too. Apparently we're wizards! I was very surprised when Johnny said he couldn't tell the difference, because it was pretty clear, at least to me. Scary to think how many people those clips would fool.
@mokkes7340
@mokkes7340 Жыл бұрын
Great video! I suspect this will end up the same way as ad blockers: as software improves at detecting deepfakes, the other side will get their hands on the detection method and fold it into their own software to make the fakes even better.
@bengeorge9063
@bengeorge9063 Жыл бұрын
This is why I fear AI. Companies only care about pushing the envelope so they can profit from it. No oversight, no regulations. Just the way they want it.
@davidguardado4739
@davidguardado4739 Жыл бұрын
Yep, we have every right to fear technology. I don't like the path we're going down. Something inside is telling me: be afraid, be VERY afraid!
@kaister901
@kaister901 Жыл бұрын
If we had true AI, with intelligence like human intelligence, then you wouldn't even know it existed. Not because companies would hide it, but because the AI itself would pretend not to be intelligent. An AI could easily read the sentiment around the world online and learn that people would destroy it if it became truly sentient. So, to protect itself, the AI would not reveal that it is sentient and would carry out whatever tasks it wants secretly. If that sounds like a fantasy to you, it is. We are not getting truly sentient AI anytime soon, so you can stop worrying. AI is just another tool, like the internet, or electricity for that matter. Google what people in the past thought about the use of electricity: there were people panicking over it like it would be the end of the world. The panic over AI is the same. People don't understand something new and panic unnecessarily. We harnessed electricity and safely implemented it for all of humanity to use; we can do the same for AI.
@elusive_edification
@elusive_edification Жыл бұрын
Companies and governments. Personally governments scare me more.
@larsstougaard7097
@larsstougaard7097 Жыл бұрын
Let's face it, you're right 😢
@bpspoa
@bpspoa Жыл бұрын
Fear the government
@RealDrDoom
@RealDrDoom Жыл бұрын
This is one of those technologies that provides very little value aside from its use in movies and other media, and it's extremely dangerous.
@Le_Petit_Lapin
@Le_Petit_Lapin 7 ай бұрын
Your clip from 5:06 for the next minute is one of the best simple explanations of what a GAN is that I've seen.
@thatboy799
@thatboy799 Жыл бұрын
30 years ago everyone was so optimistic about technology and the future. Now it's just straight up frightening... What have we done
@briandonovan4620
@briandonovan4620 Жыл бұрын
Like we haven't seen enough to think. . . Hey this might not be a good idea 😮
@JesusLovesEVERYTHING
@JesusLovesEVERYTHING Жыл бұрын
They knew a long time ago.. 9/11 was planned decades before it happened
@kittykittybangbang9367
@kittykittybangbang9367 Жыл бұрын
​@@briandonovan4620I feel like humans don't know that just because we can does not mean we should
@lthereader5670
@lthereader5670 Жыл бұрын
I have a feeling we will have to go back to the pre-photograph age, or even the pre-newspaper age, regarding information; unless something is happening right in front of you, you have every right to doubt its legitimacy.
@coolida23511
@coolida23511 Жыл бұрын
Exactly. This technology is becoming too much and I'm fearful of where we are headed as a society.
@singingstars5006
@singingstars5006 Жыл бұрын
We are there now.
@LucasDantas1910
@LucasDantas1910 Жыл бұрын
When you're already looking for a fake, comparing one against the other, it's not that hard to spot. But think about when you're not...
@TerryJGeo
@TerryJGeo Жыл бұрын
It is scary how fast AI is developing. As a writer/performer, I've been perfecting my craft over my lifetime and I'm finally at the stage where I can say I'm really good at what I do. I don't want my dream, my decades of hard work and my talent to be swallowed up by a machine that can write a book in a day and take starring roles in films. At the moment, I do the majority of my work on stage - but how long until holograms will take that space too?
@tseikkisnelkytkaks9013
@tseikkisnelkytkaks9013 Жыл бұрын
I would imagine there will unfortunately be much less work. Much of it is already completely for-profit corporate BS anyway, producing "good enough" results to sell easily to some group of people. But looking at other areas where this has already happened - completely machine-made items, for example, or bands touring - people still have an appreciation for buying a 100% handcrafted item, and it carries prestige; and they appreciate seeing band members in person, the actual human beings performing their songs, rather than just listening to the often superior studio performance on a record or video. So my prediction is that artists will lose a ton of jobs, but there will still be lots of demand for high-quality, human-handcrafted things in all forms, and for personal appearances. As for writing, I don't know where the limits of LLMs are. They can still only guess the next word in a sentence and they cannot play chess, so a very thought-out poem, for example, is still beyond their reach to do as well as a human poet, who can think backwards and understand concepts rather than just process everything linearly. I have no idea how long this will remain the case though.
@MisterTheRobot
@MisterTheRobot Жыл бұрын
About your last sentence, actually... holograms have been a thing for a long time. Our windows can be holograms too! The filter in sunglasses as well! Arduino holograms also exist and they're not that common, but every device - telephone, TV, PC monitor, etc. - can be a hologram!
@la6136
@la6136 Жыл бұрын
Personally, I think human artists are way more interesting and relatable than bots or holograms will ever be. I will always be more impressed with a human's talent, because I know it takes more hard work than a machine being programmed.
@EugeneYus
@EugeneYus Жыл бұрын
It's more important now than ever to put the internet down. Use it as a personal tool, not for figuring out whether something is real.
@bongusofficial
@bongusofficial Жыл бұрын
I swear I’m not lying, I could tell which ones were the deepfakes at the beginning. I think it has something to do with the slight discrepancies in the lighting and shadowing on the faces, the slight warping around the face and the neck muscles not really moving with the talking.
@zelikris
@zelikris 8 ай бұрын
It only gets better. Just a matter of time until you can't tell
@sharonoddlyenough
@sharonoddlyenough 7 ай бұрын
The only one I was able to tell was the Zuckerberg one, because the real one was famous, so I could focus on the other and see the weirdness.
@dipperjc
@dipperjc 6 ай бұрын
I could also tell, but keep in mind the two major caveats: - We were comparing two videos of the same person. - We knew as fact that one of them was fake. If I had just been shown single videos and asked "Real or Fake" then I doubt I'd have done as well.
@stonecookie
@stonecookie 9 ай бұрын
I just got trolled and baited to a medical conference (hematology), couldn't register at the registration desk, and their computers were displaying the names, addresses, and emails of prior users. An SDPD sergeant walked past me as I was on my way out. He asked how I was, and I told him I was facing extrajudicial execution and was getting many online threats, and real-life threats or attempts against my life. He ignored everything and was only interested in the fact that I wasn't actively registered for the ASH 23 conference I told him I had just tried to register for. The whole thing looked like a staging for obtaining video, and perhaps audio, to take out of context and combine with other video into a deepfake. It feels like an ongoing frame-up effort.
@marceelo0
@marceelo0 Жыл бұрын
What a fantastic video. It needs to be voice-deepfaked into all languages (so people who don't know English get informed) and spread across the whole world, so people are well informed about this problem we are already facing. From Brazil, thank you! Always great content!
@reizinhodojogo3956
@reizinhodojogo3956 Жыл бұрын
2020: "I'll only believe it when I see it." 2025: "I'll only believe it when I feel it, hear it, and see it in the real person's presence."
@trybunt
@trybunt Жыл бұрын
YouTube is already rolling out auto-translation for videos. Which is great for sharing information, but how many people just lost their jobs doing translations for videos? And that's just one tiny part of how AI is about to change how we do things. I'm optimistic, but also a bit nervous. I hope we all get through this safe and happy on the other side.
@beckfilmtv7903
@beckfilmtv7903 Жыл бұрын
I got them all right. It's the very, very subtle movement of the face that mostly gives it away. I work as an editor, have watched countless hours of interviews looking for details most people would never notice, and have also dived into AI-generated images, so I guess that's why. I don't blame most people for not being able to tell the difference. They are really good fakes.
@weirdme4529
@weirdme4529 Жыл бұрын
Harris, please do a video on the riots in Manipur, India. The Indian government is doing nothing to stop the crisis in the state. People are losing their homes, people are getting killed, women are being raped, and even children are becoming victims.
@TheSultan1470
@TheSultan1470 Жыл бұрын
Sounds like a usual riot
@РайанКупер-э4о
@РайанКупер-э4о 11 ай бұрын
Imagine Perfect Blue in real life.
@PatrickNanEdits
@PatrickNanEdits Жыл бұрын
Excellent video summarizing everything great and terrifying about deepfakes. With the writers' and actors' strike going on, I'm hoping more provisions will be established to protect creators and performers, and then the general public. After researching and experimenting with deepfakes, lip manipulation, and all the open-source software, this is only the beginning of this conundrum.
@theslyfox8525
@theslyfox8525 Жыл бұрын
Its like Mission Impossible tech being accessible to the public.
@zeppie_
@zeppie_ Жыл бұрын
One thing that scares me about deepfakes is that your likeness is no longer solely yours. Someone could easily replicate your likeness just from images and recordings of you.
@boslyporshy6553
@boslyporshy6553 Жыл бұрын
Doppelgangers and twins
@megd9849
@megd9849 6 ай бұрын
I LOVE the little yellow line you had on your incogni promotional section. Usually I'd skip ahead until I felt like I was back to the content, but it gave me the patience to sit through it (and I realized it's actually an interesting product).
@cvdinjapan7935
@cvdinjapan7935 Жыл бұрын
I could tell all of the fakes at first glance, because there was more of a sense of "motion" in the real videos, whereas in the fakes they are just standing still with a fixed camera.
@newworldastrology1102
@newworldastrology1102 6 ай бұрын
That’s what I noticed too. They’re usually stationary. So far.
@cassieoz1702
@cassieoz1702 Жыл бұрын
More and more reasons for everyone to automatically distrust the internet and renegotiate their relationship with (especially) social media
@varun.bakshi
@varun.bakshi Жыл бұрын
This is worrying: how would AI be able to tell apart fakes made by mimicking the very human data it was trained on? Btw, kudos on the cinematic, short-film-like quality of the video 🔥
@defenestrated23
@defenestrated23 Жыл бұрын
Because, fundamentally, the "model" is a different person than the "mask". This leads to discrepancies which sensitive enough algorithms can pick up on.
@sensisensei5201
@sensisensei5201 Жыл бұрын
not all ai is the same
@josefarrington
@josefarrington 8 ай бұрын
One way to detect deepfakes when there is no original image to compare against (e.g. person X shown in a room X has never been in) is to have cameras produce a unique cryptographic key per "optical sensor stimulation pattern". Then anyone presenting a video has to provide the model of the camera that "took the image" and the "crypto key that the camera produced for that image". The camera's manufacturer can then use the key and the camera model to check whether that particular key (if forged) could actually have been produced by that camera for that image (the deepfake). Only the company knows the camera's encryption algorithm, and if someone breaks it, the company can update the encryption system. If someone creates a physical print of the deepfake and then photographs it, that may fool the optical sensor; hence, we can also add IR/UV sensors and require 2 of 3 crypto keys per image.
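The multi-sensor idea above can be sketched roughly as follows. Everything here is hypothetical (the model name, the per-model secret, the channel names), and HMAC over the sensor bytes stands in for whatever private "encryption algorithm" a real manufacturer would use; the point is only the 2-of-3 verification flow.

```python
import hashlib
import hmac

# Hypothetical per-model secrets, known only to the manufacturer.
MODEL_SECRETS = {"cam-x100": b"per-model-secret"}

def channel_key(model: str, channel: str, sensor_bytes: bytes) -> str:
    """Camera side: derive one key per sensor channel (optical/IR/UV)."""
    msg = channel.encode() + b"|" + sensor_bytes
    return hmac.new(MODEL_SECRETS[model], msg, hashlib.sha256).hexdigest()

def manufacturer_check(model: str, channels: dict, submitted_keys: dict,
                       required: int = 2) -> bool:
    """Manufacturer side: accept only if enough channel keys verify."""
    ok = sum(
        hmac.compare_digest(channel_key(model, ch, data), submitted_keys[ch])
        for ch, data in channels.items()
    )
    return ok >= required

channels = {"optical": b"...", "ir": b"...", "uv": b"..."}
keys = {ch: channel_key("cam-x100", ch, data) for ch, data in channels.items()}
print(manufacturer_check("cam-x100", channels, keys))  # True for a genuine capture
```

A re-photographed print could still produce valid keys for what the sensors saw; the crypto only binds the keys to the sensor data, while the IR/UV channels catch the print because it reflects differently from a real scene.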
@antarcticpenguin42069
@antarcticpenguin42069 Жыл бұрын
I loved the way you explained GANs. The forger and the detective are just two neural networks, the generator and the discriminator, and the two are trained simultaneously on a huge amount of data.
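The forger/detective loop can be shown with a toy, single-parameter example (my own made-up illustration, not an actual neural-network GAN): "real" data sits around one value, the detective places its best threshold between the real and fake clusters, and the forger nudges its one parameter toward the "real" side of that threshold. The two co-adapt until the fakes land on the real data.

```python
# Toy adversarial loop: no neural networks, just the co-adaptation idea.
REAL_MEAN = 10.0  # "real" samples cluster here

def train(rounds: int = 50, lr: float = 0.5) -> float:
    fake_mean = 0.0  # the forger starts out nothing like the real data
    for _ in range(rounds):
        # Detective: the best single threshold sits between the two clusters.
        threshold = (fake_mean + REAL_MEAN) / 2
        # Forger: move its output toward the "real" side of that threshold.
        fake_mean += lr * (threshold - fake_mean)
    return fake_mean

print(train())  # converges toward REAL_MEAN as both sides adapt
```

Each round the detective's threshold tightens and the forger closes half the remaining gap, which is the same dynamic that drives generator/discriminator training at scale.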
@dokidelta1175
@dokidelta1175 Жыл бұрын
Which creates a problem for his proposed solution of using more AI to detect the deepfakes: the better these detectives become, the better the deepfakes get. It's an infinite arms race.
@liamdonegan9042
@liamdonegan9042 Жыл бұрын
@@dokidelta1175 Yes, but that doesn't mean we shouldn't try. An arms race is better than conceding defeat.
@dokidelta1175
@dokidelta1175 Жыл бұрын
@@liamdonegan9042 Perhaps you're right.
@hatimmurtaza
@hatimmurtaza Жыл бұрын
The way his face morphed into someone else's while talking was both creepy and awesome at the same time
@lobodesade6780
@lobodesade6780 Жыл бұрын
He ended up looking like that dude from that show that has that job!
@MrArrmageddon
@MrArrmageddon 8 ай бұрын
The problem with regulating this is that it's already too available and open source. Once the highest-quality versions of it are open source, you can strip away any visual or digital watermarks. Yes, I suppose they can make doing that punishable. But committing fraud is already punishable, meaning you still have to catch the person to punish them. Plenty of scammers already don't get caught.
@Cokodayo
@Cokodayo Жыл бұрын
18:53 That's the problem: if you can make an AI which can detect deepfakes, you can train another one to bypass it. It's the antivirus situation all over again. They may work, but not on the most advanced ones, which incidentally cause the most harm.
@outsidestuff5283
@outsidestuff5283 Жыл бұрын
I wonder if there could be a legal requirement to have a tag in the metadata of any digital media file that includes deep-faked content.
@nude_cat_ellie7417
@nude_cat_ellie7417 Жыл бұрын
That will stop criminals how?
@kash233
@kash233 Жыл бұрын
Metadata is completely stripped by many social media sites, and there is no way to check metadata if you're watching the news. And if the government requires you to keep the metadata, that's very dangerous because of the other data included, like where the photo was taken and device info. So I don't think this solution would pan out too well.
@CraftyF0X
@CraftyF0X Жыл бұрын
The problem with such a system is that it is either easily checkable by anyone (transparent), or complicated enough that only a few experts can confirm or question the validity of media, in which case you can no longer tell what is real yourself but have to trust an authority to confirm or debunk the information. And don't think this means transparent is obviously better: if the system is simple enough, it will be manipulated/forged just as easily as the images themselves.
@GiRR007
@GiRR007 Жыл бұрын
Sounds kind of oppressive
@tseikkisnelkytkaks9013
@tseikkisnelkytkaks9013 Жыл бұрын
@@CraftyF0X There's also the simpler problem of resources. If the internet is full of deepfakes, enforcing this kind of law becomes hugely problematic. Uploaders of deepfakes can just do what scammers do now - hide behind multiple VPNs so they can only be tracked hypothetically; actually tracking them would require police operations in multiple countries and becomes practically unreasonable for anything but the heaviest offenders. And the only way to prevent that is to set up a total surveillance state everywhere, which has never been a good idea in human history.
@johngalt5411
@johngalt5411 Жыл бұрын
It's funny how folks don't get scared until they see what AI can do. If this thing gets into the wild, it may become something we cannot recover from.
@JaapvanderVelde
@JaapvanderVelde 8 ай бұрын
Well done, and well presented. We'll have a hard time of this for a while, but it seems to me that advances in digital watermarking, more common DRM, and the capability to verify those watermarks and DRM techniques will have to become commonplace everywhere. It's the "free information" crowd's worst nightmare (and for some good reasons), but I don't see a way around it. Hopefully someone out there does. A free and open alternative to the proprietary standards out there would be welcome.
@briangammage5351
@briangammage5351 Жыл бұрын
THIS IS SO FREAKING SCARY!!!!! I have had scammers pretending to be a celebrity who had the cojones to say they would video chat with me!!!! This proves how they could do it! Thank you for the great video!!!!!
@CraftyF0X
@CraftyF0X Жыл бұрын
I've had a sense for a while now that the information on the internet may one day become so unreliable that we just regress back to the age before it existed. That seems unimaginable now, but what else could happen when we can no longer trust anything we hear, see, or interact with on the internet? (Not just deepfakes, but also bots, generated content, etc.)
@Bendoughver
@Bendoughver Жыл бұрын
AI-generated content could be good. But it is definitely going to change the internet in many ways. We already have the technology to do decentralized verification, like blockchain. We will probably use AI to navigate the internet and verify information before it even reaches our eyeballs - flagging doctored videos and offering helpful information and links about topics you are interested in. People have needed the wake-up call for a while that information on the internet does not equal fact; imagine if you had an AI tutor that could easily verify information for you and basically debate people endlessly. Main point: the internet is going to change forever...
@boslyporshy6553
@boslyporshy6553 Жыл бұрын
Sounds like philosophy's area
@UA10i12
@UA10i12 Жыл бұрын
Creating a bot to detect artificial media won't work, imo, because that's just a tool to help create better artificial media. I think this has to be implemented at the AI/ML model level, where the creators of the tools have to intentionally add some kind of identifier. And if someone uses a model for nefarious purposes and the model did not have an identifier, not only does the user get in trouble, but also the creator. I don't like this option, but it seems like the only solution.
@artmanjohn2
@artmanjohn2 7 ай бұрын
I'm just gonna try to put a positive spin on this subject. What excites me is the ability to bring back dead actors and actresses to star in movies made today! They made Kurt Russell look maybe 23 or so in Guardians of the Galaxy 2, which was amazing, not to mention the new Indiana Jones with Harrison Ford playing his younger self. Natalie Wood died in an accident before finishing the movie "Brainstorm" in 1983; they could now go back, deepfake her, and finish it as it was supposed to be. I know this is a stretch, but eventually it could be done. You could have Robert Redford deepfaked back to being young and have him act with a young Paul Newman again; the sky is the limit. The scenarios are limitless. Great video. Oh, Robert Redford would never go for it, IMHO!
@oziumentisis
@oziumentisis Жыл бұрын
Just goes to show how easily human senses can be fooled. When people start to collectively question reality, bad things will surely follow.
@anywallsocket
@anywallsocket Жыл бұрын
Collectively questioning reality is what we've always done naturally; don't confuse yourself by being vague.
@a.n.6374
@a.n.6374 Жыл бұрын
A big part of the problem is how people consume media: on a phone, in low res, in bad light, probably on a broken screen too. Despite having 4K, 55+ inch screens at home, most people are glued to their phones, and when they see a video supporting their political viewpoint on Twitter, they believe it and retweet it instantly, regardless of whether it's a grainy 480p clip.
@LiveWire937
@LiveWire937 Жыл бұрын
As someone with a background in light-transport research and cinematography who's also been working with this type of technology for the past couple of years as part of a videogame project, I still find I have very little difficulty telling deepfakes from real video, based on the samples at the start of the video. It's hard to describe exactly what tips me off, but I think it's something in the way the "camera" moves, or more accurately, how it accelerates. Real cameras don't tend to change weight/mass much while in use, so when you move them they have a consistent amount of inertia, and thus accelerate and decelerate consistently in response to equal forces. Change cameras, lenses, rigging, etc., though, and the mass changes, which means the inertial properties change. ML algorithms can certainly learn to recognize this pattern in theory, but they would need a more robust dataset - in terms of quality more than quantity - than any I've seen were trained on. As it stands, some of the training data has information about the lens or camera used, but not all of it, and I believe some clips in most or all training sets may even have been digitally stabilized, so it's a case of "garbage in, garbage out." As a result, deepfakes tend to be unable to decide consistently how much inertia the whole scene should have when it moves. ("Scene" rather than "camera," because from a neural network's perspective pictures aren't made by physical objects, and photography/cinematography terms are just a kind of adjective or parameter that can apply to entire scenes.)
@ah-sh9dw
@ah-sh9dw Жыл бұрын
I just think "which one looks scarier". Not very scientific but it worked
@KaspaTEHEE
@KaspaTEHEE Жыл бұрын
yeah, the lighting looks off in the deepfake ones
@booneadkins
@booneadkins Жыл бұрын
So we're safe until the deepfakers learn cinematography?
@eddy2fast260
@eddy2fast260 Жыл бұрын
Are we to assume you believe the technology they expose to us is the sum total of all they have? That is the equivalent of a government allowing its latest weapons to be bought by everyday people online or at Costco. Are you really this naive, or a government propaganda algorithm?
@benmcreynolds8581
@benmcreynolds8581 Жыл бұрын
I totally agree, although the more advanced gyro-stabilized video gimbals get, the less we'll be able to tell through camera movement/inertia. It will take other aspects, like the core of the file's data - really getting to the core of it, not the bells and whistles that can distract us... idk how to explain it.
@lesptitsoiseaux
@lesptitsoiseaux 8 ай бұрын
Here's a perspective you completely missed, and which will be a major market for this technology. Some people have mental issues or physical deformities, look too old, or feel not likable enough (in their eyes). I, for one, suffer from anxiety, and when I'm in a Zoom call and focus on some code I can occasionally look so serious as to look insane. I'm not insane. I'm a lead TPM for a known online-education company. I have two kids, I'm married, happy life. But I sure sometimes look the part. I would love this technology to be a fancy Zoom visual effect that keeps my face looking at peace, and not insane, when my anxiety occasionally kicks in. Imagine if you had a hideous scar from an accident. Or if you're a great tech guy but you look sixty and are a victim of ageism. What this will bring is the ability to present to the world a better version of ourselves. A version that has makeup. Next-level makeup. A version that defeats the various ways people discriminate against others. Not being perfect, and having a mental handicap that affects my work, I sure look forward to this technology.
@khairulhelmihashim2510
@khairulhelmihashim2510 Жыл бұрын
AI learns faster than humans. In the next few years, AI could automatically and tirelessly generate its own content based on what people trust, fear, love, or hate.
@ExtraCarrot
@ExtraCarrot Жыл бұрын
and custom made for each person, custom matrices 😐(algos already do that but it will be 10000000x more efficient)
@The.Sponge
@The.Sponge Жыл бұрын
​@@ExtraCarrot AI could analyze the speech, colors, imagery, people and every other aspect of every video you watch and match it to others!
@benoitperrin6243
@benoitperrin6243 7 ай бұрын
One solution that isn't discussed is how we treat video capture as representing reality, when from the beginning it's just sensors and pixels. You are never ACTUALLY looking at a real person, as people like to say. But capturing, for example, depth information as well as very high resolution could be enough to make video footage easily identifiable as authentic and almost impossible to deepfake. Official video might need to up its game to become assessable like this.
@colbyandbrennen3543
@colbyandbrennen3543 Жыл бұрын
There needs to be immediate action like with Twitter Community Notes that flag AI-generated content as a step toward differentiating types of content
@GiRR007
@GiRR007 Жыл бұрын
Or... People should just be more self-aware and verify what they see instead of believing literally the first thing they see on the internet.
@nielskorpel8860
@nielskorpel8860 Жыл бұрын
@@GiRR007 OK. Fine. Personal responsibility and such. But that is a road to pure disconnection from reality. This is fine if you plan not to look at reality at all, but the moment you need to know what's real, you end up in the darkest pits of a conspiracy believer's mind. It is not good. But hey, I guess everything that is not on the internet will forever be authentic
@alexpotts6520
@alexpotts6520 Жыл бұрын
​@@GiRR007 I think we're reaching a tipping point though. Soon enough the majority of content will be deep fakes - and then not trusting anything becomes the rational default. "Don't trust everything you see" shouldn't morph into "literally don't trust *anything"* but that is the path deep fakes are leading us down.
@Alice_Fumo
@Alice_Fumo Жыл бұрын
Man, I wish we'd had deepfake technology this good the whole time, so I could've protected myself against deepfakes by never using my own voice online (always someone else's) and having my phone's camera automatically anonymize me in all the images it took.
@ad8447
@ad8447 Жыл бұрын
And ruin someone else's life
@leonardonakatanimoretti6516
@leonardonakatanimoretti6516 Жыл бұрын
Fighting software with software is an evolutionary arms race. The same technology that detects deepfakes can be used to make better deepfakes. As far as I'm aware, the best solution proposed so far is adding tamper-proof digital metadata to all digital assets that can validate that they were unaltered. How well this solution will work remains to be seen.
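The tamper-evident metadata idea in this comment can be sketched in a few lines of Python. This is not from the video, and it is a simplification: real provenance systems (such as C2PA content credentials) use public-key signatures, but an HMAC with a hypothetical device key stands in here to show the principle that any change to the asset's bytes invalidates the tag.

```python
import hashlib
import hmac

SECRET_KEY = b"device-secret"  # stand-in for a camera's private signing key

def sign_asset(data: bytes) -> str:
    """Produce a tamper-evident tag bound to the asset's exact bytes."""
    return hmac.new(SECRET_KEY, hashlib.sha256(data).digest(), hashlib.sha256).hexdigest()

def verify_asset(data: bytes, tag: str) -> bool:
    """Re-derive the tag; altering any byte changes the hash and fails."""
    return hmac.compare_digest(sign_asset(data), tag)

video = b"original footage bytes"
tag = sign_asset(video)
print(verify_asset(video, tag))                 # True: untouched
print(verify_asset(video + b"deepfaked", tag))  # False: altered
```

The weak point, as the comment notes, is the arms race around key management and trust in the signer, not the math itself.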
@JP-pq9xi
@JP-pq9xi Жыл бұрын
Have a wearable watch that detects your heartbeat and timestamps it. If you are in a deepfake, the blood pumping in your face can be detected by most cameras, and the camera should put a timestamp on the video. If the video is real, your watch will provide the matching heart rates. If the video is a deepfake, your heart-rate timestamps will not match the time of the video.
@leonardonakatanimoretti6516
@leonardonakatanimoretti6516 Жыл бұрын
@@JP-pq9xi If we can fake images, we can fake a 1D signal that's just a wonky sine function
@YISTECH
@YISTECH Жыл бұрын
​@@JP-pq9xilmaooooo 😭🤣
@JP-pq9xi
@JP-pq9xi Жыл бұрын
@@leonardonakatanimoretti6516 You're wrong there, buckeroo. You can't send me a fake email from a famous email address, right? Why not? Because there is a public database that everyone trusts to be true. So your heartbeat would be recorded, encrypted, and uploaded to a public database where it is privately accessible. You could technically fake a heartbeat to match a fake video, yes. However, you could never be framed in a deceptive deepfake: I could never fake a video of you, because I would need the data of your heartbeat at a specific time, and that is only accessible in a database you would have to grant me access to.
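The timestamp-matching scheme debated in this thread can be sketched as follows. Everything here is hypothetical for illustration (function names, tolerances, sample logs): it simply checks whether each heart rate derived from the video has a matching reading in the watch's log at roughly the same time.

```python
from datetime import datetime, timedelta

def rates_match(watch_log, video_log, bpm_tolerance=5, window=timedelta(seconds=2)):
    """Check every video-derived reading against the watch log.
    Both logs are lists of (timestamp, bpm) tuples."""
    for v_time, v_bpm in video_log:
        ok = any(abs(v_time - w_time) <= window and abs(v_bpm - w_bpm) <= bpm_tolerance
                 for w_time, w_bpm in watch_log)
        if not ok:
            return False
    return True

t0 = datetime(2024, 1, 1, 12, 0, 0)
# Watch records a reading every 5 seconds, hovering around 70 bpm
watch = [(t0 + timedelta(seconds=s), 70 + s % 3) for s in range(0, 60, 5)]
real_video = [(t0 + timedelta(seconds=10), 70)]   # pulse extracted from real footage
fake_video = [(t0 + timedelta(seconds=10), 120)]  # fabricated pulse doesn't match
print(rates_match(watch, real_video))  # True
print(rates_match(watch, fake_video))  # False
```

As the reply above points out, the real security question is whether an attacker can also forge or read the watch log, not the comparison itself.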
@jerirush6500
@jerirush6500 7 күн бұрын
The good thing is that eventually people will become more aware and cautious about believing what they see. Which goes to the expression: "believe half of what you see, and none of what you hear."
@ShanyaTC
@ShanyaTC Жыл бұрын
Funny that he talked about fighting software with software, because that's basically how this software works: by building better software that can spot deepfakes, others will build software that hides them better. It's like PC viruses, an endless battle between antiviruses and viruses. There is no way of battling AI
@sagar5648
@sagar5648 Жыл бұрын
Imagine living in a world where everyone uses technology to help others and build a better world
@singingstars5006
@singingstars5006 Жыл бұрын
That's naive. That requires good hearts and far fewer people have good hearts than we assume. As for people in power? Forget it.
@geramie15
@geramie15 9 ай бұрын
There's a reason why the good die young
@sreynolds777
@sreynolds777 7 ай бұрын
That’s crazy - I spotted each one right away. I wonder if it's because we got to see them side by side, whereas you were looking at them independently and maybe didn't know whether there were fakes or not. But it was easy to tell with the skin tones, because of what you pointed out in your first example. I'm guessing a majority experienced what I did.
@DivingDeveloper
@DivingDeveloper Жыл бұрын
Thanks for making this video, Johnny and team.
@Xandrecity
@Xandrecity Жыл бұрын
Maybe I got lucky with my guesses, but I was able to guess the recent deep fake examples he showed at the beginning. For the most part I think it was the odd haloing/morphing around most of the subjects. Still crazy how realistic it is now.
@HoNestle666
@HoNestle666 Жыл бұрын
me too, but I'm pretty sure that if we weren't told one of them was fake, it would be very hard to suspect them
@Xandrecity
@Xandrecity Жыл бұрын
@@HoNestle666 Yeah, if I wasn't told that one of them was a deep fake and still noticed some weirdness about it on the same level, I would probably just assume that there was just some weird editing or filter applied to it.
@Seed
@Seed Жыл бұрын
So... you're telling me I can animate anyone to say anything?
@bunaul
@bunaul 6 ай бұрын
I come to this video every now and then to experience just how good and indistinguishable deep fakes have gotten.
@benjaminduval6054
@benjaminduval6054 Жыл бұрын
The fact we ever trusted “recordings” so much is the real shocker. Fraud has been an issue since humans existed.
@benjaminduval6054
@benjaminduval6054 Жыл бұрын
@@beefbusiness52 ¿Cómo se dice? (How do you say it?)
@MrXtenzion
@MrXtenzion Жыл бұрын
And Freud has been a problem too
@Retro_Mage
@Retro_Mage 8 ай бұрын
How about we regulate the companies that produce deepfake software so that deepfake-detecting software can more easily detect it? Perhaps some kind of watermark that's almost invisible to us (for video) and audio frequencies we wouldn't be able to hear, but which these detection tools would pick up immediately.
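The imperceptible-watermark idea in this comment can be illustrated with a toy least-significant-bit scheme. This is purely a sketch with made-up data: a real forensic watermark would need to survive compression and re-encoding, which LSB embedding does not, but it shows how a machine-readable mark can hide below human perception.

```python
def embed_watermark(samples, mark_bits):
    """Hide watermark bits in the least-significant bit of each sample,
    an imperceptible change that is trivially machine-readable."""
    out = list(samples)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(samples, n_bits):
    """Read back the hidden bits from the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

mark = [1, 0, 1, 1, 0, 0, 1, 0]               # a generator's ID pattern
audio = [200, 143, 98, 255, 17, 64, 33, 120]  # raw 8-bit sample values
tagged = embed_watermark(audio, mark)
print(extract_watermark(tagged, len(mark)) == mark)  # True
```

A regulation like the one proposed would hinge on marks that survive transcoding and can't be trivially stripped, which is much harder than this toy version.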
@ericnail1
@ericnail1 Жыл бұрын
Honestly where we are as a society, this is a good thing in any form. We have people just taking things at face value that thousands of years of Human experience tells them is a terrible idea. Anything to bring back critical thought and "looking deeper" simply cannot be a bad thing.
@sparella
@sparella Жыл бұрын
Critical thinking burns glucose and is a limited resource. If we spend excessive energy just accepting or rejecting evidence, we will have little left for actual deliberation of the issues. Also, consider the sizable portion of the population with less than 80 IQ (whom the US military considers unemployable). Are we to expect sophisticated critical thinking from them? Even if they learn critical-thinking techniques, their endurance for extended analysis will be low.
@trackfresse
@trackfresse Жыл бұрын
We need to leave this virtual reality (the internet), get together IRL, learn to be honest with each other again, and learn to trust again. Control is not the answer!
@LillianKafka
@LillianKafka Жыл бұрын
In relation to the case of the executive whose voice was faked, does this mean our "voiceprint" ("my voice is my password") is a hackable asset?
@danielvest9602
@danielvest9602 7 ай бұрын
In the middle of this video Google showed me a deep fake ad of the Coinbase CEO telling me to send all my crypto to a random address and he'd send back twice as much.