Is there really much of a difference? I don't think so
@poikazal3 жыл бұрын
This is probably the best walkthrough explanation on deepfakes I've ever seen. Quick question: how long does it take to merge? Will it depend on the type of PC you're using?
@Deepfakery3 жыл бұрын
Merging should go pretty quickly, at least a few FPS. Adding extra options like color transfer and super resolution will slow it down, sometimes dramatically. I believe the process is heavily CPU-bound.
@ruski1712 жыл бұрын
@@amj76425 shush
@MMAUniversityTime4 жыл бұрын
Don't understand why other videos have such a hard time explaining the merge part, Thank you!
@CerebralRiches4 жыл бұрын
right? now i understood it at once. others are making my brain commit suicide.
@Deepfakery4 жыл бұрын
I don't get it either. They just skip right over really important and basic stuff while adding in all sorts of useless info. It's like they expect you to know how to do it before even watching the tutorial.
@LexlutherVII3 жыл бұрын
you described the school education system in one comment!!!😂
@LexlutherVII3 жыл бұрын
@@Deepfakery They are naive ASF, SMH. They wanna make a simple thing into rocket science!!!🤢
@LuigiLuppo20213 жыл бұрын
Excellent... I followed the tutorial faithfully and got the same result as you ... very cool ... really amazing ... thanks!
@Deepfakery3 жыл бұрын
Great to hear!
@mandiraganguli78142 жыл бұрын
@@Deepfakery Is this software free of cost, or do I have to pay?? Awaiting your reply..
@JohnWhite-iz4nl2 жыл бұрын
@@mandiraganguli7814 its free
@dreamsanimationstechnology3363 Жыл бұрын
Hi... how can I download the software?
@The404Studios4 жыл бұрын
This is dangerous, and super freaking realistic what the hell.
@HuyHoang-gq4kz4 жыл бұрын
Why is this dangerous?
@JoeMitchell8684 жыл бұрын
@@HuyHoang-gq4kz In the wrong hands of somebody with bad intentions
@hayleeeeeee4 жыл бұрын
Not dangerous at all, as it won't work for most people; it takes too long and the results are very poor, to be honest. You need professional software and knowledge to pull off anything remotely good.
@hohohappyday774 жыл бұрын
@@HuyHoang-gq4kz Because the tool will waste your time.
@K3v1144 жыл бұрын
@@hayleeeeeee Dude, in a matter of a year it went from needing thousands of high-quality pictures of a person in all different kinds of lighting and days of training, to a single video and 1 hour of training. Soon it will be one picture of subject A to perfectly replace the face in video B in 1 minute. Then we'll have audio deepfakes and bam.
@binbinn284 жыл бұрын
I'm stuck at "Starting. Press "Enter" to stop training and save model". The trainer window doesn't show up.
@Deepfakery4 жыл бұрын
What is your GPU?
@electricanimation33794 жыл бұрын
@@Deepfakery I have a 2080 Super and it won't work; I'm stuck at that point
@AppyTheApe4 жыл бұрын
@@electricanimation3379 Will this work properly on an integrated graphics card? I have a Radeon Vega 8.
@keyopipipi42984 жыл бұрын
@@AppyTheApe work for radeon vega 8?
@AppyTheApe4 жыл бұрын
@@keyopipipi4298 I asked the same buddy
@Deepfakery4 жыл бұрын
*IMPORTANT: CHOOSING THE RIGHT BUILD*
Download Here: mega.nz/folder/Po0nGQrA#dbbttiNWojCt8jzD4xYaPw
DeepFaceLab is designed to run on Windows 10 and Linux.
DFL 2.0 NVIDIA RTX3000 series build - requires an NVIDIA 3000-series GPU.
DFL 2.0 NVIDIA up to RTX2080Ti build - NVIDIA GPU with CUDA compute capability 3.5, 5.0, 6.0, 7.0, 7.5, 8.0 and higher. Check your GPU here: developer.nvidia.com/cuda-gpus. CPU training requires the AVX instruction set.
DFL 2.0 DirectX12 build - AMD, Intel, and NVIDIA devices with DirectX12 support.
DFL 1.0 OpenCL build - devices supporting OpenCL. This version is no longer maintained.
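If you're unsure which build matches your card, one quick way to check the CUDA compute capability is a small script like the sketch below. It assumes an NVIDIA driver recent enough to expose the compute_cap query field; on older drivers, just look the card up at the NVIDIA link above instead.

```python
# Minimal sketch: query GPU name and CUDA compute capability via nvidia-smi.
# Assumes a driver new enough to support the "compute_cap" query field.
import subprocess

def list_gpus():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,compute_cap", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

for gpu in list_gpus():
    print(gpu)  # e.g. "NVIDIA GeForce GTX 1080 Ti, 6.1"
```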
@wyngreece72 ай бұрын
For an NVIDIA GTX 1660 Super?
@CerebralRiches4 жыл бұрын
This worked well for my 1660 Ti. It took 1 hour for 35k iterations. I'm also happy with the result. Thanks for the tutorial!
@choksm90794 жыл бұрын
whats your cpu and ram?
@parishna48824 жыл бұрын
@@choksm9079 lol why? The gpu does the work.
@IzludeTingel4 жыл бұрын
12 hours for me to reach 49k iterations lol, 9900k underclocked to 2.8 (speed steps to 3.6)
@karenslivesmatter21863 жыл бұрын
Guys how do you not keep pressing p
@karenslivesmatter21863 жыл бұрын
Nvm, I asked a dumb question. I just realized it keeps running and P is just for the preview
@SkyShazad3 жыл бұрын
This is the BEST straight Forward Video out there for learning this ... THANK YOU
@I77AGIC Жыл бұрын
this is perfect for first timers. thanks for *actually* keeping it short and sweet
@toxicshot4528 Жыл бұрын
short and straight to the point, thanks!
@worldmasterpiececartoonsli1332 жыл бұрын
Hi there! I really enjoyed the video but I came across a little issue... I get the following error when doing the step at 3:09: Error: No training data provided. Did I miss a step? Please could you help me? Thanks.
@Deepfakery2 жыл бұрын
Make sure you extracted the images and then the facesets from both videos.
@worldmasterpiececartoonsli1332 жыл бұрын
@@Deepfakery Okay thanks!
@XMrRex4 жыл бұрын
Wow really good and straight to the point. Thanks for the video man I've been interested in this for a while. Is there some sort of guidelines, for example if I want to attach my own face to a video and I'm recording a source vid. What would be the fastest way to capture all my face angles and have the source be as small as possible? Just look around in all directions?
@Deepfakery4 жыл бұрын
That sounds like a good start for a general faceset, but you also want many facial expressions, so maybe try reciting something during the recording or have someone interview you. If you’re doing a specific video then anything you can do to match the angles and lighting will be a great help. You can delete unnecessary src images during the View Facesets step.
@1GTX13 жыл бұрын
I was about to give up, but then I used the RTX2080Ti zip build for my 1660 Super and it works. Thanks for the video!
@Deepfakery3 жыл бұрын
Yeah there’s 2 different builds now
@Luciferaorg4 жыл бұрын
When I click on train Quick96, it initializes models at 100%, then loads samples at 100%, but the second loading of samples does not start. I have a Ryzen Threadripper 1920 with 12 cores and 24 threads and an NVIDIA GeForce RTX 2070. Do you think the problem is that I am running DeepFaceLab from an external hard disk? I will copy it to my internal hard disk and see what happens
@teriosshadow174 жыл бұрын
Here is the issue I have with it. It says error "Failed to load native TensorFlow runtime". What am I doing wrong?
@Deepfakery4 жыл бұрын
Could be a file writing error. Try unzipping the package again. Sometimes one of the little python files gets corrupted during extraction.
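If re-extracting doesn't help, one way to spot a corrupted .py file is to try byte-compiling everything under the DFL folder. A minimal sketch, with the install path as a placeholder:

```python
# Minimal sketch: byte-compile every .py file under the DeepFaceLab folder to find
# files that may have been corrupted during extraction. Adjust the placeholder path
# to wherever you unzipped the build.
import pathlib
import py_compile

DFL_DIR = pathlib.Path(r"C:\DeepFaceLab_NVIDIA")  # placeholder install location

for py_file in DFL_DIR.rglob("*.py"):
    try:
        py_compile.compile(str(py_file), doraise=True)
    except py_compile.PyCompileError as err:
        print(f"Possibly corrupted: {py_file}")
        print(f"  {err.msg}")
    except OSError as err:
        print(f"Could not read: {py_file} ({err})")
```

Note that a "Failed to load native TensorFlow runtime" error can also come from missing DLLs or the wrong build for your GPU, so this only catches one class of problem.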
@Gaurav-by5bb4 жыл бұрын
Wow man, you are replying to every comment, hats off bro 👍🏻
@Deepfakery4 жыл бұрын
I'm trying to help all the people who are in the same situation I was in. It took a long time to figure out how all this works and I'm still learning each day.
@vodkavodka89033 жыл бұрын
@@Deepfakery thank you sir
@AlexDelgado3284 жыл бұрын
Time To make a dame da ne meme
@xvx_j_xvx52554 жыл бұрын
You don't need to use this just search up dame da ne tutorial
@everettsalmans1044 жыл бұрын
Just use python and Kawping
@Deepfakery4 жыл бұрын
They're right, it's a different software called First Order Motion. You can make a kind of Dame Da Ne meme using DeepFaceLab but it would look weird. Best bet is to use the other method...
@parishna48824 жыл бұрын
honestly, that janky method is better left out of the deep fake side of things. More like the obviously bad fake to annoy your friends with, side of things... lol
@methadonmanfred27873 жыл бұрын
@@parishna4882 quality of the result depends on the user, you have to know how to use it properly, then you can get some damn good looking deepfakes
@hagemaru964 жыл бұрын
I get "Python has stopped working" during training
@danielwillems9114Ай бұрын
Thanks for the detailed instructions. Approximately how long should one expect this process to take from start to finish?
@pasatorman82943 жыл бұрын
what do "f", "wf" and "head" mean in the faceset extraction? what changes depending on what you choose?
@Deepfakery3 жыл бұрын
They are different sizes/areas of the face that can be trained. Check out my faceset extraction tutorial for a full explanation
@fakeshemptaboo2 жыл бұрын
@1:42 all I get is “failed to load the native tensorflow runtime”… how do I fix this?
@KaelumKrispr3 жыл бұрын
Having issues with the sample loading part of training; it just stops after all the samples load, then says "press any key to continue", and nothing happens
@FJPkagami3 жыл бұрын
i have the same problem too
@kalevteener33802 жыл бұрын
same
@paulgeorge92282 жыл бұрын
are you using the right build for your nvidia or amd card?
@MCMIXING12 күн бұрын
@@paulgeorge9228 what build do i need for 7900xtx? thanks
@yasayanpan4 жыл бұрын
Hey man, this is a great video! I just have one question. After I did everything in the video the result I got was just a still image. What can I do to fix this?
@Deepfakery4 жыл бұрын
Did you do both final steps of the merger, apply settings to all frames (shift + /) then render all frames (shift + >)?
@yasayanpan4 жыл бұрын
@@Deepfakery Thank you so much for answering, I must have missed those steps.
@charan19693 жыл бұрын
@@Deepfakery Faceset extraction is not working properly in the command prompt... it prints "Extracting faces... Error... Unable to start subprocess. Press any key to continue." I didn't get any option to choose CPU or GPU 😞
@diegorobles100able3 жыл бұрын
@@Deepfakery when im merging im not getting any response when the merger keys pop up, i cant set any of the settings
@lewisjaygomes161711 ай бұрын
@@Deepfakery i did this but the merging doesnt happen. it never starts. what is going on?
@ym171914 жыл бұрын
amazing, clear explanation! worked like a charm on my gtx 1080 and 2700x
@Deepfakery4 жыл бұрын
Excellent! You should be able to use the SAEHD trainer with that setup. I'm working on a tutorial but there are a lot more details to cover.
@worldmasterpiececartoonsli1332 жыл бұрын
Hi again. I'm almost at 2000 iterations in my training but I still have just weird pixels next to the faces. There is no progress. Is this to do with my drivers?
@Deepfakery2 жыл бұрын
2000 iterations isn't much when using Quick96. Let it go overnight at least. If you're just seeing weird colors and no face shape, it's possible you're using the wrong build for your card.
@kimotee10813 жыл бұрын
Whenever I start training quick96, it pops up a window saying python has stopped working. How do I fix this?
@pockydew94672 жыл бұрын
I tried training quick96 with my GPU(RTX3060) but it stops at "Press any key to continue". I had to use my igpu(AMD Radeon Graphic) to render but do you have any solutions to fix this?
@ChristopherMikrowelle3 жыл бұрын
On Step 5, when I do the training, my training preview window doesn't open. Please help me; I've been looking for a solution and for help everywhere and no one can help me :/
@SlimeCore_3 жыл бұрын
If you still have the problem, just close it and start again; it should open after a few tries
@exZact4 жыл бұрын
If you remove upside-down photos of the face in the dst, will it cause the mask to cut out in the video? What about getting rid of unwanted faces? I know it won't affect src, but does it affect dst?
@Deepfakery4 жыл бұрын
Upside-down photos are usually a result of false face detection. You will need to remove them and they will be skipped in the merger. There can sometimes be more than one extracted face per frame, so those upside-down photos might have good counterparts in the previous or next file. Check the filenames for possible duplicates. If you have to remove those faces then you can either fix it in post by duplicating a nearby frame, or skip that section of the video altogether.
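To quickly list the frames that produced more than one aligned face (so you can check whether a good face remains after deleting a bad one), something like the sketch below can help. It assumes the usual DFL aligned naming of frame number plus face index (e.g. 00001_0.jpg, 00001_1.jpg) and a placeholder path; adjust both to your workspace.

```python
# Minimal sketch: group aligned face images by their source frame to find frames
# with more than one extracted face. Assumes "<frame>_<faceindex>.jpg" naming.
import pathlib
from collections import defaultdict

ALIGNED_DIR = pathlib.Path(r"workspace/data_dst/aligned")  # placeholder path

faces_per_frame = defaultdict(list)
for img in sorted(ALIGNED_DIR.glob("*.jpg")):
    frame_id = img.stem.split("_")[0]   # "00001_1" -> "00001"
    faces_per_frame[frame_id].append(img.name)

for frame_id, names in sorted(faces_per_frame.items()):
    if len(names) > 1:
        print(frame_id, "->", ", ".join(names))
```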
@anonymousviper50754 жыл бұрын
Hey! Just wanted to ask: can I select the source (data_src) as an image and the destination (data_dst) as a video? And is the graphics card important for it?
@Deepfakery4 жыл бұрын
If you want to do a single image (like the Baka Mitai meme) look for First Order Motion Model.
@ea-fc-mobile-goals Жыл бұрын
5:07 how many times do we have to press the shift + / key? just once or till the frames/video ends?
@Deepfakery Жыл бұрын
Just once, it will apply the settings from the current frame to the remaining frames, then shift + > will process the frames
@ea-fc-mobile-goals Жыл бұрын
@@Deepfakery thanks man. With your tutorial I was able to make a pretty good deepfake. Need to start learning the adv settings now.
@ChuckstaGaming4 жыл бұрын
OMG, this is gonna be fun! Gonna stick myself in my favourite shows and film 😂🤣
@armitx93 жыл бұрын
🤣😂
@XXLV-3 жыл бұрын
Hahahah
@brockphillips64113 жыл бұрын
Can you use a picture and insert that face into the video, or do you need two videos?
@amoghvarshhattigoudar58323 жыл бұрын
It takes more than a day just to create a 10-minute video
@markkocsicska25903 жыл бұрын
@@brockphillips6411 You need 2 videos, but any free video editor can put pictures one after another using a user-friendly interface and export the result as an MP4
@mapiles263 жыл бұрын
Hi, just a question: how did you get so many iterations per second? Mine is running at 1.20 it/s. Does it depend on the hardware or on the software I downloaded? I use the DX12 one or something
@Deepfakery3 жыл бұрын
I have 1080ti's so I use the NVIDIA version. It is faster.
@bigbanana5314 жыл бұрын
So I'm having a problem with step 4. When I go to check the faceset extracts from data_dst and data_src, it doesn't show me anything; it just pops up some sort of browser and only shows images extracted from the video, and they aren't aligned. I followed all the steps correctly. Do you have a solution?
@Deepfakery4 жыл бұрын
You should be seeing a file browser with square images of all the faces that have been pulled. Those are the aligned faces, meaning they’ve been rotated upright and cropped to a square. Are you seeing something else?
@bigbanana5314 жыл бұрын
Deepfakery I solved the problem
@bigbanana5314 жыл бұрын
I ran into another problem. When I go to convert the face onto the video after I finished training, it says "no faces found for 00001.png, copying without faces". Did I miss a step after the training?
@Deepfakery4 жыл бұрын
Did you run the interactive merger? You need to run Merge Quick96 then Merge to mp4
@bigbanana5314 жыл бұрын
Deepfakery I’m using an older version of the software for AMD.
@IEnjoyDarkSouls2PVP2 жыл бұрын
When I get to the merger part of the video, the face is nowhere near as clear as yours; mine is really blurry. Did I do something wrong, maybe?
@ronanm44184 жыл бұрын
followed the tutorial and got a pretty good result, it was actually pretty easy!
@kaoe1454 жыл бұрын
what specs do you have and how long did it take
@ronanm44184 жыл бұрын
@@kaoe145 I have a nvidia 1060 6GB max q, I only did about 3000 iterations and it took probably less than an hour. Let me know if you want to know the other specs but I'm pretty sure gpu is the most important
@kaoe1454 жыл бұрын
@@ronanm4418 Thanks for replying. I have a GTX 970 and I've been training a 5-minute video; it's been running for 6 days and the preview is still blurry. How long was your video?
@ronanm44184 жыл бұрын
@@kaoe145 I used the default one that came with it, the elon musk/tony stark one. Mine wasn't perfect but I didn't leave it running that long
@MrEmotional332 жыл бұрын
May i ask which cpu you have?
@notallpolitical4 ай бұрын
I am running a 4080 Super on the 30-series executable, as that's the most recent, but my speed is only 1.46 it/s while the 1080 Ti in the video is getting 3.71 it/s. How do I increase the iteration rate? My GPU isn't being utilized at all. Thanks
@se96011 ай бұрын
Anything more recent thats better than this for deepfakes?
@SongStudios6 ай бұрын
The same software has a more capable model called SAEHD; Quick96 is more meant for tests or fun. Warning: I recommend you learn everything you can about deepfakes to get good results. You ***NEED*** a powerful computer, and more importantly a powerful GPU. If you want to go for highly realistic models, I cannot recommend anything less than a 3090
@BeyondAnabolics3 ай бұрын
@@SongStudios How can you use SAEHD? Do I just run "train SAEHD" instead of "train quickxxx", following the same steps as this tutorial, or are there some other steps to do?
@matu4pc4883 жыл бұрын
I have a GTX 1070 and the training preview didn't open for me (I would have to go with CPU, but I don't want to) 3:15
@intheworld95784 жыл бұрын
I have a problem. When I merge, it just shows me the video with the original face, without the face that I chose
@anc52754 жыл бұрын
Me too 😢
@TREXHUNTERX3 жыл бұрын
have you solve this problem?
@a8468103 жыл бұрын
CATFISHING TO A WHOLE NEW LEVEL
@SECourses4 ай бұрын
would these pop up windows work on ubuntu linux desktop? those GUI windows?
@BnymnSntrkNu4 жыл бұрын
best deepfacelab tutorial without a doubt. thanks man!
@Deepfakery4 жыл бұрын
Thanks for the support! Try the SAEHD model next, it's a similar process and you can even use the same files from this tutorial. The only thing I would mention is to turn off 'flip faces randomly', and use random warp for a good while then turn it off. There are a lot more options in SAEHD but it doesn't have to be complicated either.
@fakerdeeper93293 жыл бұрын
awesome tutorial thank you. just started uploading my own deepfakes today... hopefully I can get better and better
@abrahamnunez97614 жыл бұрын
I followed the instructions, but when I play the merged video it keeps freezing at 3 seconds while the audio runs to the end of the video. Help me
@Deepfakery4 жыл бұрын
Did you do both final steps of the merger, apply settings to all frames (shift + /) then render all frames (shift + >)?
@abrahamnunez97614 жыл бұрын
@@Deepfakery I did that just now and still get only 3 seconds of video. I will try more frames now
@SuperFinGuy4 жыл бұрын
@@abrahamnunez9761 In my case the interactive merger is not working properly, I use it only to play around with the settings, after that I rerun the merger and just say no to the interactive merger, and everything goes smoothly.
@hans9834 жыл бұрын
I also only get 3 seconds. Could you solve it?
@stoney8193 ай бұрын
Can I reuse the source for multiple destinations, or do I have to clear the workspace and retrain every time?
@onkarjadhav384 жыл бұрын
I have gtx710 and 24 gb of ram. Which one would be better to use?
@Deepfakery4 жыл бұрын
Not sure what you’re asking. Your 710 has 2gb of vram correct? You should be able to run Quick96 on that, but 4gb is recommended for the SAEHD model
@onkarjadhav384 жыл бұрын
@@Deepfakery I mean to say, Should I use CPU or GPU?
@Deepfakery4 жыл бұрын
I think you will have to use the CPU. I have just checked: DeepFaceLab requires your GPU to have CUDA compute capability 3.0 and above, and yours appears to be version 2.1
@onkarjadhav384 жыл бұрын
@@Deepfakery Yes! So is the process same for CPU?
@Domi-lb2pj4 жыл бұрын
Dude wtf why would you have so much ram and that gpu
@BhavyaGandhiViolin3 жыл бұрын
Heyy! I'm getting an error at the very first step, where it says "The system cannot find the specified path" and ' "" ' is not recognized as an internal or external command
@kalel_with_a_hyphen48123 жыл бұрын
did you fix it?
@Eclair_Visual3 жыл бұрын
Fatal Python error: Py_Initialize: unable to load the file system codec ModuleNotFoundError: No module named 'encodings' ??????????
@h3xagon00013 жыл бұрын
use 7zip to extract it
@plixplop2 жыл бұрын
SO cool, thanks for the tutorial. Will this method handle applying the fake face onto a target face that is turning to a full-on side profile? What about being partially in shadow? Or does it need to be a pretty well-lit target face? Thanks!
@Deepfakery2 жыл бұрын
Yes, it will work for almost any face with enough effort. However it will require using an XSeg mask to really get the tough angles and dark faces. Also a good faceset of course. Profiles are kind of difficult to do well; you can see some examples in my latest videos.
@FemboyLucky3 жыл бұрын
Hello! I was following your tutorial and man, it's very detailed, but I do have a question/problem. When using the face extraction, some faces are either upside down, blurred, out of place, or not even there. Do I just delete them and re-run the extraction, or can I just delete them and continue from there?
@Deepfakery3 жыл бұрын
Check out this Faceset Extraction Tutorial - kzbin.info/www/bejne/p2WXfYOvnMmArrc If you still have questions feel free to ask!
@kalvinwin592 күн бұрын
Is there any way to streamline the deepfake ? I just want to be able to pass in a video and have it go through and choose a face from a dataset. I would like to be able to pass in a meeting with 3/4 individuals and have the deepfake be generated. Is this possible ?
@JustAGuyProduction3 жыл бұрын
First of all, great video. You're a great teacher. Second, I know deepfake was invented for creating "adult" content and I shouldn't do it, but I can't resist the urge to put some of my favorite celebrities into my favorite "adult" videos.
@skymoore31773 жыл бұрын
Username checks out
@_Looft3 жыл бұрын
bruhh what I would've never even thought of that. Seems like a lot of effort just for a nut lmao
@Akotski-ys9rr3 жыл бұрын
@@_Looft lmao thats kinda weird
@elsaylli21613 жыл бұрын
weird ahh
@seanplew65482 жыл бұрын
lol u didnt need to say this out loud my guy
@felixbo4 жыл бұрын
Thanks for the great explanation. Everything has worked fine so far. The problem is, I am stuck at the merging process. It tells me merging is at 100%, but the lines "Session is saved to model/xy.dat. Done. Press any key to continue" (5:17) are not showing up. It doesn't save the session; there is no file inside the folder. I waited for half an hour after it hit 100% but it seems like it's stuck forever. Tried a second time, same problem. Do you have any idea what could be causing the problem, and could you help me?
@felixbo4 жыл бұрын
Hey i "solved" the problem by ignoring the missing .dat file and converting the merged frames. I didn't know these were created.
@Deepfakery4 жыл бұрын
Yeah once it reaches 100 and the hourglass goes away it should be done. As you've probably seen the files are in data_dst/merged
@boondox2704 жыл бұрын
I love the Arnold Schwarzenegger Sylvester Stallone deepfakes. I wish someone would do John Candy and Chris Farley!
@deeber359 ай бұрын
If I have a frame where the face is at a weird angle and I delete it in Aligned Results, will I still get to manually extract it? Not sure what deleting a frame means for masking in that frame.
@lendur27434 жыл бұрын
Never thought deepfaking would be so easy
@JainamSutaria7774 жыл бұрын
Exactly, it's not that easy
@nukmuk4 жыл бұрын
@@JainamSutaria777 the video is literally 6 minutes long soo .......
@JainamSutaria7774 жыл бұрын
@@nukmuk Yeah! 🌝
@jj9879879874 жыл бұрын
You think it's easy because you don't know why it works. You are just following a video and pressing buttons. Someone already did the hard part for you.
@lendur27434 жыл бұрын
@@jj987987987 yea, thats the point of programs? they do things for you so you can use them
@iamdeveloper-xt3it3 жыл бұрын
Hello, I'd like to make my own DFM model with SAEHD. I tried it based on the instructions but it failed. Can you explain how to do that? I want to make a new DFM model that will work for any relatable face in DeepFaceLive, like the pretrained Tom Cruise DFM model
@GROTTEZ3 жыл бұрын
My fantasy is gonna be real 😎
@fuzsd15903 жыл бұрын
@Paul Mathews bro???
@bigdj0ntz8 Жыл бұрын
Hello, I followed these steps, but when I try to train the model with Quick96 it gets stuck at "Initializing models: 100%". I have a 3060 Ti; I tried with the CPU and got the same issue. Do you know how to proceed?
@b0chen4 жыл бұрын
lol I mean Elon is basically irl Tony Stark
@Odolwa2 Жыл бұрын
Does a 2060 work? If so, are there any tutorials on how to fully utilize it? Otherwise: the build I downloaded from DeepFaceLab's Git doesn't recognize it when I go to Quick96 :( I don't want to use the CPU...
@mrfakeybr25104 жыл бұрын
Is the result in the final video with just 100,000 iterations? OMG!
@Deepfakery4 жыл бұрын
Yes 100k, about 2 hrs time. I have an example of the same video at 1M iterations for comparison
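For anyone estimating their own run: 100k iterations in about 2 hours works out to roughly 14 it/s, so you can get a back-of-the-envelope ETA from the it/s shown in your own trainer. A trivial sketch (the 1.2 it/s figure is just an example of a slower setup mentioned elsewhere in the thread):

```python
# Back-of-the-envelope training time: hours = iterations / (it_per_sec * 3600).
def eta_hours(target_iters: int, it_per_sec: float) -> float:
    return target_iters / (it_per_sec * 3600.0)

print(round(eta_hours(100_000, 14.0), 1))  # ~2.0 hours, matching the reply above
print(round(eta_hours(100_000, 1.2), 1))   # ~23 hours on a slower card
```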
@gustavkirchoff46334 жыл бұрын
Yeah, just did this, 95k iterations and it looks good. I'm on a 1050 Ti btw
@diab29014 жыл бұрын
@@gustavkirchoff4633 I am too, but I'm too impatient to wait, especially if it's just a test to learn how to use the program
@Monero_Monello4 жыл бұрын
@@gustavkirchoff4633 Do I have to hold down P or is there a way to make it automatic? Or can I just leave the program, come back in two hours, press P and I'll get 90k?
@Monero_Monello4 жыл бұрын
@@gustavkirchoff4633 Also, how do you get 90k? I get 50/min (I have a geforce GTX 1050)
@paulgeorge92282 жыл бұрын
I got to 603,037 iterations and some of my result previews are still blurry. Do I have too many src/dst images (9k and 30 respectively)? Do I need more iterations, a better faceset, or to fix some settings?
@hans63004 жыл бұрын
thanks for the free upvotes man
@sickojoseph93774 жыл бұрын
Haha reddit go brrrrr
@artfood65084 жыл бұрын
Some NVIDIA GPUs, even among the newest ones (but for notebooks), will not do any of the processes. You also can't activate hardware scheduling for these types of GPUs on Windows 10. Because of this I had to do everything with CPU only, slow but worked, thanks a lot.
@Deepfakery4 жыл бұрын
It seems like a lot of their notebook GPUs do not support the CUDA Compute version needed for DeepFaceLab, even if they have enough VRAM and other resources
@Pikelol3 жыл бұрын
Kind of technology which definitely won't be used for porn.
@potato222583 жыл бұрын
You're way late to the party, my man.
@hassassinator88583 жыл бұрын
@@potato22258 Some of the stuff out there is so realistic it's downright terrifying
@yeetogami25754 жыл бұрын
I can't extract the facesets. Even if I extract with default settings (src or dst), it extracts just 1 image and then shows a bunch of errors. What do I do?
@kavya16384 жыл бұрын
i'm using this on myself lol, it's just funny to put random stuff on my face. i wonder, can i put my cat's face on my body?
@Deepfakery4 жыл бұрын
You can try the cat's face using the manual extractor but you'd probably have better luck with EbSynth, maybe...
@paulgeorge92282 жыл бұрын
After training the XSeg, do I still need to do SAEHD training (I already have 1.4 million iterations, but without XSeg), or can I go straight to merging the SAEHD?
@Deepfakery2 жыл бұрын
Well, you should really apply the mask and then train the deepfake. Technically you can go straight to merge and use the mask, but the full area of the face won't be properly trained.
@paulgeorge92282 жыл бұрын
@@Deepfakery About how many more training iterations do I need now that I have the XSeg mask trained? Is 30k enough?
@Raniaska03064 жыл бұрын
"More advanced projects"... hehe
@randomguy-jo1vq4 жыл бұрын
I see you are a person of culture as well
@arnavtete77932 жыл бұрын
*insert Lenny face here
@FSXNOOB3 жыл бұрын
Which text-to-speech do you use for this video?
@Deepfakery3 жыл бұрын
Nuance
@speedhead3 жыл бұрын
dawg - you just allowed me to have more goddamn fun THANK YOU
@hackmedia7755 Жыл бұрын
How long does it usually take to train from scratch? Should I just load a pretrained model?
@shrusher22344 жыл бұрын
Didn't understand a damn thing, but very interesting
@jarnekumbruck38432 жыл бұрын
Hey, this is a good video, but I have a question. How long should you train it? And can you, for example, train it for too long so that it no longer works?
@Deepfakery2 жыл бұрын
The longer you can train the better it will be. I have an example of this video after 1 million iterations
@Factbitesx3 жыл бұрын
At train Quick96 I have an issue, can you help me? (MemoryError: Unable to allocate 3.00 MiB for an array with shape (512, 512, 3) and data type float32.) My PC has a GTX 1650 Ti, a Ryzen 4600H, and 8GB of RAM
@Factbitesx3 жыл бұрын
trainer fails every time :(
@knightshadow3932 жыл бұрын
I have a 2070 Super and an AMD 3600, but when I try to do training with Quick96 or the other two it won't run on my GPU. After loading all the images it just says "press any key" and doesn't do anything. CPU works but it's too slow. How do I make the GPU work?
@aag19772 жыл бұрын
Is there a way to run multiple instances? So I can 'prep' a video while another one is training. I've noticed Face Extract won't work if there is another process running. Is there a way around this?
@XristosGiotis Жыл бұрын
So, if I want to make a video with photos (from the beginning), I must put my images into the folder named data_src?
@Deepfakery Жыл бұрын
Yep, just put them in the data_src folder, where the frame images would be, then extract the faces. I covered this in my faceset extraction tutorial: kzbin.info/www/bejne/p2WXfYOvnMmArrc
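If your photos have arbitrary filenames, you can copy them into data_src with zero-padded sequential names, the same style the frame extractor produces. A minimal sketch with placeholder paths (DFL itself just needs the images to be in data_src before you run the face extraction):

```python
# Minimal sketch: copy a folder of source photos into workspace/data_src with
# zero-padded sequential names. Paths are placeholders; adjust to your workspace.
import pathlib
import shutil

SRC_PHOTOS = pathlib.Path(r"my_photos")          # your own pictures
DATA_SRC = pathlib.Path(r"workspace/data_src")   # DFL workspace folder
DATA_SRC.mkdir(parents=True, exist_ok=True)

images = sorted(p for p in SRC_PHOTOS.iterdir()
                if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
for i, img in enumerate(images, start=1):
    shutil.copy2(img, DATA_SRC / f"{i:05d}{img.suffix.lower()}")
```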
@Psybernetiks3 жыл бұрын
Once my merging process is complete, the line about the session being saved and "press any key" doesn't pop up in the prompt. Any leads on what I can do?
@Deepfakery3 жыл бұрын
It will stay at 100% until you escape the preview window. Kinda weird it doesn’t end itself but that’s how it is
@FatToadRecords3 жыл бұрын
On Step 6 (train Quick96), it loads the first Sample but not the second. How can I fix this?
@paulgeorge92282 жыл бұрын
So is SAEHD or Quick96 better for more accurate faces? I see from this video that Quick96 does a very good job already; does SAEHD do an even better job? I have an NVIDIA GTX 1060 Max-Q with 6 GB VRAM and an 8th-gen Intel Core i7-8750H (6 cores/12 threads, up to 4.1 GHz). Would SAEHD or Quick96 run better on my laptop? The extraction and face detection processes seem pretty slow for me compared to yours (then again, I have a 20-minute video to deepfake)
@Deepfakery2 жыл бұрын
Quick96 is basically just a proof of concept for the software that can be used to quickly test things. SAEHD offers better models and the ability to tune various dimensions and additional processing. Quick96 is locked at 96px resolution and most model settings, so it's not meant for production.
@DouglasJones-i5d Жыл бұрын
Hi... I love this video and thank you for making this... I want to ask, is the training preview fast or slow? Because on the video it goes fast but on my pc its slow.
@paulgeorge92282 жыл бұрын
Can the merge work well on videos where the destination video's angle causes the face to rotate a lot (like a clock, not side to side)?
@Deepfakery2 жыл бұрын
Yes, but the face extractor might have a problem detecting if it rotates too far.
@torindanius95304 жыл бұрын
I did exactly what you did, but when I went to quick train it said no faces found. Same result with SAEHD. In quick train I used the video names and trained, but when I get to the merger page there is only the original footage, and on the face there is a square with a blurry white color. Please help
@Deepfakery4 жыл бұрын
First, do you have face images when you run view aligned result? Also run data_dst view aligned_debug. You will see all of your frames with landmarks from the detection. Maybe some are missing or false alignments?
@allscars28822 жыл бұрын
When I arrive at the merge step, I don't see the commands. If I press Tab I see the first frame, then if I press W or S or Shift + > etc., everything freezes. What is the error?
@Deepfakery2 жыл бұрын
Could be a few things. First off you can reset the merger settings by deleting the merger session file or selecting not to use it when you run the merger. If your frames are very high resolution you may have trouble loading them. Also make sure you still have all the destination frames and didn't move or delete any of them.
@amparo7624 жыл бұрын
I have exactly the same GPU (GTX 1080 Ti) as you, but training takes a lot of time. Did I miss something? Should I install some CUDA library in order to accelerate the training process? Thanks a lot and congratulations!
@Deepfakery4 жыл бұрын
I have 2 x 1080 Ti so with 1 it will be significantly slower. CPU may affect the speed as well; I'm using an i5-9600K at around 4.30GHz. You cannot easily change the speed of training with the Quick96 model. You can remove some data_src/aligned images to lower the overall package size. The SAEHD trainer, while a bit more complicated, allows you to tune various settings to your system.
@RockyWild2 жыл бұрын
Thanks! Do you know why I can't train with an NVIDIA GeForce RTX 3070 (8GB)? It stops and the training window doesn't appear. I can only train with the CPU (AMD Ryzen 7 5800H) 😴 And does the training end automatically, or do you have to stop it manually after X hours? Cheers
@Deepfakery2 жыл бұрын
First make sure you're using the RTX 3000 build of DFL. In Quick96 you have to end in manually. With SAEHD and others you can set an iteration number to stop at.
@RockyWild2 жыл бұрын
@@Deepfakery Thanks. I just want to learn the basics. At the moment I only get blurred faces (I also tried SAEHD and the eyes and mouth priority option). I'll use Quick96 until I get something decent. Maybe it's because I have the faceset at 1024; I'll try 512 and train with the GPU. I haven't tried XSeg yet, but I guess it's also important. Regards
@Deepfakery2 жыл бұрын
The trainer has to resize the images to fit the model resolution. You can help it by using the image resizer to match them to the current model. Also packing the images will help with the initial loading.
@denynugraha90992 жыл бұрын
Bro, if the skin color is not the same as the one in the video, what settings do you use? I'm using the SAE model with 300,000 iterations, please enlighten me bro 🙏🙏
@Deepfakery2 жыл бұрын
SAE model has more steps to do. I’m getting to that part in my tutorials but here’s some general advice: (1) pretrain model. (2) disable pretrain and enable random warp. (2.1 optional) enable learning rate dropout. (3) disable RW and LRD. (3.1 optional) enable LRD (4 optional) enable gan.
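Purely as a summary of the phase sequence above (this is not DFL's config format; the trainer asks for these options interactively, and the names below only roughly match its prompts):

```python
# Summary of the SAEHD phase sequence described above, as plain data.
# Not DFL's actual config format; the trainer toggles these options interactively.
SAEHD_PHASES = [
    ("1",            "enable pretrain mode"),
    ("2",            "disable pretrain, enable random_warp"),
    ("2.1 optional", "also enable lr_dropout (learning rate dropout)"),
    ("3",            "disable random_warp and lr_dropout"),
    ("3.1 optional", "re-enable lr_dropout"),
    ("4 optional",   "enable GAN (gan_power > 0)"),
]

for step, action in SAEHD_PHASES:
    print(f"Step {step}: {action}")
```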
@denynugraha90992 жыл бұрын
@@Deepfakery i use dfl google colab bro
@moldorm992 жыл бұрын
After I launch the training file and it initializes models and loads samples, I hit any key to continue, and nothing happens. You didn't say what this could be caused by or how to fix.
@ViolinVoid3 жыл бұрын
hi sir! great tutorial! question: when im doing step 5: training, if i press enter and save my progress... how do i restart progress on the same file later?
@Deepfakery3 жыл бұрын
Just run the trainer again. You will be prompted to choose a saved model or start a new one
@mayurgavali98474 жыл бұрын
Hey great tutorial!, I had one question- Suppose I have person A as the source and B as the destination in model 1, and person C as the source and D as the destination in model 2. Can I make a deepfake with A as the source and D as the destination by using my pre-trained model 1?
@Deepfakery4 жыл бұрын
You could do that to jumpstart the training, but if B is drastically different from D then you should just start from scratch.
@mayurgavali98474 жыл бұрын
@@Deepfakery Thanks!!!
@deeber35 Жыл бұрын
Somehow I'm at the point where images in the debug folder show the proper masks {in green} from manual extraction, but when I merge, those masks are not applied, and so the output is an uncorrected image. I'd rather not re-do all that manual extraction {there are 100s}. Any idea on what to do? Thanks.
@Deepfakery Жыл бұрын
Any face you extract will have a default mask as shown in the debug image. When you train the deepfake it also trains the mask. In the merger you should have mask options such as DST (original) and Learned PRD / DST (learned during training). If you don't have XSeg masks then don't choose any of the modes mentioning XSeg. Were you able to train the model and select one of these mask modes in the merger?
@deeber35 Жыл бұрын
Do I have to do step 6, train, if I already have 200,000 iterations with the source images?
@Vagaberry3 жыл бұрын
Disregard my previous question, I understand now. But if I wanted to face-swap pictures into a video instead of using a video, can I upload JPEGs instead of an MP4 file for data_src? Will that mess the script up? What should I do? And can I upload multiple face pics for it to generate a good one for the swap?
@Deepfakery3 жыл бұрын
You can bypass the video extraction and place the images directly into the data_src and/or data_dst folder. After that run the face extraction as normal.
@Vagaberry3 жыл бұрын
@@Deepfakery okay I’ve done that but it’s not that great, I used about 10 different images of myself and 450k iter’s and it’s not that great. Should I have used only one image?
@Deepfakery3 жыл бұрын
Sounds like you're trying to put a deepfake on just a few images of yourself? First off, Quick96 is slow, so you'll want to try SAEHD. Having few DST images is OK, but sometimes more images will help the model generalize the face better. Also make sure to remove any angles from SRC that aren't found in the DST images. Again, don't expect a lot from Quick96 since you can't change the settings.
@DanielGroff10 ай бұрын
Great tutorial, thank you. I have a problem because the video "data_dst" contains 2 faces, how can I process only one? THANKS
@Deepfakery6 ай бұрын
You need to remove the extra faces. Process is the same for SRC and DST: www.deepfakevfx.com/guides/deepfacelab-2-0-guide/#step-4-2-source-faceset-sorting-cleanup
@nelsonjoseph36732 жыл бұрын
ok, that was cool. So we only need to rename the images to this format if we are using custom data for the process. Is there any specific size ratio in which the data should be made?
@paulgeorge92282 жыл бұрын
I don't think so. I took a look at the code and I think it automatically converts the resolution and aspect ratio for you. The video sources need to be MP4 or a similar video format, though