DeepFaceLab 2.0 Xseg Tutorial

  65,526 views

Deepfakery

2 years ago

In this DeepFaceLab XSeg tutorial I show you how to make better deepfakes and take your composition to the next level! I’ll go over what XSeg is and some important terminology, then we’ll use the generic mask to shortcut the entire process. After that we’ll do a deep dive into XSeg editing, training the model, and applying the masks to your facesets, making backups along the way. I’ll also cover some ways of dealing with obstructions in front of the face.
00:00 DeepFaceLab 2.0 XSeg Tutorial
00:26 What is XSeg?
01:00 Why Use XSeg?
01:27 Generic XSeg Pretrained Mask
02:10 XSeg Terminology
03:00 Launch the XSeg Editor
03:14 XSeg Editor User Interface
04:45 Labeling Mask Polygons
06:44 Masking Obstructions
07:59 Fetch a Backup & Remove Labels
08:35 XSeg Training
09:39 Applying the XSeg Mask
🔴 Beginner Deepfake Tutorial: kzbin.info/www/bejne/ooSwXmyId9BmfrM
🔴 How to Install DeepFaceLab 2.0: kzbin.info/www/bejne/boicpqhjpbuCf6c
🔴 DeepFaceLab Faceset Extract: kzbin.info/www/bejne/p2WXfYOvnMmArrc
🔴 DeepFaceLab 2.0 Tutorials Playlist: kzbin.info/aero/PLLqzaOTf8gCW3SIojJOZE_89kIPEXDNBg
✅ Full DeepFaceLab 2.0 Guide: www.deepfakevfx.com/guides/deepfacelab-2-0-guide/
✅ Download Pretrained Models: www.deepfakevfx.com/pretrained-models-saehd/
✅ Download Celebrity Facesets: www.deepfakevfx.com/celebrity-facesets/
📌 Subscribe for more deepfake tutorials: kzbin.info/door/wLnn3Myal9USHSm2LbLM7g

Comments: 129
@Gethris · 10 days ago
Do I need to label the forehead if I'm using the f face type, or will over the eyebrows do? Or is it still better to label the whole face?
@gauravlokha8787 · 9 months ago
My process: 2) extract images from video data_src, 3) extract images from video data_dst FULL FPS, 4) data_src faceset extract, 5) data_dst faceset extract, 5.XSeg) data_dst mask - edit, 5.XSeg) data_src mask - edit, 5.XSeg) train, 5.XSeg) data_dst trained mask - apply, 5.XSeg) data_src trained mask - apply, 7) merge SAEHD. This is exactly what I did. Now that I've started to merge, the model seems to apply the destination face itself to data_dst. What am I doing wrong?? It's not applying the source face for some reason.
@Cradge666 · 4 months ago
In the final chapter, "Applying the XSeg Mask," you say to draw new masks and retrain. I need to do this in the project I began, because even after 85,000 iterations some of the masks were very wonky. But do I need to delete the old masks first, or can I reshape the old masks? I'm hoping you may be able to explain that part a little more. Thank you
@castejon777 · 10 months ago
Hey! Nice video. I did the XSeg training and generated the dst masks. Now I want to train a pretrained SAEHD. Do I have to do anything so that the model works with the XSeg-generated masks, or will the training use these XSeg-generated masks directly?
@squeeps78 · 3 months ago
Thanks. Do I need to train the model before or after I do custom masks? It keeps showing my pretrained model as the option for custom mask training.
@Deepfakery · 3 months ago
You need to train the XSeg mask, then apply it to the faceset, then train the model.
@MWcrazyhorse · 3 months ago
Somehow for me it is only applying about a third of the masks in the merger. Tried everything: edit, train, apply, remove, redo. It only applies about a third of the src masks to the destination video frames.
@khoipham2132 · 1 year ago
Thanks for this tutorial. So this is a mask trainer, but is it correct that I must use the SAEHD trainer after this for the actual face training? I wasn't clear on how they interplay. And do I move the /aligned_xseg files to /aligned prior to SAEHD, or have the masks already been applied to the /aligned output?
@Deepfakery · 1 year ago
Yes, it's just training the mask, not the deepfake. Quick96/SAEHD/AMP are used to do the actual deepfake. When you fetch the masks to /aligned_xseg, that's mainly for a backup copy, so I usually choose not to delete the original files when asked. If you did delete them, then you can copy these back into the aligned folder. Otherwise the labeled files should still be in the aligned folder. The labels themselves aren't considered during SAEHD training; only the applied mask is. So make sure to apply the trained mask after XSeg training but before SAEHD training. For example, if you apply the generic XSeg, there are no XSeg labels, but it will train properly. Again, only the applied mask is used in training, and the applied mask in fact gets trained as the learned mask (as seen in the SAEHD trainer preview), which comes into play during final merging.
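For anyone who wants to script that backup step themselves, here's a minimal Python sketch of what the fetch step does conceptually: copy (not move) labeled faces into aligned_xseg. The has_xseg_label function is a hypothetical stand-in; DFL actually reads the labels from metadata embedded in each aligned image, which this sketch does not parse.

```python
import shutil
from pathlib import Path

def has_xseg_label(img: Path) -> bool:
    # Hypothetical stand-in: DFL stores XSeg polygons as metadata inside
    # each aligned JPG; this sketch does not parse it and treats every
    # file as labeled.
    return True

def fetch_labeled(aligned: Path, aligned_xseg: Path, delete_originals: bool = False) -> int:
    # Copy labeled faces to a backup folder, like the fetch scripts do.
    # Keeping the originals (delete_originals=False) leaves the labels in
    # the aligned folder, where XSeg training expects them.
    aligned_xseg.mkdir(parents=True, exist_ok=True)
    copied = 0
    for img in sorted(aligned.glob("*.jpg")):
        if has_xseg_label(img):
            shutil.copy2(img, aligned_xseg / img.name)
            if delete_originals:
                img.unlink()
            copied += 1
    return copied
```

Run against a workspace-style folder pair; with delete_originals left at False the aligned folder is untouched, matching the "don't delete" choice recommended above.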
@ORonyOficial · 1 year ago
Hey brother, how are u? I don't know if you have a tutorial on MVE, but please bring some tutorials about MVE (Machine Video Editor)
@Deepfakery · 1 year ago
Not right now; the developer is working on a new version with massive changes. You can find some guides in the GitHub repo
@Hisham_HMA · 1 year ago
When do I do this step? Instead of quick training? Or before face extraction?
@Deepfakery · 1 year ago
After face extraction, before model training.
@krinodagamer6313 · 1 year ago
So once training is done, what do I need to do next, using a 4090?
@mosesadamu4716 · 10 months ago
XSeg train is giving me an error: “WinError 1455: The paging file is too small for this operation to complete”
@Deepfakery · 10 months ago
www.deepfakevfx.com/guides/deepfacelab-2-0-guide/#system-optimization
@traida111 · 1 year ago
subscribed, high level tutorial dude, thanks for taking time out to share your knowledge.
@Zaungast-of7wy · 6 months ago
First of all, thank you for that extremely useful tutorial. I have one question left: I collected all my faces in my src folder and applied all my masks with the "head" type for my first project, and after that I start a second project where I need forehead masks. Is it possible to save my head src masks so that if I need them in other projects I can use them later? In all projects the same src pictures are used, only other mask types. Thank you for your time!
@troxity5589 · 6 months ago
idk
@Deepfakery · 3 months ago
You can save all of the files that have masks labeled using 5.XSeg) data_dst mask - fetch.bat and 5.XSeg) data_src mask - fetch.bat. You can also save the trained XSeg model and use it to reapply later.
@GamaGamer451 · 1 year ago
Sir, I'm facing this problem 👉 'no training data provided'. How do I solve this?
@Deepfakery · 1 year ago
Usually this means you missed an extraction step. You need to extract frames then faces for both the source and destination.
@IzludeTingel · 1 year ago
Which home-user (consumer) hardware is required for DeepFaceLab? I used my 9900K about 2 years back for deep learning and it would usually take about a day for satisfactory results. I don't own a GPU. Would you say a 4080 (non-Ti) would work?
@Deepfakery · 1 year ago
I would go with anything RTX 3000 and up at this point
@馬英狗 · 9 months ago
After running "add" with Xseg, each photo incorporates a trained mask. I'm trying to determine where the information about these added mask regions is stored. For each batch of 2,000 to 3,000 images, there are instances where only a few mask regions may not be satisfactory, requiring manual drawing of the entire face (which is time-consuming). If I could retrieve the coordinates or regions after the "add" operation, I could write a program to automatically generate breakpoints along specified lines. This would significantly save time, as manual adjustments could be made as needed. Unfortunately, I'm unable to locate where the records of the added mask coordinates or regions after the "add" operation are stored. Does anyone have any insights?
@crazy2720 · 1 year ago
Whenever I try to train XSeg, it stops after getting samples and says "ImportError: numpy.core.multiarray failed to import". I tried with lower and higher batch sizes. I have an RTX 3080 graphics card, so hopefully that is enough to use the software.
@Deepfakery · 1 year ago
Are you on Windows and using the RTX 3000 build of DFL?
@1hitkill973 · 1 year ago
It's sad that I couldn't follow along. The preview window ( 09:00 ) just doesn't open for me. It doesn't even progress; it's just stuck at the imgur link. I guess I'll just use the generic XSeg masks.
@Deepfakery · 1 year ago
Only thing I can think of is having the wrong version of the software
@waluyomasktied1384 · 1 year ago
So this XSeg is to avoid disturbing masking on the face? Can I send you SRC and DST videos privately as an example for my learning?
@Deepfakery · 1 year ago
Check out DeepfakeVFX.com, you can download facesets there which already have Custom Xseg labels.
@waluyomasktied1384 · 1 year ago
@@Deepfakery I have my own faceset, and a DST file where part of the face wears a mask. Maybe it's not a powerful train because I'm not using a GPU like yours; I got 4000 iterations in 7 hours :( and the dst file is just 1 minute
@jriker1 · 11 months ago
I see in the video you are masking into the hairline. Isn't this going to cause dark areas on the outside of the mask that will have to be blurred or eroded out more? I heard you are supposed to avoid going into the hairline or masking in shadows on the face, but I'm not sure.
@Deepfakery · 10 months ago
Yeah you can pull it back a bit from the hairline, as long as you're consistent with it. The part I've had the most trouble with is different sideburns.
@ihassan1001 · 1 year ago
Hi, I have a quick question. I have over 1 million iterations on my subject. I just bought a new computer and I would love to use the same training and face on my new computer without starting all over again. Is it possible to do that? How can I use the same trained model on my new computer without having to start the process from the beginning?
@Deepfakery · 1 year ago
Just copy the workspace folder over to the new machine, it has all the files. If you got a different GPU then make sure you download the correct DFL build.
@ihassan1001 · 1 year ago
@@Deepfakery thank you so much... I have an RTX 3080, is that good enough? I've been using Paperspace for deepfakes and I know their GPUs are much more powerful
@twin9980 · 1 year ago
Hey, at 8:15, will deleting the original files delete ALL the aligned pictures, or just the ones that are manually masked?
@Deepfakery · 1 year ago
It will just delete the masked ones. I'm not really sure what the point is, though, and I just keep them in. It might be useful to delete them for some specific workflows, but you would still have to put them back in for mask training...
@shawnluan4892 · 1 year ago
Your video is awesome! Is it possible to use generated face images as a modeling dataset, or can only videos be used for the learning process in DeepFaceLab?
@Deepfakery · 1 year ago
You can use any images. My Faceset Extraction tutorial shows how: kzbin.info/www/bejne/p2WXfYOvnMmArrc
@1Prince_Emmy · 1 year ago
Please, I need assistance: whenever I start training XSeg I get an error as soon as pretraining starts. What could have caused it?
@Deepfakery · 1 year ago
Usually the error will include a cause in the first few lines, something you could search for. Most likely you're using too many CPU cores, using the wrong version of DFL, or you need to increase your page file size.
@NamNguyen15294 · 1 year ago
I'm working on a dst video with multiple faces. How can I pick just one face to apply the face mask from the source video to, please?
@Deepfakery · 1 year ago
You have to remove all of the other faces
@bustymcnutters801 · 1 year ago
After running the generic mask files and going to train, it acts like there's no mask there at all and all my images are completely green. I can see the mask in the XSeg editor but it's like the trainer is completely ignoring it.
@t.jofficialmusic6993 · 2 months ago
Try restarting the training, at the point where it says to press Enter within 2 seconds to override
@p.domino3077 · 1 year ago
Hi, is there a location for the XSeg labels (polygons) that I can back up and work on a different project, then go back to the previous one and restore them without redoing the labeling?
@Deepfakery · 1 year ago
The labels are held as metadata within the image files. You can use the fetch scripts to grab all the labeled images.
@p.domino3077 · 1 year ago
@@Deepfakery Thanks.
@PhotoHall · 1 year ago
Brand new to this: should I be editing with XSeg before or after training? I'm under the impression I do this and then train, and this makes the training more accurate. Is that right?
@Deepfakery · 1 year ago
Yes, do the xseg beforehand.
@cesarsalve1512 · 1 year ago
When I start 5.XSeg) train, it always takes the files from _internal\pretrain_faces instead of the faces I want to train. Did I miss a step after I fetched the labels? Every time I start XSeg train it says "Press enter in 2 seconds to override model settings" but I can't do anything.
@Deepfakery · 1 year ago
OK, it seems like you are having 2 different problems... 1: If it's using the pretrain faces then you must have enabled XSeg pretrain mode. Check the setting and set it to 'n'. 2: If you can't change the settings, I'm not sure what the problem is. You could delete the XSeg model files and start over.
@cesarsalve1512 · 1 year ago
@@Deepfakery Thanks for responding, I will try this. ----- Now it's working :) I deleted all the XSeg files in workspace --> model
@scorpi1756 · 1 year ago
XSeg labelling: when I see you setting each polygon point so fast, I wonder if you sped up the video in that section? In reality, I find I can't just freely outline the face that fast, as you have to literally click the left mouse button for every point on the face you select. Or is there a trick to automatically trace the face and set each point evenly by holding down another key as you trace?
@Deepfakery · 1 year ago
Yes, it's sped up, and no, there's no shortcut. Getting faster at XSeg/roto is a skill you have to develop over time.
@scorpi1756 · 1 year ago
@@Deepfakery Haha, yes, I thought so! One does get the hang of it after a few hundred labels!
@tcalleja74 · 11 months ago
just answered my questions - awesome video
@dancespoilers6203 · 9 months ago
Where can I download the pretrained XSeg?
@oOoZilverwarrioroOo · 1 year ago
Could you please tell me if I have to remove obstructions from the destination data, or only from the source data?
@Deepfakery · 1 year ago
You need to do both. At the end of training you will have a mask that cuts out that part of the deepfake face so that the obstruction can show through.
@oOoZilverwarrioroOo · 1 year ago
@@Deepfakery thank you for the reply. Do I have to add the shape of the obstruction to the source data too? For example, let's say there are glasses in the destination data; should I draw imaginary glasses on the source data faces too, so that the AI gets rid of them when I merge the masks?
@t.jofficialmusic6993 · 2 months ago
​@oOoZilverwarrioroOo I don't think so, no
@ywueeee · 1 year ago
Hey man, how does one get this working on AWS? There are NVIDIA V100, K80, and Tesla instances available. Not sure which builds to choose. Can you make a video please?
@Deepfakery · 1 year ago
I've never done this on AWS. My assumption is that you could use a Jupyter-ish notebook just like in Colab, but you or someone would need to make that notebook first
@Anthony_Soleri · 1 year ago
Do I have to remove the mask before I continue training in order to put on the "improved"(further trained) mask later?
@Deepfakery · 1 year ago
For XSeg training, no. The trainer is only using the labeled faces. The applied mask and default mask have no part in it.
@Anthony_Soleri · 1 year ago
@@Deepfakery thanks
@cabritinhaproductions6813 · 1 year ago
Hi, great tutorial, thanks for doing it. I have a question that I can't find answered in the DeepFaceLab guide: after training and applying the XSeg masks, how do I merge them with the data_dst video? I already have a SAEHD-trained model; should I use merge SAEHD? Because when I do that, the masks I trained and applied in XSeg are not there.
@Deepfakery · 1 year ago
The applied mask is only used during training, which will create the learned mask. The applied mask itself is not available in merging. So when you get to merging you'll want to look at 2 options: learned mask and XSeg mask. The learned modes will be whatever the deepfake model learned, either from your applied mask or the default mask. The XSeg mask modes will be taken directly from the XSeg model, therefore the software must find the trained XSeg model files in the model folder while merging. You can find more instructions here: www.deepfakevfx.com/guides/deepfacelab-2-0-guide/#step-7-merge-deepfake-model-to-frame-images
@scorpi1756 · 1 year ago
Hi again. In my final merge and result.mp4 I picked up flashing faces showing the original dst face, so I went back and did a MANUAL RE-EXTRACT DELETED ALIGNED_DEBUG and also did more labelling. I then ran 5.XSeg train for a while to train these new fixed masks and then ran XSeg generic apply (dst only). Q1. Will they be correctly applied back into my already completed SAEHD training, or will the fixed masks be merged without me having to SAEHD train again, or do I continue SAEHD from where I left off? I don't want to lose 100,000+ iterations of final SAEHD training, so hopefully I don't need to start training from scratch. Anyway, I actually did try this and continued SAEHD training (+50k iter), but it looked like my new masks were still broken, although they look fine in the data_dst/aligned_debug folder. I'm thinking I may have missed a step or did not apply the masks as I thought I did before going back to SAEHD training. Q2. What is the best process to go back, fix, and re-apply XSeg masks when you have finished SAEHD training? Q3. Originally I was thinking that since the aligned dst debug images are only used during merge, I should just be able to fix the masks, train, re-apply, and skip straight back to the merge without needing to continue SAEHD training. Is this correct, or do I need to do what I originally described in Q1? Apologies for the loooong questions, so I sent a little donation :) And may I say YOU ROCK! :)
@Deepfakery · 1 year ago
TL;DR: Fix masks, train XSeg, apply XSeg, delete model inter files, train with random warp on. I'm a little confused because it seems like you labeled and trained the dst mask but applied the generic mask? You need to run 'trained mask - apply' in order to use the mask during SAEHD training. Yes, you will need to train some more after fixing the masks, but it kind of depends on how much the mask changed. The problem is you have already trained the face with the wrong shape, and it takes a long time to 'forget' the bad data. The deepfake model has to learn the mask, which is kind of like XSeg training within the deepfake itself, but it's based on the applied mask instead of the labels you drew. The model learns best early on with random warp, so it's best to make any changes early, while in the random warp phase, or at least enable random warp again after the changes. If you've already progressed to normal training or GAN training, the model will have difficulty adapting. Basically you don't want to make changes after you've finished training. In the future I'd recommend doing a test merge early on, or going through and making sure each frame has a face before training. However, I can suggest that you try deleting any of the 'inter' files from the deepfake model, then train again with random warp on. This will kind of reboot the face while keeping some latent data in the model. It will take a while to retrain, but not as long as starting over. If you're using an LIAE model there will be 2 inter files: inter_B, related to dst, and inter_AB, related to the src-dst swapped face. So if you're just changing the dst, try removing inter_B and see if that helps. If not, then try removing both files and continue training. This might seem hacky, but it can help, and is even recommended by the developer in certain situations.
There's not much you can do with the mask after you've trained the deepfake. If you apply a new mask afterward, the only way to actually use it would be by applying the XSeg-dst mask during merging (which is different from the learned-dst mask). However, you've probably noticed that the learned XSeg mask is much better quality than the raw XSeg mask, so there's really not much of a point in applying it at the end. Debug images are actually never used in training or merging; they're just for manual review of alignments, or the manual re-extract process. Other than that they can be deleted.
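The inter-file reset described above can be sketched as a small script. This is a hedged sketch, not DFL code: it assumes the standard LIAE naming, where the inter files in the model folder end in inter_B.npy and inter_AB.npy (the actual files carry your model name as a prefix, hence the glob). Back up the model folder before deleting anything.

```python
from pathlib import Path

def reset_inters(model_dir: Path, dst_only: bool = True) -> list[str]:
    # Delete LIAE inter files so the model partially "forgets" and can
    # re-adapt during random-warp training. dst_only removes only inter_B
    # (destination data); otherwise inter_AB (src-dst) is removed as well.
    patterns = ["*inter_B.npy"] if dst_only else ["*inter_B.npy", "*inter_AB.npy"]
    removed = []
    for pattern in patterns:
        for f in model_dir.glob(pattern):
            f.unlink()
            removed.append(f.name)
    return removed
```

After running it, continue training with random warp enabled so the rebooted inters can relearn the new mask shape.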
@scorpi1756 · 1 year ago
@@Deepfakery Thanks for the detailed reply. Apologies for asking all these questions here; I should be in a forum, but most of the time no one responds. I did "trained mask - apply" after XSeg training but forgot to mention it. It didn't actually take that long to get back the lost training because I have trained that src model a lot in other projects. But yes, removing the inter_B file may be the best way. I was under the impression only the src aligned_debug images (if you create them) should be deleted? So you say the data_dst/aligned_debug images are not needed, or are they just needed for 5.XSeg train? They must be needed for something, or why would there be the RE-EXTRACT option to fix them? I thought that when you have flashing faces in the merge (frames missing the learned src mask), this was the way to fix them. I'm currently attempting the iperov method of pretraining using the RTT model files and RTM facesets, so I can use my same src model for multiple dst videos in future for super fast training. Apparently you have to delete the inter_AB file every 100k iterations for some reason? When I do this, all the learned src-to-dst faces go back to the beginning. I don't understand why this needs to be done, but apparently that's what he recommends (GitHub: continue train +500,000 iter, deleting inter_AB.npy every 100,000 (save, delete, continue run)). Do you know why? I thought the idea was to give the src model unlimited training on the 90k+ random faces to prepare it for any new dst face you finally use to create your video? After 500k iter, GAN is applied and the file is no longer deleted for the remaining iterations. When I watched Druuzil's advanced YT guide where he details the RTT/RTM sets for DFL, he did not delete the inter_AB, only at the beginning after training his src for a few iter; then he replaced it with the RTT inter_AB and left it. The other question is: since the whole point of the iperov SAEHD pretraining method is that you can just merge any new dst video immediately, what if your destination needs obstructions masked out? That would require XSeg-ing the new destination video after extracting it, and I'm not sure if that would mess up all that SAEHD training. If I were to extract and label new XSeg masks, I would need to XSeg train and apply that too for the dst (only), I would think? But I'm not sure if this would affect all that previous training, unless that data is retained in one of the inter files. Perhaps this is why using your method and the actual pretraining option on the RTM set may be more practical? So many different pretraining methods can confuse noobs like me. I promise this is my last LOOOOONG question :) Thanks again
@Deepfakery · 1 year ago
Yeah, the data_dst/aligned_debug images are used for the re-extraction process. So, if you're missing some dst aligned images, you delete the corresponding debug images (if present) then run the manual re-extract. It will load all the frames that don't have a corresponding debug image and allow you to manually place the landmarks. Other than during that specific process the debug images are not used. If you're having trouble with missing faces I would recommend looking at MachineVideoEditor: github.com/MachineEditor/MachineVideoEditor It has real tools for dealing with missing faces, like copying alignments from one face to another, and approximating alignments from nearby faces. It's amazing!
@Deepfakery · 1 year ago
I believe the RTT/RTM methods are done with normal training, not pretraining, so check on that. I may be wrong. The point of deleting the inter files is so the model doesn't get over-trained. It kind of reboots the model so that it only has latent color and shape info, not specific face details. This way the model can easily adapt to any face. I haven't done this process much but I believe it can dramatically speed up production in the right circumstances.
@Deepfakery · 1 year ago
You're correct about the dst mask. You should always label/train/apply DST (or use the generic mask). The idea with RTT/RTM is that it has already been trained on faces with obstructions, therefore it should easily adapt to your dst mask. Even so, you're probably going to want to train on your DST faceset if it has any obstructions, so in that case RTT would be better than RTM. I have my doubts about the ability of RTM to be used with obstructions, particularly when they are unique, but your mileage may vary. Personally I stick to normal training because I need to have a baseline for these tutorials.
@blynkpham4369 · 7 months ago
thanks, your video is so good, I subscribed and liked.
@dialecticalmonist3405 · 1 year ago
Could someone please tell me which VERSION of the software to download for an RTX4090 card? They don't even mention the 4000 series on their tutorial page.
@Deepfakery · 1 year ago
RTX 3000 build covers pretty much everything higher in the current RTX family (3000-6000)
@dialecticalmonist3405 · 1 year ago
@@Deepfakery I trust you, because I also asked them directly. But I'm just curious how you learned this.
@gaminside2736 · 2 years ago
How many iterations do you usually do?
@Deepfakery · 2 years ago
For the masks, maybe a couple hundred thousand. I usually start with the generic and continue training on that, so it's much faster.
@scorpi1756 · 1 year ago
Thanks again :)
@knife4lyfe · 1 year ago
Which folder does it apply the mask to?
@Deepfakery · 1 year ago
The mask will apply to the files in data_src/aligned or data_dst/aligned depending on which script you run.
@JhonfaRaccoon · 2 years ago
Why is it that whenever I export my file, it is only 480px... how can I export it in HD?
@Deepfakery · 2 years ago
Was the original file 480px?
@JhonfaRaccoon · 2 years ago
@@Deepfakery The original video has a similar resolution, 480px, but when exporting, the overlaid face looks very pixelated compared to the original video. * Destination video is 480px (low quality) but very sharp (in focus) * Source is in HD, sharp too. What do you recommend, since the source video is in HD?
@Deepfakery · 2 years ago
The file is going to be 480 just like the original. As far as the quality goes, which trainer are you using? Quick96 is only 96 res; SAEHD can go much higher.
@JhonfaRaccoon · 2 years ago
@@Deepfakery ok thanks i will try with SAEHD
@shawnluan4892 · 1 year ago
Nice tutorial! Quick question: how many iterations are sufficient to move on to the next step? 100K?
@Deepfakery · 1 year ago
It's a judgement call based on how the preview and applied masks look. More iterations/masks if you have obstructions or a lot of angles, fewer if you don't.
@adrijusgrinius1945 · 1 year ago
Is it possible to apply same polygon to multiple photos?
@Deepfakery · 1 year ago
Do you mean transfer the polygon from one file to another? Sadly, no. I wish there were a script for that.
@adrijusgrinius1945 · 1 year ago
@@Deepfakery Understood. And if I want to exclude something (for example the tongue) from the face, do I need to exclude it in every single frame where the tongue is?
@Deepfakery · 1 year ago
No, but you should still have a variety, and more is usually better. The trainer will attempt to learn to exclude the tongue in unmarked images the same as it does with the rest of the mask. You should exclude it on all the XSeg masks you draw, though. Also remember that you cannot just exclude the tongue; you have to also draw the face inclusion polygon.
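Conceptually, each labeled face becomes a binary mask: the face inclusion polygon is filled, then the exclusion polygons (like the tongue) are punched out of it. Here's a toy pure-Python sketch of that include/exclude composition, using a simple even-odd point-in-polygon test; it illustrates the idea, not DFL's actual rasterizer, and the resolution and polygons are made up.

```python
def point_in_poly(x, y, poly):
    # Even-odd ray-casting test; poly is a list of (x, y) vertices.
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def rasterize(size, include, exclude=()):
    # Fill inclusion polygons first, then subtract exclusion polygons
    # (obstructions such as a tongue, hand, or glasses).
    mask = [[0] * size for _ in range(size)]
    for yy in range(size):
        for xx in range(size):
            px, py = xx + 0.5, yy + 0.5  # sample at pixel centers
            if any(point_in_poly(px, py, p) for p in include):
                mask[yy][xx] = 1
            if any(point_in_poly(px, py, p) for p in exclude):
                mask[yy][xx] = 0
    return mask
```

Note that an exclusion with no enclosing inclusion produces an all-zero mask, which mirrors the rule above: you cannot just exclude the tongue without also drawing the face polygon.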
@jompavic8200 · 1 year ago
Please how can I get the full tutorial as a newbie?
@Deepfakery · 1 year ago
I have a beginner deepfake tutorial that will walk you through the basic required steps using the Quick96 trainer: kzbin.info/www/bejne/ooSwXmyId9BmfrM I also have a guide here that goes deeper into topics not covered in my videos: www.deepfakevfx.com/guides/deepfacelab-2-0-guide/
@jompavic8200 · 1 year ago
Can you install this on any computer?
@Deepfakery · 1 year ago
How to install DeepFaceLab: kzbin.info/www/bejne/boicpqhjpbuCf6c If you have an NVIDIA GTX GPU or better you're probably good to go! Install Guide: www.deepfakevfx.com/guides/deepfacelab-2-0-guide/#download-install-deepfacelab
@supermariogalaxy376 · 2 years ago
What if the image is upside down in the editor?
@Deepfakery · 2 years ago
If the face is upside down in the XSeg editor that means it was a bad detection. If it’s source then you can just delete it. If it’s destination then you’ll have to fix with re-extraction or Machine Video Editor.
@Deepfakery · 2 years ago
📌 Beginner Deepfake Tutorial - kzbin.info/www/bejne/ooSwXmyId9BmfrM
@DruuzilTechGames · 2 years ago
Nice video m8!
@grandesmentes1 · 1 year ago
Do you start with the generic XSeg? Then after how many thousand iterations do you start labeling the polygons?
@Deepfakery · 1 year ago
Do the labels first, then drop the XSeg files into the model folder and just continue training from there. The training won't work without the labels.
@SyntheticVoices · 1 year ago
Amazing tutorial
@DDD.123s · 1 year ago
How do I re-extract the src faceset?
@moss9452 · 2 years ago
Great video!
@scorpi1756 · 1 year ago
Thanks
@nuca5104 · 2 years ago
Does this work on CPU?
@Deepfakery · 2 years ago
Yes, but it depends on the CPU. Please see my DFL Installation Tutorial for more information: kzbin.info/www/bejne/boicpqhjpbuCf6c
@robinadahl7841 · 2 years ago
So you go frame by frame, including and excluding, every step of the way 😵
@Deepfakery · 2 years ago
Not every frame. If it's a clean faceset you can use the generic, or only label a few frames and train from the generic. If there's lots of movement, lighting changes, or obstructions you'll have to do more, but certainly not frame by frame.
@robinadahl7841 · 2 years ago
@@Deepfakery ... I literally went through around 160 frames just now, labeling each one as carefully as I could 😂 There's so much I don't know
@robinadahl7841 · 2 years ago
@@Deepfakery can I keep the trained src face for other projects? Can I just delete the dst files so I can cut off some work?
@Deepfakery · 2 years ago
Yes you can. It's helpful to reset the (LIAE) model for the new video by deleting the inter_B file, which holds destination data.
@allecazzam8224 · 1 year ago
Hello, thanks for putting so much effort into making this video! I really hope you see this comment because I have a question: I have 8k frames I need to make, but the XSeg generic doesn't do a great job on all of them. Is there any way to go in and modify the XSeg generic masks manually on certain frames without having to do all 8k by hand?
@Deepfakery · 1 year ago
In DFL it's not possible to directly edit the mask. There's a tool called MachineVideoEditor that does more advanced masking like that. For DFL, if there are no obstructions but the mask is just off here and there, then the best bet is to label some frames and use the pretrained files as your base, like in the video. You might be able to tighten things up a little with just a few labels and a short training session. If it's an obstruction you're dealing with, then it's going to take more manual work, in DFL or MVE. The total number of frames you have isn't a determining factor, though; it's more about the ranges of motion, lighting, and obstructions. If there are 8k frames but mostly the same angle, such as a heavy dialogue scene, then you'd only need to do a few frames. On the other hand, if it's a crazy action scene, then you'd have to do a lot, maybe in the hundreds, and train the mask longer.
@dantheman5582 · 2 years ago
Hey do you do requests? And if so, can you please do Katy Perry's face on Ryan Keely?
@rrwoow · 2 years ago
1st
@Dragatsis.Palaiologos · 2 years ago
I've been trying to download the file from GitHub for a couple of days. After downloading 79% of it, Mega says to free up space etc., and then it says to pay for premium. And Chrome and Microsoft also block my downloads, wtf. Any ideas...
@Deepfakery · 2 years ago
It should only be like 3 GB or so. Here are some more links that might work: www.deepfakevfx.com/downloads/deepfacelab/
@Dragatsis.Palaiologos · 2 years ago
@@Deepfakery Got it , thank you so much 👍😀
@nicklybarger582 · 1 year ago
how many labels do you recommend we use per faceset?
@Daneliya4ever · 1 year ago
He said in the video a few dozen will do. I just started using XSeg and I started with 50.