To render without missing frames, disable "real-time" in the file source.
@Monoville · 2 years ago
Great, thanks!
@Monoville · 2 years ago
One other thing I forgot to mention: after rendering all the sequence images, click the X next to "save sequence" in DeepFaceLive, or the previous files will be overwritten the next time you load a different video file.
@loveit8602 · 2 years ago
I am so happy you're doing more videos!!
@NextgenPL · 2 years ago
Can't wait to see real actors' faces in games! 😲😲😲
@Ziutek_DE · 5 months ago
Thank you bro, I can finally have Indiana Jones.
@SlayeRFCSM · 5 months ago
David Kovalniy - Alexei Navalny (RIP)
@sheleg4807 · 2 years ago
It's probably faster because of the FPS; try matching the fps in the composition to 25/29.97/30 or even 15. I don't know how well it works, I haven't tried it yet.
@Monoville · 2 years ago
Tried that already; it didn't seem to work (although I'm sure it must be fps-related).
@Romulus_YT · 9 months ago
It likely has something to do with the compression of the original compared to your custom file. Grabbing frames from a video and creating stills is essentially the same thing that happens when you compress a video file: duplicate chunks of data are ignored to save space, and only unique still images/data are pulled. But those frames are not extended and blended together to recreate footage of identical length. Instead, the result is put back together more like a handmade flipbook, where each frame still holds the same value in terms of fps but lacks the data that was ignored. So it appears to be faster, even if the scene looks the same to the naked eye; it's really just missing chunks.

When you compress a video, the encoder does the work of adjusting the speed in the final product for you, correcting the fps each time chunks are removed. That is why compressed videos are always seamless. The only things that change are fuzziness or color distortion, depending on how many chunks you remove from the file size; the more a file is compressed, the fuzzier the image gets.

A deepfake is not as seamless because, rather than correcting the fps where duplicates were removed, you are only changing the overall playback speed. That helps, but good eyesight can still detect missing frames. Deepfaking is more or less sloppy manual video compression with edited faces, if that helps. If you are good at video editing, you can use pro tools like Premiere/After Effects to blend the stills better than apps like this, if you want genuine-looking fakes. But if you're just having fun playing with faces, this is perfect.

It IS fps-related, but it is not as simple as just changing the fps, since different amounts of data are ignored in different places. Unless you can specify fps per frame set instead of for the whole project? That's why I recommend those apps.
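Setting aside the compression theory, the speed change itself comes down to simple arithmetic: the same number of extracted stills played at a higher composition fps yields a shorter, faster-looking clip. A minimal Python sketch of that arithmetic (the clip length and frame rates are made-up example numbers, not taken from the thread):

```python
# Illustrative sketch: why an image sequence plays back faster when the
# composition fps doesn't match the fps the frames were extracted at.

def playback_duration(frame_count: int, fps: float) -> float:
    """Duration in seconds when frame_count stills are played at fps."""
    return frame_count / fps

# Hypothetical example: a 30 s clip shot at 23.976 fps yields ~719 frames.
frames = round(30 * 23.976)                      # 719 frames extracted
original = playback_duration(frames, 23.976)     # ~30 s at the source fps
composition = playback_duration(frames, 30.0)    # ~24 s at 30 fps: ~25% faster

print(frames, round(original, 2), round(composition, 2))
```

The same frame count divided by a larger fps is always a shorter duration, which is why matching the composition fps to the source fps (as suggested above) is the usual fix.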
@ruudygh · 2 years ago
Can I use a custom face that's not from that list?
@craksracing0com · 2 years ago
Hi, I just tried it and some questions came up: how do I stream it into OBS, for example, or record it (not frame by frame)? Thanks.
@MickaelSchaack · 1 year ago
Where do I put the .dfm files from the Drive?
@Monoville · 1 year ago
DeepFaceLive_NVIDIA (or whichever version you have) > userdata > dfm_models
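A quick way to double-check that the models landed in the right place is to list what DeepFaceLive will see in that folder. A small sketch (the install-root path is a placeholder; the `userdata/dfm_models` layout is taken from the reply above):

```python
# Sketch: list the .dfm face models inside a DeepFaceLive install.
from pathlib import Path

def list_dfm_models(install_root: str) -> list[str]:
    """Return sorted .dfm filenames under <install_root>/userdata/dfm_models."""
    models_dir = Path(install_root) / "userdata" / "dfm_models"
    return sorted(p.name for p in models_dir.glob("*.dfm"))

# Example (placeholder path, adjust to your install):
# print(list_dfm_models("DeepFaceLive_NVIDIA"))
```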
@DruuzilTechGames · 1 year ago
It'd be mildly cool if you gave me credit when using my models. I made them public so I don't care what you use them for, but there's no indication in any of your videos of who made the models or whose Google Drive you're linking to.
@Monoville · 1 year ago
Sorry, I had no idea you made these face models. I assumed the Google Drive link was just a general repository for people to add the models. DeepFaceLab itself seems to be made by a whole bunch of people (going by the credits) and this particular model is included with it, not from the Drive link. I always credit people on my videos (hence I would just add the credit "DeepFaceLab" on the DeepFake videos), in this case it just wasn't clear to me that there was a single person to give credit to. But anyway I've amended all the videos featuring DeepFake to include a link to your channel. Thanks for the great work.
@DruuzilTechGames · 1 year ago
@@Monoville I didn't make DeepFaceLab (the creator, Iperov, is from Russia), but I made all the models in that Google Drive folder and made it public so people could play with the live models. Thanks for updating the description. I enjoy your content, btw.
@craksracing0com708 · 1 year ago
Hello, why do I have no sound? I record a video with OBS, then run it through DeepFaceLive, but there is no sound in the output window, not even when I record that with OBS. Why? Thanks!!!
@Romulus_YT · 9 months ago
Deepfake is not for audio editing, if I understand your question correctly. It takes the video from the container of your selected file, creates still images of it, then recompiles them to form an animation with the different faces. If you want audio in the final product, you need to add it back via a video editor as a second layer alongside your faked file, in your format of choice, then export it. You will then have your deepfake video with sound and no need to record it. There are tons of splitters online, free of charge, if you need to extract the audio track from the original video so you can add it back later. Make sure whatever video you are editing had sound to begin with.

If you are trying to record a streamed video from the internet or some other app to use for editing, take caution. The partnership between OBS and Google will prevent some video/audio being recorded from Chrome and some apps, since OBS has built-in detection to protect DRM content. To avoid that, only record streamed content from a browser like Firefox or Dolphin, or from streaming apps that are not affiliated with Google in any way, as long as you are not doing it in a way that violates legal terms of use.
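The "add the audio back" step above is commonly done with ffmpeg rather than a full editor: copy the video stream from the silent deepfaked file and the audio stream from the original, without re-encoding either. A sketch that builds the command (filenames are placeholders; requires ffmpeg on your PATH to actually run):

```python
# Sketch: remux original audio onto a silent deepfaked video with ffmpeg.
import subprocess

def build_mux_command(silent_video: str, original_video: str, output: str) -> list[str]:
    """Build an ffmpeg command that takes video from silent_video and
    audio from original_video, stream-copied (no quality loss) into output."""
    return [
        "ffmpeg",
        "-i", silent_video,    # input 0: deepfaked video, no audio
        "-i", original_video,  # input 1: source clip that still has audio
        "-map", "0:v:0",       # video stream from input 0
        "-map", "1:a:0",       # audio stream from input 1
        "-c", "copy",          # stream copy, no re-encode
        output,
    ]

# To actually run it (placeholder filenames):
# subprocess.run(build_mux_command("fake.mp4", "original.mp4", "out.mp4"), check=True)
```

Stream copy keeps this fast and lossless, but it only works if both inputs are in codecs the output container accepts; otherwise re-encode the audio (e.g. `-c:a aac`) instead of copying it.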