I've found Deepshot AI is the best so far for lip syncing, but it's not free.
@johnovercash7547 · 15 hours ago
For the Asian guy, the lip syncing is very bad.
@hyperbolicfilms · 4 hours ago
Kling's lipsync is generally very overpronounced. It's good for people speaking loudly; Runway is better for people speaking quietly.
@gintokigojo · 1 day ago
Wow
@Herve_art · 4 days ago
Crazy
@knicement · 10 days ago
Have you tried the new Viggle V3 model?
@hyperbolicfilms · 10 days ago
Yes, I'm working with it right now. It's much better on characters with realistic shading, but I'm not getting great results with anime-style characters at the moment.
@tunbimideoluyede8464 · 13 days ago
Hey, this video is amazing! I was just wondering what prompt you used to get that specific character style?
@hyperbolicfilms · 13 days ago
Thanks! The Midjourney prompt was "Photorealistic, full body, latino soldier with stubble in dirty t-shirt and black pants, 40 years old, white background, f4, 35mm". He ended up looking more concept-art style than photorealistic. I later ran the image through Krea to make him look more realistic, but that was in a later video.
@TheJan · 18 days ago
Insane, thanks!
@NoMouthHammocks · 21 days ago
Couldn't you make them for people who ordered them?
@RegalRoyalWasTaken · 23 days ago
"Horror"
@cadenr7165 · 23 days ago
Heathcliff??!?
@ariel-u7y · 24 days ago
Bro, AI should never replace actual artists.
@Ayahuasca98 · 24 days ago
I miss when AI made Twitter artists cry. I prefer that to whatever this is.
@arsalabbasmirza · 24 days ago
And this is the kind of stuff YouTube recommends to me at 3 am: quirky, almost nonsensical, and horror only because it's excessively uncanny, but true comedy gold!
@captaindonut1591 · 24 days ago
Absolute dogshit
@rosemeyler7203 · 24 days ago
this sucks ass my guy
@danieladam1740 · 24 days ago
Shitting on the toilet is more entertaining. Maybe you should try getting a job.
@Awesomenessss · 24 days ago
Pure comedy gold😂😂😂😂
@atul.aditya · 24 days ago
😂😂 should be a crime to call this horror.
@user-ek7xm3hu1w · 24 days ago
This is good!
@harryraymonddias4290 · 24 days ago
A Whovian with enough disposable income could rebuild so many lost episodes from Doctor Who!
@elcriticohdp3785 · 26 days ago
Eat it up.
@brianmartin697 · 29 days ago
Automatic1111 is great... it solved a lot of problems.
@zherusalemvideos · 1 month ago
Hi there! Just shot you an email, but in case you missed it - I lead Partnerships at Viggle, and we would love to connect and chat!
@SamhainBaucogna · 1 month ago
Always interesting, thanks!
@PHATTrocadopelus · 1 month ago
Great pipeline! These tools are getting better and better! Reminds me of the work by Ralph Bakshi.
@hyperbolicfilms · 1 month ago
Yeah, definitely has that rotoscoped feel!
@SpaceGhostNZ · 1 month ago
Good stuff!
@hyperbolicfilms · 1 month ago
Glad you enjoyed it!
@EllisJonesDeath · 1 month ago
What site did you use for your character? I have tried Kling and Bing, but they always create characters with shadows on the face. I have tried prompting "no shadows" etc., but they always get added.
@hyperbolicfilms · 1 month ago
It's hard to not get shadows. You can try asking for even lighting, flat lighting, or diffused lighting and see if that works.
@bytecentral · 1 month ago
This is so cool and amazing. Which tools did you use?
@hyperbolicfilms · 1 month ago
This started as Midjourney images that I animated with Viggle; I then used Krea to clean up the video quality.
@MikeGonzalez · 1 month ago
Great tutorial, super down to earth. A+
@KalinyaiYainlie · 1 month ago
Yes, great job, nice transition!
@gabeaiartist · 1 month ago
Wow, amazing film!
@greenyswelt · 1 month ago
dope
@rodrigobarrosempreendedor · 1 month ago
Congratulations on the video. Some questions: 1. 10 credits per second is very expensive. On the UNLIMITED plan it should be possible (as the name says) to create in an unlimited way, right? 2. Can I upload pre-recorded audio for the character to speak, or does it have to be my own voice directly? 3. If I record my voice in one language (for example, English), can I change it to Portuguese in Runway itself, or will I have to take it to ElevenLabs later and change it? 4. Because if I take it to ElevenLabs and change the language, then I'll need another AI to do the lip sync, right? Congratulations again on the video!
@hyperbolicfilms · 1 month ago
1. In theory. I think they slow you down after a certain number of credits. 2. You have to upload a video of someone acting; it's essentially like motion capture for the face/head. 3. I don't think Runway has any translation functions. 4. If you want to take a photo and an audio clip and make a talking head, there are other tools that do that. Kling does it indirectly; Hedra is probably the easiest way to do this.
@SpaceGhostNZ · 1 month ago
Really cool stuff
@knicement · 1 month ago
How did you change the voices?
@hyperbolicfilms · 1 month ago
ElevenLabs voice-to-voice.
@knicement · 1 month ago
How did you slice the 2 minutes into 10-second pieces?
@hyperbolicfilms · 1 month ago
In my editing app (DaVinci Resolve), I rendered out 10 seconds of the performance at a time. It's very slow and tedious.
@hyperbolicfilms · 1 month ago
In Resolve, you can also set the Output to Individual Clips and then break up your video into 10-second fragments. That works well.
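If you'd rather script this step than click through Resolve, ffmpeg's segment muxer can do the same slicing. A minimal sketch, assuming ffmpeg is installed and on your PATH; the filenames are placeholders:

```python
# Minimal sketch: slice a long performance video into 10-second pieces
# using ffmpeg's segment muxer.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "performance.mp4",   # the full 2-minute performance (placeholder name)
        "-map", "0",               # keep all streams (video + audio)
        "-c", "copy",              # copy streams without re-encoding: fast and lossless
        "-f", "segment",           # split the output with the segment muxer
        "-segment_time", "10",     # target length of each piece, in seconds
        "-reset_timestamps", "1",  # each piece starts at timestamp 0
        "piece_%03d.mp4",          # piece_000.mp4, piece_001.mp4, ...
    ],
    check=True,
)
```

One caveat: with `-c copy` the cuts snap to keyframes, so pieces may run slightly over 10 seconds; dropping `-c copy` and re-encoding gives exact cuts at the cost of speed.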
@knicement · 19 days ago
@hyperbolicfilms Thank you!
@Mrim86 · 1 month ago
Really smart to incorporate the walking action and the talking action in what appears to be the same shot. Great work with this.
@hyperbolicfilms · 1 month ago
@Mrim86 Thanks! I'm trying to think up ways to break the dialogue shots up as well, so there can be some change in pose to fit the dialogue. It might not be feasible with Act One as it is.
@ShoKnightz · 1 month ago
What do you use for virtual sets/backgrounds?
@hyperbolicfilms · 1 month ago
These backgrounds were generated in Midjourney itself, along with the character.
@JayJay3D · 1 month ago
I may be wrong, but don't Hedra and Live Portrait do the same?
@hyperbolicfilms · 1 month ago
Hedra uses audio to automatically animate a photograph, but you don't get control over how it moves the face. Live Portrait is similar, but the results of Act One are much better. With Live Portrait, some face movements add jitter to the face. Act One also seems to work well on stylized faces, which I don't think is the case for Live Portrait. At least I can't remember seeing any results of stop-motion style characters.
@JayJay3D · 1 month ago
@hyperbolicfilms Cheers for the reply. It'll be interesting to see the coming updates from Viggle, Hedra, and possibly Live Portrait - lots of competition among AI tools now :D
@Steger13 · 1 month ago
It looks like a PS4 game. Pretty good, but I'd say in about 2 years it's going to be crazy easy to make a realistic AI film at home. Can't wait!
@hyperbolicfilms · 1 month ago
Yeah, I've since found some tricks that really improve the quality, but it never quite looks 100% real. It's fun watching this technology get better and better.
@upscalednostalgiaofficial · 1 month ago
With Viggle, what I usually do is enhance the video using Krea AI. You get additional detail on the character, and it blends the character with the background well. It sometimes fixes the jittery clips from Viggle. After that, I apply frame interpolation using Flowframes to convert it to a 60fps clip. If you lose the identity of the character, I usually do a faceswap using either Roop Unleashed or FaceFusion.
@hyperbolicfilms · 1 month ago
I have used Krea for this in the past too, and it does work well. The 10 second limit is a pain, but the results are great. I hadn't heard of Flowframes. Thanks, I'll check it out!
@armondtanz · 1 month ago
I used to make stuff in Unreal. I joined some intensive courses taught by people who had worked on movies. They ALWAYS said, right at the beginning, to work in 24fps when making movies. Maybe too many frames give the look a strange, artificial vibe.
@hyperbolicfilms · 1 month ago
@armondtanz You're right, but the reason it's a good idea to make Viggle footage into 60 fps is that it smooths out some of the jittering that Viggle causes when it outputs a video. Then you can use that 60 fps video in a 24 fps timeline to get more consistent motion.
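For anyone without Flowframes, ffmpeg's built-in minterpolate filter can do a rough version of the same 60 fps smoothing (Flowframes uses dedicated AI interpolation models, so its results will usually look better). A minimal sketch, again assuming ffmpeg is on your PATH, with placeholder filenames:

```python
# Minimal sketch: interpolate a Viggle clip to 60 fps with ffmpeg's
# minterpolate filter, a rough stand-in for Flowframes.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "viggle_clip.mp4",                   # jittery Viggle output (placeholder name)
        "-vf", "minterpolate=fps=60:mi_mode=mci",  # synthesize in-between frames via motion compensation
        "viggle_60fps.mp4",
    ],
    check=True,
)
```

The 60 fps result can then be dropped into a 24 fps timeline as described above, which averages out some of Viggle's frame-to-frame jitter.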
@armondtanz · 1 month ago
@hyperbolicfilms Oh OK, I'll have to check out Viggle. Did you ever see the workflow of the guy who made the viral video of the Joker walking on stage? He put it through Comfy. I'm a complete noob when it comes to ComfyUI, but it looks insane. His end result was so polished: you could see textures and shadows in the clothes that were not there in the original Viggle version.
@hyperbolicfilms · 1 month ago
@armondtanz I haven't seen the Joker one, but Eric Solorio did a Deadpool video with Viggle and Comfy that came out really well.
@JT-wu3wi · 1 month ago
This is really great! How did you get the over-the-shoulder shots? Were they made in Midjourney?
@hyperbolicfilms · 1 month ago
It's a combination of images generated in Midjourney and some Photoshop. There's a very long tutorial about it called "Making Dialogue Scenes for AI Films with Runway and Kling".
@JT-wu3wi · 1 month ago
@hyperbolicfilms Thank you!
@Mr.Superman2024 · 1 month ago
Good, but I'm so confused about what you're trying to deliver in your video. So confusing.
@stevensteverly · 1 month ago
What's the max resolution like? You could make the shots much more dynamic with a simple pan or camera shake... Also, is there an option to have it render without the background (i.e., as an alpha)? If so, I can see this being a decent tool for some indie people. If not, then it's kinda meh.
@hyperbolicfilms · 1 month ago
The resolution is 1280x768. There is no option to do any camera movement, so you would have to add that in post production. The background is limited to whatever is in your input image, so no alphas. It's a step in the right direction, but not a magic bullet.
@MabelYolanda-c9i · 1 month ago
Run Viggle through Krea and you'll be amazed...
@hyperbolicfilms · 1 month ago
I did that a few weeks ago, and it did give amazing results. The 10 second limit in Krea is a bit of a bottleneck, but it definitely gives great and consistent results.
@С.Н-ш2у · 1 month ago
I have seen a workflow where the video was first animated using Runway (image-to-video), and then a character's head, animated in LivePortrait, was composited onto this animated video. Do you think it is possible to combine Viggle and Runway (Act One) in such a process?
@hyperbolicfilms · 1 month ago
I don't think this would work, because Act One only takes images as input. LivePortrait is the only tool I know of that works on video like that. You could use the Runway lipsync on a Viggle clip, but I don't know if it would fix the resolution of the Viggle clip. That, to me, is where Viggle falls apart: it doesn't use the full resolution of your input image and doesn't upscale the results.
@С.Н-ш2у · 1 month ago
@hyperbolicfilms I meant: 1) animate the head in Act One (picture + video with facial expressions); 2) animate in Viggle (the same picture, only at a smaller scale, with arms and legs, + video with body movements); 3) superimpose the head from step 1 on the body from step 2. P.S. A lot of hassle, and you can't twist the body much, but the head stays at adequate quality.
@armondtanz · 1 month ago
@С.Н-ш2у I tried this a while back with Hedra and my own body, and tried to composite it. It's never going to work. The human body is so complex that every tiny movement has a hundred offshoots, which all need perfect sync, otherwise it looks like crap. The funny thing is that my test was the simplest case: I was just talking into the camera, and it still looked like a bad, bad animation. I've wasted over $1000 and hundreds of hours trying to crack this. It's 100% not worth it.
@С.Н-ш2у · 1 month ago
@armondtanz Thanks for sharing your experience.
@armondtanz · 1 month ago
@С.Н-ш2у I learned the hard way; stubborn fool that I am, I didn't check out the competition... lol. That's probably the easiest way to work something out: if no one else is doing it, it's probably not going to work. The only way it can come off even slightly is if you are a super advanced animator who can match the movement with advanced motion tracking and stabilizers, then sit the head on using tracking markers. But even then it's still going to look unnatural, and everyone will be focusing on this head that's not quite sitting right on the body. Other factors come into play too, like lighting, 3D rotation, and your neck muscles not reacting to your shoulder muscles. If you look at all the great animators (Tex Avery), that part of the body is so crucial; there's so much expression in the head-neck-shoulders. That's why these new AIs look a bit flat: that area is really behind.
@HumanOpinions-bz9ky · 1 month ago
That's what it did to me. It couldn't recognize a human face. I was a bit disappointed. Not to be greedy, but I'm looking forward to the day we can move our heads, move our arms around, and even use props that we are holding. Only THEN... a Vid Jedi will you be.
@armondtanz · 1 month ago
The ultimate would be a Gaussian splat type scenario where it's almost like 3D software: you can pan out of your scene and see your 3D world. I think Midjourney is launching it, or looking to. They said their video gen is ready, but they want to improve it. They also talk about 3D environments, so maybe that's the future.
@С.Н-ш2у · 1 month ago
Thank you for the content, it is very useful! Have you tried animating this way: kzbin.info/www/bejne/aquTqJ-JZa-NaLMsi=wfhs7pECmIyB8eZv ? This is not spam; I just want to know your opinion, since I do not understand ComfyUI.
@LeonGustin · 1 month ago
See, now they need to combine Act One with vid2vid. Not to mention that vid2vid needs to support at least 1-minute generations.
@LeonGustin · 1 month ago
Amazing work. I love the perseverance in bringing your idea to reality.
@SamhainBaucogna · 2 months ago
Great job, congratulations! Sorry, but do you use two steps with Kling to make the characters talk? I mean, do you first tell the character to speak and then take the generated video to lip sync, or in the first step do you try to create a static video and then make them talk? It's a shame that Minimax doesn't have these features; it would be unbeatable. Thanks, regards.
@hyperbolicfilms · 2 months ago
I first generate the images in Midjourney because it's the easiest way to get consistent characters, so I prepare my shots there. Then I use image-to-video in Kling with something like "the man talks calmly". When I am happy with the way a clip looks, I then use Kling's lipsync to get the mouth to match the dialogue. There's a long tutorial on my channel that shows my whole process (I show both Kling and Runway).
@SamhainBaucogna · 2 months ago
@hyperbolicfilms Thank you very much. Yes, I have already seen your very interesting tutorial, but I had this question because at that time Kling may not have had lip sync, and the videos it generated were not very good. Anyway, as you have seen, Runway has now announced Act One, which can be very useful for this. Greetings!
@violentpixelation5486 · 2 months ago
Great presentation and explanation of this fantastic new opportunity to get into virtual production without spending a fortune. Thank you! I'd love to see more like this from the Aximmetry world.
@hyperbolicfilms · 2 months ago
Glad it was helpful! Aximmetry is a great option if you want to see your final pixels immediately, and Jetset is great if you are willing to do some post production.