The Thought of You (2018)
8:22
2 months ago
The Factory
1:14
3 months ago
The Sky's the Limit - test scene
0:58
The East Wing - Short horror film
3:20
Make Comics in Stable Diffusion!
46:25
Landfall (a MOVE.AI mocap test)
2:12
Moon Rocks
1:29
1 year ago
Comments
@johnovercash1798 3 hours ago
I found Deepshot AI to be the best so far for lip syncing, but it's not free.
@johnovercash7547 15 hours ago
For the Asian guy, the lip syncing is very bad.
@hyperbolicfilms 4 hours ago
Kling's lipsync is generally very overpronounced. It's good for people speaking loudly; Runway is better for people speaking quietly.
@gintokigojo 1 day ago
Wow
@Herve_art 4 days ago
Crazy
@knicement 10 days ago
Have you tried the new Viggle V3 model?
@hyperbolicfilms 10 days ago
Yes, just working with it right now. Much better on characters with realistic shading, but not getting great results at the moment with anime-style characters.
@tunbimideoluyede8464 13 days ago
Hey, this video is amazing. I was just wondering what prompt you used to get the specific style of character you got?
@hyperbolicfilms 13 days ago
Thanks! The Midjourney prompt was "Photorealistic, full body, latino soldier with stubble in dirty t-shirt and black pants, 40 years old, white background, f4, 35mm". He ended up looking more concept-art style than photorealistic. I later ran the image through Krea to make him look more realistic, but that was in a later video.
@TheJan 18 days ago
Insane, thanks!
@NoMouthHammocks 21 days ago
Couldn't you make them for people if they ordered them?
@RegalRoyalWasTaken 23 days ago
"Horror"
@cadenr7165 23 days ago
Heathcliff??!?
@ariel-u7y 24 days ago
Bro, AI should never replace actual artists.
@Ayahuasca98 24 days ago
I miss when AI made Twitter artists cry. I prefer that to whatever this is.
@arsalabbasmirza 24 days ago
And this is the kind of stuff YouTube recommends me at 3 am: quirky, almost nonsensical, and horror only because it's excessively uncanny. But true comedy gold!
@captaindonut1591 24 days ago
Absolute dogshit
@rosemeyler7203 24 days ago
This sucks ass, my guy.
@danieladam1740 24 days ago
Shitting on the toilet is more entertaining. Maybe you should try getting a job.
@Awesomenessss 24 days ago
Pure comedy gold😂😂😂😂
@atul.aditya 24 days ago
😂😂 It should be a crime to call this horror.
@user-ek7xm3hu1w 24 days ago
This is good!
@harryraymonddias4290 24 days ago
A Whovian with enough disposable income could rebuild so many lost episodes from Doctor Who!
@elcriticohdp3785 26 days ago
Eat it.
@brianmartin697 29 days ago
Automatic1111 is great... solved a lot of problems.
@zherusalemvideos 1 month ago
Hi there! Just shot you an email, but in case you missed it - I lead Partnerships at Viggle, and we would love to connect and chat!
@SamhainBaucogna 1 month ago
Always interesting, thanks!
@PHATTrocadopelus 1 month ago
Great pipeline! These tools are getting better and better! Reminds me of the work by Ralph Bakshi.
@hyperbolicfilms 1 month ago
Yeah, definitely has that rotoscoped feel!
@SpaceGhostNZ 1 month ago
Good stuff!
@hyperbolicfilms 1 month ago
Glad you enjoyed it!
@EllisJonesDeath 1 month ago
What site did you use for your character? I have tried Kling and Bing, but they always create characters with shadows on the face. I have tried prompting "no shadows" etc., but they always add them.
@hyperbolicfilms 1 month ago
It's hard not to get shadows. You can try asking for even lighting, flat lighting, or diffused lighting and see if that works.
@bytecentral 1 month ago
This is so cool and amazing. Which tools did you use?
@hyperbolicfilms 1 month ago
This started as Midjourney images that I animated with Viggle; I then used Krea to clean up the video quality.
@MikeGonzalez 1 month ago
Great tutorial, super down to earth. A+
@KalinyaiYainlie 1 month ago
Yes, great job, nice transition!
@gabeaiartist 1 month ago
Wow, amazing film!
@greenyswelt 1 month ago
dope
@rodrigobarrosempreendedor 1 month ago
Congratulations on the video. Questions:
1. 10 credits per second is very expensive. On the Unlimited plan it should be possible (as the name says) to create without limits, right?
2. Can I upload ready-made audio for the character to speak, or does it have to be my own voice directly?
3. If I record my voice in one language (for example, English), can I change it to Portuguese in Runway itself, or will I have to take it to ElevenLabs afterwards and change it there?
4. Because if I take it to ElevenLabs and change the language, then I'll need another AI to redo the lip sync, right?
Congratulations again on the video!
@hyperbolicfilms 1 month ago
1. In theory. I think they slow you down after a certain number of credits.
2. You have to upload a video of someone acting. It's essentially motion capture for the face/head.
3. I don't think Runway has any translation functions.
4. If you want to take a photo and an audio clip and make a talking head, there are other tools that do that. Kling does it indirectly. Hedra is probably the easiest way to do this.
@SpaceGhostNZ 1 month ago
Really cool stuff
@knicement 1 month ago
How did you change the voices?
@hyperbolicfilms 1 month ago
ElevenLabs voice-to-voice.
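For anyone wondering what "voice-to-voice" looks like in practice, below is a minimal Python sketch of calling ElevenLabs' speech-to-speech endpoint with requests. The endpoint path, the "audio" field name, the model_id, and the filenames are assumptions based on ElevenLabs' public docs at the time of writing - verify against the current API reference; this is an illustration, not necessarily the creator's exact workflow.

    # Minimal sketch: convert a recorded performance into another ElevenLabs voice.
    # Endpoint path, "audio" field name, and model_id are assumptions - check the docs.
    import requests

    API_KEY = "your_elevenlabs_api_key"  # hypothetical placeholder
    VOICE_ID = "your_target_voice_id"    # hypothetical placeholder

    url = f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}"
    with open("actor_take.mp3", "rb") as f:          # the original recorded voice
        resp = requests.post(
            url,
            headers={"xi-api-key": API_KEY},
            files={"audio": f},
            data={"model_id": "eleven_english_sts_v2"},
        )
    resp.raise_for_status()
    with open("converted_take.mp3", "wb") as out:    # same performance, new voice
        out.write(resp.content)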
@knicement 1 month ago
How did you slice the 2 minutes into 10-second pieces?
@hyperbolicfilms 1 month ago
In my editing app (DaVinci Resolve), I rendered out 10 seconds of the performance at a time. It's very slow and tedious.
@hyperbolicfilms 1 month ago
In Resolve, you can also set the output to Individual Clips and then break up your video into 10-second fragments. That works well.
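For anyone who would rather script the slicing than render clip by clip, here is a minimal sketch using ffmpeg's segment muxer instead of Resolve (an alternative approach, not the workflow described above; the filenames are hypothetical). With "-c copy" the cuts snap to the nearest keyframe, so re-encode instead if you need exact 10-second boundaries.

    # Split a long performance into ~10-second chunks for tools with a 10s input limit.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "performance.mp4",  # hypothetical input file
        "-c", "copy",                       # no re-encode: fast, keyframe-aligned cuts
        "-f", "segment",
        "-segment_time", "10",              # target chunk length in seconds
        "-reset_timestamps", "1",           # each chunk starts at t=0
        "chunk_%03d.mp4",
    ], check=True)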
@knicement 19 days ago
@hyperbolicfilms Thank you
@Mrim86 1 month ago
Really smart to incorporate the walking action and the talking action in what appears to be the same shot. Great work with this.
@hyperbolicfilms 1 month ago
@Mrim86 Thanks! I'm trying to think up ways to break the dialogue shots up as well, so there can be some change in pose to fit the dialogue. It might not be feasible with Act One as it is.
@ShoKnightz 1 month ago
What do you use for virtual sets/backgrounds?
@hyperbolicfilms 1 month ago
These backgrounds were generated in Midjourney itself, along with the character.
@JayJay3D 1 month ago
I may be wrong, but don't Hedra and Live Portrait do the same?
@hyperbolicfilms 1 month ago
Hedra uses audio to automatically animate a photograph, but you don't get control over how it moves the face. Live Portrait is similar, but the results of Act One are much better. With Live Portrait, some face movements add jitter to the face. Act One also seems to work well on stylized faces, which I don't think is the case for Live Portrait. At least I can't remember seeing any results with stop-motion-style characters.
@JayJay3D 1 month ago
@hyperbolicfilms Cheers for the reply. It'll be interesting to see the coming updates from Viggle, Hedra, and possibly Live Portrait - lots of competition with AI tools now :D
@Steger13 1 month ago
It looks like a PS4 game. Pretty good, but I'd say in about 2 years it's going to be crazy easy to make a realistic AI film at home. Can't wait!
@hyperbolicfilms 1 month ago
Yeah, I have since found some tricks that really improve the quality, but it never quite looks 100% real. It's fun watching this technology get better and better.
@upscalednostalgiaofficial 1 month ago
With Viggle, what I usually do is enhance the video using Krea AI. You get additional detail in the character, and it blends the character with the background well. It sometimes fixes the jittery clips from Viggle. After that, I apply frame interpolation using Flowframes to convert it to a 60 fps clip. If you lose the identity of the character, I usually do a faceswap using either Roop Unleashed or FaceFusion.
@hyperbolicfilms 1 month ago
I have used Krea for this in the past too, and it does work well. The 10-second limit is a pain, but the results are great. I hadn't heard of Flowframes. Thanks, I'll check it out!
@armondtanz 1 month ago
I used to make stuff in Unreal. I joined some intensive courses taught by people who had worked on movies. They ALWAYS said, right at the beginning, to work in 24 fps when making movies. Maybe too many frames give the look a strange, artificial vibe.
@hyperbolicfilms 1 month ago
@armondtanz You're right, but the reason it's a good idea to make Viggle footage into 60 fps is that it smooths out some of the jittering that Viggle causes when it outputs a video. Then you can use that 60 fps video in a 24 fps timeline to get more consistent motion.
@armondtanz 1 month ago
@hyperbolicfilms Oh OK, I'll have to check out Viggle. Did you ever see the workflow of the guy whose video of the Joker walking on stage went viral? He put it through Comfy. I'm a complete noob when it comes to ComfyUI - it looks insane. His end result was so polished; you could see textures and shadows in the clothes that were not there in the original Viggle version.
@hyperbolicfilms 1 month ago
@armondtanz I haven't seen the Joker one, but Eric Solorio did a Deadpool video with Viggle and Comfy that came out really well.
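As a rough, scriptable stand-in for the Flowframes step mentioned earlier in this thread (Flowframes wraps AI interpolators such as RIFE), ffmpeg's built-in minterpolate filter can also produce motion-compensated 60 fps footage - slower and softer than RIFE, but with no extra tools needed. The filenames are hypothetical; drop the result into a 24 fps timeline as described above to average out the jitter.

    # Interpolate jittery Viggle output up to 60 fps using motion compensation.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "viggle_output.mp4",       # hypothetical input file
        "-vf", "minterpolate=fps=60:mi_mode=mci",  # motion-compensated interpolation
        "-c:v", "libx264", "-crf", "18",           # high-quality H.264 re-encode
        "smoothed_60fps.mp4",
    ], check=True)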
@JT-wu3wi 1 month ago
This is really great! How did you get the over-the-shoulder shots? Were they made in Midjourney?
@hyperbolicfilms 1 month ago
It's a combination of images generated in Midjourney and some Photoshop. There's a very long tutorial about it called "Making Dialogue scenes for AI films with Runway and Kling".
@JT-wu3wi 1 month ago
@hyperbolicfilms Thank you
@Mr.Superman2024 1 month ago
Good, but I'm so confused about what you're trying to deliver in your video. So confusing.
@stevensteverly 1 month ago
What's the max resolution like? You could make the shots much more dynamic with a simple pan or camera shake... Also, is there an option to have it render without the background (i.e., as an alpha)? If so, I can see this being a decent tool for some indie people. If not, then it's kinda meh.
@hyperbolicfilms 1 month ago
The resolution is 1280x768. There is no option to do any camera movement, so you would have to add that in post-production. The background is limited to whatever is in your input image, so no alphas. It's a step in the right direction, but not a magic bullet.
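Since any camera movement has to be added in post, here is one generic way to fake a slow pan on a locked-off 1280x768 clip: crop a slightly smaller window, slide it across the frame over time, then scale back up. This uses ffmpeg's crop filter, whose x/y expressions are re-evaluated per frame; it's a general post-production trick, not a feature of Act One, and the filenames and pan duration are hypothetical.

    # Fake a left-to-right pan by sliding a crop window across the frame.
    import subprocess

    clip_len = 5  # hypothetical clip length in seconds
    subprocess.run([
        "ffmpeg", "-i", "act_one_shot.mp4",  # hypothetical input file
        "-vf",
        # 1120x672 window pans from x=0 to x=160 over the clip, then upscale back
        f"crop=1120:672:x=(iw-ow)*t/{clip_len}:y=(ih-oh)/2,scale=1280:768",
        "-c:v", "libx264", "-crf", "18",
        "panned_shot.mp4",
    ], check=True)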
@MabelYolanda-c9i 1 month ago
Run Viggle through Krea and you'll be amazed...
@hyperbolicfilms 1 month ago
I did that a few weeks ago, and it did give amazing results. The 10-second limit in Krea is a bit of a bottleneck, but it definitely gives great and consistent results.
@С.Н-ш2у 1 month ago
I have seen a workflow where the video was first animated using Runway (image-to-video), and then a character's head, animated in LivePortrait, was mounted on this animated video. Do you think it is possible to combine Viggle and Runway (Act One) in such a process?
@hyperbolicfilms 1 month ago
I don't think this would work, because Act One only takes images as an input. LivePortrait is the only tool I know of that works on video like that. You could use the Runway lipsync on a Viggle clip, but I don't know if it would fix the resolution of the Viggle clip. That, to me, is where Viggle falls apart: it doesn't use the full resolution of your input image and doesn't upscale the results.
@С.Н-ш2у 1 month ago
@hyperbolicfilms I meant:
1) Animate the head in Act One (picture + video with facial expressions).
2) Animate in Viggle (the same picture, only on a smaller scale, with arms and legs + video with body movements).
3) Superimpose the head from step 1 on the body from step 2.
P.S. A lot of hassle, and you can't twist the body much, but the head would be in adequate quality.
@armondtanz 1 month ago
@С.Н-ш2у I tried this a while back with Hedra and my own body, and tried to composite it. It's never going to work. The human body is so complex; every tiny movement has a hundred offshoots, which all need perfect sync, otherwise it looks like crap. The funny thing is, my test was the simplest possible - I was talking into the camera - and it still looked like a bad, bad animation. I've wasted over $1000 and hundreds of hours trying to crack this. It's 100% not worth it.
@С.Н-ш2у 1 month ago
@armondtanz Thanks for sharing your experience.
@armondtanz 1 month ago
@С.Н-ш2у I learned the hard way - a stubborn fool who didn't check out the competition. Lol, that's probably the easiest way to work something out: if no one else is doing it, it's probably not going to work... The only way it can come off in the slightest is if you are a super-advanced animator and can match the movement with advanced motion tracking and stabilizers, then sit the head on using tracking markers. But even then it's still going to look unnatural, and everyone will be focusing on this head that's not quite sitting right on the body. Other factors come into play too, like lighting, 3D rotation, and your neck muscles not reacting to your shoulder muscles. If you look at all the great animators (Tex Avery), that part of the body is so crucial. There's so much expression in the head-neck-shoulders. That's why these new AIs look a bit flat; that area is really behind.
@HumanOpinions-bz9ky 1 month ago
That's what it did to me - it couldn't recognize a human face. I was a bit disappointed. Not to be greedy, but I'm looking forward to the day we can move our heads, move our arms around, and even use props that we're holding. Only THEN... a Vid Jedi will you be.
@armondtanz 1 month ago
The ultimate would be a Gaussian-splat-type scenario where it's almost like 3D software: you can pan out of your scene and see your 3D world. I think Midjourney is launching that, or looking to. They said their video gen is ready, but they want to make it better. They also talk about 3D environments, so maybe that's the future.
@С.Н-ш2у 1 month ago
Thank you for the content, it is very useful! Have you tried animating this way: kzbin.info/www/bejne/aquTqJ-JZa-NaLMsi=wfhs7pECmIyB8eZv ? This is not spam; I just want to know your opinion, since I do not understand ComfyUI.
@LeonGustin 1 month ago
See, now they need to combine Act One with vid2vid. Not to mention that vid2vid needs at least a 1-minute generation length.
@LeonGustin 1 month ago
Amazing work. I love the perseverance in getting your idea to reality.
@SamhainBaucogna 2 months ago
Great job, congratulations! Sorry, but do you use two steps with Kling to make the characters talk? I mean, do you first tell the character to speak and then take the generated video to lip sync, or in the first step do you try to create a static video and then make them talk? It's a shame that Minimax doesn't have these features; it would be unbeatable. Thanks, regards.
@hyperbolicfilms 2 months ago
I first generate the images in Midjourney because it's the easiest way to get consistent characters, so I prepare my shots there. Then I use image-to-video in Kling with a prompt like "the man talks calmly". When I am happy with the way a clip looks, I use Kling's lipsync to get the mouth to match the dialogue. There's a long tutorial on my channel that shows my whole process (I show both Kling and Runway).
@SamhainBaucogna 2 months ago
@hyperbolicfilms Thank you very much. Yes, I have already seen your very interesting tutorial, but I had this doubt because maybe at that time Kling did not have lip sync and the videos it generated were not very good. Anyway, you have seen that Runway has now announced Act One, which could be very useful for this. Greetings!
@violentpixelation5486 2 months ago
Great presentation and explanation of this fantastic new opportunity to get into virtual production without spending a fortune. Thank you! I'd love to see more like this from the Aximmetry world.
@hyperbolicfilms 2 months ago
Glad it was helpful! Aximmetry is a great option if you want to see your final pixels immediately, and Jetset is great if you are willing to do some post production.