The Thought of You (2018)
8:22
3 months ago
The Factory
1:14
4 months ago
The Sky's the Limit - test scene
0:58
The East Wing - Short horror film
3:20
Make Comics in Stable Diffusion!
46:25
Landfall (a MOVE.AI mocap test)
2:12
Moon Rocks
1:29
1 year ago
Comments
@TheObaby18 · 1 day ago
You did a great job 👍🏿
@ariel-u7y · 8 days ago
Bro, Bram Stoker would be so disappointed if he saw something like this.
@atlantianson515 · 13 days ago
Well done, good story, very entertaining. Thank you for sharing!
@hyperbolicfilms · 13 days ago
Thank you for watching!
@firesofcreation · 13 days ago
Awesome! Nice work! ✨👏🏽✨ The voice over animation was flawless! 🌟
@hyperbolicfilms · 13 days ago
Thanks!
@VegterMedia · 17 days ago
Gonna try this with Blender.
@offilawNoone · 19 days ago
Well, AI-generated content is still light years away from looking normal.
@dionysislarson6352 · 20 days ago
Right on! That was cool. I do gotta complain that the chick wasn't wearing a mini, though.
@CosmicOutpost · 22 days ago
Damn, that was epic! Can't wait to see a behind-the-scenes to know which platform did which! The character consistency and poses alone are hard to do, but wrapping it up in a great Star Trek story? Amazing!😎
@swagadordali1611 · 23 days ago
I mean it's impressive but I really hope this kind of identity theft is treated as criminal someday
@SpaceGhostNZ · 23 days ago
Impressive as usual! It's nice seeing AI-generated content that puts effort into having a story too.
@hyperbolicfilms · 23 days ago
@@SpaceGhostNZ Thanks! I wish I’d done something more classic weird Star Trek, but now that I have the assets I may try to do another episode that goes into giant hand holding a ship territory.
@LFPAnimations · 24 days ago
On the one hand it's impressive how far this tech has gone. On the other hand I don't like anything about your final result. It is sloppy, inconsistent, uninspired, and inhuman. I'd rather watch the raw webcam performance of you delivering the lines than the AI version.
@hyperbolicfilms · 24 days ago
That's fair, but in the past 3 months since I posted this, it's come much further along. Characters are much more realistic and I've incorporated moving cameras, so the filming experience can be much closer to traditional filmmaking, just with an AI layer over it. I see the evolution of this letting a small group of actors play dozens of characters in whatever setting they want. I also think this style of AI generated stuff at least incorporates the performance of an actor. So many of the AI video bros are desperate to move to the point where they press a button and get a shitty TV show. I am looking for ways to make sure humans remain a key part of the process.
@LFPAnimations · 24 days ago
@@hyperbolicfilms I could see how this would be good to try out an idea, but I'd never trust it for a final. If anything it is probably good reference for a VFX artist to take it the rest of the way. Glad to hear you aren't on the AI bro train, but personally I remain extremely skeptical. Traditional 3D tools are so accessible and so good now that AI honestly has steep competition. Have you tried metahumans and Unreal yet?
@hyperbolicfilms · 24 days ago
@@LFPAnimations Professionally, I am using AI stuff for previz. Essentially, when clients need to see what the final project can look like, it's a better communication tool than just a storyboard or a moodboard, and that's where I think all of this tech is at its best. I have used Metahumans quite a bit, even live on an LED wall as a background character, but I'm more of an iClone guy. More flexibility, even if the characters aren't as realistic. I'm in the midst of a big iClone project now, and nothing can beat animation for real control of what the characters are doing.
@offishow · 1 month ago
Is that old Topaz version free?
@hyperbolicfilms · 1 month ago
No, I just have an old license and can't justify the upgrade price. I don't think you can get the previous versions.
@johnovercash1798 · 1 month ago
I found Deepshot AI is the best so far for lip syncing, but it's not free.
@johnovercash7547 · 1 month ago
For the Asian guy, the lip syncing is very bad.
@hyperbolicfilms · 1 month ago
Kling is very overpronounced with its lipsync generally. It's good for people speaking loud. Runway is better for people speaking quietly.
@gintokigojo · 1 month ago
Wow
@Herve_art · 1 month ago
Crazy
@knicement · 1 month ago
Have you tried the new Viggle V3 model?
@hyperbolicfilms · 1 month ago
Yes, just working with it right now. Much better on characters with realistic shading, but not getting great results at the moment with anime style characters.
@tunbimideoluyede8464 · 1 month ago
Hey this video is amazing, I was just wondering what prompt you used to get the specific style of character you got?
@hyperbolicfilms · 1 month ago
Thanks! The Midjourney prompt was "Photorealistic, full body, latino soldier with stubble in dirty t-shirt and black pants, 40 years old, white background, f4, 35mm" He ended up looking more concept art style than photorealistic. I later ran the image through Krea to get it to be more realistic looking, but that was in a video after this.
@TheJan · 1 month ago
Insane, thanks!
@NoMouthHammocks · 2 months ago
Can't you make them for people if they ordered them???
@RegalRoyalWasTaken · 2 months ago
"Horror"
@cadenr7165 · 2 months ago
Heathcliff??!?
@ariel-u7y · 2 months ago
Bro, AI should never replace actual artists.
@Ayahuasca98 · 2 months ago
I miss when AI made Twitter artists cry. I prefer that to whatever this is.
@arsalabbasmirza · 2 months ago
And this is the kind of stuff YouTube recommends me at 3 am: quirky, almost nonsensical, and horror only because it's so excessively uncanny - but true comedy gold!
@captaindonut1591 · 2 months ago
Absolute dogshit
@roseypunk · 2 months ago
this sucks ass my guy
@danieladam1740 · 2 months ago
Shitting on the toilet is more entertaining; maybe you should try getting a job.
@Dev_ilicious · 2 months ago
Pure comedy gold😂😂😂😂
@atul.aditya · 2 months ago
😂😂 should be a crime to call this horror.
@user-ek7xm3hu1w · 2 months ago
This is good!
@harryraymonddias4290 · 2 months ago
A Whovian with enough disposable income could rebuild so many lost episodes from Doctor Who!
@elcriticohdp3785 · 2 months ago
Eat it.
@brianmartin697 · 2 months ago
Automatic1111 is great... solved a lot of problems.
@zherusalemvideos · 2 months ago
Hi there! Just shot you an email, but in case you missed it - I lead Partnerships at Viggle, and we would love to connect and chat!
@SamhainBaucogna · 2 months ago
Always interesting, thanks!
@PHATTrocadopelus · 2 months ago
Great pipeline! These tools are getting better and better! Reminds me of the work by Ralph Bakshi.
@hyperbolicfilms · 2 months ago
Yeah, definitely has that rotoscoped feel!
@SpaceGhostNZ · 2 months ago
Good stuff!
@hyperbolicfilms · 2 months ago
Glad you enjoyed it
@EllisJonesDeath · 2 months ago
What site did you use for your character? I have tried Kling and Bing, but they always create characters with shadows on the face. I have tried prompting "no shadows", etc., but they always get added.
@hyperbolicfilms · 2 months ago
It's hard to not get shadows. You can try asking for even lighting, flat lighting, or diffused lighting and see if that works.
@bytecentral · 2 months ago
This is so cool and amazing. Which tools did you use?
@hyperbolicfilms · 2 months ago
This started as Midjourney images that I animated with Viggle, and then used Krea to clean up the video quality.
@MikeGonzalez · 3 months ago
Great tutorial, super down to earth. A+
@KalinyaiYainlie · 3 months ago
Yes, great job, nice transition!
@gabeaiartist · 3 months ago
Wow, amazing film!
@greenyswelt · 3 months ago
dope
@rodrigobarrosempreendedor · 3 months ago
Congratulations on the video. Questions: 1. 10 credits per second is very expensive. On the UNLIMITED plan it should be possible (as the name says) to create in an unlimited way, right? 2. Can I upload ready-made audio for the character to speak, or does it have to be my own voice directly? 3. If I record my voice in one language (for example English), can I change it to Portuguese in Runway itself, or will I have to take it to ElevenLabs later and change it? 4. Because if I take it to ElevenLabs and change the language, then I'll need another AI to do the lip sync, right? Congratulations again on the video!
@hyperbolicfilms · 3 months ago
1. In theory. I think they slow you down after a certain number of credits. 2. You have to upload a video of someone acting. It's essentially like a motion capture for the face/head. 3. I don't think Runway has any translation functions. 4. If you want to take a photo and an audio clip and make a talking head, there are other tools that do that. Kling does it indirectly. Hedra is probably the easiest way to do this.
@SpaceGhostNZ · 3 months ago
Really cool stuff
@knicement · 3 months ago
How did you change the voices?
@hyperbolicfilms · 3 months ago
ElevenLabs voice to voice
@knicement · 3 months ago
How did you slice the 2 minutes into 10 seconds each?
@hyperbolicfilms · 3 months ago
In my editing app (Davinci Resolve), I rendered out 10 seconds of the performance at a time. It's very slow and tedious.
@hyperbolicfilms · 3 months ago
In Resolve, you can also set the Output to Individual Clips, and then break up your video into 10 second fragments. That works well.
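For anyone who would rather script this step than export 10-second clips by hand in Resolve, here is a minimal sketch. It assumes Python and ffmpeg are installed (neither is mentioned in the thread above), and the file names are hypothetical; ffmpeg's segment muxer cuts the source performance into consecutive 10-second pieces without re-encoding.

    import subprocess

    def split_into_clips(src: str, seconds: int = 10) -> None:
        """Cut src into consecutive clips of `seconds` length with ffmpeg's segment muxer."""
        subprocess.run(
            [
                "ffmpeg",
                "-i", src,                   # source performance video (hypothetical name)
                "-c", "copy",                # copy streams, no re-encode
                "-map", "0",                 # keep every stream from the input
                "-f", "segment",             # write a numbered sequence of files
                "-segment_time", str(seconds),
                "-reset_timestamps", "1",    # restart timestamps in each clip
                "clip_%03d.mp4",             # clip_000.mp4, clip_001.mp4, ...
            ],
            check=True,
        )

    if __name__ == "__main__":
        split_into_clips("performance.mp4", 10)

Note that with stream copy the cuts snap to keyframes, so clips can run slightly long; dropping "-c copy" and re-encoding gives exact 10-second cuts at the cost of render time.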
@knicement · 1 month ago
@@hyperbolicfilms Thank you
@Mrim86 · 3 months ago
Really smart to incorporate the walking action and the talking action in what appears to be the same shot. Great work with this.
@hyperbolicfilms · 3 months ago
@@Mrim86 Thanks! I’m trying to think up ways to break the dialogue shots up as well, so there can be some change in pose to fit the dialogue. Might not be feasible with Act One as it is
@ShoKnightz · 3 months ago
What do you use for virtual sets/backgrounds?
@hyperbolicfilms · 2 months ago
These backgrounds were generated in Midjourney itself, along with the character.
@JayJay3D · 3 months ago
I may be wrong, but don't Hedra and Live Portrait do the same?
@hyperbolicfilms · 3 months ago
Hedra uses audio to automatically animate a photograph, but you don't get control over how it moves the face. Live Portrait is similar, but the results of Act One are much better. With Live Portrait, some face movements add jitter to the face. Act One also seems to work well on stylized faces, which I don't think is the case for Live Portrait. At least I can't remember seeing any results of stop-motion style characters.
@JayJay3D · 3 months ago
@@hyperbolicfilms Cheers for the reply. It'll be interesting to see the coming updates from Viggle, Hedra, and possibly Live Portrait - lots of competition with AI tools now :D