The Sky's the Limit- test scene
0:58
The East Wing - Short horror film
3:20
Make Comics in Stable Diffusion!
46:25
Landfall (a MOVE.AI mocap test)
2:12
Moon Rocks
1:29
1 year ago
The Bad Date
1:03
1 year ago
The Church in the Woods
0:30
1 year ago
The Church in the Woods
0:44
1 year ago
Real Criminal
0:43
2 years ago
Aldric The Arisen
3:01
2 years ago
Go Quest!- Evil Wizard
0:30
3 years ago
Go Quest! -Introducing Igbar
0:39
3 years ago
Go Quest! Floor Tiles
0:25
3 years ago
Go Quest! - Actual Treasure
0:32
3 years ago
Free Adobe Character Animator puppet
0:26
Corpse on the Can - Wumbaberry Hills
0:56
True Stories that Happened (cartoon)
0:52
Comments
@HowDidIGet3700Subs 7 days ago
Starts at 3:48
@feedyourmind1 12 days ago
😂😂 thanks Rodrigo
@dwainmorris7854 29 days ago
Yes, you're absolutely correct, it has too much censorship. That's why I'm gonna go ahead and pay that $2000 to buy a gaming computer, so I can run Stable Diffusion on it directly. I'm tired of being told no. M***********, it won't let you do fight scenes. It won't let you let the figures hold weapons. And that's a problem if you're gonna make comic books. If you're gonna make superhero comic books you're gonna have to show some violence, and Midjourney, as well as others like Leonardo, won't let you do it.
@itanrandel4552 1 month ago
Excuse me, how do I add the speech bubbles?
@CessarVillarreal 1 month ago
It burns my eyes to see people using ChatGPT or SD in the white theme 😂😂
@crimson_fire_Dragon1 1 month ago
Darn, so you need to pay for Midjourney and can't do what you want.
@artbywaqas 1 month ago
@hyperbolicfilms Hi, I'm also using iClone for the facial mocap. How did you end up recording the dialogue? Tech support told me you have to use a wired mic connected to your PC, and that the Live Face app will only record motion, not sound. I'm also trying to record body and face mocap at the same time. I'm using Rokoko Vision for body mocap, which is also AI-based and not live. BTW, Rokoko's Head Rig is quite good and affordably priced at $295.
@alibaba-wy1iv 2 months ago
How do you make every generation produce the same character? I heard some people use their own LoRA training. Can you make a tutorial for it? Thanks
@WalidDingsdale 2 months ago
I enjoy listening to your reflections, comments, and other talks about AI's implications from a professional artist's perspective. Keep going.
@WalidDingsdale 2 months ago
Thanks for sharing this amazing walkthrough of combining SD and PS. This is my first comic art class, which interests many. Keep going with more such professional comic insight and ideas.
@Lalambz 2 months ago
Let's go!!! :))
@todosmiros8119 2 months ago
I want to make BLACK people. Is everyone in these things white/Asian looking?
@Elaneor 3 months ago
When I found this video, I hoped the author would use the IP-Adapter of the ControlNet extension...
@myst1049 3 months ago
Hi, I'm probably gonna go unnoticed, but can you make an anime-styled comic using Stable Diffusion, like those Korean manhwas? Beautiful 2D character comics always have more viewers. Many people read Chinese manhuas just because of an OP MC; the storyline is usually an overused plot. Anyway, I would like it if you made a series for beginners like me.
@godofdream9112 3 months ago
Sir, please make more videos on this topic... I want to make comics.
@d.banksdesigns1995 4 months ago
AWESOME VIDEO SIR. EXTREMELY HELPFUL.
@lifestoryentertainment 4 months ago
Your voice work is fantastic!!!!!!
@ka9648 4 months ago
😂
@MrSka7cis 4 months ago
Thanks for sharing. I just 3D printed this binaural mic setup. I used stereo clippy mics. I was not sure how you mounted the mics internally, so I just taped them in to test it out. It is working fantastically. I could enlarge it to fit the 12mm diameter rather than use the 7.5mm diameter of the Polsen OLM-20 mics, but then the ears would be too big.
@Xandercorp 4 months ago
My dude, you don't have to wait 5 minutes to load one of those... Why is your install so slooooow?
@fadimantium 4 months ago
Amazing!
@petroglyphsentertainment8498 4 months ago
Which basic iPhone model can I use for motion capture?
@hyperbolicfilms 4 months ago
I'm not sure, MoveAI has changed what services are available now. Their MoveOne app needs an iPhone, but I think your best bet is to download the app from the App Store and test out if it works on your device. www.move.ai/single-camera
@petroglyphsentertainment8498 4 months ago
Can I use an iPhone SE 1st gen for motion capture?
@ascarselli 5 months ago
Thanks for the video, I had a feeling this process would require a little bit of external editor work. Hopefully Reallusion will implement a quick solution in future iterations.
@TheSoleProprietor 5 months ago
Sometimes, the advanced tools do not give me the option of a batch count, just everything else, like choosing the model, adjusting the random seed number, etc.
@TheSoleProprietor 5 months ago
I made the mistake of trying to prompt a scene where I square off against Bruce Lee as a 12-year-old challenger... and it banned me! Sometimes these stupid AI engines get their "minds in the gutter" and misinterpret under-age prompts as child porn. While that is understandable, being deliberately offensive was not what I intended. Rather than having to bug the administrator to get unbanned, I don't log in to my account anymore and use SDXL 1.0 anonymously. I still would like to learn some of the AI tweaking tools, such as the random seed number, base guidance, and negative prompts, and how to use them more effectively.
@C4DRS4U 6 months ago
Cool little horror film, appropriately atmospheric.
@hyperbolicfilms 6 months ago
Glad to hear that! The last (live action) horror film I made really lacked in that department.
@kissler101 6 months ago
Very cool, I liked it... great dialogue and good pace.
@hyperbolicfilms 6 months ago
Thanks!
@arielmorandy8189 6 months ago
I think you really need to practice Stable Diffusion before making images. You are not using SD properly. You need to use img2img, not text2img. You cannot refine an image by trying to have one single prompt generate your whole image. Try looking at Sebastian Kampf's tutorials.
@rachelleventhal4718 6 months ago
Love the story! Agree that the lip sync was good; how did you get the 3/4-angle lip sync to work?
@hyperbolicfilms 5 months ago
I was using Wav2Lip, which seems to get the lips right even on a profile shot. It takes quite a while to process a shot and upscale it, but great results.
@gamersmania8494 5 months ago
@@hyperbolicfilms How do you upscale the video to decent quality after Wav2Lip degrades it?
@hyperbolicfilms 5 months ago
There is a Wav2Lip + GAN version that does the upscaling and cleaning automatically. I used that in a Google Colab ($14 a month) to be able to run it somewhat fast, but it was usually 10 minutes per clip and a lot of clicking.
@NiccoWargon 6 months ago
This was a good story. It also had the best lip-sync I've seen in any of the competitors yet. Nice work!
@jabelardo 7 months ago
An optimization for your workflow is to generate a batch of images and only perform the high-res fix on the ones you want to keep. That way you avoid the extra time the high-res fix requires for images you are going to discard anyway. The only extra thing you need to do is copy the seeds of the images you want to keep and redo them a second time with the high-res fix enabled (and a low denoise level, so the fix doesn't change the image composition).
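The two-pass trick above relies on Stable Diffusion being deterministic for a fixed seed: rerunning a kept seed with the high-res fix enabled reproduces the same composition. A toy pure-Python sketch of that seed bookkeeping (the `generate` function, the prompt, and the seed values are all hypothetical stand-ins for the real sampler, not actual Stable Diffusion code):

```python
import random

def generate(prompt: str, seed: int, hires: bool = False) -> list[int]:
    """Stand-in for a diffusion sampler: deterministic in (prompt, seed).

    The hires flag refines the draft without changing the underlying
    composition, which is what makes the two-pass workflow safe.
    """
    rng = random.Random(f"{prompt}:{seed}")               # fixed seed -> fixed draw
    composition = [rng.randrange(256) for _ in range(8)]  # "the image"
    if hires:
        # Pretend refinement: scale values without altering structure,
        # mimicking a high-res fix run at a low denoise level.
        return [v * 2 for v in composition]
    return composition

prompt = "comic panel, detective in a rainy alley"  # hypothetical prompt

# Pass 1: cheap low-res drafts for a whole batch of seeds.
drafts = {seed: generate(prompt, seed) for seed in (101, 102, 103, 104)}

# Pass 2: redo only the keepers with the high-res fix enabled;
# reusing the seed reproduces the draft's composition exactly.
keepers = [102, 104]
finals = {seed: generate(prompt, seed, hires=True) for seed in keepers}

assert all(finals[s] == [v * 2 for v in drafts[s]] for s in keepers)
```

The point of the sketch is only the bookkeeping: record every seed in pass 1, then rerun just the chosen seeds with the expensive refinement turned on.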
@hyperbolicfilms 7 months ago
Thank you! My workflow has changed a bit since switching over to SDXL, but that is a tip I can carry over.
@user-pc7ef5sb6x 7 months ago
ControlNet can fix all the consistency problems.
@hyperbolicfilms 7 months ago
I've started using it in the past two weeks, and it definitely can help a lot. Canny in particular is giving me great results for replicating the clothing of the person in my source image.
@martyneary7026 8 months ago
Interesting. I never considered a character voice as 'my voice' intellectual property before. Thanks.
@boscoe5334 8 months ago
That is the reason I use Stable Diffusion. I don't have to pay for Midjourney, and there are so many different styles to choose from, and now the SDXL model. Midjourney has a sameness to its art that lets me pick it out of a crowd. SD has so much going for it, and absolutely NO censorship, which is awesome. I could have some super sexy, steamy scenes. (Add in LoRAs and the ability to use ControlNet to control the pose... it's near perfect for what I like to use it for.)
@Cunegonde_the_one_and_only 8 months ago
No. Just no.
@rishiisrani7783 8 months ago
This was a great tutorial. I was finally able to install Stable Diffusion. Now to figure out how to control the prompts to get what I want. I went to school with you in the 3D Design class. AI would have made life so much easier then. -Rishi
@hyperbolicfilms 8 months ago
Glad you liked it. Getting repeatable results or what you actually want is the hardest part. I tried to redo this page using a different checkpoint that has more of a 3D look, and it changes the details so much that it's incredibly hard to get something consistent. It is a huge jump from what we had back then, but harder to control.
@moneyhunter943 8 months ago
Thanks for testing. Keep uploading, bro, your channel will be huge soon.
@imbrokesoidecidedtomakeayo9451 8 months ago
I want to know how you made both your head movement and body movement work without a suit.
@trash-heap3989 9 months ago
I've been having tons of fun using AI to create images (mainly Nightcafe), but I am hinky about its impact on enterprising artists, as the way it takes images from across the web is clearly ethically sour for the prospect of selling generated images. Even if we got to a point where an AI did not need other images (which brings up other concerns), it would still be creation by proxy, not something genuinely drawn through a person's own effort and time, so it still has a toxic impact in how real artists would be thrown by the wayside. I don't sell or intend to sell my AI images at all, but it's a fun tool, at the least, for people with very weak artistic skills (who may never improve much given their age, or in lieu of mental difficulties, such as myself), letting them create some crazy stuff they can enjoy. Furthermore, for such people who are also a bit poor, this lets them enjoy such lovely craziness without having to pay the more exorbitant prices of real artists, which is also an ethical quandary in some sense, as it implies only the rich or moderately well off should buy art. That is unfair, but also kind of a practical truth for poorer people interested in crazy art unique to them. At any rate, it's a fascinating time in our history and I hope we can find balances, legally and software-wise. Maybe there could be a worldwide mandatory digital watermark, one that can't easily be removed, showing an image was made by AI, though that would be a tricky endeavor to make and implement. Great video!
@hyperbolicfilms 9 months ago
The ethics are very difficult to grapple with. Yesterday I finished a motion capture film where I played 3 characters, two of whom were female. I was able to use Voice.AI to turn my voice into the two women's voices at basically the click of a button. That's great for me and any other solo artist who perhaps can't afford to hire actors but has a computer and access to the software. But as someone who once made a living as a voice actor and on-camera spokesperson, obviously these are the kinds of things that put me out of work. Plus, I doubt the two celebrities whose voices were sampled for the voice software would be happy knowing their voices are being used for all kinds of purposes. I sympathize with the actors on strike and what AI will mean for their careers, but these tools will empower a whole generation to create art and films, and maybe will break the stranglehold that major corporations have on entertainment, in the same way that YouTube democratized small creators finding an audience. I don't have any answers, and every day I see something new and amazing and terrifying. I made this video a year ago, and the technology has moved so far beyond what I was doing with it. Posing characters is easy. Choosing specific styles is a download from a website. And all those websites are working from essentially stolen art and images.
@honzo1078 9 months ago
Nice video, but in the simple style you're using, it would probably be quicker, easier and more controllable to just draw it.
@hyperbolicfilms 9 months ago
For someone who is a better artist, perhaps. I just spent a year drawing 24 pages that were mediocre at best. This methodology would also work for many other art styles that can be loaded as checkpoints into Stable Diffusion.
@imbrokesoidecidedtomakeayo9451 9 months ago
Did you pay for a Move AI sub for this? I'm hoping I can get it free for some quick projects.
@hyperbolicfilms 9 months ago
I still have some minutes from when I was beta testing. If you want to do free mocap, check out Rokoko Studio.
@imbrokesoidecidedtomakeayo9451 8 months ago
@@hyperbolicfilms Heyyy, can you now share how you mixed your head and body motion together?
@imbrokesoidecidedtomakeayo9451 9 months ago
Please can you do a step-by-step tutorial on how you got all of these working together?
@hyperbolicfilms 9 months ago
Trying to do that now. It’s a bit tough because I lose a lot of the screen capture to dropped frames while Unreal is doing any heavy GPU stuff.
@imbrokesoidecidedtomakeayo9451 9 months ago
@@hyperbolicfilms Umm, what system are you using to run Unreal Engine?
@imbrokesoidecidedtomakeayo9451 9 months ago
@@hyperbolicfilms And what iPhones did you use for your Move AI setup?
@hyperbolicfilms 9 months ago
@@imbrokesoidecidedtomakeayo9451 iPhones are two iPhone 8, one 11, and one 14 Pro. Face capture is iPhone 12
@hyperbolicfilms 9 months ago
@@imbrokesoidecidedtomakeayo9451 It's an Alienware i9 3080 from 2020.
@Hencekevin 9 months ago
I have a question: I have two characters and I want Midjourney to make scenes with them, but it changes the style of the characters all the time. How do you maintain the style and the characters across images?
@hyperbolicfilms 9 months ago
It is a very difficult thing to do well. I generally have used the names of actors in my prompts to get some consistency, then describe the clothes, and generate a lot of images. The best research on the subject of consistency is by John Walter on Medium. He has many articles on the subject: medium.com/@johnwalter-counsellor
@yaellichaa5169 9 months ago
Love it!!!
@imbrokesoidecidedtomakeayo9451 10 months ago
I feel you would be able to pull off Dutch van der Linde in RDR 2 🤔
@Strawhatshinobi 10 months ago
Looks great! I feel like my beard was messing with capturing my lip movements properly. What do you think?
@hyperbolicfilms 10 months ago
Could be. The first test I did seemed to have better lip movement, but I was also making bigger movements. Maybe overacting helps when you have a beard.
@donelkingii3738 10 months ago
Nice!
@davidMRZ 10 months ago
Great job mate, how many phones did you use?
@hyperbolicfilms 10 months ago
3. I shot with 4 (two 8, an 11, and a 14), but the 14 was too close and at a funny angle, so I removed it from the final mocap calculation. I’m glad Move lets you do that so you can salvage a bad calibration.
@davidMRZ 10 months ago
@@hyperbolicfilms Thank you for the explanation, and great job my friend.
@aimadbennini279 10 months ago
Where did you get this helmet?
@hyperbolicfilms 10 months ago
@@aimadbennini279 You can get all the pieces from Amazon. The breakdown of how it was made is in this video: kzbin.info/www/bejne/iHqtl4elj7uqn5I Fast-forward to 7 minutes in. The hardest part is that some airsoft helmets have the NVG mount on the front, but they are not measured well, so it's very hard to fit the GoPro mount into it.
@dedead76 10 months ago
This video is old now but still impressive! I've tried to do the same, but outdoors and offline, using BlendARtrack (I'm on Android, I haven't tried it on iPhone yet). I was close to a good result, but even if I turn off the stabilization on my different cameras, it's not perfect :( I think the problem is that even with IBIS off, the sensor still moves a bit and throws the tracking off slightly... Did you try doing offline/outdoor tracking? On iPhone there is also the CamtrackAR app. See you!