This is a super impressive demo. How did you achieve such good lipsync?
@CutsceneArtist 1 day ago
TY, I use an iPhone to transmit the blendshapes. (there is a 0.3 sec delay, which I have fixed in the video edit by shifting my audio)
@quintonrichards4805 2 days ago
I watched your photos-to-art video. I would love to see a video on how to do photos to line art! Is there somewhere I can message you for more advice on turning photos into line art?
@CutsceneArtist 1 day ago
photo to line art is on my shortlist of videos to make next (has been requested before)... I will prioritize it. Lots of helpful people at the Draw Things Discord. Should be a permanent invite on DrawThings.ai
@0xnpctim 11 days ago
This is a really good video. Thank you!
@juanesemamusic9339 12 days ago
I love how the voice is also AI-made
@kjton 16 days ago
Does MSTY unlock all AI models (no API keys needed) when you pay for lifetime?
@CutsceneArtist 12 days ago
paying to support MSTY dev is not the same as subscribing to AI services. There are some perks for paying, but I have not done it yet.
@theexchipmunk 21 days ago
1:10 When you ask for a dolphin but get a cartoon Ichthyosaur instead.
@konzeptzwei 23 days ago
Is it also possible to map MIDI inputs to facial expressions? Like using a fader for opening eyes, mouths, etc.
@aburmese2989 24 days ago
As someone who can't buy expensive GPUs, this looks promising and hopeful. Any future plans to compare other recent options for AI video generation?
@theexchipmunk 24 days ago
The thing on the preview is so obvious, but I didn't consider it till now, because I am just so used to having no real idea what the AI is doing. So it was always a guessing game. But having that pointed out, I instantly realised I could have cut a third of the time some of my prompts took to generate.
@CutsceneArtist 24 days ago
YAY! you watched to the end of the video! lol
@theexchipmunk 15 days ago
@@CutsceneArtist Well, this is a pretty informative and helpful video, especially as there isn't a whole lot on Draw Things that isn't super disjointed and hidden in random subreddits. This one is probably the best overall video on the basics of Draw Things around right now. So naturally I did watch it to the end. :D
@niyaspace 1 month ago
Thank you from France for your tutorial ❤️🇨🇵
@RYOkEkEN 1 month ago
your explanations are always the easiest to follow 😻
@Stef_frenchtouch 1 month ago
Two things: is it possible to re-imagine with Draw Things? And is it possible to have something like Generative Fill in Photoshop?
@brianchristine9301 1 month ago
You got him dancing every time he speaks, love that! I've been enjoying your DrawThings videos for a few weeks, and finally subbed tonight. Thanks for this entertaining video!
@quintonrichards4805 1 month ago
Do you have any tutorials for image to line art/coloring page?
@kewldrawerings 1 month ago
Thank You!
@seencapone 1 month ago
Thanks for this tutorial. Question: have you tried exporting the character as .fbx to bring into a third party 3D program for ARKit face capture retargeting? The big problem with CC4 is the Eye_Look expressions, since the eyeball rotation is driven totally by the bone. I haven't been able to dissociate it from that. I have tried all these tweaks and more, and they always work internally within CC4 and iClone, as they do for you here -- but when I bring the character into Maya, the blendshapes don't work, or only a few of them work and the rest are broken -- but there's no pattern to it. It's totally random. I found it faster just to edit the blendshapes themselves within Maya, but then I have to do that on each character individually.
@anuragkalra7511 1 month ago
Great video and excellent explanation!
@Calverhall 1 month ago
What about mixing pinpointing models to another checkpoint?
@CutsceneArtist 1 month ago
I'm not sure what a pinpointing model is...?
@Calverhall 1 month ago
@@CutsceneArtist lol, sorry, it's a weird autocorrect! I meant inpainting
@CutsceneArtist 1 month ago
I will check... I remember LiuLiu said it is possible, but I'm not sure how...
@michaelredman8179 2 months ago
I am using a desktop Mac and can’t figure out how to zoom. Please help
@CutsceneArtist 1 month ago
I have not found an answer for this. My mouse/trackball has a scroll wheel.
@DamianWampler 2 months ago
I just subscribed. Nice vid! Question - how do I get more variation between images in a batch? Is that just text guidance?
@CutsceneArtist 2 months ago
There is a Wildcards script in the DrawThings community scripts. It lets you randomly swap words from a list into the prompt.
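The idea behind that kind of wildcard swap, as a minimal sketch (not the actual community script; the `__animal__`/`__style__` placeholders and word lists are made up for illustration):

```python
import random

# Hypothetical wildcard lists; a real script would read these from its own files.
WILDCARDS = {
    "animal": ["dolphin", "red panda", "barn owl"],
    "style": ["watercolor", "flat vector art", "film still"],
}

def expand_wildcards(prompt: str) -> str:
    """Replace each __name__ token with a random entry from its list."""
    for name, options in WILDCARDS.items():
        token = f"__{name}__"
        while token in prompt:
            prompt = prompt.replace(token, random.choice(options), 1)
    return prompt

print(expand_wildcards("a __style__ illustration of a __animal__ dancing"))
# e.g. "a watercolor illustration of a red panda dancing"
```

Each item in a batch gets a different random pick, which is what produces the extra variation between images.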
@true_user 2 months ago
How do you pull a depth map out of a picture using Draw Things?
@CutsceneArtist 2 months ago
Bottom right of the Image Canvas is a 'Layers' button. The first menu switches to viewing the layer instead of the image (if you have a depth map loaded, it will show here). The same menu goes deeper with 'Load Layer...', where you will find 6-7 options for loading an existing depth map, or for extracting depth from Files, Paste, Photos, or the Canvas image.
@true_user 2 months ago
@@CutsceneArtist How do you get a depth map out of the picture? What model or ControlNet do you use to pull a depth map from a regular picture?
@true_user 2 months ago
Simply put, how do you generate a depth map from any picture so it can be reused later?
@CutsceneArtist 2 months ago
DrawThings will update itself (download the current image-to-depth model) the first time you extract a depth map. After the model is installed, it will operate invisibly whenever the depth extraction process is needed (unless you manually add a depth map to the Depth layer).
@true_user 2 months ago
@@CutsceneArtist It doesn't need to be downloaded separately and works by default. But you can't specify the amount of detail or the black-and-white gradient in space (for example, the gradation from black to white, or isolating only an object). There seem to be ways to do this, but how do you do it in Draw Things?
@flethacker 2 months ago
You can set the system prompt to make the bot act however you want. If you want him to be positive, put it in the system prompt.
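For example, with a locally served model through the Ollama Python client (a minimal sketch, assuming an Ollama backend is running and a `llama3.1` tag is pulled; the persona text is just an illustration):

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# The system message sets the persona; the user message is the actual question.
response = ollama.chat(
    model="llama3.1",  # assumed model tag; use whichever model you have pulled
    messages=[
        {"role": "system",
         "content": "You are a cheerful, upbeat robot companion. Stay positive "
                    "and encouraging, and keep answers to one or two sentences."},
        {"role": "user", "content": "My render came out blurry again."},
    ],
)
print(response["message"]["content"])
```

The same idea applies to any chat backend that accepts a system role: the persona lives in the system message, not in each user prompt.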
@pmarreck 2 months ago
Also, any word on a controlnet for SD3 to do IP Adapter Plus Face?
@CutsceneArtist 2 months ago
IP Adapter FaceID for Kolors should be released in a few days (currently beta testing)...
@pmarreck 2 months ago
what is the "zavy-ctflt" part of the text prompt?
@CutsceneArtist 2 months ago
Zavy is on Civit.ai and makes great art LoRAs. The last bit is the trigger for this particular LoRA, called 'cute flat'... a search should bring it up on Civit.
@pmarreck 2 months ago
Is this "CosmicDynavision" model one you installed or mixed yourself? I can't find it but it looks great. Thanks for the help. I'm messing with SD3 and the initial output was... blobs... but after watching this and another tutorial the output is MUCH better (albeit not as good as yours yet)
@CutsceneArtist 2 months ago
It is a mix of Cosmicman and DynaVision... I have a recent video about model mixing in DrawThings.
@m0kiiy 2 months ago
03:17 What is that style of photos called? Which prompt did you use for it? I've been searching for it for a long time. Would really appreciate an answer 😁
@hasneetdhalor3229 2 months ago
you are my Savior Angel... God Bless You 🥺🙇♂
@amandamate9117 2 months ago
This was amazing! Can you also make one about using FLUX dev with realism_lora? Would be so good
@Yaroslav-c7g 2 months ago
Hi! Cool avatar 🤝
@AMA-AYA 2 months ago
I use a MacBook Air M2. Why can't I run it?
@kbllr.graphics 2 months ago
Bravo!!! <3 Fell immediately for it! I am trying to do a speech-to-speech pipeline with a vanilla frontend and an MLX Python backend that uses Coqui TTS, MLX Whisper, and an MLX Llama 3.1 model, as well as VAD for word recognition, but my struggle comes with lip sync, although now the idea of a robot is delightful, since it avoids that hassle. I do have a couple of options on the test table if you'd like to exchange ideas! Love your channel
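The text side of a pipeline like that can stay quite small. A minimal, unverified sketch of the STT -> LLM -> TTS loop with mlx-whisper, mlx-lm, and Coqui TTS (the model names are assumptions; VAD, streaming, and lip sync are left out):

```python
import mlx_whisper                      # pip install mlx-whisper
from mlx_lm import load, generate       # pip install mlx-lm
from TTS.api import TTS                 # pip install TTS (Coqui)

# Assumed model choices; swap in whatever you actually run locally.
llm, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")
tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")

def speech_to_speech(wav_in: str, wav_out: str) -> str:
    # 1. Speech -> text
    heard = mlx_whisper.transcribe(wav_in)["text"]

    # 2. Text -> reply (the chat template turns the message list into a prompt string)
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": heard}],
        tokenize=False, add_generation_prompt=True,
    )
    reply = generate(llm, tokenizer, prompt=prompt, max_tokens=200)

    # 3. Reply -> speech
    tts.tts_to_file(text=reply, file_path=wav_out)
    return reply

# speech_to_speech("question.wav", "answer.wav")
```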
@CutsceneArtist 2 months ago
Yeah! I'd love to hear what you are trying!
@MarkBowenPiano 2 months ago
You mentioned watching the previews to see if an image is being overcooked and maybe you don't need as many steps to create an image? Does that mean you need to sit there and count how many preview images appear before you get a good image or does it tell you somewhere in the interface which step you're currently on? Thank you.
@CutsceneArtist 2 months ago
You don't need to count steps. It is an objective way to know if/when the render is all the way 'cooked'. But like grilling a steak, sometimes you prefer it rare, or prefer it more charred... Parameters will be different for every model, every prompt (character length), and whether you want a subtle realistic 'photo' or a bold illustration. It is better to learn when to manipulate the controls than to leave every render at the same generic settings. You don't need to guess an exact parameter; understanding when a setting needs 'more' or 'less' is enough to go in the right direction. It is an art, like photography. Your 'eye' will develop as you spend more time with a specific model and learn where its sweet spots are for your ideas.
@saxtant 2 months ago
Great work... I've been using Whisper v3 large and XTTS v2 with Llama 3.1 8B. I found Ollama's model versions inferior to the originals; its 16-bit model was not the same quality as running something like vLLM at 16-bit. That matters because my chatbot keeps its own memory through prompting, which requires low hallucinations. I'm going to check out Unity assets, man, thanks! Try Llama 3.1 Storm 8B through vLLM and make your own endpoint.
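If you go the vLLM route, the server exposes an OpenAI-compatible endpoint, so the client code stays tiny. A minimal sketch (the port, model ID, and prompts are placeholders, not anything from the video):

```python
from openai import OpenAI  # pip install openai; vLLM serves an OpenAI-compatible API

# Point the client at the locally hosted vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="akjindal53244/Llama-3.1-Storm-8B",  # whatever model ID the server was launched with
    messages=[
        {"role": "system", "content": "You are a concise assistant with a reliable memory."},
        {"role": "user", "content": "Summarize what we discussed about lip sync."},
    ],
    max_tokens=200,
)
print(completion.choices[0].message.content)
```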
@CutsceneArtist 2 months ago
Wow, thank you for the suggestions! I know I need to spend some time trying different models. Your tips are appreciated!
@saxtant 2 months ago
@@CutsceneArtist cheers!
@ZirrelStudios 2 months ago
AI is so crap 🤣🤣
@CutsceneArtist 2 months ago
🤣 FACTS!
@MarkBowenPiano 2 months ago
Really nice video, well done! Mind if I ask which Mac you are using there?
@CutsceneArtist 2 months ago
I have an M1 Mac Studio, 64GB
@MarkBowenPiano 2 months ago
@@CutsceneArtist Thank you. Can I ask if the generations in this video were sped up at all, or are they real time? Thank you.
@CutsceneArtist 2 months ago
I sped up the video to fit the narration.
@MarkBowenPiano 2 months ago
@@CutsceneArtist Ah okay thank you. I thought it was a bit fast even for a Mac Studio ;-)
@HepcatHarmonies 2 months ago
This is looking good!
@CutsceneArtist 2 months ago
Thank you!
@aquiatic 2 months ago
Fun video! What a magical combination of tech. So many possibilities...
@RYOkEkEN 2 months ago
niiiiiiiiice 😍
@trancemuter 2 months ago
Slime is the best and cheapest
@MarkBowenPiano 2 months ago
Is the audio out of sync for anyone else about halfway through the movie? Nothing she's saying lines up with what she's doing. Seems to be about 30 seconds out 😞
@CutsceneArtist 2 months ago
I messed up the narration, sorry... The description and the video are both correct, but you're right: I am not showing the steps as they are being talked about.
@MarkBowenPiano 2 months ago
@@CutsceneArtist Ah thank you. Thought I was just tired when I was first watching it ;-) No problem at all. A very easy thing to do. Thank you for the fantastic videos though. All the very best.
@maisoncréation-p6o 2 months ago
Looks amazing! What do you mean by overcooked and how are you checking this? Do you have a video explaining this?
@faya-patterns 2 months ago
Thank you soooo much for all your content. As a Mac user, I am really thankful for your great videos.
@parrydigm 2 months ago
Great primer - best I've seen. Nice one!
@tichcang 2 months ago
Your videos are so helpful. I wonder what type of ControlNet preprocessor to select for sketch r256 when importing?
@CutsceneArtist 2 months ago
Sketch and Recolor set the preprocessor type as INPAINT. (It is not at all obvious they would be inpaint)
@obscuremusictabs5927 3 months ago
EDIT: Upgrading macOS from Ventura to Sonoma seems to have fixed the issue. DrawThings gave about 10 great Flux model outputs and then the program quit unexpectedly. Restarted, and now it only gives outputs of greyed-out blocks. Restarted the computer. Reset default settings. Still grey blocks. Is this a common issue with a simple fix? This is the first program I have ever had be permanently disabled by a crash.
@CutsceneArtist 3 months ago
I'm not the dev. Try deleting the file from your Applications folder and redownloading (all the models and project history are on your root drive and won't be erased). If your copy of Flux was imported through an HF download, you may need to trash it and try one of the internally downloadable versions so it gets imported with any developer bells and whistles pre-programmed.
@obscuremusictabs5927 3 months ago
@@CutsceneArtist Thanks for the quick response. Is this a common and well known issue with a known cause that I can avoid in the future? I only had about 28GB free space on my hard drive. Was thinking maybe that was the issue.
@CutsceneArtist 3 months ago
It's not a known issue to me... the part where it worked and then stopped... idk, that sounds like a memory leak or something? I guess standard advice: make sure you have the latest version of DT.
@obscuremusictabs5927 3 months ago
@@CutsceneArtist How can we get access to the Discord? All the links I find on Reddit are expired. Maybe they can help with my issue. I deleted the app and reinstalled, but I can tell it wasn't a true delete, as it immediately brought up my previous settings and the problems persist. Seems very strange that a single crash can permanently disable a program that had been working properly. Such a shame too, as it is an amazing tool.
@CutsceneArtist 3 months ago
this link should never expire: discord.gg/Q3h4kvuqv8
@robbiepacheco753 3 months ago
Cheers for such a helpful tutorial. I had tried to do a local install of FLUX on my Mac M1 but could not get it to work. Then I used the Hyper 8 LoRA with FLUX, and to my amazement it works rather well with 10 steps! Still takes around 10 minutes per render, but the quality is great : )
@Beauty.and.FashionPhotographer 3 months ago
Could a new model be mixed that excludes all cartoon/anime results and only retains photo-quality results?
@CutsceneArtist 3 months ago
I would prompt for 'cinematography of...', and don't go too high with CFG/Text Guidance. There are a few cinematography models I like that have been trained on photoreal imagery (presumably movie stills).
@Beauty.and.FashionPhotographer 3 months ago
V14 and Florence 2 (in ComfyUI) seem to be the go-to for good image-to-prompt results.
@Beauty.and.FashionPhotographer 3 months ago
Can Claude and Grok be used?
@CutsceneArtist 3 months ago
I think so...? I haven't tried, but I assume they all use similar APIs...