Msty: Local AI Simplified · 6:21 · 3 months ago
DRAW THINGS Detailer for Faces · 8:36
DRAW THINGS: New in 2024 · 25:31 · 9 months ago
MOCAP Update 2023 · 23:55 · 1 year ago
Overtone Text-to-Speech in Unity3D · 24:43
I taught ChatGPT to write Comedy · 33:58
Open AI's Alignment Problem · 28:38 · 1 year ago
AI MOCAP TOOLS to Watch in 2023 · 17:33
No AI is a SCAM · 10:15 · 1 year ago

Comments
@LamNguyen-oe7zf 1 day ago
This is a super impressive demo. How did you achieve such good lip sync?
@CutsceneArtist 1 day ago
TY! I use an iPhone to transmit the blendshapes. (There is a 0.3-second delay, which I fixed in the video edit by shifting my audio.)
@quintonrichards4805 2 days ago
I watched your photos-to-art video, and I would love to see a video on how to do photos to line art! Is there somewhere I can message you for some more advice on turning photos into line art?
@CutsceneArtist 1 day ago
Photo to line art is on my shortlist of videos to make next (it has been requested before)... I will prioritize it. There are lots of helpful people at the Draw Things Discord; there should be a permanent invite on DrawThings.ai
@0xnpctim 11 days ago
This is a really good video. Thank you!
@juanesemamusic9339 12 days ago
I love how the voice is also AI-made
@kjton 16 days ago
Does Msty unlock all AI models (no API keys needed) when you pay for the lifetime license?
@CutsceneArtist 12 days ago
Paying to support the Msty dev is not the same as subscribing to AI services. There are some perks for paying, but I have not done it yet.
@theexchipmunk 21 days ago
1:10 When you ask for a dolphin but get a cartoon ichthyosaur instead.
@konzeptzwei 23 days ago
Is it also possible to map MIDI inputs to facial expressions? Like using a fader for opening eyes, mouths, etc.
@aburmese2989 24 days ago
This looks promising and hopeful for someone who can't buy expensive GPUs. Any future plans to compare the latest options for AI video generation?
@theexchipmunk 24 days ago
The thing about the preview is so obvious, but I didn't consider it until now, because I am just so used to having no real idea what the AI is doing. So it was always a guessing game. But having that pointed out, I instantly realised I could have cut a third of the time some of my prompts took to generate.
@CutsceneArtist 24 days ago
YAY! You watched to the end of the video! lol
@theexchipmunk 15 days ago
@CutsceneArtist Well, this is a pretty informative and helpful video, especially as there isn't a whole lot on Draw Things that isn't super disjointed and hidden in random subreddits. This one is probably the best overall video on the basics of Draw Things around right now. So naturally I did watch it to the end. :D
@niyaspace 1 month ago
Thank you from France for your tutorial ❤️🇨🇵
@RYOkEkEN 1 month ago
your explanations are always the easiest to follow 😻
@Stef_frenchtouch 1 month ago
Two things: is it possible to re-imagine with Draw Things? And is it possible to have something like Generative Fill from Photoshop?
@brianchristine9301 1 month ago
You got him dancing every time he speaks, love that! I've been enjoying your DrawThings videos for a few weeks, and finally subbed tonight. Thanks for this entertaining video!
@quintonrichards4805 1 month ago
Do you have any tutorials for image to line art/coloring page?
@kewldrawerings 1 month ago
Thank You!
@seencapone 1 month ago
Thanks for this tutorial. Question: have you tried exporting the character as .fbx to bring into a third party 3D program for ARKit face capture retargeting? The big problem with CC4 is the Eye_Look expressions, since the eyeball rotation is driven totally by the bone. I haven't been able to dissociate it from that. I have tried all these tweaks and more, and they always work internally within CC4 and iClone, as they do for you here -- but when I bring the character into Maya, the blendshapes don't work, or only a few of them work and the rest are broken -- but there's no pattern to it. It's totally random. I found it faster just to edit the blendshapes themselves within Maya, but then I have to do that on each character individually.
@anuragkalra7511 1 month ago
Great video and excellent explanation!
@Calverhall 1 month ago
What about mixing pinpointing models to another checkpoint?
@CutsceneArtist 1 month ago
I'm not sure what a pinpointing model is...?
@Calverhall 1 month ago
@CutsceneArtist lol, sorry, it's a weird autocorrect! I meant inpainting
@CutsceneArtist 1 month ago
I will check... I remember LiuLiu said it is possible, but I'm not sure how...
@michaelredman8179 2 months ago
I am using a desktop Mac and can't figure out how to zoom. Please help.
@CutsceneArtist 1 month ago
I have not found an answer for this. My mouse/trackball has a scroll wheel.
@DamianWampler 2 months ago
I just subscribed. Nice vid! Question: how do I get more variation between images in a batch? Is that just text guidance?
@CutsceneArtist 2 months ago
There is a Wildcards script among the Draw Things community scripts. It lets you swap in random words from a list within the prompt.
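For anyone wondering what that looks like under the hood, here is a minimal sketch of the wildcard idea in plain Python; the `__hair__` / `__place__` tokens and word lists are invented for illustration, and the actual community script's syntax may differ.

```python
import random

# Minimal sketch of wildcard substitution (not the actual Draw Things script):
# each __name__ token in the prompt is replaced by a random entry from a
# matching word list before the image is generated.
WILDCARDS = {
    "hair": ["red hair", "silver hair", "curly black hair"],
    "place": ["a neon-lit alley", "a foggy harbor", "a mountain meadow"],
}

def expand(prompt: str) -> str:
    for name, options in WILDCARDS.items():
        token = f"__{name}__"
        while token in prompt:
            # Replace one occurrence at a time so repeated tokens can differ.
            prompt = prompt.replace(token, random.choice(options), 1)
    return prompt

if __name__ == "__main__":
    for _ in range(3):
        print(expand("portrait of a woman with __hair__, standing in __place__"))
```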
@true_user 2 months ago
How do I pull a depth map out of a picture using Draw Things?
@CutsceneArtist 2 months ago
At the bottom right of the Image Canvas is a 'Layers' button. The first menu switches the view to the layer instead of the image (if you have a depth map loaded, it will show here). The same menu goes deeper with 'Load Layer...', where you will find 6-7 options for loading an existing depth map, or extracting depth from Files, Paste, Photos, or the Canvas image.
@true_user 2 months ago
@CutsceneArtist How do I get a depth map out of a picture? Which model or ControlNet should I use to pull a depth map from a regular picture?
@true_user 2 months ago
Simply put, how do I generate a depth map from any picture so I can reuse it later?
@CutsceneArtist 2 months ago
DrawThings will update itself (download the current image-to-depth model) the first time you extract a depth map. After the model is installed, it will operate invisibly whenever the depth extraction process is needed (unless you manually add a depth map to the Depth layer).
@true_user 2 months ago
@CutsceneArtist It can't be downloaded separately, and it works by default. You can't control the amount of detail or the black-and-white gradient in the depth (for example, the gradation of black and white, or isolating only an object). There seem to be ways to do this, but how do you do it in Draw Things?
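For readers who just want a reusable depth map file independent of Draw Things, one option is to run a monocular depth model yourself. Here is a minimal sketch using the Hugging Face transformers depth-estimation pipeline; the `Intel/dpt-large` checkpoint and file names are only examples, and other depth models will give different levels of detail.

```python
from PIL import Image
from transformers import pipeline

# Sketch: estimate a depth map from any picture and save it as a greyscale
# image, so it can be reused later (e.g. loaded manually into a Depth layer).
# The model checkpoint is an example; other monocular depth models work too.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("input.jpg")
result = depth_estimator(image)

# The pipeline returns a PIL image under the "depth" key.
result["depth"].save("input_depth.png")
```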
@flethacker 2 months ago
You can set the system prompt to make the bot act however you want. If you want him to be positive, put it in the system prompt.
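For anyone wiring this up locally, here is a rough sketch of what a system prompt looks like with an Ollama-style chat call; the model name and persona text are placeholders, not what the video uses.

```python
import ollama  # assumes a local Ollama server is running

# The system prompt defines the bot's persona; every user turn is then
# answered in that voice. Model name and persona text are placeholders.
messages = [
    {
        "role": "system",
        "content": "You are a relentlessly upbeat robot co-host. "
                   "Stay positive and keep answers under two sentences.",
    },
    {"role": "user", "content": "My render failed again."},
]

response = ollama.chat(model="llama3.1", messages=messages)
print(response["message"]["content"])
```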
@pmarreck 2 months ago
Also, any word on a ControlNet for SD3 to do IP Adapter Plus Face?
@CutsceneArtist 2 months ago
IP Adapter FaceID for Kolors should be released in a few days (currently beta testing)...
@pmarreck 2 months ago
What is the "zavy-ctflt" part of the text prompt?
@CutsceneArtist 2 months ago
Zavy is on Civit.ai and makes great art LoRAs. The last bit is the trigger for this particular LoRA, called 'cute flat'... a search should bring it up on Civit.
@pmarreck 2 months ago
Is this "CosmicDynavision" model one you installed or mixed yourself? I can't find it, but it looks great. Thanks for the help. I'm messing with SD3 and the initial output was... blobs... but after watching this and another tutorial the output is MUCH better (albeit not as good as yours yet)
@CutsceneArtist 2 months ago
It is a mix of Cosmicman and DynaVision... I have a recent video about model mixing in DrawThings.
@m0kiiy 2 months ago
03:17 What is that style of photos called? Which prompt did you use for it? I've been searching for it for a long time. Would really appreciate an answer 😁
@hasneetdhalor3229 2 months ago
you are my Savior Angel... God Bless You 🥺🙇‍♂
@amandamate9117 2 months ago
This was amazing! Can you also make one about using FLUX dev with realism_lora? Would be so good.
@Yaroslav-c7g 2 months ago
Hi! Cool avatar 🤝
@AMA-AYA 2 months ago
I use a MacBook Air M2. Why can't I run it?
@kbllr.graphics 2 months ago
Bravo!!! <3 I fell for it immediately! I am trying to build a speech-to-speech pipeline with a vanilla frontend and an mlx-python backend that uses coquiTTS, mlx whisper, and mlx llama3.1 models, as well as VAD for word recognition, but my struggle comes with lip sync, although now the idea of a robot is delightful for avoiding the hassle. I do have a couple of options on the test table if you'd like to exchange ideas! Love your channel.
@CutsceneArtist 2 months ago
Yeah! I'd love to hear what you are trying!
@MarkBowenPiano 2 months ago
You mentioned watching the previews to see if an image is being overcooked, and that maybe you don't need as many steps to create an image? Does that mean you need to sit there and count how many preview images appear before you get a good image, or does it tell you somewhere in the interface which step you're currently on? Thank you.
@CutsceneArtist 2 months ago
You don't need to count steps. It is an objective way to know if/when the render is all the way 'cooked'. But like grilling a steak, sometimes you prefer it rare or prefer it more charred... Parameters will be different for every model, every prompt (character length), and whether you want a subtle realistic 'photo' or a bold illustration. It is better to learn when to manipulate the controls than to leave every render at the same generic settings. You don't need to guess an exact parameter; it's enough to understand when a setting needs 'more' or 'less'... that's enough to go in the right direction. It is an art, like photography. Your 'eye' will develop as you spend more time with a specific model and learn where its sweet spots are for your ideas.
@saxtant 2 months ago
Great work... I've been using Whisper v3 large and XTTS v2 with Llama 3.1 8B. I found Ollama's model versions inferior to the originals, because its 16-bit model was not the same quality as running something like vLLM at 16 bit. That matters because my chatbot keeps its own memory through prompting, which requires low hallucinations. I'm going to check out Unity assets, man, thanks! Try Llama 3.1 Storm 8B through vLLM and make your own endpoint.
@CutsceneArtist 2 months ago
Wow, thank you for the suggestions! I know I need to spend some time trying different models. Your tips are appreciated!
@saxtant 2 months ago
@CutsceneArtist cheers!
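For anyone who wants to follow the "make your own endpoint" suggestion above, here is a rough sketch under the assumption that you serve the model with vLLM's OpenAI-compatible server and call it with the standard openai client; the model ID and port are examples only.

```python
# In a terminal, serve the model with vLLM's OpenAI-compatible server, e.g.:
#   vllm serve akjindal53244/Llama-3.1-Storm-8B --port 8000
# (Model ID and port are examples, not a recommendation.)

from openai import OpenAI

# Point the standard OpenAI client at the local vLLM endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="akjindal53244/Llama-3.1-Storm-8B",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Give me three tips for reducing hallucinations."},
    ],
)
print(reply.choices[0].message.content)
```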
@ZirrelStudios 2 months ago
AI is so crap 🤣🤣
@CutsceneArtist 2 months ago
🤣 FACTS!
@MarkBowenPiano 2 months ago
Really nice video, well done! Mind if I ask which Mac you are using there?
@CutsceneArtist 2 months ago
I have an M1 Mac Studio, 64GB
@MarkBowenPiano 2 months ago
@CutsceneArtist Thank you. Can I ask if the generations in this video were sped up at all, or are they real time? Thank you.
@CutsceneArtist 2 months ago
I sped up the video to fit the narration.
@MarkBowenPiano 2 months ago
@CutsceneArtist Ah okay, thank you. I thought it was a bit fast even for a Mac Studio ;-)
@HepcatHarmonies 2 months ago
This is looking good!
@CutsceneArtist 2 months ago
Thank you!
@aquiatic 2 months ago
Fun video! What a magical combination of tech. So many possibilities...
@RYOkEkEN 2 months ago
niiiiiiiiice 😍
@trancemuter 2 months ago
Slime is the best and cheapest
@MarkBowenPiano 2 months ago
Is the audio out of sync for anyone else about halfway through the video? Nothing she's saying lines up with what she's doing. It seems to be about 30 seconds out 😞
@CutsceneArtist 2 months ago
I messed up the narration, sorry... The description and the video are both correct, but you are right, I am not showing the steps as they are talked about.
@MarkBowenPiano 2 months ago
@CutsceneArtist Ah, thank you. I thought I was just tired when I was first watching it ;-) No problem at all. A very easy thing to do. Thank you for the fantastic videos though. All the very best.
@maisoncréation-p6o 2 months ago
Looks amazing! What do you mean by overcooked and how are you checking this? Do you have a video explaining this?
@faya-patterns 2 months ago
Thank you soooo much for all your content. As a Mac user, I am really thankful for your great videos.
@parrydigm 2 months ago
Great primer - best I've seen. Nice one!
@tichcang 2 months ago
Your videos are so helpful. I wonder what type of ControlNet preprocessor to select for Sketch r256 when importing?
@CutsceneArtist 2 months ago
Sketch and Recolor set the preprocessor type as INPAINT. (It is not at all obvious that they would be inpaint.)
@obscuremusictabs5927 3 months ago
EDIT: UPGRADING THE MAC OS FROM VENTURA TO SONOMA SEEMS TO HAVE FIXED THE ISSUE. Draw Things gave about 10 great Flux model outputs and then the program quit unexpectedly. Restarted, and now it only gives outputs of greyed-out blocks. Restarted the computer. Reset default settings. Still grey blocks. Is this a common issue with a simple fix? This is the first program I have ever had be permanently disabled by a crash.
@CutsceneArtist 3 months ago
I'm not the dev. Try deleting the file from your application folder and redownloading (all the models and project history are on your root drive and won't be erased). If your copy of Flux was imported through an HF download, you may need to trash it and try one of the internally downloadable versions so it gets imported with any developer bells and whistles pre-programmed.
@obscuremusictabs5927 3 months ago
@CutsceneArtist Thanks for the quick response. Is this a common and well-known issue with a known cause that I can avoid in the future? I only had about 28GB of free space on my hard drive. Was thinking maybe that was the issue.
@CutsceneArtist 3 months ago
It's not a known issue to me... the part where it worked and then stopped... idk, that sounds like a memory leak or something? I guess standard advice: make sure you have the latest version of DT.
@obscuremusictabs5927 3 months ago
@CutsceneArtist How can we get access to the Discord? All the links I find on Reddit are expired. Maybe they can help with my issue. I deleted the app and reinstalled, but I can tell it wasn't a true delete as it immediately brought up my previous settings, and the problems persist. It seems very strange that a single crash can permanently disable a program that had been working properly. Such a shame too, as it is an amazing tool.
@CutsceneArtist 3 months ago
This link should never expire: discord.gg/Q3h4kvuqv8
@robbiepacheco753 3 months ago
Cheers for such a helpful tutorial. I had tried to do a local install of FLUX on my Mac M1 but could not get it to work. Then I implemented the use of the Hyper 8 LoRA with FLUX, and to my amazement it works rather well with 10 steps! Still takes around 10 minutes per render, but the quality is great : )
@Beauty.and.FashionPhotographer 3 months ago
Could a new model be mixed that excludes all cartoon/anime results and retains only photo-quality results?
@CutsceneArtist 3 months ago
I would prompt for 'cinematography of...', and don't go too high with CFG/Text Guidance. There are a few cinematography models I like that have been trained on photoreal imagery (presumably movie stills).
@Beauty.and.FashionPhotographer 3 months ago
V14 and Florence 2 (in ComfyUI) seem to be go-to options for good image-to-prompt results.
@Beauty.and.FashionPhotographer 3 months ago
Can Claude and Grok be used?
@CutsceneArtist 3 months ago
I think so...? I haven't tried, but I assume they all use a similar API...
@CrustyCowboy 3 months ago
What are you using to make the talking avatar?
@CutsceneArtist 3 months ago
This was iClone.