Incredible, thank you ❤ Excited to experiment with TD + SD!
@rahsheedamcrae2381 · 2 years ago
This is revolutionary, can’t wait for more 🤩
@benchaykin4286 · 2 years ago
Fantastic work. So exciting to see such an AI-integrated TD project
@plyzitron · 2 years ago
Super fascinating for this AI integration in TD, thanks so much!
@therob3672 · 2 years ago
Brilliant work Torin, I will share the news on your amazing integration and tools and grab them from your Patreon. It's especially impressive that you made it so straightforward with the use of the API service, and created an excellent way to produce a constant animation between the generated frames. Using these as textures composited into the base color of a PBR texture on 3D objects also generated by AI would be an interesting way for this to evolve. What an incredible holiday gift for the community! ❤
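The "constant animation between generated frames" described above is essentially a crossfade between the last generated image and the newest one. A minimal NumPy sketch of that blend (the array stand-ins and `crossfade` helper are illustrative, not the project's actual code):

```python
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Linearly blend two images: t=0 returns frame_a, t=1 returns frame_b."""
    t = min(max(t, 0.0), 1.0)  # clamp so out-of-range t doesn't overshoot
    return (1.0 - t) * frame_a + t * frame_b

# Stand-ins for the previous and newly generated images
a = np.zeros((512, 512, 3))
b = np.ones((512, 512, 3))

# Animate between the two frames over 60 steps
sequence = [crossfade(a, b, i / 59) for i in range(60)]
```

In TouchDesigner the same idea is typically done with a Cross TOP driven by a ramp, but the math is the linear blend above.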
@blankensmithing · 2 years ago
Thanks Rob! I'm glad you've been enjoying the tutorials! Yeah, I think it'd be really interesting to use these to generate HDRI maps, or as a texture map for a 3D model. I should make an example of applying the image output to a 3D model. Looks like Spline added that into their web editor kzbin.info/www/bejne/o5KcYp93apaIgJo
@JDucks77 · 1 year ago
very creative
@Nanotopia · 2 years ago
Amazing! Thank you for sharing this. I wonder if it would be possible to make the particles interactive through webcam or Kinect movement... going to try :)
@blankensmithing · 2 years ago
Hey, glad you're enjoying it! Yes, absolutely, you could do it with both 😁
@smon1127 · 2 years ago
I love you so badly! Thanks for that. Huge fan ❤
@SuzanaLascu · 2 months ago
Regardless of tweaking this patch, I still get tons of black grain on the last null... is it because I'd need a commercial license for that particles_gpu component, or is my i9 chip not up to the task of rendering it?
@GianTJ · 1 year ago
Hey, Torin! This is absolutely stunning... could this potentially be used in a live setting? For example could I get audio in from Ableton Live and then project the reactive visuals in real-time?
@blankensmithing · 1 year ago
Hey Gian, yeah, you could use an Audio Device In CHOP to get the microphone input and map the audio analysis to the particle system.
@bennettgrizzard5527 · 1 year ago
@blankensmithing This is fantastic. Could the image generation be done live as well, so that prompts could be entered during a performance rather than pre-recorded?
@ricardcantm · 1 year ago
Great work bro! Do you know if it can work on Mac machines? I know that there are some spec limitations on those.
@ricardcantm · 1 year ago
Nvm, I just saw that you use a Mac 😅😅
@AnderrGraphics · 2 years ago
Great tutorials, keep up the good work! Is there a way to generate images through this method but with live audio coming from an external device, like a turntable?
@blankensmithing · 2 years ago
Thanks Anderr! Yeah totally, you can use an Audio Device In CHOP instead of an Audio File In CHOP. With that operator you can select your computer's built-in microphone, or if you're able to connect your turntables to your computer through an audio interface, you can select your audio interface.
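Driving the particle system from the audio usually comes down to remapping an analysis value (e.g. RMS level from an Analyze CHOP) into a parameter range. A hedged sketch of that remap in plain Python (the parameter name and ranges are illustrative; in TD you'd do this with a Math CHOP or an expression):

```python
def remap(value: float, in_lo: float, in_hi: float,
          out_lo: float, out_hi: float) -> float:
    """Remap value from [in_lo, in_hi] into [out_lo, out_hi], clamped."""
    if in_hi == in_lo:
        return out_lo  # degenerate input range
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)
    return out_lo + t * (out_hi - out_lo)

# e.g. drive a hypothetical particle birth rate from a 0..1 RMS level
rms = 0.4
birth_rate = remap(rms, 0.0, 1.0, 50.0, 500.0)
```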
@therob3672 · 2 years ago
Torin, I was wondering if you could use this and computerender to generate images in HD or 4K resolution with Midjourney, DALL·E 2, or Stable Diffusion models, and if the environment could show a count of how many images have been generated, to keep track of the run cost as it accrues?
@stiffyBlicky · 1 year ago
Is it possible to use multiple images as inputs? Maybe like around 30?
@JannatShafiq-fr4lr · 8 months ago
Thank you so much for this amazing video and also for the link to the file. My API component is not working; can you please tell me if it's due to a version difference in TouchDesigner? If so, please tell me which version you used for this.
@blankensmithing · 8 months ago
It works fine for me on the latest TD version. Just make sure you create an API key on computerender.com/ and swap out your key in the project
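A common way to wire a key into a project without hard-coding it is to read it from an environment variable and build the request from that. A sketch of that pattern in Python; note the header and field names below are illustrative placeholders, not computerender's documented schema, so check their API docs for the real shape:

```python
import os

def build_generate_request(prompt: str,
                           key_env: str = "COMPUTERENDER_KEY") -> dict:
    """Assemble illustrative headers/body for an image-generation call.
    Reads the API key from an environment variable so it never lives
    in the project file itself."""
    api_key = os.environ.get(key_env)
    if not api_key:
        raise RuntimeError(f"Set your API key in the {key_env} env variable")
    return {
        "headers": {"Authorization": api_key},  # placeholder header shape
        "json": {"prompt": prompt},             # placeholder body shape
    }
```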
@Fonira · 2 years ago
Thanks!!
@unveil7762 · 1 year ago
Would be cool to have a depth map so that the particles become 3D… ❤
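The depth-map idea above amounts to displacing each particle along z by the depth value at its pixel. A minimal NumPy sketch of that displacement (the function name and normalization are illustrative, assuming a depth map normalized to 0..1):

```python
import numpy as np

def particles_from_depth(depth: np.ndarray, z_scale: float = 1.0) -> np.ndarray:
    """Turn an HxW depth map into an (H*W, 3) particle cloud:
    x/y come from the pixel grid (normalized to -1..1), z from depth."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs / (w - 1) * 2.0 - 1.0   # normalize columns to -1..1
    y = ys / (h - 1) * 2.0 - 1.0   # normalize rows to -1..1
    z = depth * z_scale            # push particles out by depth
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

# Example: a random 64x64 depth map, scaled to a shallow relief
pts = particles_from_depth(np.random.rand(64, 64), z_scale=0.5)
```

In TouchDesigner the equivalent is usually done on the GPU by sampling the depth texture in a GLSL material or a point-position TOP, but the mapping is the same.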
@hudsontreu · 1 year ago
Hey Torin, thanks so much for the tutorial and project file! I am having a few issues though and wondering if you can answer a question. So when I open the project the noise objects that are used for altering the img2img function do not have anything in them, and it seems like img2img is required for everything to work. How exactly do you get the noise populated with an image or noise data and running correctly? Thank you!
@xthefacelessbassistx · 2 years ago
How can I Stable Diffuse a live video feed?
@blankensmithing · 2 years ago
You can just pass a Movie File In TOP into the component. It's not going to convert frames in real time since generation takes some time to process, but every time you generate a new image it'll snag the current frame from the TOP.