Heads up, people: this is Nvidia only! If you're on AMD, don't waste your time watching this.
@pixelkay2004 · 4 days ago
thanks!!
@Matamjid68 · 6 days ago
Helpful tutorial, that was amazing. Appreciate it.
@김정수-z8f · 6 days ago
How is Nvidia's "Edify 3D" different from these ones?
@ArquiKev · 7 days ago
Amazing!
@LavelleMoore-v6z · 8 days ago
Can someone please tell me, once I get to this point, how do I make the resolution better quality?
@Urban_Decoders · 8 days ago
In the photorealistic tiles, under Level of Detail, you can lower the Maximum Screen Space Error (lower values stream more detailed tiles). If you want very high-quality context you would have to get a detailed LiDAR scan, as Google Maps can only go so far.
@LavelleMoore-v6z · 7 days ago
@Urban_Decoders Yeah, I've already played around with the max screen space error and it didn't do anything; the terrain was still pixelated. So maybe I'll give the LiDAR scan a try, thank you!
@uegamedev · 11 days ago
Thanks for the video! If you encounter the error "Clip vision model not found": 1. Go to the ComfyUI_IPAdapter_plus git repo. 2. Download and rename the models CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors, CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors and ip-adapter-plus_sdxl_vit-h.safetensors (or other models; see the installation instructions on the git).
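Those manual steps can also be scripted. A rough Python sketch, assuming the usual ComfyUI folder layout (`models/clip_vision` and `models/ipadapter`); the install path and subfolder names are assumptions, so check the IPAdapter_plus readme before relying on them:

```python
# Sketch: place the models listed above where ComfyUI_IPAdapter_plus expects
# them. COMFY_ROOT and the subfolder mapping are assumptions about a typical
# ComfyUI install, not taken from the video.
import os
import urllib.request

COMFY_ROOT = "ComfyUI"  # hypothetical install path; adjust to yours

# filename -> subfolder under ComfyUI/models
MODELS = {
    "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors": "clip_vision",
    "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors": "clip_vision",
    "ip-adapter-plus_sdxl_vit-h.safetensors": "ipadapter",
}

def target_path(filename: str, root: str = COMFY_ROOT) -> str:
    """Return where a model file should live inside the ComfyUI tree."""
    return os.path.join(root, "models", MODELS[filename], filename)

def fetch(url: str, filename: str) -> None:
    """Download url into the expected models subfolder (network required)."""
    dest = target_path(filename)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)
```

Renaming on disk then just means passing the expected filename to `fetch` regardless of what the download is called upstream.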
@Gyanigadha-dp6nh · 13 days ago
Couldn't we run ComfyUI without installing Stable Diffusion?
@Gyanigadha-dp6nh · 13 days ago
Why does this message come up every time I install ComfyUI? PLEASE HELP! ENOENT: no such file or directory, stat 'C:\pinokio\api\comfy.git\{{input.event[1]}}'
@kashifakhtar1692 · 17 days ago
Make a workflow to create images in different styles all at once, like Leonardo's Flow State.
@natishmaac · 18 days ago
Thanks sir for this. One more thing: how can we export this chunk of the model to 3ds Max with textures?
@BrentMinder · 22 days ago
Why aren't you downloading your models from within the ComfyUI manager? Also, can you post your workflow JSONs? Thanks for the excellent videos!
@Urban_Decoders · 8 days ago
Just wanted to show how it can be done manually, but that is a good way to do it too! The workflows are in the video links and can just be dropped into the canvas.
@Hankyo-m3m · 24 days ago
Thank you for the great tutorial ❤❤❤❤
@MadPonyInteractive · 27 days ago
Finally an intuitive way to do area composition, thank you!
@ryanleethomas · 27 days ago
Would love to see more on the Rhino Compute to UE workflow!
@Urban_Decoders · 8 days ago
The easiest way to teach that would be through a custom UE plugin, but it would need some time to work on!
@zhutaiwu · 28 days ago
Very good video since all the steps are explained well and all the sources are listed.
@MafaldaReynoldsBrandao · 29 days ago
Great video, thanks!
@thesofakillers · a month ago
Can you explain why the DualClipLoader is needed?
@Urban_Decoders · a month ago
CLIP stands for Contrastive Language-Image Pre-Training. Essentially, when text prompts are used as inputs, they are encoded by the CLIP models for the image generation process. The Flux model differs from others (such as Stable Diffusion) in that it uses a dual CLIP loader rather than a single one, but that is also why it is much better at interpreting text prompts into images.
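To make the dual loader concrete, here is a minimal sketch of how the two text encoders might appear in a ComfyUI API-format workflow. The model filenames (`clip_l.safetensors`, `t5xxl_fp16.safetensors`) and the prompt are illustrative assumptions, not taken from the video:

```python
# Sketch of a Flux text-encoding fragment in ComfyUI's API (JSON) format:
# DualCLIPLoader loads CLIP-L plus the T5 encoder, and the combined CLIP
# object feeds the prompt encode node.
import json

workflow = {
    "1": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "clip_l.safetensors",      # CLIP-L text encoder (assumed filename)
            "clip_name2": "t5xxl_fp16.safetensors",  # T5-XXL text encoder (assumed filename)
            "type": "flux",
        },
    },
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a brutalist tower at dusk",
            "clip": ["1", 0],  # wire to output 0 of node "1"
        },
    },
}

print(json.dumps(workflow, indent=2))
```

The `["1", 0]` reference is how API-format workflows express a wire between nodes, which is what the canvas draws visually.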
@thesofakillers · 29 days ago
@Urban_Decoders Didn't I reply to my OP with an answer? Did that get deleted?
@ChanhDucTuong · a month ago
Tyvm. But is there a way to apply a different LoRA to each region too? Also, I'd love the ability to draw the mask region instead of using squares; is that possible?
@Urban_Decoders · a month ago
Sounds like you may want to use an inpainting workflow combined with separate LoRAs. You can find a tutorial on how to do it here: kzbin.info/www/bejne/d4rHdpSAmpxnes0 You could adapt that workflow to have multiple masks which each go through a different LoRA for a different effect. The strength of the regional conditioning node is in its simplicity in separating zones.
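The zone separation the regional conditioning node performs can be sketched in plain Python, assuming simple rectangular regions where later rectangles overwrite earlier ones; the grid size and prompt labels are made-up illustration values:

```python
# Toy model of regional conditioning: each prompt is paired with a rectangle
# on the canvas, and overlapping rectangles are resolved in list order.

def paint_regions(width, height, regions):
    """regions: list of (label, x, y, w, h). Returns a 2D grid of labels."""
    grid = [[None] * width for _ in range(height)]
    for label, x, y, w, h in regions:
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                grid[row][col] = label
    return grid

# e.g. a "sky" prompt across the top, a "park" prompt in the bottom-left
grid = paint_regions(8, 4, [
    ("sky", 0, 0, 8, 2),
    ("park", 0, 2, 4, 2),
])
```

Drawn masks would replace the rectangles with arbitrary per-pixel labels, which is exactly the extra freedom an inpainting mask gives over the squares.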
@ChanhDucTuong · 28 days ago
Nice, thank you again.
@user-oleg-ger · a month ago
Fabulous, thanks for FREE workflows!😀
@adamhavkin9833 · a month ago
Great tutorial! Is there a way to control or edit the final point of view for the new location? To control how the camera is oriented when it reaches the POI?
@rozhan.shojaei · 16 days ago
Sorry, did you find a way to do it?
@dirtydevotee · a month ago
"Regional prompting", also known as "inpainting".
@Urban_Decoders · a month ago
Another workflow for similar outcomes
@mubumbutu9393 · a month ago
Unfortunately, this custom node is obsolete. If you have a problem in ComfyUI like this one: "I can move the menu around and use it (e.g. opening the manager and updating worked). I can even queue a prompt and it's running. I can also edit text inputs, but I can't zoom or move the screen or manipulate any other boxes or connections." Then you need to uninstall Dave_CustomNode.
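Since ComfyUI custom nodes live as folders under `custom_nodes`, uninstalling one usually amounts to deleting its folder and restarting ComfyUI. A hedged sketch; the folder name comes from the comment above and your install path may differ:

```python
# Sketch: remove an obsolete custom node by deleting its folder under
# ComfyUI/custom_nodes. Restart ComfyUI afterwards for the change to apply.
import os
import shutil

def remove_custom_node(custom_nodes_dir: str, node_name: str) -> bool:
    """Delete a custom node folder if present; returns True if removed."""
    path = os.path.join(custom_nodes_dir, node_name)
    if os.path.isdir(path):
        shutil.rmtree(path)
        return True
    return False

# e.g. remove_custom_node("ComfyUI/custom_nodes", "Dave_CustomNode")
```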
@KameKaio · a month ago
I have a problem: the MultiAreaConditioning node doesn't show the grid. It appears the same as the preview on the left (1:23). Any idea how to solve this issue?
@Urban_Decoders · a month ago
It should automatically appear when added. It may be an issue with the Dave node, so you could try reinstalling it from the manager and restarting.
@MostafaMohamed-vq9oi · 16 days ago
@Urban_Decoders I have the same problem and can't find it in the manager.
@25castro25 · a month ago
Thank you so much for this great info!
@Beauty.and.FashionPhotographer · a month ago
Is this an alternative to the infamous ADetailer extension in Auto1111, mainly used for faces, maybe?
@Urban_Decoders · a month ago
There are a couple of daemon detailers that you can try out. Also, if you are upscaling with something like Ultimate Upscaler, you can use a higher CFG scale to add more creative detail.
@marcovth2 · a month ago
"flux1-canny-dev-lora.safetensors" is giving me black outputs.
@komputasidigital · a month ago
Great tutorial! Thanks mate. Anyway, could you make a tutorial on how to add a name label above each pin? Please.
@Urban_Decoders · a month ago
That can be done too, by reading the label names from the data structure into floating widgets. I will be looking to do more digital twin tutorials for Unreal Engine, so I will try to add that.
@ibrahimdahab6037 · a month ago
I have this problem: IPAdapterUnifiedLoader ClipVision model not found.
@ibrahimdahab6037 · a month ago
I used a sketch as a base to render it, but I get a very plain result if I reduce the denoise; if I increase it, it changes the design.
@EngineerAAJ · a month ago
Amazing content. I saw some Rhino videos on your channel as well; any chance this Flux tool works with Rhino view captures? Kind of like to replace rendering, maybe via Grasshopper? Or just using a shaded or monochromatic view from Rhino to give textures and colors to stuff with this tool.
@Urban_Decoders · a month ago
At the moment only this person is developing Stable Diffusion integration within Grasshopper: www.food4rhino.com/en/app/ambrosinus-toolkit There are no Flux plugins for Grasshopper at the moment, so the easiest way is to take screenshots and paste them into ComfyUI like in the previous video. It is possible to write your own scripts to automatically take screenshots from Rhino and run them in ComfyUI rather than doing it manually.
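That automation could look roughly like this in Python: capture a view, then queue a workflow against ComfyUI's local HTTP API. The `/prompt` endpoint and default port follow ComfyUI's usual API, but the workflow dict is a placeholder; export your real one with "Save (API Format)". The Rhino capture line is commented out since it only runs inside Rhino:

```python
# Sketch: queue an API-format workflow against a locally running ComfyUI.
# COMFY_URL assumes the default local address; the workflow content is yours.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

def build_prompt_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow the way the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict) -> None:
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; check the ComfyUI queue

# Inside Rhino you would first capture the viewport, e.g. (hypothetical usage):
# import rhinoscriptsyntax as rs
# rs.Command('-_ViewCaptureToFile "capture.png" _Enter')
```

Pointing the workflow's LoadImage node at the capture path closes the loop between Rhino and ComfyUI.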
@MartinŽižka-h1t · a month ago
Hi, great tutorial and a lot of potential here. I'm having a lot of trouble figuring out how to keep the dynamic update of the drawing working. I tried to create a drawing set from a 3D model with multiple sections and drawings. When I first play with the section, the drawing updates automatically and dynamically; later this stops working, and when I move the section the drawing does not update anymore. Also, later, when I delete the section the drawing does not delete, as if they are detached. Not sure if there is a toggle for this or if it is a bug. I updated my Rhino 8 to the latest version; working on Mac.
@brankoobradovic3455 · a month ago
Seems to take forever to generate the final image (before upscaling) on a MacBook Pro M2 with 32 GB RAM.
@GharKaKhana93 · a month ago
Where is the marketplace? I am not seeing it. Can anybody tell me please?
@Urban_Decoders · a month ago
In your Epic Games launcher there is the marketplace, although it has now been renamed to Fab.
@teddykim1107 · a month ago
What the... is that it? Where is the thumbnail image in the video?
@tigerlee-kz3if · a month ago
ControlNetLoader error while deserializing header: MetadataIncompleteBuffer
@edi18912 · a month ago
Can you use AI to create moving clouds over a building for a video, not an image?
@artemnikolski3197 · 2 months ago
Only issues using this manual: the lineart model, IPAdapter, etc. Any update? Please.
@abh7372 · 2 months ago
Great content and explanation, thanks a lot. It's really a great improvement on the Rhino toolsets for architects. Could you please show us how you made, or from where you obtained, the hatches you used? Thanks in advance!
@Instant_Nerf · 2 months ago
Thank you. Is this real-world data for the buildings? How did you get it? Is that BIM or OSM data? From Google Maps?
@Urban_Decoders · a month ago
The context is streamed with the Cesium plugin. There is a tutorial for it here: kzbin.info/www/bejne/iX7bmXiIh7umsLcfeature=shared
@mork3743 · 2 months ago
Can we do this with perspective sections?
@Urban_Decoders · a month ago
Yes, you could just make different layers for the section and the perspective backgrounds.
@mrmuffyman · 2 months ago
Does the ControlNet input image have to be a sketch? I saw you uploaded a sketch and then turned it into lineart. Can you upload a photorealistic image and turn it into lineart?
@Ace0555 · 2 months ago
How much does one need to pay to use Comfy? I'm paying for Midjourney but looking for an alternative that I can use unlimited.
@andreaaureli4059 · 2 months ago
It's free brother
@Ace0555 · 2 months ago
But using it on Google Colab? Which is extremely expensive.
@AlexandreMonteiroSilva · 2 months ago
Wow, this explanation is something I've been looking for for a while! Would it be possible to copy a section of the terrain and transform it into an editable landscape in Unreal?
@Urban_Decoders · a month ago
You could use the Geometry Script plugin and "project" a mesh onto the Cesium terrain. There is a quite interesting tutorial about it here: dev.epicgames.com/community/learning/tutorials/1GYl/unreal-engine-using-geometry-script-and-blueprints-to-quickly-replace-cesium-tiles
@bori9 · 2 months ago
Awesome work! The morphing looks so smooth, way more than just stitching images together. Could you share how you did it? I also checked the video from the link, but it seems there's no workflow provided. I'd really appreciate it if you could share some insights!
@ibrahimdahab6037 · a month ago
He explained it in detail in the video.
@jakubsobczyk7870 · 2 months ago
Yo, can I ask for the links? :) Nice video btw.
@djcybercorgi · 2 months ago
Why does no one making these videos show the view from a pedestrian perspective? Why is it always from thousands of feet above?
@Urban_Decoders · a month ago
The Google photorealistic tiles aren't very high quality close up; their strength is in the fact that you have the whole world mapped out in 3D.
@bubuububu · 2 months ago
Thanks! I was looking for something like this for so long, as all the upscaling tutorials focus on human faces.
@Nibbarese7 · 2 months ago
I get an error saying "Clip vision model not found".
@IqbsonnI · 2 months ago
My input node only has an input pin, not all these things like landscape that you have. Any idea why?
@DavidoniTheGamer · a month ago
It's due to an update. You can now use the Get Landscape Data node for the same functionality.