Relight and Preserve any detail with Stable Diffusion

8,278 views

Andrea Baioni

A day ago

Over the weekend we might have broken product photography.
Updated Workflow with Color Matching, Upscaling and more: • Stable Diffusion IC-Li...
In this episode of Stable Diffusion for Professional Creatives, we manage to start from an iPhone picture, create a new background for a product, relight it, and preserve details, all in one click.
And it's not even clickbait.
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
Workflow: openart.ai/workflows/risunobu...
or civitai.com/articles/5393
(install the missing nodes via the ComfyUI Manager, or use the links below:)
IC-Light comfyUI github: github.com/kijai/ComfyUI-IC-L...
IC-Light model (fc only, no need for the fbc model): huggingface.co/lllyasviel/ic-...
GroundingDinoSAMSegment: github.com/storyicon/comfyui_...
SAM models: found in the same GroundingDinoSAMSegment GitHub repo above.
Frequency separation nodes: these should come with the standard ComfyUI install; otherwise, install them via the Manager.
Model: most SD 1.5 checkpoints work; I'm using epicRealism: civitai.com/models/25694/epic...
ControlNet auxiliary preprocessor nodes: github.com/Fannovel16/comfyui...
Timestamps:
00:00 - Intro
00:33 - Workflow overview
02:18 - Comparison (no details vs details)
03:36 - How it works
08:30 - Live demonstration (no cherry picking proof)
09:47 - More demonstrations
11:02 - Limitations and optional features
15:58 - Demonstration from studio shot
16:34 - Thoughts on the tech and how to use it
18:30 - Outro
#stablediffusion #ic-light #iclight #stablediffusiontutorial #relight #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni

Comments: 112
@risunobushi_ai · a month ago
go break it and report back how it works for you, chatterinos: openart.ai/workflows/risunobushi/product-photography-relight-v3---with-internal-frequency-separation-for-keeping-details/YrTJ0JTwCX2S0btjFeEN
@caseymathieson7023 · 7 days ago
"I hope you break things bc I would like to hear some feedback on it" - this got me. *Subscribed*
@risunobushi_ai · 7 days ago
Ahah thank you! I really appreciate it when people give me well-thought-out feedback. Outside testing is key to delivering good results for everyone out there!
@pixelcounter506 · a month ago
Great work... great explanation, thank you very much, Andrea!
@iMark22 · a month ago
Thank you! Incredible work!
@user-tz4sv5nc6b · 28 days ago
Love all of your content. Thank you.
@wascopitch · a month ago
OMG Andrea, this is amazing! Thanks a ton for sharing. Can't wait to give this workflow a go. Keep being awesome!
@risunobushi_ai · a month ago
Thank you! I'd love some feedback on it, have fun!
@OriBengal · a month ago
Wow- That's of massive value. Thank you for solving this and sharing and explaining. This is one of the most practical things I've seen so far.
@risunobushi_ai · a month ago
Thanks! Honestly I’m astonished at how useful it ended up being.
@dragerx001 · a month ago
thank you again for posting workflow
@HooIsit · 28 days ago
You are the best! Don't stop😊
@DJVARAO · a month ago
Awesome! As a photographer, I think this is the best AI processing so far.
@risunobushi_ai · a month ago
Yeah, it feels like IC-Light brings the whole space a lot closer to being a sort of "exact science", rather than being way too random.
@Scerritos · a month ago
Awesome video. Thanks for sharing! Also looking forward to the people workflow.
@risunobushi_ai · a month ago
I’ll try to get it working soon, but I’m currently swamped with deadlines from my day job, so I might get it done for next week’s video
@sebicified1408 · 3 days ago
Chapeau!
@whatman65 · a month ago
Great stuff!
@andrewcampbell8938 · a month ago
Love your content.
@risunobushi_ai · a month ago
thank you!
@ChakChanChak · a month ago
This is so good! Makes me wanna download the video to keep it forever
@ChakChanChak · a month ago
@@robertdouble559 thx mate, but i only use laserdiscs.
@vincema4018 · a month ago
Amazing work!!! Very into the IC-Light stuff recently; I was just trying to upscale the image from the IC-Light workflow. Will try your workflow and let you know the outcome soon. Thanks again Andrea.
@risunobushi_ai · a month ago
Thanks! If you add an upscaler pass, remember to upscale the high frequency mask you're using as well, be it the one from SAM or the one you're drawing yourself; otherwise it won't work anymore, because of a size mismatch between the mask and the high frequency layers. As I say in the video, a good spot to place an upscale group would be between the relight group and the preserve details group.
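To make that concrete, here's a minimal sketch of the resize step (Python with PIL; the file names are hypothetical, and NEAREST is an assumed choice to keep the mask hard-edged):

from PIL import Image

image = Image.open("relit.png")           # hypothetical file names
mask = Image.open("high_freq_mask.png")

SCALE = 2
new_size = (image.width * SCALE, image.height * SCALE)
image = image.resize(new_size, Image.LANCZOS)
mask = mask.resize(new_size, Image.NEAREST)  # NEAREST keeps the mask hard-edged

assert image.size == mask.size  # this size mismatch is what breaks the blend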
@lumarans30 · 28 days ago
Thanks a lot! Great work!
@risunobushi_ai · 28 days ago
Thank you!
@-Yun-Hee · a month ago
wow! this is a great solution!!
@TuangDheandhanoo · a month ago
Great video sir, thank you very much!
@risunobushi_ai · a month ago
thank you for watching!
@jahormaksimau1597 · 25 days ago
Amazing!
@Mranshumansinghr · a month ago
Thank you very much, sir.
@omthorat3891 · a month ago
Love you 3000 ❤😂
@sab10067 · 26 days ago
Nice workflow! As other people have said, for certain objects it's a bit tough to keep the original color of the object. I added a perturbed attention guidance between the first model loader and ksampler, which helps create more coherent backgrounds. Thank you for making the tutorial video as well!
@risunobushi_ai · 26 days ago
Thanks! Yeah, I understand now that some people prefer having a complete workflow rather than a barebones one. I'll create two versions going forward: one barebones, for further customization, and one with more stuff, like PAG, IPAdapters, color match, or whichever group might be useful.
@jeremysomers2239 · a month ago
@andrea this is so fantastic, thank you for the breakdown! Do you think there's a way to BRING a background plate in instead of generating one???
@risunobushi_ai · a month ago
As long as it has the same dimensions as the relighting mask and subject, and the same perspective as the subject, you can use custom backgrounds, sure!
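As a rough sketch of the dimension constraint (Python with PIL; file names are hypothetical, and the subject is assumed to be a segmented RGBA cutout; matching the perspective is still up to you):

from PIL import Image

subject = Image.open("subject_rgba.png").convert("RGBA")         # hypothetical
background = Image.open("custom_background.jpg").convert("RGBA")

# Resize the plate to the subject's dimensions before compositing.
background = background.resize(subject.size, Image.LANCZOS)
composite = Image.alpha_composite(background, subject)
composite.convert("RGB").save("composited.png")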
@8561 · a month ago
Great workflow! Also I can imagine hooking up an IPAdapter for the BG generation to keep consistency between different angled product shots!
@risunobushi_ai · a month ago
Yeah, this is a "barebones" workflow; it can be expanded with anything one might need. I usually publish barebones workflows rather than fully customized ones because it's easier to make them your own (or at least it is for me; I don't like having useless stuff in other people's workflows).
@8561 · a month ago
@@risunobushi_ai Agreed! Cheers
@xxab-yg5zs · a month ago
Mind-blowing! As a product photographer, I'm more excited than terrified. AI is just another tool, like any other: you still need to learn how to use it, and so far it is complicated enough to require a lot of effort to create quality product images. I wonder, is there a way to generate 16-bit TIFF files that can be edited in Photoshop without introducing image quality degradation? Frequency separation sometimes causes banding, probably because it is done in 8-bit.
@risunobushi_ai · a month ago
That's the way I see it too, and why I started getting interested in it a long while ago. Unfortunately there's no way to generate TIFF files (as far as I know, and I'm 99% sure). JPEGs and PNGs are all we can work with as of now. The only way to alleviate banding issues (to a degree; it's more of a band-aid than a solution) or outlines is to generate files at a higher resolution: this way the affected pixels are, as a percentage, fewer relative to the total number of pixels in the image.
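For reference, a minimal sketch of the frequency separation recombination at the heart of the workflow, done in float32 instead of 8-bit (NumPy and PIL; file names and the blur radius are assumptions to tune). Working in float means the split/recombine math quantizes to 8-bit only once, at the very end, which also helps with banding:

import numpy as np
from PIL import Image, ImageFilter

RADIUS = 4  # assumed split point between low and high frequencies

def split(img):
    low = img.filter(ImageFilter.GaussianBlur(RADIUS))
    low_f = np.asarray(low, dtype=np.float32)
    return low_f, np.asarray(img, dtype=np.float32) - low_f  # low, high

original = Image.open("original.png").convert("RGB")   # hypothetical names
relit = Image.open("relit.png").convert("RGB")
mask = np.asarray(Image.open("subject_mask.png").convert("L"),
                  dtype=np.float32)[..., None] / 255.0

relit_low, relit_high = split(relit)
_, orig_high = split(original)

# Keep the relit low frequencies everywhere; restore the original's detail
# (high frequencies) only where the subject mask is white.
high = orig_high * mask + relit_high * (1.0 - mask)
out = np.clip(relit_low + high, 0, 255).astype(np.uint8)
Image.fromarray(out).save("relit_with_details.png")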
@merion297 · a month ago
It's incredible, again! 😱 One thing, just a minorly-minor improvement idea: you enter a prompt, then copy it into another prompt field after a lighting prompt part. You could separate these two, then synthesize them from the product prompt. Turning it into sample code:

ProductPrompt = 'a photograph of a product standing on a banana peel'
LightingPrompt = 'white light'
SynthesizedPrompt = ProductPrompt + LightingPrompt  # Here's the point where we no longer Ctrl-C/Ctrl-V 😁

Plus the prompt nodes could be rearranged into a Prompts group. (Of course I could do this myself after downloading the workflow, for which you deserve a Praying Blanket 🙏, but I'm here just for admiring; my machine is far below the minimal requirements of all this.)
@risunobushi_ai · a month ago
Thanks, I didn't know about the product prompt node! I knew about other prompt concatenate nodes, and I thought about using them, but again, not knowing the knowledge level of the end user, I usually end up using the least complicated setup. Sometimes this ends up producing minor inconveniences, like copy-pasting text or having to link outputs and inputs manually where I could have used a logic switch, but it's a tradeoff I accept for the sake of clarity.
@merion297 · a month ago
Nonono, I've just called it a Prompt Node. 😁 It is what it is; you're 100-fold more educated in this than I am.
@M4rt1nX · a month ago
Amazing results. The beauty of open source is finding solutions together. Can the detail-preserving part be used in the workflows for clothing? It might be a challenge with the posing, but I just thought about it.
@risunobushi_ai · a month ago
I've tested it on underwear only right now (I'm working with a client who produces underwear, so that's what I had lying around) and it works well, even with harsh relights such as neon strips. I haven't tested it with other types of clothing, but I might do that tomorrow when I have more time. The only thing it struggles with, right now, is faces in full body shots, because the high frequency layer catches a ton of data there, but I think it just needs some tinkering, nothing major.
@pranavahuja1796 · a month ago
I have tried full body shots, and in fact half body for t-shirts; my experience was not that good (yet).
@risunobushi_ai · a month ago
Yeah, it needs to be fine-tuned for people; that's why I released it for product shots only.
@KINGLIFERISM · 25 days ago
Brother, take the text node and use it as the input for the CLIP positives; it helps. This workflow is awesome, btw.
@risunobushi_ai · 24 days ago
Thanks! Yeah, I know there are better ways to bypass a double prompt field, more so if the two prompts are similar, but I usually construct my workflows so that there are as few complications as possible for new users. In this case, this means using two different prompt fields for what is essentially the same prompt; but to new users, having the usual Load Checkpoint -> CLIP Text Encode -> KSampler pipeline makes more sense than having a Text node somewhere conditioning two different KSamplers in two different groups.
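For the curious, a hypothetical sketch of the alternative in ComfyUI's API (JSON) graph format, where nodes are keyed by id and an input like ["5", 0] references output 0 of node "5". Both KSamplers point their positive input at the same CLIPTextEncode node, so the prompt is typed once; the node ids are made up and most sampler inputs are omitted:

graph = {
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "product photo, white light", "clip": ["4", 1]}},
    "3": {"class_type": "KSampler",
          "inputs": {"positive": ["5", 0]}},  # ...other inputs omitted
    "8": {"class_type": "KSampler",
          "inputs": {"positive": ["5", 0]}},  # ...other inputs omitted
}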
@gwanyip345 · 29 days ago
This is amazing... thank you so much for putting these videos together!! Question: for some reason, the image I'm getting out of the KSampler after the IC-Light Conditioning node always comes out darker/orange/brown. I've tried it with a bunch of different images, but the image and color are always significantly different from what's being fed into it. I've also tried a few different prompts in the text encoder that's being fed into the IC-Light node, but everything still comes out quite dark. Thanks again!
@risunobushi_ai · 29 days ago
Thanks! Please refer to the comment by AbsolutelyForward, where we talk about this and about the use of a color match node. You can also increase the amount of light by remapping the light mask (right now it should be set to 0.7, 1 is full white)
@gwanyip345 · 29 days ago
@@risunobushi_ai Thank you!! I tried to see if anyone else had the same issue and must have missed it. Color Blend definitely helped at the end when connecting it to the original image. I also found increasing the min value of the Remap Mask Range node to 0.4 helped brighten up the initial input image. I also increased the IC-Lighting Conditioning to 0.5. Thanks again for this amazing workflow!!
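For anyone tuning the same knob, a minimal sketch of the arithmetic a "Remap Mask Range" style node performs (assuming a float mask in [0, 1]); raising the minimum lifts the darkest mask values, which is why it brightens the result:

import numpy as np

def remap_mask(mask, new_min, new_max):
    return new_min + mask * (new_max - new_min)

mask = np.linspace(0.0, 1.0, 5)
print(remap_mask(mask, 0.4, 1.0))  # [0.4  0.55 0.7  0.85 1.  ]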
@user-il7pq4ne6x · 15 days ago
thanks. the first time, 5%. hehe
@dadrian · 29 days ago
So cool! I'm doing basically the same for cars and people! But at the moment I still prefer to do the frequency separation part in Nuke. I can only dream of a 32-bit workflow in Comfy.
@risunobushi_ai · 29 days ago
Wait, if you generate a normal map from IC-Light do you get to work with 32bit images in Nuke?
@egarywi1 · 9 days ago
This is great for a product photographer like myself. I got v3 going; however, v4 keeps breaking Comfy, so I want to concentrate on v3 to see how it performs. I am using a bottle of wine, but the text on the label is not preserved well enough. Is there a way to give it more importance?
@risunobushi_ai · 9 days ago
You can try my set of frequency separation nodes, by swapping them in for the nodes that are responsible for it in either v3 or v4. You can find them in this video: kzbin.info/www/bejne/d3yxq6h-o82CmM0
@ImAlecPonce · 13 days ago
This is really cool, but it still changes my colors... It seems to work better (not perfect) pulling the blended image into the second frequency separation; at least the scene gets relit. Is there a way to use IC-Light and then just pull the colors over with some transparency value, so they don't get washed out?
@risunobushi_ai · 13 days ago
Yep, we solved the color matching here: kzbin.info/www/bejne/lWK8l52Zr6eorrM and on Monday I'll release a workflow for relighting people while preserving details and colors too. I also developed custom nodes for frequency separation, but I haven't had the chance to update the workflow yet. They'll be in Monday's video though.
@ImAlecPonce · 13 days ago
What I usually do is use IC-light and the luminosity masks in Krita
@risunobushi_ai · 13 days ago
I would do it outside of ComfyUI too, but the viewers wanted an all-in-one workflow.
@ImAlecPonce · 13 days ago
@@risunobushi_ai wow!! Thanks!
@Bartskol · 28 days ago
OK, since no one has asked yet: can I use an SDXL model with this workflow? Thanks for this work; I'm also a photographer 😅😊 Can't wait for v4 with that IPAdapter for consistent backgrounds (and SDXL for higher res? ;) )
@Bartskol · 28 days ago
Subbed.
@risunobushi_ai · 28 days ago
Thanks! Unfortunately there’s no support for SDXL, it’s for 1.5 only, but you can definitely upscale a ton with SUPIR or other upscalers
@Spinaster · 17 days ago
Instead of changing the subject name in the Grounding Dino prompt, you can try using just "subject" or "main subject", it should work ;-)
@risunobushi_ai · 17 days ago
In this case, and when you only have one subject, yes; but if you have more subjects (like in my update to this video, where I have the bottle sitting on a branch) it might not work. But I agree, here you can just use "subject" instead!
@ultimategolfarchives4746 · 28 days ago
Keeping details when upscaling is a common problem. Could that technique be applied to upscaling as well?
@risunobushi_ai · 28 days ago
I haven't tested it with upscaling. I guess that as long as you don't need to upscale the original image, you won't have to resize the frequency layers, so the details would stay as they are in the original image. If you need to upscale the original image and the frequency layers as well, you might have some trouble preserving details, depending on how much you're upscaling.
@EdwardKing-nu7ug · 16 days ago
Hello, why does the color of an object change after relighting? For example, the bottle was originally green, but it turned yellow after the lights were turned on. Which parameter should I adjust to maintain the original color?
@risunobushi_ai · 16 days ago
we solve that issue in this update: kzbin.info/www/bejne/lWK8l52Zr6eorrM
@EdwardKing-nu7ug · 16 days ago
Thank you so much and I am your ❤❤❤big fans 🎉
@charlieBurgerful · a month ago
This looks like a game changer. Maybe for mockups, idea iterations, or even real productions! Everything starts well on my side, but the Segment Anything node does nothing, so the process is useless. I am on an M2 Pro; any ideas?
@risunobushi_ai · a month ago
Did you install all the necessary dependencies for SAM to work on M chips? As far as I know you’ve got some hoops to jump through in order to get tensorflow and other dependencies running on M chips
@AbsolutelyForward · a month ago
Absolutely fantastic workflow and a well explained tutorial :) I tried to relight some package designs, but somehow it always gets "tinted" in a warmish-yellow tone, no matter what text prompt I use for the lighting. I noticed that the epicrealism checkpoint tends to do so if I use a very generic prompt for the background (no description apart from "advertising photography"). I'm lost.
@risunobushi_ai · a month ago
you could either try different checkpoints, and / or you could try to specify which kind of light you want. I notice that I get a very warm tint with "natural light", but specifying "white light" or some kind of studio light (softbox, spotlight, strip light) produces more neutral results. You could also try influencing with a negative prompt (warm tones, warm colors, etc).
@AbsolutelyForward · a month ago
Thx for the hints :) The package image (input) is colored half-green and half-grey. What is your experience (so far) with retaining the original colors and transferring them in a realistic way with your workflow? Would an additional color matching node perhaps help?
@risunobushi_ai · a month ago
I have never particularly cared for the color matching node (at least the one I used), as it was almost never working well for me, but you could try and blend it at a lower percentage for better results. I guess it all depends on how important it is to color match to an exact degree the final relit image to the source one. This is my own preference, but the way I'm used to working I'd rather fix the colors in PS for a better degree of control. If one would want to do everything inside of comfyUI, which to be fair is in the spirit of this video and workflow, a color matching node could be a good enough solution, although less "directable" than proper post work in PS.
@risunobushi_ai · a month ago
Adding here, since I just thought about it: you could even try color matching only specific parts of the subject, such as the non-lit ones, or only the lit ones, by using the same node I'm using to extract a light mask from the blended image, or an RGB/CMYK/BW-to-mask node, based on the color / light you need to correct.
@AbsolutelyForward · a month ago
So far I haven't had any success by changing the checkpoints or modifying the lighting prompt; the original colours of the packaging are lost. But: at the end of the workflow, I used the input image again to readjust the colours. To do this, I combined the "ImageBlend" node (settings: 1.00, soft_light) with the "Image Blend by Mask" node (for masking the packaging); this has worked very well so far :)
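A minimal sketch of that fix, assuming NumPy and float RGB arrays in [0, 1]; the formula here is the simplified Pegtop soft-light variant, which may differ slightly from the node's exact blend mode:

import numpy as np

def soft_light(base, blend):
    # Pegtop soft light: (1 - 2b) * a^2 + 2b * a
    return (1.0 - 2.0 * blend) * base ** 2 + 2.0 * blend * base

relit = np.random.rand(64, 64, 3)     # placeholder for the relit render
original = np.random.rand(64, 64, 3)  # placeholder for the input image
mask = np.ones((64, 64, 1))           # white where the packaging is

# Blend the original over the relit image, restricted to the masked area.
out = soft_light(relit, original) * mask + relit * (1.0 - mask)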
@houseofcontent3020 · 26 days ago
How do I mix in an existing background? Is it possible, instead of having the workflow create my background?
@risunobushi_ai · 26 days ago
Yep, but you need to have the same perspective between the subject and the background. Simply add a load image node and blend the background with the segmented subject, bypassing the background generator group. There’s no perspective correction in comfyUI that I know of, but if someone knows about it it’d be great.
@eranshaysh9536 · 26 days ago
Thank you so much for the detailed answer. I'll look for a tutorial that explains how to connect the nodes you talked about. As for the perspective, that's fine, since I'll be editing it in Photoshop beforehand, so it will only need to mix the light and color.
@anastasiiadereshivska4090 · 19 days ago
Wow! This is fantastic! I ran into a problem where the Load And Apply IC-Light node does not find the loaded models. Does anyone know how to solve this? LoadAndApplyICLightUnet 37: Value not in list: model_path: 'iclight_sd15_fc.safetensors' not in []
@risunobushi_ai · 19 days ago
Did you place the model in the Unet folder?
@anastasiiadereshivska4090 · 19 days ago
@@risunobushi_ai It works! Thank you!
@AnotherPlace · 28 days ago
I'm having this error: RuntimeError: Given groups=1, weight of size [320, 12, 3, 3], expected input[2, 8, 128, 128] to have 12 channels, but got 8 channels instead
@risunobushi_ai · 28 days ago
Are you using the IC-Light FBC model instead of the FC? Are you trying to use SDXL instead of SD 1.5?
@Arminas211 · 24 days ago
I got the error during ImageResize+: not enough values to unpack (expected 4, got 3). Any ideas what went wrong and how to fix it?
@risunobushi_ai · 24 days ago
What is the image extension you’re using? You can sub in another resize node if that one doesn’t work for you
@sreeragm8366 · 23 days ago
Facing the same issue. Are we passing the mask or the image to the resizer? Debugging shows the resizer is getting a tensor with no channels. If you can confirm, I will patch the resizer to bypass this shape mismatch. Thank you. Btw, I am working in API mode; I've never used Comfy in UI mode.
@risunobushi_ai · 23 days ago
We're passing an image, but it's not the first time I've heard of someone having issues with this resize node. Swapping it for another resize node usually solves it.
@Arminas211 · 22 days ago
@@risunobushi_ai Thanks very much. I will write a comment on OpenArt.
@user-en1zh1so7k · 21 days ago
@@Arminas211 I encountered the same issue, but I eventually discovered that I hadn't changed the prompt of the Segment Anything node, which caused the problem. Perhaps you could try that as well?
@ismgroov4094 · 29 days ago
workflow plz, sir!
@risunobushi_ai · 29 days ago
The workflow is in the description *and* in the pinned comment, and I even say "the workflow is in the description below" as soon as 00:40
@spiritform111 · a month ago
Very cool, but for some reason the ControlNets just crash my computer... I have a 3080 Ti, so it must be something else.
@risunobushi_ai · a month ago
That's weird, I haven't had any reports of crashes yet. I have a 3080ti too, so maybe try subbing in another controlnet node / controlnet model?
@spiritform111 · a month ago
@@risunobushi_ai yeah, going to try that... thanks for the reply.
@spiritform111 · 29 days ago
@@risunobushi_ai Turns out it was the Depth Anything model; I can use depth_anything_vits14.pth. Thanks. Insane workflow... powerful stuff.
@amitkumarsinha1654 · a month ago
Hi, thanks a lot for this tutorial and workflow. I am getting this error; can you please help me fix it? C:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py:1051: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers. warnings.warn( Prompt executed in 8.58 seconds
@risunobushi_ai · a month ago
This is not an error per se, it’s a warning about a transformers argument being deprecated. As you can see, the prompt gets executed. What issues are you facing during the prompt? Where does it stop?