Noise Styling is the NEXT LEVEL of AI Image Generation

41,889 views

Olivio Sarikas

5 months ago

Noise Styling is the NEXT Dimension of AI Image Generation. This new Method by Akatsuzi creates incredible new Styles and AI Designs. Go far beyond what your AI Model can do. Explore new artistic Expressions. Become more versatile with AI Noise Styling.
#### Links from the Video ####
My Workflow + Noise Map Bundles: drive.google.com/file/d/1D0f5...
Akatsuzi Workflows: openart.ai/workflows/L2orhP8C...
Akatsuzi Noise Maps: drive.google.com/drive/folder...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorials.podia.com/new...
Support me on Patreon: / sarikas

Comments: 136
@uni0ue87 5 months ago
Hmmm, maybe I didn't get it, but it seems like a very complicated way to get a tiny bit of control over colors and shapes.
@OlivioSarikas 5 months ago
you get a lot of creative outputs that the model on its own couldn't create. so there are endless ways of experimenting with this
@ImmacHn 5 months ago
This is more of an exploratory method than anything, which is sometimes what you want for inspiration.
@uni0ue87 5 months ago
I see, makes sense now, thanks.
@alecubudulecu 5 months ago
You should try it. It’s pretty fun.
@jeanrenaudviers 4 months ago
Blender 3D has nodes too, and it's totally stunning. You use them for 3D elements, shading and compositing, and in the end you build your very own modules, and it's non-destructive.
@Foolsjoker 5 months ago
As always, love your walkthroughs: you don't miss a node and you explain the flow. Keeps it simple and on track. Hope you are having fun on your trip!
@OlivioSarikas 5 months ago
thank you very much. i forgot to include new shots from my bangkok stay this time
@Foolsjoker 5 months ago
@@OlivioSarikas No worries. I was there last year. Beautiful country.
@BoolitMagnet 5 months ago
The outputs really are artistic, can't wait to play around with this. Thanks for another great video on a really useful technique.
@OlivioSarikas 5 months ago
you are welcome. i love this creative approach and the results that akatsuzi came up with
@TimothyMusson 5 months ago
This reminds me: I've found that plain old image-to-image can be "teased" in a similar way, for really surprising/unusual results. The trick is to add "noise" to the input image in advance, using an image editor. By "adding noise", I mean superimposing/blending the source image (e.g. a face) with another image (e.g. a pattern - maybe a piece of fabric, some wallpaper, some text... something random), using an interesting blend mode, so the resulting image looks quite psychedelic and messy, perhaps even a bit negative/colour-inverted. Then use that as the source image for image-to-image, with a prompt to help bring out the original face (or whatever it was). The results can be pretty awesome.
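A minimal Pillow sketch of that pre-blending step (the file names and the particular blend mode are placeholder choices, not something from the comment or the video):

from PIL import Image, ImageChops

# Source photo and a "noise" pattern (fabric, wallpaper, text, anything random).
face = Image.open("face.png").convert("RGB")
pattern = Image.open("pattern.png").convert("RGB").resize(face.size)

# An "interesting blend mode": difference gives a psychedelic, partly
# colour-inverted look; screen or multiply also work.
messy = ImageChops.difference(face, pattern)
messy = Image.blend(messy, face, 0.35)  # keep a hint of the original face

# Use this as the img2img source, with a prompt describing the original subject.
messy.save("img2img_input.png")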
@syndon7052 5 months ago
amazing tip, thank you
@ProzacgodAI 2 months ago
Hey, we stumbled upon a similar technique. I've been using random photos I find on Flickr, making them noisy, then using them at around 0.85 denoise strength to get them to "somewhat" influence the output. It's working well for portraits and stylized photos, or just for getting something way out there.
@jeffbull8781 5 months ago
I have been using a similar self-made workflow for a while for text2image, but it requires no image inputs: it creates weird noise inputs and cycles them through various samplers to generate a range of different images from the same prompt. The idea was based on a workflow from someone else and iterated on. You can do it by creating noise outputs with the 'image to noise' node on a low-step sample, then blending that with Perlin or plasma noise and having the step count start at a number above 10.
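Outside ComfyUI, the same noise-blending idea can be approximated with NumPy and Pillow. The sketch below only illustrates the concept (sizes, seed and filter choices are arbitrary assumptions, not the actual node setup):

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
w, h = 512, 512

# Per-pixel RGB noise (roughly what an 'image to noise' style step produces).
rgb_noise = Image.fromarray(rng.integers(0, 256, (h, w, 3), dtype=np.uint8))

# Plasma-like low-frequency noise: a tiny random image upscaled with bicubic filtering.
blobs = Image.fromarray((rng.random((8, 8)) * 255).astype(np.uint8))
plasma = blobs.resize((w, h), Image.BICUBIC).convert("RGB")

# Blend the two; the result can be fed in as the starting image,
# with the sampler's start step pushed above 10 as described.
blended = Image.blend(rgb_noise, plasma, 0.5)
blended.save("noise_input.png")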
@OlivioSarikas 5 months ago
that's awesome! akatsuzi also has different pattern and noise generator nodes. in this video i wanted to show that you can also create them yourself, and the effect the different shapes you paint into them have. you can see in the images that the circle or triangle and the colors have a strong impact on the resulting composition
@subhralaya_clothing 5 months ago
Sir, please bring an Automatic1111 tutorial as well.
@CoreyJohnson193 5 months ago
A1111 is dead, bro 😂
@pedrogorilla483 5 months ago
Can’t do it there.
@ciphermkiii 5 months ago
@@CoreyJohnson193 I'm a little out of the loop. What's a better alternative to A1111? Not counting ComfyUI.
@CoreyJohnson193 5 months ago
@@ciphermkiii SwarmUI, Fooocus... Check them out. A1111 is "old hat" now. Swarm is Stability's own revamped UI and I think those two are much better. I'd also look into Aegis workflows for ComfyUI that make it more professional to use.
@jakesalmon4982 5 months ago
@@ciphermkiii There isn't one; A1111 is the best for what it is. He was saying it's dead because Comfy exists... I disagree for some use cases.
@manticoraLN-p2p-bitcoin 5 months ago
This is so 80s... I liked it!
@Jan-jf4th 5 months ago
Awesome video
@blisterfingers8169 5 months ago
Fun stuff, Olivio. Thanks for the workflows. FYI, the workflows load way off from the default starting area, meaning newbies might think they didn't work. ♥ Thanks for going over how you make the inputs too. Makes me wanna train a LoRA for them.
@weirdscix 5 months ago
I'm glad I saw this comment, as I thought the workflow was bugged. I never thought of looking that far from the start area.
@TheSickness 5 months ago
Thanks, that got me haha. Scroll out ftw ^^
@OlivioSarikas 5 months ago
thank you, i will look into that
@kazioo2 5 months ago
Remember when AI gen was about writing a prompt?
@DivinityIsPurity 5 months ago
A1111 reminds me every time I use it.
@jakesalmon4982 5 months ago
Much more interesting this way :) a depth map is worth 1000 words
@OlivioSarikas 5 months ago
it still is on Midjourney ;)
@Clupea101 5 months ago
Great Guide
@mick7727 3 months ago
Nice results! Would this be achievable with multiple IPAdapter references? I feel like it would in practice, I just haven't gotten around to trying it yet.
@frankiesomeone 5 months ago
Couldn't you do this in Automatic1111 using the colour image as img2img input and the black & white image as controlnet depth?
@eskindmitry 5 months ago
Just did it, looks awesome! I've actually replaced the first step of creating a white frame by using an inner glow layer style. I mean, we are already in Affinity, so why not just make the pictures in the right size and with the white border to begin with...
@OlivioSarikas 5 months ago
actually a good point, yes that should work. however you don't have the flexibility of manipulating the images inside the workflow like comfyui does. I show here a somewhat basic build. but you can do a lot more, blending noise images together, changing their color and more, all with different nodes.
@xn4pl 5 months ago
@@OlivioSarikas With the Photopea (web-based Photoshop clone) extension in Automatic1111 you can just paint any splotches or even silhouettes and then import them into img2img with a single button, then export them back into Photopea with another button, and iterate back and forth all you like. And things like blending images, changing colors and much more are far easier to do in Photopea than in Comfy.
@gameswithoutfrontears416 5 months ago
Really cool
@summerofsais 5 months ago
Hey, I'm in Bangkok right now. I have a casual interest in AI, not as in-depth as you, but we could grab a quick coffee.
@AndyHTu 5 months ago
This feature is actually built into Invoke AI. It's very easy to use as well, if you guys haven't played with it. It just works as a reference to be used as a texture.
@MrMustachio43 5 months ago
Question: what's the biggest difference between this and image-to-image? Easier to colour? Asking because I feel you could get the same pose easily with image-to-image.
@Herman_HMS 5 months ago
For me it just seems like you could have used img2img with high denoising to get the same effect?
@rbscli 5 months ago
Didn't really get it either.
@minecraftuser8900 5 months ago
When are you making some more A1111 tutorials? I really liked them!
@webraptor007 5 months ago
Thank you...
@AlexsForestAdventureChannel 5 months ago
Thank you for always being a great source of inspiration and admiration; I look forward to watching your videos. Also, thank you for not having these workflows and trips on a paid page. I understand why they do it; I'm so glad you're not one of them.
@sb6934 5 months ago
Thanks!
@petec737 5 months ago
Looks like soon enough we're going to recreate the entire photoshop interface inside a comfyui workflow :))
@altruistminute 5 months ago
Fr
@OlivioSarikas 5 months ago
pretty much, yes ;) endless possibilities
@EddieGoldenberg 5 months ago
Hi, beautiful flow. I tried to run it on SDXL (with an SDXL ControlNet depth model) but got weird results. It seems only 1.5 checkpoints work. Is that right?
@KDawg5000 5 months ago
might be fun to use this with SDXL Turbo and do live painting
@kamillatocha 5 months ago
soon AI artists will actually have to draw their prompts
@UmbraPsi 5 months ago
Already getting there. I started with AI prompting and slowly got better at digital drawing using img2img; it made more sense, since visual control translates better to visual output. I wonder how strange my art style will be, being essentially AI-trained rather than classically trained.
@programista15k22 5 months ago
What hardware do you use? What graphics card?
@Shingo_AI_Art 5 months ago
The results look pretty random, but the artistic touch is wonderful.
@ivoxx_ 5 months ago
This is amazing, you're the boss Olivio!
@user-zi6rz4op5l 5 months ago
He is basically ripping off other people's workflows and pasting them on his channel.
@ivoxx_ 5 months ago
@@user-zi6rz4op5l Unless he charges for them or doesn't share such workflows, I don't see the issue. Maybe he could at least say where he got them from. I end up using third-party workflows as a base or to learn a process, then I make my own or customize them as needed.
@veteranxt4481 5 months ago
@Olivio Sarikas what would be useful for an RX 6600 XT? An AMD GPU?
@hleet 5 months ago
I would prefer to inject more noise (resolution) in order to get more complex scenes. Anyway, it's a nice workflow. Got to check out that FaceDetailer node next :)
@OlivioSarikas 5 months ago
you can actually blend this noise with a normal empty latent noise or any other noise you create to get both :) - also you can inject more noise on the second render step too ;)
@sznikers 5 months ago
Wouldn't an add-detail LoRA during the upscaling part of the workflow do the job too?
@pedroserapio8075 5 months ago
Interesting, but I don't get it: at 05:15, where did the blue go? The background? Or did the blue you are talking about turn into yellow?
@OlivioSarikas 5 months ago
Yes, i meant to say her outfit is yellow now
@geraldhewes 5 months ago
I tried your workflow but just get a blank screen. I did update for missing nodes, updated everything and restarted. Akatsuzi's workflow does load for me, but I don't have a model for CR Upscale Image and I'm not sure where to get one. The GitHub repo for this node isn't clear about where to get them.
@geraldhewes 5 months ago
The v2 update fixed this issue. 🙏
@Dachiko007 5 months ago
I don't think you have to go this far to get this kind of effect. Just take those abstract images you generated and do i2i on them. It's an old technique, proposed like a year ago, and it gives very much the same creative and colorful results.
@c0dexus 5 months ago
Yeah, the clickbait title made it seem like some new technique, but it's just using img2img and ControlNet to get interesting results.
@vintagegenious 5 months ago
That's exactly what he is doing: 75% denoise with an initial image is just i2i.
@vuongnh0607l 5 months ago
@@vintagegenious you can go 100% denoise and still get some benefit too.
@vintagegenious 5 months ago
@@vuongnh0607l I didn't know that. Isn't that just txt2img (if we ignore the ControlNet)?
@AliTanUcer 5 months ago
I do agree, I don't see anything revolutionary here. I have been doing this since the beginning. :) Also feeding in weird depth maps. I think he just discovered it, I guess :)
@gatwick127 5 months ago
Can you do this in Automatic1111?
@jibcot8541 5 months ago
I like it. It would be easier if there were a drawing node in ComfyUI, but it might not be as controllable as a Photoshop-type application.
@blisterfingers8169 5 months ago
There's a Krita plugin that uses Comfy as its backend, but it's really finicky to use, it seems.
@TheDocPixel 5 months ago
Try using the canvas node for live turbo gens, and connect to depth or any other controlnet. Experiment!
@SylvainSangla 5 months ago
You can use Photoshop: when you save a file into your ComfyUI input folder and you are using Auto Queue mode, the input picture is reloaded by ComfyUI. The only difference from an integrated canvas is that you have to save your changes manually, but it's way more flexible.
@alekxsander 5 months ago
I thought I was the only human being to have 10,000 tabs open at the same time! hahahaha
@mistraelify 5 months ago
Wasn't ControlNet segmentation doing the same thing for recoloring pictures using masks, except this time it's kind of all-in-one? I'd like a little explanation of that.
@windstar2006 5 months ago
Can A1111 use this?
@PostmetaArchitect 5 months ago
you can also just use prompt travel to achieve the same result
@HasanAslan 5 months ago
The workflow doesn't load; it doesn't give any errors, just nothing happens in ComfyUI. Maybe share the image you produced, even the non-upscaled version?
@blisterfingers8169 5 months ago
Zoom out and pan down.
@Soshi2k 5 months ago
Going to need GPT to break this down 😂
@pedxing 5 months ago
prooobably going to need to see this with a turbo or latent model for near-real-time wonderment. also.. any way to load a moving (or at least periodically changing/ auto queuing) set of images into the noise channel for some video-effect styling? thanks for the great video as always!
@pedxing 5 months ago
also... how about an actual oscilloscope to create the noise channel from actual NOISE? =)
@aymericrichard6931 5 months ago
I probably don't understand. I have the impression we're replacing one noise with another noise whose effect we still don't control either.
@filigrif 5 months ago
I completely agree with that :) It's not giving "more control" but the opposite: less control, so that Stable Diffusion can digress from the most common poses and image compositions, which it has obviously been overtrained on. It's still something that can be achieved more simply via OpenPose (for more unusual poses) and img2img (if you need more colorful outputs), which is much more satisfying when you need to use SD for work. Still, fun experiments!
@aruak321 3 months ago
@@filigrif What he showed was essentially an img2img workflow (with a depth-map ControlNet) with some extra nodes to pre-condition the image, along with a very high denoise. So I'm not sure what you mean by saying he could have just used img2img. Also, this absolutely does provide an additional level of control compared to a completely empty latent.
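For anyone who wants to try that reading of it outside ComfyUI, a rough diffusers sketch of img2img plus a depth ControlNet at high denoise might look like this (model IDs, file names and the strength value are illustrative assumptions, not taken from the video's workflow):

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Depth ControlNet + SD 1.5 img2img pipeline (illustrative model choices).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("noise_map.png")     # painted colour/noise map
depth_image = load_image("shape_depth.png")  # black & white shape used as depth guidance

result = pipe(
    prompt="portrait of a woman, intricate, highly detailed",
    image=init_image,           # pre-conditioned "noise" image
    control_image=depth_image,  # depth-style composition guidance
    strength=0.75,              # high denoise: repaint heavily, keep the colour layout
    num_inference_steps=30,
).images[0]
result.save("result.png")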
@keepitshort4208 5 months ago
My Python crashed while running Stable Diffusion. What could be the issue?
@jhnbtr 5 months ago
How is it AI if you have to do all the work? You may as well draw it at this point. Could AI get any more complicated?
@TeamPhlegmatisch 5 months ago
That looks nice, but totally random to me.
@LouisGedo 5 months ago
👋
@kanall103 5 months ago
nothing changes in this world
@MrSongib 5 months ago
So it's a depth map + custom img2img with high denoise. OK.
@xn4pl 5 months ago
The man, at his wits' end for content, invents img2img but calls it something different to make it seem like a novelty. Bravo.
@Artazar777 5 months ago
The ideas are interesting, but I'm lazy. Anyone have any ideas on how to make a lot of noise pictures without spending a lot of time on it?
@blisterfingers8169 5 months ago
ComfyRoll has a bunch of nodes for generating patterns like halftone, Perlin noise, gradients, etc. Blend a bunch of those together with an image blend node.
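If you'd rather not paint the maps by hand at all, a plain Pillow script can also churn out simple colour-blob noise maps in bulk. This is only a stand-in for those pattern nodes; the sizes, shape counts and blur radius are made-up defaults:

import random
from PIL import Image, ImageDraw, ImageFilter

def make_noise_map(size=(768, 512)):
    # Random background colour, then a few large random colour regions.
    img = Image.new("RGB", size, tuple(random.randint(0, 255) for _ in range(3)))
    draw = ImageDraw.Draw(img)
    for _ in range(random.randint(3, 7)):
        x0, y0 = random.randint(0, size[0]), random.randint(0, size[1])
        x1, y1 = x0 + random.randint(100, 400), y0 + random.randint(100, 400)
        colour = tuple(random.randint(0, 255) for _ in range(3))
        shape = random.choice([draw.ellipse, draw.rectangle])
        shape([x0, y0, x1, y1], fill=colour)
    # Soften edges so the sampler gets smooth colour transitions.
    return img.filter(ImageFilter.GaussianBlur(20))

for i in range(10):
    make_noise_map().save(f"noise_map_{i:02d}.png")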
@patfish3291 5 months ago
The point is, we need to make AI images way more controllable in an artistic way! Painting noise/strokes/lines etc. for the base composition, then refining the detail in a second or third pass, and afterwards a color pass... All of that has to be in a simple interface like Photoshop. This will bring the artistic part back to AI imagery and take it to a completely different level.
@HyperGalaxyEntertainment 5 months ago
are you a fan of aespa?
@simonmcdonald446 5 months ago
Interesting. Not really sure why the AI art world has so many anime girl artworks. Oh well.......
@sxonesx 5 months ago
It's cool, but it's unpredictable. And if it's unpredictable, then it's unusable.
@vuongnh0607l 5 months ago
This is for when you want just a little bit of control but still let the model hallucinate. If you need stronger control, use the various controlnet models.
@robotron07 5 months ago
way too convoluted
@lazydogfilms30 5 months ago
Have you given up doing tutorials for proper photography, or are you going down this AI route?
@sirflimflam 5 months ago
I think you're about 12 months late asking that question.
@chirojanee 5 months ago
It's cool, but not new... I have used gradients generated in ComfyUI in the past, injecting them into a previous image, and can change day to night and a few other things with them. The process is almost identical. I do like the addition of the depth map, though I tend to use monster instead.
@jiggishplays6781 5 months ago
I don't like this because there are way too many errors for someone who is just starting and gets confused by all this stuff. Other workflows have no issues, though.
@artisans8521 4 months ago
What I see are a lot of unbalanced compositions. The poor girl's center of mass is not above her feet, so she would fall to the floor.
@NotThatOlivia 5 months ago
First
@Danny2k34 5 months ago
I get why Comfy was created: Gradio is trash and A1111 doesn't update as fast as it should for something at the cutting edge of AI. Still, I feel like it was really created because "real" artists kept complaining that AI artists just write some text and click generate, which requires no skill and is lazy. So, behold, ComfyUI: an interface that'll give you Blender flashbacks and overcomplicates the whole process of generating a simple image.
@blisterfingers8169 5 months ago
Node systems have been gaining prevalence in all sorts of rendering areas, including shaders for games, 3D software, etc. The SD ecosystem just lends itself to it. Also, check out Invoke for a more artist-focused UI.
@dvanyukov 5 months ago
I think you are missing the point of ComfyUI. It wasn't meant to compete with A1111. It was specifically designed to be a highly modular backend application. When you need to create something that you will call over and over again, it's fantastic, and you can make that workflow very complex. However, if you are experimenting or doing miscellaneous work, A1111 should be your go-to. Personally, I switch between the two depending on the type of work, but I like Comfy more because it gives me more control and reusability.
@dinkledankle 5 months ago
It is only as complex as you need it to be; it takes only a few nodes to generate. I don't know why people are taking such personal offense to a GUI that simply allows for essentially endless workflow customization. You're pointlessly hyperbolizing. A potato could learn to use ComfyUI.
5 months ago
Please don't leave A1111! Comfy is used by very few, A1111 is used by many.
@IDSbrands 2 months ago
Makes no practical sense... It's like spinning a wheel: you never know what the outcome is going to be. At best, we look at the results for entertainment, then exit the app and go do some real work.
@GoodEggGuy 5 months ago
Sadly, ComfyUI is so intimidating and so much like programming that it's terrifying. As a new/casual person, this is so very technical that I have given up all hope of using AI art. It's disheartening to see your videos of the last couple of months, knowing that it would take me years to understand any of this, by which time the tech will have moved on, so it will be of no value :-(
@dinkledankle 5 months ago
It took me less than a month to get comfortable with ComfyUI, and I have zero programming experience; really, it takes only a few days to understand the node flow. It's not intimidating or difficult, you're just putting yourself down for no reason. You can generate images with fewer than five nodes, even fewer with efficiency nodes.
@rbscli 5 months ago
Come on. I didn't love ComfyUI at first either, but it is not that difficult. There are a ton of foolproof tutorials out there. Just do some experimentation and within minutes you will get a grip on it. If you are that uncomfortable with learning difficult things, I don't even know how you got to SD instead of, for example, Midjourney.
@GoodEggGuy 5 months ago
@@rbscli Olivio recommended Fooocus and I have been using that.
@aruak321 3 months ago
@GoodEggGuy ComfyUI actually looks and works like a lot of modern artist tools and workflows that artists (not programmers) are already used to. These types of tools exist to allow programming-like control for non-programmers. Programmers could do this a lot more simply with code.
@user-kr1jp3qr6q 5 months ago
Got excited but clicked off after seeing ComfyUI.
@vuongnh0607l 5 months ago
Missing all the fun stuff
@vintagegenious 5 months ago
Basically, use noisy colorful images to do img2img.
@Slav4o911 5 months ago
Using that unComfyUI again... I just don't like it... I'll wait for the Automatic1111 video.
@T-Bone54 5 months ago
Overblown: an overreaction to a basic background texture achievable in any photo editor. 'Noise'? Really? The Emperor's New Clothes, anyone?
@aruak321 3 months ago
I think the point is to use specific noise patterns to guide your image as opposed to completely random noise with an empty latent. Just another way of experimenting.