Endless Zoom Demo
1:04
A year ago
Scribble Diffusion Demo
11:52
A year ago
Comments
@laurentpastorelli1354 7 days ago
Great video! More tutorials like this, please!
@MaxRohowsky 8 days ago
Hey, is it possible to get this to work on a Windows system with an AMD GPU?
@A1_Frontier 8 days ago
This is awesome! Do you have a method to train and/or run two people together so they can be in one image?
@thruhazeleyes 9 days ago
The Pokémon one, I think 😂 Don't pretend like you don't know that Pokémon lmao
@2nerC9 9 days ago
This is kinda scary knowing how accurate it is and how it can and will be used to steal other people's voices…
@gbgary 27 days ago
Hilarious
@kamranhuseynov2897 A month ago
This seems redundant and pointless; you're solving a problem that doesn't exist. You can simply use nvidia-container-toolkit with Docker, and it's very straightforward to set up.
@amkidcreation A month ago
The audio is very bad and distracting.
@mikestaub 2 months ago
Can we use webhooks with custom deployments?
@rebelwave100 2 months ago
At least we'll still have epic BBC docs once he pops his clogs.
@SvixHQ 2 months ago
Can't argue with webhooks=yes :) Also, maybe add a link to the project in the description?
@Yam31Yam 2 months ago
Super! Can I import a picture to convert it with AI? 4:21
@codededy 3 months ago
I've got a problem with req.body in my webhook. It logs: Received webhook ReadableStream { locked: false, state: 'readable', supportsBYOB: true }
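A likely cause of that log (a hedged sketch, assuming a Fetch-style request object such as a Next.js route handler or a Worker): req.body is a ReadableStream, so it prints as a stream object unless you read it first with req.json() or req.text().

```javascript
// Sketch: in Fetch-style webhook handlers, req.body is a ReadableStream.
// Logging it prints "ReadableStream { locked: false, ... }"; instead,
// consume the stream with req.json() (or req.text()) to get the payload.
async function readWebhookBody(req) {
  return req.json(); // consumes the stream and parses the JSON body
}

// Simulated incoming webhook request (Request is global in Node 18+)
const fakeReq = new Request("http://localhost/webhook", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ id: "abc123", status: "succeeded" }),
});

readWebhookBody(fakeReq).then((payload) => {
  console.log("Received webhook", payload); // now a plain object
});
```

Note the stream can only be read once; read it into a variable before logging or validating.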
@AlfiPrice-hq2yv 3 months ago
Bro, how do I get access to this software?
@stephenhardy9753 3 months ago
Dev.1 is far better; people actually look real, whereas Schnell still looks cartoon-like.
@iamdihan 3 months ago
Use text=name instead of input=name 😄
@RobertsDigital 3 months ago
Please, I need a response. I need to make an animated character with my own avatar, using my hands. Can it be done on Replicate?
@getcoai 3 months ago
In our past life we had to write FFmpeg voodoo. We'll try your tool for sure ;)
@رهگذرالیاسمشهدی 3 months ago
Hello
@n_0_body 4 months ago
Sometimes Flux Schnell should stop putting lightning in every creation 😅😅
@xylfox 4 months ago
All comedians will be jobless very soon 😅 I heard Attenborough in this new video: kzbin.info/www/bejne/gpW5hGR7mLKoqJI and couldn't tell whether it was him or AI! 😮 To be honest, I'm still not sure after comparing these two videos. But does he even narrate such current videos at 98 years of age?
@Zanbilazy 4 months ago
What's your name, AI?
@nandans2506 4 months ago
Can't we just start calling it magic at this point?
@insanity2753 4 months ago
Thanks for sharing.
@ysteineide178 5 months ago
Tested for almost two weeks now via the Replicate API. Yes, it has a great understanding of long, complicated prompts. Minor challenges with fingers and loose/disfigured body parts; relatively good on eyes, but so far few variables in the appearance of people. Missing support for resolutions higher than 1024x1024, but according to support, it's coming. PNG gives poor results, so I've ended up with WebP at 100 output_quality. The NSFW filter can be set to the lowest level (1 out of 5) and you don't run up against the censorship wall very often, but sometimes you wish you could turn it off completely to preserve the creative flow and bring out the expression you want.
@TheLorismaister 5 months ago
1:19 Did somebody shart?
@anorak6366 5 months ago
Thanks! And you can use Real-ESRGAN in ComfyUI locally, even with a slow CPU.
@Velanteg 5 months ago
Pencil drawings are a very bad example for showing the difference. I didn't understand how it would look in a realistic style.
@edwardferry8247 5 months ago
Dude is making the simplest thing as complex as he can 😂
@PaulFidika 5 months ago
What's the difference between the two, in terms of the actual model?
@lucataco 5 months ago
Dev is trained using guidance distillation; Schnell is smaller and was trained using latent adversarial diffusion distillation.
@SravanKing-x1q 5 months ago
Is there any limit on the Replicate Flux Schnell API, or can we use it any number of times?
@RahulPatel-td6qu 5 months ago
Hello sir, do I have to set up billing before using the Replicate API in my web app?
@replicatehq 5 months ago
You get a little bit of time to try out the API for free. When that time runs out, you'll have to pay to continue using the API.
@Spectre5390 5 months ago
Very nice comparison. The answer is probably no, but would these work with SD/SDXL LoRAs?
@lucataco 5 months ago
Highly unlikely that an SD/SDXL LoRA would work with FLUX because of architectural differences.
@Spectre5390 5 months ago
@lucataco I see. Thanks for the answer.
@CyberwizardProductions 5 months ago
I'm using a dual-model workflow in ComfyUI, based on the basic workflow Comfy created: attach Dev to the BasicGuider and Schnell to the BasicSampler. What comes out has high quality and excellent prompt comprehension.
@tr0ublem4kerWZ 5 months ago
What mechanical keyboard are you using? It sounds nice.
@replicatehq 5 months ago
EPOMAKER CIDOO V75 VIA - www.amazon.com/dp/B0C23GB1G6?psc=1&ref=product_details
@Denkkraft 5 months ago
What type of camera, filter, or anything else do you use for your image? It looks weirdly pixelated but also crisp at the same time, and the black and white gives it an overall artistic style. I really like it. Insanely, actually.
@replicatehq 5 months ago
OBS and iPhone 13. High contrast. Zero saturation. Elgato green screen. 🤷‍♂
@mirek190 5 months ago
Did you use the same T5 text encoder with both?
@replicatehq 5 months ago
The T5 models are configured slightly differently - Schnell inputs have a max length of 256 tokens and Dev a max length of 512 - but everything else (weights, code) is the same. See these links:
huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/text_encoder_2/config.json
huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/text_encoder_2/config.json
@mirek190 5 months ago
@replicatehq Strange… did you set guidance to 0.5-1.0 for Flux Dev for pictures? I'm getting much better results for pictures like that with Flux Dev (I'm using ComfyUI).
@karlisstigis 5 months ago
File > Remux Recordings, drop in your file, click the "Remux" button.
@1amy0u1amy0u 5 months ago
720p in 2024 and everything is white, what the hell man!
@MartinKoss 5 months ago
Gosh, this looks amazing, but on my Mac I am not having any joy at all. Installation went OK and I got an API key; adding that seemed to go OK too. But running aimg just results in errors in the terminal. Not sure if this means anything, but here's the error:
ReferenceError: TransformStream is not defined
    at Object.<anonymous> (/usr/local/lib/node_modules/aimg/node_modules/replicate/vendor/eventsource-parser/stream.js:182:45)
    at Module._compile (internal/modules/cjs/loader.js:1063:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
    at Module.load (internal/modules/cjs/loader.js:928:32)
    at Function.Module._load (internal/modules/cjs/loader.js:769:14)
    at Module.require (internal/modules/cjs/loader.js:952:19)
    at require (internal/modules/cjs/helpers.js:88:18)
    at Object.<anonymous> (/usr/local/lib/node_modules/aimg/node_modules/replicate/lib/stream.js:7:5)
    at Module._compile (internal/modules/cjs/loader.js:1063:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
MK-MacBook-Pro:AI mk$
@replicatehq 5 months ago
Looks like you opened an issue on GitHub. Let's continue the conversation there: github.com/zeke/aimg/issues/1
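For anyone hitting the same error: "TransformStream is not defined" usually means an older Node.js runtime, since Web Streams globals like TransformStream only became available globally in Node 18. A hedged sketch of a guard that surfaces the real problem:

```javascript
// Sketch: detect a Node runtime too old for code that expects the
// Web Streams globals (TransformStream, ReadableStream), which are
// only defined globally from Node 18 onward.
const nodeMajor = Number(process.versions.node.split(".")[0]);

if (nodeMajor < 18 || typeof TransformStream === "undefined") {
  console.error(
    `Node ${process.versions.node} has no global TransformStream; ` +
      "upgrade to Node 18 or newer."
  );
} else {
  console.log("Web Streams globals are available");
}
```

In practice, upgrading Node (e.g. via nvm) is the fix; the stack trace's internal/modules/cjs paths are consistent with an old Node install.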
@spoiltchild6517 5 months ago
Why has it changed from REPLICATE_API_TOKEN to REPLICATE_API_KEY?
@replicatehq 5 months ago
Not sure specifically what you're referring to, but you can name your token env var whatever you like. Note, however, that it is generally referred to as REPLICATE_API_TOKEN, and this is what our client libraries expect by default.
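The convention described above can be sketched like this (resolveToken is a hypothetical helper for illustration, not part of the real client): an explicitly passed token wins, otherwise fall back to the REPLICATE_API_TOKEN env var name that the official client libraries look for by default.

```javascript
// Hypothetical sketch of env-var fallback: an explicit auth option
// takes priority; otherwise read the conventional REPLICATE_API_TOKEN.
function resolveToken(options = {}) {
  return options.auth ?? process.env.REPLICATE_API_TOKEN;
}

process.env.REPLICATE_API_TOKEN = "r8_example_token"; // placeholder value
console.log(resolveToken()); // falls back to the env var
console.log(resolveToken({ auth: "r8_explicit" })); // explicit option wins
```

So a differently named variable works too, as long as you pass its value to the client explicitly.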
@maybegreat 5 months ago
Thanks! Struggling through this myself; it's always welcome to see a breakdown of how others have gone about solving it.
@boskanal3694 5 months ago
Great!
@JimmyGunawanX 5 months ago
What are the FLUX requirements? It seems to run super fast on your computer… my Mac is an M2 with 32GB; will it work?
@replicatehq 5 months ago
It's running in the cloud on Replicate in this video. You can run it locally, but you'll need a pretty powerful machine, and it will still be pretty slow.
@audiogus2651 5 months ago
Hope we get ControlNet!
@audiogus2651 5 months ago
Amazing
@piteshbhanushali1140 5 months ago
Goodbye, Stable Diffusion..
@yvrary 5 months ago
Midjourney*
@laurentpastorelli1354 5 months ago
Yay! More videos like that, please!
@JaysCoolThings 5 months ago
Good video. I wasn't aware this model existed. I'm going to go try it now.
@PrometheusVFX 6 months ago
Very cool website! It seems to generate fast. Is there an improved model out now that fixes the blurriness of the image and adds more detail? I have experimented with ComfyUI, making txt2img workflows and adding a LoRA for more detail. Anything you'd recommend for making a good inpainting model?
@replicatehq 6 months ago
You might want to check out this new version of Stable Diffusion (3) with differential diffusion inpainting: replicate.com/zeke/sd3-inpainting-with-differential-diffusion