Model of the Week Review - Lyriel - Like a gatcha game you pay for with electricity

1,207 views

SiliconThaumaturgy

1 day ago

Comments: 16
@andrewowens5653 · 1 year ago
Thank you very much. A link to the model would be nice.
@siliconthaumaturgy7593 · 1 year ago
civitai.com/models/22922/lyriel. Ask and ye shall receive.
@blitzar8443 · 1 year ago
This model is really neat, it has lots of color and variety.
@toomurmu9148 · 1 year ago
Would like more such videos.💥💥💥💯💯
@ismailtibba · 1 year ago
Great video, we need more like this 🙏
@lukeovermind · 1 year ago
That was cool, more please!
@JavierPortillo1 · 1 year ago
Yey! I love model showcase videos!
@demoran · 1 year ago
I like this kind of stuff. I'd love to see this become a monthly thing. Though things move fast. If you don't stop and look around once in a while, you could miss it.
@pn4960 · 1 year ago
Nice !
@MrSongib · 1 year ago
I just did a similar benchmark the other day (more scuffed than yours) for all the models I have. You can spot the lineage from one model to another: since most models are merges of realistic, anime, and Midjourney-ish models, you can usually backtrack their source models, and you can make a good model from those sources too. It seems fun.

My method was almost the same as yours but with a low number of runs: clip skip 1 and 2, straight 768x512 in either portrait or landscape, and no hires fix, embeddings, ADetailer (extension), or LoRAs. That shows me how the model reacts to certain prompt structures. I also didn't look at the "suggested" prompt from the model page; going hands-on teaches you more, since most people build their prompts differently. After that, I run my favorite sampler (or just test all of them, xd) and see how the model reacts to each; sometimes you get better results from a different sampler even when the page recommends a specific one.

For steps, I use 25, since that's the range I generally use (and I think most models do fine there); once a specific seed produces something good, I add more steps for it. Beyond that it's kind of a waste. I learned this from you and other people too, thank you.

For prompts, I use a simple description of the subject plus basic add-ons (either realistic or stylized). Putting strong words like "masterpiece" or a style at the front defeats the purpose of seeing how the model reacts to a general prompt; keeping it vaguer shows its default behavior better. Since my vocabulary is quite bad, I just use GPT for the add-ons and adjust from there. And again, I don't use embeddings, which would defeat the purpose of testing, imo. "Subject + add-ons" comes to 75-150 tokens; when I can, I reduce it to around 75.

Here is a test prompt example I used today (95 tokens):

Positive: Female in a fantasy world, Fluid brushwork, Bright colors, Emphasis on light and atmosphere, realism, impressionism, Post-impressionism, light and color, attention to detail, by master of portraiture, by master of realism, (realism:1.2) and (naturalism:1.2), (impressionistic:1.2) brushwork, (portraiture:1.2) and (figurative:1.2) focus, (play of light and shadow:1.2), (elegant compositions:1.2) and (balanced:1.2) arrangements, (rich color palette:1.2) and (bold:1.2) use of color, and (masterful technique:1.2) with (attention to detail:1.2)

Negative: (Disastrous:1.2) composition, (awkward:1.1) proportions, (unrefined:1.1) style, (unimpressive:1.2) technique, (unengaging:1.3) subject matter, (lack of skill:1.2), (sloppy execution:1.2), (clashing:1.1) patterns, (incoherent:1.3) theme, (distorted:1.2) perspective, (amateurish:1.3) execution, (lack of creativity:1.2), (messy:1.1) arrangement, (lack of impact:1.1), (poorly defined:1.1) shapes, (lack of originality:1.1), (low resolution:1.3), (noisy:1.3), (blurry:1.2), (grainy:1.3), (unclear subject:1.4), (subpar:1.2), (bad camera angle), (ugly anatomy feature:1.3), (poorly chosen lighting:1.3), (unattractive color palette:1.2), (muddled details:1.2), (lack of depth:1.1), (unappealing texture:1.1)

The evaluation itself has a few passes. First, realistic subjects like people, for a general idea of how the model handles texture; if it can do that, it can usually do other realistic stuff too. Second, stylized (fantasy and sci-fi), which is how I evaluate color, what the model knows, and how it handles blending or "bleeding" (fantasy and sci-fi mostly blend real things together with more color). Third, I try different samplers, starting from my favorite, to decide which I like most. Fourth, after all this you have the gist of the model without giving it any leverage like hires fix, embeddings, detailer extensions, or LoRAs.

Lastly, your videos have helped me a lot with the technical side of SD since I'm quite new to it. Maybe you can cover how prompts and weights work in the future? I still struggle to invoke certain camera angles and colors (though I learned about color today from the video "kzbin.info/www/bejne/horTY5RtorqBfqs"), and I don't like using ControlNet, it's boring (until I get frustrated enough with an angle, then I'll do it). xd
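[Editor's note] The prompt in the comment above leans heavily on AUTOMATIC1111-style attention weights such as `(realism:1.2)`, where the number multiplies the term's emphasis. As a rough illustration of how that syntax is read, here is a simplified sketch; it is an assumption-laden toy that handles only explicit, non-nested `(term:weight)` pairs, while the real web UI also supports nesting, bare parentheses (x1.1), and square brackets (x0.9):

```python
import re

# Simplified sketch: match only explicit "(term:weight)" pairs,
# e.g. "(realism:1.2)". Nesting, "(term)" = x1.1, and "[term]" = x0.9
# from the real A1111 syntax are deliberately not handled here.
WEIGHTED = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt: str) -> list[tuple[str, float]]:
    """Return (term, weight) pairs; unweighted text defaults to 1.0."""
    pairs = []
    last = 0
    for m in WEIGHTED.finditer(prompt):
        plain = prompt[last:m.start()].strip(" ,")
        if plain:
            pairs.append((plain, 1.0))       # text outside parentheses
        pairs.append((m.group(1), float(m.group(2))))  # weighted term
        last = m.end()
    tail = prompt[last:].strip(" ,")
    if tail:
        pairs.append((tail, 1.0))
    return pairs

pairs = parse_weights("fantasy world, (realism:1.2) and (naturalism:1.2), detailed")
# pairs now contains ("realism", 1.2) and ("naturalism", 1.2) alongside
# the unweighted fragments at weight 1.0.
```

Downstream, the pipeline scales each term's text-encoder embedding by its weight, which is why values much above ~1.4 tend to distort results rather than emphasize them.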
@tstciuqz · 1 year ago
Wow, I love how you present your study results! Could you share, in general, good practices or a framework for studying a model?
@headsink · 1 year ago
Realistic Vision next.
@achiche1337 · 1 year ago
As an idea, you could also do this with the most popular models and compare them against each other.
@kinlih289 · 1 year ago
Very cool! Can you suggest a model that can do complex stuff, as in complex poses or concepts? Anime-specific models are generally better in this regard but still not as good as Midjourney, and the rest of the popular models were inconsistent. (Or simply add a complexity test to your future model reviews.) Much thanks :)
@siliconthaumaturgy7593 · 1 year ago
Based on my testing, I think the bottleneck for complexity is inherent to the version of CLIP in SD 1.5 (~3 things at once at >50% accuracy). Regional Prompting in Multidiffusion or other extensions can help, but isn't without its own challenges. Theoretically, SD 2.1 should allow more complexity with its improved CLIP, but no one uses it, so I haven't bothered to test it. I'm optimistic SDXL will offer improvements though.
@ywueeee · 1 year ago
Can you make a video on how to replicate the generative AI capabilities of Adobe's latest release using SD? Do it ASAP and get many views ;)