Put Yourself Into A Universe Of Art - Using Your Images In Dreambooth Stable Diffusion

3,502 views

Ming Effect

1 day ago

Comments: 19
@martinkaiser5263 · 1 year ago
Hi, and thank you for this video! The first video I've seen that moves at a proper speed to follow. Can't wait until you reveal the secret artist!
@mingeffect · 1 year ago
You're welcome! I just released the video detailing that today.
@vacc06 · 1 year ago
Thanks for sharing! Eager to learn more!
@NirdeshakRao · 1 year ago
You are a great tutor; thanks for sharing your knowledge with all. 🙂🙏
@mingeffect · 1 year ago
So nice of you, thank you.
@abhinavchhabra4947 · 1 year ago
Awesome content, thanks for putting these videos together. Keep posting!
@mingeffect · 1 year ago
Thank you :)
@rikenshah1861 · 1 year ago
Thanks a lot!
@atgaming5318 · 1 year ago
Amazing, thanks!!!
@eugeniusvision · 1 year ago
Eric, when you have a chance, could you record a short video about the new fast-DreamBooth Colab interface? I think they changed quite a few things compared to this video; the DreamBooth area looks different. Appreciate it.
@mingeffect · 1 year ago
That does seem to be the case; the GitHub repo sometimes updates with daily changes. I've got that video on the board for production. Thank you for your suggestion.
@lloydkeays7035 · 1 year ago
Hi, I believe the interface has changed quite a bit since your tutorial (which is great). I was able to get it all up and working. But now, how do I simply go back to the Stable Diffusion interface with my trained model directly? I can't find how to do it without retraining the model. Thanks!
@mingeffect · 1 year ago
DreamBooth produces a session folder on your Google Drive that contains the trained model. You can then move that model to any other folder (I create a "models" folder, then point the standard Automatic1111 Colab at that folder).
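[Editor's note] The move described in that reply can be sketched as follows. The folder names are hypothetical stand-ins; on Colab your Drive typically mounts under `/content/gdrive/MyDrive`, and the actual session and checkpoint names depend on your DreamBooth run.

```python
from pathlib import Path
import shutil

# Hypothetical layout simulated locally; on Colab these would sit under
# /content/gdrive/MyDrive. DreamBooth writes the trained checkpoint into
# a per-session folder.
session = Path("Fast-Dreambooth/Sessions/my-session")
models = Path("models")

session.mkdir(parents=True, exist_ok=True)
models.mkdir(parents=True, exist_ok=True)
(session / "my-session.ckpt").touch()  # stand-in for the trained model

# Move every checkpoint into the reusable "models" folder, which the
# Automatic1111 Colab can then be pointed at.
for ckpt in session.glob("*.ckpt"):
    shutil.move(str(ckpt), models / ckpt.name)

print(sorted(p.name for p in models.iterdir()))
```

After the move, the session folder no longer holds the checkpoint and the models folder does, so re-running the Automatic1111 Colab does not require retraining.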
@eugeniusvision · 1 year ago
@mingeffect With ChatGPT gaining so much buzz, do you think we will soon be able to train AI with OpenAI the way we do in Stable Diffusion and create our own avatars there as well? Or can we already do that?
@mingeffect · 1 year ago
I have a feeling it wouldn't take much for that to happen. As of yet, ChatGPT doesn't generate images (or even random numbers, for that matter), but it wouldn't take much for someone to have it generate code that accesses the API and produces the images elsewhere. As it is, the chat isn't connected to the internet for pulling data. I have a feeling they don't want to test the waters against Google until they've resolved the appropriate legal issues.
@UnfilteredTrue · 1 year ago
How do you get 169 images of yourself? Do you copy the same image multiple times to basically overtrain the model? I am struggling with getting the face right.
@mingeffect · 1 year ago
Hi, thanks for the question. I have another channel where I am often filmed from different angles in different environments; this is key. Vary the lighting conditions (just enough that the image is still well lit but the ambient conditions differ) and the angles. Think of how you'd record yourself in certain 3D apps: start at a low angle looking up at your face and take photos as you circle around yourself side to side. Then set the camera at a higher level and do the same thing, and again, until you eventually get photos at a very high angle around your face and head. Different environments, as well as different clothing, will cause subtle lighting and color reflections on your face and the rest of your body. It's easy to get that number of images with this approach. Different styles of clothing, with shots close, medium, and far away, will also help round out your training images. I hope this gives you a bit more to go on.
@ak_mits · 1 year ago
I think I have just found what I've been looking for! I tried training on my photos (one set with the same background and another version with the background removed), but I still got basic results. It did not take just the face from the photo; rather, it seems to have used the whole frame. So the results all showed the person in the same t-shirt, because the original photos all had the same t-shirt. Do you have any idea how to fix it?
@mingeffect · 1 year ago
The key is to have a variety of angles, from close up to far away. Then come the prompts themselves. Some of mine (with certain trained models) only produced good results for my face when I also included the word "viking" (go figure; I suppose it has something to do with my beard). Think of other words that might tune the render more closely to your face. Maybe add "businessman" or "person", or even throw in a random 4-letter set, which will also change the render: something like "sdd4".
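[Editor's note] The random-token trick mentioned in that reply can be sketched like this. The base prompt, the instance token `mysubject`, and the helper name are all hypothetical; the idea is just that appending a rare 4-character string perturbs the render without changing the rest of the prompt.

```python
import random
import string


def tuned_prompt(base: str, k: int = 4) -> str:
    """Append a random k-character token (letters + digits) to a prompt."""
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=k))
    return f"{base}, {token}"


# Hypothetical usage: "mysubject" stands in for your trained instance token.
print(tuned_prompt("portrait photo of mysubject person, viking"))
```

Regenerating with a fresh token each time gives a cheap way to explore nearby renders while keeping the descriptive words fixed.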