Wizard-Vicuna: 97% of ChatGPT - with Oobabooga Text Generation WebUI

  56,976 views

Prompt Engineering

1 day ago

In this video, we explore a unique approach that combines WizardLM and VicunaLM, resulting in a 7% performance improvement over VicunaLM. We will be running the model locally using the powerful Oobabooga text generation webui.
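For readers who want to follow along, here is a minimal install-and-launch sketch. This is an assumption-laden outline, not the video's exact steps: script names and folder layout vary between webui versions, and the model filename below is only an example.

```shell
# Clone the Oobabooga text-generation webui (repo linked below)
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# The repo ships one-click start scripts that set up a conda env and
# install dependencies on first run (names may differ by version):
./start_linux.sh          # on Windows: start_windows.bat

# Downloaded model weights go under models/; after placing a file such
# as wizard-vicuna-13B.ggmlv3.q4_0.bin there, load it from the "Model"
# tab in the browser UI (Gradio serves on http://127.0.0.1:7860 by default)
```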
LINKS:
--------------------------------------------------
WizardVicunaLM: github.com/mel...
Model on HuggingFace: huggingface.co...
Oobabooga WebUI: github.com/oob...
-------------------------------------------------
☕ Buy me a Coffee: ko-fi.com/prom...
Join the Patreon: patreon.com/PromptEngineering
-------------------------------------------------
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...

Comments: 115
@shotelco 1 year ago
Been waiting for this to be released. You will have plenty of new videos based on this platform. Hopefully, the next is how to train on internally created datasets, and more importantly, how to set up automated continuously updated training. Thanks!
@seldian 1 year ago
Loving these local guides!!!! Thank you! Edit: Thank you for including the uncensored recommendations too, they are hilarious to mess with
@sayantandas7544 1 year ago
The more I watch your videos, the more amazed I become. It's incredible how much AI has already evolved; so many things are already available for free to play around with. Your videos are always very thorough and easy to understand. 🚀🚀
@engineerprompt 1 year ago
Thank you, it's just getting crazy day by day!
@TH-nr2sp 1 year ago
Wonderful! I was able to set it up easily. Thank you very much.
@ekstrajohn 1 year ago
Vicuna-WizardLM trained on GPT4 outputs, so it makes sense that it would be closer to what a GPT4 judge would prefer. It's not a very useful test imo ...
@reinerheiner1148 1 year ago
How dare you question the neutrality of the judge! 😁
@anispinner 1 year ago
Good point.
@hfislwpa 1 year ago
Thanks for the video! Would like to see how to fine-tune, as you mentioned, please.
@wilfredomartel7781 1 year ago
Would love to see the text summarization performance.
@jayhu6075 1 year ago
This stream is so useful for everybody starting out, showing how to install and experiment with this and understand the fundamentals. Many thanks.
@yassinebouchoucha 1 year ago
It's always great to have a step-by-step video explanation, even if it's just simple clicks and cmd commands! (Personally, I get lost navigating the new AI tools/frameworks that appear every day.) Looking forward to your in-depth video about Oobabooga, and whether we can integrate its API with LangChain!?
@seldian 1 year ago
Can you make a video explaining training, and how it works?
@madelynmills712 1 year ago
Thanks for this. Very helpful!
@nasiraden4628 1 year ago
Did I miss something? If you choose B (AMD), it says the GPU is not supported. Does anyone have the same issue?
@ultimatefaux 1 year ago
Great tutorial. Thank you! 🐙
@TheRMartz12 1 year ago
The one-click installers are not available anymore.
@SB-uh8ik 1 year ago
I have the text-generation webui and I want to train the Llama 2 7B model with my own data in txt, but I get an error. Has anyone already done it, or can you give me the instructions? I have been trying for a week with nothing to show for it, so I'm hoping someone can help me with this, whether on a call where you show me how to do it, or here.
@lovol2 1 year ago
Thanks. Yes, please show training. Also, please show the API. Wish I was at my PC now to try it!
@ModelAIweb 1 year ago
Following your instructions, Llama 2 gets stuck after the first prompt. Can you help with a video on tweaking the parameters to improve code generation and longer conversations?
@janalgos 1 year ago
Would like to see how Wizard-Vicuna's performance compares to StableVicuna.
@dodeekdd961 1 year ago
Hi, for some reason when I download oobabooga I get different files from the ones in the video. Also, when I try to start it up, I only get one folder named installer_files and there is no text-generation-webui folder. Anyone know what might be going on?
@sahanfernando4414 1 year ago
Same, did you figure it out?
@dodeekdd961 1 year ago
Yes
@dodeekdd961 1 year ago
I deleted the oobabooga folder and reinstalled it. I also switched from option A (NVIDIA) to D (internal CPU).
@sahanfernando4414 1 year ago
@@dodeekdd961 I figured it out already. I just deleted the installer_files folder and ran it again. I made sure to use the right capitalisation for the options this time and it ended up working. Might have just been an error, might have been capitalisation (which would be really stupid), but it works as intended.
@sahanfernando4414 1 year ago
@@dodeekdd961 Kept it on NVIDIA btw.
@flyLeonardofly 1 year ago
This is a very misleading metric. It suggests that Vicuna is as "good" as, or very close to, ChatGPT, but I tried it for the things I use ChatGPT for, like learning how certain APIs work (programming), and it's pretty useless for that.
@EngkuFizz 1 year ago
Fact
@ShaunPrince 1 year ago
So much misinformation in this vid: no explanation for your choices, and GGML models don't load on the GPU. If you use a HuggingFace-format or GPTQ model it will run on a GPU, and a 1080 Ti will output almost the same tokens per second as an RTX 3060. Also, TheBloke's repos usually contain many unneeded files and are confusing for new users.
@KwissBeats 1 year ago
I highly disagree with that last comment. TheBloke provides unbroken files that don't need the stupid merge with Facebook's LLaMA.
@josenerydev 1 year ago
How do I know if this uses the GPU? I set it up for NVIDIA, but it uses the CPU.
@liquidsnakeblue1 1 year ago
Same problem
@ChessScholarOfficial 1 year ago
Set n-gpu-layers based on your GPU. I'm on an RTX 4070 and have it set to 33. Make sure your model isn't too big for the GPU, otherwise speeds will be very slow.
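As a sketch of what that setting looks like from the command line: the webui exposes it as the `--n-gpu-layers` flag for llama.cpp-style models. The model filename below is a hypothetical example, and the flag's behaviour depends on the webui version.

```shell
# Offload 33 transformer layers to the GPU when loading a GGML model;
# lower the number if you hit out-of-memory errors, raise it if you
# have spare VRAM (remaining layers run on the CPU)
python server.py \
  --model wizard-vicuna-13B.ggmlv3.q4_0.bin \
  --n-gpu-layers 33
```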
@aashas8553 1 year ago
It's so weird to me, I get an insane amount of error messages and cancellations.
@SAVONASOTTERRANEASEGRETA 1 year ago
Can you give me some advice? My Alpaca-13B-GGML model is fairly fast but responds however it wants. I tried changing parameters but to no avail. What do you suggest I do?
@McVerdict 1 year ago
Can you help me? I'm trying to use the API for a program, but it doesn't seem to let me use the web UI characters through the API.
@didiervandendaele4036 1 year ago
Thank you ❤
@behroozhussaini2339 1 year ago
Hi, how do I share the Vicuna GUI with other computers on the local network? I installed it on my server and now I want to access it from my phone or other computers, but I can't find a way to do that. Please help.
@conradtwonine9414 1 year ago
How could I add my own database or file library to this model to create my own version?
@SAVONASOTTERRANEASEGRETA 1 year ago
Hi. I need to know something I don't understand. If I want to give my assistant the text of a new book, do I put it in the .bin file or in a webui folder? Thank you.
@muazashraf409 1 year ago
If we want to host the text-generation webui, do we have to download it manually?
@pangalord 1 year ago
Does it actually run on Windows? It seems you started installing on Windows, then finished on Linux.
@engineerprompt 1 year ago
It does, but I don't have a GPU on my Windows machine. That's why I switched to Linux; I mentioned that in the video.
@michaelroberts1120 1 year ago
@@engineerprompt But isn't this thing supposed to be able to run on a CPU? You're even given that option during installation!
@WildMidwest1 1 year ago
No joy for me under two Linux versions: Linux Mint 21.1 and Manjaro (current). I have 64 GB RAM and a Ryzen 7 CPU with a lowly integrated AMD GPU. The script fails at different points with each OS.

In Linux Mint, the start script puts up a bunch of errors after downloading four 8.1 to 14.6 GB files from TheBloke_wizard-vicuna-13B-GGML, declaring “error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?” Various tokenizer errors and assertion errors follow before giving the message Done!

In Manjaro, the start script terminated after “Building wheel for llama-cpp-python (pyproject.toml)… error. Subprocess exited with error.” Further down, CMake complains “49 (message): Could not find compiler set in environment variable CC: gcc-9”, then “Trying Ninja generator - failure.” Further down, “scikit-build could not get a working generator for your system. Aborting build. Building Linux wheels for Python 3.10 requires a compiler (eg gcc).” Manjaro Linux has gcc 12.2.1 preinstalled. Downgrading gcc 12.2.1 to gcc 9 is a questionable idea, at best, and not easily accomplished by most of us. (I read a bunch of threads before abandoning this effort.)

ADDENDUM: Posting an update for the benefit of others. The challenge I was having with oobabooga in Linux Mint (an Ubuntu derivative) was due to the recent removal of bit-shuffling from llama.cpp, requiring re-quantization of the base model AND updating llama-cpp-python to version 0.1.50 or greater. The command that got Wizard-Vicuna running for me was:
> pip install llama-cpp-python==0.1.50
@Cashvib-f4w 1 year ago
How to fix 'cadam32bit_grad_fp32' not found? Need help.
@JuliusCaezar-d8x 1 year ago
Why does nothing happen when I run start_windows.bat? Need a little help here, thanks.
@davidstone2700 1 year ago
How can we give it LangChain and online access? With the EdgeGPT plugin? Please help.
@greendsnow 1 year ago
Finally... a prompt tutor with no GPU!
@SAVONASOTTERRANEASEGRETA 1 year ago
Why don't I see the transformer parameters in my latest version?
@patrickmchargue7122 1 year ago
Miniconda won't start for me. I ran the Miniconda installer, but no joy. I'm running Python 3.11 on my system.
@MrTPGuitar 1 year ago
Why do you keep saying "compared to ChatGPT" when the readme specifically compares it to GPT-3.5?
@jodter1 1 year ago
How do I use this and other models, not through the terminal or the chat website, but directly in a Python or JS project?
@hafizkk99 1 year ago
Does this work on my 4 GB RAM system? I don't have a GPU.
@itsalivevideo 1 year ago
What hardware are you using in your demo?
@xilix2770 1 year ago
Is there a way to make the chat logs .txt instead of .json? I would love the chat log format to look like this:
name1 "Hello."
name2 "Hi."
name1 "Nice weather today."
name2 "Indeed."
@chrisbraeuer9476 1 year ago
Can someone point me to an uncensored version? This one seems to work, but refuses to reply to many questions.
@kethtemplar8989 1 year ago
Does the AMD option work?! Anyone?! Please, tell me! I am DESPERATE for faster than 398+ second CPU thinking time!
@jichaelmorgan3796 1 year ago
Is there any LLM that could feasibly work on an i5-4440 with 16 GB of RAM?
@Atom5355 1 year ago
For those wondering: I installed it on a PC with a 7900 XTX and a 7950X; in CPU mode it works fine.
@engineerprompt 1 year ago
Nice!
@henriquemiranda3762 1 year ago
Hi, how do I add support for more languages to this model? E.g. Portuguese.
@engineerprompt 1 year ago
You will want to look at specific models that are multilingual.
@harris6363 1 year ago
Waiting for the video on how to use the chat settings.
@michal5869 1 year ago
Ehh, I don't see any real difference from the old Vicuna or Wizard. For example, asking it to write code in C++ is a waste; if I want more, I'll switch to Java. The answers are short and they misunderstand my questions. I compared all the models.
@valdesguefa9513 1 year ago
What are the specs of your computer?
@homosuperior1337 1 year ago
It's not working on my M1 machine.
@greendsnow 1 year ago
How can I make an API out of that?
@mygamecomputer1691 1 year ago
Can confirm, this webui from Oobabooga does indeed enable you to have spicy role-play offline. Thank you for covering this.
@DrGuy118 1 year ago
spicy role-play 🥰
@chrisbraeuer9476 1 year ago
How? Mine refuses to do uncensored stuff. Is there some setting I'm missing?
@cleverestx 1 year ago
How spicy we talkin'? LOL
@mygamecomputer1691 1 year ago
There is no limit to how spicy you can make it if you're using the uncensored version. Fantasy at its finest.
@mygamecomputer1691 1 year ago
You do have to make sure you're using the model that specifically says it's uncensored, and ideally also use the storyteller preset.
@lukacosic1353 1 year ago
Hello everyone, I've only recently started dealing with AI, so I'm still a beginner. Thank you for the video, it helped me a lot to set up the system. My question is, how can I share the WebUI output with other computers on the network? I installed the system under Proxmox on a server without a desktop and would like to access the WebUI from my workstation. Can anyone help me set this up? Thanks in advance and best regards, Luka
@RpgBlasterRpg 1 year ago
How can I make the text generation faster?
@engineerprompt 1 year ago
Better hardware :)
@sjtrader1363 1 year ago
If that's the performance on a 1080, then what can people who have a 1050 do?
@aresdeus2996 1 year ago
I thought that too, hahaha.
@reinerheiner1148 1 year ago
Cry? ^^ But more seriously, you can always use Google Colab and a free cloud GPU that's about as fast as a 1080 Ti, or use the CPU, depending on your hardware. Besides, a 1050 may not even have enough VRAM to run it.
@sjtrader1363 1 year ago
@@reinerheiner1148 Thanks bro. My desperate comment didn't go to waste after all.
@annwang5530 1 year ago
Can you analyze FreedomGPT?
@withcarename 1 year ago
97% of ChatGPT? What a bold claim.
@mbrochh82 1 year ago
I believe it is 97% of GPT-3. So yeah, still not very useful.
@simonalkemade 1 year ago
@@Utoko lmsys is down, probably too busy
@gileneusz 1 year ago
Got an error on Mac M1: ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
@WildMidwest1 1 year ago
Same issue here building in Manjaro Linux. The start script terminated after “Building wheel for llama-cpp-python (pyproject.toml)… error. Subprocess exited with error.” Further down, CMake complains “49 (message): Could not find compiler set in environment variable CC: gcc-9”, then “Trying Ninja generator - failure.” Further down, “scikit-build could not get a working generator for your system. Aborting build. Building Linux wheels for Python 3.10 requires a compiler (eg gcc).” Manjaro Linux has gcc 12.2.1 preinstalled. Downgrading gcc 12.x to gcc 9 is a questionable idea, at best. I do not have these issues building the wheel in Linux Mint; however, I have other apparently insurmountable problems installing in Mint. Huge time waste, but maybe educational?
@WildMidwest1 1 year ago
Posting an update for the benefit of others. The challenge I was having with oobabooga in Linux Mint (an Ubuntu derivative) was due to recent removal from the llama.cpp of bit-shuffling, requiring re-quantization of the base model AND updating llama-cpp-python to version 0.1.50 or greater. The command that got Wizard-Vicuna running for me was: > pip install llama-cpp-python==0.1.50
@michaelroberts1120 1 year ago
Maybe the creator of Oobabooga for Windows should contact the creator of koboldcpp and ask him for lessons in how to code a program for Windows. When you see how much the creator of koboldcpp has crammed into a single file less than 20 megabytes in size, versus the monstrosity that is Oobabooga, it is quite clear that the creator of Oobabooga for Windows doesn't know what he is doing.
@hemzabomb 1 year ago
Nothing I try to install works.
@michaelroberts1120 1 year ago
Don't waste your time downloading and attempting to install this thing (Oobabooga for Windows). You need Microsoft Visual C++ 14.0 or greater; no mention of that is made in the text files. On top of that, even if you specify CPU-only mode during installation, the program still fails to initialize, because numerous modules won't start since they have been compiled without CUDA support. The whole thing is a hot mess and reminds me of why I hate Linux, and Linux mixed with Windows is ten times worse! Until someone comes up with a pure Windows version, don't waste time with this unless you're a complete masochist!
@Spock69able 1 year ago
I tried to install it on my Windows PC, got tons of errors, and it's not running! I guess you are right!
@ScreamCheese13 1 year ago
This post was a million times more helpful than anything else I've found. I've been trying to make this work for days and it absolutely refuses to. While it would be nice to find a comment that solves the problem and saves you time, the second best thing is to find a comment that tells you to just stop wasting it on a horribly designed product with pitiful instructions.
@yassinebouchoucha 1 year ago
Same issue, it won't compile if built with the CUDA env.
@randomman5188 1 year ago
It's not that difficult, and if you can't download Microsoft Visual C++ 14.0 or greater then you shouldn't even be using an LLM. It specifically says it's for Windows, not Linux, so that's why you are having issues; the Oobabooga installer is simple and straightforward.
@ScreamCheese13 1 year ago
@@randomman5188 Congratulations on not even reading my post. That's not at all where the problems were.
@NebulousTwist 1 year ago
6:20
@marcosbenigno3077 1 year ago
Thank you very much, Prompt. I predict that we will soon have strong opposition to the use of AIs; I am currently downloading and saving all available models (almost 1 terabyte). Of all the interfaces I've tested (I'm a layman), oobabooga is the most promising. If you could explain the difference between the file types, it would be nice (*.pt, pytorch*.bin, ggml*.bin, *.safetensors). So far I've only managed to run single *.bin files. Excuse my ignorance, but this is what AIs came for: making generalists into specialists and vice versa. This is the last frontier and it will be blocked before 2030. Thanks for the videos. Marcos. South America. Brazil. Minas Gerais. Betim
@AaliDGr8 1 year ago
That's a very stupid thing, you people just making videos without knowing everything. Listen, it would be great and amazing, as good as ChatGPT, if this could also edit images like GPT-4 can. If it can't edit pictures, then it will not be amazing or cool.
@CCCW 1 year ago
I get a "KeyError: 'serialized_input'"
@jlpfytb 1 year ago
I followed the steps but got these errors when starting, can someone help me?
INFO:Loading the extension "gallery"...
Traceback (most recent call last):
\server.py", line 885, in create_interface()
\server.py", line 472, in create_interface
with gr.Blocks(css=ui.css if not shared.is_chat() else ui.css + ui.chat_css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']:
\blocks.py", line 1261, in get_config_file
"input": list(block.input_api_info()), # type: ignore
blocks.py", line 1285, in __exit__
self.config = self.get_config_file()
serializing.py", line 40, in input_api_info
KeyError: 'serialized_input'
@jlpfytb 1 year ago
@@reinerheiner1148 thanks a lot
@lacknerish 1 year ago
If anyone else comes across this problem, the fix is to install a newer Gradio:
$ pip3 install gradio==3.28.3
@opitts2k2 1 year ago
If you are like me and were wondering where to place the LLM model, if you ran start_windows.bat and were greeted with a URL instead of the command prompt :( you place it here:
oobabooga-windows
└───text-generation-webui
    └───models
        └───anon8231489123_vicuna-13b-GPTQ-4bit-128g
            └───Wizard-Vicuna-30B-Uncensored.ggmlv3.q8_0.bin
@fakeaccount-t3r 1 year ago
Excuse me, but would anyone know how to fix this problem?
!! self.initialize_options()
installing to build\bdist.win-amd64\wheel
running install
running install_lib
creating build\bdist.win-amd64
creating build\bdist.win-amd64\wheel
creating build\bdist.win-amd64\wheel\sentence_transformers
creating build\bdist.win-amd64\wheel\sentence_transformers\cross_encoder
copying build\lib\sentence_transformers\cross_encoder\CrossEncoder.py -> build\bdist.win-amd64\wheel\.\sentence_transformers\cross_encoder
creating build\bdist.win-amd64\wheel\sentence_transformers\cross_encoder\evaluation
copying build\lib\sentence_transformers\cross_encoder\evaluation\CEBinaryAccuracyEvaluator.py -> build\bdist.win-amd64\wheel\.\sentence_transformers\cross_encoder\evaluation
error: could not create 'build\bdist.win-amd64\wheel\.\sentence_transformers\cross_encoder\evaluation\CEBinaryAccuracyEvaluator.py': No such file or directory
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for sentence-transformers
Running setup.py clean for sentence-transformers
Failed to build sentence-transformers
ERROR: Could not build wheels for sentence-transformers, which is required to install pyproject.toml-based projects
Command '"C:\Users\???\Downloads\AI\oobabooga_windows\oobabooga_windows\installer_files\conda\condabin\conda.bat" activate "C:\Users\???\Downloads\AI\oobabooga_windows\oobabooga_windows\installer_files\env" >nul && python -m pip install -r extensions\openai\requirements.txt --upgrade' failed with exit status code '1'. Exiting...
Press any key to continue . . .
@kiarasha1106 1 year ago
I had the same issue; your file is inside another folder, just take it out.
@WESTSIDE15 1 year ago
How can I run this program again?
@Dragons_Armory 1 year ago
6:28