How To Use AutoGen With ANY Open-Source LLM FREE (Under 5 min!)

  129,080 views

Matthew Berman

8 months ago

A short video on how to use any open-source model with AutoGen easily using LMStudio. I wanted to get this video out so you all can start playing with it, but I'm still figuring out how to get the best results using a non-GPT4 model.
Enjoy :)
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
AutoGen Beginner Tutorial - • AutoGen Tutorial 🚀 Cre...
AutoGen Intermediate Tutorial - • AutoGen FULL Tutorial ...
AutoGen - microsoft.github.io/autogen
LMStudio - lmstudio.ai/

Comments: 390
@matthew_berman · 8 months ago
Should I do a full review of LMStudio?
@bonoxchampion3820 · 8 months ago
Absolutely! Being able to self-host an LLM that exposes an API is amazing!
@morganandreason · 8 months ago
Absolutely! Seems better than oobabooga/text-generation-webui, doesn't it? I would like to see if it's possible to make it use AI "character templates" downloaded in JSON format, for instance, or as embedded chunks in an image file. Basically, can it act directly as a replacement for TavernAI and similar, or can it replace Oobabooga as the server running behind the scenes of TavernAI?
@hotbit7327 · 8 months ago
How is it 'completely free, completely open source' if LMStudio seems proprietary and is Mac/Windows only, with no Linux?
@MrAndi1281 · 8 months ago
Yes! Please do!
@MrMoonsilver · 8 months ago
Absolutely! Questions that interest me: Can I host on a different machine from the one where I'm using LM Studio? I have a dedicated Linux machine but would love to use Windows to work with the LLM via the API. Also, does it support multi-GPU setups? Data parallelism and inference?
@DevonAIPublicSecurity · 8 months ago
All I can say is thank you for your videos; you give enough information to get things up and running without making it overkill. Please keep making more videos like these, I am learning so much...
@neoblackcyptron · 7 months ago
You are a lifesaver; you give so much top-notch content for free. I am about to start out at a startup where we plan to use a mix of GenAI (driven by tools like AutoGen) and traditional ML models (I wonder if we will ever need those again) with some RPA to spice things up. These videos of yours have given me full coverage of what I will need to do on the GenAI side of things, which is very new for me.
@mcusson2 · 8 months ago
Thank you for this timely video in my rough AI journey. I feel this is the boost I needed.
@naytron210 · 8 months ago
Man, thanks! Can't believe how easy it is -- was a great idea to build LMStudio to mimic the OpenAI API. Definitely looking forward to seeing more content on your exploration of this!
@matthew_berman · 8 months ago
You're welcome!
@lomek4559 · 8 months ago
I followed this video guide, and unfortunately I haven't figured out how to fix "KeyError: 'choices'" in completion.py, or "AttributeError: 'str' object has no attribute 'get'". It seems like the AutoGen files still need some code updates (mine is version 0.1.11).
@shubhamdayma5209 · 8 months ago
I followed this, but it fails to generate the .py file in the coding folder. I confirmed the folder name, etc. It seems AutoGen searches for a specific key to determine whether the chat response is code or text, and that's where the LM Studio API fails.
@PRATHEESH15 · 8 months ago
I'm getting the same issue @lomek4559 @matthew_berman
@ArianeQube · 8 months ago
I was waiting for this ever since Autogen came out. Thanks :)
@matthew_berman · 8 months ago
Hope you like it!
@shannonhansen77 · 8 months ago
I have to say, I was struggling with this exact task: getting an open-source model to load up and expose an OpenAI API endpoint. Awesome content as usual!
@peralser · 7 months ago
Matthew, thanks for your time. You do an amazing job promoting these things in the way that you do. Thanks again.
@cristian15154 · 8 months ago
Finally something local and totally free, amazing thanks, it's been a long wait!
@dataprospect · 8 months ago
You are the best! You sometimes feed us directly what we need and sometimes you teach how to catch the fish. In this video, you did both.👏
@leonwinkel6084 · 8 months ago
Wow, super nice stuff! This is what I was waiting for! It makes it so easy to use LLMs basically anywhere. Amazing! Thanks for sharing this ultra-valuable content with us 🙏🏼🙏🏼🙏🏼
@matthewstarek5257 · 4 months ago
I'm so glad I found your channel. When I watch your videos, I feel confident that I know the latest info on AI and how to best utilize these tools. Thank you for doing what you do! You rock! 🤘🤘🤘
@peterc7144 · 8 months ago
Amazing work, thank you so much for sharing this! Now let's make this a start of a new era of locally running autonomous assistants which are actually helpful and free to use.
@Jwoodill2112 · 8 months ago
Hell yeah. What a time to be alive!
@fleshwound8875 · 8 months ago
I tried to get this done myself, so this will save me a lot of time lol. Thank you!
@matthew_berman · 8 months ago
Glad I could help!
@jonathanozik5442 · 8 months ago
Thank you so much for showcasing this. I've been using LM Studio and GPT4All for a few months now and I really like them. One problem: I could not get LM Studio to use my GPU, though other people have been successful.
@the_CodingTraveller · 4 months ago
I love the way you teach and explain stuff. It is the right tone for me AND you look like Gale from Breaking Bad.
@TheJnmiah · 8 months ago
Thank you for this! WOW, this runs very well on my laptop. I'm playing with Mistral right now, and so far it's great!
@aliabdulla3906 · 8 months ago
Believe it or not, I found myself unconsciously pressing the like button within the first minute. You are my hero. Please keep putting up beautiful content like this.
@Dewclaws · 8 months ago
First off, thanks for all the content; it is very evident that you put a lot of research time into each upload. That said, there's one small suggestion I'd like to make: it would help if you could include links to the repositories and tools brought up in your videos. Often I find myself wanting to play around with and learn more about a showcased tool, but without direct links it can sometimes be a bit of a hunt. I understand that adding links might take a bit of extra time, but I believe it would improve your channel.
@zyxwvutsrqponmlkh · 8 months ago
Links to autogen and lm studio are currently in the description.
@Dewclaws · 7 months ago
@@zyxwvutsrqponmlkh Thank you for updating that.
@RetiredVet · 5 months ago
@zyxwvutsrqponmlkh The problem is that AutoGen is changing rapidly, and a number of links in Matthew's descriptions no longer work. So far I have found one link on AutoGen's site that does not work. Having the code would make it easier to follow along.
@francoisneko · 8 months ago
I would love to see a video about how to fine-tune a local model on your own files, like several text or PDF documents.
@matthew_berman · 8 months ago
You might just need RAG rather than fine-tuning.
@LiberyTree · 8 months ago
What's a RAG?
@DanielSCowser · 8 months ago
Following @LiberyTree
@echofloripa · 8 months ago
@LiberyTree Retrieval-Augmented Generation. Basically embeddings and a vector database.
@francoisneko · 8 months ago
@matthew_berman Oh, I see. Thank you, I didn't know about RAG; it looks like exactly what I need.
@Norvieable · 8 months ago
Keep us posted, awesome job man! :)
@ajarivas72 · 8 months ago
His work is incredible. It is challenging to keep up with all the information presented on this YouTube channel.
@manuelherrerahipnotista8586 · 8 months ago
Thanks man. This opens up a lot of very interesting stuff to try.
@Q9i · 8 months ago
LETS GO!!! THE ONE WE NEEDED! MY MAN! THIS IS WHY WE SUB!
@matthew_berman · 8 months ago
Thank you!
@user-jg4ci4mf8w · 8 months ago
Awesome find, Matt.
@WisienPol · 8 months ago
OK, now I am totally convinced to start playing with AutoGen :D Thanks, mate.
@tertiusdutoit9946 · 8 months ago
This is awesome! Thank you for sharing!
@tal7atal7a66 · 8 months ago
'Fully local', I love that phrase ❤. Thank you bro for your professional info and tutorials...
@matthew_berman · 8 months ago
My pleasure!
@chase5513 · 8 months ago
I don't know how I'm only now stumbling upon your content, damn algorithms. Would LOVE to see this improved! Cheers
@matthew_berman · 8 months ago
lol. welcome!
@CarisTheGypsy · 8 months ago
This is a great video; it makes setup very easy. One issue I encountered was hitting a 199-token limit, which seems to be a default. You might want to add "max_tokens": -1 to your llm_config, or set it to some more reasonable number, since 199 is very easy to hit and then the output just stops.
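For anyone trying the same fix, here is a minimal sketch of what that llm_config might look like. It assumes the AutoGen 0.1.x-era key names (api_base, api_key) that other commenters in this thread use and LM Studio's default local server address; newer releases rename some of these keys.

```python
# Sketch only: key names follow early (0.1.x-era) AutoGen conventions;
# adjust for your AutoGen and LM Studio versions.
llm_config = {
    "config_list": [
        {
            "api_base": "http://localhost:1234/v1",  # LM Studio's local OpenAI-style server
            "api_key": "NULL",                       # placeholder; the local server ignores it
        }
    ],
    # Raise the default cap the commenter hit. -1 would mean "no limit",
    # but an explicit number helps keep agents from rambling forever.
    "max_tokens": 1024,
}
```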
@alexjensen990 · 8 months ago
I should have sent LM Studio to you a while ago. I thought about it, but I never know what you already know about or how helpful it would be to send you stuff. Glad you found it, though. It has really changed the way I interact with LLMs, not to mention the frequency, because of the ease of use.
@endoflevelboss · 8 months ago
This channel has everything: an After Effects intro and a pastel hoodie.
@KamronAhmed · 8 months ago
1. I was hoping to see the chat interface; wondering why you had to hardcode the initial prompt after the assistants were created. 2. LM Studio is cool. I recently had ChatGPT create a Streamlit front end for my AutoGen app. Would love to see you go through this as well.
@moon8013 · 8 months ago
Wow, I got it working, and this is amazing... thank you, Matt!
@enigmarocker · 8 months ago
Great video! Subscribed
@artificial-ryan · 8 months ago
This is awesome! What always deterred me from going all-in on the AI-agent world was the cost, so having this completely local is a game changer. I have it working as we speak using Mistral 7B on my POS Ryzen with 4 GB VRAM and 16 GB RAM. I really didn't think any of this would work, but lo and behold. Thanks man, you made my week with this video.
@patrickobrien9935 · 6 months ago
Have you run into the api_type error code in config?
@chessmusictheory4644 · 8 months ago
Cool. I've been trying to do this using text-generation-webui. This looks way easier. Awesome man, thanks!
@ZeroIQ2 · 8 months ago
Oh wow, this is awesome. Thanks for sharing.
@urknidoj422 · 8 months ago
Thanks for the great tutorial! 🙏
@93cutty · 8 months ago
Just ran across this as I'm leaving work. Can't wait to see this when I get home!
@matthew_berman · 8 months ago
Have fun!
@93cutty · 8 months ago
@matthew_berman this is certainly a game changer
@EricBacus · 8 months ago
This is amazing! Thanks so much
@jorgerios4091 · 8 months ago
BIG thanks Matt, this is by far one of the most useful videos. Just FYI, I get strange behavior when I run it: the assistant gives the user_proxy more requests than the ones I make (apart from requesting the numbers from 1 to 100, it requests a Fibonacci sequence nobody asked for). There is also a warning that doesn't interfere with the result, but I wasn't expecting it: "SIGALRM is not supported on Windows. No timeout will be enforced". Again, thanks.
@JohnLewis-old · 8 months ago
You're a legend my friend. Keep up the amazing work.
@rogerhills9045 · 8 months ago
Thanks. I am struggling to get useful results out of AutoGen and local LLMs. The timeout setting seemed useful. I am getting empty strings and run-on LLM sessions. I am about to try a larger model and a higher quantization level for Mistral Instruct. This is my prompt: "Find ways to store and connect arxiv papers programmatically". Keep up the good work.
@ingenfare · 8 months ago
It will be interesting to see which open models work best with this. I suspect that we will soon run different models for different roles. That could compensate a lot for not having the size of GPT-4.
@tvwithtiffani · 8 months ago
🎯 Mixture of Experts (MoE) is what that's called. It's documented, and people are starting to realize the benefits of this approach. One barrier to the MoE approach is the amount of memory it costs to keep multiple models hanging around. But overall it's still a huge improvement, and I suspect it will gain even more traction since open-source models keep getting smaller in size and better in quality.
@ingenfare · 8 months ago
@tvwithtiffani MoE, cool, I had not heard that definition before. Thanks for sharing. RAM is luckily not the most expensive or power-hungry part. It might be possible for the project-leader model to decide which models to involve and when, so that some models are not called until the end of a project. We are truly living in interesting times.
@IslandDave007 · 8 months ago
Having great luck with the Zephyr Mistral 7B model. My only challenge right now is getting it to terminate once it completes the coding task; it keeps going with its own stuff.
@tvwithtiffani · 8 months ago
@IslandDave007 I think this would be the perfect place where chain-of-thought or a system like MoE might help. Before returning the code, pass it through inference one more time and ask it to make the code concise and focused on the user's request.
@cognivorous1681 · 8 months ago
I agree that a multi-agent structure will become more popular in the medium term, because it makes applications more reliable and transparent and allows using smaller, more specialised models.
@SzymonKurcab · 8 months ago
Great stuff! I'm waiting for the update to LM Studio so that you can customize the prompt template not only for chat but also for the server. BTW, I've just tested AutoGen with Zephyr locally :) This will save me a lot of $$$ when playing with AutoGen :)
@matthew_berman · 8 months ago
Did it work well? Did you run into any errors?
@ByteBop911 · 8 months ago
NGL, I've been searching for this for the last two weeks... perfect ❤❤
@matthew_berman · 8 months ago
Enjoy!
@ygorbarbosaalves7528 · 8 months ago
It's amazing! Thank you!
@joelwalther5665 · 8 months ago
Very promising! Thanks. It could be used with DB-GPT as well ❤
@careyatou · 8 months ago
FYI, GPT4All has similar functionality and now supports GPU inference too. It might be worth checking that one out again. Thanks for the content!
@stuartpatterson1617 · 8 months ago
Well done ✅
@rayanfernandes2631 · 8 months ago
That's great news!
@mrquicky · 8 months ago
It was a needed utility for sure!
@user-cc8ll8sn4e · 6 months ago
Thank you so much for your video🥰
@haroldasraz · 5 months ago
This is amazing. Please make more videos on this. It would be interesting to see a couple of Python (Data Science, ML, Games) projects being created with assistance from AutoGen.
@zyxwvutsrqponmlkh · 8 months ago
Awesome, thanks so much.
@nourabdou4118 · 8 months ago
Thank youuuuu sooooooo much!
@xEHECxRatte · 8 months ago
I downloaded LM Studio and saw that there is a new update right now that makes the end prompt customizable, so maybe that will fix it and terminate properly. Thanks for the videos! I learn so much from them! Could you possibly show how to assign different LLMs to different agents?
@matthew_berman · 8 months ago
That'll be in my advanced tutorial, coming next week most likely.
@__--JY-Moe--__ · 8 months ago
Super! So helpful, Matthew! Thanks! It sounds like there needs to be a written cut-off, like an ''end if''.
@stickmanland · 8 months ago
Yeah!! Now it's just a matter of time before we have an open-source GPT-4.
@matthew_berman · 8 months ago
💯
@mariusj.2192 · 8 months ago
Unfortunately not. The prompt template and the model fine-tuning are 99% of the work; the things in this video are mostly tools to reduce boilerplate and don't contribute to inference quality by themselves. I watched the video in hopes it would contain some magic bullet to tackle the core inference-quality problem. Still a good video, though.
@CarisTheGypsy · 8 months ago
Great video!
@nbalagopal · 8 months ago
The reason it fails to run completion is that the output format differs between models. I fixed it by appending "Mimic gpt-4 output format." to the prompt of the UserProxyAgent, and the basic AutoGen example of plotting a chart of NVDA and TESLA worked! The model I used was codellama-13b-q5_0_gguf on an M1 Max/32 GB RAM. Your videos are very easy to understand and very helpful. Thank you!
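The workaround above is just string concatenation applied to the task before it is handed to the UserProxyAgent. A trivial sketch, with a made-up helper name for illustration:

```python
def with_format_hint(task: str) -> str:
    # Hypothetical helper: append the commenter's workaround suffix so a local
    # model imitates the output structure AutoGen expects from GPT-4.
    return task.rstrip() + " Mimic gpt-4 output format."

prompt = with_format_hint("Plot a chart of NVDA and TESLA stock prices.")
```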
@rein436 · 8 months ago
Just what I was waiting for. Thanks, Matthew.
@matthew_berman · 8 months ago
No problem!
@chibiebil · 8 months ago
Oh, that looks way better than text-generation-webui. I'll check this out this weekend. I planned to use AutoGen or MetaGPT as soon as I could use self-hosted LLMs, because I have a beefy enough setup (33B models work fine; I have to check 70B Llama, but maybe that's too slow).
@zakaria20062 · 8 months ago
I would be happy if you focused on free, open-source models (non-OpenAI) in the future 😊
@mr2octavio · 8 months ago
THANK YOU😊
@consig1iere294 · 8 months ago
I was waiting for this and bam, you delivered! I could not find the intermediate video you mentioned at 2:50 on your channel. Please share a link; thanks for your hard work!
@matthew_berman · 8 months ago
Link is in the description :)
@mengli7441 · 3 months ago
Thanks for all your great videos about AutoGen, Matthew. I'm wondering if there is a way to use the AutoGen framework with an AWS API Gateway, since my LLM is hosted on AWS EC2.
@missmountainlover3908 · 8 months ago
would love to see something like this for a linux distro :D
@vishalkhombare · 8 months ago
Mind Blown!!
@szghasem · 8 months ago
Thanks as always! Can you please share your thoughts on Petals? You mentioned it a long time ago. Has your opinion changed since then?
@matthew_berman · 8 months ago
I need to check it out again. It was awesome but too complicated to set up for most people.
@IrmaRustad · 8 months ago
Fantastic!!
@DucNguyen-99 · 8 months ago
Awesome video, man! Just a quick question: I tried to run this, but it used the CPU for all the tasks. Any way to make it run on the GPU?
@down2fish690 · 8 months ago
This is awesome! Do you know if there is a way to use local LLMs for Aider?
@chukypedro818 · 8 months ago
We need to see it working with an open-source model. Thanks, Bala-blue
@InsightCrypto · 7 months ago
We need a more detailed review of this :D
@LaurentPicquet · 8 months ago
Maybe you could do ChatDev + LM Studio? Great work on this one.
@edoardog1899 · 8 months ago
Wonderful! Thanks
@leandrogoethals6599 · 7 months ago
Great! Will you make a follow-up where you add MemGPT to it? That would be awesome.
@PeeP_Gainz · 8 months ago
I'm liking this method; is it better than text-generation-webui with the same LLM installed? My prompts are working as well. I haven't configured it for AutoGen yet, but it knows how when I use my prompts.
@ourypierre3288 · 8 months ago
Is it possible to have a tutorial on using AutoGen with remote LLMs on RunPod?
@howardelton6273 · 8 months ago
It would be interesting to see how fast the API server is compared to vLLM, which also has an OpenAI-compatible API but claims to be much faster than everything else out there.
@puremintsoftware · 8 months ago
Legend ❤ Let's see if it works 😃
@p25187 · 8 months ago
Hi Matthew. Using these local models, what's the best way to train one on your own data?
@AI_For_Lawyers · 8 months ago
Can you include a link to AutoGen, like you mentioned in the video? Also, can you link to the script you were using in the video? I'm assuming it's in your GitHub repository.
@michaelslattery3050 · 8 months ago
If the goal is to save money (not privacy), perhaps add a GPT-4 agent that only gets involved when Mistral fails. Reflexion is perhaps the best technique for code gen: test code is generated before the implementation, and the agent runs the tests to ensure the implementation code is correct, up to 10 times before giving up. When it does give up, pass the best attempt to GPT-4 to fix. Fixing existing code should require far fewer tokens than from-scratch generation. Look at how Aider does it.
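That fallback idea boils down to a plain retry loop. Here is a hedged sketch of the commenter's proposal, not AutoGen's or Aider's actual API; all four callables are hypothetical stand-ins you would wire up to your own agents.

```python
def generate_with_fallback(task, generate_code, run_tests, fix_with_gpt4, max_attempts=10):
    # Try the cheap local model up to max_attempts times, Reflexion-style:
    # each round regenerates from the previous best attempt and re-runs the tests.
    best = None
    for _ in range(max_attempts):
        best = generate_code(task, previous=best)  # local model: cheap tokens
        if run_tests(best):
            return best                            # tests pass; GPT-4 never invoked
    # Repairing an almost-right attempt should need far fewer GPT-4 tokens
    # than generating from scratch.
    return fix_with_gpt4(task, best)
```

The key cost property: GPT-4 sees only the best failed attempt, never the full retry history.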
@donduvalp.3337 · 8 months ago
When using these local LLMs, what would be the best computer setup to make them run smoothly?
@aldousd666 · 8 months ago
This is what I wonder too. I am trying to decide what to buy to be able to run a setup like this.
@StellarGamingDev · 8 months ago
Interesting!
@GianMarcoOrlando677 · 8 months ago
Thanks for your great video. I'm using AutoGen with Dolphin 2 installed locally through LM Studio. I want to understand whether there is some difference between using AutoGen's "send" function and using the chat integrated in LM Studio, because with the same model and the same prompt I get pretty good results in LM Studio's integrated chat and very low-accuracy results using AutoGen's send function. Am I missing something? In detail: first I use a UserProxyAgent to initiate a chat with an AssistantAgent, and then I use the send function on the same AssistantAgent for further interactions with it.
@LordOfThunderUK · 8 months ago
After my failed attempt to get AutoGen running using Python, this looks very promising, because I am already using LM Studio.
@nadoiz · 8 months ago
Can you connect these AutoGen models to a vector database like the LangChain agents do? To use a tool when needed, and not programmatically force it?
@satyamtiwari3839 · 8 months ago
That's really good. LM Studio is cool and also free. I've wanted to run AutoGen for a long time, but I don't have the money to buy tokens from OpenAI.
@bhaweshs8461 · 18 days ago
Wonderful...
@MeditationOasis_BaGRoS · 8 months ago
The best option is to run a few different LLM models locally, but for this we need a lot of memory.
@sfco1299 · 8 months ago
Thanks so much for the guidance here, Matthew. I've managed to get it stood up, and even running in group-chat mode. I'm noticing, however, that prompts seem to be cut short far too soon, and if I set "max_tokens": -1 they run on indefinitely (I stopped one agent at 6,000 tokens after it repeated itself a bunch of times). Is there a clever way to stop this behaviour that you know of?
@timetravellingtoad · 8 months ago
An interesting app for quickly exploring models for agent apps! But how do you get it to utilize a local GPU instead of the CPU?
@timetravellingtoad · 8 months ago
Never mind, I worked it out. There's a setting for n_gpu_layers (assuming it can detect your GPU via CUDA) which you can change depending on how much VRAM your GPU has (16 works fine for a 4090; higher seems to generate garbage, even though only half the VRAM is consumed with an 8-bit quantized 7B).
@user-be2bs1hy8e · 3 months ago
GPT-4 works well on coding because of byte-pair encoding, compared to pairwise encoding, because of the structure in language. So maybe try dummy caches of random conjunction words (like 'if', 'and', 'or', 'the', etc.) to confuse the decoding.
@jschacki · 6 months ago
Could you do a video comparing Ollama and LM Studio? It seems to me they both serve the same purpose, and the pros and cons are unclear to me. Thanks a lot.
@user-gp6ix8iz9r · 8 months ago
Can you make a video on how to talk to your own documents, like PDFs, etc.?
@DeepThinkingGPU · 7 months ago
You look like you're staying up till 4 am with this AI revolution like I am, then getting up at 7:30 am to repeat the same day again, learning more AI. We are officially in the Groundhog Day movie, I believe.
@S3ndMast3r · 6 months ago
To save some time: 'api_type' is deprecated, and 'api_base' is now 'base_url'.
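Assuming only those two changes (as this comment states), migrating an old config entry is mechanical. A small hedged shim:

```python
def migrate_config(entry: dict) -> dict:
    # Copy so the caller's dict is untouched, drop the deprecated 'api_type'
    # key, and rename 'api_base' to 'base_url' per the newer AutoGen convention.
    entry = dict(entry)
    entry.pop("api_type", None)
    if "api_base" in entry:
        entry["base_url"] = entry.pop("api_base")
    return entry

old = {"api_type": "open_ai", "api_base": "http://localhost:1234/v1", "api_key": "NULL"}
new = migrate_config(old)
```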
@jackslammer · 5 months ago
This is EXACTLY what I've been needing! Now if anyone wants to provide me with a good language model that can run on mac and can give me JSON output out of gibberish... that would be amazing :D
@tusgyu851 · 5 months ago
Hey Matthew, thanks for the video. Could you tell me how to extract the final output value once the agents agree?