Comments
@StupidInternetPeople1 6 days ago
Amazing doucheFace thumbnail! Clearly intelligent people click stupid face thumbnails because everyone knows looking like an idiot and doing exactly the same as everybody else is a clear sign that your content must be amazing! 😂
@optiondrone5468 12 days ago
That Adala framework looks like a game changer. Most AI devs spend a lot of time labeling data for their training.
@analyticsCamp 12 days ago
Yep, and not just for dev projects but also for research in academia. I can tell from my own experience how long it takes researchers to label data. Let me know if you try this system :)
@ElObredor 16 days ago
How can I access the notebook? I don't understand anything ;C
@analyticsCamp 16 days ago
Hi, if you mean the code and process: the process is explained in the video, and you can access the separate files and functions here: github.com/Maryam-Nasseri/Fine-tuning-LLMs-Locally
@Researcher100 23 days ago
"Basically", I really liked this tutorial! Does this setting work with a larger model, say 7B?
@analyticsCamp 23 days ago
Thanks for watching! It should work with a 7B model too if you have more VRAM: set cuda to True in the training arguments and keep the batch size lower. Hope it helps :)
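A minimal sketch of what those training-argument changes might look like, assuming a Hugging Face TrainingArguments-style setup; parameter values are illustrative, not the exact notebook code:

```python
from transformers import TrainingArguments

# Illustrative only: a small per-device batch size plus gradient accumulation
# to fit a 7B model in limited VRAM; fp16 mixed precision assumes a CUDA GPU.
training_args = TrainingArguments(
    output_dir="./finetune-7b",
    per_device_train_batch_size=1,   # keep the batch size low for 7B
    gradient_accumulation_steps=8,   # recover an effective batch size of 8
    num_train_epochs=1,
    fp16=True,                       # the "cuda" switch in spirit: train on the GPU
    logging_steps=10,
)
```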
@yoyartube 1 month ago
Without cuda set to True, do you think I could fine-tune a DeepSeek LLM on my Mac M2 with 16 GB of RAM?
@analyticsCamp 1 month ago
Running LLMs is more about VRAM than RAM; I'd say you need 6+ GB of VRAM (the larger the model and the training dataset, the more processing is needed). Maybe start with the DeepSeek base model, which is only 7B (and see if a Q4 quantization is available). Running on a powerful CPU is possible; I don't use a Mac, so I cannot comment on it :)
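For context, a 4-bit (Q4-style) load is roughly what lets a 7B model fit in about 6 GB of VRAM. A hedged sketch of that idea with transformers and bitsandbytes, assuming an NVIDIA GPU (on Apple silicon, a GGUF Q4 build served through Ollama/llama.cpp is the more common route); the checkpoint name is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/deepseek-llm-7b-base"  # illustrative checkpoint name

# 4-bit quantised weights cut memory to roughly a quarter of fp16.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",   # requires the accelerate package
)
```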
@Nathan-pu9um 1 month ago
Using low-code tools like n8n, you can do this a lot more easily.
@analyticsCamp 1 month ago
I agree, but for deployment and wider use n8n has pricing, which could be beyond some users' budget (unlike CrewAI, which can work with local LLMs for free!). But thanks for watching :)
@Nathan-pu9um 1 month ago
@analyticsCamp I agree, but you can use n8n to create workflows connected to Pinecone or another vector database, so you can build your own custom agentic workflow internally.
@chadricheson1038 1 month ago
This channel is underrated.
@analyticsCamp 1 month ago
Thank you for watching :)
@paradigmnnf 2 months ago
OK, so where is the paper?
@analyticsCamp 2 months ago
Hi, the full references of all the papers are cited in the description box :)
@optiondrone5468 2 months ago
Love it, thanks for demystifying many of the fine-tuning terms and their use! 👋 Keep up the good work 👍
@analyticsCamp 2 months ago
Thanks, will do and more to come!
@sai_ai-z1c 2 months ago
SmythOS seems like a great way to increase productivity! I've been trying to find ways to make my process more efficient. How does it differ from other AI technologies that you have used? #SmythOS #AI #Productivity
@analyticsCamp 2 months ago
Hi, unfortunately I did not understand your question/comment :(
@EarthrightCanvas 2 months ago
Can't follow.
@analyticsCamp 2 months ago
Hi, the code and process are on my GitHub (link in the description box), so you can follow at your own pace :)
@bladealex1844 2 months ago
This video is an excellent deep dive into Mixture of Agents (MoA)! 🚀 As someone who's been working on implementing MoA concepts, I found the explanation and tutorial incredibly valuable.

For those interested in a practical application of MoA principles, I've developed MALLO (MultiAgent LLM Orchestrator): github.com/bladealex9848/MALLO
You can try it live here: mallollm.streamlit.app/

MALLO builds on the MoA concept, integrating local models, OpenAI and Together AI APIs, and specialized assistants. It's fascinating to see how the MoA architecture with its layers of agents, as explained in the video, can be adapted for specific use cases. The benchmarks comparing MoA against GPT-4/GPT-4o are particularly interesting. In MALLO, I've implemented a similar multi-layered approach, focusing on specialized domains like legal and constitutional law.

The tutorial on running MoA locally is a game-changer for accessibility. In MALLO, I've also integrated local models using Ollama, which aligns well with the free and local approach demonstrated here.

I'm curious about how others are adapting these MoA concepts in their projects. Has anyone else experimented with combining different model types or specialized agents in their implementations?

Thanks for this comprehensive guide! It's exciting to see the AI community pushing the boundaries of what's possible with open-source and locally-run models. 🌟 #MixtureOfAgents #MALLO #AIInnovation #OpenSourceAI #LocalLLM
@stephenzzz 3 months ago
Thanks for all your videos! On a side note, my wife wants to create a membership site with a chat & RAG setup of sorts to answer questions from her bespoke sales content. Which low-code system out there do you think would work best?
@analyticsCamp 3 months ago
Thanks for watching :) If this is for a simple QA chatbot, then CrewAI could do the job, but if you need a more robust system and you're willing to spend on it, then one of the paid frameworks such as Oracle may be better (I haven't used it personally, so do your own research). Good luck with your project :)
@Researcher100 3 months ago
Thanks for bringing this system to our attention. I think this is the first YT video that talks about this new agentic work. ❤
@analyticsCamp 3 months ago
Thanks for watching :) I also think this is an innovative approach!
@chadricheson1038 3 months ago
Very interesting topic
@analyticsCamp 3 months ago
Glad you liked it
@optiondrone5468 3 months ago
This paper was a good find, thanks for your explanation; it looks like the future of the AI internet is here!
@analyticsCamp 3 months ago
I think so too!
@jeffg4686 3 months ago
What if the models you choose are different for different agents in the various layers? For example, layer 1 has Agent 1 (llama3.1), Agent 2 (mixtral7b), Agent 3 (gemma), and layer 2 has Agent 1 (chatgpt 4), Agent 2 (mixtral7b), Agent 3 (llama3). Also, can the layers have different numbers of agents? I assume so, but I'm not sure.
@analyticsCamp 3 months ago
Hi, yes, you can effectively do all of that. If you watch the video at 06:05, you'll see there are four different models for the layers, with Qwen2 acting as the aggregator. If you take a look at the MoA diagram, you'll see each agent/LLM is depicted with a different colour (A1, A2, A3) in each layer, so in the current setup the 'reference models' defined by the user in each layer separately produce the intended result, and these get aggregated into the final output (depicted as A4). Yes, I think you can tweak the code to have a different number of agents per layer too; please check their GitHub repository, the bot.py file (I haven't done that personally). Thanks for your comment :)
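As a rough illustration of that layout (the model names and helper function are hypothetical, not the actual bot.py contents), the per-layer reference models and the final aggregator could be wired up like this:

```python
# Hypothetical sketch of a Mixture-of-Agents loop with per-layer reference models.
reference_models = [
    "meta-llama/Llama-3-8b-chat-hf",         # A1
    "mistralai/Mixtral-8x7B-Instruct-v0.1",  # A2
    "google/gemma-7b-it",                    # A3
]
aggregator_model = "Qwen/Qwen2-72B-Instruct"  # A4: synthesises the layer outputs
num_layers = 2

def run_moa(prompt, query_fn):
    """query_fn(model_name, prompt) -> str is whatever client you use (Ollama, Together, ...)."""
    answers = [query_fn(m, prompt) for m in reference_models]        # layer 1
    for _ in range(num_layers - 1):                                  # deeper layers see prior answers
        context = "\n\n".join(answers)
        answers = [query_fn(m, f"{prompt}\n\nPrevious answers:\n{context}")
                   for m in reference_models]
    return query_fn(aggregator_model,
                    f"{prompt}\n\nAggregate these answers into one response:\n" + "\n\n".join(answers))
```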
@arielle-cheriepaterson7851 3 months ago
Are you available for consulting?
@analyticsCamp 3 months ago
Hi, could you please send me an email with more details? (My email address is in my channel's About section.) Thanks :)
@OpenAITutor 3 months ago
I love this! I did create a version using Groq and open-webui!
@analyticsCamp 3 months ago
Thanks for your comment. I visited your channel and subbed! Great videos :)
@thatsfantastic313 3 months ago
beautifully explained!
@analyticsCamp 3 months ago
Glad you think so!
@soccerdadsg 3 months ago
Another quality video from the channel!
@analyticsCamp 3 months ago
Much appreciated!
@soccerdadsg 3 months ago
Appreciate your effort to make this video.
@analyticsCamp 3 months ago
My pleasure, thanks for watching :)
@soccerdadsg 3 months ago
This is a very good video. It is a good summary of the current development of agentic workflows, with scientific paper support.
@analyticsCamp 3 months ago
Thanks for your supportive words. Stay tuned, I have more of this coming :)
@optiondrone5468 4 months ago
Medical image analysis better than human operators! If we keep going at this rate, soon many general practitioners in the UK will have no jobs.
@analyticsCamp 4 months ago
Now imagine if we combine this with agentic power! But I still think it's too early to make a definitive judgement, as many of these papers report their best results/rounds! Thanks for watching though :)
@DrRizzwan 4 months ago
Good explanation 👏
@analyticsCamp 4 months ago
Glad you liked it
@analyticsCamp 4 months ago
Hey everyone, I have already explained RAG, ICL, and fine-tuning separately in previous videos, so I thought I would give you all of them in one place!
@peralser 4 months ago
Excellent video!! Thanks!
@analyticsCamp 4 months ago
Glad you liked it!
@dreamphoenix 4 months ago
Thank you.
@analyticsCamp 4 months ago
Thanks for watching :)
@jonjeffers5153 4 months ago
Thanks for the video! I'm having an issue with the API key. I'm not a Python programmer, FYI. The bot.py runs, but when I type something I get: OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
@analyticsCamp 4 months ago
Hi, I think you have not set your environment variables correctly (please follow the steps in the video). You should either have a valid OpenAI API key (you can get one from their website) or just get a free key from the Together AI website for this project. Then, from your code editor's terminal, export your API key by typing this exactly:
echo "export OPENAI_API_KEY='yourkey'" >> ~/.bash_profile
but replace 'yourkey' with the key you got (it doesn't have to be from OpenAI; any of their partners like Together AI works too). Then reload the shell with the new variable by typing:
source ~/.bash_profile
To confirm it is set correctly, type:
echo $OPENAI_API_KEY
(or $TOGETHER_API_KEY, whichever variable you exported). This should print your key; if it does, you are set. I hope this helps :)
PS: the commands above are for macOS/Linux shells. To set environment variables in Windows, you can follow these steps: press Win + R to open the Run dialog; type sysdm.cpl and press Enter to open the System Properties window; go to the "Advanced" tab and click the "Environment Variables" button. In the Environment Variables window you can set system variables (for all users) or user variables (specific to the current user): click "New" to add a new variable, or select an existing variable to edit or delete it.
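If it helps, a small sanity check in Python (run it the same way you run bot.py) can confirm the key is actually visible to the process; the variable names follow the comment above and should match whichever provider key you exported:

```python
import os
import sys

key = os.environ.get("OPENAI_API_KEY") or os.environ.get("TOGETHER_API_KEY")
if not key:
    sys.exit("No API key found: export OPENAI_API_KEY or TOGETHER_API_KEY before running bot.py")

# Only print a short prefix so the key itself never ends up in logs or screenshots.
print(f"Key detected (starts with {key[:4]}...). Environment looks OK.")
```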
@travelingbutterfly4981 4 months ago
Hi, I don't think the data it produced is correct; did you try some method to validate it?
@analyticsCamp 4 months ago
Thanks for your comment. You are right! I checked the top 3 manually in the CSV file and it looks different. With Mistral I get more accurate results; however, LLAMA3 produced a good synthesis of the career path. The video is basically meant as a how-to tutorial, but the choice of LLM makes a difference. Thanks for watching :)
@travelingbutterfly4981 4 months ago
@analyticsCamp Thanks for the reply. Actually, I am trying to get insights from the dataset using CrewAI. Can you suggest some ways to do it?
@analyticsCamp 4 months ago
Is your dataset a CSV file? This video tutorial shows a standard way of calling a CSV file within the CrewAI framework. If you don't get accurate results, change the model, e.g., to Mistral, Qwen2, or dbrx (from Databricks), and test on a sample dataset where you already know the answers; whichever model produces accurate results, use that one on your target dataset. If you are doing more serious data analytics work, keep in mind that most of these LLMs are primarily language models (designed to predict the next word, not necessarily the 'correct' data), so in that case traditional methods, for example Pandas for data wrangling or machine learning models from Scikit-learn, will give you the most accurate results. If you insist on an agentic method, then try asking one of those LLM agents to access Pandas or Scikit-learn and do the work for you. I haven't tried this honestly, so I don't know how it would turn out, but please keep me updated if it works for you. Hope this information helps :)
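A minimal sketch of that kind of cross-check with Pandas, assuming the CSV has a column you can aggregate directly (the file and column names here are hypothetical): compute the ground truth yourself and compare it with what the agent reported.

```python
import pandas as pd

df = pd.read_csv("sample_dataset.csv")  # hypothetical file name

# Ground truth computed directly from the data, e.g. the top 3 job titles by count.
top3_truth = df["job_title"].value_counts().head(3).index.tolist()

# Whatever the CrewAI agent reported for the same question.
agent_answer = ["Data Scientist", "ML Engineer", "Data Analyst"]

print("Agent matches ground truth:", set(agent_answer) == set(top3_truth))
```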
@gc1979o 4 months ago
Awesome presentation!
@analyticsCamp 4 months ago
Glad you liked it!
@BooleanDisorder 4 months ago
Next will probably be: Mixture of Mixtures!
@analyticsCamp 4 months ago
LOL :) Who knows? Maybe you're right!
@BooleanDisorder 4 months ago
@analyticsCamp Seriously though, thanks for the excellent video.
@sr.modanez 4 months ago
top top top + + + + + +👏👏👏👏👏👏👏👏👏
@analyticsCamp 4 months ago
Glad you liked it and thanks for watching :)
@JavierTorres-st7gt 4 months ago
How to protect a company's information with technology?
@analyticsCamp 4 months ago
I'm not sure if I understand your question :( Apologies, but it would be good if you could give more context.
@BARD-no4wq 4 months ago
Great video, your channel is underrated.
@analyticsCamp 4 months ago
Glad you think so :)
@optiondrone5468 4 months ago
Wow, a CSV-file-reading agent, this is so cool! Does this mean the agent can also be programmed to generate SQL, access data from a database, and do additional analysis?
@analyticsCamp 4 months ago
As far as I know, the only SQL tool in crewai_tools is PGSearchTool, which is specifically made for PostgreSQL database tables; yep, it can search and generate SQL queries (I think they call it Retrieval-Augmented Generation, RAG). I haven't tested it yet, but if enough viewers ask for it, I may make something out of it :)
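For anyone who wants to try it before a full tutorial, the usage is roughly along these lines (the connection URI and table name are placeholders, and you should check the crewai_tools docs for the current signature):

```python
from crewai_tools import PGSearchTool

# RAG-style semantic search over a single PostgreSQL table (placeholder credentials).
pg_tool = PGSearchTool(
    db_uri="postgresql://user:password@localhost:5432/mydb",
    table_name="employees",
)

# The tool can then be handed to a CrewAI agent, e.g. Agent(..., tools=[pg_tool]).
```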
@optiondrone5468 4 months ago
@analyticsCamp Thanks for the tool name; hope enough people here ask for a #SQLagent tutorial!
@analyticsCamp 5 months ago
Thanks for all your helpful comments :) Here's a related video explaining the AI agentic workflow: kzbin.info/www/bejne/onKWhZ2rabuIntE
@analyticsCamp 5 months ago
Some of you asked for AI agents in action; here's a video with code to use 100% free local AI agents: kzbin.info/www/bejne/jpy2ZZycoLGqrbM
@analyticsCamp 5 months ago
Hey, if you are new to LLMs and need to improve the responses, here's a related video that shows 5 ways to improve LLM results: kzbin.info/www/bejne/bnqmaZWNq7SFfLc
@optiondrone5468 5 months ago
All very exciting things, but how long do you think it will be before everyone can have access to all these new AI-based applications?
@analyticsCamp 5 months ago
Thanks for watching :) You can use ICL with any LLM, especially the ones you can download directly from Hugging Face or via Ollama. Some other interfaces allow users to attach files for processing, so you can write your prompts and instructions in those files, plus attach any images you need. I'm not sure about audio and video ICL at this moment, though.
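In-context learning here just means packing labelled examples into the prompt itself; a hedged sketch against a locally running Ollama server (the model name and the toy examples are illustrative):

```python
import requests

# A tiny few-shot (in-context learning) prompt: the examples teach the task format.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "Great battery life." -> positive
Review: "The screen cracked in a week." -> negative
Review: "Does exactly what I needed." ->"""

resp = requests.post(
    "http://localhost:11434/api/generate",   # default Ollama endpoint
    json={"model": "mistral", "prompt": few_shot_prompt, "stream": False},
)
print(resp.json()["response"].strip())
```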
@Researcher100 5 months ago
The explanation was clear, thanks. Does this paper show how to use this method in practice? I think most LLM users don't know the ins and outs of fine-tuning, so ICL can be very helpful for ordinary users.
@analyticsCamp 5 months ago
Thanks for the comment :) Yes, the paper comes with all of those explanations. Yep, I also believe this can open the way for more ordinary AI users AND many researchers in other fields.
@jarad4621 5 months ago
Sorry, another question: am I able to use LM Studio with CrewAI as well? I wanted to test some other models, and its GPU acceleration lets models run better than Ollama for me. Is it still going to have problems due to the issues you fix with the model file, or is that issue not a problem for other local servers? Or is Ollama the best way because you can actually edit those things to make it work well? Thanks
@analyticsCamp 5 months ago
I do not use LM Studio, so I cannot comment on that. But Ollama via the terminal is pretty sturdy; CrewAI should work with all Ollama models, though I have not tested them all. If you run into issues, you can report them here so others can know and/or help you :)
@first-thoughtgiver-of-will2456 5 months ago
Can Mamba have its input RoPE scaled? It seems it doesn't require positional encoding, but this might make it extremely efficient for second-order optimization techniques.
@analyticsCamp 5 months ago
In Mamba, the sequence length can be scaled up to a million (e.g., million-length sequences). It also computes gradients as usual (I did not find any info on second-order optimization in their method); they train for 10k to 20k gradient steps.
@jarad4621 5 months ago
Also, I've never seen the Mistral idea before, so this model would make a really good agent then, better than the others? Really helpful to know, glad I found this. Also, can you test Agent Swarm and let us know what the best agent framework is currently? Apparently Crew is not great for production?
@analyticsCamp 5 months ago
Thanks for watching :) I have tested multiple models from Ollama, and Mistral seems to have better performance overall, across most tasks. Agent Swarm can be useful for VERY specialised tasks on which general LLMs get it totally wrong. Other than that, it will add to the time/cost of the build. But I'm not sure if I understood your question right!
@jarad4621 5 months ago
Awesome, I've been looking for some of this info for ages. Best video on agents after watching dozens of vids; nobody explains the problems with other models, or fixing the model file, or how to make sure the local models work. Many of these YT experts are just using local and other models and wondering why it's not working well. Can I use Phi-3 Mini locally as well, and does it need the same model file setup? Also, will Llama 70B on the OpenRouter API actually work as a good agent, or does something need to be tweaked first? Nobody can answer these things, please help? Thanks!
@analyticsCamp 5 months ago
Sure, you can essentially use any model listed in Ollama as long as you make a model file; that's where you can manage the temperature, etc. I have used LLAMA 70b before, but surprisingly it did not give better responses than its 7b and 13b versions on most tasks! I recommend LLAMA3 (I may be able to make a video on it if I get some free time, LOL).
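A hedged sketch of the 'model file' idea: the commented lines follow Ollama's Modelfile format, and the Python part shows one illustrative way to point an agent framework at the resulting model (the custom model name is hypothetical):

```python
# Minimal Ollama Modelfile (save as "Modelfile", then run: ollama create mistral-agent -f Modelfile):
#
#   FROM mistral
#   PARAMETER temperature 0.2
#   PARAMETER num_ctx 4096
#
# The lower temperature tends to make tool-using agents more predictable.
from langchain_community.llms import Ollama

llm = Ollama(model="mistral-agent")  # the hypothetical custom model created above
print(llm.invoke("In one sentence, why does a low temperature help agent reliability?"))
```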
@jarad4621 5 months ago
@analyticsCamp Awesome, thanks, I'll test the smaller ones first then.
@optiondrone5468 5 months ago
Thanks for sharing your thoughts and a practical AI agent workflow. I also believe that this agentic workflow will fuel a lot of LLM-based development in 2024.
@analyticsCamp 5 months ago
Thanks for watching :) If you have a specific LLM-based development/project in mind, please let me know. With such easy agentic access, I am also surprised how many AI users are still hooked on zero-shot with paid interfaces!
@optiondrone5468 5 months ago
@analyticsCamp Ha ha, it also never made sense to me why people don't look into open-source LLMs 🤔 They're free, they don't limit your token size, you're free to experiment with different models, and most importantly your data (prompts) stays yours and doesn't automatically become OpenAI's property. Keep up the good work, looking forward to your next video.