InternLM - A Strong Agentic Model?

15,344 views

Sam Witteveen

1 day ago

Comments: 30
@LaHoraMaker
@LaHoraMaker 5 months ago
LMDeploy is quite an interesting framework for deploying and quantizing most of the Chinese models. It also works fairly well in Kaggle, given that it also supports older GPUs.
@keithmatthews2707
@keithmatthews2707 5 months ago
Very useful content. Thank you, Sam, for your valuable insights into these topic areas.
@mickelodiansurname9578
@mickelodiansurname9578 5 months ago
That's a nice SMALL model for function calling, alright... appreciate you bringing it to my attention.
@omarelfaqir3627
@omarelfaqir3627 5 months ago
Hello Sam, thanks for bringing this wonderful model to our attention. There is just some confusion in the video between commercial usage and a commercial licence: commercial usage is allowed without submitting any form, but under the open-source licence you may need to open-source any derivative work (i.e. any fine-tuning you do, for example). If you want to build non-open-source products with it (why would you 😊?), you will need to submit the form to obtain a commercial licence that allows you to do that. It is quite a classic business model in open-source software.
@toadlguy
@toadlguy 5 months ago
Thank you, Sam, for once again highlighting the most interesting new models/techniques in this fascinating field. I note InternLM 2.5 explicitly states that it "supports gathering information from over 100 websites" with an implementation using Lagent. I'm sure a LangChain implementation could easily be created as well. Actually, fine-tuning models with sources for information not in the model (like current weather or news), with function calling and JSON support, and using LangChain for finer control, would be a great method for using smaller local models. (I feel more comfortable using LangChain than a model-specific framework, if possible.) I would love to see other models add this approach. I wonder how much of this is done in pretraining vs the base model. (Guess I'll have to look at the paper 😉.)
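The function-calling-plus-JSON loop described in this comment can be sketched roughly like this. It is a minimal illustration only: the `get_weather` tool, the `TOOLS` registry, and the hard-coded model reply (standing in for a real LLM call) are all hypothetical, not InternLM's or LangChain's actual API.

```python
import json

# Hypothetical local tool the model is allowed to call.
def get_weather(city: str) -> str:
    return f"22C and sunny in {city}"

# Registry mapping tool names (as the model emits them) to Python callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_reply: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching tool."""
    call = json.loads(model_reply)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Hard-coded string standing in for actual model output.
reply = '{"name": "get_weather", "arguments": {"city": "Singapore"}}'
print(dispatch(reply))  # 22C and sunny in Singapore
```

In a real agent loop, the tool's return value would be fed back to the model as an observation before it produces its final answer.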
@waneyvin
@waneyvin 5 months ago
Great job, mate! This is a bit like GLM-4, though I'm not sure how the benchmarks compare. Both are designed to be agentic and can be trained with agentic instructions.
@kenchang3456
@kenchang3456 5 months ago
It's kind of interesting: if one of the stronger points of InternLM 2.5 is its support for agents, I wonder what part of the training data makes it more capable of supporting agents when function-calling data accounts for only 16%. Thanks for the video; I'll have to find a way to make time to try it out.
@jon_flop_boat
@jon_flop_boat 5 months ago
It’s my understanding that, instead of focusing on incorporating information into the model, the creators focused hard on pretraining on reasoning and research. If the model is particularly good at these things, it can just Google the relevant information and synthesize it in real time, hence the name: InternLM. It doesn’t know anything, but it can look stuff up!
@aa-xn5hc
@aa-xn5hc 5 months ago
Please try Lagent with 2.5
@SonGoku-pc7jl
@SonGoku-pc7jl 5 months ago
Thanks! In Spanish it's middling, but it's good that it's all evolving :)
@nikosterizakis
@nikosterizakis 4 months ago
Tried it with CrewAI and Autogen. In the case of CrewAI, it did not work... it could not call the agents properly or pass the right parameters to the tools. Perhaps because the tools were Annotated but it wanted to pass JSON, or it could not map the JSON onto the Annotated function calls. To its credit, it did not hallucinate either, trying to please with answers. I also saw a lot of Chinese coming up in the log file ;). In the case of Autogen, I got the error message: "LLM does not have a tool calling function." Both experiments were with Ollama locally, where Llama 3.1 has been tested successfully (of sorts, with plenty of hallucinations).
@samwitteveenai
@samwitteveenai 3 months ago
They have optimized it for their own agentic framework (Lagent). Also, the function calling they are using is different from what CrewAI uses, I think, so I wouldn't expect that one to work well. My guess is it could be customized.
@nikosterizakis
@nikosterizakis 3 months ago
@@samwitteveenai I am also trying it with a LiteLLM wrapper over Ollama, which is the method Autogen recommends for models that do not natively support function calling. Just in case it makes a difference. Trials are ongoing and I will report back.
@nikosterizakis
@nikosterizakis 3 months ago
@@samwitteveenai Update: with LiteLLM I got a 500 API error when using it with Autogen, which I think indicates that the LLM is tuned, security-wise, to work with its own framework (as you pointed out). For some strange reason, CrewAI also refused to hook up to LiteLLM serving this model...
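The Annotated-vs-JSON mismatch described in this thread can be illustrated with a small sketch: frameworks typically convert an `Annotated` Python signature into a JSON-schema-style tool description before sending it to the model. The `get_weather` tool and the type-to-schema mapping below are assumptions for illustration, not CrewAI's or Autogen's actual internals.

```python
import inspect
import json
from typing import Annotated, get_args

# Hypothetical tool with an Annotated signature, in the style agent
# frameworks use to attach parameter descriptions.
def get_weather(city: Annotated[str, "Name of the city"]) -> str:
    return f"Sunny in {city}"

def tool_schema(fn):
    """Turn an Annotated signature into a JSON-schema-style tool description,
    roughly what a framework sends to a function-calling model."""
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        # get_args(Annotated[str, "desc"]) -> (str, "desc")
        base, desc = get_args(param.annotation)
        props[name] = {"type": {str: "string", int: "integer"}[base],
                       "description": desc}
    return {"name": fn.__name__,
            "parameters": {"type": "object", "properties": props}}

schema = tool_schema(get_weather)
print(json.dumps(schema, indent=2))
```

If a model was trained on a different tool-call format than the schema a framework emits, the JSON it returns may not map cleanly back onto the Annotated signature, which would produce failures like the ones reported above.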
@ManjaroBlack
@ManjaroBlack 5 months ago
I couldn't get InternLM to work well with RAG or any embedding. It gives OK answers to simple prompting.
@WillJohnston-wg9ew
@WillJohnston-wg9ew 5 months ago
What is the agentic aspect? Maybe I don't understand something or missed something?
@Schaelpy
@Schaelpy 5 months ago
He talks about it at 4:45
@attilavass6935
@attilavass6935 5 months ago
Am I the only one who misses a memory module in Lagent? I'm going to test this ASAP, though.
@tlfmcooper
@tlfmcooper 5 months ago
Thanks
@choiswimmer
@choiswimmer 5 months ago
Nice
@lapozzunk
@lapozzunk 5 months ago
If each model gets a higher rating than its predecessors, when will we reach 100? Also, if I don't watch such videos, will this happen later?
@wickjohn3854
@wickjohn3854 5 months ago
Ask it what happened in 1989, LOL
@Dom-zy1qy
@Dom-zy1qy 5 months ago
The ultimate benchmark for Chinese models. I wonder if they've actually been tuned to avoid discussing things like that. It would probably get them defunded by the government.
@TheGuillotineKing
@TheGuillotineKing 5 months ago
Fun fact: these Chinese models are banned in the USA and can't be used for a commercial product.
@ringpolitiet
@ringpolitiet 5 months ago
Quite an enigma how you combine an interest in rather techy stuff like tool-calling LLMs with a straight-off-the-turnip-truck view of other things that seem as easy or easier to get informed about.
@dinoscheidt
@dinoscheidt 5 months ago
Fun fact: a source helps. @TheGuillotineKing seems cognitively challenged at telling apart the current talks about possibly restricting the EXPORT of OSS models from the other way around.
@TheGuillotineKing
@TheGuillotineKing 5 months ago
@@dinoscheidt Fun Fact your mother swallowed a gallon of 🥜🥜🥜🥜🥜🐿️🐿️🐿️ juice and that's how she had you
@toadlguy
@toadlguy 5 months ago
@@dinoscheidt Well, he is right that they can’t be used for commercial projects due to the license. 😉