LMDeploy is quite an interesting framework for deploying and quantizing most of the Chinese models. It also works fairly well in Kaggle, given that it supports older GPUs too.
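For anyone curious, a typical LMDeploy workflow is to serve the model with its OpenAI-compatible `api_server` (e.g. `lmdeploy serve api_server internlm/internlm2_5-7b-chat`) and then POST chat requests to it. A minimal sketch, assuming a local server at the default-style URL; the model name, port, and helper names here are illustrative assumptions, not taken from the video:

```python
import json
import urllib.request

def build_chat_request(model: str, user_msg: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": 0.1,
    }

def post_chat(url: str, payload: dict) -> dict:
    """POST the JSON payload to a running server (not executed here)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Assumed endpoint; adjust host/port to your own `lmdeploy serve` run.
    payload = build_chat_request("internlm2_5-7b-chat", "Hello!")
    print(json.dumps(payload, indent=2))
    # post_chat("http://localhost:23333/v1/chat/completions", payload)
```

Because the endpoint speaks the OpenAI chat format, the same payload shape works from most OpenAI-compatible clients.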
@keithmatthews2707 (5 months ago)
Very useful content. Thank you, Sam, for your valuable insights into these topic areas.
@mickelodiansurname9578 (5 months ago)
That's a nice SMALL model for function calling, alright... I appreciate you bringing it to my attention.
@omarelfaqir3627 (5 months ago)
Hello Sam, thanks for bringing this wonderful model to our attention. There is just a confusion in the video between commercial usage and a commercial licence: commercial usage is allowed without submitting any form, but under the open-source licence you might need to open-source any derivative work (i.e. any fine-tuning you do, for example). If you want to make non-open-source stuff with it (why would you 😊?), you will need to submit the form to obtain a commercial licence allowing you to do that. It is quite a classic business model in open-source software.
@toadlguy (5 months ago)
Thank you, Sam, for once again highlighting the most interesting new models/techniques in this fascinating field. I note InternLM 2.5 explicitly states that it "supports gathering information from over 100 websites", with an implementation using Lagent. I'm sure a LangChain implementation could easily be created as well. Actually, fine-tuning models with sources for information not in the model (like current weather or news), with function calling and JSON support, and using LangChain for finer control, would be a great method for using smaller local models. (I feel more comfortable using LangChain than a model-specific framework, if possible.) I would love to see other models adopt this approach. I wonder how much of this is done in pretraining vs. the base model (guess I'll have to look at the paper 😉).
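As a rough illustration of the function-calling loop described above (the tool name and dispatch helper are invented for this sketch; they are not InternLM's or LangChain's actual API): the model emits a JSON tool call, and the host code parses and dispatches it to fetch information not in the model's weights.

```python
import json

# Toy tool registry; get_weather stands in for a real source of
# out-of-model information (current weather, news, etc.).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]          # look up the requested tool
    return fn(**call["arguments"])    # invoke it with the model's args

# A model tuned for function calling would emit something like:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(result)  # Sunny in Paris
```

In a full agent loop, the tool's result would be fed back to the model as a new message so it can synthesize a final answer.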
@waneyvin (5 months ago)
Great job, mate! This is a bit like GLM-4; I'm not sure about the benchmark comparison. Both are agentic by design and can be trained with agentic instructions.
@kenchang3456 (5 months ago)
Kind of interesting: if one of the stronger points of InternLM 2.5 is being able to support agents, I wonder what part of the training data makes it more capable of supporting agents if function-calling data only accounts for 16%. Thanks for the video; I'll have to find a way to make time to try it out.
@jon_flop_boat (5 months ago)
It’s my understanding that, instead of focusing on incorporating information into the model, the creators focused hard on pretraining on reasoning and research. If the model is particularly good at these things, it can just Google the relevant information and synthesize it in real time, hence the name: InternLM. It doesn’t know anything, but it can look stuff up!
@aa-xn5hc (5 months ago)
Please try Lagent with 2.5.
@SonGoku-pc7jl (5 months ago)
Thanks! In Spanish it's only so-so, but it's good that everything keeps evolving :)
@nikosterizakis (4 months ago)
Tried it with CrewAI and Autogen. In the case of CrewAI, it did not work... it could not call the agents properly or pass the right parameters to the tools. Perhaps because the tools were Annotated but it wanted to pass JSON, or it could not map the JSON to the Annotated function calls. To its credit, it did not hallucinate either, trying to please with answers. I also saw a lot of Chinese coming up in the log file ;). In the case of Autogen, I got the error message: "LLM does not have a tool calling function." Both experiments were with Ollama locally, where Llama 3.1 has been tested successfully (of sorts, with plenty of hallucinations).
@samwitteveenai (3 months ago)
They have optimized it for their own agentic framework (Lagent), and the function calling they use is, I think, different from what CrewAI uses, so I wouldn't expect that one to work well. My guess is it could be customized.
@nikosterizakis (3 months ago)
@@samwitteveenai I am also trying it with a LiteLLM wrapper over Ollama, which is the method Autogen recommends for models that do not natively support function calling, just in case it makes a difference. Trials are ongoing; I will report back.
@nikosterizakis (3 months ago)
@@samwitteveenai Update: with LiteLLM I got a 500 API error when using it with Autogen, which I think indicates that the LLM is tuned, security-wise, to work with its own framework (as you pointed out). For some strange reason, CrewAI refused to hook up to LiteLLM serving this model...
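For what it's worth, the JSON-vs-Annotated mismatch described in this thread can be probed with plain stdlib introspection. A minimal sketch (the tool, its fields, and the helper are invented for illustration, not CrewAI's actual internals) of coercing a model's JSON arguments onto an `Annotated` signature:

```python
import inspect
import json
from typing import Annotated, get_args, get_origin

# An invented CrewAI-style tool whose parameters carry Annotated hints.
def book_flight(
    destination: Annotated[str, "IATA code of the airport"],
    seats: Annotated[int, "number of seats"],
) -> str:
    return f"Booked {seats} seat(s) to {destination}"

def call_with_json(fn, raw_json: str) -> str:
    """Map a JSON arguments object onto an Annotated signature,
    coercing each value to the declared base type."""
    args = json.loads(raw_json)
    coerced = {}
    for name, param in inspect.signature(fn).parameters.items():
        hint = param.annotation
        # Unwrap Annotated[...] down to the underlying type.
        base = get_args(hint)[0] if get_origin(hint) is Annotated else hint
        coerced[name] = base(args[name])
    return fn(**coerced)

print(call_with_json(book_flight, '{"destination": "CDG", "seats": "2"}'))
# Booked 2 seat(s) to CDG
```

If the framework skips this kind of coercion step, a model that emits `"seats": "2"` as a string (or a raw JSON blob) will fail against a typed signature, which would explain the symptoms reported above.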
@ManjaroBlack (5 months ago)
I couldn't get InternLM to work well with RAG or any embeddings. It gives OK answers to simple prompting.
@WillJohnston-wg9ew (5 months ago)
What is the agentic aspect? Maybe I don't understand something or missed something?
@Schaelpy (5 months ago)
He talks about it at 4:45
@attilavass6935 (5 months ago)
Am I the only one who misses a memory module in Lagent? I'm going to test this ASAP, though.
@tlfmcooper (5 months ago)
Thanks
@choiswimmer (5 months ago)
Nice
@lapozzunk (5 months ago)
If each model gets a higher rating than its predecessors, when will we reach 100? Also, if I don't watch such videos, will this happen later?
@wickjohn3854 (5 months ago)
Ask him what happened in 1989, LOL.
@Dom-zy1qy (5 months ago)
The ultimate benchmark for Chinese models. I wonder if they've actually been tuned to avoid discussing things like that. It would probably get them defunded by the govt.
@TheGuillotineKing (5 months ago)
Fun fact these Chinese models are banned in the USA and can’t be used for a commercial product
@ringpolitiet (5 months ago)
Quite an enigma how you combine an interest in rather techy stuff like tool-calling LLMs with a straight-off-the-turnip-truck view of other things that seem as easy, or easier, to get informed about.
@dinoscheidt (5 months ago)
Fun fact: a source helps. @TheGuillotineKing seems cognitively challenged in telling apart the current talks about possibly restricting the EXPORT of OSS models from the other way around.
@TheGuillotineKing (5 months ago)
@@dinoscheidt Fun Fact your mother swallowed a gallon of 🥜🥜🥜🥜🥜🐿️🐿️🐿️ juice and that's how she had you
@toadlguy (5 months ago)
@@dinoscheidt Well, he is right that they can’t be used for commercial projects due to the license. 😉