4:00 note that you wrote "as" instead of "ask" and it still worked, nice :)
@AZisk 10 months ago
I like your new virtual background
@GaryExplains 10 months ago
😂
@adfjasjhf 10 months ago
I tried to type in my language and it got multiple words wrong, even though it was still understandable. ChatGPT and Bing Copilot didn't have this issue, but it's fine. Mistral AI helped me solve a PowerShell script issue in one prompt that I had been trying to solve with ChatGPT 3.5 and Copilot for hours. Amazing! :D Will be using this more often now. Thanks!
@tonysheerness2427 10 months ago
Can it fill in 'I am not a robot' and select the squares with the traffic lights?
@Taskade 9 months ago
Looking forward to welcoming Mistral to Taskade in our next Multi-Agent update! 🌟
@lonekid9286 10 months ago
Context: Video on shunting yard algorithm. Answer: Yes, please.
@GaryExplains 10 months ago
👍I published the first part today. Second part coming soon.
@DanCreaMundos 10 months ago
Looks like you're gonna have to start adding harder challenges, LLMs are starting to get pretty clever. Or maybe you're so popular they're starting to train them to look good in your reviews 😂
@GaryExplains 10 months ago
I wish it was the second possibility, but I think it is likely the first, i.e. they are getting better and better. 😜
@technolus5742 10 months ago
Only got to try it out for a bit yesterday before the interface simply stopped producing answers. (Not sure if it's a bug or an actual limit for non-subscribers.) In my limited testing, I found it much preferable to Gemini Advanced. Gemini consistently gives me poor quality answers, insists on bullet points when I ask it not to, refuses to answer questions about dictators, does not align properly with the intent of my questions, ... Mistral Large on the other hand gives complete answers, logically sound reasoning, and shows good understanding of my intent. Haven't been doing much coding lately, but I'm eager to compare these 2 models for coding (benchmarks show Mistral underperforming, but I'd like to judge that for myself after how disappointing Gemini has been in general despite benchmarks).
@xeon2k8 10 months ago
Have you tried Bing?
@kjmok 10 months ago
I laughed pretty hard when the Gary Lineker test question came up. Good content though, was curious how Mistral's model fares compared to the others.
@SO-dl2pv 10 months ago
Press the like button if you agree to do a video on Shunting Yard Algorithm :)
@S.0.K. 10 months ago
I asked it how many planets are spelled with the letter "a" in their name. It failed miserably.
@GaryExplains 10 months ago
LOL, that is a good one, nice, I think I will add that to my list of questions. Mistral got it wrong for me as well. ChatGPT 4 got it right, ChatGPT 3.5 got it partially right. Claude's answer was similar to ChatGPT 3.5's.
@GaryExplains 10 months ago
I especially like this part of Mistral's reply, "If we consider dwarf planets, Pluto and Eris also have the letter "a" in their names."
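(For reference, a quick sanity check of that question in Python, not something from the video, shows that four of the eight planets have an "a" in their name:)

```python
# Count which planets contain the letter "a" (case-insensitive).
planets = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]
with_a = [p for p in planets if "a" in p.lower()]
print(len(with_a), with_a)  # 4 ['Earth', 'Mars', 'Saturn', 'Uranus']
```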
@ps3301 10 months ago
Everyone spends millions just to train LLMs as their business model.
@technolus5742 10 months ago
They don't, they fine-tune existing models, which is much cheaper.
@ohdude6643 10 months ago
@@technolus5742 It doesn't work like that.
@technolus5742 10 months ago
@@ohdude6643 What? Yes it does. That's literally how the vast majority does it.
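(For anyone wondering what "fine-tuning an existing model" looks like in practice, here is a minimal sketch assuming the Hugging Face transformers and peft libraries; the model name is just an illustrative example, not something from the video:)

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load an already-pretrained model and attach small LoRA adapters,
# so only a tiny fraction of the weights is trained -- far cheaper
# than pretraining a model from scratch.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # example model
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```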
@AlwaysCensored-xp1be 10 months ago
Now got six LLMs on my Pi 5; if I had 6 x Pi 5s, could they talk to each other to get better?
@GaryExplains 10 months ago
No, unfortunately not. LLMs are trained and then the model is released. The model is "read only" in that sense.
@AlwaysCensored-xp1be 10 months ago
@@GaryExplains We might get AGI before I can get my hands on 6 Pi 5s. Things are moving fast in AI.
@GaryExplains 10 months ago
😂
@Ether_Void 10 months ago
You could try to convert them back into a training format. Usually they are something like ONNX or TFLite for inference, and you would need to convert that back into a trainable model under whatever framework you want to use (TensorFlow, PyTorch, etc.). However, you would also need to reimplement the entire training code and loss function so you can load data and update the weights and biases in the model. Added to that, letting LLMs talk to each other isn't usually a good way to make them better without another "supervision" function. The first issue is how you score whether an answer was good or not; simply comparing answers from equal models won't get you anywhere. Second, an LLM has no real reason to stick to human languages. If you only score their conversation without another algorithm testing the correctness of the grammar used by those LLMs, they can actually start speaking very broken English.
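(To illustrate the "training code and loss function" point, here is a bare-bones sketch of a single weight update using PyTorch and Hugging Face transformers; the model name and text are placeholders, and it assumes you already have a trainable checkpoint rather than an inference-only export:)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One training step: compute the causal language-modelling loss on a
# batch of text, backpropagate, and update the weights and biases.
batch = tokenizer("Some training text", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```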
@AlwaysCensored-xp1be 10 months ago
@@Ether_Void Just tried Orca Mini, fast enough to be useful on a Pi 5. Will have to try those smaller ones now.
@mrshankj5101 10 months ago
Mistral Large is intelligent!
@Garythefireman66 10 months ago
Thanks professor!
@GaryExplains 10 months ago
My pleasure!
@benarcher372 10 months ago
Mistral API sounds interesting! If you have the time 🙂