Microsoft's Phi 3.5 - The latest SLMs

15,622 views

Sam Witteveen

1 day ago

Comments: 33
@thenoblerot 4 months ago
Thanks Sam! You always have good content in a sea of clickbait nonsense :)
@samwitteveenai 4 months ago
Thanks, this is what I'm trying to go for. This whole space has gotten so hype-focused over the past couple of years.
@yotubecreators47 16 days ago
I totally agree. I just went to like this video for the 2nd time lol, and found I'd already liked it 2 weeks ago.
@blossom_rx 3 months ago
Unfortunately, every Phi model I've tested so far has had a model collapse after 3 to 5 queries. I see this only with Microsoft models OR with models I truncated on my own. I don't understand the hype and don't trust the benchmarks. Just to be clear: I have about 15 different official models running locally that were not tampered with, and NONE except the Microsoft models have this issue.
@thmo_ 3 months ago
The MoE wasn't wrong; the correct answer for that calculation was exactly 9.9996, and rounding _is_ the next step. So I'd say it did better on that specific question.
@jeremybristol4374 4 months ago
Surprisingly good. Better than v3. But it still gets stuck in loops as the response context length grows. Experimenting with prompts to avoid this.
@supercurioTube 4 months ago
Thanks for the coverage! I'd be interested in a tool-use / RAG and other utilities comparison with Llama 3.1 8B quantized aggressively, to bridge the gap in RAM and performance.
@라면먹고싶다-d5w 3 months ago
What are the different use cases for Mini and the MoE? For example, if you want to build a RAG application, which would be more suitable?
@etherhealingvibes 4 months ago
Phi 3.5 is mind-blowing. It's crazy fast and accurate for function calling, and JSON answers too.
@NoidoDev 3 months ago
Which version, what functions?
@mukilanru 3 months ago
Is it faster than Llama-3.1-8b-Instruct in float16 for JSON responses? Also, which model, mini, right?
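For context, a minimal sketch of prompting Phi-3.5-mini for JSON output with Hugging Face transformers. The model id is the official one on the Hub; the schema, prompt, and generation settings are illustrative assumptions, not benchmarked settings:

```python
# Minimal sketch: JSON-style answers from Phi-3.5-mini via transformers.
# Assumes a recent transformers release (Phi-3 is supported natively) and
# enough GPU/CPU memory for the ~3.8B parameter model.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system",
     "content": 'Reply ONLY with valid JSON matching {"city": str, "population": int}.'},
    {"role": "user", "content": "Give me the population of Paris."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=100, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(json.loads(text))  # raises ValueError if the model strays from the schema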
@0cano 4 months ago
Always top-notch content, Sam!
@Diego_UG 4 months ago
Is there any cheap way to fine-tune these small models on proprietary data?
@samwitteveenai 4 months ago
Yeah, you can do fine-tunes with Unsloth etc. quite easily for these.
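A minimal sketch of what such a fine-tune could look like with Unsloth plus TRL's SFTTrainer. The dataset file, LoRA settings, and training arguments are placeholder assumptions, and the exact SFTTrainer signature varies across trl versions:

```python
# Minimal sketch: QLoRA fine-tune of Phi-3.5-mini with Unsloth + TRL.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3.5-mini-instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit base weights keep VRAM use low
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank; higher = more capacity, more memory
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder: a JSONL file where each row has a pre-formatted "text" field.
dataset = load_dataset("json", data_files="proprietary_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,          # placeholder; tune to your dataset size
        learning_rate=2e-4,
        output_dir="phi35-ft",
    ),
)
trainer.train()
```

On a single consumer GPU this kind of QLoRA setup is typically enough for a small instruct model like Phi-3.5-mini, which is what makes it the cheap option the question asks about.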
@erniea5843 4 months ago
Nice overview!
@WillJohnston-wg9ew 4 months ago
Does anyone know of a source for community/conversation on LLMs and business? I'm a technologist developing an app and would really like to find a good source for discussing ideas and what's working/not working.
@xthesayuri5756 4 months ago
It's funny. Every time a new Phi model comes out I get insanely bearish on LLMs, because they always suck. They just game the benchmarks but are horrendous to use.
@hidroman1993 4 months ago
100% agreed, just ask a slightly different question and Phi goes NUTS.
@Spathever 4 months ago
This is what I noticed too. It went crazy the 2nd time; there was no 3rd. Maybe the newer, bigger ones would work. Probably will need fine-tuning.
@etherhealingvibes 4 months ago
Models like these are gold for people working in NLP.
@SavinaAzzahra-i9k 4 months ago
😂
@samwitteveenai 4 months ago
Can I ask what you're using it for where you're finding it sucks? Curious: is it a chat kind of app etc.?
@NetZeroEarth 4 months ago
🔥 🔥 🔥
@hidroman1993 4 months ago
Definitely first
@ArianeQube 4 months ago
0 fucks given.
@IdPreferNot1 4 months ago
How much longer are we going to pretend that these are in any way practical? No on-prem running for anyone except large corps, and many of the privacy issues open source was supposed to address come back once you start using someone else's hardware. I guess it's great to see smaller models improve and push foundation models, but if you want to do stuff with any of these, especially with agentic processes gobbling thousands of tokens, latency and performance demand a hosted service... might as well go free Flash or mini with no setup or hosting issues.
@pwinowski 4 months ago
Well, you actually can run a crew of Phi models on a MacBook Pro. An M3 Pro with 36 GB of system memory can allocate around 27 GB of that pool solely to the GPU for inference.
@IdPreferNot1 4 months ago
@@pwinowski It's not about can/can't. What is the tokens/sec doing that locally? Now consider hitting the Gemini Flash API with 128k tokens 15 times a minute for free.
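As a rough way to ground that comparison, a minimal sketch for measuring local tokens/sec with the Ollama Python client. It assumes the Ollama daemon is running and the model has been pulled with `ollama pull phi3.5`; the prompt is arbitrary:

```python
# Minimal sketch: rough tokens/sec for Phi-3.5-mini served locally by Ollama.
import time
import ollama

prompt = "Explain the difference between LoRA and full fine-tuning."

start = time.time()
response = ollama.chat(
    model="phi3.5",
    messages=[{"role": "user", "content": prompt}],
)
elapsed = time.time() - start

# eval_count is Ollama's count of tokens generated for the reply.
tokens = response["eval_count"]
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")
```

Running the same prompt against a hosted API and timing it the same way gives the apples-to-apples latency comparison the thread is arguing about.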