In the last two years, I've worked extensively with Small Language Models, and recently they've improved significantly. This isn't due to a change in their size but to the availability of high-quality synthetic data, enabled by trillion-parameter models. Large Language Models act as data compression tools. It seems we had to first build a very large model to digest vast amounts of information from the internet, then use it to create targeted synthetic data to enhance small models. I believe the future lies not in creating bigger models but in leveraging synthetic data to elevate Small Language Models. In a year, we might see Small Language Models outperform current Large Language Models on benchmarks, thanks to synthetic data. Initially, we needed to create large, often under-trained models, but now we can use them to generate synthetic data in any format we need. This allows for highly specialized small models that excel at specific tasks. This is how I see the future unfolding.
@Druid-o9d · 2 months ago
They need to do that on top of the larger models; size isn't an issue for me.
@bseddonmusic1 · 2 months ago
Amazing the effects of competition.
@frag_it · 2 months ago
I am loving these tutorials. I would like to see you do an in-depth one on using vLLM as an API endpoint for serving LLMs on an Azure Kubernetes cluster. It would be so useful to the community, as we could then use quantized models of Llama 3 70B on very cheap GPUs to serve applications. It would be amazing for the community, and then you could use that to help make agent tutorials with LangGraph. Bro, I would love it!
@micbab-vg2mu · 2 months ago
Sam, my first tests were positive: a good model for data extraction and clearance, and JSON function calling works great. And at this price :) I am waiting to fine-tune this model :) It will be fun :)
@summer_xo · 2 months ago
Great to hear, exactly what I need it for, and for summarising large amounts of text!
@HadiCurtay · 2 months ago
Great explanation Sam!
@thenoblerot · 2 months ago
I really like Claude, but Haiku is not great at function calling. Love to see iteration! Thanks, Sam!
@MrKrzysiek9991 · 2 months ago
Test this one. It's even worse. I'm not sure what is wrong with this model
@4l3dx · 2 months ago
@MrKrzysiek9991 Are you sure? Function calling works perfectly for me, and I should also add that I haven't had issues with censorship when translating texts, unlike with the other models.
@MrKrzysiek9991 · 2 months ago
@4l3dx In my test, which involves HTML parsing, it is worse than Llama 3 8B. I have not tested it for function calling. My colleague has issues with agents powered by this model; old prompts stop working. It's hard to say what is wrong, but it may have been overtuned for benchmarks, which was addressed in the latest AI Explained video.
@henkhbit5748 · 2 months ago
Love competition 👍 BTW: Gemini Flash has a 1-million-token context, plus audio and video inputs.
@TomGally · 2 months ago
Thank you for the very informative video! I learn a lot from all of your videos.
@COLLINSSAKALA-jq5ze · 2 months ago
Just from using it, I think it's now available for all. I'm impressed 😮. It's excellent, though it can't browse the web in real time. Despite that, it will be very helpful, especially for summarizing content, rewriting resumes, and tailoring them for job applications. I've also noticed an improvement in coding capabilities compared to the older model. When I generated code for my resume, it did a great job.
@SLAM2977 · 2 months ago
What will the inference cost be if someone needs to finetune the model?
@samwitteveenai · 2 months ago
Don't think that has been announced yet.
@SLAM2977 · 2 months ago
@samwitteveenai It is double the price, apparently. However, as it's only for tier 4 and 5 users, that means it's only for companies.
@DeathHeadSoup · 2 months ago
I would find this interesting if it were actually competing against open-source models that I can use locally, but since it isn't, I don't find it newsworthy. It only gives users a price cut, when we should all be asking whether using AI SaaS products in your software stack is a good idea. If they release this model as a local-use product, then it will be newsworthy.
@samwitteveenai · 2 months ago
Very newsworthy for people building apps that want fast, cheap models.
@DeathHeadSoup · 2 months ago
@samwitteveenai Perhaps, but I would argue that Groq Cloud is still a better choice if you do not have your own server. Being able to test your application against several open-source models helps future-proof your application and avoid vendor lock-in.
@stevenkies802 · 2 months ago
So, do we call this the AI, chatbot, or LLM war?
@RyluRocky · 2 months ago
5:40 It isn't 4.5, it's 4o. It's called 4o because it's omnimodal, meaning all modalities. They never claimed to increase its intelligence; rather, it's a structural shift from GPT-4.
@hqcart1 · 2 months ago
If you do the math, $0.15/million tokens is actually cheaper than running a local model, not to mention it's better than all the open-source ones.
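A back-of-the-envelope version of that math. All the local-GPU figures below (rental rate, throughput) are illustrative assumptions, not measurements, and real numbers vary a lot with hardware and batching:

```python
# Rough comparison: hosted API priced per token vs. renting a GPU
# to run a local model. Assumed numbers, for illustration only.

API_PRICE_PER_MILLION = 0.15   # USD per 1M input tokens (4o mini's quoted price)
GPU_RENTAL_PER_HOUR = 0.80     # USD/hour, assumed cloud GPU rental rate
LOCAL_TOKENS_PER_SEC = 50      # assumed local-model throughput

def api_cost(tokens: int) -> float:
    """Cost of processing `tokens` through the hosted API."""
    return tokens / 1_000_000 * API_PRICE_PER_MILLION

def local_cost(tokens: int) -> float:
    """Cost of processing `tokens` on a rented GPU at the assumed rate."""
    hours = tokens / LOCAL_TOKENS_PER_SEC / 3600
    return hours * GPU_RENTAL_PER_HOUR

workload = 10_000_000  # a 10M-token job
print(f"API:   ${api_cost(workload):.2f}")
print(f"Local: ${local_cost(workload):.2f}")
```

Under these assumptions the API comes out far cheaper for bursty workloads; the comparison tilts back toward local hardware only at very high sustained utilization or much higher local throughput.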
@hastyscorpion · 2 months ago
But if you want to keep your data your data, running local is the only way to ensure that.
@hqcart1 · 2 months ago
@hastyscorpion True, but the reality is 99% of the population doesn't care about their data.
@MudroZvon · 2 months ago
@hastyscorpion Come on, your data! People who think they are so important deserve to suffer from using local models 😅 I'm kidding.
@valleymykel-mq7gw · 2 months ago
"Don't ask questions, just consume product then get excited for next product!"
@Dht1kna · 2 months ago
Perhaps 4o mini is actually a distillation of 4.5 and not 4o.
@samwitteveenai · 2 months ago
4o was 4.5 until they renamed it. I think it is just a small training run, certainly for the IT/post-training part.
@davidwipperfurth8465 · 2 months ago
RE: open-source models: I worry these cheap corporate models aren't competing with one another as much as they are competing against decentralization.
@clapppo · 2 months ago
All these new models are utterly obsessed with formatting / organising everything as lists...
@samwitteveenai · 2 months ago
Yeah, it's their new IT datasets with CoT built in.
@123arskas · 2 months ago
Who wants expensive systems or cloud services when we have these super-cheap models?
@hope42 · 2 months ago
🛑🛑 !!Note!! 🛑🛑: OpenAI created mini for their own selfish purpose: to neuter GPT-4o Plus, put you in a timeout like a bad child, and drop you back to GPT-4o mini, much like it did with GPT-4 when it would drop you back to GPT-3.5 (aka Dory)! Wtf? Fact-check me please, but that is what I am seeing now. I just timed out for the first time ever in GPT-4o, and it dropped me back to GPT-4o mini and took away my upload option again. Wtf? 🛑🛑
@samwitteveenai · 2 months ago
Yeah, I also noticed some weird timeouts. Possibly teething issues early on, I hope.