This is so awesome! Just tried out llava 1.5 7b llamafile and it worked out of the box running on my CPU, without eating all of my RAM! The token generation speed was good enough for me! And my CPU is ~8 years old. Holy cow!
@bigglyguy8429 6 months ago
Where gguf?
@geomorillo 6 months ago
where?
@dskbiswas 6 months ago
What did I just watch... mind-blowing! Finally someone took the initiative of going against the tide and gave CPUs some of the attention they have lost to the GPU madness!
@navodpeiris9054 6 months ago
Loving the llamafile already. This is how I deploy local LLMs now!
@LeftBoot 6 months ago
Local for yourself or clients?
@longboarderanonymous5718 6 months ago
These individuals are pioneers of the Personal AI. Efficient, universal, and economical.
@aeu126 6 months ago
This was my favorite presentation!
@lolilollolilol7773 3 months ago
Justine Tunney is a genius. Everything she does is indistinguishable from magic.
@granite_planet 2 months ago
Looking at her work, I sometimes think I should just quit programming and pick up something like gardening instead. :D
@Viewable11 6 months ago
Llamafile now supports OpenAI API and non-AVX CPUs. Finally! Having the OpenAI API is a must.
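(For anyone wanting to poke at that OpenAI-compatible endpoint: a rough sketch based on llamafile's documented defaults — the filename is illustrative, and the port, model name, and dummy API key are assumptions, so check the project README:)

```shell
# Start a llamafile in server mode first (filename illustrative):
#   ./llava-v1.5-7b-q4.llamafile --server --nobrowser
# Then talk to it like any OpenAI endpoint; the local server
# ignores the API key, so a placeholder bearer token is fine:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer no-key" \
  -d '{
        "model": "LLaMA_CPP",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```

Existing OpenAI client libraries should also work by pointing their base URL at `http://localhost:8080/v1`.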
@indylawi5021 6 months ago
This is fantastic! I can't wait to try it out.
@deadlokIV 6 months ago
Justine just shifted the timeline 💥🔀
@gunnarasmussen207 6 months ago
Well, what am I supposed to say but: awesome. Running local AI on normal consumer hardware without any worries about privacy seemed impossible just months ago. All the computational work in GPT, Gemini and the others is done in the cloud on the companies' servers, so you don't know what they are doing with your data. Even if you have nothing to hide, I'm sure everyone has certain things he/she wants to keep private... This seems to be the right way of implementing AI in a private manner. And making such a great effort without any commercial interests is nothing but mind-blowing. Keep up the good work, please!
@leejacksondev 6 months ago
This is utterly brilliant. What a fantastic presentation. Amazing project.
@LaHoraMaker 6 months ago
I really like the idea of a Threadripper configuration, but... does anyone have a reference machine configuration for that? I'd like to compare the price to existing alternatives like the dual RTX 4090 setup that is mentioned!
@tejaslotlikar3573 6 months ago
Now this is called achievement. Meanwhile the so-called "open"AI is looting people. You guys are awesome.
@craigscott4205 6 months ago
Justine is an absolute champion!
@aiforsocialbenefit 6 months ago
Awesome. Great project and presenters!
@Jason_RA 6 months ago
This is absolutely amazing!
@spookymv 5 months ago
It was the first time I had the chance to listen to one of his speeches. Bro, I like this guy. :D
@FirstNameLastName-fv4eu 6 months ago
These cloud companies are trying their best to keep the valuation high!!! This guy is the new CDO manager!!
@tollington9414 6 months ago
Absolutely fascinating and totally genius.
@delq 6 months ago
Awesome, exactly what I have been looking for: no more heavy virtual environments, no more heavy Nvidia CUDA drivers! Let's fricking go!!!
@Alice_Fumo 5 months ago
Well... I just took a look at the llama 3 70b llamafile repo and found this info about performance: "AMD Threadripper Pro 7995WX ($10k) does a good job too at 5.9 tok/sec eval with Q4_0 (49 tok/sec prompt). With F16 weights the prompt eval goes 65 tok/sec." 70B would be the lower bound for a model I would enjoy using, but getting ~6 tokens per second of output on a $10k CPU... At that point I could just as well build a GPU machine. So even though I think this is an amazing project in concept, either it or hardware in general has a long way to go before it is, in my opinion, usable for an average person such as myself. (I'm assuming the performance data on the Hugging Face repo are at least somewhat accurate and not outdated.)
@eggmaster88 6 months ago
Awesome work!
@raiumair7494 6 months ago
Refreshing indeed - tokens per second is one measure, and I like eval speed, but what and how do you measure that?
@RomuloMagalhaesAutoTOPO 5 months ago
Amazing. Thank you.
@NeXTOoOoOoO 6 months ago
Wow! Really great work!
@CaptainSpoonsAlot 6 months ago
this is just fantastic.
@KevinKreger 6 months ago
Amazing❤
@OranCollins 5 months ago
omg I love Justine Tunney! They are amazing!
@GandalfTheBrown117 6 months ago
Justine is a GOAT.
@rayhere7925 6 months ago
This is a game-changing breakthrough. There's no underplaying it.
@john_blues 6 months ago
Is there a way to get Windows to run llamafiles bigger than 4 GB? Without that, you're very limited in the models you can run.
@Deepak-eu7kt 22 days ago
Has anyone tried the bigger models on CPU and has feedback?
@XEQUTE 6 months ago
Love it!!
@Charles-Darwin 6 months ago
Awesomesauce
@dbreardon 6 months ago
He said, "Who remembers using the original Netscape Navigator?" To that I say: who remembers using the original Mosaic browser? And then telnet before the graphical internet?
@WoodyWilliams 6 months ago
[raises hand] Doh!
@tinkerman1790 6 months ago
"Who remembers the handshaking tone in the dial-up process?" 😂
@smthngsmthngsmthngdarkside 6 months ago
Who remembers the original smoke signals?
@Atonsha 6 months ago
How about BTX?
@vncstudio 6 months ago
We do! And Gopher!
@philly_eddie 6 months ago
very cool
@masbuba 6 months ago
Oh shit, CPU prices are going to hike.
@romanbauer 6 months ago
👏🏻👏🏻👏🏻
@johnkost2514 6 months ago
This is better than the Nvidia NIM solution (which is just containerization). Way better.
@ShieldsWebDesign 5 months ago
Why is no one talking about this?
@Godkidz7 6 months ago
Freedom and justice are more expensive than money and power. No one lives and rules forever. Respect and salutes to you guys...
@7T7Soulz 6 months ago
this is the future
@cholst1 6 months ago
*checking on RAM prices*
@omercelebi2012 6 months ago
What about the quality trade-off? Did they mention that?
@GandalfTheBrown117 6 months ago
Tired -> wired around @9:30 😂
@erb34 6 months ago
Don't forget the browser.
@timchapman8539 6 months ago
I need an AI that can access the files on my hard drive. Does anyone have a suggestion? I don't want to upload them to the AI. I want the AI to access them directly.
@bigglyguy8429 6 months ago
GPT4All has RAG
@constantinegeist1854 6 months ago
All of this was already possible before, back in early 2023. What they did was just save you 15 minutes (otherwise you'd have to download an inference program and the weights separately).
@JohnnysaidWhat 6 months ago
This guy is a fkn rockstar on stage. I was totally blown away 🎉
@ravishmahajan9314 5 months ago
NVIDIA has hired CIA agents to make sure this technology doesn't reach the hands of the public. Stay safe, sir! 😝
@pandoraeeris7860 6 months ago
The Singularity is here.
@snow8725 6 months ago
Fuck yeah!!!
@hope42 5 months ago
Am I the only one who thinks someone AI-generated Matt Perry?
@TalsBadKidney 6 months ago
let's go to the gym
@fkxfkx 6 months ago
Well, this feels like something out of left field. 🤷‍♂️ Seems too good to be true. What are the catches?
@projectsspecial9224 6 months ago
As an AI Design Engineer and developer of original works in Unified Language Models (a predecessor to LLMs) for over 20 years, I can say this compact framework, GPU or custom hardware independence, and resource-efficient methodology is the correct approach. 😊
@fkxfkx 6 months ago
"A" correct approach, but maybe not "the" correct approach. It's not clear what the downsides are yet.
@bigglyguy8429 6 months ago
@@fkxfkx I'm not sure how you're supposed to run it? GGUF I can run, but what the heck is the 14GB "llamafile" thing?
@maxd3946 6 months ago
@@bigglyguy8429 Actually, you don't need a 14GB llamafile. It can't even be run on Windows (4GB max executable size limit). You can keep a llamafile without any model embedded in it and call it with the -m parameter to specify the model file to load.
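(Sketch of that workaround, assuming llamafile's documented `-m`/`-p` flags — the filenames below are illustrative, not exact:)

```shell
# Windows caps .exe files at 4GB, so don't use a llamafile with
# big weights baked in. Instead, keep two pieces:
#   1. the small, bare llamafile runtime with no embedded model
#      (on Windows, rename it to llamafile.exe so it will execute)
#   2. the weights as a separate GGUF file of any size
# Then load the external weights with -m at launch:
./llamafile -m llama-3-70b-instruct.Q4_0.gguf -p "Hello!" -n 32
```

The same `-m` trick works in server mode, so one runtime binary can front any GGUF you have on disk.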
@JimAmos 6 months ago
Hats off for the engineering feat. But in terms of application, we are still just talking about text summarization. And the image generation in your own demo was just as disappointing as ever. There's no killer app for LLMs yet, even though we keep throwing money and science at it. What are we even doing?
@bobtarmac1828 6 months ago
Free candy. I mean, free open source AI for everyone. It's like a trick. Don't fall for it. Cease AI.