This is so awesome! Just tried out llava 1.5 7b llamafile and it worked out of the box running on my CPU, without eating all of my RAM! The token generation speed was good enough for me! And my CPU is ~8 years old. Holy cow!
@bigglyguy8429 (4 months ago)
Where gguf?
@geomorillo (4 months ago)
where?
@dskbiswas (4 months ago)
What did I just watch... mind-blowing! Finally someone has taken the initiative to go against the tide and give CPUs some of the attention they've lost to the GPU madness!
@longboarderanonymous5718 (4 months ago)
These individuals are pioneers of personal AI: efficient, universal, and economical.
@lolilollolilol7773 (a month ago)
Justine Tunney is a genius. Everything she does is indistinguishable from magic.
@granite_planet (2 days ago)
Looking at her work, I sometimes think I should just quit programming and pick up something like gardening instead. :D
@navodpeiris9054 (4 months ago)
Loving llamafile already. This is how I deploy local LLMs now!
@LeftBoot (4 months ago)
Local for yourself or clients?
@aeu126 (4 months ago)
This was my favorite presentation!
@Viewable11 (4 months ago)
Llamafile now supports the OpenAI API and non-AVX CPUs. Finally! Having the OpenAI API is a must.
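For anyone wondering what that OpenAI-compatible API looks like in practice, here is a minimal sketch in Python, assuming a llamafile server is already running locally; the port (8080 is the usual default), the placeholder model name, and the dummy API key are assumptions that may need adjusting for your setup:

    # Minimal sketch: talk to a locally running llamafile server through its
    # OpenAI-compatible endpoint. Port, model name, and key are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # local llamafile server, not api.openai.com
        api_key="sk-no-key-required",         # dummy value; the local server does not check it
    )

    response = client.chat.completions.create(
        model="LLaMA_CPP",  # placeholder name; the local server serves whatever model it loaded
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(response.choices[0].message.content)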
@gunnarasmussen207 (4 months ago)
Well, what am I supposed to say but: awesome. Running local AI on normal consumer hardware without any worries about privacy seemed impossible just months ago. All the computational work in GPT, Gemini and the others is done in the cloud on the companies' servers, so you don't know what they are doing with your data. Even if you have nothing to hide, I'm sure everyone has certain things they want to keep private. This seems to be the right way of implementing AI in a private manner. And making such a great effort without any commercial interests is nothing but mind-blowing. Keep up the good work, please!
@deadlokIV (4 months ago)
Justine just shifted the timeline 💥🔀
@tejaslotlikar3573 (4 months ago)
Now this is what you call an achievement. Meanwhile the so-called "open"AI is looting people. You guys are awesome.
@indylawi5021 (4 months ago)
This is fantastic! I can't wait to try it out.
@Alice_Fumo (3 months ago)
Well... I just took a look at the repo for the Llama 3 70B llamafile and found this info about performance: "AMD Threadripper Pro 7995WX ($10k) does a good job too at 5.9 tok/sec eval with Q4_0 (49 tok/sec prompt). With F16 weights the prompt eval goes 65 tok/sec." 70B would be the lower bound for a model I would enjoy using, but getting about 6 tokens per second of output on a $10k CPU... at that point I could just as well build a GPU machine. So even though I think this is an amazing project in concept, either it or hardware in general still has a long way to go before it's usable for an average person such as myself, in my opinion. (I'm assuming the performance data on the Hugging Face repo are at least somewhat accurate and not outdated.)
@leejacksondev (4 months ago)
This is utterly brilliant. What a fantastic presentation. Amazing project.
@craigscott4205 (4 months ago)
Justine is an absolute champion!
@LaHoraMaker (4 months ago)
I really like the idea of a Threadripper configuration but... does anyone have a reference machine configuration for that? I'd like to compare the price to existing alternatives like the dual RTX4090 setup that is mentioned!
@GandalfTheBrown117 (4 months ago)
Justine is a GOAT
@spookymv (3 months ago)
It was the first time I had the chance to listen to one of his talks. Bro, I like this guy. D:
@FirstNameLastName-fv4eu (4 months ago)
These cloud companies are trying their best to keep their valuations high!!! This guy is the new CDO manager!!
@delq (4 months ago)
Awesome, exactly what I have been looking for: no more heavyweight virtual environments, no more heavy Nvidia CUDA drivers! Let's fricking go!!!
@aiforsocialbenefit (4 months ago)
Awesome. Great project and presenters!
@dbreardon (4 months ago)
He said, "Who remembers using the original Netscape Navigator?" To that I say: who remembers using the original Mosaic browser? And then telnet, before the graphical internet?
@WoodyWilliams (4 months ago)
[raises hand] Doh!
@tinkerman1790 (4 months ago)
“Who remembers the handshaking tone in the dial-up process” 😂
@smthngsmthngsmthngdarkside (4 months ago)
Who remembers the original smoke signals?
@Atonsha (3 months ago)
How about BTX?
@vncstudio (3 months ago)
We do! And Gopher!
@raiumair7494 (4 months ago)
Refreshing indeed. Tokens per second is one measure, and I like eval speed, but what exactly do you measure for that, and how?
@tollington9414 (4 months ago)
Absolutely fascinating and totally genius
@Jason_RA (3 months ago)
This is absolutely amazing!
@rayhere7925 (4 months ago)
This is a game-changing breakthrough. There's no other way to put it.
@john_blues (4 months ago)
Is there a way to get Windows to run llamafiles bigger than 4 GB? Without that, you're very limited in the models you can run.
@RomuloMagalhaesAutoTOPO (3 months ago)
Amazing. Thank you.
@eggmaster88 (4 months ago)
Awesome work!
@masbuba (4 months ago)
Oh shit, CPU prices are going to hike.
@NeXTOoOoOoO (4 months ago)
Wow! Really great work!
@CaptainSpoonsAlot (3 months ago)
this is just fantastic.
@OranCollins (3 months ago)
OMG I love Justine Tunney! They are amazing!
@johnkost2514 (4 months ago)
This is better than the Nvidia NIM solution (which is just containerization). Way better.
@cholst1 (4 months ago)
*checking on RAM prices*
@KevinKreger (3 months ago)
Amazing❤
@ShieldsWebDesign (3 months ago)
Why is no one talking about this?
@philly_eddie (4 months ago)
very cool
@Charles-Darwin (4 months ago)
Awesomesauce
@XEQUTE (4 months ago)
Love it!!
@constantinegeist1854 (3 months ago)
All of this was already possible before, back in early 2023. What they did was just save you 15 minutes (otherwise you'd have to download an inference program and the weights separately).
@Godkidz7 (4 months ago)
Freedom and justice are more expensive than money and power. No one lives and rules forever. Respect and salute to you guys...
@7T7Soulz (4 months ago)
This is the future.
@erb34 (4 months ago)
Don't forget the browser.
@omercelebi2012 (4 months ago)
What about the quality trade-off? Did they mention that?
@GandalfTheBrown117 (4 months ago)
Tired -> wired around @9:30 😂
@romanbauer (4 months ago)
👏🏻👏🏻👏🏻
@JohnnysaidWhat (4 months ago)
This guy is a fkn rockstar on stage, I was totally blown away 🎉
@timchapman8539 (4 months ago)
I need an AI that can access the files on my hard drive. Does anyone have a suggestion? I don't want to upload them to the AI. I want the AI to access them directly.
@bigglyguy8429 (4 months ago)
GPT4All has RAG.
@ravishmahajan9314 (3 months ago)
NVIDIA has hired CIA agents to make sure this technology doesn't reach the hands of the public. Stay safe, sir! 😝
@hope42 (3 months ago)
Am I the only one who thinks someone AI-generated Matt Perry?
@pandoraeeris7860 (4 months ago)
The Singularity is here.
@snow8725 (4 months ago)
Fuck yeah!!!
@fkxfkx (4 months ago)
Well, this feels like something out of left field. 🤷♂️ Seems too good to be true. What are the catches?
@projectsspecial9224 (4 months ago)
As an AI design engineer and developer of original work in unified language models (a predecessor to LLMs) for over 20 years, I'd say this compact framework, its independence from GPUs and custom hardware, and its resource-efficient methodology are the correct approach. 😊
@fkxfkx (4 months ago)
“A” correct approach, but maybe not “the” correct approach. It’s not clear what downsides there are yet.
@bigglyguy8429 (4 months ago)
@fkxfkx I'm not sure how you're supposed to run it? GGUF I can run, but what the heck is the 14 GB "llamafile" thing?
@maxd3946 (4 months ago)
@bigglyguy8429 Actually, you don't need a 14 GB llamafile; it can't even be run on Windows (4 GB max executable size limit). You can keep a llamafile with no model embedded in it and call it with the -m parameter to specify the model file to load.
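In practice that workaround looks something like the sketch below; the file names are placeholders, and on Windows the small launcher is typically renamed to end in .exe first:

    ./llamafile -m mistral-7b-instruct-v0.2.Q4_K_M.gguf

That is, a model-free llamafile that stays under the 4 GB limit acts as the engine, and the weights live in a separate GGUF file passed via -m.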
@TalsBadKidney (4 months ago)
let's go to the gym
@JimAmos (4 months ago)
Hats off for the engineering feat. But in terms of application, we are still just talking about text summarization. And the image generation in your own demo was just as disappointing as ever. There's no killer app for LLMs yet even though we keep throwing money and science at it. What are we even doing?
@bobtarmac1828 (4 months ago)
Free candy, I mean, free open-source AI for everyone. It's like a trick. Don't fall for it. Cease AI.