WTF is an AI PC?

51,398 views

Framework

1 day ago

Comments: 295
@challacustica9049 2 months ago
No hype marketing. Just the CEO giving us an honest tech explanation. Framework is so cool.
@ajmash9745 2 months ago
Nirav Patel made a netbook-sized laptop!!!!
@Peshyy 2 months ago
Nirav Patel is always such an inspiration. A geek who made his own company and is still that same geek who interacts with the community and improves the product every single year. I know Framework will never try to sell "AI laptops" to just be on the money train. And this video was super informative and down-to-earth!
@lonzoformvp5078 2 months ago
more geeks should become ceos
@AdrianSanabria 2 months ago
I installed GPT4ALL and Ollama on my Framework 13. So my Framework 13 is now an AI PC. More importantly, I just upgraded from a 4-core CPU to 12 cores, so it is arguably the best AI PC, because I can upgrade it!
@michaeljaques77 2 months ago
This is how they win. The toasters gain the ability to repair themselves ;)
@iroar5982 2 months ago
This is awesome. You can even upgrade the CPU. Really out of this world
@Q365GA 2 months ago
I really wonder what a Framework Qualcomm PC would look like...
@SAINIDE 1 month ago
@iroar5982 So it's basically swapping out the whole motherboard, not just a chip like on desktop PCs!
@shanthkoka 2 months ago
Everyone: more AI now
Nirav: do you even know what AI is?
@keyboard_g 2 months ago
"Everyone" looks around for everyone.
@Aura_Mancer 2 months ago
@keyboard_g Vocal minority, more like.
@ninjadodovideos 2 months ago
"Everyone" as in greedy tech investors who care too much about short term profit to think about whether the way this 'AI' was built is even legal at all, or if maybe it is in fact the largest instance of mass copyright infringement in the history of copyright and AI companies owe everyone whose work (not to mention private information) they stole royalties and damages.
@ExtantFrodo2 2 months ago
@ninjadodovideos I'm not averse to saying that accessibility, not just to such a vast amount of aggregated knowledge but to a utility that can process and integrate it for anyone in seconds, is (and will be) more beneficial and valuable to humankind in total than any patent or copyright infringement along the way. I go by the adage "'Strong' is what we make each other." 99% of capitalism's philosophy is predicated on this all being a zero-sum game, which has never been true. Wealth is made by doing more with less. This technology is our path to doing almost everything with almost nothing. The birth of the age of abundance is upon us.
@jonathanyun7817 1 month ago
@@ninjadodovideos Tantacrul just released a massive video about facebook's shady practices and he gets into AI in the final chapter of that video... truly disgusting stuff... Big tech has been chasing profits without paying for the consequences of their actions for too long, and hopefully AI will be the last straw in that fight for people's rights and privacy but I'm pessimistic in our current state... @ExtantFrodo2 This reads like AI-generated slop, which proves our point if anything...
@allanwind295 2 months ago
This format works so much better than the awkward interview format.
@TerranViceGrip 2 months ago
It's an excuse to jack up the prices of hardware for NPUs that have zero use case for 99% of consumers
@RokeJulianLockhart.s13ouq 2 months ago
I find it difficult to believe that CPU and SoC OEMs would sacrifice performance for that.
@koisose0 2 months ago
If they had an OCuLink add-on I'd buy one straight away.
@TheSwanies 2 months ago
I still don't believe raytracing has value for gamers. Just give us better framerates and higher resolutions, make 4k affordable for everyone. No need for car reflections. RTX only exists because the same tech can be sold as AI chips, so it's paying for its development
@Splarkszter 2 months ago
And make it way easier to tag data and spy on people
@hackladdy9886 2 months ago
Exactly. AI models already run on anything with a reasonably okay GPU. We don't need to pay hundreds of dollars more for increasingly complex, harder to repair hardware, that can't be used for anything outside of those AI models.
@heredos4666 2 months ago
I love this. Framework coming in and being like, "Oh, you want AI? Why are you asking? You can already have AI," and then proceeding to explain how to run Ollama. As a long-time Ollama user, this makes me really happy.
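For anyone who wants to try what this comment describes, the basic Ollama workflow from the video looks roughly like this (Linux commands shown; the install-script URL and model tag are the standard ones at the time of writing, so check ollama.com if they have changed):

```shell
# Install Ollama (Linux; macOS and Windows installers are on ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull Llama 3.1 8B and start an interactive chat in the terminal
ollama run llama3.1

# See which models you have downloaded locally
ollama list
```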
@Brians256 2 months ago
My respect for Framework and Nirav Patel has just been cemented. What a great video on the topic and such a gift to the community! SW engineer for 30+ years here, and this was such a soothing relief compared to the marketing hype and meaningless drivel I've seen from so many other companies.
@lno_onel3071 2 months ago
this is probably the best ad for Framework's morals possible
@MadMathMike 2 months ago
13:06 Hahaha, the dig against Acer and Toshiba! 😂 "Wow, this is even worse!"
@axethepenguin 2 months ago
RIP Toshiba
@mattvisaggio 2 months ago
I like this company a lot. They're not shoveling hype and have made a dent. I'll be interested to see how their pricing model adjusts once they have enough critical mass. For me right now their pricing is a barrier to entry.
@Quarky_ 2 months ago
Repair friendly hardware will always carry a premium. Manufacturing wise, soldering things to the PCB is cheaper. I don't think many people appreciate that. For the consumer though, the lifetime cost of ownership is a lot lower (also cheaper in terms of non-monetary things like aggravation and time cost when things do break). Unfortunately this is not reflected in sticker price. It's difficult to convey this at the time of purchase. For the consumer who cares, maybe one way to get a sense of this is to look up repairability scores from sources like iFixit.
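That lifetime-cost argument is easy to sanity-check with back-of-the-envelope arithmetic. All the prices below are hypothetical placeholders, not real Framework or competitor numbers:

```python
def lifetime_cost(upfront, repair_cost, repairs, replacements=0, replacement_cost=0):
    """Total cost of ownership over the same service life."""
    return upfront + repairs * repair_cost + replacements * replacement_cost

# Hypothetical: a repairable laptop with two part swaps over 8 years...
repairable = lifetime_cost(upfront=1400, repair_cost=120, repairs=2)
# ...versus a cheaper sealed laptop that is replaced outright once in that window.
sealed = lifetime_cost(upfront=1000, repair_cost=0, repairs=0,
                       replacements=1, replacement_cost=1000)

print(repairable, sealed)  # 1640 2000
```

Even with the higher sticker price, the repairable machine comes out ahead in this made-up scenario; the real comparison depends entirely on actual part prices and failure rates.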
@jonathanyun7817 1 month ago
You're probably already aware of this if you follow their activities, but in case you aren't: Framework sometimes has old stock at steeply discounted prices, like 11th gen laptops going for like $600 or 500 or so when they were being phased out iirc, which is a super competitive price on a workhorse laptop for the modest user :) Higher processing power is more expensive for sure, but eGPU's can be a decent solution if some portability can be sacrificed while still being far more portable than a desktop :) Hope things can work out for you soon!
@louishurr393 9 days ago
Nirav seems like a really great CEO. Gives me a lot of confidence in his product and vision.
@giripriyadarshan 2 months ago
CEO who can type 🤙🏿🔥
@kvbk 2 months ago
Killed it, man. He's no ordinary guy.
@kolkoki 2 months ago
Did Nirav, the effin' CEO of Framework, just make a video to talk to us about something... he just wanted to talk about? How cool is that?!
@kgenkov 2 months ago
I like the awkward interviews too!
@The-Cat 2 months ago
Please don't turn into cultists/tech CEO worshippers. I'm tired of iFruit fanboys and Musky Tesla slaves.
@深夜-l9f 1 month ago
@The-Cat It's impossible to make a cult around him as long as he stays this way. People just appreciate him because there aren't many like him around.
@guesswho2778 1 month ago
@The-Cat Just because people like him more than most other CEOs doesn't mean they're in a cult. Though I do see your point, and I hope it doesn't go that way either.
@0.Andi.0 2 months ago
You've gained my respect Mr Framework CEO
@Rushil69420 2 months ago
The most personable CEO in consumer hardware (tech?)
@Char1es4k 2 months ago
As someone who strives to build autonomous robots, AI does play a crucial part in my academic research. But in most cases I'd use the GPU server from our institute to train the reinforcement learning policy. Good Linux support has somehow become the most important factor when I consider a new laptop, and I'm glad I chose the Framework 13.
@aygwm 2 months ago
More Nirav rants! More R2R! Happy framework owner here!
@MirorR3fl3ction 2 months ago
I love that Framework is using their channel to actually make these types of videos just exploring a topic that applies to their products
@vitalybanchenko7276 1 month ago
Thank you Nirav for this introduction. I just followed your steps, installed Ollama, and asked it: `what is framework laptop`. And it looks like my llama is from a parallel universe)) This is the response (shortened):
```
Framework is a brand of modular laptops that were designed to be highly customizable and upgradable. The idea behind the Framework is to create a platform where users can easily upgrade individual components of their laptop, rather than having to replace the entire device.
However, in March 2022, it was announced that the Framework would no longer manufacture new laptops under this model, citing constraints and a shift towards more traditional computer manufacturing.
Although the original Framework laptop is no longer available for purchase, some third-party companies have started to develop their own modular laptop designs inspired by Framework's concept. These newer products may offer similar upgradeability features, but with different approaches and designs.
It's worth noting that the Framework has shifted its focus towards developing custom hardware solutions for businesses and organizations, such as custom-built laptops and other devices, rather than consumer-facing products.
```
I would like our universes not to intersect in the future.
@flammablewater1755 2 months ago
I love the realness of this demo because the model doesn't know about Framework. Good demo.
@egarcia1360 2 months ago
Great intro to the topic! I would just add that LLMs don't have "knowledge" or reasoning per se, but rather try to generate the statistically best response to your inputs-I've heard it described as "autocorrect on steroids" which pretty much sums it up. That's why they sometimes give nonsensical or-even more dangerously-reasonable-sounding, confidently wrong answers. It's an important distinction that highlights why no one should rely on LLMs for anything mission-critical.
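The "autocomplete on steroids" description can be made concrete with a toy bigram model. This is nothing like a real transformer; it only illustrates the core statistical idea of predicting the most likely next token from observed text:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def predict_next(following, word):
    """Greedy 'autocomplete': return the statistically most likely next word."""
    options = following[word.lower()]
    return options.most_common(1)[0][0] if options else None

corpus = ("the laptop is repairable the laptop is upgradeable "
          "the laptop is repairable")
model = train_bigrams(corpus)
print(predict_next(model, "is"))      # 'repairable' (seen twice vs once)
print(predict_next(model, "laptop"))  # 'is'
```

A real LLM conditions on thousands of preceding tokens with learned weights rather than raw counts, but the output is still "the most plausible continuation", not retrieved facts, which is exactly why confident-sounding wrong answers happen.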
@wsippel 2 months ago
LLMs are also quite good at transforming information, that's where RAG (retrieval-augmented generation) and tools come into play. Instead of asking the model a question and relying on its own "knowledge", you can tell it to fetch an article from Wikipedia, pick the information you're looking for, and summarize that information for you. That usually works pretty well. You can also give LLMs access to tools such as calculators to enhance their capabilities. Also, one interesting reason LLMs tend to hallucinate is that they were trained to consider "I don't know" a bad answer, so they'll just make stuff up. Researchers from BuildIO recently found that simply telling the LLM in the system prompt not to hallucinate and just say if they don't know or can't do something actually works, and significantly reduces those "confidently wrong" answers.
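The retrieval half of RAG can be sketched without any machine learning at all. Here a crude word-overlap score stands in for a real embedding search, and the winning snippet is spliced into the prompt that would then go to the model; the snippets and scoring are purely illustrative:

```python
def score(query, doc):
    """Crude relevance: how many query words appear in the document."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split()))

def retrieve(query, docs):
    """Return the single best-matching snippet for the query."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query, docs):
    """Augment the user's question with retrieved context (the 'A' in RAG)."""
    context = retrieve(query, docs)
    return f"Using only this context:\n{context}\n\nAnswer: {query}"

snippets = [
    "Framework makes modular, repairable laptops.",
    "Ollama runs large language models locally.",
    "RGB keyboards are popular with gamers.",
]
print(build_prompt("what does framework make", snippets))
```

A real pipeline would fetch and chunk the article, embed the chunks, and retrieve by vector similarity, but the final prompt-assembly step looks much like this.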
@morganclemens 2 months ago
Framework's direct and open communication is so refreshing! I love your mission and your responsiveness to the community. I don't have a framework laptop (yet) but I really enjoy the content. Keep it up!
@johnofone 1 month ago
WOW!!! Great Video!!! I (quickly) read all of the comments and I want to repeat many/all of the positive ones!!!! So many of the positive comments said exactly what I was thinking!!!! (and the few neutral/negative comments are just a bit of noise to me...) Great timing too! I already own and love my Framework 13... and was considering getting a Framework 16 for running local models... so it was nice to see the actual performance!!!! Well done all around!!!! (here is where I want to repeat all the glowing praise for the CEO, the company, the community, etc... but I will stop... and let the other comments speak for me)
@dazperson8228 2 months ago
But Nirav, it can't automatically take screenshots and sell my information (for someone else) if it isn't running in the background all the time!
@israellewis5484 2 months ago
LOL XD.
@ivanjelenic5627 1 month ago
Ask it to write a script to sell your data! Problem solved.
@sapiosuicide1552 1 month ago
We need more companies like Framework laptop and more CEOs like Nirav. Great video, keep up the good work
@H-S. 2 months ago
That's actually a great summary of what a local LLM is, how to run one, and the state of support on consumer hardware (like VRAM size and Nvidia vs AMD). It would have been useful back when I started experimenting with this stuff myself. :)
@NikolayTach 1 month ago
Thanks for the demo Nirav! Robert Martin is a great pioneer when it comes to architectural modelling.
@krivimara 1 month ago
i just got llama running on my system after seeing this video. super interesting stuff. thanks for sharing.
@mikeatom 2 months ago
I didn't even know I could do stuff like this. Thank you very much!!!
@richtigmann1 2 months ago
Honestly this was 100% the best way to address this issue, showing the things that you could do with AI but not overhyping it. Showing the actual use cases (Running Local LLMS) is awesome and showing you how to do it is even better. I'm definitely excited to try out my own local LLMs too now
@skobkinru 2 months ago
The delay before the second response is more likely caused by the model being unloaded from VRAM because it wasn't called for some time. Ollama lets you tweak that, and you can have the model stay in VRAM indefinitely.
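For reference, the knob being described is Ollama's `keep_alive` setting, which can be passed per-request on the REST API (`-1` means keep the model loaded indefinitely; the default is around five minutes) or set globally with the `OLLAMA_KEEP_ALIVE` environment variable. This sketch only builds the request body; actually sending it assumes a local Ollama server on the default port:

```python
import json

# Request body for POST http://localhost:11434/api/generate
# keep_alive = -1 asks Ollama to keep the model resident indefinitely;
# a duration string like "30m" also works.
payload = {
    "model": "llama3.1",
    "prompt": "Why is the sky blue?",
    "stream": False,
    "keep_alive": -1,
}
body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate", data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```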
@kumarhiranya 2 months ago
Thanks for the video, and glad to see such a level-headed take on AI instead of just diving into the marketing hype like so many other companies! Also, from my personal experience, the reason it's taking so long to respond at 7:33 might be that Ollama aggressively frees up GPU resources and off-loads the model from the GPU when it doesn't get used for a few minutes. Any new prompt after that needs the model to be loaded back onto the GPU, which takes some time. I'm sure the screen capture is contributing to the delay as well, but I think it's primarily the model loading.
@hdtravel1 1 month ago
This was very interesting - thanks !! And if you don't have a Framework laptop - order one ASAP - best company and computers ever !!
@lucianobestia 2 months ago
It is a joy to listen to you! A smart and realistic viewpoint. Subscribed immediately. Thank you :-)
@Iniwid 2 months ago
Bahaha the ending was so funny, especially seeing Nirav smile more and more as he was going along with the upbeat script LOL
@kayseeday 1 month ago
Hi Framework! What I would really love is a better trackpad. Apple's Force Touch trackpad is unrivaled for me, and I wish you had a competitor, or at the very least allowed clicking anywhere on the pad.
@LukeDupin 2 months ago
I just got my AMD FW 13 a few weeks ago. Running AIs locally is a bit tricky, but it's way better than my XPS 13.
@nathanbanks2354 2 months ago
I really like ollama. I've been using it for a year to test all sorts of models, though usually I end up using Claude or OpenAI. Hopefully Cerebras will let us test Llama 405b soon, though a couple other slower providers have it already. (The newest o1 is pretty smart but extremely verbose.)
@ryanhamstra49 2 months ago
Got to meet the framework team at LTX last year, cool team!!
@luckylucy4078 1 month ago
A bit disappointed not to see image generation here as well. Projects like Fooocus have made it very easy to set up and use, even on Windows, and even with AMD cards.
@captainfracassful 25 days ago
You can run a model on RAM and CPU. I believe it's achievable with the current architecture of the Framework 13 (it depends on the CPU's cache size for inference and RAM size for storage), but for anything more technical than chatbot applications, using anything other than a dedicated GPU will require technical knowledge of programming/DevOps. The hype is just there to keep Nvidia's stock price afloat after the crypto bubble burst.
@pratikbin 2 months ago
Patel, my man. Love it. Make more of these.
@rationalistfaith 2 months ago
You’re fighting the good fight 🤲
@LinuxRenaissance 2 months ago
This is the way. We should all teach people how to use software instead of hyping up stuff without context. Good stuff Framework people, good stuff!
@LinuxRenaissance 2 months ago
A bit off-topic, but I am so pumped for your RISC-V experiment. If I can afford it (price is still not announced?), I am getting one as soon as you start selling it.
@TheAuraEngineer 2 months ago
This changed my perspective on these "AI computers" a lot. It seems like it could be huge for smaller companies looking to make their own AI platforms that people can use locally with their computers' files too.
@RTSun-lx7ee 2 months ago
I bet this CEO knows much more about AI than some random tech company CEO. He talks like a real, practical dude, which makes him sound genuinely trustworthy. I will consider a Framework laptop next time.
@willnilges8131 2 months ago
Wow, I'm honestly impressed.
@rajat0610 2 months ago
Although it's not the most lucrative market and the target audience will be very small in number, I really wish Framework would expand to India.
@MarkEichin 1 month ago
15:18 when chatGPT first came out, the slow "typing speed" output was a gimmick to make the output seem more effortful and thus something to take more seriously. (The delay up to the first output was actual processing.) Pretty sure it's *still* a gimmick - otherwise most laptops would be painfully slow in their output - so why do you (or anyone) put up with it, let alone praise how "fast" it is?
@tapplek 1 month ago
I thought they made the free version slow so they can upsell you a faster one for the low low price of $x per month
@T0DD 2 months ago
Wish this video went more into the controversial aspects of LLMs regarding training sets, but otherwise I think this was a very level-headed video, especially coming from the CEO.
@TechOtakuYT 2 months ago
You know what would be good content for this channel? Interviewing engineers and asking them what they love doing at Framework.
@steamerSama 2 months ago
Closing shot with a friendly smile. 15:43 😂
@microcolonel 2 months ago
I like running whisper for speech to text on my machines. Runs well even on my phone, runs even better on my workstation. If the accelerators can be hooked up for that use case under linux, I'll be happy to see it.
@ThunderGod9182 2 months ago
I'm sick of AI at this point. I want to see a gaming chassis from you guys, and more GPU options for the Framework.
@aeghohloechu5022 2 months ago
Gaming laptops are just as much of a meme as AI is, let's be real.
@ThunderGod9182 2 months ago
@aeghohloechu5022 AI is 100% more of a meme at this point than a gaming laptop. Framework would already have my money if they had offered more GPU options.
@T0DD 2 months ago
I mean more options in general are always better, but what would a gaming chassis have that the FW16 doesn't already have?
@StevoDesign 2 months ago
@@T0DD Racing stripes
@ForgotMyPasswd000 2 months ago
The framework 16 seems to be a good option, with its support for dGPUs that don't have terrible cooling. What more do you need?
@jeremythesmith 2 months ago
This was a great video, cool info, thanks.
@midnqp 2 months ago
Nirav sounds like a fantastic person!
@wsippel 2 months ago
The main reason the second llama 3.1 demo took a while to get going is that Ollama is designed to run as a system service, so that it's always available to other software like Open WebUI or Continue. To make sure it doesn't constantly eat all your memory just waiting for a client to connect, it unloads the model after a short period of inactivity, and then loads it again once a request comes in. Loading models takes a while.
@miles_tails0511 2 months ago
12:47 let’s pray for the day when you can insert a MacBook picture and have it identified as framework ⚙️
@Tancred423 2 months ago
I love that you don't just jump on the AI hype train to get another selling point but that you can acknowledge the noise around it and focus on what's real 👏 Also "i use ollama from a group called ollama and they make ollama" was pretty funny to me😂
@ethlanrete6736 11 days ago
the end is epic tho.😂
@Joshua-Studies 2 months ago
Been loving my Framework 16 with the Bluefin community build. Using Ollama-Web has been helping with programming and much more. It's been fun to play around with.
@kesslerdupont6023 2 months ago
Right to repair, in the context of AI, means AI that the end user can modify over time to ensure longevity of functionality, versus cloud hardware that can change on you whenever.
@b130610 2 months ago
If you are running Ollama on Linux, make sure to get the "rocm" version so it utilizes the GPU. By default it runs in CPU mode, which is quite a bit slower than running on the GPU, though that can be useful if you have a really large memory pool for your CPU.
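A quick way to check which build you ended up with is to load a model and ask Ollama where it placed it; the PROCESSOR column of `ollama ps` reports the GPU/CPU split (exact output format varies by version):

```shell
# One-shot prompt to force the model to load
ollama run llama3.1 "hello"

# Shows loaded models; look for "100% GPU" (ROCm/CUDA) vs "100% CPU"
ollama ps
```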
@VladyslavKudlai 2 months ago
Hi, thank you for such an easy-to-understand description.
@grendel_eoten 2 months ago
WE LOVE YOU NIRAV
@EricLidiak 2 months ago
I'd be interested in seeing the updated AMD CPU / GPU. AI or not.
@milesfarber 2 months ago
I don't even care about AI, I just want a Ryzen HX 370 Framework laptop because it's the first x86 processor ever made that absolutely destroys Apple Silicon.
@joaomanoellima5947 2 months ago
Really cool seeing the CEO like this. Looks like a great company.
@EugeneBuvard 2 months ago
The battery savings on Lunar Lake look promising. Putting aside the AI stuff, any plan to create a Framework laptop with Lunar Lake and Linux?
@henrikoldcorn 2 months ago
They’re only just releasing the previous gen, so expect it to take a while if they do.
@hyperspeed1313 2 months ago
The downside of Lunar Lake is the RAM is soldered on the CPU package meaning it's completely non-upgradeable. I'd expect Lunar Lake to be skipped and Arrow Lake to be the next Intel generation on Framework mainboards.
@novantha1 2 months ago
@hyperspeed1313 I'm not sure it's as simple as "skipping Lunar Lake". Lunar Lake, AMD Strix Point, Snapdragon X Elite, most ARM SoCs, and most RISC-V SoCs that you would use in this case all use soldered RAM. The class of SoC that went into Lunar Lake and Strix Point (Intel's and AMD's mobile SoCs, respectively) tends to have more carefully binned CPUs, integrated GPUs of decent performance, etc. The future successors to those chips will probably have soldered RAM as well. Arrow Lake isn't really suited to a mobile format, and even if it's okay for workstation laptops / desktop replacements, it'll depend on having a dedicated GPU, meaning your Framework would essentially be sacrificing battery life (by having a dGPU) for upgradable RAM. Some consumers might make that tradeoff (and if they do, it'll work out really well!), but a lot of consumers want laptops to be portable with great battery life, which is what the mobile SoCs from AMD, Intel, and now Qualcomm are keyed around. So, is it really a great strategy to not give Framework customers access to best-in-class battery life? Now, I don't want that outcome personally. I love upgradeable RAM. But Framework is beholden to the SoCs available off the shelf in the wider market, and I'm not sure that failing to use what their competitors are using is really the best approach. I think what's more important than upgradable RAM is being able to upgrade the SoC, because most consumers, even when buying a highly upgradable desktop, will just buy RAM with their CPU and never upgrade it. As long as Framework still lets you swap out the mainboard, non-upgradable RAM isn't necessarily the worst outcome possible.
@EugeneBuvard 2 months ago
@hyperspeed1313 Even with the RAM not being upgradable, it would make even more sense to buy a Framework with it. For a laptop, I would take that trade-off for the better battery life.
@nowaynoway1798 2 months ago
@EugeneBuvard tf? That makes no sense at all. A Framework laptop is supposed to be flexible and upgradable, not the other way around.
@Tynted 2 months ago
Anyone here used an LLM locally on a FW13 AMD 7640U? Curious what the performance on that is like (ideally with 24GB or more memory installed). Secondly, Nirav is such a cool guy! I'd also love more videos from him on any topic! We're really lucky he and his colleagues decided to build a company like this when they probably could've done something much less open and more lucrative.
@lastrae8129 2 months ago
You should be able to run the models he showed in the video at above reading speed on pretty much any hardware. If you don't fancy using the terminal, you can also host your own frontend such as Open WebUI or LibreChat.
@kevinkarjono737 2 months ago
I just ran llama3.1 through ollama like Nirav on my FW13 with a 7640U and 24GB of DDR5 5600 memory and windows 11. Seems to generate faster than I can read, obv not as fast as the 7700S in the video but it seems rather usable. Keep in mind I'm too lazy to reach for my charger so I've been running this on battery (~30-40% too). Loading the 3.1 model seems to use about 6GB of RAM. I had a bunch of other things running so my RAM usage went up to 91% so I would suggest 32GB or even way more if you're going to run other programs other than the LLM at the same time. I also have the VRAM of the iGPU set manually to 4GB in the bios so my total system memory is limited to 20GB. I could do more detailed hardware info testing, but I used task manager since it was easier. From task manager I see that it's using around 66% of the CPU and peaking at around 88%. Not sure if task manager is misreporting the iGPU but it showed only about 8% usage. Overall, it's not bad but RAM might be a concern if you are running other programs. Side note: My battery just dropped below 30% and it seems to be generating a tiny bit slower, but still a bit faster than I can read. CPU usage dropped to 40% average.
@tivrusky4 2 months ago
Can't speak to the 7640 in particular, but I'm using Llamafile on a 7840 with 32GB and it generally works well w anything 13B and below. That's around where you start getting perceptible latency for general use, though models around 7B will also see this with larger prompts like character cards. Would also echo the bits about RAM usage when running other stuff. This works fine on its own but not so much with my tabbed browsing problem.
@lastrae8129 2 months ago
@tivrusky4 You should give llama.cpp or one of its wrappers like Ollama a shot; quantized models lose very little quality at ~6 or 8 bpw and are noticeably lighter on RAM.
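The RAM savings being described are mostly simple arithmetic, parameter count times bits per weight, which is easy to sanity-check. The figures ignore KV cache and runtime overhead, so treat them as lower bounds:

```python
def model_memory_gb(params_billions, bits_per_weight):
    """Approximate weight storage for a model: params * bits / 8, in GB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# An 8B model at various precisions (weights only, no KV cache or overhead):
for label, bpw in [("fp16", 16), ("q8", 8), ("q4", 4)]:
    print(f"{label}: {model_memory_gb(8, bpw):.1f} GB")
# fp16: 16.0 GB, q8: 8.0 GB, q4: 4.0 GB -- which is why a ~4-5 GB
# quantized download fits where full precision would not
```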
@tivrusky4 2 months ago
@@lastrae8129 Llamafile is built around Llama.cpp, and I already am using quantized weights
@MadMathMike 2 months ago
I think it would have been great to hear your perspective on how important (or unimportant) it could be to have NPUs integrated into the CPU. This was such a great demonstration of where we are currently at with LLMs and consumer-grade hardware, and I would like to understand how new chips like AMD's AI 300 series "moves the needle" (forgive the corpo speak 🤦‍♀️).
@MachineYearning 2 months ago
I have no intention of buying this product but the company gained ++respect with this video. Nice representative.
@SkywardKing 2 months ago
We need that Framework gaming handheld! Ultra modularity! Running SteamOS. Sounds beautiful. The idea of being able to upgrade the board on my gaming handheld would be awesome!!!
@singular9 2 months ago
As someone who will eventually need to upgrade my aging and falling apart Surface book 2 (what a beast lasting this long), framework is where I am going.
@lucadrogon 2 months ago
This is the right approach. I want a powerful machine that can run AI applications. On my own PC (not a Framework), I cannot run powerful models due to performance limitations. If you can add a dedicated, powerful AI processor (an NPU) that can handle AI tasks easily (similar to having a separate GPU), that would be even better. What I don't care about is having a Copilot button on the keyboard.
@ChocolateSax 2 months ago
Loved the video. Easy to follow
@albuslee4831 2 months ago
In my opinion, Nirav Patel has the potential to become a similar kind of tech personality in the social media space. It won't be easy at first, since Nirav doesn't self-edit his comments well, but with the right producer (a host or editor of sorts) he could, over time, be trained to make his own conversations more interesting. So far, Nirav's genuinely interesting comments stop being interesting because they get too long too quickly. It's a self-editing problem, a known problem for hyper-nerds: talking too long and not understanding how good conversations work. This can be mitigated by putting a host, a conversation moderator, an interviewer in the video who knows how to make a conversation interesting, who knows when to stop Nirav from talking too much and too long (which pulls the audience's attention away), and who can give the video a good rhythm and flow so watchers stay to the end. I bet there are many kids who look up to Nirav, because what he has proved with his products is admirable. The presence of an interviewer who leads the conversation is the main reason Nirav's videos do much better when he appears on other KZbin channels, but not as much on the Framework KZbin channel. Nirav is interesting, but the videos put out by Framework are boring. Good material wasted. The approach is wrong, an outdated approach similar to the old big tech brands. A good reference for a better approach to social media is the case of Mark Cerny (the chief architect of the PlayStation department). He is a legitimate celebrity among engineering nerds at this point, and that is not just because he is from Sony; it is an intentional marketing strategy, the way they built Mark Cerny's social media presence in a very controlled, selective way, not wasting any of the viewers' watch time.
Not as great as the case of Mark Cerny, but the way Nintendo builds Shigeru Miyamoto's social media presence is also a great approach. Nintendo's way is especially interesting because their approach to their audience has changed over the last 30 years, and also because it's much easier for nerds to hate big brands than to like them and look up to them. The evidence of Nintendo's success is that many Nintendo fans hate Nintendo as a company and hate many of its policies, but most of them like Shigeru Miyamoto. I hope Framework laptops survive as an established brand in the long run, not as a forgotten hidden gem from the past, a failed startup that held up a good philosophy but failed by not attracting the audience well enough.
@Napert
@Napert 2 months ago
11:05 I'm surprised it didn't just make up all of those results like it tends to do for me
@rimbaud0000
@rimbaud0000 2 months ago
Thanks so much for not wasting your efforts on AI, which has not proved itself.
@captainpumpkinhead1512
@captainpumpkinhead1512 2 months ago
Some people might be scared off by a terminal UI. There is also a frontend called Open WebUI (previously Ollama WebUI) which connects easily to Ollama and is really easy to use.
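For the curious, here is a minimal sketch of what these frontends do underneath: they talk to the local Ollama server over HTTP. The endpoint and payload fields below are Ollama's documented `/api/generate` defaults; the model name is just an example and assumes you have already pulled it with `ollama pull`.

```python
import json
import urllib.request

# Ollama's default local endpoint; assumes `ollama serve` is running.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Payload format for Ollama's /api/generate; stream=False returns one JSON reply.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # POST the prompt and return the generated text from the JSON response.
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running server and a pulled model, e.g. llama3.1:8b):
#   print(ask("llama3.1:8b", "Why is the sky blue? One sentence."))
```

Frontends like Open WebUI are essentially richer clients wrapped around this same HTTP API.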
@ojassarup258
@ojassarup258 2 months ago
Nice presentation, but honestly I still don't know why I would use any of the things covered. Maybe the only legit use cases I can think of are feeding it some documents so it parses them locally and helps me summarise or combine the information in them, or giving it a folder of images and asking it to classify things or pick out images of a cat, etc. But I think for consumers there needs to be a proper UI for all of this.
@dragomirivanov7342
@dragomirivanov7342 2 months ago
The GPU is cool and all, but what about the NPU in Ryzen? The GPU is limited to its VRAM, but the NPU can potentially use your whole RAM, 96GB or so. It would be awesome to be able to run Llama 3.1 70B on a Ryzen.
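A back-of-the-envelope sketch of why system RAM matters here: a model's weights-only footprint is roughly parameter count times bytes per weight, so a 4-bit quantized 70B model needs around 35 GB before KV cache and runtime overhead. That rules out typical 8-16 GB laptop VRAM but fits in 96 GB of system RAM. These are rough estimates, not benchmarks.

```python
def model_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    # Weights-only footprint; real usage adds KV cache and runtime overhead.
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(model_memory_gb(70, 4))  # 35.0 -- too big for 8-16 GB of laptop VRAM
print(model_memory_gb(8, 4))   # 4.0  -- why 8B models are the laptop sweet spot
```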
@BriefNerdOriginal
@BriefNerdOriginal 2 months ago
Similarly to SEO, I imagine Framework will also need to be fairly present in the training datasets for LLMs 🤓
@TommyLikeTom
@TommyLikeTom 2 months ago
I don't know if you guys have Codium or Copilot, but it already feels like I have an AI PC. They are similar, but Codium can automatically access your whole codebase without any extra setup. Not sure if Copilot can do that at all.
@Techonsapevole
@Techonsapevole 2 months ago
Cool, will you build PCs with the AMD HX 370 / 365 or Intel Lunar Lake?
@Containerrd
@Containerrd 2 months ago
I pray for the day Framework officially launches in India; love the mission statement, the people, and the product ❤
@nowaynoway1798
@nowaynoway1798 2 months ago
I believe that's gonna take a loooong time, considering the import tax and stuff with the current Indian govt
@codedusting
@codedusting 2 months ago
Hope Framework sells in India sometime in the future, or ships to India with better post-sale support.
@eerrcc
@eerrcc 2 months ago
The deciding factor for my next laptop is being able to run Ollama / LM Studio: a decent/strong NPU and >32 GB of RAM. Not going for one with a GPU.
@wenenwo5341
@wenenwo5341 2 months ago
Hey, laptops can usually last 10+ years already just for work, but can you make a Framework 16 designed for gaming? Upgrades are more necessary for playing newer games. Make it like an old Alienware, so it's somewhat bulky; that would leave more room for upgrades and cooling and would be great overall. Even 20 years ago people could live with old ThinkPads when they weren't really thin and lightweight.
@shApYT
@shApYT 2 months ago
Real use cases? Something that can run SDXL Turbo and Mistral Nemo in real time. Unless it can do that, it's not a real AI machine, and none of the AI-branded PCs can. With real-time generation you can actually steer it with your own artistic direction and creativity instead of being left with the model churning out slop. LLMs are useful for pasting in docs and getting simple answers out, like how to use the ffmpeg CLI, or for inpainting in Krita with Flux or Stable Diffusion.
@Winnetou17
@Winnetou17 2 months ago
One thing I'd add is that using a model is relatively easy, hence why it can be done on a laptop even without a beefy GPU. Training a model (aka constructing it), on the other hand, is much more resource-intensive. So if you want a model based on your own data (because, say, you don't trust the ethics of the available large models, including the open ones), you'll only be able to have a pretty basic one, or at best a decent one that's very specialized. Not to mention that you usually need TONS of data for the training.

This is why I'm still waiting on the AI. Until I can run it properly locally (I guess with the next generation of laptop GPUs; one with 16 GB of VRAM, say a successor to the recently released 7800M, might be good), and until I can either a) train it with my own data and/or b) have well-known, well-audited public models where exactly what they were trained on is documented, and where that is all public and/or licensed for AI training (or, alternatively, have not the model itself but the data).
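To put rough numbers on the inference-vs-training gap the comment above describes: a common rule of thumb for mixed-precision Adam training is about 16 bytes of state per parameter (fp16 weights and gradients plus fp32 master weights and two optimizer moments), versus about 2 bytes per parameter just to run the model in fp16; training also needs activation memory and far more compute on top. A sketch using those rule-of-thumb figures (estimates, not measurements):

```python
def inference_gb(params_billions: float) -> float:
    # fp16 weights only: ~2 bytes per parameter
    return params_billions * 2.0

def training_gb(params_billions: float) -> float:
    # Mixed-precision Adam: fp16 weights + grads (2 + 2) plus fp32 master
    # weights and two optimizer moments (4 + 4 + 4) = ~16 bytes/parameter,
    # before any activation memory.
    return params_billions * 16.0

print(inference_gb(8.0))  # 16.0 GB to merely run an 8B model unquantized
print(training_gb(8.0))   # 128.0 GB of state before activations
```

The roughly 8x gap in memory (and a far larger gap in compute) is why running models locally is feasible on a laptop while training them from scratch is not.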
@MrDoBo95
@MrDoBo95 2 months ago
I have my home server with a 2600X and an RX 6600 running Ollama. Mistral Nemo runs great, and some Stable Diffusion branches also work, so I'm happy. It's integrated into Nextcloud Assist, so why would I need anything on my FW 13? Also, my experience with Mistral is definitely better than with Llama 3.1. I was also surprised how easily ROCm finally works, and Fedora Server is pretty awesome.
@doesthingswithcomputers
@doesthingswithcomputers 7 days ago
You can set HSA_OVERRIDE_GFX_VERSION=11.0.0 on the latest ROCm to use the 7700S on the Framework 16.
@MrJannieboy
@MrJannieboy 2 months ago
Loved this video!!!
@PaintingPaul
@PaintingPaul 2 months ago
I love this video ❤ I hadn't heard much about the models you can run locally; I'll definitely try them out on my Framework 16 :D
PS: do you need a dedicated GPU to run them, or is the iGPU with enough regular RAM appropriate as well?
@lordkekz4
@lordkekz4 2 months ago
If you want a good experience, you usually need an NPU or dGPU. For now, get the dGPU if you can: it has better peak performance than the integrated NPUs and probably also better software support (since NPUs are still very new).
@garrettrinquest1605
@garrettrinquest1605 2 months ago
You can run most of these models on the CPU; you just need enough RAM. Like he said, you need about 8GB of RAM dedicated exclusively to the model. It will be slower than on a GPU, but it's often still usable. Try it out. The worst case scenario is that it doesn't work and you have to uninstall it.
@PaintingPaul
@PaintingPaul 2 months ago
Alright, thanks!
@jupiterbjy
@jupiterbjy 2 months ago
Wish to see an ARM Framework! With SVE (Q4_0_8_8) or i8mm (Q4_0_4_8) it should run at decent speed.
@shaunakdsilva
@shaunakdsilva 2 months ago
The creases on his forehead tell me that he deeply cares about everything he does. This man has only added value, and with sincerity, through Framework and his YouTube channel. I hope he continues to maintain the same integrity and does not sell his soul to capitalism. Hey Nirav, you're my role model; I wish to meet you and work with Framework someday, and hopefully bring Framework products to India.
@imperatorpalpatine3978
@imperatorpalpatine3978 2 months ago
This talk about AI leads me to a question: since both x86 Copilot+ platforms, Lunar Lake and Ryzen AI, do not feature upgradable RAM, will there still be motherboards for these platforms? I'd also be interested in knowing whether or not there are plans for a dedicated convertible (maybe 14-15in?) or a convertible mod.