@TiffInTech On the chat screen, you will see a markdown option right between plaintext and monospace.
@OnADarkNote · 3 months ago
For model selection, Q4 or Q5 is typically the best option for running with fewer errors on low- to mid-range computers. Essentially, any quantized model is a simplified or "LOBOTOMIZED" version of the original. High-end computers with sufficient VRAM should handle Q8 models without problems. The relevance of Q3 models is questionable, but the video was very good. Keep it up.
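The size/quality tradeoff the comment describes can be sketched with a rough back-of-envelope size estimate. The `model_size_gb` helper and the bits-per-weight figures below are illustrative assumptions (real GGUF files add per-block scales and metadata), not exact numbers for any specific model:

```python
# Rough memory-footprint estimate for quantized LLMs.
# The bits-per-weight values are approximate figures for common quant levels;
# treat every number here as a ballpark, not a file-size guarantee.

def model_size_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate RAM/disk size in GB for a model with n_params_b billion weights."""
    total_bytes = n_params_b * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

for name, bits in [("Q3", 3.5), ("Q4", 4.5), ("Q5", 5.5), ("Q8", 8.5), ("FP16", 16.0)]:
    print(f"13B @ {name}: ~{model_size_gb(13, bits):.1f} GB")
```

This is why a Q4/Q5 13B model fits comfortably in 16 GB of RAM while the FP16 original does not.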
@Jibril_Abdulkadir · 3 months ago
I have heard about the Snapdragon chip and its AI upgrade; it sounds very exciting. Great video, Tiff!!
@Eric-v8t · 3 months ago
I tried it on my HP i7. It is a bit slow, but still faster than a human developer. Thanks for the video 👏
@nanomat3604 · 2 months ago
Hi Tiff. How long did it take? I can already run a local LLM for development on my 4-year-old laptop, but it is really slow. Thanks in advance.
@jaldaadithya8655 · 2 months ago
I think the LLMs are not using NPUs. Am I right?
@LittleBoobsLover · 8 days ago
Yeah, they are developing it. They want to add it in the future; for now it runs only on the CPU. Strange, because Qualcomm showed an app at one booth with a Llama LLM working on the NPU. I don't know if that is available to download somewhere.
@Topgunchannel · 3 months ago
Hi Tiffany, from Japan! I can't imagine that! I want to open this world…
@themax2go · 3 months ago
You want to open the world??
@keerthes · 3 months ago
Does the LCD really hold battery longer than OLED screens, with 8 hours? Can you check it?
@themax2go · 3 months ago
This thing is $1k... you can get a Mac M1 with better specs, including better battery life (mine's 5 years old and still lasts all day), for half the price (~$500 now).
@akinoz · 3 months ago
@themax2go The lie is in the premises.
@HumanName-zv2jv · 2 months ago
OLED only saves power when you want contrast between lots of black screen area and very bright other parts, so efficiency entirely depends on how you use it. Anyway, a 70 Wh battery is common for these laptops. If you run an LLM, consumption would likely grow above 70 W and the battery would probably last only half an hour, because resistive heating increases significantly when li-ion batteries are discharging at more than 1C, and these are likely li-ion.
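The runtime estimate above is simple arithmetic: watt-hours divided by watts. A minimal sketch, where the doubled effective draw standing in for high-rate losses is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope battery runtime under LLM load.
# Numbers from the comment above: a 70 Wh pack and ~70 W sustained draw,
# with extra internal-resistance (I^2 * R) losses once discharge exceeds 1C.

def runtime_hours(capacity_wh: float, draw_w: float) -> float:
    """Idealized runtime: hours until the pack is empty at a constant draw."""
    return capacity_wh / draw_w

print(runtime_hours(70, 70))    # 1.0 h at exactly 1C with no losses
print(runtime_hours(70, 140))   # 0.5 h if losses and peaks double the effective draw
```

So the half-hour figure corresponds to the effective draw roughly doubling once heating losses and power spikes are included.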
@Mdnumaanuddin999 · 3 months ago
Hey Tiff, please make a video on a DevOps career and whether I should go for it or not. I've been requesting this for 3 weeks, please reply 😢
@themax2go · 3 months ago
Doesn't matter, just go for it, learn as much as possible on the job and off the job, and then switch tracks... all the dev stuff will get more and more automated, so keep learning in your track quickly and then learn how to use your knowledge to innovate.
@ravirajchilka · 3 months ago
When I run the libtorch debug build it takes around 1 min 30 seconds to get the output with a simple 3x3 tensor, and the release build takes around 30 seconds. Does anybody know why it takes so long to get the output?
@themax2go · 3 months ago
On what system?
@ravirajchilka · 3 months ago
@themax2go Windows, cuDNN is enabled
@burnedbrowncoal7536 · 3 months ago
Ah, I am in love with those light blue eyes and that smile 🙂
@moisesespiritosanto2195 · 3 months ago
Hi, I'm from São Paulo! Great! Amazing code!!! A BIG, STRONG BEAR HUG!
@Muhammadkamran-cg5op · 3 months ago
You have guided us very well. Very good ❤❤❤❤
@anugrahagomjan6856 · 2 months ago
You're looking so gorgeous
@frostgodx · 6 days ago
Put it away, sicko, wtf is your problem
@AvinashSingh-vj3rk · 3 months ago
Nice
@TomNook. · 3 months ago
They speak Cantonese in Hong Kong, not Mandarin. For now, anyway.
@marekkroplewski6760 · 2 months ago
Misleading video. Noobs get attracted to a "new laptop for AI", when in reality she is using a 13B model, quantized, taking around 7 GB of RAM as seen in LM Studio. And she cut the token-generation footage, so one cannot assess how good this is at LLM generation. Please test it properly: tokens per second, RAM usage, CPU (NPU?) usage. Other than that, this is just nice pictures, and misleading.
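The tokens-per-second measurement the comment asks for is easy to sketch. The harness below is a generic sexample: `fake_generate` is a hypothetical stand-in so the code runs without a model installed; in practice you would swap in a real call to whatever local-LLM client you use (LM Studio, llama.cpp bindings, etc.):

```python
# A minimal tokens-per-second benchmark harness.
import time

def tokens_per_second(generate, prompt: str) -> float:
    """Time one generation call and report throughput in tokens/s."""
    start = time.perf_counter()
    tokens = generate(prompt)          # expected to return a list of tokens
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

# Stub generator (hypothetical): sleeps briefly and echoes the prompt 10 times,
# so the harness is runnable without any model installed.
def fake_generate(prompt: str) -> list:
    time.sleep(0.05)
    return prompt.split() * 10

rate = tokens_per_second(fake_generate, "benchmark this local model please")
print(f"{rate:.0f} tokens/s")
```

Reporting this number alongside RAM and CPU/NPU usage would make a review like the one in the video verifiable.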
@HumanName-zv2jv · 2 months ago
Yup. $$$ :) $$$ :) $$$ :) My Erying i7-1360P with onboard CPU and no GPU, which I got for $130, runs an 8B Q6 model faster than I can read. More slowly, it ran a 28B, and even more slowly I managed to run a 70B. However, I am looking up these videos because of the supposedly MORE THAN DOUBLE memory bandwidth of the Snapdragon X Elite: mine is said to have less than 65 GB/s, but these are said to have more than 130 GB/s. Yet all I find is tiny models, with dishonest videos like this one. With that memory bandwidth I would expect to see people getting usable speed out of highly quantized 70B models.
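The reason memory bandwidth matters here: during single-stream decoding, every generated token has to stream essentially all of the model's weights through memory once, so bandwidth divided by model size gives a rough speed ceiling. A sketch using the bandwidth figures quoted above; the 40 GB size for a heavily quantized 70B model is an illustrative assumption:

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound LLM:
# each generated token reads every weight once, so
# tokens/s <= bandwidth / model_size.

def max_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Assume a 70B model quantized down to roughly 40 GB.
print(max_tokens_per_s(65, 40))    # older laptop's quoted bandwidth
print(max_tokens_per_s(130, 40))   # Snapdragon X Elite's quoted bandwidth
```

Doubling the bandwidth doubles the ceiling, which is why the commenter expected to see usable 70B speeds demonstrated.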
@HumanName-zv2jv · 2 months ago
Not trying to sound hateful, but these reviews are exactly what we don't need: not what you and I were looking for. I'm in no rush though; these processors don't support Linux too well yet.
@Zbk-r6m · 3 months ago
Are you single? You should talk about that too. People are curious :)
@frostgodx · 6 days ago
Are you from India?
@Nitin-wo5xp · 3 months ago
Sorry I am late 😅❤🎉
@themax2go · 3 months ago
Overpriced. You can buy two MacBook Air M1s for that amount, and they have better battery life; my 5-year-old MacBook Air M1 with 16 GB RAM lasts ~15 h.