This is exactly how tutorials should be! I’ve wasted so much valuable time on other YouTube channels where you have to suffer through 20 minutes of mindless rambling just to get 2 minutes of actual information!
@docfish7552 · 1 year ago
Andy: Always enjoy your videos. Thanks!
@oleskool6259 · 11 months ago
Thanks, you really covered things nobody else did, like tokens etc. I'm gonna give this thing a whirl. Thanks again!
@KarlAxelZander · 11 months ago
Great tutorial, thanks! A great non-command-line, open-source option for local LLMs. A good next step after this tutorial is trying to load something large like Mixtral 8x7B that feels like it would almost work on your PC's specs: find a slightly smaller quantized version of the model on Huggingface (as a .GGUF file), manually import it into Jan, and give it a go. That was very easy as well 👍👍
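For anyone who wants to script the "grab a smaller quantized GGUF" step this commenter describes, here is a minimal sketch using the huggingface_hub Python client. The repo id, filename, and download folder are illustrative assumptions rather than values from the video; after downloading you would still point Jan's manual-import flow (or its data folder) at the file.

```python
# Minimal sketch: fetch one quantized GGUF from Hugging Face for manual import into Jan.
# The repo id, filename, and download folder are examples -- adjust to your setup.
from pathlib import Path

from huggingface_hub import hf_hub_download

repo_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF"   # example repo with quantized GGUFs
filename = "mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf"    # a smaller "Q" quantization variant

# Download to a local folder, then use Jan's import flow (or data folder) to add it.
local_dir = Path.home() / "jan-imports"
path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=str(local_dir))
print(f"GGUF saved to: {path}")
```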
@hanasafayi · 10 months ago
I wish you would give us much more information about each model.
@handler007 · 11 months ago
Can I upload a PDF as a reference for the AI's responses? If not, then Jan is basically useless to me.
@thatguyalex2835 · 10 months ago
Yeah, you can load a PDF in there, but you need a fast computer for that. :)
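If you want to try the PDF-as-reference idea without relying on a built-in feature, one hedged workaround is to extract the PDF text yourself and send it to Jan's local OpenAI-compatible server. The URL, port, endpoint path, and model id below are assumptions (check Jan's Local API Server settings for the real values), and pypdf plus requests is just one convenient pairing.

```python
# Hedged workaround sketch: extract a PDF's text and ask a local Jan model about it
# via Jan's OpenAI-compatible server. The URL, port, and model id are assumptions --
# check Jan's Local API Server settings for the values on your machine.
import requests
from pypdf import PdfReader

# Pull plain text from the first few pages to keep the prompt small.
reader = PdfReader("meeting-notes.pdf")
pages = list(reader.pages)[:5]
text = "\n".join(page.extract_text() or "" for page in pages)

resp = requests.post(
    "http://localhost:1337/v1/chat/completions",  # assumed local Jan endpoint
    json={
        "model": "mistral-ins-7b-q4",             # assumed model id as listed in Jan
        "messages": [
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Document:\n{text}\n\nQuestion: Give me a short recap."},
        ],
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```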
@RandalTurnerMKULTRA · 6 months ago
I downloaded Jan AI and installed a few models, then tried to have it write using ChatGPT, and it says I need to pay?
@samstep284 · 10 months ago
How is it different from LM Studio?
@user-xk5cs6fo3c · 8 months ago
What about the API problem?
@ally3854 · 9 months ago
Is downloading it to my work laptop an issue? I'd find it very useful whenever I need to create a recap of our team meeting or any short meeting. Appreciate your advice, Andy.
@tracyrose2749 · 7 months ago
I have a robust 3090 video card with 24GB of VRAM... and over 128GB of regular RAM, and it was still chewing up CPU time. I downloaded the Nvidia SDK and for whatever reason it now uses only the video card's RAM, like it should. Mistral is the best, as you said. Can we prove Jan isn't sending any of our data to China or anywhere else? Who can verify this? On my machine, Mistral runs as fast as ChatGPT.
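On the "who can verify this?" question, one rough do-it-yourself check is to watch which network connections Jan's processes actually open while you chat with a local model. The sketch below uses psutil; the process names are guesses (Jan's backend has shipped under names like "nitro" and "cortex"), so adjust them to whatever you see in your task manager. An offline model should show nothing beyond localhost traffic.

```python
# Hedged sketch: list live network connections opened by Jan-related processes.
# Process names below are assumptions; run this while chatting with a local model.
import psutil

SUSPECT_NAMES = ("jan", "nitro", "cortex")   # assumed process names; adjust as needed

for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if not any(s in name for s in SUSPECT_NAMES):
        continue
    try:
        conns = proc.connections(kind="inet")  # may need admin rights on some OSes
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    for c in conns:
        remote = f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else "(no remote)"
        print(f"{proc.info['name']} (pid {proc.info['pid']}): {c.status} -> {remote}")
```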