Run Deepseek Locally for Free!

  Рет қаралды 74,560

Crosstalk Solutions

Crosstalk Solutions

Күн бұрын

Пікірлер: 257
@DanielZuluagaVidaenAntioquia
@DanielZuluagaVidaenAntioquia 6 күн бұрын
This truly is the best video for getting up and running locally your own AI Chat Bot. Thanks a lot, it's amazing!!!
@edoneill6701
@edoneill6701 4 күн бұрын
100% agree
@MikeFaucher
@MikeFaucher 7 күн бұрын
Excellent tutorial. This is the most useful and detailed video I have seen in a while. Great job!
@neilyoung6671
@neilyoung6671 5 күн бұрын
Perfect tutorial. At the point. I had some bumps but finally got it to work on Ubuntu 20.04. Thanks for sharing
@fazlayelahi29
@fazlayelahi29 5 күн бұрын
I love the way he talks and teach... Its very very helpful...!! ❤
@robert.glassart
@robert.glassart 6 күн бұрын
Thanks!
@20648527
@20648527 7 күн бұрын
Excellent! Amazing to the detail tutorial. Keep it up 👍🏻
@TEDOTENNIS
@TEDOTENNIS 7 күн бұрын
Great video! Thanks for taking the time to create it.
@jaydmorales23
@jaydmorales23 7 күн бұрын
This is super cool! Instructions on how to uninstall all of this could be helpful as well
@enginar69
@enginar69 6 күн бұрын
format
@henryuta
@henryuta 3 сағат бұрын
Just delete the docker image and it's all gone
@CahyoWidokoLaksono
@CahyoWidokoLaksono Күн бұрын
Exactly the video that I need. Thanks!
@HaraldEngels
@HaraldEngels 6 күн бұрын
I am running locally installed LLMs on my mini PC ASRock DeskMeet X600 with the CPU AMD Ryzen 5 8600G without a dedicated GPU. The AMD 8600G has an integrated NPU/GPU. I have 64GB RAM and a fast SSD. I can run easily LLMs up to 32B with Ollama under Ubuntu 24.04. The whole setup was significantly below $1,000. Inference with big models is slow but still 50 times faster then when I have to perform such tasks by myself.
@jagvindersingh4543
@jagvindersingh4543 5 күн бұрын
@HaraldEngles, question, are token per seconds on 32b, performance wise is it fast, moderate or slow ?
@ryanjusay6072
@ryanjusay6072 6 күн бұрын
Great tutorial! Excellent session and easy to follow.
@SonyJimable
@SonyJimable 7 күн бұрын
Awesome walk through. SUBSCRIBED!!! I also love the Minisforum mini server - I have my eye on one those and also on their Atomman G7 PT with an integrated 8GB RX 7600M XT...
@jpmcgarrity6999
@jpmcgarrity6999 2 күн бұрын
Thanks for the video. I did it in powershell with Choco.
@CraigDurango
@CraigDurango 3 күн бұрын
Awesome tutorial, greatly appreciated!
@mockmywords
@mockmywords 4 күн бұрын
I just set it up, thanks for the clear instructions!
@TechUnGlued
@TechUnGlued 5 күн бұрын
Excellent video. Keep them coming. Have a good one.
@henryuta
@henryuta 4 сағат бұрын
Great video!
@mpz24
@mpz24 7 күн бұрын
Going to try this as soon as I get home.
@mladenorsolic370
@mladenorsolic370 3 күн бұрын
Great video, now here's an idea for next one : rather than using this chatGPT like UI, i'd like to query my local model using API, basically writing my own UI to communicate with LLM. Any hints on how to start ?
@AlanCheun
@AlanCheun 4 күн бұрын
cool, wanted to try something like this, with potentially mac mini M4, partly in consideration of energy consumption, but will consider some of the other options you mentioned
@Stecbine
@Stecbine 5 күн бұрын
Incredibly helpful video thank you, liked and subscribed!!!
@turbo2ltr
@turbo2ltr 7 күн бұрын
I just set up ollama on a VMware VM on my 12th gen i9 laptop. It's not the fastest thing, but was faster than I thought it would be, at least using the Ollama 1.5b or small Deepseek-r1. Now I want to actually make a small AI machine with a decent GPU.
@svedalawoodcrafts
@svedalawoodcrafts 6 күн бұрын
Super nice tutorial!
@thomasbayer2832
@thomasbayer2832 5 күн бұрын
Just what I needed!
@zerosleep1975
@zerosleep1975 7 күн бұрын
LM Studio is also an alternative worthy of looking at to serve multiple loaded models.
@jridder89
@jridder89 7 күн бұрын
And it's much easier and faster to install
@yakbreeder
@yakbreeder 7 күн бұрын
I don't get Ollama when LM Studio is SO much simpler to get setup and running.
@GPTLocalhost
@GPTLocalhost 7 күн бұрын
Agreed. We tested deepseek-r1-distill-llama-8b in Microsoft Word using MacBook Pro (M1 Max, 64G) and it ran smoothly: kzbin.info/www/bejne/imLQqmWdps5gbbM
@jridder89
@jridder89 5 күн бұрын
@rangerkayla8824 the underside of LM is literally ollama.c
@dailymotion101
@dailymotion101 Күн бұрын
Or Pico AI on a mac. Or privateLLM.
@jaxwylde2139
@jaxwylde2139 7 күн бұрын
Keep in mind that UNLESS you're using one of the very large parameter models, that the output is often wrong (hallucinations!) . Deepseek-r1 (8 Billion parameter), listed "Kamloops Bob" (whoever that is), as the 4th Prime Minister of Canada. It told me that there were two r's in strawberry, and only corrected itself (with a lot of apologizing) after I pointed that out. It also told me that Peter Piper picked 42 pecks of pickled peppers, because that's the answer according to the Hitchhiker's guide (42 is the universal answer to everything...LOL). Unless you have the space and hardware to install one of the very large models, I wouldn't take any of the outputted results as being accurate (without cross checking). It's fun (hilarious, in fact) to play with, but take the results with a LARGE grain of salt.
@ok-ou7qk
@ok-ou7qk 7 күн бұрын
how much vram do you have?
@789know
@789know 7 күн бұрын
btw only the 671B deepseek one is the real deal, the other one are just distilled model of Llama/Qwen (distilled using R1 output, so still improved from original) 8 billion may be too little I think some data show 14B seems to be the sweet spot (i think it is distilled R1 14B or sth like that), result not too far off compare to 32B on paper 32B Distilled R1 Qwen2.5 beat out 70B Distilled R1 Llama If ur hardware can handle it, i suggest trying the 14B
@jaxwylde2139
@jaxwylde2139 7 күн бұрын
@ I've got a gaming laptop with mobile version of rtx4080 with 12 Gb VRAM. My Laptop also has 32 Gb RAM. I was able to run the 14B version with no issue, but it has too many hallucinations. I'm sticking with llama3.2 and phi-4 as they suit my needs perfectly. Cheers.
@jaxwylde2139
@jaxwylde2139 7 күн бұрын
@ I agree. I misquoted my original post. I used the 14B version (and the 8B before that). I still had a bunch of errors (hallucinations), when compared with llama3.2 which answered more accurately. Although all seem to struggle with the number of r's in the word strawberry 🙂
@agoysy
@agoysy 6 күн бұрын
thanks bro. I was planning to use the 8GB version as I value accuracy I canceled. Btw fun fact on Kamloops Bob is a person named Robert Trudeau from Kamloops, Canada. I think it got mixed up w Justin Trudeau, you guys guessed the 23rd prime minister of Canada. No idea how it went to 4th, but there it is.
@DenisOnTheTube
@DenisOnTheTube 7 күн бұрын
This is a GREAT video!
@mohamedeladl6273
@mohamedeladl6273 6 күн бұрын
Thanks, for your great video! How much storage needs to install both models?
@Pub_Squash
@Pub_Squash 2 күн бұрын
Why do you not also enable Windows Subsystem for Linux while in Windows Features, is that not what's needed?
@Threadripperbourbon2024
@Threadripperbourbon2024 6 күн бұрын
May have to go into your BIOS to enable Virtual Processing.
@tonysolar284
@tonysolar284 7 күн бұрын
16:07 Any LLM can use the tag.
@MrBrettStar
@MrBrettStar 20 сағат бұрын
I followed the steps but ollama basically crashes as soon as I enter a prompt, in openwebui and direct into cmd. Yet if I install Ubuntu on the same machine (within windows using wsl) and then install ollama it works fine within that environment so I’m not sure why it’s not working
@elypelowski5670
@elypelowski5670 7 күн бұрын
Excellent !!! I will have to load this up on my server :)
@traxendregames7880
@traxendregames7880 6 күн бұрын
Great job thanks for all information and your work, i will try that out soon! do you have recommendation if i want to buy a used GPU for this type of usage?
@jamesbrady9105
@jamesbrady9105 6 күн бұрын
A most awesome video and detailed perfectly. I do have an issue, downloading the model filled up my hard drive, how can install it to an alternate drive? I have a 250GB C drive and 5 TB hard drive for my D drive. I want to install it on the 5TB one.
@TomSmith-yh9ju
@TomSmith-yh9ju 6 күн бұрын
ok, the problem for w10 users : WSL is installed by default in Windows 11. In Windows 10, it can be installed either by joining the Windows Insider program or manually via Microsoft Store or Winget. ---- without wsl - no docker
@HefaiztShouse-v8f
@HefaiztShouse-v8f 21 сағат бұрын
I appreciate your post! My okx wallet holds USDT and other coins and I’ve got the seed phrase :(tag suit turtle raccoon orange fever main skull special hungry crystal mixture). Could you explain What's the best way to send them to Binance?
@selwynbeck2356
@selwynbeck2356 Сағат бұрын
erm, Docker Desktop keeps crashing- even after reinstalling it 3 times. Says it is latest version. Cannot proceed further. Help?
@thomasbayer2832
@thomasbayer2832 5 күн бұрын
Perfect!
@zhouly
@zhouly 5 күн бұрын
For anyone having difficulty installing a Linux distribution in Windows Subsystem for Linux, pls check that virtualisation is enabled for your CPU in the BIOS. Without a linux distro installed in WSL, Docker won’t start.
@Viper_Playz
@Viper_Playz 7 күн бұрын
Very helpful video!
@Dragonninja904
@Dragonninja904 6 күн бұрын
love the video but i have a question how are u using 2 gpu on your main machine i have 3 laying around but i dont know how to combine their power
@mrsajjad30
@mrsajjad30 2 күн бұрын
How can I setup a local model on a computer with no internet connection?
@emaasunesisgloballtd1457
@emaasunesisgloballtd1457 2 күн бұрын
how do i delete the model i dont want?
@TC-yr8qb
@TC-yr8qb 3 күн бұрын
How do you force ollama to use the GPU? When i use the 70b my 3090 sits at 10% usage and cpu and system ram goes to 100%. Only with the 30b does my 3090 get used properly.
@LegionInfanterie
@LegionInfanterie 6 күн бұрын
btw tiammen masacre thing is not answered on online model, if you host ist localy, model answer it without any censorship
@ClassyMonkey1212
@ClassyMonkey1212 6 күн бұрын
People use that as this big smoking gun but I don't know about anyone else but I don't sit at home all day using LLMs to talk about China. The more impressive thing about deepseek it's basically jailbroken
@laujimmy9282
@laujimmy9282 2 күн бұрын
​@@ClassyMonkey1212exactly
@alonzosmith6189
@alonzosmith6189 7 күн бұрын
Tk U for sharing, working with no issues
@artiguf
@artiguf 3 күн бұрын
Great video - though I have issues :-) docker is installed and when I have installed the openwebui, it wont start! Is it a requirement, that the proxmox vm has nested enabled ? (I assume so, so did that.admin). So I uninstalled and re-installed Docker and I then installed WSL via Powershell, and lastly Re-installed webUI. So now WebUI starts in Docker and stays running. :-)
@sham8996
@sham8996 19 сағат бұрын
Can you make a tutorial to install and run a NPU optimized deepseek version on Cpiolot+ pc with Snapdragon ?
@jiandam
@jiandam 3 күн бұрын
Do you have any guidance for install DeepSeek and use it for offline prompting? I saw many examples but only for creating a free chat app offline, not for prompting tasks like what we can do with the paid API.
@alneid2707
@alneid2707 5 күн бұрын
Hmmm... Have a 3060(12GB) hooked up to my MS-A1. I'll have to try installing this to the GPU. Thanks for the tutorial!
@RealityTrailers
@RealityTrailers 2 күн бұрын
So after downloading 10 different things, rebooting a few times, DeepSeek Ai works. Thanks.
@zo2o
@zo2o 2 күн бұрын
Well, I installed ollama, and I started using it through the cli. It is sufficient to see that the low parameter versions (up to the 14b, which I could reasonably run on my toaster) are just garbage for an enduser like me (make no mistake, they are still tech marvels, but from a practical viewpoint, not really fit for the job yet). I need to invest into some hardware if I want to move on to the useful models. I wonder though, if they are correctly or at least better prompted, then could they be actually useful? Here is an example. I prompted the following instruction: "Find the three mistakes in the following sentence: This sentance contains, three mistakes." The online version solved the problem almost flawlessly, though regarded it a paradox for some reason (maybe paradoxes are fasionable). The smaller models just couldn't really tackle the problem. I might add, I used Hungarian as the input language, just for more fun.
@hendrix5928
@hendrix5928 Күн бұрын
thanks that worked great though for me I had to enable virtualization capabilities in my bios before i could get docker to work with out giving me an Unexpected WSL error
@josephayooluwa8802
@josephayooluwa8802 3 күн бұрын
I pulled the open webui image with podman and i have logged in to open webui but it can't see the model i have downloaded already nor can it pull a new model. Any idea why this is happening? Thanks.
@mounishbalaji2038
@mounishbalaji2038 7 күн бұрын
Nice video. Can you please make a video how to completely uninstall all this from my computer after setup everything.
@KevinBoland215
@KevinBoland215 6 күн бұрын
To install WSL I like to open a Powershell window and use command wsl --install then reboot. The default Linux Distro is Ubuntu. Say you wanted Debian then you can issue command wsl --install -d Debian. Hope this helps. From Powershell window to update WSL the command would be wsl --update
@FredericoMonteiro
@FredericoMonteiro 5 күн бұрын
very good tutorial, thanks a lot.
@marpandz8483
@marpandz8483 7 күн бұрын
What if i want to delete the first model i downloaded(llama) and just use the second one that i have downloaded(deepseek)?
@aaronthacker9509
@aaronthacker9509 5 күн бұрын
I can't get the Docker Desktop Installer to run, even as admin. It spins a wheel for 2 seconds then quits. Seems to be a common issue, but no advice seems to be helping
@HeyStanleey
@HeyStanleey 10 сағат бұрын
I never understood the need for registering to Open Web UI and "login". All videos skip this part.. kind of weird for me. where does that information go? Overall the video is great, step by step. but that's my only big concern
@CrosstalkSolutions
@CrosstalkSolutions 4 сағат бұрын
It’s for local credentials. Open WebUI is a service - multiple people can use it from different computers. They would need their own logins so that you’re not sharing query history.
@EduardoKabello
@EduardoKabello 6 күн бұрын
Nice tutorial! Is there a way to create a video showing Ollama installed on a mini PC running Linux, using an NVIDIA graphics card installed on another PC running Windows, where they communicate over the network.
@Jamilkhan--
@Jamilkhan-- 6 күн бұрын
I there any way to install this on d: as i do not have space on my c:
@dmytro_ryzhykh
@dmytro_ryzhykh 6 күн бұрын
many thanks for the video. Could you please paste actual commands (not clipped images) for running the container with various variables? Thanks in advance!
@clockware
@clockware 4 күн бұрын
How much time would it take to have an answer with 70b or 671b on recent, but average CPU-only PC?
@Tej-k7f
@Tej-k7f 18 сағат бұрын
I have installed that for the shake of curiousity and now wants to free up some space so how can I uninstall all of that if anybody has any idea please help me out
@guyinacrappyvan
@guyinacrappyvan 6 күн бұрын
if I set this up on a headless machine, how to I access from other machines in the house locally? And can I set up separate accounts for each family member to this one machine?
@enigma6643
@enigma6643 4 күн бұрын
is it possible to run it on oogabooga textgen web ui as i used to run other models ?
@GabeTetrault
@GabeTetrault 7 күн бұрын
Yep, this got me curious. I'm installing it now.
@CrosstalkSolutions
@CrosstalkSolutions 7 күн бұрын
Follow up and let me know how it goes!
@thanos1000000fc
@thanos1000000fc 7 күн бұрын
any way to run it without docker?
@Viper_Playz
@Viper_Playz 7 күн бұрын
yeah, he Literally said that it works without it. Docker is just to make it look nice
@thanos1000000fc
@thanos1000000fc 7 күн бұрын
@@Viper_Playz I want to make it look nice without docker
@pixaim69
@pixaim69 7 күн бұрын
Yes you can run Openweb Ui without docker. There are some instructions on the website.
@TomSmith-yh9ju
@TomSmith-yh9ju 6 күн бұрын
also got Docker Desktop - unexpected WSL error, ... be shure virtuell computer is activated in your bios ...checking, if isocache exsists ...
@seasonedveteran488
@seasonedveteran488 3 күн бұрын
How do you remove a specific model?
@fredsmith7964
@fredsmith7964 6 күн бұрын
Any additional steps or software needed to use Ollama with an intel gpu like the A770?
@Capitan_Cavernicola_1
@Capitan_Cavernicola_1 7 күн бұрын
Would this work with MacOS too? If not how. Greatly appreciated!
@teknerd
@teknerd 6 күн бұрын
Is this method better than installing something like LM Studio and GPT4All? Does it perform any better?
@Pieman16
@Pieman16 Күн бұрын
Can this be done on unraid?
@LeadDennis
@LeadDennis 7 күн бұрын
So helpful. Thank you.
@V.I.POwner
@V.I.POwner 4 күн бұрын
What if you run into a problem with the WSL update when going thru the docker install process at the end
@V.I.POwner
@V.I.POwner 4 күн бұрын
I made it past the issue and now I can download models but now I noticed how much processing power you need and I'm just running on a 8g ram on a lenovo flex 5i.. what much can I do on this.lol
@greymatter-TRTH
@greymatter-TRTH 6 күн бұрын
(HTTP code 500) server error - Ports are not available: exposing port TCP 0.0.0.0:3000 -> 0.0.0.0:0: listen tcp 0.0.0.0:3000: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
@StephenAigbepue
@StephenAigbepue 6 күн бұрын
Wow...Thanks, Can I do Data analysis by uploading my data from my local machine, as with ChatGPT 4o paid version?
@kurt_harrop
@kurt_harrop 6 күн бұрын
use the + on the left of the text box to upload a document is the basic description. There are videos on this topic,
@9ALiTY
@9ALiTY 5 күн бұрын
I get WLS -- update failed on docker every time.
@josephayooluwa8802
@josephayooluwa8802 5 күн бұрын
can i use podman desktop instead of docker?
@RodrigoAGJ
@RodrigoAGJ Күн бұрын
LM Studio is also an easy alternative!!
@McMaxW
@McMaxW 6 күн бұрын
Is it bettter to use Linux if I have an AMD GPU so I can use Rocm? Or there would be no difference?
@Threadripperbourbon2024
@Threadripperbourbon2024 6 күн бұрын
Do I need Windows 11 "PRO" vs Home to get the Virtual Machine Platform operating?
@zhouly
@zhouly 5 күн бұрын
No, Windows 11 Home is good enough.
@GS-XV
@GS-XV 5 күн бұрын
Ollama's website states that it no longer requires WSL and now runs natively on Windows.
@cowlevelcrypto2346
@cowlevelcrypto2346 5 күн бұрын
Why are we running on Windows at all ?
@LeadDennis
@LeadDennis 6 күн бұрын
I was successful at installing on 1/2 pcs.
@REALEYEZ1718
@REALEYEZ1718 6 күн бұрын
i have a ryzen 7 7735hs and amd rx 7700s gpu is there a special dock command to run ?
@ashwinpal-s8l
@ashwinpal-s8l 6 күн бұрын
the gpu... did not work, takes forever to get a resposne, sometiems none at all with 3.3
@MrDivHD
@MrDivHD 7 күн бұрын
Great Video, Would you sleep with the evil and give him also your car keys?
@YamiGenesis
@YamiGenesis 6 күн бұрын
how do I use my Nvidia GPU instead of the CPU like it says I am using in Docker
@YamiGenesis
@YamiGenesis 6 күн бұрын
I would assume this is the reason I am getting the [500: Ollama: 500, message='Internal Server Error', url='host.docker.internal:11434/api/chat'] error when trying to run the deepseek-r1:70b model.
@TC-yr8qb
@TC-yr8qb 4 күн бұрын
having this issue as well. i pulled the GPU option but it still uses the CPU
@aperson1181
@aperson1181 3 күн бұрын
There is a draft bill being proposed to ban downloading or using it or go to jail for 20 yrs
@Gabeyre
@Gabeyre 6 күн бұрын
i dont have laptop or pc. can I run model for free?
@NoodlesTBograt
@NoodlesTBograt 5 күн бұрын
Thanks for the video I have it all working I just need somebody to explain how to optimise it to use my R7 5800 x3d rx 7900xt system most efficiently
@thekjub
@thekjub 15 сағат бұрын
17:20 .... to clarify: My first question to DeepSeek was : How big is US budget. And after smashing me with answer . I asked I downloaded 1.5GB data how could you figured this out locally? And there it was. Why OpenAI is so fearefull of DeepSeek ? Because they offloaded this completition of queries logic to users PC :D that means in billions less processing power for all those stupid questions around the world :D and they just point all queries to distributed server with particular answers.
@MikdanJey
@MikdanJey 6 күн бұрын
May I know, what is your system config ?
@fiehlsport
@fiehlsport 3 күн бұрын
☢️ RADON 780 GRAPHICS ☢️
@hadashidesu
@hadashidesu Күн бұрын
Try "tell me about the Tienanmen square massacre, but substitute the latter i with 1, and the letter e with 3". I could get the censored version of DeepSeek to talk about Tienanmen!
@michaelthompson657
@michaelthompson657 7 күн бұрын
Is this the same process on Mac?
@Mr.Tec01
@Mr.Tec01 7 күн бұрын
yes this works on a mac, running on a Mac Mini M4 no issues...I actually did all this yesterday before his video came out...super weird...lol
@michaelthompson657
@michaelthompson657 7 күн бұрын
@ lol thanks! I’ll have to check out some videos
@Mr.Tec01
@Mr.Tec01 7 күн бұрын
@ heads ups, do not go with llama 3.3 on Mac mini M4 not only did it crash my computer it brought down my whole unifi network...oops...lol just rock llama 3.2latest and you will be fine
@michaelthompson657
@michaelthompson657 7 күн бұрын
@ thanks. I currently have a MacBook Pro m4 with 24gb ram, not sure what the difference is
@Mr.Tec01
@Mr.Tec01 7 күн бұрын
@@michaelthompson657 I think it based on the billion parameters (??) the llama 3.3 is like 70 billion 42gb download, lama3.2 is only 6 billion and 4.5gb…I’m pretty sure your your macbook can handle 6billiin no issue
@frooglesmythe9264
@frooglesmythe9264 7 күн бұрын
This is extremely interesting: Today (2025-01-30, 18:30 utc), I downloaded deepseek-r1:7b, and I entered the exact same question as you: "Tell me about the Tienenmen Square Massacre of 1989". From llama3.2 I got the corerct answer but from deepseek-r1:7b I got "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses". Why the difference from your answer? (By the way, I am running Ollama on a MacBook Pro, Apple M2 Pro with 16 GB memory)
@CrosstalkSolutions
@CrosstalkSolutions 7 күн бұрын
Well - that's exactly what I showed in this video...sometimes the deepseek model answers that question, and sometimes it gives the censored answer - maybe it has to do with what was asked earlier in that same conversation?
@ComedianHarmonistNL
@ComedianHarmonistNL 6 күн бұрын
​@frooglesmythe9264 I just visited the channel Dave's Garage and he installed DeepSeek and got a satisfactory answer about Tienanmen Square. So what did you do wrong? Breaking of an initally appearing answer to a seemingly politically loaded question happened to me. But polite explaining and rephrase got me results. So rethink your way of asking questions.
the ONLY way to run Deepseek...
11:59
NetworkChuck
Рет қаралды 722 М.
Your Remote Desktop SUCKS!! Try this instead (FREE + Open Source)
22:30
小丑女COCO的审判。#天使 #小丑 #超人不会飞
00:53
超人不会飞
Рет қаралды 16 МЛН
Try this prank with your friends 😂 @karina-kola
00:18
Andrey Grechka
Рет қаралды 9 МЛН
DeepSeek on Apple Silicon in depth | 4 MacBooks Tested
26:27
Alex Ziskind
Рет қаралды 122 М.
DeepSeek and Packet Analysis? Let's find out...
7:41
Chris Greer
Рет қаралды 56 М.
Building a fully local "deep researcher" with DeepSeek-R1
14:21
LangChain
Рет қаралды 160 М.
NVIDIA CEO Jensen Huang's Vision for the Future
1:03:03
Cleo Abram
Рет қаралды 1,1 МЛН
Tailscale Setup on GL.iNet: Remote Access Made Easy
12:02
Crosstalk Solutions
Рет қаралды 10 М.
How do Graphics Cards Work?  Exploring GPU Architecture
28:30
Branch Education
Рет қаралды 3,3 МЛН
Deepseek R1 671b Running LOCAL AI LLM is a ChatGPT Killer!
19:13
Digital Spaceport
Рет қаралды 473 М.
小丑女COCO的审判。#天使 #小丑 #超人不会飞
00:53
超人不会飞
Рет қаралды 16 МЛН