Deepseek R1 first impressions - Web + Local with Ollama

  20,628 views

Jeremy Morgan

1 day ago

Comments: 54
@agnosticoparatodo 12 days ago
Thanks for the Spanish dub, and above all for bringing such an interesting video.
@HaraldEngels 11 days ago
I am running the deepseek-r1:14b-qwen-distill-q8_0 variant locally (Ollama on Kubuntu 24.04) on my cheap ASRock DeskMeet X600 PC without a dedicated GPU. My AMD Ryzen 5 8600G has only 16 TOPS and a 65-watt power limit. I have 64GB of RAM, which can be FULLY USED for inference. Inference is slow: highly complex tasks (prompts) can run up to 5 minutes, but even writing a well-structured prompt takes me more time than that, and the result saves me hours of work. The PC supports up to 128GB of RAM, so running a 70B model should work perfectly when time is no issue. Thanks to the low power consumption there are no heat problems. So you trade speed for unlimited model size; for me that is the perfect solution, especially considering that this is a
@murillodaniel9208 11 days ago
Sorry to bother you, but could you tell me how to work out the minimum specs to run an LLM locally? I was curious what specs I would need to run DeepSeek R1 locally, but my research implied I would need so much VRAM that no GPU on the market could run it. But I feel like I'm wrong.
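A rough rule of thumb for sizing: memory needed is roughly parameter count times bits per parameter divided by 8, plus headroom for the KV cache and activations. The sketch below uses my own ballpark figures (the 4.5 bits/param for 4-bit quantization and the 20% overhead factor are approximations, not official requirements):

```python
def est_memory_gb(params_b: float, bits_per_param: float, overhead: float = 1.2) -> float:
    """Rough RAM/VRAM estimate in GB for running an LLM.

    params_b: parameter count in billions
    bits_per_param: 16 (FP16), 8 (Q8), ~4.5 (typical 4-bit quant)
    overhead: fudge factor for KV cache and activations
    """
    bytes_total = params_b * 1e9 * bits_per_param / 8
    return bytes_total * overhead / 1e9

# Distilled R1 variants at ~4-bit quantization:
for size in (7, 14, 32, 70):
    print(f"{size}B ~ {est_memory_gb(size, 4.5):.0f} GB")

# The full 671B model at FP16 -- well over a terabyte,
# which is why no single consumer GPU can hold it:
print(f"671B ~ {est_memory_gb(671, 16):.0f} GB")
```

By this estimate a 7B quant fits easily in 16GB, a 32B quant just fits a 24GB GPU, and a 70B quant wants roughly 48GB, which matches what people in this thread report.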
@Vrhits 11 days ago
Running the 32B model on an M3 Max with CrewAI. It's super impressive.
@Susurros_Latentes 10 days ago
Is it censored like the web version, or is it fully open?
@JeremyMorgan 9 days ago
I imagine the M3 is significantly faster than my demo. Apple Silicon is making big strides!
@DaveEtchells 12 days ago
Great overview/demo, impressive that this level of reasoning can run locally and reasonably quickly to boot!
@amubi 8 days ago
Ask it "Does TikTok need WiFi" 😂
@dubesor 12 days ago
At 5:25 you provided a link, but DeepSeek-R1 does not have browsing ability, so it wasn't reading your blog and making some stuff up; it literally never got to see any content in the first place. Everything it wrote is made up solely from the tiny bit of text in the URL.
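For anyone who does want a local model to summarize a page: you have to fetch the content yourself and paste it into the prompt, since the model can't browse. A minimal sketch against Ollama's local HTTP API (the model tag is illustrative, and a real version would strip HTML before prompting):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_summary_prompt(page_text: str) -> str:
    # The model only ever sees what we put in the prompt, so the page
    # text goes in directly (truncated to stay within the context window).
    return "Summarize the following article:\n\n" + page_text[:8000]

def summarize(url: str, model: str = "deepseek-r1:14b") -> str:
    # Fetch the page ourselves -- the model has no browsing ability.
    with urllib.request.urlopen(url) as resp:
        page_text = resp.read().decode("utf-8", errors="replace")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({
            "model": model,
            "prompt": build_summary_prompt(page_text),
            "stream": False,
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server, e.g.:
    # print(summarize("https://example.com/some-post"))
    pass
```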
@xLBxSayNoMo 12 days ago
It has browsing ability on the website, not in the app. I just asked it to tell me about betting odds on the Lakers game tonight, and it thought through it, starting with the timezone issue of DeepSeek assuming it's the Australian time zone, Jan 22nd, while the Lakers game is Jan 21. So it corrected itself.
@gabrielpolancomejia7198 11 days ago
It has already been updated, and you can now browse the web while using the reasoning model!
@domothepilot 12 days ago
What are the specs of your Mac Studio? Could you elaborate on what kind of specs are needed to run the models at different sizes? Thanks!
@JeremyMorgan 11 days ago
It's a Mac Studio M2 Ultra with a 60-core GPU and 128GB of RAM. Most of the smaller models (1.5B-7B) will run just fine on M1/M2/M3 Macs or NVIDIA GPUs.
@LoftwahTheBeatsmiff 12 days ago
Awesome!! 🎉
@paxtonmosby9371 5 days ago
Yeah, this is an odd test, as you provided a link it can't access. I'd be curious to know how its analysis would change if it had agentic abilities to access and read the actual article from a prompt.
@yzw8473 12 days ago
No. It has no access to the internet, so when you ask it to summarize, it has to guess; the only clue is the article title in the URL. You can do "Search" with V3, or "DeepThink" with R1, but not both at the same time currently.
@craighart 12 days ago
Nope, now you can use DeepThink and internet access at the same time.
@yzw8473 12 days ago
@@craighart Not in the video, at least. You need to enable both buttons.
@xLBxSayNoMo 12 days ago
@@yzw8473 It works on the site, not the app. They just allowed both at the same time yesterday.
@DiegoSanchez-td1fk 10 days ago
@@yzw8473 They've updated it
@leeishere7448 9 days ago
Android can use both internet access and deep thinking, but Apple devices can't, which is odd as hell.
@claudiusquantum3367 10 days ago
Massive problems. I am using the 70B and it's nowhere near as good as Sonnet 3.5. Its reasoning is faulty: it doesn't reason based on what you want, it reasons based on what it thinks.
@flov3086 5 days ago
Hi, I'm trying to decide between using DeepSeek R1 in the browser or running it locally (e.g., Qwen 32B Q3_K_S). I have an RTX 4090 and 32GB of RAM, but I'm not sure if the web version is better or how exactly it compares to running it locally.
@JeremyMorgan 5 days ago
If you mean chat.deepseek.com as the web version, then yes, it will definitely be better: it's a 671B-parameter model, vs. a quantized 32B locally. That being said, depending on your use it may not matter. There are countless use cases where a 32B model will work just fine.
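On whether a 4090 can hold the 32B at all, the deciding factor is bits per weight. A quick sanity check using my own ballpark bits-per-parameter figures for common GGUF quantizations (approximations; exact file sizes vary by model):

```python
# Approximate bits per parameter for common GGUF quantizations
# (ballpark figures; exact sizes vary by model)
QUANTS = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.8, "Q3_K_S": 3.5}

def fits_in_vram(params_b: float, bits: float, vram_gb: float = 24.0) -> bool:
    """Do the weights fit, leaving ~2 GB headroom for the KV cache?"""
    weight_gb = params_b * bits / 8  # billions of params x bits/8 bytes each = GB
    return weight_gb + 2.0 <= vram_gb

for name, bits in QUANTS.items():
    verdict = "fits" if fits_in_vram(32, bits) else "does not fit"
    print(f"32B at {name}: {verdict} in 24 GB")
```

By this math, Q8 and Q6 spill out of 24GB while Q4_K_M and Q3_K_S fit, which is consistent with the Q3_K_S choice above.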
@Novavendetta 9 days ago
Is the 70b model the MLX version?
@Ahmed-ef3bg 11 days ago
The oligarch tech cronies won’t like this.
@MohanKumar-gg7xt 10 days ago
Which of the distilled models can I run on a MacBook Air M2?
@zonegaming3498 12 days ago
I normally ask AI to write a short rap with double entendres and internal rhyme schemes. It's pretty difficult for AI to make complex rap.
@racerx1777 5 days ago
You are asking something smart to, in essence, be dumb! So yeah, there's that.
@alexanderhuang5511 9 days ago
Hey, quick question: how many parameters does the online model have? I am guessing it's one of the smaller ones, so the local models should theoretically be better. What kind of hardware do you need to run the larger models, and is it possible to run the full 671B-parameter model via their APIs or something? Would appreciate a response. Thanks for the great video :)
@JeremyMorgan 9 days ago
You're correct: DeepSeek R1 has 671B parameters. I haven't tried using their API yet. I plan on mathing out what each model needs based on memory. I tried these models on two machines and noticed the biggest model I could run on my 4090 (24GB VRAM) is the 32B model. The biggest I could run on my M2 Ultra is the 70B model (integrated RAM with Apple Silicon, not fast).
@chetk8413 11 days ago
Hi, would the distilled DeepSeek R1 7B model get a reasonably decent token rate on a base-level Mac mini M4 with Ollama?
@JeremyMorgan 11 days ago
I don't know for sure, but I'm fairly certain a 7B will run just fine on that platform. How much memory do you have?
@chetk8413 11 days ago
@JeremyMorgan I'm deciding whether to get the base model M4, which has 16GB.
@atchutram9894 11 days ago
2:58 for my own reference
@eygs493 10 days ago
ok stupid
@dbog5214 11 days ago
The site only gives me V3 (?)
@imgajeed 10 days ago
To be honest, I'm running the 14B model on my local computer, and it's just terrible (I'm using Ollama and Open WebUI). Allow me to clarify: I asked deepseek-r1:14b a riddle, and it answered incorrectly. Then I told it what it should do differently, and it was confused and didn't know what riddle I was talking about. It felt like it had no connection to the previous responses. But then I tested that, and it does have access to the previous messages but seems to just not use them. I don't know exactly what is happening, but I'm very disappointed. I don't know how good GPT-o1 is, but deepseek-r1:14b is nowhere near Sonnet 3.5.
@ywueeee 10 days ago
add chapters
@prasannaford4600 12 days ago
Nice Video
@witness1013 12 days ago
I've never visited before, but the first two tasks being a "joke" and a poem tell me you aren't very serious about this.
@JeremyMorgan 12 days ago
Thanks for your feedback
@Rami_Zaki-k2b 11 days ago
Try it yourself. It is on par with o1. Weird from China. I didn't expect that. But it is true 🤫
@shoddits2156 9 days ago
671b
@emport2359 12 days ago
Calls o1 "01"; opinion already invalidated, not gonna lie.
@emport2359 12 days ago
Also made a video on V3 but thought it was R1 lmfaoo
@JeremyMorgan 12 days ago
@@emport2359 Nobody's forcing you to watch it.
@emport2359 12 days ago
@@JeremyMorgan I'm just saying shit, man. I actually enjoyed the video. I much prefer thoughts coming from an actual developer over someone who purely uses AI, has no real understanding of what the AI produces (code, for example), and values the AI purely on whether the code works, without taking quality, efficiency, and real-life usage into account.
@captain-starbuck 9 days ago
Sorry @jeremymorgan, I had the same initial impression: "this guy doesn't know what he's talking about." But I watched it through, felt educated, didn't waste my time as we do with too many tubers, and I'm here to convey a happy experience. Thanks. Extra points for the MASM demo 🤓
@JeremyMorgan 9 days ago
​@@captain-starbuck fair enough. I read far more information than I hear, so I mispronounce things. Also, I'm not a genius, don't have a PhD, and I don't pretend to be an expert on AI. I'm not. I like trying new things, learning from mistakes, and sharing what I've learned. If I can tell people where the potholes are in the road and help them build cool stuff, it's good enough for me. So I get criticized a lot and don't take it personally. The part about not wasting the viewer's time, however, is very important to me. So thank you for that feedback.
@ObscenePlanet 11 days ago
What are your specs? I tried to run the 70B model and my computer threatened to walk.