Don't forget: in many cases more memory means more speed, because you can use bigger batch sizes.
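A quick way to see this in practice (a toy PyTorch sketch; the model and sizes are made up): throughput usually grows with batch size until VRAM runs out, because the GPU processes a whole batch in parallel.

```python
import time
import torch

# Toy model; on a real GPU you'd call model.to("cuda") and move x there too.
model = torch.nn.Linear(1024, 1024)

def throughput(batch_size: int, iters: int = 10) -> float:
    """Rough samples/second for a given batch size."""
    x = torch.randn(batch_size, 1024)
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(iters):
            model(x)
    return batch_size * iters / (time.perf_counter() - start)

# On a GPU with enough memory, throughput(256) is typically far higher than
# throughput(16); more VRAM lets you push batch_size (and throughput) further.
print(throughput(16), throughput(256))
```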
@zainbaloch5541 2 years ago
CUDA cores are one thing, but these days we also consider Tensor cores, and I have a suggestion for those who may want to buy a GTX 1080 Ti: go for the RTX 2060 instead, as it has 240 Tensor cores compared to none in the GTX 1080 Ti. In my opinion, the RTX 2060 is the right option at the right price!
@pysource-com 2 years ago
I totally agree with this. The RTX 2060 is the best buy for price/performance.
@rock53355 1 year ago
@@pysource-com How does one use the Tensor cores in an RTX card? Is it 'cuda' in PyTorch?
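For what it's worth, a minimal sketch of how this works in PyTorch (assuming a recent torch build): Tensor cores are not addressed directly. `'cuda'` is just the device; the Tensor cores are engaged automatically when you run float16/bfloat16 (or TF32) matrix math on a supported GPU, typically via automatic mixed precision.

```python
import torch

model = torch.nn.Linear(64, 64)
x = torch.randn(8, 64)

# 'cuda' only selects the device; Tensor cores kick in via reduced precision.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, x = model.to(device), x.to(device)

# autocast runs matmuls in float16 on CUDA (using Tensor cores on RTX cards);
# bfloat16 is the supported autocast dtype on CPU, used here as a fallback.
dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.autocast(device_type=device, dtype=dtype):
    y = model(x)

print(y.shape)  # torch.Size([8, 64])
```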
@michal5869 1 year ago
@@pysource-com What? The RTX 3060 12GB is about 20% faster for DL and is now about 30% cheaper; I said the same 7 months ago. The best guide is a benchmark, not core or Tensor counts, because memory speed matters as much as core speed, architecture, etc.
@gplgomes 2 years ago
So will we need two GPUs, one for the monitor and the other for deep learning?
@pysource-com 2 years ago
Nope, the same GPU can be used for training while driving the monitor at the same time. Normal computer usage (with a browser open and some other light programs) takes only a few hundred megabytes of GPU memory and roughly 10-15% of GPU utilization.
@SoundChaser_ 1 year ago
Thanks for the explanations!
@Mr_K283 2 years ago
Is it the dedicated VRAM that should be more than 4 GB?
@pankajjoshi4206 1 year ago
Sir, which graphics card should I buy in 2023? I am doing a project on live face detection in a shopping mall, using deep learning with PyTorch, OpenCV, and YOLO. Thank you.
@amrzakaria5290 2 years ago
Good information, you help a lot.
@sus7382 1 month ago
Good afternoon. I realise this is a rather old video, but I was wondering if you could give some insight into what would be best for me. I'm working on a turret that does real-time single-object tracking with the lowest possible latency. Currently I'm not in need of a GPU, as I use a 7840HS APU and a highly modified MOSSE tracker, but I want to use some GPU acceleration for things like encoding, initial detection, etc. in the future. Considering that I don't have a particularly high need for GPU performance, would it be advisable to get an AMD counterpart due to the lower price, or spend more on the Nvidia card purely for the support? I know ROCm is getting good, but I also know it's not close to CUDA yet either. Thanks.
@pysource-com 1 month ago
I made a new video a few weeks ago that will give you more updated insights. You can check it here: kzbin.info/www/bejne/mqK8qoGmd8-jnqc
Nvidia vs AMD? Nvidia, hands down; it's way ahead of AMD. Then the choice depends a lot on the type of work you need to do. If you need to train algorithms, or handle a lot of cameras at the same time, you need a large memory. If you only need speed, get the card with the most CUDA cores in your price range (even with lower video RAM).
@liftup8895 2 years ago
Hi, how can we size the requirements if we want to run object detection on multiple IP cameras? Thank you.
@pysource-com 2 years ago
Hi, it depends on the specific case, as there are many ways to approach this scenario:
- a lighter model, so the GPU can handle more streams at the same time
- lowering the FPS for each stream (if we don't need many FPS), so we can process more streams together
- a multi-GPU setup to handle many streams at the same time
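As a back-of-the-envelope illustration of that sizing (the numbers are hypothetical; measure your own model's inference time):

```python
def max_streams(inference_ms: float, fps_per_camera: float) -> int:
    """How many camera streams one GPU can serve, ignoring decode/transfer overhead."""
    capacity_fps = 1000.0 / inference_ms        # total frames/sec the GPU can process
    return int(capacity_fps // fps_per_camera)  # streams at the desired per-camera FPS

# e.g. a detector taking 20 ms per frame, with each camera processed at 5 FPS:
print(max_streams(20, 5))  # 10 streams
# a lighter model at 8 ms per frame more than doubles the stream count:
print(max_streams(8, 5))   # 25 streams
```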
@liftup8895 2 years ago
@@pysource-com Thank you very much. One more question: I tried running object detection using the GPU. The FPS is much better, but detection is not working; it was working before (using the CPU). What am I missing? Thank you.
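One common cause of that symptom (a guess, since the original setup isn't shown): the model was moved to the GPU but the input frames were not, or results are read back on the wrong device. A minimal PyTorch sketch of the fix:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for a detector network (the real model would be e.g. a YOLO net).
model = torch.nn.Conv2d(3, 16, kernel_size=3).to(device).eval()

frame = torch.randn(1, 3, 64, 64)  # a frame, e.g. from OpenCV, still on the CPU

with torch.no_grad():
    # The input must live on the same device as the model, otherwise
    # inference errors out or produces results on the wrong device.
    out = model(frame.to(device))

print(out.shape)  # torch.Size([1, 16, 62, 62])
```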
@maloukemallouke9735 1 year ago
Thank you so much. What about CPU and memory? i9 or Ryzen? 64 GB or 128?
@KiraSlith 1 year ago
(Speaking from limited experience here; it's a new field without a lot of documentation.) If you're building your rig just for AI training/use and nothing else, system RAM over 32 GB won't have much effect unless you're using LLM models with a C++ back end, or some extremely complex CV models. It's all about that VRAM pool size.
@maloukemallouke9735 1 year ago
@@KiraSlith Thank you, it's for intensive training.
@cyberhard 2 years ago
Do you need a GPU? Yes. Nvidia or AMD? Nvidia, unless you're strictly going to develop using PyTorch; then you can use AMD and ROCm. How much RAM? As much as you can afford.
@gplgomes 2 years ago
Is a GPU for mining the same as one for deep learning?
@xcastel6234 1 year ago
Technically no, but a GPU can have multiple uses.
@albertvolcom730 3 months ago
I'm working on an autonomous driving project using the CARLA simulator and need advice on choosing a GPU. My budget is around 600-800€. I'm considering a used RTX 3090 or a new RTX 4070 Ti, but I'm unsure whether I should prioritize VRAM over raw power. Also, my university might provide server access, but I still need a GPU for local work. Should I invest more in a powerful GPU or rely on the servers for heavier tasks? Any advice or recommendations would be greatly appreciated! Thanks!
@pysource-com 3 months ago
Absolutely, I recommend always prioritizing VRAM over power. You'll get a huge amount of benefit from having 24 GB of VRAM. I'll be publishing a video soon (in a few weeks) comparing a 12 GB graphics card vs a 24 GB one.
Personally I recommend having a GPU (though it's always a personal decision depending on the importance of the project, the need for future projects, and your budget), as it saves a lot of time when testing things.
@mpoltav2 2 years ago
What about the 1060 3 GB?
@ilkay3359 2 years ago
Hello sir. Honestly, people who live in Turkey can rarely afford these components because of currency issues. Still, thank you; I'll wait for your Keras, DNN, machine learning, and model training videos. One more thing: recently I found that I could not use .pt files in my code. Maybe you could make a video about YOLOv5 PyTorch, deploying the pretrained model into our code in PyCharm. Thank you again.
@FederickTan 8 months ago
Is there a big difference in performance and speed for AI tasks like Stable Diffusion, video rendering, etc. between the RTX 4080 Super and the RTX 4090? Which one should I buy, as I seldom play games, or should I wait for the 5090 at the end of the year? I am not a video editor and don't hold any job related to designing or editing; I'm just a casual home user.
@pysource-com 8 months ago
Yes, there is quite a big difference between the 4080 Super and the 4090 in terms of speed and also memory. If you're not tight on budget, I would definitely go for the 4090.
@Mr_K283 2 years ago
Can you use a laptop with Nvidia Quadro T1000 graphics for computer vision?
@pysource-com 2 years ago
It's not ideal. There are 2 versions of the Nvidia T1000 (one 4 GB and the other 8 GB). Both are low on memory; 12 GB is recommended to fully use your laptop for computer vision.
@pysource-com 2 years ago
If you only need it to run models in real time, they're fine, but not for training.
@Mr_K283 2 years ago
@@pysource-com Thank you.
@Mr_K283 2 years ago
@@pysource-com But can it be used for any other machine learning or neural networks apart from computer vision?
@pysource-com 2 years ago
@@Mr_K283 Yes, it will work with all the deep learning frameworks: TensorFlow, Theano, PyTorch, Keras, and others.
@RossTang 2 years ago
I have two RTX A2000 6GB cards; will they count as 12 GB of memory and double the cores?
@pysource-com 2 years ago
Hi, nope, they will be treated as 2 separate GPUs with 6 GB each. It's still good, because working with multiple GPUs divides the work among them, significantly speeding up training, though 6 GB is still low on memory and will be a limit depending on the model you're going to train.
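For reference, a minimal sketch of how two GPUs are typically used in PyTorch (`DataParallel` shown for brevity; `DistributedDataParallel` is the recommended approach for serious training). The batch is split across the GPUs, but each GPU still holds a full copy of the model, so per-GPU memory (6 GB here) remains the limit:

```python
import torch

model = torch.nn.Linear(32, 32)

# Each card appears as its own device ("cuda:0", "cuda:1"); memory doesn't pool.
if torch.cuda.device_count() > 1:
    # DataParallel splits every input batch across the GPUs, runs each chunk
    # on a full model replica, then gathers the outputs back together.
    model = torch.nn.DataParallel(model).cuda()

out = model(torch.randn(8, 32).to(next(model.parameters()).device))
print(out.shape)  # torch.Size([8, 32])
```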
@RossTang 2 years ago
@@pysource-com Yes, I can't train YOLOv7 with my cards.
@Mr_K283 2 years ago
Please, is it the dedicated VRAM that should be more than 4 GB?
@shankn3520 1 year ago
What about the laptop RTX 3060, which is 6 GB and not 12 GB? So are you saying purchasing a laptop with an RTX 3060 is of no use?
@kdzvocalcovers3516 1 year ago
3070 Ti or 2080: if you're not creating huge images, these cards work just fine, obviously more for hobbyists, but they aren't suitable for the giant projects that pros create. The 3060 and 2060 are very slow and have inferior VRAM, as tests have shown. You never mentioned CUDA cores or 256-bit bandwidth, not to mention the much faster GDDR6X VRAM. The 3080 Ti is the minimum; the 4080 is the sweet spot: 16 GB of VRAM, large bandwidth, fast VRAM, and tons of CUDA cores.
@CaleMcCollough 1 year ago
If I could only afford $150 I would get a 3050 8GB. I personally use two 3060 12GB cards and a 3090 24GB, but I would go for the 4060 Ti 16GB for $500 if I couldn't upgrade my power supply. The 3090 24GB is a good deal, but the 4090 is far more power-efficient, so you'll end up paying the difference in the power bill.