5 Questions about Dual GPU for Machine Learning (with Exxact dual 3090 workstation)

66,663 views

Jeff Heaton

1 day ago

Comments: 111
@deltax7159
@deltax7159 6 months ago
Just found your channel! I'm a graduate student studying statistics, planning to build my own ML/DL PC upon graduation for gaming and my own personal research, and your channel is slowly becoming INVALUABLE! Thanks for all this great content, Jeff!
@adityay525125
@adityay525125 3 years ago
Can we get a 3090 vs A-series comparison, with mixed precision thrown in?
@weylandsmith5924
@weylandsmith5924 2 years ago
@Jeff: I don't agree that Exxact has built their workstation so that cooling is maximized. Quite the contrary: I haven't managed to figure out which 3090 model they are using, but nobody will convince me that two air-cooled 3090s, stacked tightly (not even one slot of separation), won't throttle. And indeed that's demonstrated in your very video. Note that you shouldn't watch for die throttling BUT for GDDR6X throttling. Unless you take some fairly drastic precautions, the memory will throttle, and this has been observed for all 3090s on the market (both open-air and blower types). By drastic measures I mean: generous heatsinks on the backplate *and* at least two slots of separation *and* very good case airflow *and* reducing the TDP by at least 15% ("and", not "or"). In any case, note that your upper 3090's die *IS* throttling as well: 86 C engages thermal throttling for the die. It's not surprising that there is such a big difference from the lower card, since the upper card sucks in air heated by the lower card's very hot backplate. And you don't have any margin left: the fan is already at full speed. That's BAD. Stacking the GPUs so close just so that you can use the A-series NVLink bridge is a bad policy: you trade a bit more NVLink bandwidth for a card that will severely overheat. Use the 4-slot NVLink bridge for the 3090s, and put MORE distance between the cards. Disclaimer: I'm not in the business of building workstations. I'm just an AI engineer who struggled with his own build's cooling (dual NVLinked 3090s as well), learning something in the process.
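
A minimal sketch of the kind of monitoring and TDP reduction described above, assuming the nvidia-ml-py (pynvml) package. Note that NVML does not expose the GDDR6X junction temperature on consumer cards, only the die temperature, so memory throttling has to be inferred indirectly:

```python
import pynvml

# Read die temperature and power limit for every GPU.
pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)  # die temp, C
    limit = pynvml.nvmlDeviceGetPowerManagementLimit(h)  # milliwatts
    print(f"GPU {i}: die {temp} C, power limit {limit / 1000:.0f} W")
    # Reduce the TDP by ~15% as suggested above (requires root/admin):
    # pynvml.nvmlDeviceSetPowerManagementLimit(h, int(limit * 0.85))
pynvml.nvmlShutdown()
```
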
@stanst2755
@stanst2755 2 years ago
This copper mod might help: kzbin.info/www/bejne/nGnJZ41-eLWJptk
@peterklemenc6194
@peterklemenc6194 2 years ago
So did you go with the water-cooled option, or just experiment with multiple fans?
@harrythehandyman
@harrythehandyman 3 years ago
It would be nice to see the RTX 3060 12GB vs RTX 3080 Ti 12GB vs RTX 3090 24GB vs A6000 in FP16, FP32, and FP64.
@Maisonier
@Maisonier 1 year ago
+1
@atefamriche9531
@atefamriche9531 2 years ago
Not an expert here, but I think in terms of design, a triple- or quad-slot NVLink with more spacing between the two GPUs would help a LOT. The top GPU is choked. Also, have you checked the memory junction temp? Because if your GPU core is hitting 86 C, then the memory junction temps are probably over 105 C, and that is definitely in thermal-throttling territory.
@CodLab
@CodLab 1 month ago
Hi Jeff, if you read this comment please like it so I know you have read it. I wanted to say thank you so much for this video; it helped me out so much and saved me a lot of time. This is the very specific topic I had been searching for for hours, and I found the right answers. Thank you so much, you're a legend.
@HeatonResearch
@HeatonResearch 1 month ago
You're very welcome!
@Mi-xp6rp
@Mi-xp6rp 2 years ago
I would love to see more use of the 12 GB RTX 3060.
@qjiao8204
@qjiao8204 1 year ago
I think you've been misled by this guy. Don't buy a 3060; in this price range the memory is not that important anymore. Get a 3070 or 3080, which are much, much faster than the 3060.
@jameszheng4189
@jameszheng4189 3 months ago
Wow, this video is what I have been looking for. I am currently trying to build a dual 3090 platform for fine-tuning and inference, and most importantly, I want to see if SLI is necessary!
@datalabwork
@datalabwork 3 years ago
I have watched every single bit of your video... those IDS topics interest me. Would you kindly make a video reviewing DL-based IDS on GPU at some point in the future?
@diegoclimbing
@diegoclimbing 1 month ago
Thanks for the very useful information. I'm currently trying to build the cheapest computer capable of running Llama 3.1 8B. The NVIDIA Quadro 6GB graphics card is the cheapest I could find, so the plan is to connect two of them to be able to load the whole model. Wish me luck.
@HeatonResearch
@HeatonResearch 1 month ago
I would love to hear how that goes!
@lenkapenka6976
@lenkapenka6976 1 year ago
Jeff, fantastic video... explained a lot of stuff I was slightly fuzzy on... your explanations were first class.
@wentworthmiller1890
@wentworthmiller1890 3 years ago
Comparison wishlist: 3090 vs (3080 Ti, 3080, 3060 Ti, 3060). A combination also: 3090 + 3080 Ti, 3090 + 3080, 3090 + 3060. That's a lot; thought I'd ask 😊 😁. Thank you so much for putting these vids together. It's nice to see and understand various facets of DL which are generally not covered in academia. Very helpful to get a holistic perspective for a noob like myself.
@0Zed0
@0Zed0 3 years ago
I'd like to see the 3090 compared to the 3060, and also a comparison of their power consumption, although with a remote system I doubt you'll be able to do that. Obviously the 3060 would be much slower to train on the same data than a 3090, but would it use more, less, or the same power to do it?
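
One rough way to get at the energy question: sample the board power draw while a training run executes and integrate over time. A minimal sketch, again assuming the nvidia-ml-py (pynvml) package:

```python
import time
import pynvml

# Sample board power once per second while a training job runs elsewhere,
# then report the integrated energy (stop with Ctrl+C when training ends).
pynvml.nvmlInit()
h = pynvml.nvmlDeviceGetHandleByIndex(0)
joules = 0.0
try:
    while True:
        watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # NVML reports milliwatts
        joules += watts * 1.0  # one-second sampling interval
        time.sleep(1.0)
except KeyboardInterrupt:
    print(f"~{joules / 3600:.1f} Wh consumed")
    pynvml.nvmlShutdown()
```

A slower card can still come out ahead on total energy if its average draw is proportionally lower, which is exactly the 3060-vs-3090 question.
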
@amanda.collaud
@amanda.collaud 3 years ago
@@kilosierraalpha I have a 2080 Ti and a 3060 in my computer; they work well. The 3060 is not horribly slower than my 2080 Ti, so... please don't make it sound like the 3060 is not suitable for ML. You can overclock the memory bus, btw; I did it as well and nothing bad has happened yet :D
@-iIIiiiiiIiiiiIIIiiIi-
@-iIIiiiiiIiiiiIIIiiIi- 1 month ago
Dis dude be knowin' how to run trains.
@seanreynoldscs
@seanreynoldscs 2 years ago
I find that when I'm working with real-world problems, my tuning can go quicker with multiple GPUs by just training two models side by side, one per GPU, as I tune.
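
That pattern (independent runs pinned to separate cards) needs no framework support at all. A minimal sketch, where train.py and its flags are hypothetical placeholders for your own script:

```python
import os
import subprocess

# Launch one independent training run per GPU; CUDA_VISIBLE_DEVICES makes
# each child process see only its own card.
procs = []
for gpu_id, lr in [(0, "1e-3"), (1, "3e-4")]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    procs.append(subprocess.Popen(["python", "train.py", "--lr", lr], env=env))
for p in procs:
    p.wait()  # both runs proceed concurrently, one per card
```
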
@silverback1861
@silverback1861 2 years ago
Thanks for this comparison. Learned a lot, enough to make a serious decision.
@mamtasantoshvlog
@mamtasantoshvlog 3 years ago
Jeff, it seems you confused yourself both while editing the video and while shooting it. It's data parallelization, not paralyzation. I hope I am correct; let me know if that's not the case. Also, I would love your advice on something.
@British_hunter
@British_hunter 1 year ago
Nailed my setup with custom water cooling on 2x RTX 3090 GPUs and a separate CPU loop. Temps on core, memory, and power stages don't reach over 45 Celsius at full load.
@ALEFA-ID
@ALEFA-ID 1 month ago
Man, I use a dual RTX 3090 setup and my second GPU is not detected. I've already tried everything, but it's still not detected.
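
A quick way to narrow down where the card disappears, sketched here assuming PyTorch (nvidia-smi -L gives the same answer at the driver level): if the driver sees both cards but the framework sees one, it's a software problem; if neither sees the second card, look at PCIe seating, power cables, and BIOS settings.

```python
import torch

# Report what the framework actually sees; compare against `nvidia-smi -L`.
print("CUDA available:", torch.cuda.is_available())
print("GPUs visible:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  cuda:{i} ->", torch.cuda.get_device_name(i))
```
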
@jakobw135
@jakobw135 4 months ago
Can you put in two GPUs from TWO DIFFERENT MANUFACTURERS and hook them up to the same monitor?
@FrancisJMotionDesigner
@FrancisJMotionDesigner 6 months ago
I'm trying to install a second GPU, a 3070, in my PC. I already have one 3080 Ti installed. I have enough power, but after installation there is lag when I move my mouse, and frequent crashes. I tried removing all drivers and doing a fresh install with DDU. My motherboard is an ASUS ROG Strix X570-E.... Please let me know what I'm doing wrong. Could it be something with PCIe lane support?
@97pingo
@97pingo 3 years ago
I would like to ask your opinion regarding notebooks. My question is: which notebook might be worth buying in the scenario where I might have a server for heavy computing? The choice of notebook is linked to the need for mobility.
@thewizardsofthezoo5376
@thewizardsofthezoo5376 1 year ago
Wolfram?
@97pingo
@97pingo 1 year ago
@@thewizardsofthezoo5376 Could you add more information?
@Enterprise-Architect
@Enterprise-Architect 9 months ago
Thanks for this video. Could you please post a video on how to create a cluster using NVIDIA Tesla K80 24GB GDDR5?
@siddharthagrawal8300
@siddharthagrawal8300 4 months ago
In your tests, do you use NVLink on the 3090s?
@danielklaffmo4506
@danielklaffmo4506 3 years ago
Jeff, thank you for making these videos. I think you are the right kind of YouTuber; you're looking at the practical rather than the overly theoretical. But I wish I could talk more with you, because I have ideas I'd like to share (but after a contract, of course). I have kinda maybe done it, and yeah, I kinda need a lot of ML engineers and personalities to gather up to make an event and annual meeting... ehm, please let's talk further.
@maxser7781
@maxser7781 1 year ago
The word is "parallelization", derived from the word "parallel". The word "paralyzation" could be used as a synonym for "paralysis", which is irrelevant in this case.
@plumberski8854
@plumberski8854 1 year ago
Interesting topics for a beginner with this new ML/DL hobby! Can I assume that the difference between the 3090 and 3060 GPUs here is the processing time (assuming the data is small enough for the 3060)?
@eamoralesl
@eamoralesl 1 year ago
Great video, it helped me get a better picture of how dual GPUs are used. A question here: I got one of the newer 2060s with 12GB and wanted to pair it with another GPU, but I can't find the same make and model. Would it matter if it's a different make? Is it worth getting 2x 2060 in 2023 just for having 24GB of VRAM? Should I start saving for newer GPUs? Budget is a concern, because latest-gen GPUs come to my country at almost 3x their price on Amazon, so imagine those prices... Thanks, any opinion helps.
@QuirkyAvik
@QuirkyAvik 2 years ago
I bought one 3090 and was so amazed I got another one. Now I am considering building a proper workstation PC, since I have picked up a "hobby" of editing people's 4K, sometimes 8K, footage for them, along with learning 3D modelling, as I want to get into 3D printing as well. The dual 3090s were bought at more than twice MSRP, which has stopped me from building a workstation even though I finally have a case (no pun intended) for it.
@BrianAnother
@BrianAnother 3 years ago
Parallelization
@DailyProg
@DailyProg 9 months ago
Jeff, do you have a comparison between the 3060, 3090, and 4090? I have a 3060 and am wondering if it is worth the 6x cost to upgrade to a 4090.
@MichaelDude12345
@MichaelDude12345 1 year ago
This is literally the only place I could find information on this subject. I am trying to decide between starting with a 3080 and either a 4070 or 4070 Ti. Can anyone share their thoughts? Price aside, I like how much less power the 4070 uses, but I think it would be a performance drop. Either way, I know I need the 12GB of VRAM for what I want to do. The 4070 Ti seems like it would make up the performance that the 4070 lacks, but I really like the price point of the 3080/4070 range. My options are to get one of those and maybe eventually save up to add another card, or go for a cheaper range and get 2 cards for the data-parallelization benefits. I really wasn't sure how much data parallelization would help me, but it seems like it would just be a nice bonus, so I am now leaning more towards just starting with one of the cards I listed. Anyone with more knowledge than me on the topic, could you weigh in please? I could really use some pointers.
@Mr.AmeliasDad
@Mr.AmeliasDad 1 year ago
Hey man, I'm currently running a 3080. I know you said price aside, but the 3090 has come down to the same price as the 4070s, so I would strongly consider that. I have the 10GB model and would kill for the extra VRAM. Creating a convolutional neural network, I ran out of VRAM pretty fast when trying to expand my model, so I either had to split my model among different GPUs or go with a smaller model. That's why you want to try for more VRAM on a single GPU. That was also on a dataset with 510 classes for classification, which isn't the easiest. I recommend spending what you would on a 4070 or 4070 Ti and getting a used 3090 for the VRAM. Barring that, I would consider trying to get a used 3080 12GB and saving up for a second.
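
For the "split my model among different GPUs" fallback mentioned above, naive model parallelism in PyTorch is only a few lines: put each half of the network on its own card and move the activations between them. A minimal sketch with made-up layer sizes:

```python
import torch
import torch.nn as nn

class SplitNet(nn.Module):
    """Toy network split across two GPUs (naive model parallelism)."""
    def __init__(self):
        super().__init__()
        self.front = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.back = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = self.front(x.to("cuda:0"))
        return self.back(x.to("cuda:1"))  # activations hop between the cards

net = SplitNet()
out = net(torch.randn(32, 1024))  # each half uses only its own card's VRAM
```
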
@absoluteRa07
@absoluteRa07 2 years ago
Thank you very much, very informative.
@Edward-un2ej
@Edward-un2ej 2 years ago
I have had two 3090s for almost two years. When I train with the two cards together, one of them slows down by about 30% due to the cooling.
@manotmapato7594
@manotmapato7594 1 year ago
Did you use NVLink?
@thewizardsofthezoo5376
@thewizardsofthezoo5376 1 year ago
Does dual use halve the PCIe bus?
@markhou
@markhou 2 years ago
In general, would the 3060 Ti be a better pick than the non-Ti with 12GB of VRAM?
@harry1010
@harry1010 3 years ago
Thank you for this!!!!!!
@josephwatkins1249
@josephwatkins1249 2 years ago
Jeff, I have an 8 GPU 30 series rig that I'd like to use for machine learning. If I wanted to use these for data parallelization, how would I set this up?
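
The quickest way to try this is PyTorch's single-process nn.DataParallel, which replicates the model on every visible GPU and splits each batch along dimension 0; DistributedDataParallel (sketched further down) is the faster, recommended route for serious runs. A minimal sketch:

```python
import torch
import torch.nn as nn

# nn.DataParallel uses all visible GPUs by default.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model = nn.DataParallel(model).cuda()
x = torch.randn(256, 512).cuda()  # 256-sample batch -> 32 samples per card on 8 GPUs
y = model(x)
print(y.shape)  # torch.Size([256, 10]), gathered back onto GPU 0
```
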
@rahuls190
@rahuls190 2 years ago
Hello, can I use NVLink between a Quadro RTX 5000 and an RTX 3090? Kindly let me know.
@dmoneyballa
@dmoneyballa 1 year ago
I'd love to see NVIDIA compared to AMD, now that ROCm is working with all of the 6000 and 7000 series.
@AOTanoos22
@AOTanoos22 2 years ago
Why can't you combine the memory of the 3090s to 48GB when using NVLink and have a larger batch size? I thought this is what NVLink was made for: combining both VRAMs into a unified memory pool, in this case 48GB. Correct me if I'm wrong.
@andreas7278
@andreas7278 2 years ago
That's exactly what NVLink is for; this is correct.
@clee5653
@clee5653 1 year ago
@@andreas7278 I'm still confused. Does that mean NVLink provides 48GB of unified VRAM, but it's not a drop-in replacement and we still need to write some acrobatic code to run models larger than the VRAM of a single card?
@andreas7278
@andreas7278 1 year ago
It is indeed a drop-in replacement, if you want to call it that: 2x RTX 3090 (the same goes for 2x NVIDIA Titan RTX from the previous generation) connected via NVLink do provide you with one unified 48GB VRAM memory pool, which allows you to train larger models and use larger batch sizes. As long as the library you are using supports unified memory, you don't need any additional trickery or coding; e.g. PyTorch or TensorFlow will handle this automatically in multi-GPU mode, so no further coding is needed. However, other math libraries such as NumPy won't make use of memory pooling. For modern deep learning this is sufficient, though, since most people will only need the high VRAM amounts for deep learning. This is what made these dual-card setups so popular with machine learning researchers. A lot of scientific ML papers have used one of these two setups (with the exception of the big players with their gigantic server farms, like OpenAI, DeepMind, Google Research, etc.). It was a very economic way to get nearly twice the performance of the corresponding 48GB Quadro card (two cards mostly end up at about 1.92x the performance of a single one in PyTorch; taking into consideration that Quadro cards with their ECC memory are usually a little slower, you end up at roughly twice the throughput) at the same memory size, for an extremely competitive price. Now we finally have the RTX 4090, which pushes linear-algebra calculations further, with a larger generational jump than ever before. But the reason the jump is bigger is that they cut out the NVLink memory controller and used that die space for more CUDA units. This means the RTX 4090 has a larger generational jump over the RTX 3090 than the RTX 3090 had over the Titan RTX, at a very competitive price. It also means that the RTX 4090, in comparison to the RTX 4070 and RTX 4080, delivers exceptional value for money (just look at the total cost of proper water cooling, energy consumption, and ML throughput for an RTX 4090 compared to an RTX 4080: it's not just much faster, it's the better deal even though it's the high-end card). But if you work with any of the transformer models that are very common right now, 24GB is a very low ceiling. Often you can only choose the small models, and then in combination with ridiculously small batch sizes (not just making training slower, but also changing the final network results, since maximum likelihood estimation is applied to too few samples per epoch). More reasonable SOTA models require 50-60GB and up, and 48GB of VRAM gives you much better options. There are crazy models out there, from the likes of OpenAI, that literally need hundreds of GB of VRAM, but well... you can't have everything, and you would only analyze or downstream-train them anyway. If the RTX 4090 allowed NVLink, we could get a reasonably priced 48GB setup; but as it stands, you need to buy the RTX 6000 Ada Lovelace, which costs a lot more, and you will also only be able to leverage single-card throughput. Furthermore, going to 96GB will be impossible with Quadro cards now, since these no longer allow memory pooling via NVLink either. So you will have to get Tesla cards, which are a whole price tier higher. Basically, this new generation is a disappointment for ML researchers if we consider reasonable setups. Other than that, the new generation is pretty amazing.
@AOTanoos22
@AOTanoos22 1 year ago
@@andreas7278 Thank you for this detailed explanation, much appreciated! I'm extremely disappointed that the Ada Lovelace 40-series cards no longer have NVLink, not even the top-end RTX 6000 (Ada). Surely anyone who needs more than 48GB will go with a last-gen RTX A6000 setup. Maybe that's another one of NVIDIA's ways to get rid of Ampere oversupply? What really surprises me is that NVLink is supposedly removed from Ada Lovelace cards at the silicon-design level... yet the new NVIDIA L40 datacenter card, which has an Ada Lovelace chip, does have NVLink according to their website. I guess that makes it the "cheapest" card for ML with a >48GB requirement.
@clee5653
@clee5653 1 year ago
@@andreas7278 You're awesome, man. Just to be specific: to train large models on NVLinked 2x 3090s, all I have to do is enable DDP in PyTorch, with no need for any model-parallelization code, right? It looks like NVIDIA is not going to make any relatively cheap card with more than 48GB of VRAM, so I'm definitely considering picking up another 3090. Having done two research projects on BERT-scale models, I'm fed up with not being able to lay my hands on SOTA mid-size models. My guess is they might bump the next-gen 5090 cards to 32GB, but that is not going to bridge the gap with demand anyway.
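
For reference, a minimal DDP sketch, assuming PyTorch and a launch via torchrun. One caveat worth stating plainly: DDP is data parallelism, so each rank keeps a full model replica that must fit on its own card; NVLink mainly accelerates the gradient all-reduce between the replicas.

```python
# Launch with: torchrun --nproc_per_node=2 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")      # reads env vars set by torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(512, 10).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])  # one full replica per GPU

opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(64, 512).cuda(local_rank)    # each rank loads its own data shard
loss = model(x).sum()
loss.backward()                              # gradients are all-reduced here
opt.step()
dist.destroy_process_group()
```
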
@arisioz
@arisioz 4 months ago
The infamous "Data Paralyzation"
@sigma_z
@sigma_z 2 years ago
I have 6x RTX 3090; would it be possible to join all of them together? More importantly, is there any real advantage for machine learning? Is it better to just get an RTX 4090?
@andreas7278
@andreas7278 2 years ago
You can't "join all 6" together like you suggest. If you just plug in all 6, you can use them in parallel for machine learning, but then they don't share any memory (aka there is no memory pooling). You can get a nearly linear speedup as long as the model type you are training is parallelizable and no other PC component creates a bottleneck. You can typically expect 1.92x for two cards and 3.84x for four, so for 6 identical GPUs you will get near-linear scaling. However, the RTX 3090 does not support bridging more than two cards. What you can (and should) do is get 3x NVLink bridges, which let you bundle the cards together in pairs. By doing that you can effectively use 48GB instead of 24GB of memory, allowing for bigger models and larger batch sizes. So you can both get a nice speedup (large batch sizes are typically much faster for transformers etc.) and play around with larger models. Some software, like video editing, often does not support NVLink, but TensorFlow and PyTorch do (which is what you are probably using).
@mmehdig
@mmehdig 1 year ago
Data Parallelization
@infinitelylarge
@infinitelylarge 2 years ago
I think you mean "parallelization", not "paralyzation". "Parallelization" is the process of making things to work in parallel. "Paralyzation" is the process of becoming paralyzed.
@__--JY-Moe--__
@__--JY-Moe--__ 3 years ago
So you would need a software controller, like some of the software from Intel? 🥩🦖 Good luck!! I hope the NVIDIA 4000 series will be out soon! And AMD says it will make its 7000 series beat NVIDIA in scientific computing!! Some day, I guess!
@HeatonResearch
@HeatonResearch 3 years ago
AMD needs more cloud support; the day I can start to get AMD AWS instances, I will start to consider them. I like my local setup to mirror what I use in the cloud. I am excited about the 4000 series as well; all the rumor mills I follow suggest the 4000 series will be out this time next year.
@69MrUsername69
@69MrUsername69 3 years ago
Hi Jeff, I would like to see more use cases and benchmarks with/without NVLink, as well as various precisions (FP16/32/64), to see whether Tensor Cores also work with NVLink memory. Please illustrate some multi-GPU use cases and benefits.
@marvelousbless9128
@marvelousbless9128 1 year ago
RTX A4500 dual GPUs
@synaestesia-bg3ew
@synaestesia-bg3ew 1 year ago
Your channel is for the rich kids only; you are the Apple Mac channel.
@ProjectPhysX
@ProjectPhysX 1 year ago
Sadly, NVIDIA killed the 2-slot consumer GPUs. You can't buy these anymore, only hilariously oversized 4-slot cards that don't fit next to each other, so people have to buy the overpriced Quadros for dual-GPU workstations.
@pramilapatil8957
@pramilapatil8957 1 year ago
Are you the gamer grandpa?
@ok6959
@ok6959 2 years ago
Why is this guy so slow?
@InnocentiusLacrimosa
@InnocentiusLacrimosa 7 months ago
People speak at different speeds. Often highly analytical people speak at a slower pace.
@simondemeule3934
@simondemeule3934 2 years ago
Would love to see a 3090 vs A5000 vs A6000 comparison. These are all very closely related; they use the same processor die. What varies is the feature set that is enabled (notably performance on various data types and compute-unit count), the memory type and size (GDDR6X vs ECC GDDR6, 24GB vs 48GB), clock speed, power consumption (350W vs 230W vs 300W), cooling form factor (consumer style vs datacenter style), and the datacenter usage agreement. It costs a similar amount to get two 3090s, two A5000s, or one A6000, and that can be a sweet spot for researchers, budget-wise. That yields the same total VRAM and a comparable amount of compute performance, but in practice these setups can behave drastically differently depending on how the workload parallelizes. Cooling also becomes a concern with more than two GPUs.
@GuillaumeVerdonA
@GuillaumeVerdonA 2 years ago
This is exactly the video I needed right now, Jeff! Thank you
@hungle2514
@hungle2514 1 year ago
Thank you for your video. I have a question: suppose I have two monster 3090 GPUs and use NVLink to connect them. Will the system see only one card with 48GB, or two cards? Can I train a model that needs at least 32GB on the 3090s?
@germanjurado953
@germanjurado953 9 months ago
Could you figure out the answer?
@fredrikmagnusson6469
@fredrikmagnusson6469 1 month ago
SLI died a long time ago. Nothing but a waste of money. Don't get me wrong, I love performance PCs.
@thewildernessretreat01
@thewildernessretreat01 2 months ago
Can you kindly let us know your opinion on this setup now? I would love to know.
@mohansathya
@mohansathya 11 months ago
Jeff, did the dual 3090s (NVLink) actually give you double the VRAM seamlessly?
@hoblikdlouhovlasy2431
@hoblikdlouhovlasy2431 3 years ago
Great video as always! Thank you for your effort!
@theccieguy
@theccieguy 1 year ago
Thanks
@zyxwvutsrqponmlkh
@zyxwvutsrqponmlkh 3 years ago
Run it on an RPI.
1 year ago
Hello Jeff. Thank you for sharing. However, I see an NVLink bridge in your system that looks like a 3-slot bridge. With this bridge, your two GPUs obviously had to be placed close to each other, as in the video. Although they may still be compatible with each other, I think this is not a good combination. With your layout, the lower GPU will heat up the upper one, and there is no gap to provide fresh air for the upper GPU. This poses a risk of damage, even fire or explosion, if the system runs at full load for a long time. Looking at your temperature measurements, I also agree with a guy who commented earlier that the actual highest temperature your GPU can reach is over 100 degrees C at the hottest point (VRAM). Also, there is no 3-slot NVLink bridge dedicated to the RTX 3090 on the market; only 4-slot bridges are available for this GPU, and I think the manufacturers have their reasons, related to the temperature issue. With a 4-slot bridge, the spacing is wider and there is more room for fresh air to circulate and cool the RTX 3090s better. I think your system should use another motherboard, one with a wider gap between its two PCIe x16 slots than the current one, enough to fit a 4-slot NVLink bridge. A board like the ROG Strix TRX40-E Gaming meets this condition. And if anything I say is not accurate, please give feedback so I can update my knowledge. :D
@jonabirdd
@jonabirdd 1 year ago
Data paralyzation? Really? FYI, it's parallelisation.
@whoseai3397
@whoseai3397 2 years ago
It's fine to install an RTX 2080 + RTX 3080 together; it works!
@Lorphos
@Lorphos 1 year ago
In the video description you wrote "data paralyzation" instead of "data parallelization".
@wlyiu4057
@wlyiu4057 1 year ago
The upper GPU looks like it is going to overheat. I mean, it is basically drawing in air already heated by the lower card.
@KW-jj9uy
@KW-jj9uy 11 months ago
Yes, the dual GPUs paralyze the data really well. Stuns it for over 10 seconds.
@sherifbadawy8188
@sherifbadawy8188 1 year ago
Would you suggest dual 3090 Tis with NVLink, or two RTX 4090s without NVLink?
@dhaneshr
@dhaneshr 1 year ago
It's "parallelization", not "paralyzation" 🙂
@mikahoy
@mikahoy 1 year ago
Does it need to be connected via NVLink, or is it just plug-and-play as-is?
@zhyere
@zhyere 7 months ago
Thanks for sharing some of your knowledge in all your videos.
@sergeysosnovski162
@sergeysosnovski162 1 year ago
1:43 - parallelization ...
@KhariSecario
@KhariSecario 2 years ago
Thank you! This answers many questions I had about building a parallel-GPU setup.
@Rednunzio
@Rednunzio 3 years ago
Windows or Linux for ML in a multi-GPU system?
@abh830
@abh830 1 year ago
What's the recommended case for dual RTX 3090 Tis? Are dual-system cases better?
@HeatonResearch
@HeatonResearch 1 year ago
That is a 3-slot GPU, so make sure there is enough space, that you can fit it, and that you have at least decent airflow. This is an area where the gamer recommendations on dual 3090s apply directly to machine learning, and I've seen YT videos on dual 3090 builds.
@yosefali7729
@yosefali7729 1 year ago
Does using two 3090s with NVLink improve single-precision processing?
@HeatonResearch
@HeatonResearch 1 year ago
Yes, I had pretty good luck with NVLink; more here: kzbin.info/www/bejne/nnOulH9um7ONZ5o
@JamieTorontoAtkinson
@JamieTorontoAtkinson 1 year ago
Another gem, thank you!
@HeatonResearch
@HeatonResearch 1 year ago
My pleasure! Thanks!
@sigma_z
@sigma_z 2 years ago
Can we do more than 2 GPUs? Like 4 RTX 3090s? 😎😍🙈
@danielwit5708
@danielwit5708 2 years ago
Yes.
@sigma_z
@sigma_z 2 years ago
@@danielwit5708 How? NVLink appears to only connect 2x RTX 3090s, not 4. I have 6x RTX 3090s 😛
@danielwit5708
@danielwit5708 2 years ago
@@sigma_z Your question didn't specify that you were asking about the NVLink bridge, lol. I thought you were just asking about more than 2 cards 😅
@kailashj2145
@kailashj2145 3 years ago
Hoping to see your suggestions for this year's GTC, and hoping for some conference coupons.
@HeatonResearch
@HeatonResearch 3 years ago
Working on that now, actually.
@hanhan-jc5mh
@hanhan-jc5mh 2 years ago
@@HeatonResearch Thank you for your work. I would like to know which plan is better for a GAN project: 4x 3080 Ti or 2x 3090? Thank you.
@TimGtmf
@TimGtmf 2 years ago
I have a question: can I run a 3090 Strix and a 3090 Zotac together? And what is the difference between running the same brand versus different brands of GPUs? Thank you!