Just found your channel! I'm a graduate student studying statistics, planning to build my own ML/DL PC upon graduation to use for gaming and my own personal research, and your channel is slowly becoming INVALUABLE! Thanks for all this great content, Jeff!
@adityay525125 3 years ago
Can we get a 3090 vs A-series comparison, with mixed precision thrown in?
@simondemeule3934 3 years ago
Would love to see a 3090 vs A5000 vs A6000 comparison. These are all very closely related - they use the same processor die - and what varies is the feature set that is enabled (notably performance on various data types and compute unit count), the memory type and size (GDDR6X vs ECC GDDR6, 24GB vs 48GB), clock speed, power consumption (350W vs 230W vs 300W), cooling form factor (consumer style vs datacenter style), and the datacenter usage agreement. It costs a similar amount to get two 3090s, two A5000s, or one A6000, and that can be a sweet spot for researchers, budget-wise. That yields the same total VRAM and a comparable amount of compute performance, but in practice these setups can behave drastically differently depending on how the workload parallelizes. Cooling also becomes a concern with more than two GPUs.
@CoDLab 4 months ago
Hi Jeff, if you read this comment, please like it so I know you have read it. I wanted to say thank you so much for this video; it helped me out a lot and saved me a lot of time. I had been searching for hours for this very specific topic and found the right answers here. Thank you so much, you're a legend.
@HeatonResearch 4 months ago
You're very welcome!
@GuillaumeVerdonA 3 years ago
This is exactly the video I needed right now, Jeff! Thank you
@harrythehandyman 3 years ago
It would be nice to see the RTX 3060 12GB vs RTX 3080 Ti 12GB vs RTX 3090 24GB vs A6000 in FP16, FP32, and FP64.
@Maisonier 1 year ago
+1
@weylandsmith5924 3 years ago
@Jeff: I don't agree that Exxact built their workstation so that cooling is maximized. Quite the contrary: I haven't managed to figure out which 3090 model they are using, but nobody will convince me that two air-cooled 3090s, stacked tightly (not even one slot of separation), won't throttle. And indeed, that's demonstrated in your very video. Note that you shouldn't watch for die throttling, BUT for GDDR6X throttling. Unless you take some fairly drastic precautions, the memory will throttle, and this has been observed for all 3090s on the market (both open-air and blower types). By drastic measures I mean: generous heatsinks on the backplate *and* at least two slots of separation *and* very good case airflow *and* reducing the TDP by at least 15% ("and", not "or"). In any case, note that your upper 3090's die *IS* throttling as well: 86C engages thermal throttling for the die. It's not surprising that there is such a big difference from the lower one, since the upper card sucks in air heated by the lower card's very hot backplate. And you don't have any margin left: the fan is already at full speed. That's BAD. Stacking the GPUs so close just so you can use the A-series NVLink bridge is a bad policy: you trade a bit more NVLink bandwidth for a card that will severely overheat. Use the 4-slot NVLink bridge for the 3090s, and put MORE distance between the cards. Disclaimer: I'm not in the business of building workstations. I'm just an AI engineer who struggled with his own build's cooling (dual NVLinked 3090s as well), learning something in the process.
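If it helps anyone, the TDP reduction can be scripted rather than typed in after every boot. A minimal sketch, assuming the nvidia-ml-py (pynvml) bindings and root privileges; the 15% cut mirrors the advice above:

import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    # Default board power limit, in milliwatts
    default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
    target_mw = int(default_mw * 0.85)  # roughly a 15% TDP reduction
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)  # needs root
    print(f"GPU {i}: power limit set to {target_mw / 1000:.0f} W")
pynvml.nvmlShutdown()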
@stanst2755 2 years ago
This copper mod might help: kzbin.info/www/bejne/nGnJZ41-eLWJptk
@peterklemenc6194 2 years ago
So did you go with the water-cooled option, or just multi-fan experiments?
@cineblazer 2 months ago
Fellow ML engineer here working in a lab that bought a few of these dual-3090 machines from Exxact before I joined the team. I've been observing heavy throttling on the top GPU, with clock speeds dropping to as low as 200MHz. I've already added an extra slot of space between the cards (we don't use NVLink here), but that hasn't helped much. I'm at the point where I'm strongly considering repasting and re-padding the cards to see if that might help, and if not, rehousing the entire machine in a new case with better airflow and more fans. Thanks for the tip about TDP-limiting, I may also give that a try.
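For anyone chasing the same problem, a small watcher script makes the clock drops and die temperatures easy to see side by side. A hedged sketch, assuming the nvidia-ml-py (pynvml) bindings (note that standard NVML generally exposes only the die temperature on these consumer cards, not the GDDR6X junction temperature):

import time
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()
while True:
    for i in range(count):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        sm_mhz = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_SM)  # current SM clock
        temp_c = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)  # die temp
        print(f"GPU {i}: {sm_mhz} MHz @ {temp_c} C")
    time.sleep(1)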
@KhariSecario 2 years ago
Thank you! This answers many questions I had about building a parallel GPU setup.
@0Zed0 3 years ago
I'd like to see the 3090 compared to the 3060, and also a comparison of their power consumption, although with a remote system I doubt you'll be able to do that. Obviously the 3060 would be much slower to train on the same data than a 3090, but would it use more, less, or the same power to do it?
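One hedged way to measure that, assuming the nvidia-ml-py (pynvml) bindings and an Ampere-class card (the cumulative energy counter is only available on newer GPUs; run_training is a placeholder for the actual job):

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

start_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)  # millijoules since driver load
run_training()  # placeholder: the identical training run, repeated on each card
end_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)

print(f"Energy used by the run: {(end_mj - start_mj) / 1e6:.1f} kJ")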
@amanda.collaud 3 years ago
@@kilosierraalpha I have a 2080 Ti and a 3060 in my computer, and they work well. The 3060 is not horribly slower than my 2080 Ti, so please don't make it sound like the 3060 is not suitable for ML. You can overclock the memory bus too, by the way; I did it as well and nothing bad has happened yet :D
@zhyere 10 months ago
Thanks for sharing some of your knowledge in all your videos.
@Mi-xp6rp 3 years ago
I would love to see more use of the 12 GB RTX 3060.
@qjiao8204 1 year ago
I think this guy may have misguided you. Don't buy a 3060; in this price range the memory is not important anymore. Get a 3070 or 3080, which is much, much faster than a 3060.
@lenkapenka6976 2 years ago
Jeff, fantastic video... explained a lot of stuff I was slightly fuzzy on... your explanations were first class.
@atefamriche9531 2 years ago
Not an expert here, but I think in terms of design, a triple- or quad-slot NVLink with more spacing between the two GPUs would help a LOT. The top GPU is choked. Also, have you checked the memory junction temp? Because if your GPU core is hitting 86 deg-C, then the memory junction temps are probably over 105 deg-C, and that is definitely in thermal-throttling territory.
@datalabwork 3 years ago
I have watched every single bit of your video... those IDSs interest me. Would you kindly make a video reviewing DL-based IDS on GPUs sometime in the future?
@wentworthmiller1890 3 years ago
Comparison wishlist: 3090 vs 3080 Ti vs 3080 vs 3060. Combinations also: 3090 + 3080 Ti, 3090 + 3080, 3090 + 3060. That's a lot; thought I'd ask 😊 😁. Thank you so much for putting these vids together. It's nice to see and understand various facets of DL that aren't generally covered in academics. Very helpful for getting a holistic perspective as a noob like myself.
@thewildernessretreat01 5 months ago
Can you kindly let us know your opinion on this setup now? I would love to know.
@hoblikdlouhovlasy2431 3 years ago
Great video as always! Thank you for your effort!
@silverback1861 2 years ago
Thanks for this comparison. Learned a lot to make a serious decision.
@jameszheng4189 6 months ago
Wow, this video is what I have been looking for. I am currently trying to build a dual 3090 platform for fine-tuning and inference, and most importantly, I want to see if SLI is necessary!
@97pingo 3 years ago
I would like to ask your opinion regarding notebooks. Which notebook might be worth buying in a scenario where I also have a server for heavy computing? The choice of notebook is tied to the need for mobility.
@thewizardsofthezoo5376 1 year ago
Wolfram?
@97pingo 1 year ago
@@thewizardsofthezoo5376 Could you add more information?
@hungle2514 1 year ago
Thank you for your video. I have a question: suppose I have two monster 3090 GPUs and use NVLink to connect them. Will the system see one card with 48GB, or two cards? Can I train a model that needs at least 32GB on the 3090s?
@germanjurado953 1 year ago
Did you figure out the answer?
@mamtasantoshvlog 3 years ago
Jeff, it seems you confused yourself both while editing the video and while shooting it. It's data parallelization, not paralyzation. I hope I am correct; let me know if that's not the case. Also, I would love your advice on something.
@mguraliuc1978 2 months ago
Can you please provide a link for the NVLink bridge?
1 year ago
Hello Jeff. Thank you for sharing. However, I see an NVLink bridge in your system that looks like a 3-slot bridge. With this bridge, your two GPUs obviously had to be placed close to each other, as in the video. Although they may still be compatible with each other, I think this is not a good combination. This way, the GPU below will heat up the GPU above, and there is no gap to provide fresh air to the GPU above. This poses a risk of damage, even fire or explosion, if the system runs at full load for a long time. Looking at your temperature measurements, I also agree with a guy who commented earlier that the actual highest temperature your GPU can reach is over 100 degrees C at the hottest point (VRAM). Also, there is no 3-slot NVLink bridge dedicated to the RTX 3090 on the market; only 4-slot bridges are available for this GPU. And I think the manufacturers have their reasons, related to the temperature issue. With a 4-slot bridge the spacing is wider, leaving more room for fresh air to circulate and cool the RTX 3090s better. I think your system should use another motherboard, one with a wider gap between the two PCIe x16 slots than the current one, enough to fit a 4-slot NVLink bridge. A motherboard like the ROG Strix TRX40-E Gaming meets this condition. And if anything I say is not accurate, please give feedback so I can update my knowledge. :D
@seanreynoldscs 3 years ago
I find that when I'm working on real-world problems, my tuning goes quicker with multiple GPUs just by training two models back to back as I tune.
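A minimal sketch of that workflow, assuming PyTorch; build_model, train, and the two configs are placeholders for your own code:

import torch
import torch.multiprocessing as mp

def run_experiment(config, device_index):
    # Pin this experiment to one GPU so two runs can proceed in parallel
    device = torch.device(f"cuda:{device_index}")
    model = build_model(config).to(device)  # placeholder model factory
    train(model, device=device)             # placeholder training loop

if __name__ == "__main__":
    mp.set_start_method("spawn")
    procs = [mp.Process(target=run_experiment, args=(cfg, i))
             for i, cfg in enumerate([config_a, config_b])]  # placeholder configs
    for p in procs:
        p.start()
    for p in procs:
        p.join()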
@mohansathya 1 year ago
Jeff, did the dual 3090s (NVLink) actually give you double the VRAM seamlessly?
@British_hunter 1 year ago
Nailed my setup with custom water cooling on 2x RTX 3090 GPUs and a separate CPU loop. Core, memory, and power-stage temps don't reach over 45 Celsius at full load.
@FrancisJMotionDesigner 9 months ago
I'm trying to install a second GPU, a 3070, in my PC. I already have a 3080 Ti installed. I have enough power, but after installation there is lag when I move my mouse, and frequent crashes. I tried removing all drivers and doing a fresh install with DDU. My motherboard is an ASUS ROG Strix X570-E... Please let me know what I'm doing wrong. Could it be something with PCIe lane support?
@diegoclimbing 4 months ago
Thanks for the very useful information. I'm currently trying to build the cheapest computer capable of running Llama 3.1 8B. A 6GB NVIDIA Quadro is the cheapest graphics card I could find, so the plan is to connect two of them to be able to load the whole model. Wish me luck.
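For reference, a hedged sketch of that plan using Hugging Face's device_map="auto" to shard the layers across both cards; the model ID and the 4-bit quantization are my assumptions (8B at fp16 is roughly 16GB of weights, which won't fit in 2x 6GB without quantization):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed model ID; gated, needs HF login
quant = BitsAndBytesConfig(load_in_4bit=True)   # ~4-5GB of weights instead of ~16GB

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",  # accelerate places layers across both GPUs
)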
@HeatonResearch 4 months ago
I would love to hear how that goes!
@Enterprise-Architect 1 year ago
Thanks for this video. Could you please post a video on how to create a cluster using NVIDIA Tesla K80 24GB GDDR5?
@foju9365 2 months ago
I think the caption meant to say “data parallelism” or “data parallelization”
@siddharthagrawal8300 7 months ago
In your tests, do you use NVLink on the 3090s?
@kailashj2145 3 years ago
Hoping to see your suggestions for this year's GTC, and hoping for some coupons for the conference.
@HeatonResearch 3 years ago
Working on that now, actually.
@hanhan-jc5mh 2 years ago
@@HeatonResearch Thank you for your work. I would like to know which plan is better for a GAN project: 4x 3080 Ti or 2x 3090? Thank you.
@ALEFA-ID 4 months ago
Man, I use a dual RTX 3090 setup and my second GPU is not detected. I've already tried everything, but it's still not detected.
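A quick hedged diagnostic, assuming PyTorch, to narrow down where the card disappears; if nvidia-smi shows both GPUs but this script sees one, the problem is at the CUDA/framework level, and if nvidia-smi itself shows only one, check the PCIe slot, riser, and power cables:

import torch

print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))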
@DailyProg 1 year ago
Jeff, do you have a comparison between the 3060, 3090, and 4090? I have a 3060 and am wondering if it is worth the 6x cost to upgrade to a 4090.
@-iIIiiiiiIiiiiIIIiiIi- 4 months ago
Dis dude be knowin' how to run trains.
@JamieTorontoAtkinson 1 year ago
Another gem, thank you!
@HeatonResearch 1 year ago
My pleasure! Thanks!
@Larimuss 15 days ago
3 years later and this is still cheaper and faster than a 4090. Thanks Nvidia.
@jakobw135 7 months ago
Can you put two GPUs from TWO DIFFERENT MANUFACTURERS in the same system and hook them up to the same monitor?
@mikahoy 2 years ago
Does it need to be connected via NVLink, or is it just plug and play as-is?
@markhou 2 years ago
In general, would the 3060 Ti be a better pick than the non-Ti 12GB-VRAM version?
@wlyiu4057 1 year ago
The upper GPU looks like it is going to overheat. I mean, it is drawing in air that has already been heated by the lower card.
@sherifbadawy8188 2 years ago
Would you suggest dual 3090 Tis with NVLink, or two RTX 4090s without NVLink?
@AOTanoos22 2 years ago
Why can't you combine the memory of the 3090s into 48GB when using NVLink and use a larger batch size? I thought this is what NVLink was made for: combining both VRAMs into a unified memory pool, in this case 48GB. Correct me if I'm wrong.
@andreas7278 2 years ago
That's exactly what NVLink is for; this is correct.
@clee5653 2 years ago
@@andreas7278 I'm still confused. Does that mean NVLink provides 48GB of unified VRAM, but it's not a drop-in replacement and we still need to write some acrobatic code to run models larger than the VRAM of a single card?
@andreas7278 2 years ago
It is indeed a drop-in replacement if you want to call it that, i.e. 2x RTX 3090 (the same goes for 2x Nvidia Titan RTX from the previous generation) connected via NVLink indeed provide you with one unified 48GB VRAM memory pool, which allows you to train larger models and use larger batch sizes. As long as the library you are using supports unified memory, you don't need to do any additional trickery or coding; e.g. PyTorch or TensorFlow will handle this automatically if you use multi-GPU mode, so no further coding is needed. However, other math libraries such as NumPy won't make use of memory pooling. For modern deep learning this is sufficient, though, since most people will only need the high VRAM amounts for deep learning. This is what made these dual cards so popular with machine learning researchers. A lot of scientific ML papers have used one of these two setups (with the exception of the big players with their gigantic server farms out there, like OpenAI, DeepMind, Google Research etc). It was a very economical way to get nearly twice the performance of the corresponding 48GB Quadro card (2 cards mostly end up at about 1.92x the performance of a single one in PyTorch; taking into consideration that Quadro cards with their ECC memory are usually a little bit slower, you end up at roughly twice the throughput) at the same memory size for an extremely competitive price.

Now we finally have the RTX 4090, which pushes linear algebra calculations further, at a larger generational jump than ever before. But the reason the generational jump is higher is that they cut out the NVLink memory controller and used that space for more CUDA units. This means the RTX 4090 has a larger generational jump over the RTX 3090 than the RTX 3090 had over the Titan RTX, at a very competitive price. Also, it means that the RTX 4090, in comparison to the RTX 4070 and RTX 4080, delivers exceptional value for money (just look at the total cost for proper water cooling, energy consumption and ML throughput for an RTX 4090 compared to an RTX 4080: it's not just much faster, it's a better deal even though it's the high-end card).

But if you work with any type of transformer models, which are very common right now, 24GB is a very low ceiling. Often you may only choose the small models, and then in combination with ridiculously small batch sizes (not just making training slower but also resulting in different final network results, due to maximum likelihood estimation being applied on too few samples for each epoch). More reasonable SOTA models require 50-60GB upwards, and 48GB of VRAM provides you with far better options. There are crazy models out there from the likes of OpenAI which literally need hundreds of GB of VRAM, but well... you can't have everything, and you would only analyse or downstream-train them anyway.

If the RTX 4090 allowed NVLink we could get a reasonably priced 48GB setup, but as it stands, you need to buy the RTX 6000 Ada Lovelace, which costs a lot more, and you will also only be able to leverage single-card throughput. Furthermore, going to 96GB will be impossible with Quadro cards now, since these also no longer allow memory pooling via NVLink. So you will have to get Tesla cards, which are a whole price tier higher. Basically, this new generation is a disappointment for ML researchers if we take reasonable setups into consideration. Other than that, the new generation is pretty amazing.
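A minimal sketch of the drop-in PyTorch multi-GPU usage described above; build_model and loader are placeholders:

import torch
import torch.nn as nn

model = build_model()                       # placeholder for your network
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)          # replicates across cuda:0 and cuda:1
model = model.cuda()

opt = torch.optim.Adam(model.parameters())
for x, y in loader:                         # placeholder DataLoader
    opt.zero_grad()
    out = model(x.cuda())                   # each batch is split across the GPUs
    loss = nn.functional.cross_entropy(out, y.cuda())
    loss.backward()
    opt.step()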
@AOTanoos22 2 years ago
@@andreas7278 Thank you for this detailed explanation, much appreciated! I'm extremely disappointed that the Ada Lovelace 40-series cards have no NVLink anymore, not even the top-end RTX 6000 (Ada). Surely anyone who needs more than 48GB will go with a last-gen RTX A6000 setup. Maybe that's another one of Nvidia's ways to get rid of the Ampere oversupply? What really surprises me is that NVLink is supposedly removed from Ada Lovelace cards at the silicon level... yet the new Nvidia L40 datacenter card, which has an Ada Lovelace chip, does have NVLink according to their website. I guess that makes it the "cheapest" card for ML with a >48GB requirement.
@clee5653 2 years ago
@@andreas7278 You're awesome, man. Just to be specific: to train large models on NVLinked 2x 3090s, all I have to do is enable DDP in PyTorch, with no need for any model-parallelization code, right? It looks like Nvidia is not going to make any relatively cheap card with more than 48GB of VRAM, so I'm definitely considering picking up another 3090. Having done two research projects on BERT-scale models, I'm fed up with not being able to lay my hands on SOTA mid-size models. My guess is they might bump the next-gen 5090 cards up to 32GB, but that is not going to bridge the gap with demand anyway.
@plumberski8854 1 year ago
Interesting topics for a beginner with this new ML/DL hobby! Can I assume that the difference between the 3090 and 3060 GPUs here is the processing time (assuming the data is small enough for the 3060)?
@rahuls190 2 years ago
Hello, can I use NVLink between a Quadro RTX 5000 and an RTX 3090? Kindly let me know.
@josephwatkins1249 2 years ago
Jeff, I have an 8-GPU 30-series rig that I'd like to use for machine learning. If I wanted to use these for data parallelization, how would I set this up?
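A hedged starting point, assuming PyTorch's DistributedDataParallel launched via torchrun with one process per GPU (build_model and dataset are placeholders):

# Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group("nccl")
rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(rank)

model = DDP(build_model().cuda(), device_ids=[rank])  # placeholder model
sampler = DistributedSampler(dataset)                 # placeholder dataset
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

opt = torch.optim.Adam(model.parameters())
for x, y in loader:
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x.cuda()), y.cuda())
    loss.backward()
    opt.step()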
@eamoralesl 1 year ago
Great video; it helped me get a better picture of how dual GPUs are used. A question here: I got one of the newer 2060s with 12GB and wanted to pair it with another GPU, but I can't find the same make and model. Would it matter if it's a different make? Is it worth getting 2x 2060 in 2023 just to have 24GB of VRAM, or should I start saving for newer GPUs? Budget is a concern, because latest-gen GPUs arrive in my country at almost 3x their price on Amazon, so imagine those prices... Thanks, any opinion helps.
@danielklaffmo4506 3 years ago
Jeff, thank you for making these videos. I think you are the right kind of YouTuber: you look at the practical rather than the overly theoretical. I wish I could talk more with you, because I have ideas I'd like to share (but after a contract, of course). I have kind of maybe done it, and yeah, I kind of need a lot of ML engineers and personalities to gather up to make an event and annual meeting... ehm, please, let's talk further.
@Lorphos 1 year ago
In the video description you wrote "data Paralyzation" instead of "Data parallelization"
@Rednunzio 3 years ago
Windows or Linux for ML on a multi-GPU system?
@abh830 1 year ago
What's the recommended PC case for dual RTX 3090 Tis... are dual-system cases better?
@HeatonResearch 1 year ago
That is a 3-slot GPU, so make sure there is enough space, that you can fit it, and that you have at least decent airflow. This is an area where the gamer recommendations on dual 3090s apply directly to machine learning, and I've seen YT videos on dual-3090 builds.
@harry1010 3 years ago
Thank you for this!!!!!!
@maxser7781 2 years ago
The word is "parallelization," derived from the word "parallel". The word "paralyzation" could be used as a synonym for "paralysis", which is irrelevant in this case.
@arisioz 7 months ago
The infamous "Data Paralyzation"
@whoseai3397 2 years ago
It's fine to install an RTX 2080 and an RTX 3080 together; it works!
@QuirkyAvik 3 years ago
I bought one 3090 and was so amazed I got another one. Now I am considering building a proper workstation PC, since I have picked up a "hobby" of editing people's 4K (sometimes 8K) footage for them, along with learning 3D modelling, as I want to get into 3D printing as well. The dual 3090s were bought at more than twice MSRP, which has stopped me from building a workstation even though I finally have a case (no pun intended) for it.
@yosefali7729 1 year ago
Does it improve single-precision processing to use two 3090s with NVLink?
@HeatonResearch 1 year ago
Yes, I had pretty good luck with NVLink; more here: kzbin.info/www/bejne/nnOulH9um7ONZ5o
@absoluteRa07 2 years ago
Thank you very much, very informative.
@mnursalmanupiedu 1 month ago
Does your dual-GPU setup use an NVLink bridge?
@theccieguy 1 year ago
Thanks
@MichaelDude12345 1 year ago
This is literally the only place I could find information on this subject. I am trying to decide between starting with a 3080 and either a 4070 or 4070 Ti. Can anyone share their thoughts? Price aside, I like how much less power the 4070 uses, but I think it would be a performance drop. Either way, I know I need the 12GB of VRAM for what I want to do. The 4070 Ti seems like it would make up the performance the 4070 lacks, but I really like the price point of the 3080/4070 range. My options are to get one of those and maybe eventually save up to add another card, or go for a cheaper range and get two cards for the data-parallelization benefits. I wasn't really sure how much data parallelization would help me, but it seems like it would just be a nice bonus, so I am now leaning towards starting with one of the cards I listed. Anyone with more knowledge than me on the topic, could you weigh in please? I could really use some pointers.
@Mr.AmeliasDad 1 year ago
Hey man, I'm currently running a 3080. I know you said price aside, but the 3090 has come down to the same price as the 4070s, so I would strongly consider that. I have the 10GB model and would kill for the extra VRAM. Building a convolutional neural network, I ran out of VRAM pretty fast when trying to expand my model, so I either had to split my model among different GPUs or go with a smaller model. That's why you want more VRAM on a single GPU. That was also on a dataset with 510 classes for classification, which isn't the easiest. I recommend spending what you would on a 4070 or 4070 Ti and getting a used 3090 for the VRAM. Barring that, I would consider trying to get a used 3080 12GB and saving up for a second.
@infinitelylarge 2 years ago
I think you mean "parallelization", not "paralyzation". "Parallelization" is the process of making things to work in parallel. "Paralyzation" is the process of becoming paralyzed.
@KW-jj9uy 1 year ago
Yes, the dual GPUs paralyze the data really well. Stuns them for over 10 seconds.
@Edward-un2ej 2 years ago
I have had two 3090s for almost two years. When I train with both cards together, one of them drops about 30% in performance due to cooling.
@manotmapato7594 2 years ago
Did you use NVLink?
@thewizardsofthezoo5376 1 year ago
Does dual use halve the PCIe bus bandwidth?
@sigma_z 2 years ago
I have 6x RTX 3090; would it be possible to join all of them together? More importantly, is there any real advantage for machine learning, or is it better to just get an RTX 4090?
@andreas7278 2 years ago
You can't "join all 6" together like you suggest. If you just plug in all 6, you can use them in parallel for machine learning, but then they don't share any memory (aka there is no memory pooling). You can get nearly linear speedup as long as the model type you are training is parallelizable and no other PC component creates a bottleneck. You can typically expect 1.92x for two cards and 3.84x for four cards, so for 6 identical GPUs you will get near-linear scaling. However, the RTX 3090 does not support bridging more than two cards (no NVSwitch etc.). What you can (and should) do is get 3x NVLink bridges, which lets you always bundle two of them together. By doing that you can effectively use 48GB instead of 24GB of memory, allowing for bigger models and larger batch sizes. So you can both get a nice speedup (large batch sizes are typically much faster for transformers etc.) and play around with larger models. Some software, like video editing, often does not support NVLink, but TensorFlow and PyTorch (which you are probably using) do.
@dmoneyballa 1 year ago
I'd love to see Nvidia compared to AMD now that ROCm is working with all of the 6000 and 7000 series.
@zyxwvutsrqponmlkh 3 years ago
Run it on an RPI.
@TimGtmf 2 years ago
I have a question: can I run a 3090 Strix and a 3090 Zotac together? And what is the difference between running GPUs of the same brand versus different brands? Thank you!
@BrianAnother 3 years ago
Parallelization
@mmehdig 1 year ago
Data Parallelization
@BluesRockAddict 1 month ago
Just FYI, it's called parallelization not paralyzation :)
@sergeysosnovski162 1 year ago
1:43 - parallelization ...
@synaestesia-bg3ew 1 year ago
Your channel is for the rich kids only; you are the Apple Mac of channels.
@sigma_z 2 years ago
Can we do more than 2 GPUs, like 4 RTX 3090s? 😎😍🙈
@danielwit5708 2 years ago
Yes.
@sigma_z 2 years ago
@@danielwit5708 How? NVLink appears to only connect 2x RTX 3090s, not 4. I have 6x RTX 3090s 😛
@danielwit5708 2 years ago
@@sigma_z Your question didn't specify that you were asking about the NVLink bridge, lol. I thought you were just asking about more than 2 cards 😅
@Jibs-HappyDesigns-990 3 years ago
So you would need a software controller, like some of the software from Intel? 🥩🦖 Good luck!! I hope the Nvidia 4000 series will be out soon! And AMD says it will make its 7000 series beat Nvidia in scientific computing!! Someday, I guess!
@HeatonResearch 3 years ago
AMD needs more cloud support; the day I can start getting AMD AWS instances, I will start to consider them. I like my local setup to mirror what I use in the cloud. I am excited about the 4000 series as well; all the rumor mills I follow suggest the 4000 series will be out this time next year.
@marvelousbless9128 1 year ago
RTX A4500 dual GPUs
@jonabirdd 1 year ago
Data paralyzation? Really? FYI, it's parallelization.
@pramilapatil8957 1 year ago
Are you the gamer grandpa?
@ProjectPhysX 1 year ago
Sadly, Nvidia killed the 2-slot consumer GPUs. You can't buy these anymore, only hilariously oversized 4-slot cards that don't fit next to each other, so people have to buy the overpriced Quadros for dual-GPU workstations.
@felinegoatzapper 1 year ago
It's "parallelization", not "paralyzation" 🙂
@fredrikmagnusson6469 4 months ago
SLI died a long time ago; it's nothing but a waste of money. Don't get me wrong, I love performance PCs.
@ok6959 3 years ago
Why is this guy so slow?
@SocratesWasRight 10 months ago
People speak at different speeds. Often highly analytical people speak at a slower pace.