9:47 _"I think it's finally time to say goodbye to Kepler, altogether."_ Considering it's been almost 3 years since Kepler support was discontinued, likely Maxwell will be dropped any day now too. I half expected Jeff to stuff this thing with Pascal or higher. I just sold off a couple Maxwell based Quadro's before I expect their value to take a dip.
@CraftComputing7 ай бұрын
Pascal is in it now. Volta and Ampere on standby 😉
@KiraSlith7 ай бұрын
Was thinking the same while Craft listed the cards off. Not to throw shade, but the Pascals really are the minimum I'd recommend for VDI tasks, just as a matter of power and performance. Much like their consumer equivalents, they're a quantum leap over their predecessors, so I'm glad he's stepping up to the Pascal cards for the next shootout. From my experience as a budding CSE: the P40 is a VDI monster, with 24GB dividing neatly into 2x12GB, 4x6GB, or 8x3GB VMs. The P100 with its 16GB of HBM2 is still a price/watt/performance monster for Blender or budget AI; its VDI is faster too, but you waste VRAM in the process. The P4 is in practice a GTX 1070 with a 1080's die, absolutely stompy for a 75W half-height card, but I've never tried it and Craft's last video on the P4 wasn't promising.
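For anyone planning profiles around those numbers, here is a minimal Python sketch of the arithmetic; the even splits it prints are just math, not the actual profile list, which is fixed by Nvidia's GRID driver for each card.

```python
# Minimal sketch: enumerate even VRAM splits for a few Tesla cards.
# These are planning numbers only -- the vGPU profiles you can actually assign
# are fixed by Nvidia's GRID driver, so check the supported profile list per card.
CARDS = {"Tesla M40 24GB": 24, "Tesla P40": 24, "Tesla P100 16GB": 16, "Tesla P4": 8}

for card, vram_gb in CARDS.items():
    splits = [(vram_gb // size, size) for size in (1, 2, 3, 4, 6, 8, 12)
              if size <= vram_gb and vram_gb % size == 0]
    print(f"{card}: " + ", ".join(f"{n}x{size}GB" for n, size in splits))
```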
@xpatrikpvp7 ай бұрын
Pascal support was already dropped in vGPU 17.0 so...
@Darkk69697 ай бұрын
One of the reasons why I only bought three Tesla P4s for my Proxmox servers. Might look into the P40 depending on the benchmarks in the next video.
@CraftComputing7 ай бұрын
Pascal was 'just' dropped in v17.0. It was still supported with the latest drivers from Nvidia (Desktop v535, GRID v16.4) as of 2 months ago... Not exactly a deal breaker on a 7+ year old GPU.
@edgecrush3r7 ай бұрын
Happy to see the Tesla vGPU series is back! It has been my favorite series and got me into so many discoveries in the past. It would be nice to see more detailed installation instructions, as setup can be challenging.
@CraftComputing7 ай бұрын
The easiest setup right now is running the vGPU Proxmox installation script here: github.com/wvthoog/proxmox-vgpu-installer It installs the vGPU unlock, installs drivers, and even sets up a virtual license server.
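Before running that script, a quick host sanity check can save some head-scratching. A rough Python sketch along these lines (the checks are assumptions pulled from common vGPU guides, not requirements stated by the script itself, so consult the repo's README):

```python
# Rough pre-flight check before attempting a Proxmox vGPU install.
# These checks are assumptions based on common vGPU guides, not requirements
# published by the installer script -- consult the repo's README for specifics.
import subprocess
from pathlib import Path

def check(label, ok):
    print(f"[{'OK' if ok else '!!'}] {label}")

cmdline = Path("/proc/cmdline").read_text()
check("IOMMU enabled on the kernel cmdline",
      "intel_iommu=on" in cmdline or "amd_iommu=on" in cmdline)

lspci = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
check("NVIDIA device visible on the PCIe bus", "NVIDIA" in lspci)

kernel = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout.strip()
print(f"[i ] Running kernel {kernel} -- make sure your GRID driver supports it")
```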
@WillFuI7 ай бұрын
I’m excited for the P version of this video as I feel that is the best spot right now
@edgecrush3r7 ай бұрын
It would be awesome to see what a couple of P100 or P40 can do.
@travnewmatic3 ай бұрын
same, i'm window shopping P40s excessively at the moment
@blakecasimir7 ай бұрын
Aside from gaming, Tesla P40s make for a cheap, though slow, but still usable option for hosting LLMs. The 24GB of VRAM helps.
@m0taboy7 ай бұрын
True, plus they are quite cheap, at least compared to other options.
@chrisisasavage6 ай бұрын
I have a rig set up like that with two P40s. The issue is the FP16 performance is atrocious, so it's very slow to run those models. I've looked at P100s because of that; I'd have less VRAM, but it should be much faster.
@seansingh44216 ай бұрын
Unless your LLM inference framework requires newer CUDA libraries. Nvidia is a horrible POS for making older drivers not work properly with newer CUDA libraries.
@POLARTTYRTM6 ай бұрын
What makes memory usable in this case is the memory bus width. Yeah, you can allocate a lot of stuff to that memory, but it doesn't help much if the memory bus isn't good.
@DustinShort7 ай бұрын
Really looking forward to the Pascal results. I have a P40 I haven't been able to test yet, but I'm hoping I can make a low-cost VDI server for small and low-demand CAD clients.
@AlexKidd4Fun7 ай бұрын
I've been looking forward to this series! Thanks for the great content, Jeff!
@xerox94267 ай бұрын
By default, the Frame Rate Limiter (FRL) is enabled for all GPUs (60 FPS). The FRL is disabled when the vGPU scheduling behavior is changed from the default best-effort scheduler on GPUs that support alternative vGPU schedulers. The Tesla M10 is actually 20%-25% stronger than the GRID M40.
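On KVM/Proxmox hosts the FRL can reportedly also be toggled per vGPU instance through the mdev device's plugin parameters. A hypothetical sketch, assuming the sysfs path and parameter name that Nvidia's vGPU docs and community guides describe (verify both for your driver version):

```python
# Hypothetical sketch: disable the 60 FPS frame rate limiter for one vGPU instance
# by writing a plugin parameter to its mdev device on a KVM/Proxmox host.
# The sysfs path and the "frame_rate_limiter" parameter name are taken from
# Nvidia vGPU documentation / community guides and may vary by driver version.
import sys
from pathlib import Path

mdev_uuid = sys.argv[1]  # UUID of the vGPU mdev device assigned to the VM
params = Path(f"/sys/bus/mdev/devices/{mdev_uuid}/nvidia/vgpu_params")

if not params.exists():
    sys.exit(f"{params} not found -- is the vGPU manager driver loaded?")

params.write_text("frame_rate_limiter=0\n")  # must be set before the VM starts
print(f"FRL disabled for vGPU {mdev_uuid}")
```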
@5ub5pace7 ай бұрын
16:40 Helldivers 2 is quite CPU heavy. It's not an apples-to-apples comparison: you're probably seeing the more modern 7840U cores pull ahead of the Broadwell cores, despite the Tesla GPUs being faster than 780M in GPU-bound scenarios.
@tonycrabtree34166 ай бұрын
My 9th gen i9 gets really toasty. I’m seriously thinking about converting to water cooling.
@Fusion055 ай бұрын
@@tonycrabtree3416 I've got a 5800X that I had to underclock just to keep temps low. Definitely considering water cooling in the future.
@computersales7 ай бұрын
I love my M40 even though I don't use it much anymore. I don't understand why the Pascal stuff hasn't really dropped much in price though. It seems like it's holding on for dear life.
@gacikpl7 ай бұрын
AI homebrew/small business machines.
@oscarcharliezulu7 ай бұрын
Agreed - AI given they have lots of memory - not fast but you can at least try out CUDA
@gacikpl7 ай бұрын
@@oscarcharliezulu The P40 24GB is now the AI sweet spot, with good speed and a good memory buffer for bigger models.
@computersales7 ай бұрын
@@gacikpl it is weird because the 24GB M40 isn't much more than the 12GB version. Although the MI25 has jumped in value for reasons that are beyond me. I felt bad about buying it for $65 then watching the price drop to $50. Now it is worth $140... 🤪
@ericspecullaas28417 ай бұрын
@@gacikpl I'm going to be picking up 4 of them for a locally hosted model and for a device I'm developing.
@dectoasd36447 ай бұрын
Looking forward to the P40 test, as I have a couple of these in an X99 desktop board with an E5-2597 v4. Mostly been playing with AI inference, but spinning up some local cloud for older games seems like it could be interesting.
@Zeroduckies6 ай бұрын
Let me know how it goes. I've been thinking of getting some P40s. How do you think it would do with a local Llama 3 7B?
@timhanson37506 ай бұрын
@@Zeroduckies The P40 has essentially no usable FP16 ("half precision") throughput (it runs at a tiny fraction of its FP32 rate), so I'd suggest looking instead at the P100 16GB, which does have proper FP16 hardware support.
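If you have one of these cards on hand, a quick PyTorch timing sketch makes the gap obvious (assumes a working CUDA setup; the numbers are rough since nothing is tuned). On a P40 the FP16 pass should come out far slower than FP32, while a P100 should show the opposite:

```python
# Rough FP32 vs FP16 matmul throughput check on whatever CUDA GPU is present.
import time
import torch

def bench_tflops(dtype, n=4096, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return 2 * n**3 * iters / (time.time() - t0) / 1e12  # ~2*n^3 FLOPs per matmul

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print(f"FP32: {bench_tflops(torch.float32):.2f} TFLOPS")
    print(f"FP16: {bench_tflops(torch.float16):.2f} TFLOPS")
```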
@ytdlgandalf7 ай бұрын
Good experiment setup. Love these proper benchmarks
@manhandler6 ай бұрын
To solve a previous dilemma about a Z420 water cooler: you need to snip or cut the bottom side of the 5-pin plug and just slide it onto the 6-pin CPU fan header starting at the top pin. I did it, and to my surprise it actually works great. The 140W 6850K i7 I have never gets above 60°C in high-end AAA games like Helldivers 2 and usually runs in the 30-40°C range, where before it ran up to 100°C and sometimes would crash while performance dropped off. An amazing all-in-one water cooler, I must say.
@rodrigofilho19967 ай бұрын
Maxwell is not dead yet. It has D3D 12_1 support, and the latest drivers still support it. You could still run any modern game fine with a GTX 980 Ti.
@cxmxron_79647 ай бұрын
I'm noticing that most of these old compute cards are cheap because newer AI services don't support their older CUDA compute capability. Triton server, for example, supports compute capability 6.0 and newer (going off memory here), and the Tesla M40 24GB is only 5.2. I learned this the hard way too, since Nvidia's docs weren't clear, so I wasted money on a card.
@AgentAsteriski6 ай бұрын
Oof. Someone stopped me before I bought a card that couldn't support the version we were running (11), but you're right, they don't make it easy to understand which card supports what.
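One way to avoid that trap before buying is to check a card's compute capability against what your framework actually ships kernels for. A small PyTorch sketch (the minimum-version table below is a placeholder; check each project's documentation for the real numbers):

```python
# List each CUDA device's compute capability and compare it against some
# illustrative framework minimums (placeholders -- check each project's docs).
import torch

MIN_CC = {"recent PyTorch wheels": (5, 0), "many TensorRT/Triton builds": (7, 0)}

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        cc = torch.cuda.get_device_capability(i)
        print(f"{torch.cuda.get_device_name(i)}: compute capability {cc[0]}.{cc[1]}")
        for framework, need in MIN_CC.items():
            status = "OK " if cc >= need else "NO "
            print(f"  {status}{framework} (needs >= {need[0]}.{need[1]})")
else:
    print("No CUDA device visible")
```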
@phoenixfireclusterbomb3 ай бұрын
It'd be nice if someone could adapt the card's script to work with the next version.
@Psikeomega7 ай бұрын
This server case is exactly what I need to replace the R720xd that's my cloud gaming rig right now. I'm running a pair of M40 cards set up for passthrough for myself and the step kid. And this was 100% a gaming video, and I understand and respect that. But while not for this, the K80 is still relevant. If you ever get into playing around with AI, toss a few in there if you have them laying around. Thanks for the instructions on that, by the way; saved me a lot of headaches. Also, I take my daily coffee with a roughly measured 2.5 fl oz of heavy/whipping cream in about 17 fl oz of bold coffee. Sometimes I even roast it myself with a cheap air popcorn machine (which you should check out, by the way. Makes a terrible mess with the chaff; my wife wants to kill me when I do it indoors, so don't do it indoors).
@KentBunn3 ай бұрын
I'm curious what other uses/utility a K80 might have, since I have one. And what chassis I should pop it into, as well. I wonder if TrueNAS Scale can make it available to VMs or Docker containers at all... Even just to speed up performance of basic VDI, not gaming.
@joseph31647 ай бұрын
Thanks for the video! Looking forward to the pascal results. Tesla P4 is a powerhouse of a LP gpu
@lQuadXl3 ай бұрын
*Going to be testing a Tesla K80 24GB as an eGPU on my laptop soon, very cheap graphics upgrade for only AU$50!!!* 😁
@cjermo7 ай бұрын
Strange one for you Jeff: I know there's no official drivers, do you know if there's a way to get Maxwell Tesla cards (M40 etc) running on Windows XP and/or Server 2003? I know you can get Titan X Maxwell and the like going with some driver mods, but I've never seen it attempted on the Tesla cards.
@CraftComputing7 ай бұрын
Nope. Not heard of anyone attempting to use Maxwell on XP.
@virtualtools_30217 ай бұрын
@@CraftComputing The GTX 960 works out of the box with XP, and simply adding the device IDs of the 970 through the Quadro M6000 24GB gets them working. No idea on the Tesla, though it would be pretty awesome, especially for splitting it up across multiple VMs.
@jameslewis26357 ай бұрын
I suspect the issue with newer games stuttering on these older cards is that the drivers are not tuned for those games, since Nvidia has long since ended support for cards of this age.
@CraftComputing7 ай бұрын
Definitely.
@kevinerbs27787 ай бұрын
Does no one find this odd with DX12? It was supposed to be a low-level API where the driver wouldn't be as big a factor, yet it's the complete opposite of that.
@NdMoreSpd1.06 ай бұрын
For starters, love the testing. It's good to have an idea of where different cards land should anyone dive into this abyss. I'm interested in seeing a breakdown of how this compares to building/buying/supporting multiple desktop machines versus a single rack-mounted server with VMs. As you mentioned towards the end there, power consumption against performance also plays a big part for some folks, so trying to at least mention efficiency would be good to see as well. I'm most interested in changes across your setups when dropping from 8 VMs down to maybe just 4 VMs or even 2 VMs on the simpler cards. What numbers might you see if you ran, say, two of the M40s with just two VMs running? But again, thanks for helping me consider something that's been on my mind for years but that I never had the patience for: could I get the kids gaming with me without having to break the bank...
@icollided28 күн бұрын
NICE! I'm going to have to try this. I got 4 of the K80s back when mining Ethereum was worth it. This could be awesome for thin clients!
@henderstech7 ай бұрын
With the M60 would it be possible to use it for Jellyfin and then also a separate VM for Frigate and homeassistant?
@CraftComputing7 ай бұрын
Absolutely!
@neponelАй бұрын
can you look into running multiple mac mini m4 in a cluster? using exo for example?
@triarii_004 ай бұрын
very cool series
@user-tn5tr4fp1e4 ай бұрын
I'm just wondering, for a purely VM and encode box, is it worth getting a Tesla M60 over 2 Quadro K2200s, considering they would be around the same price?
@OldPoi777 ай бұрын
Now I want to see a 16 player classic deathmatch on one server! ;)
@KiraSlith7 ай бұрын
You could probably get all 16 onto a pair of Tesla P40s, since they scale to 8 3GB vGPUs with the unlock script. Quake 3 Arena and Unreal Tournament both fly on even an 8800 GT, so it should lock in at ~60 FPS across the board.
@captainsalmonslayer5 күн бұрын
I just wanna know which modern consumer nvidia gpu i should buy to be able to have two VMs share the gpu for gaming.
@kevindurb6 ай бұрын
What about SFF gpus? I’ve got a stack of 2U servers that have gtx750tis that I’m wanting to upgrade
@luheartswarm45737 ай бұрын
Way back in the early 2000s and 2010s, when I was younger, I'd tinker quite a bit with virtual machines, trying different systems. I flirted quite a lot with Linux, but since gaming wasn't there at all yet, I wouldn't just leave Windows, so I'd distro hop. As fun as it was, I couldn't really do too much. It's crazy what you can do with virtualization nowadays!
@flipkibblez7 ай бұрын
I'm rocking a Tesla p4 in my main desktop for playing games and vr games (works with quest 2) though I have an older driver so I can bypass the vgpu stuff.
@wckdkl0wn6 ай бұрын
I run a Dell PowerEdge R720 with roughly 128GB of RAM and, right now, 1x Tesla P40. I want to get another card. You take these cards and try to split them up into multiple VMs. What about creating a super VM for gaming instead? Take 2 or more cards, pass their performance through to a single VM, and give it a ton of system RAM and the bulk of the CPU cores. Would that be possible?
@Wesrl7 ай бұрын
With these being so cheap I am wondering if just using them for transcoding would be useful.
@dylantrisciuzzi94797 ай бұрын
If you don't mind my asking, what heatsinks did you use in your server? I was following along with your ESC4000 G3 build, but I am unable to find any heatsinks to use in the system. It's the last step I need before I can boot the server and start on the software side lol
@kylehaas8645 ай бұрын
Could you possibly make an updated video on getting these cards to pass through on Hyper-V in Server 2022? Or could you link a good guide online?
@TehJumpingJawa12 күн бұрын
Why is Maxwell architecture being blamed for poor performance in Helldivers 2/Hitman 3, when those games run just fine on local Maxwell GPUs? It's surely something vram related? When vram is segmented, is the bus width partitioned too?
@vladimir.smirnov7 ай бұрын
A few things to mention:
1. You should probably measure GPU and CPU utilization, and CPU utilization per thread. Then it should be clearer whether your low 0.1%s are because of the CPU or the GPU.
2. The CPU is indeed way underpowered. Modern games can easily use 6 cores or more, so it is not a good idea to give a VM less than that. I'm not sure why not to temporarily give a VM a way overpowered GPU, find the sweet spot for the CPU in terms of cores/threads per VM, and then use that as a base. But overall, even a 3.6 GHz boost clock might not be enough.
3. Another important thing (for dual-VM tests) is PCIe locality. You should always ensure that you allocate VMs to the same CPU that has the PCIe root complex your GPU is plugged into. Otherwise you might experience, let's say, not-so-great performance scaling, because the other VM will need to go over UPI to the other socket and you can easily saturate the link.
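On point 1, a lightweight logger that samples per-core CPU load next to GPU utilization during a benchmark run is usually enough to tell CPU-bound stutter from GPU-bound stutter. A sketch using psutil and pynvml (both assumed to be installed inside the guest; note that NVML reporting inside a vGPU guest can be limited depending on the driver):

```python
# Log per-core CPU utilization and GPU utilization roughly once a second during a
# benchmark, to tell CPU-bound stutter apart from GPU-bound stutter.
# Assumes `pip install psutil nvidia-ml-py` inside the VM.
import psutil
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        cores = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1s
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu)
        mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)
        print(f"GPU {util.gpu:3d}%  VRAM {mem.used / 2**30:4.1f} GiB  "
              f"busiest core {max(cores):5.1f}%  all cores {cores}")
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```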
@Matlockization3 ай бұрын
Wasn't Maxwell the last generation of Nvidia cards to work with AMD cards at the same time ? Combining two vastly different architectures in games ?
@annihilatorg7 ай бұрын
I would appreciate it if you added some highlights to your data grids when you call out specific results.
@TerenceKearns6 ай бұрын
Would be interested in seeing that as a peertube server with multiple transcoder runners. How many h264 nvenc chips all up?
@falconadv14816 ай бұрын
I'm still trying to figure out what to do with the Tesla P4 I impulse bought for $50 a while back.. thought it could be useful for something eventually lol
@dirkjewitt50377 ай бұрын
Fallout the TV show is a mixed bag. I personally enjoyed most of the characters and, like everyone else, I really like the cowboy ghoul. The bad parts are small but glaring. Luckily, they don't overshadow the series. I didn't find myself cringing like I usually do with most of the modern Disney Star Wars or Marvel. Watch it and let it permeate the senses; but if you are absolutely a purist when it comes to Fallout lore, it'll make you happy and sad at the same time.
@CraftComputing7 ай бұрын
'Purist' Fallout lore is hilarious, considering how outlandish and wacky some of it is. If anything, it's a bit toned down in the show.
@damasterpiece087 ай бұрын
@@CraftComputing the fact that they got the creator to say "i'm not writing anything more on the lore apart from what i've already written, the franchise is in bethesda's possession now so go by their lore" is like thinking star wars could ever be written by disney
@hotrodhunk73897 ай бұрын
Spoilers: The bit where he was like "oh I don't want my Pepe to pop" was super cringe 😬 Overall thought it was great
@wikwayer7 ай бұрын
@@hotrodhunk7389 I guess this is what happens when Reddit doesn't teach you how to rizz
@PaperReaper7 ай бұрын
@@damasterpiece08I mean, star wars animation is on a roll lately. Last season of clone wars, bad batch, tales. Everything is a banger.
@tobiahhowell6 ай бұрын
I'm actually doing something similar but a bit less technical. I plan on upgrading my powerful gaming PC so it can handle any game I want to play, but I have a pretty small office space and I don't want to play games in a sauna... so I'm going to put my gaming PC in the garage and use Steam Remote Play to display the games on a smaller PC actually in my office. Your method seems to be quite a bit more compact than the 4U server chassis I found for my computer 😅. Also, during my testing to make sure my setup would work, I found that Steam Link (an older version of Steam Remote Play that you can install on Raspberry Pis) doesn't like AMD graphics cards. It works just fine with NVIDIA cards (I don't have any Intel cards to test), but AMD cards do not work well 😅 I think it's an encoding issue. Awesome video by the way! Keep up the awesome work!
@obraik6 ай бұрын
Have you looked at Hyper-V and its built-in GPU partitioning, which also works with consumer cards? I use it in my home server for various VMs running Plex, Frigate, and one or two other video decoding/encoding workloads. No special mods or drivers required, and it works with both Windows and Linux VMs.
@mbe1027 ай бұрын
The only thing I can't seem to figure out is how to get access to Moonlight/Sunshine over the net. I can access it locally, but I'd love to let some chums game on my server.
@alexhawes66907 ай бұрын
I use ZeroTier for networking
@CraftComputing7 ай бұрын
You'll need to set up port forwarding in your firewall to access it over the WAN. Or you can use Parsec, which handles the connection via client apps that establish it outside of firewall rules.
@augold16817 ай бұрын
I am just wondering how P40s would perform.
@CraftComputing7 ай бұрын
You'll find out shortly.
@LordZomdadoSpy6 ай бұрын
Just hearing a TF2 Soldier Voice line at the start made me like the video
@Anarchyman283 ай бұрын
Stupid question here... What happens if I add it to a 1660?
@alexkindl8617 ай бұрын
Could some of the stutter be network bandwidth/stack? Is there any image compression from the VM to the client? Is the test repeatable on different client hardware? Sorry for the questions, but it's an interesting experiment that you're doing.
@CraftComputing7 ай бұрын
The benchmark/frametime recording was happening locally on the virtual machine. Benchmarking the stream quality would be another metric entirely.
@alexkindl8617 ай бұрын
@@CraftComputing Very sensible, thanks for the response
@tsclly23775 ай бұрын
I prefer the ML350 G9 with the P40s (2) and the Optane advantage (P900, PCIe 3.0). If you are going to stream to a handheld, well, I'd be doing it at 720p (think bandwidth through cell services), and a high frame rate is more important to a gamer than the resolution.
@AI-HOMELAB7 ай бұрын
Any chance you'd try to create a DIY 8x P40 server? I'm using a 4x P40 server with an AMD 7551P and 512GB of RAM for LLM inference. Now that Llama 400B is upcoming, I am wondering if I could still get decent performance at 4-bit with 8 or even 12 P40s. VRAM is just insanely expensive. Is there a better enterprise GPU than the P40 with the same amount of VRAM or more and at least the same performance, which does not cost 10x the price of a P40? 😅
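As a rough sanity check on the VRAM side, back-of-envelope math for the weights alone (ignoring KV cache and runtime overhead, which add more on top) looks like this:

```python
# Back-of-envelope VRAM sizing for 4-bit quantized model weights, ignoring
# KV cache and runtime overhead (which can add tens of GB at long context).
def weight_vram_gb(params_billions: float, bits: int = 4) -> float:
    return params_billions * 1e9 * bits / 8 / 1e9  # GB for the weights alone

P40_VRAM_GB = 24
for params in (70, 405):
    need = weight_vram_gb(params)
    cards = -(-need // P40_VRAM_GB)  # ceiling division
    print(f"{params}B @ 4-bit: ~{need:.0f} GB of weights -> at least {int(cards)}x P40")
```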
@ewenchan12396 ай бұрын
I have never gotten Sunshine and Moonlight working together properly. I tried using them to play Halo Infinite over the local LAN and it never really worked right. It wouldn't take the inputs correctly, which made trying to aim in Halo impossible.
@almc84456 ай бұрын
Awesome video! I love the idea of a virtual gaming PC! On another note, can you do a super quick video breaking down the naming conventions for Nvidia/AMD server cards? Specs aren't usually too hard to find, but broad descriptions of the series and product lines are extremely rare.
@robot40007 ай бұрын
I think it could be nice to include the 780M in all of these bechmarks (or other integrated graphics) so that way there could be a baseline to compare all of these numbers to.
@alexej77907 ай бұрын
Could the CPU's low clock be the reason for the low 0.1% framerates?
@bulletz4breakfast4297 ай бұрын
Love your videos. I got this server and some Tesla P40s and finally got everything running. Is there any way to do 144Hz Parsec streaming? I can't find anything on it other than a dummy plug for consumer cards. Is there any other option, since you can't plug anything into the Tesla?
@CRE4MPIE7 ай бұрын
I virtualized my PC using Hyper-V following one of your previous videos, but I am stuck running at 1080p on the Hyper-V Windows 11 PC I am using via Parsec. How can I increase the host resolution to, say, 1440p or 4K? This doesn't seem possible ;(
@vulcan4d6 ай бұрын
I can feel the power draw
@garthkey7 ай бұрын
Is there anything I can do to calm the beast of fans in the server? I impulse bought one after the first video, but I can't handle the noise and have nowhere to move it.
@luislozano28967 ай бұрын
I would love to see some benchmarks for Ai/Stable Diffusion!
@phoenixfireclusterbomb3 ай бұрын
I really enjoy this kind of tinkering. Wish I knew how to build the script. Never know what can be unlocked.
@TuratiVishera7 ай бұрын
Is that a Radeon VII in the background on your wall? Still rockin' one in my desktop after all these years.
@AlpineTheHusky6 ай бұрын
That's impressive. Not the "still using a Radeon VII" part, but the "my Radeon VII is still working" part. I recommended them to a few friends, and out of 5 cards only 2 are still working, and one of those has been repaired. They are hella hardware unstable. But since yours is still running, I doubt it's gonna die any time soon.
@ccleorina6 ай бұрын
Hi, I'm trying my Nvidia Tesla P100 in my HPE DL380 Gen9 server and have followed both the Proxmox vGPU installation script and PolloLoco's vGPU installation guide. I still can't find my P100 under Mapped Devices, and the MDev Type is grayed out. I tried Proxmox 7.4, 8.1, and 8.2 with kernel 6.5; all fail to show a mapped device.
@Feynt7 ай бұрын
Good run through. Looking forward to the Pascal tests.
@bobbyLovesTech7 ай бұрын
Instead of giving each host a dedicated virtual hard drive, could you host your steam library on a NAS and share that with all 8? Are there any issues with that?
@CraftComputing7 ай бұрын
The issue is disk I/O if 8 clients are attempting to access the same data. VHDs on an array mean each host has a unique copy of their own data. Shared library means only one copy exists, creating a potential bottleneck. I've dealt with that many times in the past when building similar setups.
@bobbyLovesTech7 ай бұрын
Hadn’t even thought of Disk I/O. Thanks for the reply - learned so much from your content 😌
@virtualtools_30217 ай бұрын
I installed my steam library on a FusionIO something pcie 3.0 x8 6.4TB ssd and enabled that new feature that lets you download games from local pc instead of steam servers, it works well with a 10gbit nic
@ElementX323 ай бұрын
Good morning sir, would these GPU's work for an AI server build?
@yokunz18376 ай бұрын
Can I run Tesla on bluestack or ldplayer?
@gabrielemusso987 ай бұрын
It would be nice to have some metrics while the game is running, like CPU/GPU utilization, RAM/VRAM usage and so on, to better identify the bottleneck. Maybe the CPU was pinned at 100% and couldn't keep up, or vice versa.
@win7best5 ай бұрын
i wish you would have tested the M10 just to see how (bad) it would perform
@buggerlugz67537 ай бұрын
How many Patriot Burst SSDs have you had die on you yet? (I've had a new one last only 2 days!)
@CraftComputing7 ай бұрын
Been running these 8 drives for ~18 months now. Zero failures. What are you running them on? What config? RAID?
@ukaszs50217 ай бұрын
Maybe it would be good to also post hashcat benchmarks for these cards and the others you are testing.
@cameronfrye55147 ай бұрын
Definitely interested to see the Pascal version, as I happen to have one laying around. Not something I'd want to do personally, but it's fun to know I COULD... The question is, have you ever tried butter in your coffee in place of cream?
@felixcosty7 ай бұрын
Thanks for the video. I have a main desktop gaming PC running Linux Mint (5800X, 32 gigs of RAM, 6750 XT). The TV PC is a 6900HX with a 680M and 32 gigs of DDR5-5200, also on Linux Mint. I would like to stream games from the desktop PC to the TV PC; what software would I use to do so? Oh, I do not use Steam; I have not purchased a game there for 10 years. I'm using Heroic Games Launcher with my GOG and Epic games.
@gacikpl7 ай бұрын
Probably parsec?
@felixcosty7 ай бұрын
@@gacikpl is Parsec a program?
@gacikpl7 ай бұрын
@@felixcosty yup for remote desktop with 3d and gpu acceleration.
@TekChris7807 ай бұрын
Since your Desktop is running Linux, and you say you don't use Steam, you'll want to take a look at Sunshine which is actually what Jeff uses on his server. Unfortunately Parsec does not have a Linux Host application. Note that you can 100% create a Steam account for free and you can use Steam's game streaming to stream your non-steam games 100% free, however due to the way Linux Steam / Proton installs games it might be tricky to get that working properly.
@1leggeddog7 ай бұрын
I feel the server cpu was really the limiting factor more than anything... 2.3 GHz isn't great 😢
@0mnislash797 ай бұрын
What is the input lag? 😟
@lil.shaman63847 ай бұрын
You should test the tesla T4 and A2, I really want to see those low profile single slot cards
@CraftComputing7 ай бұрын
I've previously reviewed the A16, which is the A2 x4. Unfortunately, that one was a loaner, so I can't run it through my tests right now. But my T4 is in the server for the next round.
@lil.shaman63847 ай бұрын
@@CraftComputing Oh, I'm quite excited for the next one then. I didn't see the A16 video; I'm gonna have to go back and find that though.
@freightrainfred75127 ай бұрын
I think it would be a great idea to do a similar video with these cards in Machine learning, Deep learning, artificial intelligence.
@kspau136 ай бұрын
great video.
@denvera1g17 ай бұрын
Great, now i want to make a sports coat out of tin foil so i can go as full metal jacket for halloween
@anzekejzar32337 ай бұрын
Hi there, was anyone able to get the vGPU driver with virtual desktops to work with Linux guest VMs?
@Markus-r6g3 ай бұрын
Keep this going, but also add some AI tests such as LLMs and Photogen.
@wettale6 ай бұрын
VDI will cause issues with certain anti-cheat software if you plan on playing online multiplayer with many new titles.
@denox4207 ай бұрын
Love the idea, I use parsec on my main rig to play games on the go, or to play on my living room TV. Having a server dedicated for game streaming would be pretty fire
@Knirin7 ай бұрын
What about RDP?
@3v0686 ай бұрын
Craft computing for alcoholics/non drinkers, join me with a cup of coffee or tea! Enjoy your sobriety while also enjoying his content. Much love jeff!
@phoenixfireclusterbomb3 ай бұрын
I don’t understand the disappointment. It has great utility.
@kimjang_6 ай бұрын
Re upload ?
@HaydonRyan7 ай бұрын
Would love it if you also did an ai benchmark on these cards
@lordpelvis639547 ай бұрын
I love it......
@AlexStypikАй бұрын
Yes please do a P40 and P100 video. Maybe a V100, but the latter $$$
@CraftComputingАй бұрын
Hehehe, I have a pair of V100s on the bench right now 😉
@jesseslack20897 ай бұрын
The Nomad test whips my 1080 Ti down to a movie-like 24fps...
@jasonscherer26317 ай бұрын
I work with someone who is getting 2 K80s in a GPU server fairly cheap but wants to use them for AI/ML. I told them it's a horrible idea to use cards that old for something that will probably max them out all the time, and they aren't going to be at all efficient compared to something even slightly newer.
@CraftComputing7 ай бұрын
At least upgrade to the M60... they're super cheap, and supported by much more modern software (both vGPU/GRID and CUDA).
@jasonscherer26317 ай бұрын
@@CraftComputing That's what I told him, but he won't listen. He will regret his choice.
@asmithdev21627 ай бұрын
@@jasonscherer2631 I just abandoned using a K80 because its CUDA compute capability is only 3.5, and most programs like Stable Diffusion require at least 5.0 or higher. I upgraded to a Tesla P100 for $200 off eBay.
@Wirenfeldt19907 ай бұрын
0:36 DANG IT JEFF!..You flubbed your lines.. "But what is Cloud Gaming?" - Onlive, 2015
@Ether-3D5 ай бұрын
Man wish the Tesla P4 was in this also.
@CraftComputing5 ай бұрын
:-) kzbin.info/www/bejne/gpeZaIl3fcuafaM
@cpljimmyneutron7 ай бұрын
Unfortunately Kepler is the most recent Nvidia chipset still supported by Apple.
@unijabnx20007 ай бұрын
which one is for fighting, which one is for fun???
@baddestmofoalive7 ай бұрын
lol I was actually looking at these on eBay this morning and thinking about this exact question
@CraftComputing7 ай бұрын
Wait for the Pascal benchmarks before buying 😉
@tonycrabtree34166 ай бұрын
Helldivers 2 uses 40% of my 9th gen i9 and 80%+ of my 3060 12GB.
@Sunlight917 ай бұрын
The 7840U has a higher multi-core score in PassMark than the Xeon E5-2697 v4. In single core, it's outright humiliation.
@AutieTortie5 ай бұрын
Take a shot every time he says "GPU power is split dynamically". Got pretty drunk.