Nvidia Tesla P100 vGPU Cloud Gaming Performance!

61,042 views

Craft Computing

Days ago

Comments: 272
@bad_dragon
@bad_dragon 7 ай бұрын
"not everyone is crazy enough to have a server rack in their garage" yeah i got mine in my bedroom LMAO
@bb2ridder757
@bb2ridder757 7 ай бұрын
yeah got mine in the attic
@blackryan5291
@blackryan5291 7 ай бұрын
bad_dragon...I'm freaking dead bro 😂
@marcogenovesi8570
@marcogenovesi8570 7 ай бұрын
not everyone is deaf and can do that
@TopHatProductions115
@TopHatProductions115 7 ай бұрын
lol same 😂
@SvRider512
@SvRider512 7 ай бұрын
Mine is in my room too!
@iraqigeek8363
@iraqigeek8363 7 ай бұрын
You can get a P100 for only $100, since at least last October, if you "Make an Offer." Not all sellers will accept it, but a few will. I bought 4 of them at $100 each last year.
@CraftComputing
@CraftComputing 7 ай бұрын
That's very good to know, as I might be picking a couple more of these up shortly...
@theangelofspace155
@theangelofspace155 7 ай бұрын
@@CraftComputing Well, maybe after this video that won't be a thing anymore 😢
@KiraSlith
@KiraSlith 7 ай бұрын
@@CraftComputing Every time a medium-large YouTuber makes a video, prices spike. I doubt they'll remain that accessible for too long now that you've published this vid. :P
@milescarter7803
@milescarter7803 7 ай бұрын
I got the 12GB for 90, d'oh
@stefanl5183
@stefanl5183 7 ай бұрын
Make sure you get the PCIe version and not the SXM2, unless you have an adapter or a server with SXM2 sockets. The SXM2 versions are cheap because of this.
@lewzealand4717
@lewzealand4717 7 ай бұрын
7:57 Oops, you compared Time Spy on the P100 VM to Time Spy *Extreme* on the Radeon 7600. The 7600 gets ~11,000 in Time Spy, or about twice the single-VM score shown.
@richardsontm
@richardsontm 7 ай бұрын
Would be great to see a P4 v P40 v P100 head to head. Having a blend of Cloud Gaming and Ollama performance would be interesting for those looking for a homelab/homegamer/AI tinkerer all-rounder too 👍
@CraftComputing
@CraftComputing 7 ай бұрын
P40 just arrived :-) In a couple weeks, I'm going to be testing out every GPU I have on hand for performance and power draw. Stay tuned...
@conscience_cat1146
@conscience_cat1146 7 ай бұрын
@@CraftComputing I've heard that the P100 doesn't have H.265 support, and only includes an H.264 encoder. If that is the case, then theoretically the P40 should look a lot better with Sunshine and Moonlight. Can you test this out and possibly confirm in your next video? This info will make or break which card I end up getting.
@richardsontm
@richardsontm 7 ай бұрын
@@CraftComputing We look forward to it - thank you for the fun content, it's always an interesting watch @ Craft Computing
@d0hanzibi
@d0hanzibi 7 ай бұрын
Yeah, a comparison across the dimensions of gaming, general workstation tasks, and LLMs would be really awesome.
@keylanoslokj1806
@keylanoslokj1806 7 ай бұрын
What is cloud gaming
@Chad_at_Big_CAT_Networking
@Chad_at_Big_CAT_Networking 7 ай бұрын
The cloud gaming aspect initially got my attention, but I think a lot of us are going to be more curious how they perform running Ollama at home. Looking forward to more of this series.
@fujinshu
@fujinshu 7 ай бұрын
Not all that great, considering they lack the Tensor cores introduced with Volta and Turing, which is a big part of why there's not a lot of support for Pascal and older GPUs.
@xpatrikpvp
@xpatrikpvp 7 ай бұрын
One thing to mention: the latest supported vGPU driver for the P100 (and other Pascal GPUs like the P4/P40) is version 16.4. They dropped Pascal support in the latest 17.0/17.1 releases.
@yzeitlin
@yzeitlin 7 ай бұрын
We are still patiently awaiting the Hyper-V homelab video you mentioned on talking heads! love the content
@TheInternalNet
@TheInternalNet 7 ай бұрын
This is really really exciting. Thank you for never giving up on this project. This is the exact card I am considering for ML/AI to run in my R720xd.
@SvRider512
@SvRider512 7 ай бұрын
I have a Tesla P4 in my 720xd
@d3yuen
@d3yuen 7 ай бұрын
We (my company) own 5 Dell PowerEdge 740s, each with two P100-16G-HBM cards, but we use VMware vSphere as our hypervisor. Going on 4+ years, they continue to be excellent and reliable cards - still in active service today for VDI. With Dell Premier Enterprise pricing, we got them at considerably less than MSRP. It's the ongoing support and maintenance paid periodically to Dell, VMware and NVIDIA that's the killer. Pro tip: it's important that you line up the driver versions from the hypervisor down to your guests. That is, the driver version in your guest must be supported by the driver running in the hypervisor. 😅
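A minimal way to sanity-check that alignment (hedged sketch; these are standard nvidia-smi query flags, but which guest/host branch pairings are actually supported still has to be confirmed against NVIDIA's vGPU release notes):

```bash
# On the hypervisor host (vGPU manager / host driver):
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Inside each guest VM (guest / GRID driver):
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# The guest driver branch must be one the host vGPU manager supports
# (e.g. a 16.x guest driver with a 16.x host manager); mixing branches is a
# common cause of vGPU guests failing to initialize the GPU.
```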
@th3r3v92
@th3r3v92 7 ай бұрын
I've been using my Tesla P4 with an Erying 12800H ITX board as a home server for almost a year now, and I absolutely love it. I have a Win10 vGPU VM running on it, primarily used by my girlfriend, but it's also great when friends come over for a quick LAN session. I was really disappointed when I found out a few weeks ago that NVIDIA dropped Pascal support from the GRID driver.
@joshklein
@joshklein 7 ай бұрын
Any video with Proxmox and GPUs, I love to watch! Keep them coming!
@chromerims
@chromerims 7 ай бұрын
Love the vGPU cloud gaming content 👍
@TheXDS
@TheXDS 7 ай бұрын
I love your vGPU content Jeff! You actually inspired me to build a small homelab with a P100, though the CPUs on my server are somewhat older than I would like (a pair E5-2690 v4 CPUs)
@blakecasimir
@blakecasimir 7 ай бұрын
The P100s and P40s are very commonplace from China, and inexpensive. They are both decent for running large language models as well. But it's still Geforce 10 era performance, so don't expect wonders.
@mikboy018
@mikboy018 7 ай бұрын
Just one of those could power an Unreal Tournament 2004 LAN party... Of note, there is also a 12GB P100 -- they don't perform terribly, either.
@MeilleureVie2024
@MeilleureVie2024 7 ай бұрын
All P100 eBay listings went up $50 since you posted this video hahaha
@smalle
@smalle 7 ай бұрын
These are pretty dang impressive. This might be the coolest/most approachable card you’ve shown so far!
@MyAeroMove
@MyAeroMove 7 ай бұрын
Best series! Love to watch such kind of tinkering!
@wayland7150
@wayland7150 7 ай бұрын
I want to build a Clown Gaming Server, I should be able to get a heck of a lot of Clowns in a Small Form Factor case.
@laberneth
@laberneth 7 ай бұрын
I'm impressed by what is possible today. I was the administrator of a community school in Germany 25 years ago. Time runs so fast. Informative video. Subscribed!
@joseph3164
@joseph3164 7 ай бұрын
Great video, always love to see the enterprise hardware for home server use. What are you using for licensing servers for these cards? Are you just using the 90 day trial from nvidia, or are you using some type of fastapi-dls server?
@fruitcrepes4875
@fruitcrepes4875 7 ай бұрын
Good to know all these Liquidations we've been doing for the P100s at the datacenter are going to good use! I've boxed up thousands of these bad boys and off they go
@victorabra
@victorabra 7 ай бұрын
For gaming I recommend a Tesla P40. It has GDDR5X RAM, but a better GPU frequency, and its 24 GB is good for AI too.
@rtu_karaidel115
@rtu_karaidel115 3 ай бұрын
I am playing games such as Rainbow Six Siege, Mortal Kombat II, etc. on my P100, and it is awesome!!! Moreover, with Moonlight + WireGuard + DualShock 4, I am now able to play all my PC games on my phone 😛 This is crazy, unbelievable, and a weird feeling... still can't believe it is possible!
@j_ferguson
@j_ferguson 7 ай бұрын
I absolutely need those BEER coasters. Also, I was very lucky to go to Block 15 during the last solar eclipse and had Hypnosis a cognac barrel aged barleywine. Their Nebula and Super Nebula stouts are way more common and still delicious though.
@nexusyang4832
@nexusyang4832 7 ай бұрын
Just looked up the spec sheet for that P100 and saw that the 16GB memory is what they called “Chip-on-Wafer-on-Substrate.” Very cool.
@ATechGuy-mp6hn
@ATechGuy-mp6hn 7 ай бұрын
I've seen the sponsor spot for the cloud gaming machine from Maximum Settings on this channel before, but it's kind of ironic that you're setting up your own gaming server afterwards.
@cabbose2552
@cabbose2552 7 ай бұрын
The PCIe power connector can usually deliver 200+ watts thanks to over-spec'd cables, but the standard only requires 150.
@d1m18
@d1m18 7 ай бұрын
11:56 Looking forward to the next video that compares full single-GPU performance.
@greenprotag
@greenprotag 7 ай бұрын
Time Spy (Standard) vs Time Spy (Extreme) results? I suspect you are closer to a standard run on a Ryzen 3600 + GTX 1060 @ 4,693 (Graphics Score 4,464 / CPU Score 6,625), but at that point I am splitting hairs. Your result is 2x playable gaming experiences on a single $150-180 enterprise GPU, WITH a nice automated script for setup. This is a nice alternative to the P4, especially if the user has only one x16 slot.
@b127_1
@b127_1 7 ай бұрын
3:10 Both GP100 and GP102 have 3840 cores. However, GP100 has only ever been sold with 3584 cores active, while you can buy versions of GP102 with the full die, like the P10, P40, P6000 and Titan Xp (but not the Titan X (Pascal); that has 3584, just like the 1080 Ti).
@ProjectPhysX
@ProjectPhysX 7 ай бұрын
GP100 is a very different microarchitecture despite the same "Pascal" name. It has a 1:2 FP64:FP32 ratio, in contrast to all other Pascal GPUs, which have 1:32. FP64 is only relevant for certain scientific workloads. Today there are only very few GPUs that can still do FP64 at a 1:2 ratio, and most are super expensive: P100, V100, A100, H100, B100, MI50, MI60, Radeon VII (Pro), MI100, MI210.
@subven1
@subven1 7 ай бұрын
I can't wait for SR-IOV to be available on Intel Arc! This would open up a more modern and potentially even cost-effective approach to cloud gaming and VDI solutions. Unfortunately, Arc SR-IOV support is currently only available in the out-of-tree driver.
@techpchouse
@techpchouse 7 ай бұрын
Great 💪🏽 thinking about an upgrade from the P4 to 10/100
@bkims
@bkims 7 ай бұрын
I believe the Tesla P40 is actually the same silicon as the Titan Xp, which I'd expect to be somewhat faster than the P100 for gaming. To the best of my knowledge, the P100's superior memory performance is really only meaningful for things like AI inferencing workloads. Either way, both cards are around the same price these days, which is pretty cool. Edit: though perhaps the extra bandwidth is beneficial for cloud gaming; I don't have any experience there.
@ProjectPhysX
@ProjectPhysX 7 ай бұрын
Yes, the P40 and Titan Xp are identical, except for the doubled 24GB capacity on the P40. The P100 is a very different microarchitecture from the rest of Pascal: it supports FP64 at a 1:2 ratio. All other Pascal cards only do a 1:32 ratio.
@insu_na
@insu_na 7 ай бұрын
I have a P100 in my old HPE server, but I've stopped using it because the P100 doesn't have P-states, meaning it can't do any power-saving mode. In practice, when no load is applied, this means the GPU idles at 30-40W, which still isn't awful, but when you compare it with other GPUs even from the same generation (such as the P40), which can idle at 5-8W, it's quite the difference (I live in Germany and electricity costs actual money here). That's on top of my server's already high idle power. My EPYC GPU server ***idles*** at 200W without any GPUs installed, so that's a thing...
@JoshuaBoyd
@JoshuaBoyd 7 ай бұрын
While you are working on those benchmarks, I'd love to see something done with LLMs, say a quick test or two using Mistral and TinyLlama on each card?
@JPDuffy
@JPDuffy 7 ай бұрын
I picked up a cheap Tesla P4 a while back to play with headless gaming. That never worked out too well as streaming seemed to cut too deeply into the tiny 75W max TDP. Instead I have it in an old Haswell Xeon with Windows 11 and with the clocks unlocked it's fantastic. I'm getting well over 60FPS in everything I play, often 90+ at high/med settings. I'd have gotten a P100, but after trying to cool an M40 and failing a couple years ago I decided to keep it simple. Take the shroud off the P4, strap on a fan and you're done.
@ProjectPhysX
@ProjectPhysX 7 ай бұрын
The P100 is a beast in FP64 compute, it smokes all of the newer cards there, 3.7x faster than even an RTX 4090. P100 is a very different microarchitecture from the rest of Pascal cards, with 1:2 FP64:FP32 ratio. Today this is only seen in A100/H100 data-center cards which cost upwards of $10k. FP64 is required for certain scientific workloads like orbit calculation for satellites.
@EvertG8086
@EvertG8086 7 ай бұрын
I’m using an Intel ARC 750 for mine. Works really well.
@nyavana
@nyavana 7 ай бұрын
Really hope Unraid gets proper vGPU support. I have a P100, but since there is no easy way to get vGPU working, I can only use it for stuff like transcoding.
@computersales
@computersales 7 ай бұрын
I never have understood the preference for the P100 over the P40. My only assumption is that the higher memory bandwidth of the P100 is beneficial for AI workloads.
@czarnicholas2k698
@czarnicholas2k698 7 ай бұрын
I guess I leave this comment as a heads up that PVE 8.2 (Kernel 6.8) breaks nvidia driver compile, and I'll add that I'd love to see a video about how to fix it.
@tylereyman5290
@tylereyman5290 6 ай бұрын
has there been any word on how to work around this?
@xmine08
@xmine08 7 ай бұрын
That ultra fast memory makes it interesting for LLMs, could you try that? It's only 16GiB, but really fast at that and cheap so might be a solution for some!
@ewenchan1239
@ewenchan1239 7 ай бұрын
Two questions: 1) Are you able to play Halo Infinite with this setup? 2) What client are you using to connect to your system remotely? I am asking because I tried Parsec, and even with an actual, real monitor connected to my 3090, it still stuttered a lot. Thank you.
@NecroFlex
@NecroFlex 7 ай бұрын
I was lucky about 2 years ago: snagged a Tesla M40 24GB and a Tesla P100 for around 90€ for both; the seller had them listed as unknown condition. Got both of them running just fine, the M40 with a modified registry to see it as a high-performance GPU, the P100 with a bit more driver fuckery to get it to work. Have been thinking of flashing the P100 to a Quadro GP100 to see if I can use it like that with just a reg edit as well, but no luck so far.
@BrunodeSouzaLino
@BrunodeSouzaLino 7 ай бұрын
The EPS connector is so much better at delivering power that the RTX A6000 only needs a single connector, instead of multiple 8-pin connectors or that abomination that is the 12VHPWR connector. And its TDP is only 100W less than the 4090's.
@hugevibez
@hugevibez 7 ай бұрын
Tbh this mostly verifies that having a dedicated GPU per VM might still be the best option if they are all going to be gaming. Sure, this is a much more cost-effective option than 4 1080s or whatever, but moving up the scale, having 4 RTX 4090s is much more cost-effective than an L40/RTX 6000 Ada (with the sweet spot being somewhere along the way), and you gain a lot of performance. I do wish it was easier to have a GPU serve both a VM and your containers at the same time on modern consumer GPUs.
@brahyamalmonteruiz9984
@brahyamalmonteruiz9984 7 ай бұрын
Jeff, how did you cool the P100? I've seen your video on cooling the Tesla GPUs, but which option did you use in this video?
@calypsoraz4318
@calypsoraz4318 7 ай бұрын
As a fellow Oregonian, how do you mitigate the humidity in your garage? Or is it not bad enough to affect the rack?
@xerox9426
@xerox9426 7 ай бұрын
It's actually not correct that the vGPU drivers are limited to 60 FPS. You can disable the Frame Rate Limiter (FRL) by changing the scheduling from best-effort to another profile via nvidia-smi on your host. I tested this myself; have a look at the docs. This has an impact on general performance within a VM, as each one gets a fixed or equal share: 2 VMs get a fixed 50% each, 4 get 25%.
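For reference, a minimal host-side sketch of that change. NVIDIA's vGPU documentation describes setting the scheduler via the RmPVMRL registry key as a module parameter; the file name and chosen policy value below are examples, so check the docs for your driver branch:

```bash
# Hedged sketch: switch the vGPU scheduler away from best-effort, which also
# removes the frame-rate limiter. Per NVIDIA's vGPU docs, RmPVMRL selects the
# scheduling policy: 0x00 = best effort (default), 0x01 = equal share,
# 0x11 = fixed share (time-slice variants also exist).
echo 'options nvidia NVreg_RegistryDwords="RmPVMRL=0x01"' > /etc/modprobe.d/nvidia-vgpu-sched.conf

# Rebuild the initramfs if the nvidia module is loaded from it, then reboot
# so the vGPU manager picks up the new policy.
update-initramfs -u -k all
reboot
```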
@recoveryguru
@recoveryguru 7 ай бұрын
That computer at the beginning has got to be from about 2001, doubt it's going to work with Maximum Settings. 🤣
@denvera1g1
@denvera1g1 7 ай бұрын
Ignore me, I forgot GP100 was not just a larger Pascal die with everything enlarged. Though it is ~610mm² like a normal Titan/80 Ti die, it only has the same number of shaders as the ~471mm² Titan Xp, the card that would have been the Titan XP (and possibly the 1080 Ti) had AMD been able to compete with Pascal. Instead, AMD's 500+mm² die with HBM-based memory couldn't compete with the 300mm²+ 60-class GTX 1080 with GDDR-based memory, though it did have a 30% wider bus than a normal 60-class card.
@TheJimmyCartel
@TheJimmyCartel 7 ай бұрын
I love my Nvidia p40!
@MrButuz
@MrButuz 7 ай бұрын
It would be great if you could do a chart with the bare-metal score in the game, then 1 VM running the game, then 2 VMs with one running the game and one running the Heaven benchmark. That would let us know what we're in for in all scenarios.
@ericspecullaas2841
@ericspecullaas2841 7 ай бұрын
I have 2 of them. I was thinking of getting 2 more, a second CPU, and a new motherboard. If you want to run a local LLM, 2 of them work really freaking well. When I was running a local LLM, the response time was insane. I'm talking within 1 second before it generated a response. Yeah, I have 256GB of RAM, so that helps.
@rudypieplenbosch6752
@rudypieplenbosch6752 7 ай бұрын
Wow, this is great info. I just finished my EPYC Genoa build and was looking for a proper way to get graphically performant VMs, amazing 👏. Does this also work for Linux VMs?
@EyesOfByes
@EyesOfByes 7 ай бұрын
So... are there any specific games that benefit from increased memory bandwidth?
@titaniummechanism3214
@titaniummechanism3214 7 ай бұрын
2:19 Well, the PCIe 8-pin is RATED for 150 watts, but I think it's widely accepted that it is in fact much more capable than that.
@SpreadingKnowledgeVlogs
@SpreadingKnowledgeVlogs 7 ай бұрын
Amazon Luna uses Tesla cards for their cloud gaming service. I believe it's the T4, but I can't remember which one exactly; I can double-check. The way you find out is by running a benchmark in some games: it will list the specs. They use crap Xeon CPUs though.
@SvRider512
@SvRider512 7 ай бұрын
I'd like to see Jeff get a hold of some of those Intel flex GPUs.
@CraftComputing
@CraftComputing 7 ай бұрын
Me too fam. Me too.
@Zozzle
@Zozzle 7 ай бұрын
Have you seen that 12GB Tesla M40s are like $50 on eBay?
@CraftComputing
@CraftComputing 7 ай бұрын
Yes I have! Tesla M60s too! I'm going to be re-reviewing those cards shortly.
@Jimmy___
@Jimmy___ 7 ай бұрын
Love to see this budget rack content!
@rooksisfuniii
@rooksisfuniii 7 ай бұрын
Remembering I have one of these in my dormant Proxmox box. Time to fire it up.
@kraften4534
@kraften4534 7 ай бұрын
The GP100 chip is almost only used in the P100, though there is also the Quadro GP100 card.
@markthompson4225
@markthompson4225 7 ай бұрын
You forgot about video rendering in Jellyfin, Plex and Emby...
@lil.shaman6384
@lil.shaman6384 7 ай бұрын
I don't have friends. Can you finally make a video about using multiple vGPUs for one host, or a cluster of Proxmox nodes automatically scaling based on a single VM's load?
@LetsChess1
@LetsChess1 7 ай бұрын
Have you done anything with the Tesla P40? I was wondering how different the performance is between the P40 and the P100.
@AerinRavage
@AerinRavage 7 ай бұрын
8:32 For two of them, Crysis was right there =^.^=
@EyesOfByes
@EyesOfByes 7 ай бұрын
Getting one this week. I hope.
@fishywtf
@fishywtf 7 ай бұрын
If you play something competitive, just know that anti-cheats will detect your system's virtualization unless you harden it.
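For anyone wondering what "hardening" can look like on Proxmox, a minimal sketch (no guarantee this satisfies any particular anti-cheat, and it may violate a game's terms of service):

```bash
# /etc/pve/qemu-server/<vmid>.conf  (<vmid> is a placeholder)
# 'hidden=1' passes kvm=off so the guest no longer sees the KVM CPUID signature.
cpu: host,hidden=1
# Some guides additionally spoof the SMBIOS serial/UUID and the Hyper-V vendor
# string via an 'args:' line; how far you need to go depends on the anti-cheat.
```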
@hazerdoescrap
@hazerdoescrap 7 ай бұрын
Just wondering if you could "expand" on the use-case testing when you build out the Master List? For example, I have a GPU stuffed in a system that I'm going to be using as an encoder rig for H.264/H.265 encoding (because NVIDIA charges too much for AV1 right now), and I'm wondering how that would affect the GPU performance? Or if I were executing LLM testing via Ollama at the same time someone was running a game...
@DanteTheWhite
@DanteTheWhite 7 ай бұрын
When it comes to testing Ollama, you need enough VRAM to hold the whole model at once; otherwise some model layers are delegated to the CPU, which hinders performance considerably.
@hazerdoescrap
@hazerdoescrap 7 ай бұрын
Yeah, I fired it up against a 32-core EPYC server... it was not pretty... Would be interesting to see how the GPU balancing handles the RAM juggling for that kind of load when split with other non-LLM functions...
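A quick, hedged way to check whether a model actually fits in the P100's 16 GB while Ollama is serving it (`ollama ps` needs a reasonably recent Ollama build):

```bash
# Watch VRAM usage on the card while a prompt is being generated:
nvidia-smi --query-gpu=memory.used,memory.total --format=csv

# Ask Ollama how the loaded model is split: "100% GPU" means no layers
# spilled to the CPU, while a CPU/GPU split means token generation will
# slow down considerably.
ollama ps
```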
@RebellionAlpha
@RebellionAlpha 7 ай бұрын
Awesome, just got a Tesla M60 today!
@nathanigo4614
@nathanigo4614 3 ай бұрын
Did you have any luck getting the nvidia drivers working with that M60? I've been struggling to get the vGPU driver modules to load.
@RebellionAlpha
@RebellionAlpha 3 ай бұрын
@@nathanigo4614 yeah I got it working on PopOS, but I was unable to use the VGPU part due to the licensing.
@idied2
@idied2 4 ай бұрын
Using this with only one VM would be ideal for those who have a somewhat older laptop.
@remcool1258
@remcool1258 7 ай бұрын
I would love to see how it compares with a p40
@jorknorr
@jorknorr 7 ай бұрын
o lord he drinks beer again!!
@gustavgustaffson9553
@gustavgustaffson9553 7 ай бұрын
Got myself a P40 for the exact same reason 2 weeks ago.
@blakecasimir
@blakecasimir 7 ай бұрын
They are also decent for LLMs, not fast but the 24GB VRAM helps a lot.
@gustavgustaffson9553
@gustavgustaffson9553 7 ай бұрын
@@blakecasimir I‘ve used them in Stable Diffusion and they work pretty well. Pretty much the cheapest way to get that amount of Vram
@Dragonheng
@Dragonheng 7 ай бұрын
I like the principle of cloud gaming, but modern anti-cheat programs are causing more and more problems.
@badharrow
@badharrow 7 ай бұрын
Has anyone else encountered the issue where, when Proxmox is installed on a RAID (ZFS), you can't enable IOMMU within GRUB?
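If this is the ZFS-root case, a minimal sketch of the usual fix (assuming the install boots via systemd-boot, which ignores /etc/default/grub; the root= arguments shown are only an example):

```bash
# Kernel arguments for a systemd-boot Proxmox install live in /etc/kernel/cmdline,
# all on a single line; keep the existing root= arguments and append the IOMMU flags:
cat /etc/kernel/cmdline
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt

# Re-generate the boot entries and reboot, then confirm the IOMMU came up:
proxmox-boot-tool refresh
reboot
dmesg | grep -e DMAR -e IOMMU
```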
@DavidAshwell
@DavidAshwell 7 ай бұрын
Looks over at the P4... You still up to this challenge?
@mechanizedMytha
@mechanizedMytha 7 ай бұрын
I wonder if it would be worth it to use this to upgrade an aging PC... it's most definitely going to be hard powering it... how did you end up solving power delivery, in fact?
@tylereyman5290
@tylereyman5290 6 ай бұрын
How did you get around the degradation that occurs after 20 minutes of use without a purchased license?
@yoghurrt1
@yoghurrt1 7 ай бұрын
Oh, sorry for this delay, algorithm. Totally forgot to leave this comment while I was watching this video yesterday.
@mastermoarman
@mastermoarman 4 ай бұрын
I'm debating the P40, P100 and Quadro A4000 for transcoding and CodeProject AI for security camera image recognition.
@CodingTheSystem
@CodingTheSystem 7 ай бұрын
You should test the P40, it's going way down in price as well :)
@CraftComputing
@CraftComputing 7 ай бұрын
My P40 arrived yesterday ;-)
@CodingTheSystem
@CodingTheSystem 7 ай бұрын
@@CraftComputing Awesome! Also, I'm not sure if you ever were aware but there were community made modded drivers for the hacked laptop to desktop 30 series cards like the 3070TiM you had. I can't remember the name but maybe they'd be worth trying to see if you could get it running?
@DangoNetwork
@DangoNetwork 7 ай бұрын
I am replacing my P100 with P40. Much better in many ways for general usage. And you can mod it with a 1080 cooler.
@tiger.98
@tiger.98 7 ай бұрын
Any link on the mod please?
@SaifBinAdhed
@SaifBinAdhed 7 ай бұрын
I was on this all day and I still have no output when I do mdevctl types... I don't want to give up yet. I'm very new to Proxmox, and I am doing it on a fresh install with an Intel i7-7700K and an RTX 2060 Super, which is supposed to be compatible. I run the script, all goes well, but no mdev.
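A few hedged checks that usually narrow this down (assuming the vGPU-unlock route, since an RTX 2060 Super is a consumer card and needs the unlocked/merged driver before mdev profiles appear):

```bash
# The vGPU manager module has to be loaded for mdev types to exist:
lsmod | grep nvidia_vgpu_vfio

# Look for driver or unlock errors during module load:
dmesg | grep -i -e vgpu -e nvidia

# The host driver must see the card at all:
nvidia-smi

# If the above are healthy, the profiles should now show up:
mdevctl types
```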
@NdxtremePro
@NdxtremePro 7 ай бұрын
We can install beer into the computer now avoiding the middleman? AI's of the world, UNITE!
@whyomgwhywtf
@whyomgwhywtf 7 ай бұрын
Let me get them P4 numbers with the new unlock script and Proxmox 8.1... I need an excuse to rebuild my R620 lol
@bradnoyes7955
@bradnoyes7955 7 ай бұрын
How would the P100 perform doing AI video upscaling (such as Video 2X)? I've got several DVD rips (even a couple VHS 'rips') I'd like to try to AI upscale for my Plex server; so if I can throw one of these in my Proxmox box (an HPe DL360p Gen8 SFF) and setup a batch to churn through them without hogging up my gaming machine, that would be nice.
@DarrylAdams
@DarrylAdams 7 ай бұрын
So when will NVIDIA release Grace Hopper vGPU? The upscaling alone will be worth it
@CraftComputing
@CraftComputing 7 ай бұрын
Never. Hopper doesn't support graphics APIs.
@Kajukota
@Kajukota 7 ай бұрын
Man, I wish the Tesla P10s would drop in price. That could be a low-power powerhouse for cloud gaming and LLMs.
@CraftComputing
@CraftComputing 7 ай бұрын
Can't wait until P10s become affordable.
@arjanscheper
@arjanscheper 6 ай бұрын
Any chance we can see an updated vGPU tutorial? I'm using the proxmox-vgpu-v3 script, but cannot seem to get the P4 (or any GPU) working with Plex HW transcoding in a Proxmox Ubuntu VM. I can see them in nvidia-smi in the VM, but HW transcoding keeps failing, and the old guide just spews out errors, etc., as most guides on YT are 3/4 years old already. Just want to set up a fileserver > Plex, and then optionally a Windows gaming VM or Home Assistant.
@user-hv5jv9gb6c
@user-hv5jv9gb6c 7 ай бұрын
Would a P100 have any advantage over an RTX 3060 or 4060 in Stable Diffusion?
@AlexKidd4Fun
@AlexKidd4Fun 7 ай бұрын
The more modern consumer cards (30xx or 40xx) would be way better for AI than what is being discussed here.
@ProjectPhysX
@ProjectPhysX 7 ай бұрын
There is no advantage for AI; AI needs low/mixed-precision matrix acceleration, which the P100 lacks. The P100 smokes the newer cards in scientific FP64 workloads though. It's 3.7x faster in FP64 than even the RTX 4090.
@user-hv5jv9gb6c
@user-hv5jv9gb6c 7 ай бұрын
@@ProjectPhysX Ah ok, I understand now. Thank you....
@MickeyMishra
@MickeyMishra 7 ай бұрын
This cloud gaming thing almost makes sense now, as long as you have a stable wired or fiber connection to a PC for your family. You can use ho-hum hardware on the user side that can display the video, but all the hard work and heavy lifting can be done on the server side.
To put this into perspective, imagine running a server at your home, and you have, say, your sister's kids who want to game, but they happen to be TERRIBLE at protecting their PC or Chromebook from things like hot chocolate being spilled on it. You can purchase them whatever is on sale at Best Buy for $200 or less (or old Chromebooks for chips and cheese) and still give them not only access to a family library of games, but top-flight hardware you mostly don't even utilize at home, over a fast internet connection.
The real bonus here is that a ho-hum Intel GPU at $70 in ANY PC made in the last 10 years can do the job at 1080p. Even 4K output is possible on the client end without too much strain. This makes the reuse of old hardware a real thing for the future. Most PCIe 3.0 buses in these machines are more than capable of displaying video and taking input from controllers.
Where I have used something like this is Microsoft's Remote Desktop software on an old Asus C100P Chromebook. I even did basic video editing over the internet to my home PC, and other work from a coffee shop, a hotel, or just on the road. (Do not try this in California, YOU HAVE BEEN WARNED)
@wayland7150
@wayland7150 7 ай бұрын
This is cool and I have a question. How does this differ in practice from using two dedicated GPUs? For example, I have a Proxmox machine with two A380 cards, each dedicated to an auto-starting Windows 10 VM. Yes, two physical Windows PCs in one SFF case. Each VM has to have its own hardware dedicated to it. Yes, another VM could use the same hardware, but the first one would have to shut down first. In the P100 setup, do you have to dedicate a particular slice of the P100 to a particular VM, like I'm doing with my two PCIe GPUs? Following on from that, would it be possible to have a number of non-running VMs that could be started in any order?
@lilsammywasapunkrock
@lilsammywasapunkrock 7 ай бұрын
You are talking about vGPU. These video accelerators can be split up evenly, meaning you can allocate resources to VMs in multiples of two, and each VM will think it has its own graphics card. With 16GB of total memory, split once that would be two 8GB "video cards", or four 4GB cards, etc. The VRAM is reserved for each VM, but the host allocates compute to each VM dynamically. If you only have one VM running, it will see slightly lower than 100% processing utilization. If you have a second VM running, it does not split an even 50/50 unless both VMs are asking for 100% usage: one could just be playing a YouTube video, for example, and won't need more than 10%, while the other could be playing a game using 90%. I encourage you to watch Jeff's other videos.
@wayland7150
@wayland7150 7 ай бұрын
@@lilsammywasapunkrock Yes, I comprehend what you've said, but I'm still confused by the splitting up. For instance, I had a Proxmox machine with an HD 5950; this card was actually two GPUs, and I split it up with one VM dedicated to the first GPU and the other to the second GPU. Never the twain shall meet. If, for instance, I had another VM that could use GPU1, I could only run it when GPU1 was not being used by the first-mentioned VM. So with this P100, is one slice of it dedicated to the particular VM I set it up with? Or will it pick what it needs dynamically, like it does with system RAM? When setting up a VM, I don't pass through a particular RAM chip.
@tanmaypanadi1414
@tanmaypanadi1414 7 ай бұрын
@@wayland7150 The VRAM is not dynamically allocated; it is predetermined. But the actual CUDA cores running the workload work in parallel across the VMs, as long as the VRAM is sufficient.
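To make that concrete, a hedged Proxmox sketch: each VM is bound to a vGPU (mdev) profile instance rather than to a whole card, so any number of defined VMs can be started in any order as long as free instances of that profile remain. The PCI address, VM IDs, and profile name below are placeholders:

```bash
# List the vGPU profiles the P100 exposes and how many instances are available:
mdevctl types

# Attach one profile instance per VM; both VMs share the same physical card:
qm set 101 -hostpci0 0000:41:00.0,mdev=nvidia-333
qm set 102 -hostpci0 0000:41:00.0,mdev=nvidia-333
```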
@itzsnyder7271
@itzsnyder7271 7 ай бұрын
How much is the power consumption?
@CraftComputing
@CraftComputing 7 ай бұрын
Heaven benchmark consumed around 100W (60 FPS cap). Both VMs utilized 100% of the GPU, and power use was between 180 and 200W. The card technically has a 250W TDP.
@itzsnyder7271
@itzsnyder7271 7 ай бұрын
@@CraftComputing That's completely reasonable! Wow! Considering building the same server with this wattage usage. Can you say something about the idle consumption?
@CraftComputing
@CraftComputing 7 ай бұрын
Idle was around 25W IIRC.
@liamgibbins
@liamgibbins 3 ай бұрын
Mine's not in the garage but next to the TV in the lounge, on my R730. A tad loud, but it heats my home OK.
@ezforsaken
@ezforsaken 7 ай бұрын
Question! Are there any modern non-NVIDIA options for doing "vGPU" (a shared PCIe graphics card)? AMD? Does Intel allow SR-IOV on their stuff for sharing? Just asking!
@Jeremyx96x
@Jeremyx96x 7 ай бұрын
Can a standard CPU be usable for "cloud" gaming? I have my old 2600X and was wondering if I can get an M40 or similar to pair with it.
@KasperZzz22
@KasperZzz22 4 ай бұрын
I wrote a guide on how to get this thing running. If anyone is interested, write me.