Yes, It’s Real: PCI Express x32

352,391 views

Techquickie · 1 day ago

501 comments
@marcosousa336 · 7 months ago
Scooby Doo and the gang unmasking this ghost as SLI/Crossfire
@Tech-WonDo · 7 months ago
Ikr? idk why it had to be a whole vid
@The_Prizessin_der_Verurteilung · 7 months ago
@tech-wondo4273 "Money! Ak yakyakyakyak"
@prawny12009 · 7 months ago
Aren't those limited to x8 x8?
@SterkeYerke5555 · 7 months ago
@@prawny12009 Not necessarily. It depends on your motherboard
@brovid-19 · 7 months ago
I award you seven ahyuks and a guffaw.
@anowl6370 · 7 months ago
PCIe does support x32 single-link devices, even if it doesn't come as a single socket. It is specified in the PCI Express capability structure. There is also x12
@steelwolf411 · 7 months ago
Also x24 in some high end stuff.
@shanent5793 · 7 months ago
No one ever used it, hence the removal from the latest revision
@steelwolf411 · 7 months ago
@@shanent5793 It was used in Cisco UCS for some VICs as well as other things. Also I believe it was used by IBM for specific cryptography accelerators.
@cameramaker · 7 months ago
@@steelwolf411 there is no x24 in the spec. Some Phi MXM cards claimed x24, but they were actually running in 2x12, 3x8, or 6x4 mode.
@bootchoo96 · 7 months ago
I'm just waiting on x64
@JohnneyleeRollins · 7 months ago
x16 is all you’ll ever need - bill gates, probably
@buff9267 · 7 months ago
turns out bill is lame
@74_Green · 7 months ago
hahaha
@Nightykk · 7 months ago
Based on a quote he never said - possibly, probably.
@lexecomplexe4083 · 7 months ago
PCI didn't even exist yet, let alone PCIe.
@chovekb · 7 months ago
Sure, like 16 x x16, it's like a 16-core CPU LOOOL
@Vade420 · 7 months ago
Thank you Mr. Handsome Mustache man
@drummerdoingstuff5020 · 7 months ago
Grinder called…Jk😂
@realfoggy · 7 months ago
His wife would agree
@ImMadHD · 7 months ago
He really is so cute 🥰
@Blox117 · 7 months ago
i was thinking of the other guy when you said mustache man
@Fracomusica · 7 months ago
Lmao
@dennisfahey2379 · 7 months ago
x32 and beyond are very common in ultra-high-end modular servers. If you look at the server manufacturer Trenton Systems, they have massive PCI-E array capability. Of course it's still PCI-E, a migration from PCI, and that has its bottlenecks, but when you want parallelism they do it very well. (Not affiliated; just impressed)
@shanent5793 · 7 months ago
You are mistaken, there has never been an implementation of x32, which is why it was deleted from PCIe 6.0
@sakaraist · 7 months ago
@@shanent5793 Weird, then why do I have an x32 NIC on my desk? It just wasn't used in consumer boards; it very much exists in the commercial space. You often find them as riser cards; x48 is the highest I've personally dealt with. I've also got an x32 FPGA dev kit sitting at my bench.
@shanent5793 · 7 months ago
@@sakaraist If they were referring to the total number of lanes, then this wouldn't be noteworthy because RYZEN Threadripper consumer boards have had more than 32 lanes for several years already, but they're never referred to as PCIe x32 devices. Riser cards are just glue, not end devices and are out of scope. In the case of NICs, they may have two x16 ports that can be connected to different sockets in a system to save inter-socket bandwidth, but PCIe will still treat them as two separate devices. FPGAs could of course be programmed to implement PCIe x32, but if you want to use the hardened PCIe IP it will still be x16. If your devices have actually negotiated a PCIe x32 link at the hardware level, I would love to know the part numbers because even PCI-SIG doesn't know about them and they're definitely not off-the-shelf
@jnharton · 7 months ago
@@shanent5793 This needs more upvotes, honestly. Just because the slot can carry 32 lanes doesn't mean there must be any true 32-lane devices. It makes perfect sense that you might make a single board that is a carrier for more than one device and use a single slot, especially in an industrial context where one larger slot might be better than a bunch of extra slots and little cards everywhere. Kind of a throwback to the days of large card-edge connectors for parallel buses, only using each signal line as a separate communications lane.
@robertmitchell5019 · 7 months ago
@@shanent5793 Wow, did you watch the same video I did? At 3:30 they show Nvidia cards using x32 (off the shelf, BTW). They call it InfiniBand because NVIDIA. And yes, I know InfiniBand is the communication standard that uses the PCIe x32 specs, just like NVMe is the communication standard that uses PCIe x4.
@marcosousa336 · 7 months ago
This just sounds like SLI/Crossfire with extra steps
@Arctic_silverstreak · 7 months ago
Well, SLI is used for synchronizing GPUs, while this is just a fancy name/way to aggregate high-speed network cards
@StrokeMahEgo · 7 months ago
Don't forget NVLink haha
@dan_loup · 7 months ago
A pretty good slot to put your Virtua Fighter cartridge in
@thepinktreeclub · 7 months ago
ha! good one
@PrairieDad · 7 months ago
Riley Yoda needs to be a regular thing.
@2muchjpop · 7 months ago
SLI and Crossfire failed back then, but with modern high-speed interconnect tech, I think we can bring it back.
@zozodj2r · 7 months ago
When it comes to gaming, it wasn't about the interconnect. It was about the sync between the two which had frame lag.
@christophermullins7163 · 7 months ago
SLI or Crossfire will never make sense. It didn't back then, as it's difficult to get working, much less working smoothly. The best case is to have all chips and memory as close to one another as physically possible. Considering we regularly see a 30-70% uplift in GPUs just 1.5 years later, you're better off throwing out your old flagship and getting the new one than trying to mate two together. It will use more than 2x the power and deliver much less than 2x the performance. I get that this was probably mostly a joke, but I am just here to bring the real world to the discussion.
@illustriouschin · 7 months ago
Marketing just needs a way to spin it and we'll be buying 2-4 cards again for no reason.
@WilliamNeacy · 7 months ago
Yes, I'm just not happy buying one $1000+ GPU. I want to have to buy multiple $1000+ GPUs!
@shanthoshravi5073 · 7 months ago
Nvidia would much rather you buy a 1200 dollar 4080 than two 300 dollar 4060s
@CoopersHyper · 7 months ago
1:30 the binary says: "Robert was herrr" 🤓
@robertm1112 · 7 months ago
nice
@DodgerX · 7 months ago
Hey Robert @@robertm1112
@Trident_Euclid · 7 months ago
🤓
@carabooseOG · 7 months ago
How do you have that much free time?
@CoopersHyper · 7 months ago
@@carabooseOG I don't lol, I just put it in a binary-to-text translator lol
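(For the curious: the binary-to-ASCII trick these comments mention is a one-liner. A minimal sketch in Python; the example bit string below is hypothetical, not the video's actual on-screen binary.)

```python
def binary_to_text(bits: str) -> str:
    """Decode a space-separated string of 8-bit binary values into ASCII text."""
    return "".join(chr(int(byte, 2)) for byte in bits.split())

# Hypothetical example string, not the binary shown in the video:
print(binary_to_text("01001000 01101001"))  # prints "Hi"
```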
@5urg3x · 7 months ago
MSI tech support are the worst in the industry. You know what they told me? This is verbatim: “We don’t troubleshoot incompatibility”
@vickeythegamer7527 · 7 months ago
😂
@maxstr · 7 months ago
Really?? In the past, MSI has always had the best warranty and repair service. I had a video card that was displaying weird corrupt garbage after like 6 months, and they replaced it at no cost. I had an MSI laptop that I smashed the screen by shutting the lid on a pencil, and MSI replaced the screen under their one-time replacement warranty. But that was years ago, so I'm guessing things have changed?
@simongreen9862 · 6 months ago
I don't know; my 2017 AM4 motherboard is still getting BIOS updates as of January 2024, which was necessary for me to swap the original 1080Ti with a new 4070 I got last month.
@5urg3x · 6 months ago
@@simongreen9862 Can we take a moment and ask the question: why the fuck isn't UEFI/BIOS firmware open source? It really should be.
@simongreen9862 · 6 months ago
@@5urg3x I agree with you there!
@Xaqaria · 7 months ago
The Mellanox NICs also allow them to be connected to PCIe lanes from both CPUs. It levels out the network latency by not requiring ½ of the traffic to jump an interprocessor link to get to the NIC.
@JonVB-t8l · 7 months ago
... You telling me I don't need it? I'm an American. I don't need multiple 64-thread Ryzen EPYC servers. But I got 'em, and they got 128 PCIe lanes each!
@4RILDIGITAL · 7 months ago
Simplifying complex tech stuff like PCI Express x32 - just brilliant. Keep up the informative and clear tech explanations.
@sshuggi · 7 months ago
That just sounds like SLI with extra steps.
@TheHammerGuy94 · 7 months ago
Without the proprietary connector
@eliadbu · 7 months ago
Why do you people keep comparing it to SLI? It has nothing to do with SLI. It is more like link aggregation.
@TheHammerGuy94 · 7 months ago
@@eliadbu SLI needs both the PCIe lanes and an extra SLI bridge to enable faster data transfer between the cards. But this was from the time when PCIe wasn't fast enough for Nvidia's standards. Now with PCIe 4 and 5 being as fast as they are, we mostly don't need the SLI bridge anymore. Keyword: mostly. But in simpler terms, x32 lanes is more like using RAID 0 on storage.
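The RAID 0 comparison above can be sketched as round-robin striping of a byte stream across lanes. This is a toy model for intuition only, not the actual PCIe data-link protocol:

```python
from itertools import zip_longest

def stripe(data: bytes, lanes: int) -> list[bytes]:
    """Deal bytes out round-robin across N 'lanes', RAID-0 style."""
    return [data[i::lanes] for i in range(lanes)]

def unstripe(parts: list[bytes]) -> bytes:
    """Interleave the per-lane slices back into the original stream."""
    return bytes(b for group in zip_longest(*parts) for b in group if b is not None)

payload = b"x32 is just more lanes in parallel"
assert unstripe(stripe(payload, 4)) == payload  # round-trips cleanly
```

The point of the analogy: each lane carries a slice of the stream, and total bandwidth scales with the number of lanes, but the receiver must reassemble the slices in order, which is where the synchronization problem comes in.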
@eliadbu · 7 months ago
@@TheHammerGuy94 In SLI, PCIe is used to communicate with both devices at the same time as they both work in unison to render alternating frames. This is more like having a second card whose whole purpose is to pass the communication to the main card, so it is not like RAID 0; with RAID 0, both devices are part of an array and are equals. We don't need the SLI bridge anymore because SLI is pretty much a dead technology.
@Riviqi · 7 months ago
0:18 You guys worked really hard on this shot; you probably should’ve stayed on it longer. 😂
@theloudestscreamiveeverscrem · 7 months ago
So... This is just SLI?
@Dr2Chainz · 7 months ago
Had the same thought hah!
@EvanTech-v3q · 7 months ago
No, it is not
@raycert07 · 7 months ago
SLI for non-GPUs
@Arctic_silverstreak · 7 months ago
Not really, just a fancy name for link aggregation for, mostly, network cards
@ManuFortis · 7 months ago
Kind of, but not really. It uses similar methods, but it's not exactly the same. This is probably closer to what is done on AMD's workstation cards, where you can attach a display sync module between multiple workstation GPUs to output a single monitor signal, with the AMD FirePro S400 Sync Module for instance. (Nvidia has their own version, but I don't know the details.) If you look at the card shown by Riley in the video, you'll see that cable connecting them. I'm not sure of its exact connector specs, but it will be somewhat similar in nature to the JU6001 connector that can be found on the AMD WX series cards. Sometimes it's populated with an actual socket/port; sometimes not.

Essentially, if I understand correctly, instead of the cards sharing the intended workload between them, they are all doing their own work, or perhaps shared work in some cases, and outputting it all to the same monitor. It's a subtle but important difference, because SLI/Crossfire is typically used for splitting workloads between GPUs to get a better end result, whereas display sync (as I will call it for now) is more about combining separate or even shared workloads into a single tangible visual result. That sync card is effectively doing what Riley explained about the x32 setup and the asynchronous data streams typical of PCIe compared to when they are, well, synced. Maybe not the world's best explanation, but I hope it helps.
@stevenneaves8079 · 7 months ago
Back in the day X32 meant something different to us entry level audio production folks 😂
@ToastyMozart · 7 months ago
And Sega fans.
@SuperS05 · 7 months ago
I still use a Behringer X32 ♥️
@wolfeadventures · 7 months ago
0:26 that’s what he said.
@nono-oz4gv · 7 months ago
lmao 2 4090s on one card would be absolutely insane
@PixyEm · 7 months ago
Nvidia Titan Z 2024 Edition
@benwu7980 · 7 months ago
There was a time when stuff like that did get made. I bought a Dell that was meant to have a 7950 GX2, but it arrived with an ATI card.
@jondonnelly3 · 7 months ago
The cooling would be problematic; it would need a 360mm radiator, maybe a 420mm. Though I guess if you can afford one, the cooling and power costs won't matter! The big problem with SLI is that memory becomes a bottleneck. The two cards' VRAM don't add together, so 2 x 24 is still just 24. It would need something like 2 x 48. Fuk, that would be insanely expensive.
@crabwalktechnic · 7 months ago
LTT is like the MCU where this video is just setting up the next home server episode.
@StingyGeek · 7 months ago
A 32 lane PCI bus, awesome! GPU card makers can use it for their premium cards...and only use four lanes. Awesome....
@jamegumb7298 · 7 months ago
Every time someone buys any current Intel 1700 board and adds an SSD, the slot gets bumped down to x8 anyway, leaving 4 of the very few lanes you have useless. AMD has the same thing; in practice, expect a card to always run at x8. Then in any bench you see where they compare x8 to x16, there is minimal to no difference unless you go down a generation. Just make the GPU link on desktop x8 by default and make room for 2 more NVMe slots.
@commanderoof4578 · 7 months ago
@@jamegumb7298 AMD does not have the same thing. Unless it's a dogshit motherboard, you can have 2x NVMe slots at full speed at the same time and an x16 slot. It's when you go past 2 that you run into issues, as you're either adding multiple drives to the chipset or you start stealing lanes. Without any performance loss from conflicts, you can have these configurations on AM5: 2x NVMe + 1 x16, or 4x NVMe + 1 x16.
@DeerJerky · 7 months ago
@@jamegumb7298 eh, on AM4 I have 2 NVMes, one on gen 4 and the other on gen 3. My GPU is still on 16 gen 4 lanes, and iirc AM5 only increased the lane count
@Demopans5990 · A month ago
Also better for big chonky cards just for the physical support
@ddevin · 7 months ago
GPUs are getting so wide these days, they might as well support PCIe x32
@matthiasredler5760 · 7 months ago
In the early 90s even simple sound cards needed the ISA slot... and were long and beefy.
@Th3M0nk · 7 months ago
In FPGAs it's fairly common to see x32. Microsoft had a board that let you control two FPGAs with these lanes; the trick was that even though it was x32, it was actually emulating the connection between two x16 links by readdressing the lanes.
@monad_tcp · 7 months ago
1:25 PCIe is almost a network
@eldibs · 7 months ago
"I'm sure some of you are already thinking of ways you can justify your purchase." Wow, calling me out just like that?
@chrism6880 · 7 months ago
Doesn't the most recent Mac Pro have a double PCIe x16 link to their custom AMD GPU?
@Daniel15au · 7 months ago
3:58 I like that the connector is labeled as "black cable" even though it's not black.
@Roukos_Rks · 7 months ago
Now let's wait for x64
@fujinshu · 7 months ago
And then maybe x86?
@lazibayer · 7 months ago
I glanced at the thumbnail and thought it was about a new longer x series barrel for p320 for some reason.
@NegativeROG · 7 months ago
x32 Bandwidth? Meh. x32 RGB? Oh, HELL yeah!
@gcs8 · 7 months ago
Cisco iirc has a PCI-E x24 for their MLOM + NIC (they may call it a VIC) on some of their stuff.
@EriksRemess · 7 months ago
The last Intel Mac Pro had two x16 slots combined for dual-GPU AMD cards. I guess technically that's x32.
@cameramaker · 7 months ago
It's not. Many servers have a long slot for holding riser boards (e.g. 3 cards in a 2U rack-mount server), but those are NOT single-device slots. Same as a dual x16 for a dual GPU is not a single PCIe device.
@Edward135i · 7 months ago
0:00 woah MSI Z68A-GD80 that was my first ever gaming motherboard that baby Linus is showing.
@Wycabar · 7 months ago
My mum walked past and asked if I was watching something with Steve Carell, and I'm never going to unhear that.
@cianxan · 7 months ago
This video reminded me of SLI. The physical setup looks identical, you got two devices occupying two PCI Express x16 slots and have an extra cable/connection between the devices.
@NicoleMay316 · 7 months ago
I would love it if using one PCIe slot didn't disable another. I don't think we're ready for the jump to x32 until this bandwidth limitation for lanes is addressed.
@rightwingsafetysquad9872 · 7 months ago
That limitation doesn't exist in the products that use x32. Desktop CPUs may only have 8-24 lanes, but server chips have hundreds.
@alexturnbackthearmy1907 · 7 months ago
@@rightwingsafetysquad9872 True. Old server processors have WAY more PCIe lanes than even top-of-the-line modern desktop processors (PCIe 3.0, though), and if that isn't enough, just get yourself a dual-CPU system.
@VideoManDan · 7 months ago
1:47 Isn't this exactly what old GPU's with an SLI bridge did? Or am I not understanding correctly?
@jakenkid · 7 months ago
The number of times he said 'x' lanes... My brain might explode. I am 30 seconds into the video.
@hotflashfoto · 7 months ago
On that new case from MSI, I don't think I wanna buy something that I can't pronounce.
@lordelliott42 · 7 months ago
1:47 The way you explain that sounds a lot like SLI graphics cards.
@brovid-19 · 7 months ago
"You don't need an X in your Y" _You watch your tone, boy. I need whatever I _*_say_*_ I need._
@Mr.Morden · 7 months ago
Reminds me of those old school gargantuan 16bit ISA slots used to overcome speed limits.
@haplopeart · 7 months ago
Now we just need desktop chips to actually provide a reasonable number of lanes so we can have 4 or more x16 slots
@ramonbmovies · 7 months ago
that was the best quickie I've had in years.
@TheRealDrae · 7 months ago
I KNEW IT, I was sure I've seen an oversized PCIe slot somewhere!
@michaellegg9381 · 7 months ago
Just a thought 💭🤔: if you want a super small SFF build, the motherboard usually has only 1 PCI Express slot plus some NVMe slots. So if you had one x32 PCI Express slot, you could have one card that carries the GPU, SSDs, a dedicated NPU, a 10Gb NIC, etc., all in one expansion card, especially if you use one side of the PCB for the GPU and the other side for the NPU, SSDs, NIC, and any other hardware you want. It would make for very capable SFF builds, or very, very tidy full-size builds with only the motherboard, CPU, cooler, RAM, and one expansion card that's a mix of all kinds of different hardware. So as much as we don't need x32 PCI Express lanes for general hardware, the idea and the x32 slot could definitely be put to use.
@thestickmahn2446 · 7 months ago
"Not fast enough? Just add more lane!"
@lgfs · 7 months ago
My god that segue reminded me of STEFON in SNL... The MSI MPG Gungnir 4000 Battleflow Monster Extreme has EVERYTHING!
@Walker_96365 · 7 months ago
Technically, a PCIe Gen 5 x16 slot is like a PCIe Gen 1 x256 slot
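That back-of-envelope roughly checks out: per-lane throughput has about doubled every generation, from ~250 MB/s in Gen 1 (2.5 GT/s with 8b/10b encoding) to ~3.94 GB/s in Gen 5 (32 GT/s with 128b/130b). A quick sanity-check sketch, using the published per-lane figures:

```python
# Effective per-lane throughput in GB/s: raw transfer rate (GT/s)
# times encoding efficiency, divided by 8 bits per byte.
gen1_lane = 2.5 * (8 / 10) / 8      # 0.25 GB/s with 8b/10b encoding
gen5_lane = 32.0 * (128 / 130) / 8  # ~3.94 GB/s with 128b/130b encoding

gen5_x16 = gen5_lane * 16           # ~63 GB/s for a Gen 5 x16 slot
print(round(gen5_x16 / gen1_lane))  # prints 252: about x252 in Gen 1 lanes
```

The "x256" figure comes from assuming a clean doubling each generation; the encoding change in Gen 3 makes the exact ratio land a little under that.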
@Unknown_skittle · 7 months ago
X32 is often used to connect 2 server nodes together
@kenzieduckmoo · 7 months ago
it still cracks me up whenever someone says dada instead of data
@thesupremeginge · 7 months ago
Every time you say 'Dad a Center', a piece of my soul dies.
@abavariannormiepleb9470 · 7 months ago
Kind of ironic, since Intel's current LGA 1700 platform is pretty bad regarding PCIe flexibility, for example not being able to do PCIe bifurcation.
@LucasImpulse · 7 months ago
As a child I put a PCIe WiFi card into a Windows XP machine's PCI slot and the slot blew up
@krisb853 · 7 months ago
I am glad that we got SLI PCIE before GTA6.
@ryanhamstra49 · 7 months ago
So, is this the future of SLI? 16 lanes talking between the GPUs on the board and 16 lanes talking to the CPU, from each GPU?
@BenjaminWheeler0510 · 7 months ago
Pour one out for the man-hours spent on the 1-second Star Wars clip at 0:18. Worth it.
@kousakasan7882 · 7 months ago
In the early 90s, I had a custom Orchid super board with an Orchid Fahrenheit 1280. It was a 32-bit VESA Local Bus card. All my friends were jealous of its gaming performance. But it didn't get accepted mainstream.
@cjames4739 · 7 months ago
The X is referred to as "by" though. So a PCIE x4 is called PCIE by 4 and so on
@Vermicious · 7 months ago
Thank you. It's like when people refer to camera zoom, e.g. 4x, as "4 ex". Infuriating.
@Vermicious · 6 months ago
@shall_we_kindly It’s a multiplication. Is 5 x 10 “five ex ten”?
@TheMatthewDMerrill · 7 months ago
Yeah x4 and x8 used to be a lot too. In less than 10 years we'll be seeing more x32 things.
@ivofernandes88 · 7 months ago
The reference pointing to Linus at the end got me dead 🤣🤣
@zeekjones1 · 7 months ago
I feel the bandwidth could be used by an SFF with some sort of breakout expansion slots.
@ppp-ti1iz · 7 months ago
apparently it’s up to 800G now
@XenXenOfficial · 7 months ago
Wait a minute. That binary looks suspicious, all of it starting with 01 or 011. It's ASCII! Quickly, someone translate it! Edit: I've noticed some binary such as 01000000, which isn't an ASCII letter, but it is 1 away from capital A. BUT, a huge majority of the stuff looks like readable letters
@alexandermcclure6185 · 7 months ago
After you said "beyond 16 lanes..." my pc froze for a moment. LOL!
@chaosfenix · 7 months ago
I would like to see an update to the actual PCIe slot standard. It doesn't have to be exactly like this, in that I don't care about specifics like the connector type, but I think I would like this architecture. It would be something like an MCIO connector with only 4 lanes by default; no more than that would be allowed in the connector. Each individual connector would be specced to provide power between 50-100W. I don't care about the specific range, just that it should be able to provide up to a specified power. You would still support PCIe bifurcation, which means you could turn a 4-lane port into a 2x2, a 2x1x1, or a 1x1x1x1. This could be amazing for add-in cards: if you wanted to add a bunch of PCIe devices, you would simply assign a single PCIe lane to each of them. Honestly, not too much different from the current spec.

Here is where it would get spicy, though. Part of the spec would be the spacing between each individual MCIO connector, because you would allow not only bifurcation of the slot but combination of slots as well. Maybe devices going beyond 4 lanes would simply use driver binding like you said; I imagine it would be relatively easy to bind 2-4 four-lane PCIe connections. Single mode would be the default, but you could choose to combine up to 4 of the slots in the BIOS. This would mean devices could still connect to up to 16 PCIe lanes if you wanted, but if you didn't, you would simply have 4 individual MCIO connectors you could direct-attach to instead. It would be hugely more versatile. You would also get greater power delivery, in that a more power-hungry device using all 4 connectors could be supplied with up to 200-400W directly. Sure, you are going to have devices, especially GPUs, that still need additional power, but that should be rare if they could work with up to 400W.

I think you could even allow some backwards compatibility if you made an adapter available to go from the 4 MCIO connectors to PCIe. Then you would just need to provide cheap standoffs for the screws at the back. This wouldn't be a problem for most cards, but if you had a chonker like a 4090 you could have z-height issues in the case. For most regular cards it wouldn't be an issue, though, and the issue would go away eventually as people switched to the new standard.
@drummerdoingstuff5020 · 7 months ago
👀
@Hunty49 · 7 months ago
You could just make the video card with a ribbon to another PCIe slot. GPUs are already double-wide.
@乂 · 7 months ago
Who even needs that much power in the first place?
@OpsMasterWoods · 7 months ago
Stop!
@FlaxTheSeedOne · 7 months ago
Watch the video again, but slowly
@orlagh277 · 7 months ago
Incidentally, I recently watched a video on this cool channel called techquickie and apparently they use it in data server applications, for networking servers together.
@TrevorColeman · 7 months ago
Exactly!
@cem_kaya · 7 months ago
There are also OCP ports
@ssjbardock79 · 7 months ago
Riley sounds like the announcer from The Price Is Right when he does his sponsor bit
@someguy9175 · 7 months ago
so... every SLI rig was running x32 all along?
@MrMman30 · 7 months ago
The last time I saw a product with an x and a 32 next to it was in 1994. That didn't go well! Here is hoping this is not a gimmicky in-between product and is an actual leap into the future. #SEGA #32x
@kousakasan7882 · 7 months ago
I had a custom Orchid super board with an Orchid Fahrenheit 1280. It was a 32-bit VESA Local Bus card. All my friends were jealous of its gaming performance. But it didn't get accepted mainstream.
@ChrisSmith-tc4df · 7 months ago
x32 is just unwieldy, mechanically, because the slot is so long, which then forces excessive meandering of the PCIe lane pairs, since they must all be precisely equal lengths.
@HexDan · 6 months ago
So, is this how AMD Crossfire worked?
@acarrillo8277 · 7 months ago
Looks over at the EDSFF 4C+ slot, a PCIe x32 slot in wide use in server PCIe cards. I guess we won't tell him about you.
@myne00 · 7 months ago
I'm still surprised optical connections aren't used yet (again? (S/PDIF)). I'm expecting USB (or whatever Apple calls it next) to have a fibre down the middle in that tiny blank part of the C connector at some point. Bend-insensitive single-mode optical fibre is cheap enough now that it's plausible at scale. SFPs are getting there too.
@Thomas-VA · 7 months ago
need all that sweet x32 for the next great A.I. film, music, art and book creation app / bit miner.
@brondster47 · 7 months ago
wonder how many years it'll be before PCIE X16 is phased out..... remember how long AGP slots lasted for...... only time will tell..... and who knows what it'll be replaced by....
@Arctic_silverstreak · 7 months ago
I mean, physically the connector may be phased out, but I think it's very unlikely that PCIe itself will be phased out too
@chrisbaker8533 · 7 months ago
AGP only lasted for about 13 years, 1997 to 2010. PCIe, launched in 2002, is currently at 22 years. As far as when it might get phased out: whenever it stops being able to handle the data we need to transfer. Maybe 10 to 15 years on the current trajectory. OR it may wind up like USB and never die. lol
@sakaraist · 7 months ago
@@chrisbaker8533 On desktops, possibly. However, PCIe is a core component of a metric shitload of embedded systems and FPGA dev boards.
@Aragorn7884 · 7 months ago
x64 just needs 5 more to work properly...😏
@davidschaub7965 · 7 months ago
I've seen server motherboards with x24 physical slots that just connect to existing PCIe switches.
@nodarstoandark1851 · 7 months ago
Video idea: USB-C Explained: everything about the USB-C connector and all its variants!
@robotparadise · 7 months ago
So PCI is the new SCSI...... word.
@pauldrice1996 · 7 months ago
So basically there's no reason we shouldn't be able to have some SLI like option.
@alexturnbackthearmy1907 · 7 months ago
There isn't. And SLI isn't actually dead; you just don't see new consumer-grade cards with it anymore, and no one uses it for gaming.
@Channel7331 · 7 months ago
This was a real stretch of a video. I know you guys think you need to keep to an upload schedule, but if you've nothing to say, you really don't have to.
@fatjawns3671 · 7 months ago
GPUs with two PCIe slots coming soon 🗿
@alphaomega154 · 7 months ago
Yup. And I also just hinted at an idea in one of Digital Foundry's recent video comment sections: independent GDDR memory modules/sticks in the M.2 form factor (could come in any format: 2230, 2242, or 2280) for multiple use cases, from adding more VRAM to both iGPUs and discrete GPUs, to actually adding fast remote CPU cache (extra huge L4 cache, anyone?). I see a market for that; I hope somebody picks the idea up. Imagine you have an iGPU, and then you simply plug an M.2 16GB GDDR6X memory stick into one of your M.2 slots; if the iGPU driver recognizes it and has instructions to use it, your iGPU now has 16GB of GDDR6X VRAM. And your CPU could steal some of its paging for an extra cache, a theoretical L4. Your OS would simply need instructions to use it when it's detected as available.
@alexturnbackthearmy1907 · 7 months ago
Good idea... but it already exists, called RAM sticks. They are also much faster than an M.2 device will ever be. You can even use RAM as a very fast SSD (a volatile one, so don't store anything important in it).
@MysteicVoltronus · 7 months ago
All I heard was it was not SLI/Crossfire's fault it died. Twas complex graphics drivers that killed the beast.
@alexturnbackthearmy1907 · 7 months ago
And lack of support. Out of the handful of games supporting SLI, only a select few actually scale with the number of cards, and most "SLI supported" games have a completely messed-up implementation of it, so it's very laggy, and the second, third, and fourth cards aren't even doing anything.
@kevinerbs2778 · 7 months ago
Drivers are even more complex now with DLSS, and just looking at Intel Arc cards, we're still stuck with drivers being the limiting factor even on DX12. Look at the insane performance increase they're still gaining just from driver fixes for Arc cards.
@kevinerbs2778 · 7 months ago
@@alexturnbackthearmy1907 There are 1,064 games for DX11 that support or can use SLI; that's about 20% of the 5,889 games for DX11. Here's something that really disappointed me: it's taking 8 times longer for games to come out on DX12 than DX11. In 10 years, DX12 has barely gotten out as many games as DX11 was releasing per year. There are 415 DX12 games out now, and around 50% of them support some sort of ray tracing, in the 8 years and 9 months that DX12 has been out. There were 347 games released per year on DX11 over roughly a ten-year span.
@DizConnected · 7 months ago
I just want PCIe 6.0 with a slot that supports 1000W GPUs without any additional cables. OK, 600W, but PCIe 7.0 had better support 1000W! I really hope ASUS and MSI keep supporting BTF and Project Zero. BTF already supports 600W with the correct BTF GPU; I hope there is a BTF 5090. And yes, I know you still have to plug the cable for the GPU power into the back of the motherboard. I just wish PCIe slots supported the power needed for all GPUs without the extra cables.
@Abyss_end · 7 months ago
I miss the non-greenscreen days
@adrenaliner91 · 7 months ago
400 Gigabit, huh? And I am here with my mobile-DSL hybrid connection that maxes out at 150 Mbit and will never ever be more than that in this skyscraper.
@radugrigoras
@radugrigoras 7 months ago
lol, all the noobs talking about SLI... guys, you needed a connector between the cards known as a bridge. The PCIe connection on the second card was only there so your PC would detect the card; all of the inter-card communication went through the bridge. That's why you could only use the RAM from one GPU: the bridge couldn't transfer data fast enough. This is a more advanced technology, and if it had been around at the time, I'm sure NVIDIA would have used it. Your CPU splits the data in two, like a striped RAID array; it goes to each card, and the cards synchronize to make sure they spit frames out in order. It may require a special chipset for this, because as far as I can tell no consumer mobo or CPU supports it.
@kevinerbs2778
@kevinerbs2778 7 months ago
Cards can be seen without the SLI bridge connected; it's still heavily driver dependent. Also, higher clock speeds are still better than high IPC for SLI, as they decrease latency. Striped RAID arrays are slower than most modern CPUs and NVMe drives now. NVLink was bigger, but on Ada Lovelace workstation cards they're just using the PCIe 4.0 link for CUDA. AMD is using Infinity Fabric now too, which is still basically a PCIe link.
@radugrigoras
@radugrigoras 7 months ago
@@kevinerbs2778 Well, you can stripe NVMe as well, not just SATA drives, so gains are available there too. I didn't see this on Ada workstation cards; I don't think both GPUs work on the same task in unison. Realistically, for workstation or AI use I don't see many tasks needing it, since most workloads are split into individual tasks/calculations. Even for simulation, you are simulating individual cells/nodes, taking into account previously calculated results that you read from CPU RAM. What they are doing with their new server-grade GPUs is dual-die designs and offloading some of the CPU RAM burden to the GPU, but that again needs to be taken advantage of by the software. If you look at something like Blender, the software splits the scene into equal segments based on how many GPUs you have connected; the GPUs don't communicate with each other at all because there is no need. Realistically, this is what this video is about: a form of RAID for PCIe communication. If there were a successor to SLI, it would probably need 3 or 5 GPUs, always one extra to recomposite the individually rendered areas of a frame. For gaming it doesn't make sense because of the lag it would introduce, and for workstations it doesn't make sense either, because you are not watching your task process live, so latency is a non-issue. If you are rendering a 4K frame with 4 GPUs and each GPU is doing 1K, you will be 4x as fast with zero need for inter-GPU communication.
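The no-communication split described in the comment above can be sketched as a toy example (hypothetical tile layout for illustration; this is not any vendor's actual multi-GPU API):

```python
def split_frame(width, height, num_gpus):
    """Split a frame into equal horizontal bands, one per GPU.

    Each GPU renders its own band independently, so no inter-GPU
    communication is needed. Returns (left, top, right, bottom)
    rectangles, one per GPU.
    """
    band = height // num_gpus
    tiles = []
    for i in range(num_gpus):
        top = i * band
        # The last GPU absorbs any remainder rows.
        bottom = height if i == num_gpus - 1 else top + band
        tiles.append((0, top, width, bottom))
    return tiles

# A 4K frame split across 4 GPUs: each renders a 3840x540 band.
print(split_frame(3840, 2160, 4))
```

Because the bands never overlap, the only shared step is compositing the finished bands back into one frame, which matches the "zero inter-GPU communication" point made above.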
@kevinerbs2778
@kevinerbs2778 7 months ago
@@radugrigoras Tiled rendering already existed for PC games; no one wanted to use it.
@zlibz4582
@zlibz4582 7 months ago
This will be useful for the upcoming Intel CPUs and NVIDIA GPUs.
@DiamondTear
@DiamondTear 7 months ago
0:05 was the only B-roll available of a motherboard with PCI and PCIe slots?
@andonel
@andonel 7 months ago
so x32 is just two x16 in a trench coat?
@mikelarry2602
@mikelarry2602 7 months ago
When 8K becomes the new standard!
@hummel6364
@hummel6364 7 months ago
Of course I knew; the server in my basement has two of them, although it just uses them for risers with different slot setups.
@hellogoodbye4906
@hellogoodbye4906 7 months ago
So it's x16 x2
@krtirtho
@krtirtho 7 months ago
Yup, and those x32 PCIe slots are not really x32 PCIe slots