1U Servers are DEAD! Long Live 2U Servers! But Why? - ft. Supermicro AS-2114GT-DNR

84,860 views

Level1Techs

1 day ago

Comments: 312
@SimmiesSchrauberChannel 2 years ago
One day apart in 2 videos: Linus "I want all my LAN-PCs in 1U, so I don't waste 1 rack-slot" - Wendell "1U is dead cause 2U is more efficient" XD
@handlealreadytaken 2 years ago
Enterprise server vs. bespoke gaming chassis. However, I'm not sure why Linus didn't just get a second rack, move the networking equipment, and do 5 4U chassis to avoid the headache. Those are easy to obtain and let him run more common components.
@Blustride 2 years ago
In fairness, Linus isn't using the chassis fans for any significant amount of cooling, so that negates half of the reasons Wendell suggests that 1u is dead.
@wiziek 2 years ago
Linus isn't a technical person.
@EminemLovesGrapes 2 years ago
@@wiziek Nowadays he basically outsources all of the knowledge and throws either his money or his influence at the wall.
@Mallchad 2 years ago
@@handlealreadytaken His ideas were unsustainable and ended up in "I need 1 rack per computer", which pretty quickly devolves into an explosion of racks... Prob best not to buy a new rack every time he has a new idea :P
@GeoffSeeley 2 years ago
@1:39 the 1U servers aren't dead, they're just huddled together in 2U chassis for warmth.
@Jamesaepp 28 days ago
In a nutshell: 1U chassis is dead, long live 2U chassis.
@JoshLiechty 2 years ago
Having spent some time with multi-node chassis-based systems like this, my vote for a collective noun for a group of servers goes to "a cacophony."
@MiIIiIIion 2 years ago
Alternatively: "A tinnitus of servers".
@Level1Techs 2 years ago
I am getting such a kick out of these replies
@waterflame321 2 years ago
How about a "whatt?!" Because you can't hear anything over the fans
@johnmijo 2 years ago
A *MULTIPLICITY* of Nodes/Servers ?
@jannegrey 2 years ago
"Nuisance" or "Pain in the Ass" sounds about right for when you have to troubleshoot them. For those rare times when everything is okay? "Hairdryers" is already taken by some GPUs, and in US English I don't know a short word for vacuum cleaner. But when you have a whole rack of them, you certainly need protective platforms, like on aircraft carriers when jets are taking off. When those fans spin up on every unit at the same time, you have the most important building block of a wind tunnel. And yes - there are wind tunnels (or at least wind simulators) that use a lot of PC fans, so you can control the flow and strength of the wind with good granularity and create uneven wind to simulate, for example, an urban environment.
@UntouchedWagons 2 years ago
A gaggle of those servers would certainly murder my power bills, and my ear drums.
@johntotten4872 2 years ago
Legend has it headphone users' ears are still bleeding. A scream of servers?
@jacobnoori 2 years ago
Finally, more server content! Please make them more frequently!
@MrLamrod174 2 years ago
A serfdom of servers 😅 Also, I hope you had hearing protection while in your comms room! That node was SUPER loud!
@dismafuggerhere2753 2 years ago
a whole restaurant of servers ? I'll show myself out
@acubley 2 years ago
You got a gen-u-wine laugh out of me!
@keithpetrino 2 years ago
A racket of servers. A reference to the fact that they're in racks but also to the noise.
@Gilgwathir 2 years ago
Wendell doing the sillies when he's excited 🙂 Love it! Also the plural of servers should be a sounder of servers (a group of wild boar is called a sounder) because they make such a racket!
@Chloiber 2 years ago
We have a few multi-node chassis from Supermicro that have been running for several years, mainly 2U quad-nodes (I believe TwinPros). While having multiple nodes so densely in a single chassis is great, it comes with a major downside: the nodes often share a single backplane (which is partitioned). So if you have a failure there, you are screwed. Additionally, if you have an issue with an onboard controller, you are screwed as well: you need to replace the whole node, as you cannot simply install a backup RAID card / HBA. While yes, these things are great, you should be aware of the downsides of some of these models. Ours always ran great without any issue until I bricked an onboard controller - after half a day and many tries I was able to recover it, but it made me very aware of the downsides :-)
@Loanshark753 1 year ago
@Chloiber do you know if server racks with shared PSUs and cooling fans exist, to centralize components? Maybe one standard-height rack with two nodes per U and three or five shared PSUs. For further energy optimisation the systems could be liquid cooled and the rack could be powered by 400-volt direct current.
@jfbeam 1 year ago
Everything is built in these days. You're lucky if you can replace a processor or memory. (And now there's Stupid(tm) to prevent changing the processor.)
@PhoeniXfromNL 2 years ago
it's always nice when Wendell is excited about something
@survey1010 2 years ago
Thoughts on doing walk-through of your data center / "server room"? Would be interesting to see what you're running for day-to-day.
@wyattarich 2 years ago
Every time I see a new upload, I'm excited. I can't say the same about ANY other channels on YT. I love what you're doing Wendell-never stop!
@mtothem1337 2 years ago
I get that it's not really your thing, but I think many of us would be interested in seeing builds like these that are optimized for energy efficiency / low noise instead.
@Blacklands 2 years ago
(Is your avatar Lain with a crown of roses??) Also yes, I would like to see that. I think a bunch of us (maybe even the majority?) don't have a noise-insulated server room at home!
@jmwintenn 2 years ago
The server room is built to contain the sound. They don't care how loud the servers are as long as vibration is controlled.
@morosis82 2 years ago
@@jmwintenn sort of true, but systems that need fans running at full speed constantly spend a lot of power budget on cooling and not computing.
@bernds6587 2 years ago
@@morosis82 Well, having the fans at 100% all the time makes no sense, be it for power efficiency or for wear, especially on the bearings. When Wendell entered the server room, you could hear one of the servers constantly cycling back and forth between two fan speeds -> not full fan speed. When the "new" one gets turned on, the fans spin up to full speed (PCs do that too) and then reduce speed after successful initialization. On fan speeds in general: a certain minimum speed is necessary for the fans to spin at all. I've never seen a 10k RPM fan able to spin at 1k RPM. (1U server fans can go over 20k RPM.) The combination of density and heat production makes such loud and truly "moving" fans necessary.
@im.thatoneguy 2 years ago
@@bernds6587 Unfortunately Supermicro doesn't have good fan curve controls... because they don't care. I had to write an IPMI hack script to do it on our NVMe server because they offer no customization. Their solution is "Oh, it's 1°C over threshold? Time for 100% fan until it's cool enough, and then back to 25% for 5 minutes" - way more irritating than keeping the fans a little higher and holding steady.
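The kind of IPMI workaround described above can be sketched roughly like this. The raw byte sequences below are the commonly documented Supermicro fan-control commands (fan mode via `0x30 0x45`, per-zone duty via `0x30 0x70 0x66`); they vary by board generation and are not an official Supermicro API, so treat them as assumptions to verify against your own hardware before use:

```python
import subprocess

# Commonly documented Supermicro raw IPMI fan commands (board-dependent;
# verify on your own board's generation before relying on them):
#   raw 0x30 0x45 0x01 0x01               -> set fan mode to "Full" (allows manual duty)
#   raw 0x30 0x70 0x66 0x01 <zone> <duty> -> set duty cycle (0-100) for a fan zone

FLOOR, CEIL = 25, 100  # safety floor so the fans can never be commanded to stop


def clamp_duty(percent: int) -> int:
    """Clamp a requested duty cycle into the safe range."""
    return max(FLOOR, min(CEIL, percent))


def ipmi_raw(*args: str) -> None:
    """Shell out to ipmitool; raises if the BMC rejects the command."""
    subprocess.run(["ipmitool", "raw", *args], check=True)


def hold_steady(duty: int = 40) -> None:
    """Take manual control and pin both fan zones at one steady duty cycle,
    instead of the BMC's bounce between 25% and 100%."""
    ipmi_raw("0x30", "0x45", "0x01", "0x01")  # fan mode: Full (manual)
    for zone in ("0x00", "0x01"):             # 0 = CPU/system fans, 1 = peripheral fans
        ipmi_raw("0x30", "0x70", "0x66", "0x01", zone, str(clamp_duty(duty)))

# hold_steady(40)  # uncomment on a real host with ipmitool and BMC access
```

A cron job or systemd timer re-asserting this periodically guards against the BMC reverting to its own curve after a reset.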
@MazeFrame 2 years ago
9:42 You can feel the current limiting making the fans start up slowly! Beauty!
@TwistedD85 2 years ago
I know I'll probably never get to work with anything like this, but it's still fun and interesting to watch. It's like I'm on a field trip to a data center and the technician is trying to make everything fun and engaging for the students :D
@robr4662 2 years ago
You may not be able to afford this, but used enterprise stuff can be had extremely cheap and you can have almost as much fun. ;-)
@morosis82 2 years ago
Some of the older X10 platforms from Supermicro are getting somewhat affordable these days; the twin family of servers isn't crazy anymore.
@Verhagenvictor 2 years ago
Wendell, my first thought on this was "huh, that kinda looks like a horizontal blade setup". What are your thoughts on that comparison? Are blades going to make a comeback?
@halbouma6720 2 years ago
I gave up thinking about dense 1U servers over a decade ago because I'd run out of power long before rack space in every cabinet. Even in this video you're not able to plug more than one of these into your circuit lol. So I standardized on 2U setups for all the reasons you gave: fans for airflow, more room for storage and cards, or GPUs, etc. Plus it's easier to work on than some ultra-dense 2-servers-in-1U setup. Thanks for the video!
@killerful 2 years ago
"Definitely think you'll find that appealing" god fucking dammit😂
@nukedathlonman 2 years ago
Big agreement - a 2U chassis with 2U redundant PSUs and a full 2U cooling system combined with doubled 1U internals makes much more sense for space utilization and redundancy.
@TheClumsySpectre2 2 years ago
Do you think eventually we'll move to 4U equivalents? With those, one power supply failure would still leave 3 PSUs for 4 systems, which would proportionally offer more power per system and keep redundancy even with one unit down. You could also use larger fans again.
@Dan-Simms 2 years ago
Clicking the link and commenting here for your engagement. Cheers bud, keep up the great work!
@declanmcardle 2 years ago
@8:20 - "it's an older cord, but it checks out..."
@t.m.grokas6832 2 years ago
I paused @7:23 and accidentally discovered your next video's thumbnail. Editor Autumn, you're welcome.
@Level1Techs 2 years ago
That was actually one of the contenders for this video lol! Fun fact, all the thumbnails are created with assets from the video it is being made for. ~ Editor Autumn
@LiLBitsDK 2 years ago
watching Wendell booting up a server being blasted by the air is like watching a kid in a giant candy store for the first time in their life :D
@MarkRose1337 2 years ago
1u never made sense to me for the reasons mentioned for going 2u in this video. Take it to its logical extreme though and you're back to blades of some sort!
@christopherjackson2157 2 years ago
It arguably could have made sense in some extreme circumstances back when Intel was limiting everyone to 4 cores per socket. For customers looking to run a couple of hundred or thousand cores it could save them the cost of building a new physical space. But that was quite a while back now lol.
@Cynyr 2 years ago
Everything old is new again.
@jackhildebrandt7797 2 years ago
Dang, I was excited for Wendell to look at one of the Cray EX liquid-cooled nodes.
@wskinnyodden 2 years ago
So Server Cadres based around 1U Servers are going the way of the Dodo and instead we'll have some sort of Irish based Server Cadre Datacenters around "U2" nodes :P
@ajr993 2 years ago
Both HPE and Dell sell a lot of servers in the 1U form factor. For example, the HPE ProLiant line has a lot of cheaper 1U configurations like the DL325. No, it's not used in a datacenter, but there's a huge use case for racks outside of a datacenter: enterprise customers need racks but don't have an entire datacenter. 1U is not dead at all in the SMB space.
@somehow_not_helpfulATcrap 2 years ago
What do you hear when you put your ear up next to a 1U server fan? Nothing from then on.
@llortaton2834 2 years ago
AHAH, joke's on you Wendell, my 4U ATX-compliant consumer-grade server will NEVER DIE :D
@velo1337 2 years ago
It also comes down to whether you are single tenant or multi tenant and how the SLAs are structured. Those 1Us are damn cheap; we swap them out like underwear :) They are also very interesting if the stuff you run doesn't need a lot of compute, like webservers and such. For database servers you usually run 4U servers, since you need the PCIe slots.
@Phynix72 2 years ago
Reading your thumbnail, Linus is crying over his recent build. From a far continent I can hear "Why, Wendell? Why?"🤣
@andreas7944 2 years ago
If Wendell says it - I believe it. He might be wrong, but do I really care? It comes down to opinion, and his arguments are reasonable. That is all I care about. Please, Wendell, try having as many children as you can. We need more people like you.
@BigHeadClan 1 year ago
One of my past clients consolidated from about 40 racks down to 20 by snagging a few c6000 blade chassis and virtualizing a lot of their older hardware. 16 bays for servers per chassis in 10U of rack space is some pretty solid density. This type of 2-node setup probably makes more sense from an engineering perspective, but I always appreciated how scalable the blade chassis design was. If you have a free bay to populate or are upgrading one of the blades, you just plop the new one in and away you go. No need to re-rack or fiddle around with rails, re-run cables, etc. That said, it does suffer from the size restrictions of a blade chassis, which is even smaller than a 1U server, so fan pressure and the other issues Wendell raised are still a problem.
@jfbeam 1 year ago
His systems are for massing GPUs. This little 2U thing is one of the few ways to do that without having to sell body parts. For you and me, who care about general-purpose computing, blades have been the way to go for decades. (But it does often mean settling for vendor lock-in, and once they know you're on the hook, the deep discounts go away.)
@TheBitKrieger 2 years ago
So we came full circle and blade centers are cool again?
@markmulder996 2 years ago
And here is Linus (LTT), just now building five 1U gaming systems ;)
@СусаннаСергеевна 2 years ago
To be fair a gaming computer doesn’t need redundancy or anywhere near as much cooling, which is what this video is about. Linus outsources the cooling to an external radiator anyway. Linus’ new gaming computer is stupid for many reasons, and while the 1U rack case is definitely one of them, a 2U case wouldn’t have been any better. The issue there is insisting on stationary PCs in the first place. The premise of the video was that he needed something unobtrusive for his children to game on. Instead of a server closet we know he won’t take proper care of, the solution is to just get them macbooks with thunderbolt docks instead. Plug it in at home and it’s a decent gaming rig, bring it to school and it’s a good study computer. With actually good parental controls. Unless you actually need a full-power workstation, desktop PCs are almost never the right answer today.
@markmulder996 2 years ago
@@СусаннаСергеевна I know, the timing is just funny. One day Linus is building five 1U rackmount gaming systems, and the day after there's Wendell saying 1U is dead :) But of course it's two entirely different situations, especially since Wendell is talking enterprise and Linus, as advanced as it may be, is still talking about home usage.
@Paktosan 2 years ago
So this is basically the comeback of the blade server, just on a smaller scale? We still have a six-blade system from Intel in the basement for testing purposes; some features are really cool. Failed node? No worries, the chassis will automatically relocate the virtual drive to a spare blade and boot it back up, almost no downtime.
@JaeTLDR1 1 year ago
Blades share way more. This is just power and cooling being shared
@andljoy 2 years ago
9:41 Sounds you don't want to hear when you are at the back of a messy rack. Happened to me last week when I was trying to clean up some old shit at the back of a rack, and all of a sudden our Pure Storage starts sounding like a jet taking off as I knocked a PSU out :D. This server just screams VDI at me.
@solidreactor 2 years ago
Is there a benefit to go even further with a "4U 4-Node" configuration? Or are there some diminishing returns after a 2U 2-Node config?
@WilReid 2 years ago
The returns are virtually fully realized at 2U because it gets you 89mm of height for decent-sized fans. 3U would get you 120mm, but servers rely so much more on pressure that going from 80mm to 120mm fans would see very little benefit. Noise reduction would be most of it, and the industry has already come to terms with noise from racks. 3U or taller would get you full PCI card height perpendicular to the mainboard, but angle adapters and risers have gotten around that for a decade now.
@R055LE.1 2 years ago
Haven't blades been following this principle for like.. ever?
@bret44 2 years ago
Is there a spot for a fourth GPU? Frontier says it uses 4 GPUs per CPU; is this the same chassis? Also, what is meant by "Frontier has coherent interconnects between CPUs and GPUs" -Wikipedia. Are these interconnects physical?
@boomerau 2 years ago
I've also seen the side-by-side HP left & right GPU 4RU servers. Basically this is a change in blade chassis form factor and capital investment.
@zector0 2 years ago
Imagine how his mind will explode the first time he sees a BladeCenter.
@KangoV 2 years ago
They are the same cables I have throughout my house :) Cool video :)
@nicholaswoods9066 4 months ago
Thank you for the informative video, Cheers mate
@Dexerinos 2 years ago
I saw that!!! You didn't screw in the rail screws :P
@nihalrahman7447 2 years ago
Wendell and LTT's Anthony should collab. Talk about general server stuff, Linux distros, and how to dominate the world.
@joemarais7683 2 years ago
That’ll never happen. The powers that be would never let that much nerd power collect in one room
@alexmartinelli6231 2 years ago
That would be EXTREMELY cool. Hope it happens someday
@tvmcrusher 1 year ago
7:41 From here on out you can hear the maddening sound of an SCP being nearby.
@leviathanpriim3951 2 years ago
Wendell and Steve, sit down nerds the chosen ones are on screen
@probusen 2 years ago
Redundancy is everything; 7x HPE DL360 with dual 800W PSUs has been a lifesaver many times. EPYC 24-core, 512GB of RAM, and 6x 1.92TB of storage in vSAN. No, 1U servers will live a long time. :)
@jfbeam 1 year ago
No *modern* 1U server will live a long time. (I have plenty from the long long ago that still work perfectly. But they don't draw more power than my entire neighborhood.)
@AlwaysStaringSkyward 2 years ago
@Level1Techs serious question: why are we using PSUs in servers? We used to have rack or cage level DC power fed to the servers on DC busses. It was safe, centralised, efficient and could be triple redundant. It left 100% of the space in every server for doing work and every server could be yanked out for maintenance without affecting the others.
@willcurry6964 1 year ago
You always have great, informative videos. Some are a little too complex for me, a non-IT guy. I now know I need a chassis (not rack mount) server, and the server should have E1.S drives... maybe start with 6-7 TB drives... don't know where to buy.
@goblinphreak2132 2 years ago
I just realized the music you use gives me "Contraption Zack" vibes, if you remember that game from the DOS days.
@majstealth 1 year ago
Maintaining these will be a cramped and warm hot-aisle job.
@JW-uC 2 years ago
Isn't it just a cut-down 2U-style "blade server" box? Obviously the blades in this 2U are horizontal and the original blades were vertical (with 8+ blades), and if I recall didn't have space for a graphics card... but still. That said, I guess if you put the thing on its side, made the "box" square, and had space for multiple "blades", you'd still not get any extra density because you'd still need multiple sets of redundant power supplies. As backplanes are much less of a thing now, with such high-speed serial network cards, you'd also not gain much if you used some kind of backplane system either.
@ETtheOG 2 years ago
A "Banquet of Servers" maybe :o?
@kevlarandchrome 2 years ago
I love how the sound of the fans comes together into a kind of far-away screams-of-the-damned sound from old horror movies; very season appropriate. The hardware's pretty damned dope too.
@jimecherry 2 years ago
banshee fans
@ghostbirdofprey 2 years ago
Suddenly I wonder if there's a supercomputer or other cluster named "Banshee"
@losttownstreet3409 2 years ago
Floor space was the limiting factor a long time ago; now you can put a board together with off-the-shelf components, send it to a pick-and-place factory in China, and get your custom board if you are really tight on space. Now power and cooling are the most limiting factors. Think a few years back, when you had to offer each and every customer a full server, as virtualization wasn't a big factor. Now you run 100-400 virtual servers in a 2-4U unit. Before this you put as many FPGAs (those $10,000-$200,000 CPUs) in one case as you physically could, and if you really wanted to use huge loads you could always press the real out button in Xilinx Vivado. Now you have access to virtual cloud F1 instances ($8,000-$50,000 CPUs) and virtual cloud GPUs.
@movax20h 1 year ago
The thing is, if you colocate and use a lot of power, it does not really matter if you use 1U or 2U; it's going to cost you almost the same, because the primary cost will be power. If you have a colo or DC that allows delivering a lot of power to the rack, then it is not about optimizing cost, but rather just a quest for how many you can put in a single rack or a few close racks, so they are all connected over a very fast network. I rent a rack in Germany, and I am limited by space and network. I cannot put in more servers, because I do not have enough power in the rack, or ports in the switches. I even have a few empty units, because I am basically at the limit. I cannot switch everything from 1U to 2U, but if I can cram more into 1U by upgrading to higher density, and/or replace 2x1U with a 2U that actually is more efficient, I will definitely do it. We use a lot of Kubernetes for compute, Ceph for storage, and a few hosts for virtualization (Proxmox). 2U dual node is definitely more interesting than blade systems. Blades were always too expensive, requiring too much licensing and special setups. A hybrid like this, without an expensive chassis, is perfect.
@MarkRose1337 2 years ago
Well a server is a box, the plural of which is boxen. And two oxen are called a yoke. So that server could a yoke of boxen. But I suppose for more than two it would be a herd. A herd of boxen.
@AndirHon 2 years ago
box·​en | \ ˈbäksən \ Definition of boxen archaic : of, like, or relating to boxwood or the box
@MarkRose1337 2 years ago
@@AndirHon I prefer the Jargon file definition: boxen: pl.n. [very common; by analogy with VAXen] Fanciful plural of box often encountered in the phrase ‘Unix boxen’, used to describe commodity Unix hardware. The connotation is that any two Unix boxen are interchangeable.
@KingTheRat 2 years ago
HP C7000 has entered chat
@airman_85uk 2 years ago
Would be nice to know what kind of use cases we could use these servers for in 5/6 years when they get decommissioned and get into the hands of homelabs….
@muadeeb 2 years ago
I have an old 4 node system that I use as a Virtualization cluster
@GooberBrainTrollingCorp 1 year ago
7:40 THIS LOOKS AND SOUNDS LIKE AN INTRO TO A HORROR MOVIE
@DMSparky 2 years ago
I’m sorry in advance. But can it run Crysis?
@NathansWorkshop 9 months ago
5:50 RAWWWWWWWWWWWRRRRRRRRRR
@JamieStuff 2 years ago
If rack mount, is it "a scream of servers"???
@Timi7007 2 years ago
Blade servers all over again^^
@prashanthb6521 2 years ago
4U with silent 120mm fans will be nice.
@Blacklands 2 years ago
There's a bunch of cases on the market for this now! Some even support liquid cooling. Sliger makes some (expensive though).
@Elemental-IT 2 years ago
I have that same rack monitor, but some idiot cut the cord to the monitor as well as to the keyboard/mouse combo. The VGA was a PITA, but standard... and I had both parts. The keyboard is not standard, and I am missing the connectors. I really wish I had a way to figure out the pinout, because 8 wires seems like it should be 2 PS/2 connectors.
@mhavock 2 years ago
We've been using 2U for a while: 1U is for the hardware, the other is for making grilled cheese sandwiches, and the top is for hot drinks or a hot plate. Boss thinks we are always busy; yeah, we are busy running Prime and disk tests so the food cooks faster. LOL 🤣
@chrisbaker8533 2 years ago
I like the compute density, but that backwards mounting is a deal killer for me. Given how much of a 'rats nest' the rear of a server rack often is, I really don't think I want to deal with that every time I have a failure or need to do something with it.
@Skungalunga 2 years ago
So basically we're moving back to blade chassis?
@GameCyborgCh 2 years ago
a full restaurant of servers
@SlurP667 2 years ago
*opens server room door* I can hear the children screaming!
@Cadaverine1990 2 years ago
The 2U is honestly dead too; the datacenter I work with is moving completely to HPE Synergy 12000 frames. These can be configured with 12 blade modules, each hosting dual 28-core Xeons with up to 4.5TB of RAM and a T4 accelerator card. Thus 10U will hold 24 28-core Xeons, 54TB of RAM, and 12 T4 cards. Everything runs on VMs, and in the networking of the unit everything has zero trust between the internal machines. If the size of the datacenter is a concern, they should be looking into 52U racks. Just doing this will increase the size of your site by around 25%.
@jakevanvliet 2 years ago
A 1RU Intel server (thinking Dell PowerEdge R650) can have 2x 40-core Xeon Platinums, 8TB RAM, 3x T4s or A2s, and dedicated 4x 25Gb Ethernet. In 10RU, that's 800 cores (40 cores x 2 sockets x 10 servers), 80TB RAM, 30 GPUs, and 100Gb of dedicated networking per node. Different scenarios and use cases call for different requirements. 1RU servers are not dead. 2RU servers are not dead. Blades are not dead. None of them should die - they give you the ability to pick a solution that best fits your environment.
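Density arithmetic like the above is easy to fumble; a throwaway sketch makes it checkable (the per-node numbers are just the illustrative R650-class config from this comment, not a vendor spec):

```python
# Hypothetical dense 1U node, per the comment above (illustrative numbers only).
NODE = {"sockets": 2, "cores_per_socket": 40, "ram_tb": 8, "gpus": 3}


def rack_totals(node: dict, units: int = 10) -> dict:
    """Multiply one node's specs by the number of rack units deployed."""
    return {
        "cores": node["sockets"] * node["cores_per_socket"] * units,
        "ram_tb": node["ram_tb"] * units,
        "gpus": node["gpus"] * units,
    }


totals = rack_totals(NODE)
print(totals)  # {'cores': 800, 'ram_tb': 80, 'gpus': 30}
```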
@fracturedlife1393 2 years ago
An Epyc of Servers
@Technopath47 2 years ago
All I can think is that the Frontier supercomputer shares a name with the worst ISP I've ever had the misfortune of dealing with.
@beauslim 2 years ago
This is definitely a "why didn't they think of this before" thing. Fans are why 3U is my favourite form factor for DIY rack-case builds. Unfortunately, 3U is kind of a rarity.
@cynicaloutlook 2 years ago
They have thought of this before, and at even more density. Dell's current lineup includes the PowerEdge FX, which has 4 slots (half-width 1U blades), but the concept goes back a few years to the PowerEdge M-series.
@jp-ny2pd 2 years ago
Personally I'm a fan of the Supermicro MicroCloud servers for our colo. We deploy the 8-node configuration because we like being able to swap the drives without downing the node or running into spacing issues with PDUs in the back of the rack. The 12 and 24 node solutions are nice but a bit more of a pain to do any sort of maintenance on and less tolerant of rack configurations.
@jfbeam 1 year ago
2U has always been more efficient... a 2U fan can simply move more air - period. My former employer resisted this almost to their last breath. With 2 150W CPUs in the box, their hand was forced. Originally, the only 2U boxes existed because that was the only way to get 2 power supplies, but there are plenty of tiny PSUs these days. (The system shown here _could_ be done in 1U, as there are 1kW 1U PSUs, but air cooling it would be difficult.) (To do 1U for our systems would require a load of 15k RPM fans - $30/ea, not $3 - and they'd last a year, not 3-5. And they needed solid copper heatsinks, which were 100x more expensive than aluminum.)
@todayonthebench 2 years ago
In short: the main advantages of blade systems are still relevant - shared redundant power and cooling. Though blade systems also tend to toss in shared management as well as networking.
@technicalfool 2 years ago
Always thought "fleet" was already a thing for servers, though maybe a "flight" given they make so much noise you'd think they're going to take off any moment.
@uncivil_engineer8013 2 years ago
A Butler's Pantry of servers
@RawBejkon 2 years ago
Really nice video!
@red5standingby419 2 years ago
Ok but there are different use cases and needs for servers. We aren't just deploying multi-gpu compute units in the data center. I'm sure 1U will continue to be a thing just fine for a very long time to come.
@neon_necromunda 2 years ago
Well, Linus will be gutted; he's just built a 1U home rig.
@magnawavezone 2 years ago
I’d agree if you need GPUs in your servers, but that’s still a niche usecase. Otherwise, nothing much I see changes. People have been cramming in super hot cpus in 1U for a long time and they will continue to do so, nothing really has changed. Of course, that’s assuming you don’t just move to AWS or GCP.
@jfbeam 1 year ago
It's not as niche as it used to be.
@asdkant 2 years ago
A whole restaurant of servers?
@elikirkwood4580 2 years ago
This one server, in 2U of rack space, has more compute power than my entire house with several servers and gaming desktops in it.
@Deveyus 2 years ago
Plural of servers? A Ruckus.
@deilusi 2 years ago
IMHO, 1U servers are a legacy from an era when the CPU and all other pieces used 150W total, with 24 PCIe lanes tops. Right now, 1U is just left for network gear and any nodes that don't have to go full bore, and the biggest ones will move up; IMHO 3U will be the next popular size, as it's a compromise between the two previous systems, packed full of devices, either disks or GPUs. Something like mining racks, but standardized as plug and play. Whatever happens, I will raise a toast to the death of those 1U-sized screaming monsters; let them burn in hell.
@silverphinex 2 years ago
I can't be the only one who finds the tone of server fans peaceful after they come down from full tilt and settle at that lower volume. I have fully fallen asleep sitting next to a full rack of servers with their fans at that nice low drone.
@raven4k998 2 years ago
well, that's why you don't sleep next to that thing cause all it takes is for a heavy workload on that thing to wake you up in the middle of the night🤣🤣
@KomradeMikhail 2 years ago
I fell asleep on a helicopter flight.... You can get used to anything over time.
@nekomakhea9440 2 years ago
Do they make these multi-node boxes in 3U or 4U sizes too, but crammed with 1U subnodes?
@casperghst42 2 years ago
Whatever happened to the Dell chassis with 4 nodes in them?
@wskinnyodden 2 years ago
Plural of Servers: A Cadre of Servers!
@dangerwr 2 years ago
(Australian accent) And here we see a wild Wendell in his natural habitat.
@timrattenbury4768 2 years ago
Just amazing ain't he
@dangerwr 2 years ago
@@timrattenbury4768 He's fucking adorable.
@chrsm 2 years ago
Sounds like my colleague's laptop with a "couple" of chrome tabs open
@frank5.3 1 year ago
With no physical constraints, does 4U or above make sense for increased cooling ability?
@JaeTLDR1 1 year ago
4U is a desktop tower size. It's very common on quad-socket and high-memory setups.