One day apart in 2 videos: Linus "I want all my LAN-PCs in 1U, so I don't waste 1 rack-slot" - Wendell "1U is dead cause 2U is more efficient" XD
@handlealreadytaken 2 years ago
Enterprise server vs bespoke gaming chassis. However, not sure why Linus didn't just get a second rack, move the networking equipment over, and do five 4U chassis to avoid the headache. Those are easy to obtain and would let him run more common components.
@Blustride 2 years ago
In fairness, Linus isn't using the chassis fans for any significant amount of cooling, so that negates half of the reasons Wendell suggests that 1u is dead.
@wiziek 2 years ago
Linus isn't a technical person.
@EminemLovesGrapes 2 years ago
@@wiziek Nowadays he basically outsources all of the knowledge and throws either his money or his influence at the wall.
@Mallchad 2 years ago
@@handlealreadytaken His ideas were unsustainable and ended up at "I need 1 rack per computer", which pretty quickly devolves into an explosion of racks... Prob best not to buy a new rack every time he has a new idea :P
@GeoffSeeley 2 years ago
@1:39 the 1U servers aren't dead, they're just huddled together in 2U chassis for warmth.
@Jamesaepp 28 days ago
In a nutshell: 1U chassis is dead, long live 2U chassis.
@JoshLiechty 2 years ago
Having spent some time with multi-node chassis-based systems like this, my vote for a collective noun for a group of servers goes to "a cacophony."
@MiIIiIIion 2 years ago
Alternatively: "A tinnitus of servers".
@Level1Techs 2 years ago
I am getting such a kick out of these replies
@waterflame321 2 years ago
How about a "whatt?!" Because you can't hear anything over the fans
@johnmijo 2 years ago
A *MULTIPLICITY* of Nodes/Servers ?
@jannegrey 2 years ago
"Nuisance" or "Pain in the Ass" sound about right for when you have to troubleshoot them. For those rare times when everything is okay? "Hairdryer" is already taken by some GPUs, and in US English I don't know a short word for vacuum cleaner. But when you have a whole rack of them, you certainly need protective platforms, like on aircraft carriers when jets are taking off. When those fans spin up on every unit at the same time, you have the most important building block of a wind tunnel. And yes, there are wind tunnels (or at least wind simulators) that use a lot of PC fans, so you can control the flow and strength of the wind with good granularity and create uneven wind to simulate, for example, an urban environment.
@UntouchedWagons 2 years ago
A gaggle of those servers would certainly murder my power bills, and my ear drums.
@johntotten4872 2 years ago
Legend has it headphone users' ears are still bleeding. A scream of servers?
@jacobnoori 2 years ago
Finally, more server content! Please make them more frequently!
@MrLamrod174 2 years ago
A serfdom of servers 😅 Also, I hope you had hearing protection while in your comms room! That node was SUPER loud!
@dismafuggerhere2753 2 years ago
A whole restaurant of servers? I'll show myself out.
@acubley 2 years ago
You got a gen-u-wine laugh out of me!
@keithpetrino 2 years ago
A racket of servers. A reference to the fact that they're in racks but also to the noise.
@Gilgwathir 2 years ago
Wendell doing the sillies when he's excited 🙂 Love it! Also the plural of servers should be a sounder of servers (a group of wild boar is called a sounder) because they make such a racket!
@Chloiber 2 years ago
We have a few multi-node chassis from Supermicro that have been running for several years, mainly 2U quad-nodes (TwinPros, I believe). While having multiple nodes packed so densely in a single chassis is great, it comes with a major downside: the nodes often share a single backplane (which is partitioned). So if you have a failure there, you are screwed. Additionally, if you have an issue with an onboard controller, you are screwed as well: you need to replace the whole node, as you cannot simply install a backup RAID card / HBA. While yes, these things are great, you should be aware of the downsides of some of these models. Ours always ran great without any issue until I bricked an onboard controller - after half a day and many tries I was able to recover it, but it made me very aware of the downsides :-)
@Loanshark753 1 year ago
@Chloiber Do you know if server racks with shared PSUs and cooling fans exist, to centralize components? Maybe one standard-height rack with two nodes per U and three or five shared PSUs. For further energy optimization, the systems could be liquid-cooled and the rack could be powered by 400-volt direct current.
@jfbeam 1 year ago
Everything is built in these days. You're lucky if you can replace a processor or memory. (And now there's Stupid(tm) to prevent changing the processor.)
@PhoeniXfromNL 2 years ago
it's always nice when Wendell is excited about something
@survey1010 2 years ago
Thoughts on doing a walk-through of your data center / "server room"? Would be interesting to see what you're running day-to-day.
@wyattarich 2 years ago
Every time I see a new upload, I'm excited. I can't say the same about ANY other channel on YT. I love what you're doing, Wendell, never stop!
@mtothem1337 2 years ago
I get that it's not really your thing, but I think many of us would be interested in seeing builds like these that are optimized for energy efficiency / low noise instead.
@Blacklands 2 years ago
(Is your avatar Lain with a crown of roses??) Also yes, I would like to see that. I think a bunch of us (maybe even the majority?) don't have a noise-insulated server room at home!
@jmwintenn 2 years ago
The server room is built to contain the sound. They don't care how loud the servers are as long as vibration is controlled.
@morosis82 2 years ago
@@jmwintenn sort of true, but systems that need fans running at full speed constantly spend a lot of power budget on cooling and not computing.
@bernds6587 2 years ago
@@morosis82 Well, having the fans at 100% all the time makes no sense, be it for power efficiency or for wear, especially on the bearings. When Wendell entered the server room, you could hear one of the servers constantly cycling back and forth between two fan speeds -> not full fan speed. When the "new" one gets turned on, the fans spin up to full speed (PCs do that, too) and then reduce that speed after successful initialization. On fan speeds in general: a certain minimum fan speed is necessary for the fans to spin at all. I've never seen a 10k RPM fan able to spin at 1k RPM. (1U server fans can go over 20k RPM.) The combination of density and heat production makes such loud and truly "moving" fans necessary.
@im.thatoneguy 2 years ago
@@bernds6587 Unfortunately Supermicro doesn't have good fan curve controls... because they don't care. I had to write an IPMI hack script to do it on our NVMe server because they offer no customization. Their solution is "Oh, it's 1C over threshold? Time for 100% fan until it's cool enough, and then back to 25% for 5 minutes." Way more irritating than keeping the fans a little higher and holding steady.
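For anyone curious, a workaround like the one described usually boils down to polling a temperature sensor and setting the fan duty cycle by hand over IPMI. Below is a minimal sketch of that idea. It assumes a Supermicro X9/X10/X11-era BMC, where these raw commands (fan mode via 0x30 0x45, zone duty via 0x30 0x70 0x66) are widely reported to work; the sensor name, byte values, and the fan curve itself are assumptions to verify against your own board before running anything like this.

```python
#!/usr/bin/env python3
# Minimal sketch of a manual fan-curve script for a Supermicro BMC.
# Raw byte values and the "CPU Temp" sensor name vary by board generation;
# treat everything here as an assumption, not a supported interface.
import subprocess
import time

def ipmi_raw(*args):
    """Send a raw IPMI command via ipmitool."""
    subprocess.run(["ipmitool", "raw", *args], check=True)

def read_cpu_temp():
    """Parse the CPU temperature out of `ipmitool sdr` output."""
    out = subprocess.run(["ipmitool", "sdr", "type", "temperature"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "CPU Temp" in line:            # sensor name differs per board
            return float(line.split("|")[4].strip().split()[0])
    raise RuntimeError("CPU temp sensor not found")

def set_duty(percent):
    """Set fan zone 0 duty cycle (0-100); Supermicro raw command."""
    ipmi_raw("0x30", "0x70", "0x66", "0x01", "0x00", f"{percent:#04x}")

ipmi_raw("0x30", "0x45", "0x01", "0x01")  # fan mode: Full (manual control)
while True:
    temp = read_cpu_temp()
    # Gentle linear ramp instead of the stock 25%-or-100% behaviour.
    duty = max(30, min(100, int(30 + (temp - 40) * 2)))
    set_duty(duty)
    time.sleep(10)
```

Run as root (or point ipmitool at the BMC with -H/-U/-P) and the duty cycle follows the temperature instead of slamming between 25% and 100%.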
@MazeFrame 2 years ago
9:42 You can feel the current limiting making the fans start up slowly! Beauty!
@TwistedD85 2 years ago
I know I'll probably never get to work with anything like this, but it's still fun and interesting to watch. It's like I'm on a field trip to a data center and the technician is trying to make everything fun and engaging for the students :D
@robr4662 2 years ago
You may not be able to afford this but used enterprise stuff can be had extremely cheap and you can have almost as much fun. ;-)
@morosis82 2 years ago
Some of the older X10 platforms from Supermicro are getting somewhat affordable these days; the Twin family of servers isn't crazy anymore.
@Verhagenvictor 2 years ago
Wendell, my first thought on this was "huh, that kinda looks like a horizontal blade setup". What are your thoughts on that comparison? Are blades going to make a comeback?
@halbouma6720 2 years ago
I gave up thinking about dense 1U servers myself over a decade ago, because I'd run out of power long before rack space in every cabinet. Even in this video you're not able to plug more than one of these into your circuit lol. So I standardized on 2U setups for all the reasons you gave: fans for airflow, more room for storage and cards, or GPUs, etc. Plus it's easier to work on than some ultra-dense 2-servers-in-1U setup. Thanks for the video!
@killerful 2 years ago
"Definitely think you'll find that appealing" god fucking dammit😂
@nukedathlonman 2 years ago
Big agreement - a 2U chassis with 2U redundant PSUs and a full 2U cooling system combined with doubled-up 1U internals makes much more sense for space utilization and redundancy.
@TheClumsySpectre2 2 years ago
Do you think eventually we'll move to 4U equivalents? There, one power supply failure would still leave 3 PSUs for 4 systems, which would proportionally offer more power per system and keep redundancy even with one unit down. Could also use larger fans again.
@Dan-Simms 2 years ago
Clicking the link and commenting here for your engagement. Cheers bud, keep up the great work!
@declanmcardle 2 years ago
@8:20 - "it's an older cord, but it checks out..."
@t.m.grokas6832 2 years ago
I paused @7:23 and accidentally discovered your next video's thumbnail. Editor Autumn, you're welcome.
@Level1Techs 2 years ago
That was actually one of the contenders for this video lol! Fun fact, all the thumbnails are created with assets from the video it is being made for. ~ Editor Autumn
@LiLBitsDK 2 years ago
watching Wendell booting up a server being blasted by the air is like watching a kid in a giant candy store for the first time in their life :D
@MarkRose1337 2 years ago
1U never made sense to me, for the reasons mentioned for going 2U in this video. Take it to its logical extreme though and you're back to blades of some sort!
@christopherjackson2157 2 years ago
It arguably could have made sense in some extreme circumstances back when Intel was limiting everyone to 4 cores per socket. For customers looking to run a couple of hundred or thousand cores it could save them the cost of building a new physical space. But that was quite a while back now lol.
@Cynyr 2 years ago
Everything old is new again.
@jackhildebrandt7797 2 years ago
Dang, I was excited for Wendell to look at one of the Cray EX liquid-cooled nodes.
@wskinnyodden 2 years ago
So Server Cadres based around 1U Servers are going the way of the Dodo and instead we'll have some sort of Irish based Server Cadre Datacenters around "U2" nodes :P
@ajr993 2 years ago
Both HPE and Dell sell a lot of servers in the 1U form factor. For example, the HPE ProLiant line has a lot of cheaper 1U configurations like the DL325. No, it's not used in a datacenter, but there's a huge use case for racks outside of a datacenter: enterprise customers need racks but don't have an entire datacenter. 1U is not dead at all in the SMB space.
@somehow_not_helpfulATcrap 2 years ago
What do you hear when you put your ear up next to a 1U server fan? Nothing from then on.
@llortaton2834 2 years ago
AHAH, jokes on you wendell, my 4U ATX compliant consumer grade server will NEVER DIE :D
@velo1337 2 years ago
It also comes down to whether you are single-tenant or multi-tenant and how the SLAs are structured. Those 1Us are damn cheap; we swap them out like underwear :) They are also very interesting if the stuff you run doesn't need a lot of compute, like webservers and such. For database servers you are usually running a 4U server, since you need the PCIe slots.
@Phynix72 2 years ago
Reading your thumbnail, Linus is crying over his recent build. From a continent away I can hear "Why, Wendell? Why?"🤣
@andreas7944 2 years ago
If Wendell says it - I believe it. He might be wrong, but do I really care? It comes down to opinion, and his arguments are reasonable. That is all I care about. Please, Wendell, try having as many children as you can. We need more people like you.
@BigHeadClan 1 year ago
One of my past clients consolidated down from about 40 racks to 20 by snagging a few c6000 blade chassis and virtualizing a lot of their older hardware. 16 bays for servers per chassis in 10U of rack space is some pretty solid density. This type of 2-node setup probably makes more sense from an engineering perspective, but I always appreciated how scalable the blade chassis design was. If you need to populate a free bay or upgrade one of the blades, you just plop the new one in and away you go. No need to re-rack or fiddle around with rails, re-run cables, etc. That said, it does suffer from the size restrictions of a blade chassis, which is even smaller than a 1U server, so fan pressure and the other issues Wendell raised are still a problem.
@jfbeam 1 year ago
His systems are for massing GPUs. This little 2U thing is one of the few ways to do that without having to sell body parts. For you and me, who care about general-purpose computing, blades have been the way to go for decades. (But it does often mean settling for vendor lock-in, and once they know you're on the hook, the deep discounts go away.)
@TheBitKrieger 2 years ago
So we came full circle and blade centers are cool again?
@markmulder996 2 years ago
And here is Linus (LTT) just now building five 1U gaming systems ;)
@СусаннаСергеевна 2 years ago
To be fair a gaming computer doesn’t need redundancy or anywhere near as much cooling, which is what this video is about. Linus outsources the cooling to an external radiator anyway. Linus’ new gaming computer is stupid for many reasons, and while the 1U rack case is definitely one of them, a 2U case wouldn’t have been any better. The issue there is insisting on stationary PCs in the first place. The premise of the video was that he needed something unobtrusive for his children to game on. Instead of a server closet we know he won’t take proper care of, the solution is to just get them macbooks with thunderbolt docks instead. Plug it in at home and it’s a decent gaming rig, bring it to school and it’s a good study computer. With actually good parental controls. Unless you actually need a full-power workstation, desktop PCs are almost never the right answer today.
@markmulder996 2 years ago
@@СусаннаСергеевна I know, the timing is just funny. One day Linus is building five 1U rackmount gaming systems, and the day after there's Wendell saying 1U is dead :) But of course it's two entirely different situations, especially since Wendell is talking enterprise, and Linus, as advanced as his setup may be, is still talking about home usage.
@Paktosan 2 years ago
So this is basically the comeback of the blade server, just on a smaller scale? We still have a six-blade system from Intel in the basement for testing purposes; some features are really cool. Failed node? No worries, the chassis will automatically relocate the virtual drive to a spare blade and boot it back up, with almost no downtime.
@JaeTLDR1 1 year ago
Blades share way more. This is just power and cooling being shared
@andljoy 2 years ago
9:41 Sounds you don't want to hear when you are at the back of a messy rack. Happened to me last week when I was trying to clean up some old shit at the back of a rack, and all of a sudden our Pure Storage started sounding like a jet taking off as I knocked a PSU out :D. This server just screams VDI at me.
@solidreactor 2 years ago
Is there a benefit to go even further with a "4U 4-Node" configuration? Or are there some diminishing returns after a 2U 2-Node config?
@WilReid 2 years ago
The returns are virtually fully realized with 2U, because it gets you 89mm of height for decent-sized fans. 3U would get you 120mm, but servers rely so much more on pressure that going from 80mm to 120mm fans would see very little benefit. Noise reduction would be most of it, and the industry has already come to terms with noise from racks. 3U or taller would get you full PCI card height perpendicular to the mainboard, but angle adapters and risers have gotten around that for a decade now.
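The area argument is easy to put rough numbers on. A quick back-of-the-envelope comparison (hub diameters are assumptions for illustration, not measured values):

```python
# Back-of-envelope swept-area comparison for typical chassis fan sizes.
# Hub diameters are assumed; purely illustrative.
import math

fans = {"1U (40mm)": (40, 16), "2U (80mm)": (80, 28), "3U (120mm)": (120, 40)}

base = None
for name, (dia, hub) in fans.items():
    area = math.pi / 4 * (dia**2 - hub**2)  # annulus between blade tips and hub, mm^2
    base = base or area
    print(f"{name}: {area / 100:6.1f} cm^2 ({area / base:.1f}x the 1U fan)")
```

With these assumed sizes, 40mm to 80mm is roughly a 4x jump in swept area, while 80mm to 120mm adds only about 2.3x more and costs an extra U of density, which matches the point above.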
@R055LE.1 2 years ago
Haven't blades been following this principle for like.. ever?
@bret44 2 years ago
Is there a spot for a fourth GPU? Frontier says it uses 4 GPUs per CPU; is this the same chassis? Also, what is meant by "Frontier has coherent interconnects between CPUs and GPUs" (Wikipedia)? Are these interconnects physical?
@boomerau 2 years ago
I've also seen the side-by-side HP left-and-right GPU 4RU servers. Basically this is a change in blade chassis form factor and capital investment.
@zector0 2 years ago
Imagine how his mind will explode the first time he sees a BladeCenter.
@KangoV 2 years ago
They are the same cables I have throughout my house :) Cool video :)
@nicholaswoods9066 4 months ago
Thank you for the informative video, Cheers mate
@Dexerinos 2 years ago
I saw that!!! You didn't screw in the rail screws :P
@nihalrahman7447 2 years ago
Wendell and LTT's Anthony should collab. Talk about general server stuff, Linux distros, and how to dominate the world.
@joemarais7683 2 years ago
That’ll never happen. The powers that be would never let that much nerd power collect in one room
@alexmartinelli6231 2 years ago
That would be EXTREMELY cool. Hope it happens someday
@tvmcrusher 1 year ago
7:41 From here on out you can hear the maddening sound of an SPC being nearby.
@leviathanpriim3951 2 years ago
Wendell and Steve. Sit down, nerds, the chosen ones are on screen.
@probusen 2 years ago
Redundancy is everything. 7x HPE DL360 with dual 800W PSUs has been a lifesaver many times. EPYC 24-core, 512GB of RAM and 6x 1.92TB of storage in vSAN. No, 1U servers will live a long time. :)
@jfbeam 1 year ago
No *modern* 1U server will live a long time. (I have plenty from the long long ago that still work perfectly. But they don't draw more power than my entire neighborhood.)
@AlwaysStaringSkyward 2 years ago
@Level1Techs serious question: why are we using PSUs in servers? We used to have rack or cage level DC power fed to the servers on DC busses. It was safe, centralised, efficient and could be triple redundant. It left 100% of the space in every server for doing work and every server could be yanked out for maintenance without affecting the others.
@willcurry6964 1 year ago
You always have great informative videos. Some are a little too complex for me, a non-IT guy. I now know I need a chassis (not rack mount) server and that the server should have E1.S drives... maybe start with 6-7 TB drives... don't know where to buy.
@goblinphreak2132 2 years ago
I just realized the music you use gives me Contraption Zack vibes, if you remember that game from the DOS days.
@majstealth 1 year ago
Maintaining these will be a cramped and warm hot-aisle job.
@JW-uC 2 years ago
Isn't it just a cut-down 2U-style "blade server" box? Obviously the blades in this 2U are horizontal, and the original blades were vertical (with 8+ blades) and, if I recall, didn't have space for a graphics card... but still. That said, I guess if you put the thing on its side, made the "box" square, and had space for multiple "blades", you'd still not get any extra density, because you'd still need multiple sets of redundant power supplies. As backplanes are much less of a thing now, with such high-speed serial network cards, you'd also not gain much by using some kind of backplane system either.
@ETtheOG 2 years ago
A "Banquet of Servers" maybe :o?
@kevlarandchrome 2 years ago
I love how the sound of the fans comes together for a kind of screams of the damned from far away in old horror movies sound, very season appropriate. The hardware's pretty damned dope too.
@jimecherry 2 years ago
banshee fans
@ghostbirdofprey 2 years ago
Suddenly I wonder if there's a supercomputer or other cluster named "Banshee"
@losttownstreet3409 2 years ago
Floor space was the limiting factor a long time ago; now you can design a board with off-the-shelf components, have it run through a pick-and-place factory in China, and get your custom board if you are really tight on space. Now power and cooling are the most limiting factors. Think a few years back, when you had to offer each and every customer a full server, as virtualization wasn't a big factor; now you run 100-400 virtual servers in a 2-4U unit. Before this, you put as many FPGAs (those $10,000-$200,000 "CPUs") in one case as you physically could, and if you really wanted to run huge loads you could always press the roll-out button in Xilinx Vivado. Now you have access to the virtual cloud: F1 instances ($8,000-$50,000 CPUs) and virtual cloud GPUs.
@movax20h 1 year ago
The thing is, if you colocate and use a lot of power, it does not really matter whether you use 1U or 2U; it's going to cost you almost the same, because the primary cost will be power. If you have a colo or DC that can deliver a lot of power to the rack, then it is not about optimizing cost, but rather a quest for how many machines you can put in a single rack or a few adjacent racks, so they are all connected over a very fast network. I rent a rack in Germany, and I am limited by power and network: I cannot add more servers, because I do not have enough power in the rack or ports in the switches. I even have a few empty units, because I am basically at the limit. I cannot switch everything from 1U to 2U, but if I can cram more into 1U by upgrading to higher density, or replace 2x 1U with a 2U that is actually more efficient, I will definitely do it. We use a lot of Kubernetes for compute, Ceph for storage, and a few hosts for virtualization (Proxmox). 2U dual-node is definitely more interesting than blade systems; blades were always too expensive, requiring too much licensing and special setups. A hybrid like this, without an expensive chassis, is perfect.
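That trade-off is easy to sketch: with fixed power, space, and switch-port budgets, the node count is set by whichever constraint binds first. A toy model, with entirely hypothetical numbers:

```python
# Toy rack-budget model: which constraint (space, power, ports) binds first?
# All numbers are hypothetical; substitute your own colo's limits.
def max_nodes(rack_u=42, power_kw=8.0, switch_ports=48,
              node_u=1.0, node_kw=0.45, ports_per_node=2):
    by_space = int(rack_u / node_u)
    by_power = int(power_kw / node_kw)
    by_ports = int(switch_ports / ports_per_node)
    limit = min(by_space, by_power, by_ports)
    binding = {by_space: "space", by_power: "power", by_ports: "ports"}[limit]  # ties resolve arbitrarily
    return limit, binding

for label, kwargs in [("standalone 1U", dict(node_kw=0.45)),
                      ("2U dual-node, per node", dict(node_kw=0.40))]:
    n, why = max_nodes(**kwargs)
    print(f"{label}: {n} nodes, limited by {why}")
```

With these made-up figures the rack runs out of power at 17-20 nodes, long before it runs out of U, which is exactly the situation described above: shared PSUs and fans only help insofar as they shave watts per node.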
@MarkRose1337 2 years ago
Well a server is a box, the plural of which is boxen. And two oxen are called a yoke. So that server could a yoke of boxen. But I suppose for more than two it would be a herd. A herd of boxen.
@AndirHon 2 years ago
box·en | \ ˈbäksən \ Definition of boxen archaic : of, like, or relating to boxwood or the box
@MarkRose1337 2 years ago
@@AndirHon I prefer the Jargon file definition: boxen: pl.n. [very common; by analogy with VAXen] Fanciful plural of box often encountered in the phrase ‘Unix boxen’, used to describe commodity Unix hardware. The connotation is that any two Unix boxen are interchangeable.
@KingTheRat 2 years ago
HP C7000 has entered chat
@airman_85uk 2 years ago
Would be nice to know what kind of use cases these servers could fill in 5-6 years, when they get decommissioned and end up in the hands of homelabbers…
@muadeeb 2 years ago
I have an old 4 node system that I use as a Virtualization cluster
@GooberBrainTrollingCorp 1 year ago
7:40 THIS LOOKS AND SOUNDS LIKE AN INTRO TO A HORROR MOVIE
@DMSparky 2 years ago
I’m sorry in advance. But can it run Crysis?
@NathansWorkshop 9 months ago
5:50 RAWWWWWWWWWWWRRRRRRRRRR
@JamieStuff 2 years ago
If rack mount, is it "a scream of servers"???
@Timi7007 2 years ago
Blade servers all over again^^
@prashanthb6521 2 years ago
4U with silent 120mm fans would be nice.
@Blacklands 2 years ago
There's a bunch of cases on the market for this now! Some even support liquid cooling. Sliger makes some (expensive though).
@Elemental-IT 2 years ago
I have that same rack monitor, but some idiot cut the cord to the monitor as well as the keyboard/mouse combo. The VGA was a PITA, but standard... and I had both parts. The keyboard is not standard, and I am missing the connectors. I really wish I had a way to figure out the pinout, because 8 wires seems like it should be 2 PS/2 connectors.
@mhavock 2 years ago
We've been using 2U for a while. One U is for hardware and the other is for making the grilled cheese sandwiches, and the top is for hot drinks or a hot plate. Boss thinks we are always busy; yeah, we are busy running Prime and disk tests so the food cooks faster. LOL 🤣
@chrisbaker8533 2 years ago
I like the compute density, but that backwards mounting is a deal-killer for me. Given how much of a 'rats nest' the rear of a server rack often is, I really don't think I want to deal with that every time I have a failure or need to do something with it.
@Skungalunga 2 years ago
So basically we're moving back to blade chassis?
@GameCyborgCh 2 years ago
a full restaurant of servers
@SlurP667 2 years ago
*opens server room door* I can hear the children screaming!
@Cadaverine1990 2 years ago
The 2U is honestly dead too; the datacenter I work with is moving completely to HPE Synergy 12000 frames. These can be configured with 12 blade modules, each hosting dual 28-core Xeons with up to 4.5TB of RAM and a T4 accelerator card. Thus 10U will hold 24x 28-core Xeons, 54TB of RAM and 12 T4 cards. Everything runs on VMs, and in the networking of the unit there is zero trust between the internal machines. If the size of the datacenter is a concern, they should be looking into 52U racks; just doing that will increase the capacity of your site by around 25%.
@jakevanvliet 2 years ago
A 1RU Intel server (thinking Dell PowerEdge R650) can have 2x 40-core Xeon Platinums, 8TB RAM, 3x T4s or A2s, and dedicated 4x 25Gb Ethernet. In 10RU, that's 800 cores (40 cores x 2 sockets x 10 servers), 80TB RAM, 30 GPUs, and 100Gb of dedicated networking per node. Different scenarios and use cases call for different requirements. 1RU servers are not dead. 2RU servers are not dead. Blades are not dead. None of them should die; that's what gives you the ability to get a solution that best fits your environment.
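The per-rack multiplication in these two comments generalizes to a one-liner. The specs below are copied from the comments themselves (the commenters' claims, not verified configurations):

```python
# Per-10U totals from the two comments above; specs are the commenters' claims.
def per_10u(nodes, cores_per_node, ram_tb, gpus, net_gb):
    return {"cores": nodes * cores_per_node, "RAM (TB)": nodes * ram_tb,
            "GPUs": nodes * gpus, "network (Gb)": nodes * net_gb}

synergy = per_10u(nodes=12, cores_per_node=2 * 28, ram_tb=4.5, gpus=1, net_gb=0)  # networking not specified
r650s = per_10u(nodes=10, cores_per_node=2 * 40, ram_tb=8, gpus=3, net_gb=4 * 25)
print("Synergy 12000 frame:", synergy)  # 672 cores, 54.0 TB RAM, 12 GPUs
print("10x 1U PowerEdge:  ", r650s)     # 800 cores, 80 TB RAM, 30 GPUs
```

Both land in the same ballpark, which is rather the reply's point: the right form factor depends on which column you care about.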
@fracturedlife1393 2 years ago
An Epyc of Servers
@Technopath47 2 years ago
All I can think is that the Frontier supercomputer shares a name with the worst ISP I've ever had the misfortune of dealing with.
@beauslim 2 years ago
This is definitely a "why didn't they think of this before" thing. Fans are why 3U is my favourite form factor for DIY rack-case builds. Unfortunately, 3U is kind of a rarity.
@cynicaloutlook 2 years ago
They have thought of this before, and at even higher density. Dell's current lineup includes the PowerEdge FX, which has 4 slots (half-width 1U blades), but the concept goes back a few years to the PowerEdge M-series.
@jp-ny2pd 2 years ago
Personally I'm a fan of the Supermicro MicroCloud servers for our colo. We deploy the 8-node configuration because we like being able to swap the drives without downing the node or running into spacing issues with PDUs in the back of the rack. The 12 and 24 node solutions are nice but a bit more of a pain to do any sort of maintenance on and less tolerant of rack configurations.
@jfbeam 1 year ago
2U has always been more efficient... a 2U fan can simply move more air, period. My former employer resisted this almost to their last breath; with two 150W CPUs in the box, their hand was forced. Originally, the only 2U boxes existed because that was the only way to get 2 power supplies, but there are plenty of tiny PSUs these days. (The system shown here _could_ be done in 1U, as there are 1kW 1U PSUs, but air cooling it would be difficult.) (To do 1U for our systems would require a load of 15k RPM fans - $30/ea, not $3 - and they'd last a year, not 3-5. And they needed solid copper heatsinks, which were 100x more expensive than aluminum.)
@todayonthebench 2 years ago
In short: the main advantages of blade systems are still relevant - shared redundant power and cooling. Though blade systems also tend to toss in shared management as well as networking.
@technicalfool 2 years ago
Always thought "fleet" was already a thing for servers, though maybe a "flight" given they make so much noise you'd think they're going to take off any moment.
@uncivil_engineer8013 2 years ago
A Butler's Pantry of servers
@RawBejkon 2 years ago
Really nice video!
@red5standingby419 2 years ago
Ok, but there are different use cases and needs for servers. We aren't all just deploying multi-GPU compute units in the data center. I'm sure 1U will continue to be a thing just fine for a very long time to come.
@neon_necromunda 2 years ago
Well, Linus will be gutted, he's just built a 1U home rig.
@magnawavezone 2 years ago
I'd agree if you need GPUs in your servers, but that's still a niche use case. Otherwise, not much that I see changes. People have been cramming super-hot CPUs into 1U for a long time, and they will continue to do so; nothing has really changed. Of course, that's assuming you don't just move to AWS or GCP.
@jfbeam 1 year ago
It's not as niche as it used to be.
@asdkant 2 years ago
A whole restaurant of servers?
@elikirkwood4580 2 years ago
This one server, in 2U of rack space, has more compute power than my entire house with several servers and gaming desktops in it.
@Deveyus 2 years ago
Plural of servers? A Ruckus.
@deilusi 2 years ago
IMHO, 1U servers are a legacy from an era when the CPU and all other pieces used 150W total, with 24 PCIe lanes tops. Right now, 1U is just left for networking and any nodes that don't have to go full bore; the biggest ones will move to bigger chassis. IMHO 3U will be the next popular size, as it's a compromise between the two previous systems: packed full of devices, either disks or GPUs. Something like mining racks, but standardized as plug and play. Whatever happens, I will raise a toast to the death of those 1U-sized screaming monsters; let them burn in hell.
@silverphinex 2 years ago
I can't be the only one who finds the tone of server fans peaceful after they come down from full tilt and settle at that lower volume. I have fully fallen asleep sitting next to a full rack of servers with their fans at that nice low drone.
@raven4k998 2 years ago
Well, that's why you don't sleep next to that thing, cause all it takes is a heavy workload to wake you up in the middle of the night🤣🤣
@KomradeMikhail 2 years ago
I fell asleep on a helicopter flight.... You can get used to anything over time.
@nekomakhea9440 2 years ago
Do they make these multi-node boxes in 3U or 4U sizes too, but crammed with 1U subnodes?
@casperghst42 2 years ago
Whatever happened to the Dell chassis with 4 nodes in them?
@wskinnyodden 2 years ago
Plural of Servers: A Cadre of Servers!
@dangerwr 2 years ago
(Australian accent) And here we see a wild Wendell in his natural habitat.
@timrattenbury4768 2 years ago
Just amazing ain't he
@dangerwr 2 years ago
@@timrattenbury4768 He's fucking adorable.
@chrsm 2 years ago
Sounds like my colleague's laptop with a "couple" of chrome tabs open
@frank5.3 1 year ago
With no physical constraints, does 4U or above make sense for increased cooling ability?
@JaeTLDR1 1 year ago
4U is a desktop tower size. It's very common for quad-socket and high-memory systems.