Dell's 2U Flagship: The AMD EPYC-powered PowerEdge R7525

32,807 views

ServeTheHome

Days ago

Comments: 98
@squelchedotter 3 years ago
If anyone ever makes a compilation of Patrick introducing model numbers for 10 minutes straight, I'd absolutely watch it.
@hrobayo1980 3 years ago
It would be awesome... :D
@gamebrigada2 3 years ago
I've been buying R7525s pretty much since Dell had a spec sheet to order from. I love these things. One thing that blew my mind was that the board absolutely takes advantage of all of those PCIe lanes. Dell simply made everything modular because they could. Even the iDRAC module pops into a pretty standard x8 slot. I really hope they keep this general design for a long time.
@kellymoses8566 3 years ago
Licensing SQL Server Enterprise for all cores on this would cost a cool $912,384.
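For context, that figure follows from SQL Server Enterprise's per-core licensing, which is sold in 2-core packs. Assuming the roughly $14,256 list price per pack that was current at the time (treat the exact pack price as an assumption) and two 64-core CPUs, the math works out exactly:

\[ 2 \times 64 = 128 \text{ cores} \;\Rightarrow\; 128 / 2 = 64 \text{ packs} \;\Rightarrow\; 64 \times \$14{,}256 = \$912{,}384 \]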
@dandocherty248 2 years ago
I bought this server for our company in Fremont. We got it with two 24-core AMD CPUs, 512GB of DDR4 RAM, and 24 4TB drives. It's very powerful.
@qupada42 3 years ago
Loving the comment about the bezels when buying in quantity. I honestly don't think I've ever purchased a bezel intentionally, only when I forgot to remove it from the BOM.

One thing you didn't mention that I've found interesting in this generation (the 1U R6525 is the same) is the PSU layout: one on each side of the chassis (rather than the more traditional both-on-one-side) could be either a blessing or a curse depending on where you've installed the PDUs in your datacentre. It almost certainly makes for better airflow/cooling with less in the way behind the CPUs, though, especially with these 200W+ SKUs.

As for the CPUs, there are some fun performance characteristics for certain heavy workloads. The one that's made the biggest difference to ours is the number of cores per CCX, which determines how many cores share each 16MB block of the L3 cache. The headline-grabbing 64-core parts are great and all, but they have 4 cores per cache block, which doesn't translate into the real-world performance you're really looking for. The true standout performers are the ones with 2 cores sharing each L3 cache (7F72, 7532, 7302) or with the full 16MB for each core (7F52).
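A quick sketch of the cache math behind that comment, using core counts and L3 sizes as published on AMD's spec sheets (the figures below are from memory, so verify before relying on them):

from typing import Dict, Tuple

cores_l3: Dict[str, Tuple[int, int]] = {
    "EPYC 7742": (64, 256),   # (cores, MB of L3), per public spec sheets
    "EPYC 7F72": (24, 192),
    "EPYC 7532": (32, 256),
    "EPYC 7302": (16, 128),
    "EPYC 7F52": (16, 256),
}
for sku, (cores, l3_mb) in cores_l3.items():
    ccx_count = l3_mb // 16                    # Rome carries 16 MB of L3 per CCX
    print(f"{sku}: {cores // ccx_count} core(s) per CCX, "
          f"{l3_mb / cores:.0f} MB L3 per core")

Running it shows the spread the comment describes: 4 MB of L3 per core on the 64-core flagships versus 8 MB (7F72/7532/7302) or a full 16 MB (7F52) on the frequency-optimized parts.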
@UpcraftConsulting 3 years ago
I think Dell might be warming up to AMD on enterprise. I saw they are bringing this platform to some of their turnkey enterprise solutions like VxRail, which was always one of their most conservative platforms for updates.

Just set one of these R7525 boxes up last week. Unfortunately I had it drop-shipped to the datacenter and did not get to "play" with it, so the video is nice to see what it actually looks like inside. We got the 16-core model to minimize licensing costs, as it's a small business without a ton of users. One box has replaced four old R620 machines with EqualLogic shared storage, taking the space from 6U down to 2U with higher performance overall. It is customer-owned space, but if they were paying for a hosted datacenter it would be a good deal cheaper on rack space alone.

I liked the OCP 3.0 option. We put in dual 25GbE but are only using 10GbE DACs for now; it was basically the same price as the 10GbE NIC when Dell discounted these options, so why not get the faster version just in case. (Sometimes the discounts depend on availability, so I guess the 25GbE cards had lots of stock on the shelf.)
@droknron 3 years ago
The channel is really flying, Patrick. Congratulations on your success and the hard work paying off! :)
@ServeTheHomeVideo 3 years ago
Thanks!
@Non-Fungible-Gangsta 3 years ago
That camera quality is so good.
@hugevibez 3 years ago
Never thought I'd see the day I'd see 🤤-quality B-roll of rack servers.
@juergenhaessinger4518 3 years ago
This is amazing. I wish I could afford to put this in my home lab.
@Diegor35 3 years ago
Me too, man.
@KillaBitz 3 years ago
Perfect for a Plex server. Just add a Quadro RTX 6000 and a boot USB. 8K transcode beast!!
3 years ago
They'll be going in the skip in 4-5 years' time.
@majstealth 3 years ago
@ To be replaced by servers that can go up to 16TB of RAM, not that any sane man will need that in the next 5 years. The last server we deployed was a single-CPU, 128GB machine, and that was still way too much for the client. These massive beasts are only really useful for a handful of applications, and for those it's good they exist, but the bread and butter is still the smaller ones.
@VigneshBalasubramaniam 3 years ago
Many customers are asking OEMs for firmware that doesn't blow the PSB fuses in EPYC CPUs. Hopefully more enterprise customers ask for it.
@dandocherty2927 2 years ago
I got this server running two 24-core 3rd-gen CPUs, with 7 of its 24 bays filled with 3.84TB SSDs, a 1TB NVMe RAID for the VMware ESXi 7 boot OS, and 512GB of DDR4-3200. Several 10GbE NICs. This thing is crazy powerful; love it.
@VraccasVII 3 years ago
I'm so happy that you guys have a YouTube channel; amazing that I didn't find this sooner.
@NTipton90 3 years ago
Just got one of these at work! Super excited!!!
@dupajasio4801 3 years ago
I'll be buying a few of those soon. Excellent timing and info. So many config options... And yes, VMware licensing plays a huge part in making the decision. Thx
@hariranormal5584 3 years ago
Donate me a server.
@ystebadvonschlegel3295 3 years ago
05:21 $73,218.42 (after $43,516 “discount”)! Was having fun until that moment.
@ServeTheHomeVideo 3 years ago
A huge cost driver is the 24x NVMe SSDs plus the ultra-high-end CPUs. These start at much lower prices :)
@wmopp9100 3 years ago
@@ServeTheHomeVideo Drives are always one of the biggest cost drivers (RAM being the other one).
@jfbeam 3 years ago
Starting at $2,000... lol. (And then add a pair of $7,000 processors and $20,000 of RAM.)
@JMurph2015 3 years ago
Like everyone else is saying, well played to Dell for the serious commitment to modularity on this one.
@berndeckenfels 3 years ago
Yes, great feature (however a bit limited if you consider that the biggest module is the iDRAC). Six OCP slots or something would be a monster; 2x dual-port 50GbE, 2 HBAs, hot-plug SATA DOM, and iDRAC would be good for HCI servers.
@xXfzmusicXx 3 years ago
Looks like there is one PCIe x1 slot on the board.
@philsheppard532 3 years ago
I saw two: one right of center, one at the left against a wall.
@jeyendeoso 3 years ago
This is Patrick, the new CEO of Intel, according to Dr. Ian Cutress? Hahaha
@ServeTheHomeVideo 3 years ago
Ha! TTP!
@berndeckenfels 3 years ago
I still don't see what is so great for security about locking the CPUs... maybe that would be worth an interview with Dell/AMD?
@ServeTheHomeVideo 3 years ago
We covered this a bit in the AMD PSB article/video.
@berndeckenfels 3 years ago
@@ServeTheHomeVideo Yes, you tried to ;)
@wilhelmschonfeldt5506 3 years ago
Thanks for the great video. We are actually looking at these servers as high-speed NVMe storage systems that would be served up using DataCore. Just for clarification: is the system able to take risers even when the chassis has the 24-disk NVMe backplane? Also, given that the interface is U.2, would it be able to take normal SSDs? Or would the mixed SATA/SAS + 8x NVMe option be the better bet?
@ServeTheHomeVideo 3 years ago
Riser slots are still available; only 96 of the 160 lanes in this configuration are used by the NVMe SSDs. I would look for a mixed backplane if not going all-NVMe. If you do not need all 160 lanes, you can do 128 lanes and get the extra inter-socket link for more bandwidth.
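Sketching the lane budget behind that reply (based on how EPYC 7002 2P platforms are commonly described; the 4-link vs. 3-link xGMI trade-off is the option being referenced): each Rome CPU has 128 PCIe Gen4 lanes, and in a standard 2P layout 64 lanes per socket carry the four xGMI inter-socket links, leaving \(2 \times 64 = 128\) usable lanes. The 160-lane option drops to three xGMI links, so \(2 \times (128 - 48) = 160\) lanes, of which the backplane consumes \(24 \times 4 = 96\).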
@Real_Tim_S 3 years ago
What an awesome design concept!! If only the PCIe riser card port(s) were standard, this would be the ultimate design. At least there are several ports of SlimLine 8i that are more or less industry standard. @ServeTheHome, pass along to Dell that they should just open-source the riser connector and let the market build cards and adapters for it. I could see several uses for repackaging this motherboard if all the ports were made available; as an example, it's the perfect platform concept for real-time CNC machine processing engines with more than 10 axes... With pin-outs and connector part numbers available, I wouldn't see any faults with this platform, which, given the last decade or so of stagnant proprietary system designs, is actually a huge accolade.
@ServeTheHomeVideo 3 years ago
I actually want to do away with standard x16 slots and move to everything cabled. That gives even more flexibility. I was told by a few folks in the industry that doing so costs so much more it is prohibitive at Gen4 speeds.
@Real_Tim_S 3 years ago
@@ServeTheHomeVideo I agree; cabled results in flexibility that form factors inhibit.

Clarifying question on "...I was told by a few folks in the industry that doing so costs so much more...": is this "cables are more expensive" or "PCIe x16 connectors and the huge routing congestion and space claim of the conventional PCI add-in-card form factor are more expensive"?

For cable speeds, I imagine it's an economy-of-scale question; we're seeing 200GbE QSFP and 400GbE (QSFP-DD) copper at line rates of 50Gbps per lane, and even PCIe Gen5 should be possible over similar cable materials (and then there's fiber...).

This Dell MB basically gets to what I believe is the optimal layout: direct fan-out of parallel memory and power over the shortest path, then fan-out of high-speed lanes to the shortest path via a cable connector. It's approaching a SoC/Raspberry Pi-like mentality of "get this IO off my tuned chip IO as fast as possible so that anyone can design an outer application system." A concept I find a LOT of appeal in.
@didjeramauk 3 years ago
Very interesting; something to look at. I think it would be interesting to look at replacing, say, an 8-node Hyper-V cluster with an FC-attached storage array with three of these and something like vSAN or S2D.
@kwinzman 3 years ago
Let me optionally turn off the CPU fuse blowing if I don't need that "security" feature.
@creativestarfox 3 years ago
What would be a typical use case for such an expensive, high-end server?
@tjmarx 3 years ago
Wait, AMD is doing vendor locking? This is a real problem. If it were user-controllable, so I could choose to unlock it at will, it would be a security feature. That I have no control over it whatsoever makes it an OEM protection.
@MarkRose1337 3 years ago
It's to prevent firmware tampering, something you want on a server. It basically limits the CPU to booting code that has been signed by the motherboard manufacturer. This does have the side effect of locking the CPU to that vendor.
@Real_Tim_S 3 years ago
It's a one-way door. If the system's BIOS requests that the CPU blow fuses to accept initialization microcode signed by one platform vendor, the CPU will lock to that vendor's signature. That's where you want the switch to be, so Dell is who you want to shake your fist at, not AMD.

Unlocking the CPU can't be user-controlled by design, as userspace is not a trusted environment. Should someone attempt to install a rootkit and change the platform signature or initialization payload, the CPU will refuse to accept initialization code from the BIOS (the machine won't boot). Think about that for a second: how would a CPU differentiate between an attack and a user-requested signing key reset?

I feel your pain on this issue; I prefer LinuxBoot/coreboot with hardware measurement I control over a proprietary BIOS (because I have trust issues with people who do software). Having this signing switch flipped before I would be able to put on my own firmware would require that Dell do manufacturing test with a Dell-locked CPU and then ship an un-blown part separately (with no performance guarantees) for later installation, so that I could blow the fuses with my own signing key. Pretty sure that's not an option Dell has in their configurator...
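To make that one-way door concrete, here is a toy model of the boot flow. This is a conceptual sketch only, not AMD's actual PSB implementation; the names and the hash-based stand-in "signature" are illustrative (real PSB verifies asymmetric signatures in the Platform Security Processor):

from dataclasses import dataclass
import hashlib

def digest(data: bytes) -> bytes:
    return hashlib.sha384(data).digest()

@dataclass
class Fuses:
    vendor_key_hash: bytes | None = None        # one-time-programmable field

@dataclass
class BiosImage:
    vendor_key: bytes
    body: bytes
    signature: bytes
    requests_psb: bool = True

def sign(vendor_key: bytes, body: bytes) -> bytes:
    return digest(vendor_key + body)            # toy signing, for illustration only

def boot(fuses: Fuses, img: BiosImage) -> str:
    if fuses.vendor_key_hash is None and img.requests_psb:
        fuses.vendor_key_hash = digest(img.vendor_key)   # fuse blown: irreversible
    if fuses.vendor_key_hash is not None:
        wrong_vendor = digest(img.vendor_key) != fuses.vendor_key_hash
        bad_signature = img.signature != sign(img.vendor_key, img.body)
        if wrong_vendor or bad_signature:
            return "halt: firmware not signed by the fused vendor key"
    return "boot OK"

cpu = Fuses()
dell = BiosImage(b"dell-key", b"fw-v1", sign(b"dell-key", b"fw-v1"))
other = BiosImage(b"other-key", b"fw-v1", sign(b"other-key", b"fw-v1"))
print(boot(cpu, dell))    # boot OK (fuses now locked to dell-key)
print(boot(cpu, other))   # halt: firmware not signed by the fused vendor key

The key property the sketch shows: there is no code path that clears vendor_key_hash, which is exactly the complaint in this thread.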
@TheBackyardChemist 3 years ago
@@Real_Tim_S They could have designed in a second eFuse that the owner could blow by shorting a jumper, permanently putting the CPU into an unsecured mode and removing the vendor lock.
@demoniack81 3 years ago
@@TheBackyardChemist Exactly. I understand why this feature exists, but there is no reason why it shouldn't be made reversible if you have access to the hardware.
@tjmarx 3 years ago
I understand what it's being marketed as, @Mark Rose, but there is no valid security reason to implement a feature like this without a way to turn it off; not only to make it reversible, but also to stop the fuses from blowing in the first place if you don't need the security in your use case.

This isn't a firmware lock, btw; it's a code-base lock that runs across firmware lines. That's significant because it has implications for the future. But mostly it's important to note that a security feature like this, implemented securely, would bind to the hardware, not to a vendor or a firmware line. One could attempt to make the argument that it doesn't need to be that secure because it's only trying to catch remote code execution, and that by extending it out to a vendor you gain repair flexibility. And whilst that's true, it's also the case that, because it's a remedy for remote attackers, a system to engage and disengage the security feature through physical means renders the vendor lock unnecessary.

In reality this is an implementation that takes a valid use-case feature request from some of the highest-purchasing customers and implements it in such a way that it actively works against the customer and for the OEM channel partner. I suspect this is how AMD gets large OEMs on board with their platform: pushing it to high-end customers and locking them in. I suspect anti-competition lawsuits over it by the end of the decade.
@chrisjorgensen1032 3 years ago
I have a bunch of Intel-based R7x0s running ESXi. I'd love to switch to AMD on the next refresh cycle, but mixing AMD and Intel seems like it could be a headache.
@ChristianOhlendorffKnudsen 3 years ago
160 PCIe lanes? That's pretty crazy, but I suppose they need crazy throughput to support several 100G interfaces. Edit: Checked the numbers; 100G NICs will not be the major consumer of PCIe lanes, not when we're talking out of 160 lanes. If you deck out the server with ultra-high-throughput disks, they will eat the majority of the PCIe lanes. But, really, 160 Gen4 lanes is a lot!
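Rough numbers behind that edit (per-lane throughput is approximate): a PCIe Gen4 lane moves about 1.97 GB/s after encoding overhead, so a 100GbE port at roughly 12.5 GB/s fits in a Gen4 x8 slot (\(8 \times 1.97 \approx 15.8\) GB/s). The 24 NVMe drives, at \(24 \times 4 = 96\) lanes, consume six times what even a dual-port 100G NIC in an x16 slot does.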
@mr.z5180 3 years ago
Can this server be set up with RAID to get more IOPS? Interested in buying it :)
@Mutation666 3 years ago
How expensive are those cables, though? I know my HBA PCIe cables were pretty pricey.
@ServeTheHomeVideo 3 years ago
Dell likely gets better pricing on cables than individuals do.
@Mutation666 3 years ago
@@ServeTheHomeVideo Yeah, but think about when these go second-hand and you want to change the setup.
@MarkRose1337 3 years ago
This fits well into your thesis of SATA being dead in the data center. I like the modularity of this system; I think it will last as long as PCIe Gen4 is relevant. Server of the future indeed. Watching this drinking tea from my STH mug!
@ServeTheHomeVideo 3 years ago
Tea sounds like an excellent idea before filming the next video this afternoon! Thanks for your support.
@mdd1963 3 years ago
~38 TB total (24 x 1.6 TB NVMe drives as shown) is hardly a bulk data storage bonanza/breakthrough...
@ServeTheHomeVideo 3 years ago
@@mdd1963 We often test with lower-capacity drives just due to the constraints we have. There are plenty of larger-capacity 2.5" options.
@berndeckenfels 3 years ago
@@mdd1963 24x 8TB in Tier 0.5 makes great hyperconverged storage nodes (for storage alone the 2U is a bit wasteful, but as a hypervisor node it is not too bad, especially with no PCIe switches needed for all the disks). It's of course not a good nearline option or NAS shelf.
@Owenzzz777 3 years ago
@@mdd1963 If you want to, there are 15TB NVMe SSDs or 30TB SAS SSDs out there. If you are really crazy, there are 100TB SATA SSDs. If money is no object, you can actually build much larger-capacity storage servers with SSDs.
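For scale, assuming the 15.36TB class of U.2 drives (a common top capacity at the time): \(24 \times 15.36\,\text{TB} \approx 368\,\text{TB}\) of raw NVMe flash in 2U, nearly ten times the \(24 \times 1.6\,\text{TB} = 38.4\,\text{TB}\) shown in the review unit.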
@hrobayo1980 3 years ago
It would be awesome if STH could review the Gigabyte R182-Z93...
@marouanebenderradji137 3 years ago
Can someone please explain to me what a hyperscaler is??
@hariranormal5584 3 years ago
Can the 7H12 be used alone in single-socket servers?
@ServeTheHomeVideo 3 years ago
Yes, but they are not discounted like the P-series parts.
@hariranormal5584 3 years ago
@@ServeTheHomeVideo Odd, because Geekbench only has dual-socket benchmarks of the 7H12 ;p
@tommihommi1 3 years ago
Modularity turned up to 11
@joevining2603 3 years ago
Maybe I'm the first to say it, and I might be going out on a limb here, but this seems like a very forward-looking design.
@johnmijo 3 years ago
Because we ALL know that LEDs make it GO FASTER :p
@ServeTheHomeVideo 3 years ago
This server is lucky all of the photos/B-roll were done before the latest crop of RGBWW panels arrived in the studio.
@johnmijo 3 years ago
@@ServeTheHomeVideo Well, I like to post this on the Hardware Unboxed and Gamers Nexus channels, as it is a bit of a meme about RGB ;) That being said, I prefer a more industrial look with no bling. In fact, I remember cutting the LED wires from some fans I mistakenly purchased from CompUSA; now there's a stroll down memory lane.
@tmakademia3526 12 days ago
Why not switch it on???
@ServeTheHomeVideo 11 days ago
We test these in our colocation labs in data centers. They are super loud to film next to.
@Amogh-Dongre 3 years ago
Take a shot every time he says PCIe.
@tdevosodense 3 years ago
I worked as an IT support tech many years ago, and at that point Dell was best used as a doorstopper 😉
@martinenglish6641 3 years ago
Built more like a mainframe. Good.
@webserververse5749 3 years ago
Why am I watching these videos when I know I can't afford new server hardware for my home lab?
@AchwaqKhalid 3 years ago
Sorry. *No PSB-locking servers* for me or our organization ❌
@MirkWoot 3 years ago
I am waiting for the giveaway ^^ ... holy Christ, I'd love to have this to play with... expensive toy, though. So interesting with the NVMe SSD bays, even though that's not the newest thing about this server.
@SimmanGodz 3 years ago
Fuck that platform locking noise. Better security my ass.
@berndeckenfels 3 years ago
Dell doesn't need screwless caddies since they don't sell empty caddies ;)
@ServeTheHomeVideo 3 years ago
Someone, somewhere has to do it.
@berndeckenfels 3 years ago
@@ServeTheHomeVideo Might be a 12-year-old who is happy not to be starving (bad humor attempt), or a robot planning world domination while working on the conveyor belt. BTW: isn't it unlucky to put all 4 screws in? I always stop at 3 ;)
@_MrSnrub 3 years ago
Patrick, can I trade you my R720xd for this?
@joealtona2532 3 years ago
Modularity is cool, but I'd prefer standard cross-platform I/O rather than proprietary Dell connectors. Also, CPU locking is a bummer; no thanks, Dell.
@berndeckenfels 3 years ago
Man, that iDRAC wastes more space than the MB ;)
@BaMb1N079 3 years ago
Locked to Dell? So it is not up to the customer who pays for the device, the maintenance, and the support what they're going to do with parts they own? What sick shit is that?
@2xKTfc 3 years ago
Geez, these toys are getting almost as spendy as The Signal Path's fancy toys!
@BR0KK85 3 years ago
What's next, Dell... soldered-on RAM :D
@ServeTheHomeVideo 3 years ago
Likely the video after our next one (ETA later this week/weekend) will have a Lenovo system with soldered DRAM.
@BR0KK85 3 years ago
@@ServeTheHomeVideo I knew it... everyone is doing an Apple these days... Will watch the video as soon as it hits YT.
@manuelsuazo1125 3 years ago
1k like, lol
@billymania11 3 years ago
These power-hungry rack machines are like dinosaurs compared to Apple M1 technology.
@nilswegner2881 3 years ago
No. Apple has got nothing to do with data centers.