8TB RAID 0 Guide: 4x PCIe NVMe Adapters Tested - Which Adapter Reigns Supreme? HP Z840 Workstation

6,972 views

RACERRRZ

Comments: 13
@raceteker • 1 year ago
Another fun, yet informative video! A couple of notes: 1. I've seen others suggest that 1TB NVMes tend to be a bit quicker than 2TB NVMes when used in RAID arrays. 2. A few other videos I've seen recently testing speed and thermals suggest that PCIe 4.0 NVMes (the drives themselves, not the adapters) run a bit cooler while still achieving the same, if not better, read/write speeds. I wonder if this is like 4- and 8-cylinder engines travelling at the same speed: both can reach it, but one does it with less effort (and less heat). Thanks again and keep 'em coming for us Z-enthusiasts!
@racerrrz • 1 year ago
Thank you, I am glad you enjoyed it. I had not picked up on that detail, nor have I tested the 1TB drives in RAID 0 just yet. I had assumed it was the lack of DRAM that slowed them down, but maybe there is more to it (I don't have a set of 4x DRAM-equipped NVMes to test). I'll see if I can clear a couple of 970 Evo Plus NVMes to do a quick check against the 980s. The ~2000MB/s ceiling I saw on each NVMe in RAID 0 doesn't seem all that random (it matches the old PCIe 2.0 x4 limit), and I presume something is restricting the speed of these drives. Being DRAM-less forces them to lean more heavily on the CPU and system RAM, and that may be the bottleneck.

The ADATA Legend 800 NVMes are a bit of a hybrid in that they are PCIe 4.0 drives with PCIe 3.0-class speeds. They may not be the fastest drives around, but what I liked was the endurance rating (2TB; 1200TBW) and the fact that they run at PCIe 3.0 speeds, which is well matched to the Z840 (Samsung 980 Pros being bottlenecked by the Z840's PCIe 3.0 would be a bit of a shame). Likely yes, the efficiency of PCIe 4.0 NVMes must have improved to net higher speeds at lower temperatures / more torque / more area under the curve. The question is, are the newer Gen 4 NVMes the V8 or the tuned 4-cylinder with variable valve timing and boost? haha (Older and running hotter suggests V8 for the Gen 3 NVMes lol).
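A quick back-of-the-envelope sketch of the per-drive link ceilings discussed above (standard PCIe encoding figures only; real drives land below these, and nothing here is specific to the adapters in the video):

```python
# Rough per-drive link ceilings for an x4 NVMe, ignoring protocol overhead.
# PCIe 2.0 uses 8b/10b encoding; PCIe 3.0 and 4.0 use 128b/130b.
GT_PER_LANE = {"PCIe 2.0": 5.0, "PCIe 3.0": 8.0, "PCIe 4.0": 16.0}
ENCODING = {"PCIe 2.0": 8 / 10, "PCIe 3.0": 128 / 130, "PCIe 4.0": 128 / 130}
LANES = 4  # each M.2 socket gets x4 once the x16 slot is bifurcated

for gen, gt in GT_PER_LANE.items():
    mb_per_lane = gt * ENCODING[gen] * 1000 / 8  # GT/s -> MB/s per lane
    print(f"{gen}: ~{mb_per_lane * LANES:,.0f} MB/s per x4 drive")

# PCIe 2.0: ~2,000 MB/s  <- matches the ~2000MB/s-per-NVMe ceiling mentioned above
# PCIe 3.0: ~3,938 MB/s
# PCIe 4.0: ~7,877 MB/s
```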
@milanhosek343 • 25 days ago
Hello, thank you for the great video. Any chance any of these would fit into my HP Z600 v2?
@racerrrz • 24 days ago
That's a good question. Technically they will fit, but there is a limitation that will prevent the system from detecting more than one NVMe on these adapters: bifurcation. The Z400, Z600, Z800, Z420, Z620 and Z820 lack bifurcation support in the BIOS. There are some technical workarounds that may work, but they come at a cost. One worth considering is an NVMe PCIe adapter with a PLX chip, which does the lane switching on the card itself, so it doesn't depend on the motherboard supporting bifurcation. I have not had the privilege of testing one (mostly due to cost), but that would be the only way to get multiple NVMes working in a single PCIe slot on the older workstations.
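For anyone trying one of these quad cards in a non-bifurcating board, a minimal sketch of how to check what the host actually enumerates (Linux sysfs; the paths are standard, but the Z600 itself was not tested here):

```python
# List the NVMe controllers the kernel has enumerated, with their PCIe addresses.
# On a board without bifurcation, a quad-M.2 card (without a PLX switch) will
# typically show only the drive in the first M.2 socket.
import os

SYSFS_NVME = "/sys/class/nvme"

for ctrl in sorted(os.listdir(SYSFS_NVME)):
    dev_path = os.path.realpath(os.path.join(SYSFS_NVME, ctrl, "device"))
    pci_addr = os.path.basename(dev_path)  # e.g. "0000:04:00.0"
    with open(os.path.join(SYSFS_NVME, ctrl, "model")) as f:
        model = f.read().strip()
    print(f"{ctrl}: {model} @ {pci_addr}")
```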
@CheapSushi • 1 year ago
It doesn't matter much for newer hardware since it tends to have bifurcation, but some older motherboards from X99, and especially X79, don't have it. So something to be careful with. My ASUS X99 workstation boards don't have it. There are adapter boards with PLX-like switches on them that let you run multiple drives per card, but they are of course more expensive.
@racerrrz • 1 year ago
Hi, yes that is correct. NVMe booting and PCIe bifurcation support vary between motherboards and manufacturers, so support needs to be checked. The HP workstations (from ~2014 onward) gained support through BIOS updates (as an example, the HP Z440, which supports bifurcation, uses the same CPU socket as X99). I have read about NVMe adapters with a PLX chip, but they are not all that common and do come at a steep price. Those prices are coming down, though, and right now something like the IO CREST quad M.2 NVMe to PCIe 3.0 x16 adapter (which has a PLX chip) costs nearly the same as the HP Z Turbo Drive Quad Pro. For ~$300 USD you could likely get your ASUS X99 workstation board running multiple NVMe drives in one slot. Hopefully you did get a BIOS update at some point to allow booting from NVMe drives via PCIe?
@georgeburtner1174 • 1 year ago
Regarding the unexpected speed results, in which order were the adapters tested? Was the first adapter retested at the end? Wear, or wear management internal to the drives, might have had a large effect, especially on brand-new drives. Retesting the slowest and fastest adapters would be pretty telling, even if the drives have been used extensively since the first test.
@racerrrz • 1 year ago
Hi there. Thank you for the suggestions. I agree, it would be worth retesting the AORUS and probably the HP Z Turbo Drive Quad Pro with the same drives for comparison (the Jeyi U.2 adapter is way too much work to assemble! haha). I have not had a chance to fully fold the new NVMes into my workflow, so they are still in the state they were in when I finished the Jeyi U.2 test. All the adapters were tested back-to-back (same day/night, in order: #1 AORUS, #2 Asus, #3 HP, #4 Jeyi U.2). The transferred files were deleted between adapter tests, but the drives were not wiped and the RAID 0 array was not rebuilt between tests. Given the relatively small data load, I would not expect the NVMe performance to change drastically between tests - I estimate less than 200GB was written in each adapter test, roughly 50GB per drive once the RAID 0 striping is factored in, so on the order of 200GB of "wear" per NVMe between adapter #1 and adapter #4. They do slow down dramatically once they fill up with data, but that is less likely to be an issue here. The ideal would be to buy 16x 2TB NVMes so each adapter gets a fresh set, but that's a bit excessive on the budget! No, I didn't retest the AORUS adapter at the end, mostly because I needed it for work after testing - it is still in daily use with a set of 4x 1TB Samsung 980s (not in RAID; I have various software on them and it would be a pain to remap it all).

I have a new video nearly ready that looks at the AORUS data in more detail, with results from three different speed tests (Blackmagic Disk Speed Test, ATTO Disk Benchmark and CrystalDiskMark), and the speeds were all slower than expected for the AORUS (max ~3500MB/s read and write in RAID 0). I would want to test a set of four DRAM-equipped NVMes, and ideally a set of four PCIe 4.0 NVMes in a PCIe 4.0 slot (I don't have a PCIe 4.0 motherboard on hand). If I get some spare time I'll go back for another test on the AORUS - same ADATA Legend 800 NVMes, but a new run. My hypothesis is that the AORUS doesn't handle more than ~3500MB/s read/write over a PCIe 3.0 interface, but I can't quite test this and I haven't found any videos online that would confirm it (it performs as expected on PCIe 4.0 from what I have seen online). The only other thing I noticed was that drive capacity might also play a role in speeds, but most of the videos I found on the AORUS used 1TB NVMes.

Side note: all four adapters were slower than the theoretical maximum for RAID 0 on these NVMes in a x16 PCIe 3.0 slot, which I put closer to 14000MB/s (ignoring overhead, data distribution efficiency, PCIe slot limitations etc.). But I suspect the slower speeds (i.e. a RAID 0 read speed of 8000MB/s = ~2000MB/s per NVMe) were at least partly a consequence of the NVMes not having DRAM. For comparison, check out BuildOrBuy's channel - he has a very methodical approach to his testing.
He saw decent speeds on the AORUS, but he used a different system (Gigabyte TRX40 Designare) with Gen4 hardware, and not 4x NVMes in RAID 0: kzbin.info/www/bejne/eYOQYpWbiN2Bp6c
Some of his AORUS CrystalDiskMark data (peak sequential; SEQ1M Q8T1 @ 1GiB):
980 Pro 1TB: Read 6612MB/s, Write 4957MB/s (@ 61°C max)
SN850 1TB: Read 4012MB/s, Write 4340MB/s (pre-firmware update, @ 58°C max)
SN850 1TB: Read 7071MB/s, Write 5215MB/s (post-firmware update, @ 66°C max)
Edit: another video showcasing the AORUS's true potential (4x 2TB Sabrent Rockets in RAID 0 on a TRX40 AMD system with PCIe 4.0): kzbin.info/www/bejne/i16uc5p7jdeKgqM
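The ~14000MB/s "theoretical maximum" mentioned above lines up with the raw PCIe 3.0 x16 link rate minus some overhead; a small sketch of that arithmetic (the 8000MB/s figure is the RAID 0 read result quoted in the comment, the rest is standard PCIe math):

```python
# Rough ceilings for 4x NVMe in RAID 0 behind a PCIe 3.0 x16 slot.
# Link-rate math only; protocol overhead, striping efficiency and the
# drives themselves keep real-world numbers well below this.
LANE_MB_S_GEN3 = 8.0 * (128 / 130) * 1000 / 8  # ~985 MB/s per PCIe 3.0 lane

slot_raw = LANE_MB_S_GEN3 * 16  # ~15,754 MB/s raw for the x16 slot (4 drives at x4 each)
observed = 4 * 2000             # ~8,000 MB/s RAID 0 read reported above

print(f"PCIe 3.0 x16 raw ceiling : ~{slot_raw:,.0f} MB/s")
print(f"Observed RAID 0 read     : ~{observed:,.0f} MB/s "
      f"({observed / slot_raw:.0%} of the raw link ceiling)")
```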
@user-zxc127 • 1 year ago
How do you create a RAID 0 array of 4 SSDs on an HP Z640 workstation?
@racerrrz • 1 year ago
Hi there. My next video covers this, but I am busy with new projects and haven't had a chance to finish it, sorry about that! RAID 0 setup on the Z640 is identical to the Z840. I can recommend the Asus Hyper M.2 V2 adapter for the 4x NVMes, since I got decent speeds with it in my Z840. The easiest method for RAID 0 is software RAID in Windows 10 (or in Linux etc.). As a quick guide (a scripted equivalent is sketched below this comment):
1. Create the RAID array: after installing the NVMe adapter and setting the slot to x4x4x4x4 lane bifurcation in the Z640 BIOS, open "Disk Management" in Windows 10, right-click the unallocated space on one of the NVMe drives, select "New Striped Volume...", then follow the wizard and add the other NVMe drives you want in the RAID 0 array.
2. Format the RAID volume: once the RAID 0 array is created, format it with a file system of your choice (e.g. NTFS).
3. Verify the RAID configuration: check that the RAID 0 volume appears correctly in "Disk Management" and shows the combined capacity of the NVMe drives (e.g. 4x 1TB NVMes should read as slightly less than 4TB in RAID 0).
Let us know how you get on! It should work. One small detail: you may not be able to have your boot drive on the same NVMe adapter (I had that issue here, 4x NVMes in the Aorus Gen4 adapter: kzbin.info/www/bejne/aX6vfZd3dttnmbc)
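For anyone who prefers scripting the Windows steps above instead of clicking through Disk Management, here is a minimal sketch that only generates a diskpart script for review; the disk numbers and drive letter are assumptions (confirm them with `list disk` first), and the clean/convert steps wipe whatever is on those disks:

```python
# Sketch: generate a diskpart script that builds a 4-drive striped (RAID 0) volume.
# ASSUMPTIONS: the four NVMes appear as disks 1-4 and hold no data you need.
# Review raid0_diskpart.txt, then run it from an elevated prompt:
#   diskpart /s raid0_diskpart.txt
DISK_NUMBERS = [1, 2, 3, 4]  # confirm with `list disk` in diskpart first
DRIVE_LETTER = "R"

lines = []
for n in DISK_NUMBERS:
    # WIPES the disk, then converts it to dynamic (required for striped volumes)
    lines += [f"select disk {n}", "clean", "convert dynamic"]

disks = ",".join(str(n) for n in DISK_NUMBERS)
lines += [
    f"create volume stripe disk={disks}",  # striped volume = Windows software RAID 0
    "format fs=ntfs quick label=RAID0",
    f"assign letter={DRIVE_LETTER}",
]

with open("raid0_diskpart.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

print("Wrote raid0_diskpart.txt - review it, then run: diskpart /s raid0_diskpart.txt")
```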
@user-zxc127 • 1 year ago
@@racerrrz I'm really looking forward to the new video)
@PearlX9 • 7 months ago
Kindly test Sabrent
@racerrrz • 7 months ago
I considered other NVMes, but I settled on the ADATA Legend 800s for their price point, their endurance rating, and for being PCIe 4.0 drives that are geared for PCIe 3.0-class performance. I don't plan on upgrading to a PCIe 4.0 system anytime soon, which made me hold off on the newer-gen NVMes (my hardware would limit them to PCIe 3.0 speeds anyway). But I have been keeping an eye out for a "cheap" modern system that I can use in videos. If a good price comes up on Sabrent Rockets I will grab some for testing!