Using dRAID in ZFS for Faster Rebuilds on Large Arrays

  4,546 views

ElectronicsWizardry

A day ago

In this video I take a look at dRAID in ZFS. dRAID is a variant of RAIDZ that allows for much faster rebuilds and better use of hot spare drives. In this video I compare the rebuild times to a RAIDZ array, and look at performance differences. I also cover how to create a dRAID array in ZFS, and the different parameters that need to be set.
00:00 Intro
00:50 How dRAID is different from RAIDZ
02:33 Pros and Cons of dRAID
04:11 Rebuild time comparison
06:29 Performance comparison
07:59 How to create a dRAID Zpool
10:52 Calculating usable space using dRAID
12:39 When dRAID makes sense
13:54 Conclusion
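The creation step covered at 07:59 uses the OpenZFS dRAID vdev specification, which takes the form `draid[parity][:<data>d][:<children>c][:<spares>s]`. A minimal sketch of creating such a pool (the pool name and device names here are placeholders, not from the video):

```shell
# Create a dRAID2 vdev: 2 parity drives per redundancy group,
# 8 data drives per group, 20 children (total drives), and
# 2 distributed spares. /dev/sd[a-t] are hypothetical device names.
zpool create tank draid2:8d:20c:2s /dev/sd[a-t]

# Inspect the resulting layout, including the draid2-0-0 style
# distributed spare devices ZFS creates automatically.
zpool status tank
```

Because the spares are distributed across all children rather than sitting idle, a failed drive rebuilds into spare space spread over the whole vdev, which is what enables the faster sequential resilver compared in the video.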

Comments: 16
@Mikesco3
@Mikesco3 7 months ago
I really like your deep dives into these topics. You're one of the few YouTubers I've seen that actually knows what is being presented...
@dominick253
@dominick253 6 months ago
Apalards adventures is really knowledgeable as well.
@andrewjohnston359
@andrewjohnston359 4 months ago
@@dominick253 true, and Wendell from level one techs
@wecharg
@wecharg 7 months ago
Thanks for taking my request, that was really cool to see! I ended up going with CEPH but this is interesting and might use it in the future! -Josef K
@makouille495
@makouille495 7 months ago
how the hell do you manage to make everything so crystal clear for noobs like me haha, as always quality content and quality explanations! thanks a lot for sharing your knowledge with us! keep it up! 👍
@FredFredTheBurger
@FredFredTheBurger 7 months ago
Fantastic video. I really appreciate the RaidZ3 9 disk + spare rebuild times - and the mirror rebuild times. Right now I have data striped across mirrors (Two mirrors, 8TB disks) that is starting to fill up and I've been trying to figure out the next progression. Maybe a 15 bay server - 10 bays for a new Z3 + 1 array, leaves enough space to migrate my current data to the new array.
@zyghom
@zyghom 7 months ago
imagine: I only use mirrors and stripes but I am still watching it ;-)
@TheExard3k
@TheExard3k 7 months ago
If I had like 24 drives, I'd certainly use dRAID. Sequential resilver....just great, especially with today's drive capacities.
@boneappletee6416
@boneappletee6416 6 months ago
This was a very interesting video, thank you for the explanation! :) Unfortunately I haven't had the chance to really play around with ZFS yet; most of the hardware at work uses hardware RAID controllers. But I'll definitely keep dRAID in mind when looking into ZFS in the future 😊
@awesomearizona-dino
@awesomearizona-dino 7 months ago
Upside down construction picture?
@ElectronicsWizardry
@ElectronicsWizardry 7 months ago
I didn't realize the picture looks odd in the video. The part of the picture that is visible in the video is a reflection, and the right-side-up part of the picture is hidden.
@Mikesco3
@Mikesco3 7 months ago
I'm curious if you've looked into Ceph
@ElectronicsWizardry
@ElectronicsWizardry 7 months ago
I did a video on a 3-node cluster a bit ago and used Ceph for the video. I want to do more Ceph videos in the future when I have hardware to show Ceph and other distributed filesystems in a proper environment.
@andrewjohnston359
@andrewjohnston359 4 months ago
@@ElectronicsWizardry I would love to see that. There are zero videos I can find showing a Proxmox + Ceph cluster that aren't homelabbers in either nested VMs or using very underpowered hardware as a proof of concept, and once it's set up the video finishes! I have in the past built a reasonably specced 3-node Proxmox cluster with 10Gb NICs and a mix of SSDs and spinners to run VMs at work. It was really cool, but the VMs' performance was all over the place. A proper benchmark, a deep dive into optimal Ceph settings, and emulating a production environment with a decent handful of VMs running would be amazing to see!
@Spoolingturbo6
@Spoolingturbo6 3 months ago
@2:15 can you explain how to set that up, or give a search term to look that up? When I installed Proxmox, I split my 256GB NVMe drive up into the following GB sizes (120/40/40/16/16/1/0.5) (main, cache, unused, metadata, unused, EFI, BIOS). I knew about this, but am just now at the stage where I need to use metadata and small files.
@severgun
@severgun 6 months ago
Why are the data sizes so weird? 7, 5, 9? None of them divisible by 2. Why not 8d20c2s? Because of the fixed stripe width I thought it would be better to follow the 2^n rule. Or am I missing something? How does compression work here?
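The question above comes down to how usable space falls out of the `draidP:Dd:Cc:Ss` parameters, which the video covers at 10:52. A rough sketch of the usual approximation (it ignores metadata, padding, and allocation overhead, so real pools report somewhat less):

```python
def draid_usable_tib(parity: int, data: int, children: int,
                     spares: int, drive_tib: float) -> float:
    """Approximate usable capacity of a dRAID vdev in TiB.

    parity    -- parity drives per redundancy group (1-3)
    data      -- data drives per redundancy group (the 'd' value)
    children  -- total drives in the vdev (the 'c' value)
    spares    -- distributed spares (the 's' value)
    drive_tib -- capacity of each drive in TiB

    Model: spare capacity is carved out of the vdev up front, and the
    remaining space stores data and parity in a D:(D+P) ratio.
    """
    data_bearing = children - spares
    return data_bearing * drive_tib * data / (data + parity)

# e.g. a draid2:8d:20c:2s vdev built from 10 TiB drives
print(draid_usable_tib(2, 8, 20, 2, 10.0))  # -> 144.0
```

Note that because dRAID uses fixed-width stripes, small blocks are padded out to a full `D + P` stripe, so non-power-of-two `d` values mainly cost you on compressed or small records rather than on raw capacity.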