TrueNAS Tutorial: Expanding Your ZFS RAIDz VDEV with a Single Drive kzbin.info/www/bejne/q4Gmo3ejn7yJlas
Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work? kzbin.info/www/bejne/g2WnfXaeh719pck
ZFS COW Explained kzbin.info/www/bejne/pJ2liYuar5V9gaM
TrueNAS ZFS VDEV Pool Design Explained: RAIDZ RAIDZ2 RAIDZ3 Capacity, Integrity, and Performance. kzbin.info/www/bejne/Y3LRnHuZbLNjsK8

⏱ Timestamps ⏱
00:00 ▶ How to Expand ZFS
01:23 ▶ How To Expand Data VDEV
02:11 ▶ Symmetrical VDEV Explained
03:05 ▶ Mixed Drive Sizes
04:45 ▶ Mirrored Drives
06:00 ▶ What Happens if you lose a VDEV?
07:37 ▶ Creating Pools In TrueNAS
10:30 ▶ Expanding Pool In TrueNAS
16:00 ▶ Expanding By Replacing Drives
@tailsorange2872 · 2 years ago
Can we just give you a nickname "Lawrence Pooling Systems" instead :)
@zeusde86 · 2 years ago
I really wish you could point out the importance of "ashift" in ZFS. I just recently learned that most SSDs report 512b instead of 4k sectors, and that picking the wrong ashift for them (9 vs. 12) is what hits performance so badly that many SSDs will fall behind spinning-rust performance levels. In general I'd really like to see best practices for SSD pools (which cache type to use, ashift as described above, and which disk types to avoid). While it may sound luxurious to have SSD zpools in a homelab, this is especially important on e.g. Proxmox instances with ZFS-on-root (on SSDs).
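For anyone curious, a minimal sketch of checking sector sizes and pinning ashift at pool creation (pool and device names here are placeholders, and ashift cannot be changed on a vdev after it is created):
lsblk -o NAME,PHY-SEC,LOG-SEC                                 # physical vs. logical sector size the drive reports
zpool create -o ashift=12 fastpool mirror /dev/sda /dev/sdb   # force 4K alignment at creation
zdb -C fastpool | grep ashift                                 # verify what the vdev actually got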
@garygrose9188 · 1 year ago
Brand new and as green as it gets: when you say "let's jump over here" and land on a command page, exactly how did you get there?
@LAWRENCESYSTEMS · 1 year ago
@@garygrose9188 You can SSH into the system.
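For example (hypothetical hostname; TrueNAS also offers a web shell in the UI):
ssh admin@truenas.local
zpool status   # the pool commands shown in the video are run from this shell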
@chromerims · 1 year ago
5:51 -- I like this. Two pools: first one with faster flash, and the second with HDDs. Thank you, Tom! 👍
@davidbanner9001 · 1 year ago
I'm just moving from Open Media Vault to TrueNAS SCALE and your uploads have really helped me understand ZFS. Thanks.
@gorillaau · 1 year ago
What was the deal breaker that made you leave Open Media Vault? I'm pondering a shared storage device as a data store for Proxmox.
@davidbanner9001 · 1 year ago
@@gorillaau Overall flexibility and general support. A large number of almost preconfigured apps/Dockers and the ability to run VMs. If you are running Proxmox these are probably less of a concern? Switching to ZFS is also very interesting and something I have not used before.
@nitrofx80 · 1 year ago
I don't think it's a good idea. I just migrated from OMV to TrueNAS and am not very happy about the change. I think there is a lot more value for the home user in OMV than TrueNAS.
@nitrofx80 · 1 year ago
As far as I know there is only support for one filesystem in TrueNAS. OMV supports all filesystems and it's really up to you what you want to use.
@eggman9713 · 1 year ago
Thank you for the detailed explanation on this topic. I'm just starting to get really into homelab and large data storage. I've been a user of Drobo products (now bankrupt, obsolete, and unsupported) for many years, and their "BeyondRAID" system allowing mixed-size drives was a game-changer in 2007; few other products could do that then or now. I also use Unraid, but since it is a dedicated-parity disk array and each disk is semi-independent it has limitations (mainly on write speed), though it is nice in a data recovery situation where each individual data drive can function outside the machine. I know that the OpenZFS developers have announced that "expansion" is coming, and users have been patiently awaiting it; that would make ZFS more like how a Drobo works. Better than buying whole VDEVs' worth of disks at a time and finding a place for them.
@Dr.Hoppity · 5 months ago
Thanks for the excellent practical demonstrations of how ZFS distributes I/O!
@GoosewithTwoOs · 2 years ago
That last piece of info is really good to know. Got a Proxmox server running and I want to replace the old drives that came with it with some newer, larger drives. Now I know.
@marklowe7431 · 1 year ago
Super well explained. Cheers. Enterprise-grade integrity, performance & home-user flexibility: pick two.
@alex.prodigy · 2 years ago
Excellent video, makes understanding the basics of ZFS very easy.
@Anonymousee · 1 year ago
16:02 This is what I really wanted to hear, thank you! Too bad it was a side-note at the end, but I did learn some other things that may come in handy later.
@Mike-01234 · 1 year ago
After reviewing everything, I wanted drive redundancy and pool size efficiency, so I built a RAIDZ2. That was 5 years ago and I've never looked back. My drive failure rate has been 1-2 drives a year; those were used WD Red drives I bought on eBay. I now only buy brand new WD Reds and haven't had a failure in the last few years. I'm looking at moving the TrueNAS up to 14TB from 6TB, and for critical files, backing up to mirrored drives on a Windows box. I don't like all the security issues around Windows: if you blue screen, or something happens to the OS, it's sometimes difficult to recover data. My new build will be a 5-drive 14TB RAIDZ2, plus a 2nd mirror VDEV as a backup set for critical data, moving that off the Windows box onto the TrueNAS.
@David_Quinn_Photography · 2 years ago
16:05 answered the question I had, but I learned some interesting things too, thank you for sharing. I have 500GB, 2TB, and 3TB drives and wanted to at least replace my 500GB with an 8TB that I got on sale.
@perriko · 2 years ago
Great instruction as usual... fact with reason! Thank you!
@deadlymarsupial1236 · 2 years ago
I just went with TrueNAS SCALE ZFS using an Intel E-series 6-core/12-thread Xeon, 32GB RAM & 4 x 20TB WD RED PROs. I like the idea that I can move the whole pool/array of drives to another mainboard and not have to worry about differing proprietary RAID controllers or such controllers failing. I also like using a server mainboard with remote admin built onto the board and a dedicated network interface, so I can power up the machine via VPN remote access if need be. Although it is very early days in set-up/testing, I am so far very impressed, and it was worth the extra $ for a server hardware platform. People may however be surprised how much storage is allocated to redundancy: at least 1 drive's worth to survive 1 drive failing. What is a bit tricky is configuring a Windows VM hosted on the NAS that can access the NAS shares. Haven't quite figured out how to set up a container to host the Ubiquiti controller either. One of the things this NAS will do is host StorageCraft SPX backup sets, with the Windows VM hosting the incremental backup image manager that routinely verifies, consolidates, and purges redundant data per retention policies. I haven't decided on an FTP server for receiving backups of remote hosts yet; could go with FileZilla I suppose. Another nice solution would be a PXE boot service providing a range of system boot images for setting up and troubleshooting systems in a workshop environment. There have been some implementations where TrueNAS is hosted within a hypervisor such as Proxmox, so TrueNAS can focus exclusively on NAS duties while other VMs run a Windows server, a firewall, and perhaps containers for the Ubiquiti controller. May need more cores for that; however, when I have the time and get another 32GB of RAM to put in the machine, I plan to see if I can migrate the existing bare-metal install of TrueNAS SCALE to a Proxmox-hosted VM just to see how that goes.
@theangelofspace155 · 2 years ago
There are some videos on setting up TrueNAS SCALE as a Proxmox VM; I went that route. I use SCALE just as the file manager, Proxmox as the VM hypervisor, and Unraid as the container (Docker) manager.
@deadlymarsupial1236 · 2 years ago
@@theangelofspace155 Thanks, it will be interesting to see how easily (or not) migrating TrueNAS from bare metal to a VM within Proxmox will go. I suspect it involves backing up the TrueNAS configuration, mapping the drives and network interfaces to the VM, and setting up auto-boot on restored mains power, but I need to put together a more thoroughly researched plan first.
@johngermain5146 · 2 years ago
You saved the best for last (adding larger-capacity drives). As my enclosure has the max number of drives installed and 2 vdevs with no room for more, replacing the drives with larger ones is "almost" my only solution for expanding.
@theangelofspace155 · 2 years ago
You can add a 12-15 disk DAS for around $200-$250
@theangelofspace155 · 2 years ago
Well, my last comment was deleted. Check ServerBuilds if you need a guide.
@johngermain5146 · 2 years ago
@@theangelofspace155 Your last comment is still here!
@zeusde86 · 2 years ago
Actually you CAN remove data vdevs, you just cannot do it with RAIDZ vdevs. With mirrored vdevs this works; see also "man zpool-remove(8)": "Top-level vdevs can only be removed if the primary pool storage does not contain a top-level raidz vdev". On very full vdevs it just takes some time to move the stuff around.
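A rough sketch of what that looks like (pool/vdev names are placeholders; it requires the device_removal feature and no top-level raidz vdev in the pool):
zpool status tank            # note the mirror vdev's name, e.g. mirror-1
zpool remove tank mirror-1   # starts evacuating its data onto the remaining vdevs
zpool status tank            # shows the removal/evacuation progress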
@SirLothian · 1 year ago
I have a boot pool that was originally a single 32GB thumb drive that I mirrored with a 100GB SSD. I wanted to get rid of the thumb drive, so I replaced it in the boot pool with a second 100GB SSD. I expected the capacity to go from 32GB to 100GB but it did not. This surprises me, since the video said that replacing the last drive in a pool would increase the pool size to that of the smallest disk in the pool. Looks like I will have to destroy the boot pool, recreate it at full capacity, and then reinstall TrueNAS on it.
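(Possibly relevant, an untested guess: the pool's autoexpand property is off by default, so capacity doesn't grow even once every device is bigger. Something like this, with placeholder pool/partition names, often triggers the expansion without rebuilding:
zpool set autoexpand=on boot-pool
zpool online -e boot-pool sda3   # -e asks ZFS to expand to the new device size
)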
@Darkk6969 · 2 years ago
One thing I love about ZFS is that it's incredibly easy to manipulate the storage pools. I was able to replace 4 3TB drives with 4 4TB drives without any data loss. It took a while to resilver each time I swapped out a drive. Once all the drives had been swapped out, ZFS automatically expanded the pool.
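The per-drive cycle is roughly this sketch (pool/device names are placeholders; each replace must finish resilvering before the next old drive comes out):
zpool replace tank /dev/sdc /dev/sde   # old disk -> new disk
zpool status tank                      # wait for the resilver to complete, then repeat per drive
zpool set autoexpand=on tank           # with this set, the pool grows once the last drive is swapped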
@tubes9181 · 2 years ago
This is available on a lot more than just ZFS.
@MHM4V3R1CK · 2 years ago
How long did that take btw?
@ewenchan1239 · 2 years ago
Two things: 1) Replacing the disks one at a time for onesie-twosie TB capacity bumps isn't a terrible issue. But if you're replacing 10 TB drives with 20 TB drives, then the resilver process (for each drive) takes an INORDINATE amount of time, such that you might actually be better off building a new system with said 20 TB drives and then migrating the data over your network vs. the asynchronous resilvering process. 2) My biggest issue with ZFS is the lack of off-the-shelf data recovery tools that are relatively simple and easy to use. The video that Wendell made with Allan Jude talks about this in great detail.
@andymok7945 · 11 months ago
Thanks. Waiting for the feature to add a drive to expand. I used a much larger drive size when I created my pools. For me, data integrity is way more important. It is for my own use, but important stuff, and I have a nightly rsync happening to copy to another TrueNAS setup. Then I also have a 3rd system that is my offline archive copy. It gets powered up and connected to the network, and rsync away. When done, network disconnected and power removed.
@madeyeQ · 2 years ago
Great video and very informative. I may have to take another look at TrueNAS. At the moment I am using a Debian-based system with just ZFS pools managed from the CLI (yes, I am a control freak). One thing to note about ZFS RAID (or any other RAID) is that it's not the same as a backup. If you are worried about losing a drive, make sure you have backups! (Learned that one the hard way about 20 years ago.)
@alecwoon6325 · 2 years ago
Thanks for sharing. Great content! 👍
@ManVersusWilderness · 2 years ago
What is the difference between "add vdevs" and "expand pool" in TrueNAS?
@HelloHelloXD · 2 years ago
Great video as usual. Thanks
@bartgrefte · 2 years ago
Can you make a video about which aspects of ZFS are very RAM-demanding? A whole bunch of websites say that with ZFS you need 1GB of RAM for each TB of storage, but there are also a whole bunch of people out there who are able to use ZFS without problems on systems with far less RAM than that rule calls for.
@LAWRENCESYSTEMS · 2 years ago
Right here: Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work? kzbin.info/www/bejne/g2WnfXaeh719pck
@Mr.Leeroy · 2 years ago
The only really demanding thing is deduplication; the rest is caching. You can control on a per-dataset basis what gets cached (and whether only metadata or the data itself is cached), as well as where it gets cached: into RAM or into L2ARC. Dataset CLI parameters like `primarycache` are what you need. Still, be very cautious going below minimum requirements, e.g. 8GB RAM for FreeNAS; that is not dictated by ZFS but by the particular appliance as a whole OS. Something like ZFS on vanilla FreeBSD may well go a lot lower than 8GB, all depending on the services you run.
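For instance (dataset name is a placeholder):
zfs get primarycache,secondarycache tank/media   # both default to "all"
zfs set primarycache=metadata tank/media         # keep only metadata in ARC for this dataset
zfs set secondarycache=none tank/media           # skip L2ARC for this dataset entirely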
@bartgrefte · 2 years ago
@@Mr.Leeroy I wasn't thinking as low as 8GB, more like 32GB, but with so much storage that the "1GB RAM per TB of storage" rule still wouldn't be satisfied.
@Mr.Leeroy · 2 years ago
@@bartgrefte 32GB is perfectly adequate. I don't suppose you are approaching a triple-digit-TB pool just yet.
@bartgrefte · 2 years ago
@@Mr.Leeroy No pool yet, waiting for a good deal on HDDs. Now if only ZFS had the option to start a RAIDZ2 (or 3) with a small number of drives and add drives later... Everything else is ready to go: built a system with used parts only, and it has 16 3.5" and 6 2.5" hot-swap bays in a Stacker STC-T01 that I managed to get my hands on :)
@MobiusCoin · 4 months ago
This sounds like a lot of work for not that much benefit. Watching this has really changed my approach to my build. I'm going to save up, get as many drives as my build will fit, and just not go through this hassle. I actually planned on getting 3 drives, RAIDZ1, and expanding. But nah, I'd rather not create this extra work for myself and just be more patient. Although I don't mind the last method; again, you just have to be patient.
@prpunk787 · 2 months ago
From a noob: if you keep adding 4-HDD RAIDZ1 VDEVs, you can expand the pool by adding another RAIDZ1 VDEV, but each VDEV only tolerates 1 drive failure, and the capacity is lower because two RAIDZ1 VDEVs each give up a drive to parity. If you had all 8 HDDs in 1 VDEV in the pool, you would have more storage and still be able to tolerate one drive failure. Am I correct on that?
@Savagetechie · 2 years ago
Extendable vdevs can't be too far away. The OpenZFS Developer Summit is next week; maybe they'll even be discussed there?
@hpsfresh · 1 year ago
Doesn't ZFS support the attach command even for non-mirrors?
@StylinEffect · 3 months ago
I currently have 5x 4TB drives and am looking at using TrueNAS. What would be the best configuration that would allow me to expand to the max capacity of my case, which is 8 drives?
@knomad666 · 1 year ago
Great explanation.
@KerbalCraft · 4 months ago
I added a data vdev to my pool and I don't see an increase in storage. I originally had a pool with 1 vdev containing 3 4TB SSDs (RAIDZ1). I just added another data vdev with 3 4TB SSDs in RAIDZ1 to increase the pool storage. However, after I added the vdev, the storage did not increase, but the second vdev shows up (the pool shows 6 drives, 2 vdevs). Why is this? Am I missing something?
@LAWRENCESYSTEMS · 4 months ago
Not an issue I have run into from the UI, but from the command line you can run "zpool list" and it will show the space available.
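Illustrative output only (pool name and numbers are made up); SIZE and FREE are the columns to watch:
zpool list tank
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
tank  21.8T  10.2T  11.6T        -         -     4%    46%  1.00x  ONLINE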
@LukeHartley · 1 year ago
What's the most common cause of a VDEV failing? I like the idea of creating several VDEVs, but the thought of 1 failing and losing EVERYTHING scares me.
@BenVanTreese · 1 year ago
VDEVs fail due to normal drive failures. The issue with a lower RAID level is that while you do have the ability to lose 1 drive and keep all data, when you put in a new drive to replace the failed one, it must do a lot of reads/writes to recalculate the parity onto the drive you put in. This process can cause any other drives that are close to failing to fail as well. Usually people buy drives in bulk, so if you buy 16 drives at once, all made at the same time by the same manufacturer, the chance of another drive failing at the same time as the first is higher as well. The chance of two drives failing in the same vdev when you're running RAIDZ2 with a hot spare or two assigned to the pool is just lowering and lowering the risk, but that risk is never 0, which is why you have backups of RAID (RAID is not a backup). Anyway, hopefully that is helpful info.
@lukehartleyfilms · 1 year ago
@@BenVanTreese Very helpful! Thanks for the info!
@deacbeugene · 9 months ago
Questions about dealing with pools: can one move a dataset to another pool? Can one delete a vdev from a pool if there is enough space to move the data?
@LAWRENCESYSTEMS · 9 months ago
You can use ZFS replication to copy them over to another pool.
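A minimal sketch of that (pool/dataset/snapshot names are placeholders):
zfs snapshot -r tank/photos@move                              # recursive snapshot of the dataset tree
zfs send -R tank/photos@move | zfs receive -u newpool/photos  # -R keeps children and properties, -u skips mounting on receive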
@Thomate1375 · 2 years ago
Hey, I have a problem with pool creation... I have a fresh install of TrueNAS SCALE with 2x 500GB HDDs, but every time I try to create a pool with them I get a "...partition not found" error. Everything I could find online says I would have to wipe the disks and eventually reboot the system. I have done this multiple times now but nothing changes. I have also run a SMART test, but according to the results the drives seem to be OK.
@hojo1233 · 1 year ago
What about TrueNAS and 2 drives in a basic mirror? Is there any way to expand it using bigger drives? Unfortunately I don't have any more free ports in the server. In my configuration I have 4 ports total: 2 of them are for data drives (2x4TB), another is for an SSD cache, and the last one is for boot. I've had no issues with that configuration whatsoever, but now I need to increase storage capacity. Is there any way to expand without rebuilding everything from scratch? For example, by replacing the 4TB disks with 8TB ones and resizing the pool?
@frederichardy1990 · 1 year ago
With the "expanding by replacing" method, assuming you can shut down the TrueNAS server for a few hours, could copying all the existing drives of a vdev (with dd or even a standalone duplicator) to higher-capacity drives work??? It would be much faster than replacing one drive at a time for a vdev with a lot of drives.
@tank1demon · 1 year ago
So there functionally isn't a solution for a system where you will end up with 5 drives in a pool but have to start with 4? As in, adding anything to an existing vdev? I'm on Xubuntu 20.04 and I'm trying to work out how to go about that, if possible. Can I just build a pool with drives without a vdev and add to that pool?
@kommentator1157 · 2 years ago
Would it be possible (though not advisable) to have vdevs with different widths? Edit: just got to the part where you show it. Yep, it's possible, not recommended.
@KB3M · 2 months ago
Hi Lawrence, have you generally upgraded all your TrueNAS zpools to the feature-flag version that prevents moving pools to an older TrueNAS version? I'm just a home user; any reason not to?
@LAWRENCESYSTEMS · 2 months ago
I update the feature flag once I know I am not going back to a previous version.
@Linrox · 6 months ago
Is it possible to upgrade a mirrored (2-drive) setup to a RAIDZ with an extra 2 drives, without data loss?
@LAWRENCESYSTEMS · 6 months ago
Not that I am aware of.
@IntenseGrid · 6 months ago
Several RAID systems have a hot spare (or a cool one, by powering down the drive). I would like to have a spare for my zpool that gets used automatically, so resilvering can kick off without me knowing a thing. I realize this is sometimes dangerous because we don't know what killed the drive, and it may kill another one while resilvering, but most of the time the drives themselves are the problem. Does ZFS support the hot or cold spare concept?
@LAWRENCESYSTEMS · 6 months ago
Yes, you can have a hot spare.
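A sketch of setting one up (pool/device names are placeholders; automatic activation also relies on the ZFS event daemon, which TrueNAS runs for you):
zpool add tank spare /dev/sdf    # attach a hot spare to the pool
zpool set autoreplace=on tank    # allow automatic replacement of a failed disk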
@arturbedowski1148 · 11 months ago
Hi, I copied my HDD to an SSD and tried expanding the ZFS pool via GParted, but it didn't work (the SSD has waaaay bigger storage). Is it possible to expand my rpool ZFS partition, or is it not possible?
@johnpaulsen1849 · 2 years ago
Great video. I know that Wendell from Level1Techs has mentioned that expanding vdevs is coming? What do you think about that? Also, do you have any content on adding hot spares or an SSD cache to an existing pool?
@Pythonzzz · 1 year ago
I keep checking around every few months for updates on this. I’m hoping this will be an option by the time I need to add more storage.
@AdamEverythingRC · 2 years ago
Are you able to add a drive so that you can increase your fault tolerance? For instance, I started with 5 drives in Z1; I would like to add another drive and change from Z1 to Z2. Is that possible?
@LAWRENCESYSTEMS · 2 years ago
No
@philippemiller4740 · 1 year ago
Hey Tom, I thought you could remove vdevs from a pool, but only mirrors, not RAIDZ vdevs?
@DiStickStoffMono0xid · 2 years ago
I read somewhere that it's possible to "evacuate" data from a vdev to remove it from a pool; is that maybe a new feature?
@BrentLeVasseur · 1 month ago
Since it’s almost 2025/late 2024, has this changed where you can add one drive at a time? Maybe an update video is in order? Thanks!
@LAWRENCESYSTEMS · 1 month ago
You mean like this one from 2024? kzbin.info/www/bejne/q4Gmo3ejn7yJlas
@BrentLeVasseur · 1 month ago
@@LAWRENCESYSTEMS I watched it, thanks! I just set up my very first Proxmox server and TrueNAS VM on Proxmox, and I feel like I have given birth to a Borg baby. And I was wondering how I can later increase the pool size, and this video popped up, so thanks!
@AdamEverythingRC · 2 years ago
Question... for a TrueNAS server, what would be better: an ASUS X58 Sabertooth with a Xeon X5690, or an ASUS Sabertooth 990FX R2.0 with an AMD 8350, using a SAS card to the drives? I have both I could use; memory is 24GB on the Intel and 16GB on the AMD. Just not sure which would be better. I will also be using an M.2 card with a 256GB M.2 drive as a LOG cache, or would it be better used as just extra cache? This will be a file server to hold all my photos (photographer). Thanks for your time and thoughts on this.
@Saturn2888 · 2 years ago
So I have 4x1TB. Replace a 1TB with an 8TB, resilver, no change. Replace another 1TB, resilver; is it now 8TB larger from the first one? Or is it that you replace all the drives first, and then it shows the new size?
@gloth · 1 year ago
No changes until you replace that last drive and you have 4x8TB in your vdev.
@Saturn2888 · 1 year ago
@@gloth Thanks! I eventually figured it out and switched to all mirrors.
@mikew642 · 1 year ago
So on a mirrored pool, if I add a vdev to that pool, my dataset won't know the difference and will just give me the extra storage?
@LAWRENCESYSTEMS · 1 year ago
Yes, datasets don't care how the VDEVs they are attached to are expanded.
@mikew642 · 1 year ago
@LAWRENCESYSTEMS Thank you sir! You're one of the main reasons I started playing with ZFS / TrueNAS! THANK YOU for your content!
@romanhegglin · 2 years ago
Thanks!
@simonsonjh · 2 years ago
I think I would use the disk replacement method. But I'm waiting for new ZFS features.
@yc3X · 1 year ago
Is it possible to just drag and drop files onto the NAS? Secondly, is it possible to run games off the NAS? I have some super old games I wanted to store on it and just play them from there. I wasn't sure if the files are compressed or not when placing them on the NAS.
@LAWRENCESYSTEMS · 1 year ago
Yes, you can put them on a share, and as long as a game can run from a share it should work.
@yc3X · 1 year ago
@@LAWRENCESYSTEMS Awesome, thanks! Yeah, I'm using a Drobo currently, but who knows when it might die, so I figured I would start looking into something newer. I figured it must be something similar to a Drobo.
@jms019 · 1 year ago
Isn't RAIDZ1 expansion properly in yet?
@SandWraith0 · 2 years ago
Just one question: how is any of this better than how Unraid does it (or OpenMediaVault with a combination of UnionFS and SnapRAID, or Windows with StableBit)?
@LAWRENCESYSTEMS · 2 years ago
ZFS has much better performance and better scalability
@Im_Ninooo · 2 years ago
That's basically why I went with BTRFS: so I could expand slowly, since drives are quite expensive where I live and I can't just buy a lot of them at once.
@Im_Ninooo · 2 years ago
@@wojtek-33 I've been using it for years now, but admittedly only with a single disk on all of my servers, so can't speak from experience on the resiliency of it.
@LesNewell · 2 years ago
@@wojtek-33 I've been using BTRFS for 10+ years (mostly RAID5) and in that time have had two data-loss incidents, neither of which could be blamed on BTRFS. One was RAID0 on top of LUKS with 2 drives on USB. Basically I was begging for something to go wrong and eventually it did: one USB adapter failed, so I lost some data. This was only a secondary backup, so no big deal. The other time was when I was creating a new RAID5 array of 5x 2TB SSDs and had one brand new SSD with an intermittent fault. I mistakenly replaced the wrong drive. RAID5 can't handle 2 drive failures at the same time (technically one failure and one replacement), so I lost some data. Some of the FS was still readable, but it was easier to just wipe and start again after locating the correct faulty drive and replacing it. As an aside, I find BTRFS RAID5 to be considerably faster than ZFS RAIDZ. ZFS also generates roughly twice as many write commands for the same amount of data; that's a big issue for SSDs. BTRFS RAID5 may have a slightly higher risk of data loss, but for SSDs I think that risk is offset by the reduced drive wear and lower risk of wearing drives out.
@Mr.Leeroy · 2 years ago
Each added drive is also ~52kWh per year, so expanding vertically still makes more sense.
@glitch0156 · 8 months ago
I think for RAID0 you can add drives to the pool without rebuilding the pool.
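That matches how striped pools behave; a one-line sketch (names are placeholders, and note the added disk brings no redundancy):
zpool add stripepool /dev/sdg   # grows a redundancy-free pool by one disk immediately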
@Kannulff · 2 years ago
Thank you for the great explanation and video, as always :) Is it possible to post the fio command line here? Thank you. :)
@ovizinho · 1 year ago
Hello!... I have a question that I think is so simple that everywhere I research it, it goes unanswered... I built a NAS from an old PC and everything is ready for the installation of TrueNAS... My question: where do I connect the LAN cable? Directly to the internet router, or to the main PC's LAN port? NAS-router or NAS-main computer? Both the NAS and the main computer have 10Gb LAN each... If it is NAS-router, after installing TrueNAS do I disconnect it from the router and connect it to the main computer? Thanks in advance!...
Processor: i7 6700 3.40 GHz
Motherboard: ASUS EX-B250-V7
Video card: GTX 1060 6GB (PG410)
Memory: DDR4 16GB 3000MHz
SSD: 500GB NVMe
HDD: 1TB
@__SKYNET__ · 10 months ago
Tom, can you talk about the new pool expansion features coming in OpenZFS 2.3? Thanks, appreciate it.
@LAWRENCESYSTEMS · 10 months ago
Eventually.
@Shpongle64 · 6 months ago
I don't understand why, when he combines multiple RAIDZ1s into a large ZFS pool, one disk in a vdev causes such a major problem. Isn't RAIDZ1 supposed to tolerate one disk failure?
@LAWRENCESYSTEMS · 6 months ago
I don't understand the question
@Shpongle64 · 6 months ago
@@LAWRENCESYSTEMS I rewatched and I misunderstood. When you put multiple RAIDZ1 vdevs into a pool, it sounded like one disk going down in a vdev could corrupt the pool. As long as you quickly replace the failed disk in the RAIDZ1 vdev, the whole pool is fine.
@Reminder5261 · 2 years ago
Is it possible for you to do a video on creating a ZFS share? There is nothing on YouTube to assist me with this. For some reason, I am unable to get my ZFS shares up and running.
@wiktorsz1967 · 2 years ago
Check if your user group has SMB authentication enabled. At first I assumed that if my user settings were set up then it would work, or that the primary group would automatically be allowed to authenticate. Also make sure to set the share type to "SMB share" at the bottom when creating your dataset, and add your user and group to the ACL in the dataset permissions. I don't know if you have done all that already, but for me it works with all the things I wrote above. Edit: if you're using Core (like me) and your share doesn't work on an iPhone, then enable AFP in Services. On SCALE you need to enable "AFP compatibility" or something like that somewhere in the dataset or ACL settings.
@0Mugle0 · 2 years ago
Check there are no spaces in the pool or share names; that fixed it for me.
@Djmaxofficial · 9 months ago
But what if I want to use different-size drives?
@rcdenis1 · 2 years ago
How do you reduce the size of a ZFS pool? I have more room than I need and need that extra space for another server.
@LAWRENCESYSTEMS · 2 years ago
As I said in the video, you don't.
@rcdenis1 · 2 years ago
@@LAWRENCESYSTEMS OK, guess I'll have to back up everything, tear it down, start over, and restore. And I wanted to go fishing next weekend! Thanks for the video.
@LA-MJ · 2 years ago
Would you recommend RAIDZ1 for SSDs?
@LesNewell · 2 years ago
RAIDZ1 generates quite a lot of extra disk writes, which is bad for SSD life. I did some testing a while back between ZFS RAIDZ and BTRFS RAID5; BTRFS generated roughly half as many disk writes for the same amount of data written to the filesystem. How do you intend to use the system? If it's mostly for backups, you'll probably never wear the drives out. If it's for an application with regular heavy disk writes, you may have a problem.
@GW2_Live · 1 year ago
This does drive me a little nuts, tbh, as a home user. I have an MD1000 disk shelf with 4/15 bays empty; it would be nice to add 4 more 8TB drives to my VDEV without restoring all the data from my backup.
@emka2347 · 10 months ago
Yeah... this is why I'm thinking about Unraid.
@z400racer37 · 2 years ago
Doesn't Unraid allow adding 1 drive at a time @LAWRENCESYSTEMS?
@LAWRENCESYSTEMS · 2 years ago
Not sure, I don't use Unraid.
@z400racer37 · 2 years ago
@@LAWRENCESYSTEMS Pretty sure I remember them working some magic there somehow. Could be interesting to check out. But I'm a TrueNAS guy also.
@LAWRENCESYSTEMS · 2 years ago
No, Unraid does not natively use ZFS.
@z400racer37 · 2 years ago
@@LAWRENCESYSTEMS @superWhisk Ohh, I see, I must have misunderstood when researching it ~a year ago. Thanks for the clarification guys 👍🏼
@TonyHerson · 3 months ago
If you're running a stripe you can add one drive.
@donaldwilliams6821 · 2 years ago
Re: expanding VDEVs by replacing drives with larger ones. One note: if you are doing that with RAIDZ1, you are intentionally putting the VDEV into degraded mode. If another drive should fail during the rebuild, that vdev and zpool will go offline. This is especially risky with spinning drives over 2TB since they have longer rebuild times. A verified backup should be done before attempting that process. Some storage arrays have a feature that mirrors out a drive vs. forcing a complete rebuild; i.e., as SMART errors increase, the drive is mirrored out before it actually fails. I don't believe ZFS has a command like that? You mirror the data to the new drive in the background, then "fail" the smaller drive; the mirrored copy becomes active, and a small rebuild is typically needed to get it 100% in sync, depending on the IO activity at the time.
@zeusde86 · 2 years ago
You can do this without degrading the pool: just leave the disk to be replaced attached, and perform a "replace" action instead of just pulling it out. You will notice that the pool reads from all available drives to prefill the new one, including the disk designated for removal. If you have spare disk slots, this method is definitely preferred; I've done this multiple times.
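In command-line terms that is roughly (names are placeholders; the old disk stays online and contributes reads until the resilver finishes):
zpool replace tank /dev/sdc /dev/sde   # with BOTH disks attached, ZFS detaches the old one automatically when done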
@donaldwilliams6821 · 2 years ago
@@zeusde86 Excellent! Thank you. I am still learning ZFS. I use it on my TrueNAS server, many VMs, my Linux laptop, and Proxmox.
@ericfielding668 · 2 years ago
@@zeusde86 The "replace" action is a great idea. I wonder if the addition of a "hot spare" (i.e. yet another drive) would help if things went sour during the change.
@maddmethod5880 · 2 years ago
Man, I wish Proxmox had a nice UI like that for ZFS. You've got to do a lot of this on the command line, like a scrub.
@theangelofspace155 · 2 years ago
You can move to the dark side and run TrueNAS as a VM under Proxmox 😬
@skullpoly1967 · 1 year ago
Yeah, it does do that.
@AnuragTulasi · 2 years ago
Do a video on dRAID too.
@tupui · 1 year ago
Did you see OpenZFS added RAIDZ expansion!?
@LAWRENCESYSTEMS · 1 year ago
Added, but not in all production systems yet.
@RobFisherUK · 9 months ago
I only have two drives and only space for two, so 16:00 is the answer for me!
@tylercgarrison · 2 years ago
Is that background blue hex image from GamersNexus? lol
@LAWRENCESYSTEMS · 2 years ago
I never really watch that channel; the hexes were part of an old template I had.
@WillFuI · 6 months ago
So there is no way to make a 4-drive Z1 into an 8-drive Z2 without losing all the data currently on the drives. Dang, I would have loved that.
@donaldwilliams6821 · 2 years ago
Re: VDEV loss. In the case of RAIDZ1 you would need two failures for the VDEV to go offline. Your illustration shows one failure bringing the entire VDEV offline, which isn't correct; that VDEV would be degraded but still online. I do agree that Z2 is a better option. Re: mirrors. Ah yes, the old EMC way of doing things, haha. I have seen plenty of mirror failures too.
@Mr.Leeroy · 2 years ago
@SuperWhisk A triple mirror is far from a terrible idea when you are designing cost-effective tiered storage. E.g. as a homelab admin you consider how low the ratio of your non-recoverable data to recoverable trash like Plex storage gets, and suddenly triple-mirror + single-drive pools make sense.
@Mr.Leeroy · 2 years ago
@SuperWhisk Look up the tiered storage concept, or re-read; idk...
@jeff-w · 3 months ago
You can make a single drive vdev and put it in a pool if you wish.
@jlficken · 1 year ago
How would you set up an all-SSD 24-bay NAS with ZFS? I'm thinking either 3 x 8-disk RAIDZ2 VDEVs, 2 x 12-disk RAIDZ2 VDEVs, or maybe 1 x 24-disk RAIDZ3 VDEV? The data will be backed up elsewhere too. It's not necessary to have the best performance ever, but it will be used as shared storage for my Proxmox HA cluster.
@LAWRENCESYSTEMS · 1 year ago
2X12
@jlficken · 1 year ago
@@LAWRENCESYSTEMS Thanks for the reply! I'll try to grab 4 more SSDs over the next couple of months to make the first pool and go from there.
@praecorloth · 2 years ago
I'm going to be one of those mirror guys. When it comes to systems that are going to have more than 4 drives, mirrors are pretty much the only way to go. The flexibility in how you can set them up means that if you need space and performance, you can have 3x 2-way mirrors, or if you need better data redundancy (better than RAIDZ2), you can set up 2x 3-way mirrors. The more space for physical drives you have, the less sense parity RAID makes. Also, for home labbers using RAIDZ*, watch out for mixing and matching disks with different sector sizes, like 512-byte vs. 4096-byte sector drives. That will completely wreck ANY storage efficiency you think you're going to get with RAIDZ* over mirrors.
@Mike-01234 · 1 year ago
Mirrors are only good if performance is your top priority. RAIDZ2 gives you more space, and tolerates up to 2 drive failures, compared to a 2-way mirror. If you step up to a 3-way mirror you can also lose up to 2 drives, but you give up more space than with RAIDZ2. The only gain is performance.
@praecorloth · 1 year ago
@@Mike-01234 Storage is cheap, and performance is what people want. Parity RAID just doesn't make sense anymore.
@whyme2500 · 1 year ago
Not all heroes wear capes....
@kevinghadyani263 · 2 years ago
Watching all these ZFS videos on your channel and others, I'm basically stuck saying "I don't know what to do". I was going to make a RAIDZ2 with my eight 16TB drives, but now I'm thinking it's better to have more vdevs so I can upgrade more easily in the future. It just makes sense, although I can lose a ton of storage capacity doing it. I thought about RAIDZ1 with 4 drives like you showed, striped together, but I don't think that's very safe; definitely not as safe as a single RAIDZ2, especially with 16TB drives. I want to put my photos and videos on there, although I also need a ton of storage capacity for my YouTube videos; each project is 0.5-1TB. And I don't know if I should use any of my older 2TB drives as part of this zpool or put them in a separate one. I feel completely stuck and unable to move. My 16TB drives have been sitting there for some days now, and I need the space asap :(. I don't want to make a wrong decision and not be able to fix it.
@phillee2814 · 1 year ago
Thankfully, the future has arrived and you can now add one drive to a RAIDZ to expand it.
@LAWRENCESYSTEMS · 1 year ago
Not yet
@phillee2814 · 1 year ago
@@LAWRENCESYSTEMS So they were misleading us all at the OpenZFS conference then?
@LAWRENCESYSTEMS · 1 year ago
@@phillee2814 My point is that it's still a future feature, not in production code yet.
@blackrockcity · 1 year ago
Watching this at 2x was the closest thing I've seen to 'The Matrix' that wasn't actually art or sci-fi. 🤣
@LAWRENCESYSTEMS · 1 year ago
I use 2X as well; YouTube should offer up to 3X.
@bridgetrobertson7134 · 1 year ago
Yup, I hate ZFS. Looking to offload from Open Media Vault, which has run flawlessly for 6 years with 3 10TB drives on SnapRAID. I wanted less of a do-it-all server and more of a long-term storage box this time around. Problem is, I can't afford to buy enough drives at clown-world prices to satisfy ZFS if I can't just add a drive or two later. What's worse is that 20TB drives are within $10 of my same old 10TB drives. Will look for something else.
@84Actionjack · 2 years ago
Must admit the expansion limitation is a reason I'll stick with "StableBit" on my Windows Server as my main storage, but I fully intend to adopt ZFS on TrueNAS as a backup server. Thanks.
@Im_Ninooo · 2 years ago
With BTRFS you can add a drive of any size at any time and run a balance operation to spread the data (and/or convert the replication method).
@84Actionjack · 2 years ago
@@Im_Ninooo StableBit works the same way on Windows. Thanks.
@lyth1um · 1 year ago
The worst part about ZFS so far is shrinking; LVM and dumb filesystems can do it. But like in real life, we can't get everything.
@Mice-stro · 2 years ago
Something interesting is that while you can't expand a pool by 1 drive, you can add it as a hot spare and then add it into a full pool later.
@MHM4V3R1CK · 2 years ago
I have one hot spare on my 8-disk RAIDZ2, so 9 disks. Are you saying I can expand the storage into that hot spare so it adds storage space and removes the hot spare?
@ericfalsken5188 · 2 years ago
@@MHM4V3R1CK No, but if you expand the RAIDZ later, you can use the hot spare as one of those drives... Not sure if that's quite as awesome... but the drive is still giving you usefulness in redundancy.
@MHM4V3R1CK · 2 years ago
@@ericfalsken5188 Not sure I follow. Could you explain in a little more detail please?
@ericfalsken5188 · 2 years ago
@@MHM4V3R1CK You're confusing 2 different things. The "hot spare" isn't part of any data vdev, but it's swapped into a pool to replace a dead or dying drive when necessary. So it can still be useful to help provide resiliency in the case of a failure... but it isn't going to help you expand your pools. On the other hand, because it isn't being used... when you DO get around to making a new pool with the drive (or if TrueNAS adds ZFS expansion in the meantime), then you can still use the drive. If you do add the drive to a pool, then it's not a hot spare anymore.
@MHM4V3R1CK · 2 years ago
@@ericfalsken5188Oh yes. I understand the hot spares functionality. I thought for some reason based on your comment that having the hot spare configured in the pool meant I got some free pass to use it to expand the storage. I misunderstood. Thanks for your extra explanation!
@christopherwilliams1878 · 1 year ago
Did you know that this video has been uploaded to another channel?
@LAWRENCESYSTEMS · 1 year ago
Nope, thanks for letting me know.
@enkrypt3d · 1 year ago
So what's the advantage of using several vdevs?? If you lose one you lose everything?! Eek!
@june5646 · 2 years ago
How to expand a pool? You don't unless you're rich lmao
@nid274 · 1 year ago
Wish it were easier.
@emka2347 · 10 months ago
I guess Unraid is the way to go...
@icmann4296 · 8 months ago
Please remake this video. Starting point: the viewer knows RAID and mdadm, knows nothing about ZFS, and believes that ZFS is useless if it can't do the MOST BASIC multi-disk array function of easily expanding storage. I shouldn't have to watch 75 other videos to understand ZFS well enough to get one unbelievably, hilariously basic question answered.
@LAWRENCESYSTEMS · 8 months ago
ZFS is complex, and if you are looking for a RAID system that can be easily expanded, then ZFS is not for you.
@LudovicCarceles · 1 year ago
Thanks!
@namerandom2000 · 1 year ago
This is so confusing... there must be a simpler way to explain this.
@bluegizmo1983 · 1 year ago
How to expand ZFS: switch to Unraid and quit using ZFS if you want easy expansion 😂
@LAWRENCESYSTEMS · 1 year ago
But then you lose all the performance and integrity features of ZFS.
@LesNewell · 2 years ago
ZFS doesn't make it very clear, but basically a pool is a bunch of vdevs in RAID0.
@piotrcalus · 2 years ago
Not exactly. In ZFS, writes are balanced so that all vdevs fill their free space at the same rate. It is not RAID0.
@ff34jmr · 2 years ago
This is why Synology still wins... easy-to-expand volumes.
@bangjago283 · 1 year ago
Yes, we use Synology for 32TB. But do you have recommendations for 1PB of storage?
@TheBlur81 · 1 year ago
All other things aside, would a 2-vdev Z2 pool (4 drives per vdev) have the same sequential read/write as a single 6-drive vdev? I know the IOPS will double, but strictly R/W speeds...
@ashuggtube · 2 years ago
Boo to the naysayers 😊
@dariokinoshita8964 · 8 months ago
This is very bad!!! Windows Storage Spaces allows adding 1, 2, 3, or any number of disks with the same redundancy.
@LAWRENCESYSTEMS · 8 months ago
Windows Storage Spaces is not nearly as robust as ZFS and is a very poorly performing product that I never recommend anyone use.