What's the fastest VM storage on Proxmox?

48,520 views

ElectronicsWizardry

ZFS, BTRFS, LVM, Directory. There are many options for storing VM images on a disk in Proxmox and other KVM-based hypervisors. In this video, I take a look at the features and performance of all of these different storage methods.
For my test system I used a Xeon E5-2643 v4 system running Proxmox VE 7.2-7 with 128GB of RAM, and a Samsung PM1725 as the test SSD.

Comments: 116
@xxxbadandyxx · 1 year ago
I feel like I have learned more from this 8:55 video than I have from scouring forums for hours and piecemealing things together. Thank you for the straightforward video.
@gg-gn3re · 5 months ago
Yeah, when you look at LVM vs. LVM-thin, for example, you get trash information all over the forums and other sites. This guy has been the best single source of information for various projects for many years.
@joshuaharlow4241 · 2 months ago
Agreed. I'm not sure how much time I saved, but it's a lot.
@LiterallyImmortal · 1 year ago
I've been trying to learn Proxmox the past couple of days and this was SUPER helpful. Thanks a bunch man. Straight to the point, and you explain your opinions on the facts presented.
@RomanShein1978 · 1 year ago
Great video. It is worth mentioning that it is possible to use the same ZFS pool to store all kinds of data (vdisks, backups, ISOs, etc.). The user may create two datasets, and assign the first dataset as ZFS storage and the second one as a directory.
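For anyone who wants to try this on the command line, a minimal sketch - it assumes an existing pool named "tank", and the dataset names and storage IDs here are illustrative, not anything from the video:

    # one dataset for VM disks, one for file-based content
    zfs create tank/vmdata
    zfs create tank/files
    # register the first as zfspool storage, the second as directory storage
    pvesm add zfspool tank-vm --pool tank/vmdata --content images,rootdir
    pvesm add dir tank-files --path /tank/files --content iso,backup,vztmpl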
@magnerugnes · 9 months ago
kzbin.info/www/bejne/pYSnXomlodSEl8k
@cryptkeyper · 1 year ago
Finally, a video that is straight to the point on what I wanted to know. Thank you.
@theundertaker5963 · 1 year ago
Thank you for an amazing, straight to the point, and concise video. I have actually been spending a lot of time trying to put together all the bits and pieces of what you managed to put into this fantastic video, for a project of mine I shall be undertaking soon. Thank you for the time you put into collecting and presenting all the benchmarks. You have a new subscriber.
@paulwratt · 1 year ago
For those interested, Wendell just did a "what we learned" review of Linus' (LTT) petabyte ZFS drive failure - "A Chat about Linus' DATA Recovery w/ Allan Jude" - ZFS got another development boost (with more coming) as a result.
@Shpongle64 · 3 months ago
Bro, I've been researching this topic for a couple of hours on and off each day. Thank you for just combining this information into one video.
@Shpongle64 · 3 months ago
Generally, what I gathered was: set the physical storage to a ZFS pool (not a directory) and then have the VM disks set to raw.
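A small sketch of what that looks like in practice - the storage ID "tank-vm" and VM ID 100 are illustrative; on zfspool-type storage, Proxmox allocates disks as zvols, which are raw by nature:

    qm set 100 --scsi0 tank-vm:32     # allocate a 32G disk; zfspool disks are raw zvols
    zfs list -t volume                # the new zvol appears, e.g. tank/vmdata/vm-100-disk-0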
@MHM4V3R1CK · 1 year ago
Thank you for these videos. Very clear, and answers the questions that come up as I'm listening. Satisfying!
@nalle475 · 1 year ago
Thanks for a great video. I found ZFS to be the best way to go.
@crossfirebass · 10 months ago
Not gonna lie... I need a whole new vocabulary lol. Thanks for the explanations. I kind of dove face first into the world of virtualization and wow, do I need an adult. I bought some PC guts off a coworker for $500: an AMD Ryzen Threadripper 2990WX 32-core processor (64 threads), 64 gigs of RAM (forgot the speed/version), and an ASRock MB. I threw in 24TB of spinning rust and am now learning how to run VMs/set up an enterprise. End goal... stay employed lol. Thanks again for the help.
@forrestgump5959 · 2 months ago
And how is it going so far?
@gregoryfricker9971 · 1 year ago
This was an excellent video. May the algorithm bless you.
@tulpenboom6738 · 3 months ago
One advantage of LVM over ZFS, though, is that you can share it across hosts. If you have a cluster using shared iSCSI, FC, or SAS storage (where every host sees the same disk), you can put LVM on that disk (on the first host; use vgscan on the rest), add it as shared LVM in the GUI, and all other hosts see the same volume group. Allocate VMs out of that group, and it's easy and quick to do live migrations. ZFS cannot do this.
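A rough sketch of that workflow on the CLI, assuming the shared LUN shows up as /dev/sdX on every node (the device name and storage ID are illustrative):

    # on the first node
    pvcreate /dev/sdX
    vgcreate sharedvg /dev/sdX
    # on each remaining node, pick up the new volume group
    vgscan
    # register it once; --shared 1 marks the VG as visible from every node
    pvesm add lvm shared-lvm --vgname sharedvg --shared 1 --content images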
@2Blucas · 4 months ago
Thank you once again for the excellent video and for sharing your knowledge with the community.
@PeterBatah · 8 months ago
Thank you for sharing your time and expertise with us. Insightful and informative. Clear and precise.
@BenRook · 1 year ago
Nice presentation of what's available and pros/cons... good vid! Will stay tuned for future content... thx.
@poopbot5340 · 4 months ago
Great video, straight to the point! Confirmed my answer by 0:30 but stuck around to see how they all performed.
@SteveHartmanVideos · 9 months ago
This is a fantastic primer on file storage for Proxmox.
@iShootFast · 11 months ago
Awesome overview, and cleanly laid out.
@Alex-sm6dx · 2 months ago
Great video, thank you!
@advanced3dprinting · 8 months ago
Really love your content. I hate that channels with way less info but flashy edits get the attention, when the guys that know their shxt don't get the same views.
@mmgregoire1 · 11 months ago
Ceph RADOS is definitely the way to go. I hope that the performance of BTRFS is improved in the future; I don't really care for RAID 5 or 6 and prefer 10, 1, or none generally anyway. BTRFS send and receive is a killer feature. I prefer that BTRFS is GPL-licensed and in-kernel; this makes booting and recovery scenarios based on BTRFS potentially better with some work on the Proxmox side. Fingers crossed for BTRFS.
@lawrencerubanka7087 · 1 month ago
I'm with you! Ceph works a treat in conjunction with Proxmox HA. Ceph lets any node see the disk image, so there's no downtime when migrating a VM. We get replication across disks or hosts as well as the RAID-like erasure coding. I have great fun shutting down nodes running VMs and watching the VM hop across the network to another node, never missing a beat. The options offered by Proxmox are awesome!
@DLLDevStudio · 7 months ago
BTRFS has changed since this video was made. It should be way faster today. I wish for an updated video...
@ElectronicsWizardry · 7 months ago
I have been interested in BTRFS for a while now, and plan on taking a look at it in the future. It seems to still be in tech preview status in Proxmox, so I'm waiting for it to be stable before I look at it much more.
@DLLDevStudio · 7 months ago
@@ElectronicsWizardry Hello, brother. It appears that the system is stable when using the stable kernel. I wish it had some effective self-healing capabilities, which would allow it to replace ZFS in some of my applications. Although ZFS is excellent, BTRFS seems to be faster already. Meanwhile, XFS is still the fastest but lacks any kind of protection.
@AdrianuX1985 · 1 year ago
5:00: After many years, the BTRFS project is still considered unstable. Despite this, Synology uses BTRFS in its commercial products.
@paulwratt · 1 year ago
Yay for network storage devices that use proprietary hardware configurations ("right to repair" be damned).
@carloayars2175 · 1 year ago
Synology Hybrid RAID (SHR) uses a combination of BTRFS and LVM. It avoids the problem parts of BTRFS this way while still delivering a reliable file system with many of the main benefits of BTRFS/ZFS.
@ElectronicsWizardry · 1 year ago
I think SHR also uses mdadm: mdadm is used for RAID, and BTRFS is used for the filesystem and checksumming. If a checksum error is found, md delivers a different copy and the corrupt data is replaced. LVM is used to support mixed drive sizes.
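Synology's exact implementation is proprietary, but a rough sketch of that same layering on plain Linux might look like this (device and volume names are illustrative):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    pvcreate /dev/md0                 # LVM sits on top of the md array
    vgcreate vg1 /dev/md0
    lvcreate -l 100%FREE -n volume_1 vg1
    mkfs.btrfs /dev/vg1/volume_1      # BTRFS on top provides the checksumming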
@mmgregoire1 · 11 months ago
BTRFS is also used by Android, Google, Facebook, SUSE, and many more...
@gg-gn3re · 5 months ago
Lots of things have used BTRFS commercially for years, as others have mentioned. BTRFS will be considered unstable for another 10 or more years, so don't let that stop you if you want to use it for some reason. Us home people don't have the issue of what license certain stuff has, since we don't resell, so we can use many things that these vendors can't/won't.
@philsogood2455 · 1 year ago
Informative. Thank you!
@angelgil577 · 1 year ago
You are a smart cookie. Thank you, this info is very helpful.
@haywagonbmwe46touring54 · 1 year ago
Ahh thanks! I was looking for just this kinda video.
@andymok7945 · 1 year ago
Thanks, very useful info in this video.
@lecompterc83 · 1 month ago
No idea what was just said, but I’ll piece it together eventually 😂
@dgaborus · 1 year ago
At 7:07, "slight performance advantages"? Performance is 3x faster with PCIe passthrough than with ZFS or LVM. Although I prefer ZFS as well for the flexibility.
@perfectdarkmode · 3 days ago
If you use ZFS, does that mean you would not want hardware RAID on the physical server?
@daniellauck9565 · 5 months ago
Nice content, thanks for sharing. Is there any comparison or deep study of centralized storage with iSCSI or Fibre Channel?
@danwilhelm7214 · 1 year ago
Well done! My data always resides on ZFS (FreeBSD, SmartOS, Linux).
@zebraspud · 6 months ago
Thanks!
@JJSloan · 1 year ago
Ceph has entered the chat
@jasonmako343 · 1 year ago
Nice job.
@perfectdarkmode · 3 days ago
How does ZFS compare to Ceph?
@user-ct7wu1zv9e · 11 months ago
Hello dear EW, can you please review Proxmox 8 ZFS vs. BTRFS performance again?
@ElectronicsWizardry · 11 months ago
I think BTRFS is still in technical preview status currently. I'm waiting for it to reach the full release, and then I'll take a closer look.
@adimmx8928 · 1 year ago
I have a SQL query taking 15 seconds on a VM in Proxmox stored on an NVMe SSD. I created 3 other VMs, all running the same OS but on different filesystems (ext4, BTRFS, and ZFS), and installed only the MariaDB server, serving the same database but via TCP, and I could not get the performance of the initial VM. Any ideas why? I only get close to its performance with an LXC container.
@ElectronicsWizardry · 1 year ago
I'm not sure what the issue you have is here; some things I think it might be: check that virtio drivers are used for the VM, to allow the best virtual disk performance. I'd guess the massive performance difference could be due to caching; I'm not sure how caching is set up for containers, but if RAM is being used as a cache, that large of a performance delta would be expected. Also, if your system supports it, I'd try doing a PCIe passthrough of the SSD to the VM, as it should allow the best performance by removing the overhead of virtual disks.
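As a hedged sketch of those three suggestions on the Proxmox CLI - the VM ID, storage/volume names, and PCI address are all illustrative:

    qm config 100                            # check the current controller and disk options
    qm set 100 --scsihw virtio-scsi-single   # virtio SCSI controller with per-disk iothreads
    qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1,cache=none
    # optional: pass the whole NVMe device through, bypassing the virtual disk layer
    # (requires IOMMU/passthrough to be enabled on the host)
    qm set 100 --hostpci0 0000:03:00.0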
@gsedej_MB · 1 year ago
Hi. Is it possible to "pass" a ZFS directory or child dataset to a guest? The main idea is that ZFS is a filesystem, and the guest needs to have its own filesystem (e.g. ext4), which is overhead. So only the host should be doing filesystem operations, while the guest would see it as a folder. I guess ZFS would have to support some kind of server/client infrastructure, but without networking overhead...
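For full VMs this generally needs a file-sharing layer (9p/virtiofs or NFS), but for LXC containers a host dataset can be bind-mounted straight into the guest with no guest filesystem at all. A minimal sketch, assuming container ID 101 and a dataset mounted at /tank/files (both illustrative):

    # the container sees the host dataset as a plain folder at /mnt/files
    pct set 101 -mp0 /tank/files,mp=/mnt/files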
@attilavidacs24 · 3 months ago
I can't get decent speeds on Proxmox with my NVMe or HDDs. I'm getting a max of 250MB/s on a 4-HDD RAID 5 array virtualized, even with PCI passthrough, but unvirtualized it's 750MB/s. Even my NVMe drive virtualized starts off at 800MB/s, then drops down to 75-200MB/s and fluctuates. I'm running the virtio SCSI controller. Why are my speeds slow?
@ElectronicsWizardry · 3 months ago
That's a strange issue I've never seen. What hardware are you using? Do you get full speeds on the Proxmox host using tools like fio? Is the CPU usage high when doing disk IO?
@attilavidacs24 · 2 months ago
@@ElectronicsWizardry I'm running a Ryzen 7900 CPU, an LSI 9300 HBA connected to 7 HDDs in 2 vdevs, 1 cache SSD, one NVMe PVE boot drive, and I also have a Samsung EVO NVMe for VMs and a Mellanox 10G NIC. I will try some fio benchmarks and report back. I have 64GB total RAM, and the CPU usage stays quite low throughout all the VMs. My HBA is using PCI passthrough to a TrueNAS VM.
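For reference, a couple of plausible fio invocations for this kind of test - the file path is illustrative, and on ZFS, ARC caching can inflate results (--direct may also not be honored on older ZFS versions, so it is omitted here):

    # 4k random read test
    fio --name=randread --filename=/root/fio.test --size=4G --rw=randread \
        --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --time_based
    # sequential throughput test
    fio --name=seqread --filename=/root/fio.test --size=4G --rw=read \
        --bs=1M --iodepth=8 --ioengine=libaio --runtime=60 --time_based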
@smalltimer4370 · 10 months ago
I'm in the process of building an NVMe Proxmox server using a combination of an onboard NVMe drive with 4 x 2TB NVMe in ZFS RAID 10. That said, and based on your experience, would this be the optimal way to go for VMs?
P.S. Having read multiple posts and comments on SSD wear, I remain a bit worried about my setup choice, as I'd like to get the most out of my storage system without sacrificing the life of the devices - i.e., 3 years would seem reasonable for a refresh, IMO.
@ElectronicsWizardry · 10 months ago
Yea, a RAID 10 makes a lot of sense for VMs due to the high random performance. I wouldn't worry about SSD wear much for home server use, as most SSDs have more endurance than you would ever need, and they will go well over the rated limit. I'd guess the drives will be fine in 3 years. There are high-endurance drives you can get if you're worried about endurance.
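If you do want to keep an eye on wear, a quick sketch of checking it on an NVMe drive - the device path is illustrative, and this assumes smartmontools (and optionally nvme-cli) is installed:

    smartctl -a /dev/nvme0        # look for "Percentage Used" in the health section
    nvme smart-log /dev/nvme0     # same counters via nvme-cli, if installed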
@VascTheStampede · 1 year ago
And what about Ceph?
@ElectronicsWizardry · 1 year ago
I was only looking at local storage in this video, so I didn't include iSCSI, Ceph, NFS, and similar. I don't think there would be an easy way to compare Ceph to on-host storage, as it's made for a different use case, and I don't have the correct equipment for testing currently.
@scottstorck4676 · 1 year ago
Ceph lets you store data over many nodes to ensure availability. If you need the availability Ceph provides, the kind of benchmarking done for this video is not something you would normally look at. I run a small six-node Proxmox cluster with Ceph, and the performance it provides is not really comparable with filesystems on single nodes, as the resources are used on the cluster as a whole. There are so many factors when dealing with performance on Ceph, including the GHz of a CPU core, the network speed, and the number of HDDs / SSDs / NVMe drives used, as well as their configuration. It is not something where you can compare benchmark results between systems, unless the hardware, software, and configuration are 100% identical.
@lawrencerubanka7087 · 1 month ago
@@scottstorck4676 ... and the network speed, and the network speed. :) I'm still floored by how fast Ceph runs in real-world use.
@robthomas7523 · 1 year ago
What filesystem do you recommend for a server whose storage needs keep growing at a high rate? LVM?
@lawrencerubanka7087 · 1 month ago
Ceph. You can throw another OSD (drive) into a pool at any time. You have similar options for replication (mirroring) and erasure coding (RAID-Z-like) as with ZFS or RAID, plus the ability to spread the storage across multiple nodes in a cluster. No need for periodic replication of your LVM-based images; Ceph does this in real time, continuously. All nodes see the same data at the same time.
@jossushardware1158 · 2 months ago
What about Ceph?
@ElectronicsWizardry · 2 months ago
I didn't cover Ceph, as it's not a traditional filesystem/single-drive solution like the other options covered. I plan on doing more videos on Ceph in the future. The quick summary is that Ceph is great if you want redundant storage across multiple nodes that's easy to grow. It's typically slower than a single drive in a small environment, due to the additional overhead of multiple nodes and having to confirm writes across multiple nodes.
@jossushardware1158 · 2 months ago
@@ElectronicsWizardry Thank you for your answer. I have understood that enterprise SSDs with PLP are the only way to make Ceph faster. Of course, node links have to be at least 10Gb or more. Do you know if a MySQL Galera cluster also confirms writes across multiple nodes? So would it also benefit from PLP in SSDs?
@paulwratt · 1 year ago
That statement you made about the layers you need to adjust individually is not reflected in any graphs anywhere, and that's a shame, because it clearly demonstrates _another_ main benefit of using ZFS over LVM (yay, look, BTRFS is way out in front, oh wait...).

Not sure how to take the "ZFS fakes the Proxmox cache setting" point; for testing non-cached it _is_ relevant, but that is _not_ a real-world scenario, to the extent that you could attach a drive/device which has _no physical cache_ and ZFS will still happily cache that device - a more authentic real-world scenario (_if_ you could indeed find such a device).

The _best_ part about ZFS, as Wendell showed and admitted: when your (especially RAID) drive pool goes belly up, to the point software tools cannot even help, you can still reconstruct the original data by hand if need be, as _everything_ needed to achieve that is there.

BTRFS _might_ "get there in the end", as ZFS has had an extra 10 years of use, testing, and development up its sleeve, but those BTRFS "features" that have not been "re-aligned" for years mean it's never going to be a practical solution except in isolated cases. It's better off being used for SD-card filesystems, where it can extend the limited lifespan of the device (if set up correctly) and speed is already a physical issue (as long as you don't want to use said SD card on a Windows system...).

Thanks for taking the time to do the review.
@AdrianuX1985 · 1 year ago
For several years, the dedicated FS for SD cards has been F2FS (Flash-Friendly File System).
@mikemorris5944 · 1 year ago
Can you still use ZFS as a storage option if you didn't install Proxmox using ZFS format?
@ElectronicsWizardry · 1 year ago
Yea, ZFS can be added to a Proxmox system no matter what the boot volume is set to. The boot volume only affects data that is stored on the boot drive, and storage of any type can be added later on.
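A minimal sketch of adding a pool after install - the disk IDs and the storage ID "tank-vm" are illustrative:

    # mirror two spare disks and register the pool with Proxmox
    zpool create tank mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
    pvesm add zfspool tank-vm --pool tank --content images,rootdir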
@mikemorris5944 · 1 year ago
@@ElectronicsWizardry Thanks again, EWizard.
@Goldcrowdnetwork · 1 year ago
@@ElectronicsWizardry So if adding a USB storage device like a 2-terabyte WD Passport drive (I know this is not ideal, but it's what I have laying around), would ZFS be a better choice than LVM or LVM-thin, in your opinion, for storing LXC templates and snapshots with Docker apps inside them?
@ElectronicsWizardry · 1 year ago
@@Goldcrowdnetwork For practical purposes, there will be almost no difference. The containers will run the same on both. I'd personally use ZFS, as I like the additional features like checksumming, and I like using the ZFS tools. LVM would be a tiny bit faster, but it will likely be very limited by the HDD with both of these.
@SEOng-gs7lj · 1 year ago
I don't quite understand the remark that ZFS can connect "to the physical disks and goes all the way up to the virtual disks" at 4:13. I mean, doesn't LVM/ext4 in Proxmox provide the same? I'm trying to create an Ubuntu VM with a virtual disk formatted as ext4; is this correct? If not, is there a demo showing the "better" way? Thank you.
@ElectronicsWizardry · 1 year ago
I think I said that wrong in the video. Other filesystems can be used as one layer between the disks and the VM. The point I was trying to get across was that ZFS has additional features, and additional software would be needed if similar features were wanted with filesystems like ext4. ZFS, for example, supports RAID and snapshots, and in order to have similar features on ext4, mdadm would have to be used for RAID and LVM/QCOW2 for snapshots. I like using ZFS, as there is one piece of software to handle the filesystem, RAID, snapshots, volume management, and other drive-related operations. The filesystem your VM is using isn't affected by the storage configuration on the host, and using ext4 on an Ubuntu VM will work well.
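To make the layering point concrete, a hedged side-by-side sketch - dataset, VG, and LV names are illustrative:

    # ZFS: snapshot and rollback are built in
    zfs snapshot tank/vmdata/vm-100-disk-0@pre-upgrade
    zfs rollback tank/vmdata/vm-100-disk-0@pre-upgrade

    # ext4 on LVM: a separate layer (LVM) provides the snapshot,
    # and it needs free space reserved in the volume group
    lvcreate --snapshot --size 5G --name pre-upgrade /dev/vg0/vm-disk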
@SEOng-gs7lj · 1 year ago
@@ElectronicsWizardry Cool, thank you!
@SEOng-gs7lj · 1 year ago
I have a Proxmox (ZFS) host and an Ubuntu (ext4) guest. After installing MySQL in my Ubuntu VM, it takes 3 minutes to ingest an uncompressed .sql file; something is definitely wrong. Any idea what I can check/fix? Thanks!
@ElectronicsWizardry · 1 year ago
I'd take a look at system usage during the import in the VM first: what is the CPU and disk usage? Then, if it's disk-usage limited, check if other guests are using too much disk on the host.
@SEOng-gs7lj · 1 year ago
@@ElectronicsWizardry I'm hitting 100% disk utilization... but there is hardly any activity apart from MySQL. Seems to be a configuration issue, but I don't know where.
@dominick253 · 1 year ago
I feel like there's a code in your blinking. Maybe Morse code?
@JohnSmith-iu8cj · 1 year ago
SOS
@Josef-K · 10 months ago
What about dRAID?
@ElectronicsWizardry · 10 months ago
I haven't looked at dRAID, but I will take a look at it soon and make a video.
@Josef-K · 10 months ago
@@ElectronicsWizardry Well, I was tinkering around today with a 4TB and a 3TB drive that I wanted to mirror. I ended up splitting them into 1TB partitions, so it let me create a dRAID2 with only two drives (7 x 1TB partitions), one of which acts as a spare. This got me thinking - can dRAID be used as my root Proxmox (bare metal) install in order to make Proxmox even more HA? And now I'm also wondering - is there any kind of performance and/or reliability gain (maybe even across multiple nodes) if I have even more partitions per disk for dRAID? The idea being you can sip each partition for its data across every partition in my cluster.
@user-wc1st6lx1p · 7 months ago
You need to increase the wait time in your blink function.
@DomingosVarela · 1 year ago
Hello, I'm installing the new version for the first time on an HP server with 4 x 300G disks. I want to know the recommended option for using the disks: keep Proxmox installed on a single disk and use the rest in ZFS pool mode for the VMs? What option do you recommend? Thanks, best regards.
@ElectronicsWizardry · 1 year ago
Does the server have a RAID card? If so, I'd set up hardware RAID using the included RAID card. Then I'd probably go ZFS for its features, or ext4 if you want a tiny bit more speed. I will warn you that running VMs on HDDs will be a bit slow for most uses. If it doesn't have a RAID card, I'd probably use ZFS for RAID 10.
@DomingosVarela · 1 year ago
@@ElectronicsWizardry Thanks for your response! My server has a RAID card, and I disabled it because ZFS doesn't work very well on top of hardware-configured RAID. If I use RAID 10 with the 4 disks, I will only have the capacity of two of them; should I install Proxmox and the VMs on this same array?
@ElectronicsWizardry · 1 year ago
Yea, if you can disable the RAID card and use ZFS, that's what I'd do, as I'm a fan of ZFS. Using hardware RAID and ext4 would be a bit faster, especially if the hardware RAID card has a battery-backed cache it can use.
@DomingosVarela · 1 year ago
@@ElectronicsWizardry I'm using an HP Gen10; it has a very good RAID card, but I would really like to use ZFS for its advantages with Proxmox. So I need some support to understand the recommended option for using the disks: separate the Proxmox installation from the VMs, or use one RAID 10 across all disks and keep Proxmox and the VMs in the same pool?
@davidkamaunu7887 · 1 year ago
Ext4 isn't faster than ext2, because it is a journaling filesystem like NTFS. Journaling filesystems have overhead from the journaling. Likewise, it wasn't good to have LUKS on ext3 or ext4.
@davidkamaunu7887 · 1 year ago
Another thing most people won't catch on to: never RAID flash storage (SSDs or NVMe), as you create a race condition that will stress the CPU and the quartz clock. Why? Because they have identical access times that are as fast as a disk cache or buffer.
@gg-gn3re · 5 months ago
NTFS on Windows doesn't journal. It was designed to, but it was never implemented. Just like it is also a case-sensitive filesystem, but Windows disables that entirely. Their new filesystem has these features, mostly because NTFS breaks so much with their Linux subsystem. All in all, NTFS is more comparable to ext2 than it is to ext4.
Ext4 is also faster than ext2 when reading from HDDs (because of journaling)... With SSDs it depends on the type of data, but no journaling can sometimes be faster.
@daviddunkelheit9952 · 5 months ago
@@gg-gn3re That's an observation from your experience. You should always qualify your statements. Otw 😬
@daviddunkelheit9952 · 5 months ago
@@davidkamaunu7887 Intel has a couple of functions in 8th and 9th generation processors that allow for PCIe port bifurcation. This allows the use of H10 Optane, which has NAND and Optane on the same M.2 socket. There is also Virtual RAID on CPU (VROC), which is found on Xeon Scalable and used for specific storage models. It requires an optional upgrade key in the D50TNP modules. RAID 0/1/5/10. These are VMD NVMe.
@gg-gn3re · 5 months ago
@@daviddunkelheit9952 No, that's a fact, posted on Microsoft's website. The only automated journaling is metadata, and that is recent.
@RetiredRhetoricalWarhorse · 6 months ago
I am getting to the point of realizing how much Proxmox is nowhere near ready to compete with VMware. The way administration works, the absolutely bad documentation, and all the resources online are just so janky... Too bad. I'm even considering aborting switching my homelab over. I see no benefit compared to just running the current ESXi without patches indefinitely.
@shephusted2714 · 1 year ago
The big takeaway here is you want a NAS with lots of ECC memory and ZFS - a Z440 with 256GB RAM is about $1k, making it a great deal.
@teagancollyer · 1 year ago
I normally watch your videos in the background, but I actually focused on this vid today and noticed how much you blink, which, no offense intended, I found a bit distracting.
@paulwratt · 1 year ago
You probably could have _not_ said that, _no offense_ intended... I think he is fully aware of it.
@teagancollyer · 1 year ago
@@paulwratt Yeah, I thought about not including it; I just felt it rude without it, and I meant it sincerely.
@AdrianuX1985 · 1 year ago
I didn't pay attention; only your comment suggested it. I don't understand people who pay attention to such nonsense.
@MarkConstable · 1 year ago
@@AdrianuX1985 Because it is quite distracting. The quality of the content is excellent, but I had to look away most of the time.
@paulwratt · 1 year ago
@@AdrianuX1985 It's fine, you didn't need to reply (unless no one else did).
@typingcat · 1 year ago
Why blink so much?
@abb0tt · 4 months ago
Why not educate yourself?
@lawrencerubanka7087 · 1 month ago
Don't be an ass.
@ChetanAcharya · 1 year ago
Great video, thank you!