Absolutely outstanding! The information, pace and delivery were spot on, as was the solution. You took something that many will feel is difficult and dangerous, and made it understandable and straightforward. Well done, and a subscription well earned.
@esmailbosspro8066 6 months ago
I get so excited when you post
@techadsr 6 months ago
My three-node Proxmox cluster had a Ceph cluster using three USB-attached 1TB NVMe drives, one per node. I was experimenting with it as a way to reduce the single point of failure compared to NFS on a Synology NAS. It ran fine for about a week. Then one drive "failed". Quotes because testing the drive after removing it showed it was good. Maybe it overheated? Unknown, but Proxmox Ceph said it was down. I thought about how I would repair it but ultimately never found out how to replace the one failed drive. Now that you've shown how before one actually fails hard, I may rebuild the Ceph cluster. If the failure happens again, I'll try to replace the drive using your example here in this video. Thanks
@zippi777 6 months ago
Thanks as always for these useful guides!
@bast7486 4 months ago
Super easy to understand. Great work. :)
@jwspock1690 5 months ago
Thanks for the video
@PerNilsson1 6 months ago
Great video!
@kristof9497 6 months ago
Thanks for the video.
@yarinik 6 months ago
Thank you for the video
@eovermeer 5 months ago
Great tutorial! Is this the way to go to replace the disks in my 3-node Ceph cluster, upgrading from 256GB NVMe to 1TB?
@heselmas 3 months ago
Should work.
@hafizrafiyev6935 6 months ago
Hello, why did you reboot pve1 in order to replace the OSD disk? Is a reboot mandatory?
@MRPtech 6 months ago
If your Proxmox node supports hot-swapping drives, then you don't need to shut it down. In my case I have 3x Beelink mini PCs and can't replace a drive while the node is running, so I need to: shut it down -> replace the drive -> start the node.
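For anyone following along, the general shape of that replacement on a non-hot-swap node can be sketched roughly as below. This is only an outline, not the exact steps from the video: the OSD id (3) and device path (/dev/sdb) are placeholders you'd replace with your own, and you should confirm the cluster is healthy (`ceph -s`) between steps.

```shell
# Sketch only: OSD id 3 and /dev/sdb are hypothetical placeholders.
# 1. Mark the failing OSD "out" so Ceph rebalances its data onto the other OSDs
ceph osd out 3

# 2. Stop the OSD daemon on the node that hosts it
systemctl stop ceph-osd@3

# 3. Remove the OSD entirely (CRUSH map entry, auth key, OSD record)
ceph osd purge 3 --yes-i-really-mean-it

# 4. Shut the node down, physically swap the drive, and boot it again
#    (skip the shutdown if the node supports hot-swap)

# 5. Create a new OSD on the replacement disk using the Proxmox helper
pveceph osd create /dev/sdb
```

Waiting for the rebalance to finish before purging (watch `ceph -s` until the cluster reports HEALTH_OK) keeps you from dropping below your replication target while the drive is out.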