SMALL Proxmox Cluster Tips | Quorum and QDevices, Oh My! (+ Installing a QDevice on a RasPi)

  36,331 views

apalrd's adventures

So you want to turn your one or two Proxmox nodes into a small Proxmox cluster? What do you need to know about quorum, especially as it pertains to really small clusters of 2 and 3 nodes? In this video, I go over the different ways to stabilize a 2-node cluster for high availability, the storage requirements for highly available VMs, using a Raspberry Pi as a QDevice, and whether you even need one. I also talk about how a QDevice might not help you as much as you think in a 3-node cluster.
I also cover installing the QDevice daemon on a Raspberry Pi, and using a very small Proxmox node to maintain quorum without allowing it to be part of the highly available VM pool.
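For reference, the QDevice setup shown in the video boils down to a handful of commands. A minimal sketch, assuming a Debian-based Pi reachable at 192.168.1.50 (the IP is a placeholder) with root SSH login enabled:
# On the Raspberry Pi: install the vote daemon
sudo apt update && sudo apt install corosync-qnetd
# On every Proxmox node in the cluster: install the qdevice client
apt install corosync-qdevice
# On one Proxmox node: register the Pi as the QDevice
pvecm qdevice setup 192.168.1.50
# Verify the vote count afterwards
pvecm status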
Blog post:
www.apalrd.net/posts/2022/clu...
My Discord server:
/ discord
If you find my content useful and would like to support me, feel free to do so here: ko-fi.com/apalrd
Timestamps:
00:00 - Introduction
00:35 - Hardware
01:22 - Two Nodes Not Clustered
03:39 - Two Nodes Clustered
05:55 - Two Nodes Extra Vote Hack
07:34 - Two Nodes + One Quorum Only
09:28 - Shared Storage or Replication?
12:08 - QDevice on Raspberry Pi
16:46 - Two Nodes + QDevice
18:24 - Three Nodes + QDevice
20:51 - Conclusions and Future
Proxmox is a trademark of Proxmox Server Solutions GmbH

Comments: 122
@jimallen8238 (1 year ago)
Well done! I have watched many videos on Proxmox and read a bunch online about config, and I must say that your explanations are the clearest and most accessible. No fluff or ego, just clear and concise explanations and demos. I wish other YouTubers could follow your lead.
@apalrdsadventures (1 year ago)
Glad it helped!
@mtothem1337 (2 years ago)
I can't put my finger on it, but the way you describe things is top notch and keeps the viewer interested in the topic.
@apalrdsadventures (2 years ago)
Thanks!
@luisliz (2 months ago)
For real - I think he also titles things the way you'd expect to search for them. I had this issue, and as soon as I saw it was apalrd I was like: perfect, he will definitely fix my problem.
@koloblicin4599 (1 year ago)
This channel really is the gift that keeps on giving for me. Some videos still *woosh* over my head with condensed facts, but others would randomly pop up to help me tackle another challenge.
@apalrdsadventures (1 year ago)
Glad you like it!
@nyanates (1 year ago)
Thanks so much for these small cluster vids. Very clear and concise. Answers a lot of the questions I’ve been wrestling with getting started. Still learning it but I really like Proxmox and the solution it provides.
@brainamess2979 (7 months ago)
I had been directed to this video when I had just started with proxmox a few months ago but was so lost. Coming back today I appreciate this video so much. Thank you.
@JL-db2yc (2 years ago)
Excellent tutorials you have on your channel! This is what I needed to create my first cluster with two nodes. Thank you.
@apalrdsadventures (2 years ago)
Glad it helped!
@DarthBobo (2 years ago)
I have literally just gotten into Proxmox - also running HA and OctoPrint etc. Really looking forward to further videos on your channel. Keep up the good work.
@apalrdsadventures (2 years ago)
Thanks! I've got a few more in this series planned
@iamweave (10 months ago)
Thanks for answering my question about what happens if a node loses connectivity but doesn't go down, and how that affects replication when connectivity is restored. Love your vids!
@marcon527 (2 years ago)
Exactly what I was looking for: a double vote for my primary server, which must always be on, so I can leave my second node powered off when I don't need it. Superb explanation.
@apalrdsadventures (2 years ago)
Thanks! It's a really niche solution for really specific 2-node cluster cases
@dubscheckem1760 (7 months ago)
Your explanations helped me better understand quorum and solve some issues I was having! Cheers.
@JavierPerez-fq2fi (1 year ago)
Awesomely well explained, with step-by-step examples! Thank you dude!!
@apalrdsadventures (1 year ago)
Glad it was helpful!
@carlosjackson3201 (2 years ago)
Your qdevice setup worked like a charm! Thanks!!
@apalrdsadventures (2 years ago)
Glad it helped!
@ohokcool (27 days ago)
Great update to your other vid, I realized after watching that it’s perfect for the use case you outlined but that it only worked one way. I think I’ll go for a 3rd “witness” node for the quorum.
@CosminStefanMarin (5 months ago)
This was clear and concise. Thank you.
@donotknowwhoami8228 (2 years ago)
Good explanation. Love it.
@apalrdsadventures (2 years ago)
Glad it was helpful!
@octothorpian_nightmare (2 years ago)
This was cool, thanks for the run through!
@apalrdsadventures (2 years ago)
Glad you enjoyed it!
@JeanFrancoCaringi (1 year ago)
Terrific explanation, thank you very much! Greetings from Mexico!
@apalrdsadventures (1 year ago)
Greetings from near Detroit!
@ronsflightsimlab9512 (2 years ago)
Amazing work. Thank you!!!
@philbos6232 (11 months ago)
Haha, I love the fact that around 6:29 he realized he forgot to film the reconnection of pve2 and just plain reversed the clip of taking the ethernet cable out. Made me chuckle. Great video, will keep watching!
@fabfianda (2 years ago)
Great job with videos, thank you very much.
@apalrdsadventures (1 year ago)
Glad you like them!
@chrisjchalifoux (2 years ago)
Thank you for the video, it helped me out a lot.
@apalrdsadventures (2 years ago)
Glad you liked it!
@MrNoBSgiven (2 years ago)
Thank you for this video. It made my life easier. I run a two-node "cluster" on a Pi 2 and Pi 3 for pihole and unbound services, using a virtual IP (keepalived). One suggestion: upgrade your kernel on the SD card and reboot the Pi BEFORE you do this, not after. 😜
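(For context, a keepalived virtual-IP setup like that is only a few lines. A minimal sketch of /etc/keepalived/keepalived.conf on the primary Pi, where the interface name and the virtual IP are assumptions:)
vrrp_instance PIHOLE {
    state MASTER            # use BACKUP on the second Pi
    interface eth0          # assumed NIC name
    virtual_router_id 51
    priority 100            # use a lower priority on the second Pi
    advert_int 1
    virtual_ipaddress {
        192.168.1.250/24    # the shared IP that clients use for DNS
    }
}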
@chkpwd (2 years ago)
You explain things so well. Would you mind doing a video on creating self-signed certificates to avoid the SSL certificate error in the browser?
@apalrdsadventures (2 years ago)
Do you mean setting up a certificate authority for your homelab, so your own devices trust it? Or trying to setup publicly signed certificates? Both have their own benefits and difficulties
@chkpwd (2 years ago)
@@apalrdsadventures I have certificates signed via Let's Encrypt and nginx for external services, with the help of Cloudflare. But in the case of the Proxmox web portal, ESXi, and some others, I struggle to sign those on my homelab network internally.
@apalrdsadventures (2 years ago)
So you want a homelab certificate authority, so you can add the CA to your laptop's trust list and then create your own 'lets encrypt' for internal services and they'll all be trusted via the trust in your homelab CA?
@chkpwd (2 years ago)
@@apalrdsadventures correct.
@techdad6135 (2 years ago)
@@apalrdsadventures +1 for this request. Also looking forward to the upcoming ceph storage video. I appreciate your content. Keep up the good work sir.
@user-gw9el1ew2f (2 years ago)
good stuff thanks
@apalrdsadventures (1 year ago)
Glad you enjoyed it
@davidwilliss5555 (1 year ago)
Great video. I just ordered a new server for my home lab and was wondering how I could run proxmox clustered with only 2. I have a 3rd server that's running TrueNAS scale, so I think I'll just spin up a VM on there and use that as the qdevice.
@NetBandit70 (2 years ago)
6:29 LOL!
@apalrdsadventures (2 years ago)
it's just like unplugging it, in reverse
@danieldewindt3919 (3 months ago)
Great, nice explanation :) All there is to know. I once ran into a problem and needed to remove a host from a cluster; it would be great if you could show how it's done the correct way. I had to reinstall everything on Proxmox 6 or 7. Keep up the nice videos, ciao.
@lensherm (1 year ago)
From what I understand and I just confirmed by testing, the QDevice node does not need corosync-qdevice installed. That only goes on the cluster nodes. The QDevice only needs the corosync-qnetd installed.
@johnwashifi (1 year ago)
Hello, could you show how to use Docker or a virtual machine on TrueNAS Scale to run this QDevice? Thanks in advance!
@takeover4726 (1 year ago)
Subbed, just because you cover important topics for new users. I was going to do a 3-node cluster, but I don't need it - it would just waste power. Now I can do 2 nodes and use a Pi.
@johnwashifi (1 year ago)
Perhaps you could do a tutorial showing how to remove a node and replace it in a cluster.
@johnwashifi (1 year ago)
Hello, could you make a tutorial on how to spin down HDDs in Proxmox? Thanks in advance!
@danbrown586 (2 years ago)
When you forget to sudo a command, you can run "sudo !!" to save retyping or editing the command line.
@apalrdsadventures (1 year ago)
Usually I cut it out with the magic of editing, then it was never mistyped to begin with
@ierosgr (2 years ago)
It would be nice to have a follow-up video troubleshooting these scenarios... for instance, what if the QDevice, the third node, or one of the other nodes goes down for good? Which lines in which configuration files need to be erased or changed in order to add new devices as a second node, third node, or QDevice? Many videos do a good job explaining how to set things up, but completely skip the uninstallation/removal process, as if every machine and project was built to last forever.
@apalrdsadventures (2 years ago)
By 'for good' do you mean you are intentionally removing them from the cluster? Removing a regular node isn't terribly difficult - pve.proxmox.com/pve-docs/chapter-pvecm.html#_remove_a_cluster_node - basically you just make sure replication tasks are deleted and then run `pvecm delnode <nodename>`. Removing a QDevice just requires `pvecm qdevice remove`. The difficulty can potentially come if you try to add a new node back using the IP/name of a node that was previously removed, or if you try to re-use the now-removed node without reinstalling.
@ierosgr (2 years ago)
@@apalrdsadventures I mean broken - you have to replace part or all of it. Intentionally removing it would have the same effect as well; bottom line, it needs to be replaced for whatever reason that might be. I know it isn't hard, but then again neither is making a cluster or setting up a VM with an HA group, etc., yet you made a video. Hence the suggestion to make another one for the rest of the stuff which wasn't mentioned here; otherwise you could just give links for this video too and not talk about it at all, which would be pointless, since it's against the whole idea of making a video. See where I am going with that? Giving me links is useless - I can search by myself.
Believe it or not, this is constructive criticism, even if I somehow fail to show it with what I am writing here. "Removing a QDevice just requires `pvecm qdevice remove`" - this would indeed remove the QDevice, but in your video you install it as well, so whatever apt install added needs an apt remove or purge too, don't you think? Unless you will replace the device and the installation stays for that purpose. "The difficulty can potentially come if you try to add a new node back using the IP/name of a node that was previously removed" - the whole point is, if something breaks, to replace it using the same network specs, not different ones; services might depend on that static IP. Once again, even in my head I don't sound offensive - I am sorry if you take it that way.
@bluesquadron593 (2 years ago)
Really nice explanation. Did you already make a video on how to remove an existing node from the cluster? That is a pain. Changing the name or the IP of a cluster node perhaps also falls into this category.
@apalrdsadventures (2 years ago)
For my cluster tutorial videos I re-install Proxmox for every video, sometimes more than once - I want to start from a fresh slate, as you would.
Removal is not as easy as adding, but it's possible. You have to remove all replication tasks first, then run delete (`pvecm delnode`) from any other node. If you ever want to add another node with the same IP (physically the same or not), you need to tell Proxmox to re-copy SSH certs across the cluster (`pvecm updatecerts`) after you re-add a node with the same IP, since the cluster will still have the certs from the old node in its known hosts file and you will get a lot of errors until you do that. You should also reformat/reinstall the node that left the cluster before using it again, even if you will still be running Proxmox on it.
As to changing the name of the cluster ............ please don't do that :(
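Condensed into commands (the node name is a placeholder), the removal flow looks roughly like this:
# First delete any replication jobs involving the leaving node, then,
# from any remaining node, drop it from the cluster:
pvecm delnode pve3
# If a node is later re-added with the same IP, refresh SSH certs cluster-wide:
pvecm updatecerts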
@bluesquadron593 (2 years ago)
@@apalrdsadventures Yeah, even reading this my head hurts. I had my fair share of cluster *@-cks figuring these things out.
@apalrdsadventures (2 years ago)
Oh this is just Proxmox, wait until Ceph comes in for the next video
@bluesquadron593 (2 years ago)
@@apalrdsadventures I use Ceph in my main cluster. No drama there. I have three identical nodes of HP EliteDesks. Decent compute power but still low cost and low power.
@apalrdsadventures (2 years ago)
Ceph is more tolerant of adding/removing nodes to the cluster but it still has its own quirks for sure, in addition to the Proxmox quirks
@techdad6135 (2 years ago)
Just recently jumped on the Proxmox train. I have a Proxmox Backup Server running on bare metal. Is it possible to use this node as the QDevice? If so, are the steps to install/set up the same as on the RPi?
@apalrdsadventures (2 years ago)
Steps are the same, except PBS may already have root password enabled.
@igorpavelmerku7599 (3 months ago)
Hi, do you have any video (or suggestion) about centralized update management (OS patches, and updates in general - like Tomcat, MySQL, etc.) in a mixed Linux environment (Ubuntu, SLES)? Not Proxmox related... Thanks
@LivioHenery (6 months ago)
I wanted to add a 3rd node to my cluster; if I do, would I need to run the last step again (pvecm qdevice setup -f)? I understand I'll still have to run the other step (apt install corosync-qdevice) on the 3rd node. Thanks in advance
@apalrdsadventures (6 months ago)
In general when you move to 3 nodes you'd remove the qdevice.
@jbrande (1 year ago)
Thank you. Just an idea: I have an RPi, but I trust my NAS more as a QDevice, so I run it in a VM on my NAS. But thanks for explaining all the cases - I didn't know that if one node goes down, the other cannot run a VM.
@apalrdsadventures (1 year ago)
Glad I could help
@jbrande (1 year ago)
@@apalrdsadventures Now I'm realizing that to be able to replicate, the volumes need to be ZFS, so I hate that I have to reinstall everything.
@apalrdsadventures (1 year ago)
Yes, Proxmox relies on zfs send internally for replication.
@iamweave (10 months ago)
So in a 2+1 node (a+b+q) situation with ZFS replicas on a+b: if a goes down and the VMs restart on b, but the node is only down because of a network failure (it's still running), what happens when a is reconnected? Will the new changes on node b reverse-replicate back to node a?
@apalrdsadventures (10 months ago)
Once a VM migration happens, the old copy is unused and once a is reconnected it will learn that it no longer owns that VM ID and must shut its copy down immediately. The replication config will have also changed to make a the replication target, so it will get its copy updated with the changes made on b so it can be a migration target in the future.
@EduardoSantanaSeverino (6 months ago)
The problem that I see is when you want to remove one Proxmox node from the cluster and then add a new one. I have seen that even though you removed one node from the cluster, it still shows up in the web UI, and then you have to go and delete a folder to fix it.
@igorpavelmerku7599 (4 months ago)
Hi, discovered your videos a day ago - very well done, much appreciated. Unfortunately, in my two-node installation, clustering does not work. When I try to join the second node I get an error that it can't find the pve2 pem certificate, even after adding a second vote to the first node and restarting the cluster daemons...
@apalrdsadventures (4 months ago)
You can run pvecm updatecerts -f on each node if it's having cert issues. Usually this is dealt with during cluster join, but if the nodes were previously clustered or doing other things it can be messed up.
@igorpavelmerku7599 (3 months ago)
@@apalrdsadventures Thanks a lot, hopefully I will get to check this out later this week. Thanks!
@igorpavelmerku7599 (14 days ago)
@@apalrdsadventures Hi, much later... the switch was the culprit: an HP ProCurve 1810 (read about that in a thread I discovered today). Installed another switch, a Zyxel GS1900, and boom... cluster working in no time. Cheers.
@ruwn561 (2 years ago)
Hello. Your VM guests will still use the little node unless you choose 'restricted' on the HA group.
@apalrdsadventures (2 years ago)
Only if none of the other nodes in the HA group are available to migrate to, which can't happen in a 3-node cluster since 2 nodes would be down.
@Nettechnologist (1 year ago)
Any issues with different processors (AMD/Intel) in a cluster, or different generations of the same brand?
@apalrdsadventures (1 year ago)
If you aren't live migrating VMs, there are no issues at all. CTs never live migrate, so they have no issues either. If you do want to live migrate VMs (and this only applies to the VMs you want to live migrate):
Mixed generations of AMD or Intel in the mainstream microarchitecture: you have to manually pick the CPU type of the lower generation, so the instruction sets are always compatible on any host the VM could migrate to - i.e. pick Skylake if that's your oldest, or the same-generation Epyc or Opteron if you're using Zen or the construction equipment families, even if they aren't server chips.
Mixed AMD/Intel: you have to either use kvm64 only (+AES), or manually find an instruction set that is fully compatible with both CPUs. It will probably be a generation or two older than your oldest CPU.
If any of your CPUs are not of the 'mainstream' microarchitecture (i.e. Intel Atom/Celeron or AMD big cat families like Jaguar) you will probably not find a profile that fits your CPUs, and will have to stick with kvm64 (+AES). This should always work. kvm64 is the default if you didn't change it.
If you aren't live migrating VMs then you should usually pick cpu=host for best performance.
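As a concrete sketch of the CLI side (VM ID 100 is a placeholder), the CPU type for a VM can be set with qm as well as in the GUI:
# Oldest CPU in the cluster is Skylake: pin the VM to the oldest common model
qm set 100 --cpu Skylake-Client
# Mixed AMD/Intel: lowest common denominator plus AES
qm set 100 --cpu kvm64,flags=+aes
# No live migration needed: expose the host CPU for best performance
qm set 100 --cpu host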
@NiHaoMike64 (3 months ago)
Could QDevice be ported to OpenWRT so that an existing router on the network can perform that role?
@apalrdsadventures (3 months ago)
It's not in the OpenWRT repos, but that doesn't mean it won't build on OpenWRT as-is (qnetd only depends on nss and corosync, and nss is already in the OpenWRT repos)
@adaywithdante9623 (1 year ago)
How did you configure ZFS? I'm trying to set up replication but I'm unable to set up ZFS with a single disk.
@apalrdsadventures (1 year ago)
For single-disk ZFS you select 'raid0' and then only one disk in the setup menu
@paulmaydaynight9925 (2 years ago)
I have 2 nodes that have both created a cluster by accident, so they only offer 'Join Information'. How do I revert one back so it can 'Join Cluster' on the other one?
@apalrdsadventures (2 years ago)
I would wipe and reinstall one of them. Only the first node in the cluster can have any VMs/CTs existing, so whichever node is the second node needs to be essentially a blank slate anyway.
@Renull55 (11 months ago)
So none of the cluster devices need to actually mirror each other in hardware? I've seen this mentioned before on various forums, but what isn't clear to me is this: if I had a VM with a certain amount of storage and RAM allocated to it, how would HA handle migration for that specific VM if its allocation is higher than the available hardware on the backup node where it would be restarted?
@apalrdsadventures (11 months ago)
They don't need to be identical or even similar. If you are not live migrating VMs, they don't even need to be the same team red / team blue.
If a VM requires more CPU cores than the hardware has in total, it will fail to start. If the VM requires more RAM than the system has free, it will also fail to start.
If you have some resource-heavy VMs and resource-light nodes, you can create a group of only the nodes which are allowed to run those VMs, so they won't migrate to any other nodes.
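A sketch of that grouping (the group name, node names, and VM ID are placeholders):
# HA group containing only the nodes allowed to run the heavy VMs;
# 'restricted' prevents members from ever starting on other nodes
ha-manager groupadd bignodes --nodes pve1,pve2 --restricted 1
# Add the resource-heavy VM to HA within that group
ha-manager add vm:100 --group bignodes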
@scuzzynate11 (4 months ago)
9:30 - Getting a "missing replicate feature on volume" error (error 500) when trying to do this. Anybody know what I'm missing?
@Mr.Jean-Paul (2 years ago)
Thanks for this awesome video 🙏. I was about to buy a second device for a 2-node cluster, but one important question remains unclear for me: what about VMs or LXCs that are running when one node fails? Do they stop, or continue to run on the remaining device (knowing now that I cannot start a stopped VM)? Also, would it be too weird to install a third node on my Synology NAS only to have enough votes when one of the two main devices fails? I have already installed Proxmox Backup Server on my NAS and used its storage as an NFS share, so theoretically this should work??!
@apalrdsadventures (2 years ago)
The services will continue running, but you cannot change their state if you lose quorum.
In your case, to avoid the overhead of running another VM, I'd install the QDevice package on your Proxmox Backup Server VM. You can also technically install PVE and PBS on the same host (by using the instructions to install PVE on Debian, on the existing PBS install), but the QDevice is easier to set up.
You will still have a single point of failure in the NAS, so any failure there will cause you to lose storage for the whole cluster. If you have anything really critical you might want to keep those VM disks on local ZFS and replicate, which removes the NAS single point of failure at least for those VMs. I do this for Home Assistant and nothing else.
@Mr.Jean-Paul (2 years ago)
@@apalrdsadventures Wow... thank you so much for your quick response!! OK, installing the QDevice on the PBS is not the problem (good to know that this works even on PBS!). Did I understand that right: if PBS goes down, I lose access to all nodes - well, not access, but starting/stopping them on the "main" nodes?
@apalrdsadventures (2 years ago)
Well, you have multiple things on the Synology, so it's not necessarily a PBS issue. Assuming you have 2 freestanding nodes + the Synology, running NFS (on the Synology) and PBS (as a VM), with the QDevice running on the PBS VM:
PBS itself is never critical to the operation of VMs, but you will get backup errors. So loss of the PBS system will have no impact on critical functions, at least in the short term.
With a 2-node + QDevice cluster, any single one of those can fail (either node or the QDevice) and the cluster will still operate. So again, loss of the PBS VM and its QDevice won't immediately impact critical functions, but you have a loss of redundancy.
But if you are using shared storage for VMs and the Synology goes down, the VMs have no access to their disks. So even though Proxmox doesn't need the QDevice for quorum, the VMs still need their disks over NFS.
@Mr.Jean-Paul (2 years ago)
@@apalrdsadventures Hi again, it seems I misunderstood your first answer. I will not use the NFS share of the PBS for the other two nodes (on NUCs); both nodes would have their own storage. So in this scenario, PBS could fail and the other two nodes would function as if nothing happened... still having enough votes. Is that somewhat correct?
@apalrdsadventures (2 years ago)
Yes, that's correct, as long as you don't lose another node everything will function just fine. You have to use ZFS local storage, not LVM, and use replication if you want high availability of VMs (to migrate themselves if a single node fails).
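A sketch of the replication side (the VM ID, target node, and schedule are placeholders):
# Replicate VM 100's ZFS volumes to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule '*/15'
# Check the state of all replication jobs
pvesr status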
@AdrianuX1985 (2 years ago)
18:24 - Is adding more QDevice nodes to the cluster a good idea?
@apalrdsadventures (2 years ago)
The docs don't mention more than one QDevice at all, and that's not a scenario I tried either, so I can't say how it would handle it.
@JohnSmith-yz7uh (2 years ago)
I think it doesn't make much sense, as you should always have an uneven number of cluster members. Example: 2 PVE hosts + 1 QDevice, or 4 PVE hosts + 1 QDevice. 2 PVE hosts + 3 QDevices won't really get you anywhere. You shouldn't design your cluster around an unreliable QDevice anyway.
@apalrdsadventures (2 years ago)
Basically, the QDevice adds an additional point of failure in exchange for one or more additional votes. It should really be a last resort for cluster stability.
With 2 nodes, you have 2 points of failure and 0 excess votes to maintain quorum. So adding a QDevice gets you 3 points of failure and 1 excess vote, so any one can now fail. That's positive.
With 3 nodes, you go from 3 to 4 points of failure and from 1 to 2 excess votes, but the QDevice itself has 2 votes, so if it fails it takes all of your excess. On the flip side, you could lose 2 nodes and operate 1 node + 1 QDevice, so it may be a valid strategy if you don't have enough backup power for everything and power loss is your primary concern.
Above 3 nodes, adding a QDevice can only help even-numbered clusters if there's a danger that the cluster could end up in two even-numbered halves due to something like a network switch failure. Once you're in 4+ node territory you should focus on redundancy in your power and networking instead of QDevices.
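The same reasoning as a quick vote table (quorum = floor(votes/2) + 1):
2 nodes:            2 votes, quorum 2 -> no failure tolerated
2 nodes + QDevice:  3 votes, quorum 2 -> any 1 of the 3 can fail
3 nodes:            3 votes, quorum 2 -> any 1 node can fail
3 nodes + QDevice:  5 votes (QDevice holds 2), quorum 3 -> two nodes can fail,
                    but losing the QDevice alone removes all excess votes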
@hpsfresh (2 months ago)
Interesting. How does replication work after node1 is up again? Replication is only one way, right?
@LucasHartmann (1 year ago)
Is it possible to add 2 QDevices? This could allow losing 2 nodes on a 3-node cluster.
@apalrdsadventures (1 year ago)
QDevices behave differently on odd-node vs even-node clusters, and the QDevice will get 2 votes on a 3-node cluster (bringing the vote total to 5 and quorum to 3). The intent of this is to prevent an even number of total votes (where 1 node + QDevice could go off on their own while 2 nodes are otherwise healthy). This means that the QDevice now becomes a single point of failure, since it takes its two votes with it.
@LucasHartmann (1 year ago)
@@apalrdsadventures I got that. My question is whether 3 nodes + 2 QDevices would work. The total number of votes would be 5, of which you need 3 for quorum. If you still have 3 votes, then at least one would be a working node, which could keep the cluster up. This would allow 2 dead nodes on a 3-node cluster.
@apalrdsadventures (1 year ago)
Corosync doesn't allow multiple qdevices in a cluster.
@gramzon (a month ago)
I wish you showed what happens in the 3-node setup with replication when the original node comes back up... Will the original node take over the VM? Will it retain the changes made on pve2? Will it continue to replicate its copy every 15 minutes, potentially erasing any changes made on pve2 while pve1 was down? Will pve2 continue running the VM and start replicating it to pve1?
@apalrdsadventures (a month ago)
Once a VM is moved by HA, the new node becomes the owner of the VM. Once PVE1 comes back up, all configuration changes made while it was down are replicated (this includes PVE2 being the new owner). PVE2's replication job will now replicate to PVE1. The replication task will fail if the destination data is different. So if the HA migration was due to a shutdown all will be fine, but if the node actually failed, then the up-to-15-minutes of 'extra' data on PVE1 will sit there until you either delete it or do something with it.
@FreedomAirguns (1 year ago)
Does anybody know if Proxmox lets you use multiple independent systems as *one* huge virtual machine? Quote: "ScaleMP is the leader in virtualization for high-end computing, providing increased performance and reduced total cost of ownership (TCO). The innovative Versatile SMP™ (vSMP) architecture provides software-defined computing and software-defined memory by aggregating multiple independent systems or high-performance SSDs into single virtual systems." ScaleMP vSMP can do just that, but it's basically out of the reach of "normies". I'd like to simulate a multi-socketed system with cheap hardware which doesn't scale and doesn't support multi-socket options, which is what ScaleMP excels at. To repeat: does anybody know if this is even remotely possible with Proxmox or even XCP-ng? Sorry for the redundancy in the sentences.
@tenekevi (1 year ago)
How lazy do you need to be to play the ethernet disconnection clip in reverse to illustrate plugging it back in? 😂 Really great explanation though!
@sebastianreal4363 (2 years ago)
Great video man, so many things I will do in my cluster. BTW, strangely I can't modify corosync.conf to give 2 votes to the main server.
@apalrdsadventures (2 years ago)
If the cluster isn't quorate already (both nodes alive/working) then it won't sync modifications and you will be in big trouble. Editing corosync.conf manually is also slightly dangerous, since changes are synced across the cluster immediately, and if you make any mistakes and save, all of the cluster nodes will be very unhappy with your configuration. You may need to log in to each node and run 'pvecm expected 1' (which lets the node operate without quorum) to fix the issue, then change expected back to 2 or reboot to undo the override. Also don't forget to increment the config version with each edit.
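For reference, the extra vote is just the quorum_votes field in the nodelist of /etc/pve/corosync.conf. A minimal sketch (names and addresses are placeholders) - remember that config_version in the totem section must be incremented with every edit:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2    # the extra-vote hack: pve1 alone stays quorate
    ring0_addr: 192.168.1.11
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.12
  }
}
totem {
  cluster_name: homelab
  config_version: 3    # increment on every edit
  version: 2
}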
@johnwashifi (1 year ago)
Hello, I have created a cluster with your tutorial. I have an issue now: after removing one node and adding a new one, the new node is NR, so I do not have quorum, despite the fact that on the new node I have run apt install corosync-qdevice and also pvecm qdevice setup MyQdevideIP -f. After that I got: command 'ssh -o 'BatchMode=yes' -lroot myNodeIP corosync-qdevice-net-certutil -m -c /etc/pve/qdevice-net-node.p12' failed: exit code 127. Please help me! Thanks in advance!
@d4v1ds (1 year ago)
# cp corosync.conf corosync.conf.bak