Proxmox VE Dedicated Migration Interface

4,378 views

Tech Tutorials - David McKone

Comments: 27
@TechTutorialsDavidMcKone - A year ago
If you want to learn more about Proxmox VE, this series will help you out: kzbin.info/www/bejne/qXm6ioiqZbtgmZo
@uniXlyTV - 11 months ago
This was exactly what I was looking for. Thanks for all your Proxmox videos David. They've been so useful in expanding my Proxmox knowledge beyond the initial basic configuration.
@TechTutorialsDavidMcKone - 11 months ago
Thanks for the feedback. Good to know these videos have been helpful.
@victorsimeonov - A year ago
Thank you!
@TechTutorialsDavidMcKone - A year ago
You're welcome
@michaelcooper5490 - A year ago
Thank you David, I was wondering how to do this. You are awesome sir.
@TechTutorialsDavidMcKone - A year ago
Good to know the video was useful, so thanks for the feedback
@michaelcooper5490 - A year ago
@@TechTutorialsDavidMcKone Sorry to bother you again, I have a situation where I need to move VM hard disks off the NAS they are currently on so I can rebuild it, and then move them back afterwards. I have a 2.5 Gb switch which I am using for the migration network under Options. Will it move them on the same network or will it use the management network?
@TechTutorialsDavidMcKone - A year ago
@@michaelcooper5490 The migration interface is more for hypervisor to hypervisor transfers, e.g. when the hard disk files are stored in local storage.
But when the hard disk files are put on a NAS, they stay where they are when the VM is migrated, and the migration interface will be used for syncing the RAM contents between the two hypervisors.
In this case, if the VM hard disk files need to move from the NAS to another computer, the hypervisor will pull the files over the NIC that connects it to the NAS and then send them over the NIC that connects it to where the files need to be sent. It could be the same NIC, it could be more than one; it really depends on your situation.
Whether this transfer involves the migration or management interface depends on whether they provide connectivity to the source or destination.
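As a minimal sketch of what that kind of disk move looks like from the CLI, assuming a recent PVE release and hypothetical storage IDs "nas-nfs" and "local-lvm" (check your own IDs with pvesm status first):

    # list configured storages to confirm the IDs
    pvesm status

    # move VM 100's first disk from the NAS-backed storage to local storage
    # (newer releases use "qm disk move"; older ones call it "qm move_disk")
    qm disk move 100 scsi0 local-lvm

    # and back again once the NAS has been rebuilt
    qm disk move 100 scsi0 nas-nfs

These transfers ride whichever NICs connect the node to the source and destination storage, as described above.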
@michaelcooper5490 - A year ago
@@TechTutorialsDavidMcKone Got ya, thank you very much. I appreciate it.
@Harry-si5uz - A year ago
Thanks for this, it's confirmed that the problem I have is actually that my interfaces are not set up correctly to start with!
@TechTutorialsDavidMcKone - A year ago
I must admit PVE isn't as obvious as some other hypervisors I've set up when it comes to interfaces. But I do still like it a lot.
@subpixel2234 - A year ago
Great info! Just what I needed. I switched to a 10 GbE interface from a 1 GbE interface and my migration times got cut in half. I'm using Ceph, so just the RAM contents needed to be moved (4 GB). I'm still scratching my head as to why the speed-up was not greater given the 10x bandwidth increase. Using iperf3 I've confirmed the interface is 10 G (Transfer 10.9 GBytes, Bitrate 9.39 Gbits/sec).
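A minimal sketch of comparing one flow against several concurrent flows with iperf3, which often explains this gap (the address is a placeholder for the other node's migration interface):

    # single TCP stream - often limited by one CPU core and congestion control
    iperf3 -c 10.0.40.3

    # four parallel streams - closer to lots of concurrent traffic
    iperf3 -c 10.0.40.3 -P 4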
@TechTutorialsDavidMcKone - A year ago
The problem is that benchmarks don't reflect reality. Applications tend to be a lot slower when transferring files, and you have to go down a rabbit hole to try and find where the bottleneck is.
I would suggest though making sure jumbo frames are enabled on the switch and on the network cards in the same network/VLAN. What that setting should be depends, and you'll have to experiment. I've maxed out the switch to 9216 bytes, but because I have some computers with Intel NICs, all the computers had to be limited to 8996 bytes, as anything higher can be a problem for some Intel NICs.
Increasing the transmit and receive buffers on the network cards can help a bit, as hardware buffers tend to be small.
After that you have to factor in things like disk and disk controller speeds. When I was doing my own testing, uploading a file to a mechanical disk was hardly better than when I was on 1Gb. But when I uploaded the same file to an SSD it was much, much faster.
Monitor the switch interfaces as well, as I once had a port max out during big file transfers. Replacing a DAC with a fibre cable and SFP+ transceivers resolved that for me.
At the end of the day though, 10Gb+ networks are better suited to lots of concurrent traffic flows. When I uploaded several files at once and they weren't too big, the computer receiving them must have been able to cache them, as the throughput was very high for that short duration window. But when I transferred just one very large file it usually maxed out at 2.5Gb/s, with the rate dropping and rising, no doubt due to congestion algorithms kicking in because it was too much for the computer to cope with.
Transfers like that would go faster mind if I was using NFS instead of SMB, which brings me back to how applications can be the problem...
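A rough sketch of those host-side tweaks on Linux, assuming a hypothetical interface name enp3s0 and buffer sizes the NIC may not support (check the ethtool -g output for the real maximums):

    # jumbo frames: 8996 bytes as a safer value for some Intel NICs
    ip link set dev enp3s0 mtu 8996

    # show current and maximum ring buffer sizes, then raise them
    ethtool -g enp3s0
    ethtool -G enp3s0 rx 4096 tx 4096

For the MTU to survive a reboot it would also go into /etc/network/interfaces as an "mtu 8996" line on the interface and its bridge.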
@georgec2932 - A month ago
Hi David. Thanks for this. A couple of questions: one of my three nodes doesn't have a spare NIC - I assume all three would need a dedicated NIC for this to work (so they are on a separate network)? Also, do you know if this would route all replication traffic over the same interface, or is it only for live migrations? I set up replication as per one of your other videos and that's really the traffic I would like to separate from my main network. Cheers.
@TechTutorialsDavidMcKone - A month ago
According to the documentation, you select a network for migration traffic, so each server needs an interface in the same network: "... the network must be specified so that each node has exactly one IP in the respective network"
It doesn't need to be a physical interface mind, even a virtual one will do.
There's a Bandwidth Limit setting which allows you to cap migration traffic, so you could carve up a 10Gb NIC for instance, putting the interfaces into different VLANs and setting an upper bandwidth limit for the different types of traffic.
I'm not seeing anything about what interface the replication traffic uses or how to set a different one, and I'm not seeing a separate bandwidth setting for it either.
I did find this forum post though, and looking at the feedback I suspect replication and migration traffic use the same interface: forum.proxmox.com/threads/force-zfs-replication-traffic-over-separate-nic.56081/
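For reference, that documented setting ends up as a single line in /etc/pve/datacenter.cfg; a sketch with an example subnet:

    # /etc/pve/datacenter.cfg
    # send migration traffic over a dedicated subnet, tunnelled over SSH
    migration: secure,network=10.1.2.0/24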
@georgec2932 - A month ago
@@TechTutorialsDavidMcKone Thank you, using a virtual interface on my node without a third NIC is a good idea. I can use a spare physical interface on my other two nodes. Unfortunately my home network is limited to 1 GbE for now. I currently have only one physical interface being used for Proxmox on each node (they also have a separate physical NIC for WAN because I've recently virtualised pfSense, combined with moving all of my VMs/LXCs from a single node to a cluster - followed your other video!). The replications happen pretty fast but I have noticed occasional (very) minor performance issues and I assume it's because replication is saturating the link. Thanks again, your videos are excellent.
@TechTutorialsDavidMcKone - A month ago
@@georgec2932 After the initial replication you get deltas, so there should be less traffic.
Like most file transfers though, I wouldn't be surprised if it didn't try and grab as much bandwidth as possible.
If you go to Datacenter | Options there's a Bandwidth Limits setting that should let you restrict the traffic.
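A sketch of what that setting writes to /etc/pve/datacenter.cfg; values are in KiB/s, so the hypothetical numbers below cap migration at roughly 100 MB/s:

    # /etc/pve/datacenter.cfg
    bwlimit: migration=100000,default=200000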
@adampoconnor - A month ago
I know this is an older video, but I'm hoping you can offer some wisdom. My setup has 3 interfaces per server. I'm renaming the physical ports from what they really are just for simplicity:
eno1 is my management 1GbE, with vmbr0 assigned by default
eno2 is my VM network 10GbE, with my own created vmbr1
eno3 is another 1GbE that is on its own switch with the other node(s) for cluster traffic, bridge named cluster1
The issue I am having is that if I assign an IP that is out-of-subnet with vmbr0 to vmbr1, in an attempt to allow migration traffic over it, the GUI immediately becomes inaccessible on both vmbr0 and vmbr1. vmbr0 is 192.168.1.2/24 and vmbr1 is 192.168.10.2/24. They are both being routed by the same router, so I suspect that may be part of my issue. How would you get around this?
@TechTutorialsDavidMcKone - A month ago
You have to be really careful with routing. Typically a server should have only one interface that's assigned a default gateway. It depends, but it might be the management interface for instance, and so it's used as the one to access the Internet for server updates as well.
The other interfaces should have just an IP address, as they are meant to be isolated subnets. So migration traffic for instance stays on one interface, and all servers in the cluster need an interface in that subnet for direct connectivity.
None of those subnets should be reachable via a router or firewall, as that results in asymmetric traffic and things can fall apart; more so when it's a firewall, but even a router can sometimes cause problems.
For remote access, the servers should only be targeted by the IP address on the interface with a default gateway.
In corner cases, you have to create static routes on the server. Let's say a computer in a remote subnet 10.1.1.0/24 is trying to reach the server on 10.2.2.20/24. The router has links in both subnets and so it can route between them. In which case, the server needs a static route for 10.1.1.0/24 pointing to the router at 10.2.2.1. For other traffic, the server will still use the default gateway and the interface that's configured with one.
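A minimal sketch of that corner case in a Debian-style /etc/network/interfaces, using the addresses from the example above (the interface name eno2 is a placeholder):

    auto eno2
    iface eno2 inet static
            address 10.2.2.20/24
            # no gateway line here - only one interface should carry the default gateway
            # return traffic for the remote subnet goes back via the same router
            post-up ip route add 10.1.1.0/24 via 10.2.2.1 dev eno2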
@hpsfresh - A month ago
Does it fall back to the main network if the migration network is not ready?
@TechTutorialsDavidMcKone - A month ago
I haven't seen anything in the documentation to suggest that it would
@hpsfresh - A month ago
@ Me neither :) That's why I am asking. If you have this kind of installation (with a dedicated migration network) can you please test it?
@TechTutorialsDavidMcKone - A month ago
@@hpsfresh Well, a server should really have two NICs bonded together for this, so it would have its redundancy that way.
A more typical solution I've seen is to bond two high speed NICs for all traffic and use VLANs. PVE has a setting for migration bandwidth to go with that, to avoid overloading the NIC. Typically though, migrations tend to be restricted to out of hours anyway.
Mind you, some servers I've seen "break" a NIC into virtual NICs so that the OS thinks it has multiple NICs. Again, bandwidth limits are then imposed to avoid network overload.
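As a sketch of that bonded approach in /etc/network/interfaces, with placeholder interface names, VLAN ID and addresses (802.3ad needs LACP configured on the switch):

    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.2/24
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-vlan-aware yes
            bridge-vids 2-4094

    # VLAN 40 on top of the bridge as the migration network - note, no gateway
    auto vmbr0.40
    iface vmbr0.40 inet static
            address 10.0.40.2/24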
@dr.michaelhermes5218 - 7 months ago
Hello David, thanks for your great videos! In my case this does not work. Depending on the node, the network address in the Migration settings differs. Trying to migrate I get the error message: "could not get migration ip: multiple, different IP addresses configured for network '10.XX.YY.ZZ/16'". Greetings, Micha
@TechTutorialsDavidMcKone - 7 months ago
Normally computers don't allow multiple interfaces in the same subnet, but that error suggests you might have that.
It's unusual to assign IP addresses belonging to a /16 network as it's too large; typically it would be broken down into /24 subnets for instance. I'm wondering if a server has a NIC with an IP address and a /16 mask in error. If so, that would overlap with a lot of other subnets and lead to confusion.
I suggest you check to make sure all of the servers in the cluster have a network interface in the same subnet and that these are unique before you try to assign a migration network.
You won't want a mix or overlap of subnets; for instance, one server with an IP of 10.1.1.127/24 and another with an IP of 10.1.1.130/25. From the first server's perspective, the second server is in the same subnet, but the second server will try to connect using its default gateway as the subnets are different. What you want is all servers with an address in the same subnet.
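A quick way to sanity-check that on each node with plain iproute2 (the subnet below is an example):

    # one line per interface: state, address and mask
    ip -br addr

    # the migration subnet should appear exactly once per node,
    # with the same mask on every server
    ip route show | grep '10.1.1.'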
@dr.michaelhermes5218 - 7 months ago
@@TechTutorialsDavidMcKone Hello David, you are right - I found my mistake - two devices in one subnet... Because of other errors I had to change my firewall. On that occasion I reinstalled the Proxmox cluster and changed from 192.168.x.x addresses with /24 subnets to 10.x.x.x addresses with /16 subnets and VLANs for clearer organisation. I used different addresses in the same subnet for different LAN ports. A Ceph installation error message I understood... ;) As you suggested, I changed this device back to /24 subnets and now it works. I'm not sure, but it seems that VLANs don't work everywhere, and I'm searching for a way to implement a trunk interface in SDN... Thank you very much. Sincerely, Micha