Half your videos are things I've thought about doing but never put in the effort to try. This is no exception. Keep it up! I imagine a not-too-distant-future video will involve inter-VM traffic passing through these links.
@apalrdsadventures a year ago
I'm working on the script for a video on migration and Ceph networks, and one on Proxmox SDN (which includes VM-traffic over a setup like this). So it's coming eventually.
@andrewmabbett a month ago
@@apalrdsadventures Hi, did you ever manage to get a script for this setup? Any info gratefully received. Awesome video, thanks, it helped a lot.
@apalrdsadventures a month ago
I did make an SDN video, although I didn't include VXLAN due to bugs in ifupdown2. I haven't finished another Ceph video.
@Darkk6969 a year ago
Awesome video! Good use of OSPF for fault tolerance. :) Also, brilliant naming for your internal IPv6 fd69:beef:cafe::555. I know we're allowed to use A through F for the address but that is simply a genius way of making use of those limited letters.
@apalrdsadventures a year ago
The ULA range (fd00::/8) is supposed to be followed by 40 random bits, with the intent to avoid the issues with everyone using 10/8 and stepping on each other when networks merge or you VPN across them. In reality, for isolated networks, it's fine to do whatever you want.
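Side note for anyone generating their own prefix: a quick sketch (not from the video) of producing the 40 random bits for a ULA /48:

```
# One way to generate the 40 random bits for a ULA /48 prefix
GLOBAL_ID=$(openssl rand -hex 5)    # 5 bytes = 40 bits, e.g. "3a94c1d207"
echo "fd${GLOBAL_ID:0:2}:${GLOBAL_ID:2:4}:${GLOBAL_ID:6:4}::/48"
```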
@KeithWeston a year ago
Thanks! Consistently the best information presented clearly and thoroughly.
@apalrdsadventures a year ago
Glad it was helpful!
@GeoffSeeley a year ago
This was great! Learned something new today even after 25+ years in IT.
@sofmeright a month ago
Thank you so much for your video! I really appreciate it! You helped me find the motivation, and showed that it was possible to have a functioning and stable Ceph cluster with my pfSense still virtualized in between it all! The config format of OSPF that you can verify through vtysh has changed a little now, I think, but this gave me a lot of context. The process of actually getting everything deployed took me a couple of days because I also wanted to rename my hosts, but I found the easiest way was making fresh installs and migrating my VMs back in from backup! It's been a wild ride! Thanks for your channel; I've been watching a ton of your videos and I don't even know how I only just now subbed!
@wayneeseguin a month ago
Love the quality of your explanations as you go.
@yas-k2j 5 months ago
You always have great step-by-step videos. Thank you.
@apalrdsadventures 5 months ago
Thanks a bunch!
@tech-trials-and-tribulations 10 hours ago
This is great. Will try it sometime. Another option is to use BGP and BFD instead of OSPF, although arguably OSPF is a bit quicker to set up and more suited for this particular application.
@geesharp6637 9 months ago
Man, it's been a while since I configured OSPF on network equipment. Brings back memories.
@apalrdsadventures 9 months ago
It's still a great protocol for many deployments of this scale
@randyandy-o8g 5 months ago
Man, I got a bunch of NICs coming in the mail to set this up, but I'm also a little bit stupid. Every time I google this, people are just like "yeah, you just connect your nodes directly", which... like... duh. So having someone ACTUALLY walk through the process was so helpful.
@yankee-in-london a year ago
Next level. Love your videos; they really help push me beyond the basics.
@berniemeowmeow a year ago
Great video! Very cool. Appreciate you going deeper on these topics. Love learning new stuff.
@apalrdsadventures a year ago
Glad you enjoyed it!
@Ownermode 3 months ago
Great video, this was just what I was looking for. I was thinking about modifying the routing table by hand. Totally forgot about OSPF!
@swiftlabbuildstuff 7 months ago
I found this video excellent. I usually learn by example, but I was hesitant to give the "ring network" a try since all the examples I found were 3-node ring networks. This video gave me the confidence that I could make this work with a 5-node Proxmox cluster. I already have dual 10G LACP-bonded physical switched networking per node, which is plenty fast. After configuring the dual Thunderbolt on each node in a ring, I found it pretty easy. Even with a node (and its links) down, any other node is no more than a few hops away. Maybe still faster than the 10G LACP bond; need to test though. I love that you show how to run the iperf, traceroute and nload commands for checking the connectivity. Very easy to follow. My next challenge is that this Proxmox cluster already hosts a Ceph cluster using IPv4 addressing on the 10G bond, but now I want to move the Ceph backend network over to the ring network. From what I can find, Ceph can't run dual stack - it needs to be all IPv4 or all IPv6. I'll be looking forward to your blog post/video on Proxmox Ceph running on the ring network. Once again, thanks for an excellent video!
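For anyone following along, the checks mentioned above look roughly like this (a sketch; the addresses are the example ones from the video, `en05` is a placeholder interface name, and iperf3 is shown rather than the older iperf):

```
iperf3 -s                          # on the destination node
iperf3 -c fd69:beef:cafe::552      # from another node; traffic rides the ring
traceroute -6 fd69:beef:cafe::553  # confirm how many hops the path takes
nload en05                         # watch live throughput on one ring interface
```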
@curtalfrey1636 a year ago
Thanks!
@ExillNetworks a year ago
Awesome work! Fantastic video! At 5:23, I didn't know that! I've been playing with Linux for years, but I didn't know this! Thank you so much!!
@apalrdsadventures a year ago
I should have been more specific, it's only a feature when forwarding is enabled via sysctl (it's disabled by default, but enabled by FRR).
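For reference, a quick way to check/set that (a sketch; as noted above, FRR normally enables forwarding for you):

```
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding   # check current state
sysctl -w net.ipv4.ip_forward=1                           # enable IPv4 forwarding
sysctl -w net.ipv6.conf.all.forwarding=1                  # enable IPv6 forwarding
```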
@LampJustin a year ago
Awesome one! This is exactly the setup I planned for my homelab and my 40G adapters. The only difference is that I'd go with BGP (unnumbered) instead of OSPF :) Btw, if you use vtysh you can build the config like with Cisco. It's so much nicer with tab-completion and the occasional "?" for help. To save the config, use `write`. The changes will be applied instantly, so keep that in mind.
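For anyone curious, a minimal vtysh session might look like this (the prompt/hostname and interface name are hypothetical; the commands are standard FRR, with recent versions assigning the OSPFv3 area per interface):

```
# vtysh
pve1# configure terminal
pve1(config)# interface en05
pve1(config-if)# ipv6 ospf6 area 0
pve1(config-if)# exit
pve1(config)# router ospf6
pve1(config-ospf6)# ospf6 router-id 0.0.0.1
pve1(config-ospf6)# end
pve1# write memory
pve1# show ipv6 ospf6 neighbor
```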
@apalrdsadventures a year ago
My background is with Linux and not Cisco, so the config-file option is most familiar to me
@LampJustin a year ago
@@apalrdsadventures Fair. But you don't really need any prior knowledge to use the CLI, and learning some of the Cisco CLI is never bad for the job. Most switches copy the Cisco style or just use FRR, so it doesn't hurt to get a bit familiar, especially for debugging. But still, you did an excellent job! ^^
@proxymoxylinks a year ago
Great content, instant sub :)
@jaykavathe a year ago
Can you please make another video on moving an existing Ceph network onto the ring network you just created? It would be a very helpful way to understand Ceph network configuration.
@simo47768 4 months ago
Most videos are basic. This is advanced. Awesome.
@amosgiture a year ago
Quite impressive, kudos!
@westlydurkee6230 a year ago
A video series on networking, like setting up a local DNS, LDAP, and Samba Active Directory server, would be great. I really like the way you explain things; keep up the good work!
@apalrdsadventures a year ago
Those are all on my list, so I'll get to them eventually!
@MelroyvandenBerg 5 months ago
This is great! Thanks for sharing!
@apalrdsadventures 5 months ago
Glad you enjoyed it!
@MelroyvandenBerg 5 months ago
@@apalrdsadventures Thanks for replying! This can also be used for Ceph, right? Or is this redundant?
@juliansbrickcity5083 a year ago
Now I want to redo my old 3-node Mini Micro cluster and set this up for myself :)
@pauliussutkus526 a year ago
Ačiū! (Thanks!)
@apalrdsadventures a year ago
You're welcome!
@BrianThomas a year ago
That was amazing. Thank you 💕
@apalrdsadventures a year ago
Glad you liked it!
@TheOnlyEpsilonAlpha a year ago
Okay, I came to this video because of something else: my Proxmox instance (a test one) has the issue that I can't ping anything outside the network. But I got stuck on the video because I noticed: FINALLY someone with network expertise. The "beef:cafe" IPv6 is freaking funny, and it's interesting to have failover routes if the major route fails. But my initial issue is still there... and I see YouTube recommends me your video "Proxmox Networking: VLANs, Bridges, and Bonds", which looks more like what I'm searching for!
@RobertoRubio-ij3ms 9 months ago
Each time I watch one of your videos, my entire datacenter gets an overhaul. Amazing content. Do you do consulting?
@Felix-ve9hs a year ago
Now I know why loopback addresses are used with OSPF and BGP ^^
@postnick a year ago
I got some 2.5 gigabit cards. So far I just direct-connect between Proxmox and TrueNAS for slightly faster backups and NFS.
@pedroporrasmedina a year ago
Really nice video, thanks! IPv6 is a big challenge to embrace now, so I need practice with this, and you've given me some ideas to play with on Proxmox. I configured OSPF in pfSense, but it is way easier on the Proxmox servers.
@enrica6616 3 months ago
Thank you for all your interesting videos. Why do you use OSPF? The official Proxmox docs use OpenFabric from FRR. What is the difference and advantage of each?
@apalrdsadventures 3 months ago
OpenFabric was never a standard - it's a draft proposal by a single guy, and he also wrote the FRR implementation. It's basically a different SPF flooding algorithm for IS-IS which has advantages in Clos fabric topologies (this is not a Clos fabric topology). IS-IS itself is also ... quirky ... as a result of being an OSI protocol and not an IP protocol. So I don't want to deal with the quirks of IS-IS and OSI addressing, don't need multiprotocol support, and OSPF is easier to implement and to integrate with other equipment which you might have in your network to expand the setup.
@rbartsch a year ago
Great video for OSI layer 3! 😀 How is the performance compared with OSI layer 2 switching (all network devices in the vmbr0 bridge with Spanning Tree Protocol enabled)?
@apalrdsadventures a year ago
For the 3-node setup, spanning tree would disable one link, so all traffic would flow over only 2 out of 3 links, and traffic between those two nodes would have an extra hop / more load on the middle node to do packet forwarding / more packets on the links which stay up. In a larger setup, some number of links would be broken due to spanning tree, there is no guarantee that the routes are optimal, and packets could potentially take a much worse path through the system, but it depends on the physical topology and which links get disabled due to STP. So when you get to more complicated systems, going to L3 is really required so that loops can intentionally be created for redundancy and load balancing.
@GJSBRT a year ago
Could you talk about software-defined networking (SDN) in Proxmox? I'm currently figuring out EVPN-VXLAN.
@apalrdsadventures a year ago
There are some quirks and interactions between SDN (especially BGP EVPN) and the ring setup, since both will try to write / edit frr.conf and step on each other. Using regular unicast VXLAN shouldn't interfere with the frr config, but the Cumulus Networks ifupdown2 that Proxmox uses has problems with IPv6 peers due to an oversight in their input validation, which still hasn't been fixed because Cumulus no longer exists as a company to develop it. So I'm working through all of those issues before making a video on VXLAN.
@GJSBRT a year ago
@@apalrdsadventures Thanks for the info! Can't wait for the video :)
@ebaystars a year ago
Thanks, you answered the only question I had at the tail end, re IPv4 :-)
@dn4419 a year ago
That was awesome. I've been thinking about looking into both OSPF and IPv6, but never really found a great way to do so. Do you by any chance have a video on how your megalab node works, or plan on doing one? Seems like such a nice playground for testing out stuff.
@apalrdsadventures a year ago
It's really just a single Proxmox system that I use to its fullest
@dn4419 a year ago
@@apalrdsadventures I thought so, and maybe it's not that interesting, but if you ever plan on doing a guide on how to build such a virtual lab (for instance how you implemented having multiple virtual NICs), I'd personally find it very interesting. I'll definitely start looking for options once I've got some proper hardware to run such a server. Currently I'm running my "production homelab" on 3 nodes with Ceph, where I don't want to run such extensive experiments. I have to say Proxmox is running so smoothly (even upgrading to 8.0 was a breeze last week) and your videos have helped me tremendously so far. So thanks again and I hope you keep it up. Definitely one of the best Proxmox channels on YouTube!
@M9OCD 7 months ago
Great video and well explained, dude! I've got all the nodes pinging over the ring network, so I'm well happy, but how do we get Ceph to use it in Proxmox, given we can't run dual stack and the ring network isn't visible in the GUI? (Next vid?)
@frandrumming 10 months ago
You mad lad, using IPv6... jk, your videos are great!
@bravestbullfighter a year ago
Thanks for the video. I'm particularly interested in knowing what happens to throughput in a Thunderbolt 4 ring network across multiple points. Does Thunderbolt 4 have some sort of zero overhead copy/forwarding or is throughput diminished as the number of nodes in a ring increases and by how much?
@apalrdsadventures a year ago
Nothing special about Thunderbolt networking vs normal Ethernet, packets still flow through netfilter in Linux. So it's similar to a software router: it's quite good at packet forwarding but it will use some CPU. It's more a function of Gbit/s going through each node rather than the number of nodes.
@adamtoth9114 a year ago
This is an awesome guide, I'll use it to set up a 3-node Proxmox + Ceph cluster. I have dual 10G SFP+, dual 40G QSFP and quad 1G interfaces in each node. My plan is to use the dual 10Gs lagged for vmbr0 and the dual 40G for the Ceph ring. I'm considering setting up two more rings, one for corosync and one more for the Ceph cluster network. The fallback would be the dual 10G vmbr0. My questions are:
- Is this a totally dumb idea? I guess it would be ideal to have separate ring networks for the different cluster communications.
- How can I set up the 2 other rings with FRR?
- Which should be the private and which the public network for Ceph? It's not clear to me which needs the high-speed connection.
@npradeeptha a year ago
This is great and what I actually need. However, I don't have the same IPv6 setup. Would it be feasible for nodes to communicate with IPv6 but have the public network be IPv4? Or does that not make sense? I am very interested in learning the IPv4 way of doing this.
@apalrdsadventures a year ago
You can mix ipv4 and ipv6 subnets in Proxmox. Since the ptp links don’t have manually assigned IPs in this example (just fe80 link local), they can’t pass ipv4 traffic, but having an ipv4 public network and ipv6 cluster is fine as long as all of the software on the cluster network supports ipv6. In general that’s just Proxmox itself and ceph, so it’s fine unless you want to carry vm traffic.
@apalrdsadventures a year ago
As to the IPv4 way, you'd need addresses on all interfaces and a small subnet (/30) on each ptp link, all unique. Then use OSPFv2 instead of OSPFv3, so `ip ospf` instead of `ipv6 ospf6` in FRR.
@npradeeptha a year ago
I'll have to try this out. So on top of adding the node address to the loopback interface I'd have to assign a unique address to each of the ipv4 interfaces?
@npradeeptha a year ago
@@apalrdsadventures I would definitely want VM traffic to carry. A use case for that would be direct access to an NFS volume on a NAS in one node from another.
@apalrdsadventures a year ago
If the node itself is doing NFS, that shouldn't be a problem (the Proxmox nodes route across the cluster network). Since it's an L3 network instead of L2, we can't just bridge the VMs directly to it and expect them to route properly, but we can use vxlan to tunnel VM network traffic across our cluster network.
@jgren4048 3 months ago
Hate to ask, but do you know if it's possible to install FRR on FreeNAS 11.2? It's a different animal for sure, but I have had a problem with finding the right files on GitHub, given the only set of instructions I found on the net was a video, not text instructions. I know just enough to destroy my whole setup, and I'm getting pretty good at re-installing Proxmox on new hard drives. Actually, I've been using Linux since Vista, but coding isn't my strong point.
@apalrdsadventures 3 months ago
FreeNAS (and TrueNAS CORE) is a bit of a special thing because they don't retain the root filesystem. There is a single sqlite db which is stored in a few places (the 'system' dataset) which contains the config from the web UI. On upgrade, FreeNAS will download a complete new copy of the root filesystem onto a new zfs dataset, then generate the conf files for each underlying service (networking, samba, ...). So you really cannot install anything on the OS, since FreeNAS will wipe it out for you. TrueNAS SCALE is a little bit different in their upgrade process but they still actively try to make it difficult to go outside of their UI and the UI will intentionally reset a lot of config file and networking changes you would make. On a normal operating system which isn't intentionally hostile to modification, it would not be hard to run FRR on FreeBSD or Linux.
@CanisLupusRC a year ago
How are you able to achieve a point-to-point connection between the virtual pve instances? I tried it using OVS-bridges, but could not get OSPF to work at all. How did you set up your virtual pve nodes for this to work? Thanks in advance.
@apalrdsadventures a year ago
On the 'host' hypervisor? I have a Linux bridge with no hardware assigned, and I assign a different VLAN ID to each point-to-point bridge. In general you shouldn't need to use OVS networking with modern kernel features (bridges can now be vlan-aware, etc.)
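A sketch of what that can look like in the host hypervisor's /etc/network/interfaces (the bridge name and VLAN IDs here are made up for illustration, not from the video):

```
auto vmbr9
iface vmbr9 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 101-103
# Then give each nested node a pair of virtual NICs on vmbr9, tagging each
# point-to-point "cable" with its own VLAN (e.g. 101, 102, 103 around the ring).
```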
@olokelo 9 months ago
Thank you for the video! I successfully configured OSPF and I have a link between my nodes. However, this ring network isn't visible in Proxmox when creating a cluster. How can I get it to display in the GUI? I have only vmbr0 as of now.
@apalrdsadventures 9 months ago
You have to use the console version instead of the GUI version, and type out a subnet that encapsulates all of the addresses (i.e. /64 instead of /128).
@GrishTech a year ago
This makes a hyperconverged 3-node setup awesome. No switch needed, although a switch would still be preferable.
@apalrdsadventures a year ago
You still need a switch on the 1G backup network, but not in the high bandwidth path
@GrishTech a year ago
@@apalrdsadventures Yes, that's right. I meant that if you have a hyperconverged cluster using Ceph, the replication network can be the point-to-point one.
@apalrdsadventures a year ago
Yes, you can also use the ring net for ZFS replication and live migration traffic
@RODRIGOLUZURIAGA a year ago
This video is awesome, thanks! Do you know if this setup would work for a Ceph cluster in Proxmox? I have three servers, all with dual QSFP+ 40Gb network cards. I want to direct-connect them (so that I don't have to buy a switch). I am unsure if I need to do any other setup than what you have done in this video.
@apalrdsadventures a year ago
Basically you just have to run `pveceph init` from the command line, and set the Ceph network (public and private) to a /64 subnet which contains all of the /128 loopback addresses (fd69:beef:cafe::/64 in my example). Then you can install Ceph as you normally would. The local nodes will find their address which falls within the subnet, and will use the ring network. VM traffic is a bit more complex, but Ceph is easy.
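A sketch of that step (the --network/--cluster-network options are pveceph init's documented flags; the subnet is the example one from the video):

```
# Run on one node, before creating monitors/OSDs from the GUI or pveceph
pveceph init --network fd69:beef:cafe::/64 --cluster-network fd69:beef:cafe::/64
```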
@michaelcarinhas6445 a year ago
Learning a lot from your videos, thank you!! I am building a small Proxmox home-lab cluster with 3 nodes. Each node has 2 interfaces, one 1GbE and the other 2.5GbE (wish I had three). Any suggestions on how to network this setup? I would like to have the 1GbE interface on each of the nodes for cluster management (192.168.1.x) and then the 2.5GbE on each of the nodes for Ceph storage (192.168.2.x). I have a limited Vodafone router from the ISP which connects to my 8-port Cisco Catalyst 1000 switch.
@apalrdsadventures a year ago
You should be able to get a small unmanaged 2.5G switch for the cluster network, it doesn't need to connect to a router or anything
@bachlap7969 a year ago
Thank you for your very informative video. I have a rather noob question: how can I pass the connection to the VMs and connect them between nodes? I have tried in the last few days with configurations in Proxmox SDN. It seems that SDN doesn't support IPv6 very well at the moment, so I had to resort to using OSPF with IPv4 and tried to use the lo interface as a peer, but without any results. Cheers
@apalrdsadventures a year ago
Oh wow, it looks like ifupdown2 development ground to a halt when Nvidia acquired Cumulus Networks (who developed ifupdown2), so that's why support is lacking. In addition to that issue, ifupdown2 removes the extra lo addresses that FRR added when reloading interfaces to apply SDN changes.
@bachlap7969 a year ago
@@apalrdsadventures Yeah, I noticed that issue too, so I have to copy frr.conf to Notepad and paste it back every time I change the SDN config in Proxmox and restart it, so that I can get the IPs back. I tried to follow the examples in the SDN documentation but got stuck at the step where they say to add a vNIC to the virtual machine. If I skip that and add the VNet directly through the Proxmox GUI, then the VM can't communicate with the VM on the other node.
@apalrdsadventures a year ago
Doing unicast VXLAN instead of BGP EVPN (VXLAN) means SDN shouldn't touch the FRR config, and VXLAN without BGP EVPN is scalable enough for small to medium sized networks. So at least frr.conf isn't touched.
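Not from the video, but a rough idea of plain unicast VXLAN with iproute2, sourced from the example loopback addresses (device names are made up; run the mirror-image commands on each node):

```
# On pve1 (loopback fd69:beef:cafe::551)
ip link add vxlan100 type vxlan id 100 dstport 4789 local fd69:beef:cafe::551
ip link add vmbr100 type bridge
ip link set vxlan100 master vmbr100
ip link set vxlan100 up
ip link set vmbr100 up
# Head-end replication: flood BUM traffic to the other nodes' loopbacks
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst fd69:beef:cafe::552
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst fd69:beef:cafe::553
```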
@ahovda a year ago
Proxmox SDN has the BGP-EVPN mode which could be used to establish routed underlay/overlay networking similar to what you showed in this video. Do you still recommend OSPF for the underlay (I guess you'll use VXLAN overlay across the lo addresses)?
@apalrdsadventures a year ago
BGP-EVPN is a bit of a different use case. OSPF is used to route the cluster network at layer 3. If the cluster network is not a layer 2 domain, a routing protocol is required to route all of the lo addresses. With the routed cluster network, Proxmox itself can now use the lo addresses without anything else for all of its business (migration, Ceph, backups). This does not extend to user-plane VM traffic.

VXLAN is used to pass user-plane data across any (routed or not) layer 3 network and create multiple layer 2 tunnel(s) for user traffic. Like a normal layer 2 network it works by flooding packets across the network to discover the MAC addresses at each port, but since VXLAN itself is unicast this can lead to packet multiplication across the network, which limits scalability. VXLAN (alone, not with BGP) could easily be added to this setup to pass user-plane traffic over the OSPF-routed cluster network.

BGP EVPN solves the scalability problem of regular VXLAN by adding routing to the VXLAN tunnel, using BGP's multiprotocol abilities to route MAC addresses within the tunnel and improve MAC learning. So it's forming MAC tunnels but routing MACs using BGP for efficiency, with the appearance of L2 for the benefit of the VMs using them to pass data. BGP still requires that every node can connect with every other node via its lo address, so we still need a protocol to route those, or they need to be on-link on the L2 domain.

So this setup is to route the cluster traffic, not user-plane traffic, and BGP-EVPN (and VXLAN) is for user-plane traffic. They are not mutually exclusive.
@martijnm4472 7 months ago
I have set this up network-wise, but I have no cluster yet. When I try to create one, it confuses me which NIC to choose for the cluster network. I have my internal NIC connected with IPv4 and IPv6 (the outside of vmbr0). Should the cluster run on lo? Or, said otherwise, why do I not see the fd69:beef:cafe:: network in the cluster config?
@apalrdsadventures 7 months ago
When you create the Proxmox cluster, choose the other interface and let it set itself up. Then, add a second 'ring' network manually in corosync.conf with the addresses of each node, so it will use either one for corosync. pve.proxmox.com/wiki/Separate_Cluster_Network#Redundant_Ring_Protocol has a guide on this; you can use the fd69 IPs in ring1_addr. For Proxmox migration / replication, there's an option in /etc/pve/datacenter.cfg to force a specific subnet for migration: `migration: secure,network=fd69:beef:cafe::/64`. For Ceph, use `pveceph init` and specify the subnet there (instead of the Proxmox Ceph configuration GUI).
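Putting those two pieces into a sketch (the node name and the ring0 address are hypothetical; the fd69 addresses are the video's examples, and remember to bump config_version when editing corosync.conf):

```
# /etc/pve/corosync.conf - nodelist excerpt, one node shown
node {
  name: pve1
  nodeid: 1
  quorum_votes: 1
  ring0_addr: 192.0.2.11
  ring1_addr: fd69:beef:cafe::551
}

# /etc/pve/datacenter.cfg - pin migration traffic to the ring subnet
migration: secure,network=fd69:beef:cafe::/64
```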
@moatasemtaha3019 a year ago
Thanks mate for the great video. I'm trying to set up a point-to-point network on a 3-node Proxmox cluster to use for Ceph storage. The issue I'm having after following the steps is that my routing table doesn't show any routing entries, only the dev list... any idea why? When I try to ping any other node, I get "Network is unreachable".
@apalrdsadventures a year ago
Is FRR up and configured for the right interfaces? Individual interfaces up (even though no IPs are configured other than IPv6 link-locals)? Can you ping across the link-locals?
@slobma7973 7 months ago
I also cannot ping the link-locals. I followed the instructions and replicated the environment: I have the same three nested PVE nodes in a cluster with the same three NICs, even the names are the same. The only difference is that vmbr0 is IPv4. Running `ping fd69:beef:cafe::552` on pve-lab-01 gives `ping: connect: Network is unreachable`. Please, someone help me! I'm going crazy here! (Yes, FRR is up and the right interfaces are all up, but no ping across the link-locals :(
@phiwatec2576 a year ago
I can't figure this one out: I have practically the same setup as you, but I get no routes. Looking in the frr.log I see 'interface_up: Not scheduleing Hello for enp0s8 as there is no area assigned yet' even though they have areas assigned in the config file. Do you know why this might be happening? Google didn't bring up anything related to FRR.
@apalrdsadventures a year ago
Is the area assigned for ospf or ospf6?
@popeter a year ago
So does this work for rerouting traffic if a public link goes down? For reference, my current setup is 2 NUCs that each have 2 1G links to my switch that carry VLANs. That's not the best, as one of the 2 links on each is USB. Would this let me get redundancy via a Thunderbolt link across them, so I can have one 1G uplink on each and one TB crosslink?
@apalrdsadventures a year ago
In general, no, since the public network isn't participating in the route exchange. However, if you just need to handle traffic between the two nodes, that can be done via the direct link (including VXLAN tunneling for VM traffic). In your example, with two nodes (X and Y) connected via Thunderbolt using OSPF, if X loses its connection to the public network, a few issues will all happen at once which will cause routing to break:
- X's IP is attached to a network interface which is down, so that address does not exist in the system (hence putting addresses on loopback for fully routed networks)
- X can route to the public network via Y (assuming it has an address), and Y can use its routing table to send to the final destination (the default router or the on-link hosts via ARP / NDP), so packets can go in one direction
- The public network has no knowledge of this arrangement, so it will be unable to find X on-link (via ARP / NDP) and won't be able to return packets to X
Depending on what switches you have, another option is to bridge the networks and rely on spanning tree to disable one of the links, but this will leave one of the three links disabled at any given time (the dual 1G and the thunderbolt), and spanning tree isn't smart enough to do it based on an optimal routing algorithm as it's just designed to break loops into a tree.
@vdarkobar a year ago
Hello, if someone could answer one question: the PVE node already has an IPv6 address on vmbr0, so the address that needs to be added to the lo interface is not the same address but a different one? This part is a little confusing to me...
@apalrdsadventures a year ago
The addresses are on two different subnets. The vmbr0 address is what we use for the web UI and to communicate outside of the cluster. The lo address is what is used across the ring net but is not accessible from anywhere else.
@vdarkobar a year ago
@@apalrdsadventures Hi! Thanks for your answer! If I can ask another one, how hard would it be to make an IPv4 variant of the setup? Is there anything I should be aware of? Thanks 🙏
@apalrdsadventures a year ago
To use v4, you'd need to assign addresses out of a unique /30 subnet on each point-to-point pair (in v6 you can use the link-locals); other than that, the commands are fairly similar (`ip ospf` instead of `ipv6 ospf6`).
@robinxiao9190 10 months ago
I followed your video and got it working with a 2-node SFP+ P2P link. You made it very straightforward and very clear, and I was able to verify 10Gb speed with iperf over IPv6. But I found that when migrating a VM it still goes through my GbE switch over IPv4; I never touched IPv6 before this. So I tested iperf on the same nodes with both v4 and v6: IPv4 routes via GbE, IPv6 routes via 10GbE. Is there some other setting I missed?
@apalrdsadventures 10 months ago
There's a migration setting to force a subnet to use if it isn't picking the right one, in /etc/pve/datacenter.cfg: migration: secure,network=fd69:beef:cafe::/64
@robinxiao9190 10 months ago
@@apalrdsadventures Thanks for the feedback. My gut feeling is that it's an issue with routing rather than with overriding the config; the 10GbE link is up (confirmed with iperf), but neither replication nor PBS goes through it. This is all new to me. I have dug around and found 2 suspects. My loopback has "noprefixroute", which isn't in your video; it's in the loopback line "inet6 ::1/128 scope host noprefixroute". I learned it means no automatic route, but didn't find how to get rid of it. Then I have my GbE on DHCP (all values blank on that NIC in PVE) with vmbr0 manually set to the same IP, and this seems like the only way the PVE GUI lets you configure it. In my search I came across a post from Nov 2023 saying there's a bug with static IPv4 + dynamic IPv6. We have the opposite following your guide; not sure if this is related here.
@apalrdsadventures 10 months ago
Adding a replication network is a perfectly normal thing to do. As for PBS, you can specify the IPv6 of the PBS server in the storage config and it will use it as well. In my case, I only set up IPv6 on the test system, so the only options were v6 over the public network or v6 over the ring network.
@sebastiendeliedekerke5251 a year ago
"Or maybe you want to be the crazy guy who uses Intel NUCs with Thunderbolt between them."... Yes, that's exactly my case 🙂. With NUC 11 & NUC 12 now featuring dual Thunderbolt 4 ports, I could very much see myself not investing in expensive 10 Gig NICs or adapters and using straight 40 Gig Thunderbolt networking between 2-3 nodes. My only question would be: how do you get Proxmox to recognize the Thunderbolt ports as full-fledged network interfaces? Any practical guidance on config steps for this would be highly appreciated... Keep up the super work!
@apalrdsadventures a year ago
It doesn't need to know about them, but you'll have to do a little config in /etc/network/interfaces on your own. Basically, just add an `auto yyy` and `iface yyy inet6 manual` line for each one, the interfaces will come up with an IPv6 link-local, and you can add them to the FRR config. OSPF will figure the topology out, you don't need to have specific ports in specific places (at least with IPv6). Proxmox itself just needs to know to use the loopback address, which it also won't be aware of in the GUI, so you'll need to set the replication / migration network and Ceph network through the command line as well, but once that's done it will use it for any gui commands that rely on the storage / datacenter / ceph configs.
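A sketch of those two files (en05/en06 stand in for however your Thunderbolt interfaces actually enumerate; the OSPF stanzas mirror the video's approach, with router-id etc. configured as shown there):

```
# /etc/network/interfaces (excerpt)
auto en05
iface en05 inet6 manual

auto en06
iface en06 inet6 manual

# /etc/frr/frr.conf (excerpt) - put both links into OSPFv3 area 0
interface en05
 ipv6 ospf6 area 0
 ipv6 ospf6 network point-to-point
interface en06
 ipv6 ospf6 area 0
 ipv6 ospf6 network point-to-point
```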
@bernhardkonig3282 a year ago
Trying the same thing as you. Did you succeed?
@joshhardin666 a year ago
Does this work similarly with IPv4? I don't have any IPv6 running on my network.
@apalrdsadventures a year ago
It's a bit more work in v4, since you need to actually set addresses on all of the point-to-point links, with both ends of each link matched to the same subnet. In v6, we use the link-locals, which are automatic. But other than that, the process will work similarly (using `ip ospf` instead of `ipv6 ospf6`).
@alex.prodigy a year ago
awesome
@linuxbasics7060 5 months ago
Might be a stupid question, but can I follow this with IPv4 rather than v6?
@apalrdsadventures 5 months ago
Yes, but with minor changes (see the sketch below):
- Instead of /128s out of a /64, you'd use /32s for each node out of a /24 or /28
- OSPF needs IP addresses on each interface; here I'm relying on the link-local addresses, but you'd need to set a unique /31 subnet (not out of the cluster subnet) on each point-to-point link
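Not from the video, but a rough IPv4 sketch under those assumptions (all addresses and interface names here are made up for illustration; the peer node uses the other address of each /31):

```
# /etc/network/interfaces (pve1) - loopback /32 plus a unique /31 per ptp link
auto lo
iface lo inet loopback
    post-up ip addr add 10.0.55.1/32 dev lo || true

auto en05
iface en05 inet static
    address 172.16.0.0/31

auto en06
iface en06 inet static
    address 172.16.0.2/31

# /etc/frr/frr.conf (pve1) - OSPFv2 instead of OSPFv3
interface lo
 ip ospf area 0
interface en05
 ip ospf area 0
 ip ospf network point-to-point
interface en06
 ip ospf area 0
 ip ospf network point-to-point
router ospf
 ospf router-id 10.0.55.1
```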
@ziozzot 11 months ago
Will it automatically load-balance if two equal-cost connections are available?
@apalrdsadventures 11 months ago
Yes. Equal cost is across the entire path to the destination, not just a single link.
@pauliussutkus526 a year ago
Maybe I missed some preparation, but I cannot get loopback to take the IPv6 address; it stays the same (default inet6 ::1/128 scope host) after editing both of those files. Can you give some hints and show how you add the IPv6 addresses?
@apalrdsadventures a year ago
Loopback will take both. ::1 still exists, but the other one is also on the lo interface
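One way to do it in /etc/network/interfaces (a sketch using the example node address; the video may do it slightly differently):

```
auto lo
iface lo inet loopback
    post-up ip -6 addr add fd69:beef:cafe::551/128 dev lo || true
```

Afterwards, `ip -6 addr show dev lo` should list both ::1/128 and the new /128.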
@pauliussutkus526 a year ago
@@apalrdsadventures On vmbr0, is the inet6 gateway the same for all nodes? Your tutorial is good, but for people like me it needs some preparation video on how to set up IPv6 addresses for networks. For now I'm going to the network settings and adding IPv6 to the devices and the Linux bridge.
@apalrdsadventures a year ago
It doesn't actually matter if you are using ipv6 on the 'public' network or not, since the ring is a separate subnet. You can continue to use your vmbr0 address (IPv4 or IPv6 or both) for the web UI and management, and the new IPv6 cluster address for migration, Ceph, storage, ... simultaneously. No need to move vmbr0 to IPv6.
@yannickpar a year ago
Do we need crossover cables between hosts?
@apalrdsadventures a year ago
1G and higher don't require it ever, so unless you're using 100M Fast Ethernet you're fine.
@lifefromscratch2818 4 months ago
I wish I had the mental bandwidth to implement this.
@karloa7194 10 months ago
Why do you need to copy the frr.conf to /etc/pve/?
@apalrdsadventures 10 months ago
I just copied it there to copy it to the other cluster nodes, since /etc/pve is synchronized across the cluster.
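In practice that's just something like this (a sketch; remember to adjust any per-node bits, such as the loopback address, before restarting FRR):

```
# On the node where frr.conf was written
cp /etc/frr/frr.conf /etc/pve/frr.conf

# On each of the other nodes
cp /etc/pve/frr.conf /etc/frr/frr.conf
systemctl restart frr
```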
@autohmae a year ago
Next up is multipath?
@apalrdsadventures a year ago
This will do equal-cost multipath automatically if the topology has paths which are equal cost (such as a 4-node cluster going left or right around the ring).
@kwnstantinos79 a year ago
There is an easy way to add the Ethernet cards as a bond per Proxmox node, and that's it. 🎉
@zparihar a year ago
Nice work. Question 1: How were you getting 16 Gb/s on 10 Gb cards? Question 2: I'm assuming the best use case for this would be Ceph storage? Question 3: In terms of also doing fast backups, could we also add a Proxmox Backup Server to that ring?
@apalrdsadventures a year ago
Answers:
- All of this was tested in a virtual environment, so 16G is what the virtual links get without any limits. I did also run a setup like this on the mini cluster, although it's a lot harder to film.
- You can use this for Corosync (although that should have redundant networks), Migration, ZFS replication, and Ceph as-is, and doing user-plane traffic is also possible with some more work using vxlan.
- You can add PBS to the ring as well, or as a branch, or whatever your hardware allows, and OSPF will 'figure it out' when routing to the PBS server.
- You can also add routers like RouterOS and maybe OPNSense to the ring also, and both of those can do vxlan for user plane traffic.