Proprietary, closed-source, binary-only kernel module... whatever could go wrong?
@FredrikRambris · 23 hours ago
What performance can we expect using stock Linux drivers and the networking stack?
@DJDocsVideos · 23 hours ago
Don't forget that your USB ports need power and some SSDs are thirsty.
@YonatanAvhar · 23 hours ago
How does having a single core pinned to 100% affect "idle" power consumption compared to using Linux kernel-based networking?
@minifig404 · 1 day ago
Thank you for fighting through this. I'm really glad to hear you have fully open-source options on the table. CPU microcode being closed is not new, and I consider that just something that you have to put up with in this world (so far).
@outseeker · 1 day ago
Mm, I like what a few people have mentioned in the comments here about testing with 10 Gb/s in each direction, with the data being all tiny packets like you might see on a super busy network. 10 Gb/s in a solid data stream isn't the same as 10 Gb/s of every variety of packet hammering the device?
@CRCinAU · 1 day ago
Soooooo, no iptables? no nftables?
@spx2327 · 1 day ago
Tomaz talks like a US drill sergeant, I am always stressed out after watching his videos. Sometimes I even start doing push-ups 😅
@DJDocsVideos · 1 day ago
Oh yes the fascist states of America but who lets stuff like that get in the way of profits?
@hubertnnn · 1 day ago
I see two possible issues with this approach. First is power usage and heat generation, since one of the cores is constantly at 100% even when your network does nothing at all. Second is a possible latency increase: when you poll instead of using interrupts, you don't respond to events immediately when they happen but on the next poll, so the time between polls is your extra latency. This one may not be an issue, since 100% CPU suggests busy-waiting without any sleep, but I would still like to see a test confirming that latency is not increased.
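A rough bound supports that intuition (the per-iteration time below is an assumption about a typical busy-poll loop, not a measured value): with no sleeping in the loop, the extra latency is at most one loop iteration, and a tight poll over a NIC queue usually completes an iteration in well under a microsecond, which is below typical interrupt-servicing latency.

$$ \Delta t_{\text{added}} \;\le\; t_{\text{iteration}} \;\lesssim\; 1\ \mu\text{s} $$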
@Adam130694 · 1 day ago
All of that work for 2xSFP+ & 3x2.5GbE?
@sledgex9 · 1 day ago
I wonder why DPDK chose to constantly poll the interface vs asking the kernel to notify it when a packet arrives and then continuing in userspace - i.e. using the kernel only for the raw packet notification.
@triffid0hunter · 23 hours ago
Context switching is expensive - en.wikipedia.org/wiki/Context_switch#Cost
@DJDocsVideos · 1 day ago
It's a 4090 Mobile. That is a castrated AD103 GPU. If you run it in 120W mode, its performance is around that of a GeForce RTX 3080 Ti.
@shephusted2714 · 1 day ago
Can you make one more enterprise model with more RAM etc. - a cheap 100G router/switch? I think quite a few people will start going from 10G to 100G fiber - it is the first place you invest.
@DJDocsVideos · 1 day ago
A little tip: use a partition label and mount by label; you can skip editing fstab by hand.
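For anyone wanting to try that, a minimal sketch (the device path, label name, and mount point are made up for illustration; the point is that the label survives device renumbering):

```
# give the filesystem a label (ext4 shown)
e2label /dev/sda1 DATA

# mount by label instead of by device path
mount LABEL=DATA /mnt/data

# or the equivalent /etc/fstab line
LABEL=DATA  /mnt/data  ext4  defaults  0  2
```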
@DJDocsVideos · 1 day ago
"Keep in mind we only have 3.5 GB" - lol, that's an entire Debian desktop system.
@michaelsoutherland3023 · 1 day ago
Unsubscribing and down voting until I no longer see the videos... Do some research about Michael Shellenberger and Matt Taibbi with "Twitter Files." 10:25 Know the J6 Committee couldn't even figure out correct charges? What's scary, Operation Mockingbird mainstream media seems to have not noticed the flaw with 18 USC 1512 witness tampering charges. Anyone can cross-reference the indictment with written law statutes to see this blatant error hidden in plain sight. Imagine if 1930's Germans were just a bit more skeptical of government propaganda.
@EndreSzasz · 1 day ago
100% non-stop on one core... there goes power efficiency, and up goes the heat. So you keep the CPU pegged 24/7 for the 5 minutes of 10 Gb transfer you do per day.
@jackipiegg · 1 day ago
You can do 10G but refuse to do 2.5G instead of 1G... facepalm
@hubertnnn · 1 day ago
He said it in one of the previous videos: that CPU has modes with specific lists of interfaces in each mode. You cannot just distribute the bandwidth as you please.
@jackipiegg · 23 hours ago
@hubertnnn It's 2024 and he's still releasing a 1 GbE NIC and calling it "pro". Instant fail, and no one will buy it.
@SkeptiSquid · 1 day ago
Hats off to you, this is a great development.
@BobWidlefish · 1 day ago
High-end networking geek here. Small-packet performance is critical for core internet equipment. If you can't send 14.88M 64-byte frames and receive 14.88M 64-byte frames at the same time, you're not doing 10 GbE. Bulk throughput is trivial and doesn't require any fancy hardware: a mundane PC can easily do tens of Gbps with large packets.
@lyth1um · 1 day ago
There is Linux VPP stuff; there is a guy doing a ring with off-the-shelf x86 hardware.
@desperateopportunist586 · 1 day ago
@lyth1um Do you know which guy is doing that stuff? I want to check it out.
@Galileocrafter · 23 hours ago
That's what I have been saying all along. 10 Gb/s the easy way is about 0.81 Mpps (812,743 pps) with 1500-byte MTU packets. 10 Gb/s the hard way is 14.88 Mpps with 64-byte packets. 10 Gb/s the realistic way is a realistic IMIX traffic profile.
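For reference, those figures follow from the Ethernet line-rate math, counting the 20 bytes of per-frame overhead on the wire (8-byte preamble/SFD plus the 12-byte inter-frame gap): a 1500-byte MTU packet is a 1518-byte frame, i.e. 1538 bytes on the wire, and a minimum 64-byte frame is 84 bytes on the wire.

$$ \frac{10\times 10^{9}\ \text{bit/s}}{1538 \times 8\ \text{bit}} \approx 812{,}743\ \text{pps}, \qquad \frac{10\times 10^{9}\ \text{bit/s}}{84 \times 8\ \text{bit}} \approx 14{,}880{,}952\ \text{pps} \approx 14.88\ \text{Mpps} $$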
@ShadowFandub · 1 day ago
HAHAHAHA
@pianoman4Jesus · 1 day ago
Oh wow! The magic of DPDK+VPP! +1 vote from me to go in that direction. And I do not do Twitter, or X, or Bluesky... I hope you still check your old-fashioned email address for when I need to get in touch with you outside of a YouTube reply comment. 😎 I will next send this to my "Right Hand Man in America" who built our custom Linux firewall platform 20 years ago. I really like your enthusiasm dedicated to this much-needed space in the IT industry. Thank you SO much! Over time, I hope you will have enough volume growth that you would consider a next-level-up model with more network ports. 🥳🧐
@deeeezel · 1 day ago
I need to get my hands on one of these routers when it's available.
@mazensmz · 1 day ago
I liked your video because you put the answer in the title.
@AtTheLetterM · 1 day ago
Please, no white backgrounds, I'm dying.
@originaljws · 1 day ago
Not only am I excited about the results (which are solid - nicely done), I'm so grateful you listened to the comments here and to the other people following this project. Thank you for listening. I can't wait for availability of these routers. This is a fun project to follow, and at the end of this rainbow is a useful and maintainable tool.
@Seandotcom · 1 day ago
lmao as an embedded developer I totally feel the cross-compiling mess
@SB-qm5wg · 1 day ago
The MikroTik CRS305 has 4 SFP+ ports and runs on a tiny single-core 32-bit 800 MHz CPU.
@EndreSzasz · 1 day ago
That is a switch; the packets don't get to the CPU, the switch chip deals with them. If they do go to the CPU, it can barely do 1 Gb/s. Check the test results on their product page.
@appcraft-brasil · 1 day ago
NetCraft, DataCraft, TitanLink, AlfaNode
@cheako91155 · 1 day ago
One core at 100% 24/7: not great for a home network? Interrupts and DMA* are wonderful, why go back to a world without them? *You shouldn't be able to get physical addresses from userspace.
@hubertnnn · 1 day ago
Interrupts use CPU resources. They are good for low traffic, but for high traffic polling is better. A perfect situation would be the ability to disable interrupts until all data in the queue is processed. Some microcontrollers I worked with had this feature, where you would receive just one interrupt and no more until it is cleared (which happens after the queue empties).
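That hybrid is essentially what Linux's NAPI does on the receive path: take one interrupt, mask further RX interrupts, poll until the queue is drained or a budget is exhausted, then re-enable interrupts. A rough sketch of the pattern in C (the nic_* and schedule_rx_poll helpers are hypothetical stand-ins, not a real driver API):

```c
#include <stdbool.h>

/* Hypothetical hardware/scheduler helpers -- placeholders, not a real API. */
void nic_mask_rx_irq(void);          /* stop further RX interrupts            */
void nic_unmask_rx_irq(void);        /* allow RX interrupts again             */
bool nic_rx_queue_empty(void);       /* anything left in the RX ring?         */
int  nic_process_one_packet(void);   /* handle one packet, returns count (1)  */
void schedule_rx_poll(void);         /* defer rx_poll() to run soon           */

/* Interrupt handler: fires once, then the driver switches to polling. */
void rx_irq_handler(void)
{
    nic_mask_rx_irq();
    schedule_rx_poll();
}

/* Poll with a budget so one busy port can't starve everything else. */
void rx_poll(int budget)
{
    int done = 0;
    while (done < budget && !nic_rx_queue_empty())
        done += nic_process_one_packet();

    if (nic_rx_queue_empty())
        nic_unmask_rx_irq();   /* ring drained: back to interrupt mode      */
    else
        schedule_rx_poll();    /* still busy: keep polling, no interrupts   */
}
```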
@georgehooper429 · 1 day ago
Very nicely done. From the outside it seems a long way to get this type of throughput. It looks like you added a few more gray hairs setting this all up, but in the end it works. Well done! I realize you might have a limited lab environment, but it would be interesting to set up all the 10GbE ports with an iperf system. I think there were 4 in your build (sorry, poor memory). See if you can push 10GbE per port through the router; you might have a 2x2 setup for iperf testing. The idea is to see at what point you saturate that single core and then need to dedicate a second or third core to networking, while keeping the remaining core(s) for the kernel and system management (SNMP, DHCP and such). I think it's a good plan to keep one of the interfaces (1GbE) connected to the kernel as an out-of-band management interface. This will keep port forwarding on the data interfaces offline until the system is fully booted and the system status/setup is confirmed.
@jamess1787 · 1 day ago
I ran an EPC (vEPC) LTE core using DPDK. It was a "black box" aside from the usual sysadmin/sysop stuff. It's cool how fast you can push your hardware. The vendor had some weird DPE (data plane) bug, but that's beside the point.
@sezam84 · 1 day ago
Nice video… are you planning to put this project on Kickstarter or a similar platform? I am interested in the product :)
@raresmalene5569 · 1 day ago
64-byte packets, or it is not 10 Gbit/s. Small packets are quite important; if it doesn't handle RFC 2544 testing, on IMIX and at 64-byte packet size, it's not going to do anything better than a 100-dollar router. Max MTU on your device is just the limit at which the computer starts to fragment packets. It is like having an earthmover (MTU 9000) vs a wheelbarrow (64-byte packets): yes, you can carry more, faster, but if you are building a clay pot you cannot use an earthmover to carry the clay.
@originaljws · 1 day ago
Recognize that he's capping the CPU performance and testing the performance limits. Internet-Mix packet sizes and full-clock-rate operational benchmarks are obviously important, as you point out, but they don't make a good thumbnail. All-zero-payload benchmarks are almost as useless as all-max-MTU ones without more context. I'm impressed with the metrics he shared, and am pleased to see the positive response to the kernel/source license problems. I look forward to this project continuing to develop and becoming a tangible, orderable device.
@maksiodzidek1 · 1 day ago
good job
@noxos. · 1 day ago
Can you make a homelab tour?
@xdevs23 · 1 day ago
I didn't know DPDK existed. Looks very promising. However, it feels like it is non-standard. I don't know if administrators really want to deal with DPDK/VPP if Linux already provides really good infrastructure. But hey, as long as it works and doesn't interfere with what I do - that's fine. It would be interesting to see the latency on this. The bandwidth may be high, but latency is also something to keep in mind.
@jamess1787 · 1 day ago
DPDK is used in server environments where the host OS doesn't need to interfere with the data, when you can tunnel the traffic to separate containers or guest VMs. Cellular infrastructure/mobility cores for LTE do this to optimize hardware requirements (and the space available). It's really cool tech, weird to think about its complexities though. Not sure how Linus or any of the kernel maintainers agreed to implement it 😂
@hubertnnn · 1 day ago
Never heard of it either, but the interface looks like Cisco's router command-line interface, and since it was made by Cisco, I wouldn't be surprised if it actually is their router CLI. And if it is, then administrators already know it. It would be a bit worse for non-administrators, because Cisco's CLI is a pain to learn, with many non-obvious things.
@kristopherleslie8343 · 1 day ago
What about L3?
@bennetb01 · 1 day ago
It's not that the packet size needs to be downsized; it's the PPS that VPP can do. There are a lot of things that use small packets, like DNS, etc., and there is specifically a test called IMIX. It's not perfect, but the idea is to test throughput using various packet sizes that mimic more of a real-world workload. A lot of commercial routers can put up huge numbers with 1500-byte (and more with 9000-byte) packets, but even when your MTU is set that high you will find the average packet size is much lower. It would be good to know the performance of the router with 64-byte packets (the lowest) as well as IMIX (or something else that is not the ideal max packet size). Again, it doesn't matter what your MTU is set to; it's the average packet size that counts. Things like DNS or ACKs are going to be a lot of smaller packets.
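To put a number on "average packet size": one commonly used "simple IMIX" profile weights frames 7:4:1 across 64, 570 and 1518 bytes (exact mixes vary between vendors and test tools), which averages out to roughly 354 bytes per packet:

$$ \bar{L} = \frac{7\cdot 64 + 4\cdot 570 + 1\cdot 1518}{12} = \frac{4246}{12} \approx 354\ \text{bytes} $$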
@BitZorg · 1 day ago
I'm very happy to hear that NXP was willing to open source what would be needed; both options seem very promising to me.
@hubertnnn · 1 day ago
Yep, with them open sourcing the necessary parts, the proprietary solution starts to feel better than VPP, due to compatibility with commonly used tools.
@sekanderbast452 · 1 day ago
First, I'm very impressed by the performance! One question though: as one core is now constantly pegged, in what way does this impact power consumption? Is there a notable difference between this solution and the old proprietary SDK when at idle or when routing?
@xdevs23 · 1 day ago
It's pegged, but probably not actually using much power. I guess it's just busy-waiting on packets, which should be just some conditional branches and compares, nothing too crazy. Nevertheless, it's taking CPU time that could have been used elsewhere.
@tomazzaman · 1 day ago
You're right, the core is at 100%, but it's basically just a constant loop of polling the interfaces.
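For readers who haven't seen DPDK before, the receive side of such a loop is roughly this shape - a minimal sketch of the classic rte_eth_rx_burst() pattern, with EAL/port/mempool setup and the actual forwarding logic omitted (the packet handling here just frees the buffers):

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Busy-poll one RX queue forever: this is what keeps the core at 100%
   even when no traffic arrives -- rte_eth_rx_burst() simply returns 0. */
void rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        const uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                                bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* ...lookup/rewrite/enqueue for TX would happen here... */
            rte_pktmbuf_free(bufs[i]);   /* placeholder: just drop the packet */
        }
    }
}
```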
@foxfoxfoxfoxfoxfoxfoxfoxfoxfox · 1 day ago
What kind of performance difference is there between VPP and native Linux packet forwarding? What happens performance-wise if you switch the network driver to polling mode in Linux?
@qdaniele97 · 1 day ago
My guess would be that at zero to very little traffic, kernel networking with interrupts would be slightly more efficient/faster, but at any level of traffic above that VPP would get the advantage. That's because, even with no traffic at all, VPP would still be polling the NICs to find out if there's something new, while the kernel would be doing other things, waiting for the NICs to tell it something happened. With more traffic, things wouldn't change much for VPP, but they would change a lot for the kernel, which would be receiving lots of interrupts and having to constantly stop what it was doing to listen to what the NICs have to say.
@hubertnnn · 1 day ago
@qdaniele97 I see it that way as well. Maybe release two versions of the OS/firmware: one with classic Linux kernel networking (the default one) and one with VPP.
@wolfgangpreier9160 · 1 day ago
For Bluesky you have to become diverse and woke. Sorry, that is not suitable for me.
@Auto5k · 23 hours ago
Just don't say racial slurs or tell people to off themselves, shouldn't be too difficult right?
@alexgartrellwork335 · 1 day ago
The reason "downsized" packet performance is important is that TCP-ACKs and other small packets exist organically. So packets-per-second is actually a relevant and important metric for router performance. With sufficiently large payloads, throughput is just a direct memory access benchmark because you're just copying stuff around and not doing that much "thinking."
@jamess1787 · 1 day ago
DPDK has a way to handle this, but you're right, it's optimized for heavier throughput, with smaller packets seeing higher latency than you would normally see. (Set your ICMP packet sizes larger and you'll see the latency problem disappear.) Think of DPDK as more of a "pipe" connecting the two end devices together: it doesn't matter how much fluid you put in there, it'll get there in a timely fashion, especially if there is heavy throughput on the system. DPDK works great for things like SCTP. 🤘
@gcs8 · 1 day ago
Nice, I am used to only seeing DPDK and its friends in the data center as part of NSX-T. I never brought it up as I only ever saw it for specific NICs and not on any embedded stuff outside of something like a DPU. I think this would be a super cool thing to get into more common use. I think if you make the router OS able to be virtualized with the same feature set when paired with a supported NIC (it was mostly Intel and Broadcom last I checked), that could really open up home-lab stuff for some cool things like "pocket universes", or just an easier way to play with OSPF/BGP with enough oomph behind it to make it fun for the lab. This could open up a lot of SDN fun.
@0zux45 · 1 day ago
I really like performance graphing with Grafana/Prometheus or whatever else. I assume VPP already has the capability for external software to pull that info?
@Holy_Hobo · 1 day ago
Bluesky is cringe
@chimpo131 · 1 day ago
This guy also sounds like such an insecure douche whenever he talks 😂
@rapamune · 1 day ago
It has virtually already turned into an echo chamber with extreme moderation. Only viable for radical progressive left users at this point in time.
@PR-cj8pd · 1 day ago
There's nothing wrong with Twitter.
@marshallb5210 · 1 day ago
xitter is cringe
@eat.a.dick.google · 1 day ago
X is the worst cringe.
@D9ID9I · 1 day ago
I guess a MikroTik like the RB4011 or RB5009 can do that without any issue.