The 100gig Adventure

125,382 views

Level1Techs (a day ago)

Wendell has been busy upgrading everything to 100 gigabits!
Check it out here:
www.amazon.com/...
www.lr-link.com/
********************************
Check us out online at the following places!
bio.link/level...
IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
-------------------------------------------------------------------------------------------------------------
Music: "Earth Bound" by Slynk
Edited by Autumn

Comments: 370
@CraftComputing (a month ago)
"I ended up upgrading everything to 100Gb, because that's what you do, right?" Yes. That is what you do.
@SideSweep-jr1no (a month ago)
Yes. That is what you do.
@HoangTheBoss (a month ago)
@@Prasanna_Shinde did you account for full duplex
@ryandenotter9064 (a month ago)
Off to convince my wife of how this will significantly improve our quality of life. Wish me luck! :P
@Level1Techs (a month ago)
good luck
@piked86 (a month ago)
I just talked my wife into using pfSense to upgrade to 10gig networking, with the promise of a VPN into our network for self-hosted AI: Llama and Stable Diffusion.
@awarepenguin3376 (a month ago)
life is short, need 100g
@ciroiriarte8804 (a month ago)
oh, budget review session
@MrMartinSchou (a month ago)
I know how you feel, except I'm single. I only have 1 computer and 1 phone. A NAS would be semi-useful, but realistically it'd be more useful to have offsite backup. But it's cool tech and damnit, I want it!
@gingerman5123 (a month ago)
I'm a network engineer at an ISP. We're starting to install 100g internet connections. Pretty wild.
@brians8664 (a month ago)
I was just thinking the same thing, we deployed 400Gb backbones years ago. 10Gb wan connections are a dime a dozen now. We’re starting to see dual/quad 100Gb backhauls to cell sites routinely now. Dual is for standard redundancy, quad is for dual-path redundancy.
@sznikers (a month ago)
68Mbps here ... 😂😂😂
@Maverick00555 (a month ago)
​@@sznikers 50mbps😅
@peq42_ (a month ago)
Judging by how badly my ISP was affected by a DDoS not long ago, I'd say they're still installing 100gbps connections as well (which is funny, since they OFFER 1gbps to clients).
@brians8664 (a month ago)
Sidenote, I always laugh when I see YouTubers cringe at the price of optics. I always throw up a little when I see the price of a 100 km 100Gb QSFP+ module. For most companies, field-side amplification has been gone for many years. It's far cheaper to buy longer-range optics and have multiple links.
@lennard9331 (a month ago)
"...so I ended up upgrading everything to 100Gbps, that's what you do, right?" No, Wendell, most of us don't have the kind of equipment to do that 😂
@jonathanyang6230 (a month ago)
i was pretty jazzed when i got myself a 2.5gbe switch
@lennard9331 (a month ago)
@@jonathanyang6230 I'm not a massive home networking guy, so I make do with what I get from my ISP and what on-board solution offers. That being said, I'm also on 2.5Gbps right now, as French ISPs have started supporting 2.5Gbps, 5Gbps and even 10Gbps modems for end users at affordable prices. The difference it makes is actually insane! I didn't expect moving from gigabit/WiFi 6 to dual 2.5Gbps/WiFi6e to make such a massive difference, even with the connections between my devices at home.
@TheIgor449 (a month ago)
Me with my Mel- Nvidia 10Gbps sfp+ card thinking it's overkill for at least 5-10 years
@r00tyschannel52 (a month ago)
I just recently upgraded to 2.5gbe with 10gbit link between floors. Ah yes, future proofed for a little while at least. "You should all be getting 100gbe, it's old hat by now" the fuuuu. Yeah, I know we're talking enterprise, but still.
@hugevibez (a month ago)
It really isn't that far out of reach; you can find a lot of 100gbps switches for the price of a high-end Ubiquiti switch, since the hyperscalers are dumping 100gig en masse. If you are on a 3-node cluster, you can just crossbar them and forgo a switch. This means you can spread out buying the NICs and transceivers on the client side, then afterwards buy switches and transceivers on the networking side. Edit: I went full balls to the wall and upgraded every switch to ONIE/SONiC, and my entire network stack came down to about 3000 Euro. I did this because I wanted to learn SONiC and how to build an overlay network. A more reasonable approach would be to find one or two SN2410 switches for redundancy, with 8x 100gbps and 48x 25gbps ports; this is more than enough connectivity for any homelab in my opinion. You only need something with RJ45 and PoE+ on the side for clients and APs.
@bruce_just_ (a month ago)
Network engineer here, we’re already on 100G between Metro POP sites and intercapital links for several months already, and now standing up multiple 100G link bundles on intracapital core links. Also our colleagues in the internet peering team are running multiple 100G link bundles on our internet borders.
@BryanSeitz (a month ago)
Why so slow?
@alexz1232 (a month ago)
@@BryanSeitz A lot of the heavy traffic like streaming services and game downloads will have a local cache in major cities. With good management you only need 100G worth of peering bandwidth per 100K clients.
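To put that "100G per 100K clients" figure in perspective, here is a quick back-of-the-envelope check. This is only a sketch: the peering capacity and subscriber count come from the comment above, while the advertised plan speed is an illustrative assumption.

    #include <stdio.h>

    // Rough per-subscriber math behind "100G of peering per 100K clients".
    // The 1 Gbps plan speed is an assumption for illustration only.
    int main(void) {
        const double peering_gbps = 100.0;     // shared peering capacity
        const double subscribers  = 100000.0;  // clients behind it
        const double plan_mbps    = 1000.0;    // assumed advertised plan

        double avg_mbps = peering_gbps * 1000.0 / subscribers;  // per-subscriber average
        double oversub  = plan_mbps / avg_mbps;                  // oversubscription ratio

        printf("Average per subscriber: %.2f Mbps\n", avg_mbps);            // ~1 Mbps
        printf("Oversubscription vs a %.0f Mbps plan: about %.0f:1\n",
               plan_mbps, oversub);
        return 0;
    }

With local caches absorbing most streaming and download traffic, an average of roughly 1 Mbps per subscriber at the border is plausible, which is why the oversubscription works.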
@TheMrDrMs (a month ago)
Sys eng here, and I dabble in networking with the net eng guys. We got some 400Gb switches maybe 6 months ago or so. So wild. Then I saw the Netflix 400Gb-over-PCIe docs, also wild.
@floriantthebault521 (a month ago)
Nah, you don't need an ASIC for deep traffic analysis on a 100Gb/s network. At my place, we do DPI at 100Gb/s with only one CPU (32 cores, though) and 64GB of RAM. Full line rate certified by Spirent, at 14 million new sessions per second and 140 Mpps. But to do that, we had to redevelop the drivers of the E810 from scratch in Rust for everything to work in userspace (DPDK is… too limited for that). So it's possible; it took us 3 years of R&D, though ;-)
@JeffMcJunkin (a month ago)
Can you share a link or any context? This is intriguing stuff!
@quackdoc5074 (a month ago)
out of curiosity, was this done using BPF?
@floriantthebault521 (a month ago)
@@quackdoc5074 Nope. We tried using AF_XDP (because of eBPF), but... it didn't scale enough to reach 100Gbit/s full line rate. It started dropping around 40G and we would have had to throw 40+ cores at it... Too costly. That's why we took the high road and re-developed brand new NIC drivers from scratch for the whole Intel family (from the 500 series to the 800 series); it was the only way to achieve true linear scalability.
@slidetoc (a month ago)
So when will it be on github?
@jfbeam (a month ago)
@@quackdoc5074 Doubt it. BPF is too slow. That's why DPDK came about - it's mostly just a NIC driver in userspace, but you're very limited in what you can do in userspace.
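For readers wondering what "mostly just a NIC driver in userspace" looks like, below is a heavily trimmed sketch of the classic DPDK receive loop in C. It is illustrative only, not tuned for 100G: port 0, the pool sizes, and the single queue are arbitrary assumptions, error checking is omitted, and real line-rate code pins one polling thread per RX queue and does as little per-packet work as possible.

    // Minimal DPDK polling receive loop (sketch).
    // Build roughly as: gcc rx_demo.c $(pkg-config --cflags --libs libdpdk)
    #include <stdio.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    #define BURST 32
    #define PORT  0   /* assumes the first DPDK-bound port */

    int main(int argc, char **argv) {
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        struct rte_mempool *pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (!pool) {
            fprintf(stderr, "mempool creation failed\n");
            return 1;
        }
        struct rte_eth_conf conf = {0};
        /* 1 RX + 1 TX queue; return codes ignored for brevity */
        rte_eth_dev_configure(PORT, 1, 1, &conf);
        rte_eth_rx_queue_setup(PORT, 0, 1024, rte_socket_id(), NULL, pool);
        rte_eth_tx_queue_setup(PORT, 0, 1024, rte_socket_id(), NULL);
        rte_eth_dev_start(PORT);
        rte_eth_promiscuous_enable(PORT);

        uint64_t pkts = 0, bytes = 0;
        struct rte_mbuf *burst[BURST];
        for (uint64_t iter = 0;; iter++) {
            /* Poll the NIC directly from userspace: no interrupts, no syscalls. */
            uint16_t n = rte_eth_rx_burst(PORT, 0, burst, BURST);
            for (uint16_t i = 0; i < n; i++) {
                bytes += rte_pktmbuf_pkt_len(burst[i]);  /* packet inspection would go here */
                rte_pktmbuf_free(burst[i]);
            }
            pkts += n;
            if ((iter & 0xFFFFFF) == 0)                  /* occasional progress print */
                printf("%llu packets, %llu bytes\n",
                       (unsigned long long)pkts, (unsigned long long)bytes);
        }
        return 0;
    }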
@alc5440 (a month ago)
God, the FEC nightmare is real. I spent days trying to figure out why I couldn't get a Mikrotik router to talk to a Ubiquiti switch at 25 gig, and the answer was FEC.
@sporefergieboy10 (a month ago)
FEC my NAS
@dbcooper7326 (a month ago)
My first corporate job had 4Mb/s Token ring networking. What a leap since.
@truckerallikatuk (a month ago)
Wendell waves around a ConnectX-5 calling it old... I'm gonna go cuddle my beloved ConnectX-3...
@guspaz (a month ago)
I recently had to swap out my ConnectX-3 cards with ConnectX-4 cards because nVidia dropped driver support for ConnectX-3 after 2020 (so Ubuntu 20.04 is fine, but 22.04 and 24.04 is a no-go), but still support the latest and greatest distros/kernels with the ConnectX-4. Luckily, 25 gig ConnectX-4 cards are now dirt cheap and are backwards compatible with SFP+, so I could simultaneously fix my driver woes, set myself up for a future 25 gig upgrade, and avoid replacing anything in my network other than the NICs.
@samegoi (a month ago)
I installed x-4 cards today :D
@BattousaiHBr (a month ago)
bruh i'm on x-2...
@Vatharian (a month ago)
I am absolutely happy I got ConnectX-5s at under $100/piece a few years ago. I started from the ConnectX-2. You will get there!
@peterpain6625 (a month ago)
Those are still doing fine if you update the firmware. We got lots of those in production.
@q5sys (a month ago)
"Experimental RDMA capable SMB Server that's floating around out there on the internet" ... GO ON...
@fuzzydogdog (a month ago)
Likely referring to ksmbd, it's an in-kernel server that got declared stable late last year. There's a couple threads on the forums about it, but Windows seems to have trouble establishing RDMA connections with it.
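Before chasing SMB Direct failures between a Linux server and a Windows client, it is worth confirming that each side can even see an RDMA-capable device from userspace. Below is a small libibverbs sketch (works the same for RoCE and InfiniBand); querying port 1 and treating it as the only interesting port are assumptions for illustration.

    // Quick check that an RDMA-capable NIC is visible from userspace.
    // Build with: gcc rdma_check.c -libverbs
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "No RDMA devices found (is the RDMA driver loaded?)\n");
            return 1;
        }
        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;
            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0)   /* port 1: first physical port */
                printf("%s: state=%d link_layer=%s\n",
                       ibv_get_device_name(devs[i]), (int)port.state,
                       port.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet (RoCE)" : "InfiniBand");
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }

If nothing shows up here, no SMB/NFS RDMA configuration on top of it will work, so it is a cheap first test.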
@q5sys (a month ago)
@@fuzzydogdog I thought of that, but since that has been marked as stable and he said 'experimental' I was thinking maybe he has heard of something else.
@rawhide_kobayashi (a month ago)
@@fuzzydogdog man, I've been fighting linux server > windows client rdma file sharing for years. I tried ksmbd before it was 'stable' (but after rdma was supposed to be supported) and it never worked. but now I don't have any rdma-capable connections between windows and linux machines anymore anyway...
@NathanaelNewton (a month ago)
​@@q5sys oh hi there
@funkintonbeardo (a month ago)
In a previous job, I worked with 2x400G cards using DPDK. It was glorious to run TRex on the other side of the wire and see 800G flowing through our program.
@abavariannormiepleb9470 (a month ago)
Video/content suggestion: Boot Windows over such a 100 GbE adapter from a ZFS Server and how to get the most performance out of it.
@Kfdhjgethfdtgh774rvbjs (a month ago)
+1
@BasedPajeet (a month ago)
You need an iSCSI-boot-capable motherboard; the last time I saw that option it was in an Intel NUC.
@fujinshu (a month ago)
@@BasedPajeet You mean PXE boot, right?
@magfal (a month ago)
@@fujinshu PXE is not the same as iSCSI.
@nadtz (a month ago)
@@fujinshu No, he meant iSCSI boot, and like he mentioned it's a boot option in some motherboard BIOSes (the NIC also needs to support it). It can be a little quirky to get working (did it with some Supermicro motherboards once upon a time), but once you get it going it's pretty neat. That said, I don't believe Intel still supports it, but you can do something similar with UEFI boot options on hardware that supports it (and for that you will need PXE).
@michaelrichardson8467 (a month ago)
Ahh yes 100Gbit. The real 10Gig
@JireSoftware (a month ago)
10 gig was rad for 2014! 10 years later and now it's 100 gig!
(a month ago)
nice autism jire @@JireSoftware
@DavidEsotica (a month ago)
Fair to say I didn't understand most of what he was talking about but it was fun to listen to.
@dbattleaxe (a month ago)
You should do a review of QNAP's 4x100GbE + 8x25GbE switch. It's reasonably priced and uses current gen tech, so much lower power/fan noise and has more ports than the Mikrotik 100GbE switch. It won't have all the fancy layer 3 capabilities of the used switches, but I'd like to see how it compares for those of us who care about noise.
@jttech44 (a month ago)
I'd argue that the mikrotik cloud routers don't actually have usable L3 features, being that even simple things wind up limiting thruput to ~400mbps
@dbattleaxe (a month ago)
@@jttech44 Yeah, that was in reference to the used Mellanox, Dell, etc. switches, not the Mikrotik ones. These low cost switches don't really have usable L3 features, but most home labs don't really need those.
@makinbacon21 (a month ago)
We've got that server deployed as a VM host, actually. Proxmox, ZFS (RAIDZ2 on SAS HDDs + L2ARC & SLOG on NVMe). Wonderful piece of hw, though we're likely only getting to 10 GbE this year. Might future-proof with a 25 GbE-capable switch, but the upstream switch we're linked to only got to 10 GbE recently (low priority; other buildings are up to 25 and 100).
@dbattleaxe (a month ago)
I got used 100GbE-CWDM4 transceivers for $5 each off ebay. Those run with LC terminated duplex single mode fiber, which is much easier to deal with than 8 fiber MPO.
@danmerillat (a month ago)
same for 40gbe-lr4 (lite). They're basically paying you to buy them when you save so much in cable costs going from MPO to LC and you don't have to deal with crossover mismatch.
@jame358 (a month ago)
It feels weird working in a big-tech DC and seeing people talk about 100gig; meanwhile I'm regularly working on switches with 32 400gig QSFP ports.
@Ex_impius (a month ago)
Same thing i said. 100G is not new. We were installing 100G links in google DCs in 2017. There were only a few 100G links on the juniper and cisco routers in the CNR(campus networking room) then but we had them.
@darklordzqwerty (a month ago)
lol, I'm working on 800gig in its final development phase before the production version. Seeing this is funny; there's much more exciting stuff.
@jamesfmilne (a month ago)
We use Xinnor for NVMe RAID in film post production where bandwidth is more important than data integrity of something like ZFS.
@niklasp.5847 (a month ago)
Remember, the Intel cards are 100G full duplex, while the Mellanox could push line rate per port if the PCIe bus didn't limit it. The CX4 is still supported, as it uses the same driver as the CX7. If one does not need the new features like 200G or 400G, the old cards are almost as capable. The same could however not be said for 100G cards from QLogic, which are a pain in the ass compared to Mellanox and Intel. I would love to see some stuff with DPDK and VPP. A 100G router on x86 is very cool.
@PaperReaper (a month ago)
Work at a small cloud provider. We only just upgraded to 100gig in our backbone a year or so ago and are expanding that soonish. A few 100gig switches went through my hands the other day for software updates.
@vdis (a month ago)
100 Gigabit. Gigabit! Well thanks, now I feel really old. 10 megabit coax old.
@piked86 (a month ago)
Wendell is my Mr. Wizard for computers.
@vaughn1804 (a month ago)
💯👍
@ikirules (a month ago)
it's the Crawly of the computers :D! (tiktok reference)
@makeshiftsavant (a month ago)
In the current year, I don't learn anything from watching anyone else. Love the Wendell evolution.
@lilricky2515 (a month ago)
"We're going to need another Timmy..."
@keyboard_g (a month ago)
Slaps 100Gbps. You can fit so many YouTubes in there.
@porklaser (a month ago)
25 Gig was easy and worked out of the box.. So naturally I had to go the hard route. Hehe. Upgraded the home net to a 10gig backbone and I was feelin' pretty good.
@newstandardaccount (a month ago)
I just upgraded part of my home network from 1 gbe to 10 gbe and it was a huge quality of life improvement. Moving large files to/from my NAS is fast! Upgrading to 100 gbe sounds insane to me.
@mrmotofy (a month ago)
Need storage speeds to make use of it
@newstandardaccount (a month ago)
@@mrmotofy yes - in my case I'm using ZFS on rotational media. What I've noticed is that for files that are about 7 gigs or smaller, I can copy them to my server at over 1 GB/sec, but eventually the speed drops to the rotational media speed, about 250 MB/sec. My guess is that ZFS is caching the writes in some way but once I blow out the cache, it is forced to write at the slower speeds of the media.
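That behaviour matches how ZFS buffers asynchronous writes in RAM and flushes them in transaction groups: once incoming data outruns what the pool can flush, throughput falls back to disk speed. Here is a rough model of how long the fast phase lasts. It is only a sketch: the 4 GB buffer ceiling stands in for a typical zfs_dirty_data_max-style limit (an assumption, not a measured value), and the rates are the ones quoted in the comment above.

    #include <stdio.h>

    // How long can a write burst run at network speed before the ZFS write
    // buffer fills and throughput drops to what the disks can absorb?
    // All numbers are assumptions/illustrations, not measurements.
    int main(void) {
        const double net_rate  = 1.0;   // GB/s arriving over 10GbE (~1 GB/s per the comment)
        const double disk_rate = 0.25;  // GB/s the spinning pool sustains (~250 MB/s)
        const double buffer_gb = 4.0;   // assumed dirty-data ceiling

        // The buffer fills at the difference between inflow and outflow.
        double fast_seconds = buffer_gb / (net_rate - disk_rate);
        double fast_gb      = net_rate * fast_seconds;

        printf("Fast phase: ~%.1f s, ~%.1f GB absorbed before dropping to %.0f MB/s\n",
               fast_seconds, fast_gb, disk_rate * 1000.0);
        return 0;
    }

With these assumed numbers the fast phase absorbs a bit over 5 GB, which lines up with files around 7 GB copying at full network speed before slowing down.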
@ingframin (a month ago)
I bought 2 cards with Intel e810 at work. They work like a charm and the driver is open source. Although, you need to compile it yourself for Debian… but for the rest they are basically plug and play. I am very happy with them.
@guspaz (a month ago)
From a consumer perspective, 10 gig hardware (be it NICs, transceivers, or switches) is dirt cheap now. You can get any of those for under a hundred dollars. But with 25 gig and 100 gig, while the NICs/transceivers are affordable or even cheap, the switches are still in the high hundreds to low thousands of dollars. And those switches will probably need a secondary switch to connect 1 or 2.5 gig devices to your network; you can't really get an all-in-one switch like you can for mixing 1/2.5 and 10 gig clients. The costs add up fast and your minimum investment ends up in the thousands of dollars.
@mrmotofy (a month ago)
Sure, cuz right now I'm looking at an 8 port SFP+ Mikrotik switch for $230
@vylbird8014 (a month ago)
I have noticed one small problem though: that 10gig hardware consumes significantly more power than regular gig-eth. Notice the great big heatsink on every PCIe 10gbit interface.
@mrmotofy (a month ago)
@@vylbird8014 Fiber doesn't
@edwardallenthree (a month ago)
I still use old infiniband stuff (40 gig) at home. It's still shockingly fast and low latency. And cheap for all the parts from switches to nics.
@edwardallenthree (a month ago)
NFS over rdma is very very very fast.
@runforestrunfpv4354 (a month ago)
40GB is decommissioned stuff. Get them fast.
@michaelgleason4791 (a month ago)
I was going to upgrade to 10gb, but went with 25gb, so I get the notion. I just love watching Wendell when he's like a little kid about this stuff. It's so fun and engaging.
@RaidOwl (a month ago)
Ahh yes 100gig...my nemesis
@seanunderscorepry (a month ago)
"and if you made it to the end of the video... You are my RaidOwl comment on a level1techs video."
@ewenchan1239 (a month ago)
I've been running Mellanox ConnectX-4 dual 100 Gbps VPI Infiniband cards in the basement of my home since December 2018. Skipped 10G, etc. and went straight from 1 G to 100 Gbps. IB has its quirks but it is anywhere between 1-3% faster than 100 GbE off the same card.
@marktackman2886 (a month ago)
Network engineers support Wendell's 100g journey. Now let's get a desktop system capable of the throughput without an accelerator card.
@DEJ915 (a month ago)
My experience moving from 10g to 25g was interesting, since no links would come up by just plugging in. After a long session of messing around with it for a few hours, I figured out that if I enable FEC and then disable it again, the links activate. I thought maybe the firmware was the issue on my X695s, since Extreme mentioned FEC in update notes, but the same behaviour was still going on even after updating, so it's just a bit odd. These weren't even LACP ports or anything, just basic ports. (Host side was dual-port 25G Broadcom NICs on VMware 7.)
@literallycanadian (16 days ago)
It's so funny hearing you talk about how the orange OM1 fibre is dinosaur age. Industrial plants still live on the stuff. Heck, the protective relays that control the breakers that protect our electrical grid are still being installed to this day with OM1 fibre.
@jeroenlodder5838 (a month ago)
I just upgraded to 10 and 2.5… it required special fumbling with driver versions on windows, because of course intel. But it works! Now I need faster WiFi and fiber internet…
@ajhieb (a month ago)
I upgraded everything in my rack to 40GbE a couple of years ago (and it was pretty dang cheap at the time), and seeing as I don't have any uber-fast Kioxia drives, I don't think the jump to 100GbE is worth the cost for me. Might wait for 400GbE to come down in a few years. My core switch is a Mellanox SX6036. It connects to a Dell/Force10 S4810 and a pair of PowerConnect 5548 switches (one with PoE). My NICs are mostly Mellanox ConnectX-3s with a few Chelsio and Intel based NICs. It all worked together surprisingly well. IIRC, for the NICs, DAC cables and 2 switches I was all in for around $600-$700.
@mrmotofy (a month ago)
Your Corn collection must make you really good money
@rmp5s (a month ago)
See?...this is how I end up replacing my server, all my storage, all my networking gear, everything...the chase!! Speed is never enough!! lol
@TechAmbr (a month ago)
Good lord, I just upgraded my home network to 2.5gig and now 100gig is a thing? Wendell, how much speed do you need????? Hopefully the Level1 Supercomputer Cluster will be showing up in the Top100 soon lol
@guspaz (a month ago)
With 10 gig switches being under a hundred bucks now, 2.5 is old hat ;)
@m4dizzle (a month ago)
I love when Wendell gets excited. Personally, I'm impressed when Linux does anything at all 😂
@ReeseRiverson (a month ago)
And here I was thinking about going 25gbit for my Dell storage server and my main desktop. Wendell has made 100gbit that much more desirable. lol
@ewenchan1239 (a month ago)
If you're running a point-to-point connection, it's not so bad. If you need a 100 Gbps switch in between, THAT is probably going to be your most expensive piece of hardware. (It depends on how many ports you need, but there are cheaper (absolute price) options, albeit at a higher $/Gbps.) I bought my 36-port 100 Gbps Infiniband switch for $3000. $3000 is a lot of money, but that's $3000 / 7.2 Tbps of throughput = $0.4166/Gbps. You can get cheaper switches, but the $/Gbps will be higher.
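For anyone wondering where the 7.2 Tbps comes from: 36 ports times 100 Gbps, counted in both directions. A quick check of that cost-per-gigabit figure (a sketch; the $3000 price is the one quoted in the comment above):

    #include <stdio.h>

    // Cost-per-gigabit check for a 36-port 100G switch bought at $3000.
    int main(void) {
        const double ports     = 36.0;
        const double gbps_each = 100.0;
        const double price_usd = 3000.0;

        double one_way = ports * gbps_each;        // 3.6 Tbps one direction
        double duplex  = one_way * 2.0;            // 7.2 Tbps counting both directions
        printf("Aggregate: %.1f Tbps, about $%.3f per Gbps\n",
               duplex / 1000.0, price_usd / duplex);   // roughly $0.417/Gbps
        return 0;
    }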
@sventharfatman (a month ago)
100G has options: DR shoots 1310nm to 500 meters and FR shoots 1310nm to 2km. Both are plenty safe for short runs in the same rack or within the datacenter. Even the 10km optics are unlikely to burn out the receiving side these days. Most of them have an RX window starting at or above the top of the TX window, so you should be good to go once you add some loss through connectors.
@danmerillat (a month ago)
Don't they modulate their optical power? On 40G-LR4 lite I'm transmitting 0.5 dBm, receiving -1 dBm, and it's rated for up to 3.5 dBm (TX and RX).
@rush2489 (a month ago)
We deployed Pensando DPUs in our latest datacenter refresh (dual 100Gb ports), plus a second ConnectX-6 dual 100Gb card for RDMA support (dual DPU wasn't yet supported when we designed the hardware refresh). It's stupid fast.
@pixiepaws99 (a month ago)
Big thing is RDMA/ROCE, and thankfully E810 supports that. You can easily do iSCSI with iSER and NVMeoF with RDMA.
@wabash9000 (a month ago)
I've been thinking about getting 40gb for home just for the heck of it. The used 10gb hardware is actually quite expensive, but because there was a bunch of datacenters that had 40gb and then upgraded to 100gb or 250gb, the used hardware on ebay is CHEAP. $20 for a dual NIC, $150-250 for a 36 port switch. The same stuff in used 10gb is $100 per NIC and $500 per switch.
@Aliamus_ (29 days ago)
I semi-recently upgraded to 10GbE, and I'm barely keeping it saturated, one way, with 2 single-port ConnectX-3s actually. I'll be happy with this for a long while; doesn't stop me from drooling over this though.
@awarepenguin3376 (a month ago)
Unfortunately I'm a Mac so we can't have nice 100G. 40G is the max. (cries in speed)
@FrenziedManbeast (a month ago)
And I'm sitting over here acting smug with my 2.5 and 10 GbE...I don't think I could saturate 100GbE but the 25 would be nice under certain workloads!
@nadtz (a month ago)
Yeah I wouldn't need it most of the time but 25 would be nice. Problem I've found is the affordable 25/40gb switches are older, loud and power hungry. Newer stuff that is a lot more power efficient is still pretty expensive. For now I'll stick with 10 but I'm keeping my eye out for an upgrade.
@StaceySchroederCanada (a month ago)
@@nadtz what is it about the 10 that you don't like?
@nadtz (a month ago)
@@StaceySchroederCanada Never said I didn't like it, price vs. performance 10gb is great because it's so (relatively) cheap. It would occasionally be nice to have a faster connection when I'm dumping large amounts of data across my network is all.
@peq42_ (a month ago)
to be fair 2.5GbE isn't something to be smug about when all current motherboards come with support for it out of the box xD
@FrenziedManbeast (a month ago)
@@peq42_ This is patently false - you can find both AM5 and LGA 1700 mobos that are $160+ with only a single 1GbE port, and that's not even looking at the ghetto chipsets on each platform. Even figuring that, I'd reckon most people don't have 2.5 GbE switching/routing infrastructure unless they went out of their way on purpose for it. I personally have my workstation and server running on 10GbE, and everything else is 2.5GbE at this point. But I had to sit down and purposely plan/buy all the gear to make the jump possible for my whole house.
@burningglory2373 (a month ago)
If you do end up using long-haul cards you can get attenuators; I just recommend having a light meter handy so you can see exactly how much you need to attenuate. Not that it makes sense to pay the extra money, but if you have the hardware just sitting around you make do with what you've got.
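The attenuator sizing is just a power-budget subtraction. Here is a simple check for running a long-reach optic on a short patch. It is only a sketch: the TX power, RX overload ceiling, sensitivity, and per-connector loss below are typical-looking placeholder assumptions, so check the datasheet for your actual module before trusting the result.

    #include <stdio.h>

    // Will a long-reach optic overload its peer on a short run?
    // All figures are placeholder assumptions, not datasheet values.
    int main(void) {
        const double tx_dbm         = 2.0;    // assumed transmit power of an LR-class optic
        const double rx_overload    = 3.5;    // assumed max input the receiver tolerates (dBm)
        const double rx_sensitivity = -10.0;  // assumed minimum usable input (dBm)
        const double connector_db   = 0.5;    // assumed loss per mated connector pair
        const double attenuator_db  = 5.0;    // inline attenuator under consideration

        // Short run: fiber loss itself is negligible, so only connectors and
        // the attenuator count against the transmit power.
        double rx_dbm = tx_dbm - 2.0 * connector_db - attenuator_db;

        printf("Estimated receive power: %.1f dBm\n", rx_dbm);
        if (rx_dbm > rx_overload)
            printf("Too hot: add attenuation.\n");
        else if (rx_dbm < rx_sensitivity)
            printf("Too weak: remove attenuation.\n");
        else
            printf("Within the assumed RX window (%.1f to %.1f dBm).\n",
                   rx_sensitivity, rx_overload);
        return 0;
    }

A light meter, as suggested above, replaces the guessed numbers with measured ones, which is why it is worth having on hand.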
@mcpr5971 (a month ago)
I want the 2x 100g port just so I can push 100gb/s back to myself for the thrill of it. Do you have to bolt your chassis down to protect against the inertia of all those bits slamming into the receiver so quickly?
@nemesis851_ (a month ago)
Omg 😂 25 years ago I worked in the ILEC CO and we had OC-3, all the way down to 1 Meg (and less). Here today, we the general public can have OC3 in our hands at home 😊
@HupfderFloh (a month ago)
I think you're definitely getting into Level 2 territory here.
@cmdr_talikarni (a month ago)
I am just starting into the 2.5G realm, just don't have the need or equipment to relay data that fast on my LAN. 2.5G is a good spot since I will be getting 2G fiber WAN soon.
@Chris_miller192 (a month ago)
A guide / video on the experimental RDMA SMB server that you mentioned would be lovely.
@henlego (a month ago)
Now that you have 100G, may as well setup a lustre volume
@lolmao500 (a month ago)
Me getting 500mbps internet. Me after getting the new internet: realizes that my computer, router and wifi can't handle it and can handle at most 100 mbps... goddamn it
@adampope5107 (a month ago)
Most likely your computer and switch can handle it fine and it's just your router being the choke point unless you have a seriously old computer and switch.
@whohan779 (a month ago)
​@@adampope5107 Yeah; even Desktop PCs from 2004 having mainboards such as the Gigabyte 915PM-ILR come with Gigabit/s Ethernet. I'd be seriously embarrassed to not have that as a minimum when such boards regularly end up in scrapyards.
@danmerillat (a month ago)
@@whohan779 he mentioned wifi, so I'm guessing not a hard link.
@theforthdoctor7872 (a month ago)
For a moment I thought you had discovered a huge lost infocom text adventure.
@tomhollins5303 (a month ago)
That switch @15:52 Definitely an IT cabling job.
@jttech44 (a month ago)
Oh mannn yeah, FEC bit me in the ass to the tune of like 4 hours on those XG switches. It is fixable in any condition, provided you actually have control over FEC on both ends... it's just a pain, and I'd highly recommend just using what Ubiquiti uses (Base-R aka FEC74 in the rest of the world) and not trying to get the ubiquiti setting to persist. I'd also recommend filing a ticket with UI to complain about their nonsensical, nonstandard FEC settings.
@kelownatechkid (a month ago)
Great for Ceph! I don't trust my important data on anything else at this point other than with erasure coding in a ceph cluster of machines running ECC lol
@WiihawkPL (a month ago)
having failed to get into even 10gig because of price i'm sure this is worth watching
@danmerillat (a month ago)
10g copper is expensive for a variety of reasons: hard to drive copper, used market has huge demand since it works (mostly) with existing cat6 wiring, highend boards come with 10g-baseT connections. It cost me far more for a 10gbase-T connection using existing wires than a 40gb-lr4 where I had to run a new fiber drop from my network closet to my office - including the cost of the fiber, keystones, patches. If you are in a position to run fiber it opens up a world of cheap secondhand highspeed networking gear. I spent under $400 total for 40g, and 100g would have only doubled that. (not exactly comparable, 100g point-to-point to my NAS vs 40g with a switch included)
@LinniageX (a month ago)
I just built an iSCSI setup using Hyper-V and Windows Server as a file server and it is actually really performant. I did have to fiddle with the iSCSI and NIC settings to get the optimal throughput but with 2 10G fiber channels per server, I have a very stable very fast fail-over cluster using built in software. Things have really changed.
@marktackman2886 (a month ago)
Network Engineers love Intel NICs, 5xx,7xx,8xx generations have been so reliable....snatched 4x x550 from AliExpress.....my home lab has 10% the capabilities but yet still cost 1k for 4x NIC, a 10g switch, and a 10g router. If you want to know the exact setup, lemme know, i'll post it.
@JamesHarr (27 days ago)
I hate to nit pick, but we run a lot of 100G-LR4 for short runs so we don't need multiple kinds of optics and the whole "burning out the other optic" isn't much of an issue anymore for decent quality transceivers.
@CubbyTech (a month ago)
100G 10km / LR optics are super common in my data centers! Used for 3ft to 15 miles! You only have to think about it when you get to ZR optics / 80km reach. This is also over single mode fiber.
@DavidtheSwarfer (a month ago)
so what started as a "2.5Gb workstation to server" direct link ended up as a 5 port 2.5Gb switch and 3 machines linked at 2.5Gb, with a 4th coming soon. Upshot is I have faster network at home than at work (everything still 1Gb there) . What a wonderful world we live in.
@max-du9hq (a month ago)
I like your calm no-nonsense presentation.
@russell2952 (a month ago)
A few times a year it'd be nice to have that at home, otherwise it's overkill. The most I normally do on my LAN is stream movies.
@JeremyBoothRossiter (a month ago)
Still struggling to get more than 18Gb/s out of my mellanox connectx 5 on windows 11, but only in 1 direction, iperf gives 85Gb up but only 18Gb down, very odd (works fine on a linux machine instead of the windows 11 box)
@smeezer (28 days ago)
16:18 Wendell about to get roasted by JayzTwoCents for cable management xD
@drvcrash (a month ago)
I spent days getting FEC to work between my MikroTik and UniFi doing 25Gb.
@SirDimpls (a month ago)
Don't know what most of this video is talking about, but the other day I discovered the reason my 2-year-old home file server was slow is because I used a really cheap 1m LAN cable that could only run at 100mbps. I replaced it with another cheap LAN cable and magically got a 1000mbps upgrade 😂
@bingo475 (29 days ago)
At 2:05 you mentioned the fiber optic cable colors; could you do a video in more detail on the colors and their uses? I work at a company that manufactures and tests fiber optic equipment. We use the yellow fiber for 100GbE and 800/1,600GbE, and the aqua fiber for 400/800GbE, but I have no idea why the 400/800GbE gets aqua fiber.
@AtanasPaunoff (a month ago)
I upgraded to 100G last year as well. I see you have bought ColorChip model B transceivers, which run a little hotter and have a lower temp threshold than model C, but I hope they will be fine :) I have both B and C but prefer to use C.
@fuzzydogdog (a month ago)
Wow, this was the exact video I needed considering I'm starting to plan my 100GbE buildout! Have you had any success configuring SMB-over-RDMA on a Linux host yet? I know ksmbd *technically* supports it, but I haven't seen any evidence of a successful Linux-Windows transfer.
@timothyvandyke9511 (a month ago)
Meanwhile my home router runs our home lab and my NAS is on 1gb Ethernet. Thankfully it's the only machine any work happens on, so nothing needs to talk outside it for large file use. I just use Docker for everything and run against the disks ☺️
@lemmonsinmyeyes (a month ago)
The thing with fibre optic tuning/distance settings: I wonder why the onboard controller could not just 1) detect that a cable was inserted; 2) have a 'self-tuning' mode where it starts with the lowest signal and gradually increases until the signal is detected on the other end; 3) detect when a cable is unplugged, set the interface to 'reset' mode and run the self-tuning again; and 4) to have this work when things are plugged/unplugged without power, there could be a physical switch that is manipulated upon insertion. I am a dumb idiot, so maybe this exists, or there are significant problems with doing the aforementioned.
@larslrs7234 (a month ago)
"Even for Windows systems. You just put a NIC in and it works." Some UDP stats please.
@MatthewSmithx (a month ago)
Technically InfiniBand is an IBTA technology and there used to be several vendors who implemented it. Ironically, Intel was one of the founding members of the IBTA but abandoned it for Omni-Path.
@kirksteinklauber260 (a month ago)
You should consider the Mikrotik CRS5xx switch series. They support 100 Gbps, are affordable, and have tons of enterprise features!
@pleappleappleap (26 days ago)
I've run into compatibility issues with transceivers even on 10Gbps. Especially with Aruba.
@9072997 (a month ago)
My experience with Intel NICs is that they are excessively picky about SFPs and DACs. Admittedly though, I'm dealing in "old" 10g stuff. How is your experience with getting the card to accept non-Intel SFPs?
@ByteMeCompletely (a month ago)
I bought a 2.5G switch nearly a year ago. It was reasonable cost. This 100G stuff is very expensive.
@locusm (a month ago)
Keen to see a video looking at Xinnor, having just started down the SPDK route myself. Even better if it's done in the context of Proxmox and virtualisation in general.
@bcredeur97 (a month ago)
Use RDMA at work with 25G connections for a storage cluster. It helps even for that! It's pretty much the only way to get ALL the IOPS from what I've found. Wish Ceph would use it :/ Not much in the open source storage clustering world that can go super fast on a small scale
@spicybaguette7706 (a month ago)
Curious about Gluster, I believe it supports RDMA but only for FUSE clients
@wasab1tch (a month ago)
upgrading to 100gig networking just because you can. Bruh I needed this in my life
@xanderlander8989 (a month ago)
For me Hyper-V is not a VMware competitor. It has no USB passthrough. So I'm going with VirtualBox.
@frankwalder3608 (a month ago)
Even on eBay, those Dell 5200 series switches are around $4,000! There is no way I will need that kind of bandwidth. A lot of things on my LAN don't even have 10 gig capacity. So after spending four grand on a switch, and realizing that my local DNS still doesn't work, then what am I going to do?! Just like Patrick from STH, you show a lot of interesting equipment that I can't afford, and would have no use for if I could.
@deimosian (a month ago)
Connecting to each socket is old hat; I have an R730 with an FM10840 2x100Gb card which connects to an x16 slot of each CPU via an extra cabled card.
@killroy713 (a month ago)
Been playing a lot with 100G at work on a few Pure FB //S systems; it's crazy to see 100 terabyte migrations happen over a lunch break.
@corporealexistence9467 (a month ago)
Hi Wendell and team. Might I ask why Pop!_OS in the lab? I have not played with Linux for years and it surprised me, which is why I ask. Thank you for your great videos!
@Level1Techs (a month ago)
installed a system to keep an eye on the state of pop in case Gordon has a question about it
@DrathVader (a month ago)
Man, I wish energy wasn't so expensive. I'm on the fence between 10G and 2.5G because the 10G adapter eats up an extra 10W at idle and I'm not sure I want to pay for that. 100G is simply unattainable.
@TechnomancerStream (a month ago)
Me over here with an old Gbit server and praying to get 2.5
@L0rdLogan (a month ago)
Here I am considering upgrading to 2.5gig, currently on 1 gig for LAN networking
@jensmander1223 (a month ago)
I moved from 4x1gbit (SMB multichannel) to 1x10gbit recently, and most of the time I still can't saturate the link because of slow spinning-rust disks. The ones in mergerfs provide the max of what a single HDD can read/write, which for a Seagate X18 is around 280 MB/s, with 10GbE being able to do around 1 GB/s. The array of Exos in the NAS on an Areca RAID controller would do around 1 GB/s (at least in quickly done benchmarks). ;-D Guess I'm saying I wouldn't even need 25g at the moment. At my job we have 10GbE as well as 25 for the newer vSphere clusters; nobody is currently thinking about 100GbE there (for the regular server ethernet).
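That bottleneck is easy to quantify: 10GbE is roughly 1.25 GB/s of raw line rate, a bit less after protocol overhead, so a single ~280 MB/s disk uses only about a quarter of it. A quick sketch (the efficiency factor and drive rate are approximations):

    #include <stdio.h>

    // How many ~280 MB/s spinning disks does it take to fill a 10GbE link?
    int main(void) {
        const double link_gbps  = 10.0;
        const double efficiency = 0.94;   // rough TCP/IP + Ethernet overhead assumption
        const double disk_mbps  = 280.0;  // sequential rate of one drive (from the comment)

        double usable_mb_s = link_gbps * 1000.0 / 8.0 * efficiency;  // ~1175 MB/s
        double disks       = usable_mb_s / disk_mbps;

        printf("Usable link throughput: ~%.0f MB/s\n", usable_mb_s);
        printf("Disks needed in parallel to saturate it: ~%.1f\n", disks);  // ~4.2
        return 0;
    }

So it takes four to five such drives reading or writing in parallel before the 10GbE link, rather than the storage, becomes the limit.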
@postnick (a month ago)
I'm over here happy to have some SFP+ 10G cards and I can't crack 3.5 gig. I assume it's my PCIE Slot or my budget card but still i'm jelly!
@bentomo (a month ago)
We don't do this because we can, but because we MUST!
@LanceThumping (a month ago)
I'm jealous of all your toys. I don't have near the performance of storage to even come close to being able to use a single 100G lane. Hell I don't think I can saturate my 10G lane that I can't even use atm because of reasons.
@zyxwvutsrqponmlkh (a month ago)
Here I am sitting at 40g... and using 100g switches, because the NICs for 40g are dirt cheap but 25g and 100g are still expensive.