Why are 25GbE and 40GbE not a THING for Home and Smaller Businesses?

  8,559 views

NASCompares

1 day ago

Comments: 106
@Yandarval 4 months ago
In addition to what Robbie stated, the machines at both ends have to be tuned correctly to get anywhere near the throughput people expect. Higher-end networking needs proper tuning and real managed switches to achieve full throughput.
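To make the tuning point concrete: one classic knob is making sure the TCP window can cover the link's bandwidth-delay product, otherwise a single stream can't fill the pipe no matter how fast the NIC is. A minimal sketch of that arithmetic, with an assumed LAN round-trip time purely for illustration:

```python
# Rough bandwidth-delay product calculator (illustrative numbers, not a benchmark).
# To keep a link "full", the TCP window must cover bandwidth x round-trip time,
# which is one of the things end-host tuning (socket buffer limits) has to allow for.

def tcp_window_bytes(link_gbps: float, rtt_ms: float) -> float:
    """Minimum in-flight data (bytes) needed to saturate the link."""
    return (link_gbps * 1e9 / 8) * (rtt_ms / 1000)

for gbps in (1, 10, 25, 40):
    # 0.5 ms is an assumed LAN round-trip time for illustration.
    window_kib = tcp_window_bytes(gbps, 0.5) / 1024
    print(f"{gbps:>2} GbE @ 0.5 ms RTT needs ~{window_kib:,.0f} KiB in flight")
```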
@jeffcrowe4899 4 months ago
My 40Gb setup cost me less than 300 American dollars: 5 Mellanox CX314As + DAC cables. 3 Proxmox servers directly connected to a TrueNAS box, and the last port connects to a Dell PowerConnect N4032F. Do I get close to 40Gb? Hell no, I get closer to 15-16ish. It's all about having fun and tinkering with things!
@drpainjourney 4 months ago
Remember also: the higher the gigabit rate, the more power is being used. Many people overlook this real-world fact.
@nadtz 4 months ago
That depends on a number of factors: quite a few 25/40GbE cards use very little power, and 10GBASE-T switches generally use more power than SFP/QSFP switches, so it will depend on the exact hardware used.
@elalemanpaisa 4 months ago
So if you run 1 minute at 40GbE, is more power needed than if you use the connection for 40 minutes at one gig? No, dude, 40GbE is more energy efficient.
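For anyone who wants to sanity-check that claim, here is a rough joules-per-transfer comparison. The wattages are assumptions picked for illustration, not measured figures for any particular NIC, and idle draw outside the transfer window is ignored:

```python
# Back-of-the-envelope energy per transfer (all wattages are assumptions for illustration).
# A faster link draws more power while active, but finishes far sooner, so the
# joules spent per gigabyte moved can still come out lower.

ASSUMED_NIC_WATTS = {1: 1.0, 10: 4.0, 25: 8.0, 40: 12.0}  # assumed active draw per end
DATA_GB = 100  # size of the transfer

for gbps, watts in ASSUMED_NIC_WATTS.items():
    seconds = DATA_GB * 8 / gbps          # GB -> Gb, divided by line rate (ideal case)
    joules = watts * 2 * seconds          # both ends of the link drawing power
    print(f"{gbps:>2} GbE: {seconds:7.1f} s, ~{joules:8.0f} J for {DATA_GB} GB")
```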
@vladimir.smirnov 4 months ago
Newer chips are usually more power efficient and for some generations that power efficiency overcomes the drawback of higher power consumption (but not for all).
@eat.a.dick.google 4 months ago
@@nadtz They still use more power compared to 10Gb, and considerably more compared to lower speeds.
@eat.a.dick.google 4 months ago
@@vladimir.smirnov They still use more power, period.
@jeffnew1213 4 months ago
When I bought a Ubiquiti Pro Aggregation switch with four 25Gb SFP28 ports, it was a natural next step to put those ports to use. With two big Synology NASes and two new PowerEdge servers running virtual machines, it just made sense to use those ports to connect those servers to that storage. Fibre turned out to be easy to use and not any more expensive than copper. So, three years later, I am very happy with things and wish I had more 25Gb ports. If I had to do it over and 25Gb was as affordable as it is now, I would have most 10Gb hosts doing 25Gb instead.
@grizredford8407 4 months ago
Most common consumer motherboards have limited PCIe lanes; that's what stops the use of old Mellanox cards. 5GbE is about to hit the mainstream in new AMD motherboards with a new low-power 8126 chip to replace the usual 8125 chip, and Realtek is about to release a low-cost 5GbE switch chip. 10GbE won't hit the mainstream until PCIe Gen 4 is commonplace and 10 gig can be achieved using only one lane of Gen 4.
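A quick way to see the lane math behind that claim (usable per-lane rates approximated from the 128b/130b line coding; real-world figures are slightly lower once protocol overhead is included):

```python
# Rough per-lane PCIe throughput vs NIC line rate. Only the 128b/130b encoding is
# accounted for, so treat these as upper-bound estimates.

PCIE_GTS = {3: 8.0, 4: 16.0, 5: 32.0}   # transfers per second per lane, in GT/s

def lane_gbps(gen: int) -> float:
    return PCIE_GTS[gen] * 128 / 130     # 128b/130b line coding

for gen in (3, 4, 5):
    print(f"PCIe Gen{gen} x1 ≈ {lane_gbps(gen):5.2f} Gb/s "
          f"-> 10GbE fits: {lane_gbps(gen) >= 10}")
```

A Gen 3 lane tops out just under 8 Gb/s, which is why a single-lane 10GbE NIC only becomes practical once Gen 4 is the norm.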
@eat.a.dick.google 4 months ago
It's going to take longer than just PCIe 4 becoming commonplace before that happens. A cheaper and lower-power chipset will have to come first.
@berkertaskiran 3 months ago
Why can't you just use Gen 3 x4? Too many NVMe drives? I will have one NVMe drive and will be aiming for 10GbE with my Gen 3-limited mobo as my first server. I already have a Gen 5/4 PC (Gen 5 is only on the x16 slot), which I can always turn into another server later to take advantage. Did I miss something?
@mytech6779 4 months ago
25-100Gb tends to be used for aggregating trunk lines in large networks more than inside data centers (which internally tend to prefer specialized networks like InfiniBand and Fibre Channel rather than Ethernet).

Anyway, some years ago I did a few basic tests on my old home equipment and found that the CPU demand at the end points was about 1GHz of CPU per Gb of Ethernet on the sender side, somewhat less receiving. This was with consumer NICs that can't do much offloading, and offloading is actually a rather task-specific setup suited to specialized enterprise needs. The reason is that most of the CPU load is in dividing the data into packets at the higher application layers, and a low-layer NIC simply can't know enough about which data is which and what the application protocol wants to do with it. Simple stuff like packet checksums and the related resends could be offloaded, but that isn't a major portion of the workload.

This is part of why jumbo packets are beneficial, though nobody really took it far enough or set a reasonably firm size standard, so they became too much fuss to migrate to or maintain. (128 KiB would have been a good standard; instead we got a bunch of devices waffling around 7,000-9,000 octet limits.)

At the time I could only test single-threaded and was comparing 100Mb and 1Gb networks, so I can't say how this rule of thumb holds up with many slow cores (relative to the Ethernet speed). I'm sure changes in instructions per cycle have helped reduce this a bit in the last 10 years, but I don't see it changing too much. Tested using /dev/random and /dev/zero as sources with /dev/null as the sink to avoid the effects of drive read/write speed, first running it internally without the network components to get the base CPU loads of generating and sinking the zeros and random data so that could be subtracted out. Maybe with drives there could be some sort of DMA benefit, but I haven't noticed any obvious effect, likely because DMA has no effect on packet creation and reconstruction, though an offloading NIC may leverage DMA for large single-extent block device transfers (one of those special situations).
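As a rough illustration of the jumbo-frame point, here is the packet rate a host has to sustain to fill various links at a standard 1500-byte MTU versus a 9000-byte jumbo MTU (header overhead ignored, so treat these as order-of-magnitude figures only):

```python
# Why jumbo frames help: packets per second the host must process to fill a link.

def packets_per_second(link_gbps: float, mtu_bytes: int) -> float:
    return (link_gbps * 1e9 / 8) / mtu_bytes

for gbps in (1, 10, 25, 40):
    std = packets_per_second(gbps, 1500)
    jumbo = packets_per_second(gbps, 9000)
    print(f"{gbps:>2} GbE: {std:>12,.0f} pkt/s at 1500 MTU, "
          f"{jumbo:>12,.0f} pkt/s at 9000 MTU")
```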
@LordApophis100 4 months ago
Mikrotik has a great 8x25G + 2x100G switch for about $800, and on the 100G ports you can use breakout cables for another 8x25G. Used 25G and 100G equipment is quite cheap these days; you can get a 25G Mellanox card for less than $80. You can build a 25G/100G network with used gear for the price of buying 10G new.
@Galileocrafter 4 months ago
Just last month I scored an E810-XXVDA2 for 50 CHF!! Other deals are a bit more expensive, but that was an instant buy ^^
@espressomatic 4 months ago
I have two dual 25Gbe cards in two systems - they were very affordable. Unfortunately the switching equipment isn't, so I'm rocking 10Gbe with them. :)
@MarkWebbPhotography 4 months ago
Ran into the same issue 😂 But I’m ready for when someone makes an affordable switch
@eat.a.dick.google 4 months ago
You can get switches both used and new that are pretty inexpensive nowadays.
@Galileocrafter 4 months ago
Same here. I like the newer SFP28 cards because of PCIe 4.0 support, so they don't require a whole x4 or x8 link for full bandwidth.
@denvera1g1 4 months ago
40G adapters have been around for quite some time, and their price hasn't really gone down despite 400-800Gb being available. I think these companies realized that if they keep lowering the price to reflect the cost to manufacture, they're losing profits. You used to be able to get motherboards with 40G ports on them for ~$700 with 2 enterprise sockets and 12x DDR3 ECC slots (6 per socket), and what do we have today? About the same price for a single-socket consumer board with 4 DDR4/5 slots. Heck, 10 years ago you could pick up that dual socket with 40G networking used for less than $200, and you can pick up switches with 2x40G, 40x10G and 48x1G for $550 new, but lord forbid a single port is included on a $400 motherboard. Though dual 25G ports are getting more common. Maybe SFP+ 10G/QSFP+ 40G are just getting skipped over and we're going to see QSFP28 (4 lanes of 25G) on boards before we ever see QSFP+.
@berkertaskiran 3 months ago
It's just like SSD capacity not increasing. They realize a few TB is good enough for most, so why would they make 8+TB dirt cheap when most people then get the 1-2TB ones for even cheaper? Exactly the same in networking. Most would be fine with 10Gbit, so why make 100Gbit dirt cheap? Corporations already pay whatever they ask, so it doesn't matter to them. All of this is vastly slowing down technological progress. It looks like all of those companies are in some sort of agreement, so there's basically no competition to drive progress.
@eat.a.dick.google 3 months ago
@@berkertaskiran That made zero sense.
@oppressorable 4 months ago
All of that is true. It still doesn't stop me from lusting after a 100GbE home network for... two computers and a NAS 🤩
@rui1863 4 months ago
I just jumped on the 25GbE bandwagon from 10GbE. 40GbE is on the way out; 25GbE is what you want. It's more efficient and also scales up to 100GbE.
@javaman2883 4 months ago
With my 6-spinning-drive setup I can't exceed 180MB/s transfers within the machine itself. Would I like a 2.5GbE network? Yes. But is it necessary? Not really. The 1GbE is a bottleneck, but for normal usage you'd never notice; only when transferring many or large files is it even noticeable. Running Plex, Jellyfin, media servers, etc., you never even saturate gigabit. When I upgrade my NAS (DIY setup, reusing old business hardware) again, I will add an NVMe drive for cache as well as add 2.5GbE to it and my PC.
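A quick sanity check of that bottleneck claim, using an assumed ~6% protocol overhead (the exact figure varies with SMB/NFS settings):

```python
# Real-world ceiling of common links vs the commenter's ~180 MB/s array.
# Overheads are approximated, so treat these as ballpark numbers.

def link_mb_per_s(gbps: float, efficiency: float = 0.94) -> float:
    """Approximate usable payload rate in MB/s after protocol overhead (assumed ~6%)."""
    return gbps * 1000 / 8 * efficiency

ARRAY_MB_S = 180  # the 6-drive array's measured internal transfer rate

for gbps in (1, 2.5, 10):
    cap = link_mb_per_s(gbps)
    verdict = "link is the bottleneck" if cap < ARRAY_MB_S else "array is the bottleneck"
    print(f"{gbps:>4} GbE ≈ {cap:6.0f} MB/s -> {verdict}")
```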
@gustcol1 4 months ago
Good explanation, but I have a datacenter at home because I research big data and deep learning. 25Gbps networks are excellent when you work with NVMe drives, the right switches, and an operating system prepared for it. You need to configure a few things (MTU and more) so that everything works as you need it to.
@denvera1g1 4 months ago
Imagine if these ITX NAS boards came with one QSFP28 port rather than 4x 2.5Gbit ports and used that extra space for, say, another M.2 (it would be a tight fit, and PCIe probably can't be routed that way).
@sfalpha 4 months ago
It will require a CPU that has enough PCIe lanes (a mobile or desktop CPU, not an embedded one). I would say at least 12 PCIe 3.0 lanes from the CPU/chipset. 4x 2.5G ports require only 2 PCIe 3.0 lanes; QSFP28 for 100G requires 8 PCIe 3.0 lanes. Most embedded processors don't have that available; it would need to be a full desktop CPU. For example, the Intel N100 only has 9 lanes available, so those 8 lanes would need a PCIe switch to share with other peripherals.
@denvera1g1 4 months ago
@@sfalpha This is why I hate on the Atom-class processors like the N305 (for their price; otherwise they're nice products). The N305 should not have launched with the same $309 list price as the i3-1215U, which has something like 4x the PCIe bandwidth and 2x the RAM channels, and is based on a 4-core-complex design rather than the 2 core complexes of the N305 (though the 1215U only has 3 core complexes enabled, disabling one of the Atom core complexes for only 2P and 4E cores as opposed to 2P+8E). Also, it's not artificially limited to 16GB of RAM like the Atoms. Do you know how many DDR5 sticks I had to try to get my N100 to POST with more than 16GB of RAM? I should have less trouble getting 256GB working on the 1215U than 32GB on the N100.
@sfalpha 4 months ago
@@denvera1g1 I think it targets small NAS or router use at less than 2.5G throughput. The N100/N305 cannot handle more than 2x 10G ports.
@denvera1g1 4 months ago
@@sfalpha CPU-wise I'd be surprised if the N305 could handle a single 10G, but 4 lanes of 3.0 is enough for 30Gbps; if the board only gave the M.2 drives 1x each, you could get 4 M.2 slots and an 8-port SATA controller with the rest of the lanes. But there are boards like this with a 7840HS, which significantly increases performance: almost 4x the PCIe bandwidth (16 lanes of 4.0) and in theory up to 512GB of DDR5 support.
@DigitsUK 4 months ago
If you've only got 2 machines you don't need a switch - you can pick up used/refurb ConnectX-4 Lx dual 25GbE cards for about £180 each, and 25GbE DAC cables start at about £25 for 1m new. If you have more machines to connect, a Mikrotik CRS510-8XS-2XQ-IN will give you 8x 25GbE ports and 2x 100GbE ports for about £800. Yes, it's more expensive than 10GbE, but it's not as crazy as you suggest...
@famitory 4 months ago
Given the lack of price parity, I wonder what the price-to-performance ratio would be of a network that used pairs of LAGG'd 10 gig connections everywhere to create a "20 gig" network.
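Worth noting that LAGG/LACP usually won't give a single transfer 20 gig: most hash policies pin each flow to one member link, and only many parallel flows spread across both. A simplified sketch of that idea follows (not any vendor's actual hash algorithm; addresses and ports are made up for illustration):

```python
# Why a 2x10G LAG is not a "20 gig" pipe for one transfer: each flow (src/dst
# addresses and ports) is hashed onto a single member link.
import hashlib

LINKS = ["10G-link-A", "10G-link-B"]

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return LINKS[digest % len(LINKS)]

# A single large file copy is one flow, so every packet lands on the same link:
print(pick_link("192.168.1.10", "192.168.1.20", 51515, 445))

# Many parallel flows spread across both links, which is where LAG actually helps:
for port in range(51515, 51520):
    print(port, pick_link("192.168.1.10", "192.168.1.20", port, 445))
```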
@bernhardschuepbach4533 4 months ago
You can do 25Gbit with DAC (Direct Attach Copper) cables, just not over very long distances - only a few meters.
@eat.a.dick.google 4 months ago
There are multiple media types and none of them are expensive.
@JokerTheVlad 4 months ago
Guess I will stick with 10GbE max. Thanks for this video - so needed it.
@patrickcennon5617 4 months ago
Thanks!
@nascompares 4 months ago
Thanks for the donation, man. Just finishing up for the week (5pm here in the UK) and you put a spring in my step! Thanks for being brilliant!!
@1xXNimrodXx1 2 months ago
For 25 and 40GbE you can use copper as well: those passive DAC cables use around 0.2 watts, and they are twinax copper cables. So the statement at 5:15 isn't quite true, but I guess you were referring to the old copper patch cables.
@ronwatkins5775 4 months ago
My 48-port 10Gb switch includes 4x 25Gb ports, so I'm looking for an SSD NAS with 2x 25Gb ports. That should be able to support VMs running across the network.
@simon3121 4 months ago
I 100% agree, but also completely disagree. As a home user I want NVMe performance from my NAS without paying enterprise $. 40G Ethernet over Thunderbolt is a thing, so why can't I have that over the network? Market segmentation. Maybe it takes a company like Minisforum to show the big ones how it can be done.
@nascompares 4 months ago
Couple of things. 1) I love this comment, as you are totally right! 2) I definitely should have mentioned this! And 3) I cover exactly this in a review of the Zimacube Pro soon, plus a dedicated vid and the next Minisforum MS-01 vid. Sorry I missed it here, and thanks for keeping me on the straight and narrow.
@Galileocrafter 4 months ago
Thunderbolt Ethernet is not popular because the length of TB cables is very limited (shorter than DAC cables), and they are even more expensive than a fibre transceiver + cable. In short, you basically have a DAS when using TB Ethernet. And the support for it is spotty at best; you are better off just using SFP28.
@eat.a.dick.google 4 months ago
We're well past the point of paying Enterprise $ for 25 GbE. That was many years ago. You can pick up NICs dirt cheap and used switches are quite cheap too.
@Elemino 4 months ago
“Do you hear that?” Me: that’s a quiet network switch… You should hear some of the switches I own and the ones I’ve worked with.
@davelamont 4 months ago
So... what is the highest speed I can expect from my TVS874-H with 8 Toshiba 12TB drives in RAID 6? I'm running it on a 2.5GbE LAN, and I only have 1 of the 2 2.5GbE ports connected.
@vgololobov 1 month ago
What are you talking about? 40/56Gb cards like the Mellanox ConnectX-3 are £30-40, a 56Gb 36-port switch is £100, and optical transceivers for 100G are like £5. About performance: cards like that support RDMA, which reduces CPU usage to almost nothing.
@PlayingItWrong 4 months ago
Does direct-connect 40GbE bypass the worst of these excesses? If I have just my NAS and my workstation connected - maybe 2 workstations if all 3 have twin NIC cards (connected in a ring)?
@jaapkamstra9343 4 months ago
Well, I also think it's because quite a lot of people use a NAS with a laptop, and then 10GbE is the max. And yeah: 10GbE is about the max that you can do with copper, and not everyone is willing to figure out how fiber works. So yeah: it will be for an enthusiast minority for a while, I think. With USB4 and Cat8 that might change, though. For people that need more than 10GbE we will first see devices with multiple 10GbE connections; that would make more sense, I think.
@ryanmalone2681 4 months ago
I have a 25G network at the core and as an uplink to my media and homelab. I’ve never seen it use more than 5G.
@eat.a.dick.google 4 months ago
There is very little use for a small business of the size you're implying; no one there has infrastructure that would take advantage of it. For the higher end of small business and medium-sized business, this stuff is quite cheap nowadays.
@ChrisCebelenski 4 months ago
2.5G never really caught on, except in limited cases like Wi-Fi endpoints and the occasional laptop dongle or lower-end NAS. 10G is about where 1G was 10 years ago, but home and small business will still only just be able to make good use of it - word processing and Excel just won't need it. In my homelab I can max it out with Ceph and large file moves. 25G is still overkill for the hardware most home and SMB users have, and the price today doesn't justify it. As the hardware improves and makes 25G actually usable, it will move, just like 10G did, into the networks of those who can make use of it, like video creators. Today I go "up" the rack with 10G and across with 25G at the edges - ultimately I'd like to do 50G there, but that's not realistic right now. 25G moves the bottlenecks to the endpoints. Prediction for at least the next five years: 1G will go away, replaced with 2.5G; 5G still won't happen; and 10G will be the new standard for higher-end mobos and minis.
@espressomatic 4 months ago
10G is where 1G was 10 years ago? What? 10G is nowhere right now. 1G was already the de-facto common consumer standard 20 years ago. 10 years ago every consumer system with networking had 1G. Today you're hard pressed to find 2.5G and absolutely never find 10G on consumer systems.
@leexgx 4 months ago
2.5GbE is being used on a lot of ISP routers, and ONT ports (the fibre-to-RJ45 box before the router) are starting to have 2.5G, with newer ones being 1/2.5/5/10GbE ready.
@ChrisCebelenski 4 months ago
@@espressomatic Point taken, more than 10 years for sure - I can't remember when exactly we went from 100Mb to 1Gb, but it was a while ago. And consumers hardly saw the 100Mb stuff. I do remember 10Mb coax ethernet tho.
@ChrisCebelenski 4 months ago
@@leexgx Yeah, exactly - I think 2.5GbE is just a waypoint for now as it's "cheap enough". Four-port 2.5GbE cards are very inexpensive now, and mobos are starting to include it even at the consumer level. Of course, most consumers are probably using laptops rather than desktops, and there aren't usually any wired options without a dongle or docking port.
@leexgx 4 months ago
@@ChrisCebelenski 2.5GbE is also easy to implement over existing wiring; 5GbE, and even more so 10GbE, may require higher-grade Cat cable.
@waynetaylor2784 4 months ago
Thanks, I'll stay with my 40Gb home network, which doesn't use DACs everywhere - the house runs pure fibre, with Mellanox gear throughout.
@JonathanSwiftUK 4 months ago
I see what you did there 😜. 25Gb is for the data center, not home; even SMBs don't need it, though you certainly could use it. Price-wise it is out of the reach of home users, there aren't many switches (or weren't), and the PCIe requirements exceed home systems. If you have a hypervisor in an enterprise running 50 VMs, then 25Gb would be important. However, if that hypervisor's storage is remote, on a SAN, then 16Gb or 32Gb is needed for that - I've used those; they are storage HBAs with storage switches, not network traffic - but you could use iSCSI on a 25Gb network switch instead.
@berkertaskiran 3 months ago
What kind of a data center uses 25G? I can definitely see myself taking advantage of 100G, and last time I checked I didn't own a data center. I think 100G in a data center should be the absolute minimum, if they have any respect for themselves.
@JonathanSwiftUK 3 months ago
@@berkertaskiran Easy: colos - co-location, where you have your equipment in a shared data center, which is most people. Almost no businesses have their own data center; they share one with other customers, like Telehouse, or Global Switch where we had ours. We don't need 100Gb; 10Gb was fine for networking, and 25Gb would have been nicer, though I'd recommend 100Gb as it is more standard and I doubt you save much with 25Gb. And fibre storage uses 4Gb, 8Gb, 16Gb and 32Gb - which are entirely different switches.
@vk3fbab 4 months ago
Most people just use Ethernet for internet, and most people can't even get 100Mb for that. 5G mobile networks are starting to change that, but most internet applications cannot do gigabit speeds. So most people don't care for even 10GbE.
@eat.a.dick.google 4 months ago
None of that is based in reality.
@OVERKILL_PINBALL 4 months ago
Because they have not finished *MILKING* 2.5Gbps... which should have been the tech of 20 years ago...
@jasperverkroost 4 months ago
In Synology's case, they haven't even finished milking 1Gbps.
@O4KAST 4 months ago
Well, with 20 years you're pushing it, since the 1Gb protocol was invented at about that time. But realistically, 2.5G was supposed to be discontinued and considered outdated about 10 years ago - that I can stand behind. But really, everything is slowing down; progress in everything. You can still mostly use a PC from 10 years ago - now imagine telling that to someone in 2005.
@PatrickDKing 4 months ago
I've been saying this since way back when I was a kid, about CPU speeds. I remember even 20 years ago they could crank 2GHz CPUs up to 5GHz with extreme cooling, but even today you're hard-pressed to find many 5GHz CPUs. They're still milking the 3.0-4.0GHz processors.
@berkertaskiran 3 months ago
@@PatrickDKing For CPUs I would say that would be core count. Clock speed isn't really something you can rely on forever - it's like going 1,000 km/h on the ground: doable but not sustainable. For core count, however, we're seeing continuous adoption of more cores despite people always saying it's harder to work with more cores. Even games today can use 8 cores or more, when it used to be 1-2 cores.
@Phil-D83 4 months ago
Even for 10GbE the price is still a bit high. Realtek etc. need to make an inexpensive chip for it.
@eat.a.dick.google 4 months ago
Not really. People are just poor and entitled.
@Mr76Pontiac 4 months ago
I have three Dell 1U systems set up with Proxmox, a dedicated PC running TrueNAS, and a 48-port GigE fully managed switch. Other than not being able to transfer a DVD to any of the machines over GigE in under a second, I can't understand why a homelabber needs more than GigE. I can understand it for some video editing workloads, but not ALL homelabbers are doing that. I also have machines all over the house (three floors, several GigE switches in between because of drop points or systems that are needed there), running Cat5e all over the place. Never a problem (until a NIC starts broadcast-bombing the network). What high-bandwidth items NEED more than GigE? Don't discredit me for not WANTING 10GigE - I acknowledge the 10x speed - but I don't get WHY for a homelabber that isn't doing commercial-type work.
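For reference, here is the rough arithmetic on that DVD example, assuming a 4.7 GB single-layer image and ~94% of line rate being usable payload (real results depend on disks and protocol):

```python
# Time to copy one single-layer DVD image at various link speeds (idealized:
# the link is assumed to be the only bottleneck).

DVD_GB = 4.7

def seconds_to_copy(size_gb: float, link_gbps: float, efficiency: float = 0.94) -> float:
    return size_gb * 8 / (link_gbps * efficiency)

for gbps in (1, 2.5, 10, 25):
    print(f"{gbps:>4} GbE: ~{seconds_to_copy(DVD_GB, gbps):5.1f} s per DVD image")
```

At gigabit that works out to roughly 40 seconds per disc, which is exactly the gap the faster links close.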
@nadtz 4 months ago
I can't imagine still running 1 gig after using something faster. Do I 'need' faster? No. But when I'm transferring large files across my network, the last thing I want to be the bottleneck is the network itself.
@berkertaskiran 3 months ago
"Isn't doing commercial type work." Everyone does some kind of commercial work with their home servers - you do programming, host a website, or whatever. Even if you make a few bucks of return, that still makes it commercial. It's not all Home Assistant and Plex. For what I do, I'd probably saturate 100G, and even if I didn't, I'd definitely find something to do with 10G.
@Mr76Pontiac 3 months ago
@@berkertaskiran Sorry, but no. Not "everyone" does commercial work on their homelab. I certainly don't. I don't make one red cent, I don't offer services, and I don't do anything of the sort that would appease any person on the internet, for free or for pay. Period. However, I have a cluster running in the other room that's got over 500 gig of RAM, 100+ threads to toy with, and I can't even remember how much drive space between the three machines. To top it off, I also have 32TB (16 usable) of TrueNAS goodness. None of which is commercial. Personally, I'd rather not trust you, or anyone, to use my equipment for their own use. If I were going to do anything for commercial gain, I'd punt it to a VPC of some sort where they can deal with the DDoS, hardware, and internet connection availability. And given that this fictional setup goes offsite from my home, it's no longer a "home lab" by the very nature that it's not in my home.
@TheCynysterMind 4 months ago
I upgraded straight from 1GbE to 10GbE... and the cost was NOT cheap: $350 for a 5-port switch, $100 per NIC (I have 2), the $150 NIC for my Synology, and Cat8 cables. And yet I am lucky if I see 4Gb transfers (this is in a home office and the nodes are less than 10 feet from the switch). I should be seeing *bang on* 10Gb transfers for such short runs... so until I sort out that mess, higher speeds are NOT going to make it into my budget!
@eat.a.dick.google 4 months ago
That is cheap.
@drescherjm 4 months ago
I am not sure if I should laugh or cry when I see videos like this. The reason is that my company only upgraded their network to allow 1GbE in the last 5 years. For a long time I have wished I could get a 10GbE connection for my 200TB to 300TB servers, as 1GbE is a serious bottleneck and basically makes offsite backups to a different building in the same company take a ridiculous amount of time.
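The arithmetic behind that pain, assuming the link is the only bottleneck and ~94% of line rate is usable (a full, uncompressed copy; incremental backups would obviously move less):

```python
# How long a full 200 TB offsite copy takes at different link speeds.

def days_to_copy(size_tb: float, link_gbps: float, efficiency: float = 0.94) -> float:
    seconds = size_tb * 8e12 / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

for gbps in (1, 10, 25):
    print(f"{gbps:>2} GbE: ~{days_to_copy(200, gbps):5.1f} days for 200 TB")
```

At 1GbE that is roughly three weeks of continuous transfer, versus about two days at 10GbE.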
@denvera1g1 4 months ago
I lucked out at a corporate garage sale: a pair of switches with 6x 40G and 48x 10G ports for - get this - $40 per switch. I thought these were the MUCH older 4Gb/1Gb switches, and I wanted to cut my teeth with fiber and pluggables. Idle power, without anything plugged in, is 105W. VSP 7254XSQ.
@annebokma4637 4 months ago
Because for private and small business, 10Gb is usually enough and only just getting affordable. The next step will probably be 100Gb for these sectors in 5 to 10 years, if there is a use case. Why 100Gb? Because companies like BMD just launched a 2110 switch with 16x 10Gb and 2x 100Gb 😁
@vladimir.smirnov 4 months ago
You are slightly off with the pricing for entry level. If you go for used Mellanox ConnectX-4 NICs (MCX4121A-ACAT) you'll get a dual-port card for
@dab42bridges80 4 months ago
Cost, heat, cabling, noise.
@mvp_kryptonite 4 months ago
Pity I can’t tag Synology so they can watch this
@aznravensdrive5900 4 months ago
They milk businesses with high prices until businesses stop using the tech... then the price goes down for regular consumers. A lot of businesses are using 2.5GbE less now; that's why the price has come down for consumers and we're seeing 2.5GbE in routers, switches, etc. If businesses are still using a tech, the price will remain sky high.
@Sea_Jay 4 months ago
Because 100GbE or gtfo?
@MK-xc9to 4 months ago
I swapped my 10Gbit cards for 40Gbit QSFP+ cards (server grade, 2x dual-port for 35 euros + 16 euros for PCIe adapters): 2x HP 764285-B21 10/40Gb 2P 544+FLR QSFP InfiniBand IB FDR Adapter 764737-001 = 35 euros, and 2x PCIe x8 riser cards for HP FlexibleLOM 2-port GbE 331FLR 366FLR 544FLR 561FLR = 16 euros. Why so cheap? Because the HP NICs are in FlexibleLOM, an HP-specific format, but adapters to PCIe are available; the most expensive parts are the transceivers + the multi-fibre patch cable. I have an InfiniBand direct connect to my NAS = 2-3 GB/s speed. kzbin.info/www/bejne/a362nGaVoNCpgZY