40 Gig LAN - Why did I even do this...

34,733 views

Raid Owl

1 day ago

Definitely glad I did this...but I prob won't be telling my friends to do the same lol
Mellanox ConnectX-3
ASUS Hyper M.2 x16 Card (4x NVMe) - amzn.to/3Ry7Akj
QSFP+ Fiber Cable - amzn.to/3O442n8
-------------------------------------------------------------------------------------------
🛒 Amazon Shop - www.amazon.com/shop/raidowl
👕 Merch - / raidowl
-------------------------------------------------------------------------------------------
🔥 Check out this week's BEST DEALS in PC Gaming from Best Buy: shop-links.co/cgDzeydlH34
💰 Premium storage solutions from Samsung: shop-links.co/cgDzWiEKhB8
⚡ Keep your devices powered up with charging solutions from Anker: shop-links.co/cgDzZ755mwl
-------------------------------------------------------------------------------------------
Join the Discord: / discord
Become a Channel Member!
/ @raidowl
Support the channel on:
Patreon - / raidowl
Discord - bit.ly/3J53xYs
Paypal - bit.ly/3Fcrs5V
Affiliate Links:
Ryzen 9 5950x - amzn.to/3z29yko
Samsung 980 2TB - amzn.to/3myEa85
Logitech G513 - amzn.to/3sPS6yv
Logitech G703 - shop-links.co/cgVV8GQizYq
WD Ultrastar 12TB - amzn.to/3EvOPXc
My Studio Equipment:
Sony FX3 - shop-links.co/cgVV8HHF3mX / amzn.to/3qq4Jxl
Sony 24mm 1.4 GM -
Tascam DR-40x Audio Recorder - shop-links.co/cgVV8G3Xt0e
Rode NTG4+ Mic - amzn.to/3JuElLs
Atmos NinjaV - amzn.to/3Hi0ue1
Godox SL150 Light - amzn.to/3Es0Qg3
links.hostowl.net/
0:00 Intro
0:43 Why I'm Upgrading
1:11 Parts
2:03 Hardware installation
2:47 10 hours later...
4:58 It works! Speed tests
5:35 Another issue...
6:08 Final speed test
6:28 Final thoughts

Comments: 162
@thaimichaelkk - 1 year ago
You may want to check your cards for heat. Many of these cards expect a rack-mount case with high airflow; I believe your card requires 200 LFM of airflow at 55°C, which a desktop case does not provide. You can strap a fan onto the heatsink to provide the necessary cooling (Noctua has 40mm and 60mm models that should do the trick nicely; I'm currently waiting for two to come in). I have a Mellanox 100Gb NIC and a couple of Chelsio 40Gb NICs (I would go with the 100Gb in the future, even though my current switch only supports 40Gb), though they definitely need additional airflow - after 5 minutes you could cook a steak on them. The Mikrotik CRS326-24S+2Q+RM is a pretty nice switch to pair with them for connectivity.
@jamesbyronparker - 1 year ago
You probably want to run something in the private IP ranges on those NICs. The odds of it causing an issue long-term are low with just two IPs, but it's not great practice.
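A minimal sketch of what that could look like on Linux, assuming the 40G interface shows up as enp1s0 (check with `ip link`; the name is hypothetical) and picking the RFC 1918 range 10.40.0.0/24 instead of public 40.x.x.x space:
  # on the server
  ip addr add 10.40.0.1/24 dev enp1s0
  ip link set enp1s0 up
  # on the workstation
  ip addr add 10.40.0.2/24 dev enp1s0
  ip link set enp1s0 up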
@7073shea - 1 year ago
Thanks owl! The “transceivers gonna make me act up” bit had me dying
@dragonheadthing - 1 year ago
4:03 Thank you for showing the command you used and where you have it saved. Many times in a video where someone talks about a command, that's all they do: "I set up a config file to change that," and then that's all they say about it. They never show what the file looks like, leaving a Linux noob like me not learning anything at all.
@jeffer8762 - 1 year ago
I did a similar thing with 10Gbps home networking, but little did I know the speed was capped by my SSD.
@ryanbell85 - 1 year ago
NVMe drives are definitely the way to go.
@ejbully - 1 year ago
Spinning rust on ZFS... don't listen to fools who blindly say NVMe. Storage layout is important... there will almost certainly be a bottleneck somewhere - defeat it.
@ryanbell85 - 1 year ago
@@ejbully Why can't NVMe on ZFS be an option?
@ejbully - 1 year ago
@@ryanbell85 It is an option. I think it's better for caching than for data I/O, as you won't be able to achieve or maintain those advertised throughput speeds. Value-wise, identical spinning rust (7200 RPM or better) in mirrored, not-too-wide vdevs - preferably SAS drives with the correct JBOD controller - will net you great speeds, and your wallet will thank you. Standard disclaimer, of course: results will vary with your I/O workload.
@ryanbell85 - 1 year ago
@@ejbully "I think it's better for caching than for data I/O, as you won't be able to achieve or maintain those advertised throughput speeds." Are you familiar with the iSCSI and NFS protocols? Do you have any data to back up your claim that ZFS on NVMe is only suitable for caching? JBODs full of SAS drives definitely have their place, but you would be greatly mistaken if you think NVMe drives are only suitable for caching.
@jeffnew1213 - 1 year ago
I've been running 10Gbit for everything that I could pop a 10G card into for a good number of years. The better part of a decade, actually. I started with a Netgear 8-port 10G switch. A few years ago I replaced that with an off-lease Arista Networks 48-port 10G switch (loud, hot, and power hungry). Last year, I replaced that with the new Ubiquiti 10G aggregate switch. That device has four 25G ports. I have two 12th generation Dell PowerEdge servers running ESXi and two big Synology NASes, both of which are configured to, among lots of other things, house VMs. There are about 120 VMs on the newer of the two NASes, with replicas and some related stuff on the older box. Both of the PowerEdge servers and both NASes have Mellanox 25G cards in them with OM3 fibre in-between. ESXi and Synology's DiskStation Manager both recognize the Mellanox cards out of the box. So, now, I have a mix of 1G, 10G and 25G running in the old home lab. Performance is fine and things generally run coolly. Disk latency for VMs is very low.
@JasonPVermeulen - 1 year ago
For this use case - working off your server - and at that price, this is definitely a worthy upgrade. On the topic of things people generally consider overkill: maybe a home-built router next, one that's low on energy consumption but still able to route a WireGuard VPN at a minimum of 1 gig? More and more people and places are getting fiber connections (I know you have an awesome Ubiquiti setup, but it could be a fun project with some old server gear).
@RaidOwl - 1 year ago
Heck yeah man I’m always looking for projects. Many of the things I do aren’t necessarily the ‘best’ or even useful for most people but at least it’s fun lol.
@JasonPVermeulen - 1 year ago
@@RaidOwl Well, at least your videos are really inspiring, and the way you explain the material with humor makes it easily digestible and "honest," unlike some YouTubers who put a "fake-sauce" layer on their videos. Keep up the good work!!
@markpoint1351 - 1 year ago
Don't know if I'd do this lol... but thanks to your videos I think about networking more and more!!! Keep the videos coming!!!
@esra_erimez - 6 months ago
I can't wait to try this myself. I ordered some ConnectX-3 Pro EN cards.
@ntgm20 - 1 year ago
Crontab to make the setting persistent - that's also how I keep my MSI B560M PRO set so it can wake-on-LAN. I did a short video on it too.
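For reference, a sketch of that crontab approach on Linux (the interface name enp1s0 and the exact settings are hypothetical - substitute whatever command your card needs). Run `crontab -e` as root and add:
  @reboot /usr/sbin/ip link set enp1s0 mtu 9000
  # or, for the wake-on-LAN case mentioned above:
  @reboot /usr/sbin/ethtool -s enp1s0 wol g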
@tinkersmusings - 1 year ago
I run a Brocade ICX6610 as my main rack switch. I love that it supports 1Gb, 10Gb, and 40Gb all in one. I also run a Mellanox SX6036 as my 40Gb switch; it supports both Ethernet (with a license) and InfiniBand through VPI mode, and you can assign which ports are Ethernet and which are InfiniBand. Both are killer switches, and I connect the SX6036 back to the Brocade via two of the 40GbE connections. Most of the machines in my rack now support either 40Gb Ethernet or 40/56Gb InfiniBand. I have yet to run 40Gb lines throughout the house, though. However, with 36 ports available, the sky is the limit!
@DavidVincentSSM - 9 months ago
Do you know what the cost of the Ethernet license would be?
@tinkersmusings - 9 months ago
@@DavidVincentSSM I'm not sure NVIDIA still sells licenses for this switch, but there's good info on ServeTheHome about the SX6036.
@YHK_YT - 1 year ago
40Gb/s is actually at least 1.4x faster than 10Gb/s
@RaidOwl - 1 year ago
🤯🤯🤯
@Alan.livingston - 1 year ago
Doing shit just because you can is a perfectly valid use case. Your home lab is exactly for this kind of thought project.
@Nathan_Mash - 1 year ago
I hope you and your servers stay nice and cool during this heatwave.
@asmi06 - 1 year ago
You've gotta try getting your hands on Mikrotik's flagship 100G gear for even more insanity 😜 I'm hopelessly behind, as I just recently upgraded to a Netgate 6100 and a 10G core switch, with leaf switches still at 2.5G (I can't afford a complete upgrade in one go, so I have to do it in stages). I plan to buy a few mini PCs with the 5900HX CPU and 64GB of RAM to build a MicroK8s Kubernetes cluster - probably on top of a Proxmox cluster to make administration easier.
@geesharp6637 - 3 months ago
100G, nah. Skip that and just add a 0. Go for 400G. 😜
@jonathanhellewell2756 - 1 year ago
... crawling around your attic in Houston during summer... that's dedication...
@RaidOwl - 1 year ago
I was up there for like 10 min and I was dripping by the end...crazy
@dmmikerpg - 1 year ago
I have it in my setup. Like you, it's nothing crazy, just host-to-host; namely from my TrueNAS system to the backup NAS.
@bopal93 - 10 months ago
Love your humour
@bradbeckett - 1 month ago
40 gigE + Thunderbolt FTW!
@ted-b - 1 year ago
Oh it's all fun and games until one of those fast packets has someone's eye out!
@marcin_karwinski - 1 year ago
Frankly, since you're not doing any switching between the devices and instead opted for direct-attached fibre, I'd say go with IB instead... IB typically nets better latencies at these higher speeds, and for direct access - as in working off the network disk in production - that might improve the perceived speed in typical use. Of course, it might not change much if you're only uploading/downloading stuff to/from the server before working locally and then pushing results back to the storage server, since then burst throughput is what you need and IB may not deliver any increase given the medium's and tech's maximum speeds. On the other hand, SMB/CIFS can also be a limiting factor in your setup; on some (CPU-bottlenecked) hardware, switching to iSCSI could benefit you more thanks to fewer abstraction layers between the client and the disks in the storage machine...
@fanshaw - 4 months ago
I've got the Chelsio 40G cards with TrueNAS. 25G SFP28 is probably a better option for home than QSFP+, which is x4 cabling. It all runs very hot, but if you have more than one NVMe SSD, 10G won't cut it. Either get a proper server chassis, or at least use something in a standard case that you can pack with fans - those SSDs don't run cool either. Don't forget you'll need to exhaust the whole thing somewhere; putting it in a cupboard will probably end badly. Also bear in mind that transceivers are usually tailored to the kit they plug into, so you may not get away with a cheap cable off eBay if you don't have a common setup.
@GanetUK - 1 year ago
Which edition of Windows are you using? As I understand it, RDMA helps speeds once you get to 10G+ on Windows, and it's only available in Enterprise edition or Pro for Workstations (that's why I upgraded to Enterprise).
@godelrt - 1 year ago
Next video: "I did it again! 100 gig baby!" Would I recommend it? NO! Lol, nice vid!
@paulotabim1756 - 1 year ago
I use ConnectX-3 Pro cards between Linux machines and a Mikrotik CRS326-24S+4Q+RM, and could achieve transfer rates of 33-37Gb/s between directly connected stations. Did you verify the specs of the PCIe slots you used? To achieve 40Gb/s they must be PCIe 3.0 x8; PCIe 2.0 x8 will limit you to about 26Gb/s.
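A quick way to verify the negotiated link on Linux (the PCI address 04:00.0 is hypothetical - find yours with `lspci | grep Mellanox`):
  sudo lspci -s 04:00.0 -vv | grep -E 'LnkCap|LnkSta'
  # LnkSta should show "Speed 8GT/s, Width x8" (PCIe 3.0 x8) for full 40Gb/s;
  # "Speed 5GT/s" (PCIe 2.0) at x8 tops out around 26Gb/s, as noted above.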
@Darkk6969 - 1 year ago
I need to point out that iperf3 is single-threaded while iperf is multi-threaded, which makes a difference in throughput. It's not a wide margin, but I figured it's the best way to saturate that link.
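A minimal comparison sketch (the address 10.40.0.1 is hypothetical; iperf3's -P opens parallel streams but older iperf3 versions still service them on a single thread, while iperf2 spreads them across threads):
  iperf -s                     # iperf2 server
  iperf -c 10.40.0.1 -P 4      # iperf2 client, 4 parallel streams
  iperf3 -s                    # iperf3 server
  iperf3 -c 10.40.0.1 -P 4     # iperf3 client, 4 parallel streams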
@UncleBoobs - 1 year ago
I'm doing this with the cards in InfiniBand mode, using the IP-over-InfiniBand protocol (IPoIB) and running OpenSM as the subnet manager; I'm getting the full 40G speeds this way.
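A rough sketch of that setup on a Debian/Ubuntu-style system (package, service, and interface names vary by distro; ib0 is the typical IPoIB interface name, and the addressing is hypothetical):
  sudo apt install opensm infiniband-diags
  sudo systemctl enable --now opensm      # exactly one subnet manager per fabric
  ip link show ib0                        # IPoIB interface appears as ibX
  sudo ip addr add 10.40.0.1/24 dev ib0
  sudo ip link set ib0 up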
@IonSen06 - 1 year ago
Hey, quick question: do you have to order the Mellanox QSFP+ cable, or will a Cisco QSFP+ cable work?
@shephusted2714 - 1 year ago
You should go to 100GbE - the procedure is mostly the same and the price is not all that much more. Mikrotik has nice 100G switches now, too. 2023 will see more SMB and SOHO setups go to 100GbE in lieu of 25/40. You can also get breakout cables that split 100G into 4x 25GbE - a major time saver for people who move a lot of big data around, and it lowers cluster overhead as well.
@XDarkstarXUnknownUnderverse - 1 year ago
I love Mikrotik! They have so much value and flexibility!
@timramich - 1 year ago
100 gig is still too expensive if you want real enterprise switching.
@shephusted2714 - 1 year ago
@@timramich A 100G Mikrotik switch is less than 100 bucks now - compare cost per port, 2.5G vs. 100G, and you'll see 100G is actually cheap. Don't leave all that performance on the table.
@timramich - 1 year ago
@@shephusted2714 Less than one hundred dollars? No.
@shephusted2714 - 1 year ago
@@timramich I meant to say 800 - sorry. Per port, 100G is still a bargain compared to 2.5G, and you can use 25G breakout cables. Try eBay - lots of surplus and refurbished fiber cards. It's the way to go for SMB, and if you value your time.
@prodeous - 1 year ago
I'm slowly working on getting my 10Gb setup... but 40 being 5x faster... hmmm... lol. Jokes aside, thanks for sharing; seems like I'll stick to 10Gb for now. Though I have dual-port 10Gb cards, so maybe I should try a 20Gb setup. I know Unix/Linux/etc. have that capability, but Windows 10 Pro doesn't... any recommendations on how to link the two ports together?
@mpsii - 1 year ago
Would like to know if you could run the cards in InfiniBand mode and see what's involved with that. Totally nerded out on this video.
@sashalexander7750 - 1 year ago
Here is a good switch for a 40G/10G setup: the 48-port Brocade ICX6610.
@TorBruheim - 1 year ago
My recommendation, described as 4 important things to prepare before you use 40GbE: 1) enough PCIe lanes; 2) a motherboard with a typical server chipset; 3) don't use an Apple Mac system; 4) in Windows, set high priority to background services instead of applications. Good luck!
@jackofalltrades4627 - 1 year ago
Thanks for making this video. Did your feet itch after being in that insulation?
@RaidOwl - 1 year ago
Nah but I got some on my arms and that sucked
@computersales - 5 months ago
Crazy to think 100Gb is becoming more common in home labs now and 10Gb can borderline be found in the trash. 😅
@MK-xc9to - 5 months ago
Budget option: HP ConnectX-3 Pro cards (HP 764285-B21 10/40Gb 2P 544+FLR QSFP InfiniBand IB FDR). I paid 27 euros each for the first two, and now they're down to 18, so I bought another two as spares. They need an adapter from LOM to PCIe - that's why they're cheap; the adapter costs 8-10 euros (PCIe x8 riser card for HP FlexibleLOM 2-port GbE 331FLR 366FLR 544FLR 561FLR) - and you get the Pro version of the Mellanox card, i.e. RoCE v2. Besides, TrueNAS Scale supports InfiniBand now, and so does Windows 11 Pro, so you can actually use it; it's not that much faster, but the latency is way lower. I get about 1-2 GB/s with a 4x 4TB NVMe Z1 array, ~500MB/s with HDDs, and way less with smaller files (as usual).
@MM-vl8ic - 1 year ago
I've been using these for a few years... look into running both ports on the cards, with auto-share RDMA/SMB... VPI should let you set the cards to 56Gb/s Ethernet... As a test I set up two 100GB RAM disks, and the speeds were really entertaining... Benchmarking NVMe Gen3 was only a tick slower than the network...
@vincewolpert6166 - 1 year ago
I always buy more hardware to justify my prior purchases.
@seanthenetworkguy8024 - 1 year ago
What server rack case was that? I'm in the market, but I keep finding cases that are either way too expensive or don't meet my needs.
@moellerjon - 1 year ago
Seems like you'd get better speeds with less overhead doing Thunderbolt direct-attached storage over optical.
@charlesshoults5926 - 1 year ago
I'm a little late to the game on this thread, but I've done something similar. In my home office I have two Unraid servers and two Windows 11 PCs. Each of these endpoints has a Mellanox ConnectX-3 card installed, connected to a CentOS system acting as a router. While it works, data transfer rates are nowhere near the rated speed of the cards and DAC cables I'm using. Transferring from and to NVMe drives, I get about 5Gbps. A synthetic iperf3 test, Linux to Linux, shows about 25Gbps of bandwidth.
@RandomTechWZ - 1 year ago
That ASUS Hyper card is so nice and well worth the money.
@RaidOwl - 1 year ago
Loving it so far!
@Nelevita - 1 year ago
I can give you two tips for your 40Gbit network cards. 1) Use NFS for file transfer; it's easy to activate in Windows - the only catch is that the drive mounts must be redone at every restart, e.g. as a startup task. 2) If you really, really, really need SMB on your LAN, use the Pro for Workstations edition of Windows and use SMB Direct/Multichannel, so the CPU doesn't get hit by the network traffic. There are some good tutorials out there, even for Linux.
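A minimal sketch of tip 1 on Windows (PowerShell as admin; the feature names are as I recall them on recent Windows 10/11 builds - verify with Get-WindowsOptionalFeature - and the server IP/path are hypothetical; the mount does need re-running at each logon, e.g. from a startup script, as noted above):
  Enable-WindowsOptionalFeature -Online -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure
  mount -o anon \\10.40.0.1\mnt\tank\share Z: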
@anwar.shamim - 1 year ago
It's great.
@JavierChaparroM - 1 year ago
Revisiting a video I once thought I'd never revisit, haha. I'm trying to set up a Proxmox cluster with network storage, and oddly enough, in 2023, 40Gbps gear is almost as cheap as 10Gbps gear.
@LampJustin - 1 year ago
V2.0 would be using SR-IOV to pass a virtual function through to the VM ;)
@ryanbell85 - 1 year ago
Crazy... I literally did this a month ago using the same 40Gb cards, linking a TrueNAS box (also version 11) and 2 Proxmox servers. It was such a long process to get mlxfwmanager working correctly and to set up Proxmox with static routes between the servers. I didn't have to pass the Mellanox card through in TrueNAS, but I get 32.5Gb/s in ETH mode. Let me know if I can help.
@ryanbell85 - 1 year ago
Essentially, Proxmox itself runs off a single SATA SSD, while all the VMs run over the 40Gb network on NVMe drives in TrueNAS via NFS.
@RaidOwl - 1 year ago
Impressive - was that 32.5Gb/s in iperf or with file transfers?
@ryanbell85 - 1 year ago
@@RaidOwl It was with iperf. I haven't tried a file transfer, but KDiskMark gets 3.6GB/s reads on a VM over this network.
@trakkasure - 1 year ago
@Ryan Bell: I've had this same configuration for the past 4 months; I put 40G cards in 3 servers. Downloads aren't needed to configure/flash these cards: use mstflint to flash the latest firmware and mstconfig to switch modes (there are more tools in the "mst" suite that do much more). I also get around 30Gb/s, but only directly from the host; I only get 22Gb/s from a VM. I believe that if I raised the MTU to 9000 I could get a lot more, but I'm having issues getting my switch (a Cisco 3016) to pass jumbo frames.
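A hedged sketch of that workflow on Linux with the mstflint package (the PCI address 04:00.0 and the firmware image name are hypothetical):
  lspci | grep Mellanox                        # find the card's PCI address
  sudo mstconfig -d 04:00.0 query              # show current settings
  sudo mstconfig -d 04:00.0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2   # 2 = Ethernet, 1 = InfiniBand
  sudo mstflint -d 04:00.0 -i fw-ConnectX3.bin burn             # flash a firmware image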
@ryanbell85 - 1 year ago
@@trakkasure I wish I could justify getting a 40GbE switch like that! A 4-port 40GbE switch would be plenty for me, if I could find one. I'll have to settle for my mesh network... at least for now :) MTU at 9000 helped a bit for me.
@jacobnoori - 1 year ago
Network speeds are the lift kits of IT nerds - the higher you go, the more you're compensating. This coming from somebody who recently went to 10G at home. 🤓
@RaidOwl - 1 year ago
lol I can agree with that
@Veyron640 - 9 months ago
You know... there is a saying, right? "There is NEVER enough speed." So... give me 40. Give me fuel, give me fire... ahem. The end.
@jamescox5638 - 10 months ago
I have a Windows server and a Juniper EX4300 switch with QSFP+ ports on the back. I have only seen them used in a stack configuration with another switch. Would I be able to buy one of these cards and use the switch's QSFP+ ports as a network interface, for a 40G connection to my server? I ask because I'm not sure the QSFP+ ports on my switch can be used as normal network ports like the others.
@liquidmobius - 1 year ago
Go big or go home!
@jonathanmayor3942 - 1 year ago
Next, I'll buy a 100GbE NIC.
@sashalexander7750 - 1 year ago
Why did you go with an AOC-type cable? 10 meters is not long enough to warrant an active optical cable.
@levelnine123 - 1 year ago
Try cross-flashing the latest firmware on both cards; that fixed some problems for me in the end. From what I remember, you only get the full 40GbE on a single port in IB mode.
@RaidOwl - 1 year ago
Yeah IB wouldn’t play nice with Proxmox tho. Def worth looking into at some point.
@RemyDMarquis - 1 year ago
I was really hoping you'd found a solution to my problem. *sigh* That 10Gb cap is so damn annoying. I've been trying to find a way to get it to work, but it just doesn't work with virtio for me. If you check the connection speed in the terminal (sorry, I forget which command), it shows the connection at 40Gb, but no matter what I do I can't get virtio to run at that speed. One tip: if you want the DHCP server to hand out IPs, do what I do - bridge a regular 1Gb LAN port with a port on the card, use that bridge in the VM, and connect your workstation to the same card port. Both machines get IPs from the DHCP server, and you don't have to worry about the IP hassle. Of course, you'll be limited to virtio's 10Gb, but it's the peace of mind I'll take until I find a solution for that 40Gb virtio nonsense. And please hear my advice and don't even bother trying InfiniBand. Yes, it's supposed to be a better implementation and runs at 56Gb, but don't believe anyone who says it's plug-and-play: IT IS NOT. Make any tiny adjustment to the network and it stops working until you reboot both machines. I even bought a Mellanox switch, and I've got to say, it's horrible. I don't know about modern implementations like the CX5 or CX6, but I don't believe it's as market-ready as it's believed to be. Just stick to regular old Ethernet.
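A rough sketch of that bridge tip in Proxmox's /etc/network/interfaces (interface names eno1 and enp65s0 are hypothetical; VMs attach their virtio NICs to vmbr1, the workstation plugs into the 40G port, and both get leases from the 1G LAN's DHCP):
  auto vmbr1
  iface vmbr1 inet manual
      bridge-ports eno1 enp65s0
      bridge-stp off
      bridge-fd 0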
@ajv_2089 - 1 year ago
Wouldn't SMB Multichannel also be able to accomplish these speeds?
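For checking whether SMB Multichannel is actually in play across multiple links, a quick PowerShell sketch (adapter support for RSS or RDMA is assumed; output depends on your NICs):
  Get-SmbServerConfiguration | Select EnableMultiChannel
  Get-SmbMultichannelConnection    # shows which NICs an active transfer is using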
@cyberjack - 1 year ago
Network speed can be limited by drive speed.
@marcoseiller8222 - 1 year ago
You have two NICs per card, right? Have you tried running them in parallel as a bonded NIC? In theory that should double the speed and would "only" require a second cable run. I think Proxmox has an option for that in the UI; no idea how to do it on Windows...
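A minimal sketch of such a bond in Proxmox's /etc/network/interfaces (interface names are hypothetical - the second port of a ConnectX-3 often shows up with a d1 suffix - and note a single TCP stream will still top out at one link's speed):
  auto bond0
  iface bond0 inet static
      address 10.40.0.1/24
      bond-slaves enp65s0 enp65s0d1
      bond-mode balance-rr     # round-robin is fine for a direct host-to-host link
      bond-miimon 100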
@RaidOwl - 1 year ago
Def worth looking into but that’s gonna be for future me lol
@SurfSailKayak - 1 year ago
@@RaidOwl Run that second one over to my house :p
@logan_kes - 1 year ago
I don't think consumer versions of Windows can do LAG/LACP, and true link aggregation usually requires a switch. It's also not great for single tasks - better for, say, two 40-gig streams than for a single 80-gig stream, which would still cap at 40 gig.
@Veyron640 - 9 months ago
I have a Ferrari... but would I want you to have one?? Absolutely not! lol. That's kind of the tone of this vid on the receiving end.
@Bwalston910 - 8 months ago
What about Thunderbolt 4 / USB4?
@noxlupi1 - 1 year ago
The Windows network stack is absolute BS, but with some adjustments you should be able to hit 35-37Gbit on that card. It's the same with 10Gbit: by default Windows only gives you about 3-4Gbit, but you can get to around 7-9Gbit with some tuning. It also depends on the version of Windows: Windows Server does way better than Home and Pro, and Pro for Workstations is better still if you have RDMA enabled on both ends. Good places to start: frame size / MTU (MTU 9000, i.e. jumbo frames, is a good idea when working with big files locally). Try turning Large Send Offload off - on some systems the feature is best left on, but on others it's a bottleneck. Interrupt moderation is also on by default; on some systems that's good, to avoid giving the network too much priority, but on a beefy system turning it off can often boost network performance significantly. If you want to see your card perform at nearly full blast, just boot the PC from an Ubuntu USB and run iperf to the BSD NAS.
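A hedged PowerShell sketch of those tweaks (run as admin; the adapter name "Ethernet 2" is hypothetical, and the DisplayName strings vary by driver - list them with Get-NetAdapterAdvancedProperty):
  Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
  Disable-NetAdapterLso -Name "Ethernet 2"    # turn off Large Send Offload
  Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"
  Get-NetAdapterRdma                          # verify RDMA state on both ends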
@ronaldronald8819 - 1 year ago
No, gonna stick to 10Gb; happy with that.
@RaidOwl - 1 year ago
Smart
@jumanjii1 - 1 year ago
I can't even get 10Gb to work on my LAN, let alone 40Gb.
@inderveerjohal7218 - 8 months ago
Any way to do this on a Mac, off an Unraid server?
@Maine307 - 1 year ago
WHAT ISP PROVIDES THAT KIND OF SPEED?? Here I am, just a few months into Starlink after 8 years of HughesNet... I now get a reliable 90-odd Mbps download and I feel like a king! How, and from whom, do you get that much speed?? Wow.
@RaidOwl - 1 year ago
That's not the speed through my ISP - that's just the speed I can get from one computer to another on my LAN.
@MrBrutalmetalhead - 1 year ago
That is awesome. It's getting so much cheaper now for 40G.
@logan_kes - 1 year ago
Ironically, it *used to* be even cheaper, around 2017... prices of used server gear have increased dramatically over the past 3 years. Look at the Linus Tech Tips video on a similar build from years ago - I want to say he got his for less than half the price they sell for now. Back then I got some of these same cards for like $35 each.
@MrBcole8888 - 1 year ago
Why didn't you just pop the other card into your Windows machine to change the mode permanently?
@nyanates - 1 year ago
Because you can.
@cdurkinz - 3 months ago
It's sad that you go from 10G to 40G and only double your speed. I'm just looking into this, and that seems to be normal, at least when using Windows file copy.
@RaidOwl - 3 months ago
Def diminishing returns
@notafbihoneypot8487 - 1 year ago
Based
@RaidOwl - 1 year ago
Tru
@SirHackaL0t. - 1 year ago
What made you use 40.x.x.x instead of 10.x.x.x?
@RaidOwl - 1 year ago
Easy to remember since it's 40G, and I wanted it easily distinguishable from my regular subnet.
@kingneutron1 - 6 months ago
@@RaidOwl Possibly others have mentioned this, but you'd be better off using the 10.40 or 172.16.40 private address ranges ;-)
@RaidOwl - 6 months ago
@@kingneutron1 yeah I've since changed it
@jonathanmayor3942 - 1 year ago
Good video, but please clean up the cables in your NAS. God, please pardon him.
@RaidOwl - 1 year ago
Lmao yeahhhh I’ve been doing some upgrades so cable management will come when that’s finished
@meteailesi - 1 year ago
The sound has some noise; you could clean up the audio.
@meteailesi - 1 year ago
Great content by the way :)
@SilentDecode - 1 year ago
Why the strange subnet of 44.0.0.x? Just why? I'm curious!
@RaidOwl - 1 year ago
Cuz I picked a random one for the sake of the video lol. No real reason.
@draskuul - 1 year ago
@@RaidOwl Please, please do yourself (and everyone else) a favor by using proper private IP space (192.168/16, 10/8, 172.16/12). I worked at a place in pre-internet days that used the SCO UNIX manual's examples - which turned out to be public IP space - for all its servers. Once we got internet-connected across the board, it was a real pain to deal with. Unknowing users may make the same mistake using your examples.
@RaidOwl - 1 year ago
@@draskuul Yeah, it's been updated since.
@psycl0ptic - 1 year ago
These cards are also no longer supported in VMware.
@ChristopherPuzey - 1 year ago
Why are you using public IP addresses on your LAN?
@RaidOwl - 1 year ago
Those have been changed to private since then
@jereviitanen6883 - 1 year ago
Why not use NFS?
@RaidOwl - 1 year ago
Worth a shot I guess.
@LampJustin - 1 year ago
But be sure to have a look at pNFS and NFS + RDMA...
@urzalukaskubicek9690 - 1 year ago
How come you have 40.0.0.x addresses on your local network?
@RaidOwl - 1 year ago
It’s my lucky number. But yeah it’s not in my subnet so I just picked something.
@urzalukaskubicek9690 - 1 year ago
@@RaidOwl I mean... you can do that? I don't understand networks - I'm more on the developer side of things, so networks are like dark magic to me :) I'm just surprised; I would expect something like the router to complain.
@RaidOwl - 1 year ago
@@urzalukaskubicek9690 Yeah it's because there is no router in that setup. It's just a direct connection between computers :)
@pg_usa - 2 months ago
@@RaidOwl 40 like 40Gbit... :D
@donny_bahama - 1 year ago
It seems like the only reason ANYONE does this is because they can. Transfer a file in .01 seconds vs .04 seconds? No thanks. It’s like modding a car for more, more, more horsepower when you almost never get to put all those horses to work. I, personally, wouldn’t spend the extra money on anything above 1Gb.
@RaidOwl - 1 year ago
I agree and even said that this is dumb even for my use case. This belongs in enterprise solutions where you NEED that bandwidth, not in a home setup.
@donny_bahama - 1 year ago
@@RaidOwl I know you did, and I didn't mean to be critical of you or this video. I understand and appreciate why you did it; I'm just saying that, in general, spending the money on anything above 1Gb is foolish. May as well spend it on hookers and blow…
@logan_kes - 1 year ago
@@wojtek-33 For home use I agree, which is why the shift is to 2.5G rather than 10G. However, 10G or more has its place: a single HDD can typically saturate a 1-gigabit link, which should show how slow 1 gig truly is. A single SSD - even a crappy SATA one in a NAS - could saturate four 1-gigabit links. So anyone wanting to host a VM on shared storage is gonna cry when they try to do it over 1 gig.
@XDarkstarXUnknownUnderverse - 1 year ago
My goal is 100Gb... because why not, and it's cheap (I use Mikrotik).
@minedustry - 1 year ago
Take my advice, I'm not using it.
@jfkastner - 1 year ago
40Gbps looks like a dead end; if you look at industry projections for port quantities sold, it's 10, 25, 100, 400.
@ryanbell85 - 1 year ago
The dual-port 40GbE cards are cheaper than dual-port 10GbE cards on eBay right now. Why pay more for a point-to-point connection?
@jfkastner - 1 year ago
@@ryanbell85 Many times when a manufacturer declares a product 'obsolete', 'legacy', etc., active driver development stops or slows to a crawl.
@ryanbell85 - 1 year ago
@@jfkastner Most home labs are full of unsupported, legacy, second-hand equipment. Figuring it out while staying on budget is just part of the fun.
@JasonsLabVideos - 1 year ago
You don't need 40 gig in your home lab. Show that you can saturate the 10G first.
@RaidOwl - 1 year ago
I agree. That was the whole point of the video lol
@repatch43 - 1 year ago
Did you actually watch the video?
@JasonsLabVideos - 1 year ago
@Raid Owl Exactly, kind of my point - I could have worded it better. You could do a 10G video and show that 40G isn't needed, too.
@ryanbell85 - 1 year ago
10G would have cost more.
@JasonsLabVideos - 1 year ago
@@ryanbell85 Two 10G cards: $50. One DAC cable: $20.
@R055LE.1 - 1 year ago
People need to stop saying "research." "I've been researching for hours." No, you haven't - you've been studying. You didn't run real scientific experiments with controls and variables; you read stuff online and flipped some switches. Most people have never conducted research in their lives. They study.
@RaidOwl - 1 year ago
I used the scientific method. I also had a lab coat on…and nothing else 😉
@R055LE.1 - 1 year ago
@@RaidOwl I heavily respect this reply 🤣
@npham1198 - 1 year ago
I would change that 40.x.x.x network to something in the RFC 1918 private address space!
@RaidOwl - 1 year ago
Good call
@SimonLally1975 - 1 year ago
Have you looked into the Mikrotik CRS326-24S+2Q+RM? I know it's a little on the pricey side. Or, if you're going to go for it, the 100Gbps Mikrotik CRS504-4XQ-IN, just for sh!ts and giggles. :)
@johnkristian - 10 months ago
Calling yourself a tech YouTuber while being COMPLETELY clueless about InfiniBand. LOL