INSANE Home Networking: Tips, Tricks and Installation

23,591 views

Digital Spaceport

1 day ago

Comments: 81
@spyrule 11 days ago
One other suggestion: put cable separators on your cable runs, and don't run fiber under Ethernet. If you have enough of it, the Ethernet's weight can cause microfractures in the fiber, and that's a royal pain to find. Keep fiber on one side and Ethernet on the other. I 3D-printed some clip-on brackets that create kind of a comb effect (but wider) to help create "sides" for the runs. It helped a lot, and it prevents this problem from happening.
@LithiumSolar 11 days ago
I like seeing fiber everywhere, yes!! Genuinely curious though what the actual use case is for 100Gb? Don't get me wrong, we all want much speed lol but... I had that CRS504 that I tested and ended up selling just because I couldn't find an actual use for it. Even with ram-based Chia plotting, a 10Gb link was sufficient. I suppose if I had better GPUs, but that's a very specific use case. Also, you're very gentle with your fiber. They're usually reinforced with kevlar strands around the fiber and I've found them to be quite difficult to actually damage unless you're purposely being negligent and just yanking it hard.
@bmyjacks 10 days ago
For me (and our HPC team), I run MPI across a bunch of servers, and it can easily use up all the bandwidth on a 100G network.
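To make that concrete, here is a minimal point-to-point bandwidth probe of the kind an MPI job runs all the time - a sketch that assumes mpi4py, NumPy, and an MPI implementation (e.g. Open MPI) are installed on both hosts; the host names and script name are placeholders:

```python
# Sketch: tiny MPI point-to-point bandwidth test between two ranks.
# Launch across two hosts with something like:
#   mpirun -np 2 --host nodeA,nodeB python bw_probe.py
import time

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

MSG_BYTES = 256 * 1024 * 1024          # 256 MiB per message
ITERS = 20
buf = np.ones(MSG_BYTES, dtype=np.uint8)

comm.Barrier()
start = time.perf_counter()
for _ in range(ITERS):
    if rank == 0:
        comm.Send([buf, MPI.BYTE], dest=1, tag=0)     # sender
    elif rank == 1:
        comm.Recv([buf, MPI.BYTE], source=0, tag=0)   # receiver
comm.Barrier()
elapsed = time.perf_counter() - start

if rank == 0:
    gbits = MSG_BYTES * ITERS * 8 / elapsed / 1e9
    print(f"~{gbits:.1f} Gbit/s one-way")
```

Sustained collectives (all-to-all, all-reduce) push even harder than this simple ping, which is why HPC fabrics saturate 100G links so easily.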
@DigitalSpaceport 10 days ago
The orange fiber I have in lots of runs is, um, highly abused and ill-handled (I'm only careful when filming, sadly), which is my fault. Plus the orange OM1/OM2 I buy is always bargain-priced or free. However, the OS2 I got in 15m strands was only $3 on Amazon! That oddly spawned the entire video. On 100GbE, latency is one major improvement, about 10x better than the 10GbE my MikroTik CSS and old Dell provide. When I hit a folder with like 2K things to thumbnail, it sails through them. That's one nice, common use case that benefits well. I've also started using block storage in TrueNAS Scale over iSCSI and I can't tell it's remote storage. I don't have a good A/B for that one, however, as my prior impressions of iSCSI had been formed around a super old Dell MD3000i, iirc. It was horrible and I just wrote it off for like 5 years. Hitting a peak transfer of ~60 Gbit/s on a single stream happened just recently for me and took unnatural levels of effort (and possibly luck), so pushing high-bandwidth transfers is likely not a great use case, aside from a storage headend that might get a lot of NFS traffic. A 40 Gbit switch can handle that excellently and max out every machine easily.
@DigitalSpaceport 10 days ago
IB or ETH?
@LithiumSolar 10 days ago
@@DigitalSpaceport OM1 bargain/free lol. You'd be surprised how much of that I still use on a daily basis, including in some new work, unfortunately. Cable plant replacement is... expensive and often not budgeted for 😐 Responsiveness of remote storage is definitely a nice win; I need to look at tuning that myself - but I'm limited by the I/O of the RAID1 (HDD) volume I'm running for my primary NAS storage pool.
@sfalpha 10 hours ago
Usually at home no one needs anything beyond 25Gb (or 100Gb QSFP uplinks), and you usually don't need any QSFP ports in your home at all. 25Gb is quite good for video editing over SAN/NAS, though. Don't use 4-lane (QSFP) ports in a client PC for whatever reason; they are really only meant for uplinks or servers (like in an office where multiple 10Gb/25Gb clients can take advantage of that speed and have the CPU power and proper NIC offloading to do so).
@garchafpv 12 days ago
If you are wondering: unless you plan on NOT buying enterprise-level equipment, you should really just plan on not having your server rack anywhere you can hear it. Anything enterprise you put in there, and some consumer-level rack-mount equipment, will be loud enough to bother you. I myself had to build a separate room for my homelab (the rack in the living room was OK until I mounted a Synology RackStation). Don't bother reading noise levels; just stick to the above and you will save yourself a lot of headaches.
@spyrule 11 days ago
It will also save your hearing later in life. Many old-school DC engineers have damaged hearing from working without hearing protection. It's a stupid way to lose your hearing.
@TSPhotoAtlanta 8 days ago
@@spyrule Unless we don’t know what we don’t know-ask the Curies
@spyrule 11 days ago
Because DACs don't have to convert electrical to optical, they always run much cooler than fiber. That is one of the main reasons DACs are generally better for in-rack cabling, unless you have a huge volume of in-rack connections to complete. I try to only use fiber for inter-rack, long-run connections.
@wiedapp 11 days ago
13:32 That was actually a thing back in the early 2000s for pre-modded PC cases; many had a side-mounted fan back then. Over the years it fell out of fashion, so to say, because it is said to mess with airflow in the case. Nowadays it is slowly coming back for enthusiasts, because they actually need to push air onto some add-in cards, like you do.
@drewlarson65 8 days ago
I hot-glue a fan to cards and don't bother with holes in the case. Hell, half the time I leave the side panels off even.
@omitsura 2 days ago
Fantastic content, btw!
@TheSasquatchjones 12 days ago
Great content as always
@DCTekkie 11 days ago
Very cool video! I found your channel via one of my viewers ~ Out of curiosity, do you have a use case for 40-100Gbit connections or is it a learning experience mostly?
@DigitalSpaceport 10 days ago
I might do a video on use cases for high-speed Ethernet traffic. It is not really about pushing high-bandwidth transfers - that is a pita to pull off. Latency on 100 and 40 gear is about 1/10 that of lower-speed switches. I'm not sure if it's the Mellanox switches I am using or a general thing, but that alone makes access to remote storage feel instantaneous. It's also very nice for remote iSCSI block storage for VM access in TrueNAS. That will be in a video soon.
@vladimir.smirnov 10 days ago
@@DigitalSpaceport In addition to iSCSI, you can also try NVMe-oF, and in particular NVMe over TCP. You can export a ZFS zvol as an NVMe device and connect to it with the nvme-fabrics driver and the nvme-cli tools. There is not a lot of information about NVMe-oF for homelabs out in the field, and I haven't seen anyone make a homelab-friendly comparison of the two (well, SAN vendors do, of course, but that is a different story, as that is about as far from a homelab as it gets).
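For anyone who wants to try it, here is a rough sketch of the target side using the in-kernel nvmet target over TCP - the zvol path, NQN, and IP below are placeholders, it assumes root plus `modprobe nvmet nvmet-tcp` beforehand, and the client then attaches with nvme-cli:

```python
# Sketch: export a ZFS zvol over NVMe/TCP with the Linux in-kernel nvmet target.
# Placeholders: the zvol path, the NQN, and the IP address. Run as root after
# `modprobe nvmet nvmet-tcp`.
from pathlib import Path

NQN = "nqn.2024-01.lab.example:zvol1"      # hypothetical subsystem NQN
ZVOL = "/dev/zvol/tank/blockvol"           # hypothetical zvol device path
ADDR, PORT = "192.168.1.10", "4420"        # target IP and standard NVMe/TCP port

cfg = Path("/sys/kernel/config/nvmet")

# 1. Create the subsystem; allow any host (acceptable on a trusted homelab LAN).
subsys = cfg / "subsystems" / NQN
subsys.mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1\n")

# 2. Namespace 1, backed by the zvol.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text(ZVOL + "\n")
(ns / "enable").write_text("1\n")

# 3. A TCP port, with the subsystem linked to it.
port = cfg / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("tcp\n")
(port / "addr_adrfam").write_text("ipv4\n")
(port / "addr_traddr").write_text(ADDR + "\n")
(port / "addr_trsvcid").write_text(PORT + "\n")
(port / "subsystems" / NQN).symlink_to(subsys)

print(f"Exported {ZVOL} as {NQN} on {ADDR}:{PORT}")
# Client side (nvme-cli):  nvme connect -t tcp -a 192.168.1.10 -s 4420 -n <NQN>
```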
@TSPhotoAtlanta 8 days ago
@@DigitalSpaceport well I had the same Q….
@michaelharbuck3314 9 days ago
I have about six 40Gb cards now but no switch yet. They work well. Sometimes the Pro/non-Pro cards are weird to configure and/or don't show all the config options. I like fiber, but the DAC cables always work, whereas the QSFP optics are finicky (especially the HP cards).
@mrteausaable 8 days ago
I have a dual-port 10G SFP+ NIC in my TrueNAS server and I cannot get over 1G on this card. Do you know what settings I need to change in TrueNAS, such as buffer size, MTU, etc.?
@michaelsegel8758 11 days ago
Interesting. In my home lab, I've got a couple of AMD Threadripper 24-core/48-thread machines in towers sitting on a wire shelving unit rather than an actual rack. I'd say skip the lower speeds, although 10GbE is a given these days, and go with a Mellanox 100GbE switch, and then the cards. The PC tower cases have space for a lot of fans and you have some different layout options to maximize airflow; you could even go with a mesh case. These could also become desktop or side workstations too. I have to wonder: if you went with better transceivers, would you still have the temp issues? I think the heatsinks are interesting, but how effective are they really if you don't already have good airflow? Definitely a good video. While you're looking at compute workloads, I'm looking at data fabrics...
@DigitalSpaceport 10 days ago
I do want to go with better optics, but $$ is an issue, and at least ruining one of these only costs a few bucks. I didn't show the 100GbE in the Fractal Meshify XL 2, but yeah, cooling that has been no issue. Fans all over the place.
@matejskerjanc7703 12 days ago
Have I been living under a rock? Last time I checked these (even 10Gb) were in the €100 range. Interesting prices!
@ethanwaldo1480 11 days ago
I've had good success using the cheap little 40mm Ender 3 (a popular 3D printer) fans to keep my Mellanox card temps down. You'll have to come up with your own custom mounting solution.
@cracklingice 12 days ago
I've got 2 ConnectX-4LX (HPE 640SFP28) and 1 ConnectX-3 (HPE 546SFP+). No switch though. I just have a SFP28 DAC between desktop and NAS, a SFP+ DAC between NAS and a really anemic machine I've been using to try out proxmox (i3-7100/16gb). I also got a pair of 10gig transceivers with one of my ConnectX4-LX cards so I got a fiber between the desktop and the anemic proxmox machine as well. I'd like to have a switch to simplify things but it's not likely I would find an unmanaged desktop switch with the ability to have up to 4 SFP28 and up to 8 gigabit base-t ports.
@vladimir.smirnov 12 days ago
Your best shot is to get 2 switches instead - one for QSFP28 or SFP28 (you can use breakout cables to get 4x SFP28 out of a single QSFP28) and use one of the tails to connect a 9-10 port gigabit switch to it (you just need a switch that has SFP+ uplink ports). You can probably look at MikroTik's offerings; they have something. It would be quite expensive though, and managed.
@cracklingice 12 days ago
@@vladimir.smirnov I mean the 9300 Cisco line has a switch that's like $250 ish on the used market right now, but it's a loud rack unit type and likely to stop getting updates in the relatively near future. That's why I just stick with having the PCs direct attached.
@vladimir.smirnov 12 days ago
@@cracklingice yeah... There are a few somewhat cheap 100G switches, but anything home-friendly costs roughly $600-800 (a brand-new MikroTik, or you can mod a Mellanox SN2100 to be quiet).
@cracklingice 12 days ago
@@vladimir.smirnov I don't need 100gig. I only have 25gig so I would be perfectly happy with that. (tho I do know 100gig is really quad 25gig)
@ewenchan1239 12 days ago
To simplify my networking, I've actually deployed fewer, larger servers rather than a bunch of smaller systems. That way, since my systems are running Proxmox, I can just use virtio-fs for VM-to-host communication and then have host-to-host communication go over a 100 Gbps IB link. Less total networking infrastructure overhead, overall.
@gustavocadena5089 11 days ago
🎉 great video
@michaelharbuck3314 9 days ago
For the side panel I just print out a pattern on paper and tape it to the side... then drill away.
@PrimalNaCl 11 days ago
I have some ConnectX-6 cards with Arista transceivers, running over OM5. The thing I'm running into is: what's the magic to get Proxmox VE to show the NIC as 100GbE in the VMs? All I'm getting is the VMs seeing 10GbE. Proxmox itself sees it as 100GbE, so why not the VMs?
@vladimir.smirnov 11 days ago
That depends on how Proxmox configures the VMs. By default, virtio-net will always display 10Gbps as the negotiated speed, but that doesn't mean it can't go faster. Try running iperf with a few streams and see if it actually does. If you really want your VMs to see the actual NIC, you need to figure out how to enable SR-IOV on your system and pass Virtual Functions to the VMs instead of virtualizing the NIC - if you follow NVIDIA's guide to set it up, you'll see virtual PCIe devices with "Virtual Function" in their names, and you can do a PCIe device passthrough of one of those to the VM. That way your guest OS can also utilize the NIC's hardware offloads, and reaching higher throughput may actually be easier.
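The VF-creation half of that is just a sysfs write; a minimal sketch, assuming root, an IOMMU enabled in firmware and on the kernel command line, and a placeholder interface name:

```python
# Sketch: create SR-IOV Virtual Functions on a NIC via sysfs and list them.
# Assumptions: run as root, IOMMU enabled (intel_iommu=on / amd_iommu=on),
# and the interface name is a placeholder for your own ConnectX port.
import subprocess
from pathlib import Path

IFACE = "enp1s0f0np0"    # hypothetical interface name
NUM_VFS = 4

numvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")

# The kernel refuses to change a non-zero VF count in place, so reset first.
numvfs.write_text("0\n")
numvfs.write_text(f"{NUM_VFS}\n")

# The new VFs appear as extra Mellanox PCIe functions ("Virtual Function" in
# lspci output); those are what you pass through to the VMs.
print(subprocess.run(["lspci", "-d", "15b3:"],        # 15b3 = Mellanox vendor ID
                     capture_output=True, text=True, check=True).stdout)
```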
@DigitalSpaceport 10 days ago
Proxmox (the host) will present the connections as 10gbit virtio, but they can go whatever speed. At a high level, you need to tune your RSS, set your window sizes, and use a different congestion-control algorithm. Just use the Debian references that are all over the net and you will be hitting 3-4 GB/s single-stream easily.
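For reference, this is the flavor of tuning being described - a sketch with common starting-point values for fat links (not settings from the video; BBR needs a reasonably recent kernel, and RSS/MTU are separate, per-NIC knobs):

```python
# Sketch: typical Linux TCP sysctls people start from on 40/100GbE hosts.
# Values are illustrative defaults to benchmark against, not gospel.
from pathlib import Path

SYSCTLS = {
    # Large socket buffers so one stream can keep a fat pipe full.
    "net/core/rmem_max": "268435456",
    "net/core/wmem_max": "268435456",
    "net/ipv4/tcp_rmem": "4096 87380 134217728",
    "net/ipv4/tcp_wmem": "4096 65536 134217728",
    # fq + BBR is a common congestion-control pairing for high-speed LANs.
    "net/core/default_qdisc": "fq",
    "net/ipv4/tcp_congestion_control": "bbr",
    # Deeper ingress backlog for bursts at high packet rates.
    "net/core/netdev_max_backlog": "250000",
}

for key, value in SYSCTLS.items():
    Path("/proc/sys", key).write_text(value + "\n")   # same effect as `sysctl -w`
    print(f"{key.replace('/', '.')} = {value}")

# Persist the dotted form in /etc/sysctl.d/ to survive reboots. RSS queue
# counts (ethtool -L) and jumbo MTU (ip link set ... mtu 9000) are set per NIC.
```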
@DigitalSpaceport 10 days ago
I should do an sr-iov video 🤔 Good topic idea.
@PrimalNaCl 9 days ago
@@vladimir.smirnov Interesting. I guess I figured the reported cap, 10GbE, would be the actual cap. I had already tried the SR-IOV thing - kernel args in GRUB and all. The issue is that it's not sticky in Proxmox. You have to create a service, if you want to be anywhere near proper about it, to enable the VFs for the card, and then manually assign VF instances to already-created VMs. It's unclear how Proxmox upgrades/updates will enjoy that sort of alteration/addition. I also didn't look at race conditions, i.e. whether the autostart of a given VM is/can be easily gated on that service having started. I guess I'll have to bite the bullet, as it were, to see if it's really as insurmountable as it seems. Thanks.
@vladimir.smirnov 9 days ago
@@PrimalNaCl It should be fine; doing something like that is what they suggest for GPU passthrough. You just need to ensure that your service is one of the first to start, and that should be a viable solution.
@SpikeNansid 12 days ago
Wouldn't adding attenuators of somewhere between 5-10 dB make up for too strong a signal over short distances with OS2, and then not burn out the transceivers?
@cracklingice 12 days ago
Was going to mention needing to potentially adjust strength. Could also just need different transceivers if the ones in use are designed for only long distances.
@SpikeNansid 10 days ago
@ Adding attenuators will also help with the risk of overbleed causing packet loss.
@ipgucker 12 days ago
very interesting! thx 4 sharing!
@vladimir.smirnov 12 days ago
Got your video recommended. I'm doing a bit of high-speed home networking myself, and I have a few things to add as extras (it will be a bit chaotic, as YouTube comments are not great for this):
1. For switches, probably the cheapest option is one of the decent brands mostly known for ODM work - a Celestica DX010 can be had for 1/2 or even 1/3 the price of an SN2700. It will be slightly hotter and louder (the SN2700 is really power efficient; if you replace its fans it can idle at sub-40W, while the Celestica consumes 120W plus whatever your transceivers consume). With the Celestica you need to be careful about the AVR54 bug, but most switches with a production date in the second half of 2016 or in 2017 have either a board-level fix or the fixed revision of the chip (I have one Celestica that I bought used for less than $350 that had the board-level fix applied from the factory and still had warranty).
2. In general, don't be afraid of offering a lower price - quite a lot of sellers will agree to a 20% discount on hardware just to get rid of it quickly.
3. Don't be afraid of buying cheap DACs either. With the Celestica and Mellanox SN switches, generic-coded or vendor-mismatched DACs and transceivers work just fine, but that really depends on the vendor and switch model - they are coded for a reason.
4. Mellanox CX3/4/5 cards are easy to reflash to a different part number, so if you can get a cheaper 50G ConnectX-4 that has an x16 slot, you can upgrade it for free to the 100G version with a forced firmware update (search for how to ignore the PSID; a rough sketch follows this list). ConnectX-6 is hit-or-miss, and branded cards are vendor-coded and won't accept firmware from a different vendor.
5. Some ConnectX-3/4/5 cards ship with firmware so outdated they won't work in modern systems until it is updated - they won't even be detected as a PCIe device. So keep something relatively old around (a Xeon E5 v2/v3-generation system for best chances) or a motherboard known to be compatible (I had good experience with the chipset slot on an MSI X570 board with a Ryzen, and with both slots on a Gigabyte MC12-LE0).
6. I'd really recommend going for ConnectX-4 or newer for any card: CX4 is still supported by the driver, supports RoCEv2 and more offloads, so it is less taxing on the CPU and generally faster.
7. Pay attention to the PCIe generation. It is fine to put a ConnectX-6 2x25G into a PCIe Gen4 x4 slot - it will still have full bandwidth (the reason it is still x8 is mainly that there is a 2x50G version of the card).
8. DACs vs. AOCs vs. transceivers vs. RJ45: a DAC consumes less than 1W, an AOC about 1W, and a transceiver depends on the technology but is usually about 2W for 100G CWDM4 and slightly less for more modern ones. Avoid SFP+-to-RJ45 modules at all costs: they consume about 5W, require active cooling to work properly, and need a free port next to them - so by inserting one you lose the neighbouring port as well. It is cheaper and easier to add a cheap switch for 1/2.5/10G over Cat5e/Cat6.
9. There are cheap 1x200G BlueField-2s on the market. They have some firmware issues, so if you really want one, be sure to check that the seller offers firmware more recent than stock (the 24.40 range is already fine, but the stock 24.33 is bad and you won't be able to easily update from it).
10. Apart from Mellanox, there are sometimes cheap Intel E810 cards. They are also good and probably the cheapest way to get 4x25G right now, as they support breakout cables (i.e. you put the breakout into the card and your 100G is split into 4x25G; on the receiving end they work just fine), while Mellanox/NVIDIA doesn't. The Intel XL710 is not that great, but it works.
11. In general, don't forget to update the firmware on your NICs - it gets updates for a reason.
12. Buy a small card that detects laser light. Both the 12xx/13xx nm and 8xx nm wavelengths are invisible, and the card is crucial for debugging fiber issues.
13. For fiber cable there is no real reason to go for OM3 in my opinion - the price difference isn't worth dealing with multiple types of cable.
14. But if you really want, you can get single-mode BiDi for 10G and 25G; then you can go for really thin cables and pull them wherever you want.
15. Make sure the transceiver's rated distance matches reality. Don't buy long-range ones for short-range applications - you risk blinding the other end of the link and, in the worst case, damaging it.
16. Have cleaning tips for your transceivers. Dust is a nasty thing and you can get unpredictable failures because there is dust inside the connector. Always clean a transceiver before plugging in the cable: tips are cheap, your time is expensive.
17. CWDM4 transceivers are known to slowly drift when exposed to high temperature for long periods, so even if you don't see drops now you might eventually; make sure they are cooled and keep about double the amount you need. Also try to go for somewhat branded ones, e.g. Finisar or Intel; they are slightly more reliable while costing not much more used ($4 vs. $3).
18. 9k MTU is not always better. You get less overhead, but some hardware offloads don't work well with a 9k MTU and you will likely see higher CPU usage.
19. Sometimes you can get a BlueField cheaper than the corresponding NIC. I can't post links in the comments, but there are stores that sell brand-new Dell BlueField-2 2x25G cards for less than $200 (well, the listing price is higher, but they accept offers about 30% below it, and their own shop is also slightly cheaper than other sources), and that is basically a 2x25G PCIe Gen4 card with an ARM CPU onboard that can do some extra tasks (like offloading encryption or compression onto the card itself).
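On item 4, a rough sketch of what the cross-flash looks like with mstflint - treat it as illustrative only, since flashing a mismatched image can brick the card; the PCI address and firmware file name are placeholders, and you should read the mstflint documentation (and back up the current image) first:

```python
# Sketch: cross-flash a Mellanox ConnectX-4 with firmware for another part
# number using mstflint. Illustrative only -- the device address and image
# file are placeholders, and a wrong image can brick the card.
import subprocess

PCI_DEV = "0000:41:00.0"                     # placeholder: find yours with lspci
FW_IMAGE = "fw-ConnectX4-target-psid.bin"    # placeholder firmware image

# Show the currently burned firmware version and PSID before touching anything.
subprocess.run(["mstflint", "-d", PCI_DEV, "query"], check=True)

# Burn the new image, explicitly allowing the PSID (board identity) to change.
subprocess.run(
    ["mstflint", "-d", PCI_DEV, "-i", FW_IMAGE, "-allow_psid_change", "burn"],
    check=True,
)
```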
@RobloxLabGames 4 days ago
Your internet must be 100gig
@csdstudio78 12 days ago
Must be hit or miss. I've got a 40Gb link over fiber and the transceivers stay cold even after running iperf3 for 20 minutes at 39.5 Gb/s. It's a direct-attached cable using OM4 fiber (4 pairs inside), something I didn't even know existed until it arrived 😂
@Nightowl_IT 12 days ago
Try cheap USB fans for the rear of the computer. How expensive is your 100G Internet?
@mattmanandeddie 12 days ago
Plastic gutters from the hardware store make good cable runs also.
@johnvillalovos 12 days ago
Is there a reason for filming in 1080p60? I only ask because on my Roku when I play at 1.5x speed the quality drops to 480p. The 1080p24 videos will play at 1080p though. Thanks.
@timramich 10 days ago
If you are looking to go with one of those cheap old Brocade ICX 6610 switches, DO NOT consider an Intel NIC for the 40 gig side of things. They are NOT compatible with each other. I spent weeks of firmware fiddling and tried multiple cables flashed with different vendor codes. Nothing worked. Got a Mellanox card, and it just worked. I now have hundreds of dollars in waste here (two XL710 cards). The cards were not broken: with one in each of two machines linked directly together, the connection was perfect.
@DigitalSpaceport 10 days ago
Mellanox is absolutely the way. Intel started down a bad path way back with vendor locking optics.
@timramich 10 days ago
@DigitalSpaceport The vendor locks are easily removable from the cards, and stuff from FS costs the same regardless of which vendor you want flashed. It's just plain weird their stuff doesn't work with some other gear (it was determined in my case that the Ethernet frames weren't lining up).
@DigitalSpaceport 6 days ago
True, but it shows a prior mindset (unsure if it's still current) that led to many of the problems they are having now.
@veneratedmortal4369 12 days ago
I don't understand networking. You could buy 1GbE switches for about $20 in 2005, and a 10GbE switch is still about $1,000 today. And they still get hot and need cooling. We should be using 100GbE now, and it should cost $20. My network has been the bottleneck for years now. Even 10GbE would be a bottleneck if it became common today.
@annebokma4637 12 days ago
Go 40GbE - second-hand is the best price/performance right now.
@nadtz 12 days ago
You can get various consumer or enterprise 10gbe switches for substantially less than $1000. I bought my 3x SFP+ 8x 1gbe for $150 in 2021 and am just getting around to replacing it with a Mikrotik that costs about $250 for 8 SFP+ ports (they also have a 4 port for $150). Yes they run warmer but they are also pushing more data than 1gbe, physics is still physics. No idea why you think 100gbe should cost $20, that's like saying an Indy car should cost the same price as a Tesla because you want to go faster, that's not how that works. Right now if you need more than 2 ports 40gbe is probably the way to go for most, 25 if you have a higher budget. You can grab a Mellanox SX6036 for under $200 (or if you get really lucky sometimes you can score a SX6012 for ~$100).
@ewenchan1239 12 days ago
I deployed 100 Gbps IB because on a $/Gbps basis, it's cheaper than everything else that's out there today. But on an absolute cost basis, it's still more than 2.5/5/10/25/40/50 Gbps networking. It's gotten cheaper since I deployed my 100 Gbps IB in 2019, but the other speed options still aren't cost competitive enough on a $/Gbps basis for me to deploy anything else.
@annebokma4637 12 days ago
@ewenchan1239 the bigger problem I have is the lack of x16 lane wired slots. Did you also put 100 Gbps in "lesser" slots? Like a x8 or X4
@Nachiel 11 days ago
Nope, just varying degrees of demand 🤷‍♂️ My home network is still 1Gb. I'm not ready to spend a single cent, let alone 10/100/1000 dollars, for 2.5Gbit, not to mention more. There is such a concept as "enough." Personally, even for watching 4K movies off a file server, it's enough 🤷‍♂️
@omitsura 2 days ago
Around the 25-minute mark: check if that SSD is failing. I had 4 of 8 fail last year.
@ronwatkins5775 12 days ago
My biggest "weirdness" is that on a 1Gbit network I should get around 100 MB/s on file transfers, but I'm lucky if I can get 5-10. I'm not sure how to identify the bottleneck. NVMe on both sides.
@spyrule 11 days ago
What are your CPUs? What's your switch? And do you mean you're getting 5-10 MB/sec write speeds?
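One way to split the problem in half is to take the disks out of the picture with a raw TCP test; a small sketch, assuming iperf3 is installed on both machines and `iperf3 -s` is already running on the far end (the address is a placeholder):

```python
# Sketch: measure raw TCP throughput with iperf3 so storage speed is ruled out.
# ~940 Mbit/s here means the 1Gbit network is fine and the bottleneck is the
# file-sharing protocol or the disks; a low number points at NIC/switch/cabling.
import json
import subprocess

SERVER = "192.168.1.50"   # placeholder: host running `iperf3 -s`

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-P", "4", "-J"],  # 10 s, 4 streams, JSON
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"TCP throughput: {bps / 1e6:.0f} Mbit/s")
```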
@theosky7162 11 days ago
RDMA Please !
@DigitalSpaceport 10 days ago
Opens up the can. It's worms. But seriously, I am writing one up and trying to make it approachable to users so it gets views. That is challenging in itself.
@BrianMartin2007 3 days ago
You're burning out your fiber transceivers because the laser power is too high for the short runs you're using. You need an attenuator - basically, think of it like sunglasses for the laser going through the fiber. If you look at those transceivers, they will tell you the range they're rated for. If you're doing much shorter runs than the range they're rated for, you need signal attenuators or you will burn them up every time.
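To put rough numbers on it (illustrative datasheet-style figures, not measurements from the video): a long-reach optic might launch around +2 dBm, and OS2 only loses about 0.35 dB/km at 1310 nm, so a 3 m patch attenuates essentially nothing and the receiver sees nearly full launch power. With a pad in line,

$$P_{rx} = P_{tx} - \alpha L - A_{pad}$$

$$\text{no pad:}\quad P_{rx} \approx +2\,\text{dBm} - \left(0.35\,\tfrac{\text{dB}}{\text{km}}\right)(0.003\,\text{km}) - 0 \approx +2\,\text{dBm}$$

$$\text{5 dB pad:}\quad P_{rx} \approx +2 - 0.001 - 5 \approx -3\,\text{dBm}$$

which lands comfortably inside a typical receive window. Whether the unpadded level actually exceeds a given receiver's overload point depends on that transceiver's datasheet, which is why a 5-10 dB pad is the usual hedge.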
@DigitalSpaceport 3 days ago
That... makes total sense. They are rated for 1 km 😅 so it's time to do some shopping for new transceivers (and a fiber spool rack tray thing). Thanks!
@BrianMartin2007 3 days ago
@@DigitalSpaceport I replied with a link to a video that explained the overpowering of the input side, but now my message seems to have gotten deleted or something?
@Winkelknife 12 days ago
With 100Gb you basically get to TB5 lands 😅
@PureKNFDrake 12 days ago
I don't understand why you have all that stuff. What are you doing with it? Is it just to flex on YouTube, or are you the guy seeding all the torrents on Pirate Bay?
@efimovv 10 days ago
Such speeds let network drives connected to a workstation feel like local ones (while editing video, for example). Also, I see some AI-related videos here, and AI is also bandwidth-hungry.