2:36 Ooooh that perfect drive cube stack!! Wow, 1PB in a single array - you're making my 8x 18TB look tiny.
@DigitalSpaceport 1 year ago
I had a hard time committing to taking them and putting them in the JBOD, the stack looked so good.
@BigBenAdv 1 year ago
You probably need to look into NUMA and QPI bus saturation being the issue on your TrueNAS box, since it's an older dual-socket Xeon setup. Odds are the QPI bus is saturated when performing this test.

For some context: I've successfully run single-connection sustained transfers up to 93Gbit/s (excluding networking overheads on the link) between two Windows 2012 R2 boxes in a routed network as part of an unpaid POC back in the day (2017). Servers used were dual-socket Xeon E5-2650 v4 (originally) w/ 128GB of RAM, running Starwind RAMdisk (because we couldn't afford NVMe VROC for an unpaid POC). Out of the box without any tuning on W2012R2, I could only sustain about 46-50Gbit/s. With tuning on the Windows stack (RSC, RSS, NUMA pinning & process affinity pinning), that went up to about 70Gbit/s (the QPI bus was the bottleneck here). Eventually, I took out the 2nd socket's proc from each server to eliminate QPI bus saturation and the pinning/affinity issues and obtained 93Gbit/s sustained (on the Arista switches running OSPF for routing, the actual utilization with the networking overheads was about 97Gbit/s). The single 12C/24T Xeon was only about 50% loaded with non-RDMA TCP transfers. The file transfer test was done with a Q1T1 test in CrystalDiskMark (other utilities like diskspd or Windows Explorer copies seem to have some other limitations/inefficiencies).

For the best chance at testing such transfers, I'd say that you should remove one processor from the Dell server running TrueNAS:
1) Processes running on cores on socket 1 will need to traverse the QPI to reach memory attached to socket 2 (and vice versa).
2) If your NIC and HBA are attached to PCIe lanes on different sockets, that's also traffic that will hit your QPI bus.
3) Processes on socket 1 accessing either the NIC or HBA attached to PCIe on the 2nd socket will also hit your QPI bus.
All of these will potentially end up saturating the QPI and 'artificially' limit the performance you could get. By placing all memory, the NIC, and the HBA on only one socket, you can effectively eliminate QPI link saturation issues.
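On Linux, there is a quick way to check that device locality before pulling a CPU - a minimal sketch where the interface name, the HBA's PCI address, and the target IP are placeholders, and numactl is assumed installed:

```sh
# Which NUMA node is the NIC attached to? (-1 means no NUMA affinity reported)
cat /sys/class/net/eth0/device/numa_node

# Find the HBA's PCI address, then check its node the same way
lspci | grep -i -e lsi -e sas
cat /sys/bus/pci/devices/0000:81:00.0/numa_node

# Re-run a network benchmark pinned to that node's cores and local memory
numactl --cpunodebind=0 --membind=0 iperf3 -c 192.168.1.50
```

If the NIC and HBA report different nodes, that cross-socket traffic is exactly what the comment above describes.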
@DigitalSpaceport 1 year ago
Incredible info. Thanks for writing this up. I will remove a processor and test this out with the NIC and HBA attached to the same socket.
@rodrimora 1 year ago
I believe that the Windows Explorer copy/paste is limited to 1 core, so that would be the bottleneck. Also, I think at 14:40 you said "write cache", but the RAM in ZFS is not used for write cache as far as I know, only for read cache.
@DigitalSpaceport 1 year ago
Yeah, I'm checking into the robocopy GUI here. I spent a day trying to get SMB multichannel to work with the other 10Gb NIC in the computer, so I hope to be able to track it down soon. In the past my Ryzen 3600 stomped this transfer speed, so you're spot on. I thought RAM buffered writes in ZFS?
@xxcr4ckzzxx840 1 year ago
@@DigitalSpaceport Rodri is right here. Single core only for Windows Explorer copies. SMB multichannel is a PAIN between Windows and Linux. If you ever get that to work reliably, make a dedicated video about it PLEASE!

For the buffered writes: you are right here. ZFS buffers ~5s of writes in RAM and then moves them to the disks. That's btw the reason disk benchmarks on ZFS are useless, and also why the copying speed fluctuates quite a bit on most spinning-rust setups. There is an option to tune it ofc, so that you buffer exactly as much data in RAM as your drives can write until the RAM buffer is full again. If only I could remember the name...

EDIT: If SMB multichannel is a no-no, then try NFS v4 with a Linux system. It will perform substantially better, as SMB is single-threaded only too, iirc.

EDIT2: openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-txg-timeout - Have a look at that. It's the cache thing above and might need tuning in your setup. BEWARE! This is a deep, deep rabbit hole!
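The parameter being reached for is likely zfs_dirty_data_max, which works alongside the zfs_txg_timeout knob linked above - a minimal sketch assuming OpenZFS on Linux, with the values purely illustrative:

```sh
# How often ZFS commits a transaction group (seconds, default 5)
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Max dirty (write-buffered) data held in RAM, in bytes
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# Example: cap the write buffer at 4 GiB so bursts roughly match
# what the spindles can flush between commits (illustrative value)
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max
```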
@pfeilspitze 1 year ago
19:38 "now we have this set up in a much more common-sense [...]" -- I'm a ZFS noob, but is 60 disks in a single Z2 really a good idea? Seems like the odds of losing 3/60 disks would be relatively high, particularly if they all come from one batch of returned drives. What if it was 6x (RaidZ2 | 10 wide) instead, say? Then it could stripe the reads and writes over all those vdevs too...
@CaleMcCollough 1 year ago
He must be single. There is no way the wife would allow that much server hardware in the house.
@quochung9999 1 year ago
He paid off the house I guested.
@GaryFromIT 1 year ago
He does in fact have a wife, she even videos him with it for hours at a time.
@slug.racing 7 months ago
Maybe you should borrow some big boy pants.
@verschworer1494 3 months ago
For example, my girlfriend, not wife, whose apartment I live in (God save Tinder 😅), keeps two mini horses and I just help with them. And I have a 30-unit rack with an SM blade, 2 switches, and a KVM. The apartment's power supply is only 6KW, so I think I can't have more 🤷♂️, but I have no restrictions from my gf's side.
@RickMyBalls 1 month ago
@@quochung9999 you... guested??
@PeterKocic 5 months ago
Hey, just wanted to say your videos inspired me to purchase a DE6600 and they were invaluable to the decision. The result has been so friggin' good. Extremely happy!
@DigitalSpaceport 5 months ago
Oh cool, glad it helped. These are some massive machines and have a lot of finicky parts, but once running they are great.
@arigornstrider 1 year ago
That stack of 20TB drives! 🤤
@DigitalSpaceport 1 year ago
#density
@TheSouthernMale 1 year ago
He is just trying to ensure he keeps ahead of me in the pool 🤣🤣🤣🤣
@DigitalSpaceport 1 year ago
That word you use there, "trying"....are you sure you're using it right?
@TheSouthernMale 1 year ago
@@DigitalSpaceport Of course I am, be careful or one day you will feel a strong wind as I pass you by. 😛 Maybe someday you and I will be number 1 and 2 in the pool, you of course being the latter. 😛😛
@TheSouthernMale 1 year ago
@@DigitalSpaceport You also seem to forget that while you are using compressed plots to stay ahead of me, I am not, so far. Just imagine that strong wind as I pass you by once I compress them. 🤣🌪🌪🌪🌪
@philippemiller4740 11 months ago
60-wide raidz2 doesn't make much sense haha. Try 10-wide raidz2 x 6. That would make much more sense, no? Maybe you're limited by SMB - have you tried using iSCSI or NFS?
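A quick way to A/B the NFS suggestion from a Linux client - a minimal sketch where the server IP, export path, and mount point are placeholders:

```sh
# Mount the share over NFS v4 and run a simple sequential write test
mount -t nfs4 192.168.1.50:/mnt/tank/share /mnt/test
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240 oflag=direct
```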
@punx4life85 1 year ago
Awesome vid! Thanks g! Picked up another 66TB for my farm
@baystreetwolves 1 year ago
Love catching up on your build. You never stop building.
@DigitalSpaceport 1 year ago
I'm going to get into software more at some point here, but man, machines are fun!
@Mruktz 1 year ago
I have a humble homelab, but what would you even realistically need a petabyte storage system for?
@DigitalSpaceport 1 year ago
Video on all that soon
@RickMyBalls 1 month ago
@@DigitalSpaceport video full stop
@chrisumali9841 1 year ago
Thanks for the demo and info, MegaUpload lol... Have a great day
@LampJustin 1 year ago
You don't plan on using a single RZ2 in production, right? Right? One RZ2 shouldn't be much wider than 8 drives for optimal performance and redundancy. Recovering from a failure with a 60-drive Z2 would take a freaking long time, and chances are really high that other drives will go boom as well. It has to read all 1PiB after all...
@DigitalSpaceport 1 year ago
Nooooo. I have a second video filmed with the same array and it's much more common sense and safe in its layout.
@LampJustin 1 year ago
@@DigitalSpaceport Good, good, the other setup would have been pure insanity :D
@BloodyIron 1 year ago
Thanks for showing these examples!
@DigitalSpaceport 1 year ago
I try man, I try. It's a lot of guesswork over here, so I appreciate the feedback on things you find valuable.
@BloodyIron 1 year ago
@@DigitalSpaceport Well, I've mostly been watching a bunch of your eps for the IT-specific infra, not the crypto stuff myself. So thanks again!
@andreas4959 24 days ago
Wicked setup man! I saw you tried running these with the IOM6 without luck, but I have heard a couple of people claiming that IOM12s work in these... Do you have any knowledge about that? I'm looking to build a proper storage setup with a DE6600 instead of having multiple old PCs filled to the brim with drives, but I'm really not finding any info on whether or not the IOM12 would work in these...
@DigitalSpaceport 22 days ago
I do not think it would work with the IOM12 myself, but you should check out my full video on the DE6600s if you have not yet: kzbin.info/www/bejne/b6jaZ4Gso5WUirc There is also a version that is SAS3, and I can tell you for sure you want that over the SAS2 version if you are thinking of ever maximizing your HDDs' potential fully, all at the same time, like, say, ZFS can do. 60 disks at SAS2 speeds are limited to around 1/3 of their max performance; 24 drives is about full utilization for a SAS2 JBOD. (Rough math: two 4-lane SAS2 wide ports top out around 4.8GB/s combined, while 60 drives at ~250MB/s each could push ~15GB/s.)
@caiolopes12 6 months ago
How did you get TrueNAS running on an InfiniBand switch? I always hear that IB is not supported by TrueNAS
@electronicparadiseonline2103 1 year ago
That's freakin insane. You're out of your mind, DS. That's a ton of storage, and you look like you just came home from the grocery store or something.
@contaxHH 1 year ago
will the floor collapse?
@DigitalSpaceport 1 year ago
No, but I did static load calculations on the 4-inch, 6-slump reinforced slab in the garage and decided to put plate steel squares under the risers to distribute the load slightly better. So far zero issues. It should be okay even fully loaded, which it's not. Calculated to 800, 1200 and 2000 lbs per rack, from left to right facing the front. Good question. Also, don't put racks on wooden subflooring if they approach 1000 lbs without additional load-deflection bracing.
@laughingvampire7555 1 year ago
that sound of fans is just relaxing to me
@DigitalSpaceport 1 year ago
10-hour server white noise video hummm yeah
@guytech7310 3 months ago
What do people store when they have hundreds of TB of storage? I have at most 40TB (mirrored) of storage, and that includes an ESXi server, a 24-camera NVR, DLNA, & a few other servers.
@DigitalSpaceport 3 months ago
GIS data and project files are what I store, as they are useful to my work. Data hoarding newspaper clippings is pretty cool, in a very boomer way, also.
@guytech7310 3 months ago
@@DigitalSpaceport Still, a petabyte is a lot of storage. I do manage about a PB of data for a company, but not for a home lab.
@RickMyBalls 1 month ago
bluray rips
@juaorok 1 year ago
That's awesome. Right now I have a Supermicro SC836 16-bay with 7x 12TB HDDs and 96GB of RAM. I'm upgrading little by little, saving money to upgrade my network.
@DigitalSpaceport 1 year ago
I'm fairly unhappy with my 40Gb performance, but I attribute it to the EPYC not having a high core speed, plus SMB multichannel not working as I was hoping. The 10Gb NIC is an easy win on almost any machine and fully maxes out here, however.
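For anyone fighting the same battle, the server-side switch is a one-liner in Samba (4.4+), though multichannel is still considered experimental on Linux - a sketch assuming a stock smb.conf path (TrueNAS exposes this via auxiliary parameters instead):

```sh
# Enable SMB multichannel on a Linux/Samba server
cat >> /etc/samba/smb.conf <<'EOF'
[global]
    server multi channel support = yes
EOF
systemctl restart smbd   # service name varies by distro

# Then, on the Windows client, confirm both NICs are in play:
#   PowerShell> Get-SmbMultichannelConnection
```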
@christiandu7771 1 year ago
Thanks for your video. Can you tell me where you buy these disks (not available in your shop)?
@DigitalSpaceport 1 year ago
Hard drives do sell fast. We notify paid channel members first of new stock as soon as it is posted to the store. Members also receive a 3% or 5% discount code depending on which level they sign up for (a code is generated for each month and has unlimited use while the code is active.) kzbin.infojoin Another way you can get notified is to subscribe to the e-mail list on shop.digitalspaceport.com. We send out an e-mail notification when hard drives come back in stock if there are any left after the channel members have been notified.
@samishiikihaku 1 year ago
Not sure of the differences, but Dell, before EMC, used the same enclosure style as well. PowerVault MD3060E and other varieties. Though the prices may be a bit different.
@DigitalSpaceport 1 year ago
NetApp made both of these variants, and there are no meaningful differences that I have seen at all, provided you use the SAS controller for the management node. Just stickers and of course the Dell front bezel. I do have some of the Dell-branded EMMs and they work out perfectly.
@samishiikihaku 1 year ago
@@DigitalSpaceport Yep. Just wanted to show another option, in case people can't find the NetApp version.
@TannerCDavis 1 year ago
Aren't you limited to 6Gbps SAS cable connections? Do you have the multipath option on to get above 6? The speeds above 12Gbps are probably due to writing to RAM, and then it slows down to write to disk through the wire connections.
@DigitalSpaceport 1 year ago
SAS2 is pretty decent if you have wide mode running. The DE6600 can do that on the first connected device, but not on the second, daisy-chained one (that I have been able to figure out, at least).
@thumbnailqualityassurance7853 1 year ago
How does TrueNAS know how to light the disk-failure LED on the NetApp if a disk fails?
@ernestoditerribile 1 year ago
It's not TrueNAS doing that. It's the HBA/RAID/disk controller of the NetApp checking the S.M.A.R.T. status.
@TheSasquatchjones 1 year ago
My God! It's been a while. Great video!
@DigitalSpaceport 1 year ago
Howdy 🤠
@TVJAY 1 year ago
I am new to your channel. Is there any chance you can do an overall tour of your setup and how you got to where you are?
@DigitalSpaceport 1 year ago
What a great idea. I'll get one in the works here. Welcome and thanks!
@何旦聃-c7u 1 year ago
How do you solve the problem of the indicator light on a SATA hard disk not being on?
@DigitalSpaceport 1 year ago
The SATA light is technically on, it's just very, very faint. It does not show well on the JBOD with the camera. If it is important to you to have the full blinking light, then a SATA-to-SAS interposer would be needed.
@ewenchan1239 1 year ago
I couldn't find the comment where you asked me about SMB Direct in Windows 11, but it does appear that you can enable it in Windows 11: go to Control Panel -> Programs and Features -> Turn Windows Features On and Off -> SMB Direct. Just thought that I would pass that along to you.
@notmyname1486 1 year ago
Just found this channel, but what is your use case for all of this?
@mitchell1466 1 year ago
Hi, loved your video. I noticed when you were in the iDRAC that you were on a Dell 720XD. I am looking at going to 10GB for my setup and was wondering what 10GB NIC you have installed?
@DigitalSpaceport 1 year ago
Mellanox ConnectX-2
@mitchell1466 1 year ago
@@DigitalSpaceport Hey, thanks for the reply. Did you have any difficulty getting it to work with Scale, or did you just plug it in and TrueNAS picked it up?
@JasonsLabVideos 1 year ago
Nice setup !! Looking skookum man !! Keep going !
@DigitalSpaceport 1 year ago
Thanks, appreciate it 👍
@xtlmeth 1 year ago
What SAS card and cables did you use to connect the JBOD to the server?
@DigitalSpaceport 1 year ago
An LSI 9207-8e and SFF-8088 to SFF-8088 cables. Linked in the description to the exact ones I bought.
@apdewis 8 months ago
The power bill on that setup must be monstrous. My more modest setup of R630s and 320s costs me enough. I am envious.
@DigitalSpaceport 8 months ago
One would think, but my base rate is 10c/kWh, so it isn't that bad. I also shed load dynamically via HA and Proxmox and don't run all the machines at once very often, unless needed. Usually about 250 total with the house for the electric bill. Cooling is consistently the largest user in the garage.
@apdewis 8 months ago
I can only envy that per-kWh rate as well. Mine is somewhere around A$0.22. That said, it still ends up being a lot better value than AWS is at work...
@StenIsaksson 1 year ago
I heard something about Windows not being supported on EPYC CPUs. Wrong, apparently.
@DigitalSpaceport 1 year ago
Yeah, just use the Ryzen Master drivers and it works great. I was discouraged initially also by what I had been seeing others say.
@visheshgupta9100 1 year ago
I was wondering if you could do a video on TrueNAS SCALE with multiple nodes. There is no video on YouTube that discusses this in detail. In layman's terms, I would like to deploy 3 different servers and control them all from one place. My question is: do we need to install TrueNAS SCALE on every server? Or do we have 1 TrueNAS SCALE server while the others are TrueNAS Core?
@DigitalSpaceport 1 year ago
If you need to use TrueCommand (the software that manages multiple nodes) I think you need to contact them about a license. I have heard it's affordable, but I don't really know that. You don't really mix and match Core + Scale.
@visheshgupta9100 1 year ago
@@DigitalSpaceport Thank you, that is exactly what I was looking for.
@visheshgupta9100 1 year ago
@@DigitalSpaceport I am planning on building a homelab, and I was thinking of having multiple servers for different kinds of media. For instance, one for game storage, another for critical data & backup, and so on and so forth. So essentially, I won't need all the NAS boxes running at all times. I want to be able to power on just the system I need, so it would work as a standalone system, and also have the ability to control all the systems from one place so that I don't have to configure users/permissions/shares for each and every system individually.
@visheshgupta9100 1 year ago
@@DigitalSpaceport I was considering TrueNAS at first, but now I am kind of leaning towards UnRaid due to their recent implementation of ZFS. So essentially I could use UnRaid, known for its parity, or use ZFS, known for its speed and reliability, or use both at the same time. In terms of flexibility to mix and match drives, ease of use, and low hardware requirements, I believe UnRaid has the upper hand. What are your thoughts?
@MHM4V3R1CK 1 year ago
@@visheshgupta9100 I think TrueCommand is free now. TrueNAS SCALE is the Linux/Debian flavor of TrueNAS and Core is the older but very stable BSD flavor, btw.
@linmal2242 1 year ago
Does this array run 24/7, or is it powered down most of the time? What is your power bill like?
@DigitalSpaceport 1 year ago
Runs 24/7 and draws around 2.2 amps at 245V, so about 540W per JBOD. Per disk that is 9 watts, which is among the best per-tray efficiency of the JBODs I have measured.
@maximloginov 1 year ago
Hello. What kind of RAID are you finally using for plotting? Stripe? Raidz2?
@DigitalSpaceport 1 year ago
RAID0 of 12 disks. So far none have blown out and the performance is awesome. Worst case one blows out and I need to replot it; not a huge deal. I can do that in a few days.
@snapu-g1j 1 year ago
Quick question: can the DE6600 handle SATA HDDs, or only SAS? Could I just buy a SAS HBA card and plug it into my Ubuntu server? Thanks
@DigitalSpaceport 1 year ago
It handles SAS or SATA. Get something like a 9207-8e over, say, a 9205 or 9200.
@snapu-g1j 1 year ago
@DigitalSpaceport Thanks for the reply. Can one controller handle all 60 drives with an acceptable response time? (chia farming ;-) )
@何旦聃-c7u 1 year ago
Hello, is it possible to update the ESM/EMM firmware?
@TheDropForged 1 year ago
Serious question: I see people with home setups equivalent to enterprise. Do people really need that size?
@TheDropForged 1 year ago
Ah, you do crypto stuff
@DigitalSpaceport 1 year ago
Yeah, I had a smaller half rack prior, which is more than enough nowadays for most homelabbers
@capnrob97 1 year ago
For a home lab, how could you even begin to use a petabyte worth of storage?
@jonathan.sullivan 1 year ago
I'm interested in seeing what you can get for performance with multiple vdevs. Tom Lawrence has a good breakdown video on how many disks per vdev one should logically have for performance. 1 raidz2 vdev across 60 disks definitely isn't it, but it's fun to see. #subscribed
@DigitalSpaceport 1 year ago
For sure, it was just to show the size lol, but I am working on a followup here. Looks like I have a solution for the pathetic SMB transfer speeds also.
@watb8689 1 year ago
You have some insane homelab. How is the energy bill coming?
@DigitalSpaceport 1 year ago
Runs 225-250 per month. Not really that bad.
@fisherbu 1 year ago
Nice job! How do you make a plot only 74GB?
@DigitalSpaceport 1 year ago
Gotta create it with Gigahorse or Bladebit CUDA. Bladebit isn't farmable yet, but Gigahorse is.
@hescominsoon 1 year ago
Try it over iSCSI instead of SMB. Also, SCALE does not perform as well as Core does across the board in my own testing. Unless you want to run VMs - then SCALE is the way to go. If you want only storage, then Core is your best bet.
@DigitalSpaceport 1 year ago
I've had a lot of folks tell me to go Core for performance and I'm going to check it out. I use Proxmox for VMs also, so no reason not to, really. I will do a video on iSCSI, I think, after I learn some more about it, and give a "noob's perspective" on iSCSI. I used it only once in the past via a Dell MD1000i and it was a painful thing as a result of that. Time for another round!
@hescominsoon 1 year ago
@@DigitalSpaceport iSCSI in Core is easy.... I do not use Proxmox as my hypervisor, so I don't know about setting up iSCSI on that end.
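Client-side, iSCSI is only a couple of commands on a Linux/Proxmox box - a minimal open-iscsi sketch, with the target IP a placeholder and the target itself assumed to be configured on the NAS already:

```sh
apt install open-iscsi
# Discover targets exposed by the NAS, then log in to them
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node --login
lsblk   # the LUN appears as a new local block device
```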
@rfitzgerald2004 1 year ago
On your shop, do you ship to the UK?
@DigitalSpaceport 1 year ago
Yes, just make sure to add your phone number on the form. Customs will stop the shipment without that.
@trousersnake1486 1 year ago
This is waaaay above my knowledge base of PC hardware, but what I do understand of it is impressive. Looking to upgrade from my Ryzen 5900X X570 system to an EPYC system when finances allow.
@DigitalSpaceport 1 year ago
It's funny how things get out of control in life lol
@trininox 10 months ago
How's the electric bill?
@DigitalSpaceport 10 months ago
250/mo
@ewenchan1239 1 year ago
The other problem that you might be running into: according to the wiki, your Intel Xeon E5-2667 v4s are only 8-core/16-thread processors each, which means that having to manage a 60-wide raidz2 ZFS pool is going to tax the processors quite heavily when they are trying to manage that many drives with only 16 cores/32 threads total (possibly). Keep an eye out on your %iowait.
@DigitalSpaceport 1 year ago
I have part 2 filmed here of me trying out other, more logical configurations than 60-wide. Out this week!
@ewenchan1239 1 year ago
@@DigitalSpaceport I look forward to watching that video. (The reason why I mention keeping an eye on your %iowait is because my server has 36 drives, 32 of which are handled by ZFS under Proxmox, and under heavy disk-load tasks the %iowait will jump/start to climb until those tasks are finished, and then the %iowait will fall back down.)
@MyersJ2Original 1 year ago
Are those drives you sell used or new?
@DigitalSpaceport 1 year ago
Refurbished from Seagate, but they don't have power-on hours like used pulls do.
@marcelovictor3031 1 year ago
How many 20TB HDDs do you have?
@DigitalSpaceport 1 year ago
The real question is how many 22TB I have 😜
@skyhawk21 1 year ago
Need help: got a WHS server with 50TB of drives, and I need a cheap, good-quality 2.5Gb switch, maybe with a 10Gb port, and also a cheap quality 10Gb card for the server??? 1Gb doesn't cut it
@DigitalSpaceport 1 year ago
Skip the 2.5G and just roll out 10Gbit. You will be much happier. This MikroTik switch is great to start with: geni.us/goNi9C
@bokami3445 9 months ago
OMG! How long would a scrub take on this monster!
@carbongrip2108 1 year ago
I hope you enabled jumbo frames on all your NICs...
@DigitalSpaceport 1 year ago
Yes I do. I have a home-side segment that bridges to these devices, but the big gear is all on jumbo.
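A quick jumbo-frame sanity check on the Linux side - a sketch where the interface name and peer IP are placeholders:

```sh
ip link set dev eth0 mtu 9000
# Prove 9000-byte frames survive end to end: 8972 = 9000 minus
# 20 bytes IPv4 + 8 bytes ICMP headers; -M do forbids fragmentation
ping -M do -s 8972 -c 4 192.168.1.50
```

If any hop has a smaller MTU, the ping fails instead of silently fragmenting.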
@Murr808 1 year ago
What do you do about Windows updates?
@DigitalSpaceport 1 year ago
Suffer through them eventually. They only allow you to postpone so much, unfortunately. I try to do them when behind on KBs.
@jondonnelly3 1 year ago
You can block them by adding some entries to the hosts file.
@gustcol1 1 year ago
I have the same problem with my 100Gbps network..
@ewenchan1239 1 year ago
Unless you're writing to an array of enterprise-grade NVMe U.2 SSDs, as a home lab user you'll never be able to hit more than a few percent of the 100 Gbps line speed/capacity for storage. (I have 100 Gbps as well (InfiniBand).) Even if you enable NFSoRDMA, if you're going to be using spinning rust, it's not going to make THAT much of a difference. (The highest I've been able to momentarily get is about 32 Gbps kernel/cached writes. More often than not, my system hovers around 16 Gbps nominal max.)
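Before chasing storage tuning at these speeds, it helps to separate the network ceiling from the disk ceiling - a minimal iperf3 sketch with a placeholder server address:

```sh
# On the server:
iperf3 -s
# On the client: 4 parallel TCP streams for 30 seconds
iperf3 -c 192.168.1.50 -P 4 -t 30
```

If iperf3 saturates the link but file copies don't, the bottleneck is in the storage or SMB stack, not the network.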
@Firesealb99 1 year ago
You had me at "caddies not needed"
@DigitalSpaceport 1 year ago
It feels good to not need to screw in caddies, for sure
@Solkre82 5 months ago
Don't want to brag, but I have mirrored 18TB drives in my NAS and I almost have 6TB of data on it. 💪💪💪💪
@DigitalSpaceport 5 months ago
Bragging about storage is allowed and encouraged here
@seelook1 1 year ago
I'm going to sound like a kid. THIS IS SO COOL! I WANT IT. My wife will kill me if I ever add something like this lol 😅
@DigitalSpaceport 1 year ago
You have 1 life. Live your best one. (advice that gets folks divorced hahaha)
@blueprint4221 1 year ago
Watch that bend radius, sir
@DigitalSpaceport 1 year ago
So about that.... Needed this comment before the video lol
@GreatVomitto 8 months ago
You know that your RAID is fast when you are limited by the CPU.
@DigitalSpaceport 8 months ago
A never-ending problem, except I got some RDMA going now!
@DennisJlg-cu1vw 1 year ago
Hello, my tip: do not use Win 10 or 11 but a Windows Server. Why? Alex from The Geek Freak found out that there is a limit on networking in the normal systems, and it is not active on servers. For info
@DigitalSpaceport 1 year ago
This is easy to test out! Will toss Server 2022 on and see what kind of perf benefits I can get
@FakeName39 1 year ago
This dude is running Amazon from his home
@TheSouthernMale 1 year ago
Hey, if you see me leave the pool, it's because I am losing money on it. I calculated that if I were solo farming I would have earned 4.5 additional XCH in the last 10 days. I will wait and see if the winning trend continues.
@DigitalSpaceport 1 year ago
I have gone down that path indeed, sir. GL to you; if you show back up I'll know why 😂
@TheSouthernMale 1 year ago
@@DigitalSpaceport I am not saying I will leave yet, but if I do leave and then come back, it will only be to jump ahead of you in the pool. 😛
@trillioncap 1 year ago
wow incredible
@jondonnelly3 1 year ago
Why do you need a petabyte as a home user/small office user?
@DigitalSpaceport 1 year ago
I have a lot of ISOs
@rsqq8 1 year ago
This man ISO's 😂
@Alan.livingston 1 year ago
That's a lot of adult material you seem to be hoarding there, sir.
@DigitalSpaceport 1 year ago
The backup's backup has backups
@MelroyvandenBerg 6 months ago
Cat door. You get it? 😅
@rustyZ_stacks 1 year ago
5-axis stabilization makin me dizzy ahaha
@DigitalSpaceport 1 year ago
I'll try to use less 🚁
@nexovec 1 year ago
Uhh, 60 drives in one vdev :D This video is so wrong :D Good job.
@franciscooteiza 1 year ago
Not if you are using dRAID (not the case in this video)
@sm8081 1 year ago
Envy… Bad feeling, I know… 😅
@pudseugenio4118 1 year ago
Looks like a gold bar from a cave
@DigitalSpaceport 1 year ago
YARRRR 🏴☠️🦜
@NilvinPerpinosas 6 months ago
That is not a homelab, that's corporation shit bro
@Paberu85 1 year ago
I wonder why somebody would do such a thing to his own house, and wallet?..
@ajandruzzi 1 year ago
I assume you're looking for an answer better than "because he can".
@DigitalSpaceport 1 year ago
you and me both
@charliebrown1947 1 year ago
You need a higher MTU; you're breaking these files into too many 1500-byte packets (think DDoS attack). Btw, I hope you didn't keep that 60-wide raidz2 configuration... that's silly. Oh, you also have your ARC set to use 50% of your RAM (the default), so half your RAM is literally doing absolutely nothing at all. Change the default to allow it to use 90% or even more.
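The ARC half of that advice is one module parameter on OpenZFS/Linux - a sketch where the byte value is illustrative (roughly 90% of a 256GB box) and would need persisting via /etc/modprobe.d to survive reboots:

```sh
# Raise the ARC ceiling at runtime
echo $((230 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
arc_summary | head -n 20   # confirm the new target size
```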
@leo_craft1 1 year ago
They cost 1000€+
@InSaiyan-Shinobi 1 year ago
Is this at your house? Wtf?
@NilvinPerpinosas 6 months ago
143
@jj-icejoe6642 1 year ago
All that wasted money…
@16VScirockstar 1 year ago
The throughput is constrained by the PCIe 3.x bottleneck: de.wikipedia.org/wiki/PCI_Express
@DigitalSpaceport 1 year ago
I think SAS2 is bottlenecking even before PCIe3 here. I have another video in the works that will check into it, but pushing the limits of my spindles is a topic I am diving into more and more.
@16VScirockstar 1 year ago
@@DigitalSpaceport Nice! Just out of curiosity, what do you need this setup for? It must be immensely expensive. As a former SAN admin, I can tell you that striping over more disks doesn't give you linearly more performance. I remember the threshold, 17 years ago, was at around 10 disks per stripeset.
@FunkyKong 4 months ago
@@DigitalSpaceport How is it possible that 8 lanes of SAS2 (48Gbps raw, ~4.8GB/s usable) are saturating when your throughput is < 2GiB/sec? In my experience pulling my hair out doing exactly what you're doing in the 2nd half of your video, Windows just seems to not manage above 10-15Gbps no matter what you do, even using network testing tools. I am using 100G Mellanox ConnectX-5 cards with exactly the same limitations, and no amount of driver tuning seems to get there. But if I reboot into Linux I can hit wire speed between machines. The only solution for good high-speed SMB on a Windows client, particularly for single streams (where multichannel cannot help), is with RDMA, aka SMB Direct. That requires Windows Server on the other end, or ksmbd if you dare go down that rabbit hole. I hope for an update with whatever you've found since this video.