Block size is the size of the plastic bin in your cupboard. An IO is taking a bin down, getting something from it and putting it back. Throughput is the total number of things taken out of the cupboard per unit time.
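To put rough numbers on that analogy: throughput is just IOPS multiplied by block size. A minimal sketch with illustrative figures (the IOPS values are assumptions for the example, not measurements of any particular drive):

```python
# Throughput = IOPS x block size. Illustrative numbers only.
def throughput_mb_s(iops: int, block_size_kib: int) -> float:
    """Return throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_kib * 1024 / 1_000_000

# Small blocks: lots of trips to the cupboard, little carried per trip.
print(throughput_mb_s(iops=1_000_000, block_size_kib=4))   # ~4096 MB/s
# Large blocks: far fewer trips move even more data per second.
print(throughput_mb_s(iops=7_000, block_size_kib=1024))    # ~7340 MB/s
```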
@kingneutron1 • 11 months ago
Pertinent questions:
Can it saturate a 100GbE fiber network? (Probably / assuredly.) If it does this easily, what is the next step beyond that?
How many simultaneous users for SMB / NFS shares?
Simultaneous 8K video editing / transcoding benchmarks?
ZFS benchmarks? (As much of a ZFS fan as I am, my understanding is they still need to tweak the code for NVMe / non-spinning speed.)
Potential zpool layouts (don't forget ZIL mirror / L2ARC)? Mirrors would obviously be fastest but you lose capacity; how would you recommend setting up RAIDZx vdevs, or possibly dRAID?
How many SQL transactions per second?
How many Linux kernel compiles per hour? How many $big-open-source-project compiles per hour (Firefox, Gentoo, LibreOffice, etc.)? How long would it take to compile the entire Debian distro (all packages) from source?
Can it potentially replace X model/series of mainframe?
--TIA, just some stuff to think / brag about :)
@MarkRose1337 • 11 months ago
The drives do very little for compiling. Netflix cache servers are already encrypting video at 400 Gbps using a 32 core 7502P Epyc, a CPU from 2019. They did require some tuning to get there, especially around reducing memory bandwidth usage. I bet they're getting 800+ Gbps in the lab already using a modern 32 or 48 core Epyc, which have 2.25 times the memory bandwidth of the older 7502P.
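For what it's worth, that 2.25x figure checks out against theoretical peak memory bandwidth, assuming 8-channel DDR4-3200 on the 7502P and 12-channel DDR5-4800 on a current-generation Epyc (a rough sketch, not a benchmark):

```python
# Theoretical peak memory bandwidth: channels x transfer rate (MT/s) x 8 bytes per transfer.
ddr4_7502p = 8 * 3200 * 8 / 1000    # 204.8 GB/s (8-channel DDR4-3200)
ddr5_newer = 12 * 4800 * 8 / 1000   # 460.8 GB/s (12-channel DDR5-4800, assumed)
print(ddr4_7502p, ddr5_newer, ddr5_newer / ddr4_7502p)  # 204.8 460.8 2.25
```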
@ArmoredTech • 11 months ago
It's reassuring to see that someone has the same handwriting as I do 🙂 -- Good explanation
@sevilnatas • 11 months ago
How does a ZFS read and write cache improve the rust numbers you talked about? Most of my pools are built from M.2 drives on bifurcation cards, followed by SATA SSDs with read/write caches on Optane drives, and finally a few rust drives with the same Optane setup in front of them. Should I be seeing the kind of performance you were talking about?
@jttech44 • 11 months ago
ZFS doesn't have a write cache. I'll say it again, because people are very confused by this: ZFS does not, in any way, have a write cache. Read caching is handled by ARC, and L2ARC if you've got it, and will be as fast as your RAM or the L2ARC devices. Realistically, if you have NVMe storage you'll see no benefit from an L2ARC, but you will see a benefit from adding as much RAM as possible, as cache hits are basically guaranteed to run at wire speed.
@sevilnatas • 11 months ago
@@jttech44 Hmm, OK, no such thing as read cache, got it. jj 🤣 All my NVMe pools have no caching besides RAM, and my SSDs will have read caching plus RAM (mentioning RAM just to be annoying), but I do have 128 GB on the NAS, with a 24-core EPYC CPU and more PCIe lanes than you can shake a stick at. I will be putting my VMs on the NVMe pools and files on the SSD pools. I have a very specific need for super-fast small-file access to the last few files written. Not as important as VM speed, but close behind. The rest of the SSD space will just be regular file shares. Also, I will probably want to put a REALLY FAST NVMe disk in as a paging disk for RAM overflow, but that might be overkill with 128 GB of RAM.
@jttech44 • 11 months ago
@@sevilnatas Depending on your working set size, 128 GB of RAM may or may not be enough. Also, depending on what your read/write mix is, it can make sense to just have SSDs and spend the extra money on an absolutely massive amount of RAM so you can fit your entire working data set into cache.
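A back-of-the-envelope way to see why fitting the working set in ARC matters so much (the latency figures below are assumptions for illustration, not measurements of any particular hardware):

```python
# Expected read latency as a blend of ARC (RAM) hits and misses that go to the pool.
RAM_LATENCY_US = 1     # assumed cost of an ARC hit, microseconds
NVME_LATENCY_US = 90   # assumed cost of a read that misses and hits the NVMe pool

def avg_latency_us(hit_ratio: float) -> float:
    """Average read latency for a given ARC hit ratio."""
    return hit_ratio * RAM_LATENCY_US + (1 - hit_ratio) * NVME_LATENCY_US

for hr in (0.50, 0.90, 0.99):
    print(f"ARC hit ratio {hr:.0%}: ~{avg_latency_us(hr):.1f} us average read latency")
```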
@visheshgupta9100 • 11 months ago
A big fan of 45Drives, you guys are doing a fantastic job. Can you please explain the following:
1. In the video you talk about the capability of the NVMe drives to handle 4,000 VMs; can you also please explain how many VMs a 64-core CPU can handle? Do the VMs share CPU cores?
2. What is the maximum size of SATA drive available on the market today, vs. the maximum size of NVMe drive available?
3. What is the power draw of a SATA drive vs. an NVMe drive?
4. What is the idle power consumption of the server with 32 NVMe drives installed?
@glmchn • 11 months ago
Your VMs can have their compute on a dedicated box and just hit this storage over a SAN. Anyway, they said at some point that this is theoretical, just to explain the scale of power of this kind of solution, so there's no need to be too picky about accuracy.
@nadtz • 9 months ago
"What is the maximum size of SATA drive available on the market today, vs. the maximum size of NVMe drive? What is the power draw of a SATA drive vs. an NVMe drive?"
SATA tops out around 24 TB; NVMe goes up to about 30 TB right now, last I checked. For power, a SATA drive is roughly 5 W idle to ~7-10 W active, and NVMe idle is about the same but can hit as high as 18 W depending on the drive. From there, with some math, the idle/max power consumption can be figured out, though you'll also have to account for HBAs, network cards, and whatever else as necessary.
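Using those per-drive figures, a quick drive-only estimate for a 32-bay NVMe box looks like this (HBAs, NICs, fans, and CPUs come on top, and the per-drive numbers are rough assumptions):

```python
# Rough drive-only power envelope for a 32-bay NVMe chassis.
DRIVES = 32
IDLE_W_PER_DRIVE = 5    # assumed idle draw per NVMe drive
MAX_W_PER_DRIVE = 18    # assumed worst-case active draw per NVMe drive

print(f"Idle (drives only): ~{DRIVES * IDLE_W_PER_DRIVE} W")   # ~160 W
print(f"Peak (drives only): ~{DRIVES * MAX_W_PER_DRIVE} W")    # ~576 W
```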
@dj_paultuk7052 • 11 months ago
We are using NVMe drive arrays in the data center I work in, far bigger than this. Some of the units we have use 75 NVMe drives. Cooling is an issue, as they generate a ton of heat when packed so densely.
@polygambino • 11 months ago
Good video and good questions. A few minor technical inaccuracies, such as how IOPS decrease as the block size gets larger, but that's nitpicking. While I don't represent 45Drives, the number of VMs you can run on a 64-core CPU always depends on the use case for the VMs, the application requirements, the hypervisor's VM maximum, plus the performance the storage can provide. There are 100 TB SATA drives on the market, but you can get 61 TB NVMe drives. And lastly, NVMe will pull more power when it's being used, simply because it's that much faster and needs more energy. But always check the spec sheets for the devices you want to buy and use. Then look at the number of VMs you can run for the IO and power it takes with NVMe vs. SATA; you will get a better picture of the cost and power for the performance of the VMs.
@mitcHELLOworld • 10 months ago
IOPS do actually decrease as the block size increases, but the reason is different for HDDs vs. solid state. When we're talking NVMe or SSD, it is typically just down to the bandwidth limitations of the connection (SATA, or PCIe 3/4 x2/x4). As for HDDs, Seagate EXOS drives are rated for 440 read IOPS, but I think you'll find you'd have a very hard time getting 440 1 MB read IOPS out of one! In regards to the number of VMs you can run on a 64-core CPU, of course this is true! However, we may not have done a sufficient job explaining: we are not speaking with the intention of the VMs being run on the storage server, but instead of this server being the storage back-end for dedicated hypervisors. Hope this clears it up! - Sincerely, one of the guys in the video! haha
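A hedged sketch of that bandwidth-ceiling argument: at a fixed link or media speed, the maximum possible IOPS falls as the block size grows, and a 440-IOPS-rated HDD simply cannot deliver 440 IOPS at 1 MiB because that would exceed its sequential transfer rate (the link and drive speeds below are ballpark assumptions):

```python
# Ceiling on IOPS imposed purely by interface or media bandwidth.
def max_iops(bandwidth_gb_s: float, block_size_kib: int) -> float:
    """Largest IOPS number the given bandwidth can physically sustain."""
    return bandwidth_gb_s * 1e9 / (block_size_kib * 1024)

PCIE4_X4_GB_S = 7.5   # assumed usable PCIe 4.0 x4 bandwidth
for bs in (4, 64, 1024):
    print(f"PCIe 4.0 x4, {bs} KiB blocks: <= {max_iops(PCIE4_X4_GB_S, bs):,.0f} IOPS")

HDD_SEQ_GB_S = 0.27   # assumed ~270 MB/s sustained rate for a large HDD
print(f"HDD at 1 MiB blocks: <= {max_iops(HDD_SEQ_GB_S, 1024):,.0f} IOPS, well under the 440 rating")
```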
@64vista • 11 months ago
Hi guys! Thanks for the video! Do you have any plans to show us the real-life capabilities of this storage with VMware VMs over NFS, iSCSI, and NVMe-oF? That would be really good :) Thanks!
@mitcHELLOworld • 10 months ago
We definitely do :) We have some great content coming up in the next month! Be sure to tune in.
@djordje1999 • 11 months ago
EDSFF? Long (E1.L)?
@glmchn • 11 months ago
Those guys are something 😅
@kyleallred984 • 11 months ago
Send it to LTT so they can test the 4,000-VM stat.
@mattkeith530 • 3 months ago
Beefy guy 🎉
@kingneutron1 • 11 months ago
BTW, you guys are working on some really cool stuff (and I wish this video was monetized 💎)
@shephusted2714 • 11 months ago
They really aren't selling anything here other than industry standards, basically. That's fine, but there's no IP here. They have contributed to open source and seem trustworthy. For a small business, the documentation and support is what they're paying for, not the hardware, really.
@mitcHELLOworld • 11 months ago
This is actually untrue. We developed and built our own firmware for the microcontroller; I explain this a little bit in the previous NVMe Stornado teaser video. Everything else you mentioned, however, is fairly true. That being said, we are very much leading the industry here with a tri-mode UBM backplane with U.3 NVMe. This is a brand-new platform that, as far as our research shows, we were the first to release. Finally, 32 NVMe drives in a single 2U form factor is much less common as well. Thanks for the comment!
@Anonymous______________ • 11 months ago
I want that smb.conf file lol... I have tried every combo and can never break 700-900 MB/s on a single thread/client with the latest open-source version of Samba.
@rhb.digital • 11 months ago
just send me one already 🙂
@elmeromero303 • 11 months ago
Where does the CPU/RAM performance for 8,000 "high" VMs come from? And how the heck do you want to connect the compute nodes? I also doubt that all the VMs will run in parallel. OK, maybe the storage server can push 8 million IOPS across a single (or a few) threads, but not with 8,000 threads. Too many bottlenecks: network, storage controllers, etc. Not to mention dedup/compression and all the fancy options that "real" enterprise storage must have..
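Rough math on the network bottleneck alone (the VM count and per-VM IOPS below are purely illustrative assumptions):

```python
# Aggregate network bandwidth implied by many VMs doing small-block IO at once.
VMS = 8000
IOPS_PER_VM = 1000   # assumed steady-state rate per VM
BLOCK_KIB = 4

gbits = VMS * IOPS_PER_VM * BLOCK_KIB * 1024 * 8 / 1e9
print(f"~{gbits:.0f} Gbit/s of storage traffic")            # ~262 Gbit/s
print(f"~{gbits / 100:.1f}x 100GbE links just for the data path")
```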