The price is an instant non-starter for me. I like a plug-and-play NAS, but that $1K is a HARD no-go. A Jonsbo build is better, and as I type this, you said it.
@HardwareHaven3 ай бұрын
"as I type this, you said it" thanks for at least getting to that point 😅
@williebrortАй бұрын
@@HardwareHaven Hey, can you do a video on building something comparable to this, but building it yourself, to see if it really would be cheaper for the same functionality?
@ASKZ783 ай бұрын
It's a company that sells a discontinued 2016 Intel Celeron for $200 and submits low-value declarations for ZimaCube Kickstarter backers who have already paid full tax. Big NO.
@moortu3 ай бұрын
I picked up:
- Fractal Design Node 804 + 4 fans for 75 euros (2nd hand)
- 550W Platinum PSU for 72 euros (2nd hand)
- 32GB DDR4-3200 for 54 euros (2nd hand)
- N5105 NAS CPU/motherboard for 128 euros (AliExpress)
That's 325 euros for a NAS without storage. It uses less power than the i5-1235U but is also about a third of the performance. Alternatively, you can get an N305 NAS CPU/motherboard for 352 euros with roughly 90% of the performance of an i5-1235U, for a total of around 555 euros. Still half the price, yet so much more reasonable.
@Cynyr3 ай бұрын
The only issue with the N5xxx, N100, N200 and N305 is being limited to 9 lanes of PCIe 3.0.
@-.eQuiNoX.-3 ай бұрын
Hope we will get Lunar Lake motherboards; those CPUs will be very efficient. For the moment, an N100/N300 is the best option for a NAS. Currently they have a purple board with 6 SATA ports (using an ASM1166, better efficiency) and 2 NVMe slots.
@Max248712 ай бұрын
Yeah okay, but that board only has 1/4 the Ethernet speed of the ZimaCube. Personally I went with the Minisforum MS-01 (2x 10GbE + 2x 2.5GbE), i5 version, plus an old 24-drive SAS2 backplane and enclosure I forcibly liberated through unskilled application of a cutting disk from an old decommissioned server chassis.
@necronymnoninveniАй бұрын
@Max24871 "forcibly liberated through unskilled application of cutting disk" killed me lmao
@torak4563 ай бұрын
Thank you for having an audio voice over to correct the price discrepancy. Too many people just do a visual correction.
@zrizzy69583 ай бұрын
It's $1,100 for the base model and $1,250 with the RAM upgrade 💀 I would just get a good gaming PC and turn it into a NAS at that price, plus have a GPU if I need one.
@rezenclowd33 ай бұрын
Interesting....that'll be at least 3k.
@loganmitchell13823 ай бұрын
3k? You could probably do it for 5-600 easily and have an upgrade path @rezenclowd3
@zrizzy69583 ай бұрын
@@rezenclowd3 I didn't mean the highest tier of PC, but something like an i7-12700K with a 4060/4070 or 7800 XT in that price range, or go lower to a cheaper build without a GPU, or an 11th/12th-gen i5.
@PileOSchiit3 ай бұрын
Apples and oranges. Not everyone needs a gaming PC to hold files. For sure you could build a better system for the price but that defeats the purpose of a pre-build.
@crashniels3 ай бұрын
@@PileOSchiit the only difference is the software installed though. Install something like FreeNAS on it and you made it into a NAS.
@0xKruzr3 ай бұрын
I don't see the value add of this over Jonsboing something yourself tbh.
@michaelgleason47913 ай бұрын
No prefab will be better and cheaper than DIY. You're paying for possibly lower power consumption and a turnkey solution (in general).
@Alan.livingston3 ай бұрын
I’ve got a factory nas and a custom built nas. I can see how people would appreciate the low barrier of entry with a factory unit.
@subrezon3 ай бұрын
@@michaelgleason4791 30 watts idle, with a laptop CPU, without any drives? This power consumption is terrible! My server draws 9W at idle, with a desktop CPU, with spun-down drives. It doesn't have 10GbE, but I bet I could add it without adding 21W of power consumption.
@O4KAST3 ай бұрын
@@subrezon What's your build, if you don't mind sharing? Seems super low on consumption, I'm curious
@subrezon3 ай бұрын
@@O4KAST it's an HP EliteDesk 800 G3 with an i3-7100, 32GB RAM, a 500GB SSD and 2x12TB HDD. The exact model does not really matter, actually; you can get this low power consumption out of any one of those business PCs with Intel 6th Gen or newer, be they HP, Dell, Lenovo, Fujitsu, Acer, etc. I picked the HP because it has 2x 3.5" slots inside the chassis. In many countries, those business PCs have to meet stricter power consumption regulation standards, so they use 12VO power supplies and have excellent firmware support for C-states. Also, their cost-cut, minimalistic motherboards consume much less power than any off-the-shelf desktop motherboard.
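(Side note, not something they mentioned: if you want to check whether your own box actually reaches those deep package C-states, powertop is the usual tool on Linux. A minimal sketch, assuming the relevant drivers are loaded:
sudo powertop                # check the "Idle stats" tab for package C8/C9/C10 residency
sudo powertop --auto-tune    # apply the common runtime power-management tunables
A spinning drive or a PCIe device without ASPM will keep the package out of the deep C-states no matter how frugal the CPU is.)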
@texan85803 ай бұрын
So, I learned about the Zima Cube around the start of this year. I kept looking at it and thinking about getting one, but I also had the itch to build a computer. I wish I had seen your original video on the standard Cube sooner (only just watched it and this one today), because I might have built my NAS earlier than I did. I ultimately went DIY with mine, using a Jonsbo N2, and got it done under that $600 mark (I already owned 4x3TB WD NAS drives). So very glad for your review, makes me feel better that I built my own. :-)
@TeamRedSCOUT3 ай бұрын
Re: coil whine: check the brick power supply; dirty electricity is also a possibility with cheap-quality power bricks.
@Spreadie3 ай бұрын
I looked at Zima and eventually decided to grab a N100 NAS board from AliEx and chuck it in a Jonsbo chassis with a bunch of drives. Hasn't skipped a beat.
@HardwareHaven3 ай бұрын
Not a bad decision!
@scytob3 ай бұрын
Great if that meets your needs; it wouldn't meet mine. I need the dual USB4 ports for external devices like eGPUs etc., and the internal PCIe slots for more NVMe devices.
@HelloHelloXD3 ай бұрын
What board?
@Spreadie3 ай бұрын
@@HelloHelloXD MW-N100-NAS - an ITX N100-based board with 2x M.2, 6x SATA, 2x 2.5GbE and 1x 10GbE. Great little board, but the AQC113C 10GbE NIC isn't currently supported by TrueNAS (Core or Scale). Not an issue for me, but something to bear in mind if that is your planned OS. Support will likely come, but it ain't there yet.
@HelloHelloXD3 ай бұрын
@@Spreadie thanks
@pallasplaysyt3 ай бұрын
It's so odd seeing USB 3.0 ports that are black. When I see black USB ports I always think of USB 2.0. I'm so used to USB 3.0 being blue or some other color and immediately knowing that if I plug into this port I'm gonna get faster speeds.

I'm also glad to see ZimaCube adding a 10Gbps RJ45 port. I don't think enough companies do this. They default to SFP+, but many of us homelabbers don't have fiber running to every room of the house. Inside the house it's all RJ45, so why not more 10Gbps RJ45 ports on computers and switches?

I feel like not having an upgrade path other than swapping out a GPU is pretty standard for businesses that build a NAS product. Think Synology and the upgrade to 10Gbps. That's their only upgrade path to my knowledge.
@MH-kc5jr3 ай бұрын
You don't need fiber to use SFP+, just get an SFP+ to RJ45 transceiver; there are also SFP+ DAC cables (Direct Attached Copper cables), so you only need fiber for long runs. Many devices use black USB ports regardless of their spec, just to match the overall aesthetic. It's not uncommon at all for the color not to match the spec.
@pallasplaysyt3 ай бұрын
@@MH-kc5jr I actually didn't know about SFP to RJ45 ports so I learned something today.
@MH-kc5jr3 ай бұрын
@@pallasplaysyt Yeah, that's why I would rather have SFP+, because you have the option of copper over SFP+, RJ45, or fiber.
@Andy-fd5fg3 ай бұрын
@@pallasplaysyt One small issue with SFP+ to RJ45 modules: some of them run a little on the warm side. If your NAS is close to your switch, then go for a DAC cable. They come in a few forms, but are basically two SFP+ modules with a bit of fixed twinax cable between them.
@ironfist77893 ай бұрын
or even a small fiber patch cable if not
@johnperekopsky32713 ай бұрын
About 3 years ago, I built a custom NAS for my sister for under $500: Ryzen 4600G, 32GB RAM, NVMe SSD. Granted, the case I got had only four 3.5" HDD slots and no front-accessible NVMe slots, but the motherboard has a full x16 slot that in theory could be used for 4 NVMe drives with a bifurcation card. It currently has a 10G SFP+ card, as their Wi-Fi router has an SFP+ slot. Running just TrueNAS Scale, the CPU is overkill, but it gives the option of running VMs or other services. Just over 6 months ago, my brother-in-law saw a four-slot Synology on sale for about $550, and we priced out parts (this time with an Intel 12th-gen desktop i5 for video transcoding) and were maybe $10 more, but with a much more powerful CPU and the option to run VMs, etc. These types of solutions are great for those that want something pretty much plug'n'play, but you can get much better specs at a much better price if you're willing to get your hands dirty.
@someoneelse50053 ай бұрын
The 4600G does not support bifurcation, just a heads-up before you attempt this. I attempted it with a 4750G only to learn bifurcation doesn't work.
@ThatGamePerson3 ай бұрын
I love the IDEA of this device, but as someone who also loves used enterprise hardware, it's a hard sell for a machine this costly. I love it for people who want something simpler that they don't have to configure as much (and who may not have a rack to put enterprise gear in) but for me? I'd take a used Dell r630 over this 7 days of the week because for the same price I can get 256 gigs of ram, 32 cores, and even toss in some basic hard drives. It just doesn't make sense.
@konradpietrucha15303 ай бұрын
Done that, then replaced it with 2 Dell Wyse units to limit power consumption. Used enterprise hardware has its benefits, but it requires space, money for energy bills, and a place where you will not hear it all the time. Good for a basement 48U rack connected to solar ;)
@ThatGamePerson3 ай бұрын
@@konradpietrucha1530 Yeah, I'll definitely add that it's not always viable depending on power where someone lives. I live somewhere with monumentally cheap power (that all comes from green sources) so for me it's almost impossible to justify upgrades for efficiency alone.
@dash8brj3 ай бұрын
@@konradpietrucha1530 Enterprise gear is made to expect the worst - hot racks full of its brothers all whirring away - that's why they have aggressive fan profiles. On Dell machines at least you can tame those fans right down without compromising cooling of the server's innards. My 730XD lives in the lounge room, near the TV. Movie nights are just as enjoyable as they were before the Dell took up residence in my rack, thanks to a little script running on an Ubuntu VM. My fridge (a reasonably modern model) is louder than the server, unless I've done maintenance, whereby the server sounds like a jet until the Ubuntu VM kicks in.
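(For anyone curious, the usual trick on those 12th/13th-gen PowerEdge boxes - and this is an assumption about what that script does, the commenter doesn't say - is sending raw IPMI commands to the iDRAC with ipmitool, then polling temperatures and bumping the duty cycle back up if things get hot. The iDRAC address and credentials below are placeholders:
ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x00   # switch to manual fan control
ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x02 0xff 0x14   # set all fans to 0x14 = 20% duty
ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x01   # hand control back to the automatic profile
These raw commands are widely documented for R720/R730-era machines; newer iDRAC 9 firmware reportedly blocks them.)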
@PARitter3 ай бұрын
20:37 Re coil whine: there are quite a few comments about obnoxious coil whine in the IceWhale Discord. It doesn't seem to be endemic to the design, though; likely just a bad pull in the parts lottery on your unit. But since it has been well covered in their own Discord, it's a bit dishonest for IceWhale to pretend not to know about it when you brought it up to them.
@HardwareHaven3 ай бұрын
Interesting... to be clear, they didn't specifically say they had no idea what I was talking about. But they weren't like, "oh yeah we had an issue with such and such". They were just asking if it was from the fans or actually coil whine.
@scytob3 ай бұрын
I don't hear it on mine.
@NiHaoMike643 ай бұрын
Use a piece of plastic tubing to narrow down where the noise is coming from, probably an inductor. A bit of glue can help, particularly with open inductors. Some old (as in Geforce FX era) EVGA GPUs were infamous for buzzing like a substation when running Survivor. They used open toroids while many other manufacturers used potted inductors.
@HardwareHaven3 ай бұрын
@@NiHaoMike64 thanks for the tip!
@ledoynier36943 ай бұрын
@@HardwareHaven if the backplane receives 19V from the PSU, then you'll have two DC to DC converters on the backplane for 5 and 12V, maybe even 3.3V, who knows :) Potting the inductors from those circuits should quiet the thing down as @NiHaoMike64 said
@VincentJulianOng3 ай бұрын
If I wanted this level of jank, I feel like grabbing some of those funky NAS motherboards from AliExpress would be a much more cost-effective route.
@HardwareHaven3 ай бұрын
Possibly more cost effective and possibly more fun. Also possibly much more of a headache haha
@scytob3 ай бұрын
@@HardwareHaven Indeed, not to mention try finding a generic mini-ITX motherboard on Ali or anywhere that has two USB4 ports for TB networking, eGPU, etc., sips less than 29W, has 2 PCIe slots and supports 8 bays with at least one bay using the U.2 form factor... We are not going to see generic motherboards like this for a year or so, but it's gonna be fun once we do. In the meantime I am really liking my ZC Pro - I currently have two eGPUs connected along with a 2x SFP port card via Thunderbolt, and this weekend it will be packed with 10 NVMe drives and 6 spinning drives.
@EnlightenedSavage3 ай бұрын
Unusual choices, not jank. Probably mostly due to supply chain pressures. It seems like a nice kit, just a bit expensive.
@moogs3 ай бұрын
CWWK q670 and it rocks in a Jonsbo n3
@user-kg6uj6ji5p3 ай бұрын
I still remember there being a similar turn-key NAS like this with an N100 + 10GbE, six drives and 2 NVMe. It costs around 300-350 USD, but it's only available on Taobao.
@jonjohnson28443 ай бұрын
Thanks for realising that, at least for us Europeans, 55 watts isn't nothing - although it is acceptable if it's not just purely an occasional media server. My Unraid box is primarily Home Assistant but also Plex and totally legal downloading, and idles at 26W running an i5-4590T (passively cooled), 32GB DDR3, and 4x 4TB HDD with a 1TB SSD cache (so the HDDs only spin up when serving media).
@spg33313 ай бұрын
42°C for an NVMe is normal; I wouldn't worry about it until 60-65+. Also, they will just thermal throttle, unlike HDDs.
@yuan.pingchen30563 ай бұрын
For non-data-center maintainers, hot-swappable drive bays are not a necessary investment. The Antec P101S is good enough for a home server, but you'll need a little tinkering. Buy some used Asus or Dell plastic drive caddies; that allows you to open only one side panel to replace a drive. If you want to avoid messy SATA cables, you need to do some preliminary work: arrange the SATA cables with zip ties and heat-shrink tubes.
@xXREDHEAD93Xx3 ай бұрын
From my experience in laptop repairs, and from what you described with the coil whine, it's likely a voltage regulator coil, probably 5V or 12V if it's hard-drive related... If the coil is not mounted VERY well, they can whine depending on the load on their line...
@kevinhu1963 ай бұрын
The power consumption is unfortunate given that it has an ASM1166 instead of the JMB585 found on the flood of AliExpress boards, and the advantage of the ASM over the JMB is the ability to enter higher C-states.
@codegame0273 ай бұрын
coil whine is typically covered by warranty if it is caught early on
@denvera1g13 ай бұрын
23:41 In this area you can probably only fit a single OCuLink connector. If they wanted to go OCuLink they'd have to place the other one somewhere else; placing it below the PCIe slot would require a vertical mount, which is not only very fragile but would also mean the cable interferes with any PCIe card in that last slot.
@michaelerdman8703 ай бұрын
Now we need a video on what you can build to match it using standard components and what's the best you can build for $1100
@HardwareHaven3 ай бұрын
I need $1,100 first... lol
@Andy-fd5fg3 ай бұрын
@@HardwareHaven Can't give you $1,100... but how about you rummage through your parts bin and see what you can come up with that matches the performance, draws less power and, if you can remember what you paid for the parts, costs less.
@michaelerdman8703 ай бұрын
@@HardwareHaven use those 30-day return policies lol
@BrianThomas3 ай бұрын
You should 3D print a case top with a cut out for the cooler like a hot rod with a hood scoop for the engine. That would look pretty cool IMO. 😅😅😊😂
@HardwareHaven3 ай бұрын
Hahahahahah I considered it for a second
@HelloHelloXD3 ай бұрын
@@HardwareHaven dew it
@philosoaper3 ай бұрын
I'm very glad I decided to finally leave prebuilts behind me last year and build my own...
@win7best3 ай бұрын
I got my HP ProLiant ML350 G9 for under 1000€ and it came with way more than this; it even included six 12TB hard drives.
@doq3 ай бұрын
If IceWhale made a "ZimaCube Core" with only the basics and maybe the guts of the Board and sold it for $300-400, it would become THE NAS killer.
@aviaviavian3 ай бұрын
Honestly that'd be amazing. Remove the ESP32, the fancy case, etc. etc., and it'd be fucking perfect.
@haplopeart3 ай бұрын
I have one; it's been in testing mode for over a month now. It works well, but it has some build mistakes. They installed the out-of-box cooler with the protective plastic still on the heat plate. I resolved that when I replaced the cooler, and now the processor runs much cooler. I am going to swap the fan in the back next, add a fan to the top plate, and put ventilation in the top plate. ZimaOS is a disappointment; it is missing some basic NAS features. A NAS without iSCSI isn't really a NAS. It runs TrueNAS Scale like a dream, so I've gone that way.
@balex963 ай бұрын
A wireless adapter inside a metal case. Very smart.
@jumpmaster52793 ай бұрын
Hmm, good, good. But I think a 4th-gen Core i5 with two 2TB HDDs in mirror, a 256GB SSD and 16GB RAM in an old PC case is enough for me. Jellyfin, PhotoPrism and a simple SMB share are all I need.
@HardwareHaven3 ай бұрын
If it works, it works!
@JohnDlugosz3 ай бұрын
re shared lanes: A traditional RAID PCIe card (or integrated on the MB, equivalent) also uses the shared lanes of that card for all the drives. I guess it comes down to how many (and what generation). And, I suppose what you consider adequate bandwidth is different for mechanical spinning HDD vs M.2 solid state.
@davidwestra81813 ай бұрын
I built my own "version" of the base ZimaCube using an AliExpress N100 motherboard: boot SSD, 2 NVMe, and eight 8TB drives in an N304 case. I don't need four 2.5Gb NICs, so I wish some other choices had been made for PCIe lane distribution. But overall, it's a fantastic system that runs great without an issue. Though mine idles around 72W.
@HardwareHaven3 ай бұрын
Yeah I've never understood the quad 2.5Gb stuff on EVERY board haha
@sybreeder863 ай бұрын
Those HDD trays look very similar to Dell ones. I'm curious if Dell R730 or R740 ones would fit :D
@HardwareHaven3 ай бұрын
Hmmm... maybe
@ABaumstumpf3 ай бұрын
The idea of the N100 is to be a cheap, efficient little CPU, but for a NAS it's not all that well suited. Having only 9 lanes is already not good; those being PCIe 3.0 only makes it really bad. The 1235U not only has way more lanes (20), but they are also PCIe 4.0 - that is more than 4 times the available bandwidth. And HOLY MOLY, 29W at idle!?!? That is bad. My full-size desktop is just about 50W... and that is in Win10, with the browser open, an SSD and HDD, a dedicated GPU and a few fans running.
@MyersJ2Original3 ай бұрын
1 lane of Gen 3 for the boot drive would be fine. It's a boot drive; it doesn't do much. Use that other lane for a permanently mounted 10GbE NIC and free up the M.2 slot.
@HardwareHaven3 ай бұрын
1 lane of gen 3 wouldn’t be enough for 10Gb. Like I said, it might not be the best product for everyone but I think they did a fairly solid job of balancing PCIe lanes for all the stuff they shoved in here haha
@tim31723 ай бұрын
@@HardwareHaven 1 lane of Gen 3 is 940 MBps, which, when you account for overhead, is slightly more than is required for 10,000Mbps. So... no... it would definitely be enough for 10Gb.
@HardwareHaven3 ай бұрын
I think your math is a bit off. 10Gb/s ÷ 8 = 1.25GB/s. Obviously there is network protocol overhead, but you still end up with 1,100-1,200 MB/s on a good 10Gb connection. So while you can get most of the way there on a Gen3 x1 connection, it will still be like 10-20% slower. If you don't believe me, just look for a 10Gb card that uses an x1 connector. Most use x4 so they can still work properly on a Gen2 x4 slot.
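(Rough numbers behind that, for anyone following along: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so one lane carries about 8 x 128/130 / 8 ≈ 0.985 GB/s of payload, while 10GbE is 10 / 8 = 1.25 GB/s on the wire and roughly 1.1-1.2 GB/s in practice after protocol overhead - so a Gen3 x1 link comes up about 10-20% short, matching the figures above.)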
@adubs.3 ай бұрын
I have to thank you for showing me CasaOS in general and now ZimaOS. It's not the full-fat Unraid replacement I want, but it's cool to see something else in the market.
@NPzed3 ай бұрын
MCIO connectors, rather than OCuLink, would probably meet their structural limitations as well as the signal requirements, and the connectors/cables/uses are standardized in server/enterprise hardware.
@Lunolux3 ай бұрын
Thanks for the review. Not gonna lie, it's just out of my price range; even at $1,100 this is still too expensive IMO.
@Napert3 ай бұрын
12:53 I run a very similar setup (motherboard SATA passed through to a TrueNAS Scale VM) and had the exact same issue. Did you try either SSHing into TrueNAS or using the web shell and running sudo smartctl -x /dev/sdx | grep "Current Temperature:" ? For me this worked fine to check temps, and after fiddling and some black magic (I forgot how I fixed it, probably by removing SMART extra options under Storage - Pool - Manage Disks) TrueNAS started to log drive temps normally.
@HardwareHaven3 ай бұрын
No but I'll have to look into that, thanks!
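(For anyone wanting to try the suggestion above, a minimal sketch that sweeps every SATA disk at once; the /dev/sd? glob is an assumption, adjust it to your layout:
for d in /dev/sd?; do echo "== $d"; sudo smartctl -A "$d" | grep -i temperature; done
smartctl -A prints the attribute table, which includes Temperature_Celsius on most SATA drives; swap in smartctl -x and grep "Current Temperature" for drives that only report it in the extended output, as in the comment above.)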
@djvidual82883 ай бұрын
I achieved lower idle power consumption with my full-size ATX Z270 mainboard and an undervolted, underclocked 7700K. Not comparable in terms of performance, yes, but still impressive, since I used a regular 500W ATX power supply from be quiet!. Seeing these power consumption figures is a little discouraging.
@tim31723 ай бұрын
16:08 I'm sorry, are we describing a 42°C SSD as "warm"? It's designed to run all day, every day at 70°C. It won't throttle until 85°C. Yikes.
@HardwareHaven3 ай бұрын
42C when it was literally doing nothing.
@scytob3 ай бұрын
The ZC Pro draws about 25W at idle when there are no drives spinning, the bay 7 NVMe carrier is not in, and the 10Gb NVMe Ethernet adapter is unplugged. Adding the NVMe 10GbE adapter adds about 3W. Inserting the bay 7 sled with 2 NVMe drives brings it up to ~37W when running, along with 4 passive containers (this is on ZimaOS); it seems like the switch chip in that sled is pretty power hungry. Adding 6x 24TB IronWolf Pros with ZFS brought it up to ~80W. Adding 2 PCIe cards with a total of 6 NVMe drives on them (the cards use PCIe switch chips) brought the system up to 100-110W (as I said earlier, PCIe switch chips are power hungry). Also note in the 25W idle case the CPU was around 9W; once fully loaded the CPU was around 13W. I have no idea if this is good or bad, do with it what you will.
@Yuriel19813 ай бұрын
OCuLink over eDP at least mitigates the $1,200 price tag; better standardization is always worth a bit more. But man, that's expensive, and if it's expandability and interoperability I want, I'll build my own. Or, for that price, have it built by a local SI.
@peyton_uwu3 ай бұрын
5:25 does it deliver cupcakes as well?
@imjamf3 ай бұрын
The biggest use-case for WMR was inside-out tracking. Even when the Rift S and Quest came out, WMR had better headsets.
@MazeFrame3 ай бұрын
Something like a Shuttle SW580R8 with CPU, RAM and a 10G NIC results in a cheaper "starter" than this. While not turn-key, the Shuttle solution is considerably cheaper.
@Aruneh3 ай бұрын
That's a lot of money for a lot of jank.
@KarlMeyer3 ай бұрын
I'd be interested to see what ugreen would do rackmount wise. Could be interesting.
@RafaGmod3 ай бұрын
Was the 28W without the OS running, or with Proxmox running? Idling in the BIOS the CPU does not have a governor and it sips more power than with the OS running. If it was with the OS, that's a lot. My Xeon 2650 v4 with 32GB, 2 disks and 1 SSD sips 38W at idle.
@kaminekoch.74653 ай бұрын
New AMD CPUs idle at like 80W, so it's all relative.
@scytob3 ай бұрын
You can reduce PL1 in the BIOS if you want the CPU to run at less than the default ~14W; also, on the Pro, that 10GbE NVMe adapter draws quite a bit.
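(The same cap can also be set from a running Linux system through the intel_rapl powercap interface if the BIOS option isn't exposed. A sketch, assuming the driver is loaded; values are in microwatts and reset at reboot:
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw                      # current long-term (PL1) limit for package 0
echo 10000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw   # cap PL1 at 10 W
)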
@tim31723 ай бұрын
@@kaminekoch.7465 What AMD CPU idles at 80w?
@lets-automate3 ай бұрын
Had your Windows PC been to sleep by any chance? I've been having a problem for about a year with Win 11 that file copy speeds are only about 115MB/s after it comes out of sleep, until I reboot my PC then full speed again. (Even local HDD file copies)
@jewlouds3 ай бұрын
what is the benefit of passing through the pcie and sata hardware directly vs creating qemu disks and assigning those to the truenas VM?
@dmille62 күн бұрын
How do the Thunderbolt 4 ports work? Can I plug a MacBook into it and use it like an external drive/DAS?
@ewenchan12393 ай бұрын
The PCIe 4.0 x4 electrical connection in a physical x16 slot is such a tease. If it had the full x16 electrical connection so that I could drop my Mellanox ConnectX-4 100 Gbps IB card in there, then it might be a possibility. But without it, it's a no-go for me. The N100 non-Pro version costs a little bit more than the QNAP 4-bay NAS that I bought years ago.
@tim31723 ай бұрын
You don't need a 16x electrical to connect a 16x card. You can plug it into a 1x electrical and have it operate at 1x speed. The CX4 isn't a high-power device (25 or more watts), so there is no reason to require 16x electrical, which in HP mode delivers up to 75 watts. The CX4 was designed to use exactly 24.80 watts specifically so it didn't require a 16x electrical slot. Please learn how PCIe works before spouting off nonsense. To the scruds who upvoted their comment, stop believing everything you see bleated out without verifying it.
@ewenchan12393 ай бұрын
@@tim3172 "You don't need a 16x electrical to connect a 16x card." PATENTLY false. If you actually WANT it to operate at the full PCIe 3.0 x16 128 Gbps of theoretical bandwidth, then you NEED all of the electrical connections to be in place for it to be able to do that. The fact that the slot supports PCIe 4.0 is pretty much irrelevant here because the CARD itself, does not. "You can plug it into a 1x electrical and have it operate at 1x speed." That's so dumb. (And by the way, it does NOT scale linearly with the ELECTRICAL width of the connector. Ask me how I know.) With said Mellanox ConnectX-4 card, plugged into the primary PCIe 4.0 x16 slot of an Asus X570 Prime motherboard, I can get 96.9 Gbps with IB bandwidth send benchmark. (ib_send_bw) If you plug it into the bottom x4 slot, you only get about 14 Gbps from the same card, running exactly the same test. The theory would have suggested that you only give it 1/4 of the lanes, you should EXPECT 1/4 of the maximum throughput (25 Gbps). But in actual testing, that ended up NOT being case. (And I tested it with a point-to-point connection as well as through my 36-port Mellanox MSB-7890 externally managed 100 Gbps IB switch.) So, whilst you can, in theory, operate a x16 card in a x1 electrical/x16 physical slot (or open ended slot of any other (fewer) number of lanes), you would be giving up so much performance that it would be a terrible waste of a 100 Gbps IB NIC. "The CX4 isn't a high-power device (25 or more watts), so there is no reason to require 16x electrical, which in HP mode delivers up to 75 watts." That is so fucking stupid. "Slot power All PCI express cards may consume up to 3 A at +3.3 V (9.9 W). The amount of +12 V and total power they may consume depends on the form factor and the role of the card:[28]: 35-36 [29][30] x1 cards are limited to 0.5 A at +12 V (6 W) and 10 W combined. x4 and wider cards are limited to 2.1 A at +12 V (25 W) and 25 W combined. A full-sized x1 card may draw up to the 25 W limits after initialization and software configuration as a high-power device. A full-sized x16 graphics card may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a high-power device.[23]: 38-39 " (Source: en.wikipedia.org/wiki/PCI_Express#Power) Just because a that x16 card MAY draw up to 5.5 A @ +12 V (66 W) and "75 W combined **after initialization and software confirguration as a high-power device**" DOESN'T MEAN that it HAS to. What fucking part of "**MAY** draw **up to**..." don't you seem to fucking understand? You're talking about slot power. That says NOTHING about the fact that each lane of PCIe 3.0 operates at 8 GT/s (i.e. 8 Gbps raw data throughput). 8 Gbps/lane * 16 lanes = 128 Gbps of raw data throughput. The shit that you just wrote there, says NOTHING about the DATA throughput capacity, that is provided with each additional PCIe 3.0 lane. "The CX4 was designed to use exactly 24.80 watts specifically so it didn't require a 16x electrical slot." You're a fucking moron. If you do not make the ELECTRICAL connection for pins 19 through 82 inclusively, then each lane that you are NOT giving the card, then you are starving it of bandwidth at the rate of 8 GT/s/lane. Thus, if you put it in an x4 electrical lane where by pins 19-22 (lane 2), 23-26 (lane 3), and pins 27-30 (lane 4) are ELECTRICALLY connected, then you CANNOT transmit data to lanes where it does NOT make an ELECTRICAL connection. 
That's fucking electrical and computer engineering 101 -- you can't pass electrons through where ELECTRICAL connections don't exist. So fucking stupid. Again, you talk about POWER. You don't say ANYTHING about the fact that each PCIe (3.0) lane takes up 4 pins from the PCIe connector itself. (cf. en.wikipedia.org/wiki/PCI_Express#Pinout) Read the fucking pinout table. "Please learn how PCIe works before spouting off nonsense." Yeah, no fucking shit. Please learn how PCIe works before spouting off nonsense. If you aren't making the ELECTRICAL connections, then how the fuck are you going to transmit DATA when the slot isn't ELECTRICALLY wired up for higher number of lanes, dumbass??? (It's literally in the fucking pinout table as well as the comparison table. You can LITERALLY wiki that shit.) How the fuck are you going to get 100 Gbps (out of a 128 Gbps interface) through 8 lanes of PCIe 3.0??? And no, the Mellanox ConnectX-4 will NOT run at PCIe 4.0 speeds. (You can LITERALLY prove that by booting the Ubuntu Desktop 24.04 installer, clicking the 'x' when it wants to install it so that it'll boot into the live desktop, and then running `lspci -nv` to find the link capacity and link state of said Mellanox ConnectX-4. If it says it's 8 GT/s, then it's PCIe 3.0. "PCI Express Interface The ConnectX®-4 adapter card supports PCI Express Gen 3.0 (1.1 and 2.0 compatible) through an x8 or x16 edge connector. The device can be either a master initiating the PCI Express bus operations, or a subordinate responding to PCI bus operations. The following lists PCIe interface features: PCIe 3.0 compliant, 2.0 and 1.1 compatible 2.5, 5.0, or 8.0, link rate x8/x16 Auto-negotiates to x16, x8, x4, x2, or x1 Support for MSI/MSI-X mechanisms" (Source: docs.nvidia.com/networking/display/connectx4ib/interfaces, docs.nvidia.com/networking/display/connectx4ib/introduction) Yes, you should really learn how PCIe works before spouting off nonsense. I 100% agree with you on that one. Read. What a load of bullshit. (You've CLEARLY never ran the Mellanox ConnectX-4 (MCX456A-ECAT) in a x4 slot before. It would probably help if you're ACTUALLY running the hardware rather than just talking outta your ass, where you ONLY talk about the POWER aspect of the PCIe Specification, and then COMPLETELY ignore the rest of the spec which outlines the bandwidth that is supplied per PCIe 3.0 lane.)
@Battlewear3 ай бұрын
Yes, check out the base model
@lifefromscratch28183 ай бұрын
Four M.2 drives at 1 lane each only give the same peak throughput as a single NVMe drive, but with consumer SSDs they'll have better sustained performance from more combined cache capacity, so there's that.
@HardwareHaven3 ай бұрын
Yeah, I think I said something along those lines. Maybe I only said benefits from IOPS and latency, but there’s still a benefit!
@frankwong94863 ай бұрын
Is it possible to 3D print a hat/top cover for it, to support a larger CPU cooler and full-height PCIe cards, plus some additional fans?
@HardwareHaven3 ай бұрын
I would imagine it would be very easy to do so!
@scytob3 ай бұрын
Yes, there are several lids for fans already. TBH this doesn't need a larger cooler; it needs a better cooler. The issue is that the fan is super loud for no real reason (a bit like a Dodge Charger). Plug a Noctua fan in with the same ramp and it's basically a normal sort of noise.
@LucodeHome3 ай бұрын
I would love to see a home blade server with multiple Alder Lake-based SBCs, rather than yet another piece of NAS hardware. Even in this case it is pretty powerful.
@Nathan150383 ай бұрын
10:35 overclocking your NAS would be crazy😂
@zedfauc85403 ай бұрын
The Hardware Haven intro song feels like I'm hyperventilating after running a 10k and my chest is gonna fill up with nitrous oxide and implode from within.
@rezenclowd33 ай бұрын
Interesting that it uses a U.2-to-M.2 carrier. I'd rather remove the M.2 carrier card and use a 15.36TB or 30.72TB U.2... however, those get hot, which will be a problem either way. Consumer M.2 sucks, especially when hot, U.2/U.3 runs hot either way, and the cooling on the ZimaCube Pro is inadequate regardless.
@omegatotal3 ай бұрын
NAND flash chips don't care about getting quite warm; they will continue to work fine until they get over 100°C. The storage controller on the drive is fine at higher temps too; a lot of them will operate up to 90°C at full speed without issues.
@tim31723 ай бұрын
NAND flash stores information with lower power consumption and less reliance on refresh cycles (therefore less wear) when written at higher temperatures. That's why older NVME SSD designs have heat spreaders: to move the heat *from* the controller *to* the NAND flash. It's always funny seeing the big heatsinks from people trying to avoid NAND running "too hot".
@rezenclowd33 ай бұрын
@tim3172 My U.3 drives run at 50 to 55°C at 50% duty on my water-cooled rig with an external rad and a touch of airflow internally. 70°C is critical. When writing they consume 20 to 30W each.
@benjiderrick45903 ай бұрын
There are so many options to build a desktop i3 or similar for less than a grand that match or even outperform this ZimaCube while drawing nearly the same amount of power at idle. 60W with drives idling is what I get with a 1st-gen Ryzen that has the C-state bug preventing it from sipping power while doing nothing. Crazy that it's not even more power efficient in that regard when it's using a 19V adapter and a mobile-class CPU rated at 15W TDP.
@gustersongusterson41203 ай бұрын
Great conclusion. It's way too expensive, and it doesn't make a ton of sense from an engineering perspective. I think you were on to something with the ATX-format NAS video. A double-decker ITX case and a backplane solution seems like it would make a lot more sense.
@fossacornrow3 ай бұрын
Hi! I had the coil whine issue with my ZimaCube (N100, not the Pro). Fortunately, my unit has been replaced
@HardwareHaven3 ай бұрын
Through an RMA? I'm curious
@fossacornrow3 ай бұрын
@@HardwareHaven We’ve tried many tests to diagnose this issue. I was fortunate to receive a new ZimaCube. Some of the users who encountered the same problem got a new backplane. Therefore, I consider myself lucky and am very satisfied with this NAS.
@My03Tundra3 ай бұрын
Your sponsorship made me laugh, as it reminded me of when one of my employers got new, easy-to-use chairs. One of my coworkers, an older lady, INSISTED on a training session on how to use the new chair. I'm not making that up. In the training session she was mad at the rest of us, because she took it seriously while even the manager who was "responsible" for the training had a hard time not laughing as everyone else played with the simple, easy-to-use chairs, which weren't that bad.
@GreedoShot3 ай бұрын
JONSBO: "am i a joke to you?"
@TemplePate013 ай бұрын
I'd like to see a ZimaOS alternative called Cosmos Cloud. Seems pretty interesting.
@denvera1g13 ай бұрын
Like I've been saying ever since the i3-N300 and N305 launched: there is a huge difference between a single-channel 16GB soft limit with 9 lanes of PCIe 3.0, and dual-channel up to 256GB with 20 lanes of PCIe 3.0/4.0 as found on the i3-1215U. Alder Lake-N really should never have carried the i3 name, and it especially should never have carried the $309 CSP of the 1215U.
@jan-Juta3 ай бұрын
ZimaOS is basically Debian + CasaOS; it's quite popular outside of the Zima branding.
@thespencerowen3 ай бұрын
The TDP of this CPU is 6 watts (12 watts for the Pro model). I don't understand why an upgraded fan would be needed.
@PileOfEmptyTapes3 ай бұрын
Nope. TDP for an i5-1235U is 15 W, with max turbo power of 55 W.
@scytob3 ай бұрын
It's a noisy fan. Changing to a better cooler and fan doesn't really change the temps much; the biggest temp difference comes from repasting the IHS and CPU die. Either way, with stock and upgraded coolers, before and after repasting, I never hit junction max. Some units do need the fan module rotating. Units will start shipping with an upgraded fan and cooler sometime in the next couple of months.
@commander33273 ай бұрын
When did you change your logo?
@Cynyr3 ай бұрын
Are they selling that m.2 carrier separately?
@afriendlynorwegianguy32843 ай бұрын
Regarding the reads from the server with TrueNAS Scale: I had a similar experience with reading and writing on Scale on an HP DL320 G6 server, but that disappeared when switching to Core 🤷‍♂️
@Cam.Klingon3 ай бұрын
Would an aliexpress laptop motherboard be a better alternative, with potentially more pci lanes, for potentially similar money? Just posted an x99 board to your discord that looked interesting.
@sotamso9424Ай бұрын
Hello, what do you think about the security of ZimaOS?
@Viking88883 ай бұрын
I think the default setting for all companies is "greed" first. They don't want you to have an upgrade path that doesn't include giving them more of your money. The only way we as consumers will EVER get the best of all worlds is a company that doesn't default to greed first, and that will just never happen in this current system of things. Perhaps the only way would be a non profit, but would that non profit even survive very long? Not without making enough money to keep making products lots of people want. If that non profit starts having money issues, will they just disappear or switch over to "greed mode" to survive?
@HardwareHaven3 ай бұрын
Well, to be fair, a company’s job is to make money. It’s possible we have disagreements on the benefits of capitalism haha, but I would at least say that one thing we can try to do is support companies that make things easier to repair and upgrade. Voting with your wallet is a good way to get the change you want to see!
@scytob3 ай бұрын
Given I have been able to install what OS I want, install what drives I want, install the PCIe cards I want and connect what TB devices I want (including 2 eGPUs), characterizing IW as greedy feels a bit off.
@Viking88883 ай бұрын
@HardwareHaven I understand that, what I don't get is charging WAY more than something is really worth. Nvidia is a good ex. The RTX 3080 was $699, but the RTX 4080 surpassed inflation by leaps and bounds. It should have been about $830. I have no issues with a company making money, it's just the gouging to extract every penny they can that chaps my hide.
@Viking88883 ай бұрын
@@scytob I'm glad you are getting your money's worth from it; I just don't see their price as fair. I see it as way too high for what you get, seeing as a person could build something far better for far less. I'm of course not telling you what to do or what something is worth to you or anyone else. These are just my thoughts in writing. I truly hope you continue to enjoy your NAS.
@scytob3 ай бұрын
Your definition of better is not the same as mine - you can't build a machine with the same features in the same form factor. I know you don't want all the features, but that doesn't change the fact you CAN'T build something with a superset of this box's spec, plus more. I get it if you say you don't value those extra features; that's fair. But asserting you can build better is disingenuous, as you have yet to point me to a mini-ITX mobo with 2 PCIe slots and two TB4 connections that support SW CM to allow for full 40Gbps cross-domain. Until you can, your claim that you can build better is bogus. You can build less for a better price, and that has more value to you. And that's OK. But better for you doesn't mean you can build something objectively better, just something subjectively better. I am only jumping on people who keep asserting their subjective view as an objective statement of fact for everyone.
@TrTai3 ай бұрын
The total package is a little too jank for the price. I do like it in theory, but with how it's laid out I'd be more interested in just grabbing the chassis
@Cooper33120003 ай бұрын
I wish some company would make a U.2 NVME NAS enclosure that was affordable.
@shephusted27143 ай бұрын
Meh. They are trying harder, but I am worried about people who buy this instead of building something better on their own for less money. For a business that needs a quick fix of a dual NAS, this could be the ticket.
@HardwareHaven3 ай бұрын
For sure. If the specific features are desired though, it's actually a bit tricky to replicate with off-the-shelf parts. The question is how many people REALLY need quad NVMe, 6 HDDs, 10Gb, and PCIe cards.
@scytob3 ай бұрын
You can't build the same feature set as the Pro by buying off the shelf at all (at least not in mini-ITX form factor). For example, I needed the USB4/TB4 connections and the two PCIe slots; if you can find a mobo off the shelf that does that, I am all ears... I have another build coming up where I need the same...
@scytob3 ай бұрын
@@HardwareHaven Not tricky, impossible if you want USB4 & 2 PCIe slots (and trust me, y'all will want USB4 [USB-40 or higher] in about 6 months...)
@martontichi86113 ай бұрын
you can get a proper Dell tower server for that kind of money with a Xeon. maybe even new.
@garrettrinquest16053 ай бұрын
To me the hardware tinkering is the easy bit. I'll just build my own. It's getting around the software jank that I'm worried about
@psynchro2 ай бұрын
Really informative, thank you. It would be great if you linked this video in your last video's comments and info.
@dagamore3 ай бұрын
So glad you got another video out, I was starting to get worried about you. Hope all is well.
@HardwareHaven3 ай бұрын
I appreciate the concern! I was already behind from being on vacation, so I missed a week. Then I had an issue with a miscommunication on the sponsor spot (partly my fault), so this got delayed as well. YouTube is hard sometimes... but thanks!
@i7-1260P3 ай бұрын
Do you use an Arc A310 for AV1? :)
@giannistsolebas69623 ай бұрын
I would like to see the ZimaCube now that it's officially released, and a video dedicated to ZimaOS!!
@gotelldonn3 ай бұрын
Really great job on this video. Thanks!
@HardwareHaven3 ай бұрын
Appreciate it!
@Machistmo3 ай бұрын
Wasn't Zima a clear malt beverage in the '90s?
@blacklion792 ай бұрын
An RJ45, non-Intel 10G NIC is a shame. SFP+ and an X520 is a must.
@johnbeer49633 ай бұрын
Yeah, this is what happens when people call the low-power SoC-based systems "bad": we end up with much more expensive stuff that uses more power.
@rogerhalt39913 ай бұрын
I get that people pay for convenience but this is a lot more expensive than other off the shelf options and I personally would rather build a Jonsbo NAS for $600.
@scytob3 ай бұрын
Awesome, you do that - tell me how you get on sourcing a 2-slot PCIe mobo with integrated USB-40 ports. I am keen to do the same if you can find all of that for $600, or at all. Oh, you don't want 2 slots of PCIe or TB4/USB4? Then why would you compare it to this product at all? You want something different. That's OK, but your point of comparison is irrelevant.
@elduderino77673 ай бұрын
If you care about electricity consumption/efficiency, then you can't beat AMD; any U-series Ryzen processor is the way to go, even the older 4000 series. If it has to be Intel, then the relatively underpowered N305 could be an option. You don't have to break the bank either: if you want to go really cheap, you can just buy a busted-up old laptop, harvest the motherboard and build your own server out of it.
@pani_alex3 ай бұрын
OCuLink, M.2, U.2 and also a normal PCIe slot - all easy to reuse.
@pkt12133 ай бұрын
It is a bit expensive for me. I would have also liked to have seen a vPro chip at that price.
@AndyIsHereBoi3 ай бұрын
new logo colors?
@hyde81183 ай бұрын
Awesome hardware, but i think it will cost A LOT, and it seems that it's better to buy a NAS and a separate miniPC for the same price
@spambot71103 ай бұрын
The infuriating thing about coil whine is that some people just don't hear it, at least at higher frequencies. I think in some cases this is just because different people's high-frequency hearing ages differently, so most people are physically incapable of hearing a high-frequency sound that someone else might find incredibly irritating. The other thing is neurological: I'm constantly hearing sounds that other people don't hear (and once I identify the source, if they listen really carefully they can hear it too, so I know it's not hallucination). If you're maybe neuroatypical in just a certain way, you might literally be hearing sounds that other people's brains filter out and render imperceptible, which is absolutely infuriating because the sound is right there, you can see it on the spectrogram, and people will be like "no, our definition of perceptible sound specifically excludes sounds like that one, you're just doing perception wrong".
@ajpenninga3 ай бұрын
Given the pricing, I'm 100% onboard to go 45Drives HL series compared to this. :(
@oscarfiala21043 ай бұрын
This seems silly; it draws a similar amount of power to my Xeon E3-1245 v3 (and mine is silent). Though the ZimaCube is smaller (I have an ATX case).
@pm2tube3 ай бұрын
40 seconds into the video and I'm already appreciating the effort on the colour scheme of the channel's logo, the footage and the background lighting. You are nowhere near out of ideas! My man.
@oscarcharliezulu3 ай бұрын
Super interesting use of hardware that I can see is very flexible. Your review is a bit negative, but you gave no reason except that it's 'weird'.
@HardwareHaven3 ай бұрын
And also $1100 lol
@HardwareHaven3 ай бұрын
And that it can’t be upgraded or repaired if the manufacturer stops making components because they use a proprietary connector.
@scytob3 ай бұрын
@@HardwareHaven Oh, you can upgrade and repair a Synology or QNAP? No, thought not...
@HardwareHaven3 ай бұрын
Did I ever say they could be upgraded? Did I even mention either of those? At least with those (well at least synology) the platform is very mature with really good software.
@d3xbot3 ай бұрын
“The power consumption isn’t great…” *looks at my enterprise surplus Dell using 200W at idle…* I think it’ll be fine
@c0p0n3 ай бұрын
No need for eDP or OCuLink. Thunderbolt 4 is more than capable and is already present on the board.