N305, interesting how they manage those PCIe lanes. Do the 8 slots take 8 lanes, with 1 for the 10G network? If so, might the 10G bottleneck and max out at 8 Gbps?
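A rough sanity check of that lane math (a sketch assuming PCIe 3.0, where one lane carries roughly 8 Gb/s usable; the one-lane-per-drive layout is the commenter's guess, not confirmed in the video):

```python
# Rough PCIe 3.0 bandwidth check for the lane-allocation question above.
# Assumes one PCIe 3.0 lane ~= 8 Gb/s usable (8 GT/s with 128b/130b
# encoding is ~7.88 Gb/s; 8 is close enough for a back-of-the-envelope).

GBPS_PER_PCIE3_LANE = 8          # approximate usable Gb/s per lane
network_gbps = 10                # one 10GBase-T port

drive_lanes = 1                  # if each of the 8 slots really gets 1 lane
drive_gbps = drive_lanes * GBPS_PER_PCIE3_LANE

# A transfer from a single drive is capped by the slower of the two paths.
bottleneck = min(drive_gbps, network_gbps)
print(f"single drive: ~{drive_gbps} Gb/s, network: {network_gbps} Gb/s")
print(f"a one-drive transfer caps out at ~{bottleneck} Gb/s")
```

So under that assumption it is the single lane, not the 10G port, that sets the ~8 Gbps ceiling.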
@ozmosyd • 3 months ago
Hmm🤔
@c0unt_zer0 • 3 months ago
Did I miss it or is there no reference for the price point? Not really a complete review without it imo.
@T3hBeowulf • 3 months ago
My understanding is that it's pre-release with no firm price yet. It is a neat concept, but I'm not sold on the proprietary OS and projected price. The Gen3 PCIe drive limits and limited lanes make this a bit of a niche product.
@rogerfinch7651 • 3 months ago
LTT has a similar one in their latest vid that’s cheap
@TheN4UM4N • 3 months ago
@rogerfinch7651 Couldn't find the video you're talking about, do you have a link?
@engineer6250 • 3 months ago
@rogerfinch7651 Another request for a link, please? Thank you. 😊
@Akuba • 3 months ago
@TheN4UM4N It's the "Paying for Cloud Storage is Stupid" video from 5 months ago. It recently started to show up in recommendations.
@bastian775 • 3 months ago
At the price point of $1,200 (which I had to read somewhere else), with only 9 PCIe lanes and slow write speeds for some reason, I'm not so sure about this.
@JamesGreen-gv4yn • 3 months ago
In case they ask for feedback on your review: my first thought was regarding the rubber bands provided to hold on the heatsinks. Rubber degrades over time, and faster with greater heat. I would expect these to fail eventually.
@VirtualizationHowto • 3 months ago
James, I agree... I already had one break just tussling around with it during the install. But I do like the fact that heatsinks are included to begin with. The heatsink itself is good quality.
@Equality-Justice-Transparency • 3 months ago
Usually M.2 SSD heatsinks just get glued/taped on, and buying this tape is very cheap!
@JamesGreen-gv4yn • 3 months ago
@Equality-Justice-Transparency I believe the point is that you should not need to buy extra items other than the SSDs. This type of product is best when all-inclusive. Even an option to purchase it with pre-installed M.2 SSD modules would be a plus.
@logiclust • 2 months ago
Just installed Unraid on mine because the thought of burning one of the slots for the boot drive really irked me... I'll let you know how it goes.
@bsl2501 • 3 months ago
Further things I'd like to know:
- Energy consumption? (idle, load)
- Can I (without jumping through hoops and extra hurdles) run whatever flavor of Linux or BSD on this?
- Noise?
I really like the 10GBase-T. Very neat small form factor.
@HikaruGCT • 3 months ago
Power: wattage? You can run anything on this, even Proxmox. Noise: 19 dB.
@g.s.3389 • 3 months ago
can you please make a video on how to install hcibench and vdbench with grafana? thx
@drmyothant9128 • 2 months ago
Can we do file sync with Google Drive?
@manaspatnaik6134 • 2 months ago
Very informative and nicely done video. Can you confirm if I can change the OS to TrueNAS completely?
@grtitann7425 • 3 months ago
Nice device, but will wait for the AMD version with ECC.
@gaboguerra950 • 3 months ago
No Thunderbolt option?
@zparihar • 3 months ago
BTRFS for RAID 5/6? Am I missing something here?
@Cpgeekorg • 3 months ago
Only 280 MB/s throughput!? That's terrible! That's like early SATA SSD speeds... If you're going to do an NVMe NAS built around M.2, you specifically should give 4 dedicated PCIe lanes to each M.2 slot (because that's the spec, and if you halve the lanes, you typically end up with half the performance per drive). That said, PCIe 3.0 x4 NVMe drives get you roughly 3000 MB/s (or 24,000 Mb/s) per drive. Giving the benefit of the doubt for real-world performance (random 4K RW Q32T16), call it 2400 MB/s (or 19,200 Mb/s) per disk. That would mean that in order to fully utilize that performance, you'd need at least 20 Gb/s of networking per drive (allowing losses for RAID). So a 4-drive system with PCIe 3.0 x4 would need 80 Gb/s of networking, and how that's laid out would depend on how you want to architect the read/write load of the clients of this device, i.e. a single 100 Gb/s QSFP28 port vs. two 40 Gb/s QSFP+ ports vs. eight 10G SFP+ connections (which, with link aggregation, would max out per single client at the rating of an individual connection). Even assuming half the NVMe throughput because of an oversight like only giving two PCIe lanes per drive, that'd STILL be roughly 1100 MB/s (or 8,800 Mb/s) of real-world performance per drive, which would require around 35 Gb/s of networking in total. For a box like this, catering to homelab / budget customers, I'd budget for four 10GBase-T ports and use link aggregation to tie them together.
tl;dr: putting your FAST NVMe in a separate enclosure with insufficient networking and insufficient PCIe allocation OUTSIDE of the VM hosts is a complete waste of money when I can build a 16-bay SATA HARD DRIVE solution that gives you WAAAAY more storage and does roughly the same 250-280 MB/s of throughput (for the same random 4K Q32T16 benchmarks over 10G networking) for similar money. Put the NVMe in your virtual hosts themselves if you want fast VM storage, and back that up to a NAS of spinning platters regularly...
OR build a better 4-NVMe NAS out of a desktop motherboard that has 4 M.2 slots and drop in a quad-port 10G, dual-port 40G, or dual-port 100G network card. You could even fit that in one of those nice 1U ATX/EATX rack cases if you wanted, and it would provide a SIGNIFICANT throughput advantage vs. this NAS at not THAT much more expense (especially if you went with a reasonably economical CPU and memory configuration and booted something like TrueNAS from a cheap SATA SSD or whatever). Another way to go might be hyper-converged infrastructure with Ceph, which seems to be a favored course for many (though I haven't studied it closely myself); that would require some really high-speed networking between your virtualization cluster nodes, and you'd have to have enough nodes to make it worthwhile (I'd estimate 5 nodes minimum), which reduces its usefulness for homelab users but may be perfect for a mid-range SMB solution.
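The per-drive networking arithmetic in the comment above can be sketched as follows (the 2400 MB/s figure is the commenter's real-world estimate for a PCIe 3.0 x4 NVMe drive, not a measurement of this unit):

```python
# Sketch of the "how much networking do 4 fast NVMe drives need" math.
# Figures are rough estimates from the discussion, not benchmarks.

def mb_s_to_gb_s(mb_per_s):
    """Convert MB/s of disk throughput to the Gb/s of network it can fill."""
    return mb_per_s * 8 / 1000

per_drive_mb_s = 2400            # assumed real-world PCIe 3.0 x4 NVMe rate
drives = 4

per_drive_gb_s = mb_s_to_gb_s(per_drive_mb_s)   # ~19.2 Gb/s per drive
total_gb_s = per_drive_gb_s * drives            # ~76.8 Gb/s for the array

print(f"~{per_drive_gb_s:.1f} Gb/s per drive, ~{total_gb_s:.1f} Gb/s total")
```

That ~77 Gb/s total is where the comment's "80 Gb/s of networking" budget for a 4-drive box comes from, and it dwarfs a single 10G port.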
@Shirosak1 • 3 months ago
Well put. I agree with your view.
@VirtualizationHowto • 3 months ago
@Cpgeekorg, this is not true in the context of a test like vdbench; it all depends on the block size. Throughput depends on the amount of data being moved. The reason I chose 4K blocks is that this is the most common workload block size for virtualized environments in most real-world data centers, and it is common for DBs and other random-type workloads. I like vdbench, as tools like CrystalDiskMark, IOmeter, and others are just not realistic for shared-storage testing in virtualized environments. The formula is: Throughput (MB/s) = IOPS × Block Size (KB) / 1024
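A quick sketch of that formula with illustrative numbers (the IOPS figure here is made up for the example, not taken from the video's results):

```python
def throughput_mb_s(iops, block_size_kb):
    """Throughput (MB/s) = IOPS x block size (KB) / 1024, as stated above."""
    return iops * block_size_kb / 1024

# e.g. a hypothetical 70,000 IOPS of 4 KB random I/O:
mb_s = throughput_mb_s(70_000, 4)
print(f"{mb_s:.0f} MB/s")   # ~273 MB/s
```

Note how roughly 70k IOPS of 4K random I/O works out to only ~273 MB/s, so a modest-looking MB/s number on a 4K random test can still represent healthy IOPS.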
@thenextension9160 • 3 months ago
Yeah, the network needs an upgrade. This is a waste.
@bsl2501 • 3 months ago
I'd really like to see an AMD-based version of such a device.
@BerNieSLU • 3 months ago
Can I install unRAID on this unit?
@ThatHz- • 3 months ago
Could TrueNAS be installed on it?
@bastian775 • 3 months ago
N95 or N305 means no ECC RAM, so maybe, but it's not really recommended.
@Eldorado66 • 1 month ago
@bastian775 TrueNAS doesn't need ECC to run properly.
@TumescentPuma • 3 months ago
Cost?
@VirtualizationHowto • 3 months ago
There is not a firm cost as far as I know yet. But, it is supposed to be around $1000 I believe.
@samegoi • 3 months ago
Nice
@tdevosodense • 2 months ago
32 GB of RAM? The CPU can only handle 16 GB according to Intel. Has anyone tried to upgrade to 32 GB of RAM?
@markmonroe7330 • 3 months ago
Excellent presentation, thank you. I am honestly not impressed by any NAS, much less an SSD-based unit, and even less an NVMe-based unit, if it can't fully saturate a 10G network connection. Folks really need to do their homework on these nowadays. The marketing hype will show fast PCIe 4/5 drives, 10G and even dual-10G networking, and fancy-pants processors, only for real-world networking performance to be slightly better than a couple of modern SATA hard drives.
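For a sense of scale on "saturating 10G" (the 150 MB/s per-HDD figure below is a rough ballpark of my own, not from the video):

```python
# What saturating a 10G link means in disk terms (line rate, ignoring
# protocol overhead, so real numbers land a bit lower).
link_gbps = 10
needed_mb_s = link_gbps * 1000 / 8   # MB/s required to fill the pipe: 1250

observed_mb_s = 280                  # the throughput figure debated above
sata_hdd_mb_s = 150                  # ballpark sequential rate of one modern HDD

print(f"saturating 10G needs ~{needed_mb_s:.0f} MB/s")
print(f"~{observed_mb_s} MB/s is about "
      f"{observed_mb_s / sata_hdd_mb_s:.1f} modern HDDs' worth")
```

Which is the commenter's point: ~280 MB/s is under a quarter of what the 10G link could carry, and only about two spinning disks' worth of throughput.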
@sabraitis • 1 month ago
WTF... TerraMaster says up to 8 × 8 TB, but their compatibility chart has all but one drive at 4 TB only. Such a limited list of compatible M.2 SSDs.
@shephusted2714 • 3 months ago
Meh - any PC can be a NAS - you were not really raving about it.