If I had a grand for each time Wendell says to throw away my LGA2011 server, I'd have enough money to get another used LGA2011 chassis with all the caddies.
@pixiepaws99 · 3 months ago
What? Those things go for like $300...
@marcogenovesi8570 · 5 months ago
The main benefits of SAS are that it's always hot-swap (vs NVMe, where it's not always supported), it has simple dual-porting so it can be used with dual-controller storage appliances in an easy way, and signal integrity isn't a huge mind-boggling issue like with NVMe, so it's simpler and cheaper to make a large setup (with or without expanders). A lot of servers still don't really need a battery of NVMe drives, even if a lot of servers do need NVMe.
@ImAManMann · 5 months ago
Most don't need NVMe... with RAID and storage tiering you can get close to NVMe speeds at much lower cost. I also rarely see workloads which need the storage speed of even fast SSD arrays... most of the time NVMe is just a waste of money.
@iamamish · 5 months ago
A few years ago my dad gave me his PC to work on. He still had a mechanical boot drive in it, and boy did I realize how spoiled I'd been these last 10 years or so. I gave him a new SSD - it is such an insane upgrade from a mechanical drive.
@Charles_Bro-son · 5 months ago
It was the gamechanger of snappiness =)
@joshuaspires9252 · 5 months ago
In the early 2000s I used Raptor hard drives to get past slow storage drives.
@PoeLemic · 1 month ago
@@Charles_Bro-son Well, the NVMe option, and then booting from it, changed my world. That's the next upgrade.
@marcogenovesi8570 · 5 months ago
Wendell lives on the bleeding edge; there are lots of us that don't, and still see low-end and lower-midrange hardware, and old, and very old hardware too.
@axn40 · 5 months ago
I agree: a $70 Supermicro X10 + E3-1230 v3 is still relevant in comparison with an RPi.
@SquintyGears · 5 months ago
He's talking about actual professional deployments, not your homelab setup. For people who rely on the server for making money. At home we will continue to use every ancient configuration imaginable with no problems. But these videos are very often used by sysadmins as evidence for the decision boards they have to appeal to...
@romevang · 5 months ago
@@SquintyGears Or in the case of the company I work for, they don't like to spend money. We're using Ivy Bridge/Sandy Bridge (with a random mix of Broadwell) hardware for our cluster. If it fails, we're moving to the cloud... but the problem with that is that we don't have enough staff to make such a move. We're already overloaded as is.
@SquintyGears · 5 months ago
@@romevang Yeah, and you all know in the department that it's just a time bomb. They've been warned. 🤷 At those companies everyone is just planning their exit...
@joshuaspires9252 · 5 months ago
@@SquintyGears Well, I partly agree, but my older R720 with dual 8-core Xeons and sixteen 10K hard drives is hurting my electric bill, so I have to rethink my setup for next year.
@insu_na · 5 months ago
I'm a fan of SAS SSDs simply because I can run a ton of them with SAS expanders and in external enclosures. You can technically do that with NVMe too, but it will fight you all the way. SAS just works.
@TheKev507 · 5 months ago
SAS, and by extension SCSI, has remarkable staying power.
@yumri4 · 5 months ago
Yep. Also, with PCIe NVMe drives you can have at most about 12 of them in an Intel system, though 24-ish in an AMD EPYC system. Then you run into how to cool all the drives and both CPUs. As some enterprise NASes have 50 physical drives, a limit of only 24 without bifurcation is not good, while with bifurcation you run into the issue of placing the drives so the chips have a low chance of being blown off by the fans while still being close enough to the controller to be quicker than 2.5" and 3.5" SAS drives.
@tolpacourt · 5 months ago
SCSI survives as a protocol, mainly.
@tman6117 · 5 months ago
"If you have a Broadwell CPU, even a high-end one, it's time to upgrade." Me with my Ivy Bridge-based server.
@alan_core · 5 months ago
Haswell here ;=)
@PhAyzoN · 5 months ago
Dell R720 intensifies
@ButtKickington · 5 months ago
He says this right after I just bought a high-end Broadwell server. Whatever. 88 threads make fans go brrrr.
@IanBPPK · 5 months ago
I've been doing well with my R620, Z620, Hyve Zeus, and DL360p Gen 8's, though the power consumption leaves a little to be desired 😬
@reki353 · 3 months ago
me with R920, R910, R720, DL360 G5, Z820
@nextalcupfan · 6 months ago
Multiple PB from a 256GB SSD sounds insane. Frankly, IMO that would be very impressive even for an HDD.
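Back-of-envelope, "multiple petabytes" through a 256GB drive is a full-drive-writes question. A quick sketch of the arithmetic (decimal units, illustrative figures, not from any datasheet):

```python
# How many full drive writes does 1 PB through a 256 GB SSD represent?
CAPACITY_GB = 256
PETABYTE_GB = 1_000_000  # 1 PB in GB, decimal units

drive_writes_per_pb = PETABYTE_GB / CAPACITY_GB
print(f"Full drive writes per PB: {drive_writes_per_pb:,.0f}")  # ~3,906

# Spread over a hypothetical 5-year service life, that works out to
# roughly this many complete drive writes per day:
writes_per_day = drive_writes_per_pb / (5 * 365)
print(f"~{writes_per_day:.1f} drive writes/day to reach 1 PB in 5 years")
```

So a small drive only gets to petabyte territory by being rewritten thousands of times over, which is exactly the kind of sustained churn endurance ratings are meant to capture.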
@coraldayton · 5 months ago
I just shut down my C240 M4 LFF server that was primarily spinning rust. I went from Dell RX10s and RX20s to older SM chassis to Cisco chassis and a whitebox. I'm trying to migrate everything to SSDs, but costs for SSDs aren't as low as spinning rust. Once the costs go down, I'll be all over full flash/SSD for my homelab. I've got 2x 3.2TB, 4TB, and 7.68TB U.2 NVMe SSDs, and a 6.4TB PCIe NVMe SSD, but I wish I could have more.
@Makeshift_Housewife · 5 months ago
One of my favorite servers had to be retired a few months ago after about 6 years of use. It was a little 1U HPE DL160 with dual 10 core cpus, and six 1tb 7200 RPM spinners. I checked the total disk use, and they were about 1077 TB each. We kept buying servers for a while with SSD OS arrays and spinners for storage so we could maximize our small budget
@Alan.livingston · 5 months ago
Still nothing wrong with spinning rust for some mass storage tasks I reckon.
@88Elzee · 5 months ago
I can't get over how tiny those hard drives look in his hands.
@FaithyJo · 5 months ago
He is the anti-Linus. Cannot drop an HDD or a processor.
@piked86 · 5 months ago
He looks to be a pretty big guy when you see him stand next to someone. I'm guessing 6'3"
@annebokma4637 · 5 months ago
@@FaithyJo He seems not to play a nice guy on camera either. Truly anti-Linus 😂😂
@marble_wraith · 5 months ago
If you consider mechanical drives, they have to be. Smaller disk platters in terms of surface area mean you don't need to worry as much about centrifugal forces, so you can spin faster without as much vibration dampening. Why spin faster? Seek times.
@88Elzee · 5 months ago
@@marble_wraith I know how big those drives are, they aren't as small as they look in his hands lol.
@anothersiguy · 5 months ago
We're those people who are still running Broadwell-era Xeons and spinning rust in some of our branch office servers lol. Hopefully they will be put out to pasture soon, but SAS SSDs would be an awesome way to keep them rolling if we needed to.
@ICANHAZKILLZ · 5 months ago
Same 😅 We did stick some SATA SSDs in most of them, but they go slow after about a year of writes. Let us pray we can convince upper management for something made post-2017.
@asm_nop · 5 months ago
I've been running a used Dell R510 at home with a pair of Westmere Xeons and a pile of DDR3 and 600GB 15K drives. I paid so little for it, that it was basically a gift from a friend. This hardware is now nearly 15 years old, and I could easily run it another 5 if it doesn't fail outright. It's keeping up with my current needs surprisingly well. The only pressure to upgrade I have is that newer gear is vastly more power efficient, and really cheap on the used market. Is it just me, or is old hardware staying relevant for much longer than it used to?
@romevang · 5 months ago
My work is 1 or 2 steps worse. Mostly Ivy/Sandy Bridge cluster with a brand-new Dell Unity below it all. Their long-term strategy is to go to cloud once all the hardware just gets "too old." Like it isn't already.
@rkan2 · 5 months ago
@@asm_nop Such old hardware uses so much electricity that any newer stuff will pay for itself multiple times over in those 5 years. Unless you have basically free electricity, of course...
@marble_wraith · 5 months ago
Toshiba has recently come out saying they're investing in both HAMR and MAMR drives. I'd be interested in L1T objective analysis of the pros/cons of each. Typical use cases for home servers would include Steam caches and media servers. In the case of the latter, say you wanted to, ahem, back up / transform your physical optical media to streamable files. A drive with high write capacity is *required* for this, and high throughput is required, especially if you have multiple optical drives operating simultaneously. Having one with enterprise logs, so you can have an estimation of time to failure, is super useful. The other piece of the equation being: once the media is "backed up" and compressed, what drives would you archive it to? Hence the question/statement on the first line 😁
@TheMarkRich · 5 months ago
Had two in my IBM storage unit to act as the SSD top tier in dynamic storage. They work well.
@chromerims · 5 months ago
3:46 -- Broadwell (socket 2011) in 2024? Can I call it "greatly outclassed by the latest gen" rather than pure "trash"?
10:35 -- Good point. Whereas a qualified SAN device will always look pricey to us DIYers, unfortunately.
Nice video 👍 Kindest regards, friends and neighbours.
@duduoson1306 · 5 months ago
I really appreciate the old Macintosh graveyard aesthetic in your shop.
@chaosfenix · 5 months ago
This is why I wish SATA would just go the way of PATA. Mobos and CPUs should just drop support for SATA and move to SAS. SAS is backwards compatible with SATA drives, so there would be zero issues with people moving their drives over. NVMe will always be better, but if hardware manufacturers are worried about backward compatibility they should just switch to SAS, which would provide backwards compatibility for HDDs while actually giving them a way forward. SATA's last major revision was completed in 2008, 16 years ago. It could drive now. SAS-4, on the other hand, is only 7 years old and goes up to 22.5Gbps, or about 4x the speed of SATA.
@agw5425 · 5 months ago
Sure, with an unlimited budget anything new will be better/faster than 5-10 year old equipment, but a server that is still doing what it did 10 years ago is not trash, especially if you can replace the power-hungry HDDs with power-sipping SSDs. For home use, servers from 15 years ago that are still fully functional will do just fine and save you a ton of money in hardware, as most are free or near-free used. There are also SAS-to-M.2 adapters for both SSD and NVMe disks that would serve the home user well for a long time, regardless of the server's age. If you match your activity to the server's capability, there is no "too old". Some still run pre-286 PCs and servers/mainframes from the '60s and '70s and enjoy it as a hobby. With what you know, you could be a big help to those of us who can't buy new for whatever reason, instead of trash-talking older systems. The best server is the one you can afford; anything else is pointless.
@PoeLemic · 1 month ago
Good point. What you said really applies to students like me. We can't go out and buy Threadripper or EPYC systems. And picking between SAS SSDs and just normal SSDs is a no-brainer.
@andibiront2316 · 5 months ago
I have 12 SAS3 7.68TB SSDs in my TrueNAS. They are rated for 2000MB/s read, but SAS3 is limited to 1200MB/s. I guess they use 2 links? They are currently working at 1200MB/s. Do they require a special backplane? They are directly connected to an LSI 9300-16i, without a backplane. I don't really need the extra bandwidth, but I was wondering how you connect them to fully utilize the rated bandwidth. Also, they support 2 modes of operation regarding power consumption, 11W and 9W, and I don't know how to set that up. And they are running on a Haswell Xeon v3 with 2x10Gbps, don't be so hard on them! :P
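For what it's worth, the 1200MB/s ceiling falls straight out of the line encoding: SAS-3 signals at 12Gbit/s per lane with 8b/10b encoding (10 line bits per data byte). Reaching a drive's rated 2000MB/s needs both ports of a dual-port drive active, which typically requires a backplane and HBA that support multipath, rather than a single direct-attach lane. A rough sketch of the arithmetic (illustrative, not from any datasheet):

```python
# One SAS-3 lane: 12 Gbit/s line rate; 8b/10b encoding transmits
# 10 line bits per data byte, so divide by 10 for usable bytes/s.
line_rate_bps = 12e9
usable_bytes_per_s = line_rate_bps / 10

print(f"single lane: {usable_bytes_per_s / 1e6:.0f} MB/s")      # 1200 MB/s
# Both ports of a dual-port drive active (multipath) doubles that:
print(f"dual port:   {2 * usable_bytes_per_s / 1e6:.0f} MB/s")  # 2400 MB/s
```

So a single cable to a 9300-16i gives one lane per drive, and 1200MB/s is exactly what you'd expect to see.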
@Gryfang451 · 5 months ago
I've used enterprise SAS SSDs for years as vFlash drives (VMware) and server boot drives. One of our Fibre Channel SANs uses them exclusively, and an iSCSI SAN uses them as its performance tier in an auto-tier setup with an expansion unit holding 8TB NL-SAS drives. We're fairly small, so footing the bill for NVMe SANs isn't going to happen any time soon. If you're using shared storage that is still spinning hard drives, caching to 12Gbps SAS SSDs or NVMe drives really helps out.
@dancalmusic · 5 months ago
Enterprise SSDs still cost a lot more than their equivalent (enterprise) spinning drives. Read-intensive SSDs cost about double, mixed-use about triple, and write-intensive about quadruple. A write-intensive SSD of modern size (not 512GB, please) costs as much as a server. And you need at least two of them. It makes me smile when Wendell talks as if all of us are generously gifted our disks by Kioxia :) My enterprise HDs typically last 10 years, then get replaced due to overall server obsolescence, not because they broke. And I'm talking about servers with 5-8 MS RDS VMs, MS SQL, file servers, and other write-intensive roles. I highly doubt a QLC SSD will last 10 years under that load, unless you pay a fortune for it.
@blahorgaslisk7763 · 5 months ago
It's also a case of knowledge and experience. We know spinning rust pretty well after forty or so years of use; we can calculate lifetime cost and performance. SSDs have what, 10 years of reasonably common use, and nowhere near the same amount of data about long-term use. Predictability is worth a lot in a professional server environment.
I remember the first SSD I saw. It was DRAM-based, as there was no such thing as flash memory at the time. The DRAM was backed up by a battery that could keep it alive for a bit more than 24 hours. Battery goes empty and all storage is gone... But as primitive as it may seem, the performance was phenomenal. The first flash SSD I got my hands on was a prototype from a manufacturer. It was dog slow: installing the OS on it took ten times as long as installing it on spinning rust. It was hilarious, as you knew this was an SLC SSD and still it sucked so bad. Now I've seen SSDs fail badly, but they were all consumer-grade devices.
@dancalmusic · 5 months ago
The term "spinning rust" annoys me. When 4TB SSDs can last 10-12 years like HDs with the same usage on high-transaction servers and cost $400, then we can call those splendid examples of technology that are rotational disks "spinning rust", without necessarily being a YouTuber (very good, but a bit far from the pockets of normal sysadmins).
@bloomtom · 5 months ago
@@dancalmusic Spinning rust is not a derogatory term. It's an informal, cutesy term. Not necessarily correct either, as HDDs haven't had iron oxide media layers for a long time, but that's beside the point.
@14m13375p1c3 · 5 months ago
"Socket 2011 CPUs are trash" Don't you talk about my sons like that! LOL At least for homelabbing it doesn't matter too much, but I will say, having gone down the rabbit hole of looking at more recent e-waste on eBay recently, I did find out just how far the gap is between the E5-2630L v3s I have, first-gen Xeon Scalable, and then the super-impressive 3rd-gen Threadripper and even earlier EPYCs. I've been trying to find used enterprise gear that would work well for a decently capable editing server that doesn't send my wallet into a panic.
@louisharkna9464 · 5 months ago
That Packard Bell tower you have in the background took me straight back to working at Best Buy in the EARLY '90s... oof.
@jrm523 · 5 months ago
Time is a cruel bitch
@wargamingrefugee9065 · 5 months ago
Packard Bell made outstanding color televisions back in the '60s.
@levygaming3133 · 4 months ago
Am I correct in assuming Packard Bell is the Packard of Hewlett-Packard and the Bell of Bell Labs (AT&T)?
@wargamingrefugee9065 · 4 months ago
@@levygaming3133 Good question. I didn't know the answer. Wikipedia says no. "Packard Bell Corporation (also known as Packard Bell Electronics or simply Packard Bell) was an American electronics manufacturer founded in 1933 by Herb Bell and Leon Packard." "The Hewlett-Packard Company, commonly shortened to Hewlett-Packard...was founded in a one-car garage in Palo Alto by Bill Hewlett and David Packard in 1939..."
@Movingfrag · 5 months ago
I was slowly replacing mechanical drives in my systems with SAS SSDs, and the funny thing is, in my experience the Toshiba ones were the least reliable. I had four 3.2TB SAS3 drives; after less than a year of moderate use, three died within a month and the fourth gave signs of imminent failure, so I retired it too. Replaced them with HGST drives of the same capacity, and these have been working nicely for years already.
@Daniel-k4t3n · 5 months ago
Broadwell is fine for 99 percent of situations for home and even small business. First time I feel Wendell is taking shots at the plebs.
@owenness6146 · 5 months ago
NGL, I've been debating moving my Plex system to SAS SSDs for a while. It is easier and cheaper to do than moving to a newer box with trays for NVMe, E3, or E1. Plus, I could keep costs down because I don't need more transcoding power.
@pephathalok · 5 months ago
Broadwells go for $5-$50, used SAS3 HDDs at $5/TB. Even accounting for TDPs, those are the most cost-effective, way ahead of newer stuff. Try to build an OLAP rig able to scan a petabyte of CSVs with some historical data in them.
@Exzeph · 5 months ago
As someone who's trying to make a good homelab and just wants power efficiency over anything else, it's really surprisingly disappointing to me how few options exist for 2.5" SATA SSDs with, like... a lot of TB onboard, at an affordable price. What gives? Why is that segment so underserviced?
@DaleEarnhardtsSeatbelt · 5 months ago
The same can be said about PCIe lanes. The gap from consumer to enterprise is huge; it's crazy how limited you are on consumer gear. SATA SSDs pretty much stop at 4TB. I assume it's because of the form factor: all the larger 2.5-inch drives are about 2x as thick. U.2 is where it's at. You can get 30TB 2.5-inch SSDs that way. They do take 25 watts each, though, as opposed to the 8 watts required by M.2 NVMe.
@LtdJorge · 5 months ago
Because no one really wants to make SATA SSDs. The speed limits were reached a long time ago and the protocol is very, very inefficient for flash. The same PCB layout for SATA vs SAS/NVMe would leave the SATA one far behind. But I do get you: you want a replacement for the spinny ones without the spinny thing, and yeah, the offering is not good. I'm thinking the NAND needed for high density would be a waste on SATA, so that's why they don't do it.
@michaelsanders5815 · 5 months ago
It reminds me of what people said about hard drives when they came out. It's a perception thing. Hard drives are far more delicate; we just think of them as safe because we protect them so much. But it's a spinning, delicate piece of glass. When you think about it, it's crazy to use them.
@ImAManMann · 5 months ago
I use tons of SAS SSD drives in my environment. As for servers: for our environment, most things don't need the extra performance, as we have a lot of services running at relatively low utilization... we get better value by having many more servers a gen or two back, clustered. Overall reliability is better with the ability to have maintenance performed by moving containers and VMs to other nodes. For example, I have 5 Dell R320 servers with 10-core CPUs, 192GB RAM, 8 SAS SSD drives each, and 10Gb networking in a cluster... all of that for less than a single new server, and for what it does the speed is a non-issue. Many VMs and containers running on Proxmox; everything could run on 3 nodes if needed, maybe 2, and that gives us a super robust environment, and I can wait for newer-gen servers to drop in price.
@TheFickens-xr4rr · 5 months ago
I wish I could afford to populate my Xeon E5 v4 Supermicro CSE216 24-bay 2.5" chassis with 8TB SAS drives 😥
@gedavids84 · 5 months ago
For me it seemed like what was holding back NVMe drives in this segment was the lack of lanes to plug them into. And before you say "EPYC has 128 lanes": that's good and all, but that's still only 32 x4 devices. Our Nimble has 48-drive capacity before we even add a shelf. Older Intel systems are like 56 lanes or something? Not a lot, and that's assuming you can even split them in a way that would be useful. We need something like PCIe switch add-in cards that can give us ports to actually connect to.
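The lane budget described above is simple division; a sketch with illustrative round-number lane totals (not exact SKU figures):

```python
LANES_PER_NVME = 4  # a typical x4 NVMe drive

# Hypothetical usable-lane totals per platform, for illustration only.
platforms = {
    "1S EPYC (128 lanes)": 128,
    "older 2S Intel (~80 lanes)": 80,
}
for name, lanes in platforms.items():
    max_drives = lanes // LANES_PER_NVME
    print(f"{name}: at most {max_drives} x4 NVMe drives without a switch")
```

A PCIe switch (or a tri-mode HBA) trades per-drive peak bandwidth for port count, much the way a SAS expander does for SAS drives, which is why dense all-NVMe chassis lean on them.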
@BillLambert · 5 months ago
Sure it's "middle of the road", but it's very attractive for homelabs as an affordable and scalable gateway into flash arrays. There is plenty of wiggle room for a tier of fast-ish big-ish storage in between spinners and NVMe.
@Marc_Wolfe · 1 month ago
Sounds good; I'm over here trying to game on Ivy Bridge and really wanting a cheap upgrade.
@alexbold4611 · 5 months ago
I am sticking with my Dell T430 and R430, so SAS SSDs are the way to go for me.
@DelticEngine · 5 months ago
NVMe may be very fast, but that's the only advantage. My main system is running SAS and SATA. SAS is great because it only needs a host adapter and I can run several drives, and a lot more if I choose to use an expander. I'm also running a couple of SATA SSDs on the SAS controller very happily. Frankly, I'm really not at all impressed with NVMe technology. At present, each drive takes four PCIe lanes and is basically a four-lane PCIe slot you can't use for anything else. Depending what you put in your system, PCIe lanes can be a limited resource that is better utilised on something other than storage, and that's before going on to the limitations of PCIe splitting. It's a very inflexible storage system, and I really dislike motherboard manufacturers dictating how I can use the limited number of PCIe lanes; I need to be able to swap drives, so embedded M.2 slots are a complete waste of resources for me. I did hear that there may be single-PCIe-lane NVMe devices, which could be an improvement.
One possible solution could be some sort of 'NVMe host adapter' that would utilise just one PCIe slot and provide connections (ports) for several NVMe drives. Maybe it could take the form of a 16-lane PCIe 5.x host adapter and provide, for example, 16 four-lane ports at PCIe 3.x speeds, enabling up to 16 NVMe drives to be connected to one PCIe slot. Such a host adapter could be made to also support SAS and SATA drives, which could be relatively straightforward if miniSAS connectors were used. This would facilitate choosing how resources are allocated in terms of PCIe lane utilisation and number of storage devices, and make NVMe a much more viable alternative to SAS. It could also make system expansion rather interesting if standards existed that enabled, for example, a front or even rear panel hot-swap array to also be used as a general expansion slot.
This could be used for card readers, network adapters, video capture hardware, sound cards, or even custom home-brew or scientific expansion cards. Could the 2.5" U.x also be used as a kind of 'form factor' for expansion cards?...
@esunisen3862 · 1 month ago
My Bloomfield isn't quite happy hearing this.
@DeKempster · 5 months ago
The disks in my job's 10-year-old DL380 G8 only started to fail this year.
@Elinzar · 5 months ago
Living on the bleeding edge sometimes doesn't let you see what's going on in the middle of the blade. Most people, and especially homelab enthusiasts who usually acquire old servers, would be thrilled to put SSDs in their hardware.
@adrianandrews2254 · 5 months ago
Re homelab use: I bought an X10DRi-LN4F+ motherboard (dual 2011 CPUs) on eBay for $120 which can support 14 NVMe drives and dual 10Gbit Ethernet with the appropriate PCIe cards. Started with 4 x 2TB PCIe v3 consumer SSDs in RAID 5. So it need not cost the earth.
@MarkRose1337 · 6 months ago
Sometimes you don't have the funds for all new gear. Or you may have use-cases where CPU doesn't matter so much, and that hardware could be repurposed for Ceph storage or whatever.
@joshuaspires9252 · 5 months ago
I love the SAS SSD for a home-brew media server; I can't afford fancy new gear on everything. But I am rocking a 16-drive 10K array and it is shredding power, so it was a learning deal. Now I weigh money spent on hardware against energy usage; that changes everything for me. In short, spinning hard drives are not going away for me, as SSD-all-the-things is just way too much money as of now.
@rafiq991008 · 5 months ago
lol, I just dropped a few of those SATA SSDs into a production system this week. Better than HDD for now. Also, nice to know that SAS SSDs exist; SAS HDDs are not good for DBs hitting the servers with random reads/writes.
@idied2 · 5 months ago
I have an 800GB SAS SSD but haven't used it yet for my server. I wanna get 3 more but can't seem to find any.
@churblefurbles · 5 months ago
Mic is muffled.
@reinekewf7987 · 5 months ago
I have an R630 with E5-2683 v4s. I could use NVMe, but booting from it is difficult. I use six 512GB SATA SSDs in RAID 10, and those drives get heavily read and written. One drive lasts about 18 months and costs me only 30€. I have a SAS SSD, but those are a bit expensive if you ask me, and I don't need the extra features of SAS drives. I needed a powerful and cheap server, and the R630 cost me only 600€; it came with 2 Xeon E5-2683 v4s, 512GB of 4Rx32 RAM modules, 8 drive bays, a PERC H330 Mini, 2x2 10Gbit NICs, and iDRAC8 Enterprise. I am happy with this; it is perfect for my needs. Maybe a bit old, but powerful.
@wobblysauce · 5 months ago
Yep, just like an SSD for that old laptop... they boost the server response just as nicely.
@tolpacourt · 5 months ago
Pedantic pronunciation coach here. Anachronistic has stress on none of the syllables. uhn-nak-cruh-nis-tic. Maybe a slight emphasis on the second syllable, the nak.
@Decenium · 5 months ago
and here I am trying to make a personal nas with a Q9550....
@rkan2 · 5 months ago
Don't 😅 Buy a Xeon D or similar.
@Elemino · 5 months ago
Wendell, maybe you can answer this question for me... why does SATA still exist? Why haven't consumer drives transitioned over to SAS? It seems like the technology should be old enough and mature enough that the cost difference is negligible at this point... especially when the new hotness is NVMe.
@xandrios · 5 months ago
How trustworthy would you consider consumer drive DWPD ratings to be? You mentioned not being happy with using a Samsung consumer drive in an enterprise setting, but would that actually be a possibility when doing low writes? For instance, dirt-cheap M.2 drives are often rated for 1 DWPD. They are so cheap that even using only 10% of their capacity, effectively making them 10 DWPD drives, is cost-wise very possible.
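The overprovisioning argument above works because a DWPD rating is really a total-bytes-written (TBW) budget over the warranty period, so writing to only a slice of the capacity stretches the effective DWPD proportionally. A rough sketch with hypothetical numbers (ignoring write amplification, which makes real endurance somewhat worse):

```python
# DWPD -> TBW conversion for a hypothetical consumer drive.
capacity_tb = 4.0        # hypothetical 4 TB M.2 drive
rated_dwpd = 1.0         # rated drive writes per day
warranty_days = 5 * 365  # assume a 5-year warranty window

tbw = capacity_tb * rated_dwpd * warranty_days
print(f"TBW budget: {tbw:,.0f} TB")  # 7,300 TB

# If only 10% of the capacity ever sees writes, the same TBW budget
# covers 10x as many daily rewrites of that slice:
used_fraction = 0.10
effective_dwpd = rated_dwpd / used_fraction
print(f"Effective DWPD on the used slice: {effective_dwpd:.0f}")  # 10
```

The caveat is that the flash translation layer spreads wear across all cells anyway, so the math holds for wear-leveling purposes, but controller or firmware failures (the other common death mode) don't care how little you wrote.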
@Hugh_I · 5 months ago
I've been using bottom-of-the-barrel consumer drives for a home server setup for ~10 years. I never had issues with hitting the write endurance ratings. Rewriting an 8TB SSD daily would be A LOT; you'd have to have a very IO-intensive task for that to happen. For use cases that aren't constantly writing to the disk, the endurance rating on consumer drives is often sufficient for those drives to last until you want bigger ones anyway. In case they do fail, it's probably still cheaper to replace them than to use enterprise grade, though I would still probably not do that in an enterprise setting. (I did, though, have two Samsung SSDs fail catastrophically, but not due to exhausting spare cells. Both simply died at the same time; the controllers just went away. I think it may have been one of the Samsung firmware issues that fried drives, but I'm not sure. As it happens, of course, those were two drives in a RAID1 mirror, so both copies were gone. Gladly I had backups. With mechanical drives you generally have them fail slowly, not abruptly like that. So there's that.)
@frankwong9486 · 5 months ago
I have some QLC drives which came with laptops; they still have 93% of their lifespan 😂 And if you see how those Chia miners plot on SSDs and how they replace/resell used SSDs, most SSDs die due to other issues, such as controller or PCB component failures, not commonly from writing too much or reaching the TBW rating.
@monkeyrebellion117 · 5 months ago
My hot take: As long as the hardware runs what you want within your spec, it's still good. The SSDs do make a huge difference though.
@kristeinsalmath1959 · 5 months ago
Has anyone used Kingston DC SATA drives? I was thinking of buying some for old but working servers.
@bcredeur97 · 5 months ago
For small business, SATA SSDs are fine. No need to burden the small guy with the high costs of SAS/NVMe. But yeah, the moment you get to "we're losing serious money if this goes down," then you need to think differently.
@johnpaulsen1849 · 5 months ago
So you're telling me I can't add these to my VNXe3200 for my homelab?
@galileo_rs · 5 months ago
The price of one of those Dell-branded drives is in the kilobucks range. The price of your used Dell server is...?
@sk8lucas · 5 months ago
It does not make sense, but I really want to put like 4 of those drives in a Micro-ATX video editing build.
@LackofFaithify · 5 months ago
Are you saying that a 4TB 7.2K HDD should cost less than the "all things must go, store closing, clearance price" of $450? But I signed a multi-decade contract to get that price...
@beansnrice321 · 5 months ago
Lol, ouch, me and my ol Broadwell workstation feel attacked. XD
@ArmChairPlum · 5 months ago
Hmm, this would be interesting for the likes of schools with older hardware. In my case I have 9-year-old twin 128GB VMware hosts on a 12Gbit SAS Storwize V3700 that I am getting... concerned about. 16TB total on itty-bitty 2.5-inch drives. The school's need for compute has dropped, and storage too (shifting to online OneDrive usage), so a couple of the lower capacities in a mirror would be sweet, depending on cost. I do want to get a newer server though! Then migrate the DCs, but we have PaperCut, their finance package, and potentially their student management system, which require a physical server.
@piterbrown1503 · 5 months ago
I'm thinking many people in mid-size and small businesses don't know that SAS SSDs exist. We are using Dell servers, and in the Dell configurator they sell you SATA SSDs, not SAS. And we are talking about R550 or R660 Dells. Also, Kingston enterprise SSDs are still SATA or NVMe. And on Micron's site you need to search deeper to find SAS SSDs.
@dustingodin5323 · 5 months ago
The problem with these is that they're straight-up more expensive than NVMe. Even some U.2.
@shawnmcelroy1829 · 5 months ago
These still seem great for homelab NAS cold storage. Where can you get these? How much should they cost?
@ddobster · 5 months ago
Damn, I just got a serious case of itchy rash after seeing that old Lexmark 2390 in the back...
@nicknorthcutt7680 · 5 months ago
Isn't that the Kioxia drive that the ISS put up in their servers?
@AnnatarTheMaia · 5 months ago
I don't know what you were running on those servers you labeled "garbage", but I got Solaris 10 running on some of that hardware... and it just flies. Solaris 10 is unbelievably fast on that "garbage" hardware (it's garbage, but not because of generation, but because it's PC-bucket hardware, but that's a different discussion).
@redhonu · 5 months ago
Would you run a server with an E5-2698 v3 in a beginner home lab if you got it for the price of the SSDs?
@eDoc2020 · 5 months ago
The only thing wrong with them is power _efficiency._ You'll probably draw at least 100 watts at idle. Newer equivalent servers will give more compute for the same power. Newer small servers will give the same compute at less power.
@frankwong9486 · 5 months ago
Hopefully one day these SSDs become more affordable 😢
@fhpchris · 5 months ago
I don't think Broadwell-E is trash. My 2699 v4 system can do ~2.48 GiB/sec in a Windows file transfer from a single Windows 11 client. SMB and the Windows 11 TCP stack begin to be a limitation probably before the CPUs do, if you have the best ones for your socket. Enterprise SATA SSDs are also great and much cheaper than SAS SSDs. When you start putting 24 of any of these drives into a single chassis, your networking (or the SAS adapter/expander) is probably going to be the limitation before the drives. 24 SATA SSDs can easily go faster than a SAS3008.
@churblefurbles · 5 months ago
Problem is, an N100 mini can do pretty close to that as well using almost no power.
@deepspacecow2644 · 5 months ago
I think he meant it more for the enterprise rather than the homelab.
@terosma · 5 months ago
@@churblefurbles With 9 PCIe lanes you do not attach many NVMe drives to an N100, or 100Gb NICs either.
@chuckthetekkie · 5 months ago
For my use case NVMe is overkill, as I don't need those crazy-fast NVMe speeds. I'd rather have 16 SATA/SAS drives than 4 NVMe drives on an HBA card. Capacity is more important than speed. My server is mostly for serving media and the occasional VM, so I don't really need NVMe speeds. Also, SAS SSDs typically use less power than their NVMe counterparts. My server also has a bunch of 16TB SATA HDDs in a ZFS pool, and I typically get over 900MB/s, which is plenty fast for serving movies and TV shows, but I would like to eventually replace them with something a bit more reliable and more energy-efficient, and these SAS SSDs sound like a logical upgrade for me. My wallet says otherwise, but eventually I'll upgrade the HDDs.
@eTwisted5 ай бұрын
Ha ha, I have so many servers 8+ years old, and yeah, I drop in Samsung consumer drives. Many I'm trying to get off RHEL7 onto 8, and that barely supports EPYC. But first I've gotta get Cadence working fully with RHEL7 so we can buy 5-year-old servers.
@CatalystReaction5 ай бұрын
I can't put my finger on it, but the audio hasn't been as good recently
@youtubasoarus5 ай бұрын
It hurts my heart to hear hardware referred to as garbage. I get it, in an enterprise environment that is dollars wasted, but for home gamers.... that be gold in them there chassis.
@almc84455 ай бұрын
The pop up explaining what OEM means made me laugh - I would be SHOCKED if someone watching this level of analysis wasn't intimately familiar with the phrase
@shrimp_p4rm5 ай бұрын
If you have to ask what it's used for, you definitely need at least 2.
@arthurswart44365 ай бұрын
Are these U.2 / SFF-8639 drives, or is it the same SAS connector I use for mechanical SAS drives? I get suspicious when manufacturers cheerfully reply that I can replace mechanical SAS drives with SAS SSDs, yet never respond when I ask about using them in the SAS bays I have available. I know I can replace them; I'd rather not buy new servers to do that just yet.
@LadyWolffie4 ай бұрын
No, they aren't compatible. There are tri-mode backplanes and connectors that accept U.3 NVMe, SAS, and SATA drives, but that's a newer technology
@rickwhite77365 ай бұрын
Why is a 1TB SAS SSD $500 and a 1TB SSD only $40?
@Saphykitten5 ай бұрын
Come on Wendell, we can’t all have $4k servers at home >:c
@romevang5 ай бұрын
This video isn't targeted for home users.... It's making arguments for businesses to get off ancient hardware.
@Saphykitten5 ай бұрын
@@romevang Aww, I'm just busting his chops and giving him the business ;)
@Koop13375 ай бұрын
Needs a big disclaimer that says "for your business, not your home lab in your garage" lol
@UmVtCg5 ай бұрын
SAS SSDs are the Special Forces of storage media
@fwiler5 ай бұрын
I'm showing this to my boss, who thinks our Broadwell servers are fine. The cost in electricity to run them is more than they're worth.
@eDoc20205 ай бұрын
AFAIK the newer generations don't use any less power, they just give you more performance for the same power.
@fwiler5 ай бұрын
@@eDoc2020 It isn't about the electricity, it's about Wendell saying to just throw them out. And when a laptop has more compute than a server, you know it's time to change.
@eDoc20205 ай бұрын
@@fwiler If the electricity isn't an issue and they have plenty of compute power _for your use case_ there's no need to change.
@fwiler5 ай бұрын
@@eDoc2020 You aren't getting it at all. My post wasn't serious; I'm not actually going to show the video to my boss. My initial post was to show that even Wendell wouldn't use such an old POS in production. IT is notorious for not upgrading due to... insert reason here. And no, it isn't enough compute, and there are about 100 other reasons I could list why you'd upgrade, so don't say there's no need to change when you have no idea about the hardware.
@gabest45 ай бұрын
How does one 8TB SSD cost as much as 280TB of HDDs?
@BoraHorzaGobuchul5 ай бұрын
Strange math. In my market, one 8TB Kioxia costs the same as 6x 20TB Ultrastar HDDs. That's unpleasant, but more or less reasonable
@duncancampbell94905 ай бұрын
Nice prices....
@tsclly23775 ай бұрын
On an older server (HP Gen8), NVMe was generally something that sat on motherboard connectors using 8 PCIe lanes; those drives were expensive and could be burned up on writes in 5-7 years, or would be password-protected in such a way that if you didn't have the actual password, all you really had was parts.

Me, I collect the 45nm 400GB SLC SAS drives, using only the older ones for data transfer/modification, as they may never burn out; they're almost nuke-proof. When I was mining ETH (2018), we had some SATA Intel 545s that lasted as little as 4 months, while the OS and routines on most rigs were stored on HP SLC SATA drives that never burned out, even when temps went over 140°F (the air conditioner was turned off), mining cards failed, and a PCIe accelerator fried. Don't forget the tape backup.

TLC is garbage, and it's going to be sketchy, especially if you buy used and are generating a lot of useless data, like in AI. A 400GB SLC drive is going to be better than a 4-6TB drive with an estimated 30PB write limit, and is more likely to live on, but these are now getting harder to find on the used market. If you're buying used, the considerations are total rated writes per GB, how much is already used, how many you can get, and cost. Definitely buy them 10-14 at a time, and look for the spares being cycled out, like scoring 24-30 Dell 400GB SAS 2.5" 12G SLC Solid State Drives (SSD T2TPF MDKW-400G-5C20 0T2TPF).
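The endurance argument above can be made concrete as full-drive writes per rated TBW. The 30 PB figure for a 4 TB TLC drive comes from the comment; the SLC rating used here (36 PB on 400 GB) is a hypothetical stand-in for illustration:

```python
# Endurance per drive: how many full-drive writes a rated TBW allows.
def full_drive_writes(tbw_tb, capacity_tb):
    return tbw_tb / capacity_tb

tlc = full_drive_writes(tbw_tb=30_000, capacity_tb=4)    # 30 PB on 4 TB (from the comment)
slc = full_drive_writes(tbw_tb=36_000, capacity_tb=0.4)  # assumed 36 PB on 400 GB SLC

print(f"TLC: {tlc:,.0f} full-drive writes")
print(f"SLC: {slc:,.0f} full-drive writes")
```

Per gigabyte of capacity, the small SLC drive tolerates an order of magnitude more rewrites, which is the commenter's point about write-heavy scratch workloads.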
@kaeota5 ай бұрын
Can we have a clip of Wendell saying it's trash, I'm sorry 😂
@wooviee5 ай бұрын
This camera angle is so nice, feels like an old Mythbusters shot, or a shot from Adam's videos on Tested.
@mrhassell5 ай бұрын
SAS SSDs (Serial Attached SCSI Solid-State Drives) are often the preferred choice over SATA SSDs (Serial ATA Solid-State Drives).
@krykry6065 ай бұрын
I have a more important question: what is an AI SSD?
@Angel_the_Bunny5 ай бұрын
Reminds me of SSHDs
@xandrios5 ай бұрын
The enterprise markup on these drives is ridiculous. But they all do it. What enterprise server platform would actually happily accept those nice Kioxia drives? Because Dell and HPE will not. And for enterprise use, especially when deploying offsite, you probably want to go with one of the big brands in order to get on-site hardware support.
@Nah_no_thanks5 ай бұрын
Real SASsy drive... Punching out.
@bart_fox_hero28635 ай бұрын
Laugh now, but having to explain to your PC component why its lifespan was so short, as it threatens to kill you just before it expires as scheduled by the manufacturer, is closer than we think, boys
@floodo15 ай бұрын
I hope to be able to use a $1500 SSD at home one day lol (-8
@uni-kumMitsubishi5 ай бұрын
Wow, looking healthy. Feels like everyone is coming out of their post-COVID funk
@TheKev5075 ай бұрын
IMO the biggest reason these still exist is that NVMe drives consume too much power vs SAS SSDs
@marcogenovesi85705 ай бұрын
Do they, though? SAS SSDs are power-hungry too; they get hot
@TheKev5075 ай бұрын
@@marcogenovesi8570 Yes, definitely. SAS SSDs are typically under 10W, while NVMe will easily hit 15-25W. There are some lower-power NVMe devices, but the norm is to expect 50-100% more power consumption from NVMe, which leads to power and cooling challenges in storage-dense servers, especially if other hot components like GPUs or top-bin CPUs are required.
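At chassis scale those per-drive numbers add up quickly. A sketch for a 24-bay server, using the per-drive figures from the comment above (10 W for SAS, 20 W as the midpoint of the quoted 15-25 W NVMe range):

```python
# Chassis-level power delta for a fully populated 24-bay server.
BAYS = 24
SAS_W = 10    # "typically under 10 W" per SAS SSD
NVME_W = 20   # midpoint of the quoted 15-25 W NVMe range

sas_total = BAYS * SAS_W
nvme_total = BAYS * NVME_W
print(f"SAS: {sas_total} W, NVMe: {nvme_total} W, delta: {nvme_total - sas_total} W")
```

An extra ~240 W per chassis is real money in power and cooling once you have racks of these.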
@christiano.48085 ай бұрын
I'm happy to see that you brought that up. I'm responsible for thousands of deployed servers, and we phased out HDDs a long time ago in favor of enterprise SATA and sometimes SAS SSDs. I'm now at the point where I think about using NVMe by default for new deployments, and very recently decided against it after doing some power consumption tests. This is something you find VERY LITTLE information and chatter about online, and if you do find something, there's a high chance it's unreliable information from overly optimistic data sheets, possibly generated by an AI having a bad dream.

Idle:
CM6-R: 7.2W
RM6-R: 3.5W
S4510: 1.7W

This is a deal breaker. I can't deploy drives that need 4x the power of a SATA SSD. I can and will when the performance is required, and then performance per watt will be reasonable, but how often will that really be the case? We will continue to use SATA and SAS SSDs unless we really need something faster and/or NVMe SSDs use less power.

What we really need are high-density, low-power NVMe models; they can be lower performance to achieve that. CPUs are getting more and more PCIe lanes to prepare for PCIe storage, so I want to use them for any use case, not just cutting-edge performance. That's just not reasonably possible at the moment. Unless that changes, SATA and SAS SSDs are here to stay.
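Those idle figures matter most at fleet scale. A sketch using the measured per-drive numbers above and a hypothetical fleet of 1000 servers with 8 drives each (both fleet figures are assumptions for illustration):

```python
# Fleet-wide idle energy for the three drives measured above.
SERVERS = 1000            # hypothetical fleet size
DRIVES_PER_SERVER = 8     # hypothetical drives per server
HOURS_PER_YEAR = 24 * 365

idle_watts = {"CM6-R (NVMe)": 7.2, "RM6-R (NVMe)": 3.5, "S4510 (SATA)": 1.7}

for name, w in idle_watts.items():
    kwh = SERVERS * DRIVES_PER_SERVER * w * HOURS_PER_YEAR / 1000
    print(f"{name}: {kwh:,.0f} kWh/year fleet-wide at idle")
```

The CM6-R fleet idles at roughly 4x the energy of the SATA fleet before serving a single request, which is exactly the deal breaker described.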
@krazydime05 ай бұрын
You aren't putting one in a Dell or HP unless it runs their firmware... aka, you bought it overpriced from them
@Ozz4655 ай бұрын
Just let it go, brother. It's trash. It's hard to let go of what one has used for ages.
@Anaerin5 ай бұрын
*cries in Xeon X5677*
@waterflame3215 ай бұрын
These CPUs are almost old enough to drive. Please let them rest. Try something a little newer
@drewzoo0285 ай бұрын
3:19 Made me feel targeted, I run an array of 24 240 GB Samsung 860 EVO SATA SSDs in my homelab 🤣
@piked865 ай бұрын
Are you concerned about drive wear?
@drewzoo0285 ай бұрын
@@piked86 Yes, and reliability in general, but they were free and I have a lot of spares. Can't beat free!
@piked865 ай бұрын
@@drewzoo028 That setup makes more sense with that price tag.
@paxdriver5 ай бұрын
Here's my crazy idea: NVMe for the buffer, 4 high-capacity mechanical drives behind it. Speed of NVMe, storage of HDDs, all on 4 PCIe lanes with fantastic ext4 journaling. We need more layered solutions that come as a bundled-but-modular configuration. SAS HDDs would be perfect for modern PCIe-bundled storage solutions, IMHO.
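The payoff of such a tiered layout depends almost entirely on the cache hit rate. A back-of-envelope model with assumed round-number speeds (3000 MB/s for the NVMe tier, 600 MB/s for a 4-drive HDD stripe), using a time-weighted harmonic mean since each byte is served by exactly one tier:

```python
# Effective throughput of an NVMe cache in front of an HDD array,
# as a function of cache hit rate. Speeds are assumed round numbers.
NVME_MBPS = 3000
HDD_ARRAY_MBPS = 600  # e.g. 4 drives striped at ~150 MB/s each

def effective_mbps(hit_rate):
    # Time-weighted (harmonic) mean: each byte comes from one tier.
    return 1 / (hit_rate / NVME_MBPS + (1 - hit_rate) / HDD_ARRAY_MBPS)

for hr in (0.5, 0.9, 0.99):
    print(f"hit rate {hr:.0%}: ~{effective_mbps(hr):,.0f} MB/s")
```

Even a 90% hit rate only gets about 2.1 GB/s effective: the slow tier dominates quickly, which is why the sizing of the NVMe buffer relative to the working set matters more than its raw speed.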