This is what I miss from tech channels: absolute disregard for office etiquette and that “let’s see what happens” attitude, without an overly dramatic work-up. Walk up to the rack, yank several power cables out, stick your head out of the server room to listen for screams, go back, plug the equipment back in, bring it back up, and go on with your day. It doesn’t matter if it was scripted or not; I can appreciate a professional in their field doing this just for the funk of it. You folks got yourselves a subscriber. Keep it up.
@45Drives · 2 years ago
Alexander, this feedback really helps us, especially as we venture into a new series of videos. Thanks again for the kind words!
@swainlach4587 · 2 years ago
Yeah, I like submitting my work late because someone is fckg around in the server room.
@hz8711 · 2 years ago
I'm surprised I'm seeing this channel for the first time. Man, you don't need to compare yourselves with LTT; your videos are way more professional and technical, and you still keep the content at a very high level. A salute to the cameraman who stays in the server room the whole time! Thank you for this video
@MaxPrehl · 2 years ago
Got this whole video as an ad before a Tom Lawrence vid. Watched the whole thing and subscribed! Really enjoyed the end-to-end experience here!
@wbrace2276 · 2 years ago
Am I the only one who had to stop the video, then go back, just to hear him say “shit the bed” again? Point is, while I like your tech tips videos, this was a welcome change. Go fast, break shit, see what happens. Love it
@joncepet · 2 years ago
I was internally screaming when you just pulled the power cords from the servers! Nice video though!
@Neutrino2072 · 2 years ago
In my datacenter you can pull anything, even full nodes, and you're not going to notice a thing. If you build well, you can sleep well. If anything exists only once, it's not good.
@redrob2230 · 2 years ago
In Bubbles’ voice: “What the hell, Ricky? You’re pulling parts out, that can’t be good”
@Darkk6969 · a year ago
Wow... 72TB used out of 559TB available. It's gonna take a while for Ceph to check everything after being shut down like that. How is the performance of the VMs on Proxmox while that happens?
@toddhumphrey3649 · 2 years ago
New platform is great, keep the content coming. This is like a 45Drives Cribs episode, lol. Love the product, keep up the great work
@45Drives · 2 years ago
Thanks, Todd. We really appreciate the feedback.
@alexkuiper1096 · 2 years ago
Really interesting - many thanks!
@LampJustin · 2 years ago
Haha awesome video! ^^ Can't wait for more of these!
@45Drives · 2 years ago
Appreciate it!
@Exalted6298 · a year ago
Hi. I'm trying to build Ceph on a single node using three NVMe drives (the CRUSH map has been modified, like this: 'step chooseleaf firstn 0 type osd'). But the results of the 4K random read/write test were very poor, and I don't know why. According to the FIO test results on RBD, RND4K Q32T16: 4179.80 IOPS read, 10368.50 IOPS write. Testing directly on the physical disk: RND4K Q32T16: 35262.53 IOPS read, 32934.7 IOPS write.
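A test like the one quoted can be run against RBD with fio's built-in rbd engine, taking the VM out of the path; a minimal sketch, assuming fio was built with RBD support and that a scratch image named bench exists in the rbd pool (both names are placeholders):

  # 4K random read, queue depth 32, 16 jobs, straight at the RBD image
  fio --name=rnd4k --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench \
      --rw=randread --bs=4k --iodepth=32 --numjobs=16 \
      --runtime=60 --time_based --group_reporting
  # swap --rw=randread for --rw=randwrite for the write pass

Running the equivalent job against the raw device (ioengine=libaio, direct=1) gives the physical-disk baseline the comparison needs.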
@45Drives · a year ago
Ceph's BlueStore OSD backend improved OSD performance and latency considerably over Filestore. However, it was not designed specifically for NVMe, as NVMe was not yet prominent when it was being developed. There are some great optimizations for flash, but there are also limitations. Ceph has been working on a new backend called SeaStore (you may also find it under the name Crimson) if you wish to take a look, but it is still under development. With that being said, the best practice for getting as many IOPS as possible out of NVMe-based OSDs is to allocate several OSDs to a single NVMe. Since a single BlueStore OSD cannot come close to saturating an NVMe's IOPS, you should partition each NVMe into at least 3 partitions and create 3 OSDs out of it. Your mileage may vary; some people recommend 4 OSDs per NVMe, but 45Drives recommends 3. This should definitely give you additional performance. Hope this helps! Thanks for the question.
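A minimal sketch of that layout, assuming a ceph-volume based deployment (the device path is a placeholder):

  # carve one NVMe into three equal LVs and stand up an OSD on each
  ceph-volume lvm batch --osds-per-device 3 /dev/nvme0n1

The same invocation accepts several devices at once if a whole chassis is being provisioned.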
@gorgonbert · 2 years ago
The transfer probably failed because the file handle died and couldn't be re-established on one of the surviving cluster nodes. CIFS/SMB has gained a ton of mechanisms for file-handle reconnection over time (durable, resilient, and persistent handles), and I've lost track of which one's which. The server and client both need to support a given mechanism for it to work. Would love to see a video about how your solution solves that problem. I would like to host SQL databases via CIFS/SMB (I have reasons ;-) )
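On the Samba side, transparent failover is advertised per share; a minimal sketch, assuming a CTDB-backed cluster (the share name and path are placeholders, and Samba's persistent-handle support comes with caveats):

  # /etc/samba/smb.conf (excerpt)
  [global]
      clustering = yes                 # smbd runs under CTDB across the nodes
  [vmstore]
      path = /mnt/cephfs/vmstore
      durable handles = yes
      kernel oplocks = no
      kernel share modes = no
      posix locking = no
      continuously available = yes     # lets SMB3 clients request persistent handles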
@ztevozmilloz6133 · 2 years ago
Maybe you should try clustered Samba, i.e. CTDB. By the way, my tests seem to show that for file sharing it's better to mount CephFS than to create an RBD volume from the VM. But I'm not sure...
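A minimal sketch of that CephFS mount, using the kernel client (monitor addresses and the credential are placeholders):

  # kernel CephFS mount; secretfile holds the cephx key for client.admin
  mount -t ceph 10.0.0.1:6789,10.0.0.2:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

The practical difference: an RBD image is a block device that one client formats and mounts at a time, while CephFS can be mounted read/write by many clients simultaneously, which is usually what a file share wants.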
@blvckblanco2356 · a year ago
I'm going to do this in my office 😂
@intheprettypink · 2 years ago
That has got to be the worst lab gore in a server I've seen in a long time.
@bryannagorcka1897 · 2 years ago
A Proxmox cluster of 1, hey? lol. Regarding the Windows file transfer that stalled, it probably would have recovered after a few more minutes.
@user-cl3ir8fk4m · 2 years ago
Ooh, yay, another video about people who have enough money to set their own shit on fire during a global recession while I’m combing through six-year-old cold storage drives to find a couple extra GB of space! Not tone-deaf in the least!