We DESTROYED Our Production Ceph Cluster

13,818 views

45Drives

1 day ago

Comments: 24
@TechJackass88 2 years ago
This is what I miss from tech channels: absolute disregard for office ethics and that “let’s see what happens” attitude, without an overly dramatic work-up. Walk up to the rack, yank several power cables out, stick your head out of the server room to listen for screams, go back, plug the equipment back in, bring it back up, and go on with your day. It doesn’t matter if it was scripted or not; I can appreciate a professional in their field doing this just for the funk of it. You folks got yourselves a subscriber. Keep it up.
@45Drives 2 years ago
Alexander, this feedback really helps us, especially as we venture into a new series of videos. Thanks again for the kind words!
@swainlach4587 2 years ago
Yeah, I like submitting my work late because someone is fckg around in the server room.
@hz8711 2 years ago
I'm surprised I'm only seeing this channel for the first time. Man, you don't need to compare yourselves with LTT; your videos are way more professional and technical, and you still keep the content at a very high level. Salute to the cameraman who stays in the server room the whole time! Thank you for this video.
@MaxPrehl 2 years ago
Got this whole video as an ad before a Tom Lawrence vid. Watched the whole thing and subscribed! Really enjoyed the end-to-end experience here!
@wbrace2276 2 years ago
Am I the only one that had to stop the video, then go back, just to hear him say “shit the bed” again? Point is, while I like your tech tips videos, this was a welcome change. Go fast, break shit, see what happens. Love it.
@joncepet 2 years ago
I was internally screaming when you just pulled the power cords from the servers! Nice video though!
@Neutrino2072 2 years ago
In my datacenter you can pull anything, even full nodes, and you won't notice a thing. If you build well, you can sleep well. If anything exists only once, that's not good.
@redrob2230 2 years ago
In Bubbles’ voice: ”What the hell, Ricky? You’re pulling parts out, that can’t be good.”
@Darkk6969 1 year ago
Wow... 72 TB used out of 559 TB available. It's gonna take a while for Ceph to check everything after being shut down like that. How is the performance of the VMs on Proxmox when that happens?
@toddhumphrey3649 2 years ago
The new platform is great, keep the content coming. This is like a 45Drives Cribs episode, lol. Love the product, keep up the great work.
@45Drives 2 years ago
Thanks, Todd. We really appreciate the feedback.
@alexkuiper1096 2 years ago
Really interesting - many thanks!
@LampJustin 2 years ago
Haha awesome video! ^^ Can't wait for more of these!
@45Drives 2 years ago
Appreciate it!
@Exalted6298 1 year ago
Hi. I'm trying to build Ceph on a single node using three NVMe drives (the CRUSH map has been modified with 'step chooseleaf firstn 0 type osd'), but the results of the 4K random read/write test were very poor, and I don't know why. The FIO results on RBD were RND4K Q32T16: 4,179.80 IOPS read, 10,368.50 IOPS write. Testing directly on the physical disk gives RND4K Q32T16: 35,262.53 IOPS read, 32,934.7 IOPS write.
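For context, the single-node CRUSH change described in the question sets the failure domain to the OSD rather than the host, so replicas can land on different OSDs in the same node. A minimal sketch of the same change done through the Ceph CLI instead of hand-editing a decompiled CRUSH map (the rule name "single-node-osd" and pool name "rbd" are hypothetical):

    # Create a replicated rule whose failure domain is "osd":
    ceph osd crush rule create-replicated single-node-osd default osd
    # Point an existing pool at the new rule:
    ceph osd pool set rbd crush_rule single-node-osd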
@45Drives 1 year ago
Ceph's BlueStore OSD backend improved OSD performance and latency considerably over Filestore. However, it was not designed specifically for NVMe, as NVMe was not very prominent when it was being developed. There are some great optimizations for flash, but there are also limitations. Ceph has been working on a new backend called SeaStore, which you may also find under the name Crimson if you wish to take a look; however, it is still under development. With that being said, the best practice for getting as many IOPS as possible out of NVMe-based OSDs is to allocate several OSDs to a single NVMe. Since a single BlueStore OSD cannot come close to saturating an NVMe's IOPS, you should partition each NVMe into at least 3 partitions and create 3 OSDs out of it. Your mileage may vary, and some people recommend 4 OSDs per NVMe, but 45Drives recommends 3. This should definitely give you additional performance. Hope this helps! Thanks for the question.
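A minimal sketch of the multiple-OSDs-per-NVMe layout described above, using ceph-volume (the device name is hypothetical):

    # Carve one NVMe into three OSDs in a single step; ceph-volume
    # creates the LVM volumes and registers each OSD with the cluster:
    ceph-volume lvm batch --osds-per-device 3 /dev/nvme0n1

Deployments managed by cephadm can express the same thing declaratively with an OSD service spec containing "osds_per_device: 3".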
@gorgonbert 2 years ago
The transfer probably failed because the file handle died and the client wasn’t able to re-establish it on one of the surviving cluster nodes. CIFS/SMB has gained a ton of mechanisms over time for re-establishing file handles, and I’ve lost track of which one’s which. The server and client both need to support whichever mechanism for it to work. Would love to see a video about how your solution solves that problem. I would like to host SQL databases via CIFS/SMB (I have reasons ;-) )
@ztevozmilloz6133 2 years ago
Maybe you should try clustered Samba, I mean CTDB. By the way, my tests seem to show that for file sharing it's better to mount CephFS than to create an RBD volume inside the VM. But I'm not sure...
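A minimal sketch of that CTDB-plus-CephFS approach, assuming an existing CephFS filesystem and CTDB installed on each Samba gateway (the monitor address, client name, share name, and node IPs are all hypothetical):

    # Mount CephFS on every gateway node so they all share one filesystem:
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=smbgw,secretfile=/etc/ceph/smbgw.secret

    # Append to /etc/samba/smb.conf on each node; "clustering = yes"
    # tells Samba that CTDB is coordinating the gateways:
    #   [global]
    #       clustering = yes
    #   [shared]
    #       path = /mnt/cephfs/shared
    #       read only = no

    # List each gateway's private address in the CTDB nodes file,
    # then restart CTDB so it forms the cluster:
    printf '10.0.1.1\n10.0.1.2\n10.0.1.3\n' > /etc/ctdb/nodes
    systemctl restart ctdb

With a layout like this, any node can serve the share, and CTDB can move a public IP to a surviving node when one fails.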
@blvckblanco2356 1 year ago
I'm going to do this in my office 😂
@intheprettypink 2 years ago
That has got to be the worst lab gore in a server I've seen in a long time.
@bryannagorcka1897 2 years ago
A Proxmox cluster of 1, eh? lol. Regarding the Windows file transfer that stalled, it probably would have recovered after a few more minutes.
@user-cl3ir8fk4m 2 years ago
Ooh, yay, another video about people who have enough money to set their own shit on fire during a global recession while I’m combing through six-year-old cold storage drives to find a couple extra GB of space! Not tone-deaf in the least!
@logananderon9693 2 years ago
You should nose breathe more.