Ceph Storage Solutions: Which is the Fastest and Most Efficient?

  4,476 views

Daniel Persson (Kalaspuffar)

A day ago

Comments: 11
@apalrdsadventures · A year ago
Wow! What a surprise to see CephFS perform so much better than RGW. I would have expected the opposite, but I'm guessing the MDS's metadata caching makes a big difference for IO performance.
@DanielPersson · A year ago
That could be one factor. The other is local file caching and the general limitations of the RGW host: 1 CPU and 2 GB of memory.
@apalrdsadventures · A year ago
Makes sense: the RGW gateway becomes a bottleneck, whereas CephFS clients can talk to the OSDs directly. It's a similar issue to exporting CephFS via an NFS gateway.
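To make the difference between the two IO paths concrete, here is a minimal sketch that writes the same payload once through an RGW S3 endpoint (via boto3) and once to a CephFS kernel mount. The endpoint URL, credentials, bucket name, and mount point are placeholder assumptions, not values from the video.

```python
import os
import time
import boto3

payload = b"x" * (64 * 1024 * 1024)  # 64 MiB test object

# Path 1: through the RGW gateway -- every byte passes through the RGW host.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:7480",  # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)
t0 = time.time()
s3.put_object(Bucket="bench", Key="obj-1", Body=payload)
print(f"RGW put_object: {len(payload) / (time.time() - t0) / 1e6:.1f} MB/s")

# Path 2: a CephFS kernel mount -- the client writes to the OSDs directly.
t0 = time.time()
with open("/mnt/cephfs/obj-1", "wb") as f:  # hypothetical mount point
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())  # force the data out before stopping the clock
print(f"CephFS write: {len(payload) / (time.time() - t0) / 1e6:.1f} MB/s")
```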
@kwnstantinos79 · A year ago
@apalrdsadventures If you use SATA or SAS drives, then yes, you need caching; with NVMe you don't need it.
@x-macpro6161 · A year ago
It's an amazing test, but I am not sure what the performance difference is between Cephadm (containerized) and a non-containerized Ceph installation on the host. Do you think the performance is the same?
@DanielPersson · A year ago
Hi X-MAC. I've not tested it explicitly, but I don't think it would impact you much, so if you have a lot of resources to spare you should be just fine. But if you run more constrained, or want to use your hardware as efficiently as possible, then any kind of abstraction will add extra cycles by design. I hope this helps. Thank you for watching my videos. Best regards, Daniel
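One hedged way to quantify that overhead is to run an identical fio job against storage backed by a cephadm (containerized) cluster and by a bare-metal one, then compare the reported bandwidth. The job parameters and mount paths below are illustrative assumptions, not settings from the video.

```python
import json
import subprocess

def seq_write_mbps(path: str) -> float:
    """Run a short sequential-write fio job on `path` and return MB/s."""
    out = subprocess.run(
        ["fio", "--name=seqwrite", "--rw=write", "--bs=4M", "--size=1G",
         "--direct=1", f"--filename={path}", "--output-format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    job = json.loads(out)["jobs"][0]
    return job["write"]["bw"] / 1024  # fio reports bandwidth in KiB/s

# Hypothetical mount points for the two deployments under comparison.
print("cephadm    :", seq_write_mbps("/mnt/ceph-cephadm/fio.test"))
print("bare metal :", seq_write_mbps("/mnt/ceph-baremetal/fio.test"))
```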
@ewenchan1239 · 9 months ago
Great video!!! Thank you for sharing your data. Even though these are VMs running on a single physical system, they are backed by an NVMe SSD, so I am surprised the results weren't higher. With no physical network in the path (although you did mention that there is a virtual switch, so I wonder whether the VMs were set up with the virtio network adapter rather than, say, the emulated Intel GbE NIC that is usually VirtualBox's default), I would have expected higher results. Very interesting. And apparently it's not particularly fast: it looks like the global maximum write speed was approximately 116 MB/s (with one replica), whilst the global maximum write speed for the erasure-coded pool was ~73 MB/s, even when running off a single NVMe SSD. That's quite slow. I'm surprised.
@DanielPersson · 9 months ago
Hi Ewen. I think the right takeaway from the video is the difference between the setups rather than the actual speeds. With a correct network setup and a couple of good hosts and drives you will have both speed and throughput. Some solutions have more complexity and will therefore be slower, but may have a benefit when it comes to redundancy or space savings. Thank you for watching my videos. Best regards, Daniel
@ewenchan1239 · 9 months ago
@DanielPersson Thank you. I stumbled upon this video because it popped up in my feed, but it is interesting because I just set up a 3-node Proxmox HA cluster where I am running Ceph as well. Each node is an OASLOA mini PC with an Intel N95 processor (4 cores/4 threads), 16 GB of RAM, and a 512 GB 2242 M.2 NVMe SSD, and the nodes are connected via their dual GbE NICs. I noticed while testing the system (setting up my Windows AD DC, DNS server, and Pi-hole) that it wasn't super fast, but I had attributed that to the GbE NICs that tie the systems together. In my experience, creating a VM on the Ceph erasure-coded RBD pool (k=2, m=1) wasn't really much faster than ~75 MB/s sequential write. The CPU utilisation, as reported by Proxmox, also didn't look very high either. So this is very interesting to me -- not only the relative comparison of the speed differences between the different setups, but also the speed comparison relative to what should be possible given the hardware you were using for these tests.
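As a rough sanity check on the k=2, m=1 erasure-coded pool described above, the snippet below compares its space overhead with 3-way replication, using the three 512 GB drives mentioned in the comment. The formulas are the standard ones; the numbers are not measurements from the video.

```python
def ec_overhead(k: int, m: int) -> float:
    """Raw-to-usable ratio of an erasure-coded pool: (k + m) / k."""
    return (k + m) / k

raw_tb = 3 * 0.512  # three nodes, one 512 GB NVMe SSD each

print(f"EC k=2,m=1 : {ec_overhead(2, 1):.2f}x overhead -> "
      f"~{raw_tb / ec_overhead(2, 1):.2f} TB usable")
print(f"3x replica : 3.00x overhead -> ~{raw_tb / 3:.2f} TB usable")
```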
@varunjain3870 · A year ago
Did you try Portworx? :D
@DanielPersson · A year ago
Hi Varun. Thank you for watching my videos. I've not heard of it before, but I've added it to my research list, so there might be a video on the topic in the future. Best regards, Daniel