Tuesday Tech Tip - Tuning and Benchmarking for your Workload with Ceph

3,884 views

45Drives

1 day ago

Comments: 12
@BillHughesWeehooey 2 years ago
Hey Mitch, thanks for getting started on this video series so fast. I would like to see tuning for VM workloads first. So RBD. Thanks! Keep up the great work!
@subhobroto 2 years ago
First, congratulations and thank you for getting this Ceph cluster benchmark series kicked off. I really like the way you started: since Ceph is a networked FS, we really need to ensure the foundation of the cluster - the CPU, network, and disks - meets our requirements. The insight about disabling the deeper C-states for CPU power saving was very helpful. I can see that a lot of us here are using Ceph to back VM workloads, and yes, iSCSI setup, tuning, and benchmarks would really be appreciated! I might have missed a detail - to get those nice 10 Gbps numbers, was any custom network tuning done, e.g. jumbo frames (and maybe a discussion of the pros and cons)?
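For anyone wanting to check the C-state point on their own OSD nodes, here is a minimal sketch, assuming the standard Linux cpuidle sysfs layout, that lists each idle state and whether it is currently enabled:

    import glob
    import os

    # Each cpuidle state exposes its name, exit latency, and a "disable"
    # flag under sysfs; deeper states have higher exit latencies.
    for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
        name = open(os.path.join(state, "name")).read().strip()
        latency = open(os.path.join(state, "latency")).read().strip()
        disabled = open(os.path.join(state, "disable")).read().strip()
        print(f"{name}: exit latency {latency} us, "
              f"{'disabled' if disabled == '1' else 'enabled'}")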
@Darkk6969 1 year ago
Jumbo frames will definitely help throughput, but latency can also be a killer. If there are any network issues such as a bad cable or switch problems, jumbo frames will work against you, since retransmissions have to resend the larger 9k frames instead of the 1.5k frames a normal network uses.
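A quick way to confirm the whole path really passes 9000-byte frames is a don't-fragment ping sized just under the MTU; a smaller MTU on any hop then fails loudly instead of silently fragmenting. A rough sketch (the peer address is a placeholder, and -M/-s are Linux ping options):

    import subprocess

    HOST = "10.0.0.2"      # placeholder: a peer on the Ceph cluster network
    MTU = 9000
    PAYLOAD = MTU - 28     # subtract 20-byte IP header + 8-byte ICMP header

    # -M do sets the don't-fragment bit, so an undersized hop drops the
    # packet instead of silently fragmenting it.
    result = subprocess.run(
        ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), HOST],
        capture_output=True, text=True,
    )
    print("jumbo path OK" if result.returncode == 0
          else "MTU mismatch somewhere on the path")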
@ThorbjoernWeidemann 2 years ago
Great start, looking forward to the rest of this series. For me RBD performance is most important.
@zacks4186 2 years ago
This is great. I'm going to take a look at tuning the Ceph cluster we built at work. Would love to see a top-end version of this with an all-NVMe setup.
@bamzilla1616 2 years ago
Bring on Part 2!
@pewpewpew8390 2 years ago
VM workloads and iSCSI, hopefully with VMware as well. And NFS (not VMs), pretty please!
@JackGuillemette 2 years ago
Would be very interested in this as well.
@Darkk6969 1 year ago
I'm debating running Ceph again on our 7-node Proxmox cluster. Right now it's running ZFS with replication to keep things simple. I'm mostly after HA, keeping all the VMs running when a node fails. I ran Ceph on an older version of Proxmox three years ago and had serious performance issues when it rebalanced. I'm thinking of having 4 compute nodes and 3 Ceph nodes on a dedicated 10-gig network, to separate things out instead of all 7 nodes running VMs and Ceph at the same time. I won't be using iSCSI, so it'll all be through RBD. Any thoughts?
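One way to compare the two topologies before committing is to time raw object writes from a client node. A rough sketch using the python3-rados bindings (the conf path and the "rbd" pool name are assumptions; `rados bench` from the CLI covers the same ground more thoroughly):

    import time
    import rados

    # Assumes the default conf path and an existing pool named "rbd".
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")

    payload = b"\0" * (4 * 1024 * 1024)  # 4 MiB, RBD's default object size
    count = 64

    start = time.monotonic()
    for i in range(count):
        ioctx.write_full(f"bench_obj_{i}", payload)
    elapsed = time.monotonic() - start
    print(f"{count * 4 / elapsed:.1f} MiB/s sequential object writes")

    # Clean up the benchmark objects afterwards.
    for i in range(count):
        ioctx.remove_object(f"bench_obj_{i}")
    ioctx.close()
    cluster.shutdown()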
@jozefrebjak6209 2 years ago
I tried to tune our cluster with the tuned network-latency profile on Dell PowerEdge R730xd servers running Ubuntu 20.04.4 LTS, and our CPUs started to overheat, climbing from 50 °C to 65 °C and still rising, so I decided to switch back to the default balanced profile. C-state residency was zero and Busy% was 100% on every single core.
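That tracks with what the profile does: network-latency keeps the cores out of deep C-states, so they never idle down and package temperature climbs. A small sketch for watching the thermal response while switching profiles (tuned-adm is the standard Tuned CLI; the thermal zone index is a guess and varies by platform):

    import subprocess
    import time

    def read_temp_c(zone=0):
        # Assumes an x86 sysfs thermal zone; the index varies by platform.
        with open(f"/sys/class/thermal/thermal_zone{zone}/temp") as f:
            return int(f.read()) / 1000.0

    for profile in ("balanced", "network-latency"):
        subprocess.run(["tuned-adm", "profile", profile], check=True)
        time.sleep(60)  # let thermals settle under a steady workload
        print(f"{profile}: {read_temp_c():.1f} °C")

    # Fall back to the conservative default when done.
    subprocess.run(["tuned-adm", "profile", "balanced"], check=True)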
@DanBroscoi 2 years ago
More performance = more watts = more heat = more pressure on the HVAC. There's no way to improve performance without increasing power consumption (except free cooling in the winter). :)
@Darkk6969 1 year ago
@DanBroscoi Running this in a large data center shouldn't be an issue. Servers are built like tanks since they're designed to run 24/7, so I wouldn't worry too much about the CPU heat output.