An Overview of CephFS Architecture Part 1 - Clustering Powered by Ceph

  Views: 6,720

45Drives

Days ago

Comments: 7
@alexandre-jacquesst-jacque5148 5 years ago
Interesting video, but something I honestly don't understand is why go to the trouble of having a virtualized layer for a single OSD node server? The only benefit I can see is that if there's a problem at your OS level (e.g. a process crash), the other VMs keep the cluster from going completely down. But is it really worth the added complexity? I've been running Ceph in production for 2 years; we started off with a single OSD node server and a CRUSH map config that allowed the cluster to run healthy on a single server, and the OS never had any problem that would have been saved by having multiple VMs.
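For context, the single-server CRUSH config the commenter describes is usually achieved by replacing the default `host` failure domain with `osd`, so replicas can land on different disks of the same machine. A minimal sketch using the standard Ceph CLI (the rule name `single-node` and pool name `cephfs_data` are hypothetical placeholders):

```shell
# Create a replicated CRUSH rule whose failure domain is "osd" rather than
# the default "host", so all replicas may be placed on one server's disks.
ceph osd crush rule create-replicated single-node default osd

# Point an existing pool (here a hypothetical "cephfs_data") at the new rule.
ceph osd pool set cephfs_data crush_rule single-node
```

With this rule in place, `ceph health` can report HEALTHY on a single host, at the cost of losing any protection against whole-server failure.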
@45Drives 5 years ago
The idea is to create an entire virtual platform for every Ceph service and for external services such as an SMB/NFS gateway, a real-time dashboard, and anything else you would need. From there you can add more virtual hosts and continue on to a complete hyperconverged solution, or expand out to a physical cluster and reconfigure your initial virtual node as a physical one. Editing the CRUSH map is definitely an efficient and effective way of solving the "single node" cluster issue, but editing CRUSH maps isn't always the most straightforward task. By using PCI pass-through and giving the OSD VMs complete control of the physical disks, we get the best of both worlds between physical and virtual.
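The PCI pass-through mentioned above is commonly configured in libvirt by attaching the physical disk controller (HBA) to the OSD VM as a host device, so the guest owns the disks directly. A sketch of the relevant domain XML fragment — the PCI address `0000:03:00.0` is a hypothetical example and must match your actual controller (see `lspci`):

```xml
<!-- libvirt domain XML fragment: pass the disk controller at the
     (hypothetical) PCI address 0000:03:00.0 through to the OSD VM -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Because the VM talks to the controller directly, the OSDs see the raw disks (SMART data, native queueing) rather than virtual block devices, which is what makes the hybrid physical/virtual approach workable.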
@leadiususa7394 5 years ago
I think Ceph has a place, and that virtual/physical storage models have value when looking at future IoT deployments because of their very high per-node I/O requirements, but it still needs to grow into an acceptable industry standard so it can be adopted on most fronts. (The virtual node adds injectable I/O as needed for the daily workload.) In a single deployment this is even better. I also like the idea of using the virtual node as a backup for the physical node when deploying in limited numbers. I want to test that for sure... if I could, that is. /:>
@georgelza 5 years ago
Where is part 2 ?
@chrismcgean5010 5 years ago
Hey George, part two is here: kzbin.info/www/bejne/gaGwZXtnpt2re7M
@leadiususa7394 5 years ago
Here it is... kzbin.info/www/bejne/gaGwZXtnpt2re7M
@leadiususa7394 5 years ago
And parts 3 and 4 of the Ceph series as well: kzbin.info/door/V_OrwHDqXdz7JuxjROhetA