Thank you for this video. I remain skeptical about the performance with Ceph. Does this solution work correctly with a fleet of 400 workstations running Ubuntu 22.04?
@45Drives 3 days ago
Long story short, yes - Ceph scales incredibly well for parallel access. I know this video is about NFS, but if you are using Ubuntu workstations, skip the NFS layer altogether and mount native CephFS to cut out one layer of latency. If you want to talk more, reach out - we'd love to talk specifics!
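For anyone wondering what that looks like in practice, here is a minimal sketch of a native CephFS mount on an Ubuntu client using the kernel driver. The monitor address, cephx user name, and mount point are placeholders, not values from the video - substitute your own cluster details:

    # Install the Ceph client tools on the workstation
    sudo apt install ceph-common

    # One-off mount via the kernel CephFS client; "mon1" and the
    # "workstation" cephx user are illustrative names only
    sudo mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=workstation,secretfile=/etc/ceph/workstation.secret

    # Or persist it across reboots with an /etc/fstab entry:
    # mon1:6789:/  /mnt/cephfs  ceph  name=workstation,secretfile=/etc/ceph/workstation.secret,_netdev  0  0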
@BrianThomas a year ago
Is this video for professionals who know the material, or for tech enthusiasts who don't understand the technology?
@CapitaineWEB-FR 4 years ago
That was a really nice tutorial! Is Mario Kart a discount coupon to buy hardware (2 for 1)? NFS HA on VMware = totally a must. So the hardware would be a minimum of 2 x AV15 plus an external Ceph server? Blog post about this?
@OneSmokinJoe 4 years ago
I was wondering what your nfs.yml Ansible playbook looks like.
@45Drives 4 years ago
Hey Joe, here is a direct link to that: github.com/45Drives/ceph-ansible-45d/blob/master/nfs.yml - hope this helps!
a year ago
Link no longer works.
@danielkrajnik3817 3 years ago
starts at 12:01
@MustafizurRahman-py5vu a year ago
I have Ceph Nautilus and CephFS configured. I want NFS with CephFS. How do I modify the NFS playbook, given that CephFS is mounted on /mnt/mycephfs?
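In case it helps others with the same question, here is a minimal sketch of a ganesha.conf EXPORT stanza using the Ceph FSAL, modeled on the full config posted further down this thread. The Export_Id, paths, pseudo-root, and user below are illustrative assumptions, not values from the playbook:

    # Illustrative EXPORT stanza for NFS-Ganesha. FSAL CEPH talks to
    # CephFS directly, so Path is a path inside the CephFS namespace,
    # not the local /mnt/mycephfs kernel mount.
    EXPORT {
        Export_Id = 100;            # any unused export id
        Path = "/";                 # path within CephFS, not the local mount
        Pseudo = "/mycephfs";       # NFSv4 pseudo path clients will mount
        Access_Type = RW;
        Protocols = 3,4;
        Transports = TCP;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;            # CephFS FSAL rather than FSAL VFS
            User_Id = "admin";      # cephx user with access to the filesystem
        }
    }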
@mateuszbieniek4791 4 years ago
Thanks, what's this web interface for Ceph that you are using? Does Ceph come with its own interface?
@45Drives 4 years ago
Hey, Mateusz. Ceph has a fully featured web management UI, which they call the Ceph Dashboard. This is where the vast majority of your daily administration can be handled. The dashboard is part of the Ceph Manager daemon, which has been required for normal operation of a Ceph cluster for the last several releases. We have done a full video running through all the features of the Ceph Dashboard in a previous Tech Tip, which you can find here: kzbin.info/www/bejne/iIalfp1rhLeVm5Y&t Thanks!
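For reference, a minimal sketch of turning the dashboard on from any admin node, assuming a cluster of roughly that era:

    # Enable the dashboard module on the active manager
    ceph mgr module enable dashboard

    # Generate a self-signed certificate so the dashboard serves HTTPS
    ceph dashboard create-self-signed-cert

    # List mgr services to find the URL the dashboard is served on
    ceph mgr services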
@mateuszbieniek4791 4 years ago
@@45Drives Will watch. Thank you!
@learn_by_example 3 years ago
FAILOVER TIME IS 5 MIN!! Kindly advise.

Hello team, we are also trying the same setup and are using the GitHub code for ceph-nfs, but we see that it takes around 5 minutes to switch over from one active node to another during failover. In our Ceph cluster (version 15.2.7) we are trying to use NFS in HA mode.

Mode: "Active/Passive HA NFS Cluster"

When using the Active/Passive HA config for the NFS server with Corosync/Pacemaker:
1. Configuration is done and we are able to perform failover, but when the active node is tested with power-off/service-stop, we observe:
1.1: I/O operations get stuck for around 5 minutes and then resume, although the handover from the active node to the standby node happens immediately once the node is powered off or the service is stopped.

Ceph version: 15.2.7
NFS-Ganesha: 3.3

Ganesha conf:
[ansible@cephnode2 ~]$ cat /etc/ganesha/ganesha.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
NFS_Core_Param {
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 3,4;
}
EXPORT_DEFAULTS {
    Attr_Expiration_Time = 0;
}
CACHEINODE {
    Dir_Chunk = 0;
    NParts = 1;
    Cache_Size = 1;
}
RADOS_URLS {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
    watch_url = "rados://nfs_ganesha/ganesha-export/conf-cephnode2";
}
NFSv4 {
    RecoveryBackend = 'rados_ng';
}
RADOS_KV {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
    pool = "nfs_ganesha";
    namespace = "ganesha-grace";
    nodeid = "cephnode2";
}
%url rados://nfs_ganesha/ganesha-export/conf-cephnode2
LOG {
    Facility {
        name = FILE;
        destination = "/var/log/ganesha/ganesha.log";
        enable = active;
    }
}
EXPORT {
    Export_id = 20235;
    Path = "/volumes/hns/conf/bb21b7c7-c663-40e9-ad11-a61441e6f77f";
    Pseudo = /conf;
    Access_Type = RW;
    Protocols = 3,4;
    Transports = TCP;
    SecType = sys,krb5,krb5i,krb5p;
    Squash = No_Root_Squash;
    Attr_Expiration_Time = 0;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}
EXPORT {
    Export_id = 20236;
    Path = "/volumes/hns/opr/138304ca-a70d-4962-9754-b572bce196b6";
    Pseudo = /opr;
    Access_Type = RW;
    Protocols = 3,4;
    Transports = TCP;
    SecType = sys,krb5,krb5i,krb5p;
    Squash = No_Root_Squash;
    Attr_Expiration_Time = 0;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}
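A hedged note for anyone debugging the same stall: when the Pacemaker handover itself is instant but I/O freezes for minutes, the usual suspect is the NFSv4 lease/grace window, and NFS-Ganesha exposes timers for it in the NFSv4 block. The values below are illustrative numbers to experiment with, not settings from the video or the playbook:

    # Shorter lease/grace timers shrink the window during which clients
    # block after a failover; too short can break client recovery under load.
    NFSv4 {
        RecoveryBackend = 'rados_ng';
        Lease_Lifetime = 20;   # seconds a client lease lasts (illustrative)
        Grace_Period = 30;     # seconds the server stays in grace (illustrative)
    }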
@learn_by_example 3 years ago
Any input, please?
@GrandmasterPoi 4 years ago
15 seconds of downtime is way too long for me.
@learn_by_example 3 years ago
Do you mind telling me how much downtime you are able to achieve? Mine is worse - 5 min!! I'm looking for config-level changes that would get it lower.