Tuesday Tech Tip - Highly Available NFS

  7,076 views

45Drives

1 day ago

Comments: 16
@Eli-q5z9h
@Eli-q5z9h 4 days ago
Thank you for this video. I remain skeptical about the performance with Ceph. Does this solution work correctly with a fleet of 400 workstations running Ubuntu 22.04?
@45Drives
@45Drives 3 days ago
Long story short, yes: Ceph scales incredibly well for parallel access. I know this video is about NFS, but if you are using Ubuntu workstations, skip the NFS layer altogether and mount native CephFS to cut out one layer of latency. If you want to talk more, reach out - we'd love to talk specifics!
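A minimal sketch of that native CephFS mount on an Ubuntu workstation, using the kernel driver. The monitor address, CephX user name, and secret-file path below are placeholders for illustration; substitute your own cluster details:

```shell
# Install the Ceph client tools (from Ubuntu's own repositories)
sudo apt install ceph-common

# Kernel-driver mount; 192.168.1.10 and the 'workstation' CephX user
# are hypothetical -- use your monitor address and client key.
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=workstation,secretfile=/etc/ceph/workstation.secret
```

The equivalent /etc/fstab entry, so the share comes back after a reboot (`_netdev` delays the mount until the network is up):

```
192.168.1.10:6789:/  /mnt/cephfs  ceph  name=workstation,secretfile=/etc/ceph/workstation.secret,_netdev  0  0
```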
@BrianThomas
@BrianThomas 1 year ago
Is this video for professionals who know the material, or for tech enthusiasts who don't understand the technology?
@CapitaineWEB-FR
@CapitaineWEB-FR 4 years ago
That was a really nice tutorial! Is Mario Kart a discount coupon to buy hardware (2 for 1)? NFS HA on VMware is totally a must. So the hardware would be, at minimum, 2 x AV15 plus an external Ceph server? Is there a blog post about this?
@OneSmokinJoe
@OneSmokinJoe 4 years ago
I was wondering what your nfs.yml ansible-playbook file looks like.
@45Drives
@45Drives 4 years ago
Hey Joe, here is a direct link to that: github.com/45Drives/ceph-ansible-45d/blob/master/nfs.yml - hope this helps!
1 year ago
Link no longer works.
@danielkrajnik3817
@danielkrajnik3817 3 years ago
starts at 12:01
@MustafizurRahman-py5vu
@MustafizurRahman-py5vu 1 year ago
I have Ceph Nautilus and CephFS configured, and I want NFS on top of CephFS. How do I modify the NFS playbook, considering CephFS is mounted on /mnt/mycephfs?
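One common approach is to skip the /mnt/mycephfs kernel mount entirely and point an NFS-Ganesha export at CephFS through the CEPH FSAL, as the playbook-generated configs elsewhere in this thread do. A hedged ganesha.conf sketch; the export ID, pseudo path, and user ID are illustrative, and the CephX user needs appropriate MDS/OSD caps:

```
EXPORT {
    Export_Id = 100;          # arbitrary, but must be unique per export
    Path = /;                 # path inside CephFS itself, not /mnt/mycephfs
    Pseudo = /cephfs;         # NFSv4 pseudo-root that clients mount
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;          # talk to CephFS directly via libcephfs
        User_Id = "admin";    # placeholder CephX user
    }
}
```

Going through the FSAL avoids re-exporting a kernel mount, which removes one failure point from the HA picture.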
@mateuszbieniek4791
@mateuszbieniek4791 4 years ago
Thanks, what's this web interface for Ceph that you are using? Does Ceph come with its own interface?
@45Drives
@45Drives 4 years ago
Hey Mateusz, Ceph has a fully featured web management UI called the Ceph dashboard, from which the vast majority of your daily administration can be handled. The dashboard is part of the Ceph Manager daemon, which has been required for normal operation of a Ceph cluster for the last several releases. We have actually done a fully featured video running through all the features of the Ceph dashboard in a previous Tech Tip, which you can find here: kzbin.info/www/bejne/iIalfp1rhLeVm5Y&t Thanks!
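For anyone following along, the dashboard ships as a manager module and is enabled with a few `ceph` CLI calls, run on any node with an admin keyring. A sketch; the `admin` account name and the password file are placeholders:

```shell
# Enable the dashboard module in the active ceph-mgr
sudo ceph mgr module enable dashboard

# Generate a self-signed TLS certificate for the web UI
sudo ceph dashboard create-self-signed-cert

# Create an administrator account; the password is read from a file
echo -n 'changeme' > /tmp/dashboard-pass
sudo ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator

# The dashboard URL is reported alongside the other mgr services
sudo ceph mgr services
```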
@mateuszbieniek4791
@mateuszbieniek4791 4 years ago
@@45Drives Will watch. Thank you!
@learn_by_example
@learn_by_example 3 years ago
FAILOVER TIME IS 5 MIN!! Kindly advise.

Hello Team, we are also trying the same setup, using the GitHub code for ceph-nfs, but we see that it takes around 5 minutes to switch over from one active node to another during failover. In our environment we are running a Ceph cluster (version 15.2.7) with an Active/Passive HA NFS cluster using Corosync/Pacemaker:

1. Configuration is done and we are able to perform failover, but when the active node is tested with power-off/service-stop, we observe:
1.1. I/O operations get stuck for around 5 minutes and then resume, although the handover from the active node to the standby happens immediately once the node is powered off or the service is stopped.

Ceph version: 15.2.7
NFS-Ganesha: 3.3

Ganesha conf:
[ansible@cephnode2 ~]$ cat /etc/ganesha/ganesha.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
NFS_Core_Param {
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 3,4;
}
EXPORT_DEFAULTS {
    Attr_Expiration_Time = 0;
}
CACHEINODE {
    Dir_Chunk = 0;
    NParts = 1;
    Cache_Size = 1;
}
RADOS_URLS {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
    watch_url = "rados://nfs_ganesha/ganesha-export/conf-cephnode2";
}
NFSv4 {
    RecoveryBackend = 'rados_ng';
}
RADOS_KV {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
    pool = "nfs_ganesha";
    namespace = "ganesha-grace";
    nodeid = "cephnode2";
}
%url rados://nfs_ganesha/ganesha-export/conf-cephnode2
LOG {
    Facility {
        name = FILE;
        destination = "/var/log/ganesha/ganesha.log";
        enable = active;
    }
}
EXPORT {
    Export_id = 20235;
    Path = "/volumes/hns/conf/bb21b7c7-c663-40e9-ad11-a61441e6f77f";
    Pseudo = /conf;
    Access_Type = RW;
    Protocols = 3,4;
    Transports = TCP;
    SecType = sys,krb5,krb5i,krb5p;
    Squash = No_Root_Squash;
    Attr_Expiration_Time = 0;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}
EXPORT {
    Export_id = 20236;
    Path = "/volumes/hns/opr/138304ca-a70d-4962-9754-b572bce196b6";
    Pseudo = /opr;
    Access_Type = RW;
    Protocols = 3,4;
    Transports = TCP;
    SecType = sys,krb5,krb5i,krb5p;
    Squash = No_Root_Squash;
    Attr_Expiration_Time = 0;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}
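A stall of several minutes after an otherwise instant Pacemaker handover usually points at NFSv4 lease and grace-period recovery rather than the export config itself. NFS-Ganesha exposes both as tunables in the NFSv4 block; a hedged sketch with illustrative values (shortening them makes clients resume sooner after failover, at the cost of a tighter window for client state recovery):

```
NFSv4 {
    RecoveryBackend = 'rados_ng';
    Lease_Lifetime = 20;   # seconds a client lease lasts (default 60)
    Grace_Period = 30;     # recovery window after a restart (default 90)
}
```

With Protocols = 3,4 in play, it is also worth checking whether the stuck clients are NFSv3: with NLM disabled, their delay comes from TCP retransmission back-off rather than grace, so testing an NFSv4-only mount helps isolate the cause.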
@learn_by_example
@learn_by_example 3 years ago
Any input, please?
@GrandmasterPoi
@GrandmasterPoi 4 years ago
15 seconds of downtime is way too slow for me.
@learn_by_example
@learn_by_example 3 years ago
Do you mind telling how much downtime you were able to achieve? Mine is worse: 5 min!! Looking for some config-level changes needed to achieve something lower.