Proxmox 8 Cluster with Ceph Storage Configuration

118,670 views

VirtualizationHowto

1 day ago

Comments: 141
@davidefuzzati8249 · 1 year ago
That's how a tutorial should be done! Thoroughly explained and detailed, step by step!!! THANK YOU SO VERY MUCH!!
@IamDmitriev · 1 year ago
Yes, and it sounds strange, but this is a step-by-step instruction on the one hand, yet it really helped me understand the logic of Proxmox and Ceph on the other.
@substandard649 · 11 months ago
This was a perfect tutorial, watched it once, built a test lab, everything worked as expected.
@VirtualizationHowto · 11 months ago
Awesome @substandard649, glad it was helpful! Be sure to sign up on the forums and I can give more personalized help here: www.virtualizationhowto.com/community
@RobertoRubio-z3m · 3 months ago
I can confirm this. I tried it and it worked on the first try. Really good.
@Stingray7423 · 3 months ago
Not only is this a perfect Proxmox/Ceph tutorial, it's also an amazing tutorial on how to make proper videos that deliver results! Thank you!
@rich-it-hp · 1 year ago
Great video and thoroughly detailed. My only advice for properly monitoring a migrating VM would be to send a ping to it from a different machine/VM. When doing anything from the VM being migrated, the process will pause in order to be transferred over to the new host (thus not showing any dropped packets from the VM's own point of view). Keep up the good work!
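A minimal sketch of that test, assuming a hypothetical VM 100 with guest IP 192.168.1.50 migrating from node pve1 to pve2 - run the ping from a third machine, not from the guest itself:

    # From a third machine, watch for drops during the migration:
    ping 192.168.1.50        # Linux pings continuously; on Windows use: ping -t 192.168.1.50

    # On the source node, trigger the live migration:
    qm migrate 100 pve2 --online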
@VirtualizationHowto · 1 year ago
Thank you @user-xn3bt5mz1x, good point. Thanks for your comment.
@naami2004 · 1 year ago
The best Proxmox & Ceph tutorial, thank you.
@VirtualizationHowto · 1 year ago
@naami2004, awesome! Thank you for the comment and glad it was helpful.
@ayoubguennane5230 · 5 months ago
Good job, thank you!
@AchmatJoseph-g3s · 26 days ago
Oh man, plain and simple language that novices like myself can understand. Thank you, this was very intuitive.
@pg_usa · 7 months ago
The best detailed how-to video in the Proxmox universe...
@substandard649 · 10 months ago
This is great. I would love to see a real-world homelab version using 3 mini PCs and a 2.5GbE switch. I think there are a lot of users like me running Home Assistant in a Proxmox VM along with a bunch of containers for CCTV / DNS etc. There are no videos covering this Ceph scenario, and I need a hero 😊
@VirtualizationHowto · 10 months ago
@substandard649 sign up and join the VHT forums here and we can discuss any questions you have in more detail: www.virtualizationhowto.com/community
@JasonsLabVideos · 1 year ago
Good video sir, I played with this on a few Lenovo mini machines and loved it!!
@ierosgr · 1 year ago
Nice presentation and explanation of some key core steps of the procedure. Yet you omit to mention that:
- Nodes should be the same from a h/w perspective, especially when the VMs running are Windows Servers, since you could easily lose your license just by transferring it to a different node with different h/w specs.
- Even if someone might get it just from pausing the video and noticing that the 3 storages are the same on all 3 nodes, a mention of that wouldn't hurt.
- Finally, a video like this could be a nice start for several others about maintaining and troubleshooting a cluster with Ceph: usual stuff like a node going down for good, or going down for a long time while parts are ordered (which floods the user with syslog messages - you might want to show how to stop or suppress them until the node is fixed again)... etc.
@Meowbay · 1 year ago
This is bullcrap. I have Proxmox running on 3 tiny PCs (TRIGKEY, Intel, and an older mini-PC board). All 3 of them were once licensed for Windows 7, 10, and 11; I transferred all their activations to my Microsoft cloud account, which is essentially done just by activating while logged in with an MS account. I then installed Proxmox and erased the 3 machines. They even have different-sized boot SSDs; Proxmox and Ceph don't give a rat's ass. I can easily create a Win11 VM and transfer it without issues between the 3. Microsoft has all 3 hardware images in its database, so it's all fine with the OS moving from one to the other.
@ierosgr · 1 year ago
@Meowbay Nice, but give it a try with Windows Server licenses, not plain OSes. You mentioned you tried Win 7, 10, and 11; I stated Windows Server OSes. I was on the phone with Microsoft for over an hour and they couldn't even give me a straight answer as to whether the license would be maintained after migration. Finally, I was talking about production environments, where knowing what will happen is mandatory - not homelab.
@samegoi · 1 year ago
Ceph is an incredibly nice distributed object storage solution, and it's open source. I need to check it out myself.
@youNOOBsickle · 1 year ago
I've been planning to move from VMware & vSAN to "Pmox" :) & Ceph for a while now. I just need the time to set everything up and test. I love that you virtualized this first! My used storage is about 90% testing VMs like these. 🤷‍♂️
@achmadsdjunaedi9310 · 1 year ago
The best tutorial for clustering, 😊 thank you sir... We will try it on three server devices, to be applied to the Republic of Indonesia radio data center...
@IvanPavlov007 · 7 months ago
Did it work?
@sking379 · 1 year ago
We definitely love the content, we appreciate your attention to detail!!!
@RobertoRubio-z3m · 3 months ago
Thanks SO MUCH for this video. It literally turned things around for me. Cheers from Panama.
@cafeaffe3526 · 8 months ago
Thanks for this awesome tutorial. It was easy to understand, even for a non-native English speaker.
@PaulKling · 1 year ago
FYI, clicking in a CMD window changes the title to "Select" and pauses the process running in the window. For most of the demo it was in selection mode (the ping command was paused); it would be interesting to see how it worked without the selection. Otherwise, loved the demo - the Ceph storage setup was exactly what I was looking for.
@VirtualizationHowto · 11 months ago
@PaulKling awesome! Thank you for the comment! Be sure to sign up on the forums: www.virtualizationhowto.com/community
@jburnash · 1 year ago
Thank you! This was incredibly helpful with my setting up Ceph for the first time and showed all the details necessary to better understand it and test that it was working!
@NetDevAutomate · 6 months ago
Super helpful, with really clear steps and explanations. Saved me a lot of time, and I learned a lot too - many thanks.
@EViL3666 · 9 months ago
Thank you, that was very informative and spot-on... One thing I did pick up, and this is my "weirdness": you might be trying a little too hard with the explicit descriptions. For example, in the migration testing you explicitly call out the full hostnames several times - at this stage in the video, viewers are intimately familiar with the servers, so stating "server 1 to 2" would feel more natural.
@IvanPavlov007 · 7 months ago
Could go both ways - as a newbie I appreciate the explicit details, as it's exactly when presenters start saying generic-sounding "first box" or "the storage pool" that I often get lost!
@erikdebont5876 · 1 month ago
Thanks for the video. Great explanation, and it works like a charm!
@michaelcooper5490 · 1 year ago
Great Video sir, I appreciate the work you put in. It is well explained. Thank you.
@dimitristsoutsouras2712 · 1 year ago
At 3:26 it would be useful to mention that Ceph and HA benefit greatly from a separate network for their data-exchange traffic. That would be the point where you'd choose a different network from the management one (in case, of course, there is one to choose from). Yes, it will work with the same network for everything, but it won't be as performant as with a dedicated one. Edit: stand corrected by 8:26, where you do mention it.
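For reference, a hedged sketch of where that choice is made at Ceph initialization time, assuming a hypothetical dedicated 10.10.10.0/24 storage network separate from management:

    # Put Ceph traffic on its own subnet instead of the management network
    pveceph init --network 10.10.10.0/24

    # Optionally split client (public) and OSD replication (cluster) traffic further
    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24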
@radiowolf80211 · 1 month ago
Best video out on this. What do you think of having a bunch of random drives? How much should I care about having the same processors, same drive models, same drive sizes?
@markmonroe7330 · 13 days ago
Excellent presentation. Thank you.
@cberthe067 · 1 year ago
Great tutorial! I'm planning to buy some old thin clients (Ryzen A10) to test this Proxmox 8 Ceph config!
@dronenb · 1 year ago
Wow, this is an excellent tutorial. Thanks!
@nicklasring3098 · 8 months ago
That's really cool! Thanks for the vid
@souhaiebbkl2344 · 1 year ago
You have got a new subscriber. Awesome tutorial.
@felipemarriagabenavides · 1 year ago
Advice: in production environments use 10Gbps links on all servers, otherwise a bottleneck is created when the disks are running at 6Gbps.
@ErikS- · 6 months ago
Do you really think 6Gbps is the speed you will get from an HDD??
@Stony5438 · 5 months ago
@ErikS- An SSD, perhaps. Or a RAID array, maybe.
@ShimoriUta77 · 3 months ago
Bro forgot RAID arrays can get PRETTY fast, so a single or dual 100Gbps link happens to be the surefire way.
@junialter · 1 year ago
That was a perfect introduction. Thank you.
@vmaione1964 · 6 months ago
The best guide I've found, thanks so much for your effort. Just a question related to Ceph: do you suggest/prefer Quincy or Reef? Thanks so much.
@---tr9qg · 1 year ago
Can't wait for the Proxmox dev team to add fault-tolerance functionality to their product. It would be cool.
@marcusrodriguesadv · 1 year ago
Great content, shoutoutz from Brazil...
@djstraussp · 1 year ago
Nice video, I'm planning on upgrading to a Proxmox Ceph cluster this holiday. A prompt result from the YT algorithm. BTW, that nested cluster under vSphere... 😮
@VirtualizationHowto · 11 months ago
@djstraussp Thank you for the comment! Awesome to hear.....also sign up for the forums, would like to see how this project goes: www.virtualizationhowto.com/community
@BR0KK85 · 2 months ago
Awesome tutorial, thank you 😊
@Sammoga_Yeddi · 7 months ago
Great video. Thank you!
@parl-88 · 1 year ago
Wonderful video! Thanks for your time and detailed explanations. I just found your YT channel and I am loving it so far.
@VirtualizationHowto · 1 year ago
Awesome @pedroandresiveralopez9148! So glad to have you and thank you for your comment. Also, join up on the Discord server, I am trying to grow lots of good discussions here: discord.gg/Zb46NV6mB3
@robbuurman1667 · 5 months ago
Great tutorial!
@rahilarious · 1 year ago
Please make a Ceph cluster tutorial on a non-Proxmox distribution.
@bash-shell · 1 year ago
Your videos are a great help. PS: I think light mode would be better for seeing details in tutorials.
@acomav · 1 year ago
Totally agree. Dark mode may be the personal preference of a majority of people for day-to-day work on their own screen, but for YouTube videos you should use light mode. Love your content.
@resonanceofambition · 1 year ago
bro this is so cool
@AlienXSoftware · 9 months ago
Great video. It's a shame you had the command prompt window in "Select" mode when you did the demo of live migration, as this paused the pings - but neat nonetheless.
@MrRoma70 · 1 year ago
Nice work, Ceph is really good, although I moved a VM from a different disk to the pool and it did not migrate seamlessly. Nevertheless, I like the idea. Can you make a video showing how to use Ceph with HA? Thank you.
@arthurd6495 · 10 months ago
Thanks, good stuff.
@carlnakamura4861 · 1 month ago
Great video, thanks! What are the prerequisites for installing Ceph? I have a number of NUCs running in a Proxmox cluster with only one locally installed NVMe per node. Can Ceph be installed and configured to run in my environment by partitioning the NVMe drives? I can install the Ceph components, but obviously there's no disk to select per the guide to create the shared pool... On the flip side, if it is not possible, how do you remove all Ceph traces? I can't seem to find an intuitive way to do it (or at least not as easy as adding it)...
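On the removal question, a hedged sketch of the usual cleanup path - run per node, and only if you're sure, since it deletes Ceph state (spare-disk name hypothetical):

    # Stop Ceph services and purge the node's Ceph configuration
    systemctl stop ceph.target
    pveceph purge

    # If an OSD ever touched a spare disk, clear its leftover LVM metadata too (destroys data!)
    ceph-volume lvm zap /dev/sdb --destroy

As far as I know, the GUI OSD wizard only offers whole, unused disks, which is why a single NVMe per node that also carries the OS leaves nothing to select.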
@MAD20248 · 1 year ago
Thank you so much, I think I now clearly understand how the storage requirements work. But what about CPU/RAM sharing? I'm planning to build a cluster with enough storage, run VMs on each node, and fully utilize the hardware on each of them. I don't know how the cluster is going to behave when one of the nodes fails - or should I spare some RAM/CPU?
@souhaiebbkl2344 · 1 year ago
One question: do we need to have the same shared storage space across all nodes for Ceph to work properly?
@IvanPavlov007 · 7 months ago
I have the same question - can I make one physical server have much larger storage (e.g. via an external HBA/SAS 12x3.5" enclosure) than the others, to use as extra file storage?
@ShimoriUta77 · 3 months ago
Just as a side note, that's not an encrypted string. It's a JSON Web Token, or JWT for short. The payload is not encrypted at all; it's digitally signed but not encrypted - plaintext, Base64-encoded.
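A quick sketch of the point being made - Base64 is an encoding, not encryption, so a JWT payload (hypothetical token below) is readable by anyone without a key:

    # Decode the middle (payload) segment of a JWT - no secret required
    TOKEN='eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJkZW1vMSJ9.signature'
    echo "$TOKEN" | cut -d '.' -f2 | base64 -d && echo
    # prints: {"sub":"demo1"}

The signature at the end only proves integrity; it does nothing to hide the payload.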
@AlejandroRodriguez-wt2mk · 1 year ago
Nicely done, subscribed now.
@Renull55 · 1 year ago
I don't have storage at the OSD step - how do I create it?
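A hedged sketch of the usual fix - the OSD dialog only lists disks with no partitions or filesystem signatures, so a leftover partition table hides the disk (spare disk /dev/sdb is hypothetical; wiping destroys its data):

    # See what the node detects
    lsblk

    # Clear old partition/filesystem signatures from the spare disk
    wipefs -a /dev/sdb

    # Create the OSD on the now-clean disk
    pveceph osd create /dev/sdb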
@fatihince2810 · 4 months ago
Super!
@milocheri · 8 months ago
Hi, this is the best tutorial I've seen so far on YouTube - it's complete. However, I have a question: since you said you are running each Proxmox node in VirtualBox, how did you manage to create a VM and not get the error message "KVM virtualisation configured, but not available"? Thank you for your help!
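A hedged sketch of what typically has to be enabled for that - the outer hypervisor must pass hardware virtualization through to the nested Proxmox VM (VM name and ID hypothetical):

    # On the VirtualBox host: expose VT-x/AMD-V to the nested Proxmox VM
    VBoxManage modifyvm "proxmox-node1" --nested-hw-virt on

    # Inside Proxmox: give nested guests the host CPU flags
    qm set 100 --cpu host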
@khamidatou · 9 months ago
Great tutorial, thanks for sharing.
@VirtualizationHowto · 9 months ago
Thanks for watching!
@juanramonlopezruiz5216 · 4 months ago
Excellent tutorial. Just 2 questions about Ceph: 1) What happens if the disks in my 3 servers have different sizes (e.g. 200, 500, 800GB), and if one of them is an SSD while the other two are mechanical? 2) Where does the VM's hard disk really live - in the volume spread across the 3 servers? Thanks for your help.
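On question 2, a hedged way to see it for yourself (pool name vm-pool hypothetical): each VM disk is an RBD image striped in small objects across the OSDs on all three servers, so it lives everywhere at once:

    # List the VM disk images stored in the pool
    rbd -p vm-pool ls

    # Show per-OSD/per-host usage - mixed disk sizes show up as uneven fill levels
    ceph osd df tree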
@mr-manuel0 · 3 months ago
Unfortunately your ping test during migration is useless, since you clicked in the CMD box at 14:38 and the ping stopped. To resume you would have to press Enter, but you did not. You can see that the 4 ms ping is frozen there. It would have been interesting to see whether at least one ping was lost.
@jesusleguiza77 · 1 month ago
Hi, what configuration should I apply so that the minimum number of available copies is 1, for a cluster with 2 nodes and a qdevice? Cheers.
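A sketch under those assumptions (pool name vm-pool and QDevice IP hypothetical). Fair warning: size 2 / min_size 1 is generally discouraged outside labs, since running on a single surviving replica risks data loss:

    # Give the 2-node cluster a third corosync vote
    pvecm qdevice setup 192.168.1.10

    # Two replicas, and keep serving I/O with only one available
    ceph osd pool set vm-pool size 2
    ceph osd pool set vm-pool min_size 1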
@2Blucas · 7 months ago
Hi, thanks for your great content - simple and well explained. Regarding Proxmox VE's High Availability features: if I have a critical Microsoft SQL Server VM, will the system effectively handle a scenario where one PVE node crashes, or where the VM needs to migrate to another PVE node? Specifically, I'm concerned about the risk of losing transactions during such events. How does Proxmox ensure data integrity and continuity for database applications like SQL Server in high-availability setups?
@frandrumming · 10 months ago
It's cool 😎
@markstanchin1692 · 10 months ago
Hello, great video - I was able to follow along. Question: what's the difference between a cluster like this in Proxmox and a Kubernetes/K3s setup in Proxmox - the trade-offs and benefits of one versus the other, etc.? Also, could you list some examples of possible use scenarios and configurations? Thanks.
@VirtualizationHowto · 10 months ago
@markstanchin1692 thank you for the comment! Sign up on the VHT forums here and let's discuss it in more detail: www.virtualizationhowto.com/community
@igorpavelmerku7599 · 8 months ago
Interesting... adding the second node gets it into the cluster, but it stays red (as if unavailable); when trying to add the third node I get an "An error occurred on the cluster node: cluster not ready - no quorum?" error and the cluster join aborts. I have reinstalled all three nodes from scratch a couple of times, and removed and redone the cluster over and over, to no avail. Not working on my side...
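A few hedged checks that usually narrow this down (node address hypothetical) - joins abort with "no quorum" when the existing members can't talk to each other over corosync:

    # On each node: how many members and votes does corosync currently see?
    pvecm status

    # Nodes must reach each other over the cluster network (corosync uses UDP 5405 by default)
    ping 192.168.1.21
    cat /etc/pve/corosync.conf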
@fakebizPrez · 28 days ago
I went through this nicely last week, but after buying a ton of hardware and reconfiguring, I went with a clean install, and now I can't even get an OSD made. It's like the initial configuration haunts the disks till the end of time.
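That symptom is usually leftover Ceph LVM metadata surviving the reinstall; a hedged sketch of clearing it (device name hypothetical - this destroys everything on the disk):

    # Leftover ceph-* LVM volumes from the old install show up here
    lsblk

    # Zap the old OSD disk, including its LVM/bluestore remnants
    ceph-volume lvm zap /dev/sdb --destroy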
@subhajyotidas1609 · 8 months ago
Thanks for a very clear and concise tutorial. I had one question, though: as the pool is shared by three nodes, is it possible to make a VM auto-migrate to another host if one host goes down abruptly?
@VirtualizationHowto · 8 months ago
@subhajyotidas1609 Thank you for the comment! Yes, the Ceph storage pool acts like any other shared storage once configured. You just need to set up HA for your VMs; if a host goes down, the heartbeat timer will note that the host is down, another Proxmox host will assume ownership of the VM, and it will be restarted on the other host. Hit me up on the forums if you have any other questions or need more detailed explanations. Thanks @subhajyotidas1609! www.virtualizationhowto.com/community
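A minimal sketch of the HA side of that reply, assuming a hypothetical VM ID 100:

    # Mark the VM as HA-managed; if its node dies, it is restarted on another node
    ha-manager add vm:100 --state started

    # Verify what the HA stack is tracking
    ha-manager status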
@KevinZyu-iz7tn · 1 year ago
Nice tutorial, thanks. Is it possible to attach an external Ceph pool to a Proxmox cluster?
@troley1284 · 1 year ago
Yes, you can mount external RBD or CephFS to Proxmox.
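A hedged sketch of what that attach can look like (storage name, pool, and monitor IPs all hypothetical; the matching keyring also needs to be placed under /etc/pve/priv/ceph/):

    # Register an external Ceph RBD pool as a Proxmox storage backend
    pvesm add rbd ext-ceph \
        --pool vm-pool \
        --monhost "10.0.0.1 10.0.0.2 10.0.0.3" \
        --username admin \
        --content images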
@JitendraSingh-fw9qf · 1 month ago
Hello, you explained just a home-lab-level configuration, but in production we need to add multiple monitors (on a subnet other than the nodes' public IPs), a separate Ceph cluster IP subnet, multiple MDS servers, and multiple Ceph managers - all for high replication throughput, high resiliency, and high availability. Can you please share a proper enterprise-class network diagram for all Ceph services?
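Not a diagram, but the core of that separation in config terms - a hedged sketch with hypothetical subnets, keeping monitor/client traffic apart from OSD replication:

    # /etc/pve/ceph.conf (excerpt)
    [global]
        public_network  = 10.10.10.0/24   # monitors + client I/O
        cluster_network = 10.10.20.0/24   # OSD replication and heartbeats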
@jonathan-._.- · 2 months ago
Wait, was the Windows CMD paused while it was migrating 🙈? (Note the "Select" in the title bar.)
@valleyboy3613 · 9 months ago
Great video. Do the Ceph disks on each node need to be the same size?? I have 2 Dell servers and was going to run a mini/micro PC as the 3rd node, with 2TB in each of the Dells but 1TB in the Dell mini PC. Would that work?
@VirtualizationHowto · 9 months ago
@valleyboy3613 thank you for the comment. See the forum thread here: forum.proxmox.com/threads/adding-different-size-osd-running-out-of-disk-space-what-to-look-out-for.100701/ as it helps to understand some of the considerations. Also, create a new Forum post on the VHT Forums if you need more detailed help: www.virtualizationhowto.com/community
@andrevangijsel957 · 4 months ago
Is a 2-node setup a possibility, with an external VM for quorum monitoring like a 2-node vSAN?
@losfurychannel · 1 year ago
Which is better, VMware or Proxmox? I have 3 nodes with 4 SSDs each, and all three have 10Gb NICs. But for a high-performance, high-availability environment, which is the better option, especially when it comes to VM performance with Windows? In your experience, is Proxmox with Ceph better, or VMware with vSAN?
@kossboss · 28 days ago
Can I use that Ceph cluster for storing data outside of Proxmox, and not just for Proxmox VMs?
@Ogk10 · 7 months ago
Would this work across multiple locations? One environment at home and one at my parents' place, for HA and a universal setup/ease of use?
@fbifido2 · 1 year ago
1. What about hosts with more than one HDD/SSD? What should they do in the OSD part?
@frzen · 1 year ago
One OSD per spinning disk, and one NVMe/SSD can be used as the WAL/DB device for multiple OSDs, I think.
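That matches the usual layout; a hedged sketch, assuming two hypothetical spinners and one NVMe shared as the DB/WAL device:

    # One OSD per spinning disk, with the fast NVMe carrying each OSD's DB/WAL
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
    pveceph osd create /dev/sdc --db_dev /dev/nvme0n1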
@pg_usa · 7 months ago
One question: if I add a disk to the Ceph pool, is it formatted (wiped) or is the data kept? Thank you.
@GrishTech · 1 year ago
4:21 - I don't think that's an encrypted stream. That just looks like base64-encoded information.
@43n12y · 1 year ago
That's what it is.
@gbengadaramola8581 · 11 months ago
Thank you!! An insightful video. Can I configure a cluster and Ceph storage across 3 datacenters without a dedicated network link, only over the internet?
@VirtualizationHowto · 10 months ago
@gbengadaramola8581 Thank you for the comment, please sign up on the VHT forums and we can discuss it further: www.virtualizationhowto.com/community
@tariq4846 · 10 months ago
I have the same question
@jamhulk · 1 year ago
Awesome! How about Proxmox plus SAN storage?
@dwieztro6748 · 1 year ago
What happens if pmox1 (the node the cluster was created on) crashes and can't come back up? And what if I reinstall pmox1?
@channelbaimcodingshopcodin2232 · 1 month ago
How many nodes for a Ceph implementation? I have 2 nodes.
@kjakobsen · 1 year ago
That's funny. I have always heard you couldn't do live migrations on a nested hypervisor setup.
@lsimsdj · 1 month ago
I have this issue: "Ceph is not compatible with disks backed by a hardware RAID controller."
@bioduntube · 11 months ago
Will this process work for Proxmox Virtual Environment 6.2-4?
@fbifido2 · 1 year ago
2. Why didn't you show the total storage of the pool? Can we add more storage later? How do you set that up?
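For the capacity part, a short hedged sketch of the commands that show it, and how growth works (device name hypothetical):

    # Cluster-wide raw capacity plus per-pool usage and free space
    ceph df

    # Per-OSD fill level, grouped by host
    ceph osd df tree

    # Capacity grows by adding OSDs later; Ceph rebalances automatically
    pveceph osd create /dev/sdd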
@thekobaz · 3 days ago
Is anyone aware of a Proxmox/Ceph performance tuning guide? I have a 3-node Proxmox cluster with SSD storage that natively gets 500MB/sec when writing directly to the disks. I have a 10GbE network and high-end Xeon servers. When those disks are in a Proxmox/Ceph cluster and reading/writing to Ceph storage, I get about 30-50MB/sec. The speed of Ceph is awful. I also have an SSD NAS over 10GbE LAN, and the SSD NAS gets 450MB/sec in a RAID 5 setup. I'm considering dumping my entire Ceph cluster and just moving all the storage drives into a second NAS.
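One hedged starting point before dumping the cluster: benchmark the Ceph layer directly to see where the collapse happens (pool name hypothetical). A common culprit for exactly this pattern is consumer SSDs without power-loss protection, which slow to a crawl under Ceph's synchronous writes:

    # Raw Ceph pool throughput, bypassing the VM layer (leaves test objects behind)
    rados bench -p vm-pool 10 write --no-cleanup
    rados bench -p vm-pool 10 seq

    # What a single OSD's backing device can do as Ceph sees it
    ceph tell osd.0 bench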
@niravraychura · 1 year ago
Very good tutorial. But I have a question: what kind of bandwidth should you have to use Ceph? I mean, is gigabit enough, or should one use 10Gig?
@VirtualizationHowto · 1 year ago
@niravraychura, thank you for the comment! Hop over to my Discord server to discuss this further either in the home lab discussion section or home-lab-pics channel: discord.gg/Zb46NV6mB3
@nyanates · 1 year ago
If you're going to get serious about it, you should have a 10G link and a dedicated Ceph network. Get a HW setup with 2x NICs in it so one of them can be dedicated to the Ceph network.
@niravraychura · 1 year ago
@nyanates thank you for the answer 😇
@bioduntube · 10 months ago
Thanks for the video. I am trying to set up clustering and Ceph on nodes that have previously been configured. I have succeeded with clustering. However, Ceph was installed, but when I try to set up an OSD I get the error "Ceph is not compatible with disks backed by a hardware RAID controller". My ask is: what can I do to remedy this?
@VirtualizationHowto · 10 months ago
@bioduntube thank you for the comment! Hit me up on the forums with this topic and let's discuss it further www.virtualizationhowto.com/community
@stevencook8763 · 11 days ago
"As you know" - dude, your video is good, but this overused phrase is driving me nuts.
@visheshgupta9100 · 1 year ago
I am planning on deploying multiple Dell R730XDs in a homelab environment and was looking for a storage solution / NAS. Would you recommend using TrueNAS or Ceph? Can we create SMB / iSCSI shares on a Ceph cluster? How do you add users / permissions?
@visheshgupta9100 · 1 year ago
Also, in the present video you've added just 1 disk per node. How can we scale / expand our storage? Is it as simple as plugging in new drives and adding them as OSDs? Do we need to add the same number of drives in each node?
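On the scaling part, a hedged sketch (device name hypothetical): each new drive becomes its own OSD, and the pool grows and rebalances onto it automatically. Nodes don't strictly need identical drive counts, though keeping them balanced helps:

    # Add a new empty disk as an additional OSD
    pveceph osd create /dev/sdc

    # Watch the rebalance and new capacity
    ceph -s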
@youssefelankoud6497 · 7 months ago
Don't forget to give us your feedback if you used Ceph, and how it worked!
@fbifido2 · 1 year ago
3. Can we upgrade the size of a Ceph disk, e.g. from 50GB to 1TB, if the 50GB disk is about to get full?
3a. How does one know the free space on each host if the HDD is in a Ceph pool?
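A hedged sketch for both parts (OSD ID and device hypothetical): OSDs aren't resized in place - the usual path is to drain and destroy the small OSD, then create a new one on the bigger disk:

    # 3a: free space per OSD, grouped by host
    ceph osd df tree

    # 3: drain the 50GB OSD, then remove it once ceph -s is healthy again...
    ceph osd out 0
    systemctl stop ceph-osd@0.service
    pveceph osd destroy 0 --cleanup

    # ...and create the replacement OSD on the new 1TB disk
    pveceph osd create /dev/sdb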
@VirtualizationHowto · 1 year ago
@fbifido2, thanks for the comments and questions. Hop over to the Discord server and we can have more detailed discussions there: discord.gg/Zb46NV6mB3
@chenxuewen · 1 year ago
good
@hagner75 · 1 year ago
Love your video. However, I'm a bit disappointed in you: you made your nested Proxmox on a VMware ESXi setup. That should've been Proxmox :P Good job nonetheless.
@KingLouieX · 10 months ago
My storage added to node 1 works fine, but when I try to add the OSD on the other nodes it states no disks are available. Can the other 2 nodes share the USB drive connected to node 1?? Or do the other 2 nodes need their own unused storage for Ceph to work? Thanks.
@VirtualizationHowto · 10 months ago
@KingLouieX thank you for the comment! Sign up on the forums and create a new topic under "Proxmox help" and let's discuss this further: www.virtualizationhowto.com/community
@pivot3india · 1 year ago
What happens if one of the servers in the cluster fails? Does the virtual machine keep running on another server (fault tolerance), or is there failover?
@samstringo4724 · 1 year ago
There is failover (the VM restarts on another node) if you set up High Availability (HA) in the Proxmox UI.
@SataPataKiouta · 1 year ago
Is it a hard requirement to have 3 nodes in order to form a functional PVE cluster?
@VirtualizationHowto · 11 months ago
Thank you for the comment! Sign up on the forums and I can give more personalized help here: www.virtualizationhowto.com/community
@spritez77 · 3 months ago
Poor man's hyperconverged-ish... yes, let's do it.
@cheebadigga4092 · 1 year ago
So Ceph is "just" HA? Meaning all nodes in the cluster basically see the same filesystem?
@MikeDeVincentis · 1 year ago
Sort of, but not really. Ceph is distributed storage across the cluster using dedicated drives for OSDs, with a minimum of 3 nodes. You have to have a cluster before you build the storage, and you have to have drives installed in the nodes to build the Ceph cluster. Data is distributed across the nodes so it remains readily available if a node or drive/OSD fails. You then have the option of turning on HA for the VMs so they can always be available on top of the data.
@cheebadigga4092 · 1 year ago
@MikeDeVincentis Thanks for the explanation. However, I still don't really understand. Does "distributed" mean that each node has an exact replica of a given data set, like a mirror? Or is it more like a RAID 0?
@MikeDeVincentis · 1 year ago
@cheebadigga4092 More like RAID 10: 3 copies of the data blocks spread across the nodes. Think RAID, but spread across multiple systems, not just drives inside one system.
@cheebadigga4092 · 1 year ago
@MikeDeVincentis Ahhh, thanks!
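For reference on the "3 copies" point, a quick hedged sketch of where that replica count lives (pool name hypothetical; Proxmox defaults to size 3 with min_size 2):

    # Show / change the number of replicas kept for a pool
    ceph osd pool get vm-pool size
    ceph osd pool set vm-pool size 3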
@tanaseav · 18 days ago
Having that nested inside VMware really shows that VMware is just godlike - except the pricing... 😂😂😂
@JohnWillemse · 1 year ago
Please have a look at Wazuh - the open-source security platform with Security Information and Event Management (SIEM). Regards, John 🤗
@mankaner · 5 months ago
Awesome tutorial!