Hi, it seems that the main Docker Compose code was borrowed from the elkninja repository, as described in an Elasticsearch blog post. However, there is a significant drawback to this implementation: the generated certificates lack passwords, and no keystores are configured. The author of the blog post mentioned that this setup is suitable for Proof of Concept (POC) purposes, but not for production environments.
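For anyone wanting to close that gap, a rough sketch of the idea (paths and passwords below are placeholders, not what the blog post or the video uses): generate password-protected certificates with elasticsearch-certutil and register the passwords in the Elasticsearch keystore instead of leaving them in elasticsearch.yml.
```sh
# Generate a password-protected CA and node certificate (paths and passwords are illustrative):
bin/elasticsearch-certutil ca --out config/certs/elastic-ca.p12 --pass "CA_PASSWORD"
bin/elasticsearch-certutil cert --ca config/certs/elastic-ca.p12 --ca-pass "CA_PASSWORD" \
  --out config/certs/es01.p12 --pass "NODE_PASSWORD"

# Store the PKCS#12 passwords as secure settings instead of plain-text config:
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
```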
@mathas604 1 year ago
Thank you for the video, really appreciate it. Maybe you could add more hands-on content on ingesting with Filebeat (including Logstash filtering) and using Metricbeat to monitor firewall metrics in Kibana.
@alkhateeeb 5 months ago
Thank you, Ali, for this video; a useful one.
@alexonepiece1562 1 month ago
Thanks, brother! It helped :)
@agilebarsfromtimebarsltd.4918 1 year ago
Totally awesome, thank you very much.
@edinsonguzman179 1 year ago
I run docker-compose up -d and the container elk-es01-1 always fails to start. How do I troubleshoot this problem?
@edinsonguzman179 1 year ago
I run this locally on a Mac.
@luquinhas-mg 1 year ago
Me too, but I run RHEL 9.
@FRITTY12348546 1 year ago
Same issue
@raypi2297 11 months ago
It works for me. What logs are you seeing?
@HAMZABOURGUIGA 10 months ago
Same here; please share more information about this issue...
@jonmarkortiz 9 months ago
Thanks so much for this very simple and well-narrated tutorial. I am curious what your approach would be. I currently have a docker-compose file with the following services: frontend, backend, mongo, and redis. My frontend and backend reference builds that point to Dockerfiles in the root of each directory. Mongo and redis instead reference images along with additional meta info. My question is this: wanting to keep my docker-compose file readable and not too enormous, is there a strategy for introducing the services for Elasticsearch, Kibana, and some number of ES nodes (es01, es02, etc.)? Regarding the docker-compose implementation which Elastic gives us, is it possible to create an elasticsearch directory with a Dockerfile that abstracts out more of the docker-compose implementation? Are there examples out there you know of, and maybe some key pages in the Docker docs to reference? Thanks again for all your help. Btw, I am happy to send you a link to my existing repo containing my yml if it helps you see more clearly. Thanks again.
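One common way to keep the main file readable is to put the Elastic services in their own Compose file and merge the files at the command line; the file name, image tag and services below are placeholders, not a recommendation from the video:
```yaml
# docker-compose.elastic.yml -- a hypothetical overlay file kept next to the main docker-compose.yml
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    depends_on:
      - es01
```
Running `docker compose -f docker-compose.yml -f docker-compose.elastic.yml up -d` merges both files, so the frontend/backend/mongo/redis definitions stay untouched; recent Compose releases also support a top-level `include:` key for the same purpose. A custom Dockerfile is usually only needed when you have to bake plugins or extra config into the Elasticsearch image itself.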
@Karan-gk7jw 9 months ago
Hey sir, regarding the volume you are talking about around 3:30, can we use Kafka as the volume?
@ainmohamad4582 1 month ago
Hi, I'm still not clear on the syslog parsing part. How can Logstash parse the data if you don't specify the path? And can I parse a log file?
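For context, whether Logstash needs a path depends on the input plugin. A hypothetical logstash.conf (port, path and credentials are placeholders, not the exact pipeline from the video) could look like this:
```
input {
  # Network syslog input: no file path needed, Logstash just listens on a port.
  syslog {
    port => 5514
  }
  # File input: parse an existing log file from a path mounted into the container.
  file {
    path => "/usr/share/logstash/logs/app.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["https://es01:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
    # TLS/CA options omitted here; a secured 8.x cluster will also need the CA certificate configured.
  }
}
```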
@TherealLeroyJenkins 10 months ago
The error message you're seeing is related to Elasticsearch bootstrap checks that are performed when Elasticsearch detects that it is running in production mode. Specifically, the error:

```
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```

indicates that the `vm.max_map_count` setting on your host is set too low for Elasticsearch to operate reliably in a production environment. This setting defines the maximum number of memory map areas a process may have. Elasticsearch recommends setting this to at least `262144`.

### Fixing the `vm.max_map_count` Issue

To resolve this issue, you need to increase the `vm.max_map_count` setting on your host system. This setting is applied at the OS level, not within Docker containers, so you must set it on the host that runs your Docker daemon.

#### For Linux Hosts

1. **Temporarily (does not survive reboot):** You can temporarily set `vm.max_map_count` to the recommended value by running the following command on your host:

   ```sh
   sudo sysctl -w vm.max_map_count=262144
   ```

2. **Permanently (survives reboot):** To make the change permanent, so it persists across reboots, add the following line to `/etc/sysctl.conf`:

   ```
   vm.max_map_count=262144
   ```

   Then, apply the changes with:

   ```sh
   sudo sysctl -p
   ```

#### Verifying the Change

To verify that the setting has been applied, run:

```sh
sysctl vm.max_map_count
```

You should see `vm.max_map_count = 262144` as the output.

### After Adjusting `vm.max_map_count`

Once you've adjusted the `vm.max_map_count` on your host, you should be able to start your Elasticsearch service without encountering the previous bootstrap check failure. If you're using Docker Compose, make sure to restart your services for the changes to take effect:

```sh
docker-compose down
docker-compose up -d
```

This took me a couple of hours to figure out, but it had me stumped as well. Hope it helps. I also increased my total RAM on the VM to 16 GB, and she's pegging around 85% usage; I will most likely end up increasing to 20 GB, but I am also looking at decreasing the number of nodes. I only just started. Thanks to OP; I was stuck on this ELK stack for a while.
@schoonees 5 months ago
Hi Ali, fantastic video - works like a charm. Thanks for the effort. I have one or two questions regarding adding additional containers to the docker-compose file. If I add additional containers, I get the following error: "validating /home/test/elk/docker-compose.yml: services.logstash Additional property filebeat is not allowed". Can Filebeat just be added as a separate container instead of adding it to the docker-compose file?
@AliYounesGo4IT 5 months ago
You can add it as a separate container, but I think the error is because Filebeat has to be on the same level as Logstash under the "services" key in the docker-compose.yml file.
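A minimal sketch of what that layout looks like (the image tag and volume path are illustrative, not taken from the video):
```yaml
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    # ...
  filebeat:            # a sibling of logstash under "services", not nested inside it
    image: docker.elastic.co/beats/filebeat:8.11.0
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
```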
@dawidlelito 4 months ago
Any help on how to add Metricbeat as a Docker container to the stack for cluster monitoring?
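One way this is commonly done is to add a Metricbeat service alongside the others; a rough sketch, where the config file name, user and mounts are assumptions rather than anything from the video:
```yaml
  # Added under the existing "services:" key of the same docker-compose.yml
  metricbeat:
    image: docker.elastic.co/beats/metricbeat:8.11.0
    user: root                                    # lets Metricbeat read the Docker socket for container metrics
    volumes:
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - es01
```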
@avalagum7957 1 year ago
The SSL thingy makes everything look complicated. Is there any setup with only one node for Elasticsearch, without SSL?
@AliYounesGo4IT 1 year ago
With Elasticsearch 8.x and on, security is enabled by default, so you have to explicitly disable it. I never tried it, but you can try creating a docker-compose.yml file with only two services (es and kibana) and make sure to set xpack.security.enabled: false.
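As the reply says, this is untested; a minimal sketch of such a two-service compose file (image tags and ports are placeholders, and disabling security should stay limited to local experiments):
```yaml
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    environment:
      - ELASTICSEARCH_HOSTS=http://es01:9200
    ports:
      - "5601:5601"
    depends_on:
      - es01
```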
@meysamzoghi 1 year ago
Hi, thanks for your video. Please make a video about rolling upgrades of Elasticsearch cluster nodes. I want to do a rolling upgrade, but when I upgrade node 1 I get errors: 1. "master node disconnected, restarting discovery"; 2. "this node is locked into cluster UUID". Help me if you can.
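For reference, the documented rolling-upgrade flow is roughly the following for each node (host and credentials are placeholders):
```sh
# 1. Disable replica shard allocation before taking the node down:
curl -k -u elastic:PASSWORD -X PUT "https://localhost:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'

# 2. Stop the node, upgrade it (e.g. bump its image tag), start it again,
#    wait for it to rejoin the cluster, then re-enable allocation:
curl -k -u elastic:PASSWORD -X PUT "https://localhost:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"persistent": {"cluster.routing.allocation.enable": null}}'

# 3. Wait for green cluster health before moving on to the next node:
curl -k -u elastic:PASSWORD "https://localhost:9200/_cluster/health?pretty"
```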
@cpptip9150 1 year ago
Great tutorial
@naveenbala4140 1 year ago
Where is the encryption key?
@geusilva6632 1 year ago
You don't need to set this parameter. It will give you a warning but you can ignore it.
@priyashukla7516 4 months ago
How can I pull data from a MySQL DB?
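One common approach is Logstash's JDBC input plugin: mount the MySQL JDBC driver into the Logstash container and add a pipeline along these lines (driver path, connection string, query and credentials are all placeholders):
```
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/drivers/mysql-connector-j.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://mysql-host:3306/mydb"
    jdbc_user => "logstash"
    jdbc_password => "${MYSQL_PASSWORD}"
    schedule => "*/5 * * * *"
    statement => "SELECT * FROM orders WHERE updated_at > :sql_last_value"
  }
}
output {
  elasticsearch {
    hosts => ["https://es01:9200"]
    index => "mysql-orders"
    # add user/password and CA settings here for a secured cluster
  }
}
```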
@therus000 1 year ago
Thanks for the video, nice work. Could you please share that docker-compose file and the config file for Logstash?
@AliYounesGo4IT 1 year ago
I will try to upload it soon
@patilavinash7406 8 months ago
Hi, I want to install ELK on a test/production server. Can you please help me with that?
@Real.Devops 3 months ago
These config files are discontinued; don't use this video to install ELK.
@zhajikun5309 6 months ago
I run your docker-compose file but get this error in Kibana: FATAL Error: [config validation of [xpack.encryptedSavedObjects].encryptionKey]: value has length [16] but it must have a minimum length of [32].
@wbarbosa0 6 months ago
ENCRYPTION_KEY in .env should have at least 32 characters; the default value has 16...
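A quick way to produce a compliant value (the variable name follows the sample .env file; the key shown is only an example):
```sh
# 16 random bytes rendered as hex = exactly 32 characters, which satisfies Kibana's minimum:
openssl rand -hex 16
# Then put the result into .env, e.g.:
# ENCRYPTION_KEY=3f6c1d9a8b2e4f70c5a1d8e2b9f04c77
```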
@Ethan777100 1 year ago
What terminals and packages do I need in VS Code?
@AliYounesGo4IT 1 year ago
I installed the Remote - SSH extension to connect to the remote Linux host. Other than that, I have the YAML and JSON extensions installed.
@Ethan777100 1 year ago
Oh. 1. So does this mean I need to have Linux on my computer? I only have Windows 10. 2. Must I have the Remote-SSH extension? My situation is that I need to host my data on the same machine as localhost, but I want to base it on your video. @AliYounesGo4IT
@Ethan777100 1 year ago
I'm actually trying to replicate your setup on my computer, but the difference is I'm using localhost. I'm currently running into issues regarding the Docker socket. There is a bad gateway connection that causes the Kibana container to hang and exit because it fails to establish a connection with the Elasticsearch container. On my Elasticsearch container, when I do a curl request to localhost:9200, I don't get a response either. What is going wrong in my setup? Currently on ELK version 8.11.0 across all components.
@ashutoshtiwari4398 1 year ago
Did you get any solution?
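A few checks that usually narrow this kind of failure down (assuming the stack's default port mappings and the generated elastic superuser password):
```sh
docker compose ps                        # is es01 healthy, or stuck restarting?
docker compose logs es01 | tail -n 50    # bootstrap-check and memory errors show up here
# Security is on by default in 8.x, so plain http://localhost:9200 won't answer;
# use https, skip certificate verification for a quick local test, and pass credentials:
curl -k -u elastic:"$ELASTIC_PASSWORD" https://localhost:9200
```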
@dimakovalev-f6p 1 year ago
Oh come on, where are the files with the code???
@AliYounesGo4IT 1 year ago
The files are on the official documentation site. I'm just explaining how to use them.
@정환문-g3e 9 months ago
Hello, I enjoyed watching the YouTube video. I added the settings and files as shown in the video and ran it, but the same error as Hardy occurred:
✔ Network elasticity created
✔ Container elkdocker-setup-1 Healthy
✘ Container elkdocker-es01-1 Error
✔ Container elkdocker-kibana-1 Created
✔ Container elkdocker-es02-1 Created
✔ Container elkdocker-es03-1 Created
✔ Container elkdocker-logstash-1 Created
dependency failed to start: container elkdocker-es01-1 exited (78)
I wonder if there is any workaround. I'm also curious what URL to enter for Kibana to appear in the browser. Take care.
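Exit code 78 from es01 usually points at a failed bootstrap check (for example the vm.max_map_count limit discussed in an earlier comment). A couple of things worth checking, assuming the container names shown above and the default port mapping:
```sh
docker logs elkdocker-es01-1     # the bootstrap-check message will say which limit failed
sysctl vm.max_map_count          # should report at least 262144
# Once the stack is healthy, Kibana is normally reachable from the host at:
# http://localhost:5601
```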
@arggomes 10 months ago
Hi Ali, nice explanation, but I am receiving the error below.
Creating agomes_setup_1 ... done
Creating agomes_es01_1 ... done
Creating agomes_kibana_1 ... done
Creating agomes_es02_1 ... done
Creating agomes_es03_1 ... done
Creating agomes_logstash_1 ... error
ERROR: for agomes_logstash_1 Cannot start service logstash: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/agomes/logstash.conf" to rootfs at "/usr/share/logstash/pipeline/logstash.conf": mount /home/agomes/logstash.conf:/usr/share/logstash/pipeline/logstash.conf (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: for logstash Cannot start service logstash: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/agomes/logstash.conf" to rootfs at "/usr/share/logstash/pipeline/logstash.conf": mount /home/agomes/logstash.conf:/usr/share/logstash/pipeline/logstash.conf (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
@mr0ffka 8 months ago
Did you find a solution?
@moizhassan98 3 months ago
Make sure that the logstash.conf file is located in the same folder you are running the docker compose up command from.
@leblobb 3 months ago
I had the same problem and noticed I had a typo in the "logstash.conf" filename. Make sure the file is named correctly and, as @moizhassan98 says, in the same directory as your docker-compose.yml file.
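The usual cause of that "not a directory" error: if ./logstash.conf does not exist (or the name is misspelled) when Compose starts, Docker creates a directory with that name on the host and then cannot mount it onto a file inside the container. A quick check (paths are illustrative):
```sh
ls -l ./logstash.conf        # must be a regular file in the directory you run compose from
# If a directory was created by mistake, remove it and recreate the file:
# rm -r ./logstash.conf && touch logstash.conf
# The corresponding bind mount in docker-compose.yml would look like:
#   - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
```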