Nginx vs Apache Performance

39,550 views

Anton Putra

1 day ago

Comments
@AntonPutra
@AntonPutra Ай бұрын
🔴 To support my channel, I'd like to offer Mentorship/On-the-Job Support/Consulting (me@antonputra.com)
@unom8
@unom8 Ай бұрын
NATS vs Kafka, Kafka vs IBM MQ
@MDFireX5
@MDFireX5 Ай бұрын
This guy has issues ... where's FastAPI?
@MusKel
@MusKel Ай бұрын
NATS vs Kafka vs Redis streams, 😁
@davidmcmartins
@davidmcmartins Ай бұрын
Node.js vs Elixir (Phoenix framework)
@tombyrer1808
@tombyrer1808 Ай бұрын
Nginx vs nodejs/deno/bun? (only node would be fine; we know how the other 3 compare)
@NaourassDerouichi
@NaourassDerouichi Ай бұрын
Please just accept my gratitude for all the benchmarks you're doing and making public. Also, keep doing whatever tests you find relevant. Cheers!
@AntonPutra
@AntonPutra Ай бұрын
thank you! ❤️
@dimasshidqiparikesit1338
@dimasshidqiparikesit1338 Ай бұрын
nginx vs caddy vs traefik please! and maybe try pingora?
@dimasshidqiparikesit1338
@dimasshidqiparikesit1338 Ай бұрын
and IIRC, nginx drops requests when overloaded, while caddy tries to answer all requests by sacrificing response time
@ayehia0
@ayehia0 Ай бұрын
would be so cool
@AntonPutra
@AntonPutra Ай бұрын
will do!
@amig0842
@amig0842 Ай бұрын
@@dimasshidqiparikesit1338 why Pingora when there is River?
@lucsoft
@lucsoft Ай бұрын
Traefik and Caddy!
@almaefogo
@almaefogo Ай бұрын
1 vote for this, and compare them to nginx
@severgun
@severgun Ай бұрын
traefik is not a web server
@almaefogo
@almaefogo Ай бұрын
@@severgun that's true, the comparison I wanted is as a reverse proxy instead of a web server
@HVossi92
@HVossi92 Ай бұрын
He already did a performance benchmark between traefik and caddy
@almaefogo
@almaefogo Ай бұрын
@@HVossi92 yeah, but I wanted to see how it compares to nginx, since that's what I'm using right now. I have been thinking of switching to traefik because I have been having some strange issues that I can't really pinpoint, and was wondering if it could be something to do with nginx
@nisancoskun
@nisancoskun Ай бұрын
Adding the "multi_accept on" directive to the nginx config might help availability under high load.
@inithinx
@inithinx Ай бұрын
Is this not the default behaviour?
@MelroyvandenBerg
@MelroyvandenBerg Ай бұрын
@@inithinx Nope. You need to fine-tune not only your database, like I told Anton before, but also Nginx.
@inithinx
@inithinx Ай бұрын
@@MelroyvandenBerg makes sense.
@AntonPutra
@AntonPutra Ай бұрын
Thanks! I'm actually going over the NGINX configuration right now, making sure it's properly optimized for the next test!
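For reference, a minimal sketch of where the multi_accept directive mentioned in this thread lives in an nginx config; the surrounding values are illustrative defaults, not the exact settings used in the benchmark:

    # /etc/nginx/nginx.conf (sketch)
    worker_processes auto;           # one worker per CPU core
    events {
        worker_connections 4096;     # per-worker connection limit
        multi_accept on;             # accept all pending connections at once
                                     # instead of one per event-loop iteration
    }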
@inithinx
@inithinx Ай бұрын
Please include caddy next time! I wonder how golang works in this case! Also, next time try to do brotli compression as well. Cheers!
@TheChrisR12
@TheChrisR12 Ай бұрын
It would be interesting to see how caddy compares to Nginx and apache.
@chu121su12
@chu121su12 Ай бұрын
caddy, zstd compression, h3
@PragmaticIT
@PragmaticIT Ай бұрын
Caddy vs nginx please
@MariosMich
@MariosMich Ай бұрын
traefik vs caddy vs nginx: the ultimate benchmark
@hermes6910
@hermes6910 Ай бұрын
I agree, caddy would be very interesting.
@AndriiShumada
@AndriiShumada Ай бұрын
these Grafana charts are kinda ASMR for me :)
@AntonPutra
@AntonPutra Ай бұрын
😊
@ValerioBarbera
@ValerioBarbera Ай бұрын
I was searching for this kind of comparison for years.
@AntonPutra
@AntonPutra Ай бұрын
i'll do more 😊
@nnaaaaaa
@nnaaaaaa Ай бұрын
I've run nginx as a reverse proxy in the 30K r/s range for production workloads. The way nginx handles TLS is kind of naive and could be improved. Basically, what happens is that there is almost always an uneven distribution of work across worker processes, and it dogpiles with TLS overhead. Limiting the TLS ciphersuites used can help mitigate this, so that there is less variance in how long TLS handshakes take on aggregate. Also, multi_accept on is your friend.
@AntonPutra
@AntonPutra Ай бұрын
Thanks for the feedback! I'll see if I can monitor each individual worker/thread next time
@nnaaaaaa
@nnaaaaaa Ай бұрын
@@AntonPutra This mostly happens when dealing with production loads where you have a diverse set of TLS client implementations. Not everyone will choose the same ciphersuites. This is an example of things often omitted from synthetic benchmarks because people just don't think of it.
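A hedged sketch of what limiting the ciphersuites could look like on the nginx side; the suite list and certificate paths are placeholders for illustration, not what the benchmark used:

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/site.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/certs/site.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        # keep the TLS 1.2 suites to a small ECDHE/AES-GCM set so handshake
        # cost is more uniform across worker processes
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;  # session reuse skips full handshakes
    }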
@rajeshnarayanappa4639
@rajeshnarayanappa4639 Ай бұрын
Amazing tests. You got a subscriber for this bloody good content
@AntonPutra
@AntonPutra Ай бұрын
thank you! 😊
@kamurashev
@kamurashev Ай бұрын
Cool, but Apache (nginx probably too) has so many things to configure, e.g. prefork/worker MPM, compression rate, etc.
@AntonPutra
@AntonPutra Ай бұрын
true, i do my best to make these tests fair
@lukavranes2921
@lukavranes2921 Ай бұрын
Another amazing performance benchmark, and just the one I wanted to see rerun from your old videos. Many thanks and great job. I'm still curious about the results, though. I'm really looking forward to seeing someone explain why nginx crashed in the last test. Also, I think apache's compression algorithm is causing the high cpu usage in the first 2 tests, and it would perform more like nginx if compression was off (but that's unrealistic to find in the real world). Many thanks again, and looking forward to the next x vs y video; this second season is very informative
@AntonPutra
@AntonPutra Ай бұрын
thank you! i got a couple of PRs to improve apache and nginx. if they make a significant improvement, i'll update this benchmark
@zuzelstein
@zuzelstein Ай бұрын
Elixir/Gleam vs nodejs/bun/deno. Really interesting to see where Erlang VM shines.
@AntonPutra
@AntonPutra Ай бұрын
ok noted!
@MattiasMagnusson
@MattiasMagnusson Ай бұрын
This was really interesting. I used to run Apache a lot a few years ago, and like you, I switched for the huge performance benefit of Nginx (in most cases, apparently). Now, I don't do any load balancing using nginx or apache, but this was really interesting to me, as HA is always something I have been looking for but never really managed to do (lack of hardware and knowledge in my homelab). Earned the sub, well done!
@ReptoxX
@ReptoxX Ай бұрын
Just searched yesterday if you already uploaded a benchmark between nginx and caddy and you just now uploaded nginx vs apache. Great starting point :)
@AntonPutra
@AntonPutra Ай бұрын
I'll make nginx vs caddy vs traefik soon
@karsten600
@karsten600 Ай бұрын
Valuable benchmarks! Tip: There is this insane resonance on the audio of this video (and probably more of your videos), so when you pronounce words with s, I can feel a brief pressure in my ear from my brain trying to regulate the intensity. 😅
@AntonPutra
@AntonPutra Ай бұрын
thanks for the feedback, i'll try to reduce it
@guoard
@guoard Ай бұрын
Great. please do the same test for Nginx vs Haproxy too.
@AntonPutra
@AntonPutra Ай бұрын
thanks! will do!
@fumped
@fumped Ай бұрын
nginx as reverse proxy with static content caching and apache as dynamic web server is a killer combo!
@AntonPutra
@AntonPutra Ай бұрын
😊
@TweakMDS
@TweakMDS Ай бұрын
I wonder if apache and nginx use a different default compression level. The test results hint at this (even though both state 6 as the default in their docs), and diminishing returns at a higher compression level might be hurting apache in this test. There might be some improvement from skipping compression on files smaller than 1 KB (which I think is a best practice), as well as setting the same gzip compression level on both services.
@AntonPutra
@AntonPutra Ай бұрын
thank you for the feedback! i'll double check compression levels next time
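For anyone who wants to reproduce that, a sketch of pinning the gzip settings explicitly on the nginx side (Apache's mod_deflate has analogous directives); the numbers are illustrative, not the values used in the video:

    gzip on;
    gzip_comp_level 6;      # set the same level on both servers for a fair test
    gzip_min_length 1024;   # skip compressing responses smaller than ~1 KB
    gzip_types text/plain text/css application/json application/javascript;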
@davidmcken
@davidmcken Ай бұрын
Given my exposure to both apache and nginx, this lines up. If you want something to serve static content, nginx is king. I am concerned about what is happening around that 80% mark, though. The way I see them, nginx is like a scythe, able to cut through a metric boatload of requests, and apache is like a swiss army knife with a boatload of tools available to do everything that has ever come up in my travels (this is where I sense apache's slowness comes from: its versatility). I guess the car analogy is that nginx can do a 1/4 mile straight faster, but apache could do a rally better as it's more adaptable. I have a non-compliant endpoint that uses an api_key HTTP header, and it took effort just to get nginx to leave it alone; I then route that path to an apache container where it gets fixed.
@MattHudsonAtx
@MattHudsonAtx Ай бұрын
i have found i can make nginx do everything apache does, including serve php and all that application-layer stuff people do with apache. it's not especially advisable, though.
@davidmcken
@davidmcken Ай бұрын
@@MattHudsonAtx For the invalid header issue I mentioned, I haven't found a way to do it with nginx; at best I can get it to pass it through for something else to deal with, using the ignore_invalid_headers directive. Given that I was trying to stick to just Nginx Proxy Manager, which is handling everything else, I would love to know an alternative way.
@AntonPutra
@AntonPutra Ай бұрын
thanks for the feedback!
@Future_me_66525
@Future_me_66525 Ай бұрын
Love it with the cam, keep it up
@AntonPutra
@AntonPutra Ай бұрын
thank you!
@GameBully2K
@GameBully2K Ай бұрын
Amazing test. I did the same test with Grafana k6, but between Nginx and OpenLiteSpeed. Your test definitely explains why CyberPanel is the most performant of the open-source hosting software I tested: it uses a combination of apache and OpenLiteSpeed (I think they do a reverse proxy with apache and serve the website using OpenLiteSpeed).
@AntonPutra
@AntonPutra Ай бұрын
thank you!
@mohammadalaaelghamry8010
@mohammadalaaelghamry8010 Ай бұрын
Great video, as always. Thank you.
@AntonPutra
@AntonPutra Ай бұрын
thank you!
@antonztxone
@antonztxone Ай бұрын
Definitely there should be caddy and traefik in these tests! Thanks for these kinds of videos!
@AntonPutra
@AntonPutra Ай бұрын
I'll do those two as well soon
@AIExEy
@AIExEy Ай бұрын
nginx vs pingora please! great content, keep up the good work!
@AntonPutra
@AntonPutra Ай бұрын
thank you! will do
@Chat_De_Ratatoing
@Chat_De_Ratatoing Ай бұрын
those benchmarks are so much more useful and truthful than the "official" benchmarks from the devs
@AntonPutra
@AntonPutra Ай бұрын
thank you!
@marknefedov
@marknefedov Ай бұрын
We ran into an interesting issue with a Go application and Nginx when we migrated from Python to Golang: Nginx uses A LOT more TCP packets to communicate with Golang apps. At first it overloaded a load balancer cluster and then the application itself. We still haven't figured out what happened, because we were also in the process of migrating to Traefik, but it looks like Go and Nginx really want to split requests into a lot of packets, since most of the load came from TCP reassembly, and there were a lot more sockets waiting for ACK than usual.
@MelroyvandenBerg
@MelroyvandenBerg Ай бұрын
Did you try to set `multi_accept on`?
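Not a confirmed explanation for the issue above, but one tuning that sometimes cuts per-request TCP chatter between nginx and a backend is reusing upstream connections; a minimal sketch, with a made-up upstream name and address:

    upstream go_backend {
        server 127.0.0.1:8080;
        keepalive 64;                        # pool of idle connections to the backend
    }
    server {
        listen 80;
        location / {
            proxy_pass http://go_backend;
            proxy_http_version 1.1;          # required for upstream keepalive
            proxy_set_header Connection "";  # don't forward "Connection: close"
        }
    }

The empty Connection header is what keeps nginx from closing the upstream socket after every request, which would otherwise defeat the keepalive pool.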
@konstantinp440
@konstantinp440 Ай бұрын
Thank you very much for your hard work 😊
@AntonPutra
@AntonPutra Ай бұрын
❤️
@NDT080
@NDT080 Ай бұрын
Some sort of freak: - Add IIS to the test
@chralexNET
@chralexNET Ай бұрын
A lot of organizations (corporations mostly) use IIS though, so even if IIS is bad, it would still be worthwhile to show how bad it is.
@AntonPutra
@AntonPutra Ай бұрын
ok interesting, i'll try it out
@MelroyvandenBerg
@MelroyvandenBerg Ай бұрын
Again Anton, great test, but you forgot to fine-tune the servers again, just like in the database test. You shouldn't use the defaults.
@_Riux
@_Riux Ай бұрын
Why not? Don't you think most people will use the default settings? Imo this way of testing is probably the most representative of real world performance. Of course it's also interesting to see how far you can optimize, but this is definitely useful.
@willi1978
@willi1978 Ай бұрын
there should be sane defaults. many setups will run with defaults.
@willl0014
@willl0014 Ай бұрын
Agreed the defaults should be representative of the average
@sudifgish8853
@sudifgish8853 Ай бұрын
@@_Riux wtf no, in the "real world" people actually configure their servers, or it's just a hobby project where nothing of this matters.
@ooogabooga5111
@ooogabooga5111 Ай бұрын
@@_Riux People who have defaults have no traffic, if you want to talk about traffic and performance, tuning the server is a must.
@MAK_007
@MAK_007 Ай бұрын
love u anton
@AntonPutra
@AntonPutra Ай бұрын
❤️
@andreialionte3269
@andreialionte3269 Ай бұрын
do REST vs gRPC
@Matheus1233722
@Matheus1233722 Ай бұрын
GraphQL vs gRPC maybe
@AntonPutra
@AntonPutra Ай бұрын
will do soon as well as graphql!
@leonardogalindo2068
@leonardogalindo2068 13 күн бұрын
Please make a video on how to measure a microservice's resource usage, and how to benchmark a Python service, for example, to calculate cloud cost
@AntonPutra
@AntonPutra 13 күн бұрын
ok noted!
@alekc7292
@alekc7292 Ай бұрын
Very good, and the diagram for the test scenarios is beautiful and understandable
@AntonPutra
@AntonPutra Ай бұрын
thank you!
@mrali_18
@mrali_18 Ай бұрын
Please compare Nginx and HAProxy.
@krisavi
@krisavi Ай бұрын
That would need various reverse proxy workloads, ones that filter traffic and others that don't, since HAProxy doesn't do the web server part.
@AntonPutra
@AntonPutra Ай бұрын
ok will do!
@simon3121
@simon3121 Ай бұрын
You’re English is very good. Not sure whether your pronunciation of ‚throughput‘ is your signature move or not. I noticed it in multiple videos..
@AntonPutra
@AntonPutra Ай бұрын
😊
@rh4009
@rh4009 Ай бұрын
Oh, the ironey
@jerkmeo
@jerkmeo Ай бұрын
love your performance test....you've saved me a lot of time on product selection!
@kebien6020
@kebien6020 Ай бұрын
For the reverse proxy tests, can you test with the swiss army knife of reverse proxies: Envoy proxy? It supports TLS, mTLS, TCP proxying (with or without TLS), HTTP1, 2 and even HTTP3, multiple options for discovering target IPs, circuit breaking, rate-limiting, on the fly re-configuration, and even Lua scripting in case all of that flexibility isn't enough.
@AntonPutra
@AntonPutra Ай бұрын
i did it in the past maybe a year ago or so but will definitely refresh it with new use cases soon
@pengku175
@pengku175 Ай бұрын
really great video! can you do nginx vs tengine next? it claims to have better performance than nginx and I'm very curious about it, love your vids
@AntonPutra
@AntonPutra Ай бұрын
ok noted!
@SAsquirtle
@SAsquirtle Ай бұрын
I feel like the intro parts are kinda spoilery even if you're blurring out the graph legends
@AntonPutra
@AntonPutra Ай бұрын
😊
@HeyItsSahilSoni
@HeyItsSahilSoni Ай бұрын
When looking at the 85% cpu breakpoint, one thing I could think of was some form of a leak, maybe try to slow down the request increase rate, it might show different results.
@AntonPutra
@AntonPutra Ай бұрын
thanks, i'll try next time
@rwah
@rwah Ай бұрын
How do you configure Apache MPM? Fork mode or Event mode?
@AntonPutra
@AntonPutra Ай бұрын
i use event mode, here is the original config - github.com/antonputra/tutorials/blob/219/lessons/219/apache/httpd-mpm.conf#L5-L12 i also got a pr with improvements - github.com/antonputra/tutorials/blob/main/lessons/219/apache/httpd-mpm.conf#L10-L18
@neoko7220
@neoko7220 Ай бұрын
Please compare PHP on Swoole/Roadrunner/FrankenPHP Server versus Rust, Go, Node.js
@AntonPutra
@AntonPutra Ай бұрын
yes i'll do it soon
@kokamkarsahil
@kokamkarsahil Ай бұрын
Is it possible to benchmark pingora as well? It will be easier to use once River is available, so I'll wait for it in the future! Thanks a lot for the benchmark!
@AntonPutra
@AntonPutra Ай бұрын
yes just added pingora in my list
@milendenev4935
@milendenev4935 Ай бұрын
Ok, thank you very much for really providing these insights! I was in the middle of making my own reverse proxy, and this is some key data. I think I might have made an RP better than both of those. 😏
@AntonPutra
@AntonPutra Ай бұрын
my pleasure, they have a lot of built in functionality
@vasilekx8
@vasilekx8 Ай бұрын
Perhaps you need to try the previous version to fix problems with nginx, or build it from source too?
@AntonPutra
@AntonPutra Ай бұрын
i may try something in the future
@rh4009
@rh4009 Ай бұрын
I agree. Both the 85% CPU behaviour and the much higher backend app CPU usage feel like regressions.
@danielwidyanto5142
@danielwidyanto5142 Ай бұрын
I thought you were Indonesian; turns out you're not. But it's a great video (and I'm still sticking to Apache + PHP MPM because I've never had such huge traffic... except for the DDoS event).
@AntonPutra
@AntonPutra Ай бұрын
yeah, i'm not 😊 i heard apache php integration is very good
@IK-wp5eq
@IK-wp5eq Ай бұрын
11:35 Higher CPU for apps behind nginx indicates that they have more work to do, because nginx must be sending more data per second to the apps than Apache.
@Blink__5
@Blink__5 Ай бұрын
i know a lot of people already asked for this, but i also want to see Traefik and Caddy
@AntonPutra
@AntonPutra Ай бұрын
ok will do!
@chralexNET
@chralexNET Ай бұрын
I would like to see a test with the NGINX stream proxy module, which acts as just a reverse TCP or UDP proxy, not as an HTTP proxy. I, for example, use this for some game servers where I reverse proxy both TCP and UDP packets. I set up NGINX for this because it seemed like the easiest thing to do, but I don't know if it has the best performance.
@krisavi
@krisavi Ай бұрын
That could be one of the comparisons with HAProxy that is also TCP proxy capable.
@AntonPutra
@AntonPutra Ай бұрын
Interesting, I'll try to include it in one of the new benchmarks
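For context, a minimal sketch of that stream-module setup proxying a game server over both TCP and UDP; the ports and backend address are placeholders:

    # top level of nginx.conf, outside the http {} block
    stream {
        upstream game_backend {
            server 10.0.0.5:7777;   # placeholder backend
        }
        server {
            listen 7777;            # plain TCP pass-through
            proxy_pass game_backend;
        }
        server {
            listen 7777 udp;        # UDP pass-through
            proxy_pass game_backend;
        }
    }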
@geg4385
@geg4385 Ай бұрын
this made me wanna see tcp vs quic
@AntonPutra
@AntonPutra Ай бұрын
ok i may do it sometime in the future
@toniferic-tech8733
@toniferic-tech8733 Ай бұрын
Did you use RSA or ECDSA certificates? ECDSA should be used most of the time, as it is faster to transmit (fewer bytes in the TLS handshake). Also, nowadays, when used as a reverse proxy, the connection to the backend servers (i.e. downstream) should also be encrypted, not cleartext.
@AntonPutra
@AntonPutra Ай бұрын
I used RSA in both proxies, and regarding the second point, it's good to have but difficult to maintain, you constantly need to renew the certificates that the application uses.
@toniferic-tech8733
@toniferic-tech8733 Ай бұрын
I don’t agree. Internal certificates can be automated with internal CA and ACME, or external CA (e.g. Let’s Encrypt) or long-lasting certificates.
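For illustration, serving an ECDSA certificate (optionally alongside an RSA one so older clients still negotiate) looks roughly like this in nginx; the file paths are placeholders:

    server {
        listen 443 ssl;
        # nginx can load both; the one matching the negotiated ciphersuite is used
        ssl_certificate     /etc/nginx/certs/example-ecdsa.crt;
        ssl_certificate_key /etc/nginx/certs/example-ecdsa.key;
        ssl_certificate     /etc/nginx/certs/example-rsa.crt;
        ssl_certificate_key /etc/nginx/certs/example-rsa.key;
    }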
@kameikojirou
@kameikojirou Ай бұрын
How does Caddy compare to these two?
@AntonPutra
@AntonPutra Ай бұрын
i'll add it as well soon
@kariuki6644
@kariuki6644 Ай бұрын
I’m curious how Java spring webflux compares to spring boot
@AntonPutra
@AntonPutra Ай бұрын
i'll do java soon
@MadalinIgnisca
@MadalinIgnisca Ай бұрын
Why would you activate compression instead of serving pre-compressed files?
@AntonPutra
@AntonPutra Ай бұрын
I didn't get the question. You use compression to improve latency and overall performance. With a payload that is four times smaller, it takes less time to transmit over the network.
@RAHUL-vm8bn
@RAHUL-vm8bn Ай бұрын
Can you please start a series on Docker networking tips, or anything related to DevOps? It would be helpful to learn from your experience
@AntonPutra
@AntonPutra Ай бұрын
i'll try to include as many tips as i can in the benchmarks 😊
@jimhrelb2135
@jimhrelb2135 Ай бұрын
I feel like network usage in itself is related to request/s, in that if one webserver is able to satisfy more requests per time, it's prone to having more network usage within that same timeframe. Why not network usage per request?
@AntonPutra
@AntonPutra Ай бұрын
it's common to use RPS, the requests-per-second metric, to monitor HTTP applications
@90hijacked
@90hijacked Ай бұрын
took me a while to realize this isn't OSS nginx; I have not played around with the F5 one. does it come with a built-in metrics module, or what did you use to export those? great content as always!
@patryk4815
@patryk4815 Ай бұрын
this is OSS nginx
@rafaelpirolla
@rafaelpirolla Ай бұрын
OSS nginx doesn't come with a metrics module. latency can only be measured at the client; server cpu/mem/net is not the nginx metrics module's responsibility
@patryk4815
@patryk4815 Ай бұрын
@@rafaelpirolla don't know what you're talking about, k8s exposes cpu/mem/net stats for every pod
@90hijacked
@90hijacked Ай бұрын
@@rafaelpirolla makes sense that latency was obtained from clients, thank you!! worked around this once using otel module + tempo metrics generator, but that was rather convoluted / unsatisfactory approach
@AntonPutra
@AntonPutra Ай бұрын
yeah, it's open-source nginx. Also, the most accurate way to measure latency is from the client side, not using internal metrics. In this test i collect cpu/memory/network for web servers using node exporter since they are deployed on standalone VMs
@simonlindgren9747
@simonlindgren9747 Ай бұрын
Please test some more experimental servers too, like maybe rpxy/sozu compared to envoy.
@AntonPutra
@AntonPutra Ай бұрын
ok i'll take a look at them
@ziv132
@ziv132 Ай бұрын
Can you add Caddy
@AntonPutra
@AntonPutra Ай бұрын
will do soon!
@koko9089nnn
@koko9089nnn Ай бұрын
Can you do `envoy` please? it is widely used by Google GCP
@Cyanide0112
@Cyanide0112 Ай бұрын
Can you try others? Like Envoy? There are some other "obscure" ones .. I wonder if you can test those
@AntonPutra
@AntonPutra Ай бұрын
i tested envoy in the past but i think it's time to refresh
@ksomov
@ksomov Ай бұрын
please compare the performance of nginx and haproxy
@AntonPutra
@AntonPutra Ай бұрын
ok noted!
@konga8165
@konga8165 Ай бұрын
Caddy, traefik, and envoy proxy!
@AntonPutra
@AntonPutra Ай бұрын
yes will do soon!
@HowToLinux
@HowToLinux Ай бұрын
Please do Nginx vs HaProxy
@AntonPutra
@AntonPutra Ай бұрын
ok will do!
@HowToLinux
@HowToLinux Ай бұрын
@@AntonPutra Thanks!
@muhammadalfian9057
@muhammadalfian9057 5 күн бұрын
Next openlitespeed vs nginx vs apache please
@idzyubin720
@idzyubin720 Ай бұрын
Compare go-grpc and rust-tonic please. Tonic contributors have fixed many issues and increased performance
@AntonPutra
@AntonPutra Ай бұрын
ok i'll take a look!
@MadalinIgnisca
@MadalinIgnisca Ай бұрын
I always had stability with Apache, but with Nginx I occasionally had warnings in my alerts because the service was restarting
@AntonPutra
@AntonPutra Ай бұрын
It's very common in production to quickly fill up all available disk space with access logs; this is issue number one.
@markg5891
@markg5891 Ай бұрын
I've noticed this weird behavior of nginx as a reverse proxy to a backend server too. Even if that backend server itself is just serving static data, the mere act of being a reverse proxy seems to cause a rather big performance hit for nginx. Weird.
@AntonPutra
@AntonPutra Ай бұрын
thanks for the feedback
@TadeasF
@TadeasF Ай бұрын
I'd be very interested nginx VS caddy
@AntonPutra
@AntonPutra Ай бұрын
will do soon!
@bhsecurity
@bhsecurity Ай бұрын
I always wanted to see this.
@AntonPutra
@AntonPutra Ай бұрын
my pleasure!
@qatanah
@qatanah Ай бұрын
hi, what tools are you using for monitoring and benchmark graphs?
@roger-sei
@roger-sei Ай бұрын
Grafana
@KTLO-m8p
@KTLO-m8p Ай бұрын
Thanks!
@AntonPutra
@AntonPutra Ай бұрын
prometheus + grafana
@sPanKyZzZ1
@sPanKyZzZ1 Ай бұрын
One idea for a future test: job schedulers
@AntonPutra
@AntonPutra Ай бұрын
like airflow?
@amig0842
@amig0842 Ай бұрын
Please compare River reverse proxy with Nginx
@AntonPutra
@AntonPutra Ай бұрын
ok interesting
@Kanibalvv
@Kanibalvv Ай бұрын
you need to check kernel params... the tcp_mem default is always too low, and that could explain the nginx problem.
@AntonPutra
@AntonPutra Ай бұрын
thanks will check
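For anyone who wants to check this on their own host, a sketch of where that kernel parameter lives; the numbers are examples only (they are in pages and should be sized to the machine's RAM, not copied):

    # /etc/sysctl.d/99-tcp-mem.conf (illustrative values)
    net.ipv4.tcp_mem = 262144 393216 524288   # min / pressure / max, in pages
    # apply with: sysctl --system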
@dasten123
@dasten123 Ай бұрын
very interesting
@AntonPutra
@AntonPutra Ай бұрын
thanks!
@gpasdcompte
@gpasdcompte Ай бұрын
A 4th test with apache's "AllowOverride None" would be nice, I've heard it improves performance, but never tried :/
@AntonPutra
@AntonPutra Ай бұрын
ok i'll take a look!
@fateslayer47
@fateslayer47 Ай бұрын
I'm looking at benchmarks and feeling good about choosing nginx even though my website gets 1 user per month.
@AntonPutra
@AntonPutra Ай бұрын
haha
@KTLO-m8p
@KTLO-m8p Ай бұрын
How are you exporting the results into the graphing software? Can you explain what software that is, so I can recreate this setup?
@AntonPutra
@AntonPutra Ай бұрын
sure, I use Prometheus to scrape all the metrics and Grafana for the UI. it's all open source, and I have a bunch of tutorials on my channel
@KTLO-m8p
@KTLO-m8p Ай бұрын
@@AntonPutra thanks!
@simonecominato5561
@simonecominato5561 Ай бұрын
In the last test, are the Rust applications running on the same instance as the server? It seems like the Rust application in the Nginx case is stealing processor time from the server.
@Pero12121
@Pero12121 Ай бұрын
At 1:26 he explained where everything is hosted. Applications have separate machines
@simonecominato5561
@simonecominato5561 Ай бұрын
@@Pero12121 I missed it, thanks.
@AntonPutra
@AntonPutra Ай бұрын
yeah, in this test web servers are deployed on dedicated vms
@mkvalor
@mkvalor Ай бұрын
Something isn't quite right here. In all 3 tests, you show the requests per second synchronized until a major failure happens. The time log at the bottom seems to indicate these requests per second metrics are being gathered over the same time period. Yet how can this be possible when one web server has a significantly higher latency, measured at the client, than the other? Once the latency difference hits 1ms, that means we should notice at least 1,000 fewer requests per second for each second that passes after that moment -- accumulating as time goes by. And, of course, this difference should accumulate even more quickly the higher the latency goes. It looks to me like you (accidentally?) decided to normalize the graphs of each contest so the RPS would match until one of the servers failed. Or if not, what am I missing here?
@pable2
@pable2 Ай бұрын
Like the others said, a comparison with Caddy would be amazing
@AntonPutra
@AntonPutra Ай бұрын
yes soon
@MrDocomox
@MrDocomox Ай бұрын
check istio gateway vs nginx.
@AntonPutra
@AntonPutra Ай бұрын
will do! thanks
@nexovec
@nexovec Ай бұрын
nginx vs Caddy please!
@AntonPutra
@AntonPutra Ай бұрын
will do!
@architector2p0
@architector2p0 Ай бұрын
Hi, could you create a video explaining step by step how to prepare such a testing system from scratch?
@AntonPutra
@AntonPutra Ай бұрын
sure, but i already have some tutorials on my channel that cover prometheus and grafana
@hatersbudiman7058
@hatersbudiman7058 Ай бұрын
Next Caddy and open litespeed
@AntonPutra
@AntonPutra Ай бұрын
noted!
@VijayGanesh-s5q
@VijayGanesh-s5q Ай бұрын
Will you make a comparison between the best frameworks of Zig (zzz), Rust (axum), and Go (Fiber)? I have been waiting for this for a long time.
@AntonPutra
@AntonPutra Ай бұрын
yes will do
@DominickPeluso
@DominickPeluso Ай бұрын
Redbean and caddy please
@AntonPutra
@AntonPutra Ай бұрын
ok added to my list
@malcomgreen4747
@malcomgreen4747 Ай бұрын
The test starts at 5:21
@AntonPutra
@AntonPutra Ай бұрын
i have timestamps in each video
@malcomgreen4747
@malcomgreen4747 Ай бұрын
​@@AntonPutra nice thank you
@damianszczukowski1912
@damianszczukowski1912 Ай бұрын
compare apache/nginx to traefik and caddy
@AntonPutra
@AntonPutra Ай бұрын
yes will do soon
@Gooblehusain
@Gooblehusain Ай бұрын
Anton, your name is very Indonesian. More specifically, Chinese Indonesian. Do you have any association with Indonesian culture?
@severgun
@severgun Ай бұрын
This is a Slavic name lol.
@steeltormentors
@steeltormentors Ай бұрын
bro, don't embarrass yourself... judging from the way he talks, this Anton Putra sounds really Javanese, right? lol
@AntonPutra
@AntonPutra Ай бұрын
no, but i was frequently told about my name when i was in bali
@AntonPutra
@AntonPutra Ай бұрын
@severgun he was referring to my last name actually
@erickvillatoro5683
@erickvillatoro5683 Ай бұрын
Please do Traefik vs nginx ingess controller!!!
@AntonPutra
@AntonPutra Ай бұрын
will do!
@ghostvar
@ghostvar Ай бұрын
We usually use these two: nginx for SSL and reverse proxy, and apache as the PHP handler :/
@AntonPutra
@AntonPutra Ай бұрын
yeah apache has nice php integration
@GuedelhaGaming
@GuedelhaGaming Ай бұрын
Nginx vs YARP
@AntonPutra
@AntonPutra Ай бұрын
ok noted!
@MrAmG17
@MrAmG17 Ай бұрын
Cowboy , Erlang and other high performers for future videos
@AntonPutra
@AntonPutra Ай бұрын
will do soon, but first ruby on rails 😊
@nikitalafinskiy8089
@nikitalafinskiy8089 Ай бұрын
Would be interesting to see Kotlin (natively compiled) using Spring vs Go
@AntonPutra
@AntonPutra Ай бұрын
ok noted!
@nomadvagabond1263
@nomadvagabond1263 Ай бұрын
You blur the text, but the colors give them away 🥲 Choose colors that aren't related to the technology.
@AntonPutra
@AntonPutra Ай бұрын
😊
@MelroyvandenBerg
@MelroyvandenBerg Ай бұрын
P.S. The latest Nginx is actually version 1.27.2, right? Maybe it's the "latest" version on your system, but it's not the latest version.
@AntonPutra
@AntonPutra Ай бұрын
i used latest "stable" version not from the mainline
@VirendraBG
@VirendraBG Ай бұрын
Try this test with Dynamic HTML Content fetched from SQL Databases.
@MrCustomabstract
@MrCustomabstract Ай бұрын
FastAPI would be cool
@AntonPutra
@AntonPutra Ай бұрын
yes soon