🔴 To support my channel, I'd like to offer Mentorship/On-the-Job Support/Consulting (me@antonputra.com)
@unom8Ай бұрын
NATS vs Kafka; Kafka vs IBM MQ
@MDFireX5Ай бұрын
this guy has problems ... where's FastAPI?
@MusKelАй бұрын
NATS vs Kafka vs Redis streams, 😁
@davidmcmartinsАй бұрын
Node.js vs Elixir (Phoenix framework)
@tombyrer1808Ай бұрын
Nginx vs nodejs/deno/bun? (only node would be fine; we know how the other 3 compare)
@NaourassDerouichiАй бұрын
Please just accept my gratitude for all the benchmarks you're doing and making public. Also, keep doing whatever tests you find relevant. Cheers!
@AntonPutraАй бұрын
thank you! ❤️
@dimasshidqiparikesit1338Ай бұрын
nginx vs caddy vs traefik please! and maybe try pingora?
@dimasshidqiparikesit1338Ай бұрын
and IIRC, nginx drop requests when overloaded, while caddy tries to answer all requests by sacrificing response time
@ayehia0Ай бұрын
would be so cool
@AntonPutraАй бұрын
will do!
@amig0842Ай бұрын
@@dimasshidqiparikesit1338 why Pingora when there is River?
@lucsoftАй бұрын
Traefik and Caddy!
@almaefogoАй бұрын
+1 for this, and compare them to nginx
@severgunАй бұрын
traefik is not a web server
@almaefogoАй бұрын
@@severgun that's true, the comparison I wanted is as a reverse proxy instead of a web server
@HVossi92Ай бұрын
He already did a performance benchmark between traefik and caddy
@almaefogoАй бұрын
@@HVossi92 yeah, but I wanted to see how it compares to nginx, since that's what I'm using right now. I have been thinking of switching to traefik because I have been having some strange issues that I can't really pinpoint, and I was wondering if it could be something to do with nginx
@nisancoskunАй бұрын
Adding the "multi_accept on" directive to the nginx config might help availability under high load.
@inithinxАй бұрын
Is this not the default behaviour?
@MelroyvandenBergАй бұрын
@@inithinx Nope. You need to fine-tune not only your database, like I told Anton before, but Nginx as well.
@inithinxАй бұрын
@@MelroyvandenBerg makes sense.
@AntonPutraАй бұрын
Thanks! I'm actually going over the NGINX configuration right now, making sure it's properly optimized for the next test!
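For readers following the `multi_accept` thread, a minimal sketch of the nginx `events` block being discussed. The values are illustrative assumptions, not the benchmark's actual config:

```nginx
# nginx.conf (illustrative values only)
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per-worker connection cap (the default is much lower)
    multi_accept on;            # accept all pending connections at once, not one per event
    use epoll;                  # explicit, though nginx picks epoll on Linux anyway
}
```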
@inithinxАй бұрын
Please include caddy next time! I wonder how golang works in this case! Also, next time try to do brotli compression as well. Cheers!
@TheChrisR12Ай бұрын
It would be interesting to see how caddy compares to Nginx and apache.
@chu121su12Ай бұрын
caddy, zstd compression, h3
@PragmaticITАй бұрын
Caddy vs nginx please
@MariosMichАй бұрын
traefik vs caddy vs nginx: the ultimate benchmark
@hermes6910Ай бұрын
I agree, caddy would be very interesting.
@AndriiShumadaАй бұрын
these Grafana charts are kinda ASMR for me :)
@AntonPutraАй бұрын
😊
@ValerioBarberaАй бұрын
I was searching for this kind of comparison for years.
@AntonPutraАй бұрын
i'll do more 😊
@nnaaaaaaАй бұрын
I've run nginx as a reverse proxy in the 30K r/s range for production workloads, and the way nginx handles TLS is kind of naive and could be improved. Basically, there is almost always an uneven distribution of work across worker processes, and it dogpiles with TLS overhead. Limiting the TLS ciphersuites can help mitigate this, so there is less variance in how long TLS handshakes take in aggregate. Also, multi_accept on is your friend.
@AntonPutraАй бұрын
Thanks for the feedback! I'll see if I can monitor each individual worker/thread next time
@nnaaaaaaАй бұрын
@@AntonPutra this mostly happens when dealing with production loads where you have a diverse set of TLS client implementations; not everyone will choose the same ciphersuites. This is an example of something often omitted from synthetic benchmarks because people just don't think of it.
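A hedged sketch of the ciphersuite pinning described above; the specific suites are illustrative assumptions, not a recommendation from this thread:

```nginx
# Pin a small, uniform set of TLS 1.2 suites so handshake cost varies less
# across clients (TLS 1.3 suites are negotiated separately by nginx)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
```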
@rajeshnarayanappa4639Ай бұрын
Amazing tests. You got a subscriber for this bloody good content
@AntonPutraАй бұрын
thank you! 😊
@kamurashevАй бұрын
Cool, but Apache (and probably nginx too) has so many things to configure, e.g. prefork/worker MPM, compression rate, etc.
@AntonPutraАй бұрын
true, i do my best to make these test fair
@lukavranes2921Ай бұрын
Another amazing performance benchmark, and just the one I wanted to see rerun from your old videos. Many thanks and great job.
I'm still curious about the results, though. I'm really looking forward to seeing someone explain why nginx crashed in the last test. Also, I think apache's compression algorithm is causing the high CPU usage in the first 2 tests, and it would perform more like nginx with compression off (but that's unrealistic to find in the real world).
Many thanks again, and looking forward to the next x vs y video; this second season is very informative.
@AntonPutraАй бұрын
thank you! i got a couple of PRs to improve apache and nginx. if they make a significant improvement, i'll update this benchmark
@zuzelsteinАй бұрын
Elixir/Gleam vs nodejs/bun/deno. Really interesting to see where Erlang VM shines.
@AntonPutraАй бұрын
ok noted!
@MattiasMagnussonАй бұрын
This was really interesting. I used to run Apache a lot a few years ago, and like you, I switched for the huge performance benefit of Nginx (in most cases, apparently). I don't do any load balancing using nginx or apache, but this was really interesting to me, as HA is always something I have been looking for but never really managed to do (lack of hardware and knowledge in my homelab). Earned the sub, well done!
@ReptoxXАй бұрын
Just searched yesterday if you already uploaded a benchmark between nginx and caddy and you just now uploaded nginx vs apache. Great starting point :)
@AntonPutraАй бұрын
I'll make nginx vs caddy vs traefik soon
@karsten600Ай бұрын
Valuable benchmarks! Tip: there is this insane resonance in the audio of this video (and probably more of your videos), so when you pronounce words with an s, I can feel a brief pressure in my ear from my brain trying to regulate the intensity. 😅
@AntonPutraАй бұрын
thanks for the feedback, i'll try to reduce it
@guoardАй бұрын
Great. please do the same test for Nginx vs Haproxy too.
@AntonPutraАй бұрын
thanks! will do!
@fumpedАй бұрын
nginx as reverse proxy with static content caching and apache as dynamic web server is a killer combo!
@AntonPutraАй бұрын
😊
@TweakMDSАй бұрын
I wonder if apache and nginx use different default compression levels. The test results hint at this (even though both state 6 as the default in their docs), and diminishing returns at a higher compression level might be hurting apache in this test. Some improvement might come from skipping compression on files smaller than 1 KB (which I think is a best practice), as well as setting the same gzip compression level on both servers.
@AntonPutraАй бұрын
thank you for the feedback! i'll double check compression levels next time
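On the nginx side, the suggestions above could look like the following sketch (values are the commenter's suggestions, not the benchmark's verified config); Apache's mod_deflate has a matching `DeflateCompressionLevel` directive:

```nginx
gzip on;
gzip_comp_level 6;      # make the level explicit instead of trusting the default
gzip_min_length 1024;   # skip compressing responses under 1 KB
```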
@davidmckenАй бұрын
Given my exposure to both apache and nginx, this lines up. If you want something to serve static content, nginx is king. I am concerned about what is happening around that 80%, though. The way I see them, nginx is like a scythe, able to cut through a metric boatload of requests, while apache is like a swiss army knife with a boatload of tools for everything that has ever come up in my travels (this is where I sense apache's slowness comes from: its versatility). I guess the car analogy is that nginx can do a 1/4 mile straight faster, but apache could do a rally better, as it's more adaptable. I have a non-compliant endpoint that uses an api_key HTTP header, and it took effort just to get nginx to leave it alone; I then route that path to an apache container where it gets fixed.
@MattHudsonAtxАй бұрын
i have found i can make nginx do everything apache does, including serving php and all that application-layer stuff people do with apache. it's not especially advisable, though.
@davidmckenАй бұрын
@@MattHudsonAtx for the invalid header issue I mentioned, I haven't found a way to fix it with nginx; at best I can get it to pass the header through for something else to deal with, using the ignore_invalid_headers directive. Given that I was trying to stay within nginx proxy manager, which is handling everything else, I would love to know an alternative way.
@AntonPutraАй бұрын
thanks for the feedback!
@Future_me_66525Ай бұрын
Love it with the cam, keep it up
@AntonPutraАй бұрын
thank you!
@GameBully2KАй бұрын
Amazing test. I did the same test with Grafana k6, but between Nginx and OpenLiteSpeed. Your test definitely explains why CyberPanel is the most performant of the open-source hosting software I tested: it uses a combination of apache and openlitespeed (I think they do a reverse proxy with apache and serve the website using openlitespeed).
@AntonPutraАй бұрын
thank you!
@mohammadalaaelghamry8010Ай бұрын
Great video, as always. Thank you.
@AntonPutraАй бұрын
thank you!
@antonztxoneАй бұрын
There should definitely be caddy and traefik in these tests! Thanks for this kind of video!
@AntonPutraАй бұрын
I'll do those two as well soon
@AIExEyАй бұрын
nginx vs pingora please! great content, keep up the good work!
@AntonPutraАй бұрын
thank you! will do
@Chat_De_RatatoingАй бұрын
these benchmarks are so much more useful and truthful than the "official" benchmarks from the devs
@AntonPutraАй бұрын
thank you!
@marknefedovАй бұрын
We experienced an interesting issue with a Go application and Nginx when we migrated from Python to Golang: Nginx uses A LOT more TCP packets to communicate with Go apps. At first it overloaded a load balancer cluster, and then the application itself. We still haven't figured out what happened, because we were also in the process of migrating to Traefik, but it looks like Go and Nginx really want to split requests into a lot of packets, since most of the load came from TCP reassembly, and there were a lot more sockets waiting for ACK than usual.
@MelroyvandenBergАй бұрын
Did you try to set `multi_accept on`?
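One common lever for the per-request TCP overhead described above is connection reuse toward the backend. A hedged sketch, with hypothetical names and addresses:

```nginx
# Reuse upstream connections to the Go backend so each request doesn't pay
# a fresh TCP handshake ("go_backend" and the address are made up)
upstream go_backend {
    server 10.0.0.10:8080;
    keepalive 32;                     # idle connections kept open per worker
}

server {
    location / {
        proxy_pass http://go_backend;
        proxy_http_version 1.1;       # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";
    }
}
```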
@konstantinp440Ай бұрын
Thank you very much for your hard work 😊
@AntonPutraАй бұрын
❤️
@NDT080Ай бұрын
Some sort of freak: - Add IIS to the test
@chralexNETАй бұрын
A lot of organizations (mostly corporations) use IIS, though, so even if IIS is bad, it would still be worthwhile to show how bad it is.
@AntonPutraАй бұрын
ok interesting, i'll try it out
@MelroyvandenBergАй бұрын
Again Anton, great test, but you forgot to fine-tune the servers again, just like in the database test. You shouldn't use the defaults.
@_RiuxАй бұрын
Why not? Don't you think most people will use the default settings? Imo this way of testing is probably the most representative of real world performance. Of course it's also interesting to see how far you can optimize, but this is definitely useful.
@willi1978Ай бұрын
there should be sane defaults. many setups will run with defaults.
@willl0014Ай бұрын
Agreed the defaults should be representative of the average
@sudifgish8853Ай бұрын
@@_Riux wtf no, in the "real world" people actually configure their servers, or it's just a hobby project where none of this matters.
@ooogabooga5111Ай бұрын
@@_Riux People who have defaults have no traffic, if you want to talk about traffic and performance, tuning the server is a must.
@MAK_007Ай бұрын
love u anton
@AntonPutraАй бұрын
❤️
@andreialionte3269Ай бұрын
do REST VS GRPC
@Matheus1233722Ай бұрын
GraphQL vs gRPC maybe
@AntonPutraАй бұрын
will do soon as well as graphql!
@leonardogalindo206813 күн бұрын
Please make a video on how to measure a microservice's resource usage, e.g. how to benchmark a python service, in order to calculate cloud cost
@AntonPutra13 күн бұрын
ok noted!
@alekc7292Ай бұрын
very good, and the diagram for the test scenarios is beautiful and understandable
@AntonPutraАй бұрын
thank you!
@mrali_18Ай бұрын
Please compare Nginx and HAProxy.
@krisaviАй бұрын
That would need various reverse proxy workloads: ones that filter traffic and ones that don't, since HAProxy doesn't do the web server part.
@AntonPutraАй бұрын
ok will do!
@simon3121Ай бұрын
Your English is very good. Not sure whether your pronunciation of 'throughput' is your signature move or not. I noticed it in multiple videos.
@AntonPutraАй бұрын
😊
@rh4009Ай бұрын
Oh, the irony
@jerkmeoАй бұрын
love your performance test....you've saved me a lot of time on product selection!
@kebien6020Ай бұрын
For the reverse proxy tests, can you test with the swiss army knife of reverse proxies: Envoy proxy? It supports TLS, mTLS, TCP proxying (with or without TLS), HTTP1, 2 and even HTTP3, multiple options for discovering target IPs, circuit breaking, rate-limiting, on the fly re-configuration, and even Lua scripting in case all of that flexibility isn't enough.
@AntonPutraАй бұрын
i did it in the past maybe a year ago or so but will definitely refresh it with new use cases soon
@pengku175Ай бұрын
really great video! can you do nginx vs tengine next? it claims to have better performance than nginx, and I'm very curious about it. love your vids
@AntonPutraАй бұрын
ok noted!
@SAsquirtleАй бұрын
I feel like the intro parts are kinda spoilery even if you're blurring out the graph legends
@AntonPutraАй бұрын
😊
@HeyItsSahilSoniАй бұрын
Looking at the 85% CPU breakpoint, one thing I could think of was some form of leak. Maybe try slowing down the request increase rate; it might show different results.
@AntonPutraАй бұрын
thanks, i'll try next time
@rwahАй бұрын
How do you configure the Apache MPM? Prefork mode or event mode?
@AntonPutraАй бұрын
i use event mode, here is the original config - github.com/antonputra/tutorials/blob/219/lessons/219/apache/httpd-mpm.conf#L5-L12 i also got a PR with an improvement - github.com/antonputra/tutorials/blob/main/lessons/219/apache/httpd-mpm.conf#L10-L18
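For context, an event-MPM block generally has this shape. The values below are illustrative assumptions; the linked configs above are what the benchmark actually used:

```apache
<IfModule mpm_event_module>
    StartServers             3
    MinSpareThreads         75
    MaxSpareThreads        250
    ThreadsPerChild         25
    MaxRequestWorkers      400
    MaxConnectionsPerChild   0
</IfModule>
```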
@neoko7220Ай бұрын
Please compare PHP on Swoole/Roadrunner/FrankenPHP Server versus Rust, Go, Node.js
@AntonPutraАй бұрын
yes i'll do it soon
@kokamkarsahilАй бұрын
Is it possible to benchmark pingora as well? It will be easier to use once river becomes available, so I will wait for that. Thanks a lot for the benchmark!
@AntonPutraАй бұрын
yes just added pingora in my list
@milendenev4935Ай бұрын
Ok, thank you very much for providing these insights! I was in the middle of making my own reverse proxy, and this is some key data. I think I might have made an RP better than both of those. 😏
@AntonPutraАй бұрын
my pleasure, they have a lot of built in functionality
@vasilekx8Ай бұрын
Perhaps you need to try the previous version to fix the problems with nginx, or build it from source too?
@AntonPutraАй бұрын
i may try something in the future
@rh4009Ай бұрын
I agree. Both the 85% CPU behaviour and the much higher backend app CPU usage feel like regressions.
@danielwidyanto5142Ай бұрын
I thought you were Indonesian; turns out you're not. But it's a great video (and I'm still sticking to Apache + PHP MPM because I've never had such huge traffic... except for the DDoS event).
@AntonPutraАй бұрын
yeah, i'm not 😊 i heard apache php integration is very good
@IK-wp5eqАй бұрын
11:35 The higher CPU for the apps behind nginx indicates that they have more work to do, because nginx must be sending more data per second to the apps than Apache.
@Blink__5Ай бұрын
i know a lot of people already asked for this, but i also want to see Traefik and Caddy
@AntonPutraАй бұрын
ok will do!
@chralexNETАй бұрын
I would like to see a test with the NGINX stream proxy module, which acts as just a reverse TCP or UDP proxy, not an HTTP proxy. I, for example, use this for some game servers where I reverse proxy both TCP and UDP packets. I set up NGINX for this because it seemed like the easiest thing to do, but I don't know if it has the best performance.
@krisaviАй бұрын
That could be one of the comparisons with HAProxy, which is also capable of TCP proxying.
@AntonPutraАй бұрын
Interesting, I'll try to include it in one of the new benchmarks
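The stream-module setup the commenter describes typically looks like this sketch (addresses and ports are hypothetical):

```nginx
# nginx.conf: proxy a game server's TCP and UDP ports without touching HTTP
stream {
    server {
        listen 25565;                 # TCP
        proxy_pass 10.0.0.20:25565;
    }
    server {
        listen 25565 udp;             # UDP
        proxy_pass 10.0.0.20:25565;
    }
}
```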
@geg4385Ай бұрын
this made me wanna see tcp vs quic
@AntonPutraАй бұрын
ok i may do it sometime in the future
@toniferic-tech8733Ай бұрын
Did you use RSA or ECDSA certificates? ECDSA should be used most of the time, as it is faster to transmit (fewer bytes in the TLS handshake). Also, nowadays, when used as a reverse proxy, the connection to the backend servers (i.e. downstream) should also be encrypted, not cleartext.
@AntonPutraАй бұрын
I used RSA in both proxies. Regarding the second point, it's good to have but difficult to maintain; you constantly need to renew the certificates that the application uses.
@toniferic-tech8733Ай бұрын
I don't agree. Internal certificates can be automated with an internal CA and ACME, an external CA (e.g. Let's Encrypt), or long-lasting certificates.
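As a side note to the RSA vs ECDSA question: nginx can present both certificate types at once and pick per client based on the offered ciphersuites. A sketch with hypothetical paths:

```nginx
# Both pairs configured; nginx selects RSA or ECDSA per TLS handshake
ssl_certificate     /etc/nginx/certs/rsa.crt;
ssl_certificate_key /etc/nginx/certs/rsa.key;
ssl_certificate     /etc/nginx/certs/ecdsa.crt;
ssl_certificate_key /etc/nginx/certs/ecdsa.key;
```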
@kameikojirouАй бұрын
How does Caddy compare to these two?
@AntonPutraАй бұрын
i'll add it as well soon
@kariuki6644Ай бұрын
I’m curious how Java spring webflux compares to spring boot
@AntonPutraАй бұрын
i'll do java soon
@MadalinIgniscaАй бұрын
Why would you activate compression instead of serving pre-compressed files?
@AntonPutraАй бұрын
I didn't get the question. You use compression to improve latency and overall performance. With a payload that is four times smaller, it takes less time to transmit over the network.
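The pre-compression the commenter is asking about is served in nginx via `gzip_static`; a minimal sketch (the path is hypothetical):

```nginx
# Serve foo.js.gz (built ahead of time) instead of compressing foo.js on
# every request; nginx falls back to the plain file when no .gz exists
location /static/ {
    gzip_static on;
}
```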
@RAHUL-vm8bnАй бұрын
Can You Please Start series on Docker Networking tips or Anything related to DevOps it will be helpful Learning from your Experience
@AntonPutraАй бұрын
i'll try to include as many tips as i can in the benchmarks 😊
@jimhrelb2135Ай бұрын
I feel like network usage is itself tied to requests/s, in that if one webserver can satisfy more requests in a given time, it's prone to having more network usage within that same timeframe. Why not network usage per request?
@AntonPutraАй бұрын
it's common to use rps, requests per second metric to monitor http applications
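A per-request network figure like the commenter suggests could be derived in Prometheus by dividing the two rates; the metric names below are assumptions, not the benchmark's actual series:

```promql
# bytes sent by the web server VM divided by requests served, same window
rate(node_network_transmit_bytes_total{instance="nginx"}[1m])
/
rate(client_requests_total{target="nginx"}[1m])
```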
@90hijackedАй бұрын
it took me a while to realize this isn't OSS nginx; I have not played around with the F5 one. Does it come with a builtin metrics module, or what did you use to export those? great content as always!
@patryk4815Ай бұрын
this is OSS nginx
@rafaelpirollaАй бұрын
oss doesn't come with a metrics module. latency can only be measured at the client; server cpu/mem/net is not the nginx metrics module's responsibility
@patryk4815Ай бұрын
@@rafaelpirolla I don't know what you're talking about; k8s exposes cpu/mem/net stats for every POD
@90hijackedАй бұрын
@@rafaelpirolla makes sense that latency was obtained from the clients, thank you!! I worked around this once using the otel module + Tempo metrics generator, but that was a rather convoluted/unsatisfactory approach
@AntonPutraАй бұрын
yeah, it's open-source nginx. Also, the most accurate way to measure latency is from the client side, not using internal metrics. In this test I collect cpu/memory/network for the web servers using node exporter, since they are deployed on standalone VMs
@simonlindgren9747Ай бұрын
Please test some more experimental servers too, like maybe rpxy/sozu compared to envoy.
@AntonPutraАй бұрын
ok i'll take a look at them
@ziv132Ай бұрын
Can you add Caddy
@AntonPutraАй бұрын
will do soon!
@koko9089nnnАй бұрын
Can you do `envoy` please? it is widely used by Google GCP
@Cyanide0112Ай бұрын
Can you try others? Like Envoy? There are some other "obscure" ones .. I wonder if you can test those
@AntonPutraАй бұрын
i tested envoy in the past but i think it's time to refresh
@ksomovАй бұрын
please compare the performance of nginx and haproxy
@AntonPutraАй бұрын
ok noted!
@konga8165Ай бұрын
Caddy, traefik, and envoy proxy!
@AntonPutraАй бұрын
yes will do soon!
@HowToLinuxАй бұрын
Please do Nginx vs HaProxy
@AntonPutraАй бұрын
ok will do!
@HowToLinuxАй бұрын
@@AntonPutra Thanks!
@muhammadalfian90575 күн бұрын
Next openlitespeed vs nginx vs apache please
@idzyubin720Ай бұрын
Compare go-grpc and rust-tonic please Tonic contributors fix many issues and increase performance
@AntonPutraАй бұрын
ok i'll take a look!
@MadalinIgniscaАй бұрын
I always had stability with Apache, but with Nginx I occasionally had warnings in my alerts as the service was restarting
@AntonPutraАй бұрын
It's very common in production to quickly fill up all available disk space with access logs; this is issue number one.
@markg5891Ай бұрын
I've noticed this weird behavior of nginx as a reverse proxy to a backend server too. Even if that backend server itself is just serving static data, the mere act of being a reverse proxy seems to cause a rather big performance hit for nginx. Weird.
@AntonPutraАй бұрын
thanks for the feedback
@TadeasFАй бұрын
I'd be very interested nginx VS caddy
@AntonPutraАй бұрын
will do soon!
@bhsecurityАй бұрын
I always wanted to see this.
@AntonPutraАй бұрын
my pleasure!
@qatanahАй бұрын
hi, what tools are you using for monitoring and benchmark graphs?
@roger-seiАй бұрын
Grafana
@KTLO-m8pАй бұрын
Thanks!
@AntonPutraАй бұрын
prometheus + grafana
@sPanKyZzZ1Ай бұрын
One future test idea: job schedulers
@AntonPutraАй бұрын
like airflow?
@amig0842Ай бұрын
Please compare River reverse proxy with Nginx
@AntonPutraАй бұрын
ok interesting
@KanibalvvАй бұрын
you need to check the kernel params... the tcp_mem default is always too low; that can explain the nginx problem.
@AntonPutraАй бұрын
thanks will check
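The kernel tuning mentioned above usually lives in a sysctl drop-in. The values below are illustrative only; defaults vary by kernel and RAM, so treat them as a starting point, not a recommendation:

```ini
# /etc/sysctl.d/99-tuning.conf (tcp_mem is measured in pages, not bytes)
net.ipv4.tcp_mem = 786432 1048576 1572864
net.core.somaxconn = 4096
```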
@dasten123Ай бұрын
very interesting
@AntonPutraАй бұрын
thanks!
@gpasdcompteАй бұрын
A 4th test with Apache's "AllowOverride None" would be nice; I've heard it improves performance, but never tried it :/
@AntonPutraАй бұрын
ok i'll take a look!
@fateslayer47Ай бұрын
I'm looking at benchmarks and feeling good about choosing nginx even though my website gets 1 user per month.
@AntonPutraАй бұрын
haha
@KTLO-m8pАй бұрын
How are you exporting the results into the graphing software? Can you explain what softwares those are to do that so I can recreate this setup?
@AntonPutraАй бұрын
sure, I use Prometheus to scrape all the metrics and Grafana for the UI. it's all open source, and I have a bunch of tutorials on my channel
@KTLO-m8pАй бұрын
@@AntonPutra thanks!
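To recreate the setup described above, the Prometheus side is just a scrape config; job names and targets here are assumptions, and Grafana then reads Prometheus as a data source:

```yaml
scrape_configs:
  - job_name: nginx-vm
    static_configs:
      - targets: ["10.0.0.10:9100"]   # node exporter on the web server VM
  - job_name: load-client
    static_configs:
      - targets: ["10.0.0.30:8080"]   # client exposing latency metrics
```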
@simonecominato5561Ай бұрын
In the last test, are the Rust applications running on the same instance as the server? It seems like the Rust application in the Nginx case is stealing processor time from the server.
@Pero12121Ай бұрын
At 1:26 he explained where everything is hosted. Applications have separated machines
@simonecominato5561Ай бұрын
@@Pero12121 I missed it, thanks.
@AntonPutraАй бұрын
yeah, in this test web servers are deployed on dedicated vms
@mkvalorАй бұрын
Something isn't quite right here. In all 3 tests, you show the requests per second synchronized until a major failure happens. The time log at the bottom seems to indicate these requests-per-second metrics are being gathered over the same time period. Yet how can this be possible when one web server has a significantly higher latency, measured at the client, than the other? Once the latency difference hits 1 ms, we should notice at least 1,000 fewer requests per second for each second that passes after that moment, accumulating as time goes by. And, of course, this difference should accumulate even more quickly the higher the latency goes. It looks to me like you (accidentally?) normalized the graphs of each test so the RPS would match until one of the servers failed. Or if not, what am I missing here?
@pable2Ай бұрын
Like the others said, with Caddy would be amazing
@AntonPutraАй бұрын
yes soon
@MrDocomoxАй бұрын
check istio gateway vs nginx.
@AntonPutraАй бұрын
will do! thanks
@nexovecАй бұрын
nginx vs Caddy please!
@AntonPutraАй бұрын
will do!
@architector2p0Ай бұрын
Hi, could you create a video explaining step by step how to prepare such a testing system from scratch?
@AntonPutraАй бұрын
sure, but i already have some tutorials on my channel that cover prometheus and grafana
@hatersbudiman7058Ай бұрын
Next Caddy and open litespeed
@AntonPutraАй бұрын
noted!
@VijayGanesh-s5qАй бұрын
Will you make a comparison between the best frameworks of zig (zzz), rust (axum), and go (fiber)? I have been waiting for this for a long time.
@AntonPutraАй бұрын
yes will do
@DominickPelusoАй бұрын
Redbean and caddy please
@AntonPutraАй бұрын
ok added to my list
@malcomgreen4747Ай бұрын
Test starts at 5:21
@AntonPutraАй бұрын
i have timestamps in each video
@malcomgreen4747Ай бұрын
@@AntonPutra nice thank you
@damianszczukowski1912Ай бұрын
compare apache/nginx to traefik and caddy
@AntonPutraАй бұрын
yes will do soon
@GooblehusainАй бұрын
Anton, your name is very Indonesian, more specifically Chinese Indonesian. Do you have any association with Indonesian culture?
@severgunАй бұрын
This is a Slavic name lol.
@steeltormentorsАй бұрын
bro, don't embarrass yourself... from the way he talks, Anton Putra sounds really Javanese, right? lol
@AntonPutraАй бұрын
no, but i was frequently told about my name when i was in bali
@AntonPutraАй бұрын
@severgun he was referring to my last name actually
@erickvillatoro5683Ай бұрын
Please do Traefik vs nginx ingess controller!!!
@AntonPutraАй бұрын
will do!
@ghostvarАй бұрын
We usually use these two: nginx for SSL and reverse proxying, and apache as the PHP handler :/
@AntonPutraАй бұрын
yeah apache has nice php integration
@GuedelhaGamingАй бұрын
Nginx vs YARP
@AntonPutraАй бұрын
ok noted!
@MrAmG17Ай бұрын
Cowboy, Erlang, and other high performers for future videos
@AntonPutraАй бұрын
will do soon, but first ruby on rails 😊
@nikitalafinskiy8089Ай бұрын
Would be interesting to see Kotlin (natively compiled) using Spring vs Go
@AntonPutraАй бұрын
ok noted!
@nomadvagabond1263Ай бұрын
You blur the text, but the colors give it away 🥲 choose colors that aren't related to the technology.
@AntonPutraАй бұрын
😊
@MelroyvandenBergАй бұрын
Ps. the latest Nginx is actually version 1.27.2, right? Maybe it's the "latest" version on your system, but it's not THE latest version.
@AntonPutraАй бұрын
i used latest "stable" version not from the mainline
@VirendraBGАй бұрын
Try this test with Dynamic HTML Content fetched from SQL Databases.