Just yesterday I thought this series was no longer getting new content and was abandoned, and then this came up. Very happy to see the new content. Please keep adding to it; the content is very good, and there is so much more to cover.
@ConceptandCoding · 3 months ago
@@anshulkatare yes, more videos are coming soon
@architgarg1088 · 3 months ago
Love the way you explain complex problems in a very easy and understandable manner.
@aashishkalia8269 · 3 months ago
Shreyansh, one humble request: when you teach terms like rate limiter, consistent hashing, and jitter, at least give some reference about their implementation, i.e. which Java APIs are used for these in production environments. This is what should distinguish you from others, because drawing boxes is not that big a deal. Please take it as positive feedback.
@HungryEagle2610 · 3 months ago
Your explanations are crystal clear. Thank you for sharing this. Wouldn't this have been load-tested during the QE phase using libraries like Locust?
@sanketh768 · 3 months ago
I did not understand the logic behind retries. By the time the retried request completes, it will be too late: the customer would have already given up, closed the app, and retried manually by making a new request.
@ankitbiswas101 · 3 months ago
Was waiting for just this!
@baluk6710 · a month ago
Great explanation, but in the age of horizontal auto-scaling, what is the need for managing the thundering-herd effect in application logic? Can you please give some practical scenarios where it would make sense to handle it this way?
@sanketh768 · 3 months ago
BMS is a very mature system. Is this the first time such a high load has come? There have been blockbuster movies and events in the past; I thought they would have seen these issues before.
@amanbhagat1616 · 3 months ago
Thanks for sharing this.
@sahilsharma2445 · 3 months ago
Hey Shreyansh, is it advisable to use Kafka/SQS before the request reaches the application layer? We could implement rate limiting on the messaging queues and prevent the system from processing too many requests and failing. Are we not doing this because we want the process to be synchronous?
@rishiraj2548 · 3 months ago
Thank you
@subhamsadhukhan9098 · 3 months ago
Can you explain more about autoscaling?
@saravanansivakumar9259 · 3 months ago
Didn't they have a rate limiter in their system?
@gauravyadav042 · 3 months ago
Hi Shreyansh, during a retry, would the request go through the API gateway again, or would it be retried from the load balancer? Can you please also confirm who would be doing the retry? I am thinking that since the queue is full, the request won't be reaching the application server. Please confirm.
@ConceptandCoding · 3 months ago
Again, the answer is not straightforward: a retry can be done by the application, the load balancer, or the API gateway. But say you have chosen to retry at the application or load balancer level; if, at a later point, some other system fails even though your application-level retry passed, then the whole request will be retried again. So generally, retries should happen from a single source, here the API gateway. If your design also requires retries at the application or load balancer level, it can be done, but proper handling is needed (for idempotency, latency, etc.).
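The single-source retry with idempotency handling described in this reply, combined with the jitter mentioned elsewhere in the thread, can be sketched roughly as follows. This is a minimal illustration, not the video's implementation; the `call` function, the parameter values, and the idempotency-key convention are assumptions.

```python
import random
import time


def retry_with_jitter(call, idempotency_key, max_attempts=5, base=0.1, cap=5.0):
    """Retry `call` with exponential backoff plus full jitter.

    The same idempotency_key is sent on every attempt so the
    downstream service can deduplicate if a retry overlaps with an
    earlier attempt that actually succeeded.
    """
    for attempt in range(max_attempts):
        try:
            return call(idempotency_key)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Full jitter: sleep a random amount up to the capped
            # exponential backoff, so retries from many clients do not
            # synchronize into a thundering herd.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

Doing this once at the gateway (rather than at every hop) avoids the multiplicative retry storms the reply warns about.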
@gauravyadav042 · 3 months ago
Thank you, Shreyansh!
@SivakiranBoppana · 3 months ago
Why couldn't they leverage a rate limiter?
@iranna9065 · 2 months ago
I request you to use a bigger board and write in bigger fonts.
@ConceptandCoding · 2 months ago
noted
@aakash.nagpal98 · 3 months ago
Respect, brother!
@RoyBoyLab · 3 months ago
Nice
@random4573 · 3 months ago
Why auto-scale at all? In this unique situation they know there will be a lot of traffic for a small period of time, maybe 30 minutes to an hour. So just provision more resources for that period beforehand.
@sahilsharma2445 · 3 months ago
I feel that using jitter at the application level, or a rate limiter, is much more cost-efficient than adding more resources. Users would have no problem waiting 5 minutes or more. Your thoughts?
@girishanker3796 · 2 months ago
Pre-scaling would be great in this situation, since we are sure the traffic is going to be high. Using MQs as rate limiters would also be a good design for throttling throughput.
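The "MQ as rate limiter" idea in this comment amounts to putting a bounded queue in front of the backend: producers enqueue, a worker drains at the rate the backend can sustain, and overflow is rejected up front. A toy in-process sketch (a real deployment would use Kafka/SQS as discussed above; the class and method names here are invented for illustration):

```python
import queue


class QueueThrottle:
    """Bounded buffer in front of a booking service: accept up to
    `max_pending` outstanding requests, shed the rest immediately."""

    def __init__(self, max_pending):
        self.pending = queue.Queue(maxsize=max_pending)

    def submit(self, request):
        try:
            self.pending.put_nowait(request)
            return True          # accepted; will be processed later
        except queue.Full:
            return False         # shed load instead of overwhelming the backend

    def drain_one(self):
        # A worker calls this at whatever pace the backend sustains.
        return self.pending.get_nowait()
```

Rejecting at the queue boundary keeps the backend's concurrency constant no matter how large the spike is, which is the throttling property the comment describes.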