This pattern of responding with a req_id and then letting the client poll with it is exactly what we use throughout our system.
@FunkyELF · 2 months ago
That's not really what this video was about. The idea of getting a unique id and polling with it isn't new. This is specifically about long polling, where if there's nothing you simply leave the connection open for a while until there is something, which reduces the number of poll requests.
@imhiteshgarg · 2 months ago
What do you mean by req_id? Are you getting a built-in UUID with every request, or are you creating one yourself as usual?
@addanametocontinue · 2 months ago
Yes, that's how standard short polling operates, and it's what many systems use simply because it's easy to implement. The client submits a request and the server returns a response immediately. The client then checks the status of the request at a fixed interval, with 99% of those checks ending with the server simply telling you there is nothing yet or it's not ready. Long polling is one way to address this. Another is webhooks, which take a lot of effort to implement.
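For illustration, a minimal sketch of that submit-then-poll flow in Python, assuming hypothetical /jobs and /jobs/<id> endpoints that return JSON with req_id, status, and result fields:

```python
import json
import time
import urllib.request

BASE = "http://localhost:8080"  # hypothetical job service

def submit_job(payload: dict) -> str:
    """POST the job; the server answers immediately with a req_id."""
    req = urllib.request.Request(
        f"{BASE}/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["req_id"]

def poll_until_done(req_id: str, interval: float = 2.0):
    """Short polling: ask every `interval` seconds; most answers are 'not ready'."""
    while True:
        with urllib.request.urlopen(f"{BASE}/jobs/{req_id}") as resp:
            body = json.load(resp)
        if body["status"] == "done":
            return body["result"]
        # This fixed-interval loop is the chattiness long polling removes.
        time.sleep(interval)
```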
@A2Fyise · 2 months ago
It's fascinating to watch this, especially since I've been researching event-driven protocols, WebSockets, and fallback mechanisms such as polling. I wasn't familiar with long polling before; this was super informative.
@PaKa-kj3rj · 1 month ago
I remember subscribing to you when you were around 1k subs (been through many Google accounts and lost you) back when you did the vlog by the water fountain. Wow, almost 500k, congrats!
@Makeupartist6201 · 1 month ago
Why couldn't I find your videos earlier? Seriously, you are a gem to me ❤
@sumer9999 · 1 month ago
I used it as a podcast while walking, awesome.
@MaxPicAxe · 2 months ago
I'm at 4:55 and I'm about to guess what long polling means. My guess is: why can't we just wait with the response? When there is no result, don't say there's no result; just wait until there is a result and then respond. Then you might only have to poll, say, every minute, and it times out every minute, hence the name long polling.
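That guess is essentially right. A minimal server-side sketch of it in Python (standard library only, with a hypothetical in-memory result store): the handler holds the response open until a worker publishes the result or 30 seconds pass.

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

results = {}                           # req_id -> result bytes (in-memory stand-in)
results_ready = threading.Condition()

def publish(req_id: str, result: bytes) -> None:
    """A worker calls this when a job finishes; it wakes any waiting poller."""
    with results_ready:
        results[req_id] = result
        results_ready.notify_all()

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        req_id = self.path.rsplit("/", 1)[-1]
        with results_ready:
            # Instead of replying "not ready", hold the request open until
            # the result appears or 30 seconds pass (the long-poll timeout).
            results_ready.wait_for(lambda: req_id in results, timeout=30.0)
            result = results.get(req_id)
        if result is None:
            self.send_response(204)    # timed out empty; the client just re-polls
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(result)

# ThreadingHTTPServer(("", 8080), LongPollHandler).serve_forever()
```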
@seephor · 2 months ago
Hussein, it would be interesting to compare HTTP long polling to WebSockets for push updates. I know they are both TCP technologies, and one is true duplex and real-time, but that comes at a cost. Do you know how expensive it is for, say, 1 million long-polled connections vs. 1 million WebSocket connections, both at idle? I would assume long polling would win in terms of raw resource requirements and compute, or am I wrong?
@vishal8274 · 1 month ago
love these podcasts
@antidotejack2771 · 1 month ago
Love your videos @hnasr, but when are we talking about Spring Security architecture?
@WherelsWally · 2 months ago
I'm not sure if I caught this, but how does this work in a distributed system? Suppose the client makes the initial request to Node 1 to generate the report. Then the same client makes a long-poll request to Node 2 to get the report when it is ready. How is the system set up so that when the report is actually generated, all nodes receive this event and send the response to their clients accordingly? Does each node need some kind of consumer to consume "completion" events from a centralised queue? Or do workers push this "completion" event (perhaps by some API call) to all nodes once they are done? Also, if too many clients hold long-polled connections to a server, at some point will the server no longer be able to handle incoming connections?
@yyyd6559 · 1 month ago
I think if the report has been previously requested by an initial polling request, the result will be cached so it can be fed to subsequent requests on different nodes. I might be wrong.
@naveenkothamasu · 1 month ago
The long-poll pattern is only about reducing the chattiness of the short-poll technique. You are asking how Node 2 tracks the status of a request submitted to Node 1, which is beside the point, i.e. that part needs to be designed whether you use short or long polling. There are multiple ways to handle it: a centralized state store/DB shared by all services, or redirecting the request to the "owning" node by key hashing (shard the request space to establish an ownership mapping between nodes and shards, and have any node that doesn't own a request redirect it to the "owning" node), etc.
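For illustration, a minimal sketch of the key-hashing option described above, with a hypothetical static node list:

```python
import hashlib

NODES = ["node-1", "node-2", "node-3"]   # hypothetical cluster membership

def owning_node(req_id: str) -> str:
    """Deterministically map a request id to the node that tracks its state."""
    digest = int(hashlib.sha256(req_id.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def handle_poll(self_node: str, req_id: str) -> tuple[str, str]:
    owner = owning_node(req_id)
    if owner != self_node:
        return ("redirect", owner)   # not ours: send the poll to the owning node
    return ("serve", self_node)      # ours: answer the long poll from local state
```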
@ahmedkhudhair8035 · 1 month ago
Hi Hussein, tell us about your experience in Bahrain. When you were there, was Bahrain up to date in the technology field? How do you think the future in Bahrain looks?
@TASHO-khan · 1 month ago
Hussein, please make a microservices course on Udemy. Your teaching style is the best in the market.
@mohameddiaa3037 · 19 days ago
What is best for serverless backends without using other services?
@FunkyELF · 2 months ago
Never heard of this concept. I like it; it seems like a very clever way to be efficient. However, as you mentioned, proxies and other middleware might not like it, and the client might not be able to distinguish between a middleware timeout and the service itself timing out. Perhaps this could be mitigated, after the first time it happens, by the client itself specifying an ever-decreasing timeout via headers or something until it finally hears back from the service. I'm only at 10:45 right now, watching at 2x speed, so maybe it's discussed later ;-)
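A sketch of that ever-decreasing-timeout idea, assuming a hypothetical X-Wait-Seconds header that the server would honor as its long-poll hold time (this header and the halving policy are both illustrative, not a standard):

```python
import urllib.request

def adaptive_long_poll(url: str, max_wait: float = 60.0, floor: float = 5.0) -> bytes:
    """Shrink the requested hold time until a response gets through intact."""
    wait = max_wait
    while True:
        # X-Wait-Seconds is a hypothetical header the server would honor.
        req = urllib.request.Request(url, headers={"X-Wait-Seconds": str(wait)})
        try:
            with urllib.request.urlopen(req, timeout=wait + 5) as resp:
                if resp.status == 200:
                    return resp.read()
                # 204: the server held the full time and had nothing; try again.
        except OSError:
            # Connection died early: suspect a middlebox timeout, so ask for
            # shorter holds until we reliably hear back from the service.
            wait = max(floor, wait / 2)
```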
@BlindVirtuoso · 2 months ago
Excellent one! Thanks! Much appreciated.
@Akshatgiri · 2 months ago
Love this, but wouldn't a push system be better in most of these scenarios? Or is the case against WebSockets that it's quite resource-intensive to keep connections alive, especially in cases where the application doesn't need the "real-timeness"?
@mustafhussain7627 · 2 months ago
Would be nice if you shared some resources or articles.
@theweirdamir · 2 months ago
443k subs, ur channel is safe now(:
@yousifmagdi · 2 months ago
Great topic. What if the backend, during a long-poll request, also has to check the database for the existence of the data? Will the backend then also query the database at an interval?
@hnasr · 2 months ago
Absolutely, that is one implementation. Another idea would be to have a broker in the middle (Kafka/RabbitMQ) and have the backend connect to the broker; when the message is received it is pushed directly to the backend, which then distributes it to the clients.
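A minimal in-process sketch of that push idea (the names are hypothetical; a real Kafka/RabbitMQ consumer callback would invoke on_broker_message): the long-poll handler blocks on a per-request queue instead of re-checking a database on an interval.

```python
import queue
import threading

waiters: dict[str, queue.Queue] = {}   # req_id -> queue the handler waits on
waiters_lock = threading.Lock()

def on_broker_message(req_id: str, payload: bytes) -> None:
    """Invoked by the broker consumer; pushes straight to the waiting handler."""
    with waiters_lock:
        q = waiters.get(req_id)
    if q is not None:
        q.put(payload)

def long_poll_wait(req_id: str, timeout: float = 30.0):
    """Called by the HTTP handler: block until pushed to, or time out."""
    q = queue.Queue(maxsize=1)
    with waiters_lock:
        waiters[req_id] = q
    try:
        return q.get(timeout=timeout)  # no database re-checking loop needed
    except queue.Empty:
        return None                    # client re-issues the long poll
    finally:
        with waiters_lock:
            waiters.pop(req_id, None)
```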
@ashishrao3069 · 2 months ago
Hey Hussein, when do we choose long polling over WebSockets?
@matthewslyh4052 · 1 month ago
Long polling is just an implementation pattern over ordinary HTTP (or raw sockets), not a separate transport, so in mechanics it isn't really "different from" a request/response. The client sends a request that effectively says "I'm here, send me X data when you get the chance" and then leaves the connection open instead of expecting an immediate answer. Instead of opening a connection, getting a quick "nothing yet", and closing it, you open the connection and leave it open while you go off and do other things, waiting for the server to get back to you. That's really all it is. You choose long polling when opening and quickly closing connections in rapid succession becomes more expensive than holding a single longer-lived connection and waiting for the result to come back later.
@hnasr · 25 days ago
Good question. It depends on the use case, but I would say if the clients are beefy enough and can handle the workload, use WebSockets; otherwise, long polling. This is because long polling allows the clients to consume on demand, whereas WebSockets push data regardless of whether the client is ready to consume it or not.
@aidanwelch4763 · 2 months ago
I'm considering having my requests respond with an "expected ready time"; then the client only checks again once that has passed. But obviously this doesn't work in all cases.
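A sketch of that hint-driven client, assuming a hypothetical ready_in field in the status response (similar in spirit to the standard Retry-After header):

```python
import json
import time
import urllib.request

def poll_with_hint(status_url: str):
    """Sleep until the server's own estimate before checking again."""
    while True:
        with urllib.request.urlopen(status_url) as resp:
            body = json.load(resp)
        if body["status"] == "done":
            return body["result"]
        # 'ready_in' is the server's guess, in seconds, of when to come back.
        time.sleep(max(1.0, body.get("ready_in", 5.0)))
```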
@jmfernandes8 · 2 months ago
Hi Hussein! For the queue example, using the long-polling model, are we essentially shifting the short-polling loop from the frontend (browser) to the backend's event queue? If the client polls and the resource isn't ready yet, the backend doesn't respond immediately. Instead, it has to keep checking whether the resource has been processed and is available in the queue to be consumed, essentially short polling internally. Is this the standard way?
@imhiteshgarg · 2 months ago
Hi Hussein, I just want to confirm: the API from which we get the UUID from the server and the API on which the client asks for the status of that UUID are different APIs, right?!
@SuperEnigma27 · 2 months ago
Is it fair to say Server-Sent Events is an example of a scenario where long polling is used? The client makes a request and the server waits to respond until either a timeout occurs or events are available.
@hnasr · 2 months ago
I think so. The way I look at it, server-sent events is an infinite long-polling model where we constantly keep getting "events". That is why in my course I had SSE as its own pattern, but that is debatable.
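For comparison, a minimal SSE endpoint sketch (Python standard library, with a timed counter standing in for real events): one request, one never-ending response made of data: frames, which is what makes the "infinite long poll" framing apt.

```python
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        n = 0
        while True:                    # the "infinite long poll": never finish
            self.wfile.write(f"data: event {n}\n\n".encode())
            self.wfile.flush()
            n += 1
            time.sleep(1)              # stand-in for waiting on real events

# ThreadingHTTPServer(("", 8080), SSEHandler).serve_forever()
```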
@Frank-qg4ik · 2 months ago
24:15 sure, just gloss over the headline. As far as I know, the proxy isn't going to tell the server that anything happened. Maybe if the server has a good stack implementation it can catch the event when writing to a dead socket, but that is a big "if". This just creates the opportunity for the backend to advance its pointer without verifying that the client was synced. If polling is absolutely required, it would be far saner to cache than to keep long-lived connections open for no reason. But polling is almost never required these days.
@SnSn-p7n · 2 months ago
Doesn't Kafka use long polling these days? What method does a Kafka consumer use to consume messages?
@Frank-qg4ik · 2 months ago
@SnSn-p7n dunno, haven't had a use for Kafka yet.
@deezydoezeet · 2 months ago
I love it!
@pedramhaqiqi7030 · 2 months ago
What polling strategies can be implemented on the client side if we do not have control of the backend? Can we do any better than some tailored/configurable exponential backoff? Let's say we have costly API calls and the service provider only supports a GET /status request.
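A client-only sketch under those constraints: exponential backoff with full jitter and a cap, where check stands for whatever costly GET /status call the provider exposes (the helper names are illustrative):

```python
import random
import time

def backoff_poll(check, base: float = 1.0, cap: float = 60.0):
    """Exponential backoff with full jitter, capped at `cap` seconds.

    `check` is whatever costly GET /status call the provider exposes;
    it should return the result, or None while the job is still pending.
    """
    delay = base
    while True:
        result = check()
        if result is not None:
            return result
        # Full jitter spreads clients out so they don't all poll in lockstep.
        time.sleep(random.uniform(0, delay))
        delay = min(cap, delay * 2)
```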
@CODFactory · 2 months ago
At around 20:00 you mention that when the client comes back and queries again, the server can check whether the response is in the receive queue or not. But the receive queue will generally be closed and there won't be any queue to query, since the initial socket will be closed on both ends once the client dies, unless the server keeps it open for some time even after the client is gone. Is this a misunderstanding?
@hnasr · 2 months ago
Oh, I'm not sure I said receive queue (of course the kernel receive queue is gone); I meant a logical queue maintained by the application in user space. It is just a regular user-space queue to manage the jobs. But good point on the distinction: physical receive queue (kernel) vs. logical job queue (application, user space).
@aidanwelch4763 · 2 months ago
Won't having open requests (at least in HTTP) consume a lot of memory on the server?
@blackswordsman9745 · 1 month ago
Hey, there's something called git pulling, right? :D
@prof_chandanhkumar6616 · 1 month ago
nice
@costathoughts · 1 month ago
I made a mistake on an AWS question because I was guessing between poll and pull lol
@ankyrishu · 1 month ago
Won't long polling block threads on the backend, which would eventually be a disaster since the backend has a finite number of threads?
@arslans3036 · 1 month ago
Curious about this too, but if you are using something like async/await there is no application thread that gets blocked here; the runtime parks the request and the kernel's readiness notifications wake it back up. But then you might run into memory issues if you are holding too many requests open.
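A minimal asyncio sketch of that: thousands of long polls can be parked concurrently on one thread, each awaiting an event rather than blocking an OS thread (the function names and in-memory dicts are illustrative):

```python
import asyncio

pending: dict[str, asyncio.Event] = {}   # req_id -> completion signal
results: dict[str, bytes] = {}

async def long_poll(req_id: str, timeout: float = 30.0):
    """Await a result without tying up an OS thread while waiting."""
    event = pending.setdefault(req_id, asyncio.Event())
    try:
        await asyncio.wait_for(event.wait(), timeout)  # parked on the event loop
        return results.get(req_id)
    except asyncio.TimeoutError:
        return None                                    # client re-issues the poll

def complete(req_id: str, result: bytes) -> None:
    """A worker marks a job done and wakes every waiting coroutine.
    Call this on the event loop (e.g. via loop.call_soon_threadsafe)."""
    results[req_id] = result
    if req_id in pending:
        pending[req_id].set()
```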