This hits a sweet spot between a few things: a complex topic like load balancing, docker and docker compose (just the tip), and sockets, all under a practical example. This is great. Thank you!
@twitchizle a year ago
It's like the G-spot.
@sunnyrajwadi 4 years ago
Solves real-life problems. Thank you.
@YGNCode 4 years ago
This is really awesome. My current company uses WebSockets and doesn't need to scale yet, but it might in the future, so I was checking around. Your video explains it very well. Thanks!
@DiaryOfMuhib 3 years ago
I was really struggling with WebSocket scaling. Nicely explained!
@dearvivekkumar 4 years ago
Hi Hussein, thanks for making all these great videos. These days I check daily to see whether you have uploaded anything new. All your videos are very useful and answer a lot of my doubts.
@ryanquinn1257 a year ago
Such a quick, powerful demo. If you're breaking Redis, you're already going to need to be doing more advanced stuff than this, haha.
@hoxorious 4 years ago
By far one of the best channels I have ever subscribed to 👍
@jackykwan8214 2 years ago
Really wonderful video, keep going!! I love how you simplify the talk, and with a practical POC example!
@zcong3402 2 years ago
Very nice video. It provides reasonably good depth on the architectural details of building a real-time application, and especially on how Redis (or anything that can act as a broker) fits into this architecture. Thank you!
@letsflow.oficial 10 months ago
Hey Hussein, first of all I need to say that I love your videos; they are very informative, very clear, and even satisfying for relaxing purposes, haha. Relax while we learn :) Thank you for this video on WebSockets and Redis. Could you please explain how we could use this architecture for shared-model handling? Suppose a database stores all the messages and a central copy of the model, with distributed copies of the model in each client. We would then use the command pattern to alter the model based on commands, keeping a stack of commands, and maybe a snapshot, so we can replay commands and undo/redo changes to the model. I'm facing this challenge right now and would love to hear from you on that.
@mytheens6652 3 years ago
I wish I could have you as my senior developer.
@basselturky4027 2 years ago
This channel is a gold mine.
@jackcurrie4805 2 years ago
Your channel is fantastic, Hussein. Thanks for making such great content!
@hnasr 2 years ago
Thanks Jack
@neketavorotnikov6743 a year ago
So, as I understand it, our WS proxy server holds every WS connection from the clients. The question is: if our WS app servers need to be scaled out to hold N WS connections, why is our proxy able to hold all of them by itself? Why is there such a big difference in performance between the WS proxy server and the WS app server?
@jongxina3595 4 years ago
Dude, you have no idea how GLAD I am to have found this video! Amazing 😀
@hnasr 4 years ago
Ben Sharpie enjoy! 😊
@M.......A 2 years ago
At the end of the video, you mentioned that Redis is a single point of failure. Isn't that also the case with HAProxy? Thanks for the video.
@peterhindes56 2 years ago
Yes. If you host at multiple sites, you could replicate Redis across them, and DNS would then handle your load balancing.
@sezif3157 3 years ago
Thanks for the video, Hussein. One question: at 13:02, all the backend servers in haproxy.cfg are linked to port 8080 (ws1:8080, ws2:8080, and so on), but in docker-compose you gave them APPID values different from 8080, so inside the docker-compose network those servers will be on the port you set from the environment. Shouldn't this be ws1:APPID1, ws2:APPID2, etc.?
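For what it's worth, a sketch of one way the two files stay consistent (an assumption, not necessarily the repo's actual code): the app binds a fixed 8080 in every container and uses APPID purely as an instance label, so HAProxy's ws1:8080 entries remain correct.

    // hypothetical index.js excerpt: APPID identifies the instance,
    // while the listening port stays fixed at 8080 in every container
    const APPID = process.env.APPID;
    server.listen(8080, () => console.log(`ws server ${APPID} listening on 8080`));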
@peterlau9731 a year ago
Really appreciate the video! Perhaps you could also cover the DB design/optimization for a chat app? I believe many interesting topics, like sharding and database selection, could be covered. Thanks, and looking forward to future videos!
@lonewolf2547 3 years ago
You just solved one of my biggest problems... thanks a ton!
@vilmarMartins 2 years ago
Would the number of connections in HAProxy be a problem?
@hnasr 2 years ago
It can at a large scale (hundreds of thousands). That's when you would have two HAProxy instances and either use keepalived with a virtual IP or load balance them on the app/client side through DNS. I wouldn't go there unless absolutely necessary, of course.
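For illustration, a minimal keepalived.conf sketch of that active/passive setup (interface name and addresses are placeholders, not from the video):

    vrrp_instance VI_1 {
        state MASTER               # set to BACKUP on the second HAProxy box
        interface eth0
        virtual_router_id 51
        priority 100               # give the standby a lower priority
        virtual_ipaddress {
            10.0.0.100             # the floating IP that clients connect to
        }
    }

If the MASTER box dies, keepalived moves the floating IP to the standby, and clients reconnect there.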
@vilmarMartins 2 years ago
@@hnasr Excellent! Thanks a lot!!!
@programmer1356 2 years ago
Brilliant. Inspirational. Thank you very much.
@ciubancantheb3st 3 years ago
Can you do a tutorial on doing the same thing but with a Redis cluster? Since Redis is single-threaded, it might throttle the processes once you are as big as Facebook.
@vewmet 4 years ago
Love your content, bro! Awesome.
@uneq9589 2 years ago
That was a really nice explanation. Just one question on the reverse proxy: what would limit the number of WebSocket connections the reverse proxy is able to handle?
@sanderluis3652 4 years ago
Wow, very clear tutorial.
@hnasr 4 years ago
Thanks Sander!
@localghost3000 a year ago
How would you gracefully handle it if one of your server instances with an active connection goes down?
@shoebpatel4027 4 years ago
Hey Hussein, make a video on Elasticsearch in detail.
@ProgrammerRajaa 3 months ago
Thanks for the awesome content, but I have a doubt. We have a reverse proxy that needs to keep every WS connection active; won't the proxy get overloaded? If so, what is the purpose of using a reverse proxy at all, when we could use a single WS server? Can you clear up my doubt?
@ZoraciousDCree 4 years ago
Really appreciate all that you have to offer! Good pace in the presentation, interesting side notes, and you keep it fun. Thanks.
@hnasr 4 years ago
Thank you 🙏 glad you liked the content 😍
@anthonyfarias321 4 years ago
I recently implemented something very similar for a phone dialer. I used Socket.IO and a library for connecting Socket.IO with Redis, the Socket.IO Redis adapter. It works smoothly.
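The commenter's exact setup isn't shown, but a minimal sketch of that wiring, assuming the redis v4 and @socket.io/redis-adapter packages, looks roughly like this:

    const { createServer } = require("http");
    const { Server } = require("socket.io");
    const { createClient } = require("redis");
    const { createAdapter } = require("@socket.io/redis-adapter");

    async function main() {
      const pubClient = createClient({ url: "redis://localhost:6379" });
      const subClient = pubClient.duplicate(); // pub/sub needs its own connection
      await Promise.all([pubClient.connect(), subClient.connect()]);

      const httpServer = createServer();
      const io = new Server(httpServer, { adapter: createAdapter(pubClient, subClient) });

      io.on("connection", (socket) => {
        // an emit on any instance reaches clients on every instance via Redis
        socket.on("chat", (msg) => io.emit("chat", msg));
      });
      httpServer.listen(8080);
    }
    main();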
@ashuthe1 5 months ago
Very informative :)
@sariksiddiqui6059 4 years ago
What does load balancing look like for a WebSocket? Are sticky sessions at layer 7 enough? Since it's a WebSocket, the TCP connection would remain open anyway, no?
@hnasr 4 years ago
Good question. WebSocket proxying starts at layer 7 (the upgrade), then funnels back down to layer 4 as a stream.
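A minimal HAProxy sketch of that behavior (hostnames and timeout values are placeholders): the Upgrade handshake is inspected at layer 7, and afterwards the connection is treated as a long-lived tunnel, which is why the generous timeout tunnel matters for WebSockets:

    frontend ws_in
        bind *:8080
        mode http
        default_backend ws_out

    backend ws_out
        mode http
        balance roundrobin
        timeout tunnel 1h          # keeps upgraded WebSocket streams open
        server ws1 ws1:8080
        server ws2 ws2:8080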
@hichem6555 a year ago
Thank you! This video solved a big problem that I had!!!! 💪
@arbaztyagi123 3 years ago
I have one doubt: is storing the connections in an array, the way you did, a good approach? And how can I store these connections in a central store or memory where all the other servers (machines) can access them? Thanks.
@saidkorseir192 3 years ago
Great work, Hussein. Super clean. I have a question: what if I create a docker-compose.yml with only ws1 and run "docker-compose up --scale ws1=4"? What does the HAProxy config file need to look like then? I couldn't find a way. I also tried balancing with nginx.
@sthirumalai 4 years ago
Hi Nasser, thanks for the video; it is pretty informative. What if one of the WebSocket servers crashes while serving traffic? How can we guarantee delivery to the clients connected to that WS server? Also, how is HA guaranteed in Redis? Awaiting your response.
@hnasr 4 years ago
Santhoshkumar Thirumalai since WebSockets are stateful, when a server crashes the client MUST restart the connection with the reverse proxy so it gets routed to another server.
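A minimal client-side sketch of that reconnect, using the browser WebSocket API with exponential backoff (the URL is a placeholder):

    let retries = 0;
    function connect() {
      const ws = new WebSocket("ws://localhost:8080");
      ws.onopen = () => { retries = 0; };   // healthy again, reset the backoff
      ws.onclose = () => {
        // the reverse proxy will route the fresh connection to a live server
        const delay = Math.min(1000 * 2 ** retries++, 30000);
        setTimeout(connect, delay);
      };
    }
    connect();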
@sthirumalai 4 years ago
@@hnasr Thanks for the response. I did some research and found an interesting article on session management using AWS ElastiCache for Redis to persist the sessions. The solution you gave may not scale well, I suppose: aws.amazon.com/caching/session-management/
@m_t_t_ a year ago
Is it a good idea to store all of the messages in an in-memory database, though?
@962tushar 3 years ago
A dumb question: can't we persist these connections somewhere like Redis? (There would be some cost due to serialization and deserialization; would it be negligible?) It would let the load balancer avoid sticky sessions.
@davidmontdajonc6332 4 years ago
I'm trying to figure out how to do this in AWS with auto-scaling groups, in case I need it. No idea how I will get the "which servers are subscribed" info... Can I code that Redis stuff in PHP, or do I need to port all my Ratchet WS logic to a Node.js app? Thanks for the video!!!
@vewmet 4 years ago
Hey David, we are also doing this on AWS.
@davidmontdajonc6332 4 years ago
@@vewmet Cool, how is it going? Have you found any good documentation or tutorials? Are you using ElastiCache for Redis? Cheers!
@kailashyogeshwar8492 2 years ago
Very nice explanation and demo. One question though: the demo shows broadcasting messages to all the connected clients. For delivery to a single client, does the backend the user is connected to also subscribe to a user-specific topic? E.g., with User 1 connected to backend 4444, will that backend also subscribe to a channel based on the userId (or something else) to receive direct messages? Is there an alternate approach to doing the subscription?
@robinranabhat3125 2 years ago
Just curious: in this particular example, would clients from different tabs (not windows) be considered the same or not?
@abdallahelkasass6332 3 years ago
How do you preserve open connections when the servers reload?
@abhimanyuraizada7713 2 years ago
Hi Hussein, you created a simple WebSocket server here; couldn't we spin it up with the cluster module? In most production cases the servers use Node.js clustering, so in that case would we connect our WebSocket to different worker IDs?
@rajatahuja4720 4 years ago
I was looking for exactly this. You rock :)
@hnasr 4 years ago
Thanks, glad you found it!
@lucas_badico 4 years ago
Just built one like this using Go. It was really satisfying!
@hnasr 4 years ago
Lucas gomes de santana nice work! It does feel satisfying when you finish a project.
@lucas_badico 4 years ago
I really wanted to discuss my approach with you. I built my WebSocket server in Go, and I have a feeling that I don't need a Redis connection because my pub/sub lives inside the application. Anyway, thanks for the videos; I'm learning a lot from them.
@mahmoudsabrah5158 3 years ago
Is there a source-port limitation between the reverse proxy and the WebSocket server? The reverse proxy has to reserve a source port for each WebSocket connection to the WebSocket server, and those connections stay alive for a long time, so won't we run out of source ports really quickly at the reverse proxy?
@animatrix1851 4 years ago
Could you give a situation where you'd need to scale? When do you do this: when the socket server has >64k connections, or when RAM is maxed out because of a high load of messages?
@hnasr 4 years ago
Adithya angara one example is when one server can no longer handle all your users. This needs to be tested because it depends on the app. Your app might be very CPU/memory hungry and only handle 10k WebSocket connections, while a light and efficient app might handle 100k. You need to monitor your server and your clients and see whether the experience starts to degrade.
@ragavkb2597 4 years ago
Good video; I enjoyed it. In your example you stored the connections in an array in Node.js. Is this typically how real-world applications do it, or are there established patterns? It would also be nice to have tutorials on a connection dropping from a client and how things eventually get cleaned up on the server.
@gurjarc1 2 years ago
Nice video. I have one question: if there are a thousand users, how will the load balancer know which user's call maps to which stateful server? Will we refer to some DB that holds the user-to-server mapping?
@MAURO28ize 2 years ago
Hi, how could I share the connections of two servers? For example, two users could connect to different servers, so if one server has to respond to both clients, it wouldn't find the connection data to respond to them. Help me, please.
@pickuphappiness5027 2 years ago
In the one-to-one chat case, we could keep a user-to-server mapping in the Redis DB, and when multiple servers receive a message from server 1, each checks whether it is connected to the intended user, and only the server connected to that user processes the message. Is this possible?
@UzairAhmad. 2 months ago
Implemented the same thing in Django, but now I understand why we use Redis.
@stormilha 3 years ago
Awesome content!
@XForbide 2 years ago
Can someone help me understand something? From what I understand, load balancers like nginx have a max connection limit of ~66K due to the limit on the number of open file descriptors you can have. So if connections are long-lived, doesn't that mean that in such an architecture you're going to get bottlenecked at ~66K at the load-balancer level (or at any intermediate proxy)? Regardless of how many machines you have behind the load balancer, it will always be capped at that amount. So what is the correct way to scale to, say, 100K concurrent connections? I've read somewhere about DNS load balancing; is that the way to go?
@mayankkumawat8802 4 years ago
How would this work if there are multiple channels with different users in them?
@fxstreamer238 2 years ago
I ran into a redis npm library error on the Redis publish event in docker-compose, which seems to be a compatibility issue between the latest Node version and the latest Docker. That's what happens when a bunch of noobs have access to open-source code and can contribute whatever they want: not only do they change the way the library is configured, they also mess with all kinds of Node.js arguments (coding with new syntax just to be fancy) to make it suitable or unsuitable for a particular version of Windows or Node, and sometimes, like this, when everything is the latest version, something breaks.
@TheNayanava 3 years ago
Hi Nasser, I have never implemented WebSockets, but here is something I want to understand. When a persistent TCP connection is established between the client and the server, how do we decide which ports to open on the server side? For example, in a normal HTTP scenario, on the edge we would enable 443 to allow only TLS traffic, and then on the actual servers open 443 or 80 depending on whether or not we have a zero-trust architecture pattern. But how is it done in the case of WebSockets? I understand we maintain a registry to store which connection a server event should be pushed to, so it can be routed correctly to the client. How many ports do we open up on the server side? In short, when anyone says "we scaled up to 1 million connections on a single machine", how is that achieved?
@MidhunDarvin625 3 years ago
What is the connection limit on the load balancer? And how do we scale the load balancer if there is such a limit?
@m_t_t_ a year ago
There won't be a limit, because the load balancer's job is so small. But if we started getting Google-like traffic, then we would need multiple data centres, and DNS would do the load balancing between the load balancers.
@angeliquereader a year ago
Great content! Just a doubt: we're spinning up 4 different instances, and each instance will have its own "connections" variable. So if one client is connected to instance 1 and another client to instance 3, how does the message sent by client 1 reach client 3?
@anchalsharma0843 4 months ago
Redis pub/sub can be used here again. Hussein took the example of a group chat, but to make it 1:1 messaging, here's what you can do (see the sketch below):
1. The server setup remains the same.
2. When client 1 connects to a web server, that server subscribes to a Redis channel named "client1".
3. When another client connects to some other server, that server subscribes to the Redis channel named "client2".
4. Suppose client 1 sends a message to client 2. Upon receiving the message on client 1's server, you publish it to the channel of the intended recipient, i.e. channel "client2".
5. Since client 2's server is already subscribed to the channel "client2", it gets the message published by client 1, and you ferry it to user 2 via the WebSocket connection.
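A sketch of those steps in Node.js, assuming the redis v4 and ws packages (channel names and the helper are illustrative, not from the video):

    const sockets = new Map(); // userId -> WebSocket connected to THIS server

    // steps 2-5 above: subscribe on connect, publish on send, ferry on receive
    async function onUserConnected(userId, socket, subscriber, publisher) {
      sockets.set(userId, socket);
      await subscriber.subscribe(`user:${userId}`, (message) => {
        sockets.get(userId)?.send(message); // deliver to the local socket
      });
      socket.on("message", (raw) => {
        const { to, text } = JSON.parse(raw);
        publisher.publish(`user:${to}`, text); // lands on the recipient's server
      });
    }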
@angeliquereader 4 months ago
@@anchalsharma0843 My doubt was about this group-chat application only! So basically, if we also do console.log(connection.length), will it be 1 or 4? (I guess 1.)
@momensalah8497 4 years ago
Well explained, thanks! But I have a question: how can all these Node apps listen on one port (8080) without an error? Should they be mapped or exposed on ports different from each other?
@hnasr 4 years ago
Momen Salah thanks Momen! They listen on the same port without any error because they are different containers, each of which has a unique IP address. If they were on the same host network then, correct, you would have to pick different ports.
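A minimal compose sketch of that point (service names are placeholders): each service is its own container with its own IP on the compose network, so every app can bind 8080 internally, and only the proxy publishes a port to the host:

    services:
      ws1:
        build: .
      ws2:
        build: .
      haproxy:
        image: haproxy:latest
        ports:
          - "8080:8080"   # the only port exposed on the host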
@diboracle123 2 years ago
Hi Hussein, no doubt it is a good, informative video, but one doubt: isn't the bottleneck here the load balancer? If we have millions of users, is only one load balancer sufficient to handle that many TCP connections? One more doubt (in a different context): say I have a trading application like Upstox or Zerodha, where we can create a watchlist of stocks. Those stock prices update frequently, so if the UI sends requests to the server to fetch the latest price, the server will be bombarded with requests, and that doesn't scale either. How can we handle this? Please share some thoughts.
@m_t_t_ a year ago
If the load balancer started to be the bottleneck, then another cluster would be created and traffic distributed through DNS.
@shailysangwan3977 3 years ago
The content is explained well, and spontaneously enough for one to follow, but the pitch of the voice varies too much to keep the volume constant through the video. (I'm using earphones, so it might just be me.)
@earlvhingabuat8984 3 years ago
New subscriber here! Thanks for this awesome video!
@hnasr 3 years ago
🙏🙏🙏
@karthikrangaraju9421 3 years ago
Hi Hussein, pub/sub is not real-time, no? It's pull-based. Instead, I think we should use Redis only for bookkeeping which server holds which connections, and have the servers themselves push messages to other servers directly.
@hnasr 3 years ago
You can implement pub/sub as push, pull, or long polling.
@ahmeddaraz8494 4 years ago
Inspiring video, Hussein, thanks! But I have a question: can we add an HA mode for HAProxy (e.g., by using keepalived) with no impact on the established TCP WebSocket connections?
@hnasr 4 years ago
Interesting question, Ahmed! It really depends whether it's active/active or active/passive. If you use keepalived with HAProxy, keepalived will make sure there is only one active HAProxy node, and all your sockets will go through it. If that HAProxy goes down, keepalived will switch to the other HAProxy node, and all connections will be dropped (because WebSockets are stateful). Active/active gives a better-balanced configuration that is less likely to fail, but failures can still happen, and unfortunately the client then has to re-establish the connection manually.
@ahmeddaraz8494 4 years ago
@@hnasr I was thinking that the TCP connections could somehow be shifted, since the virtual IP stays the same and TCP deals with IP/port (probably I am wrong here). I am still not quite sure about that, and I also did not do any research, but your answer makes more sense!
@zummotv1013 4 years ago
Does Google Keep (the note-taking app) use WebSockets? What are the things to keep in mind if I am making a clone of Google Keep?
@hnasr 4 years ago
zummotv not sure what they are using, but since it's Google, probably gRPC instead of WebSockets. That said, you get the same result. Notes are a little tougher, especially if you want to reconcile changes.
@esu7116 4 years ago
Do you have any ideas on how to scale the reverse proxy too, or is this not necessary?
@hnasr 4 years ago
Esu you can, if your monitoring shows that the reverse proxy can't handle the load: deploy another reverse proxy in an active/active cluster and put them behind a DNS SRV record. Check out the video here: Active-Active vs Active-Passive Cluster to Achieve High Availability in Scaling Systems kzbin.info/www/bejne/ml6ll5xrpt6qfNE
@implemented2 4 years ago
How does the proxy know which server to send data to? Does it have a mapping from clients to servers?
@hnasr 4 years ago
Great question; you specifically asked about the (forward) proxy, not the reverse proxy, right? The proxy knows because the client actually wants to reach the final destination server, which in this example is google.com. Say you want to go to google.com and you have configured your client to use 1.2.3.4 as a proxy. In HTTP, at least, the client adds a header, "Host: google.com", and that is how the proxy knows where to forward the traffic. Looking at the layer-4 content of the packet, the client puts the PROXY's IP address (1.2.3.4) as the destination, not google.com's IP. The proxy is the final destination from a layer-4 perspective, but at layer 7 the real final destination is google.com.
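The split is easy to see with curl (the proxy address is a placeholder):

    # Layer 4: the TCP connection is opened to the proxy at 1.2.3.4:8888.
    # Layer 7: the request still names google.com (the Host header), which is
    # what the proxy reads to decide where to forward the traffic.
    curl -x http://1.2.3.4:8888 http://google.com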
@kiranparajuli6724 2 years ago
Hi Hussein, really nice video; it was very helpful and informative. In one part of the video you talked about the drawback of Redis that a single server has to register two clients, one as subscriber and one as publisher. What software did you mention that solves this problem? It was a little unclear in the video.
@yelgabs a year ago
Isn't the load balancer here a single point of failure?
@giangviet5155 2 years ago
This video only explains load balancing for something stateful like WS, not scaling as such. When you talk about scaling, you must solve both the scale-out and the scale-in problem, and with round-robin and a static HAProxy config file like this, it seems impossible to scale in/out. Anyway, thanks for a great video.
@alshameerb 4 years ago
How can we send some data when we connect? It's like the client wants to store data in a certain location, and I need to send this location to the client during the connection. How can we do that?
@alshameerb 4 years ago
I mean, send the location to the server...
@developerjas 3 years ago
You saved my life!
@denisrazumnyi6456 4 years ago
Well done!!!
@hnasr 4 years ago
🙏
@OneOmot 3 years ago
What if, instead of Redis, you had just another WebSocket server that is connected to all the other WS servers? Each server would send every message to its clients, one of which is that connector WS server, which then sends it on to the other servers. Then no WS server needs to know about Redis; only the one connector WS server is configured to know the others, and in case of its failure the other WS servers can still operate fine. You could scale this by just adding two or more connector WS servers!?
@hnasr 3 years ago
Yes, that is possible for sure; it's just that you would be building your own version of a pub/sub system using WebSockets, presumably synchronously. Possible, and it has its own use cases.
@HM_Milan 3 years ago
Can we redirect all WebSockets to another available Docker host in a different AWS availability zone?
@hnasr 3 years ago
Yes! You can set a rule in HAProxy to redirect traffic to another backend based on the source IP, for example. A better approach is to use GeoDNS.
@HM_Milan 3 years ago
@@hnasr thanks
@5mintech567 4 years ago
Hi! First of all, I like your videos and watch the stuff you create; it is awesome. But I have a doubt regarding the Dockerfile WORKDIR path. While creating the Dockerfile I am unable to link the volumes to a path like /home/node/app, so can you tell me how I can bind the volumes for the images? I mostly use Ubuntu for development, so could that change the folder structure?
@nailgilaziev 4 years ago
Hello, and thanks! You said there are implementations of reverse proxies (gateways) that can create truly just one physical TCP connection, but that this is another story. Can you tell that story, at least as an answer to this question? Thanks!
@hnasr 4 years ago
If the client of the reverse proxy is within the same subnet, the client can set its gateway IP address to the reverse proxy's IP address. This way, any packets will immediately go to the gateway (the reverse proxy) through the power of ARP, and the reverse proxy simply uses pure NAT to rewrite the packet with its own public IP address before sending it to the backend. This is exactly how your phone works when connected through a Wi-Fi router: all packets go through the router by default because it is the default gateway. You can actually see this in your Wi-Fi settings.
@predcr 2 years ago
Can you please help me with scaling up my Redis server?
@anuragvohra5519 4 years ago
Aren't the load balancer and Redis the bottlenecks of your application's scaling?
@hnasr 4 years ago
Anurag Vohra there will always be bottlenecks, for sure; no system is perfect. I would, however, relieve that bottleneck by introducing many load balancers and throwing them behind an active/active cluster. Active-Active vs Active-Passive Cluster Pros & Cons kzbin.info/www/bejne/ml6ll5xrpt6qfNE
@anuragvohra5519 4 years ago
@@hnasr Thanks, that covers what I was searching for!
@anuragvohra5519 4 years ago
@@hnasr Do you have any portal where one can reach you with job offers? (A kind of freelancing.)
@FAROOQ95123 4 years ago
Please make a video on the Elastic Stack.
@sreevishal2223 4 years ago
Awesome 👌👌, all I wanted at the moment!! Also, instead of building the same container multiple times with different ports, can I spin up a Docker swarm?
@hnasr 4 years ago
Sure you can!
@wassim5622 4 years ago
I don't get this multiple-servers thing. Does it mean buying more hosting plans, or what exactly does "multiple servers" mean?
@hnasr 4 years ago
wassim it could be multiple physical machines, multiple virtual machines on a single physical machine, or multiple containers on a single machine. It really depends how far you want to go with scaling.
@wassim5622 4 years ago
@@hnasr Thanks!!
@dgalaa5850 4 years ago
When I use nginx servers like this, can I access the other services by socket ID?
@hnasr 4 years ago
I am not sure there is a socket ID, but you can certainly create an ID and use it in rules, I think.
@saurabhahuja6707 4 years ago
Here HAProxy is maintaining the connections between backend and frontend; will that cause a bottleneck? If yes, then how do we solve it?
@kozie928 3 years ago
You can create multiple HAProxy/nginx instances, with Docker Compose for example.
@bisakhmondal8371 3 years ago
Hey Hussein, thanks for the awesome content, man. I am extending the application into a multi-room chat server, kind of like Discord, and also for person-to-person unicast. In this highly distributed environment I am choosing Apache Kafka for pub/sub (one reason being the connectors for persistence). But I am still thinking about how to serve the pub/sub system, because creating a single topic for all chat rooms (with some meta information on each message saying which room it is meant for) is a disaster, but creating individual topics for individual chat rooms is also a disaster (because I have no idea how to consume messages when the number of topics is enormous). My main goal is selective broadcast to all the users who are connected to a given Node.js server and have joined a particular room. Any thoughts here? I would love to hear them. If possible, could you please point me to any articles/blogs related to this?
@manglani87 3 years ago
Hi Hussein, I have a similar question/doubt; can you please help here?
@mti2fw 2 years ago
Hey! I imagine you would want to save the user's chat-group IDs in your database, for example. Am I right? If so, you could subscribe the user to each of them, so each chat group would have a different channel for its messages. I'm not sure whether this is scalable, but it's an idea you could try.
@Samsonkwakunkrumah 3 years ago
How do you handle offline users in this architecture?
@jeyfus 2 years ago
One way to handle this could be to persist the messages of the related topic(s) in a database. When your (formerly) offline client comes back online, it can fetch the whole history using a regular HTTP request.
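A sketch of that catch-up endpoint, assuming Express and some persisted message store (messagesStore here is hypothetical):

    const express = require("express");
    const app = express();

    // a reconnecting client asks for everything it missed since a timestamp
    app.get("/history/:topic", async (req, res) => {
      const since = Number(req.query.since ?? 0);
      const rows = await messagesStore.fetch(req.params.topic, since); // hypothetical store
      res.json(rows);
    });

    app.listen(3000);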
@sergiosandoval3821 3 years ago
Master!!!!!!!
@RahulSoni-vc8kv 4 years ago
Doesn't the HAProxy become a bottleneck?
@hnasr 4 years ago
Rahul Soni it does, of course; that is why you need to scale HAProxy itself. You can use either an active/active or an active/passive cluster. Active-Active vs Active-Passive Cluster Pros & Cons kzbin.info/www/bejne/ml6ll5xrpt6qfNE
@houssemchr1539 4 years ago
Well explained, thanks! Can you explain how push notifications work, like FCM, and whether there is any open-source alternative?
@hnasr 4 years ago
houssem chr thanks! I made a video on push notifications here: kzbin.info/www/bejne/bnWUf3Sbr6hges0
@EhSUN37 2 years ago
We subscribe and publish to "livechat", but we are receiving on "message"? What is "message", and what happened to "livechat" then? Very nice explanation, dude!
@adb86 3 years ago
Hussein, awesome explanation of HAProxy. Can you please tell us how to run HAProxy in a container with HTTPS? Creating the certificate on the host machine works great when HAProxy is also started on the host machine, but when HAProxy runs as a Docker container, certificates created on the host machine do not work, and I did not find a way to create the cert from the container itself. Your input is valuable; please respond.
@trollgg777 3 years ago
Let's say you have an API gateway, behind that an auth microservice that validates requests, and also a cluster behind a load balancer with WebSocket instances. How do you connect your clients to the WebSocket? lol, I'm struggling with this!!!
@randomlettersqzkebkw 2 years ago
I do not understand how this is scaling, when the load balancer in the middle is itself holding connections to the clients as well. If it merely routed the requests directly to the WebSocket servers, then OK, but it's not doing that :/
@dmitrychernivetsky5876 2 years ago
"Scaling" with Redis as a single point of failure. FYI, most of the libraries, and therefore the code, for connecting to clustered Redis are entirely different from what was presented.
@gerooq a year ago
But why have multiple WS servers and then use Redis to share messages, when you can just run a single WS server that uses in-process memory to store a map from "channel name" to the list of sockets that asked to subscribe to that channel? Then it's trivial to divvy emitted messages up among the other sockets in the same channel 🤷♂️. I mean, it's way more performant, especially if done multithreaded.
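A sketch of that single-server alternative, assuming the ws package (the message shapes are illustrative):

    const { WebSocketServer } = require("ws");

    const channels = new Map(); // channel name -> Set of subscribed sockets
    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket) => {
      socket.on("message", (raw) => {
        const { type, channel, text } = JSON.parse(raw);
        if (type === "subscribe") {
          if (!channels.has(channel)) channels.set(channel, new Set());
          channels.get(channel).add(socket);
        } else if (type === "publish") {
          for (const peer of channels.get(channel) ?? []) {
            if (peer.readyState === 1 /* OPEN */) peer.send(text);
          }
        }
      });
      socket.on("close", () => {
        for (const subs of channels.values()) subs.delete(socket);
      });
    });

The trade-off is the one the video is about: that single process is now both the capacity ceiling and a single point of failure.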
@nit50000 2 years ago
Thank you for the great video. It is very useful indeed. (Sorry, but I find your voice very annoying. 😣😂🤣)
@vibekdutta6539 3 years ago
A big fan of your channel, always have been. Can you please explain the difference between subscriber.on("subscribe") and subscriber.on("message")? I didn't understand the direction of the data flow here.
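Assuming the node_redis v3 event API shown in the video: "livechat" is the channel name, while "subscribe" and "message" are the client's own event names. A sketch:

    const redis = require("redis");
    const subscriber = redis.createClient();

    subscriber.on("subscribe", (channel, count) => {
      // fires once, confirming the subscription took effect
      console.log(`subscribed to ${channel} (${count} total)`);
    });

    subscriber.on("message", (channel, message) => {
      // fires for every message published to a subscribed channel
      if (channel === "livechat") console.log(message);
    });

    subscriber.subscribe("livechat");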
@praneetpushpal1410 4 years ago
Nice tutorial, thanks! If you have any free time, could you please share your insights on this: "Twitter account of top celebrity hacked". How could that have happened despite so much security at Twitter?