When to use a Publish-Subscribe System Like Kafka?

30,366 views

Hussein Nasser

4 years ago

In this video, I explain when to use a pub-sub messaging system such as Kafka, Redis, or RabbitMQ, and compare it to a request-response system.
🏭 Software Architecture Videos
• Software Architecture
💾 Database Engineering Videos
• Database Engineering
🛰 Network Engineering Videos
• Network Engineering
🏰 Load Balancing and Proxies Videos
• Proxies
🐘 Postgres Videos
• PostgreSQL
🚢 Docker Videos
• Docker
🧮 Programming Pattern Videos
• Programming Patterns
🛡 Web Security Videos
• Web Security
🦠 HTTP Videos
• HTTP
🐍 Python Videos
• Python by Example
🔆 Javascript Videos
• Javascript by Example
👾 Discord Server
Support me on PayPal
bit.ly/33ENps4
Become a Patreon
/ hnasr
Stay Awesome,
Hussein

Comments: 35
@pajotrus 4 years ago
Better cliffhanger than Netflix
@hnasr 4 years ago
Piotr J next week 😂 stay tuned
@CrazyColdCrapper 1 year ago
That teaser comparison table...
@slick-riq 2 years ago
Love how you add enthusiastic storytelling to your videos!
@noherczeg 2 years ago
Technically you can disconnect after the upload succeeds with request/response as well. You don't need pub-sub for this; you just need to organize your processes/services differently.
@moulayeabderrahmaneelimbit1728 1 year ago
Good point, but the idea here is decoupling (no service knows about the existence of the other services): the service just publishes to the queue and the other services work event-based.
@romantsyupryk3009 4 years ago
Thanks so much for this video tutorial.
@mohammadnashashibi2942 4 months ago
Awesome videos as always! Just one note: the diagram presents the flow as if the broker is holding the actual raw video data for all subscribers, which could be misleading. In reality the UPLOAD service will typically store the raw video data in some sort of data storage (e.g. an AWS S3 bucket), then publish a message to the broker containing the event type and video URL (something like "topic=new_uploaded; video_url=..."), so any subscriber consuming this message can fetch the video, do whatever it needs to do, and repeat the process until the flow ends at the notifications service.
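The pattern described in this comment can be sketched in a few lines. This is a minimal, single-process illustration, not a real Kafka or S3 API: `blob_store`, `topic`, and the event name `new_uploaded` are all hypothetical stand-ins.

```python
# In-memory sketch: the broker carries only a small event message;
# the heavy raw bytes live in object storage.

blob_store = {}   # stand-in for an object store such as an S3 bucket
topic = []        # stand-in for a broker topic

def upload_service(video_id, raw_bytes):
    """Persist the heavy payload, then publish a lightweight event."""
    url = f"videos/{video_id}"
    blob_store[url] = raw_bytes
    topic.append({"type": "new_uploaded", "video_url": url})

def compression_service():
    """A subscriber fetches the blob by URL; the message itself stays tiny."""
    for event in topic:
        if event["type"] == "new_uploaded":
            raw = blob_store[event["video_url"]]
            # pretend "compression": keep a tenth of the bytes
            blob_store[event["video_url"] + ".z"] = raw[: len(raw) // 10]

upload_service("abc123", b"x" * 1000)
compression_service()
print(topic[0])  # {'type': 'new_uploaded', 'video_url': 'videos/abc123'}
```

Note that the message on the topic is a few dozen bytes regardless of how large the video is; only the consumers that actually need the bytes touch the object store.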
@ShreyaSharma-vg7eb 1 year ago
It was funny when the line came up "Gary V, producing content like there is no tomorrow" xD. It was nice, I like your content.
@the_sweet_heaven 3 years ago
Very good explanation, love it. Do you also have a Java demonstration of it?
@noahwilliams8918 4 years ago
The next time you make another demo site using Steve's Brothel, you should animate it and make "Slam in there and Subscribe" fly across the screen!
@rampandey191 4 years ago
Great content mate
@hnasr 4 years ago
🙏
@noahwilliams8918 4 years ago
Ooooooobviously this guy has to be up all the time - YEAH, YOU BET I CAUGHT THAT "THAT'S WHAT SHE SAID" 🤣🤣🤣 I mean, what else am I realistically gonna notice at 1am while listening to a software engineering video that I'm too tired to actually parse!
@a_k__ 4 years ago
Thank you for the informative video. I have a question: do you literally publish the video to Kafka, or just a text/string saying the job is done?
@Oswee 4 years ago
Kafka is not the storage; the video is stored on appropriate storage. Kafka is used as a middleman to capture who did what. For example, the compression service sees a "VIDEO_UPLOADED" event, so it knows it can fetch that video from the video storage and compress it. Once it finishes compressing, it publishes a new event like "VIDEO_COMPRESSED". That's it. The compression service doesn't care about any other service or who will consume its emitted events; it just emits the event when the job is done. The beauty of Kafka is that you don't need to query it: Kafka will "notify" any subscriber that there is a new event. I think it's worth doing some research on what Kafka is and how it works. :)
@a_k__ 4 years ago
Dzintars Klavins I have always had this question. If Kafka works the way you mentioned, why not define some endpoints in your microservice (like Express endpoints) that call different functions when they get a GET or POST request? I assume the GET request is more expensive than publishing an event in Kafka, but we are introducing a new service and layer of abstraction (a violation of YAGNI?). Or maybe Kafka is helping in terms of separation of concerns. What do you think?
@Oswee 4 years ago
@@a_k__ It's mostly about distributed services. No single service should rely on any other service directly. For example, a payment service can blow up, but you are still able to sell products and capture orders (capturing orders matters more than losing them); when the payment service is restored, you can process the unprocessed orders. Sorry for the very dumb and simple example. It's also about team management: you can (and should) build a monolithic service if you have a small team, but what if you have 10,000 developers? Or even 100? They can't wait on each other while some functionality is implemented by another team. You can do it like you said, and gRPC almost works that way: you have a gRPC client and server in every microservice, and you just call functions (remote procedure calls) which return data. But what if the RPC endpoint is down? Do you just throw away that request? That's OK if it was a simple GET request, but what if it was a POST or something else? Will you throw away that payload? This is where Kafka can come in. Instead of directly calling another service, you emit events in the past tense: "THIS_HAPPENED + payload", "THAT_HAPPENED + payload", and so on. It's also important to separate "GET" from "POST", i.e. reads from writes. In most cases you have a separate service for "Create Order" and a separate service for "List Orders"; if the write side goes down, your read side can still serve data. On the query side you can call the "Orders Query Service" directly via any API call (I still prefer gRPC), but I would still emit events, in parallel, recording who requested what data. Another reason many use Kafka is the ability to use it as an "event store" (not quite the right name): you can store all the events and have a huge audit log. At any time you can see what state you were in 2 weeks ago, a year ago, and so on. THIS IS HUGE!!! It means you have all the historical data.
If you come up with a new idea and need data for it, you can replay all the historical events and build up the required data view or materialized view: combine bits and pieces from different topics and build the new data with all its history. This is also HUGE for analytics teams. In the banking industry this is a no-brainer; they all use some stream processing. But you should understand that, as a single developer, this is HARD!!! Really, really hard!!! Not so much from the development perspective as from the management perspective. Once you move in this direction, you will instantly encounter the hassle of service discovery, service security, CI/CD, automation, etc., and it will just eat up a ton of time. If you are not in the banking, transportation, or enterprise arena, you shouldn't look in this direction; you can use Kafka just as an event log if you like. If you are interested, look up "CQRS", "Event Sourcing", "Event-Driven Architecture", and "DDD - Domain-Driven Design"; those are good starting points. EDIT: Forgot to mention: not directly related to Kafka, but all this also lets you write each service in a suitable language. Need to serve HTTP? Take Go. Need to do compression? Take C. And so on. Every team is independent, and Kafka is their middleman for data exchange.
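The "replay history to build a new view" idea from this comment fits in a few lines. This is a toy event-sourcing sketch; the event names and fields are invented for illustration, and a real system would read the log from Kafka rather than a Python list.

```python
# The append-only log is the source of truth; any view can be rebuilt
# from it, including "as of" some earlier point in time.

event_log = [
    {"ts": 1, "type": "ORDER_PLACED", "order": "A", "amount": 30},
    {"ts": 2, "type": "ORDER_PLACED", "order": "B", "amount": 50},
    {"ts": 3, "type": "ORDER_PAID",   "order": "A"},
]

def project_unpaid(events):
    """Rebuild the 'unpaid orders' view by replaying events in order."""
    unpaid = {}
    for e in events:
        if e["type"] == "ORDER_PLACED":
            unpaid[e["order"]] = e["amount"]
        elif e["type"] == "ORDER_PAID":
            unpaid.pop(e["order"], None)
    return unpaid

# Replay the full log, or only the events up to some point in time:
print(project_unpaid(event_log))                               # {'B': 50}
print(project_unpaid([e for e in event_log if e["ts"] <= 2]))  # {'A': 30, 'B': 50}
```

The second call is the "what state were we in 2 weeks ago" trick: the same projection function over a truncated slice of the log yields the historical view.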
@a_k__ 4 years ago
Dzintars Klavins Thank you for your thorough answer; that was very helpful. I can see how useful event sourcing can be: you can rewind to a point back in time, say 2 weeks ago, just by looking at the logs, and see what happened when you had bugs.
@Oswee 4 years ago
@@a_k__ Yes, that's a benefit as well. You can fix many issues just by replaying events.
@hasan0770816268 1 year ago
6:32 smooth "that's what she said" joke
@toekneema 3 years ago
Are we gonna just let that "that's what she said" slide? 6:33
@falconheavy595 2 years ago
Do Facebook notifications also use Kafka?
@nilanjansarkar100 4 years ago
Nasser, request you to please make a video on CQRS. It needs your de-jargoning :P
@hnasr 4 years ago
This is one of the most confusing concepts, and it's full of jargon; I need to understand it fully before I make a video on it.
@nilanjansarkar100 4 years ago
@@hnasr This got even you confused! Whoa...
@talesara74 2 years ago
Will you actually publish the actual contents, or metadata events about the contents?
@ruixue6955 2 years ago
1:01 4:14 Message Queue topics/channel 4:40 middle layer 6:35 Topic
@jackmaison4209 4 years ago
I'm confused. Is this a reupload?
@hnasr 4 years ago
Jack Maison yes, it's a short highlight from a 1-hour video (pub/sub) answering a common question.
@jesseinit 4 years ago
Get a bit practical with these concepts
@Oswee 4 years ago
I don't think it's possible. First, it's not his channel format. Second, to show this in practice involves a crazy number of variables and a lot of work. Do you want to do it in Go, Rust, Java, or all of them? Do you need service discovery, tracing, context passing? Kafka alone is a huge topic. Then: how do you package those services, and where and how do you run them? Then security and service identity. Basically, there are too many things to cover and show with more-or-less good practices. But if you want to try it, I can suggest a simple example. Make a WebSocket API gateway: take a request and pass it to a Command service, which simply publishes an event to a Kafka topic. Then make an Event Handler service that subscribes to that topic; once it receives a new message, it applies some logic to the message data and publishes the outcome to a new topic. Then make a Projection service, which consumes the Event Handler's outcome topic and stores the data in some database (can be any: MariaDB, Cassandra, whatever), publishing an event when the data is stored in the DB successfully. Then make a Query Handler service: subscribe it to the Projection's outcome topic, implement a gRPC server, and return materialized-view data from the DB when the WebSocket API requests it. In the WebSocket API service, implement a Kafka consumer and a gRPC client, and make it listen to the Projection's outcome topic. Once a projection event happens, make an RPC request to the Query service, which returns that materialized view; take the payload and broadcast it back to the WebSocket channel. That's it: a super-oversimplified but fairly scalable, distributed, event-driven microservice architecture. :) Have fun. :) (Sorry for the mistakes, I'm on mobile here.)
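The chain this comment describes (command → event handler → projection → query) can be sketched in a single process with lists standing in for Kafka topics. Everything here is hypothetical: the topic names, the "THIS_HAPPENED" event type, and the dict standing in for the projection's database.

```python
# Over-simplified single-process version of the suggested pipeline.
# In a real deployment each service is a separate process and each
# list below is a Kafka topic.

topics = {"commands": [], "processed": [], "projected": []}
db = {}  # stand-in for the Projection service's database

def command_service(payload):
    """The gateway hands the request here; it only publishes an event."""
    topics["commands"].append({"type": "THIS_HAPPENED", "payload": payload})

def event_handler():
    """Consumes commands, applies logic, publishes the outcome."""
    for e in topics["commands"]:
        topics["processed"].append({**e, "handled": True})

def projection_service():
    """Consumes outcomes, stores a materialized view, announces success."""
    for e in topics["processed"]:
        db[e["payload"]["id"]] = e["payload"]
        topics["projected"].append({"type": "STORED", "id": e["payload"]["id"]})

def query_handler(item_id):
    """Serves the materialized view (gRPC server in the real version)."""
    return db.get(item_id)

command_service({"id": "42", "value": "hello"})
event_handler()
projection_service()
print(query_handler("42"))  # {'id': '42', 'value': 'hello'}
```

Note how no function calls another service directly: each stage only reads one topic and writes another, which is the decoupling the thread above is about.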