Event-Driven Architecture (EDA) vs Request/Response (RR)

99,316 views

Confluent

A day ago

In this video, Adam Bellemare compares and contrasts Event-Driven and Request-Driven Architectures to give you a better idea of the tradeoffs and benefits involved with each.
To learn more about Event-Driven Architectures, check out Adam’s video on the 4 Key Types of EDA: • 4 Key Types of Event-D...
Many developers start in the synchronous request-response (RR) world, using REST and RPC to build inter-service communication. But issues with tight service-to-service coupling, scalability, fan-out sensitivity, and data access can still remain.
In contrast, Event-Driven Architectures (EDA) provide a powerful way to decouple services in both time and space, letting you subscribe and react to the events that matter most for your line of business.
RELATED RESOURCES
► Tips to Help your Event-Driven Architecture Succeed - • How to Unlock the Powe...
► Introduction to Apache Kafka - cnfl.io/3Q7Pdn8
► Introduction to Kafka Connect - cnfl.io/3W24w4C
► Designing Events and Event Streams - cnfl.io/3vYIWU5
► Event Sourcing and Event Storage with Apache Kafka - cnfl.io/3JlXoso
► Designing Event-Driven Microservices - cnfl.io/444Kpoz
CHAPTERS
00:00 - Intro
01:04 - Reactivity
02:11 - Coupling in Space and Time
03:20 - Consistency
04:32 - Historical State
06:36 - Architectural Flexibility
09:09 - Data Access and Data Reuse
10:56 - Summary
-
ABOUT CONFLUENT
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion - designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.
#eventdrivenarchitecture #apachekafka #kafka #confluent #rest #rpc

Comments: 76
@ConfluentDevXTeam A month ago
Adam here. I took a crack at explaining some of the key differences between two popular interservice architectures. Let me know what you think, and if there are any other videos you'd like to see.
@vnadkarni A month ago
Love it! Thank you for making this, Adam.
@raticus79 A month ago
Good work!
@Kostrytskyy A month ago
Very clean, thank you!
@kfliden A month ago
Thanks, best illustration of the two different architectures I've seen so far!
@mario_luis_dev A month ago
you guys are making some high quality content here. Keep up the great work! 👏
@alizeynalov7395 A month ago
Thank you, Adam. An amazing, short but very valuable video. Please continue enlightening us. Also, I feel that this way of presenting is much better than not seeing the tutor, just the whiteboard. It makes me feel engaged.
@ConfluentDevXTeam A month ago
Hey, thanks for the feedback! I'm going to have a few more coming out in the next month or so, so stay tuned for more.
@iChrisBirch 14 days ago
I agree, I like seeing someone talking to me and drawing as they are explaining, it's much more engaging.
@Fikusiklol A month ago
Hello, Adam! I absolutely love and admire your effort (and that of the other Confluent speakers) in making these very complex topics so easy to understand and grasp. Absolute best out there. Laconic and informative. Big thanks!
@adambellemare6136 A month ago
Adam here. Thanks! I appreciate the kind words.
@iChrisBirch 14 days ago
Very well explained and the diagrams helped a lot. Great pacing, I didn't get lost in words and didn't feel like I needed to play it at 1.5x speed like a lot of videos. I liked the lecture style of this vs many 'content creators' that have visually beautiful videos with animations and graphics that in the end distract from the topic. Great job!
@ConfluentDevXTeam 9 days ago
Thanks Chris, I appreciate the kind words - I'm going to have a few more coming out next month, I hope they land well with you.
@dream_emulator A month ago
Amazing explanation! Thanks for this 👍😃
@mr.daniish A month ago
Adam drops another knowledge bomb! Respect
@ConfluentDevXTeam A month ago
🤓
@gulhermepereira249 18 days ago
Hi, Adam. Thank you for the great explanation, but there's another important part missing: the cost. Could you please go over that in a following video?
@ConfluentDevXTeam 16 days ago
Adam here - that's a good request! I will see what I can do, but I think that it might end up being a better blog post than a video, mostly because of figures, tables, etc. I will think on what I can do, but thank you for your request.
@TheNoahHein 29 days ago
Amazing video! Question about your setup, have you been teaching yourself to write backwards? My mind doesn’t quite wrap around how this video is filmed, it looks like the transparent “whiteboard” is in front, with you behind it writing.
@ConfluentDevXTeam 29 days ago
Hi, Adam here. I'm using what's called a "lightboard", which is effectively just a sheet of glass, lit by a strip of LEDs around the edge. The camera shoots through the board, recording everything backwards. In post-processing, we flip the video on its vertical axis, in effect "mirroring" the view. Notice how I am using my left hand to draw in the video - in reality, I was using my right hand! It's just that when we flip it in post, it also flips me :) If you want a better description, try searching "lightboard" on youtube - I know there are several people who have done "how does it work" type videos.
@yasirnawaz2798 16 days ago
Just awesome!
@reneschindhelm4482 24 days ago
Do you keep the entire history of events (1 create, N update, maybe 1 delete event) for each and every document/object/… in those topics? How does that affect storage/performance over time? Or is there some way to compress/discard past events, say, for example, by regularly creating snapshots of the state?
@ConfluentDevXTeam 24 days ago
Adam here. Okay, great question. What you're describing is what I call "delta" events, where the events describe a difference. You've correctly identified that to get the current state, you'd have to apply all the events, in order - in other words, an event-sourcing style pattern. Over time the topic grows unbounded, based only on the quantity of delta events and not on the actual domain size. For example, if you have a billion items added to a shopping cart, and a billion items removed, then you'll have 2 billion events just for one cart.

My recommendation is that you look at using event-carried state transfer (warning, incoming advertisement). I wrote about it in a Confluent course that you may find helpful (developer.confluent.io/courses/event-design/fact-vs-delta-events/), back when I had a lot more hair.

In terms of your observation of compressing and discarding the past - this is precisely what we would do with compaction for state/fact-based events. However, if you try to do it with deltas, it becomes a lot more challenging. For one, you need to generate the state that you're going to store - basically exactly what a state/fact event already is! Two, you need to generate a snapshot of the current state that perfectly aligns with specific offsets in your Kafka topic. Three, you need to make sure that every client can access and read the snapshot, then switch over to the real-time stream - this is actually challenging to do when you're following a self-service architecture, and it limits the technology choices your consumers can use. Four, you'll need to perfectly delete the data in the snapshot and the topic, such that there isn't an accidental overlap between the two - actually very hard to do when you consider race conditions, new consumers, and atomicity.

I can go on, but the gist is that it's a hard way to communicate state - you're much better off using the principles of event-carried state transfer and state/fact type events.
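A minimal sketch of the delta-vs-fact distinction, using the confluent_kafka Python client. The topic names and record fields here are made-up assumptions, not anything from the video:

```python
# Sketch only: contrasts a "delta" event with a "fact"/state event for a
# shopping cart. Topic names and fields are illustrative assumptions.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

# Delta event: describes a change. Consumers must replay every delta, in
# order, to rebuild the current cart, and the topic grows without bound.
delta = {"cart_id": "cart-123", "action": "ITEM_ADDED", "item_id": "sku-42", "qty": 1}
producer.produce("cart-deltas", key="cart-123", value=json.dumps(delta))

# Fact event (event-carried state transfer): carries the whole current state,
# keyed by cart ID, so a compacted topic can keep only the latest record per
# cart and new consumers never need the full history.
fact = {"cart_id": "cart-123", "items": [{"item_id": "sku-42", "qty": 1}], "total": 19.99}
producer.produce("cart-state", key="cart-123", value=json.dumps(fact))

producer.flush()
```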
@AftercastGames 14 hours ago
Also, keep in mind that even in a traditional system where you only store the current state of each entity in a database, you’re almost always going to want to store a historical log of changes to those entities somewhere. So the way that I look at it, you’re probably going to end up storing both the current state and the historical changes either way. Event driven design just stores the changes first.
@jairajsahgal7101 5 days ago
Thank you
@ajitnandakumar 17 hours ago
Hi Adam, on Reactivity, I understand the difference between async and request/response, but what is the conclusion about the difference in reactivity between the two architectures? This was not clear.
@raghavshashtri1817 28 days ago
Hi Adam, I wanted to check: what are the ways to get the completion status from the fulfillment store in EDA? I can only think of polling, which I believe isn't recommended. Could you suggest the best approach?
@ConfluentDevXTeam 26 days ago
Adam here. Couple of options really, depending on your requirements. If you have a client (webpage) that needs to get immediate fulfillment status to present to the consumer, you could have the client connect to Fulfillment via, say, a REST call and simply ask. Your fulfillment service would need to provide support for this interface, of course, meaning it is both an RR and an EDA service (this is okay).

A second option is that you emit events from Fulfillment whenever the status changes (eg: In Progress / Completed / Partially Completed). Other services could listen to the fulfillment results and make their own decisions. You could also sink the Fulfillment data to a simple key-value store (eg, DynamoDB), and use a simple service to provide on-demand RR answers to the current Fulfillment status (minus a short latency for the event propagation). This pattern is helpful because it provides multiple services the stream of events to do their own work as they see fit.

There is no "right" answer here. It depends on your customer needs. I _prefer_ the second option because it enables looser coupling, replayability, and all of the other things I mentioned in this video. I can also decouple serving business processing needs (making sure orders are fulfilled) from end-user customer availability needs (querying to see their statuses), such that one cannot interfere with the other. If the fulfillment service crashes because of bugs in my code, I don't need to worry about DynamoDB & the webserver failing to provide my customers with the last known status of their order.
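A rough sketch of the second option described above: a consumer that materializes the latest fulfillment status per order into a key-value store that a thin query service could read. The topic name, event schema, and the in-memory dict standing in for DynamoDB are all assumptions:

```python
# Sketch only: materialize fulfillment-status events into a key-value view.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fulfillment-status-materializer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["fulfillment-status"])

latest_status = {}  # stand-in for DynamoDB or another key-value store

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Keep only the most recent status per order; a simple REST endpoint
        # can answer "what is the status of order X?" from this store.
        latest_status[event["order_id"]] = event["status"]
finally:
    consumer.close()
```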
@mohammadshweiki1511 A month ago
My question is about his setup. The board he is using to write in front of the camera is an acrylic board, correct? Can anyone correct me if I am wrong here? And what is the best marker to use? I deliver online training and consultation and I want to use the same method.
@ConfluentDevXTeam A month ago
Adam here. This is a homemade board that I made using modular aluminum framing, and initially I tried using a 6' x 4' sheet of acrylic. However, I found that my acrylic sheet would "fog up" if I turned the lights on too bright, and furthermore, it was very easy to scratch. I tried very hard to not scratch it, but within a week of light use I ended up with a few deep enough scratches that I couldn't hide them in post-processing. At that point, I switched to locally sourced low-lead glass (aka Starphire type), 1/4" thick. The sheet weighs about 55 lbs at a size of 64" x 44" (I made it smaller to match the actual 4K resolution), and I had to modify the frame to install some glass clamps. But it's much more durable, easier to clean, and clearer at higher brightness. I think I paid about $220 for the acrylic sheet, and $550 for the low-lead glass (both delivered). I would recommend skipping acrylic, as it can be hard to source a clear one and it just scratches too easily for sustained high-definition usage.
@ConfluentDevXTeam A month ago
Oh, and in terms of markers, I found that "EXPO NEON Marker Dry-Eraser" worked best in terms of contrast and visibility. I've also tried some liquid-chalk markers to mixed results, some show up well, some do not.
@JitenPalaparthi 6 days ago
What is the device you can write on like that? Is it just a camera, or...
@ConfluentDevXTeam 5 days ago
A lightboard!
@davaigo2170 A month ago
For EDA, do we need to use CDC technology?
@ConfluentDevXTeam A month ago
Adam here. No, you don't need to, though it is a good way to get started by bootstrapping data into your Kafka broker. Some applications, such as one made with Kafka Streams, Flink, or FlinkSQL, can natively emit their own events to Kafka - events as input, and events as output. But if you're starting with absolutely no event-driven applications or event streams at all, then learning how to use CDC (such as Debezium) is a very good place to start.
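For context, registering a CDC connector is typically done through the Kafka Connect REST API. The sketch below is purely illustrative: the hostnames, credentials, database, and table names are made up, and the config keys follow recent Debezium Postgres connector releases:

```python
# Sketch only: register a Debezium Postgres source connector with Kafka
# Connect so database changes start flowing into Kafka topics.
import requests

connector = {
    "name": "storefront-orders-cdc",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "storefront-db",   # placeholder host
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "secret",          # placeholder credentials
        "database.dbname": "storefront",
        # In Debezium 2.x this prefixes the topics, e.g. storefront.public.orders
        "topic.prefix": "storefront",
        "table.include.list": "public.orders",
    },
}

resp = requests.post("http://kafka-connect:8083/connectors", json=connector)
resp.raise_for_status()
```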
@SonAyoD A month ago
How would you pick one over the other? What are the use cases?
@ConfluentDevXTeam A month ago
Adam here. There are many ways you _can_ solve problems, which means that everyone has a different opinion on the matter. Given that I won't be able to explore this all in a youtube comment, we can start with the certainties. You'll be very safe using Request-Response and RPC-style communication whenever you need client-server communication. You'll also be safe if you use event-driven architecture to drive services that don't require blocking low-latency calls, like managing real-product inventory, handling advertising campaigns, and processing orders and payments. The whole middle ground between these two is where things can get muddy, where you'll get different answers depending on who you ask, and ultimately you'll end up with "it depends". What I like about EDAs is that if you invest a bit of time and effort into building well-defined event-streams, then you unlock the possibility to choose whichever is best for the task at hand - RR OR EDA.
@SonAyoD A month ago
@@ConfluentDevXTeam thanks for the explanation!
@FecayDuncan A month ago
I prefer an orchestrating ordering process that triggers events for underlying services to act on. These services will obtain the necessary data by making API calls to other services. Highly flexible through the use of process driven approach. Decoupled through event driven services. Consistency through well defined APIs.
@ConfluentDevXTeam A month ago
Adam here. If you're using events as triggers to make API calls to other services, you lose replayability of the events. I've seen services work this way, and it's fine if you don't care about history or about high load on your servers hosting the API calls.
@FecayDuncan A month ago
@@ConfluentDevXTeam my orchestrator can replay itself based on history and each event driven service can scale up horizontally to consume more work.
@Fikusiklol A month ago
Why would you orchestrate some business process based on events (I assume), if services are still making sync calls to other APIs? That feels like orchestrated choreography. Data and temporal coupling are still there. Could you please explain the underlying reason to do so?
@kohlimhg A month ago
There are sometimes cases where the full data a service needs to process an event would be too large for e.g. a Kafka message. In that special case the service could obtain additional data via a synchronous call to another API. If the data provided by the API are immutable then replayability won't be lost.
@adambellemare6136 A month ago
​@@kohlimhg Yep, we also call it "claim cheque/check" pattern. The complexity with this pattern is stitching together the permissions and access controls between the Kafka record and the system that you present the claim check to. One good trick is to put all the work in the serializer and deserializer, such that it's transparent to the consumer.
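A sketch of how that claim-check idea could look, with the indirection hidden in the (de)serializer so it stays transparent to the consumer. S3/boto3, the bucket name, and the size threshold are assumptions; any shared object store would work:

```python
# Sketch only: claim-check style (de)serializers for oversized payloads.
import json
import uuid
import boto3

BUCKET = "claim-check-payloads"   # hypothetical bucket
MAX_INLINE_BYTES = 512 * 1024     # keep small payloads inline in the record

s3 = boto3.client("s3")

def serialize(payload: dict) -> bytes:
    raw = json.dumps(payload).encode("utf-8")
    if len(raw) <= MAX_INLINE_BYTES:
        return json.dumps({"inline": payload}).encode("utf-8")
    # Too big for a Kafka record: park it in object storage, ship a reference.
    key = f"payloads/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=raw)
    return json.dumps({"claim_check": key}).encode("utf-8")

def deserialize(value: bytes) -> dict:
    envelope = json.loads(value)
    if "inline" in envelope:
        return envelope["inline"]
    # Redeem the claim check; note the permissions stitching mentioned above.
    obj = s3.get_object(Bucket=BUCKET, Key=envelope["claim_check"])
    return json.loads(obj["Body"].read())
```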
@spreadpeaceofmindful A month ago
In EDA, 1) How do we verify the transaction went through? (in the event that subscribers lose an event) 2) In a microservices scenario where multiple nodes are running, how do we prevent duplicates? (how to stop processing the same order twice)
@ConfluentDevXTeam A month ago
Adam here. 1) If you're talking about distributed transactions, you're going to need to look at the saga pattern. The subscribers won't "lose" an event unless they have written buggy code, in which case it's their responsibility to fix it. 2) One option is to check out "effectively/exactly-once processing" in Apache Kafka. It'll ensure your system doesn't create duplicate events, regardless of if or when it fails in its processing. However, it's still possible to cause side effects (like calling an external REST API) each time you process the event, despite exactly-once. However, this is the same as if you were to call a REST API, get a timeout, and retry calling it - resulting in duplicate processing. My advice is to make your code idempotent so that duplicate processing has no side effects. If you can't do that, then you're going to have to build a system to guard your consumer against duplicate processing, such as consuming records 1 by 1 (slow) or using durable state to keep track of precisely which records have been processed and which have not via atomic updates.
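A small sketch of the "durable state" de-duplication option mentioned above: record processed order IDs with an atomic insert so a redelivered event becomes a no-op. SQLite and the order_id field are assumptions; any transactional store would do:

```python
# Sketch only: idempotent event handling via a processed-ID table.
import json
import sqlite3

db = sqlite3.connect("processed.db")
db.execute("CREATE TABLE IF NOT EXISTS processed (order_id TEXT PRIMARY KEY)")

def process_order(event: dict) -> None:
    print("processing", event["order_id"])  # non-idempotent side effects go here

def handle(msg_value: bytes) -> None:
    event = json.loads(msg_value)
    with db:  # one transaction per event: dedup record and work commit together
        cur = db.execute(
            "INSERT OR IGNORE INTO processed (order_id) VALUES (?)",
            (event["order_id"],),
        )
        if cur.rowcount == 0:
            return  # already processed: skip the side effects
        process_order(event)
```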
@lamintouray7333 A month ago
I am a software engineering student and I found this quite interesting. Is there any academic/research paper out there that discusses this topic in detail that you could point out, perhaps? Thank you.
@ConfluentDevXTeam A month ago
Adam here. I don't know of any whitepaper for this subject, particularly as this is really a "versus" type of discussion. You may find some luck searching for one or the other terms (EDA, or RR) on its own. Otherwise, I'd just be googling this for you. Lots of this is stuff I've picked up in my own experience over the years, listening to others, etc.
@sorvex9 6 days ago
A paper, for kafka, kekw
@AlanGramont 11 days ago
Your storefront probably should NOT be rewriting order changes that have reached complete. They should create a modification record. The view of the order will be a merged view of the original record and all modified records. In a document database, this is one collection showing the "current" order representing the merge and a table of changes over time. The changes can be differences but it also could just be the complete order as a second record with a version. In this way, the storefront can always provide order history without needing to pull it from external sources.
@ConfluentDevXTeam 11 days ago
Adam here. Some different philosophies to unpack.

One option is to have multiple topics, one with each status. Another is to have multiple statuses/modification records (different schemas in the same topic). In both cases, we put the work on the consumers to reconcile the data (seemingly what you recommend with a modification record). The difficult part is that the consumers must then know about each of these topics and event types, and be prepared to reconcile them without making any interpretation, ordering, or logic mistakes. It results in a very tightly coupled workflow (think Event Sourcing). I personally advise against this methodology, as saving a few bytes over the wire isn't worth the extra complexity for most use cases.

A second option is to produce the record with the updated status from the Storefront service (single writer principle - Storefront owns the topic, it publishes the updates). However, Storefront must then manage the lifecycle of the order through the entire system, which is more of an orchestrational responsibility, and less about its main purpose of taking orders from customers.

A third option is to build a purpose-built orchestrator to manage the entire order fulfillment workflow. Storefront emits the initial order to this orchestrator, and is then done. Subsequent changes to the order are managed by the orchestrator. This is beyond the scope of a youtube comment, but I wanted to include it for clarity.

A fourth option is to extend the third option with multiple orchestrators for separate parts of the fulfillment workflow, while also relying on loose choreography between major workflow chunks. This option tends to be what many mature event-driven organizations end up with - orchestration for critical workflows (and sub-workflows), and choreography for less critical and looser-coupled systems. Again, beyond the youtube comment scope.

I've gone into the State vs. Delta subject in my Confluent hosted course - but you can find the KZbin video here if you're so interested: kzbin.info/www/bejne/rIuVnoqCf8mFl6M
@ConfluentDevXTeam 11 days ago
Oops one more thing - The nice thing about Kafka is that we can decide what to keep and what to compact away with State events. So for example, we may decide to keep all Orders for 1 year uncompacted. Events older than 1 year get compacted so that only the final status remains. For operational use-cases, we'd have to decide how much history we care about in the stream. For analytical purposes, we can just materialize everything as-is to an append-only Iceberg table. Plug in your own columnar processor for query, and you have full access to every state transition of your order for all of time.
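One way to approximate "keep recent history, compact older records" is a compacted topic whose records only become eligible for compaction after a minimum age. The sketch below uses the confluent_kafka AdminClient; the topic name, partition counts, and the one-year figure are assumptions:

```python
# Sketch only: a compacted orders topic where records younger than a year
# are never compacted away, so recent state transitions remain replayable.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

one_year_ms = str(365 * 24 * 60 * 60 * 1000)
topic = NewTopic(
    "orders",
    num_partitions=6,
    replication_factor=3,
    config={
        "cleanup.policy": "compact",
        # Records must be at least this old before compaction may collapse
        # them down to the latest value per order key.
        "min.compaction.lag.ms": one_year_ms,
    },
)

futures = admin.create_topics([topic])
futures["orders"].result()  # raises if topic creation failed
```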
@jgrote 18 days ago
I was honestly wondering how you learned how to write backwards so effectively until I realized you just flipped the video...
@ConfluentDevXTeam 16 days ago
Adam here. Yep, honestly, the first lightboard video I saw I thought the same thing. :)
@AftercastGames 14 hours ago
It also helps that everyone is left handed. 😉 Actually, I never noticed until this video that when you walk behind the text, you block the black background, which makes it impossible to read the text. Well, I guess that’s only a problem if you’re white. 🤔 I guess there are some advantages to being black after all… Does this mean that light boards are racist? Well, in any case, great video. 👍
@qapplor 28 days ago
I don't understand the need for Kafka here; the storefront could easily keep a history of order data to use for inventory, data lakes, ML, etc.? Also, depending on how the request-response model's architecture is planned, it could work with "events" as well. Just don't design it to require an immediate response, but rather poll for a list of the fulfilled orders regularly and update its "order status" attribute.

If there is a clear boundary between req-res and EDA, I still can't see it. It all depends on how it's implemented, right? In the EDA example, the storefront would at some point need to display the fulfilled order, so it still needs to consume "responses" from the fulfillment; it's just asking Kafka instead of the fulfillment service. You still need to define a data structure for the event and hope all your future applications will be able to consume it; it's still a hard contract. Isn't it true you could create an asynchronous req-res application? The immediate need for a response seems contrived and a beginner's mistake, frankly.
@ConfluentDevXTeam 26 days ago
FWIW, you are unlikely to run your data lake off a single operational database. Shopify (where I worked previously) had several hundred very large sharded databases to power their entire storefront experience for operations. Data lakes were a whole other story, and there was no way that we could process the queries we needed given the production setup. If all of your data can fit in a single DB and you can do your analytics in there as well, then go for it - I suggest only adding complexity as required, keep it simple where possible. > In the EDA example, the storefront would at some point need to display the fulfilled order, Storefront is not required to display the fulfilled order. There are many ways to communicate this information back to the client, such as using a modular frontend with different services powering different aspects of the UI. > Isn't it true you could create an asynchronous req-res Application? Yes. There are many ways to build services, and your answer is always going to be "it depends". > If there is a clear boundary between req-res and EDA, I still can't see it. All depends on how it's implemented, right? Event-driven architectures, such as those provided by Kafka, enable producers and consumers to decouple via the event stream / Kafka topic. Multiple consumers can use that data as they see fit for their own purposes. Services complete the work at their own rate, and can independently scale up and down depending on needs. If a service dies, it can resume from where it left off in the topic. In contrast with RR, each service must request and respond data. In flight requests are lost when services die. There is no common log to source data from, nor a history to see what happened over time. Latency can be lower, and yes you can also asynchronously process RR, but it's hard to get into the nuances in a short lightboard video.
@azizsafudin A month ago
Fulfilment is spelled without two Ls
@ConfluentDevXTeam A month ago
Adam here! I'm a Canadian, and I always spell it as "Fulfilment" (British/AUS/CAD/NZ Spelling) in my personal life. But my editors insisted I use "Fulfillment" (US English). I did more than a few takes where I had to stop and rewind before I decided to just start with the word written on the board.
@AftercastGames 14 hours ago
Ugh… don’t get me started! I’m in the US, and the one word I just can’t let go is “cancelled” vs “canceled”. That word should have two Ls, period. This is where I draw the line. I’m willing to die on this particular hill if I have to. 😏
@Nominal_GDP 8 hours ago
I feel like he's a bit biased
@vanelord 26 days ago
Man drawing boxes around a single point of failure (kafka) and saying it's loosely coupled. What's next? The cloud as a decentralized service? I think I just watched a very long product advertisement. Better to go learn some actor model and read about Carl Hewitt's work instead of watching this brainwash.
@ConfluentDevXTeam 25 days ago
Adam here. Loose coupling pertains to the producer and consumer being decoupled in time and space. Introducing Kafka into the equation provides the ability for producers and consumers to keep working, even if the former or the latter fail. While it is true that Kafka can fail as well, it is a resilient, distributed, and fault-tolerant system. If set up properly, you would need to experience multiple instance and/or AZ failures to get to a point where the service is not operational. Most people running their own Kafka in the cloud will run a multi-AZ distribution, and rely on the guarantees provided by AWS/GCP/Azure/Oracle/Etc. And while I know you didn't like the video, Carl Hewitt is indeed an impressive thinker and was ahead of his time.
@vanelord 24 days ago
@ConfluentDevXTeam Sorry for being harsh. I first watched the video, got frustrated, wrote the comment, and then looked at the author; I thought about removing this comment but I left it because it has a point. So I want to explain myself.

I don't like the video because it is very chaotic and says nothing about the impact of queue message sizes or message consistency for a single topic, but only complains about API consistency and presents RPC like technology from the 90s. Watching this video I feel like I've gone back in time, where servers are still using thread pools instead of event loops, everything is synchronous and uses WSDL and SOAP, and the queue is the answer to all the problems, when it's not.

The presentation of the queue's advantages is very chaotic, especially when you're presenting a queue as unique as kafka, which has message retention and message order consistency. People can ask themselves why not just use ZMQ or any other MQ or NATS, or just use websockets and graphql, because the author says REST is obsolete.

For me the presentation should start with a DAG: a single RPC node and two edges with the same message. Then the author should ask what happens if you want this message to be processed multiple times, or if this message needs to be reprocessed (draw the edge going back to the same node)? We have an answer for it - kafka - the queue with retention and guaranteed order. You don't have to mangle your RPC business logic anymore. Don't worry about performance, because kafka is battle tested by linkedin, where it was developed in the first place.

On the other hand, if you need a sequence of things happening with a single message and you have performance problems, or if your messages are very big, maybe it's better to use, for example, serverless solutions or DAG processing frameworks like Airflow, because if you put everything into a queue or everything in RPC you end up with the same problems, just in a different environment. It should be clearly stated that data design and understanding of data flow are more important than the underlying architecture and business logic, because everything is just a wrapper around data. If you don't understand where your data is coming from and where it's going, don't pick a solution.
@ConfluentDevXTeam 24 days ago
> ask themselves why not just use ZMQ or any other MQ or NATS or just use websocket and graphql because author says REST is obsolete. I don't discuss the others because the movie would balloon in size. There's a lot of tech out there, this is just one way to do it. Also, REST is certainly not obsolete, I hope that was not your takeaway.
@user-ng8wh8to5o 29 days ago
Event-driven architecture is a headache for developers. It has a lot of pitfalls and I recommend never doing it.
@ConfluentDevXTeam 29 days ago
Adam here. I'm sorry to hear you haven't had a good experience with EDAs, but I encourage you not to discard it as a tool from your kit. My experience has been quite the opposite, where developers love it and embrace it once they understand where to use it and what it's best suited for.
@andresdigi25 A day ago
I know all the smart people use kafka or other systems like SNS, SQS, etc. But something has always bothered me. Why can you not use a database to do that? A well-tuned Postgres in RDS, or Dynamo? I mean, to store events and let consumers and producers read/write from that DB? Why are kafka and all these systems preferred?
@ConfluentDevXTeam A day ago
Hey there, Adam here. A lot comes down to business needs and your willingness to support custom solutions. People like to use Kafka because it handles a lot of problems out of the box, without you having to customize anything. You can scale to millions of events per second, have high availability to survive individual node failures without degradation or halting of services, and get automatic expiry, cleanup, compaction, schema management, multi-topic transactions, security, access controls, rate limiting, and quotas, all in one package. There's no custom code required, and just as you would have a managed DB on RDS, for example, you can also get managed or serverless Kafka. Another big reason people like to use Kafka is the Kafka Connect framework. When your data is scattered across several systems, it can be problematic to access it in near real-time from other locations. Kafka Connect lets you pipe your data from your databases to Kafka directly, so that you can then read it or route it however you want to wherever you want. Accessing the data is simple, as you have a wide range of different Kafka clients in different languages to integrate into your applications. Not to turn this into an essay, but the gist is that Kafka offers a lot of functionality that you're not going to get if you try to roll your own message broker using Postgres or DynamoDB. It isn't to say you can't do it, but it's one of those things where it may just be simpler to get the tool built for the specific job (near realtime event brokering) and use that. But again, it depends heavily on your business needs!
@andresdigi25 A day ago
@@ConfluentDevXTeam Adam, thanks for taking the time to give such an in-depth response. BTW, I am not saying kafka is a bad idea. In my company I am one of the people who advocates a lot for using EDA with AWS services such as SNS, SQS, and EventBridge. But sometimes I question whether we are getting all the juice from these tools, or whether those tools under heavy conditions work better than a good customized solution like a DB in RDS. At the end of the day, brokers have a persistence layer to store simple messages. I guess this also comes down to the people you have, whether they have experience with these modern tools or are more aligned to classic DBMS systems, and also the money you have to spend on a solution and how big it will become in the future. Thanks again.
@AftercastGames 14 hours ago
I also prefer databases whenever possible, but the question is, what happens when the database goes down? Event queues have their advantages, especially when money is involved.
@arseniotedra4573 20 days ago
#iamAmillionaire#arseniotedra#aimglobal ❤ thanks so much ❤