Synadia RethinkConn 24 Intro
Comments
@uwontlikeit 19 hours ago
I didn't get a clear understanding of how a request to such a microservice would work on NATS. What's the request-response logic?
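For anyone else wondering: request-reply on core NATS is a subscription on the service side and a Request call on the caller side. A minimal sketch, assuming a local server and a made-up subject name (svc.echo), not anything from the video:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// "Microservice" side: queue-subscribe so multiple instances share the load,
	// and reply to the request's inbox.
	if _, err := nc.QueueSubscribe("svc.echo", "workers", func(m *nats.Msg) {
		m.Respond(append([]byte("echo: "), m.Data...))
	}); err != nil {
		log.Fatal(err)
	}

	// Caller side: publish a request and wait for a single reply (or time out).
	resp, err := nc.Request("svc.echo", []byte("hello"), time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Data))
}
```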
@uwontlikeit 19 hours ago
What is the max throughput? For the publisher it looks like it's only 11 messages per second?
@zuzelstein 4 days ago
Hm, I was watching this pretty sure it was already released, but no, GitHub says v2.11 is 66% complete.
@lordrift 6 days ago
When is the next video coming up? One of the key reasons Kafka is used is absolute ordering of messages to a unique consumer within a group. This video ended right before that topic for NATS, which is really one of the key missing points for NATS to replace Kafka when dealing with real-time, in-order event consumption distributed across multiple consumers in the same group.
@ryanhaney 7 days ago
Thanks for the excellent videos. FYI, at 1:50 the link to the video is missing.
@BarakaAndrew 7 days ago
I think the only issue now is documentation. NATS has a bunch of hidden features; I usually have to ask on Slack to get help because I can't find how to do something anywhere online.
@dankogulsoy 13 days ago
Awesome.
@johnboy14 17 days ago
This really looks like an awesome feature. It makes it much easier to expose at the edge.
@iluznm-ul2nz 17 days ago
This is a bit of an unfair comparison. To make an apples-to-apples comparison you'd have to use Kafka + ksqlDB.
@ro4lol 22 days ago
Cool. NATS is a great design.
@iluznm-ul2nz 25 days ago
4:44 Server-side message filtering is nice, but I wouldn't conclude that the opposite is "extremely inefficient". With Kafka, for example, messages are consumed in batches, so there isn't a huge overhead unless you're I/O-bound. Also, because the predicate is defined in the consumer itself, it can be unit tested. With server-side filtering I'd need an integration test to fully verify a consumer's behavior.
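For concreteness, a minimal sketch of the server-side filtering being discussed, using the nats.go jetstream API and assuming a stream named ORDERS that captures orders.> (the stream, consumer, and subject names are illustrative, not from the video):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Server-side filter: this consumer only ever receives orders.europe.*
	// messages, even though the ORDERS stream captures all of orders.>.
	cons, err := js.CreateOrUpdateConsumer(ctx, "ORDERS", jetstream.ConsumerConfig{
		Durable:       "europe-only",
		FilterSubject: "orders.europe.>",
	})
	if err != nil {
		log.Fatal(err)
	}

	cc, err := cons.Consume(func(msg jetstream.Msg) {
		fmt.Println("got:", msg.Subject())
		msg.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cc.Stop()

	select {} // block forever in this sketch
}
```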
@wa1gon 27 days ago
You talked about root keys and not using them. What is a root key, and how are they created?
@codingsafari 29 days ago
If you cancel the context, your function will return a context.Canceled error, and then you do os.Exit(1) due to the use of log.Fatal. So that graceful shutdown doesn't work.
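A minimal sketch of the fix the commenter is pointing at, assuming a hypothetical run() function that blocks on the context (not the video's exact code): treat context.Canceled as a clean shutdown instead of feeding it to log.Fatal.

```go
package main

import (
	"context"
	"errors"
	"log"
	"os"
	"os/signal"
)

// run stands in for a function that blocks until its context ends.
func run(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err() // returns context.Canceled after Ctrl-C
}

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	// log.Fatal(run(ctx)) would call os.Exit(1) even on a clean shutdown.
	if err := run(ctx); err != nil && !errors.Is(err, context.Canceled) {
		log.Fatal(err)
	}
	log.Println("shut down gracefully")
}
```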
@MoamlRH 1 month ago
Great video, thanks. It would be great to share your vim config here :)
@Ollinho12 1 month ago
S tier content. Extremely helpful and informative
@alirezaramezanpour9199 1 month ago
Hey, thank you for the great content. I just have a problem: I want to send a message to a specific container among a set of identical workers, and it seems there's no way to do this. If I'm wrong, please help me 😁
@NathanWienand 1 month ago
How have I not known about this for so long? Great work Jeremy, keep it up. #subbed
@ssouravs 1 month ago
All the very best for your future endeavours, Jeremy.
@Zro_2_One 1 month ago
Great work, Synadia team! Thank you, KeepApp Software Development
@dobeerman 1 month ago
Thank you for another insightful video! I really appreciate the effort you put into breaking down the differences between the two systems. It's always great to see deep dives into these technologies. However, I noticed a few points that could use some clarification.

First, I think you oversimplify by saying Kafka consumer groups lack state while JetStream consumers are stateful. In reality, Kafka consumer groups do manage state, particularly offsets, which are stored in a highly available way within Kafka itself, so consumer groups can resume processing from the correct position after failures.

Second, you suggest that Kafka requires extensive planning for partitions and that partitioning is cumbersome. I'd agree that partitioning needs careful consideration, but it is also Kafka's primary mechanism for scaling horizontally. Proper partitioning is critical to Kafka's ability to handle massive amounts of data efficiently, and Kafka provides tools to manage and rebalance partitions when necessary.

Finally, you claim that Kafka offers limited options for message consumption compared to NATS JetStream, particularly in filtering and controlling replay behavior. Actually, Kafka provides significant flexibility through consumer offsets, compacted topics, and Kafka Streams, which allow for complex processing, filtering, and windowing of data.

There are some other points to discuss, but these are the most crucial. Cheers ;)
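For reference on the consumer-group state point: committed offsets do live inside Kafka, which is what lets a group resume after a failure. A rough sketch with the segmentio/kafka-go client (broker address, topic, and group names are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Readers that share a GroupID form a consumer group; their progress is
	// stored as committed offsets inside Kafka itself.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "orders-processor",
		Topic:   "orders",
	})
	defer r.Close()

	ctx := context.Background()
	for {
		m, err := r.FetchMessage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("partition %d offset %d: %s", m.Partition, m.Offset, m.Value)

		// Committing the offset is what lets the group resume here after a restart.
		if err := r.CommitMessages(ctx, m); err != nil {
			log.Fatal(err)
		}
	}
}
```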
@dobeerman 1 month ago
Thank you for the video and for such a thorough comparison. It was very insightful! However, I noticed a few technical inaccuracies regarding Kafka that I'd like to clarify.
1. You mentioned that Kafka isn't a "proper" messaging system and is mainly a distributed log platform. Kafka did start as a distributed log, but it has evolved significantly and is now widely used as a messaging system; it handles pub/sub effectively.
2. There's a point about Kafka topics being less flexible because they lack the hierarchical structure of NATS subjects. Kafka topics are designed to be simple and efficient, with key-based partitioning that supports powerful message routing. This simplicity is key to Kafka's scalability.
3. You suggest that Kafka requires clients to receive all messages and filter them locally, which can be inefficient. In reality, Kafka consumers can use offsets and keys to retrieve only the messages they need, especially with compacted topics or Kafka Streams.
4. You also noted that Kafka lacks CRUD operations compared to NATS JetStream. Sure, Kafka doesn't offer traditional CRUD like a database, but it has strong mechanisms like compacted topics, transactional messaging, and exactly-once semantics that handle many data management needs effectively.
5. You mentioned that Kafka isn't "real real-time" and focuses more on throughput than latency. Kafka does use batching for throughput, but it's also capable of low-latency processing with the right configuration.
6. Finally, you suggest that Kafka's partitioning is a workaround for its single-consumer-per-topic design. In fact, partitioning is a deliberate design choice that enables Kafka to scale horizontally and handle massive data volumes efficiently.
Thank you again for the video 👍 Cheers ;)
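For reference on the partitioning points (2 and 6): messages sharing a key hash to the same partition, which is what provides per-key ordering. A rough kafka-go sketch (broker and topic names are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "orders",
		Balancer: &kafka.Hash{}, // route by key: same key -> same partition -> ordered
	}
	defer w.Close()

	// All events for customer-42 land in one partition, so a consumer sees them in order.
	err := w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte("customer-42"), Value: []byte("order created")},
		kafka.Message{Key: []byte("customer-42"), Value: []byte("order paid")},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```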
@mfreeman451 1 month ago
great updates and good luck to you Jeremy
@stevemcardle2013 1 month ago
For IdP integration it would be great to get some guidance on integrating with Keycloak.
@stevemcardle2013 1 month ago
I have a need for this feature. Currently we use Redis and we have a User cache. Each User is placed into the cache with a TTL of 10 minutes. This allows the User data to not be looked up from the datastore again for 10 minutes, which is the average time users stay around. However, each new read resets the TTL to 10 minutes from now. Only once they have been inactive for 10 minutes OR they change their User data are they evicted. In the case of an update, the new data is placed immediately into the cache. This keeps our cache relatively small and self-cleaning while allowing users that stick around to bypass the lookup overhead on each request.
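A rough sketch of that sliding-TTL pattern with the go-redis client; the key naming, helper functions, and the 10-minute window are just illustrations of what the comment describes, not a definitive implementation:

```go
package main

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

const userTTL = 10 * time.Minute

// getUser returns cached user data, refreshing the TTL on every read
// so active users never fall out of the cache.
func getUser(ctx context.Context, rdb *redis.Client, id string) (string, error) {
	// GETEX reads the value and resets its expiry in one round trip.
	val, err := rdb.GetEx(ctx, "user:"+id, userTTL).Result()
	if err == redis.Nil {
		val = loadUserFromDatastore(id) // cache miss: hit the datastore
		err = rdb.Set(ctx, "user:"+id, val, userTTL).Err()
	}
	return val, err
}

// updateUser writes through to the cache immediately so readers see fresh data.
func updateUser(ctx context.Context, rdb *redis.Client, id, data string) error {
	saveUserToDatastore(id, data)
	return rdb.Set(ctx, "user:"+id, data, userTTL).Err()
}

// Placeholders for the real datastore calls.
func loadUserFromDatastore(id string) string { return "user-data-for-" + id }
func saveUserToDatastore(id, data string)    {}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ctx := context.Background()
	_, _ = getUser(ctx, rdb, "42")
	_ = updateUser(ctx, rdb, "42", "new-data")
}
```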
@deltagamma1442 1 month ago
Great work guys!
@Zro_2_One 1 month ago
looking forward to the v2.11 NATS release! #teamNATS #JetStream
@deltagamma1442 2 months ago
When did you create the basic chat app before the auth stuff? I can't find the video
@shahzodshafizod 2 months ago
Hi. What IDE do you use?
@deltagamma1442 2 months ago
I don't understand this. I thought it was a message broker running in memory. Where did the services part come in? Does NATS have some "NATS functions" that run inside of NATS core? Or can we use everything you said with nats js? I'm using NestJS currently for my monorepo. I'm curious to know if I can integrate everything you described here.
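For context, a hedged sketch of how NATS "services" usually work: they are ordinary client processes, not functions running inside the server. Below is a minimal example using the Go client's micro package; the service and endpoint names are made up, and equivalent client APIs exist for JavaScript/TypeScript:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/micro"
)

func main() {
	// A NATS "service" is just a normal client: it connects to the server
	// and answers requests on a subject. Nothing runs inside nats-server.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	svc, err := micro.AddService(nc, micro.Config{
		Name:    "greeter", // hypothetical service name
		Version: "1.0.0",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Requests to the "greet" subject are handled here; the micro package also
	// makes the service discoverable (info, stats, ping) over NATS itself.
	err = svc.AddEndpoint("greet", micro.HandlerFunc(func(req micro.Request) {
		req.Respond([]byte("hello, " + string(req.Data())))
	}))
	if err != nil {
		log.Fatal(err)
	}

	select {} // keep the process running
}
```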
@zapduran 2 months ago
can anyone tell me how to install the nats plugin on benthos?
@pawegraczyk6050 2 months ago
Nice
@Gacha.Lola18899 2 months ago
To answer the first question: the current latest version (i.e. top of main) of the `nats` CLI tool lets you view messages inside a work queue stream using `nats stream view`.
@masliaalias7876 2 months ago
thanks for the video! gonna try it out now
@alexanderroos4391 2 months ago
Does NATS offer a way to order queues? E.g. start processing messages from queue B only after queue A is empty.
@iit9006 2 months ago
Excellent demonstration! Well explained!
@dparkinson6 2 months ago
Hey, I know this is super old now, but wouldn't securing this be challenging? How are you going to securely connect to NATS in the front end without exposing the details? Thanks for any feedback.
@joeblue2492 2 months ago
You can run it on Google Cloud and Hetzner, where nested virtualisation is supported.
@lldadb664 2 months ago
Thanks for helping with my initial heartburn of seeing a similar leadership election (using Redis) in a codebase that I recently inherited (for which no original dev is available). I’m hoping to be able to move to NATS and I think I *might* be able to remove its particular use, but good to see that it wasn’t totally astronaut engineered (even though it’s just a web app backend 😅).
@lldadb664 2 months ago
Another great overview. Thanks!
@sirk3v 2 months ago
nice editor, could you share your configs?
@馬安奇 2 months ago
Some nsc commands used:
nsc add operator --generate-signing-key --sys --name university
nsc edit operator --require-signing-keys --account-jwt-server-url "nats://0.0.0.0:4222"
nsc edit operator --require-signing-keys --account-jwt-server-url "nats://0.0.0.0:4222"
nsc add account college
nsc edit account college --sk generate
nsc add user --account college student_1
nsc list keys -A
nsc generate config --nats-resolver --sys-account SYS > resolver.conf
nsc generate config --nats-resolver --sys-account college > sens-resolver.conf
nsc push -A
nsc generate creds -n student_1 > student_1.creds
@trailerhaul8200 2 months ago
NATS is so refreshing. It can do and be lots of things
@pauvilella 2 months ago
I would be very curious to know the NeoVim configuration you have set up; do you have it public? 🙂
@hughacland4096 2 months ago
very cool!
@ahmedkhalil7317 2 months ago
Love the CP ux :)
@levi3970 2 months ago
I don't like how Kafka is the first thing that shows up in my searches when it's so limited in terms of use case.
@PeterNunnOZ 3 months ago
Do you happen to have the code running on the Pi somewhere handy? I need to do something similar to this :)
@nchomey 3 months ago
This is huge. I'm not ready to use it yet, but will definitely use it when my system is in production. I have to figure most self-hosted NATS clusters will do the same! I wonder if you'd consider some sort of free or cheap dev license, where we can get it connected and integrated into our workflows while developing our system, but with severely limited message quotas or something like that, so that we aren't freeloading off it on a production system?
@SynadiaCommunications 3 months ago
Thanks @nchomey! It may not have been clear, but since you are bringing your own NATS system, there are no limits imposed on the NATS side, since you are paying for it/running it. BYON is about utilizing the capabilities noted in the video (blog post coming out shortly today). Is the ask more around "try-before-you-subscribe"?
@nchomey 3 months ago
@@SynadiaCommunications Thanks! Yes, it's more a try-before-you-subscribe sort of thing. Or, more specifically, free while you get your architecture set up, and then you start paying once you have actual usage/traffic beyond just basic dev work. Sort of like you have for the hosted cloud offering!
@holgerwinkelmann6219 3 months ago
Good idea! That's what we spoke about recently…