"Balance all the things!" Very informative video on how Kafka works and what things the producers and consumers do automatically.
@commandershepard2986 27 days ago
Awesome👍
@ragingpahadi 11 months ago
Very informative video 🎉
@mateuszkopij4120 2 years ago
As always, great tips, thanks!
@odesferreira 1 year ago
Amazing talk! Keep it up!
@mathieugauthron3744 11 months ago
Kris, you're a star. Great video.
@debabhishek 8 months ago
All the points are interesting. I was wondering whether, after the consumer fetch, we could use a thread pool to speed up processing, and I got a validation of that here. Another interesting point is over-committing by consumers. Does that mean I don't need to commit (or ack) every record? Suppose my consumer is reading from topics A and B (each with two partitions): is it enough to commit just the last offset for A1, A2, B1, and B2? Even though I'm processing many records from those topic partitions, I'd only be committing (ack-ing) the last offset for each partition. @confluent please correct me if I am wrong.
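For what it's worth, here is a minimal sketch of that pattern using the plain Java kafka-clients consumer: process everything returned by a poll, then commit only the highest offset seen per partition instead of acking every record. The broker address, group id, topic names, and the process() helper are made-up placeholders, not anything from the video.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PerPartitionCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "demo-group");                 // assumed consumer group
        props.put("enable.auto.commit", "false");            // we commit manually
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("A", "B"));            // the two topics from the comment

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));

                // Track only the highest processed offset for each partition.
                Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
                for (TopicPartition tp : records.partitions()) {
                    List<ConsumerRecord<String, String>> partRecords = records.records(tp);
                    for (ConsumerRecord<String, String> record : partRecords) {
                        process(record);                       // hypothetical processing step
                    }
                    long lastOffset = partRecords.get(partRecords.size() - 1).offset();
                    // Commit the *next* offset to read, i.e. last processed + 1.
                    toCommit.put(tp, new OffsetAndMetadata(lastOffset + 1));
                }

                if (!toCommit.isEmpty()) {
                    consumer.commitSync(toCommit);             // one commit per partition, not per record
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("%s-%d @ %d: %s%n",
                record.topic(), record.partition(), record.offset(), record.value());
    }
}
```

A committed offset covers everything before it in that partition, so committing once per partition per poll is enough; the value committed is the next offset to read (last processed + 1).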
@HenrykSzlangbaum2 жыл бұрын
Great discussion
@debabhishek 8 months ago
One little detail I am still searching for about fetch / consumer poll: the consumer is subscribed to one or more topics, hence more than one partition, and the leaders for those partitions sit on different brokers (I don't know whether you read from the leaders or from the ISR list; even if you read from ISRs, they may sit on different brokers). What does the consumer do in such cases? Send separate fetch requests to the different brokers, collate the results, and present them to the client? And what if one broker responds slowly, or not at all? If the consumer just ignores a slow broker, it may keep responding slowly and be silently ignored. Can you please write a line or two about consumer fetch?
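Not an official answer, but as far as I understand the Java client, the consumer does send a separate fetch request to the broker leading each assigned partition (or to a follower replica when rack-aware follower fetching is configured), keeps those requests in flight to the different brokers in parallel, and merges whatever has arrived into the batch returned by poll(). The settings below bound how long each fetch waits and how much data it can return; this is just an illustrative sketch using the stock default values, not a recommendation.

```java
import java.util.Properties;

public class FetchTuningSketch {
    // Fetch-related consumer settings, shown with their default values.
    public static Properties fetchProps() {
        Properties props = new Properties();
        props.put("fetch.min.bytes", "1");                 // reply as soon as any data is available...
        props.put("fetch.max.wait.ms", "500");             // ...or after this many ms, whichever comes first
        props.put("fetch.max.bytes", "52428800");          // upper bound per fetch response (50 MB)
        props.put("max.partition.fetch.bytes", "1048576"); // upper bound per partition per fetch (1 MB)
        props.put("request.timeout.ms", "30000");          // how long to wait on a broker before retrying
        return props;
    }
}
```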
@anandperi7060 1 year ago
I believe the compression has to happen at the individual message/event level, else the messages can't be written to the correct partition. Not sure that compression happens at the batch level, as the talk is leading us to believe.
@krisajenkins 1 year ago
No, that's not correct. The producer allocates a batch per partition for exactly this reason. Compression happens at the batch level, before the batch is sent to its allocated partition. Trust me, Nikoleta knows this stuff inside out. 🙂
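To make that concrete, here is a minimal producer sketch showing the settings involved: the producer accumulates a separate batch for each partition (the record key decides which partition, and therefore which batch, a record joins), and compression.type compresses each batch as a unit before it is sent. The broker address, topic name, and values are made up for illustration.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompressedBatchProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // Batching and compression both operate per partition:
        props.put("compression.type", "lz4"); // whole batches are compressed, not individual records
        props.put("batch.size", "65536");     // max bytes accumulated per partition before sending
        props.put("linger.ms", "20");         // wait up to 20 ms to let a batch fill

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // The key determines the partition, and hence which batch the record joins.
                producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i));
            }
            producer.flush();
        }
    }
}
```

Since compression ratio generally improves with larger batches, nudging linger.ms and batch.size up can noticeably shrink what goes over the wire.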
@abhinee 2 years ago
What a great discussion
@HenrykSzlangbaum 2 years ago
Ya, I'm sure people only start caring about batch sizes after buying millions in hardware
@anandperi7060 1 year ago
I'm new to Kafka, but autoscaling is such a basic concept nowadays... why can't you just add more brokers and disks when the load increases, based on some metric, and scale back down later? Agreed, some rebalancing of partitions needs to happen, so scaling down may not be as simple, but that's because Kafka seems to have coupled compute with storage in its architecture. Having a side cluster and everything else I hear about seems ugly, IMO.