Kafka Consumer Offsets Explained

19,396 views

Tech With Aman

1 day ago

Comments: 36
@MohitSaini-tv8go 6 months ago
@Aman Bro, your Kafka series helped me crack difficult interviews in difficult situations. You helped me clear my concepts! Please make more videos. Appreciated!
@elighteloy1810 2 years ago
Thank you so much for your video. You made Kafka clear and easy to understand.
@raphy78626 1 year ago
Really nice. I have seen many videos, and none were clear on these details.
@SanjeevKumar-qv4kc 4 years ago
Nice video about consumer offsets 👍👍
@nikhil-zz6mr 7 months ago
Can you please explain, at 8:32, how a commit can be at offset 10 when the messages at earlier offsets are still being processed?
@MohitSaini-tv8go 6 months ago
Good one, Aman!! Thanks
@swastikbhomkar7296 3 years ago
Thanks! Understood more deeply now.
@TechWithAmanYadav 3 years ago
Glad it was helpful!
@ayushgarg5929 1 year ago
Please make such a playlist on SQS also.
@oavilex 3 years ago
Nice explanation
@lohithreddy9875 2 years ago
Hi, your videos are so informative and easy to understand. Thanks a lot. Can you please make a video on scaling tips and how to reduce load on consumers using configs?
@manugoel467 3 years ago
Great video, in a concise manner. Answered most of my doubts. Best of luck, Aman!!
@seenimurugan 2 years ago
Good one. Thank you.
@rohitkumarnode4839 2 years ago
Hello Tech Aman, you said that with manual commit you can commit messages individually once a particular message is processed. But as we know, Kafka always keeps only the largest committed offset: suppose the messages at offsets 2 and 3 haven't been processed, the message at offset 4 gets processed, and we commit offset 4; in that case offsets 2 and 3 will also be considered committed. How can we avoid this issue?
@TechWithAmanYadav 2 years ago
Hi Rohit, you can commit any offset. But in the use case you mentioned, why are 2 and 3 not processed? A single consumer processes messages in order, so you would have already processed 2 and 3. And if they are in a different partition, that doesn't affect this partition, since offsets are committed on a per-partition basis.
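To make the per-partition commit semantics concrete, here is a small self-contained Python sketch (a simulation, not the real Kafka client API): the broker stores exactly one committed offset per group/topic/partition, interpreted as the next offset to read, so committing offset N implicitly marks everything below N as processed.

```python
# Toy model of Kafka's per-partition committed offsets (not the client API).
# The broker keeps ONE number per (group, topic, partition): the offset of
# the next message the group should read after a restart or rebalance.

class GroupOffsets:
    def __init__(self):
        self.committed = {}  # (topic, partition) -> next offset to read

    def commit(self, topic, partition, next_offset):
        # Committing N implicitly marks every offset below N as processed;
        # there is no per-message acknowledgement in Kafka.
        self.committed[(topic, partition)] = next_offset

    def resume_from(self, topic, partition, default=0):
        # Where a restarted consumer in this group starts reading.
        return self.committed.get((topic, partition), default)

g = GroupOffsets()
g.commit("orders", 0, 5)           # processed offset 4 -> commit 5
print(g.resume_from("orders", 0))  # 5: offsets 2 and 3 are skipped
print(g.resume_from("orders", 1))  # 0: partition 1 is independent
```

This is exactly why committing offset 4 in Rohit's scenario also "commits" 2 and 3: there is only one number to store per partition.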
@RohitKumar-bl2eg 2 years ago
@@TechWithAmanYadav I am processing messages in parallel, not in a particular order, so I guess committing individually won't work correctly. Any thoughts?
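One common pattern for this parallel-processing case (a suggestion, not from the video): track individually finished offsets and commit only up to the end of the contiguous run, so a gap left by an in-flight message blocks the commit. A minimal Python sketch:

```python
def next_committable(committed, processed):
    """Return the highest offset that is safe to commit, given the current
    committed offset (next-to-read) and a set of offsets that finished
    processing out of order. The commit may only advance through a
    contiguous run; a gap (an unfinished message) blocks it."""
    offset = committed
    while offset in processed:
        offset += 1
    return offset

# Offsets 0 and 1 are committed (committed == 2); offset 4 finished early,
# but 2 and 3 are still in flight, so the commit cannot move yet.
print(next_committable(2, {4}))        # 2
# Once 2 and 3 complete, the whole run 2..4 becomes committable.
print(next_committable(2, {2, 3, 4}))  # 5
```

A crash before the gap closes then replays at most the out-of-order messages, which is the usual at-least-once trade-off.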
@InderjeetSingh007 2 years ago
Any code repo for the implementation above?
@TechWithAmanYadav 1 year ago
Not right now
@rohitkumarnode4839 2 years ago
One more doubt I have: how can we consume the same message twice across two successive consumer polls? I have a requirement where, if the previous messages didn't get processed and I haven't committed, I want to receive those same messages again. But the consumer seems to fetch the next batch of messages, because Kafka somehow remembers which offsets were consumed during the previous poll. In simple words, how can we reset the in-memory offsets?
@TechWithAmanYadav 2 years ago
If you don't commit the message, then on the next poll the same message will be consumed again. And remember to turn off the auto-commit option.
@RohitKumar-bl2eg 2 years ago
@@TechWithAmanYadav I tried the same but it's not happening. I am using the node-rdkafka npm library.
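What Rohit describes matches how real consumers behave: poll() advances an in-memory fetch position that is independent of commits, so disabling auto-commit alone is not enough. To re-read unprocessed messages within the same consumer instance you have to seek() back to the committed offset (or re-create the consumer, which resumes from it). A toy Python model of that distinction (method names mirror the client API, but this is a simulation, not node-rdkafka or the Java client):

```python
class SimConsumer:
    """Simulates a consumer's in-memory position vs. the committed offset."""
    def __init__(self, log, committed=0):
        self.log = log                # the partition's messages
        self.committed = committed    # last committed offset
        self.position = committed     # in-memory fetch position

    def poll(self, max_records=2):
        batch = self.log[self.position:self.position + max_records]
        self.position += len(batch)   # advances even with auto-commit off
        return batch

    def seek(self, offset):
        self.position = offset        # explicit rewind

c = SimConsumer(["m0", "m1", "m2", "m3"])
print(c.poll())      # ['m0', 'm1']
print(c.poll())      # ['m2', 'm3'] -- NOT the same messages again
c.seek(c.committed)  # rewind to the last committed offset
print(c.poll())      # ['m0', 'm1'] -- now re-delivered
```

The uncommitted offsets only matter on restart or rebalance; within one running consumer, re-delivery requires an explicit seek.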
@satishkuppam8008 3 years ago
I am trying to override auto.offset.reset in consumer.properties and connect-standalone.properties, but it is not taking effect. I have added connector.client.config.override.policy=All in the connect-standalone file. Can you please tell me how to fix this?
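For this Connect question, one thing worth checking (based on Kafka Connect's client-config override mechanism, KIP-458): with connector.client.config.override.policy=All set in the worker file, the override must go in the connector's own properties with the consumer.override. prefix; a bare auto.offset.reset key in the worker file is ignored for sink consumers. A sketch, with illustrative file and connector names:

```properties
# connect-standalone.properties (worker config)
connector.client.config.override.policy=All

# my-sink-connector.properties (per-connector config)
name=my-sink-connector
consumer.override.auto.offset.reset=earliest
```

Note the committed group offsets still win: auto.offset.reset only applies when the connector's consumer group has no committed offset for a partition.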
@saurav0777 4 years ago
I have tried saving the offsets in Kafka itself from my Spark Streaming app using the commitAsync API. However, they are not synced immediately to Kafka's internal __consumer_offsets topic, so Spark processes duplicate records whenever it restarts, even after a graceful restart. Could you please give some pointers to fix this?
@TechWithAmanYadav 4 years ago
You can acknowledge in sync instead, sending the acknowledgement after processing each message.
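The duplicates Saurav sees are the expected at-least-once window: with commitAsync, processing can run ahead of the last commit that actually landed, and a restart replays everything from the committed offset onward. A tiny Python sketch of that replay window (a simulation, not the Spark or Kafka API):

```python
def replayed_on_restart(log, processed_through, committed):
    """Messages at offsets [committed, processed_through) were handled
    before the shutdown, but their commit never landed, so a restarted
    consumer (which resumes at `committed`) processes them a second time."""
    return log[committed:processed_through]

log = ["m0", "m1", "m2", "m3", "m4"]
# Processed through offset 4, but the last async commit to land was 2:
print(replayed_on_restart(log, 4, 2))  # ['m2', 'm3'] -> duplicates
# Committing synchronously after each message closes the window:
print(replayed_on_restart(log, 4, 4))  # [] -> no duplicates
```

Synchronous commits trade throughput for a smaller (ideally empty) replay window, which is the trade-off behind the reply above.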
@srimayeeponuganti775 4 years ago
@@TechWithAmanYadav Any reasons for a consumer processing the same message twice?
@TechWithAmanYadav 4 years ago
Well, there can be multiple scenarios. There might be a DB crash, and you need to process the messages again.
@sudheerkumaratru9418 3 years ago
I am not able to listen on a particular topic; @KafkaListener is not working.
@neha6000 1 year ago
Where is the implementation?
@sudheerkumaratru9418 3 years ago
Hi
@ManishTiwari-or8zt 6 months ago
Not good content... it could have been explained in a better way.
@sudippandit1 2 years ago
Hi, how do we handle duplicate values in Kafka?
@TechWithAmanYadav 2 years ago
Are you pushing the same value twice?
@sudippandit1 2 years ago
Yes. Due to network issues, the streaming job failed and restarted by itself, and the same row values got populated twice.
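Since Kafka's default delivery guarantee is at-least-once, the usual fix for Sudip's case is to make the consumer idempotent: de-duplicate on a stable message identity (a business key, or topic/partition/offset) before writing to the sink. A minimal Python sketch; the key scheme and the in-memory `seen` set are assumptions, and in practice the dedup store would be durable (e.g. a unique constraint in the target table):

```python
def consume_idempotently(messages, sink, seen):
    """Apply each message at most once, keyed by a stable identity.
    Redelivered duplicates (e.g. after a restart) are skipped."""
    for key, payload in messages:
        if key in seen:
            continue  # already applied -> skip the duplicate delivery
        seen.add(key)
        sink.append(payload)

sink, seen = [], set()
batch = [("row-1", "a"), ("row-2", "b")]
consume_idempotently(batch, sink, seen)
consume_idempotently(batch, sink, seen)  # redelivered after a restart
print(sink)  # ['a', 'b'] -- each row applied once despite the replay
```

This makes duplicates harmless rather than trying to prevent redelivery itself.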