@Aman Bro, your Kafka series helped me crack difficult interviews in difficult situations. You helped me clear my concepts! Please make more videos. Appreciated!
@elighteloy1810 2 years ago
Thank you so much for your video. You made Kafka really clear to me.
@raphy78626 1 year ago
Really nice. I have seen many videos, and none were as clear on these details.
@SanjeevKumar-qv4kc 4 years ago
Nice video about consumer offset👍👍
@nikhil-zz6mr 7 months ago
Can you please explain, at 8:32, how a commit can be at offset 10 when messages at earlier offsets are still being processed?
@MohitSaini-tv8go 6 months ago
Good one, Aman!! Thanks.
@swastikbhomkar7296 3 years ago
Thanks! Understood more deeply now.
@TechWithAmanYadav 3 years ago
Glad it was helpful!
@ayushgarg5929 1 year ago
Please make such a playlist on SQS also.
@oavilex 3 years ago
Nice explanation
@lohithreddy9875 2 years ago
Hi, your videos are so informative and easy to understand. Thanks a lot. Can you please make a video on scaling tips and on how to reduce the load on consumers using configs?
@manugoel467 3 years ago
Great video, and concise. It answered most of my doubts. Best of luck, Aman!!
@seenimurugan 2 years ago
Good one. Thank you.
@rohitkumarnode4839 2 years ago
Hello Tech Aman, you said that with manual commit you can commit messages individually once a particular message is processed. But as we know, Kafka always keeps the largest offset as the committed one. Suppose the messages at offsets 2 and 3 haven't been processed yet, the message at offset 4 gets processed, and we commit offset 4; in that case offsets 2 and 3 will also be considered committed. How can we avoid this issue?
@TechWithAmanYadav 2 years ago
Hi Rohit, you can commit any offset. But in the use case you mentioned, why are 2 and 3 not processed? A single consumer processes a partition's messages in order, so by the time you reach offset 4 you would already have processed 2 and 3. And if those messages are in a different partition, they don't affect this partition's offsets, since offsets are committed on a per-partition basis.
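For reference, here is a minimal sketch of per-message manual commits with the official Java client. The topic and group names are made up, and process() stands in for your business logic; note that Kafka expects you to commit the offset of the next message to read, i.e. record offset + 1.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // manual commit
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // your business logic
                    // Commit this record's offset + 1: the committed offset is the
                    // position of the NEXT record to read, tracked per partition.
                    consumer.commitSync(Collections.singletonMap(
                        new TopicPartition(record.topic(), record.partition()),
                        new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
    }
}
```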
@RohitKumar-bl2eg 2 years ago
@TechWithAmanYadav I am processing messages in parallel, not in any particular order, so I guess committing individually won't work correctly. Any thoughts?
@InderjeetSingh007 2 years ago
Any code repo with an implementation of the above?
@TechWithAmanYadav 1 year ago
Not right now
@rohitkumarnode4839 2 years ago
One more doubt I have: how can we consume the same message twice across two successive consumer polls? I have a requirement where, if the previous messages didn't get processed and I haven't committed them, I want to receive those same messages again. But the consumer fetches the next batch of messages, because Kafka somehow remembers which offsets were consumed during the previous poll. In simple words, how can we reset the in-memory offsets?
@TechWithAmanYadav 2 years ago
If you don't commit the message, then on the next poll the same message will be consumed again. And remember to turn off the auto-commit option.
@RohitKumar-bl2eg 2 years ago
@TechWithAmanYadav I tried the same, but it's not happening. I am using the node-rdkafka npm library.
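One likely explanation, for anyone hitting the same thing: within a running session the consumer keeps an in-memory position that advances on every poll() regardless of commits; the committed offset is only consulted after a restart or rebalance. To re-read uncommitted messages in the same session, you can seek() back to the last committed offset. A sketch with the Java client (node-rdkafka wraps librdkafka and has a comparable seek API):

```java
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Rewind a consumer to the last committed offsets so the next poll()
// re-delivers everything that was fetched but never committed.
public final class RewindToCommitted {
    public static void rewind(KafkaConsumer<String, String> consumer) {
        Set<TopicPartition> assigned = consumer.assignment();
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(assigned);
        for (TopicPartition tp : assigned) {
            OffsetAndMetadata offset = committed.get(tp);
            if (offset != null) {
                consumer.seek(tp, offset.offset());   // back to the last commit
            } else {
                consumer.seekToBeginning(Set.of(tp)); // never committed: start over
            }
        }
    }
}
```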
@satishkuppam8008 3 years ago
I am trying to override auto.offset.reset in the consumer.properties and connect-standalone.properties files, but it is not taking effect. Can you please tell me how to fix this? I have already added connector.client.config.override.policy=All in the connect-standalone file.
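In case it helps (this is not from the video): as far as I know, Connect workers don't read consumer.properties at all. The worker's consumer settings go in connect-standalone.properties with a consumer. prefix, and per-connector overrides use the consumer.override. prefix (KIP-458, Kafka 2.3+), which only works once the override policy is set. A sketch; the connector file name is hypothetical:

```properties
# connect-standalone.properties -- worker-wide default for all sink connectors
consumer.auto.offset.reset=earliest
connector.client.config.override.policy=All

# my-connector.properties -- per-connector override, requires the policy above
consumer.override.auto.offset.reset=earliest
```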
@saurav0777 4 years ago
I have tried saving the offsets in Kafka itself in my Spark Streaming app using the commitAsync API. However, it does not sync immediately to Kafka's internal topic __consumer_offsets, and because of that Spark processes duplicate records whenever it is restarted, even with a graceful restart. Could you please give some pointers to fix this?
@TechWithAmanYadav 4 years ago
You can commit synchronously, sending the acknowledgement only after processing each message.
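A sketch of that synchronous pattern with the plain Java client: process the whole batch, then block on commitSync() before polling again. (Spark's CanCommitOffsets interface only exposes commitAsync, as far as I know, so in Spark the usual alternative is to store offsets atomically alongside your output.) Topic and group names here are made up.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

// Process a batch fully, then commit synchronously: commitSync() blocks until
// the offsets are written to __consumer_offsets, so a graceful shutdown right
// after it cannot lose the commit.
public class SyncCommitLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "stream-app");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : batch) {
                    handle(record);        // process before committing
                }
                consumer.commitSync();     // blocks until the broker confirms
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```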
@srimayeeponuganti775 4 years ago
@TechWithAmanYadav any reasons why a consumer would process the same message twice?
@TechWithAmanYadav 4 years ago
Well, there can be multiple scenarios. There might be a DB crash and you need to process the messages again. Or the consumer crashes or rebalances after processing a message but before committing its offset, so the message is delivered again.
@sudheerkumaratru9418 3 years ago
I am not able to listen to a particular topic; @KafkaListener is not working.
@neha6000 1 year ago
Where is the implementation..?
@sudheerkumaratru9418 3 years ago
Hii
@ManishTiwari-or8zt 6 months ago
Not good content... It could have been explained in a better way...
@sudippandit1 2 years ago
Hi, how do we handle duplicate values in Kafka?
@TechWithAmanYadav 2 years ago
Are you pushing the same value twice?
@sudippandit1 2 years ago
Yes. Due to network issues, the streaming job failed and restarted by itself, and the same row values got populated twice.
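A common fix for this (not from the video) is to make the consumer idempotent: derive a stable ID for each record, e.g. from its topic-partition-offset coordinates or a business key, and skip anything already processed. A minimal in-memory sketch; in production the seen-set would live in the same store as the output, updated in the same transaction.

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Idempotent processing: each record is identified by its (topic, partition,
// offset) coordinates, which are unique, so replays after a restart are skipped.
public final class DedupingHandler {
    private final Set<String> seen = new HashSet<>(); // in production: a transactional store

    public void handle(ConsumerRecord<String, String> record) {
        String id = record.topic() + "-" + record.partition() + "-" + record.offset();
        if (!seen.add(id)) {
            return; // duplicate delivery: already processed, skip
        }
        System.out.println("processing " + record.value());
    }
}
```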