Overview of Confluent Cloud
3:27
14 days ago
The Confluent Q3 ‘24 Launch
5:48
Career Reentry After a Break
20:35
What is a Data Streaming Platform?
11:50
Comments
@ConfluentDeveloperRelations 1 day ago
What innovative tech have you been curious about but haven't tried yet? Let us know in the comments!
@ConfluentDeveloperRelations 1 day ago
What's a recent tech acquisition or merger you are most excited about and why? Let us know in the comments!
@ConfluentDeveloperRelations 1 day ago
What's the No. 1 thing you look forward to at a tech conference? Let us know in the comments!
@ConfluentDeveloperRelations 2 days ago
Are there other areas of Confluent Cloud where we should peel back the curtain? Drop a comment below to let us know which areas or features you’d like us to break down.
@JonathanAli-ef4kx 3 days ago
Nice vid!
@colouredpages 5 days ago
One of the best lectures on Kafka. I always come back here to refresh my memory.
@preciousnyasulu629 5 days ago
I consider myself lucky to have seen both setups in action. One thing I have noticed is that event-driven architecture is more flexible for scaling.
@suirenenrius1982 5 days ago
thanks!
@firozalisayyed7867 5 days ago
Congratulations... from Nanded, Maharashtra... Mahurgad ❤❤❤
@MWGHUY 8 days ago
Is there any way to clear CTAS data when joining 2 tables together?
@aleshkakr7351 9 days ago
Good afternoon! Please tell me: for a Flink session job that runs a Java archive pulled from a remote repository, how do I specify the credentials for connecting to a Kafka topic? Flink itself is deployed as a session cluster in k8s by the Flink operator. Thanks in advance!
@ConfluentDeveloperRelations 9 days ago
After watching the shortest video on Connectors, what do you think? Anything else you’d like to see us cover in Confluent Cloud? Leave us a comment below.
@datrumpet5 9 days ago
In the Spark producer, is it literally as simple as adding a column as the "key" before the write? Is there anything else that needs to be considered? Let's assume the data is being read from a table that is already partitioned by a date and then by the column you will be using as the key.
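For context, here is roughly the pattern being asked about, as a minimal hypothetical sketch: Spark's Kafka sink looks for columns named "key" and "value", and the key is what determines the target partition. The table name, key column, topic, and broker address below are placeholders, not anything from the video.

// Minimal sketch: keying records when writing a Spark Dataset to Kafka.
// Assumes a source table "events" with an "account_id" column to use as the key.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KeyedKafkaWrite {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("keyed-kafka-write").getOrCreate();

    Dataset<Row> df = spark.table("events");

    // The Kafka sink expects columns named exactly "key" and "value";
    // records with the same key hash to the same partition.
    df.selectExpr(
          "CAST(account_id AS STRING) AS key",
          "to_json(struct(*)) AS value")
      .write()
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("topic", "events-topic")
      .save();
  }
}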
@febryanmz 11 days ago
Hello Sir, we have an enterprise license for Confluent Kafka, and we want to integrate it with SoftwareAG webMethods version 10.15. We have been looking for documentation or videos about this, but there is no best-practice guide on how to use it, only the adapter installation documentation and a user guide without any examples. Could you please provide documentation for a simple use case? Thank you, sir.
@KhaledKimboo4 12 days ago
I think people talking about high-level architecture don't have much experience with real-life production code. It's all good until your command requires an up-to-date projection to make a decision, and then all hell breaks loose.
@ShivanshPandey-x3s 13 days ago
what are the charges for Confluent Control Center?
@CulTube13 14 days ago
What if the delivery semantics are "at most once"? How can consistency be reached?
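For reference, "at most once" on the consuming side usually looks like the hypothetical sketch below: offsets are committed before the records are processed, so a crash after the commit silently drops those records. The topic, group id, and broker address are placeholders.

// Minimal sketch of at-most-once consumption: commit first, then process.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtMostOnceConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9092");
    props.put("group.id", "at-most-once-demo");
    props.put("enable.auto.commit", "false");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(List.of("orders"));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        consumer.commitSync();            // commit before processing: at most once
        for (ConsumerRecord<String, String> r : records) {
          process(r);                     // a crash here loses these records
        }
      }
    }
  }

  private static void process(ConsumerRecord<String, String> r) {
    System.out.printf("key=%s value=%s%n", r.key(), r.value());
  }
}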
@mecamon 15 days ago
What a beautiful way of explaining concepts. Bravo and thanks!
@faysalahmedsiddiqui 16 days ago
I assume you were using Flink, not Flink SQL? Hence, you were using your own decoder. The performance boost with Jsoniter was great. Did you perform this experiment in Flink with the RocksDB state backend or in-memory?
@TheElochai 16 days ago
I don't see the "offset feature".
@fopperer 17 days ago
Sadly your choice of colours (~ 10:55) is not very accessible.
@venkatb8317 17 days ago
Hi Flink gurus, below is my code. I am getting an ArrayIndexOutOfBounds exception. When I connect with a normal Kafka consumer using the Schema Registry it works fine, but when I pass the source to the Flink env (the line marked ###12 below) and use print(), I get the error. I am using the same POJO generated from the .avsc file. Could you please help us?

import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.TopicPartition;

public class Test {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60000);
        try {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my server url");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.setProperty("specific.avro.reader", "true");

            final HashSet<TopicPartition> partitionSet = new HashSet<>(
                    Arrays.asList(new TopicPartition("test", 0)));

            KafkaSource<avropojo> kafkaSource = KafkaSource.<avropojo>builder()
                    .setGroupId("group1")
                    .setStartingOffsets(OffsetsInitializer.latest())
                    .setPartitions(partitionSet)
                    .setProperties(props)
                    .setDeserializer(KafkaRecordDeserializationSchema.valueOnly(
                            ConfluentRegistryAvroDeserializationSchema.forSpecific(
                                    enb_Record.class, "schema registry url")))
                    .build();

            // when I add the two lines below I get the error (###12)
            DataStream<avropojo> records = env.fromSource(
                    kafkaSource, WatermarkStrategy.noWatermarks(), "transactions");
            records.print(); // ===> getting the error here
            env.execute("flink-env");
        } catch (Exception e) {
            // TODO: handle exception
            e.printStackTrace();
        }
    }
}
@venkatb8317 17 days ago
my logs: [Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.flink.connector.base.source.reader.SourceReaderBase - Closing Source Reader. [Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher - Shutting down split fetcher 0 [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=group1-3, groupId=group1] Resetting generation and member id due to: consumer pro-actively leaving the group [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=group1-3, groupId=group1] Request joining group due to: consumer pro-actively leaving the group [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.consumer for group1-3 unregistered [Source Data Fetcher for Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher - Split fetcher 0 exited. [Source: transactions -> Sink: Print to Std. Out (4/8)#4] WARN org.apache.flink.runtime.taskmanager.Task - Source: transactions -> Sink: Print to Std. 
Out (4/8)#4 (b2ff27f0ff8dd3b3bdc460d1fd6ef070_cbc357ccb763df2852fee8c4fc7d55f2_3_4) switched from RUNNING to FAILED with failure cause: java.io.IOException: Failed to deserialize consumer record due to at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:56) at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:33) at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:203) at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:422) at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:638) at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:973) at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:917) at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970) at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) at java.base/java.lang.Thread.run(Thread.java:842) Caused by: java.lang.ArrayIndexOutOfBoundsException: Index -150 out of bounds for length 2 at org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:460) at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:283) at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:188) at org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:136) at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:248) at org.apache.avro.specific.SpecificDatumReader.readRecord(SpecificDatumReader.java:123) at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:180) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:154) at org.apache.flink.formats.avro.RegistryAvroDeserializationSchema.deserialize(RegistryAvroDeserializationSchema.java:109) at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82) at org.apache.flink.connector.kafka.source.reader.deserializer.KafkaValueOnlyDeserializationSchemaWrapper.deserialize(KafkaValueOnlyDeserializationSchemaWrapper.java:51) at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:53) ... 14 more [Source: transactions -> Sink: Print to Std. Out (4/8)#4] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: transactions -> Sink: Print to Std. Out (4/8)#4 (b2ff27f0ff8dd3b3bdc460d1fd6ef070_cbc357ccb763df2852fee8c4fc7d55f2_3_4). [flink-pekko.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Source: transactions -> Sink: Print to Std. Out (4/8)#4 b2ff27f0ff8dd3b3bdc460d1fd6ef070_cbc357ccb763df2852fee8c4fc7d55f2_3_4. 
[flink-pekko.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: transactions -> Sink: Print to Std. Out (4/8) (b2ff27f0ff8dd3b3bdc460d1fd6ef070_cbc357ccb763df2852fee8c4fc7d55f2_3_4) switched from RUNNING to FAILED on cb0379c8-4cc5-465f-a315-e9b4989f629c @ 127.0.0.1 (dataPort=-1). java.io.IOException: Failed to deserialize consumer record due to at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:56) at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:33) at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:203) at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:422) at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:638) at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:973) at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:917) at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970) at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) at java.base/java.lang.Thread.run(Thread.java:842) Caused by: java.lang.ArrayIndexOutOfBoundsException: Index -150 out of bounds for length 2 at org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:460) at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:283) at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:188) at org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:136) at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:248) at org.apache.avro.specific.SpecificDatumReader.readRecord(SpecificDatumReader.java:123) at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:180) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:154) at org.apache.flink.formats.avro.RegistryAvroDeserializationSchema.deserialize(RegistryAvroDeserializationSchema.java:109) at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82) at org.apache.flink.connector.kafka.source.reader.deserializer.KafkaValueOnlyDeserializationSchemaWrapper.deserialize(KafkaValueOnlyDeserializationSchemaWrapper.java:51) at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:53) ... 
14 more [flink-pekko.actor.default-dispatcher-7] INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager - Received resource requirements from job d88aa3965d1f1584a570887ae4a8b465: [ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN}, numberOfRequiredSlots=7}] [SourceCoordinator-Source: transactions] INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator - Removing registered reader after failure for subtask 3 (#4) of source Source: transactions. [flink-pekko.actor.default-dispatcher-8] INFO org.apache.flink.runtime.jobmaster.JobMaster - 1 tasks will be restarted to recover the failed task b2ff27f0ff8dd3b3bdc460d1fd6ef070_cbc357ccb763df2852fee8c4fc7d55f2_3_4. [flink-pekko.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job kddi-converter (d88aa3965d1f1584a570887ae4a8b465) switched from state RUNNING to RESTARTING.
@DebrajDutta-c6l 18 days ago
Why does it need to wait for phase 2 to commit the transactions at the sink?
@ConfluentDeveloperRelations 17 days ago
In phase 1, all of the transactions for all of the sinks are pre-committed. Only after all of those pre-commits succeed does it proceed to phase 2, which commits the transactions. If any instance fails, all of the transactions are rolled back. Without this two-step procedure, there'd be a risk of some transactions being committed in phase 1 and others failing, which would lead to inconsistencies.
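To make the phases concrete, here is a toy sketch of the rule described above. It is not Flink's actual sink API, just an illustration in which each sink exposes a hypothetical preCommit/commit/abort transaction interface.

// Illustration only: either every sink commits, or every sink aborts - never a mix.
import java.util.List;

public class TwoPhaseCommitSketch {

  interface SinkTransaction {
    boolean preCommit(); // phase 1: prepare; returns false on failure
    void commit();       // phase 2: make the pre-committed data visible
    void abort();        // roll back the pre-committed data
  }

  static void completeCheckpoint(List<SinkTransaction> txns) {
    boolean allPrepared = true;
    for (SinkTransaction t : txns) {               // phase 1: pre-commit everywhere
      if (!t.preCommit()) { allPrepared = false; break; }
    }
    if (allPrepared) {
      for (SinkTransaction t : txns) t.commit();   // phase 2: commit everywhere
    } else {
      for (SinkTransaction t : txns) t.abort();    // any failure rolls everything back
    }
  }
}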
@krushnnabaviskar4131 20 days ago
Thanks, brother, for the great explanation.
@adrianukawski376 22 days ago
this series is so valuable. Thank you!
@ConfluentDeveloperRelations 20 days ago
Wade here. I'm glad you are enjoying the series. It's always nice to hear that people are getting real value out of the content we create.
@ConfluentDeveloperRelations 23 days ago
Hey there, I’m Lucia! I love trying out data visualization libraries, and it’s especially fun when I get to do it with live data sources like Kafka. I’m so happy I get to share this project with you in video format; if you’d like to go a little more in depth with a blog, you can check out the first part in my series on this stack (Kafka, Confluent Cloud, Flink SQL, and Streamlit) linked in the description above.
@jamiemarshall8284 24 days ago
I find this terminology problematic, especially with cloud architecture, where traditional servers are abstracted away. Everything is using events and request-responses.
@AvinashReddyAlla 25 days ago
A super amazing teacher, I highly respect your teaching methodology...
@ConfluentDeveloperRelations 25 days ago
Did you miss out on Current 2024? Don’t worry, you can catch up on all the highlights by reading the recap blog linked in the description above.
@hassanzaid5060 28 days ago
Wow, amazing work
@yogeshwarpatel7540 29 days ago
Amazing
@ConfluentDeveloperRelations 1 month ago
Hi - David here. I’ve gone deeper into the topic of using Flink SQL for streaming analytics over on Confluent Developer, including some examples of how to use OVER windows. See the links in the description above.
@Daniel-dj7vc 1 month ago
ok...what?
@mdnix 1 month ago
0:56 Tony Stark reference
@yashverma7084 1 month ago
Am I the only one who noticed Hindi subtitles?
@nicolaszein 1 month ago
What billboard are you using? It's wonderful.
@hariharanthirumeni 1 month ago
Microservices are the way to build large and complex systems. The biggest advantage of microservice architecture is the isolation of domains. Isolation makes services loosely coupled and independently deployable; each service owns its data, so the data is encapsulated; services are independently scalable; and the blast radius is limited. Though the hardware requirements of microservices are exponentially higher than a monolith's, it's worth it. Another issue with microservices is the traceability of transactions in logs, but that's handled by centralized tracing and logging tooling like Zipkin and MDC variables.
@ConfluentDeveloperRelations 1 month ago
Wade here. The thing is that although hardware requirements might be higher, they are also more optimized. You can scale only what you need to scale, so you aren't wasting hardware on things that don't matter. There have actually been a lot of cases where companies switch to microservices and end up using less hardware, at least in the beginning. What happens afterward is that they have now unlocked scalability they couldn't previously achieve, and so they end up adding hardware as their business expands beyond its previous limits.
@hariharanthirumeni 1 month ago
@ConfluentDeveloperRelations I agree with your comments from a scalability perspective. Almost all microservices today are hosted in a private/public cloud (I used to run microservices on plain old Linux servers a decade ago). Most applications in financial institutions are on a private cloud, i.e. a Kubernetes/OpenShift/PCF platform, due to security policies imposed by the government. Setting up such a platform requires a bare minimum of three nodes, plus additional manpower to maintain it. To achieve resiliency you need multiple clusters spanning multiple data centers, plus an industrial-grade load balancer. If this were a monolith, we would require two nodes and two instances of an application server.
@suleymantopdemir4028 1 month ago
ai guy 🤖
@kzmOP 1 month ago
*thank you sir. A veteran IT person learned a lot.❤*
@Fikusiklol 1 month ago
Thanks, every bit of information counts 😄
@croydon21H 1 month ago
@3:28 What is the purpose of extra consumers in a consumer group? Isn't this making things more manual and error-prone, i.e. incorrect configuration hurting performance? I am not sure I understand why we need more groups vs. more consumers in a group.
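For context on groups vs. consumers, a minimal hypothetical sketch: consumers that share a group.id divide the topic's partitions between them, while a consumer with a different group.id is assigned all partitions and reads its own full copy. The topic name, group ids, and broker address are placeholders.

// Sketch: two members of one group split the work; a second group re-reads everything.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupsVsConsumers {

  static KafkaConsumer<String, String> consumer(String groupId) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9092");
    props.put("group.id", groupId);
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    return new KafkaConsumer<>(props);
  }

  public static void main(String[] args) {
    // Two members of the SAME group: each is assigned a subset of partitions,
    // so together they process the topic once, in parallel.
    KafkaConsumer<String, String> workerA = consumer("billing");
    KafkaConsumer<String, String> workerB = consumer("billing");

    // A DIFFERENT group: it gets all partitions and reads everything,
    // independently of the "billing" group's progress.
    KafkaConsumer<String, String> auditor = consumer("audit");

    workerA.subscribe(List.of("orders"));
    workerB.subscribe(List.of("orders"));
    auditor.subscribe(List.of("orders"));
    // poll() loops omitted; partition assignment happens on the first poll.
  }
}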
@ghanty123 1 month ago
Excellent presentation, Adam. It gave me great insights into headless architecture, something I was already pursuing at some level without knowing the terminology. Confluent continues to be on the bleeding edge of data processing. Thanks again for putting this out here.
@ConfluentDeveloperRelations 1 month ago
👋Hey there! Thanks for watching; hope you are now a little bit more comfortable working with replication in Kafka, and perhaps a bit more confident that well-built systems don't lose data that easily. Don’t forget to subscribe if you found this useful, as we release content quite often! If you have any questions or feedback, drop a comment below; we’d love to hear from you! 😊 Also, check out the description for links to related resources. Enjoy! 🎉
@hicham8331 1 month ago
How do you do authentication on the Kafka Connect API?
@nnz13 1 month ago
Schema microservice for microservices 😂
@ConfluentDeveloperRelations 1 month ago
Wade here. Yeah, creating a separate microservice for managing your schemas is definitely not the right way to go. I'm actually familiar with a project that went in kind of that direction: they had a "Database" microservice, a "Front End" microservice, etc. Not quite what the intent was. Microservices should be based around domain concepts rather than technical elements. Having said that, using a schema registry is a good idea. However, I wouldn't consider that to be a microservice by itself. It's more like a piece of infrastructure, the same way that your database is infrastructure. Think of the schema registry as a database for your schemas, rather than a microservice for your schemas.
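As an illustration of treating the registry as infrastructure, here is a minimal sketch: the client never calls a "schema microservice" directly; it just points its serializer at the Schema Registry via configuration, much as it points at a database. The broker address, registry URL, topic, and toy "Order" schema below are placeholders.

// Sketch: the Confluent Avro serializer registers/fetches schemas behind the scenes.
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RegistryAsInfrastructure {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
    props.put("schema.registry.url", "http://schema-registry:8081"); // the only registry "integration"

    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"amount\",\"type\":\"double\"}]}");

    GenericRecord order = new GenericData.Record(schema);
    order.put("id", "o-1");
    order.put("amount", 42.0);

    try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>("orders", "o-1", order));
    }
  }
}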
@roobanmathi968 1 month ago
very informative.
@MarciaAllen-y6u 1 month ago
I love how approachable you make tough topics!
@RainerArencibia 1 month ago
Great video! Quick question: what would happen if the Kafka service is down in the EDA model? Or, how robust is the Kafka service?
@womanwithtoomanyhobbies 1 month ago
Wasn't the first topic "All that glitters is not gold"? Did it come in a haphazard order when --from-beginning was used?