I will be installing Confluent Kafka using the method you described. Will I be able to connect my Spark job to the Kafka topic I created on Confluent Cloud?
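For reference, Spark Structured Streaming reads a Confluent Cloud topic through its Kafka source, authenticating with an API key over SASL_SSL. A minimal sketch of the source options involved, assuming placeholder values for the bootstrap server, API key/secret, and a topic named "orders":

```python
# Build the option map a Spark job would pass to
# spark.readStream.format("kafka") for a Confluent Cloud topic.
# <BOOTSTRAP_SERVER>, <API_KEY>, and <API_SECRET> are placeholders.

def confluent_kafka_options(bootstrap, api_key, api_secret, topic):
    return {
        "kafka.bootstrap.servers": bootstrap,
        # Confluent Cloud requires TLS plus SASL/PLAIN auth with an API key.
        "kafka.security.protocol": "SASL_SSL",
        "kafka.sasl.mechanism": "PLAIN",
        "kafka.sasl.jaas.config": (
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            f'username="{api_key}" password="{api_secret}";'
        ),
        "subscribe": topic,
    }

opts = confluent_kafka_options(
    "<BOOTSTRAP_SERVER>:9092", "<API_KEY>", "<API_SECRET>", "orders")

# In the job itself these options feed the Kafka source:
#   reader = spark.readStream.format("kafka")
#   for k, v in opts.items():
#       reader = reader.option(k, v)
#   df = reader.load()
```

The job also needs the spark-sql-kafka connector package on its classpath (e.g. via --packages), matching your Spark version.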
@mohammadhaque1873 4 months ago
Best tutorial ever on Impala architecture. Thank you.
@VishalThakur-wo1vx 4 months ago
Good video. But a little explanation of why the Kappa architecture is not a silver bullet for all use cases would have helped.
@azwan1992 5 months ago
Excellent. Subscribed.
@adnanraza111 7 months ago
Excellent video, especially the last part clarifying when to use Hive and when to use Impala...
@prestonmccauley43 7 months ago
Clear, easy to follow, and even the Jupyter example was clear.
@SiddheshPrabhugaonkar 7 months ago
Many congratulations on the recognition, and thank you for this informative webinar.
@nithishkrishnans5350 7 months ago
Hi, in all your videos you start the services locally, in dev mode. I just want to know how to start a full cluster for a production environment. Can you please help me with this?
@arnabkgcoin 7 months ago
Very good explanation!
@SantoshKumar-yr2md 8 months ago
If I want to pursue Gen AI, what prerequisites do I need to learn? I am a manager in healthcare and want to add Gen AI to my profile. I know SQL at an intermediate level but don't have extensive knowledge of programming or coding.
@NeoStudy7 8 months ago
Thanks, bro!
@ramakambhampati5094 8 months ago
Fantastic info. Thanks
@akhil123456000 9 months ago
I was looking for a detailed comparison on the web, and I am glad I found this channel; very informative and detailed.
@DancingWithData 9 months ago
This was a great video. Thank you very much!
@DataCouch 9 months ago
Glad you enjoyed it!
@abdelrahimahmad7801 10 months ago
Hi, thanks for the video. However, most of these points are not limitations, as NiFi is designed for specific data processing tasks. Cheers!
@ananthpai5563 10 months ago
Can you make a video on installing Impala and querying HDFS data in a pure Hadoop environment on Red Hat?
@gustavomartin2 11 months ago
Excellent explanation, thanks.
@DataCouch 9 months ago
You are welcome!
@akshayaghade284 11 months ago
Bro, I'm facing an issue in the middle of the installation: cd opt/anaconda3 shows the error "no such file or directory".
@Aysh_Sach_16-2 11 months ago
Can we please have more such sessions?
@nadranaj 1 year ago
Excellent explanation
@prasanna9123 1 year ago
I have a doubt: whenever a producer produces a message/event, it goes to the leader partition. At the same time, if we also consider partitioning, i.e., hash partitioning, the message is hashed to a particular partition. Based on the partitioning, will that particular partition be made the leader partition? Or is my understanding wrong? Can you please get back to me?
@anonymous_devil3730 1 year ago
First you said ZooKeeper does the election, and then you said the broker that is the controller elects the next leader, right? Isn't this contradictory? Do you have a very in-depth, truly advanced video explaining Kafka, so that we can work as Kafka engineers in production and not just on college projects? Also, please clarify the above contradiction. Thanks.
@dharmeshajudiya1709 1 year ago
Detailed explanation, thanks for sharing.
@bhavypratap2844 1 year ago
Sir, I have started studying deep learning, but I didn't get the significance of the kernel. Can you briefly explain? Thank you.
@DataCouch 1 year ago
Hi @bhavypratap2844, in deep learning, the term "kernel" refers to a small matrix of weights used in the mathematical operations performed on data. Say you have a grid of numbers and you want to perform some calculation on them: the kernel is a small mathematical function or operation that you slide across the data, applying it at each position.
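The sliding multiply-and-sum described above can be sketched in plain Python; the 4x4 input and the 3x3 vertical-edge kernel below are made-up illustrative values:

```python
# A minimal sketch of what a convolution kernel does: slide a small grid of
# weights over the input, multiplying and summing at each position.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):          # each valid vertical position
        row = []
        for j in range(iw - kw + 1):      # each valid horizontal position
            # Weighted sum of the kh x kw window anchored at (i, j).
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 "image" with a vertical edge between columns 1 and 2.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A hand-picked 3x3 kernel that responds to vertical edges.
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
result = convolve2d(image, edge_kernel)  # -> [[3, 3], [3, 3]]
```

Each output cell is the weighted sum of one 3x3 window, so the 4x4 input shrinks to a 2x2 output; in a neural network the kernel's weights are learned during training rather than hand-picked as here.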
@sergefedorow8430 1 year ago
NiFi has SQL processors. What do you mean by "No SQL interface in NiFi"?
@DataCouch 1 year ago
To clarify: when we said "No SQL interface in NiFi," we meant that NiFi does not provide an interactive SQL query engine over your data in the way Hive or Impala do. NiFi primarily focuses on data flow management, transformation, and routing across various systems and formats. It does ship processors that work with relational databases, such as ExecuteSQL and QueryDatabaseTable, and it can run custom logic through processors like ExecuteScript or ExecuteStreamCommand, but these execute queries as steps in a flow rather than offering an ad hoc SQL interface for exploring data.
@tech-n-data 1 year ago
Perfect little learning nugget.
@IamMQaisar 3 months ago
Exactly! 💯
@nitinagarwal3574 1 year ago
The airline example does not make sense; check-in can be a lookup on the batch layer, so it doesn't exactly illustrate streaming data.
@DataCouch 1 year ago
Hi Nitin, Thanks for your comment. We appreciate your feedback. You're right, check-in data can be a lookup on the batch layer. However, there are some cases where it makes sense to have it in the streaming layer. For example, if you want to provide real-time recommendations to passengers based on their recent check-in activity, then you'll need to have that data in the streaming layer. In our video, we were trying to provide a general overview of the Lambda Architecture. We didn't go into all of the possible use cases, but we wanted to give you a good starting point. If you have any other questions about the Lambda Architecture, please feel free to ask.
Do you have any class on Kafka with Avro using Scala?
@DataCouch 1 year ago
Hi Prakash, we offer Confluent's standard courses. Here is the link to the developer course: drive.google.com/file/d/18L0xHVcr0LrN_kZqIhRIzhMlC2nfL5CZ/view?usp=sharing It has code in Java/Python/C#.
@kongyoutube 1 year ago
Nice one, thank you.
@saikrishna-bq8bc 1 year ago
Great!!!! Clear-cut explanation.
@DataCouch 1 year ago
Glad you liked it
@vt1454 1 year ago
Good summary of differences
@LearnBigData 1 year ago
Wonderful content!! Very, very knowledgeable.
@rohit72486 1 year ago
Great and a detailed explanation. Nice 👍
@DataCouch 1 year ago
Glad it was helpful!
@sarfarazhussain6883 1 year ago
Hi Bhavuk, suppose we have a managed Confluent environment, but I want to run Kafka Connect in-house (i.e. self-managed) using VMs. I want to read data from Postgres using the in-house Kafka Connect and publish it to Kafka topics in the managed Confluent environment. Can we do this, and if yes, what do we need to take care of infra-wise?
@DataCouch 1 year ago
Yes, you can do this. In order to run in-house Kafka Connect, you will need to provision VMs for your Kafka Connect clusters. You will also need to make sure that the VMs have access to the Kafka topics in the managed Confluent environment. Additionally, you will need to configure your Kafka Connect cluster to connect to your Postgres database and properly configure the source connectors for the data you want to read. Once this is complete, you should be able to publish data from Postgres to your Kafka topics in the managed Confluent environment.
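Infra-wise, the key piece is the self-managed Connect worker's security configuration pointing at the Confluent Cloud cluster. A hedged sketch of the relevant worker properties, with placeholder endpoint and credentials:

```properties
# Self-managed Kafka Connect worker -> managed Confluent Cloud cluster.
# <CCLOUD_BOOTSTRAP>, <API_KEY>, and <API_SECRET> are placeholders.
bootstrap.servers=<CCLOUD_BOOTSTRAP>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";

# Connect's internal producers and consumers need the same credentials.
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";

# Internal topics storing the Connect cluster's state live in Confluent Cloud too.
group.id=inhouse-connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
```

The Postgres source connector itself (e.g. a JDBC or Debezium connector) is installed on the worker VMs and registered through the Connect REST API; the worker VMs also need network access to both Postgres and the Confluent Cloud endpoint.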
@arpitsharma7695 1 year ago
Why do you have to move the files/folders?
@RishiRajKoul 1 year ago
Can we have multiple billions of rows of data in Kudu? Kudu seems to be like Oracle, correct?
@DataCouch 1 year ago
Yes, you can have multiple billions of rows of data in Kudu. However, the exact size limit of your data will depend on the specific hardware resources you have available.