Excellent... it can't be explained more simply than this.
@JjJj-q8w2n 3 months ago
John doe reaching out
@georgelza 4 months ago
hihi... do you have notes on how to relocate the DerbyDB files onto persistent storage, so the container can be killed and recreated and its metastore persists? At the moment I've moved the DerbyDB off to PostgreSQL, but it adds weight/complexity to the build.
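One way to do what this comment describes, as a hypothetical sketch: by default Derby writes its files into a `metastore_db` directory (the location is set by `javax.jdo.option.ConnectionURL` in hive-site.xml), so bind-mounting that directory onto the host lets a recreated container pick up the old metastore. The image name and all paths below are placeholders, not taken from the video.

```shell
# Hypothetical sketch -- "my-hive-image" and the paths are placeholders.
# Derby writes its files to the directory named in hive-site.xml, e.g.:
#   javax.jdo.option.ConnectionURL = jdbc:derby:;databaseName=/opt/hive/data/metastore_db;create=true
# Bind-mount that directory (and the warehouse) so a recreated container reuses them:
docker run -d --name hive \
  -v /data/hive/metastore_db:/opt/hive/data/metastore_db \
  -v /data/hive/warehouse:/opt/hive/warehouse \
  my-hive-image
```

Note Derby's embedded mode only allows one process to open the database at a time, which is one reason people move to PostgreSQL despite the extra weight.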
@likhithcr7517 4 months ago
The best video I have ever seen on installing Spark on Ubuntu. Keep doing great work, my brother.
@mumovictor1102 4 months ago
Easy to follow setup, kudos to the creator.
@blismosacademy 4 months ago
You will need a company email address to install and practice with Query Surge. If you don't have a company email, you may also use a college email address if available.
@yaserarafath6581 4 months ago
I am searching for a job, so I don't have a company email ID. What should I do?
@pulastyadas3351 4 months ago
So helpful for understanding all the concepts.
@Saikumar-qb2jj 5 months ago
Crystal clear explanation, bro 🎉 But no reach 😢😢
@findsarfaraz 5 months ago
Don't waste your time here; this is the most useless installation video I have ever seen.
@rubenagurcia906 7 months ago
Thanks, it works!!!
@DorcasHosea-zo8ol 7 months ago
Mine is showing a "no such file or directory" error.
@RoClaGo 7 months ago
Thanks!!!
@simransinha5298 8 months ago
No, I'm not feeling awesome.
@akshaykadam1260 8 months ago
Very informative video. Thank you! 🙂
@chinmay7781 8 months ago
I had a question here, sir. Can we call the logical plan created by the Catalyst Optimizer the lineage, and likewise can we call the physical plan the DAG?
@blismosacademy 8 months ago
No, lineage and the logical plan are not the same. Lineage in Spark refers to the sequence of transformations that led to the creation of a DataFrame or RDD (Resilient Distributed Dataset), whereas the logical plan is an abstract representation of the query or operations to be performed, which can be either resolved or unresolved. And yes, the DAG (Directed Acyclic Graph) is based on the physical plan, which is Spark's final execution plan.
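The distinction can be seen directly in PySpark. This is a minimal sketch, assuming a local `pyspark` installation (the exact plan text varies by Spark version): `explain(extended=True)` prints the parsed, analyzed, and optimized logical plans followed by the physical plan, while `toDebugString()` on the underlying RDD shows the lineage.

```python
# Sketch: inspect Catalyst's plans vs. RDD lineage.
# Assumes `pyspark` is installed; output text differs across Spark versions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("plans").getOrCreate()

df = spark.range(10).filter("id % 2 = 0").selectExpr("id * 2 AS doubled")

# Logical plans (parsed/analyzed/optimized) followed by the physical plan
df.explain(extended=True)

# Lineage: the chain of transformations behind the underlying RDD
print(df.rdd.toDebugString().decode())

spark.stop()
```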
@adnanrasheed5494 9 months ago
Thank you so much, sir! Your guidance on installing Apache Airflow enabled me to install it easily. Thanks for your kind cooperation in this regard.
@BOSS-AI-20 11 months ago
I'm getting an error while initializing the db.
@princess_eye_status a year ago
Great explanation 😃👍🏻... Thank you so much, sir 😇
@blismosacademy a year ago
Thank you for your support 😃
@Vikasptl07 a year ago
Still on MapR?
@bernatferrer425 a year ago
I'm getting the error "this site can't be reached, localhost refused to connect".
@blismosacademy a year ago
@bernatferrer425 Could you email a screenshot of the error to [email protected]?
@Braven77 a year ago
Thank you for the video. But I have a question: while running "airflow scheduler", one message kept repeating, "adopting or resetting orphaned tasks for active dag runs", and it never finished. What could that be? Thank you!
@alagandulasandeep9148 a year ago
I need an ETL automation testing course.
@praketasaxena a year ago
This is great!
@DanaiTobaiwa a year ago
I've managed to install and configure Airflow, but I'm having issues running the DAG. When I try to trigger it, I keep getting an error that the DAG is not found in DagModel.
Sure: +91 7892528411. You can also check our website for more information: www.blismosacademy.com
@FabricioFerreira-hw7jf a year ago
Thanks from Brazil!
@arunavasamanta9264 a year ago
Mahantha sir, great ♥
@krishnakanth4830 a year ago
Hi, thanks for the video; it's very informative. After the "airflow scheduler" command, what shortcut did you use to stop it and move on to the "airflow webserver" command? Thanks in advance.
@blismosacademy a year ago
Thank you for your message! You can press Ctrl+C to stop the scheduler (note that Ctrl+Z only suspends the process, leaving it holding its resources). Alternatively, you may open a new terminal and run "airflow webserver".
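For reference, a common pattern is to run the two processes side by side. This is a sketch assuming a standard Airflow 2.x install with the default port; adjust to your setup.

```shell
# Terminal 1: start the scheduler; stop it with Ctrl+C (SIGINT).
# Ctrl+Z would only suspend it, not terminate it.
airflow scheduler

# Terminal 2: start the webserver (8080 is the default port)
airflow webserver --port 8080
```

Backgrounding both with `&` and redirecting their logs to files also works if you only have one terminal.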
@pawanchoudhary619 a year ago
After running beeline -u jdbc:hive2://, I get "0: jdbc:hive2:// (closed)", and if I run any query I get "Connection is already closed". How do I resolve this issue?
@blismosacademy a year ago
Instead of using beeline -u jdbc:hive2://, you can invoke Hive by using the hive command in the terminal. If you still get an error, please mail a screenshot to [email protected]
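If you do want beeline rather than the hive CLI, the usual flow (a sketch assuming a local HiveServer2 on its default port 10000; paths depend on your install) is to start HiveServer2 first and then point beeline at it, since `jdbc:hive2://` with no host only works in embedded mode:

```shell
# Start HiveServer2 in the background (give it a minute to come up)
hive --service hiveserver2 &

# Then connect from another terminal; 10000 is the default port
beeline -u jdbc:hive2://localhost:10000
```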
@pradnyasutar6910 a year ago
Do we need to install SQLite before installing Airflow? I am getting an error when I run the "airflow scheduler" command.
@blismosacademy a year ago
No @pradnyasutar6910, SQLite is not required for the Airflow installation. You can mail the screenshot to learn@blismosacademy and we will guide you through the installation.
@pradnyasutar6910 a year ago
@blismosacademy OK, thanks.
@RibsCribs 7 months ago
@pradnyasutar6910 Hey, I'm getting the same error too. How did you solve it?
@N.el.M a year ago
Thank you for the nice walk-through👍
@TannerBarcelos a year ago
This finally cleared up the issues I was having understanding the abstraction layer Hive represents. I understood Hive was a collection of data in HDFS, or some other distributed file system, but I was confused about how that worked, why it was that way, and what made it work. I knew the metastore played a role, but I was also confused about how data even gets inserted into Hive, because it feels like a database, so again, that abstraction was confusing. This video cleared it all up! You earned a sub. Thanks a ton.
@blismosacademy a year ago
Thank you so much for your comment! We are happy that you got a better understanding of Hive.
@moeal5110 a year ago
I have followed along with you, but I am getting this error; any help?
/opt/spark/spark-3.4.0-bin-hadoop3/bin/spark-class: line 71: /usr/lib/jvm/jdk-11.0.19/bin/java: No such file or directory
/opt/spark/spark-3.4.0-bin-hadoop3/bin/spark-class: line 97: CMD: bad array subscript
@blismosacademy a year ago
Hello @moeal5110, delete the previous spark-3.4.0-bin-hadoop3 folder and install a fresh copy of Spark in your home directory: i) instead of the /opt/spark<file_Path> location, ii) go to home, open a terminal, get the path of the folder with the pwd command, paste it into your .bashrc file, save it, and start Spark again. Please follow the above steps, and if you get any error or issues, reach out to us: [email protected]
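The ~/.bashrc entries that reply refers to might look like the sketch below. All paths here are illustrative assumptions (use whatever `pwd` prints for your unpacked Spark folder, and your actual JDK location); the "No such file or directory" error above typically means JAVA_HOME points at a JDK path that does not exist.

```shell
# Hypothetical ~/.bashrc entries -- adjust both paths to your machine
export JAVA_HOME=/usr/lib/jvm/jdk-11.0.19
export SPARK_HOME="$HOME/spark-3.4.0-bin-hadoop3"
export PATH="$SPARK_HOME/bin:$SPARK_HOME/sbin:$JAVA_HOME/bin:$PATH"
```

After editing, run `source ~/.bashrc` (or open a new terminal) so the changes take effect.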
@vaibhavtiwari8670 a year ago
GOOD CONTENT
@flexxy7649 a year ago
Thank you!
@NamanRawal-l9j a year ago
Thank you, this was a very clear video and explanation of the Hive metastore.
@blismosacademy a year ago
Thank You!!!
@chetanhande1570 a year ago
When you were inserting the data in standalone mode (with an INSERT query) into the customer/orders table in Hive, which file format got created in the warehouse dir? Was it ORC? (the file with a name like 000000_0)
@anantghuge6258 a year ago
Please provide your number.
@Chalisque 2 years ago
It's worth knowing that you can use cat > a_file_name and then type the contents of the file at the command line, like using a heredoc in a script. (I like doing this when writing short scripts, as I can see the contents of the terminal as I write, and it's a good exercise in thinking before you type.)
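A minimal sketch of that tip (file name is a placeholder): interactively, `cat > file` reads from the keyboard until Ctrl+D; the scripted equivalent is a heredoc, where input ends at the delimiter instead.

```shell
# Interactive form: type the contents, then press Ctrl+D to finish
#   cat > hello.sh

# Scripted equivalent with a heredoc; quoting 'EOF' disables
# variable expansion inside the body
cat > hello.sh <<'EOF'
#!/bin/sh
echo "hello from a heredoc"
EOF

sh hello.sh
```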