Got a question on the topic? Please share it in the comment section below and our experts will answer it for you. For Edureka Hadoop Training and Certification Curriculum, Visit our Website: bit.ly/2Ozdh1I
@debjitsengupta3238 (7 years ago)
I have been in the industry for the last 17 years. On Big Data, I have gone through many tutorials in the past and also attended a course. None can match Edureka. The material is explained in a simple, crisp and lucid manner. My best wishes are with them.
@edurekaIN (7 years ago)
Hey Debjit, thanks for the wonderful feedback! We are glad you found it useful. Do subscribe to our channel to stay posted on upcoming videos. Cheers!
@UltraPokemonwalkthru (8 years ago)
One of the best videos that you can get for learning Hive. The instructor is very clear and the subject is covered very well. Everyone wanting to know Hive should check out this video. Highly RECOMMENDED!!
@edurekaIN (8 years ago)
+UltraPokemonwalkthru We are dancing with joy! Thanks for your lovely words. It is testimonials from people like you that keep us going. Do keep checking back in for more videos. You must also check out our blogs sometime @ edureka.co/blog/
@khkyt (9 years ago)
Simple, clear and crisp for beginners. Topics not covered here: SerDe and UDFs in Hive.
@edurekaIN (9 years ago)
+Hemanth Kumar Thanks for the feedback Hemanth! You can check out our playlist for videos on the mentioned topics. Please subscribe to our channel for more videos.
@vikram.vibing (6 years ago)
Excellent video! It is very hard to find such a comprehensive tutorial for FREE! Thanks Edureka!
@edurekaIN (6 years ago)
Hey Vikram, we want everyone who is interested to learn to have a platform to learn and grow! Glad to see people appreciating it! :)
@palashjain1992 (7 years ago)
The lecture is really good. It is very informative and cleared all the basic concepts of Hive.
@patillinganna (4 years ago)
Clear, concise and precise. Thank you!
@edurekaIN (4 years ago)
You are welcome👍
@digwijoymandal8662 (9 years ago)
Thanks very much. The explanation of the partition and bucket showing the hdfs path was great.
@edurekaIN (9 years ago)
+Digwijoy Mandal Thank you! Appreciate your feedback. Please do check out our other tutorials. You can also find some interesting blogs on big data here: www.edureka.co/blog/
@bharatk212 (4 years ago)
Very nice and informative tutorial, with clear explanations and practical examples. Thanks for the video and your efforts.
@karthikkumar2004 (9 years ago)
Excellent session; all topics explained very well with practicals. Thank you!
@edurekaIN (9 years ago)
Thank you for the great feedback +Naga Karthik Kumar Kuriseti ! Please subscribe to our channel for more videos.
@com2ram (9 years ago)
Thanks for such a nice and clear explanation. I'm sure it will help lots of other people as well. Keep posting!
@edurekaIN (9 years ago)
+ram sharma Thank you! We will definitely keep posting. We hope you're reading our blogs too --> www.edureka.co/blog/
@indiras213 (6 years ago)
Great explanation, so much information 🙂. Thanks for this, thumbs up Edureka!
@edurekaIN (6 years ago)
Thank you for watching our video. Do subscribe, like and share to stay connected with us. Cheers :)
@geetamacha3455 (6 years ago)
Well explained with regard to the basic concepts of Hive and Hadoop. Thank you for posting this video.
@chandradasaka3376 (8 years ago)
Very well explained. My appreciation and thanks to the presenter.
@edurekaIN (8 years ago)
Thanks for taking time out to check out our video. We are so happy you liked it. Please do keep checking back in for more videos. Whenever you have the time, you must also check out our blog page @ edureka.co/blog/ and tell us what you think. Have a good day!
@mayankbitsful (6 years ago)
Really good stuff on Hive; everything explained very clearly. Thanks!
@edurekaIN (6 years ago)
Thank you for watching our video. Do subscribe, like and share to stay connected with us. Cheers :)
@utpalsarkar3178 (8 years ago)
Thanks a lot for uploading this video; it makes it easy to start exploring Hive further.
@edurekaIN (8 years ago)
+Utpal Sarkar We are happy we could spark your interest in Hive! You must also check out our blogs sometime, we have some pretty interesting stuff there; edureka.co/blog
@shalendra20feb (7 years ago)
Very nice explanation. Thanks, sir.
@rehari1294 (9 years ago)
Great video with clear explanation!!
@edurekaIN (9 years ago)
Thank you for the feedback +Rehari ! Please subscribe to our channel for more videos.
@prabhat123456789 (7 years ago)
Thank you very much, sir, and Edureka, for the neat and nice explanation.
@vidyapatil6718 (8 years ago)
Thank you very much !! Very clear instructions and great video !!
@edurekaIN (8 years ago)
+Vidya Patil Glad you liked it. Do keep checking back in for more videos. Also, you must visit our blog page sometime; edureka.co/blog
@gauravkunararvindpawar5453 (6 years ago)
Thank you, Sir. The best tutorial. Excellent video and great explanation!! :)
@edurekaIN (6 years ago)
Hey Gauravkunar, thank you for watching our video. Do subscribe and stay connected with us. Cheers :)
@juliagrazielastein9640 (3 years ago)
Thanks
@devapriyakkdk (7 years ago)
Excellent Tutorial !!!
@vijaymathew8606 (6 years ago)
Thank you very much sir. Good explanation.
@edurekaIN (6 years ago)
Hey Vijay thank you for watching our video. Do subscribe, like and share to stay connected with us. Cheers :)
@MayankGupta-el9rj (7 years ago)
Hello Team, thanks for this simple explanation.
@lukesimons7620 (7 years ago)
This is well done. Thank you
@edurekaIN (7 years ago)
Hey Luke, thank you for watching our video. Do subscribe, like and share to stay connected with us. Cheers :)
@chandrashekharmarathe9752 (9 years ago)
Superb Explanation...
@edurekaIN (9 years ago)
Chandrashekhar Marathe Thank you!
@lucanatali6416 (8 years ago)
Very good explanation! Are the slides available somewhere? Thanks.
@edurekaIN (8 years ago)
Thanks for your good words! We will get back with responses to your query very soon. In the meantime, you can always check out our blogs @ edureka.co/blog/ We have blogs on all your favourite topics!
@deepakkini3835 (6 years ago)
What are the different file formats that Hive supports, and what are the advantages of using them?
@deep54046 (8 years ago)
Sir, can you please post some interview questions on Hive and their explanations? It would be great.
@edurekaIN (8 years ago)
Hey Deep, you could check out this tutorial for Hadoop career paths and interview questions: goo.gl/myHIgl . You can also go through our Hadoop interview questions blog which has a section on Hive. Here's the link: goo.gl/sbzE70
@anujagrawal7153 (7 years ago)
Excellent video and great explanation!! :)
@venkatesanramamurthy4411 (8 years ago)
Excellent explanation, thanks!
@edurekaIN (8 years ago)
+Venkatesan Ramamurthy Glad you liked it! Do keep checking back in for new videos, we add them every single week!
@aakankshagupta7393 (8 years ago)
Thanks very much for such a good explanation. Could you please let me know the location where an external table is created?
@edurekaIN (8 years ago)
Thanks for your wonderful feedback on the Hive video. An external table's data is located in HDFS but outside the Hive warehouse directory, so the data continues to exist even if the table is dropped. If you require a more detailed answer to this query, please leave your email address and contact number here, and we will get an in-house expert to answer it.
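That managed-vs-external distinction can be sketched in HiveQL as follows; this is an illustrative example, not from the video, and the table names and HDFS path are hypothetical:

```sql
-- Managed table: data lives under the Hive warehouse directory
-- (typically /user/hive/warehouse/<table_name>);
-- DROP TABLE deletes both the metadata and the data files.
CREATE TABLE txn_managed (txn_id INT, amount DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- External table: data stays at the path you specify;
-- DROP TABLE removes only the metadata, the files in HDFS remain.
CREATE EXTERNAL TABLE txn_external (txn_id INT, amount DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/txn_data';
```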
@saleemeee1 (8 years ago)
You are awesome, sir; explained very well.
@edurekaIN (8 years ago)
+Shaik Saleem, thanks for checking out our tutorial! We're glad you liked it. Here's another video that we thought you might find useful: kzbin.info/www/bejne/omK0nniGeqaYo9U. Hope this helps. Cheers!
@prakasamvelusamy5799 (7 years ago)
Very Nice. Thanks.
@edurekaIN (7 years ago)
Thank you for watching our video. Do subscribe, like and share to stay connected with us. Cheers :)
@parthshah1154 (7 years ago)
Nice video with examples, and a really good explanation. I just have a few questions:
1) Can we change the table structure once it is created, e.g. add a new column or change the datatype of an existing column?
2) When you used buckets, you used CLUSTERED BY (state); so when we show the result of a bucket file using cat, it should give us the results state-wise, correct? Sorry if I misunderstood this topic.
3) Do you have the same kind of explanation video for Pig?
4) Do you have a list of interview questions for Hive and Pig?
@edurekaIN (7 years ago)
Hey Parth, thanks for checking out our tutorial! We're glad you liked it.
1. Yes, you can. In Hive, to change the data type of a column: ALTER TABLE table_name CHANGE col_name col_name newType; to add a new column: ALTER TABLE table_name ADD COLUMNS (new_col datatype);
2. Yes, if the table is bucketed by a specific column, data is distributed based on the column specified.
3. Here are a couple of tutorials on Pig that we think will be useful: kzbin.info/www/bejne/j6iXmZaJh5J3fbc and kzbin.info/www/bejne/aqu7pJqvg5mNetE.
4. Here are interview question blogs on Pig and Hive that will help you:
www.edureka.co/blog/interview-questions/hadoop-interview-questions-pig/
www.edureka.co/blog/interview-questions/hive-interview-questions
www.edureka.co/blog/interview-questions/top-50-hadoop-interview-questions-2016/
Hope this helps. Do subscribe to our channel to stay posted on upcoming tutorials. Cheers!
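The two ALTER TABLE forms mentioned in that answer, written out as a hedged sketch against a hypothetical table and columns (none of these names come from the video):

```sql
-- Assume a hypothetical table 'customers' with a column 'age' of type INT.

-- Change a column's data type: the column name is repeated
-- (old name, new name), followed by the new type.
ALTER TABLE customers CHANGE age age BIGINT;

-- Add a new column to the end of the (non-partition) columns.
ALTER TABLE customers ADD COLUMNS (city STRING);
```

Note that these statements change only the table's metadata; Hive does not rewrite the underlying data files.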
@ankurtyagi4770 (9 years ago)
Great job. Keep posting. Thanks!
@edurekaIN (9 years ago)
Thank you for the feedback +Ankur Tyagi ! Please subscribe to our channel for more videos.
@tharun495 (9 years ago)
Kudos to the trainer!
@edurekaIN (9 years ago)
Thank you for the feedback +Tharun Kumar ! Please subscribe to our channel for more videos.
@girichokkalingam1349 (8 years ago)
Good explanation!
@bhawneshrai933 (8 years ago)
Nicely explained
@edurekaIN (8 years ago)
+Bhawnesh Rai Glad you liked it. Do check back in for more videos! You must also check out our blog page sometime @ edureka.co/blog
@rohinirithe1522 (7 years ago)
How do we decide how many buckets to create for any output data? And on which parameter should we create buckets? For the transaction_records table we used hash values (% modulo) of the transaction id, but in another example we used the state column to create buckets, so how is the data distributed in that case?
@edurekaIN (7 years ago)
Hey Rohini, thanks for checking out our tutorial! Here are the answers to your queries:
1. How to decide the number of buckets, and on which parameter to bucket: the number of buckets depends on the task you want to perform. When you load data you often do not want one load per mapper (especially for partitioned loads, because this results in small files), and buckets are a good way to define the number of reducers running. So if your cluster has 40 task slots and you want the fastest ORC creation performance possible, you would want 40 buckets (or distribute your data by 40 and set the number of reducers).
2. On bucketing by a column such as state: hashing can be used to index character data. Instead of building an index on a varchar(50) column, for example, the data values can be hashed into smaller smallint (2-byte) or integer (4-byte) values. Hashing is also used in distributed databases and table partitioning to get a deterministic place to hold your data; for example, given 10 database servers each holding a segment of the data, the hash function GetPartitionHash( @transaction_number ) can be used to return the values 1 to 10.
Hope this helps. Cheers!
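A minimal HiveQL sketch of bucketing by a string column, with hypothetical table names: Hive assigns each row to bucket hash(state) mod 4, so all rows for a given state land in the same bucket file (and one bucket file may hold several states):

```sql
-- Hypothetical bucketed table; CLUSTERED BY must appear before ROW FORMAT.
CREATE TABLE txn_bucketed (txn_id INT, amount DOUBLE, state STRING)
CLUSTERED BY (state) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- In older Hive versions (pre-2.0), bucketed inserts required this setting
-- so the number of reducers matched the number of buckets.
SET hive.enforce.bucketing = true;

-- Populate from a hypothetical staging table; each of the 4 output
-- files in HDFS corresponds to one bucket.
INSERT OVERWRITE TABLE txn_bucketed
SELECT txn_id, amount, state FROM txn_staging;
```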
@shilpasaha9206 (6 years ago)
Can we change the location of a managed table? And why doesn't the data get dropped for an external table?
@edurekaIN (6 years ago)
Hey Shilpa, yes, we can change the default location of managed tables using the LOCATION keyword while creating the managed table; the user specifies the storage path of the managed table as the value of the LOCATION keyword. Dropping an external table only deletes the schema of the table; hence the data isn't dropped. Hope this helps!
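As a hedged sketch of that LOCATION usage (the path and table name here are illustrative, not from the video):

```sql
-- Managed table stored at a custom HDFS path instead of the default
-- warehouse directory. Because the table is still managed,
-- DROP TABLE deletes the files at this path as well.
CREATE TABLE sales_managed (sale_id INT, amount DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/custom/sales_managed';
```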
@nancysharma8533 (9 years ago)
Awesome trainer! Sir, can you provide the dataset (txns1.txt) so I can practice?
@digwijoymandal8662 (9 years ago)
+Nancy sharma Look it up on GitHub.
@deepalisamtani329 (8 years ago)
Hi! The video is awesome and explains the concepts in a very lucid manner. I wanted to ask one thing: I installed VMware Player and the Cloudera VM, but when I try to open the Cloudera VM in VMware Player, it says the .vmx file is corrupt, so I am not able to start the Cloudera VM. Could you suggest a solution to this problem, or suggest some other VM? Please do share; I have to do hands-on for Hive ASAP. Thanks!
@edurekaIN (8 years ago)
Hey Deepali, thanks for checking out the tutorial! We're glad you found it useful. :) Regarding your query, the .vmx file might not have downloaded completely. Please go ahead and download VirtualBox and the Cloudera VM using the links below, and then you can start working on Hive.
VirtualBox: www.virtualbox.org/wiki/Downloads
Cloudera VM (select VirtualBox as platform): www.cloudera.com/downloads/quickstart_vms/5-8.html
Hope this helps. Cheers!
@harshitamewada5707 (7 years ago)
Thanks, sir, for this excellent tutorial. But sir, how can I get data for practice purposes?
@edurekaIN (7 years ago)
Hey Harshita, thanks for checking out our tutorial! We're glad you found it useful. As the data used in this tutorial are Edureka course artifacts, you can get access to them by enrolling into our course here: www.edureka.co/big-data-and-hadoop. Hope this helps. Cheers!
@kapilmadawat6981 (7 years ago)
Super explanation! :) Can you provide the dataset (txns1.txt) link so I can download the file and practice?
@edurekaIN (7 years ago)
Hey Kapil, thanks for the wonderful feedback! We're glad you found our tutorial useful. Please share your email address and we will send it. Cheers!
@kapilmadawat6981 (7 years ago)
Thanks for the reply. My email id is kapil.jn.92@gmail.com; you can send the dataset (txns1.txt) there.
@edurekaIN (7 years ago)
Hey Kapil, we have shared it with you. Do subscribe to our channel to stay posted on upcoming videos. You can also check out our complete training here: www.edureka.co/big-data-and-hadoop. Hope this helps. Cheers!
@ritishaswain6276 (7 years ago)
Excellent video! Can you provide the dataset (txn1.txt)?
@edurekaIN (7 years ago)
Hey Ritisha, thanks for the wonderful feedback! We're glad you found the tutorial useful. Please share your email address and we will send it. Cheers!
@ritishaswain6276 (7 years ago)
Please send me the dataset at this email id: ritisha.swain@gmail.com
@snehamaheshwari6358 (7 years ago)
Hi, in the latest Cloudera version there is no databaseName in the hive-site.xml file; I have checked twice. Do we have to change the name in the latest Cloudera version as well?
@edurekaIN (6 years ago)
Embedded mode is the default metastore deployment mode for CDH. In this mode, the metastore uses a Derby database, and both the database and the metastore service are embedded in the main HiveServer process. Both are started for you when you start the HiveServer process. This mode requires the least amount of effort to configure, but it can support only one active user at a time and is not certified for production use. Hope this helps :)
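For reference, the embedded Derby setup described above corresponds to a metastore connection property along these lines in hive-site.xml. The values shown are the usual Hive defaults, given here as an assumption; check the actual file shipped with your distribution:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <description>JDBC connect string for the embedded Derby metastore;
  the Derby database directory (metastore_db) is created on first use.</description>
</property>
```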
@Bhago115 (8 years ago)
Good explanation with examples :) :)
@edurekaIN (8 years ago)
Glad you enjoyed our video. Do keep checking back in for more videos. Whenever you are free, you must also check out our blog page @ edureka.co/blog/ and tell us what you think. Have a good day!
@chinmayamartha9574 (9 years ago)
Can I download the dataset that is being used in this video?
The content is good, but there are so many breaks in the audio, which gave a bad experience; maybe the instructor is eating food.
@edurekaIN (7 years ago)
Hey Suresh, thanks for checking out our tutorial. While we appreciate the humor, rest assured that the trainer wasn't eating during the video but was running the code. :) This is our bread and butter; we wouldn't ruin it by spilling the butter. Cheers!
@Reddy0325 (6 years ago)
Hi all, please look into the below problem:
No Route to Host from quickstart.cloudera/192.168.204.129 to 192.168.204.128:8020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host
My machine IP address: 192.168.204.129. Thanks in advance.