Small file problem in Hadoop? In my view, lots of small files in the cluster increase the burden on the NameNode, because the NameNode stores the metadata of every file. With many small files it has to keep track of each file's address, and since it is the master, if it goes down the cluster goes down with it.
@DataSavvy 4 years ago
That is right... In addition to this, Spark will also need to create more executor tasks... This creates unnecessary overhead and slows down your data processing.
@saurabhgulati2505 3 years ago
Also, if these files are compressed, the executor cores will get busy decompressing them.
@tanmaydash803 1 year ago
Name node?
@-leaflet 1 year ago
@@tanmaydash803 otherwise called the master
@Khang-lt4gk 1 month ago
Question 1 at 3:15: Issues with many small files on Hadoop.
- Resource utilization problem: each task is assigned to process the data in a single partition. Multiple small files -> multiple small partitions -> multiple tasks required -> many tasks queued -> frequent context switching -> high load on the driver node (for allocating and orchestrating tasks among executors and cores) -> higher possibility of a driver OOM.
- The .metadata file (responsible for storing the addresses of compressed partition files) consists of many key pairs to map -> low shuffle efficiency for almost every transformation.
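For reference, a minimal PySpark sketch of one common mitigation: compacting many small files into fewer, larger ones before writing (the paths and the target file count here are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

# Read a directory that has accumulated many small files (hypothetical path)
df = spark.read.parquet("/data/events/")

# coalesce() merges existing partitions without a full shuffle, so the
# write below produces ~8 larger files instead of thousands of tiny ones
df.coalesce(8).write.mode("overwrite").parquet("/data/events_compacted/")
```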
@cajaykiran 3 years ago
I must have watched this video at least 5 times between yesterday and today. Thank you very much.
@anujtirkey9867 4 months ago
Same 😂
@sashikiran9 3 years ago
Important point: Hive partitioning is not the same as Spark partitioning. 7:34-9:14
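As a hedged illustration of that distinction (assuming a DataFrame `df`; column names and paths are hypothetical): repartition() changes Spark's in-memory partitioning, while partitionBy() on write controls the Hive-style directory layout on disk:

```python
# Spark partitioning: splits the DataFrame into 8 in-memory partitions
# for parallelism; the on-disk layout is not affected by this alone
df8 = df.repartition(8)

# Hive-style partitioning: one directory per distinct column value,
# e.g. /out/by_country/country=India/, /out/by_country/country=US/
df.write.partitionBy("country").parquet("/out/by_country/")
```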
@r.kishorekumar1388 2 years ago
When there are a lot of small files in Hadoop, NameNode performance can be impacted because it cannot process all the metadata quickly. Hadoop is really for handling big data, so creating too many small files may end up hurting NameNode performance. I came across this problem in my project.
@bharathraj4545 10 months ago
Hi bro, I am new to big data, can you guide me further?
@DataSavvy 10 months ago
Hi Bharath, happy to guide you. Drop me an email at aforalgo@gmail.com
@sky-i8d 1 month ago
Hadoop is generally for big data, so the minimum block size for storage is 128 MB; having such small files can significantly waste storage, as at least one block will be assigned to each file. Please correct me if I am wrong here.
@ShashankGupta347 2 years ago
Crisp and clear, thanks!
@sumit_ks 3 years ago
Very well explained sir.
@DataSavvy 3 years ago
Thanks Sumit :)
@FaizanAli-we5wc 1 year ago
You are too good sir, thank you so much for clearing our concepts ❤
@vutv5742 11 months ago
Nice explanation ❤ Completed ❤
@DataSavvy 11 months ago
Thanks
@tanushreenagar3116 1 year ago
Best explanation
@DataSavvy 1 year ago
Thanks for liking
@vamshi878 4 years ago
@DataSavvy, I observed on my local system with multiple cores that partitionBy and bucketBy both don't perform any shuffle; there is no Exchange in the plan. Is that why both produce small files? Will they shuffle on a large cluster? I am just reading from a file and writing with partitionBy or bucketBy, no transformations; in this case will there be no shuffle at the cluster level either?
@khanmujahid4743 3 years ago
It uses the hash value of the search item and goes to the bucket that matches that hash value.
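A rough sketch of that idea (Spark actually uses a Murmur3-based hash internally; this plain Python modulo is only illustrative):

```python
NUM_BUCKETS = 4

def bucket_for(value):
    # Simplified stand-in for Spark's internal hash: the bucket id is the
    # hash of the bucketing-column value modulo the bucket count, so a
    # lookup on that column only needs to scan one bucket's files
    return hash(value) % NUM_BUCKETS

print(bucket_for(25))  # every row with age=25 lands in the same bucket
```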
@rakeshdey1702 4 years ago
This is a nice explanation, but you are comparing physical partitions for Hive with memory-level partitions for Spark when showing the difference in the number of files generated.
@subhajitroy5850 4 years ago
Really appreciate @Data Savvy's effort. I have a question: for the data searching/retrieval process in a partitioned table, can we (to create an analogy) understand it the way element retrieval is done in a binary tree, and for a partitioned bucketed table, the way search is done in a nested binary tree? I am referring to the binary tree from data structures. Recently I followed one mock big data interview video on your channel and liked it a lot. If possible, please upload a few more such videos. Thanks :)
@DataSavvy 4 years ago
Hi Subhajit... Thanks. More mock interviews are planned in the next few weeks... Excuse me, but I did not get your question :(
@subhajitroy5850 4 years ago
@@DataSavvy The way data is retrieved/searched in a partitioned Hive table: can we correlate it with element retrieval in a binary tree (the binary tree from data structures)? Not sure if this is a better version :)
@raviranjan217 3 years ago
The small file problem is a headache for the NameNode, since it has to manage the metadata info. Also, Spark needs a larger number of executors, which is again an overhead.
@sanketkhandare6430 2 years ago
Excellent explanation, helped a lot.
@prosperakwo7563 4 years ago
Thanks for the great video, very clear explanation
@punpompur 1 month ago
Wouldn't it be possible for data in buckets to be skewed as well? Does the hash function ensure that each bucket will be the same size?
@shikhargupta7552 2 years ago
Please keep making more such videos. Also, it would be great if you could make something for cloud-related big data technologies.
@DataSavvy 11 months ago
Thanks Shikhar, I will plan to create videos on cloud. Do you need videos on any specific cloud topic?
@anikethdeshpande8336 1 year ago
Is bucketing not usable with the save() method? It works fine with saveAsTable(). Getting this error: AnalysisException: 'save' does not support bucketBy and sortBy right now.
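That matches the DataFrameWriter behavior: bucketBy() needs table metadata, so the bucketed write goes through saveAsTable() rather than a path-based save(). A minimal sketch, with hypothetical table and column names:

```python
# bucketBy() requires a metastore table, so use saveAsTable();
# a path-based save() raises the AnalysisException quoted above
(df.write
   .bucketBy(4, "age")
   .sortBy("age")
   .mode("overwrite")
   .saveAsTable("people_bucketed"))
```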
@ksktest187 3 years ago
Great effort, keep it up.
@jonathasrocha6480 2 years ago
Is bucketing used when the column has high cardinality?
@saurabhgarud6690 4 years ago
Thanks for a very helpful video. My question here is: how can we perform optimisation using bucketing? Since data is shuffled among different buckets, it will not be sorted, so if I am using a WHERE condition over a bucketed table, how should I avoid irrelevant bucket scans like I do with partitioning? In short, does a WHERE condition get optimised on a bucketed table, and if not, what other optimisations apply to bucketing?
@HemanthKumardigital 2 years ago
Thank you so much sir ☺️ .
@rajlakshmipatil4415 4 years ago
Number of buckets in Spark = size of data / 128. Am I correct? In that case, as above, can't we specify the number of buckets in Spark? In which case should we go for bucketing and in which for partitioning? Can you give some examples?
@DataSavvy 4 years ago
If you use partitioning and it creates small files, then you should consider using bucketing there...
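For example, a sketch with hypothetical column and table names: partitioning on a high-cardinality column creates one tiny directory per value, while bucketing bounds the file count by the bucket count per writing task rather than by the number of distinct values:

```python
# Partitioning on a high-cardinality column => one directory per value,
# i.e. potentially millions of small files (usually a bad idea):
# df.write.partitionBy("user_id").parquet("/out/by_user/")

# Bucketing the same column => at most 16 files per writing task,
# no matter how many distinct user_id values exist
(df.write
   .bucketBy(16, "user_id")
   .sortBy("user_id")
   .saveAsTable("events_bucketed"))
```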
@rajlakshmipatil4415 4 years ago
@@DataSavvy Thanks for answering
@kaladharnaidusompalyam851 4 years ago
I'll tell you one thing here. Partitioning is done based on a column, and bucketing is done based on the rows (i.e., both concepts split data into multiple pieces, but partitioning is based on a column and bucketing on rows/records). Suppose we have data 1-100: we can bucket it as 1-25 in one bucket, 25-50 in a second bucket, and 50-75 and 75-100 respectively, based on rows. But a partition is based on a column. For example, if you have a column (population, year-wise from 2010-2020), we split the data year-wise: 2010, 2011, 2012... 2020 into 10 partitions. If this is 100% correct, please comment, someone. Don't feel bad; if I'm wrong, I'll make it correct. Thanks.
@DataSavvy 4 years ago
Partitioning and bucketing are both done on a column... the only difference is how the records are grouped. I think your statement is right, but you are viewing these concepts in an overly complex way.
@DataSavvy 4 years ago
Thanks Rajlakshmi :)
@anurodhpatil4776 1 year ago
Excellent
@anandraj2558 4 years ago
Nice explanation. Can you please also cover Hive join examples: map-side join and all the other joins, plus performance tuning?
@DataSavvy 4 years ago
Sure, will create videos on that.
@ayushjain139 4 years ago
How can I find out if my bucketing was really utilized by the query? Should it be visible in the physical plan? Also, I believe that in the case of partitioning + bucketing, both the partition and bucket filters should be in my query?
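One way to check, sketched with hypothetical table and column names (assuming a SparkSession `spark` and the bucketed table from earlier): filter on the bucketing column and inspect the physical plan with explain(). When bucketing is used, the file scan typically reports the selected buckets (e.g. a SelectedBucketsCount entry in recent Spark versions), and joins on the bucket column avoid an extra Exchange (shuffle):

```python
t = spark.table("people_bucketed")

# Print the extended plan; look for the scan node reporting bucket
# pruning and for the absence of an Exchange before joins
t.filter(t.age == 25).explain(True)
```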
@kumarsatyachaitanyayedida4717 2 years ago
How can we decide whether a particular column should be used for partitioning or for bucketing?
@vikramrajsahu1962 3 years ago
Can we increase the performance of a Hive query while fetching the records, assuming the table is already partitioned?
@uditmittal3816 2 years ago
Thanks for the video. But I have one query: how do we insert data into a bucketed Hive table using Spark? I tried this, but it didn't give the correct output.
@bhavaniv1721 4 years ago
Hi, are you handling Spark and Scala training classes?
@rajeshp3323 3 years ago
But what I heard is that in Spark, 1 partition = 1 block size... partitions are not created as in Hive using a specific column name. Then, for bucketing in Spark, as you said, 1 bucket should be a minimum of the block size... so does that mean 1 bucket = 1 partition? Then what is the need for bucketing in Spark? I'm confused.
@xxxxxxxxxxa232 2 years ago
Partitioning and bucketing are similar to GROUP BY ... and WHERE value in a range
@bhooshan25 1 year ago
Very useful
@kketanbhaalerao 1 year ago
Without partitioning, can we directly do bucketing in Spark?
@sambitkumardash9585 4 years ago
Sir, could you please give one syntactic example comparing Hive partitioning and bucketing vs Spark partitioning and bucketing? And I couldn't understand the last point of your summary; could you please give some more clarity on it?
@DataSavvy 4 years ago
Let me look into that
@Apna_Banaras 3 years ago
Small file problem in Hadoop? It generates lots of metadata, which increases the burden on the NameNode.
@kaladharnaidusompalyam851 4 years ago
Hi Harjeet, I came across a question in my latest interview: what packages do we need when we want to implement Spark?
@DataSavvy 4 years ago
Hi... It depends on what dependencies you are using in your project... Check your sbt file.
@sagarbalai1122 3 years ago
If you already have a project, then check the sbt/pom file, but generally you need at least spark-core and spark-sql to start with basic operations.
@selvansenthil1 1 year ago
How can we make the bucket size 128 MB, when the partition size would be 128 MB and is further divided into buckets?
@engineerbaaniya4846 1 month ago
Is it correct to say that partitioning will create multiple folders, while bucketing will create multiple files, in Spark?
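That is the usual mental model. A hedged sketch of the resulting layout (paths, names, and the exact file-name pattern are illustrative, not exact):

```python
# Partitioning => one directory per column value:
df.write.partitionBy("country").parquet("/out/part/")
#   /out/part/country=India/part-00000-....parquet
#   /out/part/country=US/part-00000-....parquet

# Bucketing => a fixed set of files, with the bucket id encoded
# in each file name (requires saveAsTable):
df.write.bucketBy(4, "age").saveAsTable("t_bucketed")
#   .../t_bucketed/part-00000-..._00000.c000.parquet  (bucket 0)
#   .../t_bucketed/part-00000-..._00003.c000.parquet  (bucket 3)
```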
@kaladharnaidusompalyam851 4 years ago
What kind of problems do we face when there are a lot of small files in Hadoop? My answer: Hadoop is meant for handling a small number of large files, i.e., Hadoop can handle big files at a low count. Hadoop won't give efficient results for lots of small files, because there is SEEK time for reading a record from the hard disk; this grows if you use lots of small files, increasing the overall time, and moreover the metadata also grows.
@DataSavvy 4 years ago
That's right :) There will be a few more issues; please see the pinned message.
@likithaguntha8105 3 years ago
Can we partition after bucketing?
@routhmahesh9525 3 years ago
How can we decide the number of buckets when, after partitioning, one file is 128 MB, a 2nd file is 400 MB, and a 3rd file is 200 MB? Kindly answer. Thanks in advance.
@gyan_chakra 2 years ago
Sir better quality is not available for this video. Please fix it.
@DataSavvy 2 years ago
Hi Bhumitra... I am working on fixing this.
@nobinstren3798 4 years ago
Thanks man, it helps.
@DataSavvy 4 years ago
Thanks Nobin. Pleasure... :)
@sandipsawant7525 4 years ago
Thanks for this video. One question: in which kinds of cases do we need to use only bucketing, and how does the query search happen? Thanks again 🙏
@DataSavvy 4 years ago
When partitioning on a column would create small files, use bucketing without partitioning... Also, before doing a sort-merge join you can create bucketed tables and improve the performance of the join.
@sandipsawant7525 4 years ago
@@DataSavvy Thank you sir for the answer. If I use 4 buckets, when I hit a SELECT query will it go to only one specific bucket, or will it search all buckets? Because with partitioning we have folders named by value; with bucketing, how will the query know which bucket to search?
@AtifImamAatuif 4 years ago
@@sandipsawant7525 It will use the hash value of the search item and go to the bucket which matches that hash value.
@sandipsawant7525 4 years ago
@@AtifImamAatuif Thanks
@ayushjain139 4 years ago
@@DataSavvy "Before doing a sort-merge join you can create bucketed tables and improve the performance of the join": kindly explain how and why?
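A sketch of why pre-bucketing helps a sort-merge join (assuming DataFrames `orders` and `customers`; table and column names hypothetical): if both sides are bucketed and sorted on the join key with the same bucket count, rows with matching keys are already co-located and ordered, so Spark can skip the shuffle (and often the sort) that a sort-merge join otherwise needs:

```python
# Bucket both tables on the join key with the same bucket count
orders.write.bucketBy(8, "customer_id").sortBy("customer_id").saveAsTable("orders_b")
customers.write.bucketBy(8, "customer_id").sortBy("customer_id").saveAsTable("customers_b")

# Rows with the same customer_id already sit in matching buckets,
# so the join can be planned without an Exchange on either side
joined = spark.table("orders_b").join(spark.table("customers_b"), "customer_id")
joined.explain()  # inspect: ideally no shuffle before the SortMergeJoin
```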
@ramchundi2816 3 years ago
Thanks, Harjeet. It was a great explanation. Quick question for you: what will happen if we remove a partition key after loading the data (in managed and external tables)?
@nikhithapolanki 3 years ago
How can you remove a partition key once the table is created? If you drop and recreate the table without the partition, the data present in the table's physical location cannot be read by the table. It will give a parsing exception.
@dheemanjain8205 11 months ago
Partitioning is the same as GROUP BY, and bucketing is the same as a range.
@DataSavvy 11 months ago
Hi, it's actually different...
@krunalgoswami4654 3 years ago
I like it
@alokdaipuriya4607 3 years ago
Hi Harjeet... Thanks for such an informative video. One quick question here: you chose the country column for partitioning, that's OK, and you chose the age column for buckets. Why did you choose the age column for bucketing, and why not the name column? Can we choose either name or age, or is there some technicality behind choosing the bucketing column? If yes, please do comment.
@saketmulay8353 2 years ago
It depends on the filter you want to apply. If you want to filter on age but you bucket by name, the problem will remain as it is, and it won't make any sense.
@GreatIndia1729 2 years ago
If we have a large number of small files, then the number of I/O operations, like opening and closing files, increases. This is a performance issue.
@mohitmehta3788 4 years ago
If we want to query the table for country=India and age=20, now that we have created the new bucketed table, do we have to query the bucketed table or the initial table? A little lost here.
@DataSavvy 4 years ago
You will query the bucketed table :)
@NN-sw4io 3 years ago
Sir, what if the filter is only by age? What happens with the partition and bucket then?
@Ady_Sr 1 year ago
The volume of data would increase if we have small files... volume can mean a lot of small files or a few large files; both are a no-no.
@sivakrishna3413 4 years ago
I want to learn Spark and PySpark. Are you providing any training?
@DataSavvy 4 years ago
Hi Siva... I am not currently running any online training... Let me look into this prospect.