Can you please share the notebook URL? Thanks a lot, really great learnings.
@bhushanmayank 3 years ago
How does Spark know that the other table is bucketed on the same attribute while joining?
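Short answer: saveAsTable records the bucket spec (column and bucket count) in the table catalog, and at planning time Spark compares the specs on both sides of the join. A minimal sketch, with hypothetical table names, to verify the shuffle is actually skipped:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical session and table names; both tables were written with
// bucketBy(4, "customer_id").saveAsTable(...), so the catalog holds the spec.
val spark = SparkSession.builder()
  .appName("bucketed-join-check")
  .enableHiveSupport()
  .getOrCreate()

val orders    = spark.table("orders_bucketed")
val customers = spark.table("customers_bucketed")

// If the bucket columns and counts match the join key, the physical plan
// printed here contains no Exchange (shuffle) node on either side.
orders.join(customers, "customer_id").explain()
```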
@gauravbhartia7543 4 years ago
Nicely explained.
@TechWithViresh 4 years ago
Thanks :)
@aashishraina2831 3 years ago
Excellent.
@TechWithViresh 3 years ago
Thanks :)
@RAVIC3200 4 years ago
Again great content, Viresh. Can you make a video on the scenarios interviewers usually ask? For example:
1) If you have a 1 TB file, how much time does it take to process (assume any standard cluster configuration to explain), and how much time if it is reduced to 500 GB?
2) DAG-related scenario questions.
3) If a Spark job fails in the middle, will it start from the beginning when you re-trigger it? If not, why not?
4) Checkpoint-related questions.
Please try to cover such scenarios; if it's all inside one video, it will be really helpful. Thanks again for such videos.
@TechWithViresh 4 years ago
Thanks, don’t forget to subscribe.
@RAVIC3200 4 years ago
@@TechWithViresh I'm your permanent viewer 🙏🙏
@cajaykiran 3 years ago
Is there any way I can reach out to you to discuss something important?
@TechWithViresh 3 years ago
Send the details at techwithviresh@gmail.com.
@dipanjansaha6824 4 years ago
1. When we write directly to ADLS, i.e. plain files, how does bucketing help? 2. Also, is it a correct understanding that bucketing is good when we use a DataFrame for read purposes only? From what I understood, if there's a use case where a write operation happens on every build, bucketing would not be the best approach.
@TechWithViresh 4 years ago
Yes, bucketing is more effective for reusable tables involved in heavier joins.
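On the ADLS point, a sketch of the write side, assuming hypothetical DataFrames ordersDf and customersDf: bucketBy only works together with saveAsTable (a catalog table). Writing plain files to a path with .parquet(...) records no bucket metadata, so the join-side benefits are lost.

```scala
// bucketBy requires saveAsTable; a plain path-based save would fail
// because there is no catalog entry to hold the bucket spec.
ordersDf.write
  .format("parquet")
  .bucketBy(4, "customer_id")  // pre-hash rows into 4 buckets
  .sortBy("customer_id")       // optional: sort within each bucket file
  .mode("overwrite")
  .saveAsTable("orders_bucketed")

customersDf.write
  .format("parquet")
  .bucketBy(4, "customer_id")
  .mode("overwrite")
  .saveAsTable("customers_bucketed")
```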
@cajaykiran 3 years ago
Thank you
@gunishjha4030 3 years ago
Great content!!! You used bucketBy in the Scala code to make the changes; can you tell how to handle the same in Spark SQL? Do we have any clause we can use in Spark SQL for this?
@gunishjha4030 3 years ago
Found it, thanks anyway: PARTITIONED BY (favorite_color) CLUSTERED BY (name) SORTED BY (favorite_numbers) INTO 42 BUCKETS;
@mdfurqan 1 year ago
@@gunishjha4030 But are you able to insert data into the bucketed table using spark-sql when the underlying storage is Hive?
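For completeness, the fragment quoted above comes from the Spark SQL docs example; a full statement, runnable from Scala via spark.sql, might look like the sketch below (table name is hypothetical, column names are from the docs example). A plain INSERT INTO then goes through the bucket spec, though note that Spark's bucket layout uses a different hash than Hive's, so Hive itself will not treat the files as bucketed.

```scala
// Full DDL around the CLUSTERED BY fragment; USING-syntax tables include
// the partition column in the schema, unlike Hive DDL.
spark.sql("""
  CREATE TABLE users_bucketed (
    name STRING,
    favorite_color STRING,
    favorite_numbers ARRAY<INT>
  )
  USING parquet
  PARTITIONED BY (favorite_color)
  CLUSTERED BY (name)
  SORTED BY (favorite_numbers) INTO 42 BUCKETS
""")

// Rows inserted this way are hashed into the 42 buckets per partition.
spark.sql("INSERT INTO users_bucketed VALUES ('alice', 'red', array(1, 2))")
```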
@sachink.gorade8209 4 years ago
Hello Viresh sir, nice explanation. Just one thing I did not understand: where do the 8 partitions for these two tables come from? I could not find any code for it in the video. Could you please explain?
@TechWithViresh 4 years ago
8 is the default number of partitions (round robin), as the cluster used here has 8 nodes.
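For reference, a quick way to check both numbers from the notebook (df is a placeholder for either table's DataFrame):

```scala
// defaultParallelism is derived from the cluster's total cores, which is
// why 8 shows up here without any explicit repartition call in the code.
println(spark.sparkContext.defaultParallelism) // e.g. 8 on this cluster
println(df.rdd.getNumPartitions)               // partitions of this DataFrame
```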
@mateen161 4 years ago
Nice explanation! Just wondering how the number of buckets should be decided. In this example you used 4 buckets; can't we use 6, 8, or 10? Is there a specific reason for using 4 buckets?
@TechWithViresh 4 years ago
It can be any number, depending on your data and bucket column.
@himanshusekharpaul476 4 years ago
Hey, nice explanation, but I have one doubt: in the above video you set the number of buckets to 4. What criteria should we keep in mind while deciding the number of buckets in real time? Is there any formula or bucket-size constraint? Could you please help?
@TechWithViresh 4 years ago
The idea behind these two data distribution techniques, partitioning and bucketing, is to distribute the data evenly and at an optimum size that can be processed effectively in a single task.
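One rough rule of thumb (an assumption, not an official Spark guideline): pick the bucket count so each bucket file lands near a typical task input of roughly 128-256 MB.

```scala
// Hypothetical sizing arithmetic: a 4 GB table with ~128 MB per bucket file.
val tableSizeMB  = 4096L
val targetFileMB = 128L
val numBuckets   = math.max(1L, tableSizeMB / targetFileMB).toInt
println(numBuckets) // 32 buckets for this example
```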
@himanshusekharpaul476 4 years ago
OK, but what is the optimum bucket size that can be processed by a single task?
@aashishraina2831 3 years ago
I think this video is repeated above. It can be deleted.