Very well explained, thanks. In this example we have a smaller dept table, so the cross join doesn't blow up the number of records. What if the dept table is also huge (both our tables are huge)? Second, does salting work on string columns as well (if the joining key is a string)?
@easewithdata · 1 month ago
In DW concepts, DIM tables are usually smaller than fact tables. But that's a trade-off you have to handle if you want to go for manual salting. And yes, STRING columns can be used for salting (see the sketch below). If you like my content, please make sure to share it with your network over LinkedIn 👍
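[Editor's note] A minimal sketch of salting on a string join key, for reference. It assumes a large emp DataFrame and a small dept DataFrame joined on a string column; the names emp, dept, department_name and the salt factor are illustrative, not from the video:

from pyspark.sql.functions import array, col, concat, explode, floor, lit, rand

SALT = 8  # hypothetical salt factor

# Large (skewed) side: tag each row with a random salt in [0, SALT)
emp_salted = (
    emp.withColumn("salt", floor(rand() * SALT).cast("int"))
       .withColumn("salted_key", concat(col("department_name"), lit("_"), col("salt").cast("string")))
)

# Small side: replicate each row once per salt value so every salted key can match
dept_salted = (
    dept.withColumn("salt", explode(array(*[lit(i) for i in range(SALT)])))
        .withColumn("salted_key", concat(col("department_name"), lit("_"), col("salt").cast("string")))
)

# Join on the salted string key, then drop the helper columns
joined = emp_salted.join(dept_salted, "salted_key").drop("salt", "salted_key")

The idea is identical to salting an integer key: the random suffix spreads the hot key's rows across SALT shuffle partitions, at the cost of replicating the small side SALT times.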
@at-cv9ky · 8 months ago
Very nice explanation.
@dataworksstudio · 8 months ago
Great video. Thank you bhaiya.
@ankitpehlajani9244 · 8 months ago
This is exactly what I was looking for, such a great way to explain. Can you please explain why you chose 16 as the number of partitions?
@easewithdata · 8 months ago
The number of partitions needs to be a multiple of the number of cores (see the snippet below). subhamkharwal.medium.com/pyspark-the-factor-of-cores-e884b2d5af6c
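[Editor's note] A hedged sketch of picking the shuffle partition count as a multiple of the core count. defaultParallelism is used here as a stand-in for the number of cores; adjust for your cluster:

# e.g. 8 on an 8-core local cluster
cores = spark.sparkContext.defaultParallelism

# 16 partitions on 8 cores -> every core stays busy for exactly two waves of tasks
spark.conf.set("spark.sql.shuffle.partitions", cores * 2)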
@akshaykadam1260 · 3 months ago
great work
@easewithdata · 3 months ago
Thank you for your feedback 💓 Please make sure to share it with your network over LinkedIn 👍
@vishaljoshi1752 · 7 months ago
Thank you for the explanation. One question about 4:41: is the 39.6 MB the data first loaded into memory, which then becomes 79 MB once deserialized? Or does it mean the deserialized data (79 MB) could not be processed in one go, and that 79 MB gets processed later?
@easewithdata · 6 months ago
Spill shows the amount of data Spark was not able to fit in memory for processing, reported in both deserialized (in-memory) and serialized (on-disk) form. This data gets processed as soon as Spark can fit it in memory, after some of the other data has been processed.
@vishaljoshi1752 · 6 months ago
@@easewithdata Thanks. In a sort merge join, suppose we are joining two tables and, after shuffling, one table's shuffled partition is so big that it cannot fully fit in memory. I have seen Spark still able to join the data with a sort merge join in such a scenario. How is that possible, given that both table partitions have to fit in memory for the join?
@sumeet6510 · 5 months ago
Great Video👍
@SivakumarGalaxy-r5g · 7 months ago
Mass solutions bruhh🎉
@maheshkumarkukkudapu2108 · 28 days ago
Hi Sir, the emp_records_skewed.csv file is missing from the Git repo. Could you please upload the file?
@easewithdata · 26 days ago
Unfortunately the file is too big to upload to GitHub. You can create the skewed file yourself by appending the same type of data multiple times to the same file.
@Amarjeet-fb3lk · 3 months ago
Why did you make 32 shuffle partitions if you have 8 cores? If one partition is processed on a single core, where will the remaining 24 partitions get cores from?
@easewithdata · 3 months ago
The 8 cores will process all 32 partitions in 4 waves (8 cores x 4 waves = 32 partitions).
@Fullon2 · 9 months ago
Amazing video. Why not just repartition on department_id? Isn't that simpler?
@easewithdata · 9 months ago
If we repartition after joining, it's again an extra exchange step. And if we repartition before the shuffle, the data will still end up skewed in the shuffle partitions, because the join itself re-shuffles on the join key. So it's an optimisation choice that depends completely on the scenario.
@easewithdata · 9 months ago
Watch the AQE video for the simplest option to fix skewness (the relevant settings are sketched below).
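[Editor's note] For reference, these are the standard Spark 3.x AQE skew-join configs; the two tuning values shown are the documented defaults, included only for illustration:

spark.conf.set("spark.sql.adaptive.enabled", True)
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", True)

# A partition is treated as skewed if it is both larger than
# skewedPartitionFactor x the median partition size AND above the byte threshold;
# AQE then splits it into smaller sub-partitions at runtime.
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", 5)
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256MB")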
@Fullon2 · 9 months ago
@@easewithdata thanks bro, I'm going to watch.
@montypoddar10 · 6 months ago
I couldn't find the emp_skewed_data file in the repo. Can you please share a link here or push it to the Git repo?
@easewithdata · 6 months ago
The dataset is too huge to be uploaded to GitHub. I am trying to find an external source to host it.
@Rafian1924 · 2 months ago
Awesome, sir. Why don't you create courses on Udemy? You have a great ability to explain technical concepts ❤❤
@easewithdata · 2 months ago
Thank you ❤️ I want all good content to be available for free. If you like my content, just make sure to share it with your network over LinkedIn 👍
@Rafian1924 · 2 months ago
@@easewithdata definitely sir😊🙏
@Rakesh-q7m8r · 7 months ago
Hi Shubham, the skewed employee dataset is not available. Could you please push it to the Git repo?
@easewithdata · 7 months ago
Hello, unfortunately GitHub doesn't allow me to push GB-sized files anymore. I am trying to locate another file sharing service to upload the bigger files.
@pawansharma-pz1hz · 7 months ago
@@easewithdata Hi, please create the file at your end using the code below:

# Read employee data
_schema = "first_name string, last_name string, job_title string, dob string, email string, phone string, salary double, department_id int"
emp = spark.read.format("csv").schema(_schema).option("header", True).load("data/employee_records.csv")

from pyspark.sql.functions import lit, count

# Baseline row counts per department
emp.groupBy("department_id").agg(count("first_name").alias("count")).show()

# 40 duplicate key rows each for department ids 3 and 7
dept_3 = spark.range(40).select(lit(3).alias("department_id_temp"))
dept_7 = spark.range(40).select(lit(7).alias("department_id_temp"))

# Left join multiplies every department-3 row 40x, creating the skew;
# rows for other departments match nothing and stay as single rows
emp_inc_3 = emp.join(dept_3, emp["department_id"] == dept_3["department_id_temp"], "left_outer")
emp_inc_3 = emp_inc_3.drop("department_id_temp")
emp_inc_3.groupBy("department_id").agg(count("first_name").alias("count")).show()

# Repeat for department 7
emp_inc_7 = emp_inc_3.join(dept_7, emp_inc_3["department_id"] == dept_7["department_id_temp"], "left_outer")
emp_inc_7 = emp_inc_7.drop("department_id_temp")
emp_inc_7.groupBy("department_id").agg(count("first_name").alias("count")).show()

# Write out as a single skewed CSV file
emp_inc_7.repartition(1).write.format("csv").mode("overwrite").option("header", True).save("data/output/emp_record_skewed.csv")
@anjibabumakkena · 7 months ago
@@easewithdata Hi, how can we get the same data to practice with?
@VikasChavan-v1c · 5 months ago
If we set the shuffle partitions to 32 but don't use the salting technique, and just join on the original department_id column, then what will happen?
@easewithdata · 5 months ago
The shuffle partition setting doesn't guarantee even data distribution among executors, because all rows with the same department_id still hash to the same shuffle partition. To make sure the data is distributed evenly, we use the salting technique. You can verify the imbalance with the snippet below.
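[Editor's note] A quick sketch for checking the row distribution across shuffle partitions after a plain (unsalted) join. It assumes the emp and dept DataFrames from the video, joined on department_id:

from pyspark.sql.functions import spark_partition_id

# Plain join, no salting
joined = emp.join(dept, on="department_id")

# Capture the partition id of each output row, then count rows per partition;
# with skewed keys, a few partitions will be far larger than the rest
(joined
 .withColumn("pid", spark_partition_id())
 .groupBy("pid").count()
 .orderBy("pid")
 .show(32))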
@natarajbeelagi569 · 2 months ago
Can you please provide the data?
@easewithdata · 1 month ago
All datasets are uploaded at: github.com/subhamkharwal/pyspark-zero-to-hero/tree/master/datasets