Great and clear explanation. The one and only best PySpark playlist on YouTube.
@fahdelalaoui3228 · 2 years ago
That's what I call quality content. Very logically presented and instructed.
@deepaktamhane8373 · 3 years ago
Great, sir... happy you cleared up the concepts.
@RanjanSharma · 3 years ago
Keep watching, thanks bro. Keep sharing and exploring, bro :)
@neerajjain2138 · 3 years ago
Very neat and clear explanation. Thank you so much! **SUBSCRIBED** One more thing: how can someone dislike anyone's efforts to produce such helpful content? Please respect the hard work.
@RanjanSharma · 3 years ago
Thanks, so nice of you :) Keep sharing and exploring, bro :)
@HamdiBejjar · 2 years ago
Excellent content, thank you Ranjan. Subscribed :D
@sukhishdhawan · 3 years ago
Excellent explanation, strong hold on the concepts.
@RanjanSharma · 3 years ago
Glad you liked it! Thank you :)
@dhanyadave6146 · 2 years ago
Hi Ranjan, thank you for the great series and excellent explanations. I have two questions: 1) In the video at 5:05, you mention that PySpark requires a cluster to be created. However, we can create Spark sessions locally as well, if I am not mistaken. When we run Spark locally, could you please explain how PySpark would outperform pandas? I am confused about this concept. You can process data using multiple cores locally, but your RAM size will not change, right? 2) In the previous video you mentioned that the Apache Spark computing engine is much faster than Hadoop MapReduce because Hadoop MapReduce reads data from the hard disk during the data processing steps, whereas Apache Spark loads the data into the nodes' RAM. Would there be a situation where this can be a problem? For example, if our dataset is 4 TB and we have 4 nodes in our cluster and we assign 1 TB to each node, how will an individual node load 1 TB of data into RAM? Would we have to create more nested clusters in this case?
@universal4334 · 2 years ago
I've got the same doubt. How would Spark store TBs of data in RAM?
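A minimal sketch (not from the video) of the first point raised above: a Spark session can indeed be created locally without a cluster. The pyspark package being installed, the events.csv file, and the country column are assumptions for illustration. The key idea is that Spark splits the input into partitions and processes a few at a time per core, spilling to disk when data does not fit in RAM, rather than loading the whole dataset into memory at once the way pandas does.

from pyspark.sql import SparkSession

# "local[*]" starts a driver on this machine with one worker thread per CPU core,
# so no cluster has to be provisioned.
spark = (
    SparkSession.builder
    .appName("local-demo")
    .master("local[*]")
    .getOrCreate()
)

# Spark reads the file as partitions and evaluates the query lazily; partitions
# that do not fit in memory are spilled to disk instead of causing a failure.
df = spark.read.csv("events.csv", header=True, inferSchema=True)  # hypothetical file
df.groupBy("country").count().show()                              # hypothetical column

spark.stop()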
@sridharm8550 · 2 years ago
Nice explanation
@mohamedamineazizi3360 · 3 years ago
Great explanation
@RanjanSharma · 3 years ago
Glad you think so, buddy! Keep exploring and sharing with your friends :)
@guitarkahero4885 · 3 years ago
Content-wise, great videos. The way of explaining could be improved.
@RanjanSharma · 3 years ago
Glad you think so! Thanks :) Keep exploring :)
@TK-vt3ep · 3 years ago
You explain things too fast. Could you please slow down a bit? BTW, good work.
@RanjanSharma · 3 years ago
Thanks for your visit. Keep exploring :) In my later videos, I have slowed down the pace.
@JeFFiNat0R · 3 years ago
Great, thank you for this explanation.
@RanjanSharma · 3 years ago
Thanks :) Keep Exploring :)
@JeFFiNat0R · 3 years ago
@RanjanSharma I just got a job offer for a data engineer role working with Databricks Spark. Your video definitely helped me in the interview. Thank you again.
@RanjanSharma · 3 years ago
@JeFFiNat0R Glad I could help you 😊
@naveenchandra7388 · 3 years ago
At 9:19 you mention RDD in-memory computation. Pandas also works in memory, doesn't it? Do RDDs also compute in memory? Maybe I lost the point somewhere; could you please explain this subtle difference?
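A small sketch (an assumption, not from the video) of what "in-memory computation" means for an RDD: pandas also keeps data in memory, but only in the RAM of a single Python process, whereas a cached RDD keeps its partitions in the memory of the executors across the cluster, so repeated actions reuse them instead of recomputing or re-reading from disk.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-cache-demo").getOrCreate()
sc = spark.sparkContext

# Build an RDD and mark it for in-memory storage on the executors.
rdd = sc.parallelize(range(1_000_000)).map(lambda x: x * x)
rdd.cache()

print(rdd.count())                      # first action computes and caches the partitions
print(rdd.reduce(lambda a, b: a + b))   # second action reuses the cached partitions

spark.stop()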
@loganboyd · 4 years ago
Why are you still using RDDs and not the Spark SQL DataFrame API?
@RanjanSharma · 4 years ago
This video was just to explain RDDs. In the next video, I will explain the Spark SQL DataFrame API.
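For readers comparing the two APIs discussed in this exchange, here is a hedged sketch of the same aggregation written both ways. The sales.csv file (headerless, two columns) and the product/amount names are hypothetical. The DataFrame version describes what result is wanted and lets Spark's Catalyst optimizer plan the execution, which is the usual reason it is preferred over hand-written RDD code.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()

# RDD style: describe *how* to compute, record by record.
lines = spark.sparkContext.textFile("sales.csv")            # hypothetical file, no header row
totals_rdd = (
    lines.map(lambda line: line.split(","))
         .map(lambda cols: (cols[0], float(cols[1])))        # (product, amount)
         .reduceByKey(lambda a, b: a + b)
)
print(totals_rdd.take(5))

# DataFrame style: describe *what* is wanted; the optimizer plans the work.
df = spark.read.csv("sales.csv", inferSchema=True).toDF("product", "amount")
df.groupBy("product").agg(F.sum("amount").alias("total")).show(5)

spark.stop()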