PySpark Tutorials 3 | Pandas vs PySpark || What is RDD in Spark || Features of RDD

  58,418 views

Ranjan Sharma

1 day ago

Comments: 27
@satyanarayana9630 · 3 months ago
Great and clear explanation.. the one and only best playlist for PySpark on YouTube.
@fahdelalaoui3228 · 2 years ago
That's what I call quality content. Very logically presented and instructed.
@deepaktamhane8373 · 3 years ago
Great, sir... happy the concepts were cleared up.
@RanjanSharma · 3 years ago
Keep watching.. thanks, bro. Keep sharing and exploring :)
@neerajjain2138 · 3 years ago
Very neat and clear explanation. Thank you so much! **SUBSCRIBED** One more thing: how can someone dislike anyone's efforts to produce such helpful content? Please respect the hard work.
@RanjanSharma · 3 years ago
Thanks, so nice of you :) Keep sharing and exploring, bro :)
@HamdiBejjar · 2 years ago
Excellent content, thank you Ranjan.. subscribed :D
@sukhishdhawan · 3 years ago
Excellent explanation, strong hold on concepts.
@RanjanSharma · 3 years ago
Glad you liked it! Thank you :)
@dhanyadave6146 · 2 years ago
Hi Ranjan, thank you for the great series and excellent explanations. I have two questions: 1) In the video at 5:05, you mention that PySpark requires a cluster to be created. However, we can create Spark sessions locally as well, if I am not mistaken. When we run Spark locally, could you please explain how PySpark would outperform pandas? I am confused about this concept. You can process data using multiple cores locally, but your RAM size will not change, right? 2) In the previous video you mentioned that the Apache Spark computing engine is much faster than Hadoop MapReduce because Hadoop MapReduce reads data from the hard disk during data processing steps, whereas Apache Spark loads the data into each node's RAM. Would there be a situation where this can be a problem? For example, if our dataset is 4 TB and we have 4 nodes in our cluster and we assign 1 TB to each node, how will an individual node load 1 TB of data into RAM? Would we have to create more nested clusters in this case?
@universal4334 · 2 years ago
I have the same doubt. How would Spark store TBs of data in RAM?
@sridharm8550 · 2 years ago
Nice explanation.
@mohamedamineazizi3360 · 3 years ago
Great explanation.
@RanjanSharma · 3 years ago
Glad you think so! Buddy, keep exploring and sharing with your friends :)
@guitarkahero4885 · 3 years ago
Content-wise, great videos.. the way of explaining could be improved.
@RanjanSharma · 3 years ago
Glad you think so! Thanks :) Keep exploring :)
@TK-vt3ep · 3 years ago
You are too fast in explaining things. Could you please slow down a bit? BTW, good work.
@RanjanSharma · 3 years ago
Thanks for your visit.. keep exploring :) In my later videos I have slowed the pace.
@JeFFiNat0R · 3 years ago
Great, thank you for this explanation.
@RanjanSharma · 3 years ago
Thanks :) Keep exploring :)
@JeFFiNat0R · 3 years ago
@@RanjanSharma I just got a job offer for a data engineer role working with Databricks Spark. Your video definitely helped me in the interview. Thank you again.
@RanjanSharma · 3 years ago
@@JeFFiNat0R Glad I could help you 😊
@naveenchandra7388 · 3 years ago
@9:19 RDD in-memory computation? Pandas does in-memory too, doesn't it? Do RDDs also compute in memory? Maybe I lost the point somewhere. Can you explain this subtle difference, please?
@loganboyd · 4 years ago
Why are you still using RDDs and not the Spark SQL DataFrame API?
@RanjanSharma · 4 years ago
This video was just for an explanation of RDDs. In the next video, I will be explaining the SQL DataFrame API.
@AkashShahapure · 1 year ago
Audio is low compared to the previous two videos.
@kritikalai8204 · 3 years ago
**gj**