For anyone wondering when to use groupByKey() over reduceByKey(): groupByKey() is needed for non-associative operations, i.e. ones that cannot be combined pairwise in an arbitrary order. For example, if we want to calculate the median of the values for each key, we cannot use reduceByKey(), since median is not an associative operation — it needs to see all the values at once.
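A minimal PySpark sketch of that median example (the data and names here are made up, not from the video):

    from pyspark.sql import SparkSession
    import statistics

    spark = SparkSession.builder.appName("median-per-key").getOrCreate()
    sc = spark.sparkContext

    pairs = sc.parallelize([("a", 1), ("a", 5), ("a", 3), ("b", 2), ("b", 8)])

    # Median cannot be computed pairwise, so reduceByKey() does not apply;
    # groupByKey() gathers all values for each key so the full list is available.
    medians = (pairs.groupByKey()
                    .mapValues(lambda vs: statistics.median(list(vs)))
                    .collect())
    print(medians)   # e.g. [('a', 3), ('b', 5.0)]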
@ldk6853 5 months ago
Hindu again 🤢
@pankajchikhalwale8769 8 months ago
Hi, I like your Spark videos. Please create a dedicated video on the top 100 most frequently used Spark commands. - Pankaj C
@sagarrawal7740 9 months ago
The video recommendations at the end are blocking the content...
@pmdsngh 10 months ago
I see — for an RDD the default is memory only, and for a DataFrame it is memory + disk.
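A quick way to confirm the defaults (behaviour on recent Spark versions; the printed wording may vary slightly by version):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    rdd = spark.sparkContext.parallelize(range(100))
    rdd.cache()                       # RDD default: memory only
    print(rdd.getStorageLevel())

    df = spark.range(100)
    df.cache()                        # DataFrame default: memory and disk
    print(df.storageLevel)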
@dipakit45 10 months ago
Why are you talking like you're in sleepy mode?
@raviyadav-dt1tb 11 months ago
Please provide AWS questions and answers. Thank you 🙏
@avinash7003 11 months ago
What is MSCK?
@Dipanki-c7k 11 months ago
What if I have multiple Spark jobs running in parallel in one SparkSession?
@adityamathur2284 1 year ago
For the ORC format, schema evolution is not limited to adding new columns.

Backward compatibility:
- Adding columns: new columns can be added to the schema without affecting existing data files. When old ORC files are read with a new schema that includes additional columns, the new columns are treated as optional and filled with default values.
- Removing columns: similar to Parquet, existing columns can be removed without breaking compatibility. When old ORC files are read with a new schema that excludes certain columns, those columns are ignored.
- Changing data types: data types of existing columns can be changed, and ORC will attempt to convert the data to the new type. However, as with Parquet, this conversion might result in data loss if the types are not compatible.

Forward compatibility:
- Adding columns: new columns can be added, and existing files can still be read without errors. The new columns are filled with default values when data from the old files is read.
- Removing columns: files written with a schema that has fewer columns can still be read with a newer schema containing additional columns. The additional columns are treated as optional.
- Changing data types: forward compatibility is generally maintained when changing data types, but careful consideration is needed to avoid potential data loss or conversion issues.

These points are what I found to supplement your content. Thanks for your videos and the dedication in making them — they are really helpful for my preparation.
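A small sketch of the "added columns come back as defaults" case when reading ORC with Spark (the path and column names here are made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Write old files with a two-column schema.
    spark.createDataFrame([(1, "a"), (2, "b")], "id int, name string") \
         .write.mode("overwrite").orc("/tmp/orc_evolution_demo")

    # Read them back with a newer, wider schema: the extra 'country' column
    # is simply filled with nulls for the old files.
    spark.read.schema("id int, name string, country string") \
         .orc("/tmp/orc_evolution_demo").show()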
@YoSoyWerlix 1 year ago
Hi! Why do you say Avro is row-oriented? Isn't it also columnar storage?
@srinubathina7191 1 year ago
Thank you
@srinubathina7191 1 year ago
Super content thank you
@raviyadav-dt1tb 1 year ago
Good sir
@Tarasankarpaul1 1 year ago
Could you please explain the difference between partition pruning and predicate pushdown?
@ritikpatil4077 1 year ago
Both same
@RohanKumar-mh3pt 1 year ago
Very nice and clear explanation. Before this video I was very confused about the executor tuning part; now it is crystal clear.
@mdmoniruzzaman703 1 year ago
Hi, does 10 nodes mean including the master node? I have a configuration like this:
"Instances": {
  "InstanceGroups": [
    {
      "Name": "Master nodes",
      "Market": "SPOT",
      "InstanceRole": "MASTER",
      "InstanceType": "m5.4xlarge",
      "InstanceCount": 1
    },
    {
      "Name": "Worker nodes",
      "Market": "SPOT",
      "InstanceRole": "CORE",
      "InstanceType": "m5.4xlarge",
      "InstanceCount": 9
    }
  ],
  "KeepJobFlowAliveWhenNoSteps": false,
  "TerminationProtected": false
},
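For what it's worth, one common rule-of-thumb sizing for that cluster (assuming m5.4xlarge = 16 vCPUs / 64 GiB and that executors run only on the 9 core nodes) — just a sketch, not the only valid answer:

    cores_per_node = 16          # m5.4xlarge
    mem_per_node_gb = 64
    worker_nodes = 9             # the master usually does not run executors

    usable_cores = cores_per_node - 1        # leave 1 core per node for OS/daemons
    usable_mem_gb = mem_per_node_gb - 1      # leave ~1 GiB per node for OS/daemons

    cores_per_executor = 5                                         # common rule of thumb
    executors_per_node = usable_cores // cores_per_executor        # 3
    total_executors = executors_per_node * worker_nodes - 1        # 26 (reserve 1 slot for the driver/AM)
    mem_per_executor_gb = usable_mem_gb // executors_per_node      # 21
    heap_after_overhead_gb = int(mem_per_executor_gb * 0.93)       # ~19 after ~7% YARN overhead

    print(total_executors, cores_per_executor, heap_after_overhead_gb)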
@venkateshgurram7707 1 year ago
@TechWithViresh: no recent videos — can you please add some? Your videos are very useful, brother. Thanks.
@TechWithViresh 1 year ago
Thanks, for sure videos coming soon :)
@micheleadriaans6688 1 year ago
Thanks! A great and concise explanation!
@jalsacentre1040 1 year ago
The second map will not be executed, as no action is performed on the resulting dataset after collect().
@wafa0196 1 year ago
Hello, I find the content very interesting, especially the part on when a hash join is better than a sort-merge join. Could you please tell me where you found the documentation on that?
@terrificmenace 1 year ago
Many thanks to you, sir. 😊 I learnt Spark from you.
@vishalaaa1 1 year ago
Very good. Please make a group of videos on Spark interview questions.
@vishalaaa1 1 year ago
nice
@panduranga 1 year ago
The audio quality is not good; the content is good.
@snehakavinkar2240 1 year ago
Limit comes after order by in the query execution order, so how does using limit reduce the number of records to be sorted? Am I missing anything here?
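One way to check is to look at the physical plan — on recent Spark versions a sort followed by a small limit typically shows up as TakeOrderedAndProject, which only keeps the top-N rows per partition instead of fully sorting everything. A rough sketch:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.range(1_000_000).withColumn("v", F.rand())

    # Expect TakeOrderedAndProject in the plan rather than a global Sort + Limit.
    df.orderBy("v").limit(10).explain()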
@Trip-Train 1 year ago
Why are you converting the DataFrame to an RDD? That is a very bad practice in terms of performance.
@ajaywade9418 1 year ago
In the video from 11:30, we are adding a random key to the existing towerid key. For example: tower id 101 and salt key 67, so 101 + 67 = 168, and the hash value of 168 would be the final value, right? What about the case where the partition column is of string datatype?
@TechWithViresh 1 year ago
In the case of strings, we can add surrogate keys based on the string column values and then do the salting.
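Either way, here is a rough sketch of salting a string key directly by appending a random suffix (column names are hypothetical; the surrogate-key approach is an alternative). The idea is to aggregate on the salted key first, then re-aggregate on the original key:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.createDataFrame([("T-101", 1), ("T-101", 2), ("T-202", 3)],
                               ["tower_id", "signal"])

    num_salts = 8   # how many buckets to spread a hot key over

    # Append a random suffix so a single hot key is split across partitions.
    salted = df.withColumn(
        "salted_key",
        F.concat_ws("_", F.col("tower_id"), (F.rand() * num_salts).cast("int").cast("string")))

    partial = salted.groupBy("salted_key", "tower_id").agg(F.sum("signal").alias("partial_sum"))
    final = partial.groupBy("tower_id").agg(F.sum("partial_sum").alias("total_signal"))
    final.show()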
@SahilSharma-it6gf 1 year ago
Brother, would you lose anything by explaining this in Hindi?
@tanushreenagar3116 1 year ago
Perfect 👌 explanation
@andrewshk8441 1 year ago
Very good and descriptive comparison. Thank you!
@PrajwalSuryawanshi-ds2xs 1 year ago
You gave all the information about Hive — is this enough for an interview?
@shivankchaturvedi5875 1 year ago
How will the last map operation run on the driver? Up to collect(), a job will be completed, and whenever we call another action it will create a new job with a new DAG, which will again be distributed and run on the executors, right?
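A minimal sketch of the two cases being discussed (the exact code in the video may differ):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    sc = spark.sparkContext

    rdd = sc.parallelize([1, 2, 3, 4])

    collected = rdd.map(lambda x: x * 2).collect()   # action: one distributed job on the executors

    # 'collected' is now a plain local list on the driver, so a further map over it
    # is ordinary driver-side code: no new job, no DAG, no executors involved.
    local_result = list(map(lambda x: x + 1, collected))

    # Only by re-parallelizing and calling another action do we get a second distributed job.
    distributed = sc.parallelize(collected).map(lambda x: x + 1).collect()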
@utku83 1 year ago
Good explanation.. Thank you 👍
@ecpavanec 1 year ago
Can we get the PPT that you show in the videos?
@umeshkatighar3635 1 year ago
What if each node has only 8 cores? How does Spark allocate 5 cores per JVM then?
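With only 8 cores per node the usual 5-cores-per-executor figure doesn't fit neatly, so the numbers are typically adjusted — a rough sketch of the arithmetic (just a rule of thumb, not from the video):

    cores_per_node = 8
    usable_cores = cores_per_node - 1              # leave 1 core for OS/daemons

    # 7 // 5 = 1 executor would waste 2 cores, so drop to 3-4 cores per executor instead.
    cores_per_executor = 3
    executors_per_node = usable_cores // cores_per_executor   # 2 executors of 3 cores each
    print(executors_per_node, cores_per_executor)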
@bhaskaraggarwal8971 1 year ago
Awesome✨
@ansariasim4463 1 year ago
Bro, if you have 6 blocks in Hadoop 3 then it consumes 15 blocks. Suppose we have a file which consists of 2 blocks (B1 and B2).

1) With the current HDFS setup, we will have 2 × 3 = 6 blocks in total:
   For block B1 -> B1.1, B1.2, B1.3
   For block B2 -> B2.1, B2.2, B2.3

2) With the EC setup, we will have 2 × 2 + 2/2 = 5 blocks in total:
   For block B1 -> B1.1, B1.2
   For block B2 -> B2.1, B2.2
   The 3rd copy of each block is XOR'ed together and stored as a single parity block: (B1.1 xor B2.1) -> Bp

In this setup:
   If B1.1 is corrupted, we can recompute B1.1 = Bp xor B2.1
   If B2.1 is corrupted, we can recompute B2.1 = Bp xor B1.1
   If both B1.1 and B2.1 are corrupted, then we still have another copy of both blocks (B1.2 and B2.2)
   If the parity block Bp is corrupted, it is recomputed as B1.1 xor B2.1
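A toy illustration of that XOR-parity idea (a couple of bytes standing in for blocks):

    b1 = bytes([0b10101010, 0b11000011])
    b2 = bytes([0b01010101, 0b00111100])

    parity = bytes(x ^ y for x, y in zip(b1, b2))        # Bp = B1 xor B2

    # If b1 is lost, rebuild it from the parity block and the surviving block.
    recovered_b1 = bytes(x ^ y for x, y in zip(parity, b2))
    assert recovered_b1 == b1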
@ravikumark6746 1 year ago
@Ankit Bansal, can you please solve this using SQL?
@amazhobner 8 months ago
This isn't instagram where you can tag channels lol
@tarunreddy5917 1 year ago
Are there any differences in terms of performance?
@atulgupta9301 1 year ago
Crisp, concise, and to-the-point explanation in great detail. Anyone can understand it through this video. Extremely well done. Kudos...