Repartition Vs Coalesce
12:00
2 years ago
Comments
@abuhashemhustle 2 days ago
Very useful. Thanks.
@kiranchavadi7881 1 month ago
You are doing great, please do not stop making videos!
@kiranchavadi7881 1 month ago
Very detailed and clear understanding!
@naren2146 1 month ago
I'm highly interested in taking the Azure course. Could you please share the online course details and let me know when the new batch is scheduled to begin?
@ceciliaayala3923 1 month ago
In 2024 the "Post SQL Script" option is not available, but I solved that by setting the Post SQL in the "Pre SQL Script" of the next task, and at the end of the pipeline I added a Script activity with only a "Pre SQL Script" for the last Post SQL. Thanks for the video!
@Ks-oj6tc 2 months ago
Good session, thank you.
@pallaviak11 4 months ago
This is a very helpful video. I am new to ADF and wanted to implement a similar scenario, thanks for this.
@Lolfy23 4 months ago
voice is very low... not acceptable
@vru5696 6 months ago
Can you also create a video on copying files from a SharePoint location to ADLS? Thanks
@WolfmaninKannada 6 months ago
Sir, thanks a lot for demonstrating how to process an IoT data stream on Azure. It really helped me get an in-depth understanding of the service use cases. Please do make more end-to-end videos on streaming data analytics.
@g.suresh430 7 months ago
Nice explanation. I want all the columns in both of the examples.
@PS65501 7 months ago
If I have 4-5 levels of subfolders, how will this work?
@GaneshNaik-lv6jh 7 months ago
Good explanation sir, thank you.
@chandrashekar3649 7 months ago
You are not updating the playlist, sir... please look into it.
@AnandZanjal 8 months ago
Great video on Azure! Really helpful and easy to follow. Thanks for sharing!
@Tsoy78 8 months ago
Thanks. I think this can be enhanced and shortened, as you could use a dynamic expression with "if-else" inside the actual "If" condition expression.
@rathnamaya6263 8 months ago
Thank you so much😊
@soumyabag5268 8 months ago
Can I get a link to the dataset?
@mounicagvs9020 9 months ago
Keshav, can you share the link to the next video where you compared the contents of files using MD5?
@gauravpandey211088 10 months ago
Once you have this column as an RDD post-transformation, how do you add it back to the existing data frame as a new column?
@berglh 9 months ago
If you want to do this in a Spark data frame and store the results, use the pyspark.sql.functions.split function to split the string by a delimiter; this will return an array column, like map. Then, to get the same sort of effect as flatMap inside an existing data frame, you can use the pyspark.sql.functions.explode function on the array column of split values: import pyspark.sql.functions as f; df = df.withColumn("split_values", f.split(f.col("product_descriptions"), " ")); df = df.withColumn("exploded", f.explode(f.col("split_values"))).
Keep in mind, it depends on what you're trying to do; map and flatMap are useful if you want to return the column from a data frame to then do other work in the programming language outside the context of Spark, say, for instance, getting a list in Python to iterate through using another Python library. If you want to retain the data in the data frame, you're usually better off using the built-in Spark functions on the data frame columns directly; in some cases these call map and flatMap internally on the RDD, but it typically results in less code for the same performance. There are circumstances where the map and flatMap methods can be slower in my experience; sticking to the Spark/PySpark built-in column functions is best.
You can build a data frame from an RDD using RDD.toDF(), but you will need some kind of index value to join it back onto the source data frame in a meaningful way; due to the way Spark partitions data between executors, there is no inherent ordering, which would make joining an RDD back (at scale) pointless without a column to join on. So this goes back to the point that using the built-in functions avoids all this hassle.
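A minimal runnable PySpark sketch of the split/explode approach described in the comment above; the data frame contents and column names here are hypothetical examples:
import pyspark.sql.functions as f
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Hypothetical data frame with a text column to split
df = spark.createDataFrame(
    [(1, "red cotton shirt"), (2, "blue denim jeans")],
    ["product_id", "product_descriptions"],
)
# split() returns an array column; explode() emits one row per array element,
# giving a flatMap-like result while staying inside the data frame API
df = df.withColumn("split_values", f.split(f.col("product_descriptions"), " "))
df = df.withColumn("word", f.explode(f.col("split_values")))
df.select("product_id", "word").show()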
@SriVanshi 10 months ago
How do you do this in Scala Spark?
@handing2857 11 months ago
Very clearly explained.
@spawar2443 1 year ago
Too many advertisements.
@vishwavihaari 1 year ago
It worked for me. Thank you.
@CloudandTechie 1 year ago
Great, can you share the scripts and the resources used in the project on GitHub or Dropbox if possible? Thanks again for this wonderful session.
@tejpatnala709 1 year ago
I have a doubt: if we need to connect to an on-prem SQL Server, do we need to have SSMS (connected with SQL Server) installed on the same machine where ADF is present? Or will the linked service test connection pass without anything installed on that machine, when SQL Server is on a different machine?
@Saikumarguturi 7 months ago
I think there is no need to install SSMS.
@dhp106 1 year ago
This video helped me SO MUCH thank you.
@muzaffar527 1 year ago
Where do we define the nested if, nested else, outer elif, and outer else?
@muzaffar527 1 year ago
I thought we were calling outer activities in these conditions, but they are simple SQL queries. Now I understand the concept. Great approach; it helped me in my pipeline. Thank you.👍🏼
@muzaffar527 1 year ago
I didn't understand how and where to write the queries - query1, query2, query3. Could you please help?
@morriskeller 1 year ago
Nice and clear, thank you!
@muzaffar527 1 year ago
As discussed in the video, is the real-time scenario recorded? Please share the link if it is; I didn't find it in your playlist. Thank you.
@sethuramalingam3i 1 year ago
super bro
@s.ifreecoachinginstitute0077 1 year ago
Nice video bro, but I have a small doubt: after completion of the data mapping, which data is stored in Azure SQL DB?
@shankarshiva5587 1 year ago
I need to display data from a table whose column name has a special character in Databricks, e.g. select first-name from table_name. It throws an error.
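One common fix, sketched below on the assumption that the error comes from the hyphen in the column name: Spark SQL lets you quote such identifiers with backticks (the table and column names here are hypothetical examples):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Hypothetical table whose column name contains a hyphen
df = spark.createDataFrame([("Alice",), ("Bob",)], ["first-name"])
df.createOrReplaceTempView("people")
# Backticks quote the identifier so the hyphen is not parsed as subtraction
spark.sql("SELECT `first-name` FROM people").show()
# Expression-based DataFrame APIs such as selectExpr need the backticks too
df.selectExpr("`first-name`").show()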
@Rmkreddy92 1 year ago
Nice explanation and collective information shared on each concept. Thank you very much bro.
@kachipamarthy8254 1 year ago
Hi Keshav, I would like to discuss the training with you; can I have your email ID please?
@KeshavLearnTSelf 1 year ago
Hi, [email protected] is my email
@hiteshlalwani5519 1 year ago
Hi Keshav, can we add a DATE table in the modelling portion which acts as a bridge table between the two respective tables?
@SK-wp4tm 1 year ago
Hey, can you please share the PPT that you used in this video?
@azyamp 1 year ago
Thank you, it was very useful.
@MrTejasreddy 1 year ago
Hi bro, I recently found your channel and you have great content... Can you please make a video on a Power BI report on Azure?
@unfolded8 1 year ago
better than most lecturers
@funtimewithlekyasri9445 1 year ago
Nice presentation.
@abhinavrai6519 1 year ago
Thank you so much for this perfect explanation.
@jadhavsakshi5834 1 year ago
Hey, we can also do it this way, right? For example, if we want to insert data into multiple tables using different CSV files in a dynamic way.
@aravind5310 1 year ago
Great effort. It would be even more helpful if you added Databricks.
@khandoor7228 1 year ago
awesome, nice piece of code!!
@mateen161 1 year ago
Nice one. Thank you!
@shaikshavalishaik4457 1 year ago
Thank you Keshav for the excellent videos. Can you please do a video on reading data from an Azure SQL Database server and writing a CSV file to Blob storage? If it is already done, can you please share the link to the video?