7. Remove Duplicate Rows using Mapping Data Flows in Azure Data Factory

55,686 views

WafaStudies

1 day ago

Comments: 44
@anithasantosh6729 · 3 years ago
Thanks for the video. I was trying to use GroupBy and handle the rest of the columns in a stored procedure. Your video made my job easy.
@WafaStudies · 3 years ago
Welcome 😁
@susmitapandit8785 · 2 years ago
In the output file, why is EmpID not sorted even though we used the Sort transformation?
@vishaljhaveri7565 · 1 year ago
In the aggregation step, choose a column pattern and write name != 'columnnameonwhichyougroupedby' -> basically this filters out the columns mentioned in the Group By step and performs the aggregation on all the other columns. Write $$ if you don't want to change the name of the matched column, and write first($$) or last($$) as per your requirement.
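For readers more comfortable with dataframes, the pattern described above (group by the key column, then take first($$) or last($$) of every other column) can be sketched in pandas. This is only an analogy to the ADF Aggregate transformation, and the column names EmpID, Name, Dept are made up for illustration:

```python
import pandas as pd

# Hypothetical sample data with a duplicated EmpID
df = pd.DataFrame({
    "EmpID": [1, 1, 2, 3],
    "Name":  ["abc", "abc", "xyz", "pqr"],
    "Dept":  ["HR", "HR", "IT", "IT"],
})

# Analogue of: Aggregate -> Group By EmpID, column pattern name != 'EmpID'
# with expression first($$): keep the first row seen for each EmpID.
# Using .last() instead would correspond to last($$).
deduped = df.groupby("EmpID", as_index=False).first()

print(deduped)
```
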
@mankev9255 · 3 years ago
Good concise tutorial with clear explanations. Thank you.
@WafaStudies · 3 years ago
Thank you ☺️
@gsunita123 · 4 years ago
The data in the output consolidated CSV is not sorted on EmployeeID. We did use the Sort before the Sink, so why is the data not sorted?
@Anonymous-cj4gy · 3 years ago
Yes, it is not sorted. The same thing happened to me.
@Aeditya · 2 years ago
Yeah it's not sorted
@maheshpalla2954 · 3 years ago
How do you know which function to use, since we can't be sure which rows are duplicates when we have millions of records in the source?
@Anonymous-cj4gy · 3 years ago
In the output file the data is still not sorted, if you look at it. The same thing happened to me as well: even after using Sort, the data is still unsorted.
@lehlohonolomakoti7828 · 2 years ago
Amazing video, super helpful. It allowed me to remove duplicates from a REST API source and create a reference table inside my DB.
@WafaStudies · 2 years ago
Thank you 😊
@swapnilghorpadewce · 2 years ago
Hi, I am trying to bulk-load multiple JSON files into Cosmos DB. Each JSON file contains a JSON array of 5000 objects, and the total data size is around 120 GB. I have used Copy Data with a ForEach iterator. It throws an error for a given file but still inserts some records from that file. I am not able to skip incompatible rows, and I am also not able to log the skipped rows. I have tried all available options. Can you please help?
@rohitkumar-it5qd · 2 years ago
How do I write both updated records and new records to the same destination without having any duplicates on ID? Please suggest.
@karthike1715 · 2 years ago
Hi, I have to check for duplicates across all columns. How do I handle that in the Aggregate transformation? Please help me.
@rajkiranboggala9722 · 3 years ago
Well explained!! Thank you. If I have only one CSV file and I want to delete the duplicate rows, I guess I can do the same by self-unioning the file. I'm not sure if there's any simpler method.
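For a single file, the whole-row dedup the commenter describes can be sketched outside ADF as well. A minimal pandas equivalent, with hypothetical data standing in for a CSV loaded via pd.read_csv:

```python
import pandas as pd

# Hypothetical rows as they might be loaded from a single CSV file
df = pd.DataFrame({
    "EmpID": [1, 1, 2],
    "Name":  ["abc", "abc", "xyz"],
})

# Exact-duplicate rows collapse to one; no self-union is needed here.
deduped = df.drop_duplicates()

print(deduped)
```
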
@WafaStudies · 3 years ago
Thank you 🙂
@AkshayKumar-ou8in · 2 years ago
Thank you for the video, very good explanation.
@WafaStudies · 2 years ago
Welcome 😊
@ACsakvith · 1 year ago
Thank you for the nice explanation
@battulasuresh9306 · 2 years ago
Point 1: what if we want to remove both of the duplicate rows entirely? Point 2: what if we specifically want to keep a particular row based on a column, say a latest-modified-date column?
@nareshpotla2588 · 2 years ago
Thank you Maheer. If we have 2 identical records for a unique EmpID, you use last($$)/first($$) to get either one. But if we have 3 records like 1,abc / 2,xyz / 3,pqr, then first($$) gives us 1,abc and last($$) gives 3,pqr. How do we get the middle one (2,xyz)?
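One way to answer the question above, sketched in pandas; in ADF itself, a Window transformation producing a row number per group followed by a Filter on row number == 2 would be the analogous route. The data and column names are made up:

```python
import pandas as pd

# Three records sharing the same grouping key
df = pd.DataFrame({
    "EmpID": [7, 7, 7],
    "Name":  ["abc", "xyz", "pqr"],
})

# Number the rows within each EmpID group (1, 2, 3, ...),
# mirroring rowNumber() over a window partitioned by EmpID,
# then keep only the 2nd row of each group.
df["rn"] = df.groupby("EmpID").cumcount() + 1
middle = df[df["rn"] == 2].drop(columns="rn")

print(middle)
```
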
@marcusrb1048 · 2 years ago
Great video, it's clear. But what happens with new records? If you use a union and only upsert, it checks only the duplicate rows, doesn't it? I tried the same as yours, but the new records were removed in the final step. I ran into an issue handling INSERT, UPDATE and DELETE in three separate steps; how could I achieve that? Thanks
@MigmaStreet · 1 year ago
Thank you for this tutorial!
@arifkhan-qe4td · 2 years ago
The Aggregate transformation is not allowing me to add $$ as an expression. Any suggestions please?
@pachinkosv · 1 year ago
I don't want to import a few of the columns into the data table. How is that done?
@vishvesbhesania7767 · 3 years ago
Why is the data not sorted in the output file, even after using the Sort transformation in the data flow?
@kumarpolisetty3048 · 4 years ago
Suppose we have more than 2 records for one EmpID, and I want to take the Nth record. How can I do that?
@luislacadena9689 · 3 years ago
Excellent video. Do you think it is possible to eliminate the duplicates while keeping, for example, the one that has the higher department id/number? I've seen that you kept the first record by using first($$), but I'm curious whether you can remove duplicates in the RemoveDuplicateRows step based on other criteria. Is it possible to keep only the duplicate with the higher department id?
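Keeping the duplicate with the highest department id is a common variant of this dedup. A pandas sketch of the idea follows; in ADF, a Window transformation ranking rows by department id descending, followed by a Filter on rank == 1, would be the analogous route. Column names EmpID and DeptID are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "EmpID":  [1, 1, 2],
    "DeptID": [10, 30, 20],
})

# Sort so the highest DeptID comes last within each EmpID,
# then keep that last row per EmpID.
keep_highest = (df.sort_values("DeptID")
                  .drop_duplicates("EmpID", keep="last")
                  .sort_values("EmpID"))

print(keep_highest)
```
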
@PhaniChakravarthi · 4 years ago
Hi, thank you for the sessions, they are wonderful. Just one query: can you make a video on identifying the DELTA change between two data sources and capturing only the mismatched records within ADF?
@WafaStudies · 4 years ago
Sure. I will plan one video on this.
@benediktbuchert9002 · 2 years ago
You could use a window function and mark all duplicates, and then use a filter and filter them out.
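The window-then-filter approach suggested above can be sketched in pandas; in ADF, a Window transformation computing rowNumber() per group plus a Filter transformation would play the same two roles. The sample data is made up:

```python
import pandas as pd

df = pd.DataFrame({
    "EmpID": [1, 1, 2],
    "Name":  ["abc", "abc", "xyz"],
})

# Mark every row after the first within each EmpID as a duplicate...
df["is_dup"] = df.groupby("EmpID").cumcount() > 0

# ...then filter the marked rows out.
result = df[~df["is_dup"]].drop(columns="is_dup")

print(result)
```
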
@MrSuryagitam · 3 years ago
If we have multiple files in ADF, how do we remove duplicates across the files in ADF in a single pass?
@krishj8011 · 2 years ago
Great tutorial...
@WafaStudies · 2 years ago
Thank you ☺️
@EmmaSelma · 4 years ago
Hello Wafa, thank you so much for this tutorial, it's very helpful. New subscriber here. Thinking of scenarios to use this, I have a question please: is it correct to use this to get the latest data from ODS to DWH in the case of a full load (only insertion occurring in ODS and no truncate), just like row partitioning? Thank you upfront.
@DataForgeAcademy · 2 years ago
Why didn't you use the Sort function with the remove duplicates option?
@paulhepple99 · 4 years ago
Great vid, thanks!
@varun8952 · 1 year ago
super
@isanayang6338 · 4 years ago
Can you speak slowly and clearly?
@WafaStudies · 4 years ago
Sure. Thanks for the feedback. I will try to improve on it.
@isanayang6338 · 4 years ago
Your strong accent makes it so difficult to understand you.
@kajalchopra695 · 3 years ago
How can we optimize the cluster startup time? It is taking 4m 48s to start a cluster. How can I reduce that?