This is a great video, thanks a lot! Two quick questions, please. First, what are the prerequisites to achieve this ETL goal? I think it's: (1) create the cluster, and (2) set up DynamoDB to create the table for orchestration. Is that right? Second, about the two rules: what exactly is the WORKFLOW rule? I understand the Job rule. Would appreciate your suggestion. Thanks in advance.
@jassimahallambardar3315 2 years ago
Also, it seems that in the code at the link you have specified SQL statements instead of the stored procedures (1, 2, 3). Can you please advise on that as well?
@jassimahallambardar3315 2 years ago
Can you please also advise: when the first EXEC statement from DynamoDB runs, will these three items ("0") and the related variables ("S") be changed or not? What will be the result for them upon executing the first SQL from the RSSttmnt table? Please advise.
@jassimahallambardar3315 2 years ago
I understood this part; it is the "targetSQL". Never mind that, but please advise about the ("0") and ("S") below. Through the SP I can certainly use or extend this for any incremental load as you suggested. Appreciate that.
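[Note] On the ("0") and ("S") being asked about: "S" is DynamoDB's type descriptor for a string attribute. It marks the value's type in DynamoDB's item format; it is not data that an ETL run rewrites. The "0" would then just be a string value, e.g. an ordering key. A minimal sketch of how one such orchestration item might be written with boto3, assuming the table name mentioned in the thread and illustrative attribute names (none of these are confirmed by the video):

```python
import boto3

ddb = boto3.client("dynamodb")

# Hypothetical item layout -- attribute names are illustrative, not from the video.
# "S" is DynamoDB's type descriptor for a string; it never changes at run time.
ddb.put_item(
    TableName="RSSttmnt",  # assumption: the orchestration table named in the thread
    Item={
        "stmtId":    {"S": "0"},                    # ordering key, e.g. "0", "1", "2"
        "targetSQL": {"S": "CALL sp_load_orders();"},  # the SQL/SP the Lambda runs
        "status":    {"S": "PENDING"},              # a flag the workflow could update
    },
)
```

Whether a status-style attribute is updated after each statement depends on the Lambda's logic; the type descriptors themselves stay fixed.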
@bryanbarrion4522 3 years ago
I see you are recreating the tables using the stored proc. What if you're working on incremental data? Do you just enable the Glue job bookmark?
@AWSTutorialsOnline 3 years ago
This SP was just an example. If you're doing an incremental data load using this method, you will have to handle it in SQL, e.g. make the SP parameterized (to filter the new data) and let the Lambda function call the SP, passing the parameters dynamically. The job bookmark is another alternative when using a Glue job.
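[Note] A minimal sketch of what that suggestion could look like, assuming the Lambda uses the Redshift Data API (boto3 "redshift-data") with bind parameters, and a hypothetical parameterized SP named sp_incremental_load; the cluster, database, and user identifiers are placeholders:

```python
import boto3

# Assumption: the Lambda talks to Redshift via the Data API.
client = boto3.client("redshift-data")

def lambda_handler(event, context):
    # Pass a watermark (e.g. the last successful load time) so the SP
    # filters to new rows instead of recreating the whole table.
    response = client.execute_statement(
        ClusterIdentifier="my-redshift-cluster",  # placeholder
        Database="dev",                           # placeholder
        DbUser="awsuser",                         # placeholder
        Sql="CALL sp_incremental_load(:last_run_ts);",
        Parameters=[{"name": "last_run_ts", "value": event["last_run_ts"]}],
    )
    # Execution is asynchronous; poll the statement id with
    # describe_statement to check for completion.
    return {"statementId": response["Id"]}
```

The Step Function (or the caller) would supply event["last_run_ts"], so each run's filter window is decided dynamically rather than hard-coded in the SP.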
@arunt741 2 years ago
Hi, could you please make a video on orchestrating Redshift ETL using Step Functions? Many traditional BI projects built on PL/SQL are looking to move to Redshift ETL with Step Functions. Thanks in advance.