Thank you for the pipeline video, very insightful.
@AWSTutorialsOnline 2 years ago
Glad it was helpful!
@hirendra83 3 years ago
Excellent tutorial. Thanks
@AWSTutorialsOnline 3 years ago
You are welcome!
@rollinOnCode 2 years ago
This is super good and helpful. Thank you.
@AWSTutorialsOnline 2 years ago
You're very welcome!
@nttazitt1300 3 years ago
Very helpful tutorial, thanks.
@AWSTutorialsOnline 3 years ago
Glad it was helpful!
@nrodriguezgal148 1 year ago
Excellent video and explanation.
@AWSTutorialsOnline 1 year ago
Glad it was helpful!
@hsz7338 3 years ago
As always, thank you for the video. The breakdown comparison is incredibly intuitive. I am curious which approach you think is best for handling pipeline replay (i.e., recovering from pipeline failure) and for the CI/CD process (i.e., pipeline as code)?
@AWSTutorialsOnline 3 years ago
CI/CD support is available for all the approaches. Replay is better in the event-driven approach because you can re-run just the part of the pipeline affected, based on the error raised in the event.
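To make that concrete, here is a minimal sketch of such a replay hook, assuming an EventBridge rule forwards Glue "Job State Change" events to a Lambda function. The job name is a placeholder; the event shape follows what EventBridge emits for Glue job runs.

```python
# Sketch of a replay hook for the event-driven approach: EventBridge routes
# a "Glue Job State Change" event to this Lambda handler, which re-runs only
# the failed job instead of replaying the whole pipeline.
def handle_glue_failure(event, glue=None):
    if glue is None:
        import boto3  # only needed when no client is injected (e.g. in Lambda)
        glue = boto3.client("glue")
    detail = event["detail"]
    if detail.get("state") != "FAILED":
        return None  # ignore non-failure notifications
    # restart just the failed step of the pipeline
    return glue.start_job_run(JobName=detail["jobName"])["JobRunId"]
```

The client is passed in as a parameter so the handler can also be exercised locally with a stub.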
@andresmerchan6418 1 year ago
Hello! Which of the three methods is the most cost-effective?
@AWSTutorialsOnline 1 year ago
The event-based one.
@DavidChoqueluqueRoman 1 year ago
Hello, good video. Does anyone know when to use Glue workflows and when to use Step Functions?
@AWSTutorialsOnline 1 year ago
Use a Glue workflow when you want to orchestrate Glue jobs and crawlers only, and Step Functions when you want to orchestrate Glue jobs and crawlers plus other services as well.
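As a rough illustration of the difference, a Step Functions definition can chain a Glue job, a crawler, and a non-Glue service (SNS here) in one state machine, while a Glue workflow could only cover the first two. This is a sketch with placeholder job, crawler, and topic names:

```python
import json

# Sketch of a Step Functions (ASL) definition mixing Glue and non-Glue steps.
definition = {
    "StartAt": "RunGlueJob",
    "States": {
        "RunGlueJob": {  # Glue job via the native .sync service integration
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "clean-zone-etl"},
            "Next": "StartCrawler",
        },
        "StartCrawler": {  # crawler via the AWS SDK service integration
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:glue:startCrawler",
            "Parameters": {"Name": "curated-zone-crawler"},
            "Next": "Notify",
        },
        "Notify": {  # a non-Glue service -- beyond what a Glue workflow can do
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:pipeline-alerts",
                "Message": "pipeline finished",
            },
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```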
@adityag020 3 years ago
Insightful tutorial. Can you make a practical video on an event-based pipeline that uses DynamoDB to store metadata and configuration, with a retry mechanism in case it fails?
@AWSTutorialsOnline 3 years ago
sure - coming soon :)
@timmyzheng6049 2 years ago
Thank you for the pipeline video, very insightful. Quick question: to avoid hardcoding, can I also use DynamoDB to store environment parameters such as S3 paths, file names, and business dates for my ETL pipeline (say, one using Step Functions)? And what do you think is the best industry practice for storing parameters for an AWS ETL pipeline?
@AWSTutorialsOnline 2 years ago
If you want to decouple the configuration, keep it centralized in a service like DynamoDB or Parameter Store. DynamoDB is especially good if you are going for a multi-account deployment and still want to keep the configuration centralized.
@timmyzheng6049 2 years ago
@@AWSTutorialsOnline Thank you for the response. After doing some research, it seems that to pass parameters to a Glue job I have to use a Lambda function with boto3 in the Step Functions workflow. Since Lambda can also start the Glue job via the Python Glue API, does that mean there is no need to add the Glue job to the Step Functions workflow separately?
@alokanand851 2 years ago
Hi all, we are using AWS Glue + PySpark to run ETL into a destination RDS PostgreSQL DB. The destination tables have primary- and foreign-key columns with the UUID data type, and we are failing to populate these UUID columns. How can we achieve this? Please suggest.
@AWSTutorialsOnline 2 years ago
I am not sure what error you are getting. The ETL job has to respect the table-level column constraints; as long as you are doing that, there should not be a problem.
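One common fix for this particular symptom (not confirmed in the thread, but a known behavior of the PostgreSQL JDBC driver): Spark writes UUIDs as strings, and appending `stringtype=unspecified` to the JDBC URL lets the Postgres server cast those strings into `uuid` columns. A sketch of a helper that builds the write options; host, table, and credentials are placeholders:

```python
# Build Spark/Glue JDBC write options for Postgres. The key detail is
# "stringtype=unspecified", which makes the JDBC driver send strings as
# untyped literals so the server can cast them into uuid (and other
# non-text) columns instead of failing on a varchar-to-uuid mismatch.
def pg_write_options(host, database, table, user, password):
    return {
        "url": f"jdbc:postgresql://{host}:5432/{database}?stringtype=unspecified",
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "org.postgresql.Driver",
    }
```

Usage would look like `df.write.format("jdbc").options(**pg_write_options(...)).mode("append").save()`.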
@mangeshshinde2844 3 years ago
Nice tutorial. Can you make a practical tutorial on the event-based pipeline?
@AWSTutorialsOnline 3 years ago
Yes, sure. I am getting multiple requests for that, so I will do it.
@pachappagarimohanvamsi4641 3 years ago
Could you please make a practical, workshop-style walkthrough of these approaches?
@AWSTutorialsOnline 3 years ago
Sure, will do. The Glue workflow lab is already available at aws-dojo.com/workshoplists/workshoplist29/
@ryany420 2 years ago
Awesome tutorial! I have a question, if you don't mind: how should we handle upserts/deletes in the landing/clean/curated zones? I know Databricks has a similar architecture with bronze/silver/gold, but it comes with Delta Lake. If our destination is Redshift, should we move the data into Redshift (an RDBMS) at an earlier stage, e.g. before the curated zone? I also sent you an email; I hope you can help answer. Thanks heaps.
@radhasowjanya6872 3 years ago
Hello Sir, I follow all your videos; they are very useful in my project. Thank you very much. I have a quick question: is there a way to run multiple SQL statements in one AWS Glue Studio job? If yes, can you help me with it? (Use case: I want to truncate the target table (Snowflake) before loading.)
@AWSTutorialsOnline 3 years ago
You can chain multiple SQL Transform nodes one after another to run multiple SQL statements in sequence.
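For the truncate-before-load use case specifically, another option (outside Glue Studio's visual editor, in a script job using the Spark Snowflake connector) is the connector's `preactions` option, which runs SQL before the write. A sketch of the write options with placeholder account, schema, and table names:

```python
# Build Spark Snowflake connector options where "preactions" runs a SQL
# statement (here a TRUNCATE of the target table) before the load itself.
def snowflake_write_options(sf_url, user, password, database, schema, table):
    return {
        "sfURL": sf_url,
        "sfUser": user,
        "sfPassword": password,
        "sfDatabase": database,
        "sfSchema": schema,
        "dbtable": table,
        # executed on Snowflake before the DataFrame is written
        "preactions": f"TRUNCATE TABLE IF EXISTS {schema}.{table}",
    }
```

The DataFrame would then be written with `df.write.format("snowflake").options(**snowflake_write_options(...)).mode("append").save()`.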
@YEO19901 3 years ago
Wonderful.
@AWSTutorialsOnline 3 years ago
Thanks
@SurendraUddagiri 3 years ago
Can we run a machine learning algorithm in a Glue job using code?
@AWSTutorialsOnline 3 years ago
Yes, you can, but it is not recommended. You should use a Glue job for feature engineering, not for training the model; model training should be done in SageMaker.
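For illustration, the hand-off can look like this: the Glue job writes engineered features to S3, and a follow-up step launches a SageMaker training job on them. A sketch with placeholder names, image URI, and role ARN (none of these come from the video):

```python
# Launch a SageMaker training job on features that a Glue job has already
# written to S3. The client is injectable so the function can be tested
# locally with a stub.
def start_training(job_name, features_s3, output_s3, role_arn, image_uri, sm=None):
    if sm is None:
        import boto3  # only needed when no client is injected
        sm = boto3.client("sagemaker")
    return sm.create_training_job(
        TrainingJobName=job_name,
        AlgorithmSpecification={"TrainingImage": image_uri,
                                "TrainingInputMode": "File"},
        RoleArn=role_arn,
        InputDataConfig=[{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": features_s3,  # output of the Glue feature-engineering job
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        OutputDataConfig={"S3OutputPath": output_s3},
        ResourceConfig={"InstanceType": "ml.m5.xlarge",
                        "InstanceCount": 1,
                        "VolumeSizeInGB": 10},
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )
```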