Nice video. One question: if we use a filter condition while loading the data, won't Spark's Catalyst optimizer push the filter down and read fewer rows? Is the predicate syntax only for Glue, or can Apache Spark use it too?
@AWSTutorialsOnline 2 years ago
It is an Apache Spark-level thing. The purpose is to scan and load only as much data as is required. Please check this link - jaceklaskowski.gitbooks.io/mastering-spark-sql/content/spark-sql-Optimizer-PushDownPredicate.html
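To illustrate the Spark-level side of the answer, here is a minimal, hedged sketch (the S3 path and the "region" column are made up for illustration) showing Catalyst pushing a filter into a Parquet scan in plain Apache Spark:

```python
# Plain Apache Spark: Catalyst pushes the filter into the Parquet scan.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pushdown-demo").getOrCreate()

# "s3://my-bucket/sales/" and the "region" column are illustrative only.
eu_sales = spark.read.parquet("s3://my-bucket/sales/").filter("region = 'EU'")

# The physical plan lists the condition under PushedFilters in the FileScan
# node, confirming it is evaluated at the data source rather than after a
# full load of every row.
eu_sales.explain(True)
```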
@cheluvesha A year ago
Here are a few reasons why you may want to explicitly specify pushdown predicates in AWS Glue:
- Improved performance: by explicitly specifying pushdown predicates, you can control which filtering conditions are pushed down to the storage layer. This can improve query performance, as the storage layer will only return the data that is actually needed for the query.
- Fine-grained control: explicit pushdown predicates give you fine-grained control over the filtering conditions that are pushed down to the storage layer. This can be useful when you want to optimize a specific query or set of queries.
- Troubleshooting: explicit pushdown predicates make it easier to troubleshoot performance issues. For example, if a query is not performing as expected, you can check whether the pushdown predicates are properly specified and being used effectively.
In conclusion, while Apache Spark's Catalyst Optimizer can automatically push down filtering conditions to the storage layer, explicitly specifying pushdown predicates in AWS Glue can provide additional benefits in terms of performance, control, and troubleshooting.
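For readers who want to see what an explicit pushdown predicate looks like in a Glue job, here is a minimal sketch; the database name sales_db, table name orders, and partition keys year/month are assumptions for illustration, not from the video:

```python
# Minimal Glue job sketch: partition pruning with push_down_predicate.
# Assumes a catalog table "sales_db.orders" partitioned by year and month.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Only the S3 prefixes of the matching partitions are listed and read;
# non-matching partitions are never loaded into memory.
orders_2023_01 = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="orders",
    push_down_predicate="year = '2023' and month = '01'",
)
print(orders_2023_01.count())
```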
A year ago
Hi, one question on a different point: is it possible to create a table in the Glue catalog from a Glue job, with S3 as the source/target data? One condition is that the table must already exist in the Glue catalog, but is there another way to create it dynamically?
@AWSTutorialsOnline A year ago
Technically possible. You can use the Python boto3 AWS SDK in the Glue job to check for the existence of a table. If it does not exist, you simply start the crawler to create the table in the catalog.
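As a rough illustration of that pattern (the database, table, and crawler names below are hypothetical), a Glue job could check the catalog with boto3 and start a crawler only when the table is missing:

```python
# Hypothetical helper: create the catalog table via a crawler only if missing.
import boto3

glue = boto3.client("glue")

def ensure_table(database: str, table: str, crawler: str) -> None:
    try:
        glue.get_table(DatabaseName=database, Name=table)
        print(f"Table {database}.{table} already exists in the catalog")
    except glue.exceptions.EntityNotFoundException:
        # The crawler builds the table definition from the S3 source data.
        glue.start_crawler(Name=crawler)
        print(f"Started crawler {crawler} to create {database}.{table}")

ensure_table("sales_db", "orders", "orders_crawler")
```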
@worldupdates. A year ago
Keep it up sir.
@AWSTutorialsOnline A year ago
Thank you, I will.
@abeeya13 6 months ago
Can this be used to read only a few columns from an S3 bucket?
@VishalSharma-hv6ks 2 years ago
Very informative.
@AWSTutorialsOnline 2 years ago
Thanks a lot
@mesaquestbsb A year ago
Hey, very nice video! I have some questions. Does this approach apply to Delta tables? I work with a table of more than 150 million rows, from which I need to delete almost 50 million rows every week and then load the new data. What is your suggestion for the deletion, considering that I use partitioning?
@AWSTutorialsOnline A year ago
Pushdown is designed for partitioned data. You should try the Delta Lake framework in AWS Glue for your purpose. Details - docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-delta-lake.html
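For the weekly delete-and-load scenario described above, a hedged sketch of the Delta Lake route could look like the following; it assumes a Glue job already configured for Delta Lake (for example, --datalake-formats set to delta), and the S3 paths, cutoff date, and event_date partition column are placeholders:

```python
# Sketch of a weekly delete + append against a Delta table from a Glue job.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

table_path = "s3://my-bucket/delta/big_table/"   # illustrative path
delta_table = DeltaTable.forPath(spark, table_path)

# Delete only rows older than the weekly cutoff; with the table partitioned
# on event_date, untouched partitions are not rewritten.
delta_table.delete("event_date < '2023-01-01'")

# Append the new weekly batch.
new_rows = spark.read.parquet("s3://my-bucket/staging/weekly_batch/")
new_rows.write.format("delta").mode("append").save(table_path)
```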
@ericguymon5418 2 years ago
What method in either Lake Formation or Glue would you recommend for daily ingestion of a database with, say, 15 tables, 5 of which could add 1 million rows daily?
@AWSTutorialsOnline 2 years ago
You can use an AWS Glue job for it. Please check this link - aws-dojo.com/workshoplists/workshoplist33/
@abhijeetpatil-k5y A year ago
Sir, which IAM role did you use for AWS Glue and for the Jupyter notebook?
@AWSTutorialsOnline A year ago
This link might help - docs.aws.amazon.com/glue/latest/dg/create-an-iam-role-notebook.html
@abhishek822 A year ago
Does the pushdown predicate work for a JDBC source?
@AWSTutorialsOnline A year ago
No, it is designed for S3 bucket-based partitions or Hive metadata only.
@mohdshoeb5101 2 years ago
Can you please tell me how to optimize reading a large number of files every hour? I have 10-12 Glue jobs and every job reads 10 tables in every run.
@AWSTutorialsOnline 2 years ago
Please check part 5, where I explained batching. It might help. Please let me know.
@Books_Stories_Poetry A year ago
Spark follows lazy evaluation, and when it prepares the plan to fetch the data it will take the filter into consideration. A pushdown predicate does not make any sense.
@PakIslam2012 2 years ago
Isn't glueContext.create_dynamic_frame also lazily evaluated, like a Spark DataFrame? Which would mean it should not end up loading the entire 410 records in the initial code?
@AWSTutorialsOnline 2 years ago
You are right, it is lazily loaded. It is more about the in-memory processing size: do you want to process 410 records in memory to filter them down to 160 records, or do you want to process 160 records right from the beginning?
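To make that trade-off concrete, here is a hedged sketch contrasting the two approaches; the catalog names and the year partition value are placeholders, as in the earlier sketch:

```python
# Both approaches are lazy, but they differ in how much data the job touches.
from awsglue.context import GlueContext
from awsglue.transforms import Filter
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Approach 1: load the whole table, then filter the records in memory.
all_rows = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)
year_2023 = Filter.apply(frame=all_rows, f=lambda row: row["year"] == "2023")

# Approach 2: prune partitions at read time, so excluded partitions are
# never listed or read from S3 in the first place.
year_2023_pruned = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="orders",
    push_down_predicate="year = '2023'",
)
```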
@Ady_Sr A year ago
You are right, I think it would never load the 410 records in the first place; due to lazy evaluation it will pick only the filtered records, not all of them, even if the filter is in the second statement.