Hello Bhawna. Regarding "partitions should be at least 1GB", it is not always that straightforward. If your use case is read-heavy, larger partitions make sense; for write-heavy use cases, smaller partitions work much better. Here is a reference video on this: kzbin.info/www/bejne/pWPOaoN_eLyXrpI
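For illustration, a minimal sketch of how you might sanity-check file sizing on an existing table before tuning partitions; it assumes a PySpark session `spark` and a hypothetical Delta table named `my_table`:

```python
# DESCRIBE DETAIL reports table-level file statistics for a Delta table.
detail = spark.sql("DESCRIBE DETAIL my_table").select("numFiles", "sizeInBytes").collect()[0]
avg_file_mb = detail.sizeInBytes / detail.numFiles / 1024**2
print(f"{detail.numFiles} files, avg {avg_file_mb:.1f} MB per file")
```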
@cloudfitness • 3 years ago
Yes I agree!
@AyushSrivastava-gh7tb • 2 years ago
I haven't seen a better Data Engineering channel than this one!! 🙇♀
@pratiksharma8548 • 1 year ago
Hi, I just want to know how many files are scanned by the query below:
SELECT id, name FROM table WHERE id = 1000;
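The answer depends on the table layout: if `id` is a partition or Z-order column, Delta can skip files using the per-file min/max statistics in the transaction log; otherwise every file may be scanned. A sketch for inspecting this yourself (table and column names are assumptions):

```python
# Inspect the physical plan for pruning; the exact "files read" count
# appears in the Spark UI / query metrics after execution.
df = spark.sql("SELECT id, name FROM my_table WHERE id = 1000")
df.explain(True)  # look for PartitionFilters / PushedFilters in the FileScan node
```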
@sreeragnambiar4579 • 2 years ago
How do I delete partition folders/directories (which contain Parquet files)? I could remove the reference to the particular date partition from the Delta log, but the original date-partition folders are not getting deleted. Tried VACUUM as well.
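One likely cause: VACUUM only deletes data files older than the retention window (default 7 days). A minimal sketch of forcing immediate cleanup, assuming a date-partitioned table at a hypothetical S3 path; note that on S3 there are no real directories, so empty "folders" may still show up in listings:

```python
from delta.tables import DeltaTable

dt = DeltaTable.forPath(spark, "s3://my-bucket/my-table")  # hypothetical path
dt.delete("event_date = '2023-01-01'")                     # removes the rows logically

# Disable the safety check so files inside the retention window can be removed.
# Use with care: time travel to older versions stops working for vacuumed files.
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")
dt.vacuum(0)  # retention in hours; 0 deletes all unreferenced data files now
```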
@TheDataArchitect • 7 months ago
What about using partitioning and optimization with Z-ordering together, where Z-order uses multiple columns?
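The two compose: partition by a coarse column, then Z-order the data within partitions by finer-grained columns. A sketch with assumed table and column names (note that Z-order columns cannot themselves be partition columns):

```python
# Compact and co-locate data within the selected partitions,
# clustering by multiple Z-order columns.
spark.sql("""
  OPTIMIZE events
  WHERE event_date >= '2024-01-01'
  ZORDER BY (user_id, device_id)
""")
```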
@186roy • 2 years ago
A small correction: compacting (OPTIMIZE) is idempotent, but Z-ordering is NOT idempotent. Running plain OPTIMIZE a second time is a no-op, whereas OPTIMIZE ... ZORDER BY attempts to re-cluster the data each time it is run.
@SpiritOfIndiaaa • 1 year ago
Thanks Bhawna. I have a use case: I have two files, i.e. S3 "delta" files. I need to read the first file and delete those records from the second file, i.e. without changing the file path. Is it possible, and if so, how can it be done?
@selvavinayaganmuthukumaran1332 • 7 months ago
@SpiritOfIndiaaa When dealing with Delta files in an S3 bucket, it's important to note that directly modifying the contents of a file in place (i.e., without changing the file path) is not possible. However, there are some alternative approaches:
1. Local modification and upload: download the second Delta file locally, apply the necessary changes (deleting records), and upload the modified file back to the same S3 location, overwriting the original. This keeps the file path unchanged.
2. Upsert using Delta Lake (Databricks): if you have access to Databricks or a similar platform, you can use Delta Lake's MERGE operation to upsert data from one Delta table into another. This lets you insert, update, or delete records in a target Delta table based on the contents of a source table or DataFrame (see the sketch below).
3. Without Databricks: modifying Delta files directly in S3 without changing the file path is challenging; you would need to follow the first approach (local modification) and then upload the modified file back to S3.
Remember that modifying files in place (especially in distributed storage systems like S3) is complex because of transactional guarantees and the distributed nature of the data. Always ensure data consistency and back up your files before making any changes. 😊
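A minimal sketch of the MERGE-based delete from approach 2, assuming both datasets are Delta tables sharing a key column `id` (paths and the column name are hypothetical):

```python
from delta.tables import DeltaTable

# Records present in the first table are deleted from the second table;
# the second table's path never changes, only its transaction log advances.
source = spark.read.format("delta").load("s3://my-bucket/first-table")  # hypothetical
target = DeltaTable.forPath(spark, "s3://my-bucket/second-table")       # hypothetical

(target.alias("t")
    .merge(source.alias("s"), "t.id = s.id")  # assumed join key
    .whenMatchedDelete()
    .execute())
```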
@SpiritOfIndiaaa • 7 months ago
@selvavinayaganmuthukumaran1332 Thanks a lot for your detailed explanation!
@vipinkumarjha5587 • 3 years ago
Hi Bhawna, thanks for the important video. Can you please create a video on how to read streaming data incrementally from a Delta Lake table?
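For reference while that video is pending, a minimal sketch of an incremental streaming read from a Delta table (source path, sink path, and checkpoint location are hypothetical):

```python
# A Delta table used as a streaming source: each micro-batch picks up
# only the files committed since the last processed version.
stream = (spark.readStream
    .format("delta")
    .option("maxFilesPerTrigger", 100)   # cap files processed per micro-batch
    .load("/mnt/datalake/events"))       # hypothetical source table path

query = (stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/datalake/_chk/events_sink")  # tracks progress
    .start("/mnt/datalake/events_sink"))  # hypothetical sink table path
```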