The Parquet Format and Performance Optimization Opportunities Boudewijn Braams (Databricks)

150,547 views

Databricks

The Parquet format is one of the most widely used columnar storage formats in the Spark ecosystem. Given that I/O is expensive and that the storage layer is the entry point for any query execution, understanding the intricacies of your storage format is important for optimizing your workloads. As an introduction, we will provide context around the format, covering the basics of structured data formats and the underlying physical data storage model alternatives (row-wise, columnar and hybrid). Given this context, we will dive deeper into specifics of the Parquet format: representation on disk, physical data organization (row-groups, column-chunks and pages) and encoding schemes. Now equipped with sufficient background knowledge, we will discuss several performance optimization opportunities with respect to the format: dictionary encoding, page compression, predicate pushdown (min/max skipping), dictionary filtering and partitioning schemes. We will learn how to combat the evil that is 'many small files', and will discuss the open-source Delta Lake format in relation to this and Parquet in general. This talk serves both as an approachable refresher on columnar storage as well as a guide on how to leverage the Parquet format for speeding up analytical workloads in Spark using tangible tips and tricks.
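The min/max skipping mentioned in the abstract can be sketched in a few lines of plain Python (an illustrative toy, not Parquet's actual reader API; all names and data are made up). Each row group's footer statistics are consulted before any page is read, so groups whose [min, max] range cannot satisfy the predicate are skipped entirely:

```python
# Toy model of predicate pushdown via row-group min/max statistics.
row_groups = [
    {"min": 0,   "max": 99,  "values": list(range(0, 100))},
    {"min": 100, "max": 199, "values": list(range(100, 200))},
    {"min": 200, "max": 299, "values": list(range(200, 300))},
]

def scan_equals(groups, target):
    """Read only the row groups whose [min, max] range can contain target."""
    hits, groups_read = [], 0
    for g in groups:
        if g["min"] <= target <= g["max"]:  # footer stats consulted first
            groups_read += 1
            hits.extend(v for v in g["values"] if v == target)
    return hits, groups_read

hits, groups_read = scan_equals(row_groups, 150)
assert hits == [150]
assert groups_read == 1  # two of the three row groups were skipped entirely
```

The same idea explains why sorting or clustering the data on a frequently filtered column helps: it narrows each row group's min/max range, so more groups can be skipped.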
About: Databricks provides a unified data analytics platform, powered by Apache Spark™, that accelerates innovation by unifying data science, engineering and business.
Read more here: databricks.com/product/unifie...
Connect with us:
Website: databricks.com
Facebook: / databricksinc
Twitter: / databricks
LinkedIn: / databricks
Instagram: / databricksinc
Get insights on how to launch a successful lakehouse architecture in Rise of the Data Lakehouse by Bill Inmon, the father of the data warehouse. Download the ebook: dbricks.co/3IMxugQ

Comments: 56
@robinjamwal1 · 3 years ago
Great talk, great teaching, excellent tutor! One of the best presentations I have ever viewed and listened to.
@SunilBuge · 3 years ago
Great overview of how to address performance issues with storage layer design 👍
@manishsingh455 · 3 years ago
This content explained most of the topic, and it is really amazing.
@prabhumaganur · 4 years ago
The best representation of the Parquet file structure!! Simply awesome!!
@mallikarjunyadav7839 · 2 years ago
Awesome video with great content and explanation. Very, very useful.
@kehaarable · 3 years ago
Awesome video - not too many extraneous or labored points. Thank you!
@flaviofukabori2149 · 3 years ago
Amazing. All concepts really well explained.
@AM-iz8gk · 1 year ago
Impressive presentation, well-structured explanations.
@raghudesparado · 3 years ago
Great presentation. Thank you
@BuvanAlmighty · 3 years ago
Best presentation on Parquet.
@Pavi950 · 4 years ago
Thanks for the content!
@YinghuaShen-kw5ys · 2 months ago
Great, this taught me a lot more about Parquet. Thanks for the presentation!
@raviiit6415 · 1 year ago
Great talk with simple explanations.
@lhok · 10 months ago
Best Parquet file presentation I have watched.
@pavanreddy3321 · 3 years ago
Thanks for the great explanation
@higiniofuentes2551 · 1 year ago
Thank you for this very useful video!
@ravann123 · 2 years ago
Very helpful, thank you 😊
@vt1454 · 1 year ago
Great presentation 👏 👌
@hatemsiyala4944 · 1 year ago
Great talk. Thank you!
@ashokkumarsivasankaran5428 · 1 year ago
Great! Well explained!
@aratithakare8016 · 2 years ago
Very good video. Excellent.
@tadastadux · 3 years ago
@databricks - what is the best practice for using (or not using) nested columns? For example, I have a customer struct with Age, Gender, Name, etc. attributes. Is it better to keep it as a struct or separate it into its own columns?
@Azureandfabricmastery · 3 years ago
Thank you!
@user-zz9lk2op1f · 1 year ago
Just excellent 👍
@chrisjfox8715 · 2 years ago
I haven't watched this yet, but for the sake of prioritizing when I do: how well does this topic apply to platforms and systems other than Spark?
@payalbhatia6927 · 6 days ago
Superb
@AmitParopkari · 4 months ago
Finally understood the Parquet format, thanks. So I have one small doubt: does it mean that the footer metadata is nothing but schema details, like the underlying table details - a way to record the table name, column names, etc.? I'll also dig in from my side, but just meanwhile...
@tasak_5542 · 10 months ago
Great talk
@dayserivera · 1 year ago
Great!
@higiniofuentes2551 · 1 year ago
It seems the time and I/O needed to sort the data first, before it can be used, is not considered?
@maxcoteclearning · 2 years ago
Thank you :)
@spacedustpi · 4 years ago
Thanks for posting this presentation. Could you clarify something? How does performance improve when you compress pages only to decompress them again to read them? I'm sure I'm not understanding something, but not sure what.
@rescuemay · 4 years ago
He mentions around @19:30 that you only see a benefit when the I/O savings outweigh the cost of decompressing.
@SQLwithManoj · 4 years ago
I/O is more expensive than the time taken by the CPU to decompress the data, which is why a column store is faster than a row store.
@rajeshgupta4466 · 4 years ago
Snappy provides good compression with low CPU overhead during compression/decompression. The real win in performance comes from the reduced I/O cost when reading a column chunk's pages. The overall cost (CPU + I/O) is generally lower for reading Snappy-compressed data than uncompressed.
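The trade-off described in these replies is easy to demonstrate with any general-purpose codec (a stdlib-only sketch with made-up data; zlib stands in for the Snappy/ZSTD codecs Parquet typically uses per page):

```python
import zlib

# A repetitive (low-entropy) column page shrinks dramatically on disk, so far
# fewer bytes cross the slow I/O path; inflating them back costs comparatively
# cheap CPU. That is why page compression usually wins overall.
page = ("NL,US,NL,NL,DE," * 10_000).encode()  # a very repetitive column page
compressed = zlib.compress(page)

assert len(compressed) < len(page) // 10    # an order of magnitude fewer bytes to read
assert zlib.decompress(compressed) == page  # lossless round trip
```

When the column is high-entropy (already random-looking), the ratio collapses and the decompression CPU can outweigh the I/O saved, which is the caveat from the @19:30 remark above.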
@salookie8000 · 9 months ago
Interesting how Parquet (columnar, analytics-focused) data can be optimized using dictionary-based compression and partitioning.
@rum81 · 3 years ago
Anyone who says Parquet is a purely columnar format has only bookish knowledge.
@immaculatesethu · 3 years ago
It's a mixture of both horizontal and vertical partitioning and combines the best of both worlds.
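The hybrid layout this reply describes can be sketched in a few lines (a toy illustration of the idea, not Parquet's actual structures): rows are first cut horizontally into row groups, and each row group is then stored column by column.

```python
# Toy model of the hybrid (row-group + columnar) layout.
rows = [("a", 1), ("b", 2), ("c", 3), ("d", 4)]
row_group_size = 2

# Horizontal cut: split the rows into fixed-size row groups.
row_groups = [rows[i:i + row_group_size] for i in range(0, len(rows), row_group_size)]

# Vertical cut: within each row group, pivot to one tuple per column.
hybrid = [tuple(zip(*rg)) for rg in row_groups]

assert hybrid == [(("a", "b"), (1, 2)), (("c", "d"), (3, 4))]
```

Because each row group holds complete rows, a reader can still reconstruct records cheaply, while within a group it can read only the columns a query needs.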
@jeremygiaco · 2 years ago
I like the way it compresses the data into dictionaries per file. Reminds me a bit of an EAV database stored as a file.
@jeremygiaco · 3 years ago
How is storing JSON/XML (not Parquet) more efficient than CSV? You literally store the "column names" in each "row" in XML/JSON (at least when stored as a text file). Also, there is definitely the notion of a "record" in CSV.
@happywednesday6741 · 2 years ago
Example 1: if you want to add new properties to records over time, you only need to add them to the new records (no need to backfill blanks for legacy records, for example). So think scale, and change at scale.
@happywednesday6741 · 2 years ago
Example 2: you can leverage hash/dictionary data structures in programming; these can find records with much better scaling - look up hash functions and big-O. Again, think scaling of data access: hashing vs., at best, search trees.
@happywednesday6741 · 2 years ago
Example 3: you can more easily partition records via a collections paradigm. Again, storage and access at scale.
@happywednesday6741 · 2 years ago
Example 4: you can more easily access and operate on XML/JSON-like data from applications via APIs. Systems and interoperability at scale.
@jeremygiaco · 2 years ago
@happywednesday6741 I asked how it was more efficient to store it. If I have 500 million "entries" in a text file, I'm definitely storing them in a delimited format or Parquet to take advantage of said dictionaries, not JSON/XML. You can parse either into objects directly from the file, or bulk-insert into a DB table. The JSON/XML formats would be 10x slower to parse/read based on sheer disk/network I/O alone, if we're talking about efficiency in processing. No one is going to load CSV into memory and start scanning row by row for data; it's going to get converted into objects or a DB anyway. My concern is when people store JSON-formatted files to disk to be read into objects later. What does that buy you?
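For the flat, uniform case this thread argues about, the storage overhead is easy to measure (a stdlib-only sketch with made-up records; the schema-evolution and nesting advantages of JSON from the replies above are a separate axis):

```python
import csv
import io
import json

# The same 1,000 flat, uniform records as JSON Lines vs CSV. JSON repeats
# every field name in every record; CSV writes the header exactly once.
rows = [{"id": i, "name": f"user{i}", "country": "NL"} for i in range(1000)]

json_bytes = "\n".join(json.dumps(r) for r in rows).encode()

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "country"])
writer.writeheader()
writer.writerows(rows)
csv_bytes = buf.getvalue().encode()

assert len(csv_bytes) < len(json_bytes)  # CSV is smaller for flat, uniform rows
```

A columnar format with dictionary encoding shrinks this further still, since the repeated "NL" values collapse to one dictionary entry.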
@thevijayraj34 · 2 years ago
The bucketing explanation was not great. The rest was fantabulous.
@lax976 · 7 months ago
Worst lecture ever