Finally, a video on the Databricks Hive metastore that is well explained. Thanks, Bryan!
@andrewpotts9948 8 months ago
That's the right level of detail that I needed. Well explained. Thank you.
@BryanCafferky 8 months ago
You're welcome!
@bungaloebill4433 3 months ago
Great video! I'm subscribing for the Red Green reference alone!
@JLRocco43 a year ago
I was just pondering doing a deep dive into this today and reading a lot of docs, and then you put out the video 😂 Awesome work, Bryan!
@jace_viz a month ago
Very clear explanation. Thanks @Bryan!
@soumyavema6515 a year ago
Pretty clear... very much needed before exploring Unity Catalog. Waiting for the next one!
@TheAliakbarazad a month ago
Thank you so much; despite your great knowledge of the subject, you take the time to explain it so that even I can understand!! 😍
@BryanCafferky a month ago
You're welcome. Glad it helps.
@ambarishdashora5440 a month ago
This is what I was really looking for. Thank you very much for providing such an amazing explanation.
@BryanCafferky a month ago
You're welcome. Glad to help.
@daminimohite3400 6 months ago
Super clear explanation; loved the analogy used in the beginning.
@BryanCafferky 6 months ago
Thank you!
@kvin007 a year ago
Love the direct and clear content! Keep it going!
@martalopezjurado a year ago
I love this video!! Thanks a lot. Waiting for the Unity Catalog video!
@BryanCafferky a year ago
YW.
@tiwlan 3 months ago
Thank you very much for the video and the channel. I'm from Brazil and your work helps me a lot!
@BryanCafferky 3 months ago
So glad my videos are helping you!
@awadelrahman 6 months ago
Thanks A LOT! One question: at 17:05, did you mean "Delta files" instead of "Delta tables" when you said "Delta tables are rather interesting..."?
@BryanCafferky 6 months ago
Just that a Delta file is really a Delta table that has not been cataloged in the Hive metastore or the Unity Catalog. But just by pointing to the Delta file path, you can use it as a table.
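For illustration, a minimal PySpark sketch of that idea; the storage path is hypothetical and `spark` is the session a Databricks notebook provides:

```python
# Read an uncataloged Delta file directly by path (hypothetical location);
# the directory of Parquet files plus its _delta_log is treated as a table.
df = spark.read.format("delta").load("/mnt/datalake/adventureworks/dimgeography")

df.printSchema()   # schema comes from the Delta log, not from the metastore
df.show(5)
```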
@sumak151 3 months ago
That's so good; I enjoyed the video thoroughly. I am just starting to understand more about Azure Databricks.
@mehulkhare8278 11 months ago
Thanks for making it simple to understand.
@BryanCafferky 11 months ago
You're welcome! Glad it helped.
@danhai7276 a year ago
Great video, waiting for the next one on Unity Catalog. 🙌
@BryanCafferky a year ago
Yeah, there's a lot to Unity Catalog. Also doing one on the Databricks AI Assistant, which is very cool.
@renegade_of_funk a year ago
You’re doing the Lord’s work. 👌
@sujitunim a year ago
Thanks, Bryan, for this amazing session.
@BryanCafferky a year ago
YW
@YiminWei-z6w 7 months ago
Great explanation. Thanks!
@spursyou230 a month ago
Thanks for the video, but I'm a bit confused: when you do saveAsTable() and then drop the table, will the physical data be deleted from the original source? For example, if I read data from AWS S3 and saveAsTable it, but then drop the table, will the data in S3 also be deleted?
@BryanCafferky a month ago
When you create a schema on top of an existing file (schema on read), it's really a read-only pseudo table. You can also create tables that are unmanaged, which means Spark will not delete the underlying files when you drop the table. If the table is defined as a managed table, dropping the table will also drop the underlying data. You need to make sure you know whether you have a managed or unmanaged table to avoid bad surprises.
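To make that concrete, a minimal PySpark sketch under assumed names; the S3 bucket, paths, and table names are made up for illustration:

```python
# Source data read from S3 (hypothetical bucket/path).
df = spark.read.option("header", "true").csv("s3://my-bucket/raw/orders/")

# Managed table: Spark copies the data into its own warehouse location.
# A later DROP TABLE orders_managed removes both the catalog entry and that copy;
# the original files under s3://my-bucket/raw/orders/ are not touched.
df.write.format("delta").saveAsTable("orders_managed")

# Unmanaged (external) table: the data lives at a path you specify and control.
# DROP TABLE orders_external removes only the catalog entry; the files remain.
df.write.format("delta").option("path", "s3://my-bucket/curated/orders/").saveAsTable("orders_external")
```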
@rabeMa a year ago
Deadly clear, awesome 👌👌👌💯💯💯
@joshuawagner5350 7 months ago
Exceptional explanation. Thank you.
@BryanCafferky 7 months ago
Glad it was helpful.
@devigugan 5 months ago
Excellent narrative ❤❤❤
@GhernieM 7 months ago
Hey Bryan, do you plan to create something about Unity Catalog?
@pal3201 a year ago
Can you tell us when you are releasing your take on Unity Catalog? Looking forward to it.
@BryanCafferky a year ago
So many things to cover these days. Hopefully, soon. Thanks!
@Kete-Dude 6 months ago
I'm a bit confused about unmanaged and managed. In the step `create delta table that stored in hive`, the type of dimgeography is Managed, but it could still be dropped without getting rid of the physical files, like an Unmanaged (External) table, so what is the difference?
@BryanCafferky 6 months ago
Yes, it is confusing. Think of a managed table as being like a SQL Server table, if that helps. SQL Server tables are created and dropped, with all their data, via a DROP TABLE statement. Spark supports similar functionality for managed tables, in which the table schema and underlying data are created at the same time. This is to mimic SQL database functionality. Unmanaged tables are for when you already have an external file and you create a schema defining the column names and types describing the table, so Spark can let you run SQL queries against it. Since the file pre-exists and is maintained separately from the Hive metastore or Unity Catalog, you don't want the physical file deleted when you issue a SQL DROP TABLE statement. Bottom line: if you want the table to be treated just like an RDBMS would treat it, i.e., catalog entry and physical data handled via SQL, you want managed. If you want to use SQL queries against a pre-existing data file, you want to define it as unmanaged. Make sense?
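A quick way to check which kind of table you have before issuing a DROP; a small sketch (dimgeography is the table name from the video's example):

```python
# DESCRIBE TABLE EXTENDED includes a "Type" row: MANAGED or EXTERNAL.
spark.sql("DESCRIBE TABLE EXTENDED dimgeography").show(50, truncate=False)

# MANAGED  -> DROP TABLE also deletes the underlying data files.
# EXTERNAL -> DROP TABLE removes only the catalog entry; the files stay put.
```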
@jbab9618 11 months ago
Hi @BryanCafferky, if the CSV file's metadata changes, will the Hive metastore automatically update the metadata, or are there steps we need to take to refresh it?
@BryanCafferky 11 months ago
A Hive table definition over a CSV file is read-only, and to get the metadata reloaded, I believe you would need to drop and re-create the table.
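A hedged sketch of that drop-and-recreate pattern; the table name, options, and path are hypothetical:

```python
# Re-register the CSV-backed table so the metastore picks up the new layout.
spark.sql("DROP TABLE IF EXISTS customers_csv")
spark.sql("""
    CREATE TABLE customers_csv
    USING CSV
    OPTIONS (header 'true', inferSchema 'true')
    LOCATION 's3://my-bucket/raw/customers/'
""")

# If only the data changed (same columns), REFRESH TABLE may be enough to
# invalidate Spark's cached file listing and metadata for the table.
spark.sql("REFRESH TABLE customers_csv")
```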
@nargesrokni6348 a year ago
Very good explanation, thank you very much, man!
@BryanCafferky a year ago
YW
@etianemarcelino5706 a year ago
Great content... like always!
@malaka123456 2 months ago
Great video!
@BryanCafferky 2 months ago
Thanks!
@ManishSharma-fi2vr 2 months ago
Thanks Bryan!!
@BryanCafferky 2 months ago
You're welcome!
@benjaminwootton a year ago
Good video. Though I understand the Hive Metastore, it confuses me why everything in data has a dependency on it. For instance, Iceberg seems to need it for everything even though it's supposed to be a self-describing table format.
@BryanCafferky a year ago
Technically, you don't need the Hive metastore to read Delta tables, but it provides a lookup to where the table is physically stored. Otherwise, you need to provide the full path to the storage location. It also stores schemas for files that don't have built-in schemas, like CSV and text files.
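A small sketch of that difference; the catalog, table, and storage path names are hypothetical:

```python
# With a metastore/catalog entry, the name alone is enough; the catalog
# resolves the physical storage location for you.
spark.sql("SELECT COUNT(*) FROM adventureworks.dimgeography").show()

# Without a catalog entry, you supply the full path to the Delta files yourself.
spark.sql(
    "SELECT COUNT(*) FROM delta.`abfss://lake@mystorageacct.dfs.core.windows.net/adventureworks/dimgeography`"
).show()
```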