Want to build a reliable, modern data architecture without the mess? Here’s a free checklist to help you → bit.ly/kds-checklist
@dertrickwinn7982 • 7 months ago
Thank you for posting this video. It is a great video.
@vrta • 2 years ago
I've read the whole "The data warehouse toolkit" book by Kimball twice and this video explains the most important part of the 500 page book as clear as possible. Well done!
@KahanDataSolutions • 2 years ago
Love to hear that! Thanks for your feedback
@jhonsen9842 • 8 months ago
So if someone watches the video, it's not necessary to read that 500-page book? That's quite an optimistic statement.
@blablabla-c5o • 25 days ago
@@jhonsen9842 He has read it twice. You?
@AkhmadMizkat • 11 months ago
This is a very clear explanation with a precise example of the star schema. Thanks a lot for the good video.
@apibarra • 2 years ago
I would agree with joshi. It would be cool, in a future video, to see how to take data from the conformed layer and then create a star schema using dbt.
@KahanDataSolutions • 2 years ago
Noted! Thanks for the feedback
@alecryan8220 • a year ago
What's a conformed layer?
@JimRohn-u8c • 2 years ago
Please make a video like this on conformed dimensions! Also, how do you handle the lack of primary key and foreign key constraints in Snowflake?
@ruslandr • a year ago
Thanks! Great and simple overview for someone who is pretty new to this. What I would recommend is to explain pitfalls such as: how to load dimensions so their values stay unique, why you have to care about uniqueness in dimensions, and what happens if you left join to a dimension that has duplicate records.
@ruslandr • a year ago
This can help beginners avoid mistakes at an early stage.
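[Editor's note] The fan-out pitfall described above is easy to demonstrate. The sketch below uses SQLite as a stand-in warehouse, and the table and column names (`fct_orders`, `dim_customers`) are invented for illustration: when a dimension key is duplicated, every matching fact row is emitted once per duplicate, silently inflating counts and sums.

```python
import sqlite3

# Hypothetical tables illustrating the duplicate-dimension fan-out problem.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fct_orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO fct_orders VALUES (1, 100, 50.0), (2, 100, 25.0);

    -- dim_customers accidentally contains the key 100 twice
    CREATE TABLE dim_customers (customer_id INTEGER, customer_name TEXT);
    INSERT INTO dim_customers VALUES (100, 'Alice'), (100, 'Alice (dupe)');
""")

# Each fact row matches BOTH dimension rows, so the join doubles the fact rows
rows = con.execute("""
    SELECT COUNT(*), SUM(f.amount)
    FROM fct_orders f
    LEFT JOIN dim_customers d ON f.customer_id = d.customer_id
""").fetchone()
print(rows)  # (4, 150.0) -- 4 rows instead of 2, and the total is doubled
```

A uniqueness test on the dimension key (e.g. a `unique` test in dbt) catches this before the join ever runs.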
@mojtabapeyrovi9841 • a year ago
Thank you for the great, to-the-point presentation. It's especially good that you showed simple live code and executed it, since most tutorials just repeat the books, present the theory, and never show how to code it. It would be great, though, to cover SCD implementation in the dimension tables, and what happens to the natural keys when we have to use surrogate keys for the dimension and fact tables. In almost all real-world DWH examples there is a need to keep the history of dimensions inside the dim tables, where the OLTP primary keys will of course not be applicable.
@sievpaobun • a year ago
Brilliant video. So helpful for a basic understanding of star schema design. Keep up the great work, bro!
@KahanDataSolutions • a year ago
Glad it helped!
@sievpaobun • a year ago
@@KahanDataSolutions Hi, it would be even more helpful if you could make a tutorial video on how to add time and date dimensions in a star or snowflake schema.
@varuntirupati2566 • a year ago
Thanks for this video. I have a few questions: 1) In a star schema we don't join the dimension tables to each other, right? So why did you create a table that joins all the dimension tables, flattening them into a single table? 2) Since we create the mart tables by joining other tables, how do those tables get refreshed, given that they are not views or materialized views?
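[Editor's note] On the first question, a sketch may help: in a star schema the dimensions only ever join to the fact table, not to each other, and a flattened "one big table" mart is simply that join materialized. This is illustrative only (SQLite stand-in; `fct_sales`, `dim_customers`, `dim_products` are invented names, not from the video).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customers (customer_id INTEGER PRIMARY KEY, customer_name TEXT);
    CREATE TABLE dim_products  (product_id  INTEGER PRIMARY KEY, product_name TEXT);
    CREATE TABLE fct_sales     (sale_id INTEGER, customer_id INTEGER,
                                product_id INTEGER, amount REAL);
    INSERT INTO dim_customers VALUES (1, 'Alice');
    INSERT INTO dim_products  VALUES (10, 'Widget');
    INSERT INTO fct_sales     VALUES (1, 1, 10, 9.99);
""")

# The dimensions never join to each other; each joins to the fact table.
# Materializing this SELECT is what produces a flat, single-table mart.
row = con.execute("""
    SELECT s.sale_id, c.customer_name, p.product_name, s.amount
    FROM fct_sales s
    JOIN dim_customers c ON s.customer_id = c.customer_id
    JOIN dim_products  p ON s.product_id  = p.product_id
""").fetchone()
print(row)  # (1, 'Alice', 'Widget', 9.99)
```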
@saheenahzan7825 • a year ago
Wow, very beautifully explained. Loved it ❤ Makes me want to explore more of your content. Thank you.
@jpzhang8290 • a year ago
Good in-depth data engineering video for professionals!
@lucashoww • a year ago
Such a cool data concept man! Thanks for introducing me to it. Cheers!
@nlopedebarrios • a year ago
Hi, love your channel, I'm learning a lot. What are your thoughts on star schema vs. One Big Table (OBT)? Would you make a video comparing the pros and cons of each?
@wingnut29 • 2 years ago
Great video. One recommendation: change the font color of the commented-out text; it is nearly impossible to read.
@KahanDataSolutions • 2 years ago
Ah good catch. Sorry about that but will take note for next time!
@AdamSmith-lg2vn • 2 years ago
Typo on the chapter header #7. Great video, though, on a hard-to-teach and under-covered topic. I strongly agree about the usability case for star schemas even in a modern stack. It creates a highly ordered, easy-to-reason-about junction point between the chaos of sources, ingestion, data lakes, etc. and the complexity of ~infinite data consumer use cases. The payoff in downstream development efficiency is huge.
@KahanDataSolutions • 2 years ago
Ah, dang. Good catch on the typo. Unfortunately I can't edit that part after posting. Really appreciate your feedback on the video too. As you can probably tell, I'm on the same page as you and think it's still a great strategy.
@lingerhu4339 • a year ago
Really great video!!!! Thank you!!! Hope to see new videos covering SCDs and indexing!!!!
@KahanDataSolutions • a year ago
Thanks for watching!
@mehdinazari8896 • 2 years ago
Great tutorial, thanks for putting this together.
@KahanDataSolutions • 2 years ago
You're very welcome! Hope it was helpful.
@reviewyup592 • 10 months ago
Very succinct and practical.
@TRZMac • 2 months ago
Great video! Thank you!
@marcosoliveira8731 • a year ago
Such a good explanation!
@freshjulian1 • a year ago
Hey, thanks for the video! It gives a good overview of how to model a star schema. But how could new staging data be ingested into the tables of a star schema? For example, an easy but inefficient approach would be to recreate the tables on a daily basis. To be more efficient, though, you would need a process to ingest only new data into the tables. Do you have an idea how that could be done in modern warehouses like Snowflake? Or some resources on that? I think it would be helpful to add some technical columns to the raw data layer, like a load timestamp, to track the records that need to be ingested. Furthermore, valid_to and valid_from timestamps could be added to dimension tables where changes can occur (e.g. a customer's changed address).
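[Editor's note] The load-timestamp idea in this comment is the classic "high-water mark" incremental pattern (it's also what dbt's incremental materialization generates under the hood). A minimal sketch, using SQLite as a stand-in and invented names (`raw_orders`, `fct_orders`, `_loaded_at`): on each run, only rows newer than the latest timestamp already in the target are appended.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, amount REAL, _loaded_at TEXT);
    CREATE TABLE fct_orders (order_id INTEGER, amount REAL, _loaded_at TEXT);
    INSERT INTO raw_orders VALUES
        (1, 10.0, '2024-01-01'),
        (2, 20.0, '2024-01-02'),
        (3, 30.0, '2024-01-03');
    -- order 1 was already loaded by a previous run
    INSERT INTO fct_orders VALUES (1, 10.0, '2024-01-01');
""")

def incremental_load(con):
    # High-water mark: the newest _loaded_at already present in the target
    (watermark,) = con.execute("SELECT MAX(_loaded_at) FROM fct_orders").fetchone()
    con.execute(
        "INSERT INTO fct_orders SELECT * FROM raw_orders WHERE _loaded_at > ?",
        (watermark,),
    )

incremental_load(con)
count = con.execute("SELECT COUNT(*) FROM fct_orders").fetchone()[0]
print(count)  # 3 -- only orders 2 and 3 were appended
```

Re-running `incremental_load` is harmless: the watermark has advanced, so no rows qualify and nothing is duplicated.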
@SoumitraMehrotra-ri4zb • a year ago
Thanks for the video. Quick question: this might be subjective and vary from business problem to business problem, but is it good practice to create fact tables before dimensions? Isn't it necessary to understand what dimensions exist, and what their keys are, before creating a fact table?
@vinaychary7815 • 11 months ago
First create the dimension tables, then go for the fact table.
@sakeeta6498 • 9 months ago
Was planning to comment the same: you should create the dimension tables first, since the fact table points to them using foreign keys.
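[Editor's note] The dims-first ordering becomes concrete in an engine that enforces foreign keys (unlike Snowflake, which accepts but does not enforce them). In this SQLite sketch with invented names, inserting a fact row before its dimension row exists is rejected; populate the dimension first and the same insert succeeds.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.executescript("""
    CREATE TABLE dim_customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fct_orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES dim_customers(customer_id),
        amount      REAL
    );
""")

# Inserting a fact row before its dimension row exists fails...
try:
    con.execute("INSERT INTO fct_orders VALUES (1, 100, 9.99)")
    ok = True
except sqlite3.IntegrityError:
    ok = False
print(ok)  # False

# ...but succeeds once the dimension is populated first.
con.execute("INSERT INTO dim_customers VALUES (100, 'Alice')")
con.execute("INSERT INTO fct_orders VALUES (1, 100, 9.99)")
n = con.execute("SELECT COUNT(*) FROM fct_orders").fetchone()[0]  # 1
```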
@kalyanben10 • 2 years ago
Not related to the topic, but @Kahan, what do you think about Meltano? Would you add it to your modern data stack?
@sadfasde3108 • 2 years ago
This is fantastic - keep it up.
@KahanDataSolutions • 2 years ago
Thanks, will do!
@thesaintp100 • a year ago
@Kahan What tool are you using to show JSON along with other sources while querying? Thanks much.
@jeffrey6124 • a year ago
Great videos, as always! Is there a way, though, to edit or re-create some of your videos, including this one, so you could zoom in and out on parts of it, especially when highlighting things? 🙂
@KahanDataSolutions • a year ago
Thanks Jeff! Regarding the editing - unfortunately there's not much that can be done after it's initially uploaded (other than minor trimming).
@HvujES • a year ago
The question is, would you even need this in a DWH such as GCP BigQuery, since it already uses the Dremel architecture?
@karangupta_DE • a year ago
Hi, could you kindly make a video on how to load the fact table incrementally? What if we have an OLTP system and the dimension tables get big really quickly?
@GT-bf7io • 11 months ago
Hi, I loved your video. Is there any way to get a sample database with raw tables, just to try to put it together on our own and practice?
@vinaychary7815 • 11 months ago
Take one dataset and create your own dimension and fact tables.
@MohamedMontaser91 • 9 months ago
Great video. I'm doing almost the same thing, but I'm using views for marts instead of tables, because views update automatically. I would rather use tables, but a table created from a SELECT statement doesn't update automatically. Is there a way to do it without using triggers?
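[Editor's note] One trigger-free answer is to let a scheduler (cron, Airflow, dbt, etc.) rebuild the table from its SELECT on every run, i.e. the "full refresh" pattern. A sketch with SQLite and invented names (`stg_orders`, `mart_orders`): each run drops and re-materializes the mart, so it reflects whatever is in staging at that moment.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 10.0);
""")

def refresh_mart(con):
    # Rebuild the mart as a fresh materialization of the defining query.
    # A scheduler would call this on whatever cadence freshness requires.
    con.executescript("""
        DROP TABLE IF EXISTS mart_orders;
        CREATE TABLE mart_orders AS
            SELECT order_id, amount FROM stg_orders;
    """)

refresh_mart(con)
con.execute("INSERT INTO stg_orders VALUES (2, 20.0)")
refresh_mart(con)  # the next scheduled run picks up the new row
n = con.execute("SELECT COUNT(*) FROM mart_orders").fetchone()[0]
print(n)  # 2
```

The trade-off vs. views is freshness lag and rebuild cost, which is why large tables usually graduate to the incremental pattern instead.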
@skontzias • 5 months ago
Is it necessary to join orders and customers into the transaction fact table when the transaction table already has customer_id and order_id, assuming the source tables have referential integrity constraints?
@MohammadAnasKhan-e4v • a year ago
Very crisp!
@Gatitdone • a year ago
Excellent
@mitchconnor7066 • 2 years ago
Great tutorial. I have trouble understanding the difference between a data warehouse and a database. The thing you made in this video is a DWH, but what would be the DB in this example?
@KahanDataSolutions • 2 years ago
Great question! This is something that took me a while to understand as well. The way I think about it is that a data warehouse is just a way to describe HOW you are using a database (or multiple databases). For example, in this scenario, the two databases I'm using are "RAW" and "ANALYTICS_100", and I'm creating new tables within the "ANALYTICS_100" database in a way that resembles what we would call a data warehouse design. But if you strip away the term "data warehouse", it's still just a database with tables (or views). Using a star schema (facts & dimensions) is just one way to create your tables intentionally so that they operate in a way we all agree to call a "data warehouse".

It's almost like saying you can build a physical "house" or an "office building" or an "apartment". They have different purposes and terminology, but underneath it all they share the same components (windows, floors, roof, etc.), just designed in different ways. Maybe not the best example, but hopefully that helps!
@mitchconnor7066 • 2 years ago
@@KahanDataSolutions After numerous confusing articles on the topic of "DB vs. DWH", I think I finally got it: a DWH is just a bunch of tables within a DB that are designed in a specific way that supports analytics. Your example in this video and your comment made it crystal clear. Thank you!
@KahanDataSolutions • 2 years ago
@@mitchconnor7066 You got it!
@happyheart9431 • a year ago
Million thanks
@SumaK-j2q • a year ago
@Kahan Data Solutions, I wish you had covered the concept of surrogate keys with SCD Type 2. In this video you conveniently skipped that and made it look like a simple task, by joining multiple entities, which Ralph Kimball strongly advocates avoiding. I would really like to see your approach to some of the most difficult cases, e.g. when there are many-to-many relationships in the real world.
@MManv84 • a year ago
Agreed. I was shocked to see he was apparently (?) using natural keys from the raw source as the dimensional keys, instead of surrogates. This is a pretty basic no-no, and this model will break once it encounters common complications like "late arriving" dimensional data, the need to combine data from multiple sources (either parallel systems, or migrations across source systems over time), as well as the SCDs you bring up. As I type, I see he does mention surrogate keys at the very end, but only as an *alternative* to the natural keys of the source system, not as standard practice. So I guess he would advocate using natural keys as dimension/fact foreign keys in some situations, then switch to surrogate keys only when there are dimensions without natural keys he likes, or (as you point out) as soon as he needs to move beyond a Type 1 SCD for something like a customer or employee? Yuck. Just use surrogate keys consistently everywhere, as Kimball strongly advocates.
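[Editor's note] For readers following this thread: a common way to generate surrogate keys in the modern stack is to hash the natural key columns, similar in spirit to the `generate_surrogate_key` macro in dbt_utils. The sketch below is an assumption-laden illustration (the `"crm"` source prefix and column values are invented): the same natural key always yields the same opaque key, and for SCD Type 2 the effective-from date is folded in so each historical version gets its own key.

```python
import hashlib

def surrogate_key(*natural_key_parts) -> str:
    """Deterministic surrogate key from one or more natural key columns.

    Nulls become empty strings and parts are joined with a delimiter,
    so ('a', None) and ('a',) cannot collide by accident."""
    joined = "||".join("" if p is None else str(p) for p in natural_key_parts)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

# Same natural key -> same surrogate key, across loads and across sources
k1 = surrogate_key("crm", "customer", 42)
k2 = surrogate_key("crm", "customer", 42)
assert k1 == k2

# SCD Type 2: include valid_from, so each version of the row keys uniquely
v1 = surrogate_key("crm", "customer", 42, "2023-01-01")
v2 = surrogate_key("crm", "customer", 42, "2024-06-01")
assert v1 != v2
```

Including a source-system prefix (here the hypothetical `"crm"`) is what lets parallel systems or migrations coexist without key collisions, which is the "combine data from multiple sources" case raised above.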
@shermin2o9 • 2 years ago
Hi! I am enjoying your content as a new subscriber. I am a business analyst trying to become a data engineer, and would love to begin my own projects building warehouses, pipelines, etc. I have been researching what it would realistically cost to use Snowflake as an individual for a project (to showcase on my resume), and cannot get a clear answer. Hoping you can assist me with this. Thanks!
@prafulbs7216 • a year ago
How did you insert the JSON data into the Snowflake table? Is it a direct load from local files using a file format? I am using Apache NiFi to insert JSON data into a table from Azure Blob. (I was unsuccessful inserting the JSON object, so I converted the JSON data to a string and then parsed the JSON into another column in a dbt SQL script.) Is there a way to insert JSON object data directly into a Snowflake table with a VARIANT column?
@ostrich97 • a year ago
You are great
@olayemiolaigbe8000 • a year ago
Great content, bro! However, you haven't done justice to the subject: a data model is a representation of business objects, their corresponding attributes, the relationships among them, and other business semantics, usually represented through an entity-relationship diagram (ERD) and, sometimes, class diagrams. What you've done here isn't data modeling. It is, however, an explanation of how a data model is used/leveraged. Great resource, nonetheless.
@carltonseymour869 • a year ago
ERDs are used as a modeling tool in both OLTP and OLAP systems, but their specific application and usage differ between the two. In OLTP, ERDs are used for database design and representation of operational data structures, while in OLAP, ERDs provide a conceptual foundation for building the data model used for analytical processing and data warehousing.
@punkdigerati • 2 years ago
Crap, I thought it was how to date a model...
@KahanDataSolutions • 2 years ago
Classic
@dertrickwinn7982 • 9 months ago
Too much bass in your voice in the recordings.
@ktg8742 • 8 months ago
Bro, what does this have to do with the information? 😂
@dertrickwinn7982 • 7 months ago
@@ktg8742 I guess I should ask: can you turn the bass in your voice down a bit? It's hard to focus on the listening experience, is all.