Deliver more impact with modern data tools, without getting overwhelmed. See how in The Starter Guide for Modern Data → www.kahandatasolutions.com/guide
@AlexKashie 1 year ago
Man, came across your platform today and find it so valuable. From a Data Scientist curious to understand a little bit about ELT, pipelines, and the backend. Thank you 🙏🏽
@TA-vf8yi 2 years ago
A nice and concise video, thanks! It would be interesting to hear about some best practices for building custom data ingestion (EL) pipelines (that is, not using Airbyte/Fivetran/Stitch but writing actual Python scripts): which libraries are commonly used, how to structure the project, etc.
@KahanDataSolutions 2 years ago
Great suggestion, thanks for watching!
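For anyone curious in the meantime, here is a minimal sketch of the kind of hand-rolled EL script the question describes, assuming a REST API source and a Postgres warehouse; requests, pandas, and SQLAlchemy are common choices for this, and the endpoint, table, and connection string below are hypothetical placeholders:

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/orders"  # hypothetical source endpoint
ENGINE = create_engine("postgresql://user:pass@host/warehouse")  # hypothetical DSN


def extract() -> list[dict]:
    """Pull raw records from the source API."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def load(records: list[dict]) -> None:
    """Append the records, untransformed, to a raw landing table."""
    df = pd.DataFrame(records)
    df["_loaded_at"] = pd.Timestamp.utcnow()  # simple audit column
    df.to_sql("raw_orders", ENGINE, schema="raw", if_exists="append", index=False)


if __name__ == "__main__":
    load(extract())
```

Structuring each source as its own extract/load module, with shared helpers for the load step, is one common way to organize such a project.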
@rahulkishore3340 2 years ago
Very elegantly explained. Very concise & straight to the point. Loved the visual showing the different silos of data for Billing & CRM!
@KahanDataSolutions 2 years ago
Appreciate the comment! Thanks for watching
@alessandroceccarelli6889 2 years ago
Thank you for your high-quality videos! In our use case, we ingest a daily .zip file containing 3 .csv's related to sales, inventory and orders from different shops (20-30) and CRMs (4-5; each one with its own naming convention, dtypes, …). How would you improve the following pipeline?
- Raw zip files are uploaded to a GCP bucket
- The upload triggers a Python GCP Cloud Function that transforms the data to apply a single naming/dtype convention and to derive new columns (e.g. a timestamp by merging date + time)
- Transformed data is uploaded to MongoDB (3 separate collections for sales, inventory and orders), and the raw .csv's go to a separate GCP bucket as Parquet files (1 folder for each CRM, with PoS as subfolders)
- A Pub/Sub message posted by the function triggers a second GCP Function that loads processed data from MongoDB, applies ML models and stores results in separate collections (1 for each analysis type; e.g. forecast, anomaly detection, …)
- A Python web app directly reads the ML output data from MongoDB
Thank you so much and love your videos! 🤗
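For context, a minimal sketch of the upload-triggered Cloud Function step described above, assuming the google-cloud-storage, pandas, pyarrow, and pymongo libraries; the column map, connection string, bucket, and collection names are hypothetical placeholders:

```python
import io
import zipfile

import pandas as pd
from google.cloud import storage
from pymongo import MongoClient

# Hypothetical naming map; each CRM would need its own version.
COLUMN_MAP = {"Date": "date", "Time": "time", "Qty": "quantity"}


def handle_upload(event, context):
    """Background Cloud Function triggered by a GCS object upload."""
    client = storage.Client()
    blob = client.bucket(event["bucket"]).blob(event["name"])
    payload = blob.download_as_bytes()

    mongo = MongoClient("mongodb://...")  # hypothetical connection string
    with zipfile.ZipFile(io.BytesIO(payload)) as archive:
        for member in archive.namelist():  # e.g. sales.csv, inventory.csv, orders.csv
            df = pd.read_csv(archive.open(member))
            df = df.rename(columns=COLUMN_MAP)
            # Derive a single timestamp column from separate date + time fields
            df["timestamp"] = pd.to_datetime(df["date"] + " " + df["time"])

            collection = member.removesuffix(".csv")  # sales / inventory / orders
            mongo["shops"][collection].insert_many(df.to_dict("records"))

            # Archive the cleaned data as Parquet in a separate bucket
            out = client.bucket("processed-bucket").blob(f"{collection}/{member}.parquet")
            out.upload_from_string(df.to_parquet())
```

One possible improvement, in keeping with the ELT approach from the video, is to land the raw files in the warehouse first and push the naming/dtype normalization into the transform layer instead of per-file Python code.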
@alexperrine3497 2 years ago
I get to move my company into the modern ELT approach, thanks for the information!
@KahanDataSolutions 2 years ago
It takes time but it's a nice approach. Good luck!
@alexperrine3497 2 years ago
@KahanDataSolutions thank you! First time doing something like this, so I'm learning all over the place.
@noel_g 2 years ago
Another EL option is Airbyte
@JJ-ki2mw 6 months ago
Thank you for explaining it in a way that's super easy to understand
@KahanDataSolutions 6 months ago
Glad it was helpful!
@SameerSrinivas 2 years ago
Would you also call building data models from analytical event tables ETL? Or is that just abstracted as the T of ELT? Thanks for making the video.
@rguez2332 2 years ago
Is Airflow another ELT/ETL tool? I mean, can you manage to create an entire data pipeline just with Talend/Fivetran/dbt, or how does Airflow enter the tool set?
@KahanDataSolutions 2 years ago
Airflow is a "task orchestrator" that can be used to trigger other tools (like dbt) in your pipeline. Yes, you can definitely still have a data pipeline without Airflow, but it becomes helpful as your infrastructure becomes a little more complex and you need to trigger various tools in a specific sequence. Airflow can help you manage and monitor everything from a single place rather than across different tools. Hope that helps!
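To make that concrete, a minimal sketch of such a DAG, assuming a recent Airflow 2.x release, with a hypothetical ingestion script and dbt project path:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# One DAG that sequences the EL step and the dbt transform step.
with DAG(
    dag_id="elt_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",  # run the whole pipeline once per day
    catchup=False,
):
    # Hypothetical ingestion script that lands raw data in the warehouse
    extract_load = BashOperator(
        task_id="extract_load",
        bash_command="python /opt/pipelines/ingest.py",
    )

    # Transform the raw tables with dbt once loading finishes
    transform = BashOperator(
        task_id="transform",
        bash_command="dbt run --project-dir /opt/dbt_project",
    )

    extract_load >> transform  # enforce the E/L -> T ordering
```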
@rguez2332 2 years ago
@KahanDataSolutions thanks!!
@woolfolkdoesthings-onemans9388 2 years ago
Great video! Super helpful and clear about ELT being the best approach. Question: I see you prefer dbt, but how do you feel about Matillion? Thanks!
@KahanDataSolutions 2 years ago
Thanks! I have not used Matillion before so I can't comment on that one
@woolfolkdoesthings-onemans9388 2 years ago
Thanks for the response! Great vids and I'll subscribe to your channel. Matillion is like Fivetran combined with dbt, and it has dbt components for customers that have existing dbt jobs. I'd love for you to give it a try while I am learning how to use dbt! Anyways, take care and keep up the great vids
@KahanDataSolutions 2 years ago
@woolfolkdoesthings-onemans9388 Appreciate the summary - I'll definitely check it out! And thanks for watching / subscribing! Take care
@muhammadfazeel6956 3 months ago
How is ELT more scalable?
@Satyaamevjayathe 1 year ago
It seems you are suggesting that data of various types and formats be brought into a single platform, and then transformed there using the platform's tools
@teamwasted131 2 years ago
I don't understand how you can load the data into a "more permanent" table before you transform it, because many times, when you transform the data by applying business logic, you change its grain and schema. Am I missing something?
@KahanDataSolutions 2 years ago
You can (and will) still change the grain, but the difference is when you choose to do that. By "more permanent" I mean you don't have to clear the table out after each run, like you might do in an ETL process. This is possible because the cost of storing massive amounts of data in modern databases is relatively cheap compared to 15-20 years ago. Computation is what's expensive.
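As a toy illustration of that split, a sketch assuming a Postgres warehouse reached via SQLAlchemy, with hypothetical raw_events and daily_sales tables; the raw landing table is only ever appended to, and the grain change happens later, inside the warehouse:

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@host/warehouse")  # hypothetical DSN

# L step: append today's extract to the permanent raw table, untouched.
raw = pd.DataFrame([{"order_date": "2024-01-01", "amount": 9.99}])
raw.to_sql("raw_events", engine, schema="raw", if_exists="append", index=False)

# T step: rebuild a day-level aggregate from the event-level raw table.
# The grain changes here, in the warehouse, not during ingestion.
with engine.begin() as conn:
    conn.execute(text("""
        CREATE OR REPLACE VIEW analytics.daily_sales AS
        SELECT order_date, SUM(amount) AS total_amount
        FROM raw.raw_events
        GROUP BY order_date
    """))
```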
@poiskls 2 years ago
Very useful 💖🥀 new subscriber here
@KahanDataSolutions 2 years ago
Thanks and welcome!
@michaelsegel8758 8 months ago
Sorry Michael, but you should have attended more CHUG meetups and learned something about Big Data and doing ETL. There is no such thing as ELT. It's really ETL.