Thanks! A comprehensive Databricks notebook that serves many purposes: data engineering and EDA, data cleansing (imputation et al.), a how-to introduction to pivoting between Scala, SQL, and Python (~4:11), Plotly in a Databricks notebook, and above all the salient features of setting up a model: exposure to delta lakes (bronze, silver, gold) and saving data to a delta lake (the RDBMS CRUD equivalent), the bread and butter of legions of ETL engineers. Joins and tables (~4:59), CREATE DATABASE IF NOT EXISTS (6:05), registering a delta table, consistent reads (7:33), handling missing data (8:35), viewing data via Seaborn (9:43) as part of an EDA exercise, a repeated run of a model to generate a table with data integrity (the Gold table, 11:04), registering (aka saving, for CRUD connoisseurs), setting the stage for modeling (11:14), Koalas, a data-manipulation tool (11:23) that distributes work across clusters, then the meat of the presentation: building models in parallel across Spark clusters, and cardinal model features like XGBoost, Hyperopt, MLflow, building multiple models, lambda, learning rate, et al. (from ~13:00), and THE most critical aspect, parameter tuning with Hyperopt (13:35). The author highlights the three main strengths here: the Bayesian optimizer, Spark integration, and logging of the algorithm via MLflow. Actually, a fourth point he nonchalantly drops in is 'best loss' going down as the model works on the data.
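The 'best loss going down' behavior is easy to picture: the tuner keeps the lowest loss seen so far, so the running best can only decrease. A minimal sketch of that idea, using plain random search in place of Hyperopt's Bayesian (TPE) optimizer and a toy quadratic in place of the video's XGBoost training objective (all names here are illustrative, not from the video):

```python
import random

def objective(x):
    # Toy stand-in for training a model and measuring validation loss.
    return (x - 3.0) ** 2

random.seed(0)
best_loss = float("inf")
trajectory = []  # running 'best loss' after each trial
for trial in range(50):
    candidate = random.uniform(-10.0, 10.0)
    loss = objective(candidate)
    best_loss = min(best_loss, loss)
    trajectory.append(best_loss)

# The running 'best loss' never increases across trials.
assert all(a >= b for a, b in zip(trajectory, trajectory[1:]))
print(f"best loss after 50 trials: {trajectory[-1]:.4f}")
```

Hyperopt's TPE search typically converges faster than random search, and on Databricks its Spark-backed trials run in parallel across the cluster, which is the Spark integration the comment calls out.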
MLflow logging for an in-depth look at and analysis of the algorithm's steps (~15:00), and how to interpret the logs: compare the 96 runs of the model in the example, with a focus on lower loss rates (~15:59), logging the model and feature explanations (~16:54), a possible timeline to get SHAP "for free" (17:13), declaring victory and roping in DevOps to webhook the pickled model (17:45), managing promotion to a production deployment via the registry (17:58), the practical steps to Dockerize (a new verb, if it didn't already exist), productionalize (registry), and automate via a webhook (18:58), interpreting the SHAP model visual (19:20), promoting to production (19:58), and, wow, roping in the freshly built model as a Spark UDF! (21:00), deploying to Databricks, Azure ML, Kubernetes, a REST API, etc. (22:00), and surfacing a consumable dashboard (22:57) for the business users and subject-matter experts. What is compelling is Owen's delivery style and storytelling, lucid and simple! Thank you, Sean!
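The registry-driven promotion flow described above (register a model version, move it through Staging to Production, and let a webhook trigger the automation) can be pictured as a small state machine. This is a toy sketch of the concept only, not the MLflow Model Registry API; the class, stage names, and webhook shape are all made up for illustration:

```python
class ToyRegistry:
    """Toy model registry: versions move through stages, and a webhook
    callback fires on each transition (mirroring, not using, MLflow)."""

    STAGES = ("None", "Staging", "Production", "Archived")

    def __init__(self, webhook=None):
        self.versions = {}      # version number -> current stage
        self.webhook = webhook  # called as webhook(version, stage)

    def register(self, version):
        self.versions[version] = "None"

    def transition(self, version, stage):
        if stage not in self.STAGES:
            raise ValueError(f"unknown stage: {stage!r}")
        self.versions[version] = stage
        if self.webhook:
            self.webhook(version, stage)  # e.g. kick off a CI/CD deploy

events = []
reg = ToyRegistry(webhook=lambda v, s: events.append((v, s)))
reg.register(1)
reg.transition(1, "Staging")     # QA / validation happens here
reg.transition(1, "Production")  # the webhook would trigger the deploy
print(reg.versions[1])           # -> Production
```

In the real workflow the webhook would call out to the DevOps tooling (Docker build, REST serving, etc.) rather than append to a list; the point is that stage transitions, not manual copying, drive the promotion.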
@billyreich1121 (a year ago)
Great lesson! Thanks for taking the time to create it and post it here.
@ankitbarsainya (11 months ago)
Great video. Can you share the notebook link? @Databricks
@NewGirlinCalgary (a year ago)
Great video, Sean! Concise and clear! Thanks a lot.
@jhngearsns5089 (2 years ago)
Mind sharing a notebook copy? Thanks!
@MrTulufan (3 months ago)
Nice video, but the first half is just a walk-through of a typical data science project; the real MLflow introduction starts at 14:30.
@sumitbhalla2321 (3 years ago)
Are there any API code snippets to enable model serving? I want to automate enabling model serving. Please help. Thanks!
@papachoudhary5482 (3 years ago)
Thanks, Sir!
@21Gannu (3 years ago)
Great video, Sean...
@SherlynReusser-w8m (a month ago)
Fritz Forks
@siobhanahbois (3 years ago)
This video is 1 hour old; how can it have 5 likes already?