Production Machine Learning Monitoring: Principles, Patterns and Techniques (Alejandro Saucedo)

6,396 views

Alejandro Saucedo

A day ago

Comments: 9
@stackinglittlesats 3 years ago
Very nice explanation. Thank you.
@8eck 3 years ago
Just wow! Will dive into all of that. Thank you!
@harshraj22_ 2 years ago
This video was quite insightful. Thanks
@Yzyou11 2 years ago
Which tool are you using for real-time statistical monitoring?
@8eck 3 years ago
We don't want our business finding out that our models are performing badly. :D
@ajay240392 3 years ago
This was a very interesting video. I liked it. Thank you for sharing this. I have a few questions:
- Is it possible to get a monitoring dashboard without the correct feedback label? This may not be possible in all use cases. If it is possible, could you please help me understand how?
- Do we have to deploy separate microservices/API endpoints for drift and outliers as well?
Thank you in advance.
@axsaucedo 3 years ago
Some metrics do not require the correct/feedback label. For example, microservice performance metrics like requests per second or latency don't require feedback. Similarly, some of the advanced monitoring components like outliers, drift and explainers don't require the feedback label. It is the statistical performance metrics that do require it. Regarding your second question: yes, drift and outliers would be separate microservices.
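To make the distinction concrete, here is a minimal sketch in Python using the prometheus_client library, showing the kind of microservice metrics that need no feedback label at all. The metric names and the predict function are hypothetical, chosen only for illustration; this is not Seldon Core's actual implementation.

```python
# A minimal sketch (not Seldon Core's implementation) of microservice
# metrics that require no feedback label, using prometheus_client.
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names chosen for illustration.
REQUESTS = Counter("model_requests_total", "Total inference requests")
LATENCY = Histogram("model_latency_seconds", "Inference latency in seconds")

def predict(features):
    """Placeholder standing in for any real model inference call."""
    return sum(features)

def handle_request(features):
    REQUESTS.inc()            # request count: no correct label needed
    with LATENCY.time():      # latency: no correct label needed
        return predict(features)

if __name__ == "__main__":
    start_http_server(8000)   # expose /metrics for Prometheus to scrape
    while True:
        handle_request([0.1, 0.2, 0.3])
        time.sleep(1)
```

Requests per second can then be derived in Prometheus with a query such as rate(model_requests_total[1m]); nothing in this path ever sees the correct label.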
@ChuanChihChou 3 years ago
13:38 Why do we need to send the correct label to the model? Shouldn't we be able to compute performance metrics of the model by just using the stored inference data + the correct labels?
@axsaucedo 3 years ago
Good question. This is necessary because once you deploy your model, you are no longer dealing with training data with provided labels. Production inference data is unseen, and you do not have labels for it. Because of this, the labels have to be provided asynchronously, most likely through a manual process where some person or system at some point provides the correct prediction. Once the correct label is provided, it is stored alongside the production inference data (which would be different from the training data).

For completeness, if your question is why we keep track of accuracy/precision/recall in Prometheus as opposed to calculating them offline using the data in Elasticsearch: you can actually do both. Prometheus would more likely be used for real-time insights, which can trigger alerts, while Elasticsearch would be used for offline analytics that allow for drill-downs. This is how we currently approach it in Seldon Core.
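As a rough illustration of that flow, the sketch below assumes the same hypothetical Python/prometheus_client setup as above, with a JSON-lines file standing in for Elasticsearch. It logs each inference so a label can be attached later, and updates the real-time counters only when the asynchronous feedback arrives.

```python
# A minimal sketch of the asynchronous feedback flow described above;
# hypothetical names throughout, not Seldon Core's actual API.
import json
import uuid

from prometheus_client import Counter, start_http_server

# Real-time counters; windowed accuracy can be derived in PromQL as
# rate(feedback_correct_total[5m]) / rate(feedback_total[5m]).
FEEDBACK_TOTAL = Counter("feedback_total", "Feedback events received")
FEEDBACK_CORRECT = Counter("feedback_correct_total", "Predictions confirmed correct")

# Stand-in for an offline store such as Elasticsearch: one JSON line per event.
EVENT_LOG = "inference_events.jsonl"

def log_inference(features, prediction):
    """Store the inference so a correct label can be attached later."""
    request_id = str(uuid.uuid4())
    with open(EVENT_LOG, "a") as f:
        f.write(json.dumps({"id": request_id,
                            "features": features,
                            "prediction": prediction}) + "\n")
    return request_id

def send_feedback(prediction, truth):
    """Called asynchronously once a person/system provides the correct label."""
    FEEDBACK_TOTAL.inc()
    if prediction == truth:
        FEEDBACK_CORRECT.inc()

if __name__ == "__main__":
    start_http_server(8001)                 # metrics endpoint for Prometheus
    rid = log_inference([0.1, 0.2], "cat")  # at inference time: no label yet
    send_feedback("cat", "cat")             # later: label arrives, metrics update
```

The Prometheus counters support real-time alerting, while the logged events (Elasticsearch in the setup described in the talk) support offline drill-downs, matching the two paths described above.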