Presented by: Adam Lieberman, Head of AI & ML at Finastra.
Pushing a model into production is no small feat. From notebook to productized deployment, we have to not only develop a high-performing model but also consider latency, throughput, energy, memory usage, model size, deployment resources, and much more to get our model into the hands of our users. Once we get a model into the wild, we might think the fun is over, but we need constant monitoring to ensure models are functioning as intended. When monitoring models in production, we need to constantly watch for drift: a change in our data that can cause models to behave in unintended ways. In this talk we will define the concept of drift, the many forms it can take, statistical measures to quantify drift, and some mitigation strategies for keeping models healthy and serving the needs of our users.
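As a taste of the statistical measures the talk covers, one widely used drift metric is the Population Stability Index (PSI), which compares the binned distribution of a feature (or model score) at training time against what the model sees in production. The sketch below is a minimal, self-contained illustration; the function name, bin count, and the synthetic "training" vs. "live" data are all assumptions for the example, not material from the talk.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Population Stability Index between a reference and a current sample.

    Commonly cited rule of thumb: PSI < 0.1 suggests little drift,
    0.1-0.25 moderate drift, and > 0.25 significant drift.
    """
    # Bin edges come from the reference distribution's quantiles
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range live values

    ref_counts = np.histogram(reference, bins=edges)[0]
    cur_counts = np.histogram(current, bins=edges)[0]

    # Small epsilon avoids division by zero and log of zero in empty bins
    eps = 1e-6
    p = ref_counts / ref_counts.sum() + eps
    q = cur_counts / cur_counts.sum() + eps
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # hypothetical training-time scores
live_scores = rng.normal(0.5, 1.0, 10_000)   # hypothetical shifted production scores

psi_same = population_stability_index(train_scores, train_scores)
psi_shifted = population_stability_index(train_scores, live_scores)
```

Comparing a sample against itself yields a PSI near zero, while the shifted production sample produces a clearly elevated value, which is the kind of signal a monitoring job can alert on.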