Introduction to Model Deployment with Ray Serve

2,501 views

MLOps World: Machine Learning in Production

1 year ago

Speakers:
Jules Damji, Lead Developer Advocate, Anyscale Inc
Jules S. Damji is a lead developer advocate at Anyscale Inc., an MLflow contributor, and co-author of Learning Spark, 2nd Edition. He is a hands-on developer with over 25 years of experience and has worked at leading companies such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).
Archit Kulkarni, Software Engineer, Anyscale Inc
Archit Kulkarni is a software engineer on the Platform team at Anyscale, working on the open-source library Ray Serve. Prior to Anyscale, he was a PhD student at UC Berkeley.
Abstract:
This is a two-part, hands-on introductory tutorial on Ray and Ray Serve.
Part one covers a hands-on coding tour through the Ray core APIs, which provide powerful yet easy-to-use design patterns (tasks and actors) for implementing distributed systems in Python.
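To make the tasks-and-actors pattern concrete, here is a minimal hedged sketch (not the tutorial's notebook code; the function, class, and variable names are illustrative) of how an ordinary Python function and class become a Ray task and a Ray actor:

```python
# Minimal sketch of Ray Core tasks and actors; names are illustrative.
import ray

ray.init()  # start a local Ray instance (or connect to an existing cluster)

@ray.remote
def square(x):
    # A task: a stateless function scheduled on any worker in the cluster.
    return x * x

@ray.remote
class Counter:
    # An actor: a stateful class whose methods run in a dedicated worker process.
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# .remote() returns object references immediately; ray.get() blocks for the results.
print(ray.get([square.remote(i) for i in range(4)]))   # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.increment.remote()))              # 1
```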
Building on the foundation of the Ray Core APIs, part two of this tutorial focuses on Ray Serve: what it is and why to use it, its scalable architecture, and model deployment patterns. Then, using code examples in Jupyter notebooks, we take a coding tour of creating, exposing, and deploying models to Ray Serve using its core deployment APIs.
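As a rough sketch of these deployment APIs (the exact API surface varies between Ray Serve versions; this assumes Ray 2.x, and the model below is a placeholder rather than anything from the tutorial), a Python class can be turned into a scalable HTTP-backed deployment like this:

```python
# Minimal Ray Serve deployment sketch (assumes Ray 2.x; the model is a stand-in).
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)     # two replicas behind one endpoint
class SentimentModel:
    def __init__(self):
        # In the tutorial this would be a real model; here, a trivial placeholder.
        self.positive_words = {"good", "great", "excellent"}

    async def __call__(self, request: Request) -> dict:
        text = (await request.json())["text"]
        score = sum(word in self.positive_words for word in text.lower().split())
        return {"positive_hits": score}

# Build and run the application; Serve exposes it over HTTP on port 8000 by default.
serve.run(SentimentModel.bind())
```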
Lastly, we touch on Ray Serve's integration with model registries such as MLflow, walk through an end-to-end example, and demonstrate Ray Serve's integration with FastAPI.
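For the integrations, one common pattern (a hedged sketch, not the tutorial's exact code) is to load a registered MLflow model inside the deployment and expose typed routes through FastAPI via serve.ingress; the class name, route, and MLflow model URI below are assumptions:

```python
# Sketch of Ray Serve + FastAPI integration, with an optional MLflow-loaded model.
from fastapi import FastAPI
from ray import serve

api = FastAPI()

@serve.deployment
@serve.ingress(api)          # route this deployment's HTTP traffic through FastAPI
class ModelService:
    def __init__(self):
        # Assumption: a model registered in MLflow could be loaded here, e.g.
        #   import mlflow.pyfunc
        #   self.model = mlflow.pyfunc.load_model("models:/my_model/Production")
        self.model = lambda x: x * 2.0   # placeholder predictor

    @api.get("/predict")
    def predict(self, x: float) -> dict:
        return {"prediction": self.model(x)}

serve.run(ModelService.bind())
# GET http://127.0.0.1:8000/predict?x=3 -> {"prediction": 6.0}
```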
Key takeaways for students:
* Use Ray Core APIs to convert Python functions and classes into distributed tasks and actors
* Learn to use Ray Serve APIs to create, expose, and deploy models
* Access and call deployment endpoints in Ray Serve via Python or HTTP
* Configure compute resources and replicas to scale models in production (both are sketched in the example after this list)
* Learn about Ray Serve integrations with MLflow and FastAPI
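
As a rough illustration of the endpoint-access and scaling bullets above, the sketch below shows the scaling knobs on @serve.deployment and an HTTP call to the resulting endpoint; the resource values and names are assumptions, not recommendations:

```python
# Sketch: configuring replicas/resources and calling the deployed endpoint.
import requests
from ray import serve
from starlette.requests import Request

@serve.deployment(
    num_replicas=4,                                    # four copies share the traffic
    ray_actor_options={"num_cpus": 2, "num_gpus": 0},  # resources reserved per replica
)
class ScaledModel:
    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        return {"echo": payload}

# serve.run also returns a handle that can invoke the deployment from Python.
handle = serve.run(ScaledModel.bind())

# HTTP access: Serve's proxy listens on port 8000 by default.
print(requests.post("http://127.0.0.1:8000/", json={"x": 1}).json())
```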

Comments: 1
@parasetamol6261 · 1 month ago
Can you provide an example notebook for what is shown in the video?