Have you ever wanted to train your own large language model? What about using terabytes to petabytes of your own data? Come learn how to pick the right storage, compute, and performance optimizations to run your job at maximum efficiency on Amazon SageMaker. We’ll explore patterns from NLP, learning how to shard a neural network onto multiple GPUs with SageMaker Model Parallel. We’ll learn how to profile our jobs, incrementally taking steps to increase overall runtime performance and accuracy. We’ll discover the pros and cons of training on EC2, EKS, and SageMaker. Join us to find out how customers like self-driving car company Aurora are training the next generation of ML models on Amazon SageMaker.
Learning Objectives:
Objective 1: Explore the best compute and storage options for training large scale models
Objective 2: Learn about optimizing your model's performance and accuracy
Objective 3: Discover how to leverage SageMaker model parallelism
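To make Objective 3 concrete, here is a minimal sketch (not taken from the talk itself) of the kind of configuration the SageMaker Python SDK uses to enable the model parallelism library. The partition, microbatch, and instance values are illustrative placeholders, and the estimator call is shown only in a comment as an assumption:

```python
# Illustrative sketch: the `distribution` config that the SageMaker Python SDK
# accepts when enabling the SageMaker model parallelism (smdistributed) library.
# All numeric values are placeholders; tune them for your model and hardware.
smp_parameters = {
    "partitions": 2,         # number of model partitions (shards) across GPUs
    "microbatches": 4,       # split each minibatch for pipeline parallelism
    "pipeline": "interleaved",
    "optimize": "speed",
}

distribution = {
    "smdistributed": {"modelparallel": {"enabled": True,
                                        "parameters": smp_parameters}},
    "mpi": {"enabled": True, "processes_per_host": 8},
}

# This dict would be passed to a framework estimator, e.g. (hypothetical values):
# estimator = sagemaker.pytorch.PyTorch(entry_point="train.py",
#                                       instance_type="ml.p4d.24xlarge",
#                                       instance_count=2,
#                                       distribution=distribution, ...)
print(distribution["smdistributed"]["modelparallel"]["parameters"]["partitions"])
```

The key idea the config captures is that model parallelism splits the network itself into partitions placed on different GPUs, while microbatching keeps those GPUs busy via pipelining.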
To learn more about the services featured in this talk, please visit: aws.amazon.com...
Subscribe to AWS Online Tech Talks On AWS: www.youtube.co...
Follow Amazon Web Services:
Official Website: aws.amazon.com...
Twitch: / aws
Twitter: / awsdevelopers
Facebook: / amazonwebservices
Instagram: / amazonwebservices
☁️ AWS Online Tech Talks cover a wide range of topics and expertise levels through technical deep dives, demos, customer examples, and live Q&A with AWS experts. Builders can choose from bite-sized 15-minute sessions, insightful fireside chats, immersive virtual workshops, interactive office hours, or on-demand tech talks to watch at their own pace. Join us to fuel your learning journey with AWS.
#AWS