AWS and NVIDIA launch “Hands-on Machine Learning with Amazon SageMaker and NVIDIA GPUs” on Coursera
AWS and NVIDIA are excited to announce the new Hands-on Machine Learning with Amazon SageMaker and NVIDIA GPUs course. The course has four parts and is designed to help machine learning (ML) enthusiasts quickly learn how to perform modern ML in the AWS Cloud. Sign up for the course today on Coursera.
Machine learning can be complex, tedious, and time-consuming. AWS and NVIDIA provide the fastest, most effective, and easiest-to-use ML tools to jump-start your ML project. This course is designed for ML practitioners, including data scientists and developers, who have a working knowledge of ML workflows. In this course, you gain hands-on experience with Amazon SageMaker and Amazon Elastic Compute Cloud (Amazon EC2) instances powered by NVIDIA GPUs.
Course overview
This course helps data scientists and developers prepare, build, train, and deploy high-quality ML models quickly by bringing together a broad set of capabilities purpose-built for ML within Amazon SageMaker. EC2 instances powered by NVIDIA GPUs offer the highest-performing GPU-based training instances in the cloud for efficient model training and cost-effective inference hosting. The course includes hands-on labs and quizzes developed specifically for it and hosted by AWS Partner Vocareum.
You’re first given a high-level overview of modern machine learning. Then, in the labs, you dive right in and get up and running with a GPU-powered SageMaker instance. You learn how to prepare your dataset for model training using GPU-accelerated data prep with the RAPIDS library, how to build a GPU-accelerated tree-based model, how to train this model, and how to deploy and optimize it for GPU-powered inference. You also get hands-on experience building, training, and deploying deep learning models for computer vision (CV) and natural language processing (NLP) use cases. After completing this course, you will have the knowledge to build, train, deploy, and optimize ML workflows with GPU acceleration in SageMaker, and you will understand the key SageMaker services applicable to tabular, CV, and language ML tasks.
In the first module, you learn the basics of Amazon SageMaker, GPUs in the cloud, and how to spin up an Amazon SageMaker notebook instance. Then you get a tour of Amazon SageMaker Studio, the first fully integrated development environment (IDE) for machine learning, which gives you access to all the capabilities of Amazon SageMaker. This is followed by an introduction to the NVIDIA GPU Cloud (NGC) Catalog and how it can help you simplify and accelerate ML workflows.
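For orientation, here is a minimal sketch of what spinning up a GPU-backed notebook instance can look like with boto3. This is not the course's lab code; the instance name and IAM role ARN are placeholders you would replace with your own.

```python
# A minimal sketch (not the course lab) of creating a GPU-backed SageMaker
# notebook instance with boto3. The name and role ARN below are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_notebook_instance(
    NotebookInstanceName="ml-course-notebook",               # hypothetical name
    InstanceType="ml.p3.2xlarge",                            # NVIDIA V100-backed instance
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    VolumeSizeInGB=50,
)

# Wait until the instance is in service, then open it from the SageMaker console.
waiter = sm.get_waiter("notebook_instance_in_service")
waiter.wait(NotebookInstanceName="ml-course-notebook")
status = sm.describe_notebook_instance(NotebookInstanceName="ml-course-notebook")
print(status["NotebookInstanceStatus"])
```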
In the second module, you apply the knowledge from module 1 and discover how to handle large datasets while building ML models with the NVIDIA RAPIDS framework. In the hands-on lab, you download the Airline Service Quality Performance dataset and run GPU-accelerated data prep, model training, and model deployment.
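To ground this workflow, here is a minimal sketch of the GPU data prep plus GPU training pattern using cuDF and XGBoost. The file name, column names, and target definition are illustrative assumptions, not the actual lab assets.

```python
# A minimal sketch: GPU data prep with cuDF, then a GPU-trained XGBoost model.
import cudf
import xgboost as xgb

# Load the CSV directly into GPU memory and do simple feature prep there.
df = cudf.read_csv("airline_on_time.csv")           # placeholder file name
df = df.dropna(subset=["ArrDelay"])
df["late"] = (df["ArrDelay"] > 15).astype("int32")  # illustrative binary target

features = ["Distance", "DepDelay", "DayOfWeek"]    # illustrative columns
dtrain = xgb.DMatrix(df[features], label=df["late"])

# tree_method="gpu_hist" keeps training on the GPU (XGBoost 1.x style;
# newer releases use tree_method="hist" with device="cuda").
params = {"objective": "binary:logistic", "tree_method": "gpu_hist"}
model = xgb.train(params, dtrain, num_boost_round=100)
```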
In the third module, you get a brief history of how computer vision (CV) has evolved, learn how to work with image data, and learn how to build end-to-end CV applications using Amazon SageMaker. In the hands-on lab, you download the CUB_200 dataset, then train and deploy an object detection model on SageMaker.
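To make the training-job pattern concrete, here is a minimal sketch using the SageMaker Python SDK. The entry-point script, S3 paths, and hyperparameters are illustrative assumptions, not the lab's actual code.

```python
# A minimal sketch of launching a GPU training job with the SageMaker Python
# SDK. "train.py" and the S3 prefix are placeholders for your own assets.
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()  # works inside a SageMaker notebook

estimator = PyTorch(
    entry_point="train.py",            # your detection training script
    role=role,
    framework_version="1.8.1",
    py_version="py3",
    instance_count=1,
    instance_type="ml.p3.2xlarge",     # single NVIDIA V100 GPU
    hyperparameters={"epochs": 10},
)

# Point the job at your dataset in S3 (placeholder bucket and prefix).
estimator.fit({"training": "s3://my-bucket/cub_200/train"})

# Deploy the trained model behind a GPU-backed real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
```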
In the fourth module, you learn about the application of deep learning to natural language processing (NLP). What does it mean to understand language? What is language modeling? What is the BERT language model, and why are such language models used in many popular services like search, office productivity software, and voice agents? And how do NVIDIA GPUs offer a fast, cost-efficient platform for training and deploying NLP models? In the hands-on lab, you download the SQuAD dataset, then train and deploy a BERT-based question answering model.
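As a taste of what BERT-style extractive question answering looks like in code, here is a minimal sketch using the Hugging Face Transformers library; the model name and example text are illustrative, not the lab's exact assets.

```python
# A minimal sketch of extractive question answering with a SQuAD-fine-tuned
# BERT-family model. device=0 places the model on the first NVIDIA GPU.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # illustrative model choice
    device=0,                                       # use device=-1 for CPU
)

result = qa(
    question="What does SageMaker provide?",
    context=(
        "Amazon SageMaker is a fully managed service that brings together a "
        "broad set of capabilities for building, training, and deploying "
        "machine learning models in the cloud."
    ),
)
print(result["answer"], result["score"])
```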
Enroll today
Hands-on Machine Learning with Amazon SageMaker and NVIDIA GPUs is a great way to build the toolset needed for modern ML in the cloud. With this course, you can move projects from conceptual phases to production faster by leaving the undifferentiated heavy lifting of infrastructure to AWS and NVIDIA, and apply your newfound knowledge to solve new challenges with AI and ML.
Improve your ML skills in the cloud, and start applying them to your own business challenges by enrolling today on Coursera!
About the Authors
Pavan Kumar Sunder is a Solutions Architect Leader with the Envision Engineering team at Amazon Web Services. He provides technical guidance and helps customers accelerate their ability to innovate by showing the art of the possible on AWS. He has built multiple prototypes and reusable solutions around AI/ML, IoT, and robotics for our customers.
Isaac Privitera is a Senior Data Scientist at the Amazon Machine Learning Solutions Lab, where he develops bespoke machine learning and deep learning solutions to address customers’ business problems. He works primarily in the computer vision space, focusing on enabling AWS customers with distributed training and active learning.
Cameron Peron is Senior Marketing Manager for AWS AI/ML Education and the AWS AI/ML community. He evangelizes how AI/ML innovation solves complex challenges facing communities, enterprises, and startups alike. Out of the office, he enjoys staying active with kettlebell sport, spending time with his family and friends, and following Euro-league basketball.
Create your fashion assistant application using Amazon Titan models and Amazon Bedrock Agents
In this post, we implement a fashion assistant agent using Amazon Bedrock Agents and the Amazon Titan family of models. The fashion assistant provides a personalized, multimodal conversational experience.
Implement model-independent safety measures with Amazon Bedrock Guardrails
In this post, we discuss how you can use the ApplyGuardrail API in common generative AI architectures such as third-party or self-hosted large language models (LLMs), or in a self-managed Retrieval Augmented Generation (RAG) architecture.
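As a rough illustration of the pattern the post describes, here is a minimal sketch of calling the ApplyGuardrail API via boto3 to screen output from a model hosted outside Amazon Bedrock; the guardrail ID and version are placeholders for resources you would create yourself.

```python
# A minimal sketch of screening externally generated text with ApplyGuardrail.
# The guardrail identifier and version below are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.apply_guardrail(
    guardrailIdentifier="abc1234xyz",  # placeholder guardrail ID
    guardrailVersion="1",
    source="OUTPUT",                   # screen model output; use "INPUT" for prompts
    content=[{"text": {"text": "Model response to evaluate goes here."}}],
)

# "GUARDRAIL_INTERVENED" means the guardrail blocked or masked content;
# "NONE" means the text passed through unchanged.
print(response["action"])
```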
Visier’s data science team boosts their model output 10 times by migrating to Amazon SageMaker
In this post, we learn how Visier was able to boost their model output by 10 times, accelerate innovation cycles, and unlock new opportunities using Amazon SageMaker.