Improve governance of your machine learning models with Amazon SageMaker

As companies are increasingly adopting machine learning (ML) for their mainstream enterprise applications, more of their business decisions are influenced by ML models. As a result, simplified access control and enhanced transparency across all your ML models make it easier to validate that your models are performing well and to take action when they are not.

In this post, we explore how companies can improve visibility into their models with centralized dashboards and detailed documentation of their models using two new features: SageMaker Model Cards and the SageMaker Model Dashboard. Both these features are available at no additional charge to SageMaker customers.

Overview of model governance

Model governance is a framework that gives systematic visibility into model development, validation, and usage. Model governance is applicable across the end-to-end ML workflow, starting from identifying the ML use case to ongoing monitoring of a deployed model through alerts, reports, and dashboards. A well-implemented model governance framework should minimize the number of interfaces required to view, track, and manage lifecycle tasks to make it easier to monitor the ML lifecycle at scale.

Today, organizations invest significant technical expertise in building tooling to automate large portions of their governance and auditability workflow. For example, model builders need to proactively record model specifications such as the intended use of a model, its risk rating, and the performance criteria it should be measured against. They also need to record observations on model behavior and document the reasons behind key decisions, such as the objective function the model was optimized against.

It’s common for companies to use tools like Excel or email to capture and share such model information for production approvals. But as the scale of ML development increases, information can easily be lost or misplaced, and keeping track of these details quickly becomes infeasible. Furthermore, after these models are deployed, you might have to stitch together data from various sources to gain end-to-end visibility into all your models, endpoints, monitoring history, and lineage. Without such a view, you can easily lose track of your models and may not be aware of when you need to take action on them. This issue is intensified in highly regulated industries, because you’re subject to regulations that require you to keep such measures in place.

As the volume of models starts to scale, managing custom tooling can become a challenge and gives organizations less time to focus on core business needs. In the following sections, we explore how SageMaker Model Cards and the SageMaker Model Dashboard can help you scale your governance efforts.

SageMaker Model Cards

Model cards enable you to standardize how models are documented, giving you visibility into the lifecycle of a model from design and building through training and evaluation. Model cards are intended to be a single source of truth for business and technical metadata about the model that can reliably be used for auditing and documentation purposes. They provide a factsheet of the model that is important for model governance.

Model cards allow users to author and store decisions such as why an objective function was chosen for optimization, and details such as intended usage and risk rating. You can also attach and review evaluation results, and jot down observations for future reference.

For models trained on SageMaker, Model cards can discover and auto-populate details such as training job, training datasets, model artifacts, and inference environment, thereby accelerating the process of creating the cards. With the SageMaker Python SDK, you can seamlessly update the Model card with evaluation metrics.

Model cards provide model risk managers, data scientists, and ML engineers the ability to perform the following tasks:

  • Document model requirements such as risk rating, intended usage, limitations, and expected performance
  • Auto-populate Model cards for SageMaker trained models
  • Bring your own info (BYOI) for non-SageMaker models
  • Upload and share model and data evaluation results
  • Define and capture custom information
  • Capture Model card status (draft, pending review, or approved for production)
  • Access the Model card hub from the AWS Management Console
  • Create, edit, view, export, clone, and delete Model cards
  • Trigger workflows using Amazon EventBridge integration for Model card status change events (a sample event rule is sketched after this list)
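
For example, you can use the EventBridge integration to start an approval or notification workflow whenever a Model card changes status. The following is a minimal sketch using boto3; the detail-type string and the Lambda target ARN are assumptions for illustration, so confirm the exact event fields in the EventBridge and SageMaker documentation:

import json
import boto3

events = boto3.client("events")

# Match SageMaker events for Model card status changes (detail-type assumed for illustration).
rule_pattern = {
    "source": ["aws.sagemaker"],
    "detail-type": ["SageMaker Model Card State Change"],
}

events.put_rule(
    Name="model-card-status-change",
    EventPattern=json.dumps(rule_pattern),
    State="ENABLED",
)

# Route matching events to a hypothetical Lambda function that notifies approvers.
events.put_targets(
    Rule="model-card-status-change",
    Targets=[
        {
            "Id": "notify-approvers",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:notify-approvers",
        }
    ],
)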

Create SageMaker Model Cards using the console

You can easily create Model cards using the SageMaker console. Here you can see all the existing Model cards and create new ones as needed.

When creating a Model card, you can document critical model information such as who built the model, why it was developed, how it performs in independent evaluations, and any observations that need to be considered before using the model for a business application.

To create a Model card on the console, complete the following steps:

  1. Enter model overview details.
  2. Enter training details (auto-populated if the model was trained on SageMaker).
  3. Upload evaluation results.
  4. Add additional details such as recommendations and ethical considerations.

After you create the Model card, you can choose a version to view it.

The following screenshot shows the details of our Model card.

You can also export the Model card to be shared as a PDF.

Create and explore SageMaker Model Cards through the SageMaker Python SDK

Interacting with Model cards isn’t limited to the console. You can also use the SageMaker Python SDK to create and explore Model cards. The SageMaker Python SDK allows data scientists and ML engineers to easily interact with SageMaker components. The following code snippets showcase the process to create a Model card using the newly added SageMaker Python SDK functionality.

Make sure you have the latest version of the SageMaker Python SDK installed:

$ pip install --upgrade "sagemaker>=2"

Once you have trained and deployed a model using SageMaker, you can use the information from the SageMaker model and the training job to automatically populate information into the Model card.

Using the SageMaker Python SDK and passing the SageMaker model name, we can automatically collect basic model information. Information such as the SageMaker model ARN, training environment, and model output Amazon Simple Storage Service (Amazon S3) location is all automatically populated. We can add other model facts, such as description, problem type, algorithm type, model creator, and owner. See the following code:

model_overview = ModelOverview.from_name(
    model_name=model_name,
    sagemaker_session=sagemaker_session,
    model_description="This is a simple binary classification model used for Model Card demo",
    problem_type="Binary Classification",
    algorithm_type="Logistic Regression",
    model_creator="DEMO-ModelCard",
    model_owner="DEMO-ModelCard",
)
print(model_overview.model_id)  # Provides us with the SageMaker model ARN
print(model_overview.inference_environment.container_image)  # Provides us with the SageMaker inference container URI
print(model_overview.model_artifact)  # Provides us with the S3 location of the model artifacts

We can also automatically collect basic training information, such as the training job ARN, training environment, and training metrics. Additional training details can be added, such as the training objective function and observations. See the following code:

objective_function = ObjectiveFunction(
    function=Function(
        function=ObjectiveFunctionEnum.MINIMIZE,
        facet=FacetEnum.LOSS,
    ),
    notes="This is an example objective function.",
)
training_details = TrainingDetails.from_model_overview(
    model_overview=model_overview,
    sagemaker_session=sagemaker_session,
    objective_function=objective_function,
    training_observations="Additional training observations could be put here.",
)
print(training_details.training_job_details.training_arn)  # Provides us with the SageMaker training job ARN
print(training_details.training_job_details.training_environment.container_image)  # Provides us with the SageMaker training container URI
print([{"name": i.name, "value": i.value} for i in training_details.training_job_details.training_metrics])  # Provides us with the SageMaker training job metrics

If we have evaluation metrics available, we can add those to the Model card as well:

my_metric_group = MetricGroup(
    name="binary classification metrics",
    metric_data=[Metric(name="accuracy", type=MetricTypeEnum.NUMBER, value=0.5)],
)
evaluation_details = [
    EvaluationJob(
        name="Example evaluation job",
        evaluation_observation="Evaluation observations.",
        datasets=["s3://path/to/evaluation/data"],
        metric_groups=[my_metric_group],
    )
]

We can also add additional information about the model that can help with model governance:

intended_uses = IntendedUses(
    purpose_of_model="Test Model Card.",
    intended_uses="Not used except this test.",
    factors_affecting_model_efficiency="No.",
    risk_rating=RiskRatingEnum.LOW,
    explanations_for_risk_rating="Just an example.",
)
additional_information = AdditionalInformation(
    ethical_considerations="Your model's ethical considerations.",
    caveats_and_recommendations="Your model's caveats and recommendations.",
    custom_details={"custom details1": "details value"},
)

After we have provided all the details we require, we can create the Model card using the preceding configuration:

model_card_name = "sample-notebook-model-card"
my_card = ModelCard(
    name=model_card_name,
    status=ModelCardStatusEnum.DRAFT,
    model_overview=model_overview,
    training_details=training_details,
    intended_uses=intended_uses,
    evaluation_details=evaluation_details,
    additional_information=additional_information,
    sagemaker_session=sagemaker_session,
)
my_card.create()

The SageMaker SDK also provides the ability to update, load, list, export, and delete a Model card.
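
For example, the card created earlier can be loaded, approved, exported as a PDF, and deleted. The following is a brief sketch; the method names follow the sagemaker.model_card module and the S3 path is a placeholder, so verify the signatures against the developer guide before relying on them:

from sagemaker.model_card import ModelCard, ModelCardStatusEnum

# Load an existing card by name.
my_card = ModelCard.load(name=model_card_name, sagemaker_session=sagemaker_session)

# Update the status once the card has been reviewed.
my_card.status = ModelCardStatusEnum.APPROVED
my_card.update()

# Export the card as a PDF to an S3 location of your choice (placeholder path).
my_card.export_pdf(s3_output_path="s3://your-bucket/model-cards/")

# Delete the card when it is no longer needed.
my_card.delete()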

To learn more about Model cards, refer to the developer guide and follow this example notebook to get started.

SageMaker Model Dashboard

The Model dashboard is a centralized repository of all models that have been created in the account. Models are usually created by training on SageMaker, but you can also bring models that were trained elsewhere and host them on SageMaker.

The Model dashboard provides a single interface for IT administrators, model risk managers, or business leaders to view all deployed models and how they’re performing. You can view your endpoints, batch transform jobs, and monitoring jobs to get insights into model performance. Organizations can dive deep to identify which models have missing or inactive monitors and add them using SageMaker APIs to ensure all models are being checked for data drift, model drift, bias drift, and feature attribution drift.
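
The same information the dashboard surfaces can also be pulled programmatically. The following is a minimal sketch using the boto3 SageMaker client (pagination omitted for brevity) that flags endpoints without a monitoring schedule:

import boto3

sm = boto3.client("sagemaker")

# Flag any endpoint that has no monitoring schedule attached.
for endpoint in sm.list_endpoints()["Endpoints"]:
    name = endpoint["EndpointName"]
    schedules = sm.list_monitoring_schedules(EndpointName=name)["MonitoringScheduleSummaries"]
    if not schedules:
        print(f"Endpoint {name} has no monitoring schedule")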

The following screenshot shows an example of the Model dashboard.

The Model dashboard provides an overview of all your models, what their risk rating is, and how those models are performing in production. It does this by pulling information from across SageMaker. The performance monitoring information is captured through Amazon SageMaker Model Monitor, and you can also see information on models invoked for batch predictions through SageMaker batch transform jobs. Lineage information such as how the model was trained, data used, and more is captured, and information from Model cards is pulled as well.

Model Monitor monitors the quality of SageMaker models used in production for batch inference or real-time endpoints. You can set up continuous monitoring or scheduled monitors via SageMaker APIs, and edit the alert settings through the Model dashboard. You can set alerts that notify you when there are deviations in the model quality. Early and proactive detection of these deviations enables you to take corrective actions, such as retraining models, auditing upstream systems, or fixing quality issues without having to monitor models manually or build additional tooling. The Model dashboard gives you quick insight into which models are being monitored and how they are performing. For more information on Model Monitor, visit Monitor models for data and model quality, bias, and explainability.

When you choose a model in the Model dashboard, you can get deeper insights into the model, such as the Model card (if one exists), model lineage, details about the endpoint the model has been deployed to, and the monitoring schedule for the model.

This view allows you to create a Model card if needed. The monitoring schedule can be activated, deactivated, or edited as well through the Model dashboard.

For models that don’t have a monitoring schedule, you can set this up by enabling Model Monitor for the endpoint the model has been deployed to. Through the alert details and status, you will be notified of models that are showing data drift, model drift, bias drift, or feature attribution drift, depending on which monitors you set up.

Let’s look at an example workflow for setting up model monitoring; a code sketch follows the list. The key steps of this process are:

  1. Capture data sent to the endpoint (or batch transform job).
  2. Establish a baseline (for each of the monitoring types).
  3. Create a Model Monitor schedule to compare the live predictions against the baseline to report violations and trigger alerts.
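
The following is a condensed sketch of these steps for data quality monitoring with the SageMaker Python SDK. The role ARN, endpoint name, and S3 paths are placeholders, and data capture (step 1) is assumed to already be enabled on the endpoint through its deployment-time DataCaptureConfig:

from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

# Step 2: establish a baseline from the training dataset (placeholder role and paths).
my_monitor = DefaultModelMonitor(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
my_monitor.suggest_baseline(
    baseline_dataset="s3://your-bucket/data/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://your-bucket/monitoring/baseline",
)

# Step 3: schedule hourly comparisons of captured traffic against the baseline.
my_monitor.create_monitoring_schedule(
    monitor_schedule_name="my-endpoint-data-quality",
    endpoint_input="my-endpoint",
    output_s3_uri="s3://your-bucket/monitoring/reports",
    statistics=my_monitor.baseline_statistics(),
    constraints=my_monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)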

Based on the alerts, you can take actions like rolling back the endpoint to a previous version or retraining the model with new data. While doing this, it may be necessary to trace how the model was trained, which can be done by visualizing the model’s lineage.
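
As a small illustration, the lineage of a deployed model can be inspected with the SageMaker Python SDK. This is a sketch only; the endpoint ARN is a placeholder, the visualizer is intended for notebook use where the resulting table renders inline, and the exact keyword arguments accepted by show() may vary by SDK version:

import sagemaker
from sagemaker.lineage.visualizer import LineageTableVisualizer

session = sagemaker.Session()
viz = LineageTableVisualizer(session)

# List the artifacts (datasets, container images, model artifacts) associated with the endpoint.
df = viz.show(endpoint_arn="arn:aws:sagemaker:us-east-1:111122223333:endpoint/my-endpoint")
print(df)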

The Model dashboard offers a rich set of information regarding the overall model ecosystem in an account, in addition to the ability to drill into the specific details of a model. To learn more about the Model dashboard, refer to the developer guide.

Conclusion

Model governance is complex and often involves lots of customized needs specific to an organization or an industry. This could be based on the regulatory requirements your organization needs to comply with, the types of personas present in the organization, and the types of models being used. There’s no one-size-fits-all approach to governance, and it’s important to have the right tools available so that a robust governance process can be put into place.

With the purpose-built ML governance tools in SageMaker, organizations can implement the right mechanisms to improve control and visibility over ML projects for their specific use cases. Give Model cards and the Model dashboard a try, and leave your questions and feedback in the comments. To learn more about Model cards and the Model dashboard, refer to the developer guide.

About the authors

Kirit Thadaka is an ML Solutions Architect working in the SageMaker Service SA team. Prior to joining AWS, Kirit worked in early-stage AI startups followed by some time consulting in various roles in AI research, MLOps, and technical leadership.

Marc Karp is an ML Architect with the SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Raghu Ramesha is an ML Solutions Architect with the Amazon SageMaker Service team. He focuses on helping customers build, deploy, and migrate ML production workloads to SageMaker at scale. He specializes in machine learning, AI, and computer vision domains, and holds a master’s degree in Computer Science from UT Dallas. In his free time, he enjoys traveling and photography.

Ram Vittal is an ML Specialist Solutions Architect at AWS. He has over 20 years of experience architecting and building distributed, hybrid, and cloud applications. He is passionate about building secure and scalable AI/ML and big data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis, photography, and action movies.

Sahil Saini is an ISV Solutions Architect at Amazon Web Services. He works with the product and engineering teams of AWS strategic customers to help them with technology solutions using AWS services for AI/ML, containers, HPC, and IoT. He has helped set up AI/ML platforms for enterprise customers.


