Add AutoML functionality with Amazon SageMaker Autopilot across accounts

AutoML is a powerful capability, provided by Amazon SageMaker Autopilot, that allows non-experts to create machine learning (ML) models to invoke in their applications.

The problem that we want to solve arises when, due to governance constraints, Amazon SageMaker resources can’t be deployed in the same AWS account where they are used.

Examples of such a situation are:

  • A multi-account enterprise setup of AWS where the Autopilot resources must be deployed in a specific AWS account (the trusting account), and should be accessed from trusted accounts
  • A software as a service (SaaS) provider that offers AutoML to its users and deploys the resources in the customer's AWS account so that the billing is associated with the end customer

This post walks through an implementation using the SageMaker Python SDK. It’s divided into two sections:

  • Create the AWS Identity and Access Management (IAM) resources needed for cross-account access
  • Perform the Autopilot job, deploy the top model, and make predictions from the trusted account accessing the trusting account

The solution described in this post is provided in the Jupyter notebook available in this GitHub repository.

For a full explanation of Autopilot, you can refer to the examples available in GitHub, particularly Top Candidates Customer Churn Prediction with Amazon SageMaker Autopilot and Batch Transform (Python SDK).

Prerequisites

We have two AWS accounts:

  • Customer (trusting) account – Where the SageMaker resources are deployed
  • SaaS (trusted) account – Drives the training and prediction activities

You have to create an IAM user in each account, with programmatic access enabled and the IAMFullAccess managed policy attached.

You have to configure the user profiles in the .aws/credentials file:

  • customer_config for the user configured in the customer account
  • saas_config for the user configured in the SaaS account
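
For reference, after configuration the two profiles in the .aws/credentials file look like the following (the key values shown here are placeholders):

[customer_config]
aws_access_key_id=AKIAXXXXXXXXCUSTOMER
aws_secret_access_key=customer-secret-access-key

[saas_config]
aws_access_key_id=AKIAXXXXXXXXSAAS
aws_secret_access_key=saas-secret-access-key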

To update the SageMaker SDK, run the following command in your Python environment:

!pip install --upgrade sagemaker

The procedure has been tested in the SageMaker environment conda_python3.

Common modules and initial definitions

Import common Python modules used in the script:

import boto3
import json
import sagemaker
from botocore.exceptions import ClientError

Let’s define the AWS Region that will host the resources:

REGION = boto3.Session().region_name

and the reference to the dataset for the training of the model:

DATASET_URI = "s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt"

Set up the IAM resources

The following diagram illustrates the IAM entities that we create, which allow the cross-account implementation of the Autopilot job.

On the customer account, we define the single role customer_trusting_saas, which consolidates the permissions for Amazon Simple Storage Service (Amazon S3) and SageMaker access needed for the following:

  • The local SageMaker service that performs the Autopilot actions
  • The principal in the SaaS account that initiates the actions in the customer account

On the SaaS account, we define the following:

  • The AutopilotUsers group with the policy required to assume the customer_trusting_saas role via AWS Security Token Service (AWS STS)
  • The saas_user, which is a member of the AutopilotUsers group and is the actual principal triggering the Autopilot actions

For additional security, in the cross-account trust relationship, we use the external ID to mitigate the confused deputy problem.

Let’s proceed with the setup.

For each of the two accounts, we complete the following tasks:

  1. Create the Boto3 session with the profile of the respective configuration user.
  2. Retrieve the AWS account ID by means of AWS STS.
  3. Create the IAM client that performs the configuration steps in the account.

For the customer account, use the following code:

customer_config_session = boto3.session.Session(profile_name="customer_config")
CUSTOMER_ACCOUNT_ID = customer_config_session.client("sts").get_caller_identity()["Account"]
customer_iam_client = customer_config_session.client("iam")

Use the following code in the SaaS account:

saas_config_session = boto3.session.Session(profile_name="saas_config")
SAAS_ACCOUNT_ID = saas_config_session.client("sts").get_caller_identity()["Account"]
saas_iam_client = saas_config_session.client("iam")

Set up the IAM entities in the customer account

Let’s first define the role needed to perform cross-account tasks from the SaaS account in the customer account.

For simplicity, the same role is adopted for trusting SageMaker in the customer account. Ideally, consider splitting this role into two roles with fine-grained permissions, in line with the principle of least privilege.

The role name and the references to the ARN of the SageMaker AWS managed policies are as follows:

CUSTOMER_TRUST_SAAS_ROLE_NAME = "customer_trusting_saas"
CUSTOMER_TRUST_SAAS_ROLE_ARN = "arn:aws:iam::{}:role/{}".format(CUSTOMER_ACCOUNT_ID, CUSTOMER_TRUST_SAAS_ROLE_NAME)
SAGEMAKERFULLACCESS_POLICY_ARN = "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"

The following customer managed policy gives the role the permissions to access the Amazon S3 resources that are needed for the SageMaker tasks and for the cross-account copy of the dataset.

We restrict the access to the S3 buckets dedicated to SageMaker in the AWS Region for the customer account. See the following code:

CUSTOMER_S3_POLICY_NAME = "customer_s3"
CUSTOMER_S3_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::sagemaker-{}-{}".format(REGION, CUSTOMER_ACCOUNT_ID),
                "arn:aws:s3:::sagemaker-{}-{}/*".format(REGION, CUSTOMER_ACCOUNT_ID)
            ]
        }
    ]
}

Then we define the external ID to mitigate the confused deputy problem. Its value is arbitrary, as long as the trusted account presents the same string when assuming the role.
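
A minimal placeholder definition (the value here is illustrative, not prescribed):

EXTERNAL_ID = "autopilot-cross-account-demo"  # placeholder; must match the ExternalId the SaaS account passes when assuming the role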

The trust relationships policy allows the principals from the trusted account and SageMaker to assume the role:

CUSTOMER_TRUST_SAAS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::{}:root".format(SAAS_ACCOUNT_ID)
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": EXTERNAL_ID
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "sagemaker.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

For simplicity, we don't include exception handling in the following snippets. See the Jupyter notebook for the full code.

We create the customer managed policy in the customer account, create the new role, and attach the two policies. We use the maximum session duration parameter to manage long-running jobs. See the following code:

MAX_SESSION_DURATION = 10800

create_policy_response = customer_iam_client.create_policy(
    PolicyName=CUSTOMER_S3_POLICY_NAME,
    PolicyDocument=json.dumps(CUSTOMER_S3_POLICY)
)
customer_s3_policy_arn = create_policy_response["Policy"]["Arn"]

create_role_response = customer_iam_client.create_role(
    RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(CUSTOMER_TRUST_SAAS_POLICY),
    MaxSessionDuration=MAX_SESSION_DURATION
)

customer_iam_client.attach_role_policy(
    RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME,
    PolicyArn=customer_s3_policy_arn
)
customer_iam_client.attach_role_policy(
    RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME,
    PolicyArn=SAGEMAKERFULLACCESS_POLICY_ARN
)

Set up IAM entities in the SaaS account

We define the following in the SaaS account:

  • A group of users allowed to perform the Autopilot job in the customer account
  • A policy associated with the group for assuming the role defined in the customer account
  • A policy associated with the group for uploading data to Amazon S3 and managing bucket policies
  • A user that is responsible for the implementation of the Autopilot jobs – the user has programmatic access
  • A user profile to store the user access key and secret in the file for the credentials

Let’s start with defining the name of the group (AutopilotUsers):

SAAS_USER_GROUP_NAME = "AutopilotUsers"

The first policy refers to the customer account ID and the role:

SAAS_ASSUME_ROLE_POLICY_NAME = "saas_assume_customer_role"
SAAS_ASSUME_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::{}:role/{}".format(CUSTOMER_ACCOUNT_ID, CUSTOMER_TRUST_SAAS_ROLE_NAME)
        }
    ]
}

The second policy is needed to download the dataset, and to manage the Amazon S3 bucket used by SageMaker:

SAAS_S3_POLICY_NAME = "saas_s3"
SAAS_S3_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::{}".format(DATASET_URI.split("://")[1])
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutBucketPolicy",
                "s3:DeleteBucketPolicy"
            ],
            "Resource": [
                "arn:aws:s3:::sagemaker-{}-{}".format(REGION, SAAS_ACCOUNT_ID),
                "arn:aws:s3:::sagemaker-{}-{}/*".format(REGION, SAAS_ACCOUNT_ID)
            ]
        }
    ]
}

For simplicity, we give the same value to the user name and user profile:

SAAS_USER_PROFILE = SAAS_USER_NAME = "saas_user"

Now we create the two new managed policies. Next, we create the group, attach the policies to the group, create the user with programmatic access, and insert the user into the group. See the following code:

create_policy_response = saas_iam_client.create_policy(
    PolicyName=SAAS_ASSUME_ROLE_POLICY_NAME,
    PolicyDocument=json.dumps(SAAS_ASSUME_ROLE_POLICY)
)
saas_assume_role_policy_arn = create_policy_response["Policy"]["Arn"]

create_policy_response = saas_iam_client.create_policy(
    PolicyName=SAAS_S3_POLICY_NAME,
    PolicyDocument=json.dumps(SAAS_S3_POLICY)
)
saas_s3_policy_arn = create_policy_response["Policy"]["Arn"]

saas_iam_client.create_group(GroupName=SAAS_USER_GROUP_NAME)
saas_iam_client.attach_group_policy(GroupName=SAAS_USER_GROUP_NAME, PolicyArn=saas_assume_role_policy_arn)
saas_iam_client.attach_group_policy(GroupName=SAAS_USER_GROUP_NAME, PolicyArn=saas_s3_policy_arn)

saas_iam_client.create_user(UserName=SAAS_USER_NAME)
# Keep the response: the access key and secret are written to the credentials file in the next step
create_akey_response = saas_iam_client.create_access_key(UserName=SAAS_USER_NAME)
saas_iam_client.add_user_to_group(GroupName=SAAS_USER_GROUP_NAME, UserName=SAAS_USER_NAME)

Update the credentials file

Create the user profile for saas_user in the .aws/credentials file:

from pathlib import Path
import configparser

credentials_config = configparser.ConfigParser()
credentials_config.read(str(Path.home()) + "/.aws/credentials")

if not credentials_config.has_section(SAAS_USER_PROFILE):
    credentials_config.add_section(SAAS_USER_PROFILE)

credentials_config[SAAS_USER_PROFILE]["aws_access_key_id"] = create_akey_response["AccessKey"]["AccessKeyId"]
credentials_config[SAAS_USER_PROFILE]["aws_secret_access_key"] = create_akey_response["AccessKey"]["SecretAccessKey"]

with open(str(Path.home()) + "/.aws/credentials", "w") as configfile:
    credentials_config.write(configfile, space_around_delimiters=False)

This completes the configuration of IAM entities that are needed for the cross-account implementation of the Autopilot job.

Autopilot cross-account access

This is the core objective of the post, where we demonstrate the main differences with respect to the single-account scenario.

First, we prepare the dataset the Autopilot job uses for training the models.

Data

We reuse the same dataset adopted in the SageMaker example: Top Candidates Customer Churn Prediction with Amazon SageMaker Autopilot and Batch Transform (Python SDK).

For a full explanation of the data, refer to the original example.

We skip the data inspection and proceed directly to the focus of this post, which is the cross-account Autopilot job invocation.

Download the churn dataset with the following AWS Command Line Interface (AWS CLI) command:

!aws s3 cp $DATASET_URI ./ --profile saas_user

Split the dataset for the Autopilot job and the inference phase

After you load the dataset, split it into two parts:

  • 80% for the Autopilot job to train the top model
  • 20% for testing the model that we deploy

Autopilot applies a cross-validation resampling procedure to the input dataset for all candidate algorithms, to test their ability to predict data they haven't been trained on.

Split the dataset with the following code:

import pandas as pd
import numpy as np

churn = pd.read_csv("./churn.txt")
train_data = churn.sample(frac=0.8, random_state=200)
test_data = churn.drop(train_data.index)
test_data_no_target = test_data.drop(columns=["Churn?"])

Let's save the training data to a local file that we pass to the fit method of the AutoML estimator:

train_file = "train_data.csv"
train_data.to_csv(train_file, index=False, header=True)

Autopilot training job, deployment, and prediction overview

The training, deployment, and prediction process is illustrated in the following diagram.

The following are the steps for the cross-account invocation:

  1. Initiate a session as saas_user in the SaaS account and load the profile from the credentials.
  2. Assume the role in the customer account via the AWS STS.
  3. Set up and train the AutoML estimator in the customer account.
  4. Deploy the top candidate model proposed by AutoML in the customer account.
  5. Invoke the deployed model endpoint for the prediction on test data.

Initiate the user session in the SaaS account

The setup procedure of IAM entities, explained at the beginning of the post, created the saas_user, identified by the saas_user profile in the .aws/credentials file. We initiate a Boto3 session with this profile:

saas_user_session = boto3.session.Session(profile_name=SAAS_USER_PROFILE, region_name=REGION)

The saas_user inherits from the AutopilotUsers group the permission to assume the customer_trusting_saas role in the customer account.

Assume the role in the customer account via AWS STS

AWS STS provides the credentials for a temporary session that is initiated in the customer account:

saas_sts_client = saas_user_session.client("sts", region_name=REGION)

The default session duration (the DurationSeconds parameter) is 1 hour. We set it to the maximum session duration configured for the role. If the session expires, you can recreate it by performing the following steps again. See the following code:

assumed_role_object = saas_sts_client.assume_role(
    RoleArn=CUSTOMER_TRUST_SAAS_ROLE_ARN,
    RoleSessionName="sagemaker_autopilot",
    ExternalId=EXTERNAL_ID,
    DurationSeconds=MAX_SESSION_DURATION
)
assumed_role_credentials = assumed_role_object["Credentials"]

assumed_role_session = boto3.Session(
    aws_access_key_id=assumed_role_credentials["AccessKeyId"],
    aws_secret_access_key=assumed_role_credentials["SecretAccessKey"],
    aws_session_token=assumed_role_credentials["SessionToken"],
    region_name=REGION
)

sagemaker_session = sagemaker.Session(boto_session=assumed_role_session)

The sagemaker_session parameter is needed for using the high-level AutoML estimator.

Set up and train the AutoML estimator in the customer account

We use the AutoML estimator from the SageMaker Python SDK to invoke the Autopilot job to train a set of candidate models for the training data.

The setup of the AutoML object is similar to the single-account scenario, but with the following differences for the cross-account invocation:

  • The role for SageMaker access in the customer account is CUSTOMER_TRUST_SAAS_ROLE_ARN
  • The sagemaker_session is the temporary session created by AWS STS

See the following code:

from sagemaker import AutoML
from time import gmtime, strftime, sleep

timestamp_suffix = strftime("%d-%H-%M-%S", gmtime())
base_job_name = "automl-churn-sdk-" + timestamp_suffix

target_attribute_name = "Churn?"
target_attribute_values = np.unique(train_data[target_attribute_name])
target_attribute_true_value = target_attribute_values[1]  # 'True.'

automl = AutoML(
    role=CUSTOMER_TRUST_SAAS_ROLE_ARN,
    target_attribute_name=target_attribute_name,
    base_job_name=base_job_name,
    sagemaker_session=sagemaker_session,
    max_candidates=10
)

We now launch the Autopilot job by calling the fit method of the AutoML estimator in the same way as in the single-account example. We consider the following alternative options for providing the training dataset to the estimator.

First option: upload a local file and train by fit method

We simply pass the training dataset by referring to the local file that the fit method uploads into the default Amazon S3 bucket used by SageMaker in the customer account:

automl.fit(train_file, job_name=base_job_name, wait=False, logs=False)

Second option: cross-account copy

Most likely, the training dataset is located in an Amazon S3 bucket owned by the SaaS account. We copy the dataset from the SaaS account into the customer account and refer to the URI of the copy in the fit method.

  1. Upload the dataset into a local bucket of the SaaS account. For convenience, we use the SageMaker default bucket in the Region:

     DATA_PREFIX = "auto-ml-input-data"
     local_session = sagemaker.Session(boto_session=saas_user_session)
     local_session_bucket = local_session.default_bucket()
     train_data_s3_path = local_session.upload_data(path=train_file, key_prefix=DATA_PREFIX)

  2. To allow the cross-account copy, set the following policy on the local bucket, only for the time needed for the copy operation:

     train_data_s3_arn = "arn:aws:s3:::{}/{}/{}".format(local_session_bucket, DATA_PREFIX, train_file)
     bucket_policy = {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Principal": {
                     "AWS": CUSTOMER_TRUST_SAAS_ROLE_ARN
                 },
                 "Action": "s3:GetObject",
                 "Resource": train_data_s3_arn
             }
         ]
     }
     bucket_policy = json.dumps(bucket_policy)
     saas_s3_client = saas_user_session.client("s3")
     saas_s3_client.put_bucket_policy(Bucket=local_session_bucket, Policy=bucket_policy)

  3. Perform the copy with the role assumed in the customer account:

     assumed_role_s3_client = boto3.client(
         "s3",
         aws_access_key_id=assumed_role_credentials["AccessKeyId"],
         aws_secret_access_key=assumed_role_credentials["SecretAccessKey"],
         aws_session_token=assumed_role_credentials["SessionToken"]
     )
     target_train_key = "{}/{}".format(DATA_PREFIX, train_file)
     assumed_role_s3_client.copy_object(
         Bucket=sagemaker_session.default_bucket(),
         CopySource=train_data_s3_path.split("://")[1],
         Key=target_train_key
     )

  4. Delete the bucket policy so that access is granted only for the time of the copy:

     saas_s3_client.delete_bucket_policy(Bucket=local_session_bucket)
  5. Finally, we launch the Autopilot job, passing the URI of the object copy:

target_train_uri = "s3://{}/{}".format(sagemaker_session.default_bucket(), target_train_key)
automl.fit(target_train_uri, job_name=base_job_name, wait=False, logs=False)

Another option is to refer directly to the URI of the source dataset in the SaaS account bucket. In this case, the bucket policy must stay in place for the whole duration of the training and must also allow the s3:ListBucket action on the source bucket, with a statement like the following:

{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::CUSTOMER_ACCOUNT_ID:role/customer_trusting_saas"
    },
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::sagemaker-REGION-SAAS_ACCOUNT_ID"
}

We can use the describe_auto_ml_job method to track the status of our SageMaker Autopilot job:

describe_response = automl.describe_auto_ml_job()
print(describe_response["AutoMLJobStatus"] + " - " + describe_response["AutoMLJobSecondaryStatus"])
job_run_status = describe_response["AutoMLJobStatus"]

while job_run_status not in ("Failed", "Completed", "Stopped"):
    describe_response = automl.describe_auto_ml_job()
    job_run_status = describe_response["AutoMLJobStatus"]
    print(describe_response["AutoMLJobStatus"] + " - " + describe_response["AutoMLJobSecondaryStatus"])
    sleep(30)

Because an Autopilot job can take a long time, the session token might expire during the fit. In that case, create a new session following the steps described earlier, and retrieve the reference to the running Autopilot job with the following code:

automl = AutoML.attach(auto_ml_job_name=base_job_name,sagemaker_session=sagemaker_session)

Deploy the top candidate model proposed by AutoML

The Autopilot job trains and returns a set of trained candidate models, identifying among them the top candidate that optimizes the evaluation metric related to the ML problem.

In this post, we only demonstrate the deployment of the top candidate proposed by AutoML, but you can choose a different candidate that better fits your business criteria.

First, we review the performance achieved by the top candidate in the cross-validation:

best_candidate = automl.describe_auto_ml_job()["BestCandidate"]
best_candidate_name = best_candidate["CandidateName"]

print("\n")
print("CandidateName: " + best_candidate_name)
print("FinalAutoMLJobObjectiveMetricName: " + best_candidate["FinalAutoMLJobObjectiveMetric"]["MetricName"])
print("FinalAutoMLJobObjectiveMetricValue: " + str(best_candidate["FinalAutoMLJobObjectiveMetric"]["Value"]))

If the performance is good enough for our business criteria, we deploy the top candidate in the customer account:

from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import CSVDeserializer

inference_response_keys = ["predicted_label", "probability"]

predictor = automl.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    inference_response_keys=inference_response_keys,
    predictor_cls=Predictor,
    serializer=CSVSerializer(),
    deserializer=CSVDeserializer()
)

print("Created endpoint: {}".format(predictor.endpoint_name))

The instance is deployed and billed to the customer account.
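
If you want to confirm that the resources indeed live in the customer account, you can list the SageMaker endpoints with the assumed-role credentials. This is an optional check, sketched here with the same assumed_role_session created earlier:

assumed_role_sm_client = assumed_role_session.client("sagemaker")
# List the endpoints visible in the customer account; the newly created endpoint should appear here
for endpoint in assumed_role_sm_client.list_endpoints()["Endpoints"]:
    print(endpoint["EndpointName"], endpoint["EndpointStatus"])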

Prediction on test data

Finally, we invoke the model endpoint to predict the label for the test data:

predictor.predict(test_data_no_target.to_csv(sep=",", header=False, index=False))
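
Because the endpoint was deployed with CSVSerializer, CSVDeserializer, and the predicted_label and probability response keys, the call returns the predictions as rows of label and probability values. A minimal sketch of capturing and inspecting them (the variable names here are ours, not from the original notebook):

# Capture the response; with CSVDeserializer it is a list of [predicted_label, probability] rows
prediction = predictor.predict(
    test_data_no_target.to_csv(sep=",", header=False, index=False)
)

# Compare the first few predictions with the actual labels from the held-out test set
for (label, probability), actual in zip(prediction[:5], test_data["Churn?"].head(5)):
    print("predicted={} probability={} actual={}".format(label, probability, actual))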

If the session token expires after the endpoint has been deployed, you can create a new session following the steps described earlier and connect to the already deployed endpoint with the following code:

predictor = Predictor(
    predictor.endpoint_name,
    sagemaker_session=sagemaker_session,
    serializer=CSVSerializer(),
    deserializer=CSVDeserializer()
)

Clean up

To avoid incurring unnecessary charges, delete the endpoint and the other resources created for the model when they are no longer needed.

Delete the model endpoint

The model endpoint runs on an instance that stays active until it is deleted, so we delete it first to stop the associated charges:

predictor.delete_endpoint()

Delete the artifacts generated by the Autopilot job

Delete all the artifacts created by the Autopilot job, such as the generated candidate models, scripts, and notebook.

We use the high-level resource for Amazon S3 to simplify the operation:

assumed_role_s3_resource = boto3.resource(
    "s3",
    aws_access_key_id=assumed_role_credentials["AccessKeyId"],
    aws_secret_access_key=assumed_role_credentials["SecretAccessKey"],
    aws_session_token=assumed_role_credentials["SessionToken"]
)

s3_bucket = assumed_role_s3_resource.Bucket(automl.sagemaker_session.default_bucket())
s3_bucket.objects.filter(Prefix=base_job_name).delete()

Delete the training dataset copied into the customer account

Delete the training dataset in the customer account with the following code:

from urllib.parse import urlparse

train_data_uri = automl.describe_auto_ml_job()["InputDataConfig"][0]["DataSource"]["S3DataSource"]["S3Uri"]
o = urlparse(train_data_uri, allow_fragments=False)
assumed_role_s3_resource.Object(o.netloc, o.path.lstrip("/")).delete()

Clean up IAM resources

We delete the IAM resources in reverse order of creation.

  1. Remove the user from the group, remove the profile from the credentials file, then delete the user's access keys and the user:

     saas_iam_client.remove_user_from_group(GroupName=SAAS_USER_GROUP_NAME, UserName=SAAS_USER_NAME)

     credentials_config.remove_section(SAAS_USER_PROFILE)
     with open(str(Path.home()) + "/.aws/credentials", "w") as configfile:
         credentials_config.write(configfile, space_around_delimiters=False)

     user_access_keys = saas_iam_client.list_access_keys(UserName=SAAS_USER_NAME)
     for AccessKeyId in [element["AccessKeyId"] for element in user_access_keys["AccessKeyMetadata"]]:
         saas_iam_client.delete_access_key(UserName=SAAS_USER_NAME, AccessKeyId=AccessKeyId)

     saas_iam_client.delete_user(UserName=SAAS_USER_NAME)

  2. Detach the policies from the group in the SaaS account, and delete the group and the policies:

     attached_group_policies = saas_iam_client.list_attached_group_policies(GroupName=SAAS_USER_GROUP_NAME)
     for PolicyArn in [element["PolicyArn"] for element in attached_group_policies["AttachedPolicies"]]:
         saas_iam_client.detach_group_policy(GroupName=SAAS_USER_GROUP_NAME, PolicyArn=PolicyArn)

     saas_iam_client.delete_group(GroupName=SAAS_USER_GROUP_NAME)
     saas_iam_client.delete_policy(PolicyArn=saas_assume_role_policy_arn)
     saas_iam_client.delete_policy(PolicyArn=saas_s3_policy_arn)

  3. Detach the policies from the role in the customer account, then delete the role and the customer managed policy:

     attached_role_policies = customer_iam_client.list_attached_role_policies(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME)
     for PolicyArn in [element["PolicyArn"] for element in attached_role_policies["AttachedPolicies"]]:
         customer_iam_client.detach_role_policy(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME, PolicyArn=PolicyArn)

     customer_iam_client.delete_role(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME)
     customer_iam_client.delete_policy(PolicyArn=customer_s3_policy_arn)

Conclusion

This post described a possible implementation, using the SageMaker Python SDK, of an Autopilot training job, model deployment, and prediction in a cross-account configuration. The originating account owns the training data and delegates the activities to the account that hosts the SageMaker resources.

You can use the API calls shown in this post to incorporate AutoML capabilities into a SaaS application, by delegating the management and billing of SageMaker resources to the customer account.

SageMaker decouples the environment where the data scientist drives the analysis from the containers that perform each phase of the ML process.

This capability simplifies other cross-account scenarios. For example, a SaaS provider that owns sensitive data could, instead of sharing that data with the customer, expose certified training algorithms and generate models on the customer's behalf; the customer receives the trained model at the end of the Autopilot job.

For more examples of how to integrate Autopilot into SaaS products, see the following posts:

About the Authors

Francesco Polimeni is a Sr Solutions Architect at AWS with focus on Machine Learning. He has over 20 years of experience in professional services and pre-sales organizations for IT management software solutions.

Mehmet Bakkaloglu is a Sr Solutions Architect at AWS. He has vast experience in data analytics and cloud architecture, having provided technical leadership for transformation programs and pre-sales activities in a variety of sectors.


