Improve your data science workflow with a multi-branch training MLOps pipeline using AWS

In this post, you will learn how to create a multi-branch training MLOps continuous integration and continuous delivery (CI/CD) pipeline using AWS CodePipeline and AWS CodeCommit, in addition to Jenkins and GitHub. I discuss the concept of experiment branches, where data scientists can work in parallel and eventually merge their experiment back into the main branch. I also show you how to create an Amazon SageMaker projects template that you can use from within Amazon SageMaker Studio.

SageMaker projects give organizations the ability to set up and standardize developer environments for data scientists and CI/CD systems for MLOps engineers. With SageMaker projects, MLOps engineers or organization admins can define templates that bootstrap the machine learning (ML) workflow with source version control, automated ML pipelines, and a set of code to start iterating over ML use cases. With SageMaker projects, organizations can set up dependency management, code repository management, build reproducibility, and artifact sharing and management. SageMaker projects are provisioned using AWS Service Catalog products.

SageMaker projects already provide a few MLOps CI/CD pipeline templates, which are the recommended way to get started with CI/CD in SageMaker. These training templates let you modify a single predefined branch called main, and all changes to this branch launch a training job. In some cases, though, you may want to use a multi-branch (trunk-based) training pipeline. This type of pipeline gives you more flexibility: you can thoroughly review the code and experiment results before approving models and merging changes into main. The solution in this post enables you to create several experiment branches that each launch their own training job and create their own model artifact. You can then use pull requests to approve some of these models, using a modified SageMaker template available in GitHub.

When data scientists are working on a new model, that work is often experimental, meaning that unsuccessful experiments may be discarded and those with good results can go into production. Each data scientist may be working on a particular attempt at improving the current objective function. While one may be trying out a different architecture, another may be trying a new set of hyperparameters, for instance.

Whatever may be the case, these experiments should be version controlled, tracked, and automated. When the winning experiment is found and becomes eligible to be merged into the main branch, it can be assessed by a lead data scientist. They can view the exact code that was run, identify the key metrics output by the experiment, and have the results of any automated tests before model approval and release. When each experiment has its own branch, you can automate training whenever a Git push is run. This way, we can make sure that all metrics and tests for the model are stored in Studio.

The following image illustrates a potential Git history of two data scientists experimenting on the same project.

This post includes the following sections:

  • Architecture overview
  • Deploy the baseline stack
  • Configure the template to be used from within Studio
  • Create a new project
  • Create and release a new experiment with CodePipeline and CodeCommit
  • Configure the Jenkins instance (optional)
  • Create and release a new experiment with Jenkins and GitHub (optional)
  • Review experiments from pull requests

Architecture overview

The following architecture shows how you can automate the creation of a new CodePipeline pipeline whenever someone creates a new branch. The pipeline also runs when changes are made to that specific branch. Additionally, the architecture shows a release pipeline that runs when a merge happens in the main branch and marks the related model as approved in the model registry.

This is all based on the concept of feature branches in trunk-based development.

The architecture workflow includes the following steps:

  1. The data scientist makes a Git push of a new experiment branch to the remote repository in CodeCommit.
  2. Amazon EventBridge listens for branch create events and invokes an AWS Lambda function.
  3. The function invokes AWS CloudFormation to create a new stack that contains the CodePipeline definition used for the new branch (a sketch of this function follows the list).
  4. The CodePipeline pipeline is triggered automatically after creation. In the final stage, the pipeline triggers CodeBuild, which builds a Docker image, pushes it to Amazon Elastic Container Registry (Amazon ECR), updates the SageMaker pipeline, and invokes it.
  5. The SageMaker pipeline runs all the steps required to train the model and store it in the model registry.
  6. After the pipeline runs, the model awaits approval.
  7. The data scientist creates a pull request in the CodeCommit repository, to have the new branch merge with the main branch.
  8. The lead data scientist approves the pull request if the model is satisfactory.
  9. Another Lambda function is triggered as part of releasing the model pipeline.
  10. The function approves the model in the model registry.
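To make steps 2-4 concrete, the following is a minimal Python sketch of the branch-creation Lambda function, not the exact code from the sample repository: it reads the CodeCommit referenceCreated event delivered by EventBridge and creates the per-branch pipeline stack. The template URL, stack naming, and parameter names are illustrative assumptions.

import boto3

cfn = boto3.client("cloudformation")

def handler(event, context):
    detail = event["detail"]
    branch = detail["referenceName"]  # for example, "experiment/myexperiment"
    repo = detail["repositoryName"]
    # Only react to new experiment branches, not tags or other references
    if detail["referenceType"] != "branch" or not branch.startswith("experiment/"):
        return
    cfn.create_stack(
        StackName=f"{repo}-{branch.replace('/', '-')}-pipeline",
        TemplateURL="https://example-bucket.s3.amazonaws.com/branch-pipeline.yaml",  # assumed location
        Parameters=[
            {"ParameterKey": "BranchName", "ParameterValue": branch},
            {"ParameterKey": "RepositoryName", "ParameterValue": repo},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )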

Optionally, you may not want to use CodePipeline and CodeCommit for the CI/CD pipeline. For that purpose, we also provide the required template to provision the necessary infrastructure to integrate with a Jenkins and GitHub option, as shown in the following diagram.

This architecture includes the following workflow:

  1. The data scientist creates a new branch with the experiment/ prefix and commits their experiment code, pushing the changes to the remote repository.
  2. CodeBuild launches the job in Amazon SageMaker Pipelines.
  3. The pipeline trains the model and stores it in the model registry.
  4. The model is stored with the status Pending.
  5. If the experiment is successful, the data scientist creates a pull request to the main branch.
  6. When the pull request is approved, it triggers the release pipeline.
  7. The release pipeline invokes a Lambda function that approves the model in the model registry (a sketch of this function follows the list).
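The approval function in step 7 (and steps 9-10 of the CodePipeline workflow earlier) can be sketched as follows. This is a hedged illustration rather than the repository's exact code; in particular, matching the model package to the merge commit through the package description is an assumption.

import boto3

sm = boto3.client("sagemaker")

def handler(event, context):
    commit_id = event["commit_id"]
    # List pending model packages for the project's model group
    packages = sm.list_model_packages(
        ModelPackageGroupName="mymodel",  # assumed group name, following the post's example
        ModelApprovalStatus="PendingManualApproval",
    )["ModelPackageSummaryList"]
    for package in packages:
        # Assumption: the training pipeline recorded the commit ID in the description
        if commit_id in (package.get("ModelPackageDescription") or ""):
            sm.update_model_package(
                ModelPackageArn=package["ModelPackageArn"],
                ModelApprovalStatus="Approved",
            )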

Prerequisites

For this walkthrough, you should have the following prerequisites:

Deploy the baseline stack

The purpose of the baseline stack is to provision the standard resources that the SageMaker projects template uses as a seed to create new projects.

Clone the sample repository and deploy the baseline stack with the following code:

git clone https://github.com/aws-samples/sagemaker-custom-project-templates.git
mkdir sample-multi-branch-train
cp -r sagemaker-custom-project-templates/multi-branch-mlops-train/* sample-multi-branch-train
cd sample-multi-branch-train
./deploy.sh -p code_pipeline+code_commit

In the preceding example, you may instead deploy the stack to support Jenkins and GitHub by using ./deploy.sh -p jenkins.

Configure the template to use in Studio

To configure your template, complete the following steps:

  1. Create a portfolio in the AWS Service Catalog, providing entries for Portfolio name, Description, and Owner.
  2. Assign a new product by choosing Upload new product.
  3. Enter the details for Product name, Description, and Owner.

You also specify the CloudFormation template to be used by Service Catalog in provisioning the product.

  4. For Choose a method, select Use a CloudFormation template.
  5. Enter the CloudFormation template deployed by the baseline stack: https://cloud-formation--us-east-1.s3.amazonaws.com/model_train.yaml.
  6. On the navigation pane, choose Products.
  7. On the Tags tab, add the SageMaker visibility tag sagemaker:studio-visibility to the product with value true.
  8. In the navigation pane, choose Portfolios.
  9. On the Constraints tab, choose Create constraint.
  10. Select the role that was created by the baseline stack (MultiBranchTrainMLOpsLaunchRole).

Next, you need to add permission to groups, roles, and users to use the product.

  1. Choose Portfolios in the navigation pane.
  2. On the Groups, roles, and users tab, choose Add groups, roles, users.
  3. Add the relevant groups, roles, and users that should have permission to provision the product (for this list, I add the role Admin and my SageMaker execution role).

The template is now available in Studio.

  1. Open Studio and navigate to the Create project page.
  2. Choose Organization templates.

You can see the AWS Service Catalog product you created.

Create a new project

You’re now ready to create a new project.

  1. Choose the template you created.
  2. Choose Create project.
  3. For Name, enter a name.

Wait for the project to be created. You should see the project with the status Creating.

  4. Add the sample code to the created repository (continue from the previously used terminal):

git init
git stage .
git commit -m "adds sample code"
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/model-mymodel-train
git push --set-upstream origin main

Create and release a new experiment with CodePipeline and CodeCommit

To create and release a new experiment, complete the following steps:

  1. Either clone the CodeCommit repository or start from the previous terminal to submit the experiment code to the repository:

git checkout -b experiment/myexperiment
git stage .
git commit -m "adds some change"
git push --set-upstream origin experiment/myexperiment

After a few seconds, a new pipeline is created in CodePipeline.

You can see the pipeline running, and you should see the Train step update to In progress.

The Train step of the pipeline launches a new SageMaker Pipelines pipeline that trains the model.

In Studio, under SageMaker resources, choose Pipelines on the drop-down menu. You should see the pipeline running.

When the pipeline is complete, a new model gets stored in the SageMaker Model Registry with Pending status.

You can choose Model registry on the drop-down menu to view the model on the SageMaker resources page.

At this point, the data scientist can assess the experiment results and push subsequent commits, attempting to achieve better results for the experiment goal. When doing so, the pipeline is triggered again and new model versions are stored in the model registry.

If the data scientist deems the experiment successful, they can create a pull request, asking to merge the changes from the experiment/myexperiment branch into main.

  1. On the CodeCommit console, under Repositories in the navigation pane, choose your repository.
  2. Choose Code in the navigation pane.
  3. Choose Create pull request.
  4. Choose the destination, generally main, and the source branch, which in this case is experiment/myexperiment.

After you create the pull request, you can review both the code and the experiment’s results, by using Amazon SageMaker Experiments.

You can also find the experiment results by using the Git commit ID of the latest commit in the branch that is being merged. With this ID, you can go to Studio, under SageMaker resources, and choose Experiments and trials. You can find all the experiments for your model, in this case named model-mymodel, and also the trials, named after the commit ID.
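Because trials are named after the commit ID, a reviewer can also look them up programmatically. The following is a small sketch using boto3; the experiment name follows the post's model-mymodel example and is otherwise an assumption.

import boto3

sm = boto3.client("sagemaker")

commit_id = "1a2b3c4"  # latest commit on the branch being merged
trials = sm.list_trials(ExperimentName="model-mymodel")["TrialSummaries"]
matching = [t["TrialName"] for t in trials if commit_id in t["TrialName"]]
print(matching)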

  1. On the CodeCommit console, choose Merge.
  2. Select Fast forward merge.
  3. Choose Merge pull request.

When the merge is complete, the respective model gets approved automatically in the model registry. Also, because we chose to delete the experiment branch after the merge, the provisioned experiment pipeline is automatically deleted.
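The cleanup path can be sketched under the same assumptions as the branch-creation function shown earlier: an EventBridge rule for referenceDeleted events invokes a Lambda function that deletes the branch's pipeline stack.

import boto3

cfn = boto3.client("cloudformation")

def handler(event, context):
    detail = event["detail"]
    if detail["event"] == "referenceDeleted" and detail["referenceType"] == "branch":
        branch = detail["referenceName"]
        repo = detail["repositoryName"]
        # Mirror the stack naming used when the branch was created
        cfn.delete_stack(StackName=f"{repo}-{branch.replace('/', '-')}-pipeline")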

Configure the Jenkins instance (optional)

To configure the Jenkins instance, you must install the required plugins, configure the train pipeline, and configure the release pipeline.

Install the required plugins

On the dashboard, choose Manage Jenkins, Manage Plugins, and Available. Install the following plugins:

Configure the train pipeline

To configure the train pipeline, complete the following steps:

  1. On the dashboard, choose New item.
  2. Enter the name model-mymodel-train.
  3. Select Freestyle Project.
  4. In the Source Code Management section, select Git.
  5. For Repository URL, enter your URL.
  6. For Branches to build, enter */experiment/*.
  7. In the Build Triggers section, select Poll SCM.
  8. For Schedule, enter H/2 * * * *.
  9. In the Build Environment section, select Delete workspace before build starts.
  10. Now you add a build step of type AWS CodeBuild.
  11. To generate the credentials, go to the user page and choose Create access key.
  12. Make sure to store the secret to use again in a later step.
  13. In the AWS Configuration section, select Manually specify access and secret keys.
  14. Enter keys for AWS Access Key and AWS Secret Key.
  15. In the Project Configuration section, provide the Region and project name (in this case, model-mymodel-train).
  16. Select Use Jenkins source.
  17. For Environment Variables Override, enter the required environment variables for the build script:

[ { BRANCH_NAME, ${GIT_BRANCH} }, { COMMIT_HASH, ${GIT_COMMIT} } ]

Next, you add a file operation build step to delete any remaining files after the build.

  18. Choose Add build step and File Operations.
  19. On the Add menu, choose File Delete.
  20. For Include File Pattern, enter *.

Configure the release pipeline

You’re now ready to configure the release pipeline.

  1. On the dashboard, choose New Item.
  2. Enter the name model-mymodel-release.
  3. Select Freestyle Project.
  4. In the Source Code Management section, select Git.
  5. For Repository URL, enter your URL.
  6. For Branches to build, enter */main*.
  7. In the Build Triggers section, select Poll SCM.
  8. For Schedule, enter H/2 * * * *.
  9. In the Build Environment section, select Delete workspace before build starts.
  10. Add a build step of type Execute shell.
  11. In the Command field, add the following command:

echo "MERGE_PARENT=$(git rev-parse HEAD^2)" >> env.properties

  12. Add a build step of type Inject environment variables.
  13. For Properties File Path, enter env.properties.
  14. Add a build step of type AWS Lambda invocation.
  15. For AWS Region, enter your Region.
  16. For Function name, enter the name of your function (for this post, release-model-package-mymodel).
  17. For Payload in json format, enter {"commit_id": "${MERGE_PARENT}"}.
  18. Lastly, add a File Delete build step.
  19. For Include File Pattern, enter *.

Create and release a new experiment with Jenkins and GitHub (optional)

To create and release a new experiment with Jenkins and GitHub, you first submit the experiment code to the repository, then open a pull request with the successful experiment code.

Submit experiment code to the repository

Either clone the GitHub repository or start from the previous terminal:

git checkout -b experiment/myexperiment
git stage .
git commit -m "adds some change"
git push --set-upstream origin experiment/myexperiment

After a few seconds, the CodeBuild build starts running.

The status of the CodeBuild model-mymodel-train project changes to In Progress.

If you look in Studio, in the SageMaker resources section, you can see that the pipeline mymodel-experiment-myexperiment is running.

When the pipeline is complete, a new model gets stored in the SageMaker Model Registry with Pending status.

Looking in the Jenkins UI, if we select the model-mymodel-train pipeline and then choose Status, we should see that the pipeline ran successfully.

After the pipeline runs, if we go to Studio in the SageMaker resources section and choose Model registry, we should see the created model with Pending status.

At this point, the data scientist can assess the experiment results and push subsequent commits, attempting to achieve better results for the experiment goal. When doing so, the pipeline starts again and new model versions are stored in the model registry.

If the data scientist deems the experiment successful, they can create a pull request, asking to merge the changes from the experiment/myexperiment branch into main.

Open a pull request with the successful experiment code

In the GitHub UI, you can open a pull request from the experiment branch experiment/myexperiment into the main branch.

When the pull request gets created, both the code and the results of the experiment can be reviewed in the SageMaker resources section under Experiments and trials. This includes information such as charts, metrics, parameters, artifacts, debugger, model explainability, and bias reports.

If everything looks good, we can merge the pull request by choosing Create a merge commit and then choosing Merge pull request.

As soon as the merge is complete, the respective model gets approved automatically in the model registry. You can view it in SageMaker resources under Model registry.

Review experiments from pull requests

To review experiments from a pull request, data scientists need to identify the commit ID of the latest commit in the pull request. After doing so, they can find the trial with the given commit ID. Alternatively, teams can customize the trial name by building a string and assigning it.

In Studio resources, in the Experiments and trials section, SageMaker allows you to see several different types of metadata and information that can be associated with a model.

There are different aspects of an experiment that can be tracked and considered for approval.

Charts and metrics

You may want to store experiment metrics into the trial using the SageMaker Experiments SDK:

from smexperiments.tracker import Tracker

# Load the tracker inside a SageMaker training job
with Tracker.load() as tracker:
    for epoch in range(5):
        my_loss = ...  # compute the epoch's loss
        tracker.log_metric(
            metric_name='loss',
            value=my_loss,
            iteration_number=epoch,
        )

Experiments is integrated with Studio. When you use Studio, Experiments automatically tracks your experiments and trials and presents visualizations of the tracked data and an interface to search the data.

Experiments automatically organizes, ranks, and sorts trials based on a chosen metric using the concept of a trial leaderboard. Studio produces real-time data visualizations, such as metric charts and graphs, to quickly compare and identify the best performing models. These are updated in real time as the experiment progresses.

Parameters

You can log which parameters were used during the experiment using the log_parameters function of the SDK.
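For example, a short sketch (assuming it runs inside a SageMaker training job, with illustrative parameter values):

from smexperiments.tracker import Tracker

with Tracker.load() as tracker:
    tracker.log_parameters({
        "learning_rate": 0.01,
        "batch_size": 64,
        "epochs": 5,
    })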

Artifacts

Optionally, you may want to add additional arbitrary data tied to the experiment, such as custom charts or visualizations. These are stored by SageMaker in Amazon Simple Storage Service (Amazon S3) at the end of the training job. To store them, you can simply use log_artifacts.
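For a single file, the Tracker exposes a log_artifact helper; a hedged example follows, with an illustrative file path:

from smexperiments.tracker import Tracker

with Tracker.load() as tracker:
    # Attach a custom chart produced during training
    tracker.log_artifact(file_path="charts/confusion_matrix.png")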

Debugger

Amazon SageMaker Debugger allows you to monitor training jobs in real time. You can detect suboptimal resource utilization as well as issues causing your model to not converge.

Model explainability and bias report

Amazon SageMaker Clarify provides tools to help explain how ML models make predictions. These tools can help ML modelers and developers and other internal stakeholders understand model characteristics before deployment and debug predictions provided by the model after it’s deployed.

Clean up

To clean up the resources created as part of this post, make sure to delete all the created stacks. To do that, empty the S3 buckets manually first, in addition to deleting the models from the model registry.
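As a sketch of the registry cleanup (assuming the model package group is named after the project, as in this post's mymodel example):

import boto3

sm = boto3.client("sagemaker")

packages = sm.list_model_packages(ModelPackageGroupName="mymodel")["ModelPackageSummaryList"]
for package in packages:
    sm.delete_model_package(ModelPackageName=package["ModelPackageArn"])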

You can also delete the SageMaker project with the following code:

aws sagemaker delete-project --project-name mymodel

Conclusion

In this post, I discussed how you can create a model training pipeline fully integrated with Git, using either CodePipeline and CodeCommit, or Jenkins and GitHub. Different data scientists can use this pipeline concurrently so that each of them can experiment independently. When a winning model is found, they can create a pull request and merge their changes into the main branch.

Additionally, because the pipeline is fully automated, ML engineers can add metadata and information about the experiments that is useful from a governance standpoint. For instance, they can collect metrics about the experiments and attach them with the model artifact or detect unwanted bias in the model. Try it out and tell us what you think in the comments!

About the Author

Bruno Klein is a Machine Learning Engineer in the AWS ProServe team. He particularly enjoys creating automations and improving the lifecycle of models in production. In his free time, he likes to spend time outdoors and hiking.




Secure Amazon SageMaker Studio presigned URLs Part 2: Private API with JWT authentication


In part 1 of this series, we demonstrated how to resolve an Amazon SageMaker Studio presigned URL from a corporate network using Amazon private VPC endpoints without traversing the internet. In this post, we will continue to build on top of the previous solution to demonstrate how to build a private API Gateway via Amazon API Gateway as a proxy interface to generate and access Amazon SageMaker presigned URLs. Furthermore, we add an additional guardrail to ensure presigned URLs are only generated and accessed for the authenticated end-user within the corporate network.

Solution overview

The following diagram illustrates the architecture of the solution.

The process includes the following steps:

  1. In the Amazon Cognito user pool, first set up a user with the name matching their Studio user profile and register Studio as the app client in the user pool.
  2. The user federates from their corporate identity provider (IdP) and authenticates with the Amazon Cognito user pool for accessing Studio.
  3. Amazon Cognito returns a token to the user authorizing access to the Studio application.
  4. The user invokes the createStudioPresignedUrl API on API Gateway along with a token in the header.
  5. API Gateway invokes a custom AWS Lambda authorizer and validates the token.
  6. When the token is valid, Amazon Cognito returns an access grant policy with the Studio user profile ID to API Gateway.
  7. API Gateway invokes the createStudioPresignedUrl Lambda function to create the Studio presigned URL.
  8. The createStudioPresignedUrl function creates a presigned URL using the SageMaker API VPC endpoint and returns it to the caller (a sketch of this function follows the list).
  9. User accesses the presigned URL from their corporate network that resolves over the Studio VPC endpoint.
  10. The function’s AWS Identity and Access Management (IAM) policy makes sure that the presigned URL creation and access are performed via VPC endpoints.
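To make steps 7 and 8 concrete, the following is a minimal sketch of the createStudioPresignedUrl function, not the solution's exact code. The event shape (the user profile name coming from the authorizer context) and the domain ID are illustrative assumptions.

import json
import boto3

sm = boto3.client("sagemaker")

def handler(event, context):
    # Assumption: the Lambda authorizer placed the validated profile name in the context
    user_profile = event["requestContext"]["authorizer"]["userProfileName"]
    response = sm.create_presigned_domain_url(
        DomainId="d-xxxxxxxxxxxx",  # your Studio domain ID
        UserProfileName=user_profile,
    )
    return {"statusCode": 200, "body": json.dumps({"url": response["AuthorizedUrl"]})}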

The following sections walk you through solution deployment, configuration, and validation for the API Gateway private API for creating and resolving a Studio presigned URL from a corporate network using VPC endpoints.

  1. Deploy the solution
  2. Configure the Amazon Cognito user
  3. Authenticating the private API for the presigned URL using a JSON Web Token
  4. Configure the corporate DNS server for accessing the private API
  5. Test the API Gateway private API for a presigned URL from the corporate network
  6. Pre-Signed URL Lambda Auth Policy
  7. Cleanup

Deploy the solution

You can deploy the solution through either the AWS Management Console or the AWS Serverless Application Model (AWS SAM).

To deploy the solution via the console, launch the following AWS CloudFormation template in your account by choosing Launch Stack. It takes approximately 10 minutes for the CloudFormation stack to complete.

To deploy the solution using AWS SAM, you can find the latest code in the aws-samples GitHub repository, where you can also contribute to the sample code. The following commands show how to deploy the solution using the AWS SAM CLI. If not currently installed, install the AWS SAM CLI.

  1. Clone the repository at https://github.com/aws-samples/secure-sagemaker-studio-presigned-url.
  2. After you clone the repo, navigate to the source and run the following code:

Configure the Amazon Cognito user

To configure your Amazon Cognito user, complete the following steps:

  1. Create an Amazon Cognito user with the same name as a SageMaker user profile: aws cognito-idp admin-create-user --user-pool-id <user-pool-id> --username <username>
  2. Set the user password: aws cognito-idp admin-set-user-password --user-pool-id <user-pool-id> --username <username> --password <password> --permanent
  3. Get an access token: aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id <client-id> --auth-parameters USERNAME=<username>,PASSWORD=<password>

Authenticating the private API for the presigned URL using a JSON Web Token

When you deployed a private API for creating a SageMaker presigned URL, you added a guardrail that prevents anyone outside the corporate network and VPC endpoint from accessing the presigned URL. However, without another control on the private API within the corporate network, any internal user would be able to pass unauthenticated parameters for the SageMaker user profile and access any SageMaker app.

To mitigate this issue, we propose passing a JSON Web Token (JWT) for the authenticated caller to the API Gateway and validating that token with a JWT authorizer. There are multiple options for implementing an authorizer for the private API Gateway, using either a custom Lambda authorizer or Amazon Cognito.

With a custom Lambda authorizer, you can embed a SageMaker user profile name in the returned policy. This prevents any users within the corporate network from being able to send any SageMaker user profile name for creating a presigned URL that they’re not authorized to create. We use Amazon Cognito to generate our tokens and a custom Lambda authorizer to validate and return the appropriate policy. For more information, refer to Building fine-grained authorization using Amazon Cognito, API Gateway, and IAM. The Lambda authorizer uses the Amazon Cognito user name as the user profile name.
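A hedged sketch of such an authorizer follows: it validates the Cognito-issued JWT (verification itself is elided behind an assumed helper) and returns an Allow policy that carries the Cognito user name as the Studio user profile name in the authorizer context.

def handler(event, context):
    token = event["authorizationToken"].replace("Bearer ", "")
    # Assumption: verify_cognito_jwt validates the signature against the
    # user pool's JWKS and returns the token claims
    claims = verify_cognito_jwt(token)
    return {
        "principalId": claims["sub"],
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
        "context": {"userProfileName": claims["cognito:username"]},
    }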

If you’re unable to use Amazon Cognito, you can develop a custom application to authenticate and pass end-user tokens to the Lambda authorizer. For more information, refer to Use API Gateway Lambda authorizers.

Configure the corporate DNS server for accessing the private API

To configure your corporate DNS server, complete the following steps:

  1. On the Amazon Elastic Compute Cloud (Amazon EC2) console, choose your on-premises DNS EC2 instance and connect via Systems Manager Session Manager.
  2. Add a zone record in the /etc/named.conf file for resolving to the API Gateway's DNS name via your Amazon Route 53 inbound resolver, as shown in the following code:

zone "zxgua515ef.execute-api.<region>.amazonaws.com" {
    type forward;
    forward only;
    forwarders { 10.16.43.122; 10.16.102.163; };
};
  3. Restart the named service using the following command: sudo service named restart

Validate requesting a presigned URL from the API Gateway private API for authorized users

In a real-world scenario, you would implement a front-end interface that passes the appropriate Authorization headers for authenticated and authorized resources, using either a custom solution or AWS Amplify. For brevity, the following steps use Postman to quickly validate that the deployed solution restricts requesting the presigned URL for an internal user unless they're authorized to do so.

To validate the solution with Postman, complete the following steps:

  1. Install Postman on the WINAPP EC2 instance.
  2. Open Postman and add the access token to your Authorization header: Authorization: Bearer <access-token>
  3. Modify the API Gateway URL to access it from your internal EC2 instance:
    1. Add the VPC endpoint into your API Gateway URL: https://<vpc-endpoint-id>.execute-api.<region>.amazonaws.com/dev/EMPLOYEE_ID
    2. Add the Host header with a value of your API Gateway URL: <api-id>.execute-api.<region>.amazonaws.com
    3. First, change the EMPLOYEE_ID to your Amazon Cognito user and SageMaker user profile name. Make sure you receive an authorized presigned URL.
    4. Then change the EMPLOYEE_ID to a user that is not yours and make sure you receive an access failure.
  4. On the Amazon EC2 console, choose your on-premises WINAPP instance and connect via your RDP client.
  5. Open a Chrome browser and navigate to your authorized presigned URL to launch Studio.

Studio launches over the VPC endpoint, with the remote address shown as the Studio VPC endpoint IP.

If the presigned URL is accessed outside of the corporate network, the resolution fails because the IAM policy condition for the presigned URL enforces creation and access from a VPC endpoint.

Pre-Signed URL Lambda Auth Policy

The preceding solution creates the following IAM policy for the Lambda function that generates the presigned URL for accessing SageMaker Studio.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Condition": {
                "IpAddress": {"aws:VpcSourceIp": "10.16.0.0/16"}
            },
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<region>:<account-id>:user-profile/*/*",
            "Effect": "Allow"
        },
        {
            "Condition": {
                "IpAddress": {"aws:SourceIp": "192.168.10.0/24"}
            },
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<region>:<account-id>:user-profile/*/*",
            "Effect": "Allow"
        },
        {
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": ["vpce-sm-api-xx", "vpce-sm-api-yy"]
                }
            },
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<region>:<account-id>:user-profile/*/*",
            "Effect": "Allow"
        }
    ]
}

This policy enforces that the Studio presigned URL is both generated and accessed via one of these three entry points:

  1. aws:VpcSourceIp as your AWS VPC CIDR
  2. aws:SourceIp as your corporate network CIDR
  3. aws:sourceVpce as your SageMaker API VPC endpoints

Cleanup

To avoid incurring ongoing charges, delete the CloudFormation stacks you created. Alternatively, if you deployed the solution using AWS SAM, authenticate to the AWS account where the solution was deployed and run sam delete.

Conclusion

In this post, we demonstrated how to access Studio using a private API Gateway from a corporate network using Amazon private VPC endpoints, preventing access to presigned URLs outside the corporate network, and securing the API Gateway with a JWT authorizer using Amazon Cognito and custom Lambda authorizers.

Try out this solution, experiment with integrating it with your corporate portal, and leave your feedback in the comments!

About the Authors

Ram Vittal is a machine learning solutions architect at AWS. He has over 20+ years of experience architecting and building distributed, hybrid and cloud applications. He is passionate about building secure and scalable AI/ML and Big Data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis, photography, and action movies.

Jonathan Nguyen is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on Threat Detection and Incident Response. Today, he helps enterprise customers develop a comprehensive AWS Security strategy, deploy security solutions at scale, and train customers on AWS Security best practices.

Chris Childers is a Cloud Infrastructure Architect in Professional Services at AWS. He works with AWS customers to design and automate their cloud infrastructure and improve their adoption of DevOps culture and processes.




Secure Amazon SageMaker Studio presigned URLs Part 1: Foundational infrastructure


You can access Amazon SageMaker Studio notebooks from the Amazon SageMaker console via AWS Identity and Access Management (IAM) authenticated federation from your identity provider (IdP), such as Okta. When a Studio user opens the notebook link, Studio validates the federated user’s IAM policy to authorize access, and generates and resolves the presigned URL for the user. Because the SageMaker console runs on an internet domain, this generated presigned URL is visible in the browser session. This presents an undesired threat vector for exfiltration and gaining access to customer data when proper access controls are not enforced.

Studio supports a few methods for enforcing access controls against presigned URL data exfiltration:

  • Client IP validation using the IAM policy condition aws:sourceIp
  • Client VPC validation using the IAM condition aws:sourceVpc
  • Client VPC endpoint validation using the IAM policy condition aws:sourceVpce

When you access Studio notebooks from the SageMaker console, the only available option is client IP validation with the IAM policy condition aws:sourceIp. However, your workforce may reach the internet through browser traffic routing products such as Zscaler to ensure scale and compliance. These products generate their own source IP, whose range is not controlled by the enterprise customer, which makes it impossible for these customers to use the aws:sourceIp condition.

To use client VPC endpoint validation using the IAM policy condition aws:sourceVpce, the creation of a presigned URL needs to originate in the same customer VPC where Studio is deployed, and resolution of the presigned URL needs to happen via a Studio VPC endpoint on the customer VPC. This resolution of the presigned URL during access time for corporate network users can be accomplished using DNS forwarding rules (both in Zscaler and corporate DNS) and then into the customer VPC endpoint using an AWS Route 53 inbound resolver.

In this part, we discuss the overarching architecture for securing the Studio presigned URL and demonstrate how to set up the foundational infrastructure to create and launch a Studio presigned URL through your VPC endpoint over a private network without traversing the internet. This serves as the foundational layer for preventing data exfiltration by external bad actors who gain access to a Studio presigned URL, and for blocking unauthorized or spoofed corporate user access within a corporate environment.

Solution overview

The following diagram illustrates the overarching solution architecture.

The process includes the following steps:

  1. A corporate user authenticates via their IdP, connects to their corporate portal, and opens the Studio link from the corporate portal.
  2. The corporate portal application makes a private API call using an API Gateway VPC endpoint to create a presigned URL.
  3. The API Gateway VPC endpoint “create presigned URL” call is forwarded to the Route 53 inbound resolver on the customer VPC as configured in the corporate DNS.
  4. The VPC DNS resolver resolves it to the API Gateway VPC endpoint IP. Optionally, it looks up a private hosted zone record if it exists.
  5. The API Gateway VPC endpoint routes the request via the Amazon private network to the “create presigned URL API” running in the API Gateway service account.
  6. API Gateway invokes the create-pre-signedURL private API and proxies the request to the create-pre-signedURL Lambda function.
  7. The create-pre-signedURL Lambda call is invoked via the Lambda VPC endpoint.
  8. The create-pre-signedURL function runs in the service account, retrieves the authenticated user context (user ID, Region, and so on), looks up a mapping table to identify the SageMaker domain and user profile identifier, makes a SageMaker CreatePresignedDomainUrl API call, and generates a presigned URL. The Lambda service role has the source VPC endpoint conditions defined for the SageMaker API and Studio.
  9. The generated presigned URL is resolved over the Studio VPC endpoint.
  10. Studio validates that the presigned URL is being accessed via the customer’s VPC endpoint defined in the policy, and returns the result.
  11. The Studio notebook is returned to the user’s browser session over the corporate network without traversing the internet.

The following sections walk you through how to implement this architecture to resolve Studio presigned URLs from a corporate network using VPC endpoints. We demonstrate a complete implementation by showing the following steps:

  1. Set up the foundational architecture.
  2. Configure the corporate app server to access a SageMaker presigned URL via a VPC endpoint.
  3. Set up and launch Studio from the corporate network.

Set up the foundational architecture

In the post Access an Amazon SageMaker Studio notebook from a corporate network, we demonstrated how to resolve a presigned URL domain name for a Studio notebook from a corporate network without traversing the internet. You can follow the instructions in that post to set up the foundational architecture, and then return to this post and proceed to the next step.

Configure the corporate app server to access a SageMaker presigned URL via a VPC endpoint

To enable accessing Studio from your internet browser, we set up an on-premises app server on Windows Server on the on-premises VPC public subnet. However, the DNS queries for accessing Studio are routed through the corporate (private) network. Complete the following steps to configure routing Studio traffic through the corporate network:

  1. Connect to your on-premises Windows app server.

  2. Choose Get Password then browse and upload your private key to decrypt your password.
  3. Use an RDP client and connect to the Windows Server using your credentials.
    Resolving Studio DNS from the Windows Server command prompt results in using public DNS servers, as shown in the following screenshot.
    Now we update Windows Server to use the on-premises DNS server that we set up earlier.
  4. Navigate to Control Panel, Network and Internet, and choose Network Connections.
  5. Right-click Ethernet and choose the Properties tab.
  6. Update the preferred DNS server to your on-premises DNS server IP.
  7. Navigate to VPC and Route Tables and choose your STUDIO-ONPREM-PUBLIC-RT route table.
  8. Add a route to 10.16.0.0/16 with the target as the peering connection that we created during the foundational architecture setup.

Set up and launch Studio from your corporate network

To set up and launch Studio, complete the following steps:

  1. Download Chrome and launch the browser on this Windows instance.
    You may need to turn off Internet Explorer Enhanced Security Configuration to allow file downloads.
  2. In your local device Chrome browser, navigate to the SageMaker console and open the Chrome developer tools Network tab.
  3. Launch the Studio app and observe the Network tab for the authtoken parameter value, which includes the generated presigned URL along with the remote server address that the URL is routed to for resolution. In this example, the remote address 100.21.12.108 is one of the public DNS server addresses used to resolve the SageMaker DNS domain name d-h4cy01pxticj.studio.us-west-2.sagemaker.aws.
  4. Repeat these steps from the Amazon Elastic Compute Cloud (Amazon EC2) Windows instance that you configured as part of the foundational architecture.

We can observe that the remote address is not the public DNS IP; instead, it's the Studio VPC endpoint IP, 10.16.42.74.

Conclusion

In this post, we demonstrated how to resolve a Studio presigned URL from a corporate network using Amazon private VPC endpoints without exposing the presigned URL resolution to the internet. This further secures your enterprise security posture for accessing Studio from a corporate network for building highly secure machine learning workloads on SageMaker. In part 2 of this series, we further extend this solution to demonstrate how to build a private API for accessing Studio with aws:sourceVPCE IAM policy validation and token authentication. Try out this solution and leave your feedback in the comments!

About the Authors

Ram Vittal is a machine learning solutions architect at AWS. He has over 20+ years of experience architecting and building distributed, hybrid and cloud applications. He is passionate about building secure and scalable AI/ML and Big Data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis and photography.

Neelam Koshiya is an enterprise solution architect at AWS. Her current focus is to help enterprise customers with their cloud adoption journey for strategic business outcomes. In her spare time, she enjoys reading and being outdoors.




Use a custom image to bring your own development environment to RStudio on Amazon SageMaker


RStudio on Amazon SageMaker is the industry's first fully managed RStudio Workbench in the cloud. You can quickly launch the familiar RStudio integrated development environment (IDE), and dial up and down the underlying compute resources without interrupting your work, making it easy to build machine learning (ML) and analytics solutions in R at scale. RStudio on SageMaker already comes with a built-in image preconfigured with R programming and data science tools; however, you often need to customize your IDE environment. Starting today, you can bring your own custom image with packages and tools of your choice, and make them available to all the users of RStudio on SageMaker in a few clicks.

Bringing your own custom image has several benefits. You can standardize and simplify the getting started experience for data scientists and developers by providing a starter image, preconfigure the drivers required for connecting to data stores, or pre-install specialized data science software for your business domain. Furthermore, organizations that have previously hosted their own RStudio Workbench may have existing containerized environments that they want to continue to use in RStudio on SageMaker.

In this post, we share step-by-step instructions to create a custom image and bring it to RStudio on SageMaker using the AWS Management Console or AWS Command Line Interface (AWS CLI). You can get your first custom IDE environment up and running in a few simple steps. For more information on the content discussed in this post, refer to Bring your own RStudio image.

Solution overview

When a data scientist starts a new session in RStudio on SageMaker, a new on-demand ML compute instance is provisioned and a container image that defines the runtime environment (operating system, libraries, R versions, and so on) is run on the ML instance. You can provide your data scientists multiple choices for the runtime environment by creating custom container images and making them available on the RStudio Workbench launcher, as shown in the following screenshot.

The following diagram describes the process to bring your custom image. First you build a custom container image from a Dockerfile and push it to a repository in Amazon Elastic Container Registry (Amazon ECR). Next, you create a SageMaker image that points to the container image in Amazon ECR, and attach that image to your SageMaker domain. This makes the custom image available for launching a new session in RStudio.

Prerequisites

To implement this solution, you must have the following prerequisites:

We provide more details on each in this section.

RStudio on SageMaker domain

If you have an existing SageMaker domain with RStudio enabled prior to April 7, 2022, you must delete and recreate the RStudioServerPro app under the user profile name domain-shared to get the latest updates for the bring-your-own-custom-image capability. The AWS CLI commands are as follows. Note that this action interrupts RStudio users on SageMaker.

aws sagemaker delete-app --domain-id <domain-id> --app-type RStudioServerPro --app-name default --user-profile-name domain-shared
aws sagemaker create-app --domain-id <domain-id> --app-type RStudioServerPro --app-name default --user-profile-name domain-shared

If this is your first time using RStudio on SageMaker, follow the step-by-step setup process described in Get started with RStudio on Amazon SageMaker, or run the following AWS CloudFormation template to set up your first RStudio on SageMaker domain. If you already have a working RStudio on SageMaker domain, you can skip this step.

The following RStudio on SageMaker CloudFormation template requires an RStudio license approved through AWS License Manager. For more about licensing, refer to RStudio license. Also note that only one SageMaker domain is permitted per AWS Region, so you’ll need to use an AWS account and Region that doesn’t have an existing domain.

  1. Choose Launch Stack.
    The link takes you to the us-east-1 Region, but you can change to your preferred Region.
  2. In the Specify template section, choose Next.
  3. In the Specify stack details section, for Stack name, enter a name.
  4. For Parameters, enter a SageMaker user profile name.
  5. Choose Next.
  6. In the Configure stack options section, choose Next.
  7. In the Review section, select I acknowledge that AWS CloudFormation might create IAM resources and choose Next.
  8. When the stack status changes to CREATE_COMPLETE, go to the Control Panel on the SageMaker console to find the domain and the new user.

IAM policies to interact with Amazon ECR

To interact with your private Amazon ECR repositories, you need the following IAM permissions in the IAM user or role you’ll use to build and push Docker images:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecr:CreateRepository",
                "ecr:BatchGetImage",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeImages",
                "ecr:DescribeRepositories",
                "ecr:UploadLayerPart",
                "ecr:ListImages",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

To initially build from a public Amazon ECR image as shown in this post, you need to attach the AWS-managed AmazonElasticContainerRegistryPublicReadOnly policy to your IAM user or role as well.

To build a Docker container image, you can use either a local Docker client or the SageMaker Docker Build CLI tool from a terminal within RStudio on SageMaker. For the latter, follow the prerequisites in Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks to set up the IAM permissions and CLI tool.

AWS CLI versions

There are minimum version requirements for the AWS CLI tool to run the commands mentioned in this post. Make sure to upgrade AWS CLI on your terminal of choice:

  • AWS CLI v1 >= 1.23.6
  • AWS CLI v2 >= 2.6.2

Prepare a Dockerfile

You can customize your runtime environment in RStudio in a Dockerfile. Because the customization depends on your use case and requirements, we show you the essentials and the most common customizations in this example. You can download the full sample Dockerfile.

Install RStudio Workbench session components

The most important software to install in your custom container image is RStudio Workbench. We download it from the public S3 bucket hosted by RStudio PBC, which offers releases for many versions and OS distributions. The version you install needs to be compatible with the RStudio Workbench version used in RStudio on SageMaker, which is 1.4.1717-3 at the time of writing. The OS (the OS argument in the following snippet) needs to match the base OS used in the container image. In our sample Dockerfile, the base image is Amazon Linux 2 from an AWS-managed public Amazon ECR repository, and the compatible RStudio Workbench OS is centos7.

FROM public.ecr.aws/amazonlinux/amazonlinux
...
ARG RSW_VERSION=1.4.1717-3
ARG RSW_NAME=rstudio-workbench-rhel
ARG OS=centos7
ARG RSW_DOWNLOAD_URL=https://s3.amazonaws.com/rstudio-ide-build/server/${OS}/x86_64

RUN RSW_VERSION_URL=`echo -n "${RSW_VERSION}" | sed 's/+/-/g'` \
    && curl -o rstudio-workbench.rpm ${RSW_DOWNLOAD_URL}/${RSW_NAME}-${RSW_VERSION_URL}-x86_64.rpm \
    && yum install -y rstudio-workbench.rpm

You can find all the OS release options with the following command:

aws s3 ls s3://rstudio-ide-build/server/

Install R (and versions of R)

The runtime for your custom RStudio container image needs at least one version of R. We can first install a version of R and make it the default R by creating soft links to /usr/local/bin/:

# Install main R version
ARG R_VERSION=4.1.3
RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-${R_VERSION}-1-1.x86_64.rpm \
    && yum install -y R-${R_VERSION}-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-${R_VERSION}-1-1.x86_64.rpm

RUN ln -s /opt/R/${R_VERSION}/bin/R /usr/local/bin/R \
    && ln -s /opt/R/${R_VERSION}/bin/Rscript /usr/local/bin/Rscript

Data scientists often need multiple versions of R so that they can easily switch between projects and code base. RStudio on SageMaker supports easy switching between R versions, as shown in the following screenshot.

RStudio on SageMaker automatically scans and discovers versions of R in the following directories:

/usr/lib/R
/usr/lib64/R
/usr/local/lib/R
/usr/local/lib64/R
/opt/local/lib/R
/opt/local/lib64/R
/opt/R/*
/opt/local/R/*

We can install more versions in the container image, as shown in the following snippet. They will be installed in /opt/R/.

RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-4.0.5-1-1.x86_64.rpm \
    && yum install -y R-4.0.5-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-4.0.5-1-1.x86_64.rpm

RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-3.6.3-1-1.x86_64.rpm \
    && yum install -y R-3.6.3-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-3.6.3-1-1.x86_64.rpm

RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-3.5.3-1-1.x86_64.rpm \
    && yum install -y R-3.5.3-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-3.5.3-1-1.x86_64.rpm

Install RStudio Professional Drivers

Data scientists often need to access data from sources such as Amazon Athena and Amazon Redshift within RStudio on SageMaker. You can do so using RStudio Professional Drivers and RStudio Connections. Make sure you install the relevant libraries and drivers as shown in the following snippet:

# Install RStudio Professional Drivers ----------------------------------------#
RUN yum update -y \
    && yum install -y unixODBC unixODBC-devel \
    && yum clean all

ARG DRIVERS_VERSION=2021.10.0-1
RUN curl -O https://drivers.rstudio.org/7C152C12/installer/rstudio-drivers-${DRIVERS_VERSION}.el7.x86_64.rpm \
    && yum install -y rstudio-drivers-${DRIVERS_VERSION}.el7.x86_64.rpm \
    && yum clean all \
    && rm -f rstudio-drivers-${DRIVERS_VERSION}.el7.x86_64.rpm \
    && cp /opt/rstudio-drivers/odbcinst.ini.sample /etc/odbcinst.ini

RUN /opt/R/${R_VERSION}/bin/R -e 'install.packages("odbc", repos="https://packagemanager.rstudio.com/cran/__linux__/centos7/latest")'

Install custom libraries

You can also install additional R and Python libraries so that data scientists don’t need to install them on the fly:

RUN /opt/R/${R_VERSION}/bin/R -e "install.packages(c('reticulate', 'readr', 'curl', 'ggplot2', 'dplyr', 'stringr', 'fable', 'tsibble', 'feasts', 'remotes', 'urca', 'sodium', 'plumber', 'jsonlite'), repos='https://packagemanager.rstudio.com/cran/__linux__/centos7/latest')"

RUN /opt/python/${PYTHON_VERSION}/bin/pip install --upgrade 'boto3>1.0,<2.0' 'awscli>1.0,<2.0' 'sagemaker[local]<3' 'sagemaker-studio-image-build' 'numpy'

When you’ve finished your customization in a Dockerfile, it’s time to build a container image and push it to Amazon ECR.

Build and push to Amazon ECR

You can build a container image from the Dockerfile from a terminal where the Docker engine is installed, such as your local terminal or AWS Cloud9. If you’re building it from a terminal within RStudio on SageMaker, you can use SageMaker Studio Image Build. We demonstrate the steps for both approaches.

In a local terminal where the Docker engine is present, you can run the following commands from where the Dockerfile is. You can use the sample script create-and-update-image.sh.

IMAGE_NAME=r-4.1.3-rstudio-1.4.1717-3 # the name for the SageMaker image
REPO=rstudio-custom # ECR repository name
TAG=$IMAGE_NAME

# Log in to your Amazon ECR
aws ecr get-login-password | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

# Create a repo
aws ecr create-repository --repository-name ${REPO}

# Build a Docker image and push it to the repo
docker build . -t ${REPO}:${TAG} -t ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}
docker push ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}

In a terminal on RStudio on SageMaker, run the following commands:

pip install sagemaker-studio-image-build
sm-docker build . --repository ${REPO}:${IMAGE_NAME}

After these commands, you have a repository and a Docker container image in Amazon ECR for our next step, in which we attach the container image for use in RStudio on SageMaker. Note the image URI in Amazon ECR, <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>, for later use.

Update RStudio on SageMaker through the console

RStudio on SageMaker allows runtime customization through the use of a custom SageMaker image. A SageMaker image is a holder for a set of SageMaker image versions. Each image version represents a container image that is compatible with RStudio on SageMaker and stored in an Amazon ECR repository. To make a custom SageMaker image available to all RStudio users within a domain, you can attach the image to the domain following the steps in this section.

  1. On the SageMaker console, navigate to the Custom SageMaker Studio images attached to domain page, and choose Attach image.
  2. Select New image, and enter your Amazon ECR image URI.
  3. Choose Next.
  4. In the Image properties section, provide an Image name (required), Image display name (optional), Description (optional), IAM role, and tags.
    The image display name, if provided, is shown in the session launcher in RStudio on SageMaker. If the Image display name field is left empty, the image name is shown in RStudio on SageMaker instead.
  5. Leave EFS mount path and Advanced configuration (User ID and Group ID) as default because RStudio on SageMaker manages the configuration for us.
  6. In the Image type section, select RStudio image.
  7. Choose Submit.

You can now see a new entry in the list. It's worth noting that, with the introduction of support for custom RStudio images, there is a new Usage type column in the table that denotes whether an image is an RStudio image or an Amazon SageMaker Studio image.

It may take up to 5–10 minutes for the custom images to be available in the session launcher UI. You can then launch a new R session in RStudio on SageMaker with your custom images.

Over time, you may want to retire old and outdated images. To remove the custom images from the list of custom images in RStudio, select the images in the list and choose Detach.

Choose Detach again to confirm.

Update RStudio on SageMaker via the AWS CLI

The following sections describe the steps to create a SageMaker image and attach it for use in RStudio on SageMaker using the AWS CLI. You can use the sample script create-and-update-image.sh.

Create the SageMaker image and image version

The first step is to create a SageMaker image from the custom container image in Amazon ECR by running the following two commands:

ROLE_ARN=<role-arn>
DISPLAY_NAME=RSession-r-4.1.3-rstudio-1.4.1717-3

aws sagemaker create-image \
    --image-name ${IMAGE_NAME} \
    --display-name ${DISPLAY_NAME} \
    --role-arn ${ROLE_ARN}

aws sagemaker create-image-version \
    --image-name ${IMAGE_NAME} \
    --base-image "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"

Note that the custom image displayed in the session launcher in RStudio on SageMaker is determined by the input of --display-name. If the optional display name is not provided, the input of --image-name is used instead. Also note that the IAM role allows SageMaker to attach an Amazon ECR image to RStudio on SageMaker.

Create an AppImageConfig

In addition to a SageMaker image, which captures the image URI from Amazon ECR, an app image configuration (AppImageConfig) is required for use in a SageMaker domain. We simplify the configuration for an RSessionApp image so we can just create a placeholder configuration with the following command:

IMAGE_CONFIG_NAME=r-4-1-3-rstudio-1-4-1717-3
aws sagemaker create-app-image-config --app-image-config-name ${IMAGE_CONFIG_NAME}

Attach to a SageMaker domain

With the SageMaker image and the app image configuration created, we’re ready to attach the custom container image to the SageMaker domain. To make a custom SageMaker image available to all RStudio users within a domain, you attach the image to the domain as a default user setting. All existing users and any new users will be able to use the custom image.

For better readability, we place the following configuration into the JSON file default-user-settings.json:

{
    "DefaultUserSettings": {
        "RSessionAppSettings": {
            "CustomImages": [
                {
                    "ImageName": "r-4.1.3-rstudio-2022",
                    "AppImageConfigName": "r-4-1-3-rstudio-2022"
                },
                {
                    "ImageName": "r-4.1.3-rstudio-1.4.1717-3",
                    "AppImageConfigName": "r-4-1-3-rstudio-1-4-1717-3"
                }
            ]
        }
    }
}

In this file, we specify the image and AppImageConfig name pairs in a list in DefaultUserSettings.RSessionAppSettings.CustomImages. The preceding snippet assumes two custom images have been created.

Then run the following command to update the SageMaker domain:

aws sagemaker update-domain --domain-id <domain-id> --cli-input-json file://default-user-settings.json

After you update the domain, it may take up to 5–10 minutes for the custom images to be available in the session launcher UI. You can then launch a new R session in RStudio on SageMaker with your custom images.

Detach images from a SageMaker domain

You can detach images simply by removing the ImageName and AppImageConfigName pairs from default-user-settings.json and updating the domain.

For example, updating the domain with the following default-user-settings.json removes r-4.1.3-rstudio-2022 from the R session launching UI and leaves r-4.1.3-rstudio-1.4.1717-3 as the only custom image available to all users in a domain:

{
    "DefaultUserSettings": {
        "RSessionAppSettings": {
            "CustomImages": [
                {
                    "ImageName": "r-4.1.3-rstudio-1.4.1717-3",
                    "AppImageConfigName": "r-4-1-3-rstudio-1-4-1717-3"
                }
            ]
        }
    }
}

Clean up

To safely remove images and resources in the SageMaker domain, complete the following steps in Clean up image resources.

To safely remove RStudio on SageMaker and the SageMaker domain, complete the steps in Delete an Amazon SageMaker Domain to delete any RSessionGateway apps, user profiles, and the domain.

To safely remove images and repositories in Amazon ECR, complete the following steps in Deleting an image.

Finally, to delete the CloudFormation template:

  1. On the AWS CloudFormation console, choose Stacks.
  2. Select the stack you deployed for this solution.
  3. Choose Delete.

Conclusion

RStudio on SageMaker makes it simple for data scientists to build ML and analytic solutions in R at scale, and for administrators to manage a robust data science environment for their developers. Data scientists want to customize the environment so that they can use the right libraries for the right job and achieve the desired reproducibility for each ML project. Administrators need to standardize the data science environment for regulatory and security reasons. You can now create custom container images that meet your organizational requirements and allow data scientists to use them in RStudio on SageMaker.

We encourage you to try it out. Happy developing!

About the Authors

Michael Hsieh is a Senior AI/ML Specialist Solutions Architect. He works with customers to advance their ML journey with a combination of AWS ML offerings and his ML domain knowledge. As a Seattle transplant, he loves exploring the great Mother Nature the city has to offer, such as the hiking trails, scenery kayaking in the SLU, and the sunset at Shilshole Bay.

Declan Kelly is a Software Engineer on the Amazon SageMaker Studio team. He has been working on Amazon SageMaker Studio since its launch at AWS re:Invent 2019. Outside of work, he enjoys hiking and climbing.

Sean Morgan is an AI/ML Solutions Architect at AWS. He has experience in the semiconductor and academic research fields, and uses his experience to help customers reach their goals on AWS. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Add-ons.


