Deploy variational autoencoders for anomaly detection with TensorFlow Serving on Amazon SageMaker

Anomaly detection is the process of identifying items, events, or occurrences that have different characteristics from the majority of the data. It has many applications in various fields, like fraud detection for credit cards, insurance, or healthcare; network intrusion detection for cybersecurity; KPI metrics monitoring for critical systems; and predictive maintenance for in-service equipment. There are four main categories of techniques to detect anomalies: classification, nearest neighbor, clustering, and statistical. In this post, we focus on a deep learning statistical anomaly detection approach using variational autoencoders.

Deep learning is a sub-field of machine learning (ML) that has been growing rapidly in the past few years. Due to their flexible structure and ability to learn non-linear relationships in data, deep learning models have proven to be very powerful in solving many different problems. An autoencoder is a type of neural network that learns a hidden encoding of input data, which can be used for detecting anomalies. A variational autoencoder is an autoencoder whose training is regularized to avoid overfitting and to ensure that the latent space has good properties, through a probabilistic encoder that enables the generative process.

To enable real-time predictions, you must deploy a trained ML model to an endpoint. Sometimes you may want to deploy more than one model at the same time. A standard practice is to deploy each model to a separate endpoint. Amazon SageMaker uses the TensorFlow Serving (TFS) REST API to allow you to deploy multiple models to a single multi-model endpoint. Multi-model endpoints provide a scalable and cost-effective solution for deploying a large number of models. They use a shared TFS container that can host multiple models. This reduces hosting costs by improving endpoint utilization compared with using single-model endpoints. It also reduces deployment overhead because SageMaker manages loading models in memory and scaling them based on their traffic patterns.

In this post, we discuss the implementation of a variational autoencoder on SageMaker to solve an anomaly detection task. We also include examples of how to deploy multiple trained models to a single TensorFlow Serving multi-model endpoint. You can follow the code in the post to run the pipeline from beginning to end.

Dataset

The MNIST dataset is a large database of handwritten digits. It contains 60,000 training images and 10,000 testing images. Each is a small, 28×28 pixel, grayscale image of a digit from 0–9.


Variational autoencoder

An autoencoder is a type of artificial neural network used to learn efficient data encodings in an unsupervised manner. An autoencoder has two connected networks:

  • Encoder – Takes an input and converts it into a compressed knowledge representation in the bottleneck layer
  • Decoder – Converts the compressed representation back to the original input

Standard autoencoders learn to generate compact representations of the input. One problem with autoencoders is overfitting: the data can be reconstructed with almost no reconstruction loss, yet some points of the latent space give meaningless content after they're decoded. Another problem is that the latent space may not be continuous, which might cause the decoder to generate unrealistic output because it doesn't know how to deal with a region of the latent space it hasn't seen before.

A variational autoencoder (VAE) provides a probabilistic manner of describing an observation in latent space. Compared with the deterministic mappings an autoencoder uses for predictions, a VAE's bottleneck layer provides a probabilistic Gaussian distribution over hidden vectors by predicting the mean and standard deviation of the distribution. A VAE's latent space is continuous, allowing random sampling and interpolation. VAEs account for the variability of the latent space, which makes the model robust and able to achieve higher performance compared with autoencoder-based anomaly detection.
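To make the bottleneck concrete, the following is a minimal Keras sketch of a VAE encoder and decoder for 28×28 images. The layer sizes and latent dimension are illustrative choices for this sketch, not necessarily the exact architecture defined in model_def:

import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 2  # illustrative latent dimension

# Encoder: maps an image to the mean and log-variance of a Gaussian
enc_in = tf.keras.Input(shape=(28, 28, 1))
h = layers.Flatten()(enc_in)
h = layers.Dense(256, activation='relu')(h)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)
encoder = tf.keras.Model(enc_in, [z_mean, z_log_var], name='encoder')

# Reparameterization trick: z = mean + exp(0.5 * log_var) * epsilon keeps the
# sampling step differentiable with respect to the predicted mean and log_var
def sample(z_mean, z_log_var):
    epsilon = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon

# Decoder: maps a latent vector back to a 28x28 image
dec_in = tf.keras.Input(shape=(latent_dim,))
h = layers.Dense(256, activation='relu')(dec_in)
h = layers.Dense(28 * 28, activation='sigmoid')(h)
dec_out = layers.Reshape((28, 28, 1))(h)
decoder = tf.keras.Model(dec_in, dec_out, name='decoder')

During training, the loss combines the reconstruction error with a KL-divergence term that pulls the predicted distributions toward a standard Gaussian, which is what keeps the latent space continuous.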

The following diagram illustrates this workflow.


Construct the problem

In this post, we use the MNIST dataset to construct an anomaly detection problem. In an anomaly detection problem, we have normal data as well as anomalies; the normal data is the majority and the anomalies are the minority. We train the VAE model on normal data, then test the model on anomalies to observe the reconstruction error. This technique is called semi-supervised because the model has only seen normal data during training. In real-world scenarios, we don't necessarily have labeled anomalies, and under such circumstances the semi-supervised method is especially useful. We can train the model to learn the pattern of normal data, so that when anomalies happen, the model can identify the data that doesn't fall into the pattern.

For our use case, we choose 1 and 4 as normal numbers and train the VAE model on the images from MNIST that contain 1 and 4. We choose 5 as the anomaly number and test the model on images with 5 in them to observe the reconstruction error.

Prepare the data

First, import the required packages and set up the SageMaker role and session. We import two files from the src folder: the config file defines the parameters used in the scripts, and model_def contains the functions defining the VAE model. See the following code:

import boto3
from IPython import display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sagemaker
from sagemaker.tensorflow import TensorFlow
from sagemaker.tensorflow import TensorFlowModel, TensorFlowPredictor
from sklearn.decomposition import PCA
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import mnist
import tensorflow.keras.backend as K
import time
from scipy.stats import multivariate_normal
from scipy import stats
from statistics import mean
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
import os
import sys

PATH = os.path.abspath('..')
if PATH not in sys.path:
    sys.path.append(PATH)

import src.config as config
from src import model_def

role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session(region_name=region).client('sagemaker')

Next, let's load the MNIST dataset from TensorFlow and reshape the data. We use train_x, train_y, test_x, and test_y. After reshaping, train_x and test_x have shapes (60000, 28, 28, 1) and (10000, 28, 28, 1); the label arrays train_y and test_y have shapes (60000,) and (10000,), and become (60000, 10) and (10000, 10) after one-hot encoding. The training dataset has 60,000 images and the testing dataset has 10,000 images. Each image is 28×28 pixels in grayscale. The dataset contains the 10 digits from 0–9. See the following code:

# Load MNIST data
(train_x, train_y), (test_x, test_y) = mnist.load_data()
train_x = train_x.reshape((-1, 28, 28, 1))
test_x = test_x.reshape((-1, 28, 28, 1))

Then we save the data locally for future use. After the data is saved locally, we upload it to the default Amazon Simple Storage Service (Amazon S3) bucket. See the following code:

!mkdir -p ../data/train/
!mkdir -p ../data/test/

np.save('../data/train/train_x', train_x)
np.save('../data/test/test_x', test_x)
np.save('../data/train/train_y', train_y)
np.save('../data/test/test_y', test_y)

s3_prefix = 'VAE'
train_s3_prefix = f'{s3_prefix}/train'
test_s3_prefix = f'{s3_prefix}/test'
train_s3 = sagemaker.Session().upload_data(path='../data/train', key_prefix=train_s3_prefix)
test_s3 = sagemaker.Session().upload_data(path='../data/test', key_prefix=test_s3_prefix)

The MNIST dataset contains images of the digits 0–9. We selected 1 and 4 as normal numbers and 5 as the anomaly number. The next step is to separate the data accordingly into normal and anomaly datasets for training and testing:

# Choose a number to be the anomaly number and separate it from the rest
anomalyNumber = 5
validNumber = [1, 4]
allNumbers = validNumber + [anomalyNumber]

train_validIdxs = np.where(np.isin(train_y, validNumber))[0]
train_anomalyIdxs = np.where(train_y == anomalyNumber)[0]
test_validIdxs = np.where(np.isin(test_y, validNumber))[0]
test_anomalyIdxs = np.where(test_y == anomalyNumber)[0]

We now have indexes for 12,585 normal images for training, 2,117 normal images for testing, and 6,313 anomaly images.

The next step is to prepare the data for training the model. For the input data x, we convert the pixels to float and scale them to be between 0 and 1. For the output data y, we one-hot encode the numbers into vectors of 0s and 1s, with a 1 marking the position of the number. Then we use the indexes from the previous step to separate anomalies from normal data. See the following code:

# Data preparation
# Convert from integers to float32
train_x = train_x.astype('float32')
test_x = test_x.astype('float32')

# Scale input to be between 0 and 1
train_x = train_x / 255
test_x = test_x / 255

# One-hot encode output variables
train_y_one_hot = tf.keras.utils.to_categorical(train_y)
test_y_one_hot = tf.keras.utils.to_categorical(test_y)

# Prepare normal data and anomalies
train_x_normal = train_x[train_validIdxs]
train_y_normal = train_y[train_validIdxs]
test_x_normal = test_x[test_validIdxs]
test_y_normal = test_y[test_validIdxs]

train_x_anomaly = train_x[train_anomalyIdxs]
train_y_anomaly = train_y[train_anomalyIdxs]
test_x_anomaly = test_x[test_anomalyIdxs]
test_y_anomaly = test_y[test_anomalyIdxs]

x_anomaly = np.concatenate([train_x_anomaly, test_x_anomaly])
y_anomaly = np.concatenate([train_y_anomaly, test_y_anomaly])

print(train_x_normal.shape, train_y_normal.shape, test_x_normal.shape,
      test_y_normal.shape, x_anomaly.shape, y_anomaly.shape)

Visualize the data

We plot the first 25 images of normal data and anomalies for double-checking:

def generate_original_images(x):
    plt.figure(figsize=(5, 5))
    for i in range(25):
        plt.subplot(5, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        # Drop the trailing channel dimension so imshow accepts the array
        plt.imshow(np.squeeze(x[i]), cmap=plt.cm.binary)
    plt.show()

generate_original_images(train_x_normal[:25])

The following image of the normal data shows 1s and 4s.


We plot the anomalies with the following code:

generate_original_images(x_anomaly[:25])

The image of the anomalies shows 5s.


Train the model on SageMaker

SageMaker Script Mode allows you to train the model with the SageMaker pre-built containers for TensorFlow, PyTorch, Apache MXNet, and other popular frameworks on machines managed by SageMaker. For our use case, we use the TensorFlow 2.0 container provided by SageMaker. SageMaker training requires the data to be in Amazon S3, or in an Amazon Elastic File System (Amazon EFS) or Amazon FSx for Lustre file system. For this post, we keep our data in Amazon S3. The training script (train.py) contains details of the training steps.

First, we set up a TensorFlow estimator object (estimator) for SageMaker hosted training. The key parameters for the estimator include the following:

  • hyperparameters – The hyperparameters for training the model
  • entry_point – The path to the local Python source file, which should be run as the entry point to training
  • instance_type – The type of instances used for training
  • framework_version – The TensorFlow version you want to use for running your model training code
  • py_version – The Python version you want to use for running your model training code

The estimator.fit call sends train.py to be run on the TensorFlow container running on SageMaker hosted training instances. See the following code:

model_dir = '/opt/ml/model'
hyperparameters = {'epochs': config.EPOCHS,
                   'batch_size': config.BATCH_SIZE,
                   'learning_rate': config.LEARNING_RATE}

estimator = TensorFlow(
    entry_point=config.TRAIN_ENTRY_POINT,
    source_dir=config.TRAIN_SOURCE_DIR,
    model_dir=model_dir,
    instance_type=config.TRAIN_INSTANCE_TYPE,
    instance_count=config.TRAIN_INSTANCE_COUNT,
    hyperparameters=hyperparameters,
    role=role,
    base_job_name=config.TRAIN_BASE_JOB_NAME,
    framework_version=config.TRAIN_FRAMEWORK_VERSION,
    py_version=config.TRAIN_PY_VERSION,
)

inputs = {'train': train_s3, 'test': test_s3}
estimator.fit(inputs)

Download the model artifacts

After the model is trained, the model artifacts are saved in Amazon S3. We download the model artifacts from Amazon S3 to a local folder and extract them:

model_artifacts_s3 = estimator.model_data
version = 'v1'
os.makedirs(f'../model/{version}', exist_ok=True)

!aws s3 cp {model_artifacts_s3} ../model/{version}/model.tar.gz
!tar -xzvf ../model/{version}/model.tar.gz -C ../model/{version}

Deploy trained models to one endpoint

Our VAE has an encoder and a decoder. We use the encoder to get the condensed vector representations from the hidden layer, and the decoder to recreate the input. The encoder gives us the hidden layer distribution, from which we randomly sample condensed vector representations. These vector representations are passed through the decoder to generate the output, which is used to calculate the reconstruction error. In this section, we demonstrate how to deploy the encoder, the decoder, and the whole VAE model to a single endpoint.

To deploy multiple models to a single TensorFlow Serving endpoint, the model artifacts need to be constructed in the following format:

└── multi
    ├── model1
    │   └── <version number>
    │       ├── saved_model.pb
    │       └── variables
    │           └── ...
    └── model2
        └── <version number>
            ├── saved_model.pb
            └── variables
                └── ...

Each folder in the model artifact contains a saved model and its related variables. All of them are hosted on a single endpoint and can be invoked separately by name.

Following the preceding format, we construct our output model artifacts in train.py, which contains five models:

  • The variational autoencoder (model/vae)
  • The model generating the mean of the hidden distributions (model/encoder_mean)
  • The model generating the log variance of the hidden distributions (model/encoder_lgvar)
  • The model generating the random samples from the hidden layer distribution defined by encoder_mean and encoder_lgvar (model/encoder_sampler)
  • The decoder (model/decoder)

The model/encoder_mean, model/encoder_lgvar, and model/encoder_sampler models combined serve as an encoder used to generate hidden vectors.

The following code shows our model structure:

└── model
    ├── vae
    │   └── 1
    │       ├── saved_model.pb
    │       └── variables
    │           └── ...
    ├── encoder_mean
    │   └── 2
    │       ├── saved_model.pb
    │       └── variables
    │           └── ...
    ├── encoder_lgvar
    │   └── 3
    │       ├── saved_model.pb
    │       └── variables
    │           └── ...
    ├── encoder_sampler
    │   └── 4
    │       ├── saved_model.pb
    │       └── variables
    │           └── ...
    ├── decoder
    │   └── 5
    │       ├── saved_model.pb
    │       └── variables
    │           └── ...
    ├── test_loss.npy
    └── train_loss.npy
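In train.py, each sub-model can be exported into its own versioned folder so that one TensorFlow Serving container can load all of them. The following is a minimal sketch, assuming the sub-models are Keras models collected in a dict; export_models is a hypothetical helper, not code from the actual training script:

import os
import tensorflow as tf

def export_models(sub_models, model_dir='/opt/ml/model'):
    # Save each model under model_dir/<name>/<version>/, the layout
    # TensorFlow Serving expects when hosting several models side by side
    for version, (name, model) in enumerate(sub_models.items(), start=1):
        tf.saved_model.save(model, os.path.join(model_dir, name, str(version)))

# Usage inside train.py might look like:
# export_models({'vae': vae, 'encoder_mean': encoder_mean,
#                'encoder_lgvar': encoder_lgvar,
#                'encoder_sampler': encoder_sampler,
#                'decoder': decoder})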

Next, we use TensorFlow Serving to deploy all the models in the model artifact to a single endpoint. We provide the S3 path, SageMaker execution role, TensorFlow framework version, and the default model name to a TensorFlow model object. Then we deploy the model by calling model.deploy, during which we can set the hosting instance count as well as the instance type.

When model.deploy is called, on each instance, three steps occur:

  1. Start a Docker container optimized for TensorFlow Serving.
  2. Start a TensorFlow Serving process configured to run your model.
  3. Start an HTTP server that provides access to TensorFlow Serving through the SageMaker InvokeEndpoint API.

See the following code:

env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': config.SAGEMAKER_TFS_DEFAULT_MODEL_NAME}

model = TensorFlowModel(model_data=model_artifacts_s3,
                        role=role,
                        framework_version=config.TRAIN_FRAMEWORK_VERSION,
                        env=env)

predictor = model.deploy(initial_instance_count=config.INFERENCE_INITIAL_INSTANCE_COUNT,
                         instance_type=config.INFERENCE_INSTANCE_TYPE)

Now that the endpoint is created, we can get a predictor for each model by creating TensorFlow predictors. When creating the predictors, we provide the endpoint name as well as the name of the model, which is the name of the folder that contains the model and its variables. The predictor object returned by the deploy function is ready to make predictions using the default model (vae in this example). See the following code:

# Get the endpoint name from the default predictor
endpoint = predictor.endpoint_name

# Get a predictor for each model hosted on the endpoint
encoder_mean_predictor = TensorFlowPredictor(endpoint, model_name='encoder_mean')
encoder_lgvar_predictor = TensorFlowPredictor(endpoint, model_name='encoder_lgvar')
encoder_sampler_predictor = TensorFlowPredictor(endpoint, model_name='encoder_sampler')
decoder_predictor = TensorFlowPredictor(endpoint, model_name='decoder')

Visualize the predictions

With the trained model, we can plot the prediction results for both normal and anomaly data. See the following code:

def generate_prediction_images(x):
    # Latent mean and log-variance from the encoder (not needed for the plot)
    z_mean = encoder_mean_predictor.predict(x)['predictions']
    z_lgvar = encoder_lgvar_predictor.predict(x)['predictions']
    x_pred = predictor.predict(x)['predictions']
    plt.figure(figsize=(5, 5))
    for i in range(25):
        plt.subplot(5, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(np.squeeze(x_pred[i]), cmap=plt.cm.binary)
    plt.show()

Generate input and prediction images for normal data with the following code:

generate_original_images(train_x_normal[:25])
generate_prediction_images(train_x_normal[:25])

The following image shows our inputs.


The following image shows the model predictions.


Generate input and prediction images for anomaly data with the following code:

generate_original_images(x_anomaly[:25])
generate_prediction_images(x_anomaly[:25])

The following image shows our inputs.


The following image shows the model predictions.


The results show that the model can recreate normal data very well. For anomaly data, the model reproduces some features, but not completely.

PCA of bottleneck layer vectors

Principal component analysis (PCA) is a dimensionality reduction method that transforms a large set of variables into a smaller one that still contains most of the information in the original set. The hidden (bottleneck) layer of the model provides the latent representations of the input data. These vectors contain compressed knowledge of the inputs. In the following code, we use PCA to find the principal components of the hidden vectors and visualize them to observe the distribution of the data:

train_x = np.concatenate((train_x_normal[:1400], x_anomaly[:700]), axis=0)
train_y = np.concatenate((train_y_normal[:1400], y_anomaly[:700]))

# PCA on the latent variables
train_x_hidden = encoder_sampler_predictor.predict(train_x)['predictions']
pca_3d = PCA(n_components=3)
PCA_hidden_3d = pca_3d.fit_transform(train_x_hidden)
pca_2d = PCA(n_components=2)
PCA_hidden_2d = pca_2d.fit_transform(train_x_hidden)

# Plot the principal components
fig = plt.figure(figsize=(10, 10))
ax0 = fig.add_subplot(211, projection='3d')
p0 = ax0.scatter(PCA_hidden_3d[:, 0], PCA_hidden_3d[:, 1], PCA_hidden_3d[:, 2],
                 c=train_y, cmap='tab10', s=1)
plt.legend(handles=p0.legend_elements()[0], labels=allNumbers)

ax1 = fig.add_subplot(212)
p1 = ax1.scatter(PCA_hidden_2d[:, 0], PCA_hidden_2d[:, 1], c=train_y, cmap='tab10')
plt.legend(handles=p1.legend_elements()[0], labels=allNumbers)
plt.show()

The result shows that each number's vectors cluster together. There is a little overlap between 4 and 5, which explains why some predictions for the number 5 preserve features of a 4.


Detect anomalies with reconstruction error

Reconstruction error is calculated using the reduced sum of the binary cross entropy per instance. It tells us the difference between input images and reconstructed images. If the reconstruction error is high, there is a large difference between the input and the reconstructed output. Let's calculate the reconstruction error for the train and test (normal and anomaly) datasets. In the following code, we take 2,000 data points from each dataset for a demonstration:

def compute_reconstruction_error(predictor, x):
    x_pred = predictor.predict(x)['predictions']
    cross_ent = K.binary_crossentropy(x, x_pred)
    recon = tf.reduce_sum(cross_ent, axis=[1, 2, 3])  # consolidate per instance
    return recon

train_normal_recon_loss = compute_reconstruction_error(predictor, train_x_normal[:2000])
test_normal_recon_loss = compute_reconstruction_error(predictor, test_x_normal[:2000])
anomaly_recon_loss = compute_reconstruction_error(predictor, x_anomaly[:2000])

Next, we plot the reconstruction error for the normal (train and test) and anomaly data:

plt.plot(train_normal_recon_loss[:50], label='train normal')
plt.plot(test_normal_recon_loss[:50], label='test normal')
plt.plot(anomaly_recon_loss[:50], label='anomalies')
plt.title('Reconstruction Error')
plt.legend()
plt.show()

From the graph, we have two observations:

  1. The reconstruction error for the normal train and test data is almost the same.
  2. The reconstruction error for normal data is lower than that for anomaly data.


Further statistical analysis shows that the average reconstruction loss for anomalies (225.75) is 171.39 higher than that of the normal data (54.36):

print(stats.describe(train_normal_recon_loss))
print(stats.describe(anomaly_recon_loss))

Evaluate the model performance

To evaluate the ability of the model to differentiate between normal data and anomalies, we set a threshold: when the reconstruction error is higher, we assign the point as an anomaly, and when it's lower, we assign it as normal data. To find the threshold, let's look at the statistical properties of the reconstruction error:

print(f'1%, 99% percentiles of normal reconstruction loss: '
      f'{np.percentile(train_normal_recon_loss, 1)}, {np.percentile(train_normal_recon_loss, 99)}')
print(f'4%, 99% percentiles of anomaly reconstruction loss: '
      f'{np.percentile(anomaly_recon_loss, 4)}, {np.percentile(anomaly_recon_loss, 99)}')

For normal data, 99% of the data has a reconstruction error lower than 120. For anomalies, 4% of the data has a reconstruction error lower than 126.94, which means 96% of the data has a reconstruction error higher than 126.94.


In this case, the 99th percentile of the normal data reconstruction error is a good threshold to use because it separates the anomalies from the normal data fairly well:

threshold = np.ceil(np.percentile(train_normal_recon_loss, 99))

For the ground truth data, we label the normal numbers (1 and 4) as False and the anomalies (5) as True. For the prediction labels, when the reconstruction error is higher than the threshold, we mark it as 1 (anomaly), and 0 (normal) otherwise. See the following code:

# 1 - anomaly, 0 - normal
test_y_labels = np.concatenate([test_y_normal[:2000], y_anomaly[:2000]], axis=0)
test_y_labels[np.where(np.isin(test_y_labels, validNumber))[0]] = False
test_y_labels[np.where(test_y_labels == anomalyNumber)[0]] = True

test_recon_loss = np.concatenate([test_normal_recon_loss.numpy(), anomaly_recon_loss.numpy()], axis=0)
test_y_pred = np.array([1 if x > threshold else 0 for x in test_recon_loss])

The result shows that the model produces 98.12% accuracy, 98.49% precision, 97.75% recall, a 98.12% F1 score, a 96.25% Cohen's kappa score, and 98.13% ROC AUC:

# accuracy: (tp + tn) / (p + n)
accuracy = accuracy_score(test_y_labels, test_y_pred)
print('Accuracy: %f' % accuracy, '\n')

# precision: tp / (tp + fp)
precision = precision_score(test_y_labels, test_y_pred)
print('Precision: %f' % precision, '\n')

# recall: tp / (tp + fn)
recall = recall_score(test_y_labels, test_y_pred)
print('Recall: %f' % recall, '\n')

# f1: 2*tp / (2*tp + fp + fn)
f1 = f1_score(test_y_labels, test_y_pred)
print('F1 score: %f' % f1, '\n')

# Cohen's kappa
kappa = cohen_kappa_score(test_y_labels, test_y_pred)
print('Cohens kappa: %f' % kappa, '\n')

# ROC AUC
auc = roc_auc_score(test_y_labels, test_y_pred)
print('ROC AUC: %f' % auc, '\n')

# confusion matrix
matrix = confusion_matrix(test_y_labels, test_y_pred)
print('Confusion Matrix:', '\n', matrix, '\n')

Clean up

Now that we have finished the prediction and evaluation, we need to clean up to avoid unnecessary costs. We delete the endpoint with the following code:

# Delete the SageMaker endpoint
predictor.delete_endpoint()

Summary

Variational autoencoders are a powerful method for anomaly detection. This post provides an example application of a VAE on SageMaker. SageMaker provides the capability to train ML models quickly, as well as host the trained models on a REST API. When it comes to hosting more than one model, TensorFlow Serving on SageMaker is a great choice for hosting multiple models on one endpoint. This post is a peek into the usage of VAEs on SageMaker; we look forward to seeing you apply this knowledge to your own use cases! To learn more about how to use TensorFlow with Amazon SageMaker, refer to the documentation.

About the Author

Yi Xiang is a Data Scientist at the Amazon Machine Learning Solutions Lab, where she helps AWS customers across different industries accelerate their AI and cloud adoption.



Secure Amazon SageMaker Studio presigned URLs Part 2: Private API with JWT authentication


In part 1 of this series, we demonstrated how to resolve an Amazon SageMaker Studio presigned URL from a corporate network using Amazon private VPC endpoints without traversing the internet. In this post, we continue to build on top of the previous solution to demonstrate how to build a private API Gateway via Amazon API Gateway as a proxy interface to generate and access Amazon SageMaker presigned URLs. Furthermore, we add a guardrail to ensure presigned URLs are only generated and accessed for the authenticated end-user within the corporate network.

Solution overview

The following diagram illustrates the architecture of the solution.

The process includes the following steps:

  1. In the Amazon Cognito user pool, first set up a user with the name matching their Studio user profile and register Studio as the app client in the user pool.
  2. The user federates from their corporate identity provider (IdP) and authenticates with the Amazon Cognito user pool for accessing Studio.
  3. Amazon Cognito returns a token to the user authorizing access to the Studio application.
  4. The user invokes the createStudioPresignedUrl API on API Gateway, along with a token in the header.
  5. API Gateway invokes a custom AWS Lambda authorizer and validates the token.
  6. When the token is valid, Amazon Cognito returns an access grant policy with the Studio user profile ID to API Gateway.
  7. API Gateway invokes the createStudioPresignedUrl Lambda function for creating the Studio presigned URL.
  8. The createStudioPresignedUrl function creates a presigned URL using the SageMaker API VPC endpoint and returns it to the caller (a minimal sketch of such a function follows this list).
  9. User accesses the presigned URL from their corporate network that resolves over the Studio VPC endpoint.
  10. The function’s AWS Identity and Access Management (IAM) policy makes sure that the presigned URL creation and access are performed via VPC endpoints.
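Steps 7 and 8 can be pictured with a minimal Lambda handler. This is an illustrative sketch only: the environment-variable lookup of the domain ID and the response shape are assumptions, and the real function also operates under the VPC endpoint conditions described later in this post:

import json
import os

import boto3

sagemaker = boto3.client('sagemaker')

def lambda_handler(event, context):
    # The Lambda authorizer passes the validated Cognito user name through
    # the request context, so a caller can't request another user's profile
    user_profile = event['requestContext']['authorizer']['userProfileName']
    response = sagemaker.create_presigned_domain_url(
        DomainId=os.environ['DOMAIN_ID'],  # assumed environment configuration
        UserProfileName=user_profile,
    )
    return {'statusCode': 200,
            'body': json.dumps({'url': response['AuthorizedUrl']})}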

The following sections walk you through solution deployment, configuration, and validation for the API Gateway private API for creating and resolving a Studio presigned URL from a corporate network using VPC endpoints.

  1. Deploy the solution
  2. Configure the Amazon Cognito user
  3. Authenticating the private API for the presigned URL using a JSON Web Token
  4. Configure the corporate DNS server for accessing the private API
  5. Test the API Gateway private API for a presigned URL from the corporate network
  6. Presigned URL Lambda auth policy
  7. Cleanup

Deploy the solution

You can deploy the solution through either the AWS Management Console or the AWS Serverless Application Model (AWS SAM).

To deploy the solution via the console, launch the following AWS CloudFormation template in your account by choosing Launch Stack. It takes approximately 10 minutes for the CloudFormation stack to complete.

To deploy the solution using AWS SAM, you can find the latest code in the aws-samples GitHub repository, where you can also contribute to the sample code. The following commands show how to deploy the solution using the AWS SAM CLI. If not currently installed, install the AWS SAM CLI.

  1. Clone the repository at https://github.com/aws-samples/secure-sagemaker-studio-presigned-url.
  2. After you clone the repo, navigate to the source and run the following code:

Configure the Amazon Cognito user

To configure your Amazon Cognito user, complete the following steps:

  1. Create an Amazon Cognito user with the same name as a SageMaker user profile: aws cognito-idp admin-create-user --user-pool-id <user-pool-id> --username <username>
  2. Set the user password: aws cognito-idp admin-set-user-password --user-pool-id <user-pool-id> --username <username> --password <password> --permanent
  3. Get an access token: aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id <client-id> --auth-parameters USERNAME=<username>,PASSWORD=<password>

Authenticating the private API for the presigned URL using a JSON Web Token

When you deployed a private API for creating a SageMaker presigned URL, you added a guardrail to restrict access to the presigned URL by anyone outside the corporate network and VPC endpoint. However, without implementing another control on the private API within the corporate network, any internal user within the corporate network would be able to pass unauthenticated parameters for the SageMaker user profile and access any SageMaker app.

To mitigate this issue, we propose passing a JSON Web Token (JWT) for the authenticated caller to the API Gateway and validating that token with a JWT authorizer. There are multiple options for implementing an authorizer for the private API Gateway, using either a custom Lambda authorizer or Amazon Cognito.

With a custom Lambda authorizer, you can embed a SageMaker user profile name in the returned policy. This prevents any users within the corporate network from being able to send any SageMaker user profile name for creating a presigned URL that they’re not authorized to create. We use Amazon Cognito to generate our tokens and a custom Lambda authorizer to validate and return the appropriate policy. For more information, refer to Building fine-grained authorization using Amazon Cognito, API Gateway, and IAM. The Lambda authorizer uses the Amazon Cognito user name as the user profile name.
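A custom authorizer along these lines might look like the following sketch, where verify_jwt stands in for your token validation logic (for example, checking the token's signature, expiry, and audience against the Amazon Cognito user pool); it's an assumed helper, not a library call:

def verify_jwt(token):
    # Hypothetical helper: validate the JWT against the Cognito user pool's
    # public keys and return its claims; raise on an invalid token
    raise NotImplementedError

def lambda_handler(event, context):
    claims = verify_jwt(event['authorizationToken'])
    username = claims['cognito:username']
    return {
        'principalId': username,
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': 'Allow',
                'Resource': event['methodArn'],
            }],
        },
        # Forwarded to the backend, which uses the Cognito user name as the
        # SageMaker user profile name when creating the presigned URL
        'context': {'userProfileName': username},
    }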

If you’re unable to use Amazon Cognito, you can develop a custom application to authenticate and pass end-user tokens to the Lambda authorizer. For more information, refer to Use API Gateway Lambda authorizers.

Configure the corporate DNS server for accessing the private API

To configure your corporate DNS server, complete the following steps:

  1. On the Amazon Elastic Compute Cloud (Amazon EC2) console, choose your on-premises DNS EC2 instance and connect via Systems Manager Session Manager.
  2. Add a zone record in the /etc/named.conf file for resolving to the API Gateway's DNS name via your Amazon Route 53 inbound resolver, as shown in the following code:

     zone "zxgua515ef.execute-api.<region>.amazonaws.com" {
         type forward;
         forward only;
         forwarders { 10.16.43.122; 10.16.102.163; };
     };
  3. Restart the named service using the following command: sudo service named restart

Validate requesting a presigned URL from the API Gateway private API for authorized users

In a real-world scenario, you would implement a front-end interface that passes the appropriate Authorization headers for authenticated and authorized resources, using either a custom solution or AWS Amplify. For brevity, the following steps use Postman to quickly validate that the deployed solution actually restricts requesting the presigned URL for an internal user unless they're authorized to do so (a scripted alternative appears at the end of this section).

To validate the solution with Postman, complete the following steps:

  1. Install Postman on the WINAPP EC2 instance (see the Postman documentation for installation instructions).
  2. Open Postman and add the access token to your Authorization header: Authorization: Bearer <access-token>
  3. Modify the API Gateway URL to access it from your internal EC2 instance:
    1. Add the VPC endpoint into your API Gateway URL: https://<VPC endpoint DNS name>/dev/EMPLOYEE_ID
    2. Add the Host header with a value of your API Gateway URL: <api-id>.execute-api.<region>.amazonaws.com
    3. First, change the EMPLOYEE_ID to your Amazon Cognito user and SageMaker user profile name. Make sure you receive an authorized presigned URL.
    4. Then change the EMPLOYEE_ID to a user that is not yours and make sure you receive an access failure.
  4. On the Amazon EC2 console, choose your on-premises WINAPP instance and connect via your RDP client.
  5. Open a Chrome browser and navigate to your authorized presigned URL to launch Studio.

Studio is launched over the VPC endpoint, with the remote address shown as the Studio VPC endpoint IP.

If the presigned URL is accessed outside of the corporate network, the resolution fails because the IAM policy condition for the presigned URL enforces creation and access from a VPC endpoint.
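If you prefer to script this validation rather than use Postman, the same request can be sketched with the Python requests library. Every value below is a placeholder:

import requests

url = 'https://<VPC endpoint DNS name>/dev/<EMPLOYEE_ID>'      # via the VPC endpoint
headers = {
    'Authorization': 'Bearer <access-token>',                  # Cognito access token
    'Host': '<api-id>.execute-api.<region>.amazonaws.com',     # private API host name
}

# Expect a presigned URL when <EMPLOYEE_ID> matches your own user profile,
# and an access failure otherwise
response = requests.get(url, headers=headers)
print(response.status_code, response.text)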

Presigned URL Lambda auth policy

The preceding solution creates the following auth policy for the Lambda function that generates the presigned URL for accessing SageMaker Studio.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Condition": {
                "IpAddress": {"aws:VpcSourceIp": "10.16.0.0/16"}
            },
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<region>:<account-id>:user-profile/*/*",
            "Effect": "Allow"
        },
        {
            "Condition": {
                "IpAddress": {"aws:SourceIp": "192.168.10.0/24"}
            },
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<region>:<account-id>:user-profile/*/*",
            "Effect": "Allow"
        },
        {
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": ["vpce-sm-api-xx", "vpce-sm-api-yy"]
                }
            },
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<region>:<account-id>:user-profile/*/*",
            "Effect": "Allow"
        }
    ]
}

The preceding policy enforces that the Studio presigned URL is both generated and accessed via one of these three entry points:

  1. aws:VpcSourceIp as your AWS VPC CIDR
  2. aws:SourceIp as your corporate network CIDR
  3. aws:sourceVpce as your SageMaker API VPC endpoints

Cleanup

To avoid incurring ongoing charges, delete the CloudFormation stacks you created. Alternatively, if you deployed the solution using AWS SAM, authenticate to the AWS account where the solution was deployed and run sam delete.

Conclusion

In this post, we demonstrated how to access Studio using a private API Gateway from a corporate network using Amazon private VPC endpoints, preventing access to presigned URLs outside the corporate network, and securing the API Gateway with a JWT authorizer using Amazon Cognito and custom Lambda authorizers.

Try out this solution, experiment with integrating it into your corporate portal, and leave your feedback in the comments!

About the Authors

Ram Vittal is a machine learning solutions architect at AWS. He has over 20+ years of experience architecting and building distributed, hybrid and cloud applications. He is passionate about building secure and scalable AI/ML and Big Data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis, photography, and action movies.

Jonathan Nguyen is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on Threat Detection and Incident Response. Today, he helps enterprise customers develop a comprehensive AWS Security strategy, deploy security solutions at scale, and train customers on AWS Security best practices.

Chris Childers is a Cloud Infrastructure Architect in Professional Services at AWS. He works with AWS customers to design and automate their cloud infrastructure and improve their adoption of DevOps culture and processes.





Secure Amazon SageMaker Studio presigned URLs Part 1: Foundational infrastructure


You can access Amazon SageMaker Studio notebooks from the Amazon SageMaker console via AWS Identity and Access Management (IAM) authenticated federation from your identity provider (IdP), such as Okta. When a Studio user opens the notebook link, Studio validates the federated user’s IAM policy to authorize access, and generates and resolves the presigned URL for the user. Because the SageMaker console runs on an internet domain, this generated presigned URL is visible in the browser session. This presents an undesired threat vector for exfiltration and gaining access to customer data when proper access controls are not enforced.

Studio supports a few methods for enforcing access controls against presigned URL data exfiltration:

  • Client IP validation using the IAM policy condition aws:sourceIp
  • Client VPC validation using the IAM condition aws:sourceVpc
  • Client VPC endpoint validation using the IAM policy condition aws:sourceVpce

When you access Studio notebooks from the SageMaker console, the only available option is to use client IP validation with the IAM policy condition aws:sourceIp. However, you can use browser traffic routing products such as Zscaler to ensure scale and compliance for your workforce internet access. These traffic routing products generate their own source IP, whose IP range is not controlled by the enterprise customer. This makes it impossible for these enterprise customers to use the aws:sourceIp condition.

To use client VPC endpoint validation using the IAM policy condition aws:sourceVpce, the creation of a presigned URL needs to originate in the same customer VPC where Studio is deployed, and resolution of the presigned URL needs to happen via a Studio VPC endpoint on the customer VPC. This resolution of the presigned URL during access time for corporate network users can be accomplished using DNS forwarding rules (both in Zscaler and corporate DNS) and then into the customer VPC endpoint using an Amazon Route 53 inbound resolver.

In this part, we discuss the overarching architecture for securing the Studio presigned URL and demonstrate how to set up the foundational infrastructure to create and launch a Studio presigned URL through your VPC endpoint over a private network without traversing the internet. This serves as the foundational layer for preventing data exfiltration by external bad actors who gain access to a Studio presigned URL, and for preventing unauthorized or spoofed corporate user access within a corporate environment.

Solution overview

The following diagram illustrates the overarching solution architecture.

The process includes the following steps:

  1. A corporate user authenticates via their IdP, connects to their corporate portal, and opens the Studio link from the corporate portal.
  2. The corporate portal application makes a private API call using an API Gateway VPC endpoint to create a presigned URL.
  3. The API Gateway VPC endpoint “create presigned URL” call is forwarded to the Route 53 inbound resolver on the customer VPC as configured in the corporate DNS.
  4. The VPC DNS resolver resolves it to the API Gateway VPC endpoint IP. Optionally, it looks up a private hosted zone record if it exists.
  5. The API Gateway VPC endpoint routes the request via the Amazon private network to the “create presigned URL API” running in the API Gateway service account.
  6. API Gateway invokes the create-pre-signedURL private API and proxies the request to the create-pre-signedURL Lambda function.
  7. The create-pre-signedURL Lambda call is invoked via the Lambda VPC endpoint.
  8. The create-pre-signedURL function runs in the service account, retrieves the authenticated user context (user ID, Region, and so on), looks up a mapping table to identify the SageMaker domain and user profile identifier, makes a SageMaker CreatePresignedDomainUrl API call, and generates a presigned URL. The Lambda service role has the source VPC endpoint conditions defined for the SageMaker API and Studio.
  9. The generated presigned URL is resolved over the Studio VPC endpoint.
  10. Studio validates that the presigned URL is being accessed via the customer’s VPC endpoint defined in the policy, and returns the result.
  11. The Studio notebook is returned to the user’s browser session over the corporate network without traversing the internet.

The following sections walk you through how to implement this architecture to resolve Studio presigned URLs from a corporate network using VPC endpoints. We demonstrate a complete implementation by showing the following steps:

  1. Set up the foundational architecture.
  2. Configure the corporate app server to access a SageMaker presigned URL via a VPC endpoint.
  3. Set up and launch Studio from the corporate network.

Set up the foundational architecture

In the post Access an Amazon SageMaker Studio notebook from a corporate network, we demonstrated how to resolve a presigned URL domain name for a Studio notebook from a corporate network without traversing the internet. You can follow the instructions in that post to set up the foundational architecture, and then return to this post and proceed to the next step.

Configure the corporate app server to access a SageMaker presigned URL via a VPC endpoint

To enable accessing Studio from your internet browser, we set up an on-premises app server on Windows Server on the on-premises VPC public subnet. However, the DNS queries for accessing Studio are routed through the corporate (private) network. Complete the following steps to configure routing Studio traffic through the corporate network:

  1. Connect to your on-premises Windows app server.

  2. Choose Get Password then browse and upload your private key to decrypt your password.
  3. Use an RDP client and connect to the Windows Server using your credentials.
    Resolving Studio DNS from the Windows Server command prompt results in using public DNS servers, as shown in the following screenshot.
    Now we update Windows Server to use the on-premises DNS server that we set up earlier.
  4. Navigate to Control Panel, Network and Internet, and choose Network Connections.
  5. Right-click Ethernet and choose the Properties tab.
  6. Update Windows Server to use the on-premises DNS server.
  7. Now you update your preferred DNS server with your DNS server IP.
  8. Navigate to VPC and Route Tables and choose your STUDIO-ONPREM-PUBLIC-RT route table.
  9. Add a route to 10.16.0.0/16 with the target as the peering connection that we created during the foundational architecture setup.

Set up and launch Studio from your corporate network

To set up and launch Studio, complete the following steps:

  1. Download Chrome and launch the browser on this Windows instance.
    You may need to turn off Internet Explorer Enhanced Security Configuration and enable file downloads.
  2. In your local device Chrome browser, navigate to the SageMaker console and open the Chrome developer tools Network tab.
  3. Launch the Studio app and observe the Network tab for the authtoken parameter value, which includes the generated presigned URL along with the remote server address that the URL is routed to for resolution. In this example, the remote address 100.21.12.108 is one of the public DNS server addresses used to resolve the SageMaker DNS domain name d-h4cy01pxticj.studio.us-west-2.sagemaker.aws.
  4. Repeat these steps from the Amazon Elastic Compute Cloud (Amazon EC2) Windows instance that you configured as part of the foundational architecture.

We can observe that the remote address is not the public DNS IP; instead, it's the Studio VPC endpoint 10.16.42.74.
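You can also confirm the resolution programmatically. The following sketch (with a placeholder Studio domain name) prints the address the host resolves to; from a correctly configured host on the corporate network, it should be the Studio VPC endpoint IP rather than a public address:

import socket

# Placeholder domain: substitute your own Studio domain name
studio_host = 'd-xxxxxxxxxxxx.studio.us-west-2.sagemaker.aws'

# Expect a private address such as 10.16.42.74 when resolving on-network
print(socket.gethostbyname(studio_host))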

Conclusion

In this post, we demonstrated how to resolve a Studio presigned URL from a corporate network using Amazon private VPC endpoints without exposing the presigned URL resolution to the internet. This further secures your enterprise security posture for accessing Studio from a corporate network for building highly secure machine learning workloads on SageMaker. In part 2 of this series, we further extend this solution to demonstrate how to build a private API for accessing Studio with aws:sourceVPCE IAM policy validation and token authentication. Try out this solution and leave your feedback in the comments!

About the Authors

Ram Vittal is a machine learning solutions architect at AWS. He has over 20+ years of experience architecting and building distributed, hybrid and cloud applications. He is passionate about building secure and scalable AI/ML and Big Data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis and photography.

Neelam Koshiya is an enterprise solution architect at AWS. Her current focus is to help enterprise customers with their cloud adoption journey for strategic business outcomes. In her spare time, she enjoys reading and being outdoors.





Use a custom image to bring your own development environment to RStudio on Amazon SageMaker


RStudio on Amazon SageMaker is the industry's first fully managed RStudio Workbench in the cloud. You can quickly launch the familiar RStudio integrated development environment (IDE), and dial up and down the underlying compute resources without interrupting your work, making it easy to build machine learning (ML) and analytics solutions in R at scale. RStudio on SageMaker already comes with a built-in image preconfigured with R programming and data science tools; however, you often need to customize your IDE environment. Starting today, you can bring your own custom image with packages and tools of your choice, and make them available to all the users of RStudio on SageMaker in a few clicks.

Bringing your own custom image has several benefits. You can standardize and simplify the getting started experience for data scientists and developers by providing a starter image, preconfigure the drivers required for connecting to data stores, or pre-install specialized data science software for your business domain. Furthermore, organizations that have previously hosted their own RStudio Workbench may have existing containerized environments that they want to continue to use in RStudio on SageMaker.

In this post, we share step-by-step instructions to create a custom image and bring it to RStudio on SageMaker using the AWS Management Console or AWS Command Line Interface (AWS CLI). You can get your first custom IDE environment up and running in few simple steps. For more information on the content discussed in this post, refer to Bring your own RStudio image.

Solution overview

When a data scientist starts a new session in RStudio on SageMaker, a new on-demand ML compute instance is provisioned and a container image that defines the runtime environment (operating system, libraries, R versions, and so on) is run on the ML instance. You can provide your data scientists multiple choices for the runtime environment by creating custom container images and making them available on the RStudio Workbench launcher, as shown in the following screenshot.

The following diagram describes the process to bring your custom image. First you build a custom container image from a Dockerfile and push it to a repository in Amazon Elastic Container Registry (Amazon ECR). Next, you create a SageMaker image that points to the container image in Amazon ECR, and attach that image to your SageMaker domain. This makes the custom image available for launching a new session in RStudio.

Prerequisites

To implement this solution, you must have the following prerequisites:

  • An RStudio on SageMaker domain
  • IAM permissions to interact with Amazon ECR
  • The minimum required AWS CLI versions

We provide more details on each in this section.

RStudio on SageMaker domain

If you have an existing SageMaker domain with RStudio enabled prior to April 7, 2022, you must delete and recreate the RStudioServerPro app under the user profile name domain-shared to get the latest updates for the bring-your-own-custom-image capability. The AWS CLI commands are as follows. Note that this action interrupts RStudio users on SageMaker.

aws sagemaker delete-app --domain-id <domain-id> --app-type RStudioServerPro --app-name default --user-profile-name domain-shared

aws sagemaker create-app --domain-id <domain-id> --app-type RStudioServerPro --app-name default --user-profile-name domain-shared

If this is your first time using RStudio on SageMaker, follow the step-by-step setup process described in Get started with RStudio on Amazon SageMaker, or run the following AWS CloudFormation template to set up your first RStudio on SageMaker domain. If you already have a working RStudio on SageMaker domain, you can skip this step.

The following RStudio on SageMaker CloudFormation template requires an RStudio license approved through AWS License Manager. For more about licensing, refer to RStudio license. Also note that only one SageMaker domain is permitted per AWS Region, so you’ll need to use an AWS account and Region that doesn’t have an existing domain.

  1. Choose Launch Stack.
    The link takes you to the us-east-1 Region, but you can change to your preferred Region.
  2. In the Specify template section, choose Next.
  3. In the Specify stack details section, for Stack name, enter a name.
  4. For Parameters, enter a SageMaker user profile name.
  5. Choose Next.
  6. In the Configure stack options section, choose Next.
  7. In the Review section, select I acknowledge that AWS CloudFormation might create IAM resources and choose Next.
  8. When the stack status changes to CREATE_COMPLETE, go to the Control Panel on the SageMaker console to find the domain and the new user.

IAM policies to interact with Amazon ECR

To interact with your private Amazon ECR repositories, you need the following IAM permissions in the IAM user or role you’ll use to build and push Docker images:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecr:CreateRepository",
                "ecr:BatchGetImage",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeImages",
                "ecr:DescribeRepositories",
                "ecr:UploadLayerPart",
                "ecr:ListImages",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

To initially build from a public Amazon ECR image as shown in this post, you need to attach the AWS-managed AmazonElasticContainerRegistryPublicReadOnly policy to your IAM user or role as well.

To build a Docker container image, you can use either a local Docker client or the SageMaker Docker Build CLI tool from a terminal within RStudio on SageMaker. For the latter, follow the prerequisites in Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks to set up the IAM permissions and CLI tool.

AWS CLI versions

There are minimum version requirements for the AWS CLI tool to run the commands mentioned in this post. Make sure to upgrade AWS CLI on your terminal of choice:

  • AWS CLI v1 >= 1.23.6
  • AWS CLI v2 >= 2.6.2

Prepare a Dockerfile

You can customize your runtime environment in RStudio in a Dockerfile. Because the customization depends on your use case and requirements, we show you the essentials and the most common customizations in this example. You can download the full sample Dockerfile.

Install RStudio Workbench session components

The most important software to install in your custom container image is RStudio Workbench. We download it from the public S3 bucket hosted by RStudio PBC. There are many version releases and OS distributions available. The version of the installation needs to be compatible with the RStudio Workbench version used in RStudio on SageMaker, which is 1.4.1717-3 at the time of writing. The OS (argument OS in the following snippet) needs to match the base OS used in the container image. In our sample Dockerfile, the base image is Amazon Linux 2 from an AWS-managed public Amazon ECR repository. The compatible RStudio Workbench OS is centos7.

FROM public.ecr.aws/amazonlinux/amazonlinux
...
ARG RSW_VERSION=1.4.1717-3
ARG RSW_NAME=rstudio-workbench-rhel
ARG OS=centos7
ARG RSW_DOWNLOAD_URL=https://s3.amazonaws.com/rstudio-ide-build/server/${OS}/x86_64
RUN RSW_VERSION_URL=`echo -n "${RSW_VERSION}" | sed 's/+/-/g'` \
    && curl -o rstudio-workbench.rpm ${RSW_DOWNLOAD_URL}/${RSW_NAME}-${RSW_VERSION_URL}-x86_64.rpm \
    && yum install -y rstudio-workbench.rpm

You can find all the OS release options with the following command:

aws s3 ls s3://rstudio-ide-build/server/

Install R (and versions of R)

The runtime for your custom RStudio container image needs at least one version of R. We can first install a version of R and make it the default R by creating soft links to /usr/local/bin/:

# Install main R version
ARG R_VERSION=4.1.3
RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-${R_VERSION}-1-1.x86_64.rpm \
    && yum install -y R-${R_VERSION}-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-${R_VERSION}-1-1.x86_64.rpm

RUN ln -s /opt/R/${R_VERSION}/bin/R /usr/local/bin/R \
    && ln -s /opt/R/${R_VERSION}/bin/Rscript /usr/local/bin/Rscript

Data scientists often need multiple versions of R so that they can easily switch between projects and code base. RStudio on SageMaker supports easy switching between R versions, as shown in the following screenshot.

RStudio on SageMaker automatically scans and discovers versions of R in the following directories:

/usr/lib/R
/usr/lib64/R
/usr/local/lib/R
/usr/local/lib64/R
/opt/local/lib/R
/opt/local/lib64/R
/opt/R/*
/opt/local/R/*

We can install more versions in the container image, as shown in the following snippet. They will be installed in /opt/R/.

RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-4.0.5-1-1.x86_64.rpm \
    && yum install -y R-4.0.5-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-4.0.5-1-1.x86_64.rpm

RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-3.6.3-1-1.x86_64.rpm \
    && yum install -y R-3.6.3-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-3.6.3-1-1.x86_64.rpm

RUN curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-3.5.3-1-1.x86_64.rpm \
    && yum install -y R-3.5.3-1-1.x86_64.rpm \
    && yum clean all \
    && rm -rf R-3.5.3-1-1.x86_64.rpm

Install RStudio Professional Drivers

Data scientists often need to access data from sources such as Amazon Athena and Amazon Redshift within RStudio on SageMaker. You can do so using RStudio Professional Drivers and RStudio Connections. Make sure you install the relevant libraries and drivers as shown in the following snippet:

# Install RStudio Professional Drivers
RUN yum update -y \
    && yum install -y unixODBC unixODBC-devel \
    && yum clean all
ARG DRIVERS_VERSION=2021.10.0-1
RUN curl -O https://drivers.rstudio.org/7C152C12/installer/rstudio-drivers-${DRIVERS_VERSION}.el7.x86_64.rpm \
    && yum install -y rstudio-drivers-${DRIVERS_VERSION}.el7.x86_64.rpm \
    && yum clean all \
    && rm -f rstudio-drivers-${DRIVERS_VERSION}.el7.x86_64.rpm \
    && cp /opt/rstudio-drivers/odbcinst.ini.sample /etc/odbcinst.ini
RUN /opt/R/${R_VERSION}/bin/R -e 'install.packages("odbc", repos="https://packagemanager.rstudio.com/cran/__linux__/centos7/latest")'

Install custom libraries

You can also install additional R and Python libraries so that data scientists don’t need to install them on the fly:

RUN /opt/R/${R_VERSION}/bin/R -e "install.packages(c('reticulate', 'readr', 'curl', 'ggplot2', 'dplyr', 'stringr', 'fable', 'tsibble', 'feasts', 'remotes', 'urca', 'sodium', 'plumber', 'jsonlite'), repos='https://packagemanager.rstudio.com/cran/__linux__/centos7/latest')"
RUN /opt/python/${PYTHON_VERSION}/bin/pip install --upgrade 'boto3>1.0,<2.0' 'awscli>1.0,<2.0' 'sagemaker[local]<3' 'sagemaker-studio-image-build' 'numpy'
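
If your projects require reproducible package versions, the remotes package installed above can pin them. A minimal sketch (the package and version shown are hypothetical):

RUN /opt/R/${R_VERSION}/bin/R -e "remotes::install_version('dplyr', version = '1.0.9', repos = 'https://packagemanager.rstudio.com/cran/__linux__/centos7/latest')"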

When you’ve finished your customization in a Dockerfile, it’s time to build a container image and push it to Amazon ECR.

Build and push to Amazon ECR

You can build a container image from the Dockerfile from a terminal where the Docker engine is installed, such as your local terminal or AWS Cloud9. If you’re building it from a terminal within RStudio on SageMaker, you can use SageMaker Studio Image Build. We demonstrate the steps for both approaches.

In a local terminal where the Docker engine is present, run the following commands from the directory that contains the Dockerfile. You can also use the sample script create-and-update-image.sh.

IMAGE_NAME=r-4.1.3-rstudio-1.4.1717-3   # the name for the SageMaker image
REPO=rstudio-custom                     # Amazon ECR repository name
TAG=$IMAGE_NAME

# Log in to your Amazon ECR
aws ecr get-login-password | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

# Create a repository
aws ecr create-repository --repository-name ${REPO}

# Build a Docker image and push it to the repository
docker build . -t ${REPO}:${TAG} -t ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}
docker push ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}
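
The preceding commands assume ACCOUNT_ID and REGION are already set in your shell. One way to populate them with standard AWS CLI calls is:

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REGION=$(aws configure get region)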

In a terminal on RStudio on SageMaker, run the following commands:

pip install sagemaker-studio-image-build
sm-docker build . --repository ${REPO}:${IMAGE_NAME}

After these commands, you have a repository and a Docker container image in Amazon ECR for our next step, in which we attach the container image for use in RStudio on SageMaker. Note the image URI in Amazon ECR, <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>, for later use.

Update RStudio on SageMaker through the console

RStudio on SageMaker allows runtime customization through the use of a custom SageMaker image. A SageMaker image is a holder for a set of SageMaker image versions. Each image version represents a container image that is compatible with RStudio on SageMaker and stored in an Amazon ECR repository. To make a custom SageMaker image available to all RStudio users within a domain, you can attach the image to the domain following the steps in this section.

  1. On the SageMaker console, navigate to the Custom SageMaker Studio images attached to domain page, and choose Attach image.
  2. Select New image, and enter your Amazon ECR image URI.
  3. Choose Next.
  4. In the Image properties section, provide an Image name (required), Image display name (optional), Description (optional), IAM role, and tags.
    The image display name, if provided, is shown in the session launcher in RStudio on SageMaker. If the Image display name field is left empty, the image name is shown in RStudio on SageMaker instead.
  5. Leave EFS mount path and Advanced configuration (User ID and Group ID) as default because RStudio on SageMaker manages the configuration for us.
  6. In the Image type section, select RStudio image.
  7. Choose Submit.

You can now see a new entry in the list. It's worth noting that, with the introduction of support for custom RStudio images, the table has a new Usage type column that denotes whether an image is an RStudio image or an Amazon SageMaker Studio image.

It may take up to 5–10 minutes for the custom images to be available in the session launcher UI. You can then launch a new R session in RStudio on SageMaker with your custom images.

Over time, you may want to retire old and outdated images. To remove the custom images from the list of custom images in RStudio, select the images in the list and choose Detach.

Choose Detach again to confirm.

Update RStudio on SageMaker via the AWS CLI

The following sections describe the steps to create a SageMaker image and attach it for use in RStudio on SageMaker using the AWS CLI. You can use the sample script create-and-update-image.sh.

Create the SageMaker image and image version

The first step is to create a SageMaker image from the custom container image in Amazon ECR by running the following two commands:

ROLE_ARN=<your-iam-role-arn>
DISPLAY_NAME=RSession-r-4.1.3-rstudio-1.4.1717-3

aws sagemaker create-image \
    --image-name ${IMAGE_NAME} \
    --display-name ${DISPLAY_NAME} \
    --role-arn ${ROLE_ARN}

aws sagemaker create-image-version \
    --image-name ${IMAGE_NAME} \
    --base-image "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"

Note that the custom image displayed in the session launcher in RStudio on SageMaker is determined by the input of --display-name. If the optional display name is not provided, the input of --image-name is used instead. Also note that the IAM role allows SageMaker to attach an Amazon ECR image to RStudio on SageMaker.
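
Before attaching the image to a domain, you can confirm that the image version was created successfully, for example:

aws sagemaker describe-image-version --image-name ${IMAGE_NAME}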

Create an AppImageConfig

In addition to a SageMaker image, which captures the image URI from Amazon ECR, an app image configuration (AppImageConfig) is required for use in a SageMaker domain. The configuration for an RSession app image has been simplified, so you only need to create a placeholder configuration with the following command:

IMAGE_CONFIG_NAME=r-4-1-3-rstudio-1-4-1717-3

aws sagemaker create-app-image-config \
    --app-image-config-name ${IMAGE_CONFIG_NAME}

Attach to a SageMaker domain

With the SageMaker image and the app image configuration created, we’re ready to attach the custom container image to the SageMaker domain. To make a custom SageMaker image available to all RStudio users within a domain, you attach the image to the domain as a default user setting. All existing users and any new users will be able to use the custom image.

For better readability, we place the following configuration into the JSON file default-user-settings.json:

{
    "DefaultUserSettings": {
        "RSessionAppSettings": {
            "CustomImages": [
                {
                    "ImageName": "r-4.1.3-rstudio-2022",
                    "AppImageConfigName": "r-4-1-3-rstudio-2022"
                },
                {
                    "ImageName": "r-4.1.3-rstudio-1.4.1717-3",
                    "AppImageConfigName": "r-4-1-3-rstudio-1-4-1717-3"
                }
            ]
        }
    }
}

In this file, we specify the image and AppImageConfig name pairs in a list under DefaultUserSettings.RSessionAppSettings.CustomImages. The preceding snippet assumes two custom images have been created.

Then run the following command to update the SageMaker domain:

aws sagemaker update-domain \
    --domain-id <your-domain-id> \
    --cli-input-json file://default-user-settings.json
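
To confirm that the images are attached, you can inspect the domain's default user settings; the query path below matches the JSON structure above:

aws sagemaker describe-domain \
    --domain-id <your-domain-id> \
    --query 'DefaultUserSettings.RSessionAppSettings.CustomImages'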

After you update the domain, it may take up to 5–10 minutes for the custom images to be available in the session launcher UI. You can then launch a new R session in RStudio on SageMaker with your custom images.

Detach images from a SageMaker domain

You can detach images simply by removing the ImageName and AppImageConfigName pairs from default-user-settings.json and updating the domain.

For example, updating the domain with the following default-user-settings.json removes r-4.1.3-rstudio-2022 from the R session launching UI and leaves r-4.1.3-rstudio-1.4.1717-3 as the only custom image available to all users in a domain:

{
    "DefaultUserSettings": {
        "RSessionAppSettings": {
            "CustomImages": [
                {
                    "ImageName": "r-4.1.3-rstudio-1.4.1717-3",
                    "AppImageConfigName": "r-4-1-3-rstudio-1-4-1717-3"
                }
            ]
        }
    }
}
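
After editing the file, apply the change with the same update-domain call as before:

aws sagemaker update-domain \
    --domain-id <your-domain-id> \
    --cli-input-json file://default-user-settings.json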

Clean up

To safely remove images and resources in the SageMaker domain, complete the following steps in Clean up image resources.
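
If you prefer the AWS CLI, the equivalent cleanup calls are sketched below. Detach the image from the domain first, and note that the version number shown assumes a single image version was created:

aws sagemaker delete-app-image-config --app-image-config-name ${IMAGE_CONFIG_NAME}
aws sagemaker delete-image-version --image-name ${IMAGE_NAME} --version-number 1
aws sagemaker delete-image --image-name ${IMAGE_NAME}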

To safely remove RStudio on SageMaker and the SageMaker domain, complete the following steps in Delete an Amazon SageMaker Domain to delete any RSessionGateway apps, user profiles, and the domain.

To safely remove images and repositories in Amazon ECR, complete the following steps in Deleting an image.
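
From the AWS CLI, one way to do this with the repository and tag names used earlier is:

aws ecr batch-delete-image --repository-name ${REPO} --image-ids imageTag=${TAG}
aws ecr delete-repository --repository-name ${REPO}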

Finally, to delete the CloudFormation template:

  1. On the AWS CloudFormation console, choose Stacks.
  2. Select the stack you deployed for this solution.
  3. Choose Delete.

Conclusion

RStudio on SageMaker makes it simple for data scientists to build ML and analytic solutions in R at scale, and for administrators to manage a robust data science environment for their developers. Data scientists want to customize the environment so that they can use the right libraries for the right job and achieve the desired reproducibility for each ML project. Administrators need to standardize the data science environment for regulatory and security reasons. You can now create custom container images that meet your organizational requirements and allow data scientists to use them in RStudio on SageMaker.

We encourage you to try it out. Happy developing!

About the Authors

Michael Hsieh is a Senior AI/ML Specialist Solutions Architect. He works with customers to advance their ML journey with a combination of AWS ML offerings and his ML domain knowledge. As a Seattle transplant, he loves exploring the nature the city has to offer, such as the hiking trails, scenic kayaking in the SLU, and the sunset at Shilshole Bay.

Declan Kelly is a Software Engineer on the Amazon SageMaker Studio team. He has been working on Amazon SageMaker Studio since its launch at AWS re:Invent 2019. Outside of work, he enjoys hiking and climbing.

Sean Morgan is an AI/ML Solutions Architect at AWS. He has experience in the semiconductor and academic research fields, and uses his experience to help customers reach their goals on AWS. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Add-ons.


