Create Amazon SageMaker projects using third-party source control and Jenkins

Launched at AWS re:Invent 2020, Amazon SageMaker Pipelines is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning (ML). With Pipelines, you can create, automate, and manage end-to-end ML workflows at scale.

You can integrate Pipelines with existing CI/CD tooling. This includes integration with existing source control systems such as GitHub, GitHub Enterprise, and Bitbucket. This new capability also allows you to utilize existing installations of Jenkins for orchestrating your ML pipelines. Before this new feature, Amazon SageMaker projects and pipelines were optimized for use with AWS Developer Tools including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild. This new capability allows you to take advantage of Pipelines while still using existing skill sets and tooling when building your ML CI/CD pipelines.

With the newly added MLOps project templates, you can choose between the following options:

  • Model building, training, and deployment using a third-party Git repository and Jenkins
  • Model building, training, and deployment using a third-party Git repository and CodePipeline

The new template options are now available via the SDK or within the Amazon SageMaker Studio IDE, as shown in the following screenshot.

In this post, we walk through an example using GitHub and Jenkins to demonstrate these new capabilities. You can perform equivalent steps using GitHub Enterprise or Bitbucket as your source code repository. The MLOps project template specifically creates a CI/CD pipeline using Jenkins to build a model using a SageMaker pipeline. The resulting trained ML model is deployed from the model registry to staging and production environments.

Prerequisites

The following are prerequisites to completing the steps in this post:

  • Jenkins (we use Jenkins v2.3) installed with administrative privileges.
  • A GitHub user account.
  • Two GitHub repositories initialized with a README. You must create these repositories as a prerequisite because you supply the two repositories as input when creating your SageMaker project. The project templates automatically seed the code that is pushed to these repositories:
    • abalone-model-build – Seeded with your model build code, which includes the code needed for data preparation, model training, model evaluation, and your SageMaker pipeline code.
    • abalone-model-deploy – Seeded with your model deploy code, which includes the code needed to deploy your SageMaker endpoints using AWS CloudFormation.
  • An AWS account and access to services used in this post.

We also assume some familiarity with Jenkins. For general information on Jenkins, we recommend reading the Jenkins Handbook.

Solution overview

In the following sections, we cover the one-time setup tasks and the steps required when building new pipelines using the new SageMaker MLOps project templates to build out the following high-level architecture.

The model build pipeline is triggered by changes to the model build GitHub repository, which Jenkins detects by polling the source repository every minute. The model deploy pipeline can be triggered by changes to the model deploy code in GitHub or when a new model version is approved in the SageMaker Model Registry.

The one-time setup tasks include:

  1. Establish the AWS CodeStar connection from your AWS account to your GitHub user or organization.
  2. Install dependencies on your Jenkins server.
  3. Set up permissions for communication between Jenkins and AWS.
  4. Create an Amazon EventBridge rule and AWS Lambda function that is triggered to run the Jenkins model deploy pipeline when approved models are registered in the model registry.

We then use the new MLOps project template for third-party GitHub and Jenkins to provision and configure the following resources, which are also discussed in more detail later in this post:

  • SageMaker code repositories – Based on the existing GitHub code repository information you provide on input when creating your SageMaker project, a SageMaker code repository association with that same repository is created when you launch the project. This essentially creates an association with a GitHub repository that SageMaker is aware of using the CodeRepository AWS CloudFormation resource type.
  • Model build and deploy seed code triggers – AWS CloudFormation custom resources used by SageMaker projects to seed code in your model build and model deploy code repositories. This seed code includes an example use case, abalone, which is similar to the existing project template, and also the generated code required for building your Jenkins pipeline. When you indicate that you want the repositories seeded, this triggers a Lambda function that seeds your code into the GitHub repository you supply as input.
  • Lambda function – A new Lambda function called sagemaker-p--git-seedcodecheckin. This function is triggered by the custom resource in the CloudFormation template. It’s called along with the seed code information (what code needs to be populated), the Git repository information (where it needs to be populated), and the Git AWS CodeStar connection information. This function then triggers the CodeBuild run, which performs the population of the seed code.
  • CodeBuild project – A CodeBuild project using a buildspec.yml file from an Amazon Simple Storage Service (Amazon S3) bucket owned and maintained by SageMaker. This CodeBuild project is responsible for checking in the initial seed code into the repository supplied as input when creating the project.
  • MLOps S3 bucket – An S3 bucket for the MLOps pipeline that is used for inputs and artifacts of your project and pipeline.

All of the provisioning and configuration required to set up the end-to-end CI/CD pipeline using these resources is automatically performed by SageMaker projects.

Now that we’ve covered how the new feature works, let’s walk through the one-time setup tasks followed by using the new templates.

One-time setup tasks

The tasks in this section are required as part of the one-time setup activities that must be performed for each AWS Region where you use the new SageMaker MLOps project templates. The steps to create a GitHub connection and an AWS Identity and Access Management (IAM) user for Jenkins could be incorporated into a CloudFormation template for repeatability. For this post, we explicitly define the steps.

Set up the GitHub connection

In this step, you connect to your GitHub repositories using AWS Developer Tools and, more specifically, AWS CodeStar connections. The SageMaker project uses this connection to connect to your source code repositories.

  1. On the CodePipeline console, under Settings in the navigation pane, choose Connections.
  2. Choose Create connection.
  3. For Select a provider, select GitHub.
  4. For Connection name, enter a name.
  5. Choose Connect to GitHub.
  6. If the AWS Connector GitHub app isn’t already installed, choose Install new app.

A list of all the GitHub personal accounts and organizations you have access to is displayed.

  1. Choose the account where you want to establish connectivity for use with SageMaker projects and GitHub repositories.
  2. Choose Configure.
  3. You can optionally select specific repositories, but for this post we create a repository in later steps, so we choose All repositories.
  4. Choose Save.

When the app is installed, you’re redirected to the Connect to GitHub page and the installation ID is automatically populated.

  1. Choose Connect.
  2. Add a tag with the key sagemaker and value true to this AWS CodeStar connection.
  3. Copy the connection ARN to save for later.

You use the ARN as a parameter in the project creation step.
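
If you prefer to script this step, the following is a rough boto3 sketch; the connection name is hypothetical, and the GitHub app handshake still has to be completed once in the console before the connection leaves PENDING status:

import boto3

client = boto3.client("codestar-connections")

# Create the connection (it starts out PENDING until the GitHub app
# handshake is completed in the console).
connection = client.create_connection(
    ProviderType="GitHub",
    ConnectionName="my-github-connection",  # hypothetical name
)
connection_arn = connection["ConnectionArn"]

# Tag the connection with sagemaker=true so SageMaker projects can use it.
client.tag_resource(
    ResourceArn=connection_arn,
    Tags=[{"Key": "sagemaker", "Value": "true"}],
)

print(connection_arn)  # save this ARN for the project creation step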

Install Jenkins software dependencies

In this step, you ensure that several software dependencies are in place on the Jenkins server. If you don’t have an existing Jenkins server or need to create one for testing, you can install Jenkins.

  1. Make sure pip3 is installed.

On Red Hat-based distributions such as Amazon Linux, enter the following code:

sudo yum install python3-pip

On Ubuntu, enter the following code:

sudo apt install python3-pip

  1. Install Git on the Jenkins server if it’s not already installed.
  2. Install the following plugins on your Jenkins server:
    1. Job DSL
    2. Git
    3. Pipeline
    4. Pipeline: AWS Steps
    5. CloudBees AWS Credentials for the Jenkins plugin

Create a Jenkins user on IAM

In this step, you create an IAM user and permissions policy that allows for programmatic access to Amazon S3, SageMaker, and AWS CloudFormation. This IAM user is used by your Jenkins server to access the AWS resources needed to configure the integration with SageMaker projects and your Jenkins server. After this user is created, you configure the same on the Jenkins server using the IAM user credentials.

  1. On the IAM console, choose Policies in the navigation pane.
  2. Choose Create policy.
  3. On the JSON tab, enter the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::sagemaker-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole"
      ],
      "Resource": [
        "arn:aws:iam::*:role/service-role/AmazonSageMakerServiceCatalogProductsUseRole"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:CreatePipeline",
        "sagemaker:DescribePipeline",
        "sagemaker:DescribePipelineExecution",
        "sagemaker:ListPipelineExecutionSteps",
        "sagemaker:StartPipelineExecution",
        "sagemaker:UpdatePipeline",
        "sagemaker:ListModelPackages",
        "sagemaker:ListTags",
        "sagemaker:AddTags",
        "sagemaker:DeleteTags",
        "sagemaker:CreateModel",
        "sagemaker:CreateEndpointConfig",
        "sagemaker:CreateEndpoint",
        "sagemaker:DeleteModel",
        "sagemaker:DeleteEndpointConfig",
        "sagemaker:DeleteEndpoint",
        "sagemaker:DescribeEndpoint",
        "sagemaker:DescribeModel",
        "sagemaker:DescribeEndpointConfig",
        "sagemaker:UpdateEndpoint"
      ],
      "Resource": "arn:aws:sagemaker:${AWS::Region}:${AWS::AccountId}:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:DescribeStacks",
        "cloudformation:UpdateStack",
        "cloudformation:DeleteStack"
      ],
      "Resource": "arn:aws:cloudformation:*:*:stack/sagemaker-*"
    }
  ]
}

  1. Choose Next: Tags.
  2. Choose Next: Review.
  3. Under Review policy, name your policy JenkinsExecutionPolicy.
  4. Choose Create policy.

We now need to create a user that the policy is attached to.

  1. In the navigation pane, choose Users.
  2. Choose Add user.
  3. For User name, enter jenkins.
  4. For Access type, select Programmatic access.
  5. Choose Next: Permissions.
  6. Under Set Permissions, select Attach existing policies directly, then search for the policy you created.
  7. Select the policy JenkinsExecutionPolicy.
  8. Choose Next: Tags.
  9. Choose Next: Review.
  10. Choose Create user.

You need the access key ID and secret key for Jenkins to be able to create and run the CI/CD pipeline. The secret key is only displayed one time, so make sure to save both values in a secure place.
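
If you would rather create the policy, user, and access key programmatically, a minimal boto3 sketch along these lines should work; it assumes the policy document above has been saved locally as jenkins_policy.json with the Region and account ID placeholders filled in:

import boto3

iam = boto3.client("iam")

# Assumes the JSON policy shown earlier, with ${AWS::Region} and
# ${AWS::AccountId} replaced by your actual Region and account ID.
with open("jenkins_policy.json") as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName="JenkinsExecutionPolicy",
    PolicyDocument=policy_document,
)

iam.create_user(UserName="jenkins")
iam.attach_user_policy(
    UserName="jenkins",
    PolicyArn=policy["Policy"]["Arn"],
)

# The secret access key is only returned once; store both values securely.
access_key = iam.create_access_key(UserName="jenkins")["AccessKey"]
print(access_key["AccessKeyId"])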

Configure the Jenkins IAM user on the Jenkins server

In this step, you configure the AWS credentials for the Jenkins IAM user on your Jenkins server. To do this, you need to sign in to your Jenkins server with administrative credentials. The credentials are stored in the Jenkins Credential Store.

  1. On the Jenkins dashboard, choose Manage Jenkins.
  2. Choose Manage Credentials.
  3. Choose the store Jenkins.
  4. Choose Global credentials.
  5. Choose Add Credentials.
  6. For Kind, select AWS Credentials.
  7. For Scope, select Global.
  8. For Description, enter Jenkins AWS Credentials.
  9. For Access Key ID, enter the access key for the IAM user you created.
  10. For Secret Access Key, enter the secret access key for the IAM user you created.
  11. Choose OK.

Your new credentials are now listed under Global credentials.

Create a model deployment Jenkins pipeline trigger

In this step, you configure the trigger to run your Jenkins model deployment pipeline whenever a new model version gets registered into a model package group in the SageMaker Model Registry. To do this, you create an API token for communication with your Jenkins server. Then you run a CloudFormation template from your AWS account that sets up a new rule in EventBridge to monitor the approval status of a model package registered in the SageMaker Model Registry. We use the model registry to catalog models and metadata about those models, as well as manage the approval status and model deployment pipelines. The CloudFormation template also creates a Lambda function that is the event target when a new model gets registered. This function gets the Jenkins API user token credentials from AWS Secrets Manager and uses that to trigger the pipeline remotely based on the trigger, as shown in the following diagram.
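
The provided CloudFormation template wires all of this up for you, but as a rough illustration of what the Lambda function does, a sketch might look like the following; the environment variable names, secret layout, and Jenkins job name are assumptions for illustration, not the template’s actual values:

import base64
import json
import os
import urllib.request

import boto3

SECRET_ID = os.environ.get("JENKINS_SECRET_ID", "jenkins-api-token")   # hypothetical
JENKINS_URL = os.environ.get("JENKINS_URL", "http://jenkins.example.com")
JENKINS_JOB = os.environ.get("JENKINS_JOB", "sagemaker-modeldeploy")   # hypothetical


def handler(event, context):
    """Triggered by an EventBridge rule on SageMaker model package state changes."""
    if event.get("detail", {}).get("ModelApprovalStatus") != "Approved":
        return {"triggered": False}

    # Fetch the Jenkins user and API token from Secrets Manager.
    secret = json.loads(
        boto3.client("secretsmanager").get_secret_value(SecretId=SECRET_ID)["SecretString"]
    )
    auth = base64.b64encode(f"{secret['user']}:{secret['token']}".encode()).decode()

    # Remotely trigger the Jenkins model deploy pipeline.
    request = urllib.request.Request(
        f"{JENKINS_URL}/job/{JENKINS_JOB}/build",
        method="POST",
        headers={"Authorization": f"Basic {auth}"},
    )
    with urllib.request.urlopen(request) as response:
        return {"triggered": True, "status": response.status}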

Create the Jenkins API token

First, you need to create an API token for the Jenkins user.

  1. Choose your user name on the Jenkins console.
  2. Choose Configure.
  3. Under API Token, choose Add new Token.
  4. Choose Generate.
  5. Copy the generated token value and save it somewhere to use in the next step.

Create the trigger and Lambda function

Next, you create the trigger and Lambda function. To do this, you need the provided CloudFormation template, model_trigger.yml. The template takes three parameters as input:

  • JenkinsUser – Your Jenkins user with administrative privileges (for example, Jenkins-admin)
  • JenkinsAPIToken – The Jenkins API token you created (for example, 11cnnnnnnnnnnnnnn)
  • JenkinsURL – The URL of your Jenkins server (for example, http://ec2-nn-nn-nnn-n.eu-north-1.compute.amazonaws.com)

You can download and launch the CloudFormation template via the AWS CloudFormation console, the AWS Command Line Interface (AWS CLI), or the SDK.
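
As an example, the following is a minimal boto3 sketch for launching the stack from the downloaded template; the stack name is hypothetical, and the parameter values are the examples listed above:

import boto3

cloudformation = boto3.client("cloudformation")

# Assumes model_trigger.yml has been downloaded locally.
with open("model_trigger.yml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="sagemaker-jenkins-deploy-trigger",  # hypothetical name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "JenkinsUser", "ParameterValue": "Jenkins-admin"},
        {"ParameterKey": "JenkinsAPIToken", "ParameterValue": "11cnnnnnnnnnnnnnn"},
        {"ParameterKey": "JenkinsURL", "ParameterValue": "http://ec2-nn-nn-nnn-n.eu-north-1.compute.amazonaws.com"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
)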

This completes the one-time setup required to use the new MLOps SageMaker project templates for each Region. Depending on your organizational structure and roles across the ML development lifecycle, these one-time setup steps may need to be performed by your DevOps, MLOps, or system administrators.

We now move on to the steps for creating SageMaker projects using the new MLOps project template from SageMaker Studio.

Use the new MLOps project template with GitHub and Jenkins

In this section, we cover how to use one of the two new MLOps project templates released that allow you to utilize Jenkins as your orchestrator. First, we create a new SageMaker project using one of the new templates. Then we use the generated Jenkins pipeline code to create the Jenkins pipeline.

Create a new SageMaker project

To create your SageMaker project, complete the following steps:

  1. On the Studio console, choose SageMaker resources.
  2. On the drop-down menu, choose Projects.
  3. Choose Create project.
  4. For SageMaker project templates, choose MLOps template for model building, training, and deployment with third-party Git repositories using Jenkins.
  5. Choose Select project template.

You need to provide several parameters to configure the source code repositories for your model build and model deploy code.

  1. Under ModelBuild CodeRepository Info, provide the following parameters:
    1. For URL, enter the URL of your existing Git repository for the model build code in https:// format.
    2. For Branch, enter the branch to use from your existing Git repository for pipeline activities as well as for seeding code (if that option is enabled).
    3. For Full Repository Name, enter the Git repository name in the format of <username>/<repository name> or <organization>/<repository name>.
    4. For Codestar Connection ARN, enter the ARN of the AWS CodeStar connection created as part of the one-time setup steps.
    5. For Sample Code, choose whether the seed code should be populated in the repository identified.

The seed code includes model build code for the abalone use case that is common to SageMaker projects; however, when this is enabled, a new /jenkins folder with Jenkins pipeline code is also seeded.

It’s recommended to allow SageMaker projects to seed your repositories with the code to ensure proper structure and for automatic generation of the Jenkins DSL pipeline code. If you don’t choose this option, you need to create your own Jenkins DSL pipeline code. You can then modify the seed code specific to your model based on your use case.

  1. Under ModelDeploy CodeRepository Info, provide the following parameters:
    1. For URL, enter the URL of your existing Git repository for the model deploy code in https:// format.
    2. For Branch, enter the branch to use from your existing Git repository for pipeline activities as well as for seeding code (if that option is enabled).
    3. For Full Repository Name, enter the Git repository name in the format of <username>/<repository name> or <organization>/<repository name>.
    4. For Codestar Connection ARN, enter the ARN of the AWS CodeStar connection created as part of the one-time setup steps.
    5. For Sample Code, choose whether the seed code should be populated in the repository identified.

As we mentioned earlier, the seed code includes the model deploy code for the abalone use case that is common to SageMaker projects; however, when this is enabled, a /jenkins folder with Jenkins pipeline code is also seeded.

  1. Choose Create project.

A message appears indicating that SageMaker is provisioning and configuring the resources.

When the project is complete, you receive a successful message, and your project is now listed on the Projects list.
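
You can also confirm the provisioning status outside of Studio with a short boto3 call; the project name abalone is hypothetical, so use whatever name you entered when creating the project:

import boto3

sagemaker = boto3.client("sagemaker")

project = sagemaker.describe_project(ProjectName="abalone")  # hypothetical name
print(project["ProjectStatus"])  # reports a completed status once provisioning finishes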

You now have seed code in your abalone-model-build and abalone-model-deploy GitHub repositories. You also have the /jenkins folders containing the Jenkins DSL to create your Jenkins pipeline.

Automatically generated Jenkins pipeline syntax

After you create the SageMaker project with seed code enabled, the code needed to create a Jenkins pipeline is automatically generated. Let’s review the code generated and push to the abalone-model-build and abalone-model-deploy GitHub repositories.

The model build pipeline contains the following:

  • seed_job.groovy – A Jenkins groovy script to create a model build Jenkins pipeline using the pipeline definition from the Jenkinsfile.
  • Jenkinsfile – The Jenkins pipeline definition for model build activities, including the following steps:
    • Checkout SCM – Source code checkout (abalone-model-build).
    • Build and install – Ensure the latest version of the AWS CLI is installed.
    • Update and run the SageMaker pipeline – Run the SageMaker pipeline that corresponds to the SageMaker project ID (a minimal SDK sketch of this step follows the list). This pipeline is visible on the Studio console but is being triggered by Jenkins in this case.
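
For reference, the “Update and run the SageMaker pipeline” stage boils down to an API call along the lines of the following sketch; the pipeline name is hypothetical, and the seeded Jenkinsfile performs the equivalent step for you:

import boto3

sagemaker = boto3.client("sagemaker")

# The real pipeline name includes your SageMaker project name and ID.
execution = sagemaker.start_pipeline_execution(
    PipelineName="abalone-modelbuild-pipeline",  # hypothetical name
    PipelineExecutionDisplayName="jenkins-triggered-run",
)
print(execution["PipelineExecutionArn"])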

The model deploy pipeline contains the following:

  • seed_job.groovy – A Jenkins groovy script to create a model deploy Jenkins pipeline using the pipeline definition from the Jenkinsfile.
  • Jenkinsfile – The Jenkins pipeline definition for model deploy activities, including the following steps:
    • Checkout SCM – Source code checkout (abalone-model-deploy).
    • Install – Ensure the latest version of the AWS CLI is installed.
    • Build – Run a script called build.py from your seeded source code, which fetches the approved model package from the SageMaker Model Registry and generates the CloudFormation templates for creating staging and production SageMaker endpoints.
    • Staging deploy – Launch the CloudFormation template to create a staging SageMaker endpoint.
    • Test staging – Run a script called test.py from your seeded source code. The generated code includes a test to describe the endpoint to ensure it’s showing InService, and also includes a code block to add your own custom testing code (see the sketch after this list):

      def invoke_endpoint(endpoint_name):
          """Add custom logic here to invoke the endpoint and validate response"""
          return {"endpoint_name": endpoint_name, "success": True}

    • Manual approval for production – A Jenkins step that enables continuous delivery by requiring manual approval before deploying to a production environment.
    • Prod deploy – Launch the CloudFormation template to create a production SageMaker endpoint.
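
As one possible way to extend the generated test.py, the following sketch checks that the endpoint is InService and then invokes it through the SageMaker Runtime API; the CSV payload is a made-up record for illustration:

import boto3


def invoke_endpoint(endpoint_name):
    # Confirm the endpoint is InService before sending traffic.
    status = boto3.client("sagemaker").describe_endpoint(
        EndpointName=endpoint_name
    )["EndpointStatus"]
    if status != "InService":
        return {"endpoint_name": endpoint_name, "success": False}

    # Hypothetical smoke test: send one CSV record and check for a 200 response.
    response = boto3.client("sagemaker-runtime").invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Body=b"0.5,0.4,0.1,0.3,0.2,0.15,0.1",
    )
    ok = response["ResponseMetadata"]["HTTPStatusCode"] == 200
    return {"endpoint_name": endpoint_name, "success": ok}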

Create a Jenkins model build pipeline

In this step, we create the Jenkins pipeline using the DSL generated in the seed code created through the SageMaker project in the previous step.

  1. On your Jenkins server, choose New Item on the dashboard menu.
  2. For Enter an item name, enter CreateJenkinsPipeline.
  3. Choose Freestyle project.
  4. Choose OK.
  5. On the General tab, select This project is parameterized.
  6. On the Add Parameter drop-down menu, choose Credentials Parameter.

You must provide the following information for the AWS credentials that are used by your Jenkins pipeline to integrate with AWS.

  1. For Name, enter AWS_CREDENTIAL.
  2. For Credential type, choose AWS Credentials.
  3. For Default Value, choose the Jenkins AWS credentials that you created during the one-time setup tasks.
  4. On the Source Code Management tab, select Git.
  5. For Repository URL, enter the URL for the GitHub repository containing the model build code (for this post, abalone-model-build).
  6. For Branches to build, make sure to indicate the correct branch.
  7. On the Build Triggers tab, in the Build section, choose Process Job DSLs on the drop-down menu.
  8. For Process Job DSLs, select Look on Filesystem.
  9. For DSL Scripts, enter the value of jenkins/seed_job.groovy.

seed_job.groovy was automatically generated by your SageMaker project and pushed to your GitHub repository when seeding was indicated.

  1. Choose Save.

Next, we want to run our Jenkins job to create the Jenkins pipeline.

  1. Choose Build with Parameters.
  2. Choose Build.

The first run of the pipeline fails with an error that the script is not approved. Jenkins implements security controls to ensure only approved user-provided groovy scripts can be run (for more information, see In-process Script Approval). As a result, we need to approve the script before running the build again.

  1. On the Jenkins dashboard, choose Manage Jenkins.
  2. Choose In-process Script Approval.

You should see a message that a script is pending approval.

  1. Choose Approve.
  2. Repeat the steps to build the pipeline again.

This time, the job should run successfully and create a new modelbuild pipeline.

  1. Choose your new pipeline (sagemaker-jenkings-btd-1-p--modelbuild) to view its details.

This is the pipeline generated by the Jenkins DSL code that was seeded in your GitHub repository. This is the actual model building pipeline.

  1. On the Studio UI, return to your project.
  2. Choose the Pipelines tab.

You still have visibility to your model build pipeline, but the orchestration for the CI/CD pipeline steps is performed by Jenkins.

If a data scientist wants to update any of the model build code, they can clone the repository to their Studio environment by choosing clone repo. When new code is committed and pushed to the GitHub repository, the Jenkins model build pipeline is automatically triggered.

Create a Jenkins model deploy pipeline

In this step, we perform the same steps as we did with the model build pipeline to create a model deploy pipeline, using the model deploy GitHub repo.

You can now see a new pipeline called sagemaker-jenkings-btd-1-p--modeldeploy. This is the pipeline generated by the Jenkins DSL code that was seeded in your model deploy GitHub repository (abalone-model-deploy).

The first time this pipeline builds, it fails. Similar to the previous steps, you need to approve the script and rebuild the pipeline.

After the two pipelines are created, two additional pipelines appear in Jenkins that are associated with the SageMaker project.

The model deploy pipeline fails because the first time it runs, there are no approved models in the model registry.

When you navigate to the model registry, you can see a model that has been trained and registered by the model build pipeline. You can approve the model by updating its status, which triggers the deploy pipeline.
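
If you prefer to approve the model programmatically instead of through the Studio UI, a minimal boto3 sketch looks like the following; the model package group name is hypothetical, so use the group created by your project:

import boto3

sagemaker = boto3.client("sagemaker")

# "abalone-models" is a hypothetical model package group name.
packages = sagemaker.list_model_packages(
    ModelPackageGroupName="abalone-models",
    SortBy="CreationTime",
    SortOrder="Descending",
)["ModelPackageSummaryList"]

# Approving the latest version fires the EventBridge rule, which in turn
# triggers the Jenkins model deploy pipeline.
sagemaker.update_model_package(
    ModelPackageArn=packages[0]["ModelPackageArn"],
    ModelApprovalStatus="Approved",
)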

You can see the deploy pipeline running and the model is deployed to a staging environment.

After the model is deployed to staging, a manual approval option is available to deploy the model into a production environment.

On the SageMaker console, the endpoint deployed by Jenkins is also visible.

After you approve the Jenkins pipeline, a model is deployed to a production environment and is visible on the SageMaker console.

Summary

In this post, we walked through one of the new SageMaker MLOps project templates that you can use to build and configure a CI/CD pipeline that takes advantage of SageMaker features for model building, training, and deployment while still using your existing tooling and skillsets. For our use case, we focused on using GitHub and Jenkins, but you can also use GitHub Enterprise or Bitbucket depending on your needs. You can also utilize the other new template to combine your choice of source code repository (GitHub, GitHub Enterprise, or Bitbucket) with CodePipeline. Try it out and let us know if you have any questions in the comments section!

About the Authors

Shelbee Eigenbrode is a Principal AI and Machine Learning Specialist Solutions Architect at Amazon Web Services (AWS). She holds 6 AWS certifications and has been in technology for 23 years spanning multiple industries, technologies, and roles. She is currently focusing on combining her DevOps and ML background to deliver and manage ML workloads at scale. With over 35 patents granted across various technology domains, she has a passion for continuous innovation and using data to drive business outcomes. Shelbee co-founded the Denver chapter of Women in Big Data.

 

Saumitra Vikram is a Software Developer on the Amazon SageMaker team and is based in Chennai, India. Outside of work, he loves spending time running, trekking and motor bike riding through the Himalayas.

 

 

Venkatesh Krishnan is a Principal Product Manager – Technical for Amazon SageMaker in AWS. He is the product owner for a portfolio of services in the MLOps space including SageMaker Pipelines, Model Registry, Projects, and Experiments. Earlier he was the Head of Product, Integrations and the lead product manager for Amazon AppFlow, a new AWS service that he helped build from the ground up. Before joining Amazon in 2018, Venkatesh served in various research, engineering, and product roles at Qualcomm, Inc. He holds a PhD in Electrical and Computer Engineering from Georgia Tech and an MBA from UCLA’s Anderson School of Management.

 

Kirit Thadaka is an ML Solutions Architect working in the SageMaker Service SA team. Prior to joining AWS, Kirit spent time working in early stage AI startups followed by some time in consulting in various roles in AI research, MLOps, and technical leadership.


Customize pronunciation using lexicons in Amazon Polly


Amazon Polly is a text-to-speech service that uses advanced deep learning technologies to synthesize natural-sounding human speech. It is used in a variety of use cases, such as contact center systems, delivering conversational user experiences with human-like voices for automated real-time status check, automated account and billing inquiries, and by news agencies like The Washington Post to allow readers to listen to news articles.

As of today, Amazon Polly provides over 60 voices in 30+ language variants. Amazon Polly also uses context to pronounce certain words differently based upon the verb tense and other contextual information. For example, “read” in “I read a book” (present tense) and “I will read a book” (future tense) is pronounced differently.

However, in some situations you may want to customize the way Amazon Polly pronounces a word. For example, you may need to match the pronunciation with local dialect or vernacular. Names of things (e.g., Tomato can be pronounced as tom-ah-to or tom-ay-to), people, streets, or places are often pronounced in many different ways.

In this post, we demonstrate how you can leverage lexicons for creating custom pronunciations. You can apply lexicons for use cases such as publishing, education, or call centers.

Customize pronunciation using the SSML <phoneme> tag

Let’s say you stream a popular podcast from Australia and you use the Amazon Polly Australian English (Olivia) voice to convert your script into human-like speech. In one of your scripts, you want to use words that are unknown to the Amazon Polly voice. For example, you want to send Mātariki (Māori New Year) greetings to your New Zealand listeners. For such scenarios, Amazon Polly supports phonetic pronunciation, which you can use to achieve a pronunciation that is close to the correct pronunciation in the foreign language.

You can use the Speech Synthesis Markup Language (SSML) <phoneme> tag to suggest a phonetic pronunciation in its ph attribute. Let me show you how you can use the SSML <phoneme> tag.

First, log in to your AWS console and search for Amazon Polly in the search bar at the top. Select Amazon Polly and then choose the Try Polly button.

In the Amazon Polly console, select Australian English from the language dropdown, enter the following text in the Input text box, and then choose Listen to test the pronunciation.

I’m wishing you all a very Happy Mātariki.

Sample speech without applying phonetic pronunciation:

If you hear the sample speech above, you’ll notice that the pronunciation of Mātariki – a word which is not part of Australian English – isn’t quite spot-on. Now, let’s look at how in such scenarios we can use phonetic pronunciation with the SSML <phoneme> tag to customize the speech produced by Amazon Polly.

To use SSML tags, turn on the SSML option in the Amazon Polly console. Then copy and paste the following SSML script, which contains the phonetic pronunciation for Mātariki specified inside the ph attribute of the <phoneme> tag.

<speak>I’m wishing you all a very Happy <phoneme alphabet="x-sampa" ph="mA:.tA:.ri.ki">Mātariki</phoneme>.</speak>

With the <phoneme> tag, Amazon Polly uses the pronunciation specified by the ph attribute instead of the standard pronunciation associated by default with the language used by the selected voice.

Sample speech after applying phonetic pronunciation:

If you hear the sample sound, you’ll notice that we opted for a different pronunciation for some of the vowels (e.g., ā) to make Amazon Polly synthesize sounds that are closer to the correct pronunciation. Now you might have a question: how do I generate the phonetic transcription “mA:.tA:.ri.ki” for the word Mātariki?

You can create phonetic transcriptions by referring to the Phoneme and Viseme tables for the supported languages. In the example above we have used the phonemes for Australian English.

Amazon Polly supports two phonetic alphabets: IPA and X-Sampa. The benefit of X-Sampa is that it uses standard ASCII characters, so it is easier to type the phonetic transcription with a normal keyboard. You can use either IPA or X-Sampa to generate your transcriptions, but make sure to stay consistent with your choice, especially when you use a lexicon file, which we’ll cover in the next section.

Each phoneme in the phoneme table represents a speech sound. The bolded letters in the “Example” column of the Phoneme/Viseme table in the Australian English page linked above represent the part of the word the “Phoneme” corresponds to. For example, the phoneme /j/ represents the sound that an Australian English speaker makes when pronouncing the letter “y” in “yes.”

Customize pronunciation using lexicons

Phoneme tags are suitable for one-off situations to customize isolated cases, but they are not scalable. If you process huge volumes of text, managed by different editors and reviewers, we recommend using lexicons. Using lexicons, you can achieve consistency in adding custom pronunciations and simultaneously reduce the manual effort of inserting phoneme tags into the script.

A good practice is that after you test the custom pronunciation on the Amazon Polly console using the <phoneme> tag, you create a library of customized pronunciations using lexicons. Once the lexicon file is uploaded, Amazon Polly automatically applies the phonetic pronunciations specified in the lexicon file and eliminates the need to manually provide a <phoneme> tag.

Create a lexicon file

A lexicon file contains the mapping between words and their phonetic pronunciations. Pronunciation Lexicon Specification (PLS) is a W3C recommendation for specifying interoperable pronunciation information. The following is an example PLS document:

<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
    xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
    alphabet="x-sampa"
    xml:lang="en-AU">
  <lexeme>
    <grapheme>Matariki</grapheme>
    <grapheme>Mātariki</grapheme>
    <phoneme>mA:.tA:.ri.ki</phoneme>
  </lexeme>
  <lexeme>
    <grapheme>NZ</grapheme>
    <alias>New Zealand</alias>
  </lexeme>
</lexicon>

Make sure that you use correct value for the xml:lang field. Use en-AU if you’re uploading the lexicon file to use with the Amazon Polly Australian English voice. For a complete list of supported languages, refer to Languages Supported by Amazon Polly.

To specify a custom pronunciation, you need to add a <lexeme> element, which is a container for a lexical entry, with one or more <grapheme> elements and one or more pronunciations provided inside <phoneme> elements.

The <grapheme> element contains the text describing the orthography of the <lexeme> element. You can use a <grapheme> element to specify the word whose pronunciation you want to customize. You can add multiple <grapheme> elements to specify all word variations, for example with or without macrons. The <grapheme> element is case-sensitive, and during speech synthesis Amazon Polly string matches the words inside your script that you’re converting to speech. If a match is found, it uses the <phoneme> element, which describes how the <lexeme> is pronounced, to generate the phonetic transcription.

You can also use <alias> for commonly used abbreviations. In the preceding example of a lexicon file, NZ is used as an alias for New Zealand. This means that whenever Amazon Polly comes across “NZ” (with matching case) in the body of the text, it’ll read those two letters as “New Zealand”.

For more information on lexicon file format, see Pronunciation Lexicon Specification (PLS) Version 1.0 on the W3C website.

You can save a lexicon file as a .pls or .xml file before uploading it to Amazon Polly.

Upload and apply the lexicon file

Upload your lexicon file to Amazon Polly using the following instructions:

  1. On the Amazon Polly console, choose Lexicons in the navigation pane.
  2. Choose Upload lexicon.
  3. Enter a name for the lexicon and then choose a lexicon file.
  4. Choose the file to upload.
  5. Choose Upload lexicon.

If a lexicon by the same name (whether a .pls or .xml file) already exists, uploading the lexicon overwrites the existing lexicon.

Now you can apply the lexicon to customize pronunciation.

  1. Choose Text-to-Speech in the navigation pane.
  2. Expand Additional settings.
  3. Turn on Customize pronunciation.
  4. Choose the lexicon on the drop-down menu.

You can also choose Upload lexicon to upload a new lexicon file (or a new version).

It’s a good practice to version control the lexicon file in a source code repository. Keeping the custom pronunciations in a lexicon file ensures that you can consistently refer to phonetic pronunciations for certain words across the organization. Also, keep in mind the pronunciation lexicon limits mentioned on Quotas in Amazon Polly page.

Test the pronunciation after applying the lexicon

Let’s perform a quick test using “Wishing all my listeners in NZ, a very Happy Mātariki” as the input text.

We can compare the audio files before and after applying the lexicon.
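
To reproduce this comparison outside the console, a short boto3 sketch such as the following could upload the lexicon and synthesize the same sentence with and without it; the local file name and lexicon name are hypothetical, and the Olivia voice requires the neural engine:

import boto3

polly = boto3.client("polly")

# Hypothetical local file and lexicon name.
with open("matariki.pls") as f:
    polly.put_lexicon(Name="matariki", Content=f.read())

text = "Wishing all my listeners in NZ, a very Happy Mātariki"

for lexicons in ([], ["matariki"]):
    audio = polly.synthesize_speech(
        Text=text,
        VoiceId="Olivia",
        Engine="neural",
        OutputFormat="mp3",
        LexiconNames=lexicons,
    )["AudioStream"].read()
    suffix = "with" if lexicons else "without"
    with open(f"matariki_{suffix}_lexicon.mp3", "wb") as out:
        out.write(audio)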

Before applying the lexicon:

After applying the lexicon:

Conclusion

In this post, we discussed how you can customize pronunciations of commonly used acronyms or words not found in the selected language in Amazon Polly. You can use the SSML <phoneme> tag, which is great for inserting one-off customizations or for testing purposes. We recommend using lexicons to create a consistent set of pronunciations for frequently used words across your organization. This enables your content writers to spend time on writing instead of the tedious task of repetitively adding phonetic pronunciations in the script. You can try this in your AWS account on the Amazon Polly console.

About the Authors

Ratan Kumar is a Solutions Architect based out of Auckland, New Zealand. He works with large enterprise customers helping them design and build secure, cost-effective, and reliable internet scale applications using the AWS cloud. He is passionate about technology and likes sharing knowledge through blog posts and twitch sessions.

Maciek Tegi is a Principal Audio Designer and a Product Manager for Polly Brand Voices. He has worked in a professional capacity in the tech industry, movies, commercials, and game localization. In 2013, he was the first audio engineer hired to the Alexa Text-To-Speech team. Maciek was involved in releasing 12 Alexa TTS voices across different countries, over 20 Polly voices, and 4 Alexa celebrity voices. Maciek is a triathlete and an avid acoustic guitar player.




AWS Week in Review – May 16, 2022


This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

I had been on the road for the last five weeks and attended many of the AWS Summits in Europe. It was great to talk to so many of you in person. The Serverless Developer Advocates are going around many of the AWS Summits with the Serverlesspresso booth. If you attend an event that has the booth, say “Hi” to my colleagues, and have a coffee while asking all your serverless questions. You can find all the upcoming AWS Summits in the events section at the end of this post.

Last week’s launches

Here are some launches that got my attention during the previous week.

AWS Step Functions announced a new console experience to debug your state machine executions – Now you can opt in to the new console experience of Step Functions, which makes it easier to analyze, debug, and optimize Standard Workflows. The new page allows you to inspect executions using three different views: graph, table, and event view, and adds many new features to enhance the navigation and analysis of the executions. To learn about all the features and how to use them, read Ben’s blog post.

Example of how the Graph View looks

AWS Lambda now supports Node.js 16.x runtime – Now you can start using the Node.js 16 runtime when you create a new function or update your existing functions to use it. You can also use the new container image base that supports this runtime. To learn more about this launch, check Dan’s blog post.

AWS Amplify announces its Android library designed for Kotlin – The Amplify Android library has been rewritten for Kotlin, and now it is available in preview. This new library provides better debugging capacities and visibility into underlying state management. And it is also using the new AWS SDK for Kotlin that was released last year in preview. Read the What’s New post for more information.

Three new APIs for batch data retrieval in AWS IoT SiteWise – With this new launch AWS IoT SiteWise now supports batch data retrieval from multiple asset properties. The new APIs allow you to retrieve current values, historical values, and aggregated values. Read the What’s New post to learn how you can start using the new APIs.

AWS Secrets Manager now publishes secret usage metrics to Amazon CloudWatch – This launch is very useful to see the number of secrets in your account and set alarms for any unexpected increase or decrease in the number of secrets. Read the documentation on Monitoring Secrets Manager with Amazon CloudWatch for more information.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News

Some other launches and news that you may have missed:

IBM signed a deal with AWS to offer its software portfolio as a service on AWS. This allows customers using AWS to access IBM software for automation, data and artificial intelligence, and security that is built on Red Hat OpenShift Service on AWS.

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish. This week’s episode introduces you to Amazon DynamoDB and shares stories on how different customers use this database service. You can listen to all the episodes directly from your favorite podcast app or the podcast web page.

AWS Open Source News and Updates – Ricardo Sueiras, my colleague from the AWS Developer Relation team, runs this newsletter. It brings you all the latest open-source projects, posts, and more. Read edition #112 here.

Upcoming AWS Events

It’s AWS Summits season and here are some virtual and in-person events that might be close to you:

You can register for re:MARS to get fresh ideas on topics such as machine learning, automation, robotics, and space. The conference will be in person in Las Vegas, June 21–24.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia




Personalize your machine translation results by using fuzzy matching with Amazon Translate


A person’s vernacular is part of the characteristics that make them unique. There are often countless different ways to express one specific idea. When a firm communicates with their customers, it’s critical that the message is delivered in a way that best represents the information they’re trying to convey. This becomes even more important when it comes to professional language translation. Customers of translation systems and services expect accurate and highly customized outputs. To achieve this, they often reuse previous translation outputs—called translation memory (TM)—and compare them to new input text. In computer-assisted translation, this technique is known as fuzzy matching. The primary function of fuzzy matching is to assist the translator by speeding up the translation process. When an exact match can’t be found in the TM database for the text being translated, translation management systems (TMSs) often have the option to search for a match that is less than exact. Potential matches are provided to the translator as additional input for final translation. Translators who enhance their workflow with machine translation capabilities such as Amazon Translate often expect fuzzy matching data to be used as part of the automated translation solution.

In this post, you learn how to customize output from Amazon Translate according to translation memory fuzzy match quality scores.

Translation Quality Match

The XML Localization Interchange File Format (XLIFF) standard is often used as a data exchange format between TMSs and Amazon Translate. XLIFF files produced by TMSs include source and target text data along with match quality scores based on the available TM. These scores—usually expressed as a percentage—indicate how close the translation memory is to the text being translated.

Some customers with very strict requirements only want machine translation to be used when match quality scores are below a certain threshold. Beyond this threshold, they expect their own translation memory to take precedence. Translators often need to apply these preferences manually either within their TMS or by altering the text data. This flow is illustrated in the following diagram. The machine translation system processes the translation data—text and fuzzy match scores— which is then reviewed and manually edited by translators, based on their desired quality thresholds. Applying thresholds as part of the machine translation step allows you to remove these manual steps, which improves efficiency and optimizes cost.

Figure 1: Machine Translation Review Flow

The solution presented in this post allows you to enforce rules based on match quality score thresholds to drive whether a given input text should be machine translated by Amazon Translate or not. When not machine translated, the resulting text is left to the discretion of the translators reviewing the final output.

Solution Architecture

The solution architecture illustrated in Figure 2 leverages the following services:

  • Amazon Simple Storage Service – Amazon S3 buckets contain the following content:
    • Fuzzy match threshold configuration files
    • Source text to be translated
    • Amazon Translate input and output data locations
  • AWS Systems Manager – We use Parameter Store parameters to store match quality threshold configuration values
  • AWS Lambda – We use two Lambda functions:
    • One function preprocesses the quality match threshold configuration files and persists the data into Parameter Store
    • One function automatically creates the asynchronous translation jobs
  • Amazon Simple Queue Service – An Amazon SQS queue triggers the translation flow as a result of new files coming into the source bucket

Figure 2: Solution Architecture

You first set up quality thresholds for your translation jobs by editing a configuration file and uploading it into the fuzzy match threshold configuration S3 bucket. The following is a sample configuration in CSV format. We chose CSV for simplicity, although you can use any format. Each line represents a threshold to be applied to either a specific translation job or as a default value to any job.

default, 75
SourceMT-Test, 80

The specifications of the configuration file are as follows:

  • Column 1 should be populated with the name of the XLIFF file—without extension—provided to the Amazon Translate job as input data.
  • Column 2 should be populated with the quality match percentage threshold. For any score below this value, machine translation is used.
  • For all XLIFF files whose name doesn’t match any name listed in the configuration file, the default threshold is used—the line with the keyword default set in Column 1.

Figure 3: Auto-generated parameter in Systems Manager Parameter Store

When a new file is uploaded, Amazon S3 triggers the Lambda function in charge of processing the parameters. This function reads and stores the threshold parameters into Parameter Store for future usage. Using Parameter Store avoids performing redundant Amazon S3 GET requests each time a new translation job is initiated. The sample configuration file produces the parameter tags shown in the following screenshot.
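
A minimal sketch of what such a parameter-processing function might look like is shown below; the parameter root path is a hypothetical value, and the actual function deployed by the CloudFormation template may differ in its details:

import boto3

s3 = boto3.client("s3")
ssm = boto3.client("ssm")

PARAMETER_ROOT = "/translate/fuzzy-thresholds"  # hypothetical root path


def handler(event, context):
    """Triggered by Amazon S3 when a new threshold configuration file is uploaded."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Each CSV line maps a job name (or the keyword default) to a threshold.
        for line in body.splitlines():
            if not line.strip():
                continue
            name, threshold = [part.strip() for part in line.split(",")]
            ssm.put_parameter(
                Name=f"{PARAMETER_ROOT}/{name}",
                Value=threshold,
                Type="String",
                Overwrite=True,
            )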

The job initialization Lambda function uses these parameters to preprocess the data prior to invoking Amazon Translate. We use an English-to-Spanish translation XLIFF input file, as shown in the following code. It contains the initial text to be translated, broken down into what is referred to as segments, represented in the source tags.

Consent Form CONSENT FORM FORMULARIO DE CONSENTIMIENTO Screening Visit: Screening Visit Selección

The source text has been pre-matched with the translation memory beforehand. The data contains potential translation alternatives—represented as tags—alongside a match quality attribute, expressed as a percentage. The business rule is as follows:

  • Segments received with alternative translations and a match quality below the threshold are untouched or empty. This signals to Amazon Translate that they must be translated.
  • Segments received with alternative translations with a match quality above the threshold are pre-populated with the suggested target text. Amazon Translate skips those segments.

Let’s assume the quality match threshold configured for this job is 80%. The first segment with 99% match quality isn’t machine translated, whereas the second segment is, because its match quality is below the defined threshold. In this configuration, Amazon Translate produces the following output:
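
Independent of the XLIFF parsing details, the per-segment decision reduces to a simple comparison, sketched here with the 99% match from the example and a hypothetical below-threshold score:

def keep_translation_memory(match_quality: float, threshold: float) -> bool:
    """Return True when the translation memory suggestion should be kept.

    Segments at or above the threshold keep their suggested target text, so
    Amazon Translate skips them; segments below it are left empty so that
    Amazon Translate machine translates them.
    """
    return match_quality >= threshold


# With the 80% threshold from this example:
assert keep_translation_memory(99, 80) is True    # first segment: keep the TM suggestion
assert keep_translation_memory(75, 80) is False   # hypothetical low score: machine translate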

Consent Form FORMULARIO DE CONSENTIMIENTO CONSENT FORM FORMULARIO DE CONSENTIMIENTO Screening Visit: Visita de selección Screening Visit Selección

In the second segment, Amazon Translate overwrites the target text initially suggested (Selección) with a higher quality translation: Visita de selección.

One possible extension to this use case could be to reuse the translated output and create our own translation memory. Amazon Translate supports customization of machine translation using translation memory thanks to the parallel data feature. Text segments previously machine translated due to their initial low-quality score could then be reused in new translation projects.

In the following sections, we walk you through the process of deploying and testing this solution. You use AWS CloudFormation scripts and data samples to launch an asynchronous translation job personalized with a configurable quality match threshold.

Prerequisites

For this walkthrough, you must have an AWS account. If you don’t have an account yet, you can create and activate one.

Launch AWS CloudFormation stack

  1. Choose Launch Stack:
  2. For Stack name, enter a name.
  3. For ConfigBucketName, enter the S3 bucket containing the threshold configuration files.
  4. For ParameterStoreRoot, enter the root path of the parameters created by the parameters processing Lambda function.
  5. For QueueName, enter the SQS queue that you create to post new file notifications from the source bucket to the job initialization Lambda function. This is the function that reads the configuration file.
  6. For SourceBucketName, enter the S3 bucket containing the XLIFF files to be translated. If you prefer to use a preexisting bucket, you need to change the value of the CreateSourceBucket parameter to No.
  7. For WorkingBucketName, enter the S3 bucket Amazon Translate uses for input and output data.
  8. Choose Next.

    Figure 4: CloudFormation stack details

  9. Optionally on the Stack Options page, add key names and values for the tags you may want to assign to the resources about to be created.
  10. Choose Next.
  11. On the Review page, select I acknowledge that this template might cause AWS CloudFormation to create IAM resources.
  12. Review the other settings, then choose Create stack.

AWS CloudFormation takes several minutes to create the resources on your behalf. You can watch the progress on the Events tab on the AWS CloudFormation console. When the stack has been created, you can see a CREATE_COMPLETE message in the Status column on the Overview tab.

Test the solution

Let’s go through a simple example.

  1. Download the following sample data.
  2. Unzip the content.

There should be two files: an .xlf file in XLIFF format, and a threshold configuration file with .cfg as the extension. The following is an excerpt of the XLIFF file.

Figure 5: English to French sample file extract

  1. On the Amazon S3 console, upload the quality threshold configuration file into the configuration bucket you specified earlier.

The value set for test_En_to_Fr is 75%. You should be able to see the parameters on the Systems Manager console in the Parameter Store section.

  1. Still on the Amazon S3 console, upload the .xlf file into the S3 bucket you configured as source. Make sure the file is under a folder named translate (for example, /translate/test_En_to_Fr.xlf).

This starts the translation flow.

  1. Open the Amazon Translate console.

A new job should appear with a status of In Progress.

Figure 6: In-progress translation jobs on the Amazon Translate console

  1. Once the job is complete, click into the job’s link and consult the output.

All segments should have been translated. In the translated XLIFF file, look for segments with additional attributes named lscustom:match-quality, as shown in the following screenshot. These custom attributes identify segments where suggested translation was retained based on score.

Figure 7: Custom attributes identifying segments where suggested translation was retained based on score

These were derived from the translation memory according to the quality threshold. All other segments were machine translated.

You have now deployed and tested an automated asynchronous translation job assistant that enforces configurable translation memory match quality thresholds. Great job!

Cleanup

If you deployed the solution into your account, don’t forget to delete the CloudFormation stack to avoid any unexpected cost. You need to empty the S3 buckets manually beforehand.

Conclusion

In this post, you learned how to customize your Amazon Translate translation jobs based on standard XLIFF fuzzy matching quality metrics. With this solution, you can greatly reduce the manual labor involved in reviewing machine translated text while also optimizing your usage of Amazon Translate. You can also extend the solution with data ingestion automation and workflow orchestration capabilities, as described in Speed Up Translation Jobs with a Fully Automated Translation System Assistant.

About the Authors

Narcisse Zekpa is a Solutions Architect based in Boston. He helps customers in the Northeast U.S. accelerate their adoption of the AWS Cloud by providing architectural guidance and designing innovative, scalable solutions. When Narcisse is not building, he enjoys spending time with his family, traveling, cooking, and playing basketball.

Dimitri Restaino is a Solutions Architect at AWS, based out of Brooklyn, New York. He works primarily with Healthcare and Financial Services companies in the North East, helping to design innovative and creative solutions to best serve their customers. Coming from a software development background, he is excited by the new possibilities that serverless technology can bring to the world. Outside of work, he loves to hike and explore the NYC food scene.


