Label text for aspect-based sentiment analysis using SageMaker Ground Truth

The Amazon Machine Learning Solutions Lab (MLSL) recently created a tool for annotating text with named-entity recognition (NER) and relationship labels using Amazon SageMaker Ground Truth. Annotators use this tool to label text with named entities and link their relationships, thereby building a dataset for training state-of-the-art natural language processing (NLP) machine learning (ML) models. Most importantly, this is now publicly available to all AWS customers.

Customer Use Case: Booking.com

Booking.com is one of the world’s leading online travel platforms. Understanding what customers are saying about the company’s 28 million+ property listings on the platform is essential for maintaining a top-notch customer experience. Previously, Booking.com could only utilize traditional sentiment analysis to interpret customer-generated reviews at scale. Looking to upgrade the specificity of these interpretations, Booking.com recently turned to the MLSL for help with building a custom annotated dataset for training an aspect-based sentiment analysis model.

Traditional sentiment analysis classifies a piece of text as positive, negative, or neutral with a single overall label. This works for broadly understanding whether users are satisfied or unsatisfied with a particular experience. For example, traditional sentiment analysis might classify the following text as “neutral”:

Our stay at the hotel was nice. The staff was friendly and the rooms were clean, but our beds were quite uncomfortable.

Aspect-based sentiment analysis offers a more nuanced understanding of content. In the case of Booking.com, rather than taking a customer review as a whole and classifying it categorically, it can take sentiment from within a review and assign it to specific aspects. For example, customer reviews of a given hotel might praise the immaculate pool and fitness area, but give critical feedback on the restaurant and lounge.

The statement that traditional sentiment analysis would have classified as “neutral” becomes, with aspect-based sentiment analysis:

Our stay at the hotel was nice. The staff was friendly and the rooms were clean, but our beds were quite uncomfortable.

  • Hotel: Positive
  • Staff: Positive
  • Room: Positive
  • Beds: Negative

Booking.com sought to build a custom aspect-based sentiment analysis model that would tell them which specific parts of the guest experience (from a list of 50+ aspects) were positive, negative, or neutral.

Before Booking.com could build a training dataset for this model, they needed a way to annotate it. MLSL’s annotation tool provided the needed customized solution. Annotators performed human review on a large collection of hotel reviews, completing named-entity annotation on sentiment and guest-experience text spans and phrases before linking the appropriate spans together.


The new aspect-based model lets Booking.com personalize both accommodations and reviews to its customers. Highlighting the positive and negative aspects of each accommodation enables the customers to choose their perfect match. In addition, different customers care about different aspects of the accommodation, and the new model opens up the opportunity to show the most relevant reviews to each one.

Labeling Requirements

Although Ground Truth provides a built-in NER text annotation capability, it doesn’t provide the ability to link entities together. With this in mind, Booking.com and MLSL worked out the following high-level requirements for a new named entity recognition text labeling tool that:

  • Accepts as input: text, entity labels, relationship labels, and classification labels.
  • Optionally accepts as input pre-annotated data with the preceding label and relationship annotations.
  • Presents the annotator with either unannotated or pre-annotated text.
  • Allows annotators to highlight and annotate arbitrary text with an entity label.
  • Allows annotators to create relationships between two entity annotations.
  • Allows annotators to easily navigate large numbers of entity labels.
  • Supports grouping entity labels into categories.
  • Allows overlapping relationships, which means that the same annotated text segment can be related to more than one other annotated text segment.
  • Allows overlapping entity label annotations, which means that two annotations can overlap the same piece of text. For example, the text “Seattle Space Needle” can have both the annotations “Seattle” → “locations”, and “Seattle Space Needle” → “attractions”.
  • Produces output in a format compatible with the input format, so that it can be fed back into subsequent labeling tasks.
  • Supports UTF-8 encoded text containing emoji and other multi-byte characters.
  • Supports left-to-right languages.

Sample Annotation

Consider the following document:

We loved the location of this hotel! The rooftop lounge gave us the perfect view of space needle. It is also a short drive away from pike place market and the waterfront.
Food was only available via room service, which was a little disappointing but makes sense in this post-pandemic world.
Overall, a reasonably priced experience.

Loading this document into the new NER annotation tool presents a worker with the following interface:

Worker presented with an unannotated document

In this case, the worker’s job is to:

  • Label entities related to the property (location, price, food, etc.)
  • Label entities related to sentiment (positive, negative, or neutral)
  • Link property-related named entities to sentiment-related keywords to accurately capture the guest experience

Worker performing annotations

Annotation speed was an important consideration in the tool’s design. Using a sequence of intuitive keyboard shortcuts and mouse gestures, annotators can drive the interface to:

  • Add and remove named entity annotations
  • Add relationships between named entities
  • Jump to the beginning and end of the document
  • Submit the document

Additionally, there is support for overlapping labels. For example, in the phrase “Seattle Space Needle”, “Seattle” is annotated both as a location on its own and as part of the attraction name.

The completed annotation provides a more complete, nuanced analysis of the data:

Completed document

Relationships can be configured at many levels: from one entity category to another (for example, from “food” to “sentiment”), or between individual entity types. Relationships are directed, so annotators can link an aspect like food to a sentiment, but not vice versa (unless explicitly enabled). When drawing relationships, the annotation tool automatically deduces the relationship label and direction.

Configuring the NER Annotation Tool

In this section, we cover how to customize the NER annotation tool for customer-specific use cases. This includes configuring:

  • The input text to annotate
  • Entity labels
  • Relationship labels
  • Classification labels
  • Pre-annotated data
  • Worker instructions

We’ll cover the specifics of the input and output document formats, as well as provide some examples of each.

Input Document Format

The NER annotation tool expects input documents in the following JSON format (fields whose names are marked with a question mark are optional):

{
  text: string;
  tokenRows?: string[][];
  documentId?: string;
  entityLabels?: {
    name: string;
    shortName?: string;
    category?: string;
    shortCategory?: string;
    color?: string;
  }[];
  classificationLabels?: string[];
  relationshipLabels?: {
    name: string;
    allowedRelationships?: {
      sourceEntityLabelCategories?: string[];
      targetEntityLabelCategories?: string[];
      sourceEntityLabels?: string[];
      targetEntityLabels?: string[];
    }[];
  }[];
  entityAnnotations?: {
    id: string;
    start: number;
    end: number;
    text: string;
    label: string;
    labelCategory?: string;
  }[];
  relationshipAnnotations?: {
    sourceEntityAnnotationId: string;
    targetEntityAnnotationId: string;
    label: string;
  }[];
  classificationAnnotations?: string[];
  meta?: {
    instructions?: string;
    disableSubmitConfirmation?: boolean;
    multiClassification: boolean;
  };
}

In a nutshell, the input format has these characteristics:

  • Either entityLabels or classificationLabels (or both) are required to annotate.
  • If entityLabels are given, then relationshipLabels can be added.
  • Relationships can be allowed between different entity/category labels or a mix of these.
  • The “source” of a relationship is the entity that the directed arrow starts from, while the “target” is the entity it points to, as the example below shows.
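
For example, a relationship label that may only be drawn from aspect entities to sentiment entities could be declared like this (hypothetical label and category names, not taken from the downloadable samples):

{
  "relationshipLabels": [
    {
      "name": "has_sentiment",
      "allowedRelationships": [
        {
          "sourceEntityLabelCategories": ["facility"],
          "targetEntityLabelCategories": ["sentiment"]
        }
      ]
    }
  ]
}

With this configuration, an annotator can draw a has_sentiment arrow from a “facility” entity to a “sentiment” entity, but not the reverse.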
Field | Type | Description
text | string | Required. Input text for annotation.
tokenRows | string[][] | Optional. Custom tokenization of the input text, as an array of arrays of strings. The top-level array represents each row of text (line breaks), and each second-level array represents the tokens on that row. All characters/runes in the input text must be accounted for in tokenRows, including any white space.
documentId | string | Optional. Value for customers to keep track of the document being annotated.
entityLabels | object[] | Required if classificationLabels is blank. Array of entity labels.
entityLabels[].name | string | Required. Entity label display name.
entityLabels[].category | string | Optional. Entity label category name.
entityLabels[].shortName | string | Optional. Display this text over annotated entities rather than the full name.
entityLabels[].shortCategory | string | Optional. Display this text in the entity annotation select dropdown instead of the first four letters of the category name.
entityLabels[].color | string | Optional. Hex color code with “#” prefix. If blank, a color is automatically assigned to the entity label.
relationshipLabels | object[] | Optional. Array of relationship labels.
relationshipLabels[].name | string | Required. Relationship label display name.
relationshipLabels[].allowedRelationships | object[] | Optional. Array of values restricting which source and target entity labels this relationship can be assigned to. Items in the array are OR’ed together.
relationshipLabels[].allowedRelationships[].sourceEntityLabelCategories | string[] | Either sourceEntityLabelCategories or sourceEntityLabels (or both) must be set. List of legal source entity label categories for this relationship.
relationshipLabels[].allowedRelationships[].targetEntityLabelCategories | string[] | Either targetEntityLabelCategories or targetEntityLabels (or both) must be set. List of legal target entity label categories for this relationship.
relationshipLabels[].allowedRelationships[].sourceEntityLabels | string[] | Either sourceEntityLabelCategories or sourceEntityLabels (or both) must be set. List of legal source entity label types for this relationship.
relationshipLabels[].allowedRelationships[].targetEntityLabels | string[] | Either targetEntityLabelCategories or targetEntityLabels (or both) must be set. List of legal target entity label types for this relationship.
classificationLabels | string[] | Required if entityLabels is blank. List of document-level classification labels.
entityAnnotations | object[] | Optional. Array of entity annotations to pre-annotate the input text with.
entityAnnotations[].id | string | Required. Unique identifier for this entity annotation. Used to reference this entity in relationshipAnnotations.
entityAnnotations[].start | number | Required. Start rune offset of this entity annotation.
entityAnnotations[].end | number | Required. End rune offset of this entity annotation.
entityAnnotations[].text | string | Required. Text content between the start and end rune offsets.
entityAnnotations[].label | string | Required. Associated entity label name (from the names in entityLabels).
entityAnnotations[].labelCategory | string | Optional. Associated entity label category (from the categories in entityLabels).
relationshipAnnotations | object[] | Optional. Array of relationship annotations.
relationshipAnnotations[].sourceEntityAnnotationId | string | Required. Source entity annotation ID for this relationship.
relationshipAnnotations[].targetEntityAnnotationId | string | Required. Target entity annotation ID for this relationship.
relationshipAnnotations[].label | string | Required. Associated relationship label name.
classificationAnnotations | string[] | Optional. Array of classifications to pre-annotate the document with.
meta | object | Optional. Additional configuration parameters.
meta.instructions | string | Optional. Instructions for the labeling annotator, in Markdown format.
meta.disableSubmitConfirmation | boolean | Optional. Set to true to disable the submit confirmation modal.
meta.multiClassification | boolean | Optional. Set to true to enable multi-label mode for classificationLabels.

Here are a few sample documents to get a better sense of this input format.
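
For instance, a minimal hotel-review input document might look like this (hypothetical label and category names; the downloadable samples may differ):

{
  "text": "The rooms were clean, but our beds were quite uncomfortable.",
  "documentId": "review-0001",
  "entityLabels": [
    { "name": "room", "category": "facility" },
    { "name": "beds", "category": "facility" },
    { "name": "positive", "category": "sentiment", "color": "#2e7d32" },
    { "name": "negative", "category": "sentiment", "color": "#c62828" }
  ],
  "relationshipLabels": [
    {
      "name": "has_sentiment",
      "allowedRelationships": [
        {
          "sourceEntityLabelCategories": ["facility"],
          "targetEntityLabelCategories": ["sentiment"]
        }
      ]
    }
  ],
  "meta": {
    "instructions": "Label facilities and sentiment words, then link each facility to its sentiment.",
    "multiClassification": false
  }
}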

Documents that adhere to this schema are provided to Ground Truth as individual line items in an input manifest.

Output Document Format

The output format is designed to feed back easily into a new annotation task. Optional fields in the output document are set if they are also set in the input document. The only difference between the input and output formats is the meta object.

{
  text: string;
  tokenRows?: string[][];
  documentId?: string;
  entityLabels?: {
    name: string;
    shortName?: string;
    category?: string;
    shortCategory?: string;
    color?: string;
  }[];
  relationshipLabels?: {
    name: string;
    allowedRelationships?: {
      sourceEntityLabelCategories?: string[];
      targetEntityLabelCategories?: string[];
      sourceEntityLabels?: string[];
      targetEntityLabels?: string[];
    }[];
  }[];
  classificationLabels?: string[];
  entityAnnotations?: {
    id: string;
    start: number;
    end: number;
    text: string;
    labelCategory?: string;
    label: string;
  }[];
  relationshipAnnotations?: {
    sourceEntityAnnotationId: string;
    targetEntityAnnotationId: string;
    label: string;
  }[];
  classificationAnnotations?: string[];
  meta: {
    instructions?: string;
    disableSubmitConfirmation?: boolean;
    multiClassification: boolean;
    runes: string[];
    rejected: boolean;
    rejectedReason: string;
  };
}

Field | Type | Description
meta.rejected | boolean | Set to true if the annotator rejected this document.
meta.rejectedReason | string | The annotator’s reason for rejecting the document.
meta.runes | string[] | Array of runes accounting for all of the characters in the input text. Used to calculate entity annotation start and end offsets.

Here is a sample output document that’s been annotated:
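
(Illustrative example with hypothetical values, continuing the input sample above; end offsets are assumed to be exclusive.)

{
  "text": "The rooms were clean, but our beds were quite uncomfortable.",
  "documentId": "review-0001",
  "entityLabels": [
    { "name": "room", "category": "facility" },
    { "name": "beds", "category": "facility" },
    { "name": "positive", "category": "sentiment" },
    { "name": "negative", "category": "sentiment" }
  ],
  "entityAnnotations": [
    { "id": "e1", "start": 4, "end": 9, "text": "rooms", "labelCategory": "facility", "label": "room" },
    { "id": "e2", "start": 15, "end": 20, "text": "clean", "labelCategory": "sentiment", "label": "positive" },
    { "id": "e3", "start": 30, "end": 34, "text": "beds", "labelCategory": "facility", "label": "beds" },
    { "id": "e4", "start": 46, "end": 59, "text": "uncomfortable", "labelCategory": "sentiment", "label": "negative" }
  ],
  "relationshipAnnotations": [
    { "sourceEntityAnnotationId": "e1", "targetEntityAnnotationId": "e2", "label": "has_sentiment" },
    { "sourceEntityAnnotationId": "e3", "targetEntityAnnotationId": "e4", "label": "has_sentiment" }
  ],
  "meta": {
    "multiClassification": false,
    "runes": ["T", "h", "e", " ", "r", "o", "o", "m", "s", "..."],
    "rejected": false,
    "rejectedReason": ""
  }
}

(The runes array is truncated here for brevity; real output lists every character of the text.)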

Runes note:

A “rune” in this context is a single highlight-able character in text, including multi-byte characters such as emoji.

  • Because different programming languages represent multi-byte characters differently, using “runes” to define every highlight-able character as a single atomic element gives us an unambiguous way to describe any given text selection.
  • For example, Python 3 counts the Swedish flag emoji as two characters (its two Unicode code points), while JavaScript’s string length counts the same emoji as four (its UTF-16 code units).

To eliminate any ambiguity, we treat the Swedish flag (and all other emoji and multi-byte characters) as a single atomic element.

  • Offset: a rune’s position relative to the input text (starting at index 0)
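
A quick Python 3 illustration of the discrepancy (standard library only; the counts here motivate the rune convention rather than reproduce the tool’s implementation):

flag = "\U0001F1F8\U0001F1EA"  # Swedish flag emoji: two regional-indicator code points
print(len(flag))                           # 2: Python 3 counts Unicode code points
print(len(flag.encode("utf-16-le")) // 2)  # 4: UTF-16 code units, which is what JavaScript's .length counts
# The annotation tool sidesteps both counts by treating the whole flag as one
# rune, so a selection covering just the flag spans a single offset.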

Performing NER Annotations with Ground Truth

As a fully managed data labeling service, Ground Truth builds training datasets for ML. For this use case, we use Ground Truth to send a collection of text documents to a pool of workers for annotation. Finally, we review the results for quality.

Ground Truth can be configured to build a data labeling job using the new NER tool as a custom template.

Specifically, we will:

  1. Create a private labeling workforce of workers to perform the annotation task
  2. Create a Ground Truth input manifest with the documents we want to annotate and then upload it to Amazon Simple Storage Service (Amazon S3)
  3. Create pre-labeling task and post-labeling task Lambda functions
  4. Create a Ground Truth labeling job using the custom NER template
  5. Annotate documents
  6. Review results

NER Tool Resources

A complete list of referenced resources and sample documents can be found in the following chart:

Labeling Workforce Creation

Ground Truth uses SageMaker labeling workforces to manage workers and distribute tasks. Create a private workforce, a worker team called ner-worker-team, and assign yourself to the team using the instructions found in Create a Private Workforce (Amazon SageMaker Console).

Once you’ve added yourself to a private workforce and confirmed your email, note the worker portal URL from the AWS Management Console:

  • Navigate to SageMaker
  • Navigate to Ground Truth → Labeling workforces
  • Select the Private tab
  • Note the Labeling portal sign-in URL

Log in to the worker portal to view and start work on labeling tasks.

Input Manifest

The Ground Truth input data manifest is a JSON-lines file where each line contains a single worker task. In our case, each line will contain a single JSON-encoded input document containing the text that we want to annotate and the NER annotation schema.

Download the sample input manifest reviews.manifest from https://assets.solutions-lab.ml/NER/0.2.1/sample-data/reviews.manifest

Note: each row in the input manifest needs a top-level key source or source-ref. You can learn more in Use an Input Manifest File in the Amazon SageMaker Developer Guide.
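
For illustration, a manifest of this shape could be assembled in Python as follows. This is a hedged sketch, assuming each NER input document is embedded as a JSON-encoded string under source; the downloadable sample may structure its lines differently:

import json

# Sketch: write one worker task per line, with the NER input document
# JSON-encoded under the required top-level "source" key.
documents = [
    {"text": "We loved the location of this hotel!",
     "entityLabels": [{"name": "location", "category": "facility"},
                      {"name": "positive", "category": "sentiment"}]},
]

with open("reviews.manifest", "w") as f:
    for doc in documents:
        f.write(json.dumps({"source": json.dumps(doc)}) + "\n")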

Upload Input Manifest to Amazon S3

Upload this input manifest to an S3 bucket using the AWS Management Console or from the command line, replacing your-bucket with an actual bucket name.

aws s3 cp reviews.manifest s3://your-bucket/ner-input/reviews.manifest

Download custom worker template

Download the NER tool custom worker template from https://assets.solutions-lab.ml/NER/0.2.1/worker-template.liquid.html by viewing the source and saving the contents locally, or from the command line:

wget https://assets.solutions-lab.ml/NER/0.2.1/worker-template.liquid.html

Create pre-labeling task and post-labeling task Lambda functions

Download the sample pre-labeling task Lambda function smgt-ner-pre-labeling-task-lambda.py from https://assets.solutions-lab.ml/NER/0.2.1/sample-scripts/smgt-ner-pre-labeling-task-lambda.py

Download the sample post-labeling task Lambda function smgt-ner-post-labeling-task-lambda.py from https://assets.solutions-lab.ml/NER/0.2.1/sample-scripts/smgt-ner-post-labeling-task-lambda.py
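
For orientation before the console steps: a Ground Truth pre-labeling handler follows a documented contract, receiving one manifest line under dataObject and returning the template’s task input. The following is a minimal sketch of that contract, not the downloaded script itself, and assumes the manifest embeds the document under source as in the sketch above:

import json

def lambda_handler(event, context):
    # Ground Truth invokes this once per manifest line; the line's content
    # arrives under "dataObject".
    source = event["dataObject"].get("source", "{}")
    # Assumption: the manifest embeds the NER input document as a
    # JSON-encoded string. Decode it for the worker template (task.input).
    return {"taskInput": json.loads(source)}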

  • Create pre-labeling task Lambda function from the AWS Management Console:
    • Navigate to Lambda
    • Select Create function
    • Specify Function name as smgt-ner-pre-labeling-task-lambda
    • Select Runtime → Python 3.6
    • Select Create function
    • In Function code → lambda_function.py, paste the contents of smgt-ner-pre-labeling-task-lambda.py
    • Select Deploy
  • Create post-labeling task Lambda function from the AWS Management Console:
    • Navigate to Lambda
    • Select Create function
    • Specify Function name as smgt-ner-post-labeling-task-lambda
    • Select Runtime → Python 3.6
    • Expand Change default execution role
    • Select Create a new role from AWS policy templates
    • Enter the Role name: smgt-ner-post-labeling-task-lambda-role
    • Select Create function
    • Select the Permissions tab
    • Select the Role name: smgt-ner-post-labeling-task-lambda-role to open the IAM console
    • Add two policies to the role
      • Select Attach policies
      • Attach the AmazonS3FullAccess policy
      • Select Add inline policy
      • Select the JSON tab
      • Paste in the following inline policy, replacing YOUR_ACCOUNT_NUMBER with your numerical AWS account number:

        {
          "Version": "2012-10-17",
          "Statement": {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::YOUR_ACCOUNT_NUMBER:role/service-role/AmazonSageMaker-ExecutionRole-*"
          }
        }
    • Navigate back to the smgt-ner-post-labeling-task-lambda Lambda function configuration page
    • Select the Configuration tab
    • In Function code → lambda_function.py, paste the contents of smgt-ner-post-labeling-task-lambda.py
    • Select Deploy
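
The post-labeling (annotation consolidation) contract is also documented: Ground Truth hands the function an S3 URI of worker responses, and the function returns one consolidated annotation per dataset object. Below is a minimal sketch assuming a single worker per task and inline worker responses; the downloaded script is more complete and assumes the SageMaker execution role via sts:AssumeRole, which is why the inline policy above is required:

import json

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Ground Truth stores the batch of worker responses in S3 and passes its URI.
    bucket, key = event["payload"]["s3Uri"].replace("s3://", "").split("/", 1)
    payload = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

    consolidated = []
    for item in payload:
        # Single-worker assumption: take the first worker's answer as-is.
        content = item["annotations"][0]["annotationData"]["content"]
        consolidated.append({
            "datasetObjectId": item["datasetObjectId"],
            "consolidatedAnnotation": {
                "content": {event["labelAttributeName"]: json.loads(content)}
            },
        })
    return consolidated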

Create a Ground Truth labeling job

From the AWS Management Console:

  • Navigate to the Amazon SageMaker service
  • Navigate to Ground Truth → Labeling Jobs.
  • Select Create labeling job
  • Specify a Job Name
  • Select Manual Data Setup
  • Specify the Input dataset location where you uploaded the input manifest earlier (e.g., s3://your-bucket/ner-input/reviews.manifest)
  • Specify the Output dataset location to point to a different folder in the same bucket (e.g., s3://your-bucket/ner-output/)
  • Specify an IAM Role by selecting Create new role
    • Allow this role to access any S3 bucket by selecting S3 buckets you specify → Any S3 bucket when creating the policy
    • In a new AWS Management Console window, open the IAM console and select Roles
    • Search for the name of the role that you just created (for example, AmazonSageMaker-ExecutionRole-20210301T154158)
    • Select the role name to open the role in the console
    • Attach the following three policies:
      • Select Attach policies
      • Attach the AWSLambda_FullAccess to the role
      • Select Trust Relationships → Edit Trust Relationships
      • Edit the trust relationship JSON, replacing YOUR_ACCOUNT_NUMBER with your numerical AWS account number, so that it reads:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": { "Service": "sagemaker.amazonaws.com" },
              "Action": "sts:AssumeRole"
            },
            {
              "Effect": "Allow",
              "Principal": { "AWS": "arn:aws:iam::YOUR_ACCOUNT_NUMBER:role/service-role/smgt-ner-post-labeling-task-lambda-role" },
              "Action": "sts:AssumeRole"
            }
          ]
        }
      • Save the trust relationship
  • Return to the new Ground Truth job in the previous AWS Management Console window: under Task Category, select Custom
  • Select Next
  • Select Worker types: Private
  • Select the Private team : ner-worker-team that was created in the preceding section
  • In the Custom labeling task setup text area, clear the default content and paste in the content of the worker-template.liquid.html file obtained earlier
  • Specify the Pre-labeling task Lambda function with the previously created function: smgt-ner-pre-labeling-task-lambda
  • Specify the Post-labeling task Lambda function with the function created earlier: smgt-ner-post-labeling-task-lambda
  • Select Create

Annotate documents

Once the Ground Truth job is created, we can start annotating documents. Open the worker portal for the workforce created earlier (in the AWS Management Console, navigate to SageMaker → Ground Truth → Labeling workforces, select the Private tab, and open the Labeling portal sign-in URL).

Sign in and select the first labeling task in the table, and then select Start working to open the annotator. Perform your annotations and select Submit on all three of the sample documents.

Review results

As Ground Truth annotators complete tasks, results will be available in the output S3 bucket:

s3://your-bucket/path-to-your-ner-job/annotations/worker-response/iteration-1/0/

Once all tasks for a labeling job are complete, the consolidated output is available in the output.manifest file located here:

s3://your-bucket/path-to-your-ner-job/manifests/output/output.manifest

This output manifest is a JSON-lines file with one annotated text document per line in the “Output Document Format” specified previously. This file is compatible with the “Input Document Format”, so it can be fed directly into a subsequent Ground Truth job for another round of annotation. Alternatively, it can be parsed and sent to an ML training job (a parsing sketch follows the list below). Some scenarios where we might employ a second round of annotations are:

  • Breaking the annotation process into two steps where the first annotator identifies entity annotations and the second annotator draws relationships
  • Taking a sample of our output.manifest and sending it to a second, more experienced annotator for review as a quality control check
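
As a sketch of the training-data path mentioned above, the following assumes each line of output.manifest is an output document as described earlier and prints every annotated relationship:

import json

with open("output.manifest") as f:
    for line in f:
        doc = json.loads(line)
        if doc.get("meta", {}).get("rejected"):
            continue  # skip documents the annotator rejected
        entities = {e["id"]: e for e in doc.get("entityAnnotations", [])}
        for rel in doc.get("relationshipAnnotations", []):
            src = entities[rel["sourceEntityAnnotationId"]]
            tgt = entities[rel["targetEntityAnnotationId"]]
            print(f"{src['text']} ({src['label']}) -> {tgt['text']} ({tgt['label']})")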

Custom Ground Truth Annotation Templates

The NER annotation tool described in this document is implemented as a custom Ground Truth annotation template. AWS customers can build their own custom annotation interfaces using the instructions for custom labeling workflows in the Amazon SageMaker Developer Guide.

Conclusion

By working together, Booking.com and the Amazon MLSL were able to develop a powerful text annotation tool that is capable of creating complex named-entity recognition and relationship annotations.

We encourage AWS customers with an NER text annotation use case to try the tool described in this post. If you’d like help accelerating the use of ML in your products and services, please contact the Amazon Machine Learning Solutions Lab.

About the Authors

Dan Noble is a Software Development Engineer at Amazon where he helps build delightful user experiences. In his spare time, he enjoys reading, exercising, and having adventures with his family.

Pri Nonis is a Deep Learning Architect at the Amazon ML Solutions Lab, where he works with customers across various verticals, helping them accelerate their cloud migration journey and solve their ML problems using state-of-the-art solutions and technologies.

Niharika Jayanthi is a Front End Engineer at AWS, where she develops custom annotation solutions for Amazon SageMaker customers. Outside of work, she enjoys going to museums and working out.

Amit Beka is a Machine Learning Manager at Booking.com, with over 15 years of experience in software development and machine learning. He is fascinated with people and languages, and how computers are still puzzled by both.


