Integrate Amazon Lex and Uneeq’s digital human platform
In today’s digital landscape, customers are expecting a high-quality experience that is responsive and delightful. Chatbots and virtual assistants have transformed the customer experience from a point-and-click or a drag-and-drop experience to one that is driven by voice or text. You can create a more engaging experience by further augmenting the interaction with a visual modality.
Uneeq is an AWS Partner that specializes in developing animated visualizations of these voice bots and virtual agents, called digital humans. Uneeq’s digital humans can help provide a next-generation customer experience that is visual, animated, and emotional. Having worked with brands across numerous verticals, such as UBS (financial services), Vodafone (telecommunications), and Mentemia (healthcare), Uneeq helps customers enable innovative customer experiences powered by Amazon Lex.
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides natural language understanding (NLU) and automatic speech recognition (ASR), enabling customer experiences that are highly engaging through conversational interactions.
In this post, we guide you through the steps required to configure an Amazon Lex V2 chatbot, connect it to Uneeq’s digital human, and manage a conversation.
Overview of solution
This solution uses the following services:
Amazon Lex
Amazon API Gateway
AWS Lambda
The following diagram illustrates the architecture of our solution.
The architecture uses AWS serverless resources for ease of deployment and to minimize the run costs associated with deploying the solution.
Uneeq’s digital human interfaces with a simple REST API in Amazon API Gateway, configured with a Lambda proxy integration, which in turn interacts with the deployed Amazon Lex bot.
After you deploy the bot, you need to configure it with a basic Welcome intent. In the first interaction with Uneeq’s digital human, the Welcome intent determines the initial phrase the digital human speaks. For example, “Hi, my name is Crissy and I am your digital assistant today. How can I help you?”
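To illustrate this pattern, the following is a minimal sketch of a Lambda proxy handler that relays an utterance to the bot through the Amazon Lex V2 RecognizeText API. The request and response field names (such as question and answer) and the environment variable names are illustrative assumptions; refer to the GitHub repo for the actual contract the Uneeq platform expects.

```javascript
// Minimal sketch of a Lambda proxy handler that relays a question to a
// Lex V2 bot. Field and environment variable names are illustrative
// assumptions, not the repo's actual contract.
const {
  LexRuntimeV2Client,
  RecognizeTextCommand,
} = require("@aws-sdk/client-lex-runtime-v2");

const client = new LexRuntimeV2Client({});

exports.handler = async (event) => {
  const body = JSON.parse(event.body || "{}");

  // Forward the utterance to the Amazon Lex V2 bot
  const response = await client.send(
    new RecognizeTextCommand({
      botId: process.env.BOT_ID,
      botAliasId: process.env.BOT_ALIAS_ID,
      localeId: process.env.LOCALE_ID || "en_US",
      sessionId: body.sessionId || "demo-session", // assumption: caller supplies a session ID
      text: body.question, // assumption: the utterance arrives in a "question" field
    })
  );

  // Return the bot's first message for the digital human to speak
  const answer = response.messages?.[0]?.content || "";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ answer }),
  };
};
```

Reusing the same sessionId across calls lets Amazon Lex maintain conversation state between turns, which is what allows the digital human to carry a multi-turn dialog.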
You deploy the solution with three high-level steps:
Deploy an Amazon Lex bot.
Deploy the integration, which is a simple API Gateway REST API and Lambda function, using AWS Serverless Application Model (AWS SAM).
Create a Uneeq 14-day free trial account and connect Uneeq’s digital human to the Amazon Lex bot.
Prerequisites
To implement this solution, you need the following prerequisites:
An AWS account
The AWS SAM CLI installed and configured
Node.js and npm installed
Git, to clone the GitHub repository
These instructions assume a general working knowledge of the AWS services used in this solution, particularly AWS SAM and AWS CloudFormation.
Deploy an Amazon Lex bot
For this solution, we use the BookTrip sample bot that is provided in Amazon Lex.
On the Amazon Lex V2 console, choose Bots in the navigation pane.
Choose Create bot.
Select Start with an example.
For Example bot, choose BookTrip.
In the Bot configuration section, enter a bot name and optional description.
Under IAM permissions, select Create a role with basic Amazon Lex permissions.
Because this is a bot for demo purposes, it’s not subject to COPPA, so in the Children’s Online Privacy Protection Act (COPPA) section, select No.
Leave the remainder of the settings as default and choose Next.
Choose your preferred language and voice, which is provided by Amazon Polly.
Choose Done to create your bot.
Edit the BookTrip bot welcome intent
When first initiated, Uneeq’s digital human utters dialog to introduce itself based on a welcome intent defined in the Amazon Lex bot.
To add the welcome intent, browse to the intents for the BookTrip bot you just created and create an intent called Welcome by choosing Add intent.
To configure the welcome intent, in the Closing Response section, enter the initial phrase that you want Uneeq’s digital human to utter. For this post, we use “Hi, my name is Crissy and I am your digital assistant today. How can I help you?”
This is the only configuration required for this intent.
Choose Save intent.
Choose Build to build the bot with the Welcome intent.
Record the bot ID, alias ID, locale ID, and Welcome intent name to use in the next step to deploy the integration.
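If you prefer the command line, you can look up these values with the AWS CLI (this assumes the AWS CLI is installed and configured; the bot name matches what you entered when creating the bot):

```bash
# Look up the bot ID by bot name
aws lexv2-models list-bots \
  --filters name=BotName,values=BookTrip,operator=EQ

# List the bot's aliases to find the alias ID
aws lexv2-models list-bot-aliases --bot-id <bot-id>
```

The locale ID is the language you chose when creating the bot (for example, en_US), and the intent name is Welcome.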
Deploy the integration using AWS SAM
Browse to the GitHub repo and clone the lexV2 branch. The template.yaml file is the AWS SAM configuration for the application, and the swagger.yaml file is the OpenAPI definition for the API.
Deploy this application by following the instructions in the README file.
Browse to the root of the cloned repository and install the required dependencies by running the following command:
cd function && npm install && cd ..
Prior to running the deploy command, upload the swagger.yaml file to an S3 bucket.
Deploy the serverless application by running the following command from the root of the repository, and assign values to the listed parameters:
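The exact parameter names are defined in template.yaml and documented in the README; the following is a sketch of the upload and deploy steps, with illustrative parameter names and placeholder values:

```bash
# Upload the OpenAPI definition referenced by the template
aws s3 cp swagger.yaml s3://<your-bucket>/swagger.yaml

# Deploy the application. The parameter names below are illustrative;
# check template.yaml for the actual names.
sam deploy --guided \
  --parameter-overrides \
    BotId=<bot-id> \
    BotAliasId=<bot-alias-id> \
    LocaleId=en_US \
    WelcomeIntent=Welcome

# Retrieve the deployed API endpoint from the stack outputs
aws cloudformation describe-stacks \
  --stack-name <stack-name> \
  --query "Stacks[0].Outputs"
```

Record the API endpoint from the stack outputs; you provide it when you connect the digital human to your bot in the Uneeq console.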
Conclusion
In this post, we implemented a solution that integrates Amazon Lex with Uneeq’s digital human, enhancing the user experience with a visual modality. You can use this solution for multiple use cases by simply configuring it to point to a different Amazon Lex bot.
It’s easy to get started: sign up for a free trial account with Uneeq, and clone the GitHub repo to begin enhancing your customers’ interactions with your business. For more information about Amazon Lex, see Getting started with Amazon Lex and the Amazon Lex V2 Developer Guide.
About the Author
Barry Conway is an Enterprise Solutions Architect with years of experience in the technology industry bridging the gap between business and technology. Barry has helped banking, manufacturing, logistics, and retail organizations realize their business goals.