Optimize F1 aerodynamic geometries via Design of Experiments and machine learning

FORMULA 1 (F1) cars are the fastest regulated road-course racing vehicles in the world. Although these open-wheel automobiles are only 20–30 kilometers per hour (12–18 miles per hour) faster than top-of-the-line sports cars, they can speed around corners up to five times as fast due to the powerful aerodynamic downforce they create. Downforce is the vertical force generated by the aerodynamic surfaces that presses the car towards the road, increasing the grip from the tires. F1 aerodynamicists must also monitor the air resistance or drag, which limits straight-line speed.

The F1 engineering team is in charge of designing the next generation of F1 cars and putting together the technical regulation for the sport. Over the last 3 years, they have been tasked with designing a car that maintains the current high levels of downforce and peak speeds, but is also not adversely affected by driving behind another car. This is important because the previous generation of cars can lose up to 50% of their downforce when racing closely behind another car due to the turbulent wake generated by wings and bodywork.

Instead of relying on time-consuming and costly track or wind tunnel tests, F1 uses Computational Fluid Dynamics (CFD), which provides a virtual environment to study the flow of fluids (in this case the air around the F1 car) without ever having to manufacture a single part. With CFD, F1 aerodynamicists test different geometry concepts, assess their aerodynamic impact, and iteratively optimize their designs. Over the past 3 years, the F1 engineering team has collaborated with AWS to set up a scalable and cost-efficient CFD workflow that has tripled the throughput of CFD runs and cut the turnaround time per run by half.

F1 is in the process of looking into AWS machine learning (ML) services such as Amazon SageMaker to help optimize the design and performance of the car by using the CFD simulation data to build models with additional insights. The aim is to uncover promising design directions and reduce the number of CFD simulations, thereby reducing the time taken to converge to optimal designs.

In this post, we explain how F1 collaborated with the AWS Professional Services team to develop a bespoke Design of Experiments (DoE) workflow powered by ML to advise F1 aerodynamicists on which design concepts to test in CFD to maximize learning and performance.

Problem statement

When exploring new aerodynamic concepts, F1 aerodynamicists sometimes employ a process called Design of Experiments (DoE). This process systematically studies the relationship between multiple factors. In the case of a rear wing, this might be wing chord, span, or camber, with respect to aerodynamic metrics such as downforce or drag. The goal of a DoE process is to efficiently sample the design space and minimize the number of candidates tested before converging to an optimal result. This is achieved by iteratively changing multiple design factors, measuring the aerodynamic response, studying the impact and relationship between factors, and then continuing testing in the most optimum or informative direction. In the following figure, we present an example rear wing geometry that F1 has kindly shared with us from their UNIFORM baseline. Four design parameters which F1 aerodynamicists could investigate in a DoE routine are labeled.

In this project, F1 worked with AWS Professional Services to investigate using ML to enhance DoE routines. Traditional DoE methods require a well-populated design space in order to understand the relationship between design parameters and therefore rely on a large number of upfront CFD simulations. ML regression models can use the results from previous CFD simulations to predict the aerodynamic response for a given set of design parameters, as well as indicate the relative importance of each design variable. You could use these insights to predict optimal designs and help designers converge to optimum solutions with fewer upfront CFD simulations. You could also use data science techniques to understand which regions in the design space haven’t been explored and could potentially hide optimal designs.

To illustrate the bespoke ML-powered DoE workflow, we walk through a real example of designing a front wing.

Designing a front wing

F1 cars rely on wings, such as the front and rear wings, to generate most of their downforce, which we refer to by the coefficient Cz. Throughout this example, the downforce values have been normalized. In this example, F1 aerodynamicists used their domain expertise to parameterize the wing geometry as follows (refer to the following figure for a visual representation):

  • LE-Height – Leading edge height
  • Min-Z – Minimum ground clearance
  • Mid-LE-Angle – Leading edge angle of the third element
  • TE-Angle – Trailing edge angle
  • TE-Height – Trailing edge height

This front wing geometry was shared by F1 and is part of the UNIFORM baseline.

These parameters were selected because they are sufficient to describe the main aspects of the geometry efficiently and because in the past, aerodynamic performance has shown notable sensitivity with respect to these parameters. The goal of this DoE routine was to find the combination of the five design parameters that would maximize aerodynamic downforce (Cz). The design freedom is also limited by setting maximum and minimum values to the design parameters, as shown in the following table.

Parameter      Minimum   Maximum
TE-Height      250.0     300.0
TE-Angle       145.0     165.0
Mid-LE-Angle   160.0     170.0
Min-Z          5.0       50.0
LE-Height      100.0     150.0

Having established the design parameters, the target output metric, and the bounds of our design space, we have all we need to get started with the DoE routine. A workflow diagram of our solution is presented in the following image. In the following section, we dive deep into the different stages.

Initial sampling of the design space

The first step of the DoE workflow is to run an initial set of candidates in CFD that efficiently samples the design space and allows us to build the first set of ML regression models to study the influence of each feature. First, we generate a pool of N samples using Latin Hypercube Sampling (LHS) or a regular grid method. Then, we select k candidates to test in CFD by means of a greedy inputs algorithm, which aims to maximize the exploration of the design space. Starting with a baseline candidate (the current design), we iteratively select candidates furthest away from all the previously tested candidates. Suppose that we already tested k designs x_1, …, x_k; for each remaining design candidate x_i, we find the minimum distance d_i with respect to the tested designs:

d_i = min_{j=1,…,k} ‖x_i − x_j‖

The greedy inputs algorithm then selects the candidate that maximizes this distance in the feature space to the previously tested candidates:

x* = argmax_i d_i = argmax_i min_{j=1,…,k} ‖x_i − x_j‖
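The following is a minimal sketch of this selection step, assuming an LHS pool generated with scipy; the variable names, pool size, and number of picks are illustrative choices rather than F1's actual implementation.

import numpy as np
from scipy.stats import qmc

# Design-space bounds from the table above (TE-Height, TE-Angle, Mid-LE-Angle, Min-Z, LE-Height)
lower = np.array([250.0, 145.0, 160.0, 5.0, 100.0])
upper = np.array([300.0, 165.0, 170.0, 50.0, 150.0])

# Pool of N candidates from Latin Hypercube Sampling
sampler = qmc.LatinHypercube(d=5, seed=0)
pool = qmc.scale(sampler.random(n=1000), lower, upper)

def greedy_inputs(pool, tested, k=3):
    # Iteratively pick the pool candidate whose minimum distance to all previously
    # tested designs is largest; in practice you would normalize each parameter to
    # [0, 1] first so that no single parameter dominates the distance.
    tested = list(tested)
    picks = []
    for _ in range(k):
        d_min = np.min(
            np.linalg.norm(pool[:, None, :] - np.asarray(tested)[None, :, :], axis=-1),
            axis=1,
        )
        best = int(np.argmax(d_min))
        picks.append(pool[best])
        tested.append(pool[best])
    return np.array(picks)

baseline = np.array([292.25, 154.86, 166.0, 5.0, 130.0])
greedy_candidates = greedy_inputs(pool, tested=[baseline], k=3)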

In this DoE, we selected three greedy inputs candidates and ran those in CFD to assess their aerodynamic downforce (Cz). The greedy inputs candidates explore the bounds of the design space and at this stage, none of them proved superior to the baseline candidate in terms of aerodynamic downforce (Cz). The results of this initial round of CFD testing together with the design parameters are displayed in the following table.

Candidate   TE-Height   TE-Angle   Mid-LE-Angle   Min-Z   LE-Height   Normalized Cz
Baseline    292.25      154.86     166            5       130         0.975
GI 0        250         165        160            50      100         0.795
GI 1        300         145        170            50      100         0.909
GI 2        250         145        170            5       100         0.847

Initial ML regression models

The goal of the regression model is to predict Cz for any combination of the five design parameters. With such a small dataset, we prioritized simple models, applied model regularization to avoid overfitting, and combined the predictions of different models where possible. The following ML models were constructed:

  • Ordinary Least Squares (OLS)
  • Support Vector Regression (SVM) with an RBF kernel
  • Gaussian Process Regression (GP) with a Matérn kernel
  • XGBoost

In addition, a two-level stacked model was built, where the predictions of the GP, SVM, and XGBoost models are assimilated by a Lasso algorithm to produce the final response. This model is referred to throughout this post as the stacked model. To rank the predictive capabilities of the five models we described, a repeated k-fold cross validation routine was implemented.
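A rough scikit-learn sketch of these models and the repeated k-fold ranking follows; all hyperparameters are illustrative, and XGBRegressor assumes the xgboost package is installed.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import StackingRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score
from xgboost import XGBRegressor

models = {
    "OLS": LinearRegression(),
    "SVM": SVR(kernel="rbf"),
    "GP": GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True),
    "XGBoost": XGBRegressor(n_estimators=50, max_depth=2),
}

# Two-level stacked model: GP, SVM, and XGBoost predictions feed a Lasso meta-learner
models["Stacked"] = StackingRegressor(
    estimators=[(name, models[name]) for name in ("GP", "SVM", "XGBoost")],
    final_estimator=Lasso(alpha=0.01),
    cv=3,
)

def rank_models(X, y):
    # Rank models by repeated k-fold cross-validated error; repeating the folds
    # stabilizes the estimate on a small dataset
    cv = RepeatedKFold(n_splits=3, n_repeats=10, random_state=0)
    scores = {
        name: -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error").mean()
        for name, model in models.items()
    }
    return dict(sorted(scores.items(), key=lambda item: item[1]))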

Generating the next design candidate to test in CFD

Selecting which candidate to test next requires careful consideration. The F1 aerodynamicist must balance the benefit of exploiting options predicted by the ML model to provide high downforce with the cost of failing to explore uncharted regions of the design space, which may provide even higher downforce. For that reason, in this DoE routine, we propose three candidates: one performance-driven and two exploration-driven. The purpose of the exploration-driven candidates is also to provide additional data points to the ML algorithm in regions of the design space where the uncertainty around the prediction is highest. This in turn leads to more accurate predictions in the next round of design iteration.

Genetic algorithm optimization to maximize downforce

To obtain the candidate with the highest expected aerodynamic downforce, we could run a prediction over all possible design candidates. However, this wouldn’t be efficient. For this optimization problem, we use a genetic algorithm (GA). The goal is to efficiently search through a huge solution space (obtained via the ML prediction of Cz) and return the most optimal candidate. GAs are advantageous when the solution space is complex and non-convex, where classical optimization methods such as gradient descent are an ineffective means to find a global solution. GAs are a subset of evolutionary algorithms that use concepts from natural selection, genetic crossover, and mutation to solve the search problem. Over a series of iterations (known as generations), the best candidates of an initially randomly selected set of design candidates are combined (much like reproduction). Eventually, this mechanism allows you to find the most optimal candidates in an efficient manner. For more information about GAs, refer to Using genetic algorithms on AWS for optimization problems.
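The following is a hand-rolled sketch of the idea rather than the production implementation; the population size, mutation settings, and tournament/crossover scheme are illustrative, and predict_cz stands for the fitted regressor's predict method (for example, the stacked model's predict after fitting).

import numpy as np

rng = np.random.default_rng(0)

# Same design-space bounds as before (TE-Height, TE-Angle, Mid-LE-Angle, Min-Z, LE-Height)
lower = np.array([250.0, 145.0, 160.0, 5.0, 100.0])
upper = np.array([300.0, 165.0, 170.0, 50.0, 150.0])

def maximize_cz(predict_cz, pop_size=100, generations=50, mutation_rate=0.1):
    # predict_cz maps an (n, 5) array of designs to predicted Cz values
    pop = rng.uniform(lower, upper, size=(pop_size, 5))
    for _ in range(generations):
        fitness = predict_cz(pop)
        # Tournament selection: keep the better of two randomly drawn individuals
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fitness[i] > fitness[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parents
        mask = rng.random((pop_size, 5)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the design space
        mutate = rng.random((pop_size, 5)) < mutation_rate
        children = children + mutate * rng.normal(0.0, 0.05 * (upper - lower), (pop_size, 5))
        pop = np.clip(children, lower, upper)
    return pop[np.argmax(predict_cz(pop))]  # design with the highest predicted downforce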

Generating exploration-driven candidates

In generating what we term exploration-driven candidates, a good sampling strategy must be able to adapt to a situation of effect sparsity, where only a subset of the parameters significantly affects the solution. Therefore, the sampling strategy should spread out the candidates across the input design space but also avoid unnecessary CFD runs, changing variables that have little effect on performance. The sampling strategy must take into account the response surface predicted by the ML regressor. Two sampling strategies were employed to obtain exploration-driven candidates.

In the case of Gaussian Process Regressors (GP), the standard deviation of the predicted response surface can be used as an indication of the uncertainty of the model. The sampling strategy consists of selecting, out of the pool of N samples, the candidate x that maximizes the predicted standard deviation σ(x). By doing so, we’re sampling in the region of the design space where the regressor is least confident about its prediction. In mathematical terms, we select the candidate that satisfies the following equation:

x* = argmax_{x ∈ pool} σ(x)

Alternatively, we employ a greedy inputs and outputs sampling strategy, which maximizes both the distance in the feature space and the distance in the response space between the proposed candidate and the already tested designs. This tackles the effect sparsity situation because candidates that modify a design parameter of little relevance have a similar response, and therefore the distances in the response surface are minimal. In mathematical terms, we select the candidate that maximizes the combined feature-space and response-space distance to the tested designs, where the function f is the ML regression model.
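Both exploration strategies reduce to an argmax over the candidate pool. A minimal sketch follows; the additive combination of the two distances in greedy_io_candidate is our assumption, since the exact weighting is not spelled out above.

import numpy as np

def gp_uncertainty_candidate(gp, pool):
    # Pick the pool candidate where the Gaussian Process is least confident
    _, std = gp.predict(pool, return_std=True)
    return pool[np.argmax(std)]

def greedy_io_candidate(model, pool, tested_X, tested_y):
    # Pick the candidate furthest from the tested designs in both the feature
    # space and the response space (additive combination assumed)
    tested_X = np.asarray(tested_X)
    tested_y = np.asarray(tested_y)
    d_x = np.min(np.linalg.norm(pool[:, None, :] - tested_X[None, :, :], axis=-1), axis=1)
    d_y = np.min(np.abs(model.predict(pool)[:, None] - tested_y[None, :]), axis=1)
    return pool[np.argmax(d_x + d_y)]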



Candidate selection, CFD testing, and optimization loop

At this stage, the user is presented with both performance-driven and exploration-driven candidates. The next step consists of selecting a subset of the proposed candidates, running CFD simulations with those design parameters, and recording the aerodynamic downforce response.

After this, the DoE workflow retrains the ML regression models, runs the genetic algorithm optimization, and proposes a new set of performance-driven and exploration-driven candidates. The user runs a subset of the proposed candidates and continues iterating in this fashion until the stopping criterion is met, which is generally when a candidate deemed optimal is obtained.
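Putting the pieces together, the outer loop looks roughly like the following sketch, which reuses the illustrative helpers defined earlier (models, maximize_cz, gp_uncertainty_candidate, greedy_io_candidate) and a placeholder run_cfd function standing in for the CFD workflow.

import numpy as np

def doe_loop(run_cfd, pool, X, y, iterations=10):
    for _ in range(iterations):
        gp = models["GP"].fit(X, y)                    # retrain the regressors
        stacked = models["Stacked"].fit(X, y)
        candidates = [
            maximize_cz(stacked.predict),              # performance-driven (genetic algorithm)
            gp_uncertainty_candidate(gp, pool),        # exploration: GP uncertainty
            greedy_io_candidate(stacked, pool, X, y),  # exploration: greedy inputs/outputs
        ]
        # In practice the aerodynamicist selects a subset of these candidates
        for x_new in candidates:
            cz = run_cfd(x_new)                        # CFD provides the ground-truth response
            X = np.vstack([X, x_new])
            y = np.append(y, cz)
        # Stop when a candidate is deemed optimal, for example when the best
        # tested Cz stops improving between iterations
    return X, y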

Results

In the following figure, we record the normalized aerodynamic downforce (Cz) from the CFD simulation (blue) and the one predicted beforehand using the ML regression model of choice (pink) for each iteration of the DoE workflow. The goal was to maximize aerodynamic downforce (Cz). The first four runs (to the left of the red line) were the baseline and the three greedy inputs candidates outlined previously. From there on, a combination of performance-driven and exploration-driven candidates were tested. In particular, the candidates at iterations 6 and 8 were exploratory candidates, both showing lower levels of downforce than the baseline candidate (iteration 1). As expected, as we recorded more candidates, the ML prediction became increasingly accurate, as denoted by the decreasing distance between the predicted and actual Cz. At iteration 9, the DoE workflow managed to find a candidate with a similar performance to the baseline, and at iteration 12, the DoE workflow was concluded when the performance-driven candidate surpassed the baseline.

The final design parameters together with the resultant normalized downforce value are presented in the following table. The normalized downforce level for the baseline candidate was 0.975, whereas the optimum candidate from the DoE workflow recorded a normalized downforce level of 1.000, a meaningful 2.5% relative increase.

For context, a traditional DoE approach with five variables would require 25 upfront CFD simulations before achieving a good enough fit to predict an optimum. On the other hand, this active learning approach converged to an optimum in 12 iterations.

Candidate   TE-Height   TE-Angle   Mid-LE-Angle   Min-Z   LE-Height   Normalized Cz
Baseline    292.25      154.86     166            5       130         0.975
Optimal     299.97      156.79     166.27         5.01    135.26      1.000

Feature importance

Understanding the relative feature importance for a predictive model can provide useful insight into the data. It can help with feature selection, with less important variables removed, thereby reducing the dimensionality of the problem and potentially improving the predictive power of the regression model, particularly in the small data regime. In this design problem, it gives F1 aerodynamicists insight into which variables are the most sensitive and therefore require more careful tuning.

In this routine, we implemented a model-agnostic technique called permutation importance. The relative importance of each variable is measured by calculating the increase in the model’s prediction error after randomly shuffling the values for that variable alone. If a feature is important to the model, the prediction error increases greatly; for less important features, it barely changes. In the following figure, we present the permutation importance for a Gaussian Process Regressor (GP) predicting aerodynamic downforce (Cz). The trailing edge height (TE-Height) was deemed the most important.
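With scikit-learn, this could look roughly like the following; the repeat count and scoring metric are illustrative, and the feature names come from the table above.

import numpy as np
from sklearn.inspection import permutation_importance

feature_names = ["TE-Height", "TE-Angle", "Mid-LE-Angle", "Min-Z", "LE-Height"]

def rank_features(fitted_model, X, y):
    # Shuffle one feature at a time and measure the resulting increase in error
    result = permutation_importance(
        fitted_model, X, y,
        scoring="neg_mean_squared_error", n_repeats=30, random_state=0,
    )
    order = np.argsort(result.importances_mean)[::-1]
    return [(feature_names[i], result.importances_mean[i]) for i in order]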

Conclusion

In this post, we explained how F1 aerodynamicists are using ML regression models in DoE workflows when designing novel aerodynamic geometries. The ML-powered DoE workflow developed by AWS Professional Services provides insights into which design parameters will maximize performance or explore uncharted regions in the design space. As opposed to iteratively testing candidates in CFD in a grid search fashion, the ML-powered DoE workflow is able to converge to optimal design parameters in fewer iterations. This saves both time and resources because fewer CFD simulations are required.

Whether you’re a pharmaceutical company looking to speed up chemical composition optimization or a manufacturing company looking to find the design dimensions for the most robust designs, DoE workflows can help reach optimal candidates more efficiently. AWS Professional Services is ready to supplement your team with specialized ML skills and experience to develop the tools to streamline DoE workflows and help you achieve better business outcomes. For more information, see AWS Professional Services, or reach out through your account manager to get in touch.

About the Authors

Pablo Hermoso Moreno is a Data Scientist in the AWS Professional Services Team. He works with clients across industries using Machine Learning to tell stories with data and reach more informed engineering decisions faster. Pablo’s background is in Aerospace Engineering and having worked in the motorsport industry he has an interest in bridging physics and domain expertise with ML. In his spare time, he enjoys rowing and playing guitar.



Break through language barriers with Amazon Transcribe, Amazon Translate, and Amazon Polly

Imagine a surgeon taking video calls with patients across the globe without the need of a human translator. What if a fledgling startup could easily expand their product across borders and into new geographical markets by offering fluid, accurate, multilingual customer support and sales, all without the need of a live human translator? What happens to your business when you’re no longer bound by language?

It’s common today to have virtual meetings with international teams and customers that speak many different languages. Whether they’re internal or external meetings, meaning often gets lost in complex discussions and you may encounter language barriers that prevent you from being as effective as you could be.

In this post, you will learn how to use three fully managed AWS services (Amazon Transcribe, Amazon Translate, and Amazon Polly) to produce a near-real-time speech-to-speech translator solution that can quickly translate a source speaker’s live voice input into a spoken, accurate, translated target language, all with zero machine learning (ML) experience.

Overview of solution

Our translator consists of three fully managed AWS ML services working together in a single Python script by using the AWS SDK for Python (Boto3) for our text translation and text-to-speech portions, and an asynchronous streaming SDK for audio input transcription.

Amazon Transcribe: Streaming speech to text

The first service you use in our stack is Amazon Transcribe, a fully managed speech-to-text service that takes input speech and transcribes it to text. Amazon Transcribe has flexible ingestion methods, batch or streaming: it accepts either stored audio files or streaming audio data. In this post, you use the asynchronous Amazon Transcribe streaming SDK for Python, which uses the HTTP/2 streaming protocol to stream live audio and receive live transcriptions.

When we first built this prototype, Amazon Transcribe streaming ingestion didn’t support automatic language detection, but this is no longer the case as of November 2021. Both batch and streaming ingestion now support automatic language detection for all supported languages. In this post, we show a parameter-based solution, though a seamless multi-language, parameterless design is possible through the use of streaming automatic language detection. After our transcribed speech segment is returned as text, you send a request to Amazon Translate to translate and return the results in our Amazon Transcribe EventHandler method.

Amazon Translate: State-of-the-art, fully managed translation API

Next in our stack is Amazon Translate, a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. As of June 2022, Amazon Translate supports translation across 75 languages, with new language pairs and improvements being made constantly. Amazon Translate uses deep learning models hosted on a highly scalable and resilient AWS Cloud architecture to quickly deliver accurate translations either in real time or batched, depending on your use case. Using Amazon Translate is straightforward and requires no management of underlying architecture or ML skills. Amazon Translate has several features, like creating and using a custom terminology to handle mapping between industry-specific terms. For more information on Amazon Translate service limits, refer to Guidelines and limits. After the application receives the translated text in our target language, it sends the translated text to Amazon Polly for immediate translated audio playback.
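To make the custom terminology feature concrete, here is a minimal hedged boto3 sketch; the terminology name, CSV content, and example sentence are illustrative and not part of this post's prototype.

import boto3

translate = boto3.client("translate", region_name="us-west-2")

# One-time setup: import a custom terminology so domain-specific terms translate consistently
translate.import_terminology(
    Name="demo-terminology",  # illustrative name
    MergeStrategy="OVERWRITE",
    TerminologyData={"File": b"en,zh\nAmazon Polly,Amazon Polly\n", "Format": "CSV"},
)

# Translate a sentence, applying the custom terminology
response = translate.translate_text(
    Text="Amazon Polly reads the translated text aloud.",
    SourceLanguageCode="en",
    TargetLanguageCode="zh",
    TerminologyNames=["demo-terminology"],
)
print(response["TranslatedText"])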

Amazon Polly: Fully managed text-to-speech API

Finally, you send the translated text to Amazon Polly, a fully managed text-to-speech service that can either send back lifelike audio clip responses for immediate streaming playback or batch and save them in Amazon Simple Storage Service (Amazon S3) for later use. You can control various aspects of speech such as pronunciation, volume, pitch, speech rate, and more using standardized Speech Synthesis Markup Language (SSML).

You can synthesize speech for certain Amazon Polly Neural voices using the Newscaster style to make them sound like a TV or radio newscaster. You can also detect when specific words or sentences in the text are being spoken based on the metadata included in the audio stream. This allows the developer to synchronize graphical highlighting and animations, such as the lip movements of an avatar, with the synthesized speech.

You can modify the pronunciation of particular words, such as company names, acronyms, foreign words, or neologisms, for example “P!nk,” “ROTFL,” or “C’est la vie” (when spoken in a non-French voice), using custom lexicons.
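As a hedged illustration of these controls (the voice, SSML tags, and lexicon name are example choices, and a lexicon would need to be uploaded beforehand with put_lexicon):

import boto3

polly = boto3.client("polly", region_name="us-west-2")

# SSML controls pronunciation and prosody; <sub> substitutes a spoken alias
ssml = (
    "<speak>"
    'Our artist <sub alias="Pink">P!nk</sub> said '
    '<prosody rate="slow">c\'est la vie</prosody>.'
    "</speak>"
)

response = polly.synthesize_speech(
    Engine="neural",
    VoiceId="Joanna",
    TextType="ssml",
    Text=ssml,
    OutputFormat="mp3",
    # LexiconNames=["acronymLexicon"],  # illustrative name; upload first with polly.put_lexicon
)

with open("speech.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())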

Architecture overview

The following diagram illustrates our solution architecture.

Architecture diagram: data flow from the client device through Amazon Transcribe, Amazon Translate, and Amazon Polly.

The workflow is as follows:

  1. Audio is ingested by the Python SDK.
  2. Amazon Transcribe converts the speech to text, in 39 possible languages.
  3. Amazon Translate translates the text into the target language.
  4. Amazon Polly converts the translated text to speech.
  5. Audio is output to the speakers.

Prerequisites

You need a host machine set up with a microphone, speakers, and reliable internet connection. A modern laptop should work fine for this because no additional hardware is needed. Next, you need to set up the machine with some software tools.

You must have Python 3.7+ installed to use the asynchronous Amazon Transcribe streaming SDK and the pyaudio Python module, which you use to control the machine’s microphone and speakers. This module depends on a C library called portaudio.h. If you encounter issues with pyaudio errors, we suggest checking your OS to see if you have the portaudio.h library installed.

For authorization and authentication of service calls, you create an AWS Identity and Access Management (IAM) service role with permissions to call the necessary AWS services. By configuring the AWS Command Line Interface (AWS CLI) with this IAM service role, you can run our script on your machine without having to pass in keys or passwords, because the AWS libraries are written to use the configured AWS CLI user’s credentials. This is a convenient method for rapid prototyping and ensures our services are being called by an authorized identity. As always, follow the principle of least privilege when assigning IAM policies when creating an IAM user or role.

To summarize, you need the following prerequisites:

  • A PC, Mac, or Linux machine with microphone, speakers, and internet connection
  • The portaudio.h C library for your OS (brew, apt-get, wget), which is needed for pyaudio to work
  • AWS CLI 2.0 with properly authorized IAM user configured by running aws configure in the AWS CLI
  • Python 3.7+
  • The asynchronous Amazon Transcribe Python SDK
  • The following Python libraries:
    • boto3
    • amazon-transcribe
    • pyaudio
    • asyncio
    • concurrent

Implement the solution

You rely heavily on the asynchronous Amazon Transcribe streaming SDK for Python as a starting point, and build on top of that specific SDK. After you have experimented with the streaming SDK for Python, you add streaming microphone input by using pyaudio, a commonly used open-source Python library for manipulating audio data. Then you add Boto3 calls to Amazon Translate and Amazon Polly for our translation and text-to-speech functionality. Finally, you stream out translated speech through the computer’s speakers again with pyaudio. The Python module concurrent gives you the ability to run blocking code in its own asynchronous thread to play back your returned Amazon Polly speech in a seamless, non-blocking way.

Let’s import all our necessary modules, transcribe streaming classes, and instantiate some globals:

import boto3
import asyncio
import pyaudio
import concurrent
from amazon_transcribe.client import TranscribeStreamingClient
from amazon_transcribe.handlers import TranscriptResultStreamHandler
from amazon_transcribe.model import TranscriptEvent

polly = boto3.client('polly', region_name='us-west-2')
translate = boto3.client(service_name='translate', region_name='us-west-2', use_ssl=True)
pa = pyaudio.PyAudio()

# for mic stream, 1024 should work fine
default_frames = 1024

# current params are set up for English to Mandarin, modify to your liking
params = {}
params['source_language'] = "en"
params['target_language'] = "zh"
params['lang_code_for_polly'] = "cmn-CN"
params['voice_id'] = "Zhiyu"
params['lang_code_for_transcribe'] = "en-US"

First, you use pyaudio to obtain the input device’s sampling rate, device index, and channel count:

# try grabbing the default input device and see if we get lucky
default_input_device = pa.get_default_input_device_info()

# verify this is your microphone device
print(default_input_device)

# if correct then set it as your input device and define some globals
input_device = default_input_device
input_channel_count = input_device["maxInputChannels"]
input_sample_rate = input_device["defaultSampleRate"]
input_dev_index = input_device["index"]

If this isn’t working, you can also loop through and print your devices as shown in the following code, and then use the device index to retrieve the device information with pyaudio:

print("Available devices:\n")
for i in range(0, pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    print(str(info["index"]) + ": \t %s \n \t %s \n" %
          (info["name"], pa.get_host_api_info_by_index(info["hostApi"])["name"]))

# select the correct index from the above returned list of devices, for example zero
dev_index = 0
input_device = pa.get_device_info_by_index(dev_index)

# set globals for microphone stream
input_channel_count = input_device["maxInputChannels"]
input_sample_rate = input_device["defaultSampleRate"]
input_dev_index = input_device["index"]

You use channel_count, sample_rate, and dev_index as parameters in a mic stream. In that stream’s callback function, you use an asyncio non-blocking thread-safe callback to put the input bytes of the mic stream into an asyncio input queue. Take note of the loop and input_queue objects created with asyncio and how they’re used in the following code:

async def mic_stream():
    # This function wraps the raw input stream from the microphone, forwarding
    # the blocks to an asyncio.Queue.
    loop = asyncio.get_event_loop()
    input_queue = asyncio.Queue()

    def callback(indata, frame_count, time_info, status):
        loop.call_soon_threadsafe(input_queue.put_nowait, indata)
        return (indata, pyaudio.paContinue)

    # Be sure to use the correct parameters for the audio stream that matches
    # the audio formats described for the source language you'll be using:
    # https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html
    print(input_device)

    # Open stream
    stream = pa.open(format=pyaudio.paInt16,
                     channels=input_channel_count,
                     rate=int(input_sample_rate),
                     input=True,
                     frames_per_buffer=default_frames,
                     input_device_index=input_dev_index,
                     stream_callback=callback)

    # Initiate the audio stream and asynchronously yield the audio chunks
    # as they become available.
    stream.start_stream()
    print("started stream")
    while True:
        indata = await input_queue.get()
        yield indata

Now when the generator function mic_stream() is called, it continually yields input bytes as long as there is microphone input data in the input queue.

Now that you know how to get input bytes from the microphone, let’s look at how to write Amazon Polly output audio bytes to a speaker output stream:

# text will come from MyEventsHandler
def aws_polly_tts(text):
    response = polly.synthesize_speech(
        Engine='standard',
        LanguageCode=params['lang_code_for_polly'],
        Text=text,
        VoiceId=params['voice_id'],
        OutputFormat="pcm",
    )
    output_bytes = response['AudioStream']
    # play to the speakers
    write_to_speaker_stream(output_bytes)

# how to write audio bytes to speakers
def write_to_speaker_stream(output_bytes):
    """Consumes bytes in chunks to produce the response's output"""
    print("Streaming started...")
    chunk_len = 1024
    channels = 1
    sample_rate = 16000

    if output_bytes:
        polly_stream = pa.open(
            format=pyaudio.paInt16,
            channels=channels,
            rate=sample_rate,
            output=True,
        )
        # this is a blocking call - will sort this out with concurrent later
        while True:
            data = output_bytes.read(chunk_len)
            polly_stream.write(data)
            # If there's no more data to read, stop streaming
            if not data:
                output_bytes.close()
                polly_stream.stop_stream()
                polly_stream.close()
                break
        print("Streaming completed.")
    else:
        print("Nothing to stream.")

Now let’s expand on what you built in the post Asynchronous Amazon Transcribe Streaming SDK for Python. In the following code, you create an executor object using the ThreadPoolExecutor subclass with three workers with concurrent. You then add an Amazon Translate call on the finalized returned transcript in the EventHandler and pass that translated text, the executor object, and our aws_polly_tts() function into an asyncio loop with loop.run_in_executor(), which runs our Amazon Polly function (with translated input text) asynchronously at the start of the next iteration of the asyncio loop.

# use concurrent package to create an executor object with 3 workers ie threads
executor = concurrent.futures.ThreadPoolExecutor(max_workers=3)

class MyEventHandler(TranscriptResultStreamHandler):
    async def handle_transcript_event(self, transcript_event: TranscriptEvent):
        # If the transcription is finalized, send it to translate
        results = transcript_event.transcript.results
        if len(results) > 0:
            if len(results[0].alternatives) > 0:
                transcript = results[0].alternatives[0].transcript
                print("transcript:", transcript)
                print(results[0].channel_id)
                if hasattr(results[0], "is_partial") and results[0].is_partial == False:
                    # translate only 1 channel. the other channel is a duplicate
                    if results[0].channel_id == "ch_0":
                        trans_result = translate.translate_text(
                            Text=transcript,
                            SourceLanguageCode=params['source_language'],
                            TargetLanguageCode=params['target_language'],
                        )
                        print("translated text:" + trans_result.get("TranslatedText"))
                        text = trans_result.get("TranslatedText")
                        # we run aws_polly_tts with a non-blocking executor at every loop iteration
                        await loop.run_in_executor(executor, aws_polly_tts, text)

Finally, we have the loop_me() function. In it, you define write_chunks(), which takes an Amazon Transcribe stream as an argument and asynchronously writes chunks of streaming mic input to it. You then use MyEventHandler() with the output transcription stream as its argument and create a handler object. Then you use await with asyncio.gather() and pass in write_chunks() and the handler’s handle_events() method to handle the eventual futures of these coroutines. Lastly, you get the event loop and run the loop_me() function to completion with run_until_complete(). See the following code:

async def loop_me():
    # Set up our client with our chosen AWS region
    client = TranscribeStreamingClient(region="us-west-2")
    stream = await client.start_stream_transcription(
        language_code=params['lang_code_for_transcribe'],
        media_sample_rate_hz=int(input_sample_rate),
        number_of_channels=2,
        enable_channel_identification=True,
        media_encoding="pcm",
    )
    recorded_frames = []

    async def write_chunks(stream):
        # This connects the raw audio chunks generator coming from the microphone
        # and passes them along to the transcription stream.
        print("getting mic stream")
        async for chunk in mic_stream():
            recorded_frames.append(chunk)
            await stream.input_stream.send_audio_event(audio_chunk=chunk)
        await stream.input_stream.end_stream()

    handler = MyEventHandler(stream.output_stream)
    await asyncio.gather(write_chunks(stream), handler.handle_events())

loop = asyncio.get_event_loop()
loop.run_until_complete(loop_me())
loop.close()

When the preceding code is run together without errors, you can speak into the microphone and quickly hear your voice translated to Mandarin Chinese. The automatic language detection feature for Amazon Transcribe and Amazon Translate translates any supported input language into the target language. You can speak for quite some time, and because of the non-blocking nature of the function calls, all your speech input is translated and spoken, making this an excellent tool for translating live speeches.

Conclusion

Although this post demonstrated how these three fully managed AWS APIs can function seamlessly together, we encourage you to think about how you could use these services in other ways to deliver multilingual support for services or media like multilingual closed captioning for a fraction of the current cost. Medicine, business, and even diplomatic relations could all benefit from an ever-improving, low-cost, low-maintenance translation service.

For more information about the proof of concept code base for this use case, check out our GitHub repo.

About the Authors

Michael Tran is a Solutions Architect with the Envision Engineering team at Amazon Web Services. He provides technical guidance and helps customers accelerate their ability to innovate through showing the art of the possible on AWS. He has built multiple prototypes around AI/ML and IoT for our customers. You can contact him @Mike_Trann on Twitter.

Cameron Wilkes is a Prototyping Architect on the AWS Industry Accelerator team. While on the team he delivered several ML based prototypes to customers to demonstrate the “Art of the Possible” of ML on AWS. He enjoys music production, off-roading, and design.



Use Amazon SageMaker Data Wrangler in Amazon SageMaker Studio with a default lifecycle configuration

If you use the default lifecycle configuration for your domain or user profile in Amazon SageMaker Studio and use Amazon SageMaker Data Wrangler for data preparation, then this post is for you. In this post, we show how you can create a Data Wrangler flow and use it for data preparation in a Studio environment with a default lifecycle configuration.

Data Wrangler is a capability of Amazon SageMaker that makes it faster for data scientists and engineers to prepare data for machine learning (ML) applications via a visual interface. Data preparation is a crucial step of the ML lifecycle, and Data Wrangler provides an end-to-end solution to import, explore, transform, featurize, and process data for ML in a visual, low-code experience. It lets you easily and quickly connect to AWS components like Amazon Simple Storage Service (Amazon S3), Amazon Athena, Amazon Redshift, and AWS Lake Formation, and external sources like Snowflake and Databricks Delta Lake. Data Wrangler supports standard data formats such as CSV, JSON, ORC, and Parquet.

Studio apps are interactive applications that enable Studio’s visual interface, code authoring, and run experience. App types can be either Jupyter Server or Kernel Gateway:

  • Jupyter Server – Enables access to the visual interface for Studio. Every user in Studio gets their own Jupyter Server app.
  • Kernel Gateway – Enables access to the code run environment and kernels for your Studio notebooks and terminals. For more information, see Jupyter Kernel Gateway.

Lifecycle configurations (LCCs) are shell scripts to automate customization for your Studio environments, such as installing JupyterLab extensions, preloading datasets, and setting up source code repositories. LCC scripts are triggered by Studio lifecycle events, such as starting a new Studio notebook. To set a lifecycle configuration as the default for your domain or user profile programmatically, you can create a new resource or update an existing resource. To associate a lifecycle configuration as a default, you first need to create one by following the steps in Creating and Associating a Lifecycle Configuration.

Note: Default lifecycle configurations set up at the domain level are inherited by all users, whereas those set up at the user level are scoped to a specific user. If you apply both domain-level and user profile-level lifecycle configurations at the same time, the user profile-level lifecycle configuration takes precedence and is applied to the application irrespective of what lifecycle configuration is applied at the domain level. For more information, see Setting Default Lifecycle Configurations.

Data Wrangler accepts the default Kernel Gateway lifecycle configuration, but some of the commands defined in the default Kernel Gateway lifecycle configuration aren’t applicable to Data Wrangler, which can cause Data Wrangler to fail to start. The following screenshot shows an example of an error message you might get when launching the Data Wrangler flow. This happens only with default lifecycle configurations applied at the domain or user profile level, not with lifecycle configurations applied at the application level.

Data Wrangler Error

Solution overview

Customers using the default lifecycle configuration in Studio can follow this post and use the supplied code block within the lifecycle configuration script to launch a Data Wrangler app without any errors.

Set up the default lifecycle configuration

To set up a default lifecycle configuration, you must add it to the DefaultResourceSpec of the appropriate app type. The behavior of your lifecycle configuration depends on whether it’s added to the DefaultResourceSpec of a Jupyter Server or Kernel Gateway app:

  • Jupyter Server apps – When added to the DefaultResourceSpec of a Jupyter Server app, the default lifecycle configuration script runs automatically when the user logs in to Studio for the first time or restarts Studio. You can use this to automate one-time setup actions for the Studio developer environment, such as installing notebook extensions or setting up a GitHub repo. For an example of this, see Customize Amazon SageMaker Studio using Lifecycle Configurations.
  • Kernel Gateway apps – When added to the DefaultResourceSpec of a Kernel Gateway app, Studio defaults to selecting the lifecycle configuration script from the Studio launcher. You can launch a notebook or terminal with the default script or choose a different one from the list of lifecycle configurations.

A default Kernel Gateway lifecycle configuration specified in DefaultResourceSpec applies to all Kernel Gateway images in the Studio domain unless you choose a different script from the list presented in the Studio launcher.

When you work with lifecycle configurations for Studio, you create a lifecycle configuration and attach it to either your Studio domain or user profile. You can then launch a Jupyter Server or Kernel Gateway application to use the lifecycle configuration.

The following table summarizes the errors you may encounter when launching a Data Wrangler application with default lifecycle configurations.

Lifecycle Configuration Applied At   Create Data Wrangler Flow   Workaround
Domain                               Bad Request error           Apply the script (see below)
User Profile                         Bad Request error           Apply the script (see below)
Application                          Works (no issue)            Not required

When you use the default lifecycle configuration associated with Studio and Data Wrangler (Kernel Gateway app), you might encounter Kernel Gateway app failure. In this post, we demonstrate how to set the default lifecycle configuration properly to exclude running commands in a Data Wrangler application so you don’t encounter Kernel Gateway app failure.

Let’s say you want to install a git-clone-repo script as the default lifecycle configuration that checks out a Git repository under the user’s home folder automatically when the Jupyter server starts. Let’s look at each scenario of applying a lifecycle configuration (Studio domain, user profile, or application level).

Apply lifecycle configuration at the Studio domain or user profile level

To apply the default Kernel Gateway lifecycle configuration at the Studio domain or user profile level, complete the steps in this section. We start with instructions for the user profile level.

In your lifecycle configuration script, you have to include the following code block that checks and skips the Data Wrangler Kernel Gateway app:

#!/bin/bash
set -eux

STATUS=$(
python3 -c "import sagemaker_dataprep"
echo $?
)

if [ "$STATUS" -eq 0 ]; then
    echo 'Instance is of Type Data Wrangler'
else
    echo 'Instance is not of Type Data Wrangler'
fi

For example, let’s use the following script as our original (note that the folder to clone the repo is changed to /root from /home/sagemaker-user):

#!/bin/bash
set -eux

# Clones a git repository into the user's home folder
# Replace this with the URL of your git repository
export REPOSITORY_URL="https://github.com/aws-samples/sagemaker-studio-lifecycle-config-examples.git"

git -C /root clone $REPOSITORY_URL

The new modified script looks like the following:

#!/bin/bash
set -eux

STATUS=$(
python3 -c "import sagemaker_dataprep"
echo $?
)

if [ "$STATUS" -eq 0 ]; then
    echo 'Instance is of Type Data Wrangler'
else
    echo 'Instance is not of Type Data Wrangler'

    # Replace this with the URL of your git repository
    export REPOSITORY_URL="https://github.com/aws-samples/sagemaker-studio-lifecycle-config-examples.git"
    git -C /root clone $REPOSITORY_URL
fi

You can save this script as git_command_test.sh.

Now you run a series of commands in your terminal or command prompt. You should configure the AWS Command Line Interface (AWS CLI) to interact with AWS. If you haven’t set up the AWS CLI, refer to Configuring the AWS CLI.

  1. Convert your git_command_test.sh file into Base64 format. This requirement prevents errors due to the encoding of spacing and line breaks: LCC_GIT=$(openssl base64 -A -in /Users/abcde/Downloads/git_command_test.sh)
  2. Create a Studio lifecycle configuration. The following command creates a lifecycle configuration that runs on launch of an associated Kernel Gateway app: aws sagemaker create-studio-lifecycle-config --region us-east-2 --studio-lifecycle-config-name lcc-git --studio-lifecycle-config-content $LCC_GIT --studio-lifecycle-config-app-type KernelGateway
  3. Use the following API call to create a new user profile with an associated lifecycle configuration: aws sagemaker create-user-profile --domain-id d-vqc14vvvvvvv --user-profile-name test --region us-east-2 --user-settings '{ "KernelGatewayAppSettings": { "LifecycleConfigArns": ["arn:aws:sagemaker:us-east-2:000000000000:studio-lifecycle-config/lcc-git"], "DefaultResourceSpec": { "InstanceType": "ml.m5.xlarge", "LifecycleConfigArn": "arn:aws:sagemaker:us-east-2:000000000000:studio-lifecycle-config/lcc-git" } } }'

    Alternatively, if you want to create a Studio domain to associate your lifecycle configuration at the domain level, or update the user profile or domain, you can follow the steps in Setting Default Lifecycle Configurations.

  4. Now you can launch your Studio app from the SageMaker Control Panel.
  5. In your Studio environment, on the File menu, choose New, then Data Wrangler Flow. The new Data Wrangler flow should open without any issues.
  6. To validate the Git clone, you can open a new Launcher in Studio.
  7. Under Notebooks and compute resources, choose the Python 3 notebook and the Data Science SageMaker image to start your script as your default lifecycle configuration script.

You can see the Git cloned to /root in the following screenshot.


We have successfully applied the default Kernel lifecycle configuration at the user profile level and created a Data Wrangler flow. To configure at the Studio domain level, the only change is instead of creating a user profile, you pass the ARN of the lifecycle configuration in a create-domain call.
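If you prefer boto3 over the AWS CLI, an equivalent sketch follows; the domain ID, profile name, and script path are placeholders carried over from the CLI example above.

import base64
import boto3

sm = boto3.client("sagemaker", region_name="us-east-2")

# Base64-encode the lifecycle configuration script
with open("git_command_test.sh", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

lcc = sm.create_studio_lifecycle_config(
    StudioLifecycleConfigName="lcc-git",
    StudioLifecycleConfigContent=content,
    StudioLifecycleConfigAppType="KernelGateway",
)
lcc_arn = lcc["StudioLifecycleConfigArn"]

# Attach it as the default Kernel Gateway configuration for a new user profile
sm.create_user_profile(
    DomainId="d-vqc14vvvvvvv",  # placeholder domain ID
    UserProfileName="test",
    UserSettings={
        "KernelGatewayAppSettings": {
            "LifecycleConfigArns": [lcc_arn],
            "DefaultResourceSpec": {
                "InstanceType": "ml.m5.xlarge",
                "LifecycleConfigArn": lcc_arn,
            },
        }
    },
)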

Apply lifecycle configuration at the application level

If you apply the default Kernel Gateway lifecycle configuration at the application level, you won’t have any issues because Data Wrangler skips the lifecycle configuration applied at the application level.

Conclusion

In this post, we showed how to configure your default lifecycle configuration properly for Studio when you use Data Wrangler for data preparation and visualization requirements.

To summarize, if you need to use the default lifecycle configuration for Studio to automate customization for your Studio environments and use Data Wrangler for data preparation, you can apply the default Kernel Gateway lifecycle configuration at the user profile or Studio domain level with the appropriate code block included in your lifecycle configuration so that the default lifecycle configuration checks it and skips the Data Wrangler Kernel Gateway app.


About the Authors

Rajakumar Sampathkumar is a Principal Technical Account Manager at AWS, providing customers guidance on business-technology alignment and supporting the reinvention of their cloud operation models and processes. He is passionate about cloud and machine learning. Raj is also a machine learning specialist and works with AWS customers to design, deploy, and manage their AWS workloads and architectures.

Vicky Zhang is a Software Development Engineer at Amazon SageMaker. She is passionate about problem solving. In her spare time, she enjoys watching detective movies and playing badminton.

Rahul Nabera is a Data Analytics Consultant in AWS Professional Services. His current work focuses on enabling customers build their data and machine learning workloads on AWS. In his spare time, he enjoys playing cricket and volleyball.



AWS Week in Review – July 4, 2022

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Summer has arrived in Finland, and these last few days have been hotter than in the Canary Islands! Today in the US it is Independence Day. I hope that if you are celebrating, you’re having a great time. This week I’m very excited about some developer experience and artificial intelligence launches.

Last Week’s Launches
Here are some launches that got my attention during the previous week:

AWS SAM Accelerate is now generally available – SAM Accelerate is a new capability of the AWS Serverless Application Model CLI, which makes it easier for serverless developers to test code changes against the cloud. You can do a hot swap of code directly in the cloud when making a change in your local development environment. This allows you to develop applications faster. Learn more about this launch in the What’s New post.

Amplify UI for React is generally available – Amplify UI is an open-source UI library that helps developers build cloud-native applications. Amplify UI for React comes with over 35 components that you can use, an authentication component that allows you to connect to your backend with no extra configuration, and theming for your components. You can also build your UI using Figma. Check the Amplify UI for React site to learn more about all the capabilities offered.

Amazon Connect has new announcements – First, Amazon Connect added support to personalize the flows of the customer experience using Amazon Lex sentiment analysis. It also added support to branch out the flows depending on Amazon Lex confidence scores. Lastly, it added confidence scores to Amazon Connect Customer Profiles to help companies merge duplicate customer records.

Amazon QuickSight – QuickSight authors can now learn and experience Q before signing up. Authors can choose from six different sample topics and explore different visualizations. In addition, QuickSight now supports Level Aware Calculations (LAC) and rolling date functionality. These two new features give customers more flexibility and make it simpler to build advanced calculations and dashboards.

Amazon SageMaker – RStudio on SageMaker now allows you to bring your own development environment in a custom image. RStudio on SageMaker is a fully managed RStudio Workbench in the cloud. In addition, SageMaker added four new tabular data modeling algorithms: LightGBM, CatBoost, AutoGluon-Tabular, and TabTransformer to the existing set of built-in algorithms, pre-trained models and pre-built solution templates it provides.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

AWS Support announced an improved experience when creating a case – There is a new interface for creating support cases in the AWS Support Center console. Now you can create a case with a simplified three-step process that guides you through the flow. Learn more about this new process in the What’s new post.

New AWS Step Functions workflows collection on Serverless Land – The Step Functions workflows collection is a new experience that makes it easier to discover, deploy, and share AWS Step Functions workflows. In this collection, you can find opinionated templates that implement the best practices to build using Step Functions. Learn more about this new collection in Ben’s blog post.

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish, which shares a new episode every other week. The podcast is meant for builders, and it shares stories about how customers implement and learn AWS, how to architect applications, and how to use new services. You can listen to all the episodes directly from your favorite podcast app or from the AWS Podcasts en español website.

AWS open-source news and updates – A newsletter curated by my colleague Ricardo brings you the latest open-source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Summit New York – Join us on July 12 for the in-person AWS Summit. You can register on the AWS Summit page for free.

AWS re:Inforce – This is an in-person learning conference with a focus on security, compliance, identity, and privacy. You can register now to access hundreds of technical sessions, and other content. It will take place July 26 and 27 in Boston, MA.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia


