New – Amazon EC2 M6a Instances Powered By 3rd Gen AMD EPYC Processors

AWS and AMD have collaborated to give customers more choice and value in cloud computing, starting with the first generation AMD EPYC™ processors in 2018 such as M5a/R5a, M5ad/R5ad, and T3a instances. In 2020, we expanded the second generation AMD EPYC™ processors to include C5a/C5ad instances and recently G4ad instances, combining the power of both second-generation AMD EPYC™ processors and AMD Radeon Pro GPUs.

Today, I am happy to announce the general availability of Amazon EC2 M6a instances featuring the 3rd Gen AMD EPYC processors, running at frequencies up to 3.6 GHz, which offer up to 35 percent better price performance than the previous-generation M5a instances.

You can launch M6a instances today in ten sizes in the AWS US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions as On-Demand Instances, Spot Instances, or Reserved Instances, or as part of a Savings Plan. Here are the specs:

Name            vCPUs   Memory (GiB)   Network Bandwidth (Gbps)   EBS Throughput (Gbps)
m6a.large         2          8         Up to 12.5                 Up to 6.6
m6a.xlarge        4         16         Up to 12.5                 Up to 6.6
m6a.2xlarge       8         32         Up to 12.5                 Up to 6.6
m6a.4xlarge      16         64         Up to 12.5                 Up to 6.6
m6a.8xlarge      32        128         12.5                       6.6
m6a.12xlarge     48        192         18.75                      10
m6a.16xlarge     64        256         25                         13.3
m6a.24xlarge     96        384         37.5                       20
m6a.32xlarge    128        512         50                         26.6
m6a.48xlarge    192        768         50                         40
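For capacity planning, the size table above can be encoded programmatically. Here is a minimal sketch (the `smallest_m6a` helper is my own, with the vCPU and memory figures transcribed from the table) that picks the smallest M6a size satisfying given vCPU and memory requirements:

```python
# vCPUs and memory (GiB) for each M6a size, transcribed from the table above.
M6A_SPECS = {
    "m6a.large": (2, 8),
    "m6a.xlarge": (4, 16),
    "m6a.2xlarge": (8, 32),
    "m6a.4xlarge": (16, 64),
    "m6a.8xlarge": (32, 128),
    "m6a.12xlarge": (48, 192),
    "m6a.16xlarge": (64, 256),
    "m6a.24xlarge": (96, 384),
    "m6a.32xlarge": (128, 512),
    "m6a.48xlarge": (192, 768),
}

def smallest_m6a(vcpus_needed, mem_gib_needed):
    """Return the smallest M6a size meeting both requirements, or None."""
    # Dicts preserve insertion order, so sizes are checked smallest-first.
    for name, (vcpus, mem_gib) in M6A_SPECS.items():
        if vcpus >= vcpus_needed and mem_gib >= mem_gib_needed:
            return name
    return None

print(smallest_m6a(12, 100))  # m6a.8xlarge: first size with >=12 vCPUs and >=100 GiB
```

The same approach extends naturally if you also want to filter on the network or EBS bandwidth columns.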

Compared to M5a instances, the new M6a instances offer:

  • A larger instance size, m6a.48xlarge, with up to 192 vCPUs and 768 GiB of memory, enabling you to consolidate more workloads on a single instance. M6a also offers Elastic Fabric Adapter (EFA) support for workloads that benefit from lower network latency and highly scalable inter-node communication, such as HPC and video processing.
  • Up to 35 percent higher price performance per vCPU than comparable M5a instances, up to 50 Gbps of networking speed, and up to 40 Gbps of Amazon EBS bandwidth, more than twice that of M5a instances.
  • Always-on memory encryption and support for new AVX2 instructions for accelerating encryption and decryption algorithms.

M6a instances expand the sixth-generation general purpose instance portfolio and provide high-performance processing at 10 percent lower cost than comparable x86-based instances. M6a instances are a good fit for general-purpose workloads such as web servers, application servers, and small data stores.
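To make the price-performance figures above concrete: price performance is simply performance (e.g., a benchmark score) per dollar. A short sketch with hypothetical numbers (not actual AWS pricing) shows how a lower price alone translates into a gain on this metric:

```python
def price_performance_gain(old_price, new_price, old_perf, new_perf):
    """Relative improvement in performance-per-dollar of new vs. old."""
    return (new_perf / new_price) / (old_perf / old_price) - 1.0

# Hypothetical figures: an identical benchmark score at a 10 percent lower
# hourly price already yields an ~11 percent price-performance gain;
# higher per-vCPU performance compounds on top of that.
print(round(price_performance_gain(1.00, 0.90, 100, 100), 3))  # 0.111
```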

To learn more, visit the M6a instances page. Please send feedback to ec2-amd-customer-feedback@amazon.com, to the AWS forum for EC2, or through your usual AWS Support contacts.

— Channy

