Amazon Inspector Now Scans AWS Lambda Functions for Vulnerabilities
Amazon Inspector is a vulnerability management service that continually scans workloads across Amazon Elastic Compute Cloud (Amazon EC2) instances, container images in Amazon Elastic Container Registry (Amazon ECR), and, starting today, AWS Lambda functions and Lambda layers.
Until today, customers who wanted to analyze their mixed workloads (including EC2 instances, container images, and Lambda functions) against common vulnerabilities needed to combine AWS and third-party tools. This increased the complexity of keeping all their workloads secure.
In addition, the Log4j vulnerability from a few months ago showed that scanning your functions for vulnerabilities only before deployment is not enough. Because new vulnerabilities can appear at any time, it is important for the security of your applications that workloads are continuously monitored and rescanned in near real time as new vulnerabilities are published.
Getting started
The first step to getting started with Amazon Inspector is to activate it for your account or for your entire organization in AWS Organizations. Once activated, Amazon Inspector automatically scans the functions in the selected accounts. Amazon Inspector is a native AWS service; this means that you don't need to install a library or agent in your functions or layers for it to work.
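If you prefer to activate Amazon Inspector programmatically rather than from the console, a minimal sketch using the AWS SDK for Python (Boto3) might look like the following. The account ID is a placeholder, and the call assumes the Inspector v2 enable API accepts a LAMBDA resource type; adjust it for your own accounts.

```python
import boto3

# Hypothetical account ID used only for illustration.
ACCOUNT_ID = "111122223333"

inspector = boto3.client("inspector2")

# Activate Lambda scanning (alongside EC2 and ECR) for this account.
response = inspector.enable(
    accountIds=[ACCOUNT_ID],
    resourceTypes=["EC2", "ECR", "LAMBDA"],
)
print(response["accounts"])
```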
Amazon Inspector is available starting today for functions and layers written in Java, Node.js, and Python. By default, it continually scans all the functions in your account, but if you want to exclude a particular Lambda function from scanning, you can attach a tag with the key InspectorExclusion and the value LambdaStandardScanning.
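As a rough illustration, you could attach that exclusion tag with the AWS SDK for Python (Boto3); the function ARN below is hypothetical and should be replaced with the function you want to exclude.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function ARN; replace with your own function.
function_arn = "arn:aws:lambda:us-east-1:111122223333:function:my-function"

# Attach the exclusion tag so Amazon Inspector skips this function.
lambda_client.tag_resource(
    Resource=function_arn,
    Tags={"InspectorExclusion": "LambdaStandardScanning"},
)
```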
Amazon Inspector scans functions and layers initially upon deployment and automatically rescans them when there are changes in the workloads, for example, when a Lambda function is updated or when a new vulnerability (CVE) is published.
In addition to functions, Amazon Inspector scans your Lambda layers; however, it only scans the specific layer version that is used in a function. If a layer or layer version is not used by any function, then it won’t get analyzed. If you are using third-party layers, Amazon Inspector also scans them for vulnerabilities.
You can see the findings for the different functions in the Amazon Inspector Findings console, filtered by Lambda function. When Amazon Inspector finds something, all findings are routed to AWS Security Hub and to Amazon EventBridge so that you can build automation workflows, such as sending notifications to developers or system administrators.
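For example, a minimal sketch of an EventBridge rule that forwards Inspector findings to a notification target could look like the following. The rule name, the SNS topic ARN, and the detail-type string are assumptions; verify the detail-type against the events you actually receive in your account.

```python
import json
import boto3

events = boto3.client("events")

# Hypothetical rule name.
rule_name = "inspector-lambda-findings"

# Match Amazon Inspector findings on the default event bus.
# The detail-type string is an assumption about how Inspector v2
# publishes events; confirm it with a sample event payload.
events.put_rule(
    Name=rule_name,
    EventPattern=json.dumps({
        "source": ["aws.inspector2"],
        "detail-type": ["Inspector2 Finding"],
    }),
    State="ENABLED",
)

# Hypothetical SNS topic used to notify developers or administrators.
events.put_targets(
    Rule=rule_name,
    Targets=[{
        "Id": "notify-devs",
        "Arn": "arn:aws:sns:us-east-1:111122223333:inspector-findings",
    }],
)
```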
Available Now
Amazon Inspector support for AWS Lambda functions and layers is generally available today in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).
Create your fashion assistant application using Amazon Titan models and Amazon Bedrock Agents
In this post, we implement a fashion assistant agent using Amazon Bedrock Agents and the Amazon Titan family models. The fashion assistant provides a personalized, multimodal conversational experience.
Implement model-independent safety measures with Amazon Bedrock Guardrails
In this post, we discuss how you can use the ApplyGuardrail API in common generative AI architectures such as third-party or self-hosted large language models (LLMs), or in a self-managed Retrieval Augmented Generation (RAG) architecture.
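As a quick, hedged sketch of what a standalone ApplyGuardrail call can look like with the AWS SDK for Python (Boto3), assuming you have already created a guardrail in Amazon Bedrock (the guardrail identifier and version below are placeholders):

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Hypothetical guardrail ID and version; create these in Amazon Bedrock first.
response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="gr-1234567890",
    guardrailVersion="1",
    source="INPUT",  # evaluate the user prompt before it reaches the model
    content=[{"text": {"text": "How do I reset my account password?"}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    print("Blocked or masked by the guardrail:", response["outputs"])
else:
    print("Input passed the guardrail checks")
```

Because ApplyGuardrail is decoupled from model invocation, the same check can sit in front of a third-party or self-hosted LLM, or between retrieval and generation in a self-managed RAG pipeline.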
Visier’s data science team boosts their model output 10 times by migrating to Amazon SageMaker
In this post, we learn how Visier was able to boost their model output by 10 times, accelerate innovation cycles, and unlock new opportunities using Amazon SageMaker.