
Protecting Consumers and Promoting Innovation – AI Regulation and Building Trust in Responsible AI

Artificial intelligence (AI) is one of the most transformational technologies of our generation and provides huge opportunities to be a force for good and drive economic growth. It can help scientists cure terminal diseases, engineers build previously inconceivable structures, and farmers yield more crops. AI allows us to make sense of our world as never before, and to build products and services that address some of our most challenging problems, like climate change and responding to humanitarian disasters. AI is also helping industries innovate and overcome more commonplace challenges. Manufacturers are deploying AI to avoid equipment downtime through predictive maintenance and to streamline their logistics and distribution channels through supply chain optimization. Airlines are taking advantage of AI technologies to enhance the customer booking experience, assist with crew scheduling, and transport passengers with greater fuel efficiency by simulating routes based on distance, aircraft weight, and weather.

While the benefits of AI are already plain to see and improving our lives each day, unlocking AI’s full potential will require building greater confidence among consumers. That means earning public trust that AI will be used responsibly and in a manner that is consistent with the rule of law, human rights, and the values of equity, privacy, and fairness.

Understanding the important need for public trust, we work closely with policymakers across the country and around the world as they assess whether existing consumer protections remain fit for purpose in an AI era. An important baseline for any regulation must be to differentiate between high-risk AI applications and those that pose low to no risk. The great majority of AI applications fall in the latter category, and their widespread adoption provides opportunities for immense productivity gains and, ultimately, improvements in human well-being. If we are to inspire public confidence in the overwhelming good that AI can do, businesses must demonstrate that they can effectively mitigate the potential risks of high-risk AI applications. The public should be confident that these high-risk systems are safe, fair, appropriately transparent, privacy-protective, and subject to appropriate oversight.

At AWS, we recognize that we are well positioned to deliver on this vision and are proud to support our customers as they invent, build, and deploy AI systems to solve real-world problems. Because AWS offers the broadest and deepest set of AI services and the supporting cloud infrastructure, we are committed to developing fair and accurate AI services and to providing customers with the tools and guidance needed to build applications responsibly. We recognize that responsible AI is the shared responsibility of all organizations that develop and deploy AI systems.

We are committed to providing tools and resources to aid customers using our AI and machine learning (ML) services. Earlier this year, we launched our Responsible Use of Machine Learning guide, providing considerations and recommendations for responsibly using ML across all phases of the ML lifecycle. In addition, at our 2020 AWS re:Invent conference, we rolled out Amazon SageMaker Clarify, a service that gives developers greater insight into their data and models, helping them understand why an ML model made a specific prediction and whether its predictions were impacted by bias. Additional resources, access to AI/ML experts, and education and training can also be found on our Responsible use of artificial intelligence and machine learning page.
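
For customers working in Python, the following minimal sketch illustrates how this kind of bias check might be kicked off with the Clarify module of the SageMaker Python SDK, computing bias metrics on both a training dataset and a trained model. The S3 paths, column names, model name, and IAM role shown here are hypothetical placeholders, and the exact configuration will depend on your own data and model.

import sagemaker
from sagemaker.clarify import (
    SageMakerClarifyProcessor,
    DataConfig,
    BiasConfig,
    ModelConfig,
    ModelPredictedLabelConfig,
)

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical IAM role

# Processor that runs the Clarify analysis job
clarify_processor = SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Hypothetical tabular dataset in S3 with a binary "approved" label
data_config = DataConfig(
    s3_data_input_path="s3://example-bucket/loans/train.csv",
    s3_output_path="s3://example-bucket/clarify-output",
    label="approved",
    headers=["age", "income", "gender", "approved"],
    dataset_type="text/csv",
)

# Measure bias with respect to the hypothetical "gender" facet,
# treating label value 1 as the favorable outcome
bias_config = BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
)

# Hypothetical SageMaker model whose predictions are also checked
model_config = ModelConfig(
    model_name="example-loan-model",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    accept_type="text/csv",
)
predictions_config = ModelPredictedLabelConfig(probability_threshold=0.5)

# Runs pre-training (dataset) and post-training (model) bias metrics
clarify_processor.run_bias(
    data_config=data_config,
    bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
)

The job writes its bias report to the configured S3 output path, where it can be reviewed alongside the explainability output Clarify can also generate.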

We continue to expand our efforts to provide guidance and support to customers and the broader community on the responsible use of AI. This week at our re:Invent 2022 conference, we announced the launch of AWS AI Service Cards, a new transparency resource to help customers better understand our AWS AI services. The new AI Service Cards are a form of responsible AI documentation that provides customers with a single place to find information on the intended use and design of our AI services.

Each AI Service Card covers four key topics to help you better understand the service or service features, including intended use cases and limitations, responsible AI design considerations, and guidance on deployment and performance optimization. The content of the AI Service Cards addresses a broad audience of customers, technologists, researchers, and other stakeholders who seek to better understand key considerations in the responsible design and use of an AI service.

Conversations among policymakers about AI regulation continue as the technology becomes more established. AWS is focused not only on offering best-in-class tools and services for the responsible development and deployment of AI, but also on continuing to engage with lawmakers to promote strong consumer protections while encouraging the fast pace of innovation.

About the Author

Nicole Foster is Director of AWS Global AI/ML and Canada Public Policy at Amazon, where she leads the direction and strategy of artificial intelligence public policy for Amazon Web Services (AWS) around the world as well as the company’s public policy efforts in support of the AWS business in Canada. In this role, she focuses on issues related to emerging technology, digital modernization, cloud computing, cyber security, data protection and privacy, government procurement, economic development, skilled immigration, workforce development, and renewable energy policy.


