A year after major US tech companies agreed to a voluntary initiative led by the Biden administration to manage the risks posed by artificial intelligence (AI), Amazon said a global coalition is needed on responsible AI practices.
“It’s now clear that we can enact rules that protect against risks while also ensuring that we don’t stifle innovation,” David Zapolsky, Amazon’s senior vice president of global public policy and general counsel, said in a post on the company’s website on Monday (July 22). “But to protect the economic prosperity and security of the United States, we need global alignment on responsible AI practices.”
One way to achieve a responsible approach to AI is for all companies in the AI space to commit to developing and deploying the technology responsibly, Zapolsky said. This could include measures like those Amazon has taken, such as embedding invisible watermarks in its image-generation tools to curb the spread of misinformation.
Additionally, all companies need to be transparent about how they develop and deploy AI, Zapolsky said. Amazon, for example, has created AI service cards to inform Amazon Web Services (AWS) customers of the limitations of its AI services, best practices for responsible AI, and how to build AI applications securely.
Another key to responsible AI is cooperation and information sharing between companies and governments, Zapolsky said, pointing to the Artificial Intelligence Safety Institute Consortium, established by the National Institute of Standards and Technology (NIST) to advance research on and measurement of AI safety.
“For the United States and our allies to maximize the benefits of AI while minimizing its risks, we must continue to work together to establish AI guardrails that are consistent with democratic values, ensure economic prosperity and security, ensure global interoperability, promote competition, and enhance safe and responsible innovation,” Zapolsky said.
On July 21, 2023, the White House announced that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI had made voluntary commitments to support efforts to improve the safety, security, and transparency of AI technology.
These commitments cover a range of measures aimed at better understanding the risks and ethical implications of AI, as well as increasing transparency and limiting the technology's potential for misuse.