SCIPAGE.COM Daily Science News



AI Companies Agree To Safeguards To Ensure AI Tools Are Secure

Published by Jean Jarvaise

July 22, 2023 4:20 am

AI Firms Agree to Safeguards to Protect Against Bias and Misuse

A group of leading artificial intelligence (AI) firms has agreed to a set of voluntary safeguards to help ensure that their products are safe and fair.

The safeguards, which were announced by the White House on Friday, include commitments to:

  • Conducting security testing of AI systems to identify and mitigate potential risks.

  • Providing transparency about the design and operation of AI systems, including how they are trained and how they make decisions.

  • Publicly reporting flaws and risks in AI systems, including effects on fairness and bias.

  • Using digital watermarking to help users distinguish real images from AI-generated ones, known as deepfakes (a simplified illustration of the watermarking idea follows this list).
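
To make the watermarking commitment concrete, here is a minimal sketch of one classic approach, least-significant-bit (LSB) embedding, in Python. This is purely illustrative: it is not the scheme any of the named companies has announced, the function names (embed_watermark, extract_watermark) are hypothetical, and production provenance systems rely on far more robust, tamper-resistant techniques.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bits of the first pixels."""
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    # Clear each target pixel's lowest bit, then set it to the watermark bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bit pattern back out of the pixel LSBs."""
    return image.flatten()[:n_bits] & 1

# Demo: mark a random 8-bit grayscale image, then verify the mark is recoverable.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)

marked = embed_watermark(img, mark)
recovered = extract_watermark(marked, mark.size)
assert np.array_equal(recovered, mark)
```

The appeal of a hidden mark like this is that the image looks unchanged to a viewer while a detector can still flag it as machine-generated; the drawback of naive LSB embedding is that simple edits such as compression or resizing destroy the mark, which is why real deployments favor more resilient watermarks and signed provenance metadata.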

The participating firms include Amazon, Google, Meta, Microsoft, OpenAI, and Anthropic.

"These safeguards are an important step forward in ensuring that AI is developed and used responsibly," said White House National Security Advisor Jake Sullivan. "They will help to protect against the misuse of AI for harmful purposes, and they will help to build public trust in this powerful technology."

The safeguards are voluntary, but the White House said it hopes that other AI firms will adopt them. The administration is also working on legislation that would codify some of the safeguards into law.

The agreement comes amid growing concern about the potential risks of AI. AI systems have been used to create deepfakes that spread misinformation or impersonate real people, and they have also been used to discriminate against people based on race, gender, or other factors.

The safeguards announced by the White House are designed to address these concerns. By increasing transparency and accountability, they aim to ensure that AI systems are developed and used responsibly.

The agreement is a positive step, but the safeguards remain voluntary, and some AI firms may decline to adopt them. Even so, the White House's announcement signals that the government is taking the potential risks of AI seriously and is working with the private sector to develop responsible AI practices.