G7’s AI Code of Conduct for Responsible Development

Reuters reports that the Group of Seven (G7) industrial nations will agree on an artificial intelligence (AI) code of conduct for developers on October 30.

According to the report, the code consists of eleven points that seek to promote “safe, secure, and trustworthy AI worldwide” and help “capture” the benefits of AI while addressing and mitigating its risks.

The G7 leaders drafted the proposal in September. It offers voluntary guidance to “organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”

In addition, it suggests that firms publish reports on the capabilities, limitations, use, and potential misuse of the systems they are developing.

It also recommends robust security controls for the systems in question. The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, with the European Union also participating.

On April 29 and 30, the digital and tech ministers of this year’s G7, hosted by Hiroshima, Japan, met to discuss emerging technologies, digital infrastructure, and AI, with a specific agenda item devoted to responsible AI and global AI governance.

The G7’s code of conduct for AI arrives as governments around the world attempt to navigate the emergence of AI, weighing its useful capabilities against its risks.

The EU was among the first to act, passing the first draft of its landmark EU AI Act in June. On October 26, the United Nations established a 39-member advisory committee to address global AI regulation issues.

In August, the Chinese government also enacted its own artificial intelligence regulations.

OpenAI, the developer of the popular AI chatbot ChatGPT, announced its intention to establish a “preparedness” team that will evaluate a variety of AI-related risks.
