NIST's Call for Public Input on Responsible AI Development
The U.S. National Institute of Standards and Technology (NIST), part of the Department of Commerce, has issued a request for information to support its responsibilities under the most recent presidential executive order on the safe and responsible development and use of artificial intelligence (AI).
The agency is inviting public input until February 2, 2024, to gather feedback essential to conducting tests that help assure the safety of AI systems.
U.S. Secretary of Commerce Gina Raimondo noted that President Joe Biden's October executive order directed NIST to develop these guidelines.
The guidelines cover evaluation and red-teaming, the promotion of consensus-based standards, and the establishment of testing environments for evaluating AI systems. The framework is intended to help the AI developer community build AI that is responsible, dependable, and safe.
In its request for information, NIST seeks feedback from AI organizations and the general public on managing the risks of generative AI and reducing the risks of AI-generated misinformation.
Generative AI, which can produce text, images, and videos in response to open-ended prompts, has drawn both excitement and alarm. Concerns include the possibility of the technology surpassing human abilities, with potentially disastrous consequences, as well as job displacement and disruption of election processes.
Additionally, the request asks where "red-teaming" would be most effective in AI risk assessment and how best practices for it should be established.
The term "red-teaming," which originated in Cold War simulations, refers to a technique in which a group known as the "red team" simulates adversarial scenarios or attacks to expose the vulnerabilities and weaknesses of a system, process, or organization.
Cybersecurity professionals have long used this approach to uncover new threats. The first public red-teaming assessment of AI systems in the United States, coordinated by AI Village, SeedAI, and Humane Intelligence, took place in August at a cybersecurity conference.
In November, NIST announced the establishment of a new artificial intelligence consortium, accompanied by an official notice seeking participants with the relevant credentials.
The consortium intends to develop and implement policies and measurements that help ensure U.S. lawmakers take a human-centered approach to AI safety and regulation.