The Biden administration has released its first comprehensive policy for managing AI risks, requiring agencies to appoint a chief AI officer.
In announcing its first comprehensive strategy for addressing the risks associated with artificial intelligence (AI), the White House has directed agencies to expand their reporting on AI deployment and address the potential vulnerabilities the technology poses.
Biden Administration Plots Out Regulatory Measures
The White House issued a memorandum on March 28 mandating that federal agencies appoint a chief artificial intelligence officer within sixty days, report on their use of AI, and implement protective measures.
The memorandum follows the executive order on artificial intelligence that President Joe Biden issued in October 2023. Vice President Kamala Harris stated the following during a teleconference with reporters:
“I believe that all leaders from governments, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone can enjoy its full benefits.”
The new guidance, an initiative of the Office of Management and Budget (OMB), aims to steer the entire federal government toward using artificial intelligence safely and efficiently amid its rapid expansion.
Even as the government works to capitalize on the potential of artificial intelligence, President Joe Biden's administration remains wary of constantly emerging threats.
The memo states that the inventory will not require disclosure of certain AI use cases, particularly those within the Department of Defense, because disclosing them would contravene existing laws and government-wide policies.
AI applications that could put the rights or safety of Americans at risk must have specified safeguards in place by December 1.
For instance, travelers must be able to opt out of the Transportation Security Administration’s (TSA) facial recognition screening at airports.
Agencies unable to apply these safeguards must stop using the AI system in question, unless agency leadership can justify why ceasing its use would increase risks to safety or rights or impede critical agency operations.
OMB’s new directives build on the Biden administration’s October 2022 “Blueprint for an AI Bill of Rights” and the National Institute of Standards and Technology’s January 2023 AI Risk Management Framework.
Both efforts emphasize the development of trustworthy artificial intelligence systems. OMB is also seeking feedback on enforcing compliance and best practices among government contractors that supply technology.
It plans to align its policy with agencies’ AI contracts later in 2024. Additionally, the administration announced its aim to bring 100 AI professionals into government by the summer, the “talent surge” described in the October executive order.