Just days after President Joe Biden unveiled a sweeping executive order retasking the federal government's approach to AI development, Vice President Kamala Harris announced a half dozen more machine learning initiatives the administration is undertaking at the UK AI Safety Summit on Tuesday. Among the highlights: the establishment of the United States AI Safety Institute, the release of the first draft policy guidance on the federal government’s use of AI and a declaration on responsible military applications of the emerging technology.
“President Biden and I believe that all leaders, from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Harris said in her prepared remarks.
“Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” she said. The existential threats that generative AI systems present were a central theme of the summit.
“To define AI safety we must consider and address the full spectrum of AI risk — threats to humanity as a whole, threats to individuals, to our communities and to our institutions, and threats to our most vulnerable populations,” she continued. “To make sure AI is safe, we must manage all these dangers.”
To that end, Harris announced Wednesday that the White House, in cooperation with the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) within the National Institute of Standards and Technology (NIST). It will be responsible for creating and publishing the guidelines, benchmark tests, best practices and the like for testing and evaluating potentially dangerous AI systems.
These tests could include the red-team exercises that President Biden mentioned in his EO. The AISI would also be tasked with providing technical guidance to lawmakers and law enforcement on a wide range of AI-related topics, including identifying generated content, authenticating live-recorded content, mitigating AI-driven discrimination, and ensuring transparency in its use.
Additionally, the Office of Management and Budget (OMB) is set to release the administration’s first draft policy guidance on government AI use for public comment later this week. Like the Blueprint for an AI Bill of Rights that it builds upon, the draft policy guidance outlines steps the national government can take to “advance responsible AI innovation” while maintaining transparency and protecting federal workers from increased surveillance and job displacement. This draft guidance will eventually be used to establish safeguards for the use of AI in a broad swath of public sector applications, including transportation, immigration, health and education, so it is being made available for public comment at ai.gov/input.
Harris also announced during her remarks that the Political Declaration on the Responsible Use of Artificial Intelligence and Autonomy, which the US issued in February, has collected 30 signatories to date, all of whom have agreed to a set of norms for responsible development and deployment of military AI systems. Just 165 nations to go! The administration is also launching a virtual hackathon in an effort to blunt the harm AI-empowered phone and internet scammers can inflict. Hackathon participants will work to build AI models that can counter robocalls and robotexts, especially those targeting elderly folks with generated voice scams.
Content authentication is a growing focus of the Biden-Harris administration. President Biden’s EO explained that the Commerce Department will be spearheading efforts to validate content produced by the White House through a collaboration with the C2PA and other industry advocacy groups. They’ll work to establish industry norms, such as the voluntary commitments previously extracted from 15 of the largest AI firms in Silicon Valley. In her remarks, Harris extended that call internationally, asking for support from all nations in developing global standards in authenticating government-produced content.
“These voluntary [company] commitments are an initial step toward a safer AI future, with more to come,” she said. “As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities, and the stability of our democracies.”
“One important way to address these challenges — in addition to the work we have already done — is through legislation — legislation that strengthens AI safety without stifling innovation,” Harris continued.