Biden’s Executive Order to Establish Regulations for Artificial Intelligence Usage
On Monday, Oct. 30, President Biden outlined the first federal regulations governing the use of artificial intelligence (AI) systems. With AI becoming a fast-growing sector, the executive order seeks to oversee AI research and enforce AI testing requirements in order to prevent the technology from being used to develop weapons or launch supercharged cyberattacks.
The executive order on AI is the first of its kind in the US, reflecting the nationwide urgency of responding to the uncertain impact of AI technology on the labor market, national security, and consumer privacy. The order builds on previous AI efforts such as Executive Order 13960 and the Blueprint for an AI Bill of Rights. Bruce Reed, a White House deputy chief of staff, remarked, “President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust.”
With its provisions set to take effect over the next 90 days, the executive order centers on three key rules, according to the White House: creating new safety and security standards, protecting workers and jobs, and advancing equity and civil rights.
The order also assigns new responsibilities to other government agencies. For instance, the Department of Commerce is tasked with creating authentication and watermarking standards for generative AI systems; the National Institute of Standards and Technology will develop red-team testing standards for AI systems; and the Departments of Energy and Homeland Security will study the threat AI poses to critical infrastructure. Biotechnology firms that use AI to manipulate biological material are also urged to take precautions and be transparent about its use.
The order also requires AI companies to share test results with government officials before their systems and services become available to the public. This is meant to mitigate a number of risks associated with AI, such as the spread of false information and deepfakes that could influence next year’s election season. As Sarah Kreps, director of the Tech Policy Institute at Cornell University, put it, “The new executive order strikes the right tone by recognizing both the promise and perils of AI.”
Although the order focuses on regulating AI for domestic enterprises, software development is a largely global endeavor that concerns every technologically advanced country. The US is bound to face diplomatic challenges in enforcing its AI rules unless its allies develop similar regulations.
On Wednesday, Nov. 1, Vice President Kamala Harris represented the US at a conference in London to encourage action on AI regulation and garner support for AI initiatives. The summit brought together 28 nations, which signed on to work toward a shared agreement on AI risks. Over the next year, the nations aim to continue discussing the future of AI through summits in South Korea and France. Ultimately, Harris hopes that the Biden Administration’s actions on AI are “inspiring and instructive to other nations.”
Moving forward, the executive order is expected to shape the AI industry. The order envisions an ambitious future for AI, yet whether it produces an industry-wide shift remains to be seen as companies begin making the necessary adjustments. Having legally defined the limits of AI, the US must now work out how to align its policies with the interests of its allies, aiming for a sustainable and smooth transition into the AI future.