On Jan. 29, 2024, U.S. President Joe Biden announced actions taken under an October 2023 executive order on the use and regulation of artificial intelligence. These actions fulfilled the 90-day goals of the AI Executive Order.
The executive order featured wide-ranging guidance on maintaining safety, civil rights and privacy within government agencies while promoting AI innovation and competition throughout the U.S.
Although the executive order didn’t specifically mention generative artificial intelligence, it was likely issued in response to the proliferation of generative AI, which has been a hot topic since the public release of OpenAI’s ChatGPT in November 2022.
Note: This story was updated on Jan. 30, 2024 with the progress report of the AI Executive Order.
What actions were taken as a result of the AI Executive Order?
As of January 2024, the following actions had been taken as a result of the AI Executive Order.
- Makers of the most powerful AI systems were required to report information, including safety testing results, to the Department of Commerce.
- The Department of Commerce proposed a draft rule that would require U.S. cloud computing companies to report when they provide computing power for AI training outside the U.S.
- Nine agencies completed risk assessments around AI.
- Government agencies have launched AI resource programs, encouraged hiring AI professionals in the U.S. government, created an AI education initiative for K-12 through undergraduate students and funded clinical healthcare initiatives powered by AI.
The U.S. government provided more details in its fact sheet.
What does the executive order on safe, secure and trustworthy AI cover?
The executive order’s guidelines on AI are organized into the following sections.
Safety and security
Any company developing “ … any foundation model that poses a serious risk to national security, national economic security, or national public health and safety … ” must keep the U.S. government informed of its training and red team safety tests, the executive order states. In red team tests, security researchers attempt to break into an organization’s systems in order to test its defenses. New standards will also be created for companies using AI to develop biological materials.
Privacy
The development and use of privacy-preserving techniques will be prioritized in terms of federal support. Privacy guidance for federal agencies will be strengthened with AI risks in mind.
Equity and civil rights
Landlords, federal benefits programs and federal contractors will receive guidelines to keep AI algorithms from exacerbating discrimination. Best practices will be developed for the use of AI in the criminal justice system.
Consumers, patients and students
AI use will be assessed in healthcare and education.
Supporting workers
Principles and best practices will be developed to reduce harm from AI in terms of job displacement, labor equity, collective bargaining and other potential labor impacts.
Promoting innovation and competition
The federal government will encourage AI innovation in the U.S., including streamlining visa criteria, interviews and reviews for immigrants highly skilled in AI.
Advancing American leadership abroad
The federal government will work with other countries on advancing AI technology, standards and safety.
Responsible and effective government use of AI
The executive order promotes helping federal agencies access AI and hire AI specialists. The government will issue guidance for agencies’ use of AI.
Is this AI executive order a law, and how will its guidelines be used?
An executive order isn’t a law and may be modified. The executive order on AI security doesn’t revoke the right of any existing AI company to operate, an anonymous senior official from the Biden administration told The Verge.
The executive order directs the way specific government agencies should be involved in AI regulation going forward. For example, the National Institute of Standards and Technology will lead the way on establishing standards for red team testing for high-risk AI foundation models. The Department of Homeland Security will be responsible for applying those standards in critical infrastructure sectors and will create an AI Safety and Security Board. AI threats to critical infrastructure and other major risks will be the purview of the Department of Energy and the Department of Homeland Security.
SEE: It’s important to balance the benefits of AI with the downsides of the “dehumanization” of work, Gartner says. (TechRepublic)
The federal AI Cyber Challenge will be used as groundwork for an advanced cybersecurity program to discover and mitigate vulnerabilities in critical software.
The National Security Council and White House Chief of Staff will work on a National Security Memorandum to direct future guidelines for the federal government related to AI, particularly in the military and intelligence agencies. The National Science Foundation will work with a Research Coordination Network to advance work on privacy-related research and technologies.
The Department of Justice and federal civil rights officers will coordinate on combating algorithm-based discrimination.
“Recommendations are not regulations, and without mandates, it’s hard to see a path towards accountability when it comes to regulating AI,” Forrester Senior Analyst Alla Valente told TechRepublic in an email. “Let’s recall that when Colonial Pipeline experienced a ransomware attack that triggered a domino effect of negative consequences, pipeline operators had cybersecurity guidelines that were voluntary, not mandatory.”
She compared the executive order to the EU AI Act, which takes a more “risk-based” approach.
“For this executive order to have teeth, requirements must be clear, and actions must be mandated when it comes to ensuring safe and compliant AI practices,” Valente said. “Otherwise, the order will be simply more suggestions that will be ignored by those standing to benefit from them most.”
“We believe reasonable regulatory oversight is inevitable for AI, just as we’ve seen implemented for broadcasting, aviation, pharmaceuticals — all the key transformative tech of the past 150 years,” wrote Graham Glass, CEO of AI education company CYPHER Learning, in an email to TechRepublic. “Compliance with eventual ‘rules of [the] road’ for AI will improve with international coordination.”
Global discussions of AI safety continue
The U.K. held an AI Safety Summit on Nov. 1 and Nov. 2, 2023, where international governments discussed the safety and risks of generative AI. The EU is still working on finalizing its AI Act.