November 9, 2023

President Biden’s Executive Order on Artificial Intelligence: Shaping the Future of AI

Advisory

On October 30, President Biden signed his Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO), addressing myriad issues related to artificial intelligence (AI). Aiming directives or requests at over 40 separate federal agencies, the EO reflects the Biden administration’s determination (in the words of one White House advisor) to “pull every lever” to meet the opportunities and challenges brought by these exciting technologies.

Key Takeaways for Businesses

  • The Department of Commerce (Commerce) has a big role. Commerce must develop and administer extremely broad reporting requirements related to: cutting-edge, dual-use foundation models; certain large-scale computing clusters; and certain U.S. Infrastructure as a Service (IaaS) transactions involving foreign persons. It will be important for businesses to work to ensure that the resulting framework is one in which they can innovate and compete.
  • Federal contracting practices will affect AI developers doing business with the government. The EO requires the development and implementation of certain responsible AI practices — including with respect to content authentication as well as detecting and labeling synthetic content. Procurement of AI systems will likely reflect these responsible AI practices, indirectly influencing the products and services available for private-sector enterprises.
  • Expect numerous rules, reports, and other actions over the next year to support workers, advance equity and civil rights, and protect consumers.
  • The immediate impact on businesses will be limited. However, the rulemakings and other actions called for by the EO — some of which already are underway — likely will have significant effects on how companies develop, deploy, and use AI systems.
  • Congress may lag behind. Although there is bipartisan, bicameral interest in advancing policy governing AI, it may be difficult in the closely divided Congress to find the political will to act soon — especially on more ambitious proposals. Congress will use its oversight authorities to examine agency and industry practices, however.
  • Even though AI legislation may not emerge from the current Congress, it remains crucial for companies developing or deploying AI systems to educate members of the House and Senate to set the stage for future policymaking.

Background

The emergence of highly capable generative AI systems over the past year has sparked a debate over how to regulate AI and whether AI development should be constrained or accelerated. AI promises benefits across the range of human experience: from combating climate change to improving medical care; from customizing instruction to reclaiming leisure time through automation; from helping people overcome physical limitations on communication and creative endeavors to boosting productivity. We want innovation to deliver on these promises. Yet, these same technologies also present a plethora of risks — including inaccuracy, discrimination, privacy invasions, “deepfakes” and other disinformation, copyright infringement, more dangerous cyberattacks, more convincing frauds, broader access to weapons of mass destruction (WMD), and, just maybe, the superintelligent systems threatening humanity about which science fiction has warned.

Governments across the globe are attempting to find the proper balance between innovation and safety and among priorities for risk mitigation. They also are weighing whether to adopt regulations that apply broadly across the economy and society, most likely enforced by a single AI regulator, or to proceed sector-by-sector, with each regulator responsible for AI governance in its remit.

The European Union is close to adopting the AI Act, with a single set of rules for almost all sectors. Based on the precautionary principle — that an innovation must be proven safe before it is permitted — many of the AI Act’s rules will be highly prescriptive for AI systems deployed in use cases deemed high-risk.

China, likewise, has put in place measures that apply across the economy. These measures include a range of strong protections for individuals as well as provisions to preserve social order and Communist Party control.

In the United Kingdom, on the other hand, the government issued a White Paper in March, calling for “light-touch” regulation under which existing regulators will apply several responsible AI principles to systems deployed in their sectors. As something of an exception, over the intervening months, Prime Minister Rishi Sunak and his team have emphasized the need to protect against the large-scale dangers of AI — such as attacks against critical infrastructure, proliferation of WMD, and superintelligent AI systems going rogue. One result of this emphasis was the AI Safety Summit that the prime minister just hosted at Bletchley Park.

In the United States, the Biden administration made its first major foray into AI policy with the October 2022 Blueprint for an AI Bill of Rights, laying out principles to foster policies and practices — and automated systems — that protect civil rights and promote democratic values. In January, the National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework version 1.0 (AI RMF 1.0). NIST, which has no regulatory authority, designed AI RMF 1.0 to “[b]e law- and regulation-agnostic,” suitable for any set of substantive provisions.

Various U.S. agencies that, unlike NIST, do have regulatory authority have stepped up warnings that they intend to exercise that authority against AI-related breaches of existing statutes in their sectors. For instance, in April 2023, the heads of the Civil Rights Division, Consumer Financial Protection Bureau (CFPB), Equal Employment Opportunity Commission, and Federal Trade Commission (FTC) threatened to crack down on violations of antidiscrimination laws caused by AI systems. As another example, The Washington Post reported in July that the FTC had opened an investigation into potential unfair or deceptive acts or practices related to ChatGPT™. The FTC, Securities and Exchange Commission, banking regulators, and other agencies have opened rulemakings on potential new AI regulations.

On Capitol Hill, Senate Majority Leader Chuck Schumer (D-NY), along with Sens. Todd Young (R-IN), Martin Heinrich (D-NM), and Mike Rounds (R-SD), have taken the lead on AI regulation. Leader Schumer has since established a special process for educating the Senate on AI policy and, eventually, crafting AI legislation. In the House of Representatives, the bipartisan Congressional Artificial Intelligence Caucus has initially avoided the tough questions on regulation with a bill to create a widely accessible AI research platform. Other compatible and competing proposals abound in both houses of Congress, and it remains uncertain how or whether the legislative process will move forward. Notwithstanding the high level of interest in AI policy, it is not clear that there is sufficient political will to make law before a new Congress is seated at the start of 2025. Narrow proposals on particular topics will stand a better — but still uncertain — chance for adoption than more comprehensive initiatives.

Unveiled against this backdrop, the EO seeks to stake out a middle ground to promote the continued development of AI technologies and advance the conversation for building an effective regulatory regime. The EO requires agencies to deliver a large number of reports, proposals, and rules in a matter of months, and we expect the administration to engage more actively with Congress on AI legislation.

EO Overview

The EO lays out the administration’s vision for the emerging AI industry, built around eight primary objectives: (1) establish new standards for AI safety and security; (2) promote responsible innovation and competition; (3) support workers; (4) advance equity and civil rights; (5) protect consumers; (6) protect privacy and civil liberties; (7) use AI responsibly in the federal government and build federal AI governance capacity; and (8) advance American leadership in global AI governance. These eight objectives expand on the five principles outlined in the administration’s Blueprint for an AI Bill of Rights by mandating a set of minimum evaluation, monitoring, and risk-mitigation practices for use in the federal government. These practices attempt to use the federal example and procurement policy to foster responsible AI deployment and development in the absence of congressional action.

The EO follows an “all-of-government” strategy, as the Biden administration has done with other issues. Tasking more than 40 federal agencies with numerous standards, safeguards, reports, and other steps to advance the nation’s AI strategy, the EO sets out aggressive implementation deadlines spanning from 30 to 365 days. 

To coordinate implementation, the EO establishes a White House AI Council to oversee all AI-related activities across the federal government. The White House AI Council will be led by the Assistant to the President and Deputy Chief of Staff for Policy and include nearly 30 agency representatives across the federal government. The links below contain analyses from teams of Arnold & Porter lawyers of the impacts the EO’s directives could have on various sectors.

Export Controls 

Although it does not explicitly address export controls, the EO directs the Secretary of Commerce to use the International Emergency Economic Powers Act to impose expansive reporting requirements on dual-use foundation models, large-scale computing clusters, and U.S. IaaS transactions involving foreign persons. We expect these reports will likely inform the Department of Commerce in refining or adding export control measures that may impact the AI sector. For more information, please see here.

Critical Infrastructure, Financial Institutions, National Security Systems, and Other Government Information Systems

Focused on the potential threats from AI systems, the EO directs government agencies to manage AI systems interfacing with critical-infrastructure sectors, national security systems, and other important government information systems. The Department of Homeland Security (DHS) will coordinate interagency efforts to assess vulnerabilities in critical-infrastructure and financial sectors, in addition to developing guidelines for mitigating AI-related risks. The Department of Defense (DOD) and the DHS, respectively, will develop pilot AI projects to bolster the cyber-defense capability of U.S. government national security and non-national security information systems. 

The EO also requires the National Security Advisor and the White House Deputy Chief of Staff for Policy to coordinate an interagency national security memorandum on the governance of AI used as a component of national security systems and for military and intelligence purposes. The memorandum will guide how the DOD, the Department of State, and the Intelligence Community address the national security risks and potential benefits of AI. For more information, please see here.

Healthcare

The EO directs the Department of Health and Human Services (HHS) to create an AI Task Force to build a strategic plan for AI-enabled healthcare technology tools. HHS will also provide guidance to incorporate safety, privacy, and security standards to protect patients’ personally identifiable information, including measures to address cybersecurity threats. For more information, please see here.

Competition

The administration touts the EO as promoting “a fair, open, and competitive AI ecosystem.” Indeed, the EO calls on agencies to promote competition in AI and discusses “stopping unlawful collusion and addressing risks from dominant firms’ use of key assets,” specifically mentioning the use of “semiconductors, computing power, cloud storage, and data to disadvantage competitors.” The EO also encourages the FTC to consider exercising its rulemaking authority to “ensure fair competition in the AI marketplace.” For more information, please see here.

Consumer Protection and Securities 

The EO encourages independent regulatory agencies — which are outside presidential authority and cannot be commanded — to employ “their full range of authorities” to protect consumers from fraud, discrimination, privacy invasions, and other injuries from AI. For more information, please see here.

Privacy

To address concerns that AI may exacerbate risks to privacy, the EO directs federal agencies to scrutinize how they collect and use personal information in connection with AI. The EO also places a substantial emphasis on the development of “privacy-enhancing technologies.” The EO provides more granular direction for specific agencies in the mitigation of privacy risks in AI. Importantly, in announcing the EO, the administration called on Congress to address AI privacy risks through broad privacy legislation, which members of Congress have attempted numerous times in the past decade without success. For more information, please see here.

Intellectual Property

To clarify issues around AI and its potential to develop IP assets, the EO directs the U.S. Patent and Trademark Office (USPTO) to issue guidance on the patent eligibility of inventions developed using AI, as well as emerging issues at the intersection of AI and IP. The EO also directs the USPTO to consult with the U.S. Copyright Office to issue recommendations to the White House on potential executive actions relating to copyright and AI. For more information, please see here.

Labor and Employment

Focused on mitigating “AI’s potential harms to employees’ well-being and maximize its potential benefits,” the EO directs the Department of Labor to publish principles and best practices for employers within 180 days. For more information, please see here.

Financial Services/Housing

To protect against discrimination and biases in AI and AI-enabled products, the EO encourages the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau (CFPB) to require regulated entities, where possible, to evaluate (1) underwriting models for bias affecting protected groups and (2) collateral-valuation and appraisal processes to minimize bias. To combat unlawful discrimination by AI in housing and other real-estate transactions, the EO directs the Secretary of Housing and Urban Development and encourages the Director of the CFPB to issue guidance on several topics within 180 days of the order. The EO also encourages all independent regulatory agencies to consider using existing authorities, including their rulemaking authority, to protect consumers from fraud, discrimination, and threats to privacy, as well as any other risks that may arise from AI and AI-enabled products. For more information, please see here.

Education and Workforce 

In light of the potential uses of AI in education contexts, the EO directs the Department of Education to develop resources and guidance for the education sector within a year of the EO’s enactment, which will consider how to develop and deploy AI in a safe, responsible, and nondiscriminatory way that accounts for the impact of AI systems on “vulnerable and underserved communities.” For more information, please see here.

Transportation 

To support the safe integration of AI into the transportation sector, the EO directs the Department of Transportation (DOT) to work through the Nontraditional and Emerging Transportation Technology (NETT) Council to assess the need for guidance regarding the use of AI in transportation. As part of this effort, the NETT Council will support existing and future pilot transportation-related applications of AI, evaluate the outcomes of these pilot programs, and establish a Cross-Modal Executive Working Group to coordinate this work. The EO also directs DOT, through the Advanced Research Projects Agency-Infrastructure (ARPA-I), to explore and prioritize funding opportunities for AI transportation projects. For more information, please see here.

Conclusion

The EO reflects the Biden administration’s creative use of executive power to expand the nation’s capacities in AI and avert potential harms ahead of new statutes and regulations. The immediate impact on businesses will be limited. The EO is long on the government’s leading by example, encouragement, and capacity-building and short on new legal obligations for businesses in the near term. 

But businesses may only have a bit of breathing room. Legal obligations will flow from the new regulations and contracting policies for which the EO calls even if the lawmaking the EO encourages lags behind. Companies developing or deploying AI systems or using third-party AI solutions should watch the various rulemakings, other agency actions, and the legislative process carefully. They should be gearing up their AI risk-management efforts in anticipation of new rules. 

In addition, enterprises concerned that new rules may constrain their operations should consider participating in the policymaking process. We anticipate there will be many opportunities for companies to share their perspectives, from filing comments in rulemakings to meeting with officials in the administration and Congress to working with trade associations, think tanks, and other nongovernmental organizations. Please contact any member of Arnold & Porter’s comprehensive AI team for more specific guidance on your company’s AI risk-management or policymaking needs.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.