So, there’s a major directive out of the White House: Executive Order 14110, all about artificial intelligence. It’s detailed and covers a lot of ground, from making sure AI is safe to how the government itself should use it. It’s a significant move, building on earlier AI guidance and assigning a long list of tasks to different government agencies. We’re going to break down what it actually means and why it matters for the future of AI.
Key Takeaways
- Executive Order 14110 is a major directive from the White House aimed at guiding the development and use of AI, touching on safety, security, and innovation.
- The order assigns specific tasks and deadlines to over 50 federal entities, creating a broad framework for AI implementation across the government.
- It builds upon previous guidance, like the Blueprint for an AI Bill of Rights, aiming for a more structured approach to ethical AI.
- While focusing on safety and responsible use, the order also emphasizes promoting AI innovation and competition within the U.S.
- Tracking the implementation of Executive Order 14110 is important to see how effectively the government is meeting the goals set out in this comprehensive AI policy.
Understanding Executive Order 14110’s Foundation
Executive Order 14110, signed on October 30, 2023, isn’t just a standalone document; it’s built upon previous efforts to guide the responsible development and use of artificial intelligence. Think of it as the latest chapter in a story that’s been unfolding for a while.
Building on Prior AI Guidance
Before EO 14110, there were other important documents. One notable example is the White House OSTP Blueprint for an AI Bill of Rights, which came out in October 2022. This blueprint laid out a framework for using AI ethically and fairly, especially when AI systems affect people’s rights. It was organized around five key ideas:
- Making sure AI systems are safe and work well.
- Protecting against unfair treatment caused by algorithms.
- Keeping data private.
- Being open about when AI is used.
- Giving people choices or alternatives to AI systems.
While it was more of a guide than a strict rulebook, it really set the stage for what would come next.
The Scope and Priorities of Executive Order 14110
This executive order is pretty broad, covering a lot of ground when it comes to AI. It lays out a total of 150 requirements for federal agencies. These tasks range from creating new guidelines and frameworks to doing studies, setting up task forces, making recommendations, and even putting new policies in place. The order places a significant emphasis on safety, innovation, and how the government itself uses AI. These three areas account for about two-thirds of all the requirements. It’s clear the administration wants to make sure AI develops in a way that benefits everyone, while also keeping the U.S. competitive. You can see a breakdown of these requirements in the Safe, Secure, and Trustworthy AI EO Tracker.
Tracking Federal Implementation of Executive Order 14110
With so many requirements, keeping track of who’s doing what and when is a big job. The order itself assigns tasks to various parts of the government, with agencies like the Executive Office of the President and the Department of Commerce having the most responsibilities. But it’s not just a few agencies; the order requires a whole-of-government approach, meaning many different federal actors need to collaborate. To help monitor progress, tools like the AI EO Tracker have been developed. These trackers list out the requirements, who’s responsible, deadlines, and the type of action needed. This helps policymakers and others see if the government is actually following through on its commitments. It’s a big undertaking, and seeing how it all plays out will be important.
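To make the shape of that tracking problem concrete, here’s a minimal sketch of how a tracker might model each requirement. This is an illustration, not the schema of any actual tracker; the field names, the sample 90-day deadline, and the `overdue` helper are all assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Requirement:
    """One task assigned by the executive order (illustrative fields)."""
    eo_section: str       # section of the order that creates the task
    agency: str           # responsible federal entity
    action_type: str      # e.g. guidance, report, study, task force
    deadline: date        # due date, typically counted from signing
    completed: bool = False

def overdue(reqs: list[Requirement], today: date) -> list[Requirement]:
    """Return open requirements whose deadline has already passed."""
    return [r for r in reqs if not r.completed and r.deadline < today]

# Hypothetical example: a 90-day task counted from the Oct 30, 2023 signing.
tracker = [
    Requirement("4.2", "Department of Commerce", "guidance", date(2024, 1, 28)),
]
print(overdue(tracker, date(2024, 6, 1)))  # -> the Commerce task, still open
```

With the requirements in a structure like this, questions such as "which agency has the most open tasks?" or "what’s due in the next 30 days?" become simple filters, which is essentially what the public trackers let policymakers do through a web interface.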
Key Pillars of Executive Order 14110
Executive Order 14110 is built around a few main ideas, aiming to guide how we develop and use AI. It’s not just about one thing; it touches on several important areas.
Ensuring Safety and Security in AI Development
This part is about making sure AI systems are built in a way that doesn’t cause harm. Think about AI used in critical areas like national security or healthcare. If these systems aren’t reliable or if we can’t understand how they make decisions, it could lead to big problems. The order pushes for research into making AI more predictable and understandable. It also touches on making sure the government buys AI that is fair and doesn’t have built-in biases.
- Focus on AI interpretability and explainability.
- Address AI robustness to prevent unexpected failures.
- Procure AI systems that are unbiased and neutral.
Fostering Innovation and Competition
While safety is key, the order also wants to make sure the U.S. stays ahead in AI development. This means making it easier for companies to build and test new AI technologies. A big part of this involves infrastructure. The government is looking at ways to speed up the process for building data centers, which are needed to power AI. They’re also considering using federal land for these projects and making sure there’s enough reliable power.
- Streamline approvals for data center construction.
- Make federal land available for AI infrastructure.
- Promote reliable power generation for AI needs.
Government’s Role in AI Deployment
This pillar looks at how the government itself will use AI. It’s about setting up clear rules and structures within government agencies to manage AI responsibly. This includes figuring out who is responsible for what when it comes to AI projects and making sure agencies have the right tools and knowledge. The NIST AI Risk Management Framework is mentioned as a key guide for agencies to follow. The goal is to have a consistent approach across the government.
- Define agency responsibilities for AI implementation.
- Utilize the NIST AI Risk Management Framework.
- Establish clear AI governance structures within agencies.
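As a rough illustration of what "clear governance structures" can mean in practice, here’s a sketch of how an agency might record its AI systems in an internal inventory. Federal agencies do maintain AI use-case inventories, but the fields and the `needs_review` rule below are assumptions for illustration, not any agency’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical agency AI inventory."""
    name: str                   # e.g. "claims-triage model"
    accountable_office: str     # who answers for this system
    rights_impacting: bool      # could it affect someone's rights or safety?
    risk_review_done: bool = False

def needs_review(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Flag rights-impacting systems that still lack a risk review."""
    return [u for u in inventory
            if u.rights_impacting and not u.risk_review_done]

inventory = [
    AIUseCase("claims-triage model", "Office of the CIO", rights_impacting=True),
    AIUseCase("mailroom OCR", "Records Division", rights_impacting=False),
]
print([u.name for u in needs_review(inventory)])  # ['claims-triage model']
```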
Navigating the Shifting AI Policy Landscape
The AI policy scene in the U.S. has seen some significant shifts, and understanding these changes is key for anyone involved with artificial intelligence. It’s not just about what the government is doing now, but also how past approaches compare and what the future might hold, especially with state-level actions coming into play.
Comparing Biden’s and Trump’s AI Approaches
When President Biden issued Executive Order 14110, it was a big step towards guiding AI development and use. The focus was on safety, security, and building trust. However, the landscape changed dramatically. Upon taking office, President Trump revoked Biden’s Executive Order 14110, viewing it as a hurdle to American AI innovation. This move signaled a clear pivot towards deregulation. Trump’s administration later released "Winning the Race: America’s AI Action Plan" in July 2025. This plan prioritizes accelerating AI innovation, building domestic AI infrastructure, and leading in international AI diplomacy. The core principles include safeguarding workers, ensuring AI systems are free from bias, and preventing misuse of advanced AI technologies. While the Biden administration emphasized responsible regulation, the Trump administration’s approach leans heavily on promoting U.S. global dominance in AI through innovation and reduced regulatory burdens.
The Impact of Deregulation on AI Risks
It’s easy to think that deregulation means AI risks disappear, but that’s not the case. Even with a strong push for innovation, the "AI Action Plan" itself acknowledges significant technical risks around interpretability, robustness, and misalignment. The government is still allocating resources to research and mitigate these issues. For instance, the NIST AI Risk Management Framework is still in place, though some aspects might be adjusted. Federal agencies are also being directed to establish AI governance structures, policies, and processes. This means that while the federal government might be easing up on private sector mandates, the underlying risks of AI development and deployment remain. Companies still need to be mindful of these potential problems, even if the regulatory pressure from the top has lessened. It’s more about internal diligence now.
State-Level AI Legislation and Federal Influence
While federal policy has shifted, states haven’t stood still. There are now over 130 state AI laws on the books, with places like California and Colorado leading the way with their own AI acts. This creates a complex patchwork of rules for companies operating across different states. The federal government, under the new direction, has indicated it might withhold funding from states with "burdensome AI regulations." This suggests a federal effort to encourage a more uniform, less regulated approach nationwide. However, the existence of these state laws means that AI governance and compliance remain a significant consideration. Companies can’t just ignore state-specific rules, even if federal policy is more hands-off. It’s a balancing act, trying to innovate while staying compliant with a varied legal landscape.
Federal Agency Responsibilities Under Executive Order 14110
So, Executive Order 14110 dropped, and suddenly, a whole bunch of government agencies got homework. It’s not just a few folks in Washington; we’re talking about over 50 different federal entities that have tasks to complete. It’s a pretty big undertaking, and the order lays out about 150 specific requirements. Think of it like a massive to-do list for the entire government when it comes to AI.
Agency Requirements and Deadlines
The Executive Office of the President (EOP) and the Department of Commerce are shouldering a big chunk of these responsibilities, with each having a significant number of tasks. The Director of the Office of Management and Budget (OMB), which is part of the EOP, is leading the charge on a good portion of these. Other agencies like Homeland Security, the Office of Personnel Management, and the Department of State also have quite a bit on their plates. It’s clear that the order is trying to spread the work around, aligning with the different areas the EO focuses on. Some requirements are even directed at all federal agencies, or groups of agencies, meaning a lot of collaboration is going to be needed. The order also has a sense of urgency, with some deadlines coming up pretty quickly.
The Role of the NIST AI Risk Management Framework
One of the key pieces of guidance agencies will be using is the NIST AI Risk Management Framework. This framework is designed to help organizations identify, measure, and manage the risks associated with AI. Under the order, NIST is tasked with keeping the framework current, including companion guidance for newer technologies like generative AI, while agencies are expected to apply it to their own AI systems. It’s all about creating a structured way to think about AI risks and how to handle them. The goal is to make sure that as AI gets used more, it’s done so in a way that’s safe and responsible. This framework is a big part of how the government plans to keep up with the fast pace of AI development and avoid potential problems. It’s a living document, meaning it will likely evolve as AI technology does. You can find more details about the framework and its updates on the NIST website.
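For readers who want a feel for the framework’s shape: the published AI RMF 1.0 organizes risk management into four core functions, Govern, Map, Measure, and Manage. The sketch below uses those real function names, but the one-line descriptions and the `coverage_report` helper are informal paraphrases for illustration, not the framework’s official subcategories.

```python
# The four core functions come from NIST AI RMF 1.0; the descriptions
# are informal paraphrases, not official framework text.
RMF_FUNCTIONS = {
    "Govern":  "policies, roles, and accountability for AI risk",
    "Map":     "context: where, how, and on whom an AI system is used",
    "Measure": "metrics and testing for the risks that were mapped",
    "Manage":  "prioritizing and acting on the measured risks",
}

def coverage_report(functions_assessed: set[str]) -> dict[str, bool]:
    """Show which core functions an agency's review has touched so far."""
    return {fn: fn in functions_assessed for fn in RMF_FUNCTIONS}

print(coverage_report({"Govern", "Map"}))
# {'Govern': True, 'Map': True, 'Measure': False, 'Manage': False}
```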
Establishing AI Governance Structures within Agencies
Beyond just following the NIST framework, agencies need to build their own internal systems for managing AI. This means setting up clear lines of responsibility and processes for how AI is developed, acquired, and used within their departments. It’s about creating a culture of responsible AI use from the ground up. This also involves making sure agencies have the right people with the necessary AI skills. The order specifically calls for efforts to attract and keep AI talent in the federal government, recognizing that this expertise is vital for implementing the EO’s goals. Some agencies might even find themselves challenging state laws that could overly restrict AI development, aiming for a balanced approach across the country. This is part of the broader effort to keep AI innovation from being stifled by inconsistent or overly burdensome regulations; the Department of Justice’s new AI Litigation Task Force, charged with identifying and challenging restrictive laws, is focused on the same goal. Ultimately, it’s about building robust governance that can adapt to the ever-changing AI landscape.
AI and the Future Workforce
So, AI is changing things, and that includes jobs. It’s not just about robots taking over; it’s more about how AI tools can change what people do at work every day. The government’s looking at this, and Executive Order 14110 talks about making sure we’re ready for these shifts. We need to think about how to help people adapt.
Addressing AI’s Impact on Employment
It’s pretty clear AI will change a lot of jobs. Some tasks might become automated, meaning people will need to learn new skills or focus on different aspects of their work. This isn’t necessarily a bad thing, but it does mean we need to be prepared. Think about it: if AI can handle the routine stuff, humans can focus on the more creative or complex problems. The goal is to make sure AI helps us, not hinders us, in our careers. It’s about finding that balance.
Initiatives for AI Skill Development
To keep up, there are efforts to boost AI skills. This means looking at education and training programs. The idea is to get people ready for jobs that involve AI, whether that’s developing it, using it, or managing it. This could involve:
- Updating college curricula to include AI basics.
- Creating short courses and certifications for specific AI tools.
- Encouraging on-the-job training for employees.
These programs aim to make sure workers have the knowledge they need to succeed in an AI-driven economy. It’s about building a workforce that can work alongside AI effectively. You can find more information in the broader federal AI strategy.
Supporting Workers Through AI Disruption
When jobs change, people need support. This means thinking about how to help workers who might be displaced or need to switch careers. It’s not just about training; it’s also about providing resources and guidance. The government is looking into ways to help workers transition smoothly. This could involve:
- Providing career counseling services.
- Offering financial assistance for retraining.
- Developing programs to help workers find new employment opportunities.
The aim is to make sure that as AI advances, people aren’t left behind. It’s a big challenge, but one that’s getting attention.
International AI Diplomacy and Security
When we talk about AI, it’s not just about what’s happening here at home. The U.S. is really trying to lead the charge on the global stage, too. It’s a big part of Executive Order 14110, this idea of promoting American AI technology and making sure it’s seen as the gold standard. Think of it like this: the U.S. wants to export not just AI software, but the whole package – hardware, models, and even the standards we think AI should follow. They’re pushing for international rules that encourage innovation, rather than getting bogged down in too much regulation. It’s a clear move to counter influence from places like China, especially in international groups that set AI standards.
Promoting American AI Exports
The government is looking at creating "full-stack AI export packages." This means bundling up U.S.-made AI tech, from the chips to the applications, and making it easier for other countries to buy. The goal is to boost American businesses and make sure our AI is the one being adopted worldwide. This ties into efforts to speed up approvals for things like data centers, which are the backbone of AI development. It’s about having the infrastructure to back up our AI ambitions. The administration is also looking at strengthening existing export controls and even creating new ones, particularly for components used in making advanced chips. This is a way to keep a closer eye on who’s getting what and how it’s being used.
Countering Global AI Influence
There’s a definite focus on making sure American values are reflected in how AI is governed internationally. This means pushing back against approaches that might be too restrictive or don’t align with the U.S. vision for AI development. They’re actively participating in global forums to shape discussions and standards. It’s a competitive landscape, and the U.S. wants to ensure its voice is heard and its approach to AI is the one that gains traction globally. This also involves keeping tabs on how AI is being used by other nations and addressing any potential downsides.
Addressing National Security Risks of Frontier AI
One of the big concerns is the security implications of the most advanced AI systems, often called "frontier AI." The government is planning to work with AI companies to test these powerful models for national security risks. This involves partnerships with organizations like NIST’s Center for AI Standards and Innovation (CAISI). They’re looking at how to evaluate these systems and develop standards to make sure they’re safe and secure. It’s a complex area, and the U.S. is trying to get ahead of potential problems before they arise. This proactive stance is key to maintaining a secure AI ecosystem, both domestically and internationally. The idea is to adapt existing international agreements for AI applications, using an AI-driven approach to managing and complying with treaties.
Here’s a quick look at some of the actions being considered:
- Develop new export controls for specific semiconductor manufacturing components.
- Fund evaluations of frontier AI systems for national security risks.
- Advocate for international AI governance that supports innovation and American values.
- Strengthen enforcement of current AI export controls using advanced tracking technologies.
Wrapping It Up
So, where does all this leave us with Executive Order 14110? It’s a pretty big deal, setting a lot of expectations for how the government should handle AI. We’ve seen how it tries to cover everything from safety to innovation, with a lot of requirements spread across different agencies. It’s not the only thing happening, though. Other administrations have different ideas, and states are stepping in with their own rules, like Colorado’s AI Act. It’s a complex picture, for sure. The main thing to remember is that even with shifts in policy, thinking about AI risks and how to manage them is still super important for any company working with this tech. It’s not just about following rules; it’s about being smart and responsible as AI keeps changing the world around us.
Frequently Asked Questions
What is Executive Order 14110 all about?
Think of Executive Order 14110 as a big set of instructions from the President about how the government should handle Artificial Intelligence (AI). It’s like a rulebook that tells different government departments what they need to do to make sure AI is developed and used safely, securely, and in a way that people can trust. It covers a lot of ground, from making sure AI isn’t harmful to encouraging new ideas and making sure American workers aren’t left behind.
Why is safety and security so important in AI, according to this order?
AI can be incredibly powerful, and with great power comes the need for great responsibility. This order emphasizes safety and security because AI systems, especially the really advanced ones, could potentially cause harm if not built carefully. It’s about making sure AI doesn’t make mistakes that hurt people, protecting sensitive information, and preventing AI from being used for bad things.
How does this order try to help businesses and new AI ideas grow?
While focusing on safety, the order also wants to make sure the U.S. stays ahead in AI. It encourages new ideas and competition by looking at how government rules might be slowing things down. The goal is to find a balance: be safe and responsible, but also keep pushing the boundaries of what AI can do so American companies can lead the way.
What does the order say about jobs and workers in the age of AI?
The order recognizes that AI will change the job market. It talks about making sure workers have the skills they need for the future by supporting training and education programs. It also aims to help people who might lose their jobs because of AI, by providing support and resources to help them adapt.
How does this order compare to what other countries or past administrations have done with AI?
This order is seen as a big step for the U.S. in setting clear rules for AI. It builds on earlier ideas about ethical AI. Other countries, like those in Europe, also have their own rules. There have also been different approaches in the past, with some administrations focusing more on letting AI develop freely without many rules, while others, like this one, are trying to guide its development more closely.
What are some of the challenges in following all the rules in this order?
This order has a lot of requirements – about 150! – for different government groups. Keeping track of all these tasks, making sure they are done on time, and figuring out the best way to do them can be challenging. It’s like having a huge project with many different parts that all need to be managed carefully to make sure the whole thing is successful.
