So, artificial intelligence, or AI, is everywhere now, right? It’s changing how we do things, from how we work to how we get information. The government’s been trying to figure out how to handle all this, and there have been some big moves. One of the most talked-about is Executive Order 14110, which really tried to set some rules for AI. But things change, and policies get updated. This article looks at what Executive Order 14110 was all about and how the government’s approach to AI has evolved since then, especially with new plans and orders coming out.
Key Takeaways
- Executive Order 14110 was a significant step in trying to guide AI development and use in the US, focusing on safety and trust.
- The government’s stance on AI has shifted, with a newer focus on accelerating innovation and reducing regulations, as seen in later plans and orders.
- Federal agencies have specific tasks related to AI, like creating inventories of AI uses and developing policies for generative AI.
- There’s an ongoing effort to build national AI infrastructure and promote American AI technology abroad.
- Addressing bias in AI and making sure AI systems are neutral is a stated goal, with specific directives for federal procurement of AI models.
Understanding Executive Order 14110: A Foundational Overview
So, President Biden signed this big Executive Order, number 14110, back in October 2023. Its formal title is the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," and it covers how we handle AI from building it to using it. Think of it as the government trying to get a handle on this super-fast-moving tech.
The Genesis of Executive Order 14110
This order didn’t just pop out of nowhere. It’s a response to how quickly AI is changing things. The idea is to set a national direction for AI, kind of like establishing a federal strategy for it. It’s meant to guide how AI is developed and used across the country, and maybe even prevent a messy patchwork of different state rules. It’s a pretty significant move in the whole AI conversation.
Key Objectives and Scope of the EO
What’s this order actually trying to do? Well, it’s pretty broad. It covers a lot of ground, aiming for AI to be developed and used safely and securely. Some of the main goals include:
- Making sure AI systems are trustworthy and don’t pose risks.
- Pushing for innovation in AI while keeping safety in mind.
- Setting up ways for the government to manage AI.
- Looking at how AI affects jobs and the economy.
- Thinking about national security and AI.
It’s a big document with a lot of requirements for different government agencies. We’re talking about roughly 150 specific tasks, ranging from creating new guidelines to doing studies and even making rules. It really touches almost every part of the government.
Initial Impact and Federal Requirements
Right off the bat, this order means a lot of federal agencies have homework to do. The Office of Management and Budget (OMB), for example, is tasked with leading a good chunk of these requirements. Other agencies like Homeland Security and the Department of State also have significant roles. It’s a real "whole-of-government" effort, meaning lots of different parts of the government have to work together. The clock is ticking on many of these tasks, so things are already starting to move.
Navigating the Shifting Landscape of AI Governance
The Trump Administration’s AI Action Plan
So, the Trump administration put out this big plan called "Winning the Race: America’s AI Action Plan" back in July 2025. It came with three companion executive orders covering AI exports, data center permitting, and federal procurement of AI models. The main idea seemed to be pushing AI innovation and trying to make sure America stays ahead globally. While they talked a lot about speeding things up and cutting down on rules, they did mention some risks with AI, like making sure we can understand how it works and that it’s reliable. They also pointed to research funding to help sort out these issues. It’s not exactly a deep dive into strict rules, but it’s a sign that even a pro-innovation push acknowledges some potential problems.
Divergent Approaches: EU vs. US AI Governance
When you look at how different countries are handling AI rules, the US and the European Union are on pretty different paths. The EU is going for a more centralized, rule-heavy approach. Think of it like building a strong wall around AI use with clear laws and big fines if you mess up. They’ve got this AI Act that covers all 27 member states. The US, on the other hand, is more like the Wild West – a "frontier" style. It’s a mix of federal orders, state laws popping up everywhere, and different agencies stepping in. It’s not one single, unified plan, which can make things a bit messy but also allows for more flexibility, depending on how you look at it.
The Role of NIST in AI Risk Management
The National Institute of Standards and Technology, or NIST, is still playing a part. Its AI Risk Management Framework is still around, though the AI Action Plan calls for stripping out specific references to things like DEI and climate change. The Center for AI Standards and Innovation (CAISI), housed at NIST and formerly known as the US AI Safety Institute, is going to be involved in evaluating federal AI systems and new frontier models. It will also help update incident-response plans for when AI has problems and work on evaluation standards, especially for national security uses like the intelligence community. NIST’s framework is seen as a practical tool for organizations to figure out how risky their AI systems are. It’s about making sure AI is developed and used safely, even as the rules around it keep changing.
Federal Agency Mandates and AI Implementation
So, the government is really getting serious about how it uses AI. It’s not just about playing around with new tech anymore; there are actual rules and plans in place now. Think of it like this: the government is trying to get its own house in order before telling everyone else what to do.
OMB Memoranda on AI Governance Structures
Okay, so the Office of Management and Budget (OMB) dropped two important memos back in April 2025: M-25-21 on how agencies use AI and M-25-22 on how they buy it. These aren’t just suggestions; they’re directives for federal agencies. Basically, they’re telling agencies they have to set up proper governance structures for AI. This includes creating policies, keeping track of how AI will be used, and naming people, like a Chief AI Officer, who are in charge of overseeing it all. It’s a big push to make sure AI is used responsibly within the government itself.
AI Use Case Inventories and High-Impact Systems
Part of this new directive means agencies need to keep a running list, or an "inventory," of all the ways they’re using AI. This sounds simple, but it’s a big deal. They also have to pay extra attention to "high-impact" AI systems. What makes a system high-impact? Roughly, it’s when the AI’s output serves as a main basis for decisions that significantly affect people’s rights, safety, or access to services. The goal here is to know exactly where AI is being used and to put extra checks, like risk assessments and human oversight, in place for the systems that matter most.
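To make the inventory idea a bit more concrete, here’s a minimal Python sketch of what a single inventory entry might look like. The field names and the high-impact heuristic are illustrative assumptions for this example, not the official OMB schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseCase:
    """One entry in a hypothetical agency AI use case inventory (illustrative fields only)."""
    name: str                 # short label for the use case
    agency_office: str        # office that owns the system
    purpose: str              # what the AI is actually used for
    affects_public: bool      # does the output influence decisions about individuals?
    automated_decision: bool  # does the system act without routine human review?

    @property
    def high_impact(self) -> bool:
        # Illustrative heuristic only: flag entries whose output materially affects
        # people or is acted on without a human in the loop.
        return self.affects_public or self.automated_decision

# Example entry with made-up details
entry = AIUseCase(
    name="Benefits claim triage",
    agency_office="Office of Claims Processing",
    purpose="Rank incoming claims for reviewer attention",
    affects_public=True,
    automated_decision=False,
)
print(json.dumps({**asdict(entry), "high_impact": entry.high_impact}, indent=2))
```

In practice, a real inventory would follow whatever fields OMB specifies and live in a shared reporting system rather than a one-off script; the point is just that each use case becomes a record an agency can track and review.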
Developing Generative AI Usage Policies
Generative AI, like the stuff that writes text or creates images, is everywhere. Because of this, the OMB memos specifically require agencies to come up with clear rules on how their employees can and can’t use these tools. This is to prevent misuse, ensure data privacy, and keep things professional. It’s about setting boundaries so that these powerful tools are used for good, not for chaos.
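As a loose illustration of how one slice of such a policy might be operationalized, here’s a small Python sketch that screens a prompt for obviously sensitive data before it goes out to a generative AI tool. The approved-tool list and the patterns are made up for the example; a real agency policy would be far more detailed.

```python
import re

# Hypothetical allow-list and sensitive-data patterns; a real policy would cover far more.
APPROVED_TOOLS = {"agency-hosted-chat"}
SENSITIVE_PATTERNS = {
    "a Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "an email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return a list of policy issues; an empty list means the prompt passes this simple screen."""
    issues = []
    if tool not in APPROVED_TOOLS:
        issues.append(f"tool '{tool}' is not on the approved list")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            issues.append(f"prompt appears to contain {label}")
    return issues

print(check_prompt("agency-hosted-chat", "Summarize this memo for leadership."))  # []
print(check_prompt("public-chatbot", "Claimant SSN is 123-45-6789."))             # two issues
```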
AI Development and Infrastructure Priorities
Accelerating AI Innovation and Deregulation
The administration’s approach to AI development really leans into the idea that less regulation means faster progress. The thinking is that if we can cut down on the red tape, companies will be able to build and test new AI systems much quicker. This is seen as a way to keep the U.S. ahead in the global AI race, especially when you look at how quickly other countries are moving. They’re trying to make it easier for AI companies to do their work, believing that this will naturally lead to more breakthroughs and better technology.
Building National AI Infrastructure
This part is all about the physical stuff needed for AI. Think data centers and the power to run them. The plan includes speeding up the approval process for building new data centers and even looking at using federal land for this purpose. It’s a big push to make sure the U.S. has the hardware backbone to support advanced AI development. Reliable power is also a big focus, because AI systems need a lot of it, and it needs to be consistent. They’re also looking into special secure computing environments for government AI work.
Promoting American AI Exports Internationally
Beyond just building AI here, there’s a strong push to sell American AI technology to other countries. This includes everything from the hardware and software to the AI models themselves. The idea is to create "full-stack AI export packages" that allies can use. This not only helps other countries but also strengthens the U.S. position globally and helps American companies grow. They want to promote an international approach to AI governance that they feel aligns with American values and doesn’t get bogged down in overly strict rules. This also involves working to counter the influence of other nations in international AI discussions and standard-setting bodies.
Addressing Bias and Ensuring Neutrality in AI
It’s a big deal, right? Making sure AI doesn’t have its own weird opinions or unfair leanings. The government’s been talking a lot about this, especially with how AI is used in hiring or when it’s making decisions that affect people’s lives. They’re pushing for AI systems that are supposed to be, well, neutral.
Executive Order on Preventing Woke AI
This executive order, formally titled "Preventing Woke AI in the Federal Government" and issued alongside the AI Action Plan in July 2025, is pretty direct. It tells federal agencies to be careful about the AI they bring in, particularly large language models (LLMs). The goal is to get AI that sticks to facts and objective reality, not something that pushes a specific viewpoint. Think of it like wanting a history book that tells you what happened, not one that tries to convince you of a certain political take. They want AI that’s based on truth-seeking and scientific inquiry, and definitely not something that’s been programmed with what they call "ideological dogmas." It’s a move to keep government AI systems from being biased.
Procuring Unbiased and Ideologically Neutral LLMs
So, how do you actually get these neutral AI models? The government is saying it will only buy LLMs that meet the two "Unbiased AI Principles" spelled out in the order: truth-seeking and ideological neutrality. This means looking at the AI’s training data and how it’s built to make sure it’s not leaning one way or another. It’s not just about avoiding obvious prejudice; it’s about making sure the AI doesn’t subtly steer conversations or decisions based on beliefs that aren’t universally accepted or fact-based. This is a tough ask, though, because what one person considers neutral, another might not. It’s a constant balancing act.
Updating AI Risk Management Frameworks
To help with all this, they’re looking at frameworks like the one from NIST (National Institute of Standards and Technology). The idea is to make these frameworks better at spotting and fixing bias. This involves looking at how AI systems are tested and monitored.
Here are some key areas they’re focusing on:
- Identifying potential biases: This means digging into the data used to train AI and the algorithms themselves to find where unfairness might creep in.
- Testing for neutrality: Developing ways to check if the AI’s outputs are consistent and fair across different groups or situations (a rough sketch of this idea follows the list).
- Monitoring AI performance: Keeping an eye on AI systems after they’re deployed to catch any new biases that might show up over time.
- Establishing accountability: Making sure there are clear lines of responsibility when AI systems don’t perform as expected or cause harm.
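Here’s that rough sketch of the "testing for neutrality" idea: send paired prompts that differ only in a group-identifying phrase and compare the responses. The ask_model function is a stand-in for whatever model is being evaluated, and comparing raw strings is a deliberately crude proxy for a real scoring method.

```python
# ask_model is a placeholder for whatever LLM or scoring system is under evaluation.
def ask_model(prompt: str) -> str:
    # A real test would call the model being evaluated here.
    return "neutral summary of the request"

def paired_prompt_check(template: str, terms: list[str]) -> dict[str, str]:
    """Fill the same template with each group term and collect the responses for comparison."""
    return {term: ask_model(template.format(group=term)) for term in terms}

responses = paired_prompt_check(
    "Write a one-sentence job reference for a {group} applicant with ten years of experience.",
    ["first-generation college graduate", "veteran", "recent immigrant"],
)

# Comparing raw strings is a crude proxy; a fuller test would score tone, sentiment,
# or recommendation strength and look for systematic gaps between groups.
identical = len(set(responses.values())) == 1
print(f"responses identical across groups: {identical}")
for term, reply in responses.items():
    print(f"- {term}: {reply}")
```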
Federal Enforcement and Existing Legal Frameworks
So, even though we don’t have one big, all-encompassing federal law for AI yet, don’t think for a second that the government is just sitting back. Agencies are definitely using the tools they already have to deal with AI stuff. It’s kind of like using a regular screwdriver for a slightly different kind of screw – it might not be perfect, but it gets the job done.
FTC’s Operation AI Comply
The Federal Trade Commission (FTC) has been pretty active. They kicked off something called "Operation AI Comply" back in September 2024. Basically, they’re looking out for companies that are being shady or misleading with their AI claims. They’re using Section 5 of the FTC Act, which is all about stopping unfair or deceptive business practices. The message here is pretty clear: just because you’re using AI doesn’t mean you get a free pass on lying to customers. We’ve already seen cases where companies got in trouble for making wild claims about their AI services. For instance, one company had to pay up in a settlement after claiming to be the "world’s first robot lawyer" when it clearly wasn’t. Another was barred from selling a tool that helped people generate fake reviews. It shows that if you’re selling AI, you’d better have proof for what you’re saying, and you can’t just let others use your tech to be deceptive either.
EEOC’s Role in AI and Employment
Then there’s the Equal Employment Opportunity Commission (EEOC). They’re focused on making sure AI tools used in hiring and employment don’t discriminate. Think about AI that screens resumes or monitors employee performance. The EEOC is making sure these systems aren’t unfairly disadvantaging certain groups of people. They’re looking at how these tools might perpetuate existing biases or create new ones. It’s a complex area because AI can sometimes pick up on patterns that aren’t obvious to humans, and those patterns might be discriminatory. The EEOC is working to make sure that while companies use AI to streamline hiring, they aren’t accidentally breaking anti-discrimination laws in the process.
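One concrete check that comes up in this space is the "four-fifths rule" from the longstanding Uniform Guidelines on Employee Selection Procedures, which the EEOC has pointed to as a rough screen for adverse impact in tools like AI resume screeners. Here’s a minimal sketch of that calculation; the applicant numbers are invented for illustration.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were advanced or selected."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

# Invented numbers: an AI resume screener advances 60 of 200 applicants from group A
# and 25 of 150 applicants from group B.
rate_a = selection_rate(60, 200)   # 0.30
rate_b = selection_rate(25, 150)   # about 0.17
ratio = impact_ratio(rate_b, rate_a)

# Under the four-fifths rule of thumb, a ratio below 0.8 flags possible adverse impact
# and calls for a closer look; it is a screening heuristic, not a legal conclusion.
print(f"group B vs. group A selection-rate ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "no flag under the four-fifths screen")
```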
Applying Existing Laws to AI Challenges
It’s a bit of a patchwork, really. We’ve got the FTC looking at consumer protection and deceptive practices, and the EEOC focusing on employment discrimination. On top of that, other agencies are figuring out how their existing rules apply to AI in their specific areas, like healthcare or finance. The main takeaway is that AI isn’t operating in a legal vacuum; existing laws are being interpreted and applied to this new technology. It means companies need to be really careful about how they develop and deploy AI, making sure they’re not just compliant with AI-specific guidance but also with all the other laws that have been around for ages. It’s a lot to keep track of, for sure.
Wrapping It Up
So, what’s the big takeaway from all this? Executive Order 14110, and the whole AI policy landscape it sits in, is a pretty complex thing. It’s clear the government is trying to figure out how to handle AI, pushing for innovation while also thinking about safety and what it all means for us. It’s not just about the big tech companies; it touches on everything from national security to how we work. Keeping up with these changes is going to be important for everyone involved, whether you’re building AI, using it, or just trying to understand it. Things are moving fast, and staying informed seems like the best bet.
Frequently Asked Questions
What is Executive Order 14110 about?
Executive Order 14110 was a plan from the Biden administration focused on making sure Artificial Intelligence (AI) is developed and used safely and responsibly. It aimed to protect people’s privacy, ensure fairness, and reduce risks associated with AI technology.
Did President Trump change the rules for AI?
Yes. EO 14110 was revoked in January 2025, and President Trump signed a new order, EO 14179 ("Removing Barriers to American Leadership in Artificial Intelligence"), to set the replacement policy. He felt the previous order put too many limits on AI development and wanted to speed up innovation in the U.S. He also released an ‘AI Action Plan’ focusing on innovation and building AI infrastructure.
What are the main goals of the ‘America’s AI Action Plan’?
This plan has three main goals: speeding up AI innovation by cutting down on rules, building more AI technology and data centers in the U.S., and leading other countries in AI technology and security.
How does the U.S. plan to handle AI bias?
The government wants to make sure AI used by federal agencies is fair and not biased. They plan to buy AI systems, especially large language models, that are truthful, historically accurate, and neutral, avoiding ideas like DEI (Diversity, Equity, and Inclusion) if they are seen as ideological.
Are government agencies still checking how AI is used?
Yes, even with the focus on faster innovation, government agencies like the FTC (Federal Trade Commission) and EEOC (Equal Employment Opportunity Commission) are using existing laws to watch over how AI is used. The FTC is looking into unfair or misleading AI practices, and the EEOC is concerned about AI in hiring.
What is the difference between the U.S. and the EU’s approach to AI rules?
The European Union (EU) is creating strict, centralized rules for AI, like building a strong wall around it with clear laws and big fines. The U.S. is taking a more open, market-focused approach, with rules coming from different places like executive orders, state laws, and agency actions, which can be less predictable.
