Is the Government Finally Stepping In? (Federal AI Regulation)
By Matthew Berman
Summary
## Key takeaways
- **Trump's "One Rule Book" Tweet**: Trump tweeted that there must be only one rule book for AI if the US is to keep leading the race, warning that 50 states making rules will destroy AI in its infancy. He plans a "one rule" executive order this week, since companies can't get 50 approvals every time. [00:12], [01:05]
- **AI as Interstate Commerce**: AI models are developed in one state (e.g., California), trained in another (e.g., Texas), inferenced elsewhere, and delivered nationwide via the internet, making AI clearly interstate commerce reserved for federal regulation. This multi-state nature negates state-level jurisdiction. [02:12], [02:41]
- **Patchwork Crushes Startups**: A patchwork of 50 state regulations would force startups to build multiple model versions, hire lawyers, and incur massive costs, making it impossible for small teams to compete against giants like Google and OpenAI. This regulatory capture favors big tech and hurts US competitiveness against China. [03:37], [04:10]
- **California Car Analogy**: California's strict emission laws forced automakers to produce two versions of each car, adding cost and logistics, but eventually California's standards became national because of its market size. Unlike cars, whose pollution risks are local, AI risks are global, demanding federal oversight. [06:06], [07:49]
- **1,200+ State AI Bills**: Over 1,200 AI bills have been introduced in state legislatures, with more than 100 passed, creating a complex patchwork. For example, Colorado, California, and Illinois hold AI developers liable for "algorithmic discrimination" with a disparate impact on protected groups, Colorado's list even including English language proficiency. [05:47], [10:20]
- **Federal Preemption Addresses the Four C's**: Federal preemption would leave intact state child-safety laws and local control over data center siting, while copyright remains federal; Sacks argues preemption also counters blue-state censorship. The fifth C, competitiveness, is about winning the AI race rather than repeating Europe's regulatory stagnation. [13:02], [14:35]
Topics Covered
- Federal AI Regulation Prevents State Patchwork
- State Regulations Crush AI Startups
- California Cars Prefigure AI Failures
- AI Risks Span Globe, Demand Federal Control
- Preempt States to Beat China
Full Transcript
Trump just outlined his plan for AI regulation, and it actually makes a lot of sense. I'm excited. Let me tell you about it. I'll break it all down. Now,
first, let me read his tweet. There must
be only one rule book. If we are going to continue to lead in AI, we are beating all countries at this point in the race, but that won't last long if we're going to have 50 states. Many of
them bad actors involved in rules and the approval process. That is the crux of what Trump and his administration are proposing, that AI regulation needs to be done at the federal level. And if
you've watched this channel at all, I agree. We do not need a patchwork of 50 different states coming up with their own regulation, often extremely partisan regulation trying to compete with each
other on that regulation. And thus,
startups get hurt. And I'll explain all of this in a moment. Let's continue.
There can be no doubt about this. AI
will be destroyed in its infancy. I
don't know if I should be yelling when I'm reading these all caps. I will be doing a one rule executive order this week. You can't expect a company to get 50 approvals every time they want to do something. That will never work. Now,
that was the tweet. And David Sacks, the AI and crypto czar, the guy who works for Trump and advises him on AI and crypto policy, basically followed up and gave his legal and logical argument for why AI needs to be regulated at the federal level. So let's read this and I'm going to explain what he means. First, this is not an AI amnesty or AI moratorium. It is an attempt to settle a question of jurisdiction. What he's
saying is that they are not saying, "Hey, AI doesn't need regulation." They
are saying that the federal government should be regulating AI, not each individual state. And I already know California has passed their own AI regulation. I know they've tried to pass even more. I know other states are thinking about or have already passed some AI regulation. So, we're already heading in that direction. And this is their attempt to say, "No, we're going
to handle it." Now, here is his legal argument for why AI regulation needs to be done at the federal level. When an AI model is developed in state A, let's say California, because usually it is; trained in state B, let's say Texas, because there are major data centers in Texas; inferenced in state C, which might also be Texas or might be another state, that's where they're actually running inference on the GPUs and serving you your answers; and delivered over the internet, which is everywhere, through national telecommunications infrastructure, that is clearly interstate commerce. So that is legal argument number one, and it would be difficult for me to imagine any counterarguments to that argument. It is
true artificial intelligence is already multi-state and it is being delivered through the internet which is in every single state and international and exactly the type of economic activity
that the framers of the constitution intended to reserve for the federal government to regulate. Now what if we didn't have federal level AI regulation?
What if we left it up to each individual state? Well, we would have a complex patchwork of AI regulation, legal regulation that really makes it almost
impossible for companies that are not incredibly well-funded, the top companies in the world, Google, Meta, OpenAI, Anthropic, companies that are not those major companies, startups, the lifeblood of the economy of the United States, to actually compete. It would make it impossible. Imagine as a startup, as a five-person startup trying to build the next incredible AI model, the next incredible AI application, that you had
to build multiple versions of the model just to satisfy regulation between California and Florida and Texas and New York and all of these. It costs extra
money to train. It costs legal investment for them to lawyer everything, and all of this makes it quite prohibitive to actually start a new business. This is not good. It will not allow us to compete with China. Think
about how China operates. There's
effectively one man who controls all of it. He just says the word and the entire
it. He just says the word and the entire economy shifts and changes into exactly what he says. Now, we definitely don't want that here, but if we went the
opposite route and allowed each state to choose their AI regulation, it would not make sense. And one of those AI startups
make sense. And one of those AI startups that is growing really quickly that I want to tell you about is the sponsor of today's video, Lindy. AI agents are no longer a developer's bestkept secret.
They are going mainstream. Wired just
published an article exploring companies that are almost entirely run by AI agents, examining what happens when not only the individual contributors, but the executives are essentially just AI.
And the company leading this transformation is Lindy. I've covered a lot of agent platforms on this channel, but what sets Lindy apart is just how easy they make it to build AI agents.
You don't need to be a developer. You
don't even need to understand complex workflows. You just tell Lindy what you want to build. Three examples: create a
social media spam monitor. Create a QA engineer agent for my app. And help me manage my personal CRM. From sales to support to operations, Lindy's agents
work 24/7 for you. And they're dead simple to set up so you can focus on growing your business, not on the grunt work. So check out Lindy. Thanks to
Lindy for sponsoring this video. I'll
drop all the links down in the description below. Now, back to the video. Now he finishes this argument with: over 1,200 bills have been introduced in state legislatures and over 100 measures have already passed.
So we're, again, as I mentioned earlier, well on our way to having this crazy complex patchwork of AI regulation between states. Now let me give a very specific example: the automobile industry. California has very strict laws on pollution from cars that were very different from other states. And
there's a reason for it. I grew up in Los Angeles and I remember in the '90s you couldn't see even a couple miles out because there was so much pollution.
There was so much smog. It was really bad back then. And then California passed laws that increased the threshold of what was required to get a smog check
passed and to have these vehicles be more efficient and capture more of that pollution. And then one day the skies cleared up. It was really very sudden, and it did work. But you're probably saying, Matt, aren't you making the argument against federal regulation?
Aren't you making the argument for states' rights? And that is the counterargument: states' rights. And in
that example with automobiles, it actually made more sense. But the flip side of the California example is that automobile companies had to make two versions of their cars. They had to make
one for California and one for the rest of the states. That caused them to have additional cost, additional logistics.
And so with cars specifically, a car in California is emitting pollution in California. So it was a very California-specific problem. Obviously, cars are in every state, but every state got to choose how much they cared or optimized for some other metric based on their pollution threshold levels. But in
California, we really cared. And so,
since everything about that car is based in California, California seems like it should have the right to regulate. But
here's the thing with AI. It is by nature, by definition, not only interstate, but international. And so,
as I mentioned before, AI is created here. It's trained somewhere else. It's
inferenced somewhere else. It's
delivered throughout the entire world.
So, just by that nature, it seems like it negates the right of a state to regulate it since it's everywhere already. And so, what ended up happening: as an automaker, you had a car that was built for California, then you had a car that was built for the other states.
Eventually, the automakers figured out, well, our cars that we sell to California, we actually sell the majority of cars to California because the population is so large. And thus,
maybe we'll just have a California car and we'll sell that California car to the rest of the country. California
asserted its opinion on what that threshold should be. And so, for cars, California basically decided for the rest of the country what their emission standards should be. Now, here's a
little difference between the car example and AI. With cars, there is a very clear metric to measure risk, and that is the emissions. Some states don't
care as much about higher emissions.
They care more about the cost to produce it because when you have additional regulation, you have higher cost.
California cared a lot and they didn't mind that it costs more. But with
artificial intelligence, the risk involved is actually much more nebulous.
It's not clear exactly what the risks are. We know some of them, but we certainly don't know all of them. And of course, we don't know what we don't know. And also, the profile of the risk is quite different. With automobiles, the risk is very specific to the location that the car is in. If a car is emitting pollution in California, it is a risk to California. With AI, it is not that way. If somebody's using AI in California or it gets created in California, it can be a risk to the entire world. But again, bringing it back to having this patchwork of 50 different states determining what the regulation should be. And of course, we have blue states and red states and
purple states, and they're competing on regulation. What's important to them? Is it censorship? Is it bias? Is it discrimination? Obviously, you probably know which states care about which issues. So, now back to David Sacks's post. So, he gives a few examples. Of
course, keep in mind David Sacks is a Republican. He leans quite right. He tends to be pretty consistent about his views, but it is very right-leaning. So
he says, for example, states like Colorado, California, and Illinois have made AI developers liable if their models contribute to algorithmic discrimination, which is defined as
having a disparate impact on a protected group. Colorado's list of protected groups even includes English language proficiency. So presumably, it's against the law for an AI model to criticize illegal aliens. So obviously what he's saying in a very partisan way is that blue states care about those issues whereas red states might not. And if all of these states are fighting to get
their issues front and center in AI regulation, that's going to be a big problem. He gives the example of: this is how we ended up with black George Washington. I don't know if that's actually how we ended up with black George Washington. And he's referencing Google about 18 months ago. They had an AI model that was asked, "What does George Washington look like?" and it produced a black George Washington. Then he goes on
to say only a federal framework can achieve this goal. And of course here he makes a dig on Europe: at best, we will end up with 50 different AI models for 50 different states, a regulatory morass worse than Europe. And what is he referencing? Well, Europe is basically in stagnation because they have regulated everything. Obviously, there are a lot of arguments for the way they're doing things being good, but my perception is that it is very difficult to get stuff done in Europe right now. And
I think specifically one thing that I'm reminded of daily is the fact that I have to accept cookies on every website that I go to. That is GDPR. That came
from Europe. That came from overregulation.
It is absurd. It does nothing. And every
day my web browsing experience is negatively affected by the fact that I have this popup and I have to click it.
And it doesn't sound like a big deal, but imagine how many different times you've had to do that. And then multiply that by the billions of internet users every single day. Imagine how much time
is wasted clicking on that just because Europe wanted to regulate the whole world. And most importantly, if that happens, China will race ahead. And also, as I said before, startups can't compete. This is regulatory capture, and that can happen both at the state level, where multiple states are competing to get their regulation passed, and at the federal level. If there's too much regulation at the federal level, we will also have regulatory capture, in which the major companies, the major tech companies like OpenAI and Google, are able to basically just capture the
market because startups can't compete.
They don't have the resources, both human capital and capital capital. But
what about the four C's? He says, let me address those concerns. Child safety: a very serious issue that of course we need to keep a close eye on. Preemption would not apply to generally applicable state laws. So, state laws requiring online platforms to protect children from online predators or sexually explicit material would remain in effect. Okay, good. Communities: AI preemption would not apply to local infrastructure. That's a separate issue. In short, preemption would not force communities to host data centers they don't want. He's specifically talking about how there's been a lot of pushback lately, a lot of NIMBYism towards data centers. Of course, a lot of politicians on the right have been saying, "Hey, anytime you set up a data center in my state, the electricity costs skyrocket for my constituents, and I don't want that." Next, something that's obviously very important to me: creators.
Copyright law is already federal. So
there is no need for preemption here.
Questions about how copyright law should be applied to AI are already playing out in the courts. That's where this issue will be decided. Next, censorship. As
mentioned, the biggest threat of censorship is coming from certain blue states. Of course, remember he's very right-leaning. Red states can't stop this. Only President Trump's leadership at the federal level can. Now, put
politics aside. I'm not sharing my opinion on this statement, but it is obviously extremely partisan. Censorship
is obviously not good. But he's saying only one side is doing the censorship, which is unlikely to be true. And then
he finishes with there is actually a fifth C, and that's competitiveness. If
we want America to win the AI race, a confusing patchwork of regulation will not work. So Trump is looking to pass an executive order this week making it so that AI regulation is done at the federal level. So, what do you think about this? Let me know in the comments. If you enjoyed this video, please consider giving it a like and subscribing.