The real threat posed by AI: an interview with Alex Stamos
By Frankly Fukuyama
Summary
Key takeaways
- **No existential AI threat**: I'm not an existential-threat guy or AI doomer. LLMs lack true world understanding, such as gravity or physical interactions, making Terminator scenarios unlikely without human design. [01:33], [02:45]
- **AI lacks desires**: AI is just a box of numbers with no wants, instincts, or intentionality; it doesn't seek to reproduce, kill, or make more AIs unless prompted by humans. [05:04], [05:51]
- **Attackers automate the kill chain**: Over the last six months, attackers have used AI across the kill chain for reconnaissance, exploits, and control, trading humans for faster, parallel computer operations that outpace defenders. [08:30], [10:11]
- **Defenders face automation risks**: Defenders must automate responses but face constraints from bosses, auditors, and rules against risky actions like shutting down systems, unlike reckless ransomware actors who automate freely. [13:16], [14:21]
- **China leads in AI hacking tools**: Chinese groups have built open-source tools using the DeepSeek LLM, trained on Kali Linux, to automate hacking, nearly as advanced as the attempts Anthropic detected, while evading logs on closed models. [23:14], [24:25]
- **North Korean AI job scams**: North Koreans use face and voice replacement in remote interviews, posing as Americans via mules, to get jobs, control shipped laptops remotely, and funnel billions via crypto. [33:33], [34:14]
Topics Covered
- LLMs Lack Physical World Understanding
- AI Has No Intrinsic Desires
- AI Automates Attacker Kill Chains
- China Leads AI Hacking Tools
- AI Democratizes Elite Exploits
Full Transcript
Very happy to be speaking to Alex Stamos on the next episode of Frankly Fukuyama. If you do like this series, please like and subscribe to it. So, Alex was the head of security at Facebook back in 2016, when all the Russian hacking happened. He's been with us here at Stanford at our Cyber Policy Center for a number of years. He teaches a really popular course called Hack Lab, where he teaches white-hat hackers how to get into computers, among other things.
>> Yes. Yes.
>> But what I really wanted to talk about today was the whole question of AI and how it's going to affect computer security. It seems to me that if you listen to people like Geoffrey Hinton, he says that there are these existential threats out there that really may affect the future of humanity as a whole. Elon Musk too; there are a lot of people that are into this. And there's also another view that says the threats are shorter term: that it's basically not computers doing bad things to people, but people doing bad things to other people using computers. Since you are really one of the foremost experts on computer security, how do you think about the impact that AI is going to have on our well-being?
>> Yeah, I'm much more of a short-term, worried-about-humans person. I'm not an existential-threat guy; "AI doomer" is the shorthand. I'm not a doomer. One, I think over the last year or so, people have become a lot more skeptical of the idea of AGI, of artificial general intelligence. A number of experts have put the idea of AGI post-2030 or even further out. A number of people have made the argument that LLMs, the current set of technologies we're pursuing, will never get us to AGI; that we're going to need completely different types of models to get us there; that LLMs don't have a true understanding of the world in the way that human beings do, and will never have an understanding of the world that rivals human beings'.
>> And that's a good thing if you're worried about AI taking over the world. Having a real 3D model of how things interact would be necessary to have that Terminator future.
>> The LLM knows that an apple fell on Newton's head, but it doesn't actually understand gravity.
>> That's right. Yeah. The example I often use is the goldendoodle our family has: she can catch a Frisbee. She doesn't bring it back, because she's mostly poodle, so she's not a full golden retriever, but she can catch a Frisbee. She can't write poetry, right? LLMs can write poetry, but it's pretty much impossible to train an LLM, if you gave it control of a robot, to go get your Frisbee.
>> Yeah.
>> Language is the hardest thing human beings do. It's one of the last things we evolved, one of the highest functions of our brains. And with LLMs we jumped to kind of the hardest thing humans do, and then we've backed into it doing these simple things. So you have all these experiments where folks give control of robots to LLMs, and then they're really bad at things like navigating simple spaces. What folks are realizing is that LLMs just don't understand the world; they understand the world through the written word.
>> It's like a novelist's view of the world. [laughter]
>> But they don't have an actual physical understanding. So people are talking about physical models and physics-based models and vision models and other things. A dog and other animals have a much more innate sense of space and of the 3D world; it's much simpler, a kind of base-brain view that doesn't require that kind of high-level function. And there's been a bunch of research into simpler models that actually beat the more complicated models at things like solving Sudoku, at practical problems, though they can't write poetry. It's a whole set of research that is really effective, and perhaps gets us to AGI. Perhaps what you end up doing is, like the human brain, you end up with a bunch of different models put together, if you want to build an AI system that, like a human, has the capability to interact in the 3D world and write poetry and have instincts and all these things. But the other thing about AI is, it doesn't want anything, right?
>> An LLM is just a bunch of tensors. It's a box of numbers.
>> Mhm.
>> The most complicated AI model just sits there, right, until you do something.
>> Mhm.
>> Now, a human being can prompt it to do something terrible, can give it a system prompt, can give it a goal to go do something, and then can take the output and plug the output into a system that has the ability to do things. And so, yes, and we'll talk about, I'm sure, the bad things you can do with AI. But on its own, AI has no wants. It has no instincts. It has not evolved, like we have, to have basic mammalian desires, to have...
>> Intentionality.
>> Intentionality. It doesn't want to reproduce. It doesn't want to make more of itself. It doesn't want to kill or eat, or make more AIs. And so that's the other thing that I think is missing here: if AI was going to do something bad,
>> it's because somebody
>> made it, right,
>> to want to do it. And so even if, in the end, we ended up with a Terminator situation, it's because somebody decided to create AI and then to give it the desire to do something terrible.
>> Mhm. So it always requires a human in the loop to actually give it that...
>> At some point a human had to decide: I am going to, one, train an AI with these capabilities, and then set it on this path, give it the start of "this is what you're going to do," and then plug it into a bunch of capabilities that give it the ability to do bad things.
>> I think another thing I've heard you say is that LLMs can't be creative, in the sense of thinking of things that are genuinely new, that they've never processed previously.
>> That's right. LLMs are not creative; they simulate creativity. When I talk about it, and you heard me talking to our students here at Stanford about the use of LLMs in a security context, the warning I give security people is this: LLMs are being used for all kinds of defensive purposes, and that's really good. There are a lot of useful purposes LLMs can serve in cybersecurity. But you have to be careful, in that they're not going to foresee new attacks.
>> They can emulate, and they can be trained on "we have done all this defensive stuff in the past; here is a set of attacks that have been seen." But what they're not going to do is think: well, what might a human do in the future? What creative things could happen? That's never going to happen. We actually have some interesting examples from the trust and safety world, in that
>> companies that have over-pivoted into using AI to do trust and safety work end up being outsmarted by human bad guys, who just change their approach a little bit and then get around the AI, because the AI has been trained on everything that's happened before. You can end up with very quick, very broad, very cheap protections that operate at scale but that are also very brittle, because it only takes a little bit of creativity to get around them.
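The brittleness Stamos describes can be shown with a deliberately toy example (my illustration, not any company's real system): a filter that only "knows" previously seen abusive phrases is defeated by a trivial change the filter has never seen.

```python
# Toy illustration of a pattern-matching trust-and-safety filter.
# It was "trained" only on attacks seen in the past, so a tiny
# creative change by the attacker slips right past it.
BLOCKLIST = {"buy cheap meds", "crypto giveaway"}  # past attack phrases

def flag(message: str) -> bool:
    """Flag a message only if it contains a previously seen phrase."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKLIST)

print(flag("Buy cheap meds now!"))   # True  -- matches what it has seen
print(flag("Buy ch3ap m3ds now!"))   # False -- trivial leetspeak evades it
```

Real ML classifiers generalize better than a blocklist, but the failure mode is the same in kind: protection that is broad and cheap at scale, yet bypassable with a little creativity it was never trained on.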
>> Okay. Well, that's [clears throat] reassuring in many ways.
>> Yes.
>> But as a security expert, you are worried about what's going to happen in the near term, with bad people using these technologies for bad purposes.
>> Yeah, we're right at the start of bad guys using AI to make their attacks more effective. Just over the last six months, we've seen a real increase in the use of AI for offensive purposes. This started with some experimentation around the use of AI to write code; we saw it in some parts of the kill chain. In cyber we have this idea of the kill chain, which we stole from the military. When the military says "kill chain," they mean to actually kill things. In the cyber world, when we say "kill chain," it's the steps you take to break into a computer network and then to have an effect on a target. That might be to steal data, it might be to shut a computer down, it might be to implant a back door so you can come back later. The steps in the kill chain are a little bit different depending on exactly what you're doing, but often: you need to do reconnaissance, to map out a network. You need to find a way in. You need to build an exploit to do the attack. Then you exploit the network. You deliver a payload. That payload gets you command and control; it gets you control of a system. Then, once you get an initial payload delivered, you might explore the network; that's called east-west movement. You might bounce onto a couple of other computers. You might need to escalate your privilege: maybe you're on a computer that you don't really care about, but it gets you an initial foothold, and then you can get higher and higher privileges until you get into a really powerful computer, a computer you do care about. And so those are the steps of a kill chain. What we've seen over the last six months to a year is that attackers have systematically figured out how to use AI in more and more parts of the kill chain.
>> Mhm.
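The stages Stamos walks through can be sketched as an ordered sequence (the stage names below are a paraphrase of the interview, not a formal taxonomy):

```python
# The cyber kill chain as described in the interview: an ordered
# sequence of stages an intrusion moves through, start to finish.
KILL_CHAIN = [
    "reconnaissance",        # map out the target network
    "exploit_development",   # build the exploit for the way in
    "exploitation",          # use it to break in
    "payload_delivery",      # drop the payload on a machine
    "command_and_control",   # payload phones home; attacker gets control
    "lateral_movement",      # east-west: bounce to other computers
    "privilege_escalation",  # climb from a foothold to a powerful box
    "actions_on_objective",  # steal data, shut down, plant a back door
]

def stages_remaining(current: str) -> list[str]:
    """How much of the chain is left once a tripwire fires at `current`."""
    return KILL_CHAIN[KILL_CHAIN.index(current) + 1:]

print(stages_remaining("command_and_control"))
# ['lateral_movement', 'privilege_escalation', 'actions_on_objective']
```

The point of the ordering is what comes next in the conversation: a defender who detects stage four still has to outrace the attacker through the remaining stages.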
>> And this means, one, that as they do that, they can trade human beings for computers; that's what AI allows you to do. So they need fewer human attackers. This is useful for all kinds of attackers, both state-sponsored actors and financially motivated actors, because no actor has enough people; nobody has enough skilled attackers. So if you have a smaller number of skilled hackers, it's great to get a force multiplier by having them train computers to do parts of their job. The second thing is that it allows them to do a lot of work in parallel, so a smaller number of humans can supervise a bunch of AI agents doing their work at the same time. And as they take each part of that kill chain and automate it, they can also move very quickly. A friend of mine, Rob Joyce, who worked at the NSA, likes to say "speed kills." From a defender's perspective, if an attacker is really fast, that's a real problem for you. Because as a defender, your goal, for all those parts of the kill chain, is to set out trip wires for every part of the kill chain. You have a trip wire for the initial exploitation. You have a trip wire for the command and control: your firewall tries to listen for command and control, and you record every DNS request from your network, which catches a different kind of command and control. You look for east-west movement. You look for the payload being dropped onto your computer. You look for different kinds of malware. Those are all trip wires you have in your network. And once the attacker hits one of those trip wires, hopefully it sets off an alert and you are able to respond fast enough to stop them before they get all the way to the end of the kill chain.
>> If the attacker is able to automate all those steps and have AI run through them, then even if they trip one of the trip wires, it takes you time to respond. Let's say you still have humans in the loop as a defender. It's 3:00 a.m., somebody's phone starts beeping and wakes them up, and they go, "Oh no, I've got an alert." So they get their computer and open it up, and they do their fingerprint, and they have to two-factor, and then they get in: "Oh no. Okay, yeah, this looks real." And they have to get into another system, and from that system they have to read some details: "Oh, okay. Yeah, I have to get in here." And they go and shut down the system, and that takes 15 minutes.
>> Mhm.
>> If the entire kill chain is on AI, in those 15 minutes the whole thing could be over.
>> Yeah. But can't you delegate to an AI agent what you just described the human being doing?
>> Yes, and that's what we have to do defensively. I think this is the race that's on now: attackers are now automating all the parts of the kill chain, and so defenders have to do that too.
>> The problem for defenders is, we have bosses. We have Sarbanes-Oxley letters. We work for corporations that have to live up to rules. We have auditors. It's much more dangerous for us to do things like hook up all the parts of our network to an AI system that can just shut parts of it down at any point, whenever something feels bad, versus attackers, a lot of whom just don't care. Now, some of them do. If you're the Russian SVR, if you're the tip of the spear of the Russian Foreign Intelligence Service, those guys are very careful. They don't want to get caught.
>> Mhm.
>> They're probably not automating a lot of their kill chain. But if you're a Russian ransomware actor, those guys hack thousands of targets with the goal of getting a dozen of them to pay a ransom.
>> They don't care.
>> Mhm.
>> They'll automate everything. And so if they can go from 10 ransoms to 50 ransoms, they've got five times as much money. So sure, from their perspective, it's great. They'll automate even if the AI screws up a bunch of times; that's fantastic for them. For them, automation makes a lot of sense. And so this, I think, is the big race right now: for defenders to get automation in place that they feel comfortable enough turning on to do the defensive steps necessary. You have to have automation within constraints, because being a defender is... I'm sure you've seen The Bridge on the River Kwai, right? I have to explain this metaphor to our students. I talk about feeling old; I'm sure you feel old teaching our joint students, right? I'm at that point where you make movie references and their faces are just blank. You know, at the end, spoiler alert: he blows up the bridge.
>> Yeah.
>> He goes and he blows it up. They take this beautiful bridge that they built and they rig it with explosives. That's what it's like to be a security person. You have this beautiful infrastructure, and you have to rig it with explosives, because in the end, to stop an attack, you usually have to break the infrastructure in some way. You have to cut off a firewall. You have to drop internet transit. You have to shut down servers, shut down Kubernetes containers. I've done a lot of incident response, and sometimes I tell the CEO: it's time to shut the company down. It's time to turn off all internet access to your company. You need to send all your employees home now. That is the only way we can contain this breach: we have to shut down all internet access to your company. That has happened multiple times. I have told the CEO that, and it is a bad thing to tell the CEO, but sometimes it's the only way for them to retain any enterprise value.
>> And you're saying that you don't want an agent to do this automatically.
>> That is a hard thing to give an AI agent, the power to do all of that. I think you probably don't have to give an AI agent the ability to turn off all internet transit, but you probably have to give it the ability to turn off accounts, or at least suspend accounts in Active Directory; to turn off individual containers and individual virtual machines; to create firewall rules. And that can be really risky in a corporate environment. It's extremely risky in a production environment.
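The split Stamos draws, reversible containment actions an agent may take alone versus destructive ones that need a human, can be sketched as a simple policy gate (my illustration of the idea, not a real product's design; the action names are invented):

```python
# A sketch of scoping a defensive agent's authority: reversible
# containment actions are allowed autonomously, while destructive,
# company-wide actions require explicit human approval.
AUTO_ALLOWED = {"suspend_account", "stop_container", "add_firewall_rule"}
HUMAN_ONLY   = {"drop_internet_transit", "shutdown_production"}

def authorize(action: str, human_approved: bool = False) -> bool:
    """Decide whether the agent may perform `action` right now."""
    if action in AUTO_ALLOWED:
        return True          # fast, reversible: the agent may act alone
    if action in HUMAN_ONLY:
        return human_approved  # break-the-bridge moves need a human
    return False             # anything unrecognized is denied by default

print(authorize("suspend_account"))        # True: agent acts in seconds
print(authorize("drop_internet_transit"))  # False: wake somebody up
```

The deny-by-default branch matters: an agent facing a novel situation should fail closed rather than improvise a destructive response.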
And so this is the race we're in right now: as the attackers automate their kill chain, defenders have to automate their defensive systems.
>> So, in thinking about AI, it seems to me the real problem is agentic AI. I don't know anything about AI itself, but I do know something about hierarchies, human hierarchies. In every human organization you delegate authority to some lower level of the organization, and it's almost inevitable that at some point you delegate too much authority, because the people at the lower levels have the ability to act quickly. They can sense the environment. They have more skill and knowledge and so forth. And that's really what gets the whole organization into trouble. It seems to me almost inevitable that this is going to happen. I mean, you've just given an example of that, right? The automated defensive systems will be much faster and more effective in certain ways, but then there'll be a constant pressure to allow more of it to be automated,
>> and that's what then
>> gives away too much authority to these machines.
>> I mean, the delegation problem is a serious problem overall, in that we just don't have good ways of delegating authority to agents. This is actually a fundamental issue we've got: how do we authenticate AI agents? How do we create temporary, constrained delegation for them? We don't know. If you want to tell an agent, "You are allowed to be me for the purposes of shopping on Amazon; I want you to buy a gift for my wife, and it can cost up to $50; go buy something nice for her,"
>> there is no mechanism for you to do that.
>> Mhm.
>> It will just have your credit card, and it can go buck wild. There's just no mechanism to delegate authority within reasonable constraints, right?
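The missing primitive Stamos describes, a delegation scoped by purpose, spend cap, and expiry, is easy to state in code even though nothing like it is standard today (which is his point; the class below is invented purely for illustration):

```python
# A sketch of constrained delegation: a grant that lets an agent act
# as you only for a stated purpose, under a spend cap, until it expires.
import time
from dataclasses import dataclass

@dataclass
class Delegation:
    purpose: str          # e.g. "buy a gift on Amazon"
    max_spend_usd: float  # e.g. 50.00
    expires_at: float     # unix timestamp when the grant dies

    def permits(self, purpose: str, amount_usd: float) -> bool:
        """Check a proposed action against all three constraints."""
        return (purpose == self.purpose
                and amount_usd <= self.max_spend_usd
                and time.time() < self.expires_at)

grant = Delegation("buy a gift on Amazon", 50.0, time.time() + 3600)
print(grant.permits("buy a gift on Amazon", 42.0))   # True: in scope
print(grant.permits("buy a gift on Amazon", 420.0))  # False: over budget
```

Today's reality is the opposite: the agent holds the full credit card, so every constraint lives in the prompt rather than in an enforceable mechanism.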
>> Yeah.
>> And it's the same thing on corporate networks. There's no good way to say: you can go interact with my email for a very constrained purpose, to send one particular kind of email. It could just as easily write an email to tell your boss to go f himself, right?
>> You know: "Hey, President Levin, this is what I think about how Stanford's going."
>> Right.
>> "Love, Professor Fukuyama." The agent has the ability. It's very hard to build those constraints. We just don't have the semantics for it, we don't have the models for it, and we don't run our agents within sandboxes that would do that. A bunch of people have proposed mechanisms for it, but they're all proprietary. So this is the other thing: everybody says, "Oh yeah, I could solve that for you, if you opt in to my sandbox, and it's my corporate sandbox, and you can only run within my sandbox," yada yada. There are almost no mechanisms people are proposing that are at all open.
>> And I take it our colleague at the Stanford Law School, Mark Lemley, is working on all the liability issues, where you delegate authority that you shouldn't, and then somebody gets mad and sues you for it.
>> Oh yeah. I mean, once again, the lawyers are going to do great out of this. Not the young lawyers, because none of them are going to have jobs, but the older [laughter] lawyers like Mark, who is the wizard in my Dungeons and Dragons group. Here's just a little tip: if you're going to have a Dungeons and Dragons group, do not have any law professors in it, because every session turns into a multi-level appellate [laughter] process. But yes, Mark is working on that. And it turns out the legal issues here are huge, because these AI agents are just going to go out and do a bunch of stuff without you as the human knowing, signing contracts for you and such. I mean, we already have this DocuSign stuff.
>> I sold a company a couple of years ago. But when I sold a company over a decade ago, we had this huge paper-signing ceremony, and the CEO of the company we were selling to told me, "Son, you should go buy a nice pen, and that'll be a thing that you remember." And I did. I bought a nice Montblanc and used it to sign, and it took us like an hour, because they walked us through signing all the papers, and it was this big thing.
>> Yeah.
>> And it felt really important. And then the second time I sold a company, we did it on DocuSign.
>> Yeah.
>> It was very anticlimactic, [laughter] even though it was for a larger amount of money. It was like, "Oh, wow, that was it."
>> I've never understood how DocuSign works, because anybody can push a button.
>> Yeah, I mean, apparently people have litigated it, but it does seem very easy to... I mean, yes, you could do it through cross-site scripting, you could do it through malware. Do not call me as your expert witness to get [laughter] out of your DocuSign contract, but people are going to be getting out of those contracts for sure. But anyway, yes, with the AI stuff, people are going to be having AI sign their contracts and then saying they didn't authorize it.
>> So let's talk a little bit about the international dimensions of this.
>> Yes.
>> Give me an overview of who's really good at this stuff, who we have to worry about in terms of international actors, and who we are less worried about.
>> Well, this is fascinating, obviously. Okay. So when we talk about actors who are doing this: the report that just came out. Anthropic threw a huge grenade into this whole world, saying that they caught a group that they associated with Chinese intelligence using their systems to automate all parts of the kill chain, using Claude Code. Now, this was a huge deal. There are a bunch of kind of conspiracy theories that Anthropic is overblowing this. I will say I wish Anthropic had released more data. If you're going to do these threat intelligence reports, you really should have more IOCs out there and such.
>> What's an IOC?
>> I'm sorry, indicators of compromise: the technical details. They should have released more of the raw details. That being said, I know some of the people at Anthropic who work on this. I trust that they're telling the truth. Anthropic is somewhat controversial because they are, I think, the most ethical of the foundation-model companies, and they have called for more aggressive regulation in this space, and so some people are conspiracy-minded that Anthropic is lying about this. But I do think it is true. And it is not crazy to think that a PRC entity would do this. In fact, there is an open-source tool that you can download from a PRC group that is not that much less advanced than what Anthropic demonstrated. It uses DeepSeek, which is an open-source Chinese LLM, trained to use Kali. Kali is the virtual machine that I actually use to teach the class here, a Linux distribution that has a bunch of attack tools built into it that will automatically do a bunch of hacking for you. This tool that this Chinese group built, you can talk to it in either English or Chinese and basically ask it, "Hey, go hack this network for me," and it will do a bunch of the hacking automatically for you. It's super cool.
>> So Kali is a Linux distro that I can just download and put on my computer?
>> You can come to my class and I'll teach you how to do it. Frank, you still haven't taken the class, right? You've got to do that. If you're going to make the master's students do it, you should do it yourself. But yes, you can download it yourself, or we run it on virtual machines here in our attack lab. But Kali doesn't come with the LLM. What this Chinese group did was take the Kali virtual machine and then train an LLM on all of the instructions for how to use all the tools in Kali, and on a bunch of attack patterns from the open internet, and then hook it up with MCP servers to use all the tools. Then you can basically ask it, "Hey, go do attacks for me," and it'll run all the tools in the proper order to do it.
>> Mhm.
>> So, not as advanced as the attack Anthropic talked about on their systems. The interesting thing about what Anthropic talked about is why. OpenAI has seen some of this kind of stuff, but nothing as advanced as what Anthropic demonstrated. The LLMs being created by DeepSeek, by Qwen, by the Chinese labs are 70%, 80% as good as the closed models from Anthropic and OpenAI. On some of the metrics maybe not that good, but they're pretty good. You can go get those models and just run them on your own hardware. If you do that, you are not leaving logs at Anthropic or OpenAI for the FBI or the NSA to get. And so if Anthropic is seeing that kind of activity, then there is a ton of activity happening using the open-source models. Because the other thing is, you can't take Anthropic's models or OpenAI's models and really retrain them to do hacking; they have a bunch of protections in place to try to prevent that. Now, you can trick them: you can say, "Oh, this is authorized," and such. And because of what happened, I think Anthropic has made it harder to use their tools for hacking. But with the open-source Chinese models, you don't have to do anything to get them to help you do hacking. And then you can intentionally train them, just like this Chinese toolkit did: you can take a bunch of stuff, add checkpoints, use RAG and such, and feed them a bunch of data. So if I were building these kinds of things, I would just be using the open-source models. They're quite good: DeepSeek R1, the latest chain-of-thought model from DeepSeek, is quite good. If you have the hardware, I would just be using that. Yeah.
So yes, of the groups I'd be worried about, I think China is by far the first. China is the only adversary country that really has its own labs equivalent to the US labs, and all this stuff about the US being way ahead of China in AI is foolish. The PRC has had a plan for decades to catch up to the United States in fundamental sciences, including in computer science. They have been sending students here, including to Stanford, for years, and that has been effective. Right now a bunch of those students stay here, but a bunch of them go back, and then they teach there, and then they create the next generation. Of all the current fields of computer science, AI is one of the most academic. Therefore it is something that is published very broadly. That has changed a little bit in the last couple of years, as a lot of the cutting-edge stuff has stopped being published. But until GPT-3, almost all of the really cutting-edge discoveries were published openly. After GPT-3 and the commercialization, people stopped. But that was only three years ago, so up to that point, everything you needed was public. Our undergrads here build toy LLMs. Our graduate students are doing cutting-edge research.
>> Mhm.
>> A bunch of them are Chinese nationals, and a decent percentage of them will go teach it back home.
>> So, as in the case of nuclear technology... or rather, it seems to me you're saying that unlike nuclear technology, you don't necessarily need to be a state-level actor to be at the cutting edge of this technology.
>> No, not at all.
Because with nuclear technology, we couldn't control the physics, right? Every country in the world very quickly knew how to build an implosion bomb. Controlling the knowledge of how the physics worked was effectively impossible. What we controlled was the uranium. Well, controlling the compute is effectively impossible. That is something the Biden administration's controls demonstrated, partially because you don't actually have to have the GPUs in your country for this to work. There's a reason why Singapore and the UAE are on the list of the top importers of Nvidia GPUs.
>> It is not because the UAE is full of AI researchers.
>> It is because they are effectively exporting that compute to China, right?
>> Mhm.
>> Um, because the actual thing you're exporting, after you do all that work, fits on a single thumb drive. So you can do all the compute in Singapore, or in the UAE or Dubai using their cheap subsidized power, and then you can zip the final results in seconds over fiber-optic cables back to China.
>> So if this is something that individuals can really master, it seems to me the threat level then goes way up, because you're living in a world where there are a lot of people with bad intentions. In the nuclear world, you really had to be a country with bad intentions.
>> Yeah.
>> Now, it seems to me you're saying that potentially a lot of individuals with bad intentions can make use of this technology in a very sophisticated way.
>> I mean, as of right now, you can go on Hugging Face and download retrained versions of DeepSeek that will write you exploit code, right? You can get retrained versions of Qwen that will write you malware. You could never do that with nuclear weapons; you could never download yellowcake. So I think the idea that something like the nuclear nonproliferation model, the CISAC model from the second floor here, right, or third floor, is going to apply at all to this world is just foolish.
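The thumb-drive point above is easy to sanity-check with back-of-envelope arithmetic. The model size and link speed here are illustrative assumptions, not figures from the conversation:

```python
# Back-of-envelope: trained weights are tiny compared with the compute
# that produced them, so "exporting" a model is just a file transfer.
params = 70e9        # assume a 70B-parameter model
bytes_per_param = 2  # fp16 weights
weights_gb = params * bytes_per_param / 1e9
print(f"weights: {weights_gb:.0f} GB")  # small enough for one thumb drive

link_gbps = 10  # assume a single 10 Gb/s fiber link
transfer_s = weights_gb * 8 / link_gbps
print(f"transfer time: {transfer_s / 60:.1f} minutes")
```

At these assumed numbers the weights come to 140 GB and move in under two minutes: months of data-center compute compress into a transfer measured in minutes.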
>> Yeah. So, in terms of the criminal world, what's the cutting edge there?
>> Yeah. So the first wave was AI to make phishing better, right? A lot of the protections against spam and phishing were based upon the idea that if you were sending out a message to fool a lot of people, you'd send one message, it would be the same, and you'd send it to a thousand people.
>> Mhm.
>> Now, because of LLMs, you can send a thousand unique messages to a thousand people.
>> When people first did that, at least the targeting was the same: you didn't know who those thousand people were. You were just sending a male-enhancement message that used different language to try to beat the filters. Now, I think what we're going to end up having is scammers who are able to do scams at scale, where the interactivity with individuals is handled by bots. And that's going to get pretty scary: they know this is Sally, that she's a 63-year-old grandmother, and the work that used to be done by a Nigerian scammer actually sitting in Lagos, who had to think about Sally and how to trick her, can be done by an LLM whose English is better than that scammer's, and who has information stolen about her from a data breach, knows what her grandchild's name is, and is able to build a lure that's perfect.
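The filtering point can be illustrated with a small sketch: classic bulk-mail defenses clustered near-identical messages, and per-recipient LLM-written lures defeat that by each being unique. The messages below are invented examples:

```python
# Why similarity-based spam clustering breaks down: identical bulk mail
# clusters trivially, while per-recipient LLM-written lures do not.
from difflib import SequenceMatcher

bulk = [
    "Dear customer, your account is locked. Click here to verify.",
    "Dear customer, your account is locked. Click here to verify.",
]
unique = [
    "Hi Sally, Emma's school photos are ready, log in to see them.",
    "Mr. Jones, your pension statement needs a signature by Friday.",
]

def similarity(a: str, b: str) -> float:
    """Rough textual similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

print(similarity(*bulk))    # identical messages: ratio is 1.0
print(similarity(*unique))  # distinct lures: ratio is much lower
```

A filter keyed on near-duplicate detection catches the first pair and sails past the second, which is exactly the shift described above.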
So I think that's going to get scary. The other thing we have now is face replacement and voice replacement. At the high end, what you end up having is people coming in and doing fake video conferences and fake phone calls, and running wire transfer scams on enterprises, right? So if you want to rip off $250,000 or $500,000, people are making phone calls or video conference calls into accounts payable teams, into accounting teams, pretending to be executives, the CFO, and saying, "Oh, yes, we need to make this payment. We need to make this wire transfer." Or going to companies and saying, "I'm this large vendor of yours, and we've made a change to our payment structure." This is an old scam. It used to be all done on paper; because it's a significant scam, people then moved to, okay, let's have a phone call to verify. And now you can emulate somebody's voice and emulate somebody's face.
>> The other scam that's going on now is the North Korean worker problem. Post-COVID, lots of companies decided there are certain classes of people they can hire remotely, to the point where you might have employees that have never been met by a human being in the company, right? You interview them remotely, you hire them remotely, you ship them a laptop, they do all their work, and when you eventually fire them, they ship the laptop back. Okay, that works out fine until the North Koreans take advantage of it: they apply for a job using an American mule. So that American has a real Social Security number. They have a real American bank account. They have a real American mailing address.
>> And so they apply with the identity of that American, but the interview is taken by a North Korean who is actually skilled at that job, whether it's a programming job or something else. They use face replacement to pretend to be American, and they use voice replacement to be able to do a perfect American accent. They do well in the interview because these people are actually good at the job. They get the job. The laptop gets shipped to that American. The American opens it up, installs a remote access tool, and then somebody in North Korea actually does the job.
>> The FBI has raided these people's apartments. And what you'll have is some cocktail waitress in Las Vegas with 20 laptops open, all of them being controlled remotely by 20 different people in Pyongyang.
>> And then she's collecting 20 paychecks, and she sends 90% of each one of them back to North Korea via cryptocurrency.
>> This is a real thing that happened.
>> Oh, it's a huge thing. It is worth billions and billions of dollars of hard currency transferred back to North Korea. It looks like their two largest sources of money now are: one, North Korea is the largest thief of cryptocurrency in the world by far, nobody else is even close; and two, this scam.
>> Mhm.
>> And then they bust that cocktail waitress. She goes to federal prison, and then those 20 North Korean workers have to get 20 new jobs.
>> And the cocktail waitress is like, "Oh, I had no idea they were North Koreans," or whatever. Who knows? This is like those work-from-home scams: she got recruited to work from home and then she got pulled in. She knew something was wrong. She probably didn't know they were North Korean, but she knew, obviously, she was getting paid to run these laptops.
>> Is that the same thing that's going on in these kind of slave farms in Cambodia that have been in the news a lot lately, where people are literally kidnapped and forced to work?
>> It's possible. Yeah. I mean, certainly with the North Korean workers, the reports are that they're very hard workers, and it's obviously because they're slaves, right? People are getting actually really good output from them: graphic design, programming, data entry. Things people can do without a lot of phone calls, without a lot of interaction, and asynchronously across time zones, although they'll work during American hours; they're up in the middle of the night their time.
>> The Cambodian stuff, those are people working for Chinese triads, but they work in Cambodia, so they're outside the reach of Chinese law enforcement, is my understanding.
>> Yeah. So, you suggested that, like, the Russian GRU, when they get into this, are more cautious than your typical criminal. Why is that?
>> Right. So, you're certainly going to see state actors use AI, but they're going to be more careful. And it's because, if you're a state actor, depending on who you are, you might or might not care about getting caught. Now, the GRU is an interesting example, because there are a lot of examples of the GRU not caring. So, as you know, but maybe your listeners don't, there are three major intelligence agencies in the Russian Federation, right? There's the SVR and the FSB. Those are the two descendants of the KGB.
>> I always found this interesting. I'd love to talk to Mike McFaul about this. It's interesting to me that they threw away the KGB brand. Like, the Belarusians kept KGB.
>> That sucker's got a huge amount of brand equity, right? What's the recognition rate of the term KGB? It's 95%. Who knows what the FSB is? But the Belarusians still have the KGB because it's terrifying. Everybody's terrified of the KGB. Who's afraid of the FSB? I mean, people are now, but they weren't in, like, the '90s. Anyway, so the KGB breaks up, and the First Directorate, the foreign intelligence service, becomes the FSB... I'm sorry, becomes the SVR, and then domestic intelligence and the near abroad becomes the FSB. So the SVR, they have the best hackers in Russia. And then the GRU is military intelligence, so they work for the Kremlin. The GRU are kind of the thugs of the Russian state hackers. They're the bulls in the china shop. They often don't care about getting caught. And then the FSB is somewhere in the middle. They have hackers who are very careful. They have hackers who don't really care. They also use a lot of contractors, right? So a lot of FSB work is being done by Russians, and people from related countries, who are doing work for their uncle in the FSB. When I got to Yahoo, Yahoo had been breached before I got there by a group of guys who were working for the FSB. A guy named Alexsey Belan ran that team. He's actually not Russian; he's Latvian, and he had gotten caught by the FSB. And I guess, given the choice, you could live in that concrete box or you could work for us. I'm guessing most people take option B, right?
>> Mhm.
>> And the FSB recruits a lot of people that way. So the SVR, they are famous for not getting caught, right? Because they're an intelligence-gathering operation, a lot of their work, if they get caught, is useless. And so they, for example, are famous for the SolarWinds hack, where they spent something like nine months working their way slowly into SolarWinds, very carefully, very quietly sneaking through the network. When they implanted the back door into SolarWinds, they did so with brand-new malware on a build server that lived in the kernel only. It decrypted itself just for the moment of the build. It patched the software in memory only. It never touched disk. It was incredible. It's gorgeous. Really careful. The GRU just blows stuff away, because they're military; they're a lot about breaking things. If they do intelligence gathering, it's often for a very short-term purpose, or they're doing disinformation work, right? So, famously, they're the ones who broke into Podesta's email, and then they stole it and released it. It's like they don't care that people know.
>> People know they know; we know they know; they're fine with it. And so if the SVR is going to use AI, it's going to be for things like exploit development. It's going to be for things like creating malware. It is not going to be to automate their kill chain, because they're willing to spend nine months to do stuff carefully.
>> I can see the GRU doing it, right? Now, the ransomware groups, again, those guys operate at scale. They like to hack a thousand things, and traditionally they've had to come back after they hack into a thousand companies and pick and choose which ones they actually ransom. So if they can automate that process, that's good for them. The other problem the ransomware groups have is that if you have an affiliate group of 15 or 20 guys working together, one of those guys might be a cop. One of them might be an Australian Signals Directorate undercover agent. One of them might be dumb and go to Cyprus, go to the beach, get picked up on an Interpol warrant, and then get turned against you, right? So for them, human beings are a liability. And if you can replace those guys with bots, you're much safer. And so for them, I think AI is also a good move, because if you can go from 20 guys to two or three using AI, then you have a much safer...
>> Yeah.
>> ...a much safer, tighter group.
So, maybe just to wrap up: in the future, are there new actors that people are not aware of that we ought to be careful about, or
>> new ways? And not just companies or big organizations, but individuals: what do they have to worry about?
>> Well, one of the things that's going to be interesting to see is that AI is going to really improve the game of lots of people, because it's already allowing attackers to find vulnerabilities and write exploit code that they were never able to before. So, the really high-end guys: the SVR, the Ministry of State Security in China, and, you know, the US, our NSA. Think of the real high-end guys: the US and the Five Eyes, so the US, Canada, Australia, New Zealand, the UK, right?
>> Then China, Russia, Israel, France, Germany, a couple of other Western nations.
>> Those are the real cyber powers, where you have people who are writing custom exploit code, doing real zero-day development, doing really hot stuff. There's a whole tier below that, where it's actually really rare to see brand-new exploits used in the wild. Even Iran, they're very active, but traditionally they don't use a lot of zero-day exploits, because they don't have a ton of people who can write beautiful new exploit code. That is about to change, because AI both allows you to find new vulnerabilities and lets you write and test exploit code, I think, much more easily than you could before. And so it's going to be fascinating to see what happens when all of these groups take a step up. So, India and Pakistan: traditionally not a ton of new exploits. It happens, but not a ton. Iran, maybe South American countries. But also individual groups, both the ransomware groups and maybe activist groups. These people have always been doing hacking, but they've been using tools that they find, using exploits that kind of trickle down. You know, you'll see the superpowers attack each other, right? You'll see Stuxnet get used, or the US loses EternalBlue, probably because the SVR steals it and releases it, and then EternalBlue gets used all the time in all these hacks. Well, if that kind of capability is in the hands of everybody with access to AI,
>> that's a really scary future. And so I think, as defenders, we're going to have to really adjust to the
possibility that a much broader set of people have the ability to write really good exploit code and to find new vulnerabilities really quickly. So, as defenders, we're going to have to write better code. We're going to have to find vulnerabilities really fast. We're going to have to patch much more quickly, because the number of companies that have actually had to deal with zero-days is really quite small: the defense industrial base, oil and gas, banks, big tech, government, and that's pretty much it. Most companies don't actually have to worry about that. Now, you might see all companies have to play at that level. That's, I think, a really scary possible future.
>> Yeah. I remember when I was young, I read a science fiction story about some aliens that came to Earth. They had a machine, very simple and easy to reproduce: you just point it at anything made of metal and the metal basically turns to putty. And so they leave this machine and fly back to their home planet. They come back 50 years later and the whole world has basically fallen into complete chaos, because any teenager can melt the Brooklyn Bridge, you know.
>> Oh, wow.
>> So, right, it seems like we may be moving into some cyber equivalent of that.
>> You think an alien just left the transformer paper? Maybe it wasn't an alien, but once the technology becomes that accessible,
>> there are really lots of bad actors. And...
>> I mean, the good thing here is AI also helps us write better code. It helps us find these bugs and fix them.
>> So, I mean, this is what the company I'm at now, Corridor, was started for by two of our alumni, right? I joined this company with two of the students I worked with here at Stanford, Jack Cable and Ashwin Ramaswami. So the upside here is AI does help us write better code. It helps us refactor code. It helps defenders find bugs.
>> So, you know, it's not all bad. We just have to have the will and the courage to try to move faster than the bad guys.
>> And it's an ongoing arms race, or it will be an ongoing arms race.
>> It will be, and as defenders we have to invest in actually fixing things. If you look at the Salt Typhoon attacks, the big attacks against the telephone companies, these really never got properly investigated. The CSRB was shut down by the Trump administration, and we never got the final report. My understanding is that for a bunch of the vulnerabilities that were exploited, the patches existed; they just weren't applied. We can't allow that anymore, right? We actually have to go fix things, and if that means we have to have some downtime, we have to spend the money, we have to upgrade the switches and the routers. We've got, as a society, to decide we're going to spend that money. We're going to squish corporate profits a little bit. We're going to hold people to a higher standard.
>> Yeah. And you as an individual shouldn't delay on doing that patch on your computer.
>> No. But the truth is, for those of us in the tech industry, it's on us to make that automatic. If we ship you a device, the patching should be automatic. It should be secure by default. We should not be putting that on normal people.
>> Mhm. Okay. Alex, that's great, that's really informative. So thanks a lot for talking, and we'll have to do it again. You know, in a couple of weeks things will be completely different. All right.
>> Yes. If the AI lets us have a podcast. Yes. Okay. Good. Thank you, sir.
>> Thank you.