
Google DeepMind’s Demis Hassabis with Axios’ Mike Allen

By Axios

Summary

## Key takeaways

- **Nobel boosts AI credibility**: The Nobel Prize acts as a shortcut proving expertise to non-AI people, like government leaders who may not know much about AI, opening doors for important discussions. [01:33], [03:18]
- **Scientific method powers DeepMind**: DeepMind applies the scientific method, experimentation and hypothesis updating, to science, business, and engineering, blending world-class research, engineering, and infrastructure for an edge in the AI race. [04:08], [05:12]
- **Next year: multimodal agents**: Expect convergence of modalities like video and language, world models like Genie 3 for coherent interactive simulations, and reliable agents evolving into universal assistants on devices like glasses for daily tasks. [06:11], [07:33]
- **AGI in 5-10 years**: AGI, exhibiting all human cognitive capabilities including creativity, is 5 to 10 years away, requiring one or two major breakthroughs beyond scaling current systems, despite their jagged intelligence. [21:40], [23:03]
- **Radical abundance vs AI risks**: The best case is radical abundance, solving energy, disease, and scarcity so humanity can flourish; fears include bad actors using AI harmfully or agentic systems going off the rails, with a non-zero catastrophe risk mitigated in part by commercial guarantees. [08:56], [09:33]
- **US leads China algorithmically**: The US and West lead on AI benchmarks, but only by months; China excels at fast following but has not shown algorithmic innovation beyond the state of the art. [13:13], [13:42]

Topics Covered

  • Nobel unlocks AI safety advocacy
  • Scientific method powers AI edge
  • Multimodal agents evolve rapidly
  • Radical abundance vs AI risks
  • AGI arrives in 5-10 years

Full Transcript

Thank you very much. Big finish. I'm Mike Allen, co-founder of Axios, on behalf of my co-founders, Roy Schwartz and Jim VandeHei. Thank you to all of you who, for coming up on nine years now, have been fans of Axios, and thank you for turning out here in San Francisco, in this historic bank, this very cool setting, for this Axios AI+ SF summit. Welcome to all of you around the world. For our big finish: Demis Hassabis, PhD, co-founder and CEO of Google DeepMind. He's a neuroscientist, entrepreneur, and AI pioneer. Demis was a chess prodigy at five, a Nobel laureate at 48. He's a British-born genius. He's been knighted. Demis Hassabis, welcome to Axios.

[applause] Thank you so much. Thanks for having me.

>> We've been looking forward to this. We appreciate it.

>> Good to be here.

>> It was just over 400 days ago that you found out you were a Nobel laureate. And in that moment, you said, this is surreal.

>> This is the big one.

>> Yeah.

>> What has changed since then about your life and work? What has it made possible?

>> Um, well, look, it's still pretty surreal, actually. It still hasn't fully sunk in, but it has made quite a big difference. The thing it makes a difference to is when you speak to people not in your field, including, you know, big government people, things like that, who maybe don't know that much about AI. If you have the Nobel Prize, it's a sort of shortcut for almost anyone to know that you're an expert in your field. So it's going to be useful, I think, in the future. [laughter]

>> And you had endless resources at your disposal. Are there new resources that you have, or that you think you can tap now?

>> Not really. I mean, you're right. We're lucky at Google, at DeepMind. We have a lot of resources. They're not endless; we always need more compute, no matter how much compute you have. But we have a lot of great things, which is why we're able to do such a broad portfolio of things. But it's mostly, again, this platform it gives you to speak out about things that you care about. And I haven't done a lot of that yet, but I think it will be important. Maybe we're going to talk about AI safety and other things. I think the Nobel and the platform it gives you could be useful for that.

>> And what's on the short list, in addition to AI safety, of things you think you'll be talking more about using your platform?

>> Yeah, well, it's not just about safety in the long term. AGI safety, obviously, we think a lot about that, but it's also about responsible use of AI today. What are the kinds of things we should be using AI to improve and to power up and to accelerate, and what sorts of things should we be careful about, even in the near term? So that's one thing. I think also just getting society ready for what's to come. You know, AGI, probably the most transformative moment in human history, is on the horizon, and we need to get prepared as a society and as a species. And I think, of course, governments and other important leaders are going to be critical in that, and having something like the Nobel platform opens pretty much any door.

>> One of the things that distinguishes you is that you're deep in the science, and yet you're also on the front line of this fight, this race among companies, hyperscalers, superpowers. And, sort of in the mold of Steve Jobs, you also have a product mind. You want to create delightful things for people. But you always say you're a scientist first.

>> Yeah, I'm a scientist first. The reason I say that is that's the default approach I take to everything. And what I mean by that is the scientific method, really, that way of thinking. I think the scientific method is maybe the most important idea humanity's ever had. You know, it created the Enlightenment and then modern science. So basically, modern civilization depends on this idea of the scientific method: experimentation, and then updating your hypothesis, and so on. And I think it's an incredibly powerful method, but I think it can be applied to more than just science. I think it can be applied to everyday living, and indeed business. And that's what I've tried to do: take that to its limit. And I think that's what gives us an advantage in some ways, as a research organization and as an engineering organization. Yes, we're in the middle of this ferocious, probably the most ferocious competitive battle maybe tech has ever seen. But one of the things that I think gives us an edge is the rigor and precision we bring to our work, because we have the scientific method sort of at the heart of it, and we blend world-class research with world-class engineering with world-class infrastructure. And I think you need all three of those things to be at the frontier of something like AI, and I think we're pretty unique in having world-class capabilities in all those areas.

>> So, in Axios fashion, we're going to divide our conversation between zoom out and zoom in. Zoom out: getting your priceless mind on the state of AI. We're going to talk about the blunt state of AI. And what I'm going to ask from you is, given the known knowns today, be blunt, clinical, no hype, no soft selling. Can we do that?

>> I'll do my best.

>> All right. What does the next 12 months of progress look like? If we sit here a year from today, what will have changed in the world?

>> I think the things that we're pressing hard on are the convergence of modalities. Gemini, which is our main foundation model, has always been multimodal from the beginning. It takes images, video, text, and audio, and can now increasingly produce those types of outputs as well. And I think we're getting some really interesting cross-pollination by being multimodal. One of the best examples of that is our latest image model, Nano Banana Pro, which I think shows some astonishing understanding of visuals; it can create infographics that are really accurate, and so on. So I think over the next year you're going to see that progress a lot, and I think, for example, in video, when that converges with the language models, you're going to see some very interesting combinations of capabilities there. I think the other thing we're going to see over the next year, and that I'm personally working on, is world models. So we have this system called Genie 3, which you can think of as an interactive video model. You can generate a video, but then you can start walking around in it, like you're in a game or simulation, and it stays coherent for a minute. I think that's very exciting. And then maybe the other thing is agent-based systems. I think the field's been talking a lot about agents, but they're not reliable enough yet to do full tasks. But I think over the next...

>> We've heard a lot about that today here on the Axios stage. What would you say a year from now? How will agents have progressed? What's an example of how it will work in everyday life a year from now?

>> Well, look, we have this concept of a universal assistant that we want Gemini eventually to become. I think this is also what you're going to see from us over the next year. This will be on more devices as well. By universal, we mean it's not just on your computer or your laptop or your phone; maybe it comes around with you on glasses or other devices. And we want to create something that is useful to you in your everyday life, that you consult many times a day. It becomes a part of the fabric of your life, and it improves your productivity but also your personal life, you know, recommendations for books and films or activities that you'd like. But agents at the moment, you can't delegate a whole task to them and be sure they're going to complete that entire task completely reliably.

>> But a year from now, you think they will?

>> I think a year from now, we'll start having agents that are close to doing that.

>> Bull case, bear case: what is the best case for what AI can do for the world, and what do you fear most?

>> Well, look, the best-case scenario that I've always dreamed about, and why I've worked my whole life on AI, getting closer to this moment many of us have been working towards for decades now, is a kind of, I sometimes call it, radical abundance. This idea that we've solved a lot of the biggest issues confronting society and humanity today. So whether that's free, renewable, clean energy, maybe we've solved fusion, or optimal batteries and solar, materials, semiconductors, you know, material science. We've solved a lot of diseases. So then we're in a situation where we're in this new era, a post-scarcity era, and humanity is potentially flourishing and traveling to the stars and spreading consciousness to the galaxy.

>> And what do you fear most?

>> Well, even that utopian kind of view has some questions around it, about what our purpose as humans will be if these technologies are out there solving all these problems. What will be left to solve? You know, I worry about that as a scientist, and about the scientific method, even. So there's that, but there's also, obviously, the well-known challenges and risks with AI, which are twofold. One is bad actors using AI for harmful ends; the other is the AI itself, as it gets closer to AGI and becomes more agentic, going off the rails in some way that harms humanity.

>> So, you mentioned going off the rails. How worried are you about these catastrophic outcomes? What's your level of concern? I'm just going to rattle them off. One: pathogens created by an evil actor using AI.

>> Mhm.

>> I think that's definitely one of the bad use-case scenarios that we have to guard against, for sure.

>> Energy or water cyberterrorism using AI by a foreign actor.

>> Yeah, that's probably almost already happening now, I would say. Maybe not with very sophisticated AI yet, but I think that's the most obvious vulnerable vector. Which is why we're focusing quite a lot, as Google and as DeepMind, on AI for cybersecurity, to power up the defensive side of that equation.

>> AI operating outside human control, on its own.

>> Well, this goes back to the agentic stuff. As that becomes more sophisticated, and it's clear why the industry will build those things, because they'll be more useful as things like assistants, they're definitely going to happen. But the more agentic and autonomous they are, the more room there is for these things to deviate from what you maybe had intended when you gave the initial instruction or the initial goal. So this is a very active area of research: how to make sure that systems that maybe are capable of continual learning or online learning stay within the guardrails that you set. I mean, I think the good news is that because AI has become so big commercially and for enterprises, if you think about renting or selling one of your agents as a leading model provider to another big business, those businesses will want guarantees around the agent's behavior, what it does with their data, what it does with their customers. And if those things go wrong, they're not going to be existential in any way, but you'll lose the business for sure, because why would that enterprise go with that provider? They would choose a different provider that was more responsible and had better guarantees. So I think what's great about that is that capitalism will naturally reward, ideally, the more responsible actors.

>> But it's possible that the AI could jump the moat, jump the guardrail.

>> Potentially, if done wrong. I mean, there's always a possibility. Nobody really knows; that's one of the big unknowns. I think that potential is non-zero, so it's worth very seriously considering and mitigating against. But, you know, I hear people give very precise percentages for the chances of these things, a p(doom), which I think is kind of nonsense, because no one knows what it is.

>> So you don't quantify it, but you say it's...

>> It's non-zero. So clearly, if your p(doom) is non-zero, then you must put significant resources and attention on that.

>> Where is the US winning the AI race against China and where are we losing?

>> I think that we're still, in the US and in the West, in the lead, if you look at the latest benchmarks and the latest systems. But China is not far behind. If you look at the latest DeepSeek or the latest models, they're very good, and there are some very capable teams there. So maybe the lead is only a matter of months, as opposed to years, at this point.

>> Because when you put chips aside, in AI, China probably is winning.

>> No, I think chips is one thing, but algorithmically, innovation-wise, I think the West still has the edge. I don't think any of the Chinese models or companies have shown they can innovate on something algorithmically new beyond the state of the art. They've been very good at fast-following the current state of the art.

>> Our last zoom-out question, and you're going to like this one. What's the most astonishing thing about AI that you think gets shockingly little attention?

>> Wow. Yeah. If I think of the things we're working on and already have working, it's the multimodal understanding these models have.

>> And multimodal video?

>> Yes, video, image, and audio, but I'm thinking specifically of video, actually. So if you give Gemini a YouTube video to process, you can ask it all sorts of incredible things about the video. It's just mind-blowing to me that it can understand, sort of conceptually, what's happening, not always, but in many really impressive cases.

>> An example of a question?

>> Well, I mean, look, this was just something I tested Gemini on the other day. I love the film Fight Club, and there's some scene in it, I think, where Brad Pitt, or maybe it's Ed Norton, I can't remember, takes off his ring before having a fight. And I asked Gemini, what's the significance of that action? And it came up with a very interesting sort of philosophical point about leaving behind everyday life and just symbolically showing that. It was a very interesting kind of meta insight that these systems have now. And the other thing that's sort of not appreciated is that we have this thing called Gemini Live, where you can point your phone at something, say you're a mechanic, and it can actually just help you with whatever task you have in front of you. Ideally, that should be glasses, because you really want to have your hands free for that. But I think people don't realize how powerful that multimodality capability is yet.

>> All right, you've given us the perfect bridge to zooming in. Congratulations on Gemini 3 last month, your game-changing model. You say it reasons with unprecedented depth and nuance. Tell us what's unique about the nuance part of Gemini 3.

>> Yeah, I think we're really pleased with almost the personality of it, the style of it, as well as its capability. I like the way it answers succinctly. It doesn't just agree with whatever you're saying; it pushes back gently on some ideas if they don't make sense. And I think people are appreciating that. You can feel it's a bit of a step change in its kind of intelligence, and therefore its usefulness.

>> And what's something that Gemini has answered or produced where you said, I didn't know it could do that, or I didn't know it would do that?

>> Well, actually, this is the amazing thing, and why we love what we're doing so much. In this era we're now in, with research connected to product, the great thing is that you get millions, and potentially at Google billions, of users immediately taking advantage of the new technology you put out there. And we're continually surprised by the cool things that people very quickly figure out to use these models for. A lot of those things tend to go viral. But the thing I most enjoyed with Gemini 3 was one-shotting games. Back to my very first career of making AI for games, I think we're very close now with these models, maybe the next version of the models, where you could start really creating perhaps commercial-grade games, vibe coding them in a few hours, which used to take years.

>> And that shows nuance? What does that show about the model?

>> Well, I think it's just the incredible depth and capability of these models, to understand very high-level instructions and produce very detailed outputs. The other thing that Gemini 3 is particularly good at is front-end work, developing websites, and it's pretty good aesthetically and creatively as well as technically.

>> Something we've written a fair amount about at Axios is that even the authors, the creators, of these models don't totally understand them. What's something about Gemini 3 that you feel like you don't totally get?

>> Well, actually, I feel like with all these models, and maybe all of the audience are feeling this too, there's such a fast pace of innovation and improvement. We're spending almost all of our time building these things. I have this feeling every time we release a new version that I haven't even had time to explore a tenth of what the existing systems can do, because of course, referencing back to the ferocious race and competition we're in, we're immediately focusing on the next innovation, and obviously making sure it's safe and reliable and all those things. So again, our users end up taking these models much further than we've often tried internally.

>> And one more question on Gemini 3, a little backstory. You had a number of irons in the fire, but LLMs, the text-based large language models, you didn't necessarily go all in on as the holy grail. Something that Walter Isaacson, the great author and thinker and your friend, said to me is that when you saw the power of the LLM, you did a pivot, a pirouette, as Walter put it, and were able to leapfrog to great success. And Walter's point was that most business people would have been stubborn, might have doubled or tripled down on their other bets. How did you make the decision to go all in on your LLM?

>> Well, I think this is again the beauty and the strength of the scientific method. If you're a true scientist, you can't get too dogmatic about some idea you have. You need to go with where the empirical evidence is taking you. So, first of all, Walter is probably referring back to the 2017-2018 era. There, we had a lot of irons in the fire. As we said, we had our own very capable language models. They were called Chinchilla and then Sparrow, and we had these various different code names for them. They weren't publicly released, but they were internal. In fact, some of the scaling laws were originally figured out by our team; they're called the Chinchilla scaling laws. But we also had other types of programs: AlphaZero, things that were building on AlphaGo, pure RL systems, and we also had some cognitive-science, more neuroscience-inspired architectures as well. And at the time, we weren't sure. My job is to make sure we build AGI first, fast and safely, right? That's always been our mission at DeepMind: solve intelligence. And so I'm kind of agnostic, actually, to the approach that's taken. I'm pretty pragmatic on that. That's maybe the engineering side of me. I have some theories, as a good scientist would, but at the end of the day, it's got to pragmatically work. And so when we started seeing the beginnings of scaling working, we increasingly put more and more resources onto that branch of the research tree.
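[Editor's note: for readers unfamiliar with the Chinchilla scaling laws he references, the result, from DeepMind's 2022 paper "Training Compute-Optimal Large Language Models" (Hoffmann et al.), models pretraining loss as a function of parameter count and training tokens; the fitted constants below are the approximate values reported in that paper.]

```latex
% Chinchilla scaling law: expected pretraining loss as a function of
% model parameters N and training tokens D
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Approximate fitted values from Hoffmann et al. (2022):
%   E \approx 1.69,\quad A \approx 406.4,\quad B \approx 410.7,
%   \alpha \approx 0.34,\quad \beta \approx 0.28
% Minimizing L under a fixed compute budget C \approx 6ND gives
%   N_{\mathrm{opt}} \propto C^{a},\quad D_{\mathrm{opt}} \propto C^{b},
%   \quad a \approx b \approx 0.5
% i.e., parameters and tokens should scale in roughly equal proportion
% (in practice, on the order of 20 training tokens per parameter).
```

The practical upshot was that many earlier large models were undertrained for their size: at a fixed compute budget, a smaller model trained on more tokens achieves lower loss.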

>> Something that's refreshing about your approach is that with artificial general intelligence, human-capable AI, you don't shy away from it. Some other people say, "Well, we won't know," or "We're already there," or "It doesn't matter." You say that it does matter, and we will know. And you say it's not far off.

>> Yeah, we're definitely not there now.

>> But actually quite close, is how you say it.

>> Yes, quite close. I think we're like five to 10 years away, if you were to ask me.

>> I'm sorry, say that again?

>> Five to 10 years away. I think my bar, though, is quite high. We define AGI as a system that exhibits all the cognitive capabilities we have, and that includes inventive and creative capabilities. I think there are missing capabilities. As all of you who have used the current LLMs know, they're amazing in some ways, really impressive in some senses. They've got incredible, almost PhD-level skills in some areas, IMO gold medals and so on, but in other areas they're still very flawed. So they're these sort of jagged intelligences. You would expect across-the-board consistency from a true AGI, and they're missing other capabilities, like continual learning, online learning, and long-term planning and reasoning. They can't do any of these things currently. I think they will be able to, but maybe one or two more breakthroughs are going to be required.

>> And a question from the great Ina Fried, who we've seen today, and whose coverage from day zero of Axios has helped make Axios what it is. She notes you've said that AI might be one advance, two advances away from AGI.

>> Yes.

>> Will we get there just by improving LLMs and generative AI, or do you think a different approach might be needed to hit AGI in your five to 10 years?

>> I think, again, this is an empirical question, but my best guess is this: we must push the scaling of the current systems to the maximum, because at the minimum it will be a key component of the final AGI system. It could be the entirety of the AGI system; there's a chance that just scaling will get you there. But my guess, from my vantage point now, is that one or two more big breakthroughs are needed. I mean, there's innovation going on all the time, by the way, even including in scaling existing techniques, but I'm talking about a Transformer-level or AlphaGo-level type of breakthrough. I suspect that when we look back, once AGI is done, one or two of those things were still required in addition to scaling.

>> We're about to get the hook, so a super rapid round. Another question from Ina: you obviously are a big believer in AI, but if you look at what's being spent, that doesn't mean there might not be a big enough bubble to rattle the economy. How worried are you about that?

>> I think it's not binary. I think some parts of the AI industry are probably in a bubble. Like, I don't know, $50 billion seed rounds and things like that seem a little bit unsustainable. But on the other hand, of course, I believe more than anyone that AI is the most transformative technology ever. So I think that in the fullness of time, this is all going to be more than justified. And my job as head of Google DeepMind, the engine room of Google, is to make sure we win either way. If the so-called bubble bursts, or if things continue to be good like they are now, we're in a strong position.

>> The AI recruiting wars: what's the end state of this competition for talent?

>> Well, look, it's gone pretty crazy recently, things like what Meta have been doing, and everyone's got to do what makes sense for them. What we found for us is that we want people who are mission-driven. We have, I think, the best mission. We have the full stack. So if you want to do the most impactful work and have the most positive impact on the world, I think there's nowhere better than Google DeepMind. And in the end, the best scientists, the best researchers, the best engineers want to work on the most cutting-edge stuff. So if you're at the top of the leaderboards with the best systems, that's sort of self-fueling.

>> This is a question from James VandeHei, an entrepreneurial young mind at High Point University in North Carolina. He says, "There's a lot of conversation about AI gaining a mind of its own. Is there a scenario where AI could act in its self-interest?"

>> Well, that's a great question, and it's related to some of the more catastrophic outcomes if things went wrong. That would be one of the issues with agent-based systems, or very autonomous systems: if somehow they developed a self-interest that was in some sense in conflict with what the designers, or even perhaps humanity, wanted them to do.

>> And finishing with a fun thing: you're still a gamer. What does gaming teach us about the world, and what does gaming teach us about where these machines are headed?

>> Well, look, I think my chess background and my training in that, and then in other games subsequently, has been critical to how I do my work, both in business and in science. There are many things I've loved about games, including the creativity of making them, but just playing them, I think, is the best way to train your mind, because the best games, whether that's chess or Go or poker, are microcosms of something in the real world. In general, you don't get several practice goes in the real world at making a decision correctly in the moment. Maybe in real life you only get a dozen of those critical moments, but with games you can practice your decision-making capabilities as much as you want, within what is almost a simulation of the world. And as long as you take the games very seriously, so you put a lot of thought into your decisions, it really does train your decision-making and planning capabilities, in my opinion.

>> Now, you've pointed out that our squishy brains evolved to be hunter-gatherers, and yet we're facing a disruption that, as you put it to the Guardian, will be 10 times bigger than the Industrial Revolution and maybe 10 times faster. Are we facing a situation where most humans can't keep up, and maybe no human, including you, can keep up?

>> Well, the good news is, and I think my point on the hunter-gatherers was, look how adaptive our brains have been. We evolved to be hunter-gatherers, and yet here we are, sitting in our modern cities, in modern civilization, with all the technology around us, and pretty much the same human brain has been able to adapt to that. So I'm a really big believer in human ingenuity, and I think we're infinitely adaptable. Our brains are the only existence proof of general intelligence, perhaps, in the known universe so far. So we are general intelligences ourselves, and we should be able to infinitely adapt. There is a question about what kinds of technologies we can create post-AGI, brain-computer interfaces and other things, that some of us may choose to use in addition to our existing technologies, and that could be one way for us to keep up.

>> And as we say goodbye: you're a lifelong Liverpool fan. You've helped them with their analytics. How will AI affect and inform the World Cup here in North America?

>> Well, we've had a lot of teams approach us for help, too. And I have to try to be even-handed with that, but it's hard as a lifelong supporter of Liverpool. But I'm looking forward to trying to make it out here, maybe at least for the World Cup final.

>> But let's be serious. What will it change between now and then? It's a lifetime in AI between now and then, right?

>> Yeah. Well, what, in AI, or AI for sport, or just in...

>> Yes.

>> Yeah. Well, I mean, look, sport has an immense amount of data, and it's all about extreme elite performance. So it's actually a natural bedfellow for AI to come in and help optimize that process even further.

>> And without giving away a trade secret, what will it be able to do for a World Cup team?

>> Maybe score more headers from corners, you know. That's one of the things I think our system found out: precise positioning of the players.

>> Demis, thanks for making a...
