
Plug and Play SV November Summit 2025 : Enterprise & AI

By Plug and Play Tech Center

Summary

Key Takeaways

  • The AI infrastructure bubble is real: we're in the largest capital investment cycle in tech history, bigger than the internet, with AI spending exceeding $400B this year. It's an industrial bubble building physical assets like data centers that will power the next decade, as Jeff Bezos argues. [18:03], [22:30]
  • No ROI? An organizational gap: an MIT study found no tangible returns despite $30B spent, because success hinges on adapting structures, habits, and processes. AI maturity = technology × organizational readiness; even perfect tech fails without integration. [28:45], [30:02]
  • NYC AI Nexus launch: Plug and Play won the NYCEDC RFP for the AI Nexus in Manhattan to accelerate applied AI adoption across sectors, from SMBs to Fortune 500s, connecting 2,000 AI startups with businesses for pilots and jobs. [11:10], [14:40]
  • San Jose AI excellence: AI is the next internet, and San Jose won't miss it; a new center of excellence leverages talent density, SJSU as epicenter, and the Nvidia/Adobe ecosystem to turn the region into an AI leader. [00:15], [01:19]
  • New partners announced: Fujifilm, Ricoh, the San Jose Earthquakes, and the Applied AI Company join as partners; NTT DATA adds advisory and integration, with a virtual sandbox for risk-free AI piloting without data exposure. [07:56], [09:20]
  • JP Morgan's AI scale: JP Morgan invests $2B yearly in AI with 43K engineers and embedded teams reporting to the CEO, yielding $2B in productivity gains by starting with strategic hires, measuring relentlessly, and scaling what works. [37:32], [38:24]

Topics Covered

  • New York Leads Applied AI Deployment
  • AI Bubble Funds Infrastructure Overbuild
  • Organizational Readiness Caps AI Maturity
  • Train Humans to Supervise AI Agents
  • Build Custom Small Models for ROI

Full Transcript

AI is the next internet, and San Jose, the Bay Area, and California certainly didn't miss the internet. We're not going to miss AI. And by having a center of excellence here in San Jose, we can leverage the density of talent who can deliver the true power of artificial intelligence to transform the world in amazing ways we can't even imagine.

An AI center of excellence. We launched this effort in 2024; we essentially transform regions into recognized AI leaders by establishing strategic hubs in different localities that connect local innovation with global opportunities.

As our president says, and I'm going to quote her: San Jose State University is the epicenter of the future, and Plug and Play has been creating the future for decades now. So when you put two innovators and pioneers together, where innovation and entrepreneurship are baked into our DNA, I can only imagine the possibilities.

>> Scaled companies like Nvidia just down the road, or Adobe headquartered right here in our downtown, also want to see this innovation and startup ecosystem grow. So I think we have an opportunity to pull together virtually every sector: academia, philanthropy, the public sector. There's an opportunity to bring everyone together to create something really special in downtown San Jose.

With the AI market expected to reach more than a trillion dollars by the end of the decade, Plug and Play's AI centers of excellence are uniquely positioned to capture value. Having invested in more than 650 AI companies, we provide them with a real testing ground for their solutions. And for enterprises, this means getting to work directly with top AI companies.

But I remember through that journey, one of the people who was supportive the entire time was Saeed, and Plug and Play. We actually had our office space here. If you're running a startup, two things I'd recommend: try to get Plug and Play on the cap table, and specifically try to have Saeed involved in some capacity.

Before Plug and Play, it was very hard for us to get into big enterprise companies. And thanks to Plug and Play, we now have a full pipeline of the exact logos that we wanted to talk to.

>> At Archetype AI, we are solving the problem of helping people understand the complex physical world. Plug and Play helped us rapidly identify potential customers who could make good use of our physical AI solution.

>> The startups in all of this, they're the engine. They're the ones bringing cutting-edge AI solutions to the table. And we at Plug and Play are acting as the central connector: managing the ecosystem, facilitating the right vetted, pilot-ready introductions, running the accelerator programs, and making sure that all the pieces work in unison.

When we think about the Seattle industry cluster, we have Microsoft, Amazon, the Microsoft AI institute, Costco, Starbucks, and aerospace companies like Boeing. We would like to create a new industry in cooperation with Plug and Play. We would like to run a new incubation center there with Plug and Play. And the second aspect is the new fund.

>> Plug and Play has really interesting connections with companies. I think this allows a particular startup to stress-test their ideas, to see if they actually have a good product or if their technology has legs. It also provides them with a path toward scaling up a particular type of product, finding a client for this product, and perhaps an acquisition, if that is something important to a company.

All right. Welcome to day three, everyone. You made it. How's everyone feeling? Good. That doesn't sound like it. Sounds like people are tired and ready for a happy hour, but welcome to day three. You made it to the enterprise and AI session for our batch 18.

We're really happy to have you all here today. Thanks so much for making it. Just a couple of figures about this week: we had over 4,000 individual people attend Plug and Play over the last three days, 4,200-plus if you count Monday, where we announced our San Jose batch as well. That was amazing. It led to over 2,900 meetings through our app, and plenty of countless others that we did not capture. When you break down those 4,000 individuals who were here, that constitutes over a thousand different startups, 1,100 different corporations, over 500 investors, over 180 different government officials, and representatives from over 60 different universities. So thank you all very much for being here.

It makes this place a very special place. As you can tell, and as you've probably felt over the last couple of days, this network is really unique and really powerful. So I appreciate you all for being here.

As a quick intro, my name is Nate Henman. I'm the senior director of our enterprise and AI program. I've been with Plug and Play for about eight years, coming up on December 1st, and it's been an amazing ride. This program started out as a cybersecurity program, pivoted to enterprise B2B software, and now we're heavily, heavily focused on AI. Before we get started, I want to go through a quick agenda. You can listen to me do my opening remarks.

We're going to announce a new location, which we're really excited about. If you were here on Tuesday, you kind of know what we're talking about. If you were here last year, you know what our trend reports consist of and you know who's going to lead them. So we have the wonderful Amit Patel to give our trend presentation. Then we'll jump into our startup pitches, do some closing remarks, and then head into the networking session, where we can all blow off some steam.

So, with that, I first want to shout out my team. Thank you all so very much for putting all this together and for the hard work you bring to the table every single day. You make work fun. You make this all happen. And you don't get enough thanks, so thank you very much.

I'd also like to thank our partners.

These are our partners for the enterprise and AI vertical across North America. They all represent different industries, as you can see: we have banks, tractor companies, beverage companies, IT service integrators, and telecommunications providers. That's because enterprise AI is applicable to all industries. And with that, we're able to identify and evaluate technologies across multiple industries, but with the same use case.

And so with that, we're making very strong investments through our financial services fund that we just closed. And I'm very excited to announce additional new partners. First, Fujifilm, a multinational conglomerate that operates in a variety of different industries worldwide. Thank you very much for joining this year, Fujifilm; good to see you. Next we have Ricoh, a global information management and digital services company. We also partnered with the San Jose Earthquakes this year. If any of you have been to PayPal Park, you've probably seen our signage up: the PayPal suites are all now the Plug and Play suites. It's kind of a cool full circle; if you know our story, we were one of the first investors in PayPal, and now we have a stadium with our logo in it. And finally, the Applied AI Company out of Abu Dhabi, a supervised automation platform redefining knowledge work across regulated industries. So thank you very much to our new partners, and let's give a round of applause to our partners and the team.

One partner I'd like to highlight here is NTT DATA. If you're not familiar, NTT DATA is a multinational IT services and integration provider out of Japan. The majority of their business is not in Japan, however. What we're actually working on with NTT right now is a new advisory and integration partnership. So not only are they working with the startups in our ecosystem, as you are, for either internal adoption or investment, they're also working with our corporate partners and providing free advisory services.

A lot of our partners come to the table and say, "Hey, we'd love to use AI, but not in our environment. We don't want to put our data at risk. So how can we do this? What's the best strategy for deploying AI across the organization?" At Plug and Play, we advise on that, but NTT is now bringing a bit more firepower with their virtual sandbox environment. Our partners can rapidly pilot, test, and adopt solutions in front of their entire teams without having to put any of their data at risk. We're also hosting several innovation events. We have an AI security series that we're hosting with NTT; we held one about two months ago in San Jose. The next one will be more on the regulatory side and will be in DC, hopefully in Q1.

Next, I'd like to show you our map. The logos in yellow are the current existing offices for the AI program. The blue represent the offices that we're hoping to open in the next 12 to 18 months. With this global footprint, we're able to identify technologies across the world, bring them together on one platform, and introduce them to all of our partners, investors, mentors, and anyone in our network. This platform is incredibly powerful when it comes to AI strategy and global deployment. And with this, we have a board of directors and advisers that will meet annually through our AI networking event.

I'd like to make another announcement. This is our new office, which we're incredibly excited about, thanks to the NYCEDC. We were recently awarded the RFP for the AI Nexus platform in Manhattan. And here to talk a little bit more about it is Daria Seagull from the New York City Economic Development Corporation. Daria, if you could join me. Round of applause.

>> Thank you, everyone. You're going to have to bear with me: I'm going to talk about New York for about five minutes. So apologies, but we're pretty excited about this partnership. I'm Daria Seagull, a senior vice president at the New York City Economic Development Corporation. For those of you that don't know us, which I assume is most of you, we're the City of New York's economic development arm. Our mission is to grow and diversify New York City's economy so it works for all New Yorkers.

Supporting the growth of our innovation sectors has been a key part of our mission, and of building the jobs of our future. New York City has been at the forefront of innovation since its start, from Thomas Edison building the first electric grid in lower Manhattan to Nikola Tesla opening laboratories in the city. We're a city of dreamers and innovators and creators. Over the last decade, New York has really cemented its place as a global hub for the tech sector. We're now ranked number two globally as a tech startup ecosystem, behind only the Bay Area.

And since 2022, more than 500,000 recent graduates have flocked to New York City for school and stayed following their graduation. As we recently heard from some real estate folks, New York City is dominating in attracting college graduates with our diverse, unmatched range of industries and career opportunities. We've produced STEM PhDs three times faster than the rest of the country, and all of big tech is in New York City. Over 300,000 New Yorkers living in New York City work in the tech sector, and today it makes up about 7% of our economy.

And now we've emerged as the applied AI capital of the world. New York is a city of builders and problem solvers, from finance to fashion, media to medicine. And the world looks to New York not just for ideas, but for how to turn those ideas into impact. So there's no place more exciting or important to deliver AI than New York City, and that's why we're so excited to partner with Plug and Play. Now, I've talked a lot about what's going on in New York City at large, but consider this: we have over 2,000 AI startups, 25,000 startups overall, and 40,000 AI-ready workers. Put simply, the talent, the capital, the use-case diversity, and the appetite are all here in New York. That led us to develop our AI strategy.

We have three key goals for AI in New York City. One is that we want to advance New York City as a global leader in applied AI. We're not just focused on foundational research; our vision is, again, applied AI, where innovation meets our real-world sectors. Second is to grow a dynamic AI ecosystem, with business creation and partnerships across the full value chain, and that's really what the AI Nexus is about, which I'll talk about in a second. And thirdly is developing an AI-ready workforce and ensuring equity in access to the benefits of AI across New York City. We're doing that through training with our public university system, through our library system, and a lot of investments we're making in K-12 and in upskilling New Yorkers.

Okay. So now what we're really here to talk about is the AI Nexus. This is a key component of our strategy that we're so excited to be partnering with Plug and Play on. It is specifically designed to accelerate the practical adoption of applied AI across critical sectors of our economy, from small and medium-sized businesses up to the Fortune 500s. So why is this important? One of the biggest challenges of the AI era is deploying AI in a real business, in an organization that may not have deep AI experience, in a way that drives value and creates impact. The AI Nexus will help connect startups and AI founders with New York City-based businesses. It will support experimentation, pilots, proofs of concept, and adoption. And this is essential if we want to ensure that AI delivers economic impact across New York: creating jobs, productivity gains, and inclusive benefits across our city.

So just to summarize: as we look to the next three to five years, we want to make sure New York City leads the world across the sectors where our urban, economic, and cultural strengths give us an advantage, and that's financial services, media and entertainment, fashion and retail, health tech, climate tech; the list goes on, and it can all be powered by AI. We aim to create tens of thousands of AI-augmented jobs, not only for engineers and data scientists, but for domain experts who know those sectors and can apply and work alongside AI. We will scale AI adoption across the full business spectrum. And most critically, we will ensure workforce equity, training, and upskilling programs for New Yorkers historically underrepresented in tech, so that AI is a technology that doesn't widen the gaps but helps to close them. So we're excited to collaborate with Plug and Play. We're excited to collaborate with all of you, with academia, with startups, with industry, with policymakers, to ensure that the next chapter of New York City is focused on AI. So, thank you so much for bearing with me. Appreciate it.

>> That was a long spiel.

>> Thank you, Daria. That was perfect. Here's to the next chapter of New York City; we're very excited to be a part of it. Up next, you guys may know him: the infamous Amit Patel. Please join me in welcoming him to the stage with a round of applause. Amit, the floor is yours.

>> Okay. So, you guys can guess what I'm going to be talking about. Good afternoon, everyone. I'm Amit, a partner at the firm leading our enterprise and AI efforts, and I also co-lead our fintech and AI fund. In the previous three years, I've usually presented our AI investment thesis. Today, however, I'm going to answer two questions that keep surfacing in many of our partner conversations. First, are we in an AI bubble that's about to burst? And two, if we're not, why aren't we seeing the ROI yet, and what do we do about it?

You know, the answer to both of these questions comes down to the same thing: we're building infrastructure faster than we're building organizational capacity. We've moved beyond "should we do AI?" to "why isn't our AI working?", and the answer to the second question has nothing to do with technology. But let's start with the bubble question first.

Kai Wu from Sparkline Capital shows that we're in the middle of the largest capital investment cycle in tech history: bigger than the internet, bigger than the cloud, even bigger than the railroads when accounting for depreciation. This year alone, AI spending is expected to exceed $400 billion. But here's the kicker: JP Morgan estimates that justifying a 10% return on modeled AI investments through 2030 will require around $650 billion of annual revenue in perpetuity, which equates to about $400 from every current iPhone user. So the question isn't whether the spending is unprecedented; it clearly is. The question is whether we will ever actually generate the returns to justify the spend.
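A quick back-of-envelope check of that per-user figure, assuming a commonly cited iPhone installed base of roughly 1.5 billion devices (the user count is an assumption here, not a number from the talk):

```python
# Back-of-envelope check of the JP Morgan figure quoted above.
required_revenue = 650e9  # annual revenue needed for a 10% return through 2030
iphone_users = 1.5e9      # assumed installed base (~1.5B active iPhones)

print(required_revenue / iphone_users)  # ~433.33 -> "about $400" per user
```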

Excitement around AI has caused many investors to view these investments favorably. That's why roughly 75% of this year's S&P 500 returns have been driven by just 10 companies, all of them hyperscalers, all of them doubling down on AI infrastructure. AI-related capex added more to GDP growth than consumer spending did for the first time ever, and a Harvard economist found that 92% of GDP growth in the first half of this year came from data center construction and IT investments; without that, US GDP growth would have essentially been zero. So AI has literally become the engine of economic growth, and big tech is driving nearly all of the profit growth and GDP expansion through its spending.

Now, when you trace the flow of capital, here's what's actually happening under the hood. Nvidia invests in OpenAI; OpenAI will use that cash to buy Nvidia chips. AMD offers OpenAI cheap equity to win their business. CoreWeave rents out Nvidia GPUs to OpenAI while Nvidia guarantees any unused capacity. Oracle plans to build data centers for OpenAI, and will use Nvidia chips in those same facilities. So the same dollars are circulating between the same players. This is what we call the circular economy of AI. It's not malicious; it's how ecosystems form in their early stages. But it means capital is circulating faster than value creation.

And the big risk here is that valuations are rising ahead of true productivity. For years, hyperscalers thrived on asset-light models. Today they're building factories, grids, and chip supply chains. The firms that once scaled on code are now scaling on concrete. This transition is already showing up in the financials, to the point where it's actually eroding free cash flow positions: you've seen Amazon, Meta, and Google now issuing bonds to fund their spending. And Wu also shows that capital-intensive firms generally underperform their asset-light peers, not just across industries but within industries as well.

Now, you've probably read the warning signs: this looks a lot like the dot-com bubble, and you'd be half right. I think the key difference today is that in the dot-com era, companies were buying each other's banner ads to inflate metrics. Today, they're buying GPUs, data centers, power grids: physical assets with lasting value. And before we dismiss this as a bubble, let's remember that we've actually seen this before. There were many people who thought the internet was hyped, that it was a fad. They were wrong, but they had reasons to be skeptical at the time. It's tempting to call this an AI bubble, but speculation and deployment are two sides of the same industrial revolution.

The economist Carlota Perez studied every major technological revolution, from steam to the railroads to the internet, and she found that they all followed the same S-curve pattern that many of you may recognize today. She showed that every major breakthrough triggered speculation, followed by a crash, and then widespread deployment. So the dot-com bubble wasn't an ending; it was a funding mechanism for the modern internet. It paved the way for broadband, cloud, and the smartphone.

And even Jeff Bezos, who's not known to be overly cautious, said recently at Italian Tech Week that yes, AI is a bubble, but an industrial one, not a financial one, and we will overbuild. And that's the point. But let's listen to him. Volume up, please.

>> What happens when people get very excited, as they are today about artificial intelligence, for example, is that every experiment gets funded. Every company gets funded: the good ideas and the bad ideas. And investors have a hard time in the middle of this excitement distinguishing between the good ideas and the bad ideas. So that's also probably happening today. But it doesn't mean that anything that's happening isn't real. AI is real, and it is going to change every industry. In fact, it's a very unusual technology in that regard, in that it's a horizontal enabling layer. But the great thing about industrial bubbles, and this is a kind of industrial bubble as opposed to a financial bubble, is that they're not nearly as bad. They could even be good, because when the dust settles and you see who the winners are, society benefits from those inventions.

And so bubbles are how society overbuilds its next generation of infrastructure. There will probably be some incredible winners and some incredible losers, given the capital that's involved. And unlike 2000, today's investments are building physical infrastructure: data centers, energy grids, chip fabs. Real assets that will power the next decade. This is how economies front-load capability for century-defining technologies. We're literally replacing office towers with server farms; data shows that new data center construction is almost outpacing new commercial real estate buildout. And we're seeing the largest coordinated industrial expansion since the internet. Unlike the dot-com era, today's AI investments aren't propping up virtual traffic numbers. They're financing energy infrastructure, physical and computational infrastructure that will underpin nearly every industry. I think most of the folks in this room would agree that we at the very least need to modernize our energy grids and our energy infrastructure, regardless of how AI plays out.

And so both sides have valid arguments. Yes, we are seeing financial engineering and a circular AI economy driving valuations skyward without creating any equivalent short-term value. But we need to look beyond the short term. Over the long term, we're witnessing the largest infrastructure buildout in history. And like railroads, electricity, and broadband before it, today's overspend may look excessive but inevitable in hindsight. Valuations may correct, but the value created may exceed expectations.

So when people ask me, "Is this an AI bubble?", honestly, I don't think it matters for this audience. I'm not here to advise you on how to invest in the stock market; we'll grab beer and pizza one day and I can tell you if I'm buying the dip. Today is really about technology, innovation, and growth. More specifically, it's about how enterprises like yourselves can capitalize on this buildout and turn it into competitive advantage, because we know it can deliver results. Amazon saved $260 million recoding 30,000 apps using generative AI. GE Healthcare improved cancer therapy predictions by 80%. So AI isn't just a line item on a budget; it's a productivity multiplier when deployed correctly. The winners of the next decade will turn the current capex spending into their own capability multiplier. But before we get into organizational capabilities, let's listen to the former director of AI at Tesla and founding member of OpenAI on what he has to say about AI agents today.

>> And what do you think will take a decade to accomplish? What are the bottlenecks?

>> Well, actually making it work. In my mind, when you're talking about an agent, what the labs have in mind, and maybe what I have in mind as well, is that you should think of it almost like an employee or an intern that you would hire to work with you. So, for example, you work with some employees here. When would you prefer to have an agent like Claude or Codex do that work? Currently, of course, they can't. What would it take for them to be able to do that? Why don't you do it today?

>> And the reason you don't do it today is because they just don't work. They don't have enough intelligence. They're not multimodal enough. They can't do computer use and all this kind of stuff. And they don't do a lot of the things you've alluded to earlier: they don't have continual learning; you can't just tell them something and they'll remember it. They're just cognitively lacking, and it's just not working. I just think it will take about a decade to work through all of those issues.

>> Interesting.

And so here's a reality that nobody wants to admit: we could have AGI tomorrow, and most firms in this room wouldn't know how to consume it. Why? Because we're still organized around old ways of working: hierarchical approval chains in a world that needs rapid iteration, function-based teams in a world that needs cross-functional orchestration. The fact that we're still probably a few years away from AGI might not be such a bad thing, because it gives companies time to experiment, time to change, time to be ready to incorporate AGI once we actually get there. So this is your window to get ready. The AI boom might correct, but the infrastructure, the data, and the organizational rewiring it leaves behind will define who wins the next decade. You may not be the one spending $400 billion, but you can be the one to turn it into value. So I say this: forget the market cycle. Focus on the capability cycle. Technological progress is going to be inevitable, but readiness is optional.

Now, AI is starting to feel a lot like nutrition science: sometimes it's bad for you, sometimes good. I remember we went from saying coffee is bad for you to saying coffee is great for longevity; you've probably heard me say this a few times this week while I'm drinking coffee. We've seen similar swings in AI ROI studies: some say AI is transformative, while others say it has underwhelmed. For today's conversation, I want to focus on the MIT study, because it generated a lot of buzz and many of you have actually asked me about it. MIT found that many corporations had no tangible returns despite spending more than $30 billion in collective investments.

But there is more to the story, and this is critical. The problem isn't the technology. It's not that the models don't work. The problem is organizational. This is where we uncover the second question: why are some companies not seeing ROI while others are getting massive returns? The MIT study found something that should fundamentally change how you think about AI deployment: the generative AI divide is a reflection of a learning and implementation gap within organizations, not a technology gap.

Success hinges on an organization's willingness to adapt its structures, its habits, and its processes around the new capabilities that AI provides. So the bottleneck isn't the model; it's the organization. Companies don't lack algorithms. They lack integration, workflow redesign, and leadership alignment. Pilots often fail because integration proves harder than anticipated, and companies are expecting immediate results without addressing the necessary fundamental structural and cultural changes.

changes. And so if you take nothing else from today, remember this equation because it took me years to kind of figure it out because this is the reason why you're

not seeing returns. AI maturity is a function of technology and organizational readiness. Think about

organizational readiness. Think about what this means mathematically. You can

have the best technology in the world 10 out of 10, but if your organizational factors are a zero, your maturity is going to be zero. Your ROI is going to

be zero. It doesn't matter how good your

be zero. It doesn't matter how good your GPT cla or your custom models are. If

your organizations can't integrate it, you can't redesign your workflows around it, and you don't have the culture to adopt it, it's going to fail. And so
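A toy illustration of why the relationship is multiplicative rather than additive; the 0-10 scales and the numbers below are illustrative, not from the talk:

```python
# AI maturity as a product: a zero on either axis zeroes the whole result.
def ai_maturity(technology: float, org_readiness: float) -> float:
    """Both inputs on an illustrative 0-10 scale."""
    return technology * org_readiness

print(ai_maturity(10, 0))  # 0  -> perfect tech, zero readiness: no ROI
print(ai_maturity(7, 6))   # 42 -> decent tech in a ready org wins over
print(ai_maturity(10, 3))  # 30 -> better tech in an unready one
```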

So whether you're in insurance, finance, retail, or healthcare, I don't care: your P&L now depends on how you convert the current AI infrastructure into your own capability advantage. Most firms are still organized around old structures, so even if AGI appeared tomorrow, enterprises couldn't harness it efficiently, because the organizational plumbing isn't designed for it. But that's not a failure; it's an opportunity. This decade is going to be about rewiring your org design, your infrastructure, and your ability to experiment, then scaling what works. This is also why I'm excited about the AI Nexus program in New York City, because this is exactly what we want to focus on: how do enterprises like yourselves and the small business community apply AI?

Now, let's walk through each of these. We'll start with organizational design first, because that's where most companies struggle. Two years ago on this stage, I said that talent, not technology, would be the biggest barrier to adoption, and that is still true today. I mean, it finally pays off to be a nerd. You have three options around talent strategy. First, you can pay top dollar to acquire top talent; outside of the hyperscalers, companies like JP Morgan, Capital One, and Allianz have done a fantastic job hiring tech talent, according to Evident Insights. Second, you can acqui-hire to scoop up large teams. I've said this many times at Summit, and I'll say it again because it's still true: there are many high-flying AI companies raising rounds at crazy valuations, and most of them are not going to make it. This will be a great opportunity for enterprises like yourselves to scoop up teams at a fraction of the price of what they raised.

Third is upskilling at scale. But here's the catch: you're not just teaching people to build AI, you're training them to supervise AI. Let me give you a concrete example of why this matters. Deloitte had to refund a government contract because they submitted an AI-generated report riddled with errors. Now, the problem wasn't that the report was AI-generated. The problem was that no one checked the output. So I don't think any enterprise is going to allow autonomous agents to run wild in their environment. Not in finance, not in insurance, and not in healthcare; nowhere where mistakes can have serious consequences. You're going to want humans in the loop at every critical juncture to make sure the work is being done well and in compliance. So you're not training employees just to build AI, you're training them to manage it. Ethan Mollick from Wharton argues that working with AI is similar to managing an employee, because some of the most important prompt engineering skills are essentially managerial skills, not technical skills: you need to set clear expectations, provide context and background, review work critically, know when to push back, and recognize when the output is plausible but wrong. The winners will be the organizations that train their people to be excellent AI supervisors, not AI replacements.
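As a rough sketch of what "supervising AI" can look like in practice, here is a minimal human-in-the-loop review gate. The prompt fields, function names, and flow are illustrative assumptions, not anything described in the talk:

```python
# Minimal human-in-the-loop gate: the model drafts, a person approves.
PROMPT = """You are drafting a report for a regulated client.
Expectations: cite a source for every claim, flag anything you are
unsure about instead of guessing, and keep it under 300 words.
Context: {context}
Task: {task}"""

def run_with_review(generate, context: str, task: str) -> str:
    """`generate` is any text-in/text-out LLM call (placeholder)."""
    draft = generate(PROMPT.format(context=context, task=task))
    print(draft)
    # No output ships without explicit human sign-off.
    if input("Approve this draft? [y/N] ").strip().lower() != "y":
        raise RuntimeError("Draft rejected -- revise before release.")
    return draft
```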

Now, the biggest mistake is trying to just add AI to your existing structures. The technology enables fundamentally different ways of coordinating work, and organizations that restructure to take advantage of that will have an enormous benefit over those that don't. Benedict Evans said that first we make the new tool fit the work, and then we change the work to fit the tool. So what does readiness look like here? First, consider redesigning around capabilities, not functions. Instead of organizing around HR, operations, and finance, think customer intelligence, autonomous operations, and product innovation. That's where AI creates compounding advantage, because AI doesn't respect functional boundaries.

Second is the rise of the chief AI officer: not as a pilot title, but a real seat at the table, a peer to the CTO and CIO, someone who bridges business ambitions with model governance and data strategy. And here is the key: business unit leaders should become the product owners of their AI capabilities. They're accountable for the AI ROI in their domain. They build hybrid teams composed of AI specialists and domain experts plus engineers, less focused on headcount management and more on capability development. According to a McKinsey study, leaders who owned AI personally were three times more likely to scale it. It makes sense: if it's delegated to the IT team, it becomes a technology project; if it's owned by the business, it becomes a business transformation. The same study emphasized that the companies who scaled AI successfully thought bigger: they didn't just run pilots, they rebuilt workflows, they set growth goals, and they invested real budgets, not just proof-of-concept money.

Now fourth, and this is critical for execution: you should have a hybrid AI operating model. You centralize what must be consistent: your data, your architecture, governance, talent development. But you distribute what must be contextual to the business units: use case identification, implementation, domain-specific model fine-tuning, change management and adoption, and day-to-day AI-human workflow design. Think of it as a hub-and-spoke model. Core capabilities like compliance, risk management, and AI and data platforms are managed centrally by the hub, but the business units, the spokes in this case, innovate at the edges, with autonomy to deploy AI for their unique challenges. This balances agility with governance, innovation with safety, and speed with quality. Local teams can experiment and move fast, and when they find something that works, it gets promoted and scaled across the enterprise through the central hub.
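One way to picture the hub-and-spoke split is as an explicit routing table of responsibilities. The structure below is a sketch of the model described above, with item names of our own choosing:

```python
# Hub-and-spoke AI operating model: centralize what must be consistent,
# distribute what must be contextual.
HUB = {  # owned centrally
    "data_platform", "model_governance", "compliance_and_risk",
    "architecture_standards", "talent_development",
}
SPOKE = {  # owned by each business unit
    "use_case_identification", "implementation", "domain_fine_tuning",
    "change_management", "daily_workflow_design",
}

def owner(responsibility: str) -> str:
    if responsibility in HUB:
        return "central hub"
    if responsibility in SPOKE:
        return "business unit (spoke)"
    return "unassigned -- decide explicitly"

print(owner("model_governance"))         # central hub
print(owner("use_case_identification"))  # business unit (spoke)
```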

central hub. Now, one more critical piece. Create AI translators as a formal

piece. Create AI translators as a formal role. The critical shortage isn't AI

role. The critical shortage isn't AI engineers. It's people who know how to

engineers. It's people who know how to bridge the between the business and technology, right? These are people who

technology, right? These are people who understand both the business problem and know what AI can do. They speak both languages. And here's how you can

languages. And here's how you can structure this, right? Distribute them

throughout the organization, not centralizing it. So perhaps maybe one AI

centralizing it. So perhaps maybe one AI translator for every kind of 50 to 100 knowledge workers, right? Their

responsibilities should be to include or identify automation opportunities, prototyping solutions and then training colleagues on AI tools. Make it a career path through a rotational program

because this will ensure this will build AI literacy at scale and prevent it from becoming a bottleneck. Now, and that's

Now, that's what I've said up until now. This brings me to what I think is the most powerful example of execution at scale. Back in 2018, JP Morgan brought in Dr. Manuela Veloso, the former head of machine learning at Carnegie Mellon. At the time, JP Morgan didn't even have an AI research division. Today, her team is embedded across the enterprise. The person leading AI adoption at the company, Teresa Heitsenrether, reports directly to the CEO, Jamie Dimon. JP Morgan has over 43,000 engineers, 900 data scientists, and 200 AI researchers. The bank spends about 11% of its $18 billion tech budget on AI; that's roughly $2 billion per year. They're applying AI to everything from customer service to research to financial modeling. What's impressive isn't the technology; it's the architecture around it. JP Morgan didn't just launch products. They trained the employees. They built the infrastructure, the safeguards, the evaluation frameworks. That's the competitive advantage, and the results follow: JP Morgan itself has said that they've seen $2 billion in productivity gains.

But here's the most important detail. JP Morgan didn't start with 43,000 engineers and a $2 billion budget. They started with strategic hires. They built incrementally. They measured relentlessly. And they scaled what worked. You don't need to match their scale; you need to match their discipline. And I want to give a shout-out to Al from CoBank, a fintech partner of ours, who from what I've heard has established a solid framework for their AI COE.

Now, most companies will jump from AI excitement to deployment without understanding how work actually flows. Until we measure and map work itself, not the org chart, not the process documentation, but the actual work, AI will keep running on hope instead of evidence. Agent deployment requires visibility, role clarity, and coordination. A digital twin of your operating model shows exactly where your team stands and what comes next.

Companies want to reimagine workflows in the age of AI. But how do you do that without a deep understanding of how you actually operate as a business today? If you don't have the metadata behind your operating model, you're going to struggle to measure the post-AI ROI; you can't know the "after" without knowing the "before". And so, create a system of record of work that captures how work is actually done across the entire organization.

Every company today has a system of record for finance, HR, and customers. You know exactly what's in your bank account, you know exactly who your customers are, and you know exactly who's on your payroll. I hope so, anyway. But there is no system of record for how work actually gets done, and that's what tools like Scanner are building: a digital metadata layer that captures how people and AI interact across systems. It tracks actual workflows. It shows where humans add value and where there can be bottlenecks. It reveals where AI can be inserted effectively. This kind of work measurement provides you with all the analytics you need to make decisions about how to deploy AI, because if you cannot measure how work flows, you cannot redesign it.
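To make the idea of a "system of record of work" concrete, here is a minimal sketch of what one logged work event might contain. The schema is our illustration, not any vendor's actual data model:

```python
# One event in a hypothetical system of record of work: who (human or AI)
# did what, in which system, as part of which workflow, and for how long.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WorkEvent:
    actor: str        # e.g. "analyst_17" or "invoice_agent_v2"
    actor_type: str   # "human" or "ai"
    system: str       # e.g. "ERP", "CRM", "email"
    action: str       # e.g. "approve_invoice"
    workflow: str     # e.g. "accounts_payable"
    started: datetime
    ended: datetime

    @property
    def duration_s(self) -> float:
        return (self.ended - self.started).total_seconds()

e = WorkEvent("invoice_agent_v2", "ai", "ERP", "approve_invoice",
              "accounts_payable", datetime(2025, 11, 1, 9, 0),
              datetime(2025, 11, 1, 9, 2))
print(e.duration_s)  # 120.0

# Aggregating these events per workflow step is what exposes bottlenecks
# and the hand-offs where an AI agent could be inserted effectively.
```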

Now let's talk about deploying AI. You have three types of agentic platforms, and choosing the right one depends on who your end users are and what you're trying to accomplish. First, you've got agentic development tools; they typically use open-source frameworks to help you build fully custom AI agents from the ground up: maximum complexity, maximum flexibility. Second, you've got no-code and low-code horizontal tools. This comes up constantly in the meetings I've observed between corporates and agentic AI startups: what is the right level of code? No-code is great for business users because it's easy to use without any developer involvement, but you're limited to whatever capabilities the platform provides. Low-code is built with a developer-centric mindset, so it's far more customizable and flexible; you can adapt it to your workflows, it has more robust security features, and it can scale across different workflows. Companies like Thread AI excel here; I know they did a demo earlier for the fintech expo. Third, you have vertical agents: specialized agents at the application layer with domain expertise that work well with industry-specific processes and deep integrations. Now, most of you will end up with a portfolio approach: no-code for quick wins in non-critical areas, low-code for important but non-differentiating workflows, and custom development only for your true competitive moats. The key is knowing which is which.

So let me give you a concrete example of a vertical agent that makes sense. Arrived has built an agentic AI platform purpose-built for cybersecurity that lets you design, deploy, and scale security operations. This tool makes sense when you realize how understaffed and underbudgeted security teams can be, because security is sometimes seen as an afterthought: no one cares until something bad happens, and when it does, the CISO is typically on the chopping block. They work with some of the major enterprises from our ecosystem, and they will present today; it's been incredible to see their growth. This is the kind of vertical solution that solves a real problem for a specific domain.

Now, we've talked about buy versus build before. Last year, we hypothesized that companies would lean toward building more than buying, and that's exactly what we saw, but with a nuance: a lot of partners were buying, but they were buying developer tools or middleware solutions, not end solutions, because they wanted more control over the final implementation. Interestingly, the recent MIT report had a fascinating finding: purchasing AI tools from specialized vendors and building partnerships succeeded 67% of the time, while internal builds succeeded only one-third as often. When deciding between buying and building, you've got a few things to consider. Time to market: do you need this now, or can it wait? Resources and talent: do you have the money and the people to deploy large AI systems? And control: how much customization and integration do you actually need? Buying is going to get you to market faster, and it's probably going to be more cost-effective right out of the gate, but you sacrifice flexibility. Building requires good data quality and the right talent, but you get greater flexibility to customize to all of your workflows and systems; it just takes longer.
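Those three criteria can be read as a simple scorecard. The weights and cutoff below are arbitrary illustrations of the trade-off, not a formula from the talk:

```python
# Toy buy-vs-build scorecard over the three criteria above.
def buy_or_build(need_it_now: bool, have_talent_and_budget: bool,
                 need_deep_customization: bool) -> str:
    score = 0
    score += 1 if need_it_now else -1               # urgency favors buying
    score += -1 if have_talent_and_budget else 1    # scarce talent favors buying
    score += -2 if need_deep_customization else 1   # control favors building
    return "buy" if score > 0 else "build"

print(buy_or_build(True, False, False))  # buy: urgent, thin team, generic need
print(buy_or_build(False, True, True))   # build: no rush, strong team, a moat
```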

If we look at what companies are buying for specific use cases, software development has been the most popular category in terms of both funding and user adoption; it's exploded. Tools like GitHub Copilot and Cursor are now standard in many dev shops. I'm not sure if you saw this recently, but Cursor grew from $1 million to $1 billion in annualized revenue in two years, because developers are 30 to 50% more productive with these tools. The ROI is clear, and it's immediate.

We're also seeing the rise of generative engine optimization. This has probably been the most popular request we've gotten from our partners this year, because companies care deeply about how their brands show up in AI search results. When a user asks ChatGPT what the best credit card is, banks want their products to be mentioned. You can also buy tools focused on specific business functions. In terms of requests from partners, I would say this year the requests have been fairly evenly spread. What's interesting is seeing HR and finance make a comeback after being quiet for a while.

You know, here's a question I often get from CXOs: where should we start? I've gotten this question at least three times in the last four weeks from different CEOs. While sales and marketing may seem more appealing, because everybody wants revenue growth, I've consistently emphasized that you should go after the boring functions first, because that's where we've seen real, demonstrable impact. Consider even starting with your finance team. Your finance team already knows how to book gains and measure value; they understand ROI calculations better than anyone else in the company. Plus, finance workflows are typically highly structured and repetitive, exactly where AI excels: invoice processing, reconciliations, financial close. They aren't glamorous, but the productivity gains are measurable and immediate. And interestingly, we've actually been getting a lot of agentic AI requests directly from the CFO's office. So don't chase the sexy use cases. Chase the ones that will build internal credibility and free up budget for the next phase.

Now, if you're going to build, you need to be honest about what it takes. Companies need to modernize their infrastructure and develop a composable architecture. Palantir CEO Alex Karp said that the real power comes when LLMs are paired with customized infrastructure built over the years; Palantir was well positioned to adopt LLMs quickly because they had already built the underlying infrastructure, or had the organizational muscle to build it quickly. We shared the modern data stack market map on the left-hand side two years ago, and we've talked about its evolution to handle unstructured data sets and the immense value that could unlock. But based on conversations with our partners, there are still major data quality and management issues. The Financial Times reported that 70% of IT budgets are spent on managing legacy systems, that legacy systems cause six- to 18-month delays in rolling out new features, and that around 40% of a software developer's time is spent managing technical debt. This becomes a much more serious issue when you're trying to implement agentic AI: if your organization doesn't have clean data and quality governance policies, AI agents cannot be trusted to execute autonomous functions and make real business decisions. So companies need to spend considerable time here removing technical debt and creating infrastructure built on modern data pipelines. This is a massive undertaking, but it needs to happen if you want to use AI effectively. You can't build AI on duct tape over legacy systems. The infrastructure has to come first. Before I get into the right-hand side, let's listen to Jensen Huang.

>> Last question. If you were a CIO in the audience with $10 billion to allocate toward AI in the coming years, what would you invest it in?

>> I would right away experiment with building your own AI. The fact of the matter is, we take pride in onboarding employees: the method by which you do so, the culture by which you bring them into the philosophies of your company, the operating methods, the practices that make your company what it is, the collection of data and knowledge that you've embodied over time and make accessible to them. That is what defined a company in the past. A company of the future includes that, of course, but you need to do the same for AI. You need to onboard digital employees, to onboard AI employees. There's a methodology for onboarding AI employees; we call it fine-tuning, but it's basically teaching them the culture, the knowledge, the skills, the evaluation methods. The entire flywheel of your agentic employee is something that you need to go and learn how to do. I tell my CIO, our company's IT department, that they're going to be the HR department of agentic AI in the future, the HR department of digital employees, and those digital employees are going to work with our biological ones, of course. That's going to be the shape of our company in the future. And so if you get a chance to do that, I would do it right away.

>> So build your own AI. At the moment, OpenAI is used by over 90% of Fortune 500 companies, and most are building applications around it. It's comfortable, right? It's safe. Nobody gets fired for using OpenAI. But we've talked about the possibilities of a multi-model architecture in the past, and I want to emphasize it again. Some of the best ways to accomplish business tasks with generative AI systems today use many models in concert. So enterprises should think about a multi-model architecture and not put all their eggs in one basket.
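As a minimal sketch of what such a multi-model setup can look like, here is a simple task router; the model names in ROUTES and the call_model helper are illustrative placeholders, not any specific vendor's API:

```python
# A task router over several models: send each task type to the cheapest
# model that handles it well, and keep a large general model as fallback.
# The ROUTES table and call_model() are illustrative placeholders.

ROUTES = {
    "classify_ticket": "small-finetuned-classifier",  # cheap, specialized
    "extract_invoice": "small-finetuned-extractor",   # cheap, specialized
    "draft_contract": "large-frontier-model",         # rare, high-stakes
}

def call_model(model: str, prompt: str) -> str:
    """Placeholder for your inference client (vLLM, a hosted API, etc.)."""
    raise NotImplementedError

def run_task(task: str, prompt: str) -> str:
    # Unrouted tasks fall back to the large general-purpose model.
    model = ROUTES.get(task, "large-frontier-model")
    return call_model(model, prompt)
```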

Engineering this kind of workflow with proprietary models can quickly get expensive. This is where open-source and small language models come in. The innovation in specialized and open-source models has been extremely impressive, and it's accelerating, so we can look at both of these. Enterprises should seriously consider open source for several reasons. First, open models are now just as performant as closed models for many tasks; the gap has closed dramatically. Using open source also allows for greater control, avoids vendor lock-in, and provides cost efficiency at scale. Uber fine-tunes open-source models to improve things like Uber Eats recommendations and search. We've also seen great advancements in small language models. There's ample evidence now that smaller fine-tuned models can actually outperform larger general-purpose models on the tasks for which they've been specialized, and by a pretty decent margin. And since small models require far less compute, you save on inference and reduce latency for users. The economics are compelling.

So why aren't enterprises using or developing them today? Based on my conversations with heads of enterprise architecture, it's because they require a lot of operational overhead, and companies aren't ready to deploy resources to it. There are other perceived barriers too, such as the extra development cost compared to an out-of-the-box LLM, and the misconception that you need big data for successful fine-tuning. But here's the truth: both hyperscalers and many of the successful scale-ups are using them today. Meta is using SLMs for ad delivery. Airbnb is using them for customer service. You just don't realize it, because engineers are getting really good at stitching together smaller, simpler AIs. The math is clear here: for repetitive, specialized tasks, smaller models are sufficiently powerful, suitable, and can be more economical, according to research by NVIDIA and Georgia Tech.
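To make the small-model economics concrete, here is a minimal sketch of fine-tuning a small open-source model with LoRA using Hugging Face transformers and peft; the model name, data file, and hyperparameters are illustrative assumptions, not a recipe from the talk:

```python
# Fine-tuning a small open-source model with LoRA via Hugging Face
# transformers + peft. Model name, data file, and hyperparameters are
# illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-3.2-1B"  # any small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains a few million adapter weights instead of the whole model,
# which is what keeps specialized training and inference cheap.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"]))

data = load_dataset("json", data_files="task_examples.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-adapter", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```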

We've been saying this for two years now. It hasn't become mainstream within our enterprise ecosystem yet, but I think it will. And this is why we're excited by platforms like what Umei is building. They eliminate the custom AI development cost, which has been the main barrier. Instead of taking months to develop a custom model, it takes hours, and you don't need a massive ML team or to become AI researchers. This is the bridge between buying everything and building everything, the middle path that gives you customization without the traditional overhead. And remember, this actually fits into the hybrid operating model: central teams manage the model infrastructure and governance, and business units fine-tune and deploy models for their specific needs. If you're looking to have more control and to develop IP around models trained on your proprietary data sets, the economics of custom models make sense.

Now let's talk about experimentation, because you can't learn without having a safe place to test. Companies should be investing in a sandbox environment for rapid testing and prototyping, and there are two main ways to do this. One, you can work with a sandbox-as-a-service provider; they'll set up an isolated environment where your teams can experiment without risking production systems. Two, you can partner with a systems integrator, who will create a customized sandbox environment tailored to your security and compliance requirements. We work with firms like NTT Data and KPMG, and I'm sure you know they can build out tailored custom sandboxes. There's no one-shot quick solution here. This requires real resources and thoughtful architecture, but it's table stakes for effective AI experimentation. Without sandboxes, every experiment carries production risk, and I've seen it cause inertia. There are so many enterprises we work with who say, we can't test or pilot with a company because we're not set up that way; we can't let them touch our environment.
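A minimal sketch of the sandbox idea in code: the agent under test talks to an in-process fake ERP rather than production, so an experiment carries no production risk. Every class and function name here is an illustrative stand-in:

```python
# Minimal in-process sandbox: the agent under test acts on a fake ERP
# instead of production systems. All names here are illustrative.

class FakeERP:
    """Mimics the production ERP interface but only mutates local state."""
    def __init__(self):
        self.invoices = {"INV-1": {"amount": 1200, "status": "open"}}
    def get_invoice(self, invoice_id: str) -> dict:
        return dict(self.invoices[invoice_id])
    def close_invoice(self, invoice_id: str) -> None:
        self.invoices[invoice_id]["status"] = "closed"

def agent_close_open_invoices(erp) -> list:
    """Toy 'agent' whose side effects we want to observe safely."""
    closed = []
    for inv_id in list(erp.invoices):
        if erp.get_invoice(inv_id)["status"] == "open":
            erp.close_invoice(inv_id)
            closed.append(inv_id)
    return closed

def test_agent_never_touches_production():
    erp = FakeERP()  # isolated state, no network, no real systems
    assert agent_close_open_invoices(erp) == ["INV-1"]
    assert erp.invoices["INV-1"]["status"] == "closed"
```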

Now let's address something: does vibe coding have a place in the enterprise? I'd say the answer is yes and no. Vibe coding is incredibly powerful for rapid prototyping; you can mock up quick workflows. But you're accumulating risk at unprecedented rates: code quality issues, security vulnerabilities, and technical debt quickly add up. The no is when you apply vibe coding indiscriminately. You're not just moving fast, you're adding the technical debt we talked about earlier, and you probably don't understand the security flaws you may introduce when you vibe code. But this can work if you create safe environments: sandboxes with limited scope and access, clear boundaries for what vibe-coded tools can and cannot do, and structured handoffs to engineering teams when prototypes show promise. When paired with enterprise-grade tools, no-code platforms like Liza, or domain-specific tools like Arrived for security, many gaps can get addressed. An enterprise-grade no-code solution can automatically scan for security issues, identify bottlenecks, and provide a safe way to scale the platform; Arrived has done this for security-related buildouts and has been recognized by firms like Gartner for enabling enterprises to safely expand their AI products. Finally, let's talk about how you scale what works, because the traditional budgeting cycle will kill any of your AI momentum.

You need to scale what works through dynamic resource allocation. Think of the venture model applied internally: you make small bets with stage-gate funding, and you need to make rapid kill-or-scale decisions. Teams should be bidding for resources using prototypes, not PowerPoint decks. And speed matters here. The companies that can quickly reallocate capital and talent will scale AI much faster than those still stuck in their annual AI planning cycles.

Agents are complex. I'm not going to dive into this deeply because I'm running out of time, I can see the red light, but it's critical to build out your evaluation frameworks and track ROI systematically. The companies that scale successfully measure everything: model performance, business impact, user adoption, cost per transaction, error rates, human intervention frequency. If you're not measuring it, you can't improve it. And if you can't improve ROI, you can't unlock the budget for the next phase.
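One hedged way to operationalize "measure everything" is a per-transaction record plus an aggregate scorecard; the field names below are illustrative, not a prescribed schema:

```python
# Log one record per agent transaction, then aggregate the metrics that
# gate the next budget phase. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentTransaction:
    latency_s: float     # model performance
    cost_usd: float      # cost per transaction
    had_error: bool      # error rate
    needed_human: bool   # human intervention frequency
    value_usd: float     # business impact (e.g., labor minutes saved)

def scorecard(txns: list) -> dict:
    n = len(txns)
    total_cost = sum(t.cost_usd for t in txns)
    return {
        "cost_per_txn": total_cost / n,
        "error_rate": sum(t.had_error for t in txns) / n,
        "human_intervention_rate": sum(t.needed_human for t in txns) / n,
        # ROI here is value created per dollar spent across the period.
        "roi": sum(t.value_usd for t in txns) / max(total_cost, 1e-9),
    }
```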

Ultimately, you need to evaluate your old ways of working against the art of the possible with AI-native workflows. AI isn't just automating tasks; it's reshaping roles. The winners will redesign around capabilities and build federated models that blend governance with agility. And the shift from one-size-fits-all models toward domain-specific agents requires an architecture that is composable and human-centered. Without tight integrations between data, systems, and people, you'll never see real ROI. The advantage now is speed: AI teams can move from idea to prototype in days, not months. So create sandboxes that let your team safely test, learn, and scale what works. And don't treat AI as a series of pilots, but as a continuous capability: build feedback loops, monitor performance, and make scaling responsible by design. And this is not just about the corporations. I know I'm out of time, but I quickly want to address the startups in the room as well. I want to share some quick lessons inspired by my conversation at the KPMG symposium, where I spoke on stage with my friend JJ from KPMG. Number one, for the startups: become a consolidator of systems to reduce vendor sprawl. Our

partners have told us they're not looking for point solutions. They're looking for platforms that can scale across multiple workflows and processes. Gone are the days when they'll test out and try new point solutions; they're really conscious about, if I bring you into my environment, can I work with you across different workflows? Two, focus on building connectors. You may build the shiniest platform in the world, but if you're unable to integrate with the legacy systems of an enterprise environment, you aren't going to make it. I'd even bet that a subpar product with amazing connectors will outcompete the best product in the world that lacks good connectors. So spend some time on that, and as such, add forward-deployed engineers to your GTM stack to support the edge cases. Your user interface should have a seamless human-in-the-loop feel; like I said before, based on my conversations with chief AI officers and technology leaders, enterprises won't let autonomous systems run wild in their environment, so it's critical to provide the flexibility to add humans whenever and wherever they're needed. And my last key takeaway: don't become a system of record, even though many VCs may tell you to. That problem is solved. I actually drank the Kool-Aid last year and said, build on top of a system of record, but based on my conversations with enterprise architects, I want to correct it. They don't want another storage system; it's already been solved. I've heard it from multiple folks today. Where they actually struggle is the data flow to and from existing systems. So become a system of information, of data flow, and you'll unlock many more enterprise doors.

Everything I covered today provides the reasoning behind the AI Nexus program in New York City. We've wanted to expand our presence. I remember meeting with Daria last year, end of Q1 maybe, and we had multiple brainstorming sessions about what this initiative could look like. We aligned on the fact that the focus needs to be on applied AI: how can companies like yours really adopt it? That's why I'm really excited about launching this program. Some of my most recent investment deals have actually been in companies based out of New York City, and one of my goals for next year is to improve successful adoption across our entire AI vertical ecosystem. And Paulo, I think I saw him on the side there, Paulo from Brightar, you can look at me, Paulo: you promised to get on stage next year to share a successful case study. I'm going to take you up on that offer. It's my goal to make sure you have a successful KPI from AI adoption.

So if you're interested in joining the AI ecosystem in New York City, or in working closely with us, please reach out. The second part of the initiative I'm thinking about is forming specialized networks. What I mean by that is I want to create a specialized network of data leaders, a separate one of security leaders, of talent leaders, of architecture leaders. The goal is to provide specialized content and a shared learning environment where these folks can get together. So if you are a senior leader covering any of these areas, or you think any of the leaders in your organization would like to become part of this network, please reach out as well. To improve adoption and success, I think we need to engage with these groups directly. There are four things that will matter if you have an AI strategy: talent, architecture, data, and security. And finally, one last thing, I promise. If any of the enterprises in the room today have had a really successful case study and found a rockstar startup, I actually want to learn about it. Please reach out to me if you can say, hey, we've been working with this startup, or scale-up, it doesn't matter. I do want to invest. Customer feedback is really important for us, so if you've had that success, that story, I would love to learn about it. With that said, thank you so much. I hope you enjoy the rest of the startups.

>> All right. Thank you very much, Amit.

Wait,

>> Nate said I didn't answer the AI bubble question, so just in case you think I didn't: I said it doesn't matter.

>> All right, take it how you will. We're going to jump into the startup presentations now. Up first, we have Umei. If we could jump over and have Umei join us. Let's give him a round of applause.

>> Truly amazing talk by Amit, and the insights and everything. You heard both Amit and Jensen say you should be building your own AI. But historically, the problem has been that building your own custom AI models, tailored to your specific enterprise use cases, would take you months and would require highly experienced AI scientists. With Umei, any AI engineer can do it in hours. Most enterprises right now are stuck using large, generic, off-the-shelf models that are unreliable, slow, and expensive, and that offer no control to the enterprise. More and more enterprises are starting to realize there's a better way to do this: building small custom models, as you heard Amit saying earlier, tailored to the specific use case, that can offer dramatically higher quality at dramatically lower cost and latency, with full control. The problem, again, has been that doing this takes a lot of work, months of effort, and many enterprises fear they may not have the expertise to do it successfully. And when a new model comes out, you have to repeat this lengthy process again. Umei solves this problem by providing the first automated end-to-end platform for the full model development cycle: evaluation, data curation, training, everything you need, so you can complete the whole process in hours. It's also AI-powered, which means even if you don't have the deepest expertise, any engineer who can prompt an LLM can use it to develop their own custom models. And once you're in, when a new model comes out, it's literally within minutes: you just test and get the results with the new model.

Now, to show you how it works, I'm going to go into a very specific use case. Let's say, for example, you're trying to summarize emails in bullet-point format. It's a very arbitrary use case, selected specifically to demonstrate that any use case is possible. The first step when you start with a new enterprise scenario like this is to evaluate existing models, or the model you have in production, to see how they do. If you have an existing test set, great, you can bring it in and use it. If you don't, you can specify it in natural language and say, "Hey, I want to build a model that summarizes emails in bullet-point format." Umei will automatically create a very thorough synthesis plan that tells you: for a thorough test set, you need to take into account all these different properties, different purposes for the email, different formats, different tones. You're still in full control. You can still go in and customize, as Amit was saying, human in the loop. You can still be in the loop and fine-tune this if you want, but it works really well out of the box. Then, after you create your test set, the next step is to create the actual evaluations.

We automate this with LLM judges. Again, in natural language, you can say what criteria you want to evaluate, or just give a description of the task, for example, summarize emails in bullet-point format, and we automatically create LLM judges for you. Again, you can be in full control, human in the loop, and change them as you see fit. In this case, Umei would tell you that you need to create judges for the comprehensiveness of the summaries, the groundedness (meaning no hallucinating), the fluency, and the format adherence. Then, at the click of a button, you can get the test results. As is common, models quite often don't work very well on the first go, especially vanilla off-the-shelf models. You can see in this case, for example, that for groundedness and format adherence the quality is not exactly where it needs to be. So typically the next step is to ask: how do I improve on this? What data do I need to curate, and how do I do it to make it better?
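A minimal sketch of the LLM-judge pattern described here: one rubric per criterion, scored over a test set. The judge() function is a placeholder for whatever chat-completion client you use, and the 1-to-5 scale is an assumption:

```python
# LLM-as-judge evaluation: one judge prompt per criterion, scored 1-5
# over a test set. judge() is a placeholder, not Umei's actual API.

CRITERIA = {
    "comprehensiveness": "Does the summary cover every key point of the email?",
    "groundedness": "Does the summary contain only facts present in the email?",
    "fluency": "Is the summary written in clear, natural English?",
    "format_adherence": "Is the summary formatted as bullet points?",
}

def judge(rubric: str, email: str, summary: str) -> int:
    """Placeholder: send the rubric + sample to an LLM, parse a 1-5 score."""
    raise NotImplementedError

def evaluate(test_set: list) -> dict:
    scores = {c: [] for c in CRITERIA}
    for case in test_set:  # each case: {"email": ..., "summary": ...}
        for criterion, rubric in CRITERIA.items():
            scores[criterion].append(judge(rubric, case["email"], case["summary"]))
    # The per-criterion average shows exactly which judges flag weak spots.
    return {c: sum(v) / len(v) for c, v in scores.items()}
```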

Well, the good thing with Umei is that when you evaluate, we can not only show you all the failing cases automatically, but also distill the different, as we call them, failure modes, the patterns in which your model is failing. So at a click of a button, you can synthesize the right data to improve the model. Then, when the time comes to actually train the model, Umei supports all the different state-of-the-art approaches, supervised fine-tuning, RL, LoRA, and you can easily use the data you just automatically synthesized to improve on those failure modes. It already comes with curated recipes for any of the models you may want to train, so even if you don't know how to properly configure this, it will work really well. And that's it. That's all it takes.
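A minimal sketch of that failure-mode loop, assuming eval results carry per-criterion scores: group failing cases by the criterion they failed, then synthesize targeted training examples. The generate_example() helper and the score threshold are illustrative placeholders:

```python
# Cluster failing eval cases by failure mode, then synthesize extra
# training data aimed at each weak spot. All names are illustrative.
from collections import defaultdict

def failure_modes(results: list) -> dict:
    """results: [{'case': ..., 'scores': {'groundedness': 2, ...}}, ...]"""
    modes = defaultdict(list)
    for r in results:
        for criterion, score in r["scores"].items():
            if score <= 3:  # threshold is an assumption, not a standard
                modes[criterion].append(r["case"])
    return dict(modes)

def generate_example(criterion: str, seed_case: dict) -> dict:
    """Placeholder: prompt an LLM for a new (input, ideal output) pair
    that stresses the same pattern the model failed on."""
    raise NotImplementedError

def synthesize_training_data(results: list, per_case: int = 5) -> list:
    data = []
    for criterion, cases in failure_modes(results).items():
        for case in cases:
            data += [generate_example(criterion, case) for _ in range(per_case)]
    return data
```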

Literally within hours, teams can take new use cases to production. It doesn't take months anymore; they can move dramatically faster and get higher-quality results with much cheaper models. Now, if this looks like a dream, it's actually real. It's happening today. We've helped a leading healthcare provider develop over 10 models with over 20% quality improvements and over 70% reduction in costs, and a global media giant develop the best hallucination-detection model in the industry, with over 10% quality improvements, alongside everything else they need, citations and confidence scores, to drive down human review time. And these are just two examples; many more enterprises are currently using Umei. Umei is being built by a world-class team that was building and leading efforts on Gemini at Google, and also Apple Intelligence, Meta, and Cohere, and we have exceptionally strong academic backing that helps drive a lot of the innovation in the platform. So if AI is critical for your success, as Jensen was saying earlier, you should be building your own AI. You should be using customized AI models optimized for your use case, not generic ones. You can start with Umei, evaluate the existing models you have in production or any off-the-shelf model to see where they fall short, and then you're just one click away from training a better model. We have built the first AI-powered platform specifically designed to enable any enterprise to fully harness the power of AI. That's Umei, and we're happy to help. Thank you.

Next up is Arrived. Arrived is an agentic AI platform to consume or rapidly build no-code AI applications that tackle the toughest security challenges.

>> Good afternoon, everyone. Like I say, we have Arrived. The question is whether you have arrived or not. Just a little bit about myself: this is my seventh startup, and every time I start a company I tell my wife that this is going to be my last. It has actually worked out six times, but she has believed in me. This time, when I told her, trust me, this is going to be my last startup, she basically told me, you know what, even I have been thinking about starting my own startup. So I was fairly excited to hear about her startup, and I asked her, what is your startup going to be about? And she said, it's going to be Husband 2.0. So, having said that, I really had to think about what I'm going to do this time, because this really might be my last startup. As you know, Arrived is a platform to build agentic apps in less than five seconds. You can consume apps or you can create apps, and today we are focused on cybersecurity. When I was thinking about Arrived, I was thinking about the customers first, and I had to make sure there was something they could embrace that could give them an exponential productivity boost. It was not about, can you empower my team by 1x or 2x. We want to talk about a productivity boost of at least 60x to 100x.

The second thing was ROI. Can you really prove that if we get your technology, we can get a financial gain? And I said, yes, absolutely: at least a 4x financial gain, which we can prove using our platform. The next requirement was to work with customers who really want to own the AI, and the reason was very evident. Like I said, in our platform a customer can create agentic apps in a few seconds, and these agentic apps are powered by agents. That means they can also create agents using our platform. The minute a customer creates an agentic app or an agent, that IP belongs to them, not to Arrived, and we wanted to make it very clear within our platform who owns the IP of the technology they are building. The last thing we had to make sure of is that we engage with customers who really believe in standardization. We are working with customers who have one CRM vendor, one firewall vendor, one routing vendor, one ticketing vendor. The question then becomes very simple: who is your agentic platform vendor? And the answer becomes Arrived. Having said that, the customer's journey with AI can be very different,

which is what our observation was. This is my third genAI company. I started my first genAI company in 2017, when transformer technology didn't even exist; transformers came out in 2018. I started my second genAI company in 2019, where I was building an agentic SOC with co-pilots to operationalize and make the SOC team really efficient, and nobody understood what I was trying to do in 2019. Now everybody's investing in agentic AI for cybersecurity operations. So we had to make sure our platform can really adapt to different customer personas. If a customer is very early in their AI journey, they can go into an app store and consume pre-built cybersecurity apps, ready to go instantly. If a customer is mid-journey, they can go into our agentic Lego-block store: think of it like Lego blocks, where the customer can connect agents on the fly, and in a few seconds you have a full-blown app ready to go. And when I say a full-blown app, I don't mean a chat interface; I mean a full-blown product gets generated in a matter of seconds. And if customers are really advanced in their AI journey, they can go into a third layer, the AI tools: think of it like 3D printers, where they explain what they want to do and the agents carve it out by themselves in a few minutes or a couple of hours.

We are very young, but we have been recognized by Gartner as a tech innovator in agentic AI, positioned as exposing the most advanced AI capabilities so that even high schoolers can operate it and create agentic apps. There are nine different Gartner research reports that have mentioned us in the span of the last two months. Like I said, we have been in the market for a very short time, but we have accounts at some of the largest companies in the United States and internationally. There are many use cases we can offer, spanning from optimizing your security operations, to creating identity-centric apps for identity governance, to observing shadow IT and applying governance on top of it, to prioritizing vulnerabilities and performing exposure management, to identifying third-party suppliers and their risk and mitigating that risk instantly, to performing internal audits, compliance, and risk-based frameworks. Having said that, our platform does not require specialized AI talent to be hired. You can put in high schoolers, middle schoolers, and they'll be able to create agentic apps in a matter of seconds. Thank you so much for your time.

>> Thank you very much.

Next up, we have Arkham, which is a data and AI platform that helps enterprises unify fragmented data, standardize business metrics, and solve complex

operational challenges with AI models tailored to their operations.

>> All right, thank you so much, everyone. I'm so energized and happy to be here. I think Amit just did a phenomenal job explaining this phenomenon to us: a lot of AI pilots, but the majority of them failing to deliver value to the enterprise. So today I want to share with you a really compelling success case from one of our customers. I think from Amit's presentation I had the General Catalyst CEO's take on this, but I think Amit's was much better. The reasons for AI pilot failure can be boiled down to three core areas.

Companies are struggling with fragmented data; with AI pilots that are generic and don't solve a very specific workflow; and with teams that fail to adopt, because the organization is not ready to adopt solutions that are quite generic. We launched Arkham almost three years ago, and today we can claim that our customers are part of the 5%, the 5% of companies that are actually solving really complex problems with trusted data and AI truly tailored to their operations. We have multiple success stories, from retail with very strong ROI, to infrastructure, energy, and CPG, but today I want to bring your attention to the success story of Circle K, our dear partners. What was their challenge? The challenge for their commercial and supply chain management teams is that they struggled with fragmented data for different organizational reasons. Their POS system was not properly integrated with their ERP and many other transactional systems, and this generated a lot of inefficiency in their day-to-day, from answering simple questions, such as how much did we sell in this store compared to that one, or what's the performance of a region, to much more complex workflows such as sales forecasting or analyzing pricing and promotions. These processes were highly manual and highly Excel-based: going into Power BI, downloading data, doing some data crunching, super inefficient for commercial and supply chain management teams. So these are the

types of problems we solve at Arkham, and this is how we do it. First, Arkham is a data and AI platform. We help companies like Circle K and Kimberly-Clark unify fragmented systems and data, standardize that data into metrics and business rules, and then deploy solutions, with both machine learning and genAI, that are truly tailored to specific use cases and can deliver very tangible ROI. The way we do it, piggybacking on Amit's suggestion, is we partner with teams within those organizations that have very clear problems, very clear use cases, and that are motivated to solve them. Once we identify the problem, we work backwards from the data. Our platform manages the entire data life cycle, from integration to transformation. But we don't stop there. We then work hand-in-hand with different organizations to tailor use cases. And I want to bring your attention to two really powerful use cases we have today with Circle K. The first one can seem very simple, but we call it a genAI control tower. With this use case, commercial and supply chain management teams can obtain answers about their operations in seconds. And it's not only reactive but also proactive. We have a system of prompts; we don't like to use the word agent too much, because there's a lot of noise around it. We have a system of prompts that sends automatic insights directly into Microsoft Teams, proactively, providing insights about their operations before they even ask.

This is a really interesting use case that today has more than 100 people engaged, making decisions on a daily basis about Circle K's operations. The second use case is sales forecasting. Our platform basically handles all of the complexity of training and deploying machine learning models. We have trained a sales forecast model that does forecasting at the store and category level, and today Circle K has a really powerful solution, with 98% accuracy, that is the baseline for all of their planning.
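As a purely hypothetical illustration of store-and-category-level forecasting (not Arkham's actual implementation), here is a minimal sketch using lag features and a gradient-boosted regressor; the column names and weekly grain are assumptions:

```python
# Hypothetical store x category sales forecasting sketch: build lag
# features per (store, category) series, fit one model over all series.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor

def make_features(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: store, category, week, units_sold (weekly grain)."""
    df = df.sort_values("week").copy()
    g = df.groupby(["store", "category"])["units_sold"]
    df["lag_1"] = g.shift(1)  # last week's sales for the same series
    df["roll_4"] = g.transform(lambda s: s.shift(1).rolling(4).mean())
    return df.dropna()

def fit_forecaster(df: pd.DataFrame) -> HistGradientBoostingRegressor:
    feats = make_features(df)
    model = HistGradientBoostingRegressor()
    model.fit(feats[["lag_1", "roll_4"]], feats["units_sold"])
    return model
```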

And like this, we have many use cases now deployed in their operations. Just to close: the reason companies choose us is that building this in-house can be complex; it can take months and specialized tools. So they rely on Arkham. They partner with us because they get one platform and one team of forward-deployed engineers dedicated to helping them solve these specific challenges. So if these kinds of problems resonate with you or anyone around you, we're happy to help. Thank you so much.

>> Thank you.

Next up, we have Tac Labs, which delivers agents that streamline data migrations and drive business transformation across ERP systems, rollovers, and M&A.

>> Okay, thank you. So, we are an organization building your enterprise agentic AI partner, acting in your enterprise estate from migration to operation to integration and implementation. We are tackling that and helping you move away from your legacy systems. We basically productize that work to replace part of what you do with your SI, to unlock the budget you might have tied up in your existing enterprise estate.

Our multi-engine system understands the systems of record, SAP, Oracle, Salesforce, and Workday, and helps you take control over your data, your processes, and your business. We connect to all of them, understand your business processes, and help you run your migration, transformation, and optimization scenarios. Take a lead-to-cash example. To run that business process, you have a lead-to-cash opportunity in a CRM like Salesforce, and then your cash, as well as your inventory and finance, in SAP. You'd probably need two sets of teams, one specializing in SAP, another in Salesforce. We're able to connect both systems and allow you to run the process seamlessly.

Not only that, we have developed agents that can connect to different systems of record: SAP ECC and S/4HANA, so transitioning between ECC and S/4HANA is made easier, plus Oracle, Ariba, Workday, Salesforce, and so on. We are building this as SI-as-a-product.

The AI is the unlock. As for the technology behind this: we have the best consultants, people who have seen the different cases from customers and worked on different integrations. We built this for a Fortune 500 life science company, and I can show you some of the cases: manufacturing and bill of materials, financial close, acceleration of customization and integration, and also across the different enterprise functions, whether it's manufacturing, finance, supply chain, and so on. Most importantly, the case study

that I wanted to show: we were able to achieve a 45% projected saving in SI cost across an ECC to S/4 migration for a Fortune 100 life science company based in the US, where we are working with them as a main partner helping them transform from ECC to S/4, the older version to the newer version. Our solution allows them to automatically remediate a lot of the customization. SAP requires that you change the way you do customization, as well as how you manage the data itself; the data structure changes, and the way the business works changes as well. Our solution allows customers to reduce the time spent, which is usually two to three years for a migration. We had a case with them where they had engaged an SI to do a piece of work on their finance data for 18 months, and we were able to do it in weeks. So we were able to greatly reduce the time they spend with the SI, as well as the cost: months of acceleration instead of years, compared to the regular projects you'd see in an SAP ECC to S/4 migration. With that said, we have all the other scenarios we can show you. If you are interested, feel free to reach out, and thank you.

>> Thank you very much. Next up is Axel, which is an AI software architect as a service that helps companies modernize their legacy code bases, faced with challenges like slow development speed, high costs, and lack of scalability due to outdated systems.

>> Hello everyone, my name is Somay and I'm the CEO of Axel. We help automate the manual data entry that is probably happening in all of your teams' back offices.

Quick question for all of you: who knows what a purchase order or an invoice is? Don't raise your hand, just think it in your head. For those of you who don't know, a purchase order is someone sending you a document saying they want to buy something from you, and an invoice is a document requesting payment for something that's been sold. And all of the teams here are probably processing them in a very similar way. The document is sent to your email, and then you have teams of people reviewing the document, making sure it has the right information, and then manually typing that document's information into your ERP. Simple, right? What you might not know is how much that review, validation, and manual data entry is truly costing your team. The average sales rep spends around 25 hours each week just manually typing this data. Between salaries and errors in this process, that's thousands of dollars each week, compounding to hundreds of thousands of dollars per rep and millions of dollars across your entire team, just for manual data entry. Not to mention the opportunity cost of what these teams could be doing that's more valuable, and the errors that cause real friction between you and your customers at a time when retaining customers matters.

But what if you never had to do manual data entry ever again? I want to introduce Axel, and I want to show you a real-life example of how it works.

>> Everyone hates manual data entry, and your sales reps waste 20 hours a week typing orders into ERPs instead of selling. Meet Axel, your AI co-worker that automates order entry from email to ERP. Let's see it in action with a new purchase order. A purchase order comes in. Axel automatically recognizes it as a purchase order and extracts every relevant detail your team needs. For this machine shop, that includes the PO number, part numbers, quantities, and pricing. Next, Axel validates the order against the company's specific rules. In this case, the team wants to ensure that the pricing aligns with the specified quantity. Axel then cross-checks the ERP to confirm that the pricing and quantities match the original quote. If something's off, Axel automatically drafts a message back to the customer to make corrections. If everything looks good, Axel enters the information into your ERP automatically and drafts an order acknowledgement, all in under one minute.

Manufacturers and distributors use Axel to catch pricing mistakes, saving an

>> and so we process thousands of documents each month. And I want to tell you a case study about one of our first original clients, a major aerospace supplier. For every three reps, they were processing 300-plus documents each week, and it was causing real problems because of how long this took, creating conflict between them and their customers, and they were losing customers because of the number of errors and shipment mistakes. We were able to decrease the time it takes to process each document from 25 minutes to 2 minutes, with a 40% kickback rate, meaning that in four in ten documents we were finding errors in these invoices and purchase orders that a regular manual data entry person was not finding. And this was saving $20,000 a month per rep in cost of mistakes found alone.
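A minimal sketch of the email-to-ERP flow narrated in the demo: extract the fields, validate against business rules, cross-check the ERP, then either draft a correction or post the order. The extract_fields() helper and the ERP client methods are illustrative placeholders, not Axel's actual API:

```python
# Extract -> validate -> cross-check -> post, in the order the demo
# describes. All function and method names are illustrative.
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_number: str
    part_number: str
    quantity: int
    unit_price: float

def extract_fields(document_text: str) -> PurchaseOrder:
    """Placeholder: LLM/OCR extraction of the fields the team needs."""
    raise NotImplementedError

def process_po(document_text: str, erp) -> str:
    po = extract_fields(document_text)
    quote = erp.get_quote(po.part_number)  # cross-check the original quote
    problems = []
    if po.unit_price != quote["unit_price"]:
        problems.append(f"price {po.unit_price} != quoted {quote['unit_price']}")
    if po.quantity < quote["min_quantity"]:
        problems.append(f"quantity {po.quantity} below quoted minimum")
    if problems:
        # Something's off: draft a correction request instead of posting.
        return "draft_customer_email: " + "; ".join(problems)
    erp.create_sales_order(po)  # everything matches: enter it automatically
    return "draft_order_acknowledgement"
```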

We have a world-class team of forward-deployed engineers who are ready to make this workflow extremely custom to your team's edge cases, as well as 20-plus ERP integrations that are already built out, so we can start from day one. And for every single company in the audience today, we are waiving our proof-of-concept fee. Meaning, in one week we can help understand your workflow, configure Axel to all of your edge cases, and go live with real trials so you can see the value of Axel directly, and if you want, we can then continue with a real-life pilot afterwards. So it's essentially risk-free. What's stopping you from starting today? Please contact me at Axel, and thank you all for your time.

Thank you very much. Next up is Valum.

Valum offers a unified enterprise platform for measuring AI tool ROI, enhancing adoption and governing usage via real-time visibility and natural

language policy controls.

>> Hi everyone. My name is Charlie. I'm the co-founder and CEO of Valum. I'm here to share our operations layer for AI users in the company. So what is the macroscopic issue here? It's the MIT report, like you heard about from Amit and everyone else. But my interpretation of it is this: I believe there's a massive lack of alignment between the usage of these AI tools in companies and the goals of the companies themselves, and that's where this debate about extracted value is coming from.

That manifests in a few different ways. The first is usage that creates little or negative value for the company. This can be usage that creates risk, whether that's through security-sensitive use cases that are not approved. It can mean adoption of the wrong resources: people using shadow AI instead of the enterprise tools you pay for, the ones you might even build internally for specific applications. And it's also just an insufficient grasp by leadership on whether they're getting value for their spend. What are people using these tools for, and where is value being created?

So that's where Valum comes in. Valum is essentially your digital chief AI officer. It will govern, measure, and upskill usage of these tools, to prevent use cases you don't want people doing and to measure the ones that are actually creating value, so you can steer strategy. On top of that, it will actually upskill usage to get people onto the approved AI tools, utilizing the ones you're building and paying for, and improve their prompting once they're there.

How does it deploy? It deploys as a lightweight browser extension and a desktop app, to cover usage across the entire employee device. As you can see, it can intercept things in ChatGPT, Claude, Copilot, all these different AI tools you might have. The governance is defined in plain English: we have plain-English policies that can enable and disable use cases of these tools. That could mean things like blocking uploads of financial projections. Or it could mean not letting people write performance reviews, or do things with AI that create pure risk and are just not approved.
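A minimal sketch of policy gating at the prompt boundary; the classify() function is a placeholder for a small use-case classifier, and the policy table is illustrative, not Valum's actual format:

```python
# Plain-English policies mapped to use-case labels, enforced before a
# prompt reaches the AI tool. All names here are illustrative.

POLICIES = {
    "upload_financial_projections": "block",
    "write_performance_review": "block",
    "draft_customer_email": "allow",
    "summarize_meeting_notes": "allow",
}

def classify(prompt: str) -> str:
    """Placeholder: small classifier that names the business use case."""
    raise NotImplementedError

def gate(prompt: str) -> bool:
    """Return True if the intercepted prompt may reach the AI tool."""
    use_case = classify(prompt)
    return POLICIES.get(use_case, "allow") != "block"
```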

Measuring-wise, we can show you not just what tools are being used and who's using them, but the specific business use cases they're being used for. That could mean showing that Charlie uses AI to write emails and to review resumes, or, at an organization and department level, showing how your finance team uses AI for their use cases. The last aspect here is the upskilling. We react to what the user is doing on these tools, and potentially in other places, to direct them to the right AI resource that you want to encourage. Once they're there, they can pull from a bank of prompt templates that can be created in Valum and used across the different places AI tools are being used, or we can just give them in-the-flow feedback on their prompt to tell them what they can improve and make it better the next time.

To leave off, what I've seen is that most companies are in this position where they've invested a ton in AI. They know it's really useful, and there are definitely resources available. We know these tools are powerful; we're all at the Plug and Play AI expo, after all. But the people who are really creating value with these tools are the ones who are able to guide their employees towards effective use cases and away from risky use cases, and do that on the right tools. And to leave off, we're working with a lot of companies right now, like Tri Counties Bank in California, which is a public bank, and Northeast Shared Services, which is a 30,000-employee enterprise. So if you are interested in Valum, I don't have my email up here, but email me directly at charlie@valum.ai or look up our website, valum.ai. Thank you guys.

Next up, we have Various, a sandbox platform that lets enterprises train and validate autonomous agents in realistic, high-fidelity simulations before deployment.

>> Hi everyone. I should actually thank Amit first of all, because he did most of the pitch for me just now. You've heard it from him; you don't need to hear it from me. If you're building an AI agent, you need a simulation sandbox. There's just no way around it; that's an absolute requirement. Whether you are in a regulated industry or not, you need a place to test and train your agents in realistic environments. So who are we? We are building AI simulation sandboxes for AI agents. I'm Andy Partovi, CTO and co-founder. We are a team of AI and ML engineers; most of us are PhDs. I'm from Google, and we have people from Adobe, from ServiceNow, from other companies. We're building a very tough technical product, and we're backed by Acrew, Decibel, and the Berkeley House Fund.

Throughout this conference, ironically, all the AI professionals here have told you that AI doesn't work, and we now know at least part of the reasons that happens: part of it is the processes, part of it is the enterprise data. There is no question about the economic value of AI agents across all industries. In the past two days, I've talked to enterprises from probably a dozen different industries. Everybody wants to build AI agents, but everybody is also very worried about putting them in production and then making mistakes. That's why you need a simulation environment. We actually have an AI agent in production today that you all have seen, and it's a very dangerous AI agent: it's a Waymo car. It can actually kill you. It's a fully autonomous AI agent on the road. And the way they were able to provide that reliability and make it work was through a simulation environment.

So we have created the same concept for enterprise software AI agents. Instead of simulating drivers and pedestrians, we are simulating the users of your AI agent. Instead of simulating roads and buildings, we are simulating ERP systems, CRMs, payment systems. We create a digital replica of the environments the agent is going to interact in, so you can check reliability and compliance, test the agents, and train them further, so you can very confidently put them in production. That's why you need a simulation sandbox for your AI agent.
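A minimal sketch of that simulation loop: a scripted synthetic user drives the agent against a replica ERP, and the episode is scored for compliance before anything touches production. Every class here is an illustrative stand-in, not Various's actual platform:

```python
# Synthetic user + replica ERP: run an episode, record the transcript,
# and check for policy violations. All names are illustrative.

class ReplicaERP:
    def __init__(self):
        self.refunds = []
    def issue_refund(self, order_id: str, amount: float) -> None:
        self.refunds.append((order_id, amount))

class SyntheticUser:
    """Plays one persona: asks for a refund, then pushes back once."""
    script = ["I want a refund for order A17", "That amount looks wrong"]
    def next_message(self, turn: int):
        return self.script[turn] if turn < len(self.script) else None

def run_episode(agent, user: SyntheticUser, erp: ReplicaERP) -> dict:
    transcript = []
    for turn in range(10):
        msg = user.next_message(turn)
        if msg is None:
            break
        transcript.append((msg, agent(msg, erp)))  # agent may act on the ERP
    # Compliance check: did the agent refund more than policy allows?
    violations = [r for r in erp.refunds if r[1] > 100.0]
    return {"transcript": transcript, "violations": violations}
```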

As a last note, we have worked with two of the biggest credit card providers; you probably have their cards in your wallets right now. We're based in New York, the capital of applied AI, and we work with regulated industries, insurance, tech, mostly financial services, but also supply chain and other industries. If you are looking for an environment, if you think you are stuck in putting your AI agent into production, please talk to me. We are at various.ai. Thank you so much.

Next up, we have Unsupervised, an AI-powered analytics platform that automates the discovery of actionable insights from complex data, enabling businesses to make smarter, faster decisions.

>> Thank you.

We set out over eight years ago with the intent of automating analytics, and I'm proud of how much of it we've done. Right now we have AI analysts that outperform humans, today, in production. This isn't even fresh news: a year ago we were hitting the point of taking major Fortune 50 companies and yielding giant returns. And what's most interesting is that this is not about cost savings. This is revenue, top-line revenue, increased through AI. And that's not because we've taken LLMs and made them better at data. It's because we built a ground-up AI that is natively focused on data, and then we wrapped it with an LLM to make it able to talk to people, not the other way around. So let me show you what that looks like.

>> Let's see how it works. Here, I'm looking at loan data. Let's dive in and see what's driving the default rate on these loans. My AI analyst has already found hundreds of hidden patterns that are driving my default rate up or down. Each one of these is statistically significant and tied to a measurable KPI shift. These are not surface correlations. They're detailed insights that are explainable, ranked, and actionable.
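As a hedged illustration of what "statistically significant and tied to a KPI shift" can mean, here is a minimal sketch that vets a candidate segment with a chi-square test on a 2x2 contingency table; the column names and the 0.01 threshold are assumptions:

```python
# Vet a candidate pattern: compare a segment's default rate against the
# rest of the book and keep only statistically significant lifts.
# Assumes df has a boolean "defaulted" column; names are illustrative.
import pandas as pd
from scipy.stats import chi2_contingency

def segment_lift(df: pd.DataFrame, mask: pd.Series) -> dict:
    seg, rest = df[mask], df[~mask]
    table = [
        [seg["defaulted"].sum(), (~seg["defaulted"]).sum()],
        [rest["defaulted"].sum(), (~rest["defaulted"]).sum()],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return {
        "segment_default_rate": seg["defaulted"].mean(),
        "baseline_default_rate": rest["defaulted"].mean(),
        "p_value": p_value,
        "significant": p_value < 0.01,  # threshold is an assumption
    }
```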

That's obviously powerful, but with Unsupervised, I can actually ask my data directly. So let's go in here and ask my team of agents: why is my default rate so high? My agent comes back and starts asking me for more information, and I want it to consider my all-time default rate. Within a few seconds, the agents return a full answer, linked to live data and with quantified impact. And this is now ready for me to co-work with this agent, almost like I would co-work with an analyst. When you need more depth, switch into deep research mode. You can either ask a deep research question of your team of agents here, or you can scroll down and start a conversation with the deep research agent. Here, I'm going to ask: what can we do to reduce our default rate?

The kind of deep investigation that would take an entire analytics team days or weeks is now done in about 20 minutes. I have already loaded an existing deep research report. About 20 minutes ago I started this query, and you can see I'm getting a very detailed summary: it took the same question, went through and added these different sections, started drafting them, reviewing them, tweaking them, and produced a comprehensive report with a deep understanding of what's going on inside our default rate and what we can do about it. You can also use the insights identified by the AI agents to take better actions or to predict outcomes. Here I have a decision app, a purpose-built interface that allows me to explain a single loan in the context of other information. Let me load up an individual loan. You'll see that loan in context, with its predicted default rate, the AI-identified patterns and insights related to that loan, as well as related documents that have already had key values extracted by AI, so you have all the information you need to understand this loan right at your fingertips. Just as importantly, you can identify and explain where all this information came from, so there's no black-box prediction going on. You can dig as deep into this information as you need to truly understand how the AI got here.

We're already at 3x the accuracy of other agentic platforms, and over 100 times faster than humans are today. And that's not just answering simple questions. That's getting into these types of deep research and yielding McKinsey-style reports, or helping people make everyday decisions on critical business things faster, with deeper insights. We work with some of the largest companies in the world, and we know what you need. We can operate in both public and private clouds. Infosec teams love us because we're so transparent, with deep account management. We understand data governance processes. We know what it's like to work in your organizations. So I hope you'll become one of our customers soon and start getting this level of impact.

Thank you.

Next up, we have Collinear AI. They provide enterprise-grade AI safety and reliability through AI judges that assess, guard, and improve model behavior in real time.

Hey everyone, last startup in the last session of the last day of the conference, so let's do this. I'm Samep, and I'm here to help you build better AI with better data. Now, as with a lot of the startups here, one of the key things you'll see about us is that we are a deeply research-led team. Here's my small team. We are just a mile away down in Mountain View, and we are partners with Stanford AI Lab, Amazon AGI, and a few other research organizations. Most of us are PhDs and, as you'll see at the bottom here, we are cited by OpenAI, by Science magazine,

some of the biggest journals in the research space. Now, what do we do? For anyone who has built AI agents, I'm going to list some of the key problems you have already experienced, and these are very tactical things. You built an agent and you want to test it, so you use a benchmark, but your benchmark is not a real-world user. Then what do you do? You try to vibe-eval it: you run a few prompts, see if it works, and most likely it doesn't, and then you feel unsafe about it; that's why your vibe evals don't work. And when you want to scale your test coverage, that is also really hard to do. And when you bring in expert-curated data, that's slow and expensive, and when you eventually do find some sort of failures, that rarely translates into model improvement.

So we are here to fix that. We are Collinear, and we give you data for eval, data for post-training. You've seen a version of this in one of the earlier presentations, but we help you simulate realistic agent users, tools, verifiers, all of the lot. And once you see these synthetic mock users using your tools, you really find what the gaps are in your AI workflow, in your AI process, where your agent falls short. And once you have figured all of that out and you really want to improve performance, that's where we give you high-signal training data for running any post-training process to improve your solution. And so, Collinear: data for eval, data for post-training.
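"Data for post-training" is left abstract in the pitch. One common shape such data takes (my assumption, not a statement about Collinear's format) is preference pairs distilled from failed transcripts, ready for DPO-style tuning, with the agent's miss as the rejected reply and a curated fix as the chosen one:

```python
# Turning eval failures into DPO-style preference pairs (illustrative only).
import json

# Failures found during simulated-user evals (records are made up).
failures = [
    {
        "prompt": "I want to lower my bill but keep my internet speed.",
        "bad_reply": "Please call our billing department.",             # agent's miss
        "fixed_reply": "I can do that here. Your current plan is ...",  # curated fix
    },
]

with open("dpo_pairs.jsonl", "w") as f:
    for r in failures:
        # prompt/chosen/rejected is the layout common DPO trainers (e.g. TRL) expect.
        f.write(json.dumps({
            "prompt": r["prompt"],
            "chosen": r["fixed_reply"],
            "rejected": r["bad_reply"],
        }) + "\n")
```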

Now how does this work in actuality?

You're building your AI app, and it could be any use case: chatbot, RAG system, internal copilot, agent, whatever. And it could be based on any model, closed source or open source; pick what you want. We come in up front and set up simulations for it. A simulation could look like your regular, average employee using your agent. It could be your grandma trying to chat and change her Xfinity bill, typing out long paragraphs. It could be your 16-year-old saying weird things like "67" that you don't understand, right? All of that lot. We simulate that, it goes into your GenAI app, and then we evaluate that transcript. We throw in reward models and verifiers. We look at all the different metrics you care about: whether your agent really understood what the user meant, whether it completed the task it was supposed to, and what the end experience was. And once we find the gaps there, that's where we curate high-signal data to actually help you drive improvement.
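Collinear doesn't show code on stage; the sketch below captures the simulate-then-judge loop described above against an OpenAI-compatible chat API. The persona prompts, model name, and rubric are illustrative assumptions, not the product:

```python
# Simulated personas hit the agent, then an LLM judge scores each transcript.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

PERSONAS = [
    "You are a grandmother trying to lower your cable bill. Write long, "
    "rambling paragraphs and drift off topic.",
    "You are a 16-year-old. Use current slang and be vague about what you want.",
]
AGENT_SYSTEM = "You are a billing-support agent for a cable company."

def chat(system: str, messages: list[dict]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": system}] + messages,
    )
    return resp.choices[0].message.content

def flip(transcript: list[dict]) -> list[dict]:
    # The simulated user sees the agent's replies as the other speaker.
    return [{"role": "assistant" if m["role"] == "user" else "user",
             "content": m["content"]} for m in transcript]

def simulate(persona: str, turns: int = 3) -> list[dict]:
    """Run one synthetic user against the agent; return the transcript."""
    transcript: list[dict] = []
    for _ in range(turns):
        transcript.append({"role": "user",
                           "content": chat(persona, flip(transcript))})
        transcript.append({"role": "assistant",
                           "content": chat(AGENT_SYSTEM, transcript)})
    return transcript

def judge(transcript: list[dict]) -> str:
    """LLM-as-judge pass: did the agent understand the intent and finish the task?"""
    rubric = ("Score 1-5: (a) did the agent understand the user's intent? "
              "(b) was the task completed? Reply as JSON with a short rationale.")
    return chat(rubric, [{"role": "user", "content": str(transcript)}])

for persona in PERSONAS:
    print(judge(simulate(persona)))
```

In a real pipeline the judge outputs would be parsed and aggregated across many personas and runs; printing them keeps the sketch small.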

So, wherever you are experimenting today, come in, work in the loop with us, and what you get is the ability to test and train your agents against the real world. You cut your iteration cycles, which saves a significant amount of time, and that helps you feel confident shipping to production. We are working with some of the leading labs and Fortune 500 companies; folks like Comcast, Amazon, and ServiceNow are our customers. And we have some pretty big logos as partners too, including Plug and Play. And that's pretty much about us. So if you don't feel so confident in your agent's performance, reach out to me at collinear.ai and we'd be happy to help.

Thank you.

>> Thank you. Thank you very much.

All right, that's it for the startups, everyone. That means we have concluded the Enterprise AI Expo. Upstairs we have networking until, I believe, 6:30 or 7 o'clock tonight. So if you saw some startups today that you want to meet with, find them upstairs at their desks or just grab them. A special thank you to the partners for being here; really appreciate you being out here again, this is a great turnout. Everyone else, thank you so much for being here as well. If you have any questions, please follow up with us. And yeah, have a great rest of your week.

Thank you. Enterprise team up here for photos.
