
Michael Truell: How Cursor Builds at the Speed of AI

By a16z

Summary

Key takeaways

  • **Focus on Power Users, Not "Democratization"**: Cursor deliberately rejected the 'democratization' narrative to focus on power users, recognizing that catering to advanced users would drive adoption and product development. [01:24]
  • **Two-Day Work Trial for Agency, Not Credentials**: Cursor employs a rigorous two-day work trial for all hires, prioritizing demonstrated agency and problem-solving skills over traditional credentials to assess real-world capability. [18:16]
  • **Owning the Editor: A Contrarian Bet**: Against conventional wisdom, Cursor intentionally built and owned its code editor, understanding that users would switch if presented with a significantly better tool, a lesson learned from the shift to VS Code for Copilot. [09:14]
  • **From 'iPod Moment' to 'iPhone Moment' in AI**: The AI market is poised for an 'iPhone moment,' indicating a significant shift and widespread adoption beyond initial advancements, a transition Cursor aims to navigate and lead. [00:15], [26:36]
  • **Aggressive Talent Acquisition: Flying After Rejection**: To secure top talent, Cursor engaged in extreme recruiting tactics, including flying across the world to meet candidates even after an initial rejection, demonstrating a commitment to acquiring the best people. [00:04], [22:42]

Topics Covered

  • The 'blind men and the elephant' problem: understanding user needs.
  • Cursor's contrarian focus: building an editor, not just an AI layer.
  • Navigating extreme scale: from cloud disruption to API provider stress.
  • The two-day onsite: a rigorous test for product engineers.
  • Acquisition as a talent strategy: acquiring companies for their people.

Full Transcript

...us growing the initial 10 people on the team, we did crazy recruiting stunts like flying across the world to the person. Oh yeah. After they say no. And then, yeah, I think that one of the key challenges facing the company in the future, and that we've faced in the past, is we are in a market that's, you know, had an iPod moment, and it's going to have an iPhone moment.

Please join us in welcoming to the stage Michael Truell, co-founder and CEO of Cursor, and General Partner Martin Casado.

>> Great to be here.

>> Good morning, everyone.

>> Thanks for being here, Michael.

>> Ah, glad to be here. Yeah.

>> Appreciate it. He very, very rarely does these things. I had to beg. So I really appreciate you coming up.

>> No, we wouldn't miss this. Yeah.

>> Okay. So, as everybody knows, Michael's CEO of Cursor. It's one of the fastest growing companies certainly we've ever seen. It's everywhere. It's crazy. You have to hire, operate through that. So, actually, what I want to do is dig into not the typical kind of founder journey, what brought you here, there'll be a little bit of that, but, like, how are you handling the mayhem? Is that cool?

>> Sure. Yeah. No, that sounds great.

>> Okay, great. So, to start off with, we'll just do a little bit of history. So, I met with a company recently and they came in and they said, "We are the Cursor of 3D." And I said, "Funny story, because Cursor was once a 3D company." Is that right?

>> Yes.

>> And so, maybe do you mind talking about kind of a bit of the origin story?

>> Of course. So, there's a bunch of different ways you could actually peg the start date, but effectively the way the company got started was my co-founders and I, we were close colleagues from school and some other places, and two moments got us really excited about building a company. One was trying some of the first useful AI products, and in particular trying GitHub Copilot, the incumbent in our space. And the reason this got us excited about starting a company is these products were actually useful, and this was the first existence proof that, you know, we shouldn't just be working on AI in a lab. It's time to actually build systems out in the real world, and there are real useful things that you could be doing. The second thing that got us excited was scaling laws, too. We got excited about how it seemed like even if the field ran out of ideas, the models would get better. And so this was around 2021, beginning of 2022.

And then Cursor sort of came out of kind of a whiteboard exercise where we were very excited about a Cursor for X for many different spaces. And what does that mean? We thought at the time that for a bunch of different verticals of knowledge work, there would be the company that automates that area of knowledge work. You know, a company for each space, and that company would do a couple of things. The first thing it would do is build the best product for that space, and it would define what the actual act of that knowledge work looks like as AI matures and gets better. And then with that product, it would win distribution. It would win a big business. And it would get resources like data and capital. And then it would back into being something that looks a little bit more lab-like, though not a foundation model lab, where it would start to use the data it gets access to to actually work on the underlying models and kind of push the autonomy in the space, and that would then in turn push forward the product and change what the best product looks like. You get this flywheel going.

And so we were really, really interested in that, and we thought that Microsoft would do that for coding, and we wanted to work on, you know, a sleepier, less competitive space. We had some colleagues who did mechanical engineering and we were familiar with CAD systems, and so there was this initial false start of working on mechanical engineering, actually, working on models to help people be more productive within CAD systems, and also building our own sort of CAD system.

So that was how we got started. It was a bad idea. The founder-market fit was horrible. There was this blind men and the elephant problem where we would hop on calls with mechanical engineers and ask them what they do during their days, and we never really had an intuitive sense for it. I almost wish that in the kind of six or seven months where we were working on that, we had just gone and been interns at a company to really learn the space. But eventually we put that idea aside and kind of came back to the thing that we were really most interested in, which is working on programming.

>> So I've got this theory, I don't know, I would actually like to hear your views on it, on why Cursor did so well early on, and it's actually pretty banal, which is, you know, at the time we were surveying the space, and there were a lot of companies doing a lot of different things, and a lot of it was pretty science fiction, like, you know, we're going to create an agent that will be a software engineer. We're going to create a model using this new technique. We're going to do all the things. We're going to rewrite the editor, etc. And one of my theories why early on Cursor did well is you were incredibly focused. You chose VS Code. Copilot had matured the market for a few years at the time, and it was this narrow focus and just a way, way, way better product that did it. And so, two questions. A, do you think this is kind of a legit view on it? And then the second question is, like, how did you decide to maintain focus when everybody else was doing everything else? Because it was the time to build the agents or build the model or

>> Um, yes, I definitely think that there's a lot of truth to that. I think that also there's an important asterisk in that, you know, the story of this company is still yet to be written, and there's so much more to do, too.

>> Of course. Yeah. Yeah. No, to get to this point. I mean, like, the success to this point, I mean, there was just such an updraft. Yeah. Yeah.

>> Um, yes. So, going back to when we were working on the CAD stuff, the cold start problem in that space was much harder than in our space, where to get started on helping people be more productive and building models of stuff that they were going to make in the real world, none of the out-of-the-box models were good at that stuff. There were actually no good 3D representations, no open-source 3D models that had transfer. If you took the existing text-based LLMs and you tried to get the LLMs to be good at CAD, they weren't really. And so much of our time spent working on the CAD idea, in addition to, you know, calls with mechanical engineers where we didn't really understand what they did in their day job, which was obviously a big problem, was spent doing a lot of modeling work and a lot of data scraping work, and we kind of had PTSD from that when we decided to put it aside and work on programming.

And so initially, yes, we were super focused. We were super expedient, and we did hack on hack on hack to just get something out into the world as fast as possible and start to get some momentum. And part of that was, you know, we had some funding, but nothing like the seed rounds of today. And we had four co-founders, and, you know, we still talked about hiring, expanding the team, but I think we were still really learning how to do that. And so, yes, the competitive landscape at the time, it was Microsoft, it was dozens of startups. These startups fit into a bunch of different buckets. There were some that were immediately trying to build big foundation models. There were some that, you know, had highly fluid product ideas of very different changes in people's workflows. And we just tried to get something out as fast as possible. And I remember at the time the commitment device for us was actually the monthly investor update, which probably no one read at the time. But yeah, I think from day one of deciding to work on Cursor, it was a couple weeks to actually have an IDE that we used ourselves, and initially we didn't even fork VS Code, we actually built from scratch. So we built an IDE from scratch that we used ourselves as a daily driver. A couple more weeks to actually get it into other people's hands, and then in the span of, in total, I think a couple of months, we had launched our first beta out to the internet, and immediately it started to get interest from people, and then, you know, that kind of set off the momentum.

>> And just, I mean, specifically while the momentum was building, a number of the people in the same space were broadening very quickly. Like, they'd go to CLI very quickly, or they would integrate with IntelliJ or whatever it was, and you decided not to. I mean, was this super intentional, or was it just, you know, you were getting pull on the right one, like you had enough work to do?

>> Um, yeah, the ideas were intentional in that, you know, we kind of just worked all the time, and so the four co-founders, every day it would be breakfast, lunch, and dinner. What are you going to talk about? You're going to debate endlessly these core strategic questions of do you build an editor or an extension? Do you do anything on the model side of things? And, you know, the other initial product ideas.

>> Build a new IDE. Yeah. Yeah.

>> Um, and yeah, I think that we were really, really intentional about wanting to own the surface. So at the time, it's not super controversial now, but at the time people just thought it was very weird to do an editor, whether it was a fork or not a fork. They said you can't get people to switch their code editor. They're too tied to it. Which we knew was wrong.

>> Um, because we had actually switched to the VS Code ecosystem because of Copilot. We were all like Luddites using command-line Vim. And so we knew that if you built a better mousetrap you could get people to switch, you know, the bar would be high.

>> Um, and then, yeah, we were very, very intentional about, eventually in the future, we want to touch the model side of things, and there's been a whole story of backing into that, and that's actually been a really important product lever for us, but we didn't want to start there. We wanted to just get something out to the world, not touch any of the modeling stuff.

>> Awesome. Okay. So, I told you this anecdote and you said you didn't remember it, but I remember it very well, which is, um, so the early days were about scale, and I've seen a lot of companies over the years. I've been in the industry for 30 years. I've never seen scale like this this quickly, especially with a small team. And I remember one night I got a call from you and you were like, "Listen, we've taken down one of the big clouds because, like, they can't handle our scale," and there was this actually relatively minor service disruption, and then you guys fixed it actually pretty quick and it was fine. But apparently around that time, or so Oscar tells me, someone showed up at the Cursor office and put an iPad on the window that says Cursor's down.

>> So, like, definitely it was to a point where people were noticing. For me it was kind of a shock, because it's kind of this nondescript building, you know, that they found out. So it would be great to hear how you think and the team thinks about handling this much scale, especially because, I mean, you're really at the point that you're, you know, even stressing the platforms you rely on, even though they're some of the largest platforms. I mean, there's nowhere to go.

>> Yeah. Um, yeah, that anecdote is lost within the

>> Of the many, of the many. Yeah.

>> back in the day. Um, yeah, I think that, well, early on, the way we encountered scale was just, it was such a tiny team operating a service that started to grow very fast. And my co-founders are great, but, you know, we're not the most experienced group, if you can't tell, just in terms of years of experience. And so, yeah, very quickly, you know, we had lots of people using the service. There are ways in which, especially with things like, we have our own file sync system, you can think of it as, like, there are kind of two or three different sort of mini Dropboxes within Cursor, where, you know, early on within Cursor there's kind of like a search engine for the AI. And, you know, on the surface it doesn't sound like it should be that complex, but it ends up being kind of annoying to build. And depending on how you built it, it definitely can start stressing the systems that you rely on.
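
For illustration only (Cursor doesn't describe its internals here): one minimal sketch of what a "mini Dropbox" style sync could look like is a client that hashes every file and a server that diffs manifests to decide what to upload. The paths, types, and function names below are hypothetical; the point is that when many clients recompute and ship manifests constantly, the diffing and the uploads it triggers are what start stressing the backing storage and databases.

```typescript
import { createHash } from "crypto";
import { promises as fs } from "fs";
import * as path from "path";

// Hypothetical shape of a sync manifest: relative path -> content hash.
type Manifest = Record<string, string>;

// Walk a workspace and hash every file. Runs on the client.
async function buildManifest(root: string): Promise<Manifest> {
  const manifest: Manifest = {};
  async function walk(dir: string): Promise<void> {
    for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) {
        await walk(full);
      } else {
        const bytes = await fs.readFile(full);
        manifest[path.relative(root, full)] = createHash("sha256")
          .update(bytes)
          .digest("hex");
      }
    }
  }
  await walk(root);
  return manifest;
}

// Server-side diff: which paths has the server not seen with this content yet?
// In a real system, this comparison and the uploads it triggers are what
// stress storage once many clients sync frequently.
function diffManifests(client: Manifest, server: Manifest): string[] {
  return Object.entries(client)
    .filter(([p, hash]) => server[p] !== hash)
    .map(([p]) => p);
}
```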

But very quickly we were getting to a scale when it came to just normal, boring cloud services stuff. And so there was a whole story of, you know, we were running a very, very large Kubernetes cluster, larger than many other companies', and then trying to figure that out on the fly with five total people at the company, and having hiccups and troubles with that. Then we sort of just got a handle on that by making some of the right architecture decisions and growing the team. Then the next big scaling problem that came up was actually just stressing the API providers. And that was less about being very clever technically to get past that scale, and more of a relationship thing, where, you know, I don't think the API providers really knew what to make of us, because it's, you know, these four 20-somethings, and their thing now comprises a really high double-digit percent of their API revenue, and now they're going to have to make capacity planning decisions, maybe financing decisions, to, you know, handle the growth under the hood.

And that was more of just, and I think it's something we're still learning, you know, forging relationships with people. It was also getting very clever about, it turns out these tokens, these API tokens, you can get them for the same model from many providers. There are token resellers that exist out there. And it's strategically helpful, actually, to spread it across multiple providers which have committed contracts. And so we got very good at hunting out all the Sonnet tokens that exist in the world.
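
For illustration only, and not Cursor's actual system: a minimal sketch of what spreading one model's traffic across several providers with committed capacity might look like. The `Provider` shape, quota fields, and `callProvider` wrapper below are assumptions; the idea is simply to prefer whichever committed contract still has headroom and fall back when a provider errors or runs dry.

```typescript
// Hypothetical description of a provider selling tokens for the same model.
interface Provider {
  name: string;
  committedTokensRemaining: number; // headroom left on a committed contract
  callProvider: (prompt: string) => Promise<string>; // assumed client wrapper
}

// Pick the provider with the most committed capacity left, then fall back
// down the list if a call fails or a contract is exhausted.
async function routeCompletion(
  providers: Provider[],
  prompt: string,
  estimatedTokens: number,
): Promise<string> {
  const ordered = [...providers].sort(
    (a, b) => b.committedTokensRemaining - a.committedTokensRemaining,
  );
  for (const provider of ordered) {
    if (provider.committedTokensRemaining < estimatedTokens) continue;
    try {
      const result = await provider.callProvider(prompt);
      provider.committedTokensRemaining -= estimatedTokens;
      return result;
    } catch {
      // Provider outage or rate limit: try the next committed contract.
      continue;
    }
  }
  throw new Error("No provider had capacity for this request");
}
```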

So that was a level of scale that was tricky for us. I'd say right now we do a decent bit of our own training. We do some of our own inference, and so there's now a whole new scale problem there, and making decisions there.

>> Do you think that this converges on, you know, heterogeneous dependency on third parties, or do you think it converges on largely running your own infrastructure? Or, like, have you not gotten that far?

>> For the underlying model inference?

>> Yeah. Yeah. Yeah.

>> No, no, no, not just, just infrastructure in general, like, more and more you're pulling stuff in-house just so that you have control of it.

>> Just for operating our website, desktop app, back end, things like that?

>> Yeah. Um, I think we've been pretty multi-cloud from the start. And so I think we're definitely on a default path to a heterogeneous reliance on multiple providers.

>> We have Databricks, Snowflake; we're on AWS, GCP, and Azure; Vercel for web stuff; we use PlanetScale for databases. And, you know, one of the scaling, kind of boring cloud services, stories was really reliant on our DB, where there was a whole Kubernetes side of things, things like CoreDNS going down, and then there was a whole series of DB stories, where some of the things we're doing are pretty DB-heavy, and eventually we got to a point where, well, usually you should just scale the RDS instance. That works well for a long time. Eventually you run out of that, and then it's like, do you shard the database?

>> And then we switched to AWS's service which claims you don't need to shard the database.

>> Turns out that's wrong.

>> You think of these public clouds as having it all together, but really it's a very small set of customers at the highest level of scale, and they're figuring it out on the fly. And so PlanetScale has been amazing there, where we went from Limitless to PlanetScale.
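
For illustration, and not a description of Cursor's setup: when a single primary database runs out of headroom, one common next step is application-level sharding, where each row's home is derived from a shard key. The connection strings and the workspace-id key below are made up; the sketch just shows the routing decision the "do you shard the database" question alludes to, which managed offerings like the ones mentioned above aim to hide from the application.

```typescript
import { createHash } from "crypto";

// Hypothetical pool of connection strings, one per shard.
const SHARD_URLS = [
  "postgres://db-shard-0.internal/app",
  "postgres://db-shard-1.internal/app",
  "postgres://db-shard-2.internal/app",
  "postgres://db-shard-3.internal/app",
];

// Map a shard key (e.g. a user or workspace id) to a stable shard index,
// so every read and write for that key lands on the same database.
function shardFor(key: string, shardCount: number = SHARD_URLS.length): number {
  const digest = createHash("sha256").update(key).digest();
  // Use the first 4 bytes of the hash as an unsigned integer.
  return digest.readUInt32BE(0) % shardCount;
}

// Example: decide which connection string to use for a given workspace.
const url = SHARD_URLS[shardFor("workspace_12345")];
```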

>> Sam, Sam, are you here?

>> Thank you very much, Sam. We appreciate it. All of us developers, for Sam.

Um, but yeah, no, for us, I think multiple providers are great at different things, and so that's our plan.

>> Um, just quickly before going towards talent. So you've had to balance focus, which you were very good at. Since then you've done a lot of multi-product stuff, right? You did Bugbot, you did the CLI, you're doing infrastructure improvements. To what extent is the decision to do this pretty organic and just obvious, and to what extent do you do prioritization in a more deliberate way? Or maybe just walk through how you think about where to expend R&D resources given everything that you're dealing with.

>> Uh, it's pretty deliberate. We try to say no to lots of things,

>> but I do think we're going to need to be a multi-product company going into the future.

>> I think that there's a big multi-product opportunity in our space, where there's a whole AI coding bundle to be built, and we kind of want to be, for many of our customers, like, the AI coding provider for them. And so far that's really focused on this wedge, which is, you know, the surface that you sit in, the pane of glass that you sit in when you're an engineer going about your day building software, which is the editor. And we think that there's still so much more to do there, and that's the main focus. That's where we spend resources. We do think that the ways in which work is changing within the editor start to affect how teams work together, too. And so we think that presents a big strategic opportunity. It's also just, like, necessary: to have the best editor, the thing is to also have this complement that's, you know, helping teams review and collaborate a little bit more.

>> Um, and so we're intentional about it. It's been, we're still, I think, learning how to do it well. Like, how to give projects like that air cover, how to do cross-sell, where there are really, really big cross-sell opportunities in our space, both from a, like, growth-engineering, PLG, show-them-the-button side of things, and then, you know, enabling the sales team.

>> I will say there, like, many founders underappreciate how tough it is to go from single product to multi-product when it comes to the actual go-to-market. I mean, it's very, very complex.

>> Yeah, and there's a lot we're still learning there, but, you know, very excited by kind of the early results.

>> Awesome. Okay. I'd like to shift towards talent. So I think you have one of the more rigorous and thoughtful hiring processes I've ever seen. Like, I try to reserve a part of my evenings and weekends to help you talk to and recruit people. And before I hop on every one of these calls, I get this incredibly well-thought-out, like, here's where it is, here's what we've done. You know, I mean, I just think there's so much behind this process. So, if you wouldn't mind, can you just walk through how you think about recruiting and how you run your process, and what you've found works and maybe what doesn't work?

>> Yeah. Have your board members do lots of calls until they cry uncle.

>> Prepare them. Take advantage of their time.

>> Uh, um, yeah, how have we thought about recruiting?

>> I think that there are ways in which our process is pretty orthodox. I think that some of the things that might be unique, one is, normally when you're a small company, there's this thing that you do with the first set of engineers where you basically just have people contract with you, and you probably don't do a normal LeetCode-style thing, a normal interview loop. That's what we did. It felt the most natural, because you're kind of getting to the ground truth of, do you work well with the person? And then usually people stop it after a couple of hires. We have, and we've tried to kill it many times internally, I've tried to kill it too, but we still kind of do that, where everyone who gets hired on the eng team and the design team spends two days in office and they work on a project, and it's very free-form. It's not like, you know, you have this whiteboard interview and then that whiteboard interview and your two days are packed. It's: here's a desk, here's a laptop, here are three projects you could work on, here's a frozen older version of the codebase with the dev setup, just go do it. And this has kind of two functions. So one function is, I think it's a really great test that tests for things orthogonal to the normal coding-style interviews that we ask before people get on site, where you're seeing, you know, can they go end to end in the codebase? Are they agentic? Our eng, design, and product are pretty tightly coupled, and so we try to hire product engineers who have product sense. This gives you a sense of that, you know, what would they build if left in a vacuum without a team? And so I think it really gives us a lot of signal on the raw technical skills needed to be successful in our environment.

And, you know, it gives us a sense of, would we want to be around you? Do you want to be around us? And one of the benefits, you know, maybe a sub-point, a third benefit, is it really gives the candidate a ton of information about the company and what it's going to be like to show up on the first day. And I think that's led to, you know, a really, really high chance of fit on their end too, if they say yes. And so that's one of the more unorthodox things we do: we have this two-day onsite, and we've clung to it even though we are over 200 people now.

>> But you don't do this with, like, go-to-market or other

>> We did initially. Uh, so yeah,

a sales guy, like

>> Yeah. So, yeah, no, to hire our first reps, we would give them, we're like, here are inbound leads

>> That's awesome. Have a quota.

>> Yeah, it was a little bit more structured, where, you know, they would do a demo. They would do some, like, mock customer communications,

>> but we would give them access to the real data and have them dig in. I think the very, very first one we did was literally like

>> the rep came in, we showed them everything. We're like,

>> teach us how we should do sales.

>> Uh, but then it started to get more structured.

>> Okay. Awesome. Great. Um, so listen, I think this wave in general is changing a lot of orthodoxy on how you build companies. I mean, it's just a new super cycle, like, you know, we're questioning everything. You're definitely on the forefront of that. I mean, you've got, you know, relatively, I would say, junior folks running very large orgs, and it's working out incredibly well. Another thing that you're doing, I would say almost to an extreme amount, is M&A. Like, you've been very, very good at doing these kind of tuck-ins for a two-year-old company, right? Like, I mean, clearly a lot of private companies acquire companies, but you've done a great job of that. Would you be open to sharing how you think about this? I mean, like, the adage pre-AI was, like, startups should never buy startups, and it's actually been hugely successful, not just with Cursor but across the board. And so I think it'd be great to hear how that's worked for you. Any lessons learned?

>> Yeah, I think that so far for us, it's been consistent with an approach of: do anything possible to get the most talented people.

>> Yeah. And so, you know, early on, as part of us growing the initial 10 people on the team, we did crazy recruiting stunts like, you know, flying... Yeah, some of these, you know, are kind of normal and people do them, but a lot of things like flying across the world to the person. Oh, yeah. After they say no. And then, when they say no after you fly across the world, you make up a dinner with researchers that's happening in SF that they should totally fly to and come to six months later, so that you can reignite the conversation and convert them to be an engineer, and then they end up being, you know, one of the best people on the team. That happened. But so, yeah, we've really tried to get the most talented people possible, and I think that sometimes, you know, either conveniently or inconveniently, those people are working on companies. And that's where mostly it's come from. It's come from the talent side of things. I think increasingly in the future, you know, with the whole suite of products that are possible in our space and with the benefits we think of bundling those together, we're especially interested, earlier than most companies at their maturity, in using M&A as a strategic tool also to start to build out a couple of, like, GM-type structures within the company and add on complementary products. And yeah, that's something where, you know, for each new product that becomes possible in our space, we might try doing it internally. We might look to see what the market has to offer, and if there's really the right fit with the right set of founders, you know, we would love to join up with them. Um, yeah, that's a little bit about how we've thought about it so far.

The first real M&A we did was Supermaven, and so, as just one concrete example, this was a team of five people. It was started by the person who had built the Copilot before GitHub Copilot, which was Tabnine.

>> Yeah.

>> Um, and he was also a researcher at OpenAI, had done a bunch of work with John, um, Thinking Machines, and Jacob's fantastic. And he was working on, you know, we were working on autocomplete models, he was working on autocomplete models, the technology we were doing was very complementary, and we just really built a relationship, stayed close over many months, and it was really us approaching him and being kind of aggressive.

>> All right. So, I have to wrap it up now, but I want to ask you one more question.

>> Sure.

>> Okay. So, and this actually came from one of your candidates. I just thought it was such a clever way to phrase it. He's like, you know what? Cursor is disrupting software, which we all agree. I mean, this whole AI wave is disrupting software. And he said, but Cursor is written in software. So, to what extent is this ouroboros somehow, you know, ushering in your own disruption? And I thought it was nicely philosophical. So I'd love any thoughts you had, because my answer to him was, like, well, I'd rather be the one disrupting than not, but, you know, that felt like a very VC thing to say.

>> So, yeah, wait, and so is this, like, the Cursor doom narrative? Like, if Cursor is so good, then someone could

>> No, this is someone that was super excited, it was a very philosophical person, was super excited to join, and it was, like, basically, listen, if you're focused on building the disruption,

>> but

you know, the foundation of the product is what's being disrupted, what does that actually mean?

>> Um, yeah, I think maybe two things. One is, I think despite the headlines, despite how much demand there is in this market and how much software has changed over the last few years, it's so far away from being 100% automated.

>> It's so inefficient. Building software in a professional setting, especially with, you know, anywhere from dozens to tens of thousands of people, it's just

>> It's really easy at an executive level to underestimate just how far away we are from the limit of automating software. So I think that there's a really, really long way to go. There's a really long, messy middle. Yeah. And then, yeah, I think that one of the key challenges facing the company in the future, and that we've faced in the past, is we are in a market that's, like, you know, had an iPod moment, and it's going to have an iPhone moment, and another iPhone moment. And I think that there have been a couple of those so far. I think that there are definitely more in the future, and we've tried to build a company to be a place that can continually build those things, because if we don't, you know, we're kaput. And I think that it's actually, you know, it's a challenge. It's one of the nice things about the physics of the space, too, because I think it's one of the things that makes it pretty tricky for a Microsoft to really compete in a big way.

>> Um, but yeah, definitely, definitely a challenge.

>> Awesome. Great.

>> Well, thank you so much. Please give Michael a hand for coming and doing this. Thank you so much for coming. Thanks for your leadership.
