
Let the LLM Write the Prompts: An Intro to DSPy in Compound AI Pipelines

By Databricks

Summary

Key Takeaways

- **Prompting is the new regular expression**: Just as regular expressions can lead to complex problems when applied to messy data, using prompts without a structured framework can result in managing two problems instead of one. Early adoption of prompting is a gift for rapid prototyping, but it quickly becomes a curse in production pipelines. [00:15], [00:47]
- **Prompts are opaque and brittle in pipelines**: Prompts become difficult to manage and understand when embedded in applications and pipelines. Their performance varies significantly between models, and they can accumulate "hot fixes," becoming a catch-all for bugs, making them opaque and fragile. [01:54], [02:43]
- **DSPy decouples tasks from LLMs**: DSPy allows developers to define tasks programmatically rather than writing prompts directly. This enables easier management, optimization, and portability across different LLMs, shifting the focus from prompt engineering to pipeline programming. [11:14], [11:44]
- **Programmatic task definition simplifies LLM integration**: DSPy uses signatures and modules to define tasks, letting developers specify inputs, outputs, and prompting strategies. This programmatic approach, exemplified by a place-matching pipeline, drastically reduces the code needed to interact with LLMs and eliminates manual prompt parsing. [12:41], [18:42]
- **Automated prompt optimization boosts performance**: DSPy's MIPRO optimizer can automatically generate and test new prompts against evaluation data, significantly improving performance. For instance, a conflation task saw accuracy jump from 60% to 82% without manual prompt writing. [20:01], [25:06]
- **Eval data is gold, prompts are worthless**: The true value in LLM pipelines lies in evaluation data, not the prompts themselves. DSPy leverages eval data for optimization, enabling easy model portability and ensuring that pipelines remain effective as new models and strategies emerge. [28:45], [29:41]

Topics Covered

  • Prompts are a gift and a curse.
  • There is always a better model tomorrow.
  • We are moving from prompting to programming.
  • Let a large model write your prompts.
  • Prompts are worthless, your eval data is gold.

Full Transcript

So I've been thinking about this quote a lot over the last 18 months. Anyone who has worked in data pipelines, or worked with data where you're interpreting human-created data elements, is familiar with it: some people, when confronted with a problem, think, "I know, I'll use a regular expression." Now they have two problems. Regular expressions are brittle. They're fragile. You have to test them against everything. They're a big pain in the butt. And I've been thinking about this a lot because you could reapply it incredibly easily and say, "I know, I'll use prompting," and now you have two problems.

Prompting has been a game-changing element, probably one of the things responsible for the large success and embrace of LLMs over the last three years. Prompts are amazing, but they are a gift and a curse. They're a gift because anyone can describe programs, functions, and tasks, especially the domain experts you need to start coding your eval data and making sure your pipelines are doing the things you want them to do. Prompts can be written quickly and easily. I was advising a group of students who were building applications on top of Overture Maps data, and what's great about that approach is that everything goes into the LLM first; then you optimize it out into more repeatable and scalable functions as you develop the pipeline. It lets you get started so quickly. The other nice thing about prompts is that they're self-documenting: you can read the prompt and have an idea of what the program is supposed to do.

But prompts are also terrible, and they're especially terrible when they're buried within applications and pipelines. They perform differently for different models. This is an incredibly under-discussed facet of prompting: we can take a benchmark prompt that a model has been trained against with reinforcement learning, change the wording just slightly, and the score falls through the floor.

It is not a discrete one-to-one interaction, and you can get varying results from just slight changes if you change up your model. The second issue is that prompts become a catch-all. I was recently analyzing the system prompts extracted from Claude, both 3.7 and 4, and one of the things I found is that the development cycle within Anthropic seems to be: they just start putting in hot fixes, little one-liners that take care of a buggy problem, and then when they get to retrain the model into 4, those hot fixes are gone. The prompt becomes a catch-all. And as your application or pipeline develops, your prompts will become a catch-all too. They'll have all these weird statements, and you won't really understand why they're there. How many of you have seen a comment in a codebase that says, "Don't delete it"?

Like, that's the prompting equivalent of this. The other thing that really bothers me about prompts, especially when they're stuck inline with code as just a formatted string, is that they contain multiple components, but you can't see those components. They're completely opaque. We do the same tasks all the time in prompts, but a prompt still just appears as a block of text, going on in some cases for pages.

Let me show you what I mean by these multiple components. I'm going to use an example prompt as our table of contents. This is from OpenAI's prompting guide: they give the example prompt that performed best against SWE-bench. I created a visualization: I read the prompt, went through it, marked it up, and organized it into six components. You've got the task, which is only 1% of the whole prompt. You've got the chain-of-thought instructions, which end up taking about 20% of the prompt. You've got detailed context and instructions about how you should be answering these questions and what you can do to answer them. You've got the examples, some few-shots in there. You've got tool definitions; in Claude's system prompt, it's like 70% tool definitions, they just fill it out. And then you've got a ton of formatting instructions, because again, this is git commits, so you have to be very, very explicit and repetitive about how to properly form a git commit. (Something I've been tracking: I have a theory that a lot of benchmark scores going up is just better formatting due to reinforcement learning, but we can talk about that later.) And then you've got this "other" at the end. But here's the problem. This is not what the prompt looks like when it's in your code.

It looks like that.

And that goes on for pages. It is opaque. We think of prompts as this thing anyone can understand, but when you're scanning one in a codebase, it's terrible. Actually, this is a little unfair: I didn't turn on syntax highlighting. There, with syntax highlighting turned on. There it is. Clears it right up.

So this is the problem we're going to address today. My name is Drew Breunig. I have a background in cultural anthropology and computer science; I help humans understand data and data understand humans. I've built award-winning quantified-self apps. I ran data science and product at a startup called PlaceIQ for about a decade, using petabytes of movement data to understand human behaviors in the real world, which was bought by Precisely. Today I consult with teams to help them explain their technology, and I spend a good chunk of my time with the Overture Maps Foundation.

Now, I'm going to talk about the Overture Maps Foundation because it's going to be our example project for today. We're going to make a real pipeline for Overture Maps. So, how many of you here have heard of the Overture Maps Foundation?

Two, three. Okay, that's great; I expected zero. I love the Overture Maps Foundation because when I tell people what it is, they say, "Well, why don't I know about that, and why am I not using it?" It's kind of this incredible thing. The Overture Maps Foundation produces a free, open geospatial base and reference layer. It tells you about streets, buildings, places, addresses, everything. And this is not just some podunk open project I've been working on: we've got almost 40 companies building this. It was founded by AWS, Meta, Microsoft, and TomTom, and Esri has joined. We've got incredible companies in here, from Uber to Niantic to TripAdvisor. One of the amazing things is that this data is now being deployed in a lot of these companies' maps, so billions of people are using this map information, and you could go use it today. It's free. It's unbelievable. And it has great licenses.

Today we're going to be talking about points of interest, or places. These are things like businesses, schools, hospitals, parks. This is a data layer with a really great license; it doesn't need share-alike or anything like that. It's primarily sourced from Meta's and Microsoft's data, so this is Facebook and Microsoft POI data that you can use. But we take multiple data sets.

I think we have four or five places data sets now coming together, and we have to do what's called a conflation. If you've worked in geospatial, you know what a conflation is: it's a giant pain in the ass. It's the act of merging many geospatial data sets together so that you join the records that point to the same business in the real world. Now, this is hard, and the reason it's a pain is that place data is created by humans. There's bad data entry. You would never believe how many different ways there are to spell Walmart. It is unbelievable; I've seen them all. You also end up with similar regional names. How many of you are from, or have been to, Atlanta?

Okay. Do you notice how everything is called Peachtree there? It's unbelievable. So now you're trying to join one Peachtree gas station with another Peachtree gas station down the block, and you're supposed to know those are two different things. At PlaceIQ, my favorite example was two BP gas stations that were kitty-corner from each other. They had the exact same listing, just a slightly different address, off by a few latitude-longitude points. Also, the owners hated each other. It was near our office, so we couldn't get it wrong. And then there are the bad addresses and geocoding. Address formatting is an incredibly hard problem to solve, and you don't want to be doing it.

So these are all the problems, and this sounds perfect for LLMs: it's human-created, messy data. Now, the problem is that we have 70 million places, at last count, in our pipeline, and we have to compare them against each other. You're looking at a monthly pipeline (and we're going to be deploying bi-weekly soon enough), and we can't put everything through an LLM. Cost, speed, reliability: it just does not make sense. I'm sure, because this is a Databricks conference, I can say things like "compound AI" and you know what I'm talking about. So we use compound AI pipelines: spatial clustering, string similarities, embedding distances. There are lots of reasons. Most matches don't need an LLM. Most matches we can figure out ourselves. But then we bring the LLM in.
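The triage step described here, cheap signals first, with the LLM reserved for the ambiguous middle, can be sketched with nothing but the standard library. The thresholds, record fields, and distance math below are illustrative assumptions, not Overture's actual matcher:

```python
import difflib
import math

# Hypothetical thresholds -- tune against your own eval data.
NAME_SIM_AUTO_MATCH = 0.92   # names this similar: accept without an LLM
NAME_SIM_AUTO_REJECT = 0.35  # names this different: reject without an LLM
MAX_METERS = 150             # places farther apart than this aren't compared

def name_similarity(a: str, b: str) -> float:
    """Cheap string similarity on normalized names, in [0.0, 1.0]."""
    return difflib.SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def meters_apart(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Flat-earth approximation; good enough for a nearby-points filter."""
    dy = (lat2 - lat1) * 111_320
    dx = (lon2 - lon1) * 111_320 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dx, dy)

def triage(p1: dict, p2: dict) -> str:
    """Return 'match', 'no_match', or 'ask_llm' for the ambiguous middle."""
    if meters_apart(p1["lat"], p1["lon"], p2["lat"], p2["lon"]) > MAX_METERS:
        return "no_match"
    sim = name_similarity(p1["name"], p2["name"])
    if sim >= NAME_SIM_AUTO_MATCH:
        return "match"
    if sim <= NAME_SIM_AUTO_REJECT:
        return "no_match"
    return "ask_llm"  # only this slice pays for an LLM call
```

Only the pairs that land in `ask_llm` are sent on to the model, which is what keeps 70 million places affordable.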

Now, one of the problems with this: you saw how many companies are part of Overture, right? Overture's matcher is maintained and edited by multiple people from multiple different companies. They drop in sometimes, and then they might get transferred off to a different project. So having one big opaque prompt is actually a terrible idea for us. We strive to be cloud agnostic, we strive to be platform agnostic, and we strive to boot everything up really easily. We also may want to run it on different models over time. There are lots of things we care about. So how do we solve this problem? This is why I have become such a fanboy for DSPy.

I have no idea if I'm pronouncing that correctly. Anyway, this is a framework that makes everything easier for us, and I'm going to show you why. But I think this is the crux; this is how I describe to someone why you should use this framework. Tomorrow there are going to be better prompting strategies. There is going to be a new paper released on a new optimization methodology, and in two weeks there's going to be a model that's better and cheaper than the one you're using today. You should not tie yourself to any of these things. And that is why I love DSPy: DSPy decouples your task from the LLM.

It lets you define your task programmatically. And this, I think, is the great irony of these things: if you read system prompts and prompts used in pipelines, you start wireframing them down. I don't know if you're like me, but I start thinking about how I would program them. We're moving from prompting to programming when we look at the pipelines and applications we design. So why don't we skip all the prompting and just write the programming? It's much easier. We write tasks. We don't write prompts.

We can use DSPy to optimize our function against our eval data, to make sure our prompt is accountable and performing. And we can embrace model portability: when a new model gets released, we switch over, we run our optimization against our eval, and we're off to the races. That's it. Super simple. So let's go back to our OpenAI SWE-bench example; we're going to use it like a table of contents. Imagine we want these different components in our prompt for our conflation challenge. Let's start with how we perform the task, and the kind of prompting instructions, using DSPy.

Well, DSPy makes prompts from structures. We can quickly spec out what we need and then manage our prompts. We need two things. We need a signature; a signature is just a fancy word for "this is what I want the inputs and outputs to be." And then a module; a module is basically "all right, what prompting strategy do I want to use to execute this input-output?"

Now, signatures can just be strings. I could write "input -> output". I could write "baseball player" in and ask: are they a pitcher? I could write "restaurant", the question could be "is it fast food?", and I could type the answer as a boolean. It can just be a simple prompt; it could be anything. And we can also define a signature as a class with multiple inputs and outputs, and that's where it gets really cool.

I'll show you that in a second. Now, signatures tell you what you want; modules tell you, basically, how you're going to get it. They're strategies; think of them as prompting strategies. The base one is Predict, which is just "hey, turn this into that," but there are multiple different ones, like ChainOfThought, which is "I want you to think step by step to get this answer." ReAct allows you to use tools. A module can retain trainable parameters, and you can start stacking modules together. So you might have a multi-stage pipeline where you want to do several different things with the LM, and we can build that all as a single module. That's beyond the scope of this talk, but it can be done.

Here's how you get up and started: roughly 12 lines, including newlines and print calls. We're not doing code golf here, quite yet. We just connect our LLM. Behind the scenes, DSPy uses LiteLLM, so you can use OpenAI, you can use Anthropic, you can use vLLM, you can use SGLang. I use Ollama when I'm just doing crazy stuff at home. I then define my signature in that Predict call. See where it says Predict on line 8? That's the module. Predict is the module, and it's taking my signature, which is: I'm going to give it a question, and I want an answer. Once I have that little combined model, I can run that QA module, feed it a question, and get my response. Up and running really quick.

Oh, I forgot I had highlighting on. Sorry, you guys. So, what is it doing behind the scenes? There are no prompts here; you don't have any multi-line prompts. Well, when you give it a signature, "question -> answer", and you put that in a Predict module, it basically turns it into a system prompt. And this is what it says: it gives you all the things, tells the model how it wants the output formatted. You never have to worry about this. You don't have to see it, you don't have to touch it. You can see it if you want, but you don't have to worry about it. And then when I call this with "What's the capital of France?", it puts that in a user prompt alongside that system prompt, and I just get the answer back.

Now, the cool thing about this is that it's modular. Why is it called a module? Because it's modular. I can switch out Predict; let's go to ChainOfThought. I change a few characters, and all of a sudden this is my system prompt. Notice it has reasoning in there, and now I get reasoning back as a separate field. I didn't have to do anything; I just changed one line. So, I did have to do something: I had to hit a few keys. And there are lots of modules. There's MultiChainComparison. There's majority voting. There's ReAct, Refine. Again, there will be a better prompting strategy tomorrow. Don't bother with trying to program this into your prompt. Just program it into the program.

So, now let's define a signature class. This is the beginning of my conflation pipeline, or at least my LLM call. Now, there's one typo in here I'll show you, but I'm basically just using some Pydantic, or typed, fields. I first define my Place: it's got an address, which is a string, and a name, which is a string. And then I have my signature, my PlaceMatcher class. There are a couple of things I want to call your attention to. First off (and this is hopefully my only typo in this deck), the docstring at the top shouldn't say "verify the text is based on the provided context"; it should say "review two places and determine if they're the same place in the real world." The docstring actually gets used in your prompt. So if you want to sloppily add something into your prompt and make sure it gets included, some business logic the model isn't going to know off the bat and that no eval data is going to teach it, just put it in the docstring. It's a cheap but dirty trick. Then, I can add descriptors to my fields. I have my two input fields, place_1 and place_2: self-explanatory, I don't need to say anything else. Then I've got my match, which I've typed as a boolean, and I give it a description: "do the two places refer to the same place?" That will again get carried forward into my prompt.

And I've also asked it to give me a match confidence: low, medium, or high. Off the bat, that's not going to deliver a lot of value. But because I have eval data where I know what low, medium, and high confidence look like (because I've had human labelers), it's going to get valuable as I start to train my model. And that's it. That's literally the entire thing; that's my entire class, and everything's there. Now, this is how it works: I can take two places, which I can define inline in the call to the matcher, and this is what I get back: a Prediction object with a boolean match and a "high" match confidence. You know what's cool about this? I didn't have to parse any JSON. I didn't have to yell at the LLM to format it properly. It just worked; it just came out. I didn't have to interpret any strings. I didn't have to use any regexes. Again, I don't want two problems. And right there, I've taken care of the task, the chain-of-thought instructions, and all the formatting, with basically 12 lines of code. And that's heavily typed; you could do it in like one and a half.

Now, a cool thing, just because I want to check the box on our table of contents: you can add tools if you start to use the ReAct module. This example is defined on the DSPy website; I just ripped it off.

So we have two functions: evaluate math expressions in Python, and search Wikipedia. We can provide those as tools, and it just works. I don't need any tools in this conflation demo, but I'm still going to check it off my table of contents. Now, what's left is all that detailed context and instruction. These are the things that get you the results you want. This is where you put your hot fixes; this is where you put all your conditional statements. So how do we get that in DSPy? It all comes from your eval data. It all comes from being able to optimize our prompt.

So, to optimize a prompt in DSPy, you really only need three things, or two if you don't have any training data. The first is our validation function: our reward function, our metric. And this metric is really simple. I'm basically just taking the prediction and the example, which I've had humans label, and seeing if they match. That's it. This could be very complex in other pipelines: I've broken apart the address and given points for getting close; I've attempted to put some metrics behind high, medium, and low. You can even do cool things where you take a giant model and use it as an LLM judge within this metric, which again is a great way to get some transfer learning in here. Then I set up what's called MIPROv2. I'll explain it in a second, but that's our optimization function. And this is what we do: we optimize our matcher, and then we save our optimized prompt.

Before I get into all of the details here, I want to call out a few things. First off, I've now created a new LM. Why did I create a new LM? Because I'm actually using a really small model here; I think I'm using Qwen at 0.6 billion parameters. And I'm going to use a larger model here because, in MIPRO (we're going to get into this in a second), we use a large language model to try writing better prompts. So I'm going to use a large language model, GPT-4.1, to be my prompt writer, which is great because I'm not going to run it against my entire pipeline; I'm just using it to try to improve the prompt I give Qwen 0.6B.

And we'll talk about that in a sec. Then, finally, I can save it after it's all done. This again is great, because it just saves as a JSON file. I can load it in; I can version it. How many people version their prompts without an external tool? It's embarrassing to me how many prompts I see just slugged into some Python file and managed through git.

So let's talk about MIPRO real quick. This is the optimizer we're using, and I think it's one of the magic tricks of DSPy; it's just an optimization function. MIPRO is an optimization technique that generates examples from your pipeline. It runs conflation on a bunch of the training data I've given it, runs it through the LLM, and looks at the traces. It then provides those examples, those traces, plus a description of my task and the function, to (in this case) GPT-4.1 and says, "Hey, this is what the model's doing. This is what it's trying to do. Here are some examples of what it does. Can you write me a better prompt for this?" And it will even give the model some prompting techniques, all the silly things we used to do back in the day: threaten the model's mother, offer to pay it $100 million if it gets it right, give it a fancy-sounding occupation. So it generates all these candidate instructions. You don't have to write them, and you can specify how many it writes. You could have it write eight candidates. You could have it write five candidates. You could have it write 30 candidates.

And then it does, basically, a Bayesian search, comparing all of those different prompts it wrote against the eval data to see which ones perform best. It finds the best parts of those prompts, merges them together, tests against the full run, and comes back with an optimized prompt. It's also going to figure out: hey, should we put some examples in here? How many examples do we need? Should they be synthetic examples or actual examples? And you hit run, and it runs for a while, and behind the scenes you see your score go up. This is what it does; it's hard to read, so I'll turn it around. This is the original prompt (there was some framework around everything, but it started with): "Determine if two points of interest refer to the same place." I ran this on my eval data with Qwen 0.6B, I think, with GPT-4.1 as the prompt writer, and at the end of an optimization cycle, this is the prompt I got back: "Given two records representing places or businesses, each with at least a name and address, analyze the information and determine if they refer to the same real-world entity. Consider minor differences such as case, diacritics, transliteration, abbreviations, or formatting as potential matches if both the name and address are strongly similar. Only output true if both fields have a close match," blah blah blah. This is a great prompt, and I didn't write it. And not only that, I know how well it performs. I went from 60% correct against my eval to 82% correct against my eval, without writing a single prompt.

That was 14 lines of code, which is everything I have over here, and it managed to produce an excellent 700-token prompt. The code is easily readable. The prompt can be versioned, tracked, and loaded. I've got a team from 35-plus companies who are all touching this, who can look at it and know exactly what it does immediately, without having to read pages of prompts. And if this wasn't cool enough: at Overture, we strive to be cloud agnostic.

So, it's a little awkward. Overture is one of my favorite projects because it's a great public-private partnership, and you've got the biggest companies in the world. We've got, what, two cloud platforms as members, Microsoft and Amazon. We've got, I think, three different model builders between Microsoft, Amazon, and Llama. And so there's a lot of weird: well, do you want to run this on AWS hardware? Do you want to run it on Azure? Do you want to run it on what have you? Well, the cool thing about this is, again: change one line in my code, run the optimization, and I can change the model. With my original pipeline, I went from 60% to 82%. And Qwen was great because it's 0.6B; it's very small, it's lightweight, it runs really well. But if Meta comes in and says, "No, you've got to use Llama, because we want to get a Llama case study out there," well, great: we run the optimization and go from 84 to 91. Microsoft says, "Actually, we want to try the reasoning variant of Phi-4." Let's do that: 86 to 95.

There's always a better model tomorrow. And each of these prompts was different. That's the big takeaway. That's kind of crazy. Do not build a prompt and invest countless hours in something that is going to get thrown away when a new model comes out.

So where could you go from here? If you wanted to take this pipeline further, you could try new optimizers. I know the DSPy team is going to be talking about some of their new optimizers that have been in beta; they're going to be releasing DSPy 3.0 later this week. SIMBA is the new MIPRO; it's very cool, and they've even got some RL options in there. You could utilize fine-tuning; DSPy lets you do fine-tuning, with SGLang and other things. We at Overture have fine-tuned our models because, again, it makes sense: we want a tiny model, we want it to go lightning speed, and we want to be able to run it across, you know, petabytes of data. You could build multi-stage models. I have had success in conflation with another model that just breaks apart the address into components: given an address string, give me the street number, give me the street name, give me the what-have-you. Addresses! I could spend two hours talking about how bad addresses are; that might be part one. If I then build a little DSPy module with my address breaker-downer and my conflator, I can optimize them at once, not separately. It's incredible. And then you can start to incorporate tools.

We didn't even touch those, but they're a really cool thing. So remember: DSPy decouples your task from the LLM. Write tasks, not prompts. Optimize your functions using DSPy; again, better optimizers are going to come along. It lets you focus on two things: the task you want to do, and your eval data. Everyone here has eval data, right? It's the most valuable thing we can have. Prompts are worthless; eval data is gold. The model is kind of worthless too; it's all about your eval data. And then you can embrace model portability: you can test all the models, you can test everything, incredibly well.

So, you should try DSPy today. I'm going to give you two bits of homework. The first is to get started with DSPy today. It's incredibly easy to get started with, and it lets you go fast. Get up and running with DSPy: define your task, don't worry about exhaustive prompts, don't worry about formatting. I first got into DSPy because I was just tired of extracting structured data from LLM output, and DSPy took care of that for me. Great. The second thing is that you can grow with DSPy as you build more mature pipelines. You can involve multiple people, and they can read your code better; you don't have to have these magical incantations.

Get good with DSPy by bringing your eval data and doing the optimization. Again: a 60-to-82% gain that cost like a dollar of credits. And then stay good with DSPy: run that eval, build your eval data set constantly, and always check out the new models. Now, your second bit of homework today is to write a signature in DSPy. Go write a signature; just try it. It's incredibly easy, incredibly cool. And then I want you to check out Overture data. If you use geospatial data at all in your stacks, or even just want to query where addresses are, or where neighborhoods are, or cities, roads, whatever, this is the best data set to get started with. It's in the Databricks Marketplace, maintained by a company called CARTO. You can also go to overturemaps.org and check it out. It's amazing what an incredible resource it is, and it's all in modern, cloud-native formats like Parquet. You don't have to download the entire OpenStreetMap planet, then do an extract and figure everything out; you can query it with DuckDB and get everything you need. And then check out my writing: I like to try to explain technology in a way everyone can understand, so we can actually have productive conversations that are not clouded by hype or fear or anything like that. So with that, thank you very much.

[Applause]
