Concurrency is not Parallelism by Rob Pike

By gnbitcom

Summary

## Key takeaways

- **Concurrency ≠ Parallelism**: Concurrency is a way to structure a program by composing independently executing processes, while parallelism is the simultaneous execution of those processes. Concurrency is about dealing with lots of things at once; parallelism is about doing lots of things at once. [02:13], [02:35]
- **OS Drivers: Concurrent, Not Parallel**: An operating system manages mouse, keyboard, display, and network drivers as independent concurrent things inside the kernel, but with only one processor, only one runs at a time. Concurrency models these IO devices without needing parallelism. [03:10], [03:21]
- **Add Gopher, Boost Speed**: In the book-moving design, adding a fourth gopher to return empty carts increases total work but makes the system run faster through better concurrent management of pieces. This concurrent composition can outperform simpler parallel scaling. [07:44], [08:18]
- **Gophers Model Web Servers**: Replace the book pile with web content, gophers with CPUs, carts with networking, the incinerator with a web proxy, and you have a web serving architecture. Concurrency naturally expresses proxies, buffers, and scaling. [13:01], [13:34]
- **Goroutines: Cheap Threads**: Goroutines are lightweight, multiplexed onto OS threads dynamically, much cheaper than threads; a production Go program handled 1.3 million goroutines with 10,000 active. They enable massive concurrency without scheduling worries. [15:01], [17:25]
- **Channels + Select Enable Scalable Load Balancer**: Using channels for typed communication and select for multi-way coordination builds a realistic load balancer distributing work to least-loaded workers with no locks, scaling to millions of requesters and workers. The design is correct and efficient regardless of parallelism. [25:37], [27:16]

Topics Covered

  • Full Video

Full Transcript

If you looked at the programming languages of today, you'd probably get the idea that the world is object-oriented. But it's not; it's actually parallel. You've got everything from the lowest level, like multicore machines, up through networking, all the way up to users, planets, the universe: there are all these things happening simultaneously in the world. And yet the computing tools we have are really not very good at expressing that kind of worldview. That seems like a failing, but we can fix it if we understand what concurrency is and how to use it.

I'll assume most of you have at least heard of the Go programming language; it's what I've been working on at Google for the last few years. Go is a concurrent language, which means it has some things in it to make concurrency useful: the ability to execute things concurrently, the ability to communicate between things that are executing concurrently, and a thing called a select statement, which is a multi-way concurrent control switch. If that doesn't make any sense to you yet, don't worry.

When we announced Go, about two years ago, all these programmers out there said, "Oh, concurrent tools, I know what to do: I can run stuff in parallel, yay." But that actually isn't true; concurrency and parallelism are not the same thing. It's a commonly misunderstood problem, and I'm here to try to explain why, and to show you that concurrency is actually better. What would happen to the people who were confused was that they'd take a program, run it on more processors, and it would get slower, and they'd think, "This is broken, it doesn't work, I'm going away." But what was really broken was the worldview, and I hope I can fix that.

So what is concurrency? Concurrency, as I'm using it and as it's intended to be used in the computer science world, is a way to build things: the composition of independently executing things, typically functions, though they don't have to be. We usually express those as interacting processes, and by process I don't mean a Linux process; I mean the general concept that embodies threads, coroutines, processes, the whole thing, in the most abstract possible sense. So concurrency is the composition of independently executing processes.

Parallelism, on the other hand, is the simultaneous execution of multiple things, possibly related, possibly not. If you think about it in a general, hand-wavy way, concurrency is about dealing with a lot of things at once, and parallelism is about doing a lot of things at once. Those are obviously related, but they're actually separate ideas, and it's a little confusing to think about them if you don't have the right toolkit. One is really about structure (concurrency), and one is about execution (parallelism), and I'll show you why those are important. Concurrency is a way to structure a thing so that you can maybe use parallelism to do a better job, but parallelism is not the goal of concurrency; concurrency's goal is a good structure.

Here's an analogy you might be familiar with. An operating system might have a mouse driver, a keyboard driver, a display driver, network drivers, and whatever else, and those are all managed by the operating system as independent things inside the kernel. Those are concurrent things, but they aren't necessarily parallel: if you only have one processor, only one of them is ever running at a time. So there's a concurrent model for these I/O devices, but it's not inherently parallel, and it doesn't need to be. A parallel thing, by contrast, might be something like a vector dot product, which you can break down into microscopic operations that you execute in parallel on some fancy computer. Very different idea; not the same thing at all. To make concurrency work, though, you have to add the idea of communication, which I'm not going to focus on too much today, but you'll see a little bit of it.

Concurrency gives you a way to structure a program into independent pieces, but then you have to coordinate those pieces, and to make that work you need some form of communication. Tony Hoare, in 1978, wrote a paper called "Communicating Sequential Processes," which is truly one of the greatest papers in computer science. If anything out of this talk sinks in, it should be that you should go home and read that paper; it's absolutely amazing. Based on it, a lot of people with lesser minds have followed and built his ideas into tools and concurrent languages: Erlang is another great one, and Go has some of these ideas in it too. But the key points are all in that original paper, with a couple of minor exceptions which I'll come to.

But look, this is all way too abstract. We need gophers, so let's get some gophers going.

Here's a real problem we want to solve. We have a pile of ancient, obsolete manuals, say the C++98 manuals now that C++11 is out, and we don't need them anymore; the point is we've got to get rid of them, because they're taking up space. So we have a gopher whose job is to take the books from the pile and move them into the incinerator. But with only one gopher it's going to take a long time if it's a big pile, and gophers aren't very good at moving books, although we've given him a cart.

So let's put another gopher into the problem. Except that by itself won't make anything better, because he needs the tools too; this is kind of pointless. This gopher needs not only the ability to be a gopher, but also the tools to get the job done. So let's give him another cart. Now that's going to go faster: we're definitely going to move books quicker with two gophers pushing carts. But there may be a little problem, because we're going to have to synchronize them; they can get stuck at the incinerator or at the book pile, getting in each other's way as they run back and forth. So they're going to need to coordinate a little bit. You can imagine the gophers sending each other little messages saying "here I am, I need space to put the books in the incinerator," or whatever it is; you get the idea. This is silly, but I want to make it really clear: these ideas are not deep, they're just good.

Okay, well, how do we make them go faster?

We double everything: we put in a second gopher, and we double the piles and the incinerators as well. Now we can move twice as many books in the same amount of time. That's parallel, right? But think of it instead as the concurrent composition of two gopher procedures moving books. Concurrency is how we've expressed the problem: this gopher guy can do this task, and we parallelize it by instantiating more instances of the gopher procedure. That's called the concurrent composition of processes, or in this case, gophers.

Now, this design is not automatically parallel, because sure, there are two gophers, but who says they both have to work at the same time? I could say that only one gopher is allowed to move at once, which would be like having a single-core computer, and the design is still concurrent and correct and nice, but it's not intrinsically parallel. The parallelism only comes in when I can make both gophers move at once: having two things executing simultaneously, not just having two things. That's a really important model.

Once we've decided we understand how to break the problem down into these concurrent pieces, we can come up with other models. Here's a different design. Now we've got three gophers in the picture, with the same pile of books and the same incinerator: a gopher whose job is just to load the cart, a gopher whose job is just to carry the cart (and presumably return the empty one), and a gopher whose job is to load the incinerator. With three gophers it's going to go faster, but maybe not much faster, because they're going to get blocked: the cart with the books will be in the wrong place, or time gets spent bringing the gopher running back with the empty while nothing useful is getting done with the cart. So let's clean that up by having a fourth gopher return the empties.

Now, this is obviously silly, but I want to point out something fairly profound going on here. This version of the problem will actually execute better than the previous one, even though we're doing more work by having another gopher running back and forth. Once we've got this concurrency idea, we're able to add gophers to the picture and do more work, yet make it run faster, because the concurrent composition of better-managed pieces can actually run faster. It's pretty unlikely that things will work out just perfectly, but you could imagine that if all the gophers were timed just right, and the piles were just right, and they knew how many books to move at a time, this thing could keep all four gophers busy at once and in fact move four times faster than our original version. Unlikely, but I want you to understand that it's possible.

So here's an observation that's really important, and kind of subtle: we improved the performance of this program by adding a concurrent procedure to an existing design. We actually added more things, but the whole thing got faster. If you think about it, that's kind of weird; it's also kind of not weird, because you added another gopher and gophers do work. But if you forget the fact that he's a gopher and think of it as just adding to the design, then adding things to the design can actually make it more efficient, and parallelism can come from a better concurrent expression of the problem. It's a fairly deep insight that doesn't look like one, because there are gophers involved, but that's okay.

So we have four concurrent procedures running here: a gopher that loads things into the cart, a gopher that takes the cart and trucks it across toward the incinerator, another gopher that unloads the cart's contents into the incinerator, and a fourth gopher that returns the empty carts. You can think of these as completely independent procedures, just running as independent things, and we compose them in parallel to construct the entire program's solution.

But that's not the only way we could do it. Here's the same design made more parallel by putting in another pile of books, another incinerator, and four more gophers. The key point is that once we understand how to break the problem up, its concurrent decomposition, we can parallelize it along different axes and get better throughput, or not, but at least we understand the problem in a much more fine-grained way; we have control over the pieces. In this case, if we get everything just right, we've got eight gophers working hard for us, burning up those C++ manuals.

Or maybe there's no parallelization at all. Who says all these gophers have to be busy at once? I might only be able to run one gopher at a time, in which case this design would only run at the rate of a single gopher, like the original problem, with the other seven idle while he's running. But the design is still correct, and that's a pretty big deal, because it means we don't have to worry about parallelism when we're doing concurrency. If we get the concurrency right, the parallelism is actually a free variable: we can decide just how many gophers are busy.

Or we could do a completely different design for the whole thing. Let's forget the old pattern and put in a new one. We'll have two gophers in the story, but instead of having one gopher carry the books all the way from the pile to the incinerator, we put a staging dump in the middle. The first gopher carries books to the dump, drops them off, and runs back for more; the second one sits there waiting for books to arrive at his pile, takes them from there, and moves them to the incinerator. If you get this right, you've got two gopher procedures running; they're kind of the same but subtly different, with slightly different parameters. And if you get this system running right, at least once it's started, it can in principle run twice as fast as the original, even though it's a completely different design in some sense.

Of course, once we've got this composition we can go another way: we can parallelize the usual way, run two versions of the whole program at once, and double again. Now we've got four gophers and maybe up to four times the throughput. Or we could take a different tack again and put the staging pile in the middle of the original concurrent multi-gopher version. Now we've got eight gophers on the fly and books getting burned at a horrific rate. But that's still not good enough, because we can parallelize on another dimension and go full on: here are sixteen gophers moving those books to the burning pile. Obviously this is very simplistic and silly, but it's got gophers in it, so that makes it good.

But I want you to understand that conceptually, this really is how you think about running things in parallel. You don't think about running in parallel; you think about how to break the problem down into independent components that you can separate, understand, get right, and then compose to solve the whole problem together.

So what does this all mean? First of all, there are many ways you could do this; I showed you just a couple. If you sit there with a sketchbook you can probably come up with fifty more ways to have gophers move books. There are lots of different designs; they're not necessarily all equivalent, but they can all be made to work, and you can then take those concurrent designs and refactor them, rearrange them, and scale them along different dimensions to get different abilities to process the problem. And it's nice because, however you do this, the correctness of your algorithm is easy; it's not really going to break. I mean, they're just gophers, but the design is intrinsically safe because you've built it

that way. However, this is obviously a stupid problem with no bearing on real work. Well, actually it does have bearing, because if you take this problem and change the book pile into some web content, change the gophers into CPUs, change the cart into the networking or the marshaling code or whatever you need to run to move the data, and make the incinerator the web proxy or browser or whatever you want to think of as the consumer of the data, you've just constructed the design for a web serving architecture. You probably don't think of your web serving architecture as looking like that, but in fact this is pretty much what it is. You can see by substituting the pieces that these are exactly the kinds of designs you think about when you talk about things like proxies, forwarding agents, buffers, and scaling up more instances; they're all on this drawing, you just don't think of them that way. So these are not intrinsically hard things to understand: if gophers can do it, so can we.

So let me now show you how to use these ideas a little bit in building things with Go. I'm not going to teach you Go in this talk; I hope some of you know it already, and I hope lots of you go and learn more about it afterwards. But I'm going to try to teach you a tiny bit of Go and hope the rest gets absorbed as we do it.

Go has these things called goroutines, which you can think of as being a little bit like threads, but they're actually different; rather than go into the details of how they're different, let's just say what they are. Say we have a function that takes two arguments. If we call that function in our program, we wait for it to complete before the next statement executes. That's very familiar; you all know that. But if instead you put the keyword go before the call, the function starts running and your program gets to keep running right away, at least conceptually (not necessarily actually; remember concurrency versus parallelism). Conceptually, your program keeps going while f is off doing its thing, and you don't have to wait for f to return. If that seems confusing, just think of it as being a lot like the ampersand in the shell: it's like running `f &`, off in the background.
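
[The slide code isn't captured in the transcript; here's a minimal runnable sketch of what's being described. The body of f and the crude closing sleep are assumptions, not from the slide:]

```go
package main

import (
	"fmt"
	"time"
)

func f(a, b string) { fmt.Println(a, b) }

func main() {
	f("hello", "world")    // ordinary call: we wait for f to finish
	go f("hello", "world") // goroutine: f runs concurrently; we don't wait
	time.Sleep(time.Second) // crude pause so the goroutine gets to run before main exits
}
```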

Now, what exactly is a goroutine? They're kind of like threads: they run together, in the same address space within a program. But they're much, much cheaper. It's easy to make them, they're very cheap to create, and they get multiplexed dynamically onto operating system threads as required, so you don't have to worry about scheduling and blocking and so on; the system takes care of that for you. When a goroutine does need to block, say in a read system call, no other goroutine has to wait for it; they're all scheduled dynamically. So they feel like threads, but they're a much lighter-weight version. It's not an original idea; other languages and systems have done things like this, but we gave them our own name to make it clear what they are: we call them goroutines.

Now, I mentioned we have to communicate between these things. To do that, Go has channels, which are a little bit like pipes in the shell, except they have types and other nice properties we're not going to go into today. Here's a fairly trivial example. We create a timer channel, a channel of time.Time values, and then launch a function in the background that sleeps for some amount of time, deltaT, and then sends the time at that instant, time.Now(), on the timer channel. The other goroutine, because this one was launched with a go statement, doesn't have to wait; it can do whatever it wants, and when it's ready to hear that the other one has completed, it receives from the timer channel. That receive will block until there's a value to be delivered, and once there is, the value received is the time at which the other goroutine completed. A trivial example, but everything you need is in that one little slide.
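
[A runnable sketch of that timer example; the concrete value of deltaT is an assumption:]

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	deltaT := 100 * time.Millisecond
	timerChan := make(chan time.Time) // a channel of time.Time values

	go func() { // launched in the background; main keeps going
		time.Sleep(deltaT)
		timerChan <- time.Now() // send the instant of completion on the channel
	}()

	// Do whatever we like here; when we're ready to hear that the other
	// goroutine has finished, receive. The receive blocks until the send.
	completedAt := <-timerChan
	fmt.Println("background goroutine completed at", completedAt)
}
```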

The last piece is a thing called select, which lets you control your program's behavior by looking at multiple channels at once and seeing who's ready to communicate. You can arbitrate, in this case between channel one and channel two, and the program will behave differently depending on which is ready. If neither is ready, the default clause runs, which lets you fall through if nobody's ready to communicate. If the default clause is not present, the select waits until one or the other of the channels is ready, and if they're both ready, the system just picks one at random. This will come up a little later, but select is pretty much a switch statement for communications, and if you know Dijkstra's guarded commands, it should
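
[A small runnable sketch of the shape being described; the channel names and the priming send are assumptions:]

```go
package main

import "fmt"

func main() {
	ch1, ch2 := make(chan int, 1), make(chan int, 1)
	ch2 <- 42 // make channel 2 ready to communicate

	select {
	case v := <-ch1:
		fmt.Println("channel 1 was ready:", v)
	case v := <-ch2:
		fmt.Println("channel 2 was ready:", v)
	default: // optional: runs only if no channel is ready
		fmt.Println("nobody was ready to communicate")
	}
}
```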

seem fairly familiar. Now, I said Go supports concurrency, and I mean it: it really supports concurrency. It is routine in a Go program to create thousands of goroutines, and we were once debugging, live at a conference, a Go program running in production that had created 1.3 million goroutines, with something in the neighborhood of 10,000 actually active at the time we were debugging it. To make this work, of course, goroutines have to be much, much cheaper than threads, and that's kind of the point. They're not free; there's allocation involved, but not much, and they grow and shrink as needed and are well managed. But they're very cheap; you can think of them as being as cheap as gophers.

You also need closures; I showed you one sort of under the covers before, and here's just proof that you have them in the language, because they're very handy in concurrent expressions of things, for creating anonymous procedures. You can create a function that composes a couple of other functions and returns a function, just to show that it works; they're real closures in Go.
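
[A runnable sketch of that composition demo; the choice of math.Sin and math.Cos is an assumption:]

```go
package main

import (
	"fmt"
	"math"
)

// Compose returns an anonymous function built from two others:
// proof that Go has real closures.
func Compose(f, g func(float64) float64) func(float64) float64 {
	return func(x float64) float64 {
		return g(f(x))
	}
}

func main() {
	fmt.Println(Compose(math.Sin, math.Cos)(0.5))
}
```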

So let's use these elements to build some examples, and I hope you'll learn a little bit of concurrent Go programming by osmosis, which is the best way to learn. Let's start by launching a daemon. You can use a closure to wrap some background operation you want done without waiting for it. In this case we have two channels, input and output, and for whatever reason we have to deliver input to output, but we don't want to wait until the copying is done. So we say go func, for a closure, and then have a for loop that just reads the input values and writes them to the output. The for range clause in Go will drain the channel: it runs until the channel is closed and empty, and then exits. So this little burst of code drains the channel automatically, in the background, and you don't have to wait for it. There's a little bit of boilerplate there, but it's not too bad and you get used to it.
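
[A sketch of that daemon pattern; the wrapper function and the Item element type are assumptions added to make it self-contained:]

```go
type Item int // a stand-in element type (assumption)

// startCopy delivers everything arriving on input to output,
// in the background, without making the caller wait.
func startCopy(input <-chan Item, output chan<- Item) {
	go func() { // launch the closure as a background daemon
		for val := range input { // runs until input is closed and drained
			output <- val
		}
	}()
}
```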

Let me now show you a very simple load balancer, and if there's time, which I'm not sure there will be, I'll show you another one. Imagine you have a bunch of jobs that need to get done; we've abstracted them away here, or maybe concretized them, into a Work structure with three integer values that you need to do some operation on. What the worker tasks are going to do is compute something based on those values, and I put a sleep in there so we're guaranteed to have to think about blocking, because a worker task may block for an arbitrary amount of time. The way we structure it is that the worker task reads an input channel to get work to do and has an output channel on which to deliver the results; those are the arguments to the function. Then, in the loop, we range over the input values, doing the calculation, sleeping for some essentially arbitrary time, and then delivering the output to whoever's waiting. So we have to worry about blocking.
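
[A sketch of that worker, as part of one assumed program; the multiply and the sleep scale are stand-ins for "compute something based on these values" (assumes `import "time"`):]

```go
type Work struct {
	x, y, z int // three integers to operate on
}

func worker(in <-chan *Work, out chan<- *Work) {
	for w := range in { // read work from the shared input channel
		w.z = w.x * w.y // stand-in for the real computation (assumption)
		time.Sleep(time.Duration(w.z) * time.Millisecond) // may block arbitrarily long
		out <- w // deliver the result to whoever is waiting
	}
}
```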

That's got to be pretty hard, right? Well, here's the whole solution. The reason this is so easy is that channels and the way they work, along with the other elements of the language, let you express these concurrent things and compose them really well. What this does is create two channels, an input channel and an output channel, which are the things connected to the workers: they're all reading off one input channel and delivering to one output channel. Then you just start up some arbitrary number of workers (notice the go clause in the middle there); all these guys are running concurrently, maybe in parallel. Then you start up another job that generates lots of work for them to do, and you hang around in a function call, receive lots of results, that reads the values coming out of the output channel in the order they complete. Because of the way this is structured, whether you're running on one processor or a thousand, the job will run correctly and completely, and the resources are used well; it's all taken care of for you.
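
[A sketch of that setup, continuing the same assumed program; `sendLotsOfWork` and `receiveLotsOfResults` are assumed helpers named after the transcript's descriptions:]

```go
const NumWorkers = 10 // arbitrary; could be huge

func Run() {
	in, out := make(chan *Work), make(chan *Work)
	for i := 0; i < NumWorkers; i++ {
		go worker(in, out) // all workers share one input and one output channel
	}
	go sendLotsOfWork(in)     // generate work in another goroutine
	receiveLotsOfResults(out) // read results in the order they complete
}
```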

If you think about this problem, it's pretty trivial, but it's actually fairly hard to write concisely in most languages without concurrency. Concurrency makes this kind of thing pretty compact, and more important, it's implicitly parallel, although you don't have to think about parallelism if you don't want to. It can also scale really well: there's no synchronization or nonsense in there, so NumWorkers could be a huge number and the thing would still work efficiently. The tools of concurrency therefore make it easy to build these kinds of solutions to fairly big problems. Also notice there was no locking, no mutexes, none of those things people think about when they think about the old concurrency models; they're just not there, and yet this is a correct, concurrent, parallelizable algorithm with no locking in it. That's got to be good, right?

But that was too easy, so let's see how we're doing; yes, I've got time to do the harder one. This is a little trickier, but it's the same basic idea done much more realistically.

Imagine we want to write a load balancer that's got a bunch of requesters generating actual work, and a bunch of worker tasks, and we want to distribute all the requesters' workload onto an arbitrary number of workers and have it all load-balance out, so the work gets assigned to the least loaded worker. The workers may have large amounts of work going at once; it's not just one task at a time, they may be doing lots, and there are lots of requests going on, so it's a very busy system. This could be on one machine, which is how I'm going to show it to you, but you could also imagine that some of these lines represent network connections in a proper distributed load balancer; architecturally, the design is still going to be safe the way we do it.

What a request looks like is very different now. We have some arbitrary function, a closure if you like, that represents the calculation we want to do, and we have a channel on which we're going to return the result. Notice that a channel is part of the request: in Go, unlike a lot of other languages such as Erlang, the channel is there as a first-class value in the language, and that allows you to pass channels around. They're kind of like file descriptors, in the sense that if you have the channel you can communicate with someone, but anyone who does not have the channel cannot. It's like being able to hand a phone call to somebody else, or to pass a file descriptor over a file descriptor; it's a pretty powerful idea. So the idea is that you're going to send a request containing a calculation to do and a channel on which to return the result once it's done.
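
[A sketch of that request type, as described:]

```go
type Request struct {
	fn func() int // the operation to perform
	c  chan int   // the channel on which to return the result
}
```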

Here's an artificial but somewhat illustrative version of the requester. We have a work channel on which requests go out, and we make a channel that goes inside each request so the answer can come back to us. We do some work, which I've represented here as sleeping (who knows what it's actually doing), then send on the work channel a request object with the function we want calculated, whatever that is, and a channel on which to send the answer back. Then we wait on that channel for the result to come back, and once we've got it, we probably have to do something with it. So this is just something generating work at some arbitrary rate, cranking out results, but doing it by communicating on channels, with inputs and outputs.
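
[A sketch of that requester; the sleep bound, the inline calculation, and `furtherProcess` are assumed stand-ins (assumes `import "time"` and `import "math/rand"`):]

```go
func requester(work chan<- Request) {
	c := make(chan int) // this requester's private channel for answers
	for {
		// Do some other work for a while (stand-in for a real client).
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		work <- Request{func() int { return 7 * 3 }, c} // calculation is an assumed stand-in
		result := <-c          // wait for the answer on our own channel
		furtherProcess(result) // assumed placeholder: do something with the result
	}
}
```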

Then there's the worker, on the other side of the picture; remember, we've got requesters delivering work to the balancer, which is the last thing I'm going to show you, and workers on the right. What a worker has in it is a channel of incoming requests, a count of pending tasks, which represents the load that worker is carrying, and an index, which is part of the heap machinery I'll show you in a second. What the worker does is receive work from its request channel, which is part of the worker object, and call the function on the worker's side: the actual function is passed from the requester, through the balancer, into the worker, which computes the answer and then returns it on the channel in the request. Notice that unlike a lot of other load-balancing architectures, the channels from the worker back to the requester do not go through the load balancer: once the requester and the worker are connected, the balancer is out of the picture and the two are talking directly. That's possible because you can pass channels around inside the system as it's running. And if you wanted to, you could also put a go statement in here and run all these requests in parallel on the worker; it would work just fine if you did that, but that's enough going on at once already.
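
[A sketch of that worker:]

```go
type Worker struct {
	requests chan Request // work to do, delivered by the balancer
	pending  int          // count of pending tasks: this worker's load
	index    int          // position in the heap, used by the heap code
}

func (w *Worker) work(done chan *Worker) {
	for {
		req := <-w.requests // receive a request from the balancer
		req.c <- req.fn()   // do the work; answer the requester directly
		done <- w           // tell the balancer this task is finished
	}
}
```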

Then the balancer is kind of magical. You need a pool of workers, and you need a balancer object to put the balancer's methods on; it holds the pool and a single done channel, which is how the workers tell the balancer they've finished their most recent calculation. The balancer itself is pretty easy: forever, it does a select statement, waiting either for more work from a requester, in which case it dispatches that request to the most lightly loaded worker, or for a worker to say he's done, in which case it updates the data structure to record that the worker has completed a task. It's just a simple two-way select, and then we just have to implement those two functions.
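
[A sketch of that balancer; the heap.Interface methods on Pool are omitted, as in the talk:]

```go
type Pool []*Worker // kept ordered by pending count (heap.Interface methods omitted)

type Balancer struct {
	pool Pool
	done chan *Worker // workers report completions here
}

func (b *Balancer) balance(work chan Request) {
	for { // forever...
		select {
		case req := <-work: // a requester sent a request:
			b.dispatch(req) // hand it to the most lightly loaded worker
		case w := <-b.done: // a worker finished a task:
			b.completed(w)  // update its entry in the data structure
		}
	}
}
```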

To do that, we construct a heap; I'll skip that bit, it's not very exciting, you get the idea. All dispatch has to do is grab the least loaded worker, which is a standard priority-queue implementation on a heap: you pull the most lightly loaded worker off the heap, send it the task by writing the request to its request channel, increment its pending count because it's got one more task you know about (which will influence the load distribution), and push it back into its place on the heap. That's it; you've dispatched the request and updated the data structure in about four executable lines of code. Then the completion task, for when the work is finished, does the inverse: there's one fewer item on this worker's queue, so you decrement its pending count, pop it from the heap, and push it back on, which puts it back where it belongs in the priority queue. And that's a complete implementation of a semi-realistic load balancer.
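
[A sketch of those two methods, using the standard container/heap package (assumes `import "container/heap"` and that Pool implements heap.Interface):]

```go
func (b *Balancer) dispatch(req Request) {
	w := heap.Pop(&b.pool).(*Worker) // grab the least loaded worker
	w.requests <- req                // send it the task
	w.pending++                      // one more pending; influences the load distribution
	heap.Push(&b.pool, w)            // put it back in its place on the heap
}

func (b *Balancer) completed(w *Worker) {
	w.pending--                   // one fewer in this worker's queue
	heap.Remove(&b.pool, w.index) // pull it out of the heap...
	heap.Push(&b.pool, w)         // ...and reinsert it where it now belongs
}
```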

The key point here is that the data structures are using channels and goroutines to construct this concurrent thing, and the result is scalable, correct, and very simple, with no explicit locking; the architecture just sort of makes it all happen, and the parallelism that concurrency enables is intrinsic to it. You can actually run this program; it's all compilable and runnable, and it works, and it does the load balancing such that the workers all stay at exactly uniform load, modulo quantization. It's pretty good. And notice I never said how many workers or how many requesters there are: there could be one of each and ten of the other, or a thousand of each, or a million of each; the scaling still works and it still behaves efficiently.

One more example, which is somewhat more surprising, but it fits on a single slide, so it's a nice one to finish with. Imagine you have a replicated database: the same data in each of multiple instances, what we call shards at Google. What you want to do is deliver a query to all of the databases and get back the result, but they're all going to give the same answer; you're using the replication to go faster by picking the first one to come back with the answer. So if one of them is down or disconnected or something, you don't care, because somebody else will come through. Here's how to do that, and this is the full implementation. You have some array of connections and some query you want to execute. You create a channel that's buffered to the number of replicas in the database, and then you run over all the connections, and for each one you start a goroutine that delivers the query to that database, gets the answer back with this DoQuery call, and delivers the answer to the single channel holding the result for all of them. After you've launched them all, you just wait on the bottom line there: the first answer to come back on that channel is the one you want.
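
[A sketch of that full implementation; the Conn and Result types are assumptions, with DoQuery matching the call named in the talk. The buffer sized to len(conns) means every replica can send without blocking, so no goroutine gets stuck:]

```go
type Result string // stand-in result type (assumption)

type Conn interface {
	DoQuery(query string) Result // assumed interface to one database replica
}

func Query(conns []Conn, query string) Result {
	ch := make(chan Result, len(conns)) // buffered to the number of replicas
	for _, conn := range conns {
		go func(c Conn) {
			ch <- c.DoQuery(query) // each replica answers on the same channel
		}(conn)
	}
	return <-ch // the first answer to arrive is the one we want
}
```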

You return it and you're done. The thing is, this looks like a toy, and it kind of is, but it's actually a complete, correct implementation. The one thing that's missing is clean teardown: once you've got an answer, you'd want to tell the servers that haven't come back yet that you don't need them anymore. You can do that; it's more code, but not an unreasonable amount more, it just wouldn't fit on the slide. I wanted to show you this because it's a fairly sophisticated problem to write in a lot of systems, but here it just falls naturally out of the architecture, because you've got the tools of concurrency to represent a fairly large, distributed, complex problem, and it works out really nicely.

So, five seconds left; that's good. In conclusion: concurrency is powerful, but it's not parallelism. It enables parallelism, and it makes parallelism easy, and if you get that, then I've done my job.

If you want to read more, there's a bunch of links here. golang.org has everything about Go you want to know. There's a nice history paper that Russ put together, which is linked there. I gave a talk a few years ago that led up to us actually doing Go, which you might find interesting. Bob Harper at CMU has a really nice blog posting called "Parallelism is not Concurrency," which is very similar to the idea that concurrency is not parallelism, but not quite. And then there are a couple of other things: the most surprising thing there is the concurrent power series work that Doug McIlroy, my old boss at Bell Labs, did, which is an amazing paper. And if you want a completely different spin on it, the last link on the slide is to another language called Sawzall, which I did at Google shortly after coming there from Bell Labs. It's remarkable because it is an incredibly parallel language, but it has absolutely no concurrency, and by now I think you might understand that that's possible.

So thanks very much for listening, and thanks to Heroku for inviting me, and I guess it's time to have some drinks or something.
