Neuralink Overview, Fall 2025
By Neuralink
Summary
## Key takeaways

- **Neuralink founded 2016 from scratch**: Started in 2016 with nothing but a bunch of ideas; the first day involved buying a chair at OfficeMax on the way to building the world's first mass-manufactured, high-bandwidth BCI. [00:32], [01:03]
- **Telepathy empowers paralyzed gamers**: First participant P1 played Civilization VI for nine hours straight the day after getting Telepathy; a late-stage ALS patient goes outside with his family using Neuralink, unlike with eye-gaze systems. [03:18], [04:11]
- **Robot inserts 1,024 electrodes while avoiding vessels**: The robot uses a tiny needle thinner than a human hair to insert 128 threads with eight electrodes each into a moving, vascularized brain while avoiding vasculature. [08:07], [08:35]
- **Rev 10 robot inserts 10x faster**: The current robot, used for 13 humans, takes time per thread, but Rev 10 inserts more than ten times faster, shortening surgery towards under one hour, like LASIK. [10:36], [10:52]
- **Neural decoding drifts from non-stationarity**: Models cluster well initially but drift over days due to neural non-stationarity, where the recorded neurons change; the team is seeking ML techniques to eliminate daily 15-20 minute recalibrations. [19:24], [19:40]
- **10,000 on Telepathy waitlist**: As of September, changing the lives of 12 people (13 as of last week), averaging eight hours of daily use, but 10,000 people on the waiting list demand massive scaling. [04:25], [04:51]
Topics Covered
- BCI Restores Paralyzed Lives
- Neuralink Scales to 10,000
- Whole Brain Augments Humanity
- Robots Conquer Brain Insertion
- Decoding Fights Neural Drift
Full Transcript
I'm going to start by telling you a little bit about the founding story of the company.
So for that, we’re going to take a time machine back to 2016.
This is just to kind of contextualize the world at that time: no ChatGPT, pre-transformer, and self-driving was barely working.
And Elon had this tweet about neural lace back in 2016 and then started tapping into experts working in this field to potentially start a company.
At the time, I was doing my PhD working on this thing called neural dust. I met Elon in October of 2016, and a few months later we had a company with a few other people to build the world's first mass-manufactured, high-bandwidth BCI.
And I really like showing that photo on the lower right corner.
That's what the office looked like.
My first day at work was going to, I think OfficeMax or something, to buy a chair.
So we really started out with nothing, a bunch of ideas, and I'm going to walk you through our journey after that.
So before I do that, what is BCI?
It stands for brain-computer interface.
It’s basically a set of technologies that allows you to read and write to and from the brain, with the hope of initially helping millions of people with paralysis.
So these are people whose mind-body connections have been severed.
So the first four years of the company was really focused on building the foundational technology.
Building a wireless fully-implantable system is really hard.
And making it small is really, really hard.
So we started out with building what's on the top row.
These were wired implants with USB-C connectors coming off of them.
They were really a platform to build towards the wireless implant that you see down there.
Also, from day one, we had a very big thesis that in order for us to have a scalable deployment of our system, we need to build robots.
So we started out by building this robot on the upper left corner that was put together from a bunch of eBay parts.
There was only an n of one.
Obviously that's not going to work so we then found ways to productionize it to a point where it looks like that right now.
That’s what’s used in humans.
So as I mentioned, the first four years were really focused around building the hardware, down to a level where we could test in various different animal models.
And then back in 2021, we had this demo with one of our monkeys, Pager, where he was playing a game, Pong, with his mind.
And then since then, it was a really intense three years of testing, building, testing again, iterating, and then getting approval for launching clinical trials in 2024.
So our first product is called Telepathy.
And it basically allows people with spinal cord injury or ALS, or people who are quadriplegic, which means they’re paralyzed from the neck down, to control digital devices like computers or phones just by thinking.
And it's making real impact in people's lives.
And in this particular case, this is one of our participants, we call him P1 because he was the first participant, playing Civilization VI.
And he mentioned that the day after he got Telepathy, he was playing Civilization for nine hours straight.
It can also extend beyond just digital control to a robotic arm.
So this is one of our participants, drawing this amazing thing with a robot arm after getting trained to use it for about an hour.
Telepathy also works for people with neurodegenerative conditions, such as ALS, similar to what Stephen Hawking had.
This is one of our late-stage ALS participants, who has three kids.
He can only move his eyes, so previously he interacted with the world through an eye-gaze system, which doesn't work outdoors.
Neuralink doesn't have that problem, so this is him being able to go outside with his family, play games, and also have his kids actually hear their dad's voice for the first time.
So as of September, we're currently changing the lives of 12 people in the world.
As of last week, that number’s 13.
And the interesting number that I’ll quote here is that, on average, people are using Neuralink about eight hours a day.
Okay, so what’s next?
One is scaling.
So this number, 10,000, I look at this every day.
This is how many people we have on the waiting list for Telepathy.
12 or 13 is really great, but that’s significantly less than 10,000.
So there are a lot of challenges, as you can imagine, in scaling device manufacturing, deployment, and patient service.
The second set of challenges really revolves around expanding our indications.
So as of right now, we're focusing on movement: being able to control a computer cursor and a robot arm.
Our participants also don't have tactile sensation, so the next step is being able to feel again.
We're also launching a program to enable someone who lost their voice to talk again, hear again, and see again with Blindsight.
And this is all ultimately working towards what we call a whole-brain interface, which basically means reading and writing from any part of the brain.
As you can imagine, there are huge clinical implications to this, not only addressing some of the things I've mentioned, but even some of the really deep psychiatric or neurological disorders. Anything that you can really think of, those are all spikes in the brain. And in the not-so-distant future, we believe that there's actually potential to augment human capabilities.
And really, at the end of the day, we're building a set of tools to try to understand this three-pound universe that we call the brain.
So that's a brief snapshot of what Neuralink is about and why we exist as a company.
If our vision excites you and you’d like to help shape it, I’m going to talk about some of the engineering challenges. Just know that as I talk through every single layer of our tech stack, we really need everyone with those skill sets, and there's tons and tons of work to do.
And we're also a very small company; we’re just over 300 people.
So as an intern or as a full-timer you get massive scope, and you can also talk to those guys afterwards to find out whether I'm lying or whether their experience reflects this.
Okay.
So what are we dealing with here?
So on one side, you have a biological neural net and on the other side you have a computer.
So how do you get the intent from the biological computer to the artificial computer?
This is how.
At the highest level, you have neurons that spike.
We're measuring spiking rates.
You look at the change in the spiking rate through our devices.
And then there's an ML model that converts that into cursor movement.
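To make that path concrete, here's a minimal sketch in Python of one common way such a pipeline can be structured: binned spike counts become per-channel firing rates, a slow running baseline is subtracted, and a linear read-out maps the rate changes to a 2D cursor velocity. The dimensions, the linear decoder, and the smoothing constant here are illustrative assumptions, not Neuralink's actual model.

```python
import numpy as np

N_CHANNELS = 1024     # recording channels
BIN_SECONDS = 0.015   # 15 ms bins, as described later in the talk

rng = np.random.default_rng(0)
# Hypothetical decoder weights mapping rate changes to (vx, vy); a real
# system would learn these during calibration.
W = rng.normal(scale=0.01, size=(2, N_CHANNELS))
baseline = np.zeros(N_CHANNELS)  # running estimate of resting rates

def decode_bin(spike_counts: np.ndarray) -> np.ndarray:
    """Map one bin of per-channel spike counts to a 2D cursor velocity."""
    global baseline
    rates = spike_counts / BIN_SECONDS           # counts -> spikes/second
    baseline = 0.99 * baseline + 0.01 * rates    # slow running baseline
    return W @ (rates - baseline)                # change in rate -> (vx, vy)

velocity = decode_bin(rng.poisson(0.3, size=N_CHANNELS))
```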
There are three major components.
There's a surgery and robot to actually deliver this implant.
The implant itself.
And there's neural decoding.
So I’m going to talk about each and every one of these.
So starting with the user experience, this is a clip that I'm going to play that talks about how someone gets a Neuralink.
A couple different steps.
But first, we have a surgeon, a human neurosurgeon.
Hopefully the robot will do this portion of it as well, but at the moment we have a human neurosurgeon who exposes the brain.
So they drill a 25 millimeter hole in the skull and then expose the brain.
And then the robot has this tiny needle.
It's about the size of a red blood cell.
So really, really tiny.
Thinner than a human hair, inside this cannula.
And this cannula is actuated by precision motors on the robot to engage with this thread loop, to grasp it.
And then you insert it into the brain 128 times.
There are eight electrodes per thread, so a total of 1,024 channels.
And this is done while avoiding vasculature.
And then once all the threads are inserted, the implant basically plugs this hole, and then you put the skin flap over it.
Everything is invisible and you become a cyborg.
So what makes this challenging?
So this is a video of a real brain, and there are two things that you will notice.
One, it moves a lot.
And it's also highly vascularized.
And if you poke a brain it feels like tofu or like jello.
So now the question is, okay, how do you insert safely into this moving, vascularized, soft thing?
So we built this robot to deal with these challenges.
The robot has a bunch of custom mechanical, electrical, optical subsystems, essentially enabling you to have motion, perception, and the brain behind how you insert these things.
And all of these parts that are color-coded, we design every single piece of them.
I’m not going to spend time going over all of these, but just a couple things that I want to highlight.
The optics problem is really interesting.
It’s diffraction-limited optics.
So there's not a single camera that can give you all of the depth of field and resolution that you want.
So we have essentially six microscopes and optical coherence tomography looking into this 25 millimeter hole that we created, tracking the movement of the brain and all that stuff.
So super interesting set of challenges here.
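To see why a single camera can't do it, recall the standard diffraction-limited imaging trade-off (textbook optics, not numbers from the talk): lateral resolution improves linearly with numerical aperture, while depth of field shrinks with its square, so one objective cannot give you both micron-scale resolution and millimeters of usable depth.

```latex
% Abbe-style relations for a diffraction-limited system:
% \lambda = wavelength, \mathrm{NA} = numerical aperture, n = refractive index
d \approx \frac{\lambda}{2\,\mathrm{NA}}, \qquad
\mathrm{DOF} \approx \frac{n\,\lambda}{\mathrm{NA}^{2}}
```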
So what are the improvements that we need to make?
Speed and reliability.
So I'll play this clip.
The clip on the left is our current version of the robot, the one that's been used for 13 humans with Neuralink.
And by the time it has inserted one thread, the version on the right, a new robot that we call Rev 10, has inserted more than ten.
So why is speed important?
We want to keep the surgery time super, super short.
At the moment, the end-to-end, what we call parking lot to parking lot, is 4 or 5 hours for the surgery.
And the robot portion of it is about an hour, just over an hour.
And, our long term vision is to make this kind of like a LASIK surgery and potentially you can get it during a lunch break.
So you need to do this under one hour.
Maybe you can do it awake.
And so there's a huge amount of challenge in, and benefit to, making it a shorter surgery.
And at the moment, we also send a bunch of our engineers out to the surgery to look at every move the robot makes.
We don’t want that.
We want to make sure that it becomes, kind of a one click surgery.
Super reliable, boring surgery - we like boring surgeries.
So huge amount of challenges, as you can imagine, not just on the hardware, but on the software side.
So the other improvement is insertion depth.
So at the moment, we only insert four millimeters from the surface of the brain, and that's where we record all of our neural activity.
But being able to insert deeper means you get access to more neurons.
And there are also different parts of the brain that you only get access to by inserting deeper.
Particularly for a visual prosthesis, if you insert deeper, you get peripheral vision.
That's most likely more useful than foveal vision, which has a very narrow field of view.
So, as you can imagine, huge challenges; optical, mechanical, software, co-registering the imaging that you’ve done pre-op to the robot.
Very very interesting set of challenges here.
Okay.
Now switching gears to the devices.
This is what the device looks like when you blow it up.
Like I said, it’s about the size of a US quarter, 25 mm in diameter.
And then it has these flexible, really tiny electrode wires, a thousand electrodes in all, that we call threads.
This is really the only part that goes into the brain.
The rest of the body is the thing that's replacing the skull, as you saw in the animation.
And those threads are manufactured using a MEMS process in our own clean room in Fremont.
We also design our own low power analog circuits, custom chips.
Calling it a chip is kind of a misnomer; it’s really an SoC.
There's a lot of digital processing that happens.
And then just to highlight what kind of signals we're dealing with.
Typically the signals you record from these electrodes are anywhere between ten microvolts and a millivolt in amplitude.
And then you're generating about 200 Mbps.
And if you think about it, in modern electronics that’s actually not that much data.
But we're trying to send these off through Bluetooth, which has about 20 kbps of bandwidth available.
So now you have to deal with, okay, how do you go from 200 Mbps down to 200 kbps?
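As a rough sanity check on where a number like 200 Mbps comes from (the sampling rate and bit depth below are my assumptions, not figures from the talk), a thousand-channel recorder gets there quickly:

```python
channels = 1024           # recording channels on the implant
sample_rate_hz = 20_000   # assumed neural sampling rate
bits_per_sample = 10      # assumed ADC resolution

raw_rate_mbps = channels * sample_rate_hz * bits_per_sample / 1e6
print(f"{raw_rate_mbps:.0f} Mbps")  # ~205 Mbps, on the order of the quoted 200 Mbps
```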
So there's a bunch of compression and on-chip spike detection that happens on this SoC, where the signal basically goes through an analog chain of amplifying and filtering, and then a digital finite state machine that looks for spikes.
You bin them into these 15 millisecond bins that get sent off over BLE.
And then you'll look at the change in the spiking rate and, you know, there you go.
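A minimal software sketch of that detect-and-bin idea, using a simple threshold-crossing detector on one channel (the actual on-chip finite state machine, filters, and thresholds are Neuralink's own and aren't described in the talk):

```python
import numpy as np

FS = 20_000                          # assumed sample rate (Hz)
SAMPLES_PER_BIN = int(FS * 0.015)    # 15 ms bins, as described

def spike_counts(filtered_uv: np.ndarray, threshold_uv: float = -50.0) -> np.ndarray:
    """Count negative threshold crossings per 15 ms bin for one channel."""
    below = filtered_uv < threshold_uv
    crossings = below & ~np.roll(below, 1)   # first sample of each excursion
    crossings[0] = below[0]
    n_bins = len(filtered_uv) // SAMPLES_PER_BIN
    trimmed = crossings[: n_bins * SAMPLES_PER_BIN]
    return trimmed.reshape(n_bins, SAMPLES_PER_BIN).sum(axis=1)

# One second of synthetic noise with a few injected "spikes":
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 10.0, FS)
trace[::4000] -= 120.0
print(spike_counts(trace))   # counts per bin, ready to send over BLE
```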
We also build our own, well, design and then assemble our own PCBA.
That's the core board the chip gets assembled onto, and there are a bunch of sensors and additional processing on it to enable all this.
So there's a battery inside the implant that lasts ten hours on a single charge.
But you need to recharge it.
So it's done inductively.
There's a bunch of innovative, crazy stuff that happened on the engineering side to make sure that you don't heat the tissue while you're charging the implant.
The charger hardware is all built by us.
There are two components: a charger base with the battery and all the electronics, and a charging coil that you hover over the implant.
Most of our patients actually have it built into a hat.
So they put on a hat and they charge.
And our eventual goal is to embed it into a charging pillow so you can charge it while you’re sleeping.
Okay.
So what are some of the challenges or things to improve on the implant?
The main technical metric that matters is channel count.
So we have 1,024 channels right now; having more channels means more neurons you're recording from, which means more information.
For the case of the robot or movement program, that means higher degrees of freedom.
For vision, that means more pixels.
So the more the better.
And for everything that you saw in that tech stack, there's a huge amount of challenge when you increase the channel count.
How do you hermetically pass even more wires through this plastic enclosure?
How do you keep the power consumption low?
How do you compress even more data that you're collecting now?
Maybe you need to increase radio bandwidth and so then, like, how do you build a custom radio?
How do you then also package and system integrate?
The list goes on and on.
So you can imagine there's a huge amount of challenges across the board that we need to solve.
This is also true on the robot side. And there's an enormous amount of testing that we do.
For us, before we did our first human implant, we made over 1,000 implants.
And the reason for it is pretty simple: if you want to understand a 0.1% failure mode of something, you need to build at least a thousand of them, and then test a thousand of them, right?
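As a back-of-the-envelope check on that rule of thumb (my arithmetic, not a figure from the talk): at a true 0.1% failure rate, a batch of 1,000 units is expected to produce about one failure, and even then you only have roughly a 63% chance of seeing at least one.

```python
p = 0.001   # 0.1% per-unit failure rate
n = 1000    # units built and tested

expected_failures = n * p                 # = 1.0
p_at_least_one = 1 - (1 - p) ** n         # ≈ 0.632

print(expected_failures, round(p_at_least_one, 3))
```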
So there's tons and tons of testing that we need to do.
That's a picture of our custom hardware in the loop tester.
And then that server rack - that’s not an ordinary server rack.
It contains basically rows and rows of implants in a brain-in-a-vat setup, held at elevated temperature to accelerate aging.
So tons and tons of testing.
Okay, so the final piece, the neural decoding.
Okay.
So how does the user actually use this, and convert it into something useful for them?
So users, in this case our human participants, can connect to the device through a computer or phone that's running a custom Neuralink app that we call the Telepathy app.
It has a couple different components.
There are mainly three steps you have to go through.
One, you pair the device similar to how you would pair any Bluetooth device.
And then once you do that, you go through a body-mapping process, where you imagine moving your hand, arm, whatever different body parts.
Like squeezing your hand; we then look at the neural patterns just to see if we put the electrodes in the right areas.
And we have a pretty good sense of where we're putting it in the first place.
And then once you do that, there's a calibration phase.
So basically, using one of the motions that we were able to get from body mapping, you convert that to cursor movement.
And then you iteratively refine this; similar to how any ML system has a training phase and an inference phase, there are ways to do that here.
And there’s a huge amount of challenge here.
For someone who's never had a Neuralink implant, getting to the point of being able to use a computer takes anywhere between 15 and 20 minutes.
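As a toy illustration of what that calibration step can look like mathematically (a plain ridge-regression decoder fit on made-up data; Neuralink's actual models and training procedure aren't described in the talk):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_channels = 2000, 1024

# Calibration data: binned firing rates while the user follows cued targets.
X = rng.poisson(3.0, size=(n_bins, n_channels)).astype(float)
true_W = rng.normal(size=(n_channels, 2))
Y = X @ true_W + rng.normal(scale=5.0, size=(n_bins, 2))  # cued (vx, vy)

# Ridge regression, closed form: W = (X'X + lam*I)^-1 X'Y
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Inference: each new bin of rates maps straight to a 2D cursor command.
new_bin = rng.poisson(3.0, size=n_channels).astype(float)
print(new_bin @ W)
```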
And then there's also obviously a huge amount of control interface for OS integration that we built on top.
You can upgrade the firmware, you can name your implant, you have all the wonderful features you'd have access to on your computer.
So we built all of this on top of it as well.
This entire thing is the work of one guy.
Okay.
So what are some of the areas of improvement on the BCI decoding side?
One is robustness.
So this shows you, basically, the plus signs are where we want the decoding targets to be, in sort of this circular vector space.
And the point clouds represent the model output.
So what you’re seeing is after the initial calibration you have nice clustering of these point clouds in the desired separable space.
But over time it drifts.
Like a few days later it could look like that.
Okay, so what is happening?
So there's this thing called neural non-stationarity, where, you know, neurons are changing all the time, and depending on the context you might be recording from a different neuron.
So, you need to deal with basically having to recalibrate the model.
Recalibrating kind of sucks.
So how do you deal with that?
Can you maybe eliminate recalibration altogether?
What are some of the challenges there?
There are a lot of ML techniques that we’re looking at, some semi-supervised, some unsupervised, to refine the model as the user goes, so they don't have to sit in front of a computer calibrating for 10-15 minutes.
For some people, they actually enjoy it.
It’s like somewhat meditative to them.
But that doesn't seem like a great experience if you have to calibrate your mouse every day.
So there’s a huge set of challenges there on the ML side.
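One common family of approaches in the BCI literature (an illustration of the general idea, not necessarily what Neuralink uses) is to adapt the decoder's input statistics online, so slow drift in each channel's baseline is absorbed into the features without an explicit recalibration session:

```python
import numpy as np

class AdaptiveNormalizer:
    """Track running per-channel mean/variance so the decoder always sees
    standardized features, absorbing slow non-stationary drift.
    Illustrative only; not Neuralink's method."""

    def __init__(self, n_channels: int, alpha: float = 1e-4):
        self.alpha = alpha               # slow adaptation rate
        self.mean = np.zeros(n_channels)
        self.var = np.ones(n_channels)

    def __call__(self, rates: np.ndarray) -> np.ndarray:
        self.mean += self.alpha * (rates - self.mean)
        self.var += self.alpha * ((rates - self.mean) ** 2 - self.var)
        return (rates - self.mean) / np.sqrt(self.var + 1e-8)

norm = AdaptiveNormalizer(n_channels=1024)
rng = np.random.default_rng(3)
for step in range(10_000):
    drift = 0.001 * step                 # simulated slow baseline drift
    features = norm(rng.poisson(3.0 + drift, size=1024).astype(float))
# `features` stays roughly zero-mean/unit-variance despite the drift,
# so a fixed downstream decoder keeps working longer between calibrations.
```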
And again, this is just a mosaic of different applications that people are using it for: some are playing Halo.
Some are using the robot arm to feed themselves for the first time.
Some are using CAD programs. Some are using it for their job, or for art.
Like, these are all the things that we use computers for, right?
And phones for, and other devices for.
Now we have to basically productionize all of these different applications built on top.
So there's tons and tons of work on this.
One of the things that we’re really proud of, and will continue to invest tons and tons of money in, is vertical integration.
So pretty much all the things that you saw, we build in house.
That includes having our own clean room, and we even have our own construction team to build custom buildings for ourselves.
That's actually in Austin.
It looks much nicer now but that’s where we’re building our headquarters.
So yeah, that's my pitch.
So if you are interested in applying, you should apply.
And you don't have to be a brain surgeon to work at Neuralink.
We have a brain surgeon over there though.