From Edge to Everywhere — Imagine 2025 Keynote
By Edge Impulse
Summary
## Key takeaways

- **TinyML Born from uTensor Proof**: Neil Tan proved running ML on microcontrollers was possible with uTensor, compressing TensorFlow models down to plain FPU math and sparking the TinyML movement with Google, Qualcomm, and Arm. [05:52], [06:32]
- **Edge Impulse Fixed Unusable Tools**: Early tools were nerdy, difficult, and constantly breaking; only one person at Google had trained an MNIST model. Edge Impulse built an end-to-end stack for industrial developers, not ML researchers. [07:00], [08:37]
- **TinyML Now Ships Millions**: Shallow neural nets on sensor data and CNN audio/image classifiers now ship in multi-million unit volumes, the "Fiat 500" mature-market tier. [09:42], [09:46]
- **MöllerTech Saved $1M/Year on Defects**: MöllerTech used FOMO-AD visual anomaly detection on plastic parts to catch defects, saving over a million dollars a year in fines and improving quality. [18:52], [19:03]
- **Suzuki Replaces Ear-Trained Engineers**: Suzuki trained an audio classifier on engineers' good/bad labels for wiring harness sounds, improving throughput and catching issues engineers missed. [20:06], [20:29]
- **Visual Inspection Solves 90% of Factory Problems**: A new product with Qualcomm brings a full visual inspection solution with no-code model libraries, QR onboarding, and ERP integration, solving 90% of factory floor issues. [22:12], [22:40]
Topics Covered
- TinyML Tools Failed Developers
- Build Ferraris Not Fiat 500s
- 9x LLM Quality Shrinks to 10B
- Audio Replaces Retiring Experts
- Visual Inspection Solves 90%
Full Transcript
Please welcome Edge Impulse Co-founder and Qualcomm Vice President Zach Shelby.
>> Hello everyone. Good morning. Welcome to our fifth annual Imagine. Edge Impulse is only a little over six years old since Jan and I founded the company, so it's amazing to think that we're already organizing this conference for the fifth time. We bring together leaders across the edge industry, partners, top developers, customers, and really cool brands that are here to learn about the latest applications for the technology: what people are really building, what people are excited about, and where the technology is going next. So, I want to ask a quick question, because I know there's at least one person. How many of you have been here all five Imagines, since the very beginning? We've got Evgeni, Kwabena, Adam. I've been here, and Jan's been here.
>> Oh, you missed the first year because of COVID. Okay. So, you guys even beat my co-founder Jan. We organized the first Imagine towards the end of the pandemic. We decided to pull off this conference anyway. It was Adam Benzion, who led our marketing team at that time, who had the crazy idea: you know what, we need to own the conference for edge AI. We need to bring everybody together in the ecosystem and learn from each other. How do we build the most amazing solutions, and how do we go to market together? And I think we've been extremely successful in doing this. We built an entire industry. So I want a round of applause for everyone that's been involved with this. This is what success looks like when you go build things together. And so we're only going to be doing more of this. We're going to go bigger. We're going to go better. And I think you're going to see that this year with just the energy, the quality of speakers, all the cool demos that we have outside.
Live from the Computer History Museum, we have a very cool set of people. I'm going to show some of the logos of the companies that are here, because I want you all to come away from this feeling like you've had the best networking of the year. Imagine is very much about networking together, getting to know other people in our ecosystem. So, if there's somebody that you want to meet, something that you want to achieve, please come talk to one of my team. Anybody from Edge Impulse will help you meet the people that you need to here at Imagine. And this year we have a very special theme. This year we're going to talk about cars. I'm a car guy.
So I'm really excited to talk about everything related to fast cars and AI. And my job as CEO and co-founder is to draw really cool car cartoons. This is my biggest value add. And so you're going to see all of my car cartoons. If you want one, please email me. I charge money for doing these, but I will make you your own car cartoon.
We've got 12 speakers today, an amazing lineup from every end of the technology. We have everything from ultra low power, Arduino, OpenMV, all the way up to Canonical, right? Super high-end Linux. Everybody is here talking about the whole spectrum of edge AI. So, please enjoy all of our speakers that are coming up. I'll introduce a few while I'm talking.
And most importantly, right, my job is to make fancy looking PowerPoints and entertain you on stage. That's the easy job, right? The real job is all of our engineers who have built 20-plus very cool demos from across the industry. We have our closest silicon partners, OEMs, ODMs, module makers, disties, AWS, right? We have everybody here showing off the coolest applications in edge AI. So, at our coffee break, at lunch, during our networking, you'll have a chance to go to the showroom and hang out with the people that are really building stuff. So, please do that. Feel free to dive into what people are building, how it works, ask questions, and do business together, right? This is the place to start working on the stuff that we're going to show next year at Imagine.
We have over 300 attendees coming today. Some people are still on their way in; it's going to be a full house as the day goes on. We have thousands on the live stream. Imagine is always a live-streamed event, so anybody can join the event online. We have thousands of developers that join every year to watch. And we have a Ferrari.
The Ferrari has a story, right? This Ferrari 296 GTS supercar, 820 horsepower, came here for a reason. I'll get to that a little bit later. A bunch of the people that are speaking today actually had a part in helping make that car come true. And so I wanted to bring that car here for everybody to experience what it takes, what it means to build a supercar, and all the AI technology that goes into the process of making that happen.
Now I want to go way back, right to the very beginning of TinyML. Here we have Jan: Fiat 500, Jan Jongboom. When Jan and I met, 12 years ago, 10 years ago, he rode a bike. He didn't drive cars, so he was very tiny. These days, Jan drives a Tesla, but that's a little boring. So, I put him in something cooler.
TinyML started back in 2017, when we began to prove how you run machine learning on microcontrollers, on small devices. Back then, when we talked to people, they thought it was impossible. When Jan first heard about it, we had a very cool engineer from Taiwan, Neil Tan is his name, one of our developer relations engineers. Neil told us: you know what, we could do a project where we take this machine learning stuff that I've been doing for robotics on big GPUs, and we run it on a little microcontroller. Why not? It's just math. And Jan was like, "BS, I don't believe you. It can't be possible."
Right? This is almost 10 years ago. And Neil said: yeah, it is possible. So he wrote a little open source project called uTensor. Some of the coolest stuff starts as open source projects, and uTensor showed that we could go from the TensorFlow world of big ML and compress that down to math we could execute on a microcontroller in real time, on an FPU, with no other acceleration. This is just standard microcontroller stuff, and that blew everybody's minds. That helped us get the confidence, together with Google, Qualcomm, Arm, Microsoft, and many others, to go and build an entire movement around what we called TinyML at the time. And we started the tinyML Foundation with Evgeni back in 2018, if I'm correct. We started a foundation around this, a movement, right? It's the very beginning.
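uTensor itself generated C++ from TensorFlow graphs with its own toolchain; as a minimal sketch of the same idea (big-framework model in, MCU-sized math out), here is the TensorFlow Lite flow that grew out of that era, with a stand-in model rather than anything from the talk:

```python
# Sketch of the "compress big-framework ML down to MCU math" idea.
# uTensor used its own code generator; this shows the equivalent
# TensorFlow Lite flow. The model below is a stand-in.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Calibration samples let the converter pick int8 quantization scales.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# The flatbuffer gets compiled into firmware as a C array
# (e.g. `xxd -i model.tflite > model_data.cc`) and runs on-device
# as plain integer math, no accelerator required.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model: {len(tflite_model)} bytes")
```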
The problem was the tools. The tools that we built ourselves, our open source tools, the TensorFlow tools, the stuff from Microsoft: it was completely unusable. This was the worst tool stack that we'd ever seen in our lives. Jan and I were like, "Wow, this is really fun, but this stuff is so nerdy, so difficult, it breaks." There was one guy at Google, in Dan's team, actually, one guy at Google who was able to train a model, and it was an MNIST-like tiny classifier. Some people remember those days, right? We had the one trained MNIST demo, and that's what everybody showed on microcontrollers. It didn't matter. There was no real data, no real sensors.
And that was so frustrating for us, right? So the story of Edge Impulse, how this all started, was that Jan and I almost gave up. We were at the point of frustration with all of this ML stuff where we said, "This is never going to work. Developers are never going to be able to deploy any of this in real production anywhere. It's so difficult to use. We can't have one guy at Google do all the model training for everybody in the world. It's got to be easier." And so it was actually over sushi and sake here in the Bay Area that we finally realized: hey, maybe it's a persona problem, right? Maybe we're making the tools for the wrong people. ML researchers make tools for ML researchers, right? You write the code, you get your paper published, you throw the code away. It's going to be difficult. It has to be difficult, because we're super smart mathematicians. But most developers, right, the millions, tens of millions that work on industrial systems, they're not mathematicians. They don't care about writing research papers. They just want to get the ML to work for them. So that's how Edge Impulse started. We decided that we were going to build a tool stack that serves all the engineers, all the developers in the world building real industrial systems. And that was 2019.
We built state-of-the-art technology, a completely new developer flow, with that philosophy in mind. And TinyML back then was something very different, right? The state of the art, the coolest, highest performance things we did: 100 kilobytes of RAM. Microcontroller only, right? We were really focused on the microcontrollers. The smaller the better. Completely manual data collection. None of this stuff does transfer learning. If you want to build a model on sensors, you have to collect all the data from scratch. Super difficult, right? To get real data from real industrial equipment.
Most of the applications that we worked with back in those days, this is six, seven years ago: classical ML, shallow neural networks working on sensor data, a lot of health algorithms, CNN audio and image classification, simple classifiers, right? All of this is now state of the art, shipping in multi-million unit volumes.
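As a rough sketch of that "shallow neural net on sensor data" pattern, here are illustrative layer sizes and class names (none of these numbers come from the talk):

```python
import tensorflow as tf

# Illustrative "shallow net on sensor data" classifier: spectral features
# extracted from a 3-axis accelerometer window in, a class label out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(33,)),                     # e.g. spectral features per window
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. idle / nominal / fault
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # ~1,000 parameters: tiny once quantized, MCU territory
```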
So, the way I think about this is that in TinyML, we've now gotten to the point where we're shipping Fiat 500s in millions of units. This is all mature market stuff. Edge Impulse is continuing to invest in this. When we hear Jan's talk later, we'll hear about all the cool technology that we're putting into play in this space: how we can do more with sensor data, how we can get more performance out of real devices. We're going to be shipping a lot more of these, but edge AI has become something very different. So this is our friend Fabio, CEO of Arduino. He'll be up speaking after me. I've done cartoons for everybody, don't worry.
Edge AI has become something very different, right? As an AI industry: how many of you were at Embedded World this year? Edge AI was everywhere. You couldn't turn a corner without seeing it, and it was in everything from small embedded modules to massive industrial PCs, gateways, edge servers, you name it. Edge AI is everywhere. That's the theme of our talk this year. And it's here for a reason. People aren't doing it just for fun. It's not just because of hype, right? We have customers in the field demanding that we build the technology that they need to solve their problems. And what customers are demanding today at the high end is a Ferrari. We can't build only Fiat 500s for these customers, right? We need to build Ferraris as well.
And boy, the requirements are out of this world, right? The things that we have to do now at the high end are really extreme: gigabytes or even hundreds of gigabytes of RAM. NPUs with 100 TOPS, and I'll talk about something even higher performance later today. Completely synthetic data. We have customers coming in that have access to no data. How do we generate data completely synthetically? We have a really cool demo from Ivan, our Norwegian ambassador, talking about all the work he's done in synthetic data generation. Go check that out. No data, right? We have to do zero-shot solutions for solving problems.
And the applications we're expected to support from the Gen AI world coming into edge AI are pretty extreme. Completely natural language human interfaces, right? People want to speak to machines. Not just LLMs, but LLMs with RAG, right? These LLMs have to do something useful for industry; a chatbot is not that interesting. We're getting asked to work on vision language models, and then for robotics, we're expected to deliver vision language action models. That's something completely new for most of us in the space. And this all has to work with industrial reliability, and it has to be as low power, low cost, and easy to use as an Arduino. And so Fabio from Arduino will be talking next about this low-cost, easy-to-use experience and how this works with edge AI. But this is a lot of work that we still have to do as an industry. So the takeaway I want everybody to have here is that we have to step up, right? As an industry, we have to do a lot more to deliver this vision. We'll make some announcements as Edge Impulse, some cool new tools that will help enable people to do this on new hardware, but there's a lot of work still for us to do over the next two years.
And the thing that's inspired us, why we think Gen AI is coming in deeper and is here to stay, is that over the last 18 months the quality of large language models has increased 9x compared to the baseline ChatGPT. That baseline is the original ChatGPT, GPT-3.5 Turbo, 175 billion parameters. Now we have models in the class of 40 billion parameters, down to 10 billion parameters, that have 9x better model quality on a quality index. And we expect at Qualcomm that this is just going to continue to increase. So this tells me that if we can do models in the 10 billion to 40 billion parameter range, we can do almost everything that's useful in the Gen AI world on the edge. And this is really exciting. So this tells me that Gen AI is also maturing to the point that we can start to apply it in real products.
Now, Gartner is saying that Gen AI at the edge is on the way to maturity. We're just two years away from this being a mature market. So that's why I say we only have two years to get our solutions in shape and deliver on this vision, this Ferrari that I was talking about. When we started working on this talk, we looked at the breadth of products that we've seen. We've worked very much in the tiny space. This is HP Poly, where we've shipped in headsets, earbuds, consumer equipment. We have another partner here: the CEO of Hydrostasis will be talking about what they're doing for monitoring hydration, the level of water that you're drinking. Are you having problems with too little salt, too much salt? They'll talk about the device that they've developed with embedded processors and Edge Impulse a little bit later.
Smart grid: we've done tons of applications on monitoring the grid with 10-year, 20-year battery life, super embedded devices. Smart pets: we've done pet trackers, pet feeders, very low power, very cheap. And a lot of extremely embedded cameras. So, later we'll have Kwabena from OpenMV talking about all the cool work that they're doing in embedded cameras. And I have a bunch of toys here. I love my toy table.
This is how small cameras get now. That's a complete high-performance camera. This does full object detection at full frame rate, 30 frames per second, running on an Alif embedded MCU with acceleration. Kwabena is going to tell you a lot more about what this thing can do, and you can see the demos outside, but this is how small embedded cameras are getting. We've done a lot of this. We have another great example at the Alif table. This partner builds a satellite-connected, battery-powered camera. This thing runs at 100 milliwatts of power. You can put it anywhere on Earth. Point it at something you want to detect: something's happening, you want to classify a problem, you want to go look at the visual inspection on something extremely remote. You can do it with this little box. So, there's another demonstrator outside in the showroom.
We're only going to see more of these super low power cameras. But now, going forward with the edge AI requirements, we're getting pulled into completely new spaces. Industrial robotics: we're seeing more and more applications where Edge Impulse is getting asked to do more. Humanoid robots: I have a guest later, and we'll talk a little bit about robotics and what's happening. We're getting asked to work on humanoid robots. Very high-end, very high performance, lots of new model expectations.
And of course, automobiles, our theme today. We're getting pulled into not only manufacturing, but into the cockpit itself. How do we improve the security, the safety, the knowledge of what's happening in cars? We're getting asked to work on this, too. And I have a really interesting speaker from Global Sense coming to talk about what they're doing in car analytics. I want to show a little video. We actually had them go out this morning and work on the Ferrari while we were setting up the show. They build this little audio-based sensor. The US's largest auction houses use them to do audio analysis of the quality of cars. It's very difficult to inspect a car quickly; they can do it in a very short amount of time with this thing. So, they actually went and hooked this up. Let's see if this video works. They went and hooked this up and did a real analysis on the Ferrari this morning. It's brand new, so I hope it's not broken, or that I didn't break it.
>> So, to the team: I expect a full analysis by lunch of any problems with the Ferrari.
We're going to hear a lot more about what they're doing with this. These kinds of over-the-top automotive solutions, using in this case microcontrollers, advanced audio, and some of our silicon partners like Syntiant, are being used to go and enable these types of products. And we're being pulled into every angle of the automotive manufacturing industry right now. So that's what I'm going to talk about next.
Manufacturing is rapidly modernizing, and we're seeing so many changes in the automotive industry. This is an industry that used to be boring, and it's now exciting. It's one of the leading industries in automation. They're getting geographically distributed, having to modernize for EVs, automate everything, improve quality, improve inspection. And many of the people that used to do these quality inspections for tier one suppliers and OEMs are retiring, right? We don't have that skill set anymore. We have new workers coming in that don't know what the quality should be. So that's being replaced now by sensors and AI, to go improve quality at much lower cost.
Let's take MöllerTech as an example. This is a global tier one supplier of plastic parts for luxury car makers. MöllerTech used Edge Impulse and our FOMO-AD algorithm, FOMO anomaly detection, visual anomaly detection, to deploy monitoring stations on all of their plastic parts as they come through and tell whether there's a defect, a misplacement, any small thing we can detect. And this very quickly saved them over a million dollars a year in costs where they were getting fined by the manufacturers, improved their quality, reduced their downtime, and enhanced their margins overall. And we're going to see many more cases like this coming from tier one manufacturers.
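As a minimal sketch of what a station like that can look like with the Edge Impulse Linux Python SDK: the threshold and the exact result fields here are illustrative, so check the output schema of your own exported model.

```python
# Run an exported visual anomaly detection model against a line camera
# and flag parts whose anomaly score crosses a per-line threshold.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # model exported from Edge Impulse
ANOMALY_THRESHOLD = 6.0       # tuned per line, not a universal constant

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    cap = cv2.VideoCapture(0)  # inspection camera over the conveyor
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        features, _ = runner.get_features_from_image(
            cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        result = runner.classify(features)["result"]
        # FOMO-AD-style models report per-cell anomaly scores; the field
        # name below is illustrative -- inspect your model's JSON output.
        score = result.get("visual_anomaly_max", 0.0)
        if score > ANOMALY_THRESHOLD:
            print(f"Defect suspected (score {score:.1f}), divert part")
    cap.release()
```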
IDT is a system integrator and solution provider for manufacturing. Margarita will talk later today about what they're doing for the luxury car manufacturing industry. And this is the reason that we have the Ferrari. Margarita's company went and built a solution to do the quality control on the battery cells going into that exact car. So, I actually drove her over this morning in the Ferrari. She had never been in that car before. Super exciting. Her company helped build the quality control system for that car using Arduino. So, this comes full circle with the Arduino talk.
Next, Suzuki. We've had Suzuki use Edge Impulse to go and replace quality engineers that used their ears to detect when a wiring harness came together properly. It's super interesting. When you connect a wiring harness together in the factory, it makes a sound. These engineers knew when it came together and the sound was good, and they knew when the sound was bad and there was a quality problem. So what we did is we used those engineers to label the training data, to train a custom audio classifier for good and bad quality. And with this we could prove that we could replace that skill: faster throughput, better quality. We could check problems that maybe the engineers couldn't hear, and we could keep doing this even when engineers changed and we lost that generation of people. A super interesting use case with audio, and I believe we're going to see more of this: vision, audio, sensors, multimodal, used in the factory setting.
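As a hedged sketch of that labeling workflow: the folder layout, features, and classifier below are illustrative assumptions, not Suzuki's actual pipeline.

```python
# Engineers label short clips of the harness "click" as good or bad;
# features are extracted per clip and a small classifier learns the
# boundary. Paths and parameters are hypothetical.
import glob
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def clip_features(path):
    y, sr = librosa.load(path, sr=16000, duration=1.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

X, labels = [], []
for label in ("good", "bad"):  # folders of engineer-labeled clips
    for path in glob.glob(f"harness_clips/{label}/*.wav"):
        X.append(clip_features(path))
        labels.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Sanity-check against held-out folds before trusting it on the line.
print(cross_val_score(clf, np.array(X), labels, cv=5))
```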
Another global supplier is making actuators, using Edge Impulse with multi-sensor fusion to understand the quality and the predictive maintenance of their products that are going into the car. So, not only are we getting into the manufacturing side, we're being pulled into the automotive space directly, into the car, by tier ones. We'll hear more about this. This one actually came in together with our partner Microchip, so we're working on it together with Microchip.
And finally, we have a German luxury automaker starting to use Edge Impulse with the security cameras that are on the doors, the license plate, the front. Using these tiny cameras, we can look at potential threats in real time. And this has impact not only for the car owner experience, but also for things like insurance, accidents, etc. So, car makers are starting to embed this type of AI technology directly in the car.
And this brings me to something really exciting: a new product announcement. Because of all this demand that we've seen from manufacturing, not just automotive but much more broadly, we're announcing a new product built from scratch together with Qualcomm. We're building something called Visual Inspection. And Visual Inspection is cool because, for the first time, we're going to be bringing a full solution to market. Today, Edge Impulse solves maybe 20% of your problem on the factory floor. You have to build the model. You have to build the system, all the apps. You have to manage this end to end. There's a lot of work that we don't provide. With Visual Inspection, we'll be solving 90% of the problem, and working with system integrators to get the last mile to the factory.
The way that Visual Inspection works is that we're bringing state-of-the-art AI-based machine learning together with modern quality management. We're doing this in a way that's distributed across the edge, on the newest hardware. We're doing it with all of our camera, edge box, and AI appliance partners, so that we're able to improve the yield, the quality, and the visibility on the factory floor. So, if you think about this, there will be cameras with a station for the factory workers. They'll see exactly what's happening, right? When something's marked as problematic, they'll see it. They'll be able to understand the data, as well as the analytics for the back office. We can manage any number of manufacturing lines, any number of sites, from one platform. And this will be available both hybrid cloud and completely on-prem. No internet required.
And there are a lot of benefits to this, right? We can get these to market much more quickly. Easy onboarding: QR code onboarding of these cameras into the system. We're providing no-code model libraries for people. So, we're pre-building all the sets of algorithms that you need for different types of cases, like PCB inspection, food and beverage, plastic parts for automotive. We'll have formulas that are ready to go, and we have an automated process to fine-tune those models.
We keep the human in the loop, right? This isn't about full automation. This is about helping the workers on the floor to perfect their process. And then finally, we're integrating with the enterprise side. We're integrating all of our analytics with ERP systems, PLM systems, quality management systems. So this will be fully factory-integrable. And this is something that our system integration partners are really excited about; they'll be able to bring this to market a lot faster.
And we have a couple of partners that we're launching with. We're going to be doing this across our entire ecosystem, but launch partners include JMO Kodico. They just gave me this brand new camera: a brand new Dragonwing IQ6-based industrial camera. It has a camera built in, and it has connectors for two additional cameras off the side. So this is one of our first launch products, a very industrial camera setup. And Advantech, with their new ICAM-300 industrial camera. We've used this one for two years now already, and we've been deploying a lot of these. We'll be supporting Visual Inspection on this device as well. And the number of use cases that we can solve here is extremely wide: everything from PCBs, automotive parts, food and beverage, to label and SKU verification. So if you have applications that you're seeing demand for, please come talk to us. Alexi, who is our head of product, is here; talk to Alexi. He's really interested to hear about more use cases for this, and how we can go help. And it's not going to stop at vision, right? We're going to expand this for audio, sensors, multimodal. We're going to cover all of the different inspection solutions out there, built on Edge Impulse, as a completely new product. So we'll talk a lot more about that.
We're now launching our new YOLO Pro model into production right now. YOLO Pro is something we've been working on for a long time. This is a state-of-the-art YOLO-class model that's been designed, trained, and tested completely from scratch at Edge Impulse, solely for the purpose of edge compute deployment and industrial applications. So this is a really state-of-the-art thing that we can control and make sure we're fine-tuning for exactly the right targets. We have six targets that we're launching with. Jan's going to talk more about this later and go into all the details for the new YOLO Pro.
We're also launching our new VLM cascading technology. I talked about Gen AI coming into the space; we're embracing that. From this day forward, you'll be able to deploy multi-stage cascades from camera feeds, camera streams, in through YOLO Pro for object detection and tracking, and then directly to a VLM, where we're able to do programmatic extraction of information. There's a demo of this use case outside, where we do vehicle analysis for a customer like Mister Car Wash, for example, that wants to understand the cars coming in and out of car washes. We can tell the make, the model, the license plate number. Is there a roof rack? Did you choose the right washing option from our system? Is it going to break? Right? We can tell these things in an automated way. Was there damage? These are things you can't do with discrete computer vision models; those models are very difficult to train. We can do this one-shot, zero-training, and build it into an industrial system. So, this is super exciting. Try it out. We're going to be doing a lot more of this type of integration at Edge Impulse.
And something that excites me a lot: we have Evgeni here. Evgeni, you can raise your hand. Evgeni owns a really exciting new initiative called AI appliances at Qualcomm. We've been working together with them, because we've seen the AI appliance as a way that we can deploy more sophisticated workloads locally in industrial settings. We oftentimes get asked for this by big customers. And what it does is pack up to 870 TOPS of NPU inference (not training, inference) into a single card.
Today, Edge Impulse deploys to edge devices. This fall, we're going to be unlocking deployment directly to the AI appliance. So, you'll be able to develop a model in Edge Impulse and deploy it to the AI appliance. Say you have brownfield devices that don't have enough compute power, an Arduino for example, and you want to go do some more sophisticated compute, for example Gen AI: you'll be able to develop that in Edge Impulse, deploy it to the AI appliance, and do that processing locally. And to give you an idea of just how powerful this is, a single card can run up to a 120 billion parameter GPT model, unquantized, with RAG. Remember, earlier I talked about 40 billion parameters already being extremely high quality. This gets us internet-scale LLM quality on the edge. So, super exciting. And we're going further with this.
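As an illustration of that brownfield pattern: the endpoint, port, and JSON schema here are hypothetical, not a real appliance API; the point is that nothing leaves the local network.

```python
# A constrained device (or a gateway next to it) hands heavy Gen AI work
# to an appliance on the factory LAN instead of a cloud service.
import json
import urllib.request

APPLIANCE_URL = "http://appliance.local:8080/v1/generate"  # hypothetical

def summarize_shift_log(log_text: str) -> str:
    payload = {"prompt": f"Summarize the faults in this log:\n{log_text}",
               "max_tokens": 200}
    req = urllib.request.Request(
        APPLIANCE_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # stays on the local network
        return json.loads(resp.read())["text"]
```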
Our customers often ask us: how can we deploy more of Edge Impulse at our site, right at our industrial facility, right at the edge? We want to keep our data local. And we're answering now by deploying what they've wanted. We're going to be deploying Edge Impulse on-prem, on the AI appliance. That means you'll be able to get the entire Edge Impulse enterprise stack deployed in a single box, all running at the edge. You'll be able to do training, user management, data management, model deployment, model monitoring, and data collection, all locally. Data never goes anywhere over the internet. This is super exciting. We're also going to do this in the cloud. Together with AWS, we're working to move not only data, which we already support in hybrid cloud today; we're going to be moving all of our compute to the edge. So you'll be able to have us as the control plane, all the technology that's helping you develop, and we'll have the data and the compute staying in the customer's account, completely separated except for the control plane. So this is something to look out for, working closely together with AWS, later this year.
And with that, I want to invite a special guest on stage: Miller Chang, President of Embedded IoT at Advantech. Welcome, Miller.
>> Welcome. Please take a seat.
>> Thank you.
>> So, I figured Miller was a 911 type of guy. It turns out he is. That was a good guess.
>> I hope that next time I can drive the car to join Imagine.
>> We can arrange that. So, Miller is joining us because we have a really special announcement. We've worked with Advantech for many years. Advantech was actually one of Edge Impulse's first investors, through an industrial VC called Momenta. So we have a long, long history of collaboration in the space. And we've seen this market mature to the point where we're ready to go to market together. So, this fall we'll be releasing Edge Impulse Enterprise built into Advantech products out of the box, and we'll be able to ship it on a very wide range of cool Advantech products. You'll be able to buy a license and go deploy in the factory. We'll do that with Visual Inspection as well, later. So, a really exciting new announcement.
>> Yes, we are expecting that. I will show you later.
>> I want to ask you a few questions, Miller. I see a lot of applications, and today we're talking a lot about manufacturing. What are the applications that you're seeing, from an Advantech point of view, where edge AI is becoming a must-have?
>> Okay, yeah. Hi, Miller Chang from Advantech Taiwan. Actually, Advantech has been supporting edge computing for diversified vertical markets for more than four decades. In recent years, because high-performance AI technology has gradually been introduced into edge computing, what we call edge AI has become a new standard in the IoT industry. As you can see, here are just a few cases from the markets we serve: for example, AMR robotics, healthcare, and also manufacturing and smart cities. I can give you some highlights. For example, in the robotics and AMR market, humanoid robot and AMR solutions in logistics and warehouse systems are saving labor costs and enhancing production productivity. You can also see it in healthcare: traditional medical imaging processing with AI technology can be hundreds or thousands of times more efficient.
>> So let's talk more about humanoid robots. I think this is kind of the holy grail of complexity in our space of AI. What are we missing to start to make humanoid robots developable with tools like ours?
>> Actually, the humanoid robot has been a very big topic in recent years. You can see from survey reports more than 30% growth year over year. However, if you look at the humanoid robot, you need to understand that it's not only about the big brain, the high-performance computing inside that is the machine's brain right now. You also need to consider the module integration: for example, vision sensors like 2D and 3D cameras, wireless connectivity like 5G networking or Bluetooth, and also lidar and IMU integration, motion control integration. And that's just the hardware integration. Most important is also the software and the data training, right? Like you mentioned, for VLMs and model training: how to train the data to make the humanoid machine's big brain smarter and more autonomous is very important, and that's the technology that we work on.
>> And that actually brings me to the breadth of hardware that you need to build a robot, right? It's everything from connectivity to vision sensors to central processing, distributed processing. A lot of the audience doesn't know the entire Advantech portfolio. I'd like to hear more about the breadth of products you have.
>> Yeah.
>> Let's start with our edge AI strategy. You can see we are co-working with all the mainstream silicon partners to provide a very comprehensive hardware platform.
>> Mhm.
>> You can see from the first line, the edge AI embedded products. I'll give you an example: with the AI solution module attached to your existing edge computing, you can upgrade the AI computing power immediately, without changing the system architecture.
>> Okay.
>> Also the camera module, like you introduced earlier, right? And edge AI applies to many different, diverse vertical market applications. Also the edge AI server.
>> Well, that's all the hardware.
>> Yeah.
>> On top of the hardware solutions, we need the software enabled. That's a very good example that we want to show our customers and partners. Oh, wait a second.
>> We have a new thing, straight off the factory line from Taiwan.
>> This is...
>> Just in time.
>> This is the development kit that we created together with Qualcomm. The development kit has some advantages: pre-verified Advantech devices with Qualcomm's new QCS6490, and also the new IQ9 and IQX solutions, all together. Pre-installed, pre-verified Advantech hardware devices, together with pre-installed tools for AI inference. More importantly, there's an exclusive three-month key for Edge Impulse built in, together, as an edge AI development kit.
>> So you'll be able to buy a developer kit or a camera or a gateway and just get an Edge Impulse license pre-built in. There you go.
>> Pre-built. Yes, for sure.
>> Awesome.
>> It's also very focused on some domain markets, as you can see: robotics and AMR, smart factory, or smart retail. It's a very important development kit that can support our customers and our partners in speeding up their edge AI development for their vertical market applications.
>> Amazing. Miller, thank you for joining us here at Imagine, all the way from Taiwan.
>> Thank you.
>> Great having you. And next I want to dive deeper into the technology and invite Nakul Duggal, SVP and group general manager from Qualcomm. Welcome, Nakul.
>> When we talk about car guys: Nakul is maybe the biggest car guy that I know. We'll talk a lot more about that.
>> I'm not as big a car guy as you, but yeah, I do spend a lot of time with the automotive industry.
>> So, everybody here, I'm sure, is dying to know more about why Qualcomm and Edge Impulse. What's your vision of how all this comes together?
>> Yeah. First of all, thank you for having me. I think it's incredible that you started this event in the middle of COVID and you've sustained it for five years. You know, I think that actually explains it. We've been a B2B company for all of the last 40-plus years; we are now a 40-year-old company, and we've been very successful in bringing edge technologies, edge products, to pretty much every ecosystem. That was kind of how the company was founded. But we were never really a developer-focused company. So as we started to look at where the market is headed, in terms of IoT and AI intersecting, it became pretty clear that for us to be able to continue on this journey that we are on, and scale our business, we would have to infuse a very different DNA into the company. So as we started to look at how we would go about doing that, it was the companies we were looking at, but it was also the founders we were looking at. It was also the types of culture and the types of mindset, in terms of problem solving, solving for the ecosystem, being more interested in the problem statement and then how to get to the solution. So that, I think, is what got us introduced, and I think it's been a great 12 months or so.
>> I 100,000% agree. And I want to dive a little bit more into the technology stack and how all this comes together, right? We're now talking about building Ferraris, right? So this is really complicated compared to what we do with microcontrollers.
>> Yeah. So I think for us, when we think about a stack: our stacks serve many, many different types of customers. Maybe just to get started, we serve Android, we serve Windows, we serve Linux, we serve the entire mobile ecosystem, we are in PCs, we are now in cars, we are obviously in IoT. So it really is serving many different masters, and you have to start to think about the problem statement and then figure out how to internalize it from the perspective of what persona (you used the term persona earlier) is interested in solving the problem. And then you have to meet that persona within their environment. You can't expect them to come to where you are. So we've been going through this journey in terms of how do we build a very complicated Swiss Army knife that needs to have some very specific utility for the persona that you're trying to deal with. So we've been working with Jan and with Zach and their teams, and of course the Qualcomm team, in terms of how to infuse that personality that we need to build around our products. And I think it's been a great journey. By the way, stay tuned: there is more to come on the developer side in the coming days. This ecosystem is a really important ecosystem for us, so you will see Qualcomm do more and more.
>> Yeah. And you can really think about this as building an open developer platform that works across the entire silicon industry. As you see, we're keeping and expanding our industry support that works on everything, to bring this easy-to-use developer platform to this cool new silicon technology. We've been talking about a few pieces of silicon, like the QCS6490 and IQ9. These are extremely complex mobile SoCs, almost unusable by a normal human being. We're trying to expose these to developers who just want to go and deploy workloads. And so that's kind of the holy grail of bringing all this together. We want to make this stuff easy to use.
>> You know, I think one thing that has been really interesting: you touched upon IQ9 in your comments. This whole concept of cascading models, where it's a multi-stage workflow that is running at the edge. This wasn't something that people thought about two years ago: where the sensor detects what the camera is seeing, or what the DSP is listening to, and then you kick off a chain of events that will be fed into, say, a VLM or a VLA. And today, if you think about how far along these systems are, at some level imaging has actually become fairly advanced, LLMs have become fairly advanced, and those are merging. But to be able to think about the problem statement at the edge, and then simplify it and make it available for a developer to just use out of the box, that I think is still something that requires a lot of work. So we build tremendous products that actually have the ability to solve these problems out of the box. We just have to make sure it's for the right persona.
>> Let's talk about cars.
>> Yes.
>> So, those of you that don't know: Nakul built the automotive business at Qualcomm from nearly nothing to now a $45 billion pipeline. I was super impressed by the announcement together with BMW: the new BMW iX3, a full ADAS stack, a self-driving or assisted-driving vehicle. Tell us about that journey, building this level of edge AI.
>> Yeah, you know, the car is a really interesting platform, because it is an edge platform. I mean, there is no other platform that is more complex than a car. There are a lot of things in a car that we know as consumers, and you were showing the video of measuring audio signatures, etc. What hasn't really happened in the history of automotive is so much technology being directed towards this platform. Tesla started this, and you can start to see it now in China, but really, I think you have to think about the car as a first-class citizen when it comes to the adoption of, literally, edge AI, because everything in the car is real time, and everything in the car has a certain purpose. It is a product that is built for safety, for efficiency, for cost. It has to be something that you can use in a very predictable manner. And yet, if you think about it, there is almost no AI running in the car the way that you've been describing it.
So we kind of caught on to this concept about 10 years ago, where we said it's super important for us to not think about the car as an adjacency; it really has to be a foundational product segment. We first started with the basics, quality and reliability, where you obviously have to be able to meet higher standards. But then we started to move towards the concept of safety-grade silicon. Every building block that we build for the automotive industry is designed for safety, because you want to be able to run any kind of software without thinking about what surface area it's running on. As we took that approach, the auto industry realized that, as a large semiconductor supplier and somebody who builds a lot of technology, Qualcomm would be a good partner, and so we became very interesting to the ecosystem. We started to think about ADAS quite deeply, because ADAS, and automated driving now, is the ultimate problem statement when it comes to edge AI: it's real time, it's mission critical, and you have to continuously be learning. We are all now users of this technology that seemed far-fetched not even three or four years ago. And so the partnership that we created with BMW was a leap for us, because we do silicon, we do software, we do a lot of IP, but we had never really done a stack. And for us, the complexity is: if we're going to do something, we have to go all in. We are not going to just dip a toe in the water. In this case, you can now buy the latest Neue Klasse BMW i series with a Qualcomm stack driving ADAS, running on Qualcomm silicon. So that's a challenge that we gave to ourselves at that time. We kicked off a lot of complicated efforts. We acquired a stack entity from within a tier one. We built our first automotive-grade silicon. We have over a thousand people, a thousand engineers, working with BMW on a joint stack, across seven countries. We've collected a million kilometers of data. We have deployed in 60 countries this year; we'll be in 100 countries next year. So, you know, I kind of see the developer problem statement and this problem statement as opposite sides of a very complicated set of problems, but I think they are all for the same purpose: whatever we do, we have to do at scale. So I think the automotive journey has been really eye-opening and very, very meaningful.
>> So let's talk about the other extreme end. I think there's a lot we can learn from building ADAS stacks, for example for robotics, right? How do we go and solve these problems, but then how do we make it accessible for developers? When you and I first met, the first thing Nakul said was, "I want to get to a million developers fast," and I was like, I'm in; that's exactly my vision as well. Jan and I have always wanted to reach millions of developers. How do you think about this developer vision and democratization? What's that going to do for edge AI as an industry?
>> You know, I'm frankly still learning the developer side. I'm not an expert. I'm very happy to be invited here and be amongst all of you. I think of it a couple of different ways. There is a technology aspect, which we talked about, which has to scale. There is an access-to-the-ecosystem aspect that we are working on, which we will share with you soon. But then there is also a use case aspect. So we recently kicked off a program internally on humanoids, and what you realize on the humanoid problem statement is that you run into yet another complexity of data collection and training, which is the action part, where you are actually trying to mimic what a human does. A car is actually simpler, because it's basically lateral movement and acceleration; that's all you're really dealing with. But in this case it's infinite possibility. So we are working with companies that have asked us to look at a retail store, for example, and build a robot that will organize the inventory or the stock. Is the product expired? Has it fallen down? Is it organized per the design criteria for the shelf? And so it's a completely different problem statement, because you are getting into kind of system-one, system-two thinking. You have to be able to reason, you have to be able to act, you have to be able to go back and forth. So, this is really endless in terms of the types of things that you have to solve for. It's this cycle: you have to have the right hardware, you have to know the speed at which it has to operate, what accuracy it has to have, what training is needed, who you are going to get that data from, how these use cases differ from one another, what's, for lack of a better word, a 2D use case versus a 3D use case. But it's all part of that same spectrum of complex problems. So I'm super privileged to be part of this journey inside Qualcomm, because we get to see a lot of complicated things and get to work with a lot of very smart people.
>> And our mission will be to make this all accessible for developers at scale. So that's the thing that you can expect to see much more of.
>> Nakul, thank you for joining me.
>> Thank you for having me. Thank you.
So, talking about developers: we've had amazing traction as a developer community, just Edge Impulse alone. Doubling year on year, we've gone from just over 100,000 developers to 230,000 developers on the platform. And look at our data collection. This is the actual data that developers push onto the platform for training: we're at over a billion new samples just in the last 12 months, and we expect this to only increase. We're going to be putting a lot of new resources into developer programs and developer partnerships. We're going to hear more from Arduino next. And it's important that we go all the way back to the students, to education. So we've just announced a complete new university program. We're empowering professors and their students with all the compute power, and next year with new hardware, so they can go and train all the engineers that someday are going to be working for your companies, or are going to have your job. So all the young engineers learning about edge AI will eventually be powering this whole industry. Super exciting to see what happens with developers going forward.
And with that, I just want to summarize: it's time to build Ferraris, guys. We've got the Fiat 500 down. We'll keep doing that. Let's build Ferraris together. Let's serve these high-end customers that we're getting access to now. And we've got to do it as an ecosystem, right? Edge Impulse has a role to play. We're a developer platform, but we need everybody involved to go make these things happen. Every class of silicon, every class of OEM and ODM device, every type of system integrator: everybody has a role. And let's get to that million developers, right? We've got a way to go, but this is the critical mass I think it takes. Once we hit this, this industry will just keep going and keep growing. So that's a goal for us all to hit. And with that, I want to thank you and welcome Fabio Ferrari Violante on stage.