Definitions of AI and How Companies Use Them to Lie
By Internet of Bugs
Summary
Topics Covered
- AI Term Enables CEO Lies
- Narrow AI Beats Humans at Games Only
- Generative AI Delivers 80% Solutions
- Demand AI Definitions to Avoid Scams
Full Transcript
So, there's a big problem with AI, and that problem is "AI", well, the term "AI".
Whether it's the abbreviation or the words artificial intelligence, it's still problematic.
It's not that there isn't a definition, it's that there are very many definitions, and the current crop of AI folks are taking advantage of that.
So, in this video, I'm going to talk about at least several of the many various things that AI can mean and try to give you some understanding of what the differences are and hopefully give you at least a little bit of insight into which definition might be the correct one in particular situations.
I hope this helps because AI companies are throwing the term around like you're all incredibly gullible and virtually no one ever calls them on their bullsh------ This is the Internet of Bugs.
My name is Carl.
I've been a software professional for more than 35 years now, and I'm hoping to make the Internet a more reliable, less buggy place.
So first off, the term AI was most often used, up until ChatGPT at least, as a component of science fiction stories.
I think I first ran across it in the mid-1980s when I read William Gibson's Neuromancer for the first time.
Arguably, the most popular use of the term AI in modern media is the Matrix series.
[Morpheus]: "At some point in the early 21st century, all of mankind was united in celebration.
We marveled at our own magnificence as we gave birth to AI."
[Neo]: "AI? You mean artificial intelligence?"
When it comes to understanding what AI means, the way it's used in sci-fi doesn't narrow it down a whole lot, and that's not good for those of us that are having to make decisions about what is currently going on with AI products, or investments, or jobs, or whatever.
For the current crop of AI companies and professionals though, it's a huge plus because it lets them keep their claims vague and it makes it hard for people to call them liars.
It also helps the current AI industry in the press a lot because it means the reporters that have no actual understanding of how anything related to computers works, compare what the AI companies claim to be working on, to what the reporters have seen in the movies, and that keeps them from asking as many questions.
The next definition of "AI" is "the most advanced thing any computer will be able to do anytime between now and billions of years from now when the sun dies."
This is the implicit definition of AI that the AI Doomers are using when they ask people, "what do you think the chances are that AI might destroy all of humanity?"
I made a video about that which you can watch, link below.
I'm sure I'll make more videos about it soon because it keeps going.
The next definition of "AI" is one that the companies kind of imply that they're using, but they rarely actually mean it that way, which is: "whatever the current generation of AI software will be able to do before the current AI bubble pops, and no one wants to spend any money investing in AI anymore."
So this is the definition that gives AI the least amount of functionality of any of the other definitions we're going to talk about today.
Let me stop at this point and talk about how these definitions are used, and then we'll come back to some other definitions later.
So I recently made a video in which I talked about a Sam Altman quote, in which he compared AI to the invention of the transistor.
So let's think about that statement for a sec.
So in theory - not that it would happen in this political climate, but I'm not going to go there - In theory, when a CEO of a company lies about the company's potential future earnings, that CEO could, again in theory, get in trouble for misleading investors.
Now, the invention of the transistor led to every dollar that has ever been spent on any computer, smartphone, smartwatch - even most dumb watches - electronics, TVs, radios, etc. That's a lot of money, like maybe "more than a trillion dollars a year every year since the 1990s" kind of money.
In theory, if OpenAI were to collapse and go bankrupt, someone might sue Sam Altman based on this statement and say that this statement had misled investors.
In the event that that were to happen, Sam Altman or any other CEO could just claim that, by "AI" in that comparison, he was just talking about what AI might someday become.
In other words, "what AI might accomplish before the sun cools," and there's no way anyone alive today can prove what AI may or may not be able to do a billion years from now.
So Sam might be able to get out of the lawsuit without getting into trouble.
On the other hand, if investors were to believe that Sam's comparison was accurate and that the "AI" in the statement meant "what OpenAI is going to do as a product before the current stock bubble collapses," that would be to OpenAI's great advantage.
And likely also to Sam's personal advantage, because if OpenAI were truly about to produce a product that would be worth trillions of dollars every year for the next 30 plus years, then all current investment in OpenAI and Microsoft and NVIDIA, etc. is not only justified, but it's just a drop in the bucket for what the industry is going to turn out to be worth.
In other words, it's to his advantage <i>now</i> for each of you to believe that he's talking about what they're going to release soon, and his advantage in the long term for the <i>courts</i> to believe that he's talking about the theoretical limit of what computers might be able to do eventually.
In a perfect or even a responsible world, when a CEO made a statement like that to a reporter, the reporter should immediately ask what exactly was meant by the term "AI" in that particular statement.
And if you are a reporter or in a position to influence reporters or other people who can question the CEOs in public forums, I urge you to please stop letting them get away with making these kinds of statements unchallenged.
Okay, I'll get off my Soap Box.
The next definition I want to talk about for the term "AI" is the idea of a specific or narrow use reinforcement learning AI.
Here think "chess playing AI."
We're currently at the point in history where there's a zero percent chance that any chess master, no matter how good, will ever beat the world's best chess-playing computer again; a human will never win such a game again.
The same goes for the game of Go. A similar kind of special-purpose AI was trained to fold proteins, which is a hugely important task that humans can't figure out on our own, and the people responsible for that won the 2024 Nobel Prize in Chemistry, and rightfully so.
This definition gets pulled out whenever anyone challenges an AI evangelist by saying that there's something that an AI won't be able to do or won't be able to do anytime soon.
So if you were to tell an AI Doomer that you don't think that AI is capable of killing every human in the world, they'll often say something like, "well, that's what people used to say about chess or Go or Jeopardy or protein folding" or whatever, as if these were the same thing.
So here's the thing: if there are very specific, easily measurable criteria for what constitutes a score or a win for a given task, and that task can be attempted very quickly (in other words, if it's a game with rules that can be played over and over, or a mechanism like protein folding with a known correct answer that the computer can try over and over until it finds the right sequence), then it's a perfect candidate for reinforcement learning AI, and chances are we can set up an experiment that creates the circumstances for the computer to get better than us at figuring that stuff out.
So if someone is talking about AI doing a specific task better than humans, this is probably what they mean.
Things like "killing all humans" are not good candidates for this kind of learning, because the criteria for what constitutes closer or further away from the goal isn't clear.
Each attempt will take a long time, because unlike a chess game where it can play itself, it has to wait on us to see how we react to what it does, and the computer can't attempt taking over the world over and over and over and over and over again, billions of times until it gets it right.
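The property described above (a well-defined score, plus the ability to retry cheaply millions of times) is exactly what tabular reinforcement learning exploits. Here's a minimal sketch, as a toy illustration of my own (the number-line task and all names are invented for this example, not from the video): an agent learns by trial and error to walk from 0 to +3, purely because every attempt is instantly scoreable and repeatable.

```python
import random

# Toy "game with rules": walk a number line from 0 to +3.
# Because the win condition is precise and an episode takes microseconds,
# the agent can play thousands of games and grind its way to a perfect
# policy -- the property chess, Go, and protein folding share, and
# "killing all humans" does not.

GOAL, LOW = 3, -3
ACTIONS = (-1, +1)  # step left or step right

def train(episodes=2000, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> learned value estimate
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            action = rng.choice(ACTIONS)              # explore at random
            nxt = max(LOW, min(GOAL, state + action))  # walls at the edges
            reward = 1.0 if nxt == GOAL else 0.0       # well-defined "win"
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            # Standard Q-learning update toward reward + discounted best value
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def policy(q, state):
    # After training, act greedily on the learned values.
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

q = train()
print([policy(q, s) for s in range(LOW, GOAL)])  # every state learns "go right"
```

Note what made this work: a scalar reward the code can compute itself, and episodes cheap enough to repeat 2000 times. Remove either property, as with open-ended real-world goals, and this whole approach has nothing to optimize against.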
In fact, if an AI were to try to kill all of us - and I don't believe the current generation of AI is even capable of attempting such a thing - But if some more advanced AI sometime in the future were to try to kill all of us, it would basically only have one chance, because we - or at least some of us - would fight back against it in unpredictable ways.
<i>We</i> couldn't even predict all the different ways we might want to try to fight back, so there's no way it could.
And a lot of us think there's a pretty good chance that we'd win.
Now, I don't want to go down that road, because even if we won and destroyed the AI, we'd still lose a decent fraction of the population that I don't want to lose, but that's why I don't actually fear an AI-induced extinction.
So the next definition of "AI" is what we commonly refer to as "generative AI", in other words, "what ChatGPT and such ChatBots can do now and in the near future."
This is when you see AI creating text or images or video or code, or turning text into speech or speech into text, or translating between languages, and so on.
This is a good 80% kind of solution.
If you don't care that much about getting things correct, and you can be flexible at how perfect you need the answer to be, ChatGPT and friends are pretty good at this kind of thing.
This is what AI means if people are talking about AI now or when it's somewhat, but not a lot, improved from what it can do now.
The next definition is what is often referred to as "AGI" or "artificial general intelligence," which is probably what "AI" meant most of the time before ChatGPT became a thing.
So no one can agree on exactly what this means, which is a problem, although not for this video, but vaguely "AI as smart as a human" is kind of what we're talking about.
This is not currently a thing, and we've been promised it a lot, but no one has been able to deliver it to date.
Some people believe that a human brain or mind or soul or what have you is unique and there's no way a computer will ever be able to get that smart.
Personally, I'm not one of those people.
I think that they will eventually get that smart, but it will take a lot more research and many more breakthroughs than we're anywhere close to.
When people talk about AI taking people's jobs, this is usually the kind of AI they are talking about.
Next we've got what's sometimes called "ASI" or "artificial superintelligence."
"artificial superintelligence."
This is the hypothetical point where AIs are so much smarter than us that we might as well be ants or pets or something to them.
I've never seen any evidence that this might actually ever be possible, but no one can prove a negative, so we can't know for sure that it can't happen and people aren't going to stop talking about it.
This basically gets trotted out to scare people, so if people are talking about AI in a way that's scary or seems like it's trying to scare people, this is probably what they're talking about.
So I could think of some more definitions, but I think this is enough to hit you with in one video.
So to recap, there are a number of definitions of "AI" that people use.
There's "generative AI": what ChatGPT does now or very soon.
"Science fiction AI": what you see in the Matrix, "This bubble's AI": what the AI is actually going to be able to do if and when this bubble pops, which we'll have to wait and see.
"Specialized AI": AI that does one well-defined thing really well like play Chess or fold proteins. "AGI": as smart as a human coming for
proteins. "AGI": as smart as a human coming for your job. "ASI", AI as much smarter than us
your job. "ASI", AI as much smarter than us as we are smarter than ants, that's coming to kill us all.
And "theoretical future AI": the unknowably advanced, incredibly vague AI that's brought up to argue just with AI skeptics.
So in the future, when you hear someone say "AI", think about what definition or definitions they're probably using, and if there's ever a time when it seems like they may be conflating multiple definitions at the same time, be very, very wary.
They're probably trying to screw you.
So that's it for me today.
Thanks for watching.
Let's be careful out there.