You’re Not Behind (Yet): How to Learn AI in 17 Minutes
By theMITmonk
Summary
## Key takeaways

- **AI Predicts, Doesn't Understand**: Generative AI systems like ChatGPT don't actually understand our language. They predict it, just as your brain predicts "wall" after "Humpty Dumpty sat on a". [00:32], [01:07]
- **Learn Machine English with AIM**: Speak machine English using AIM: A for actor (tell it who it's acting as), I for input (give it context and data), M for mission (what you want it to do). This turns vague prompts like "fix my resume" into structured ones that get 5-10x better results. [03:15], [03:50]
- **Master One AI Model First**: Don't jump between tools; pick one foundational model like ChatGPT and go deep, the way drumming trains your brain for the patterns of guitar. Per a Frontiers in Psychology study, drummers pick up guitar faster than complete beginners. [04:55], [05:30]
- **Build Context with MAAP**: Use MAAP for context: Memory (conversation history), Assets (files/data), Actions (tools like search), Prompt (the instruction). Richer context means better AI reasoning and responses. [07:06], [07:47]
- **Debug with 3 Patterns**: When outputs are off, use chain of thought ("think step by step"), the verifier ("ask me three clarifying questions"), and refinement ("propose sharper versions of my question"). Prompting is iterating, not just typing. [09:25], [10:00]
- **Verify AI with 5 Checks**: AI sounds confident even when it's wrong; verify using assumptions (list and rank them), sources (cite two independent ones), counter-evidence, auditing (recompute figures), and cross-model verification across ChatGPT, Gemini, and Claude. [13:30], [14:07]
Topics Covered
- AI Predicts, Doesn't Understand
- Master Prompts with AIM Framework
- Master One AI Model Deeply
- Debug Prompts by Iterating
- Verify AI with Five Critiques
Full Transcript
Most people using AI are doing it wrong, which is why it's surprisingly easy to get ahead of 99% of them. I have spent over 20 years in tech and AI as a CEO, board member, and investor, building billion-dollar companies. And here's what I'm seeing: the gap between people who understand AI and those who don't is getting wider, faster. In this video, I'll give you a clear seven-step roadmap to master AI like the top 1%. And the best part is you can actually do it in just 30 days, even if you're a total beginner. Let's dive in.

Week one starts with learning what I call machine English. Most people talk to AI like it's a person, and that's a huge mistake. Why? Because generative AI systems like ChatGPT don't actually understand our language. They predict it. And that's where most people get stuck. If I said "Humpty Dumpty sat on a...", your brain's going to fire "wall." You knew what was coming. Your brain predicted it. You could have said "Humpty Dumpty sat on a roof." It's accurate, but you knew "wall" was more likely based on what you've seen before. Think about Google search. It does autocomplete the same way. Why? Because it has seen so many search queries before, learned from them, and now gives you the most likely option. AI models like ChatGPT or Gemini work in a similar fashion, but they're different from search engines because they don't store any pre-baked answers. They generate the answer on the fly.
So how do they generate it? At a very high level, AI breaks your text into smaller parts called tokens. Each token is a word or sometimes part of a word. "Humpty" is probably one token, "Dumpty" another, "sat" another, "wall" another. Then AI converts each token into a list of numbers, also known as a multi-dimensional vector. Those numbers are placed inside a massive mathematical space called an embedding space, and in that space, similar ideas tend to live closer together. The system has learned from previous experience, so it knows that the words "Humpty," "egg," "wall," and "fall" will be close together, but far from words like "motorcycle" or "chocolate." Now, when it's time to generate the answer, AI looks at the context and predicts the most likely next token. So when it sees "Humpty Dumpty had a great...", it weighs all the options: "Humpty Dumpty had a great party," "Humpty Dumpty had a great day," "Humpty Dumpty had a great chocolate," and it sees that the word "fall" is the most likely outcome. So the line is generated and finished not from memory, not from stored facts, but from probability and proximity. That's why AI can feel so smart, but also so alien.
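If you want to see this next-token prediction for yourself, here's a minimal sketch using the small open GPT-2 model via the Hugging Face transformers library (my choice of model and library; the video doesn't name either). It prints the five tokens the model considers most likely to come next:

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Break the text into tokens (each token becomes an integer ID).
inputs = tokenizer("Humpty Dumpty sat on a", return_tensors="pt")

# One forward pass gives a score for every possible next token.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

# Softmax turns scores into probabilities; show the top five guesses.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={p:.3f}")
```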
Now, I'm skipping a lot of details here, but the important takeaway is this: when your prompt is vague, this guessing machine called ChatGPT or Gemini will produce guesses that are also vague. And if your prompt is sharp and targeted, AI will come back to you with sharp and targeted guesses. That's what I call machine English. It helps AI compute your intent, not just try to comprehend it.

So, what does a sharper prompt look like? I call it AIM. A for actor: tell the model who it's acting as. I for input: give it the context and data it needs. And M for mission: what do you want it to do? Instead of typing, let's say, "fix my resume," try typing: "Hey, ChatGPT, you are the world's most sought-after résumé editor and business writer. You've reviewed thousands of résumés that led to interviews at top tech companies." You've told the AI what its persona is, what it's acting as. Second line: "I'm attaching my resume and the job description for a senior product manager role at a fintech company." That's your input. Third, the mission: "Review it and give me a bullet list of 10 specific ideas on how to improve clarity, measurable impact, and alignment with the role. Your mission is to help me build the best resume that gets me hired." That's how you take AIM. It turns a prompt into a structure the model can understand, compute, and reason with. You can use this three-part structure in almost all prompts, and from now on you will start seeing results that are at least five or ten times better than before. Only when you learn its language does AI finally start working for you.
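As a quick illustration, here's a tiny sketch of the AIM structure as a reusable template (the function name and placeholders are my own, not from the video):

```python
def aim_prompt(actor: str, input_context: str, mission: str) -> str:
    """Assemble a prompt from the three AIM parts: Actor, Input, Mission."""
    return (
        f"You are {actor}.\n\n"
        f"Here is the context and data you need:\n{input_context}\n\n"
        f"Your mission: {mission}"
    )

print(aim_prompt(
    actor=("the world's most sought-after résumé editor and business writer, "
           "who has reviewed thousands of résumés that led to interviews at "
           "top tech companies"),
    input_context=("[paste your résumé and the job description for the senior "
                   "product manager role at a fintech company]"),
    mission=("Review it and give me a bullet list of 10 specific ideas to "
             "improve clarity, measurable impact, and alignment with the role."),
))
```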
Now that you understand how to speak to AI, we're going to pick your instrument. Here's the thing: most people start their AI journey the wrong way. They Google "top 50 AI tools," pick 10, and jump from one to the other, skimming through all of them. That's a recipe for failure, because there's so much out there. My recommendation: pick one, go deep. Think of learning AI the same way you would learn an instrument. There is a study in Frontiers in Psychology that found that drummers pick up guitar faster than complete beginners, even though drumming isn't about melody and requires very different physical skills. I personally had the same experience. I spent tens of thousands of hours as a drummer, and when I picked up guitar, it wasn't easy, but it wasn't uncomfortable, because I already knew how to practice and my brain was trained to see structures and patterns. The deeper you dig into one foundational model, the faster you will find the rhythm of all the others.

So, which one do you pick? If you want the most mature one, pick ChatGPT. If you're deep into Google's stack and ecosystem, try Gemini. If you want more business- and project-based AI, go with Claude. But really, it doesn't matter which you pick. In the first week, spend time with one of them and learn its personality, its cadence, its limits, its strengths. The goal is to start feeling the rhythm. Once you get comfortable, try using the AIM framework we talked about. By the end of week one, you should be able to write a structured prompt without thinking.
All right, so we've started using AI. Now let's talk about what actually makes your outputs smart, and that's context. The world's smartest AI will sound clueless unless you feed it context. Every answer AI gives depends on how it understands the question. If you don't give it context, it has no grounding. Remember that inside these AI models there is nothing but a crazy mathematical space filled with billions of numbers. Context is the map that helps you navigate that space, to tell AI where to look and what matters. And the best way to build that map is with an acronym I call MAAP. M is for memory: the conversation history or the notes that carry over from previous chat sessions you've had with the AI. You can re-paste the thread or ask the model to summarize before starting again; that's how you start building continuity in your conversations. The first A is for assets: the files, data, and resources that you attach or copy-paste into your prompt. These assets help you ground the model in reality. The second A is for actions: the tools the model can call to do work, like "search the web," "scan my drive," "write this code," or "create a Notion doc." And P is the prompt, the instruction itself. The better you get with memory, assets, and actions, the better the context you'll give AI in the prompt. And the richer the context, the better the AI's reasoning and response.
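One way to picture MAAP is as the pieces of a single chat request. Here's a minimal sketch using the common chat-message format (the structure, placeholder tool, and example strings are my own illustration, not an API the video specifies):

```python
# Memory: a running list of prior turns, carried into each new request.
memory = [
    {"role": "user", "content": "Here's my resume: ..."},
    {"role": "assistant", "content": "Summary of our last session: ..."},
]

# Assets: files or data pasted in so the model is grounded in reality.
assets = "Job description (senior PM, fintech): ..."

# Actions: tools the model may call; "web_search" here is a placeholder.
actions = [{"name": "web_search", "description": "Search the web for facts"}]

# Prompt: the instruction itself.
prompt = "Rank my resume's gaps against this job description."

# The request bundles all four; the richer the context, the better the reasoning.
request = {
    "messages": memory + [{"role": "user", "content": f"{assets}\n\n{prompt}"}],
    "tools": actions,
}
print(request)
```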
Once you start using these frameworks, AIM and MAAP, you have joined the top 10% of AI users. But if you want to hit that absolute expert level, there is one more thing you really need: debug your thinking, which is step four. When you're not getting the right answer, the problem is not the AI, it's your thinking. I remember the first time I ever prompted an AI. It was one of the earliest models from OpenAI, and I spent an entire day trying to make sense of it. By the end of it I was super frustrated, because it was random, unpredictable. But back then no one understood; the phrase "prompt engineering" didn't even exist yet. Because prompting isn't typing. It's iterating. When the output is weak, I assume the fault is mine, because it is. Did I give it the right persona? Did I provide the right context? Did I give it the right goal? And sometimes I even ask the model itself: what did you do, and why did you choose that answer?
It will explain its logic; it will explain its chain. And that's when the magic starts. You're not just using AI, you're learning how it thinks. There are three cheat codes I use for that. The first is the chain-of-thought pattern. When the answer seems off, I say: "Think step by step. Show your reasoning. Then give me the final, concise answer." The second is the verifier pattern. I say to the AI: "Ask me three questions that would clarify my intent to you. Ask them one at a time, then combine what you've learned and try again." And the third is the refinement pattern, where you refine your input itself: "Before answering, propose two sharper versions of my question and ask which one I prefer." So AI will tell me how to ask the right way, and then we continue. And you have to keep iterating with these patterns, because these loops teach the model how to understand you and teach you how to understand the model.
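If you want these three debug patterns at your fingertips, here's a small sketch that appends any one of them to a base prompt (the dictionary and wording are my own paraphrase of the patterns above):

```python
# Three reusable debug suffixes, paraphrasing the patterns above.
DEBUG_PATTERNS = {
    "chain_of_thought": (
        "Think step by step. Show your reasoning, "
        "then give me the final, concise answer."
    ),
    "verifier": (
        "Before answering, ask me three questions, one at a time, "
        "that would clarify my intent. Then combine what you learned."
    ),
    "refinement": (
        "Before answering, propose two sharper versions of my question "
        "and ask which one I prefer."
    ),
}

def debug_prompt(base_prompt: str, pattern: str) -> str:
    """Append one of the three debug patterns to a base prompt."""
    return f"{base_prompt}\n\n{DEBUG_PATTERNS[pattern]}"

print(debug_prompt("Why is my onboarding email underperforming?", "verifier"))
```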
Test, tweak, tune, push, until you can tell why something is working and why something is off. That's when it clicks. You're not talking at AI anymore. You're having an ongoing conversation. You and AI are learning together, from each other.

But here's the thing: it's not enough to just debug your mind. If your post sounds like every other LinkedIn post pasted from ChatGPT, you still have a problem. And that's why step five is to steer to experts. When you ask ChatGPT a question, you're not searching a database of answers. You're sampling from millions of probable ideas that AI has learned over time and is storing as billions of numbers. Some are brilliant, some are average, some are completely made up, and some are flat-out wrong. If you prompt vaguely, like "explain how to make a team more innovative," the model will give you a superficial, generic, blah answer full of buzzwords. You'll read it and think, "Yeah, I already knew that." So, how do you fix that?
You direct the model away from the middle and toward the sharper edges of its brain. So instead of that vague prompt, you can say this: "Explain how to make a team more innovative using ideas from Pixar's Braintrust, Satya Nadella's strategy, and Harvard's research." Now you pull the model from mediocrity into mastery by navigating it toward experts, frameworks, depth. What if you want to learn about black holes and you don't know who the experts are? No problem. Ask AI first: "List the top experts, researchers, research papers, and current thinking on black holes." Then feed the same thing back to the model and prompt: "Using these experts and sources, synthesize an original framework that fills the current gaps in the science of black holes," or whatever it is you're after. That's the way you make sure AI is not an echo chamber anymore.
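This expert-steering move is really a two-step chain: ask for the experts, then feed them back. Here's a minimal sketch (the `ask` function is a hypothetical stand-in for whichever model you're using):

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to your chosen model."""
    print(f"--- prompt sent ---\n{prompt}\n")
    return "[model response here]"

# Step 1: discover the sharp edges of the model's knowledge.
experts = ask(
    "List the top experts, researchers, research papers, "
    "and current thinking on black holes."
)

# Step 2: feed that list back in to steer away from the generic middle.
answer = ask(
    f"Using these experts and sources:\n{experts}\n\n"
    "Synthesize an original framework that fills the current gaps "
    "in the science of black holes."
)
```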
But remember, you're going to need to verify what you get. That's our step six. Sometimes AI will tell you things like "68% of Americans are getting divorced." I mean, you know it's not true. But the scary part is that AI will sound just as confident when it's wrong as when it's right. So, you can tell AI a hundred times, "stop making stuff up," but all models are essentially generative by design. Making things up is why they exist. So, what do you do about that? You verify. Don't just consume; critique. There are five ways to separate intelligence from illusion: assumptions, sources, counter-evidence, auditing, and cross-model verification. Let's take them one at a time. Assumptions: ask, "List every assumption you made and rank each by confidence." Second is sources: ask, "Cite two independent sources for each major claim you just made. Include title, URL, and a one-line quote." Now you can check it yourself. That's the scaffolding behind the answer. Counter-evidence: push it. "Find one credible source that disagrees with your answer. Explain the discrepancies." That's where real reasoning lives. Auditing is the fourth one: ask, "Recompute every figure. Show your math or code." You'll be shocked how often the numbers change once you make it slow down and start auditing. And finally, cross-model verification. This one's my favorite. I run the same prompt in ChatGPT, Gemini, and Claude. I take the output from one model and ask another to critique it. Or I feed the claims of one model into the other and say, "Verify this." That's how you separate noise from knowledge.
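Cross-model verification is easy to script. Here's a minimal sketch of the loop (the `call_model` function is a hypothetical placeholder; wire it to whichever model APIs you actually use):

```python
def call_model(model: str, prompt: str) -> str:
    """Hypothetical placeholder: replace with a real API call per model."""
    return f"[{model}'s answer to: {prompt[:40]}...]"

MODELS = ["ChatGPT", "Gemini", "Claude"]
question = "What share of American marriages end in divorce? Cite sources."

# Run the same prompt through every model.
answers = {m: call_model(m, question) for m in MODELS}

# Feed each model's claims to a different model for critique.
for i, m in enumerate(MODELS):
    critic = MODELS[(i + 1) % len(MODELS)]
    critique = call_model(
        critic,
        "Verify this answer. List any claims that look wrong or "
        f"unsupported:\n\n{answers[m]}",
    )
    print(f"{critic} on {m}:\n{critique}\n")
```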
By the end of your third week, you'll start feeling more in control of your output. But here's the problem: the best AI outputs aren't the ones that sound the most original, they're the ones that sound like you. That's why step seven is about developing taste. Most people use AI like a vending machine. They push a button, grab the same junk-food output everyone else gets, and call it a day. If you do that, most people will know you just copy-pasted it. But you're past that now, right?
It's your fourth week. It's time to step into the ring. Treat AI like your sparring partner. Argue with it. Push back. Sharpen your thinking. Sharpen its thinking. That's where the OCEAN framework comes in. It's how you turn generic answers into tasteful insights, something that sounds like you. O, original: look at the response. Is there a non-obvious idea in it? If not, push it. Ask: "Give me three angles no one else has thought about. Label one as risky and recommend the one you like the most." C, concrete: are there names, examples, and numbers that make sense? If not, ask: "Back every claim with one real example." E, evident: is the reasoning visible? Is there enough evidence? If not, ask: "Show your logic in three bullets. Provide evidence before you provide the final answer." A, assertive: does it take a stance you could agree or disagree with? If not, push it again: "Don't tell me what I want to hear. Pick a side. State your thesis, defend it, and then address the best counterpoint." N, narrative: what's the story? Does it flow? Is it tight? Guide it: "Write it like a story: hook, problem, insight, proof, action," whatever you want in that story. So, that's the OCEAN framework to add taste to your output.
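As with the debug patterns, the five OCEAN push-backs work well as a reusable checklist. A small sketch (the dictionary wording paraphrases the prompts above):

```python
# The five OCEAN push-backs, paraphrased from above, as reusable follow-ups.
OCEAN = {
    "Original": "Give me three angles no one else has thought about. "
                "Label one as risky and recommend your favorite.",
    "Concrete": "Back every claim with one real name, example, or number.",
    "Evident": "Show your logic in three bullets, with evidence, "
               "before the final answer.",
    "Assertive": "Don't tell me what I want to hear. Pick a side, state "
                 "your thesis, defend it, then address the best counterpoint.",
    "Narrative": "Rewrite it like a story: hook, problem, insight, "
                 "proof, action.",
}

# Spar with the model: fire whichever critique the draft is missing.
for letter, push in OCEAN.items():
    print(f"{letter}: {push}")
```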
Now, as you apply this over 30 days, you will start noticing something deeper. Every prompt you write, every revision you push, every judgment you make, you're not just training the model, you are training you. AI is coming whether we like it or not. To some, it might be triggering lots of deep fears, but I remain a perpetual optimist. I think AI is not here to replace human work. It's here to restore human worth. If you like this video, don't forget to subscribe and check out my most recent video here. Thank you and I love