How to digest 36 weekly podcasts without spending 36 hours listening | Tomasz Tunguz
By How I AI
Summary
## Key takeaways
- **Automate podcast consumption**: Tomasz Tunguz developed a 'Parakeet Podcast Processor' to download, transcribe, and summarize 36 weekly podcasts, enabling him to extract value without dedicating 36 hours to listening. [00:06], [00:39]
- **Terminal-based workflow for efficiency**: Preferring terminal-based tools for their low latency and scriptability, Tunguz built a personalized software experience for podcast processing that offers greater control than off-the-shelf solutions. [01:05], [10:20]
- **Extracting investment theses from quotes**: The system extracts key quotes from podcast transcripts and uses AI prompting to suggest actionable investment theses, like identifying potential in AI-assisted design tools. [00:49], [07:25]
- **Iterative blog post generation with AI**: Tomasz uses AI to draft blog posts, employing an 'AP English teacher' grading system with three iterative feedback loops to refine the content and approach an 'A minus' quality. [15:31], [17:34]
- **Matching personal writing style is challenging**: Despite fine-tuning models and providing extensive context, capturing a personal writing style, including specific punctuation and sentence structures, remains a significant challenge for AI. [18:25], [21:11]
- **AI as a writing evaluation tool**: AI can serve as a valuable first pass for evaluating writing, checking grammar and logic, freeing up teachers to focus on fostering creativity and stylistic nuances. [28:16], [29:08]
Topics Covered
- How AI builds your hyper-personalized content stream.
- Large Language Models simplify entity extraction tasks.
- Terminal: The low-latency key to AI productivity?
- Can AI objectively grade and improve your writing?
- How will AI enable 30-person, $100M companies?
Full Transcript
I have a list of 36 podcasts, but I
don't have 36 hours every week to listen
to 36 podcasts.
>> So, what I did is I created a system
that goes through each of those podcasts
every day and downloads the podcast
files and then transcribes them.
>> Can you show us how it's actually built?
Like where do you get this feed? It
sounds like you run it locally. How does
this all work?
>> I wrote this thing called the Parakeet
podcast processor. And this podcast
processor basically takes in a file and
what it'll do is it will read the file.
It'll download it and then it'll convert
it via ffmpeg. Then that will take the
audio and convert it to text. So here's
the podcast summaries for today. So
there's Lenny's podcast, the host, the
guests, a comprehensive summary. So
here's a conversation with Bob Baxley,
key topics, and then key themes. The
part that's most valuable for me are
these quotes. And those quotes, I'll
read them. It'll suggest a bunch of actionable investment theses for a venture capital firm, which is put into the prompt, like, okay, maybe we should be looking at AI-assisted design tools.
>> You've gotten not only the content you want but the user experience you want. You control it end to end, and you can build this hyper-personalized software experience.
[Music]
Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today I have Tomasz Tunguz, a legend in the enterprise software business and founder of Theory Ventures, which invests in early-stage enterprise AI, data, and blockchain companies. Tomasz is followed by over half a million folks on his blog and
LinkedIn. And he's going to show us
today how he uses AI to keep up with all
the podcasts, including this one, and
draft blog posts that would be approved
by your AP English teacher. Let's get to
it. This episode is brought to you by
Notion. Notion is now your do-everything AI tool for work. With new AI meeting notes, enterprise search, and research mode, everyone on your team gets a notetaker, researcher, doc drafter, and brainstormer. Your new AI team is here
right where your team already works.
I've been a longtime Notion user and
have been using the new Notion AI
features for the last few weeks. I can't
imagine working without them. AI meeting notes are a game changer. The summaries are accurate, and extracting action items is super useful for standups, team meetings, one-on-ones, customer interviews, and yes, podcast prep.
Notion's AI meeting notes are now an
essential part of my team's workflow.
The fastest-growing companies like OpenAI, Ramp, Vercel, and Cursor all use Notion to get more done. Try all of Notion's new AI features for free by signing up with your work email at notion.com/howiai. To celebrate 25,000 YouTube followers on How I AI, we're
doing a giveaway. You can win a free
year to my favorite AI products
including v0, Replit, Lovable, Bolt, Cursor, and of course, ChatPRD, by leaving a rating and review on your favorite podcast app and subscribing on YouTube. To enter, simply go to howiaipod.com/giveaway,
read the rules, and leave us a review,
and subscribe. Enter by the end of
August and we will announce our winners
in September. Thanks for listening.
Okay, Tom, I'm so happy to have you here
because you are going to show us how you
are solving a problem I'm creating for
you. And the problem I'm creating for you is I am creating yet another piece of interesting content that you have no time to consume, certainly in the format that we get it out. And I know this type of content is a really interesting source of ideas, of trends, of companies. So tell us what you built and why.
>> Absolutely. Well, thanks for having me on. So I prefer to read rather than to listen, because I can skip ahead, and I think there's a lot of information inside of podcasts that people share that I would love to know. And so I built what I guess I call a podcast ripper. The idea is I have a list of 36 podcasts, this one included, that I really admire and want to learn from, but I don't have 36 hours every week to listen to 36 podcasts, right? So what I did is I created a system that goes through each of those podcasts every day, downloads the podcast files, and then transcribes them. Initially it used OpenAI's open-source Whisper, which takes audio and converts it to text, and then there's a newer model called Parakeet, which Nvidia released and which runs really well on a Mac. And so I'll take that text and then I'll run it through a prompt, and it will spit out a whole bunch of different things: a high-level summary, or whatever I ask it to.
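As a rough sketch of that daily "go through each feed and download the episode" step, here is what it might look like in Python, assuming a plain-text list of RSS feed URLs and the feedparser and requests libraries; the file names and helper are illustrative, not the actual Parakeet Podcast Processor code.

```python
# Hypothetical daily feed-download step: read feeds.txt, grab each show's
# newest audio enclosure, and save it locally for transcription.
import feedparser
import requests
from pathlib import Path

AUDIO_DIR = Path("episodes")
AUDIO_DIR.mkdir(exist_ok=True)

def download_latest(feed_url: str) -> Path | None:
    """Fetch the newest episode's audio enclosure from one RSS feed."""
    feed = feedparser.parse(feed_url)
    if not feed.entries:
        return None
    enclosures = feed.entries[0].get("enclosures", [])
    if not enclosures:
        return None
    title = feed.entries[0].get("title", "episode").replace("/", "_")
    out = AUDIO_DIR / f"{title}.mp3"
    out.write_bytes(requests.get(enclosures[0]["href"], timeout=60).content)
    return out

for url in Path("feeds.txt").read_text().splitlines():
    if url.strip():
        print(download_latest(url.strip()))
```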
>> Okay. Can you show us how it's actually built? Like where do you get this feed? It sounds like you run it locally. How does this all work?
>> So I initially downloaded the Whisper
app and what I did is I wrote this thing
called the Parakeet podcast processor.
And this podcast processor
basically takes in a file and what it'll
do is it will read the file. It'll
download it and then it'll convert it
via ffmpeg
which is a library that converts
different kinds of files and that will
take the audio and convert it to text.
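A sketch of that convert-then-transcribe step, assuming the ffmpeg CLI is installed. Whisper is used here as a stand-in for the Parakeet model Tomasz mentions; the model choice and paths are assumptions.

```python
# Hypothetical convert-and-transcribe step: ffmpeg normalizes the audio, then a
# speech-to-text model turns it into plain text.
import subprocess
import whisper  # pip install openai-whisper

def convert_to_wav(audio_path: str) -> str:
    """Use ffmpeg to convert any audio file to 16 kHz mono WAV for transcription."""
    wav_path = audio_path.rsplit(".", 1)[0] + ".wav"
    subprocess.run(
        ["ffmpeg", "-y", "-i", audio_path, "-ar", "16000", "-ac", "1", wav_path],
        check=True,
    )
    return wav_path

def transcribe(audio_path: str) -> str:
    """Transcribe one episode to plain text."""
    model = whisper.load_model("base")
    return model.transcribe(convert_to_wav(audio_path))["text"]
```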
And then I use Gemma 3, which is really good at this, to actually clean up the transcript. So if we search for the Ollama model, basically what I'm doing is I'm just cleaning up the file here: you're a transcript editor, clean up this podcast while preserving all the content, keep the same length, remove the ums and the uhs, preserve all technical conversations. And that returns a clean transcript. And so on a given day, there might be five or six different transcripts that need to be transcribed.
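A minimal sketch of that cleanup pass through a local Ollama model; the prompt paraphrases what Tomasz describes, and the exact model tag is an assumption.

```python
# Hypothetical transcript-cleanup pass via a local Ollama server.
import ollama  # pip install ollama; requires a running local Ollama server

CLEANUP_PROMPT = (
    "You're a transcript editor. Clean up this podcast transcript while "
    "preserving all the content. Keep the same length, remove the ums and uhs, "
    "and preserve all technical conversations.\n\n{transcript}"
)

def clean_transcript(raw: str) -> str:
    response = ollama.chat(
        model="gemma3",
        messages=[{"role": "user", "content": CLEANUP_PROMPT.format(transcript=raw)}],
    )
    return response["message"]["content"]
```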
And then what I'll do is run them through the Parakeet podcast orchestrator. Actually, it's just a podcast orchestrator, which is here. I'm storing each of the files that I'm transcribing in a local DuckDB, which is a little database, that says I processed this particular podcast on this particular day. Then I save the transcripts, and I take all the transcripts from that particular day from the database, which is here. And then I send them through a prompt, which, let's see if we can find it, is the daily summarizer here. So it generates a daily summary document, which is here.
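A sketch of that orchestrator bookkeeping: a local DuckDB file records which podcast was processed on which day, and the day's transcripts are pulled back out for the summarizer prompt. Table and column names are illustrative, not the real schema.

```python
# Hypothetical DuckDB bookkeeping for the orchestrator.
import duckdb
from datetime import date

con = duckdb.connect("podcasts.duckdb")
con.execute("""
    CREATE TABLE IF NOT EXISTS transcripts (
        podcast TEXT,
        processed_on DATE,
        transcript TEXT
    )
""")

def record(podcast: str, transcript: str) -> None:
    """Mark a podcast as processed today and store its transcript."""
    con.execute(
        "INSERT INTO transcripts VALUES (?, ?, ?)",
        [podcast, date.today(), transcript],
    )

def todays_transcripts() -> list[tuple]:
    """Everything processed today, ready to be sent through the daily summarizer."""
    return con.execute(
        "SELECT podcast, transcript FROM transcripts WHERE processed_on = ?",
        [date.today()],
    ).fetchall()
```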
It'll produce a file that looks like
this. So here's the podcast summaries
for today June the 13th. So there's
Lenny's podcast, the host, the guest, a
comprehensive summary. So here's a
conversation with Bob Baxley.
Key topics: so here he's talking about his philosophy, company culture, and then key themes. And the part that's most valuable for me are these quotes.
And those quotes, you know, I'll read them. It'll suggest a bunch of actionable investment theses for a venture capital firm, which is put into the prompt, like, okay, maybe we should be looking at AI-assisted design tools.
And then that might kick off a market map. We're really thesis-driven, so maybe that starts a conversation on a Monday and we decide to staff a market.
Then it'll produce these noteworthy
observations which are uh actually put
into tweets. So here are the Twitter
post suggestions. So I haven't done this
yet. I'm still working on the prompt,
but the idea is like could we actually
automate like linking back to people who
we really like? And then another part,
this is a little out of order, but
another part here is are there startups
that are mentioned within these podcasts
that we should know?
Right? So, here's Airbnb, Google,
Amazon, Stripe. We know all these guys.
I don't know what this company is. And
so, this might go into our CRM,
right, to be enriched. And then the last is we'll actually generate prompts for blog posts in the style that I write, and then this will go into a Python pipeline to actually machine-generate blog posts.
>> So before we get to the machine-automated AI blog post pipeline, I have a couple questions about this process, because I think you did a couple of interesting things. One, I have a question about whether you found higher quality by cleaning up the transcripts. Like, how much did that incremental input quality actually help your output?
>> So it helped. The answer is: initially a lot, and then over time less. Initially, what I was trying to do was to find these companies. I was using named entity extraction algorithms from Stanford (there's a Python library), and it was having a really hard time, so I was cleaning it up to try to get the performance to improve. Then I just pushed it to a really large language model and it spit it out much better, and so the cleaning is not that useful anymore.
>> Yeah, I was looking at it and you were focusing on proper nouns, company names, and so I'm assuming if you want to extract something like Stripe, which has many meanings, getting it into a proper noun format, for example, would help with that extraction. But you're saying that, as opposed to these kinds of packaged libraries for specific machine learning use cases, you could instead just send it to an LLM, and that ended up meaning you could worry less about the input quality of your transcripts and more about the prompting and structure of the output.
>> Yeah, that's exactly right. So my goal initially was to do everything locally, and so I was using Ollama, I was using that Stanford library, and Parakeet is run locally.
>> And then what I realized is, particularly for the named entity extraction, more powerful machines are much better.
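A sketch of that "just push it to a large language model" approach that replaced the Stanford NER library for company extraction; the model name, prompt, and JSON output convention are assumptions.

```python
# Hypothetical LLM-based company extraction over a cleaned transcript.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY

def extract_companies(transcript: str) -> list[str]:
    prompt = (
        "List every company or startup mentioned in this podcast transcript. "
        "Return only a JSON array of company names, nothing else.\n\n" + transcript
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model follows the JSON-only instruction; a stricter schema or
    # retry would be needed in practice.
    return json.loads(resp.choices[0].message.content)
```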
>> Yeah. And then I have to ask another question, which is everybody's going to look at this and they're going to go, what the hell is he typing in? Like, we have a couple people that are like, why in the terminal? So I'm just curious, you know, did you ever think about putting a UI on top of this? You seem very comfortable in the terminal, so it seems to work for you. I'm just curious about where you decided to focus your user experience efforts on this personal...
>> Well, I love the terminal. I read this blog post by Dan Luu, with two U's, where he was talking about latency, the latency between the keyboard and the computer, and it turns out that the terminal is actually the application with the lowest latency, and the lower the latency, the less frustration you have using a computer. So during COVID, I decided to learn how to use a terminal, and since then I've sort of lived in it. My email client is a terminal-based email client, and I use that because it's really fast, and then I can also script different things. So I can delete 10 messages at once, or I can call an AI to automatically respond to an email or add a company to a CRM. So that was really important. But at a high level, I've just become really comfortable with it. It's really fast. And then the last thing I'll say is I think Claude Code is an amazing product. The great part about what Claude Code does is, I have about 2,000 blog posts, and I can just go into Claude Code and say modify the files in this way, or change the blog post theme. Recently I launched a blog post generator which takes all of the content that I have on the blog, and you can ask it a question and it will write a blog post for you about your particular question. And I did that all using Claude Code.
Yeah, I mean I have two sort of thematic things that I think of while observing this workflow and your love for the terminal. I agree, Claude Code is an amazing product, and it's a really well-designed terminal-based product. I love it. I love that you have this constrained surface area in which to communicate progress and latency and changes, and I think it's really thoughtfully designed. So for anybody out there building dev tools in particular, learn how to design in the terminal. It's so important, because you make really fabulous products for, I guess, people like you and me that say things like, I picked up the terminal over COVID as my hobby. The
second thing that I was thinking about
is since generative AI has become
mainstream, every single person has said
somebody make a podcast digest
application. Every single person I know
is like it was one of the first projects
I made. I made my kids a podcast digest,
their favorite podcast, and it made
little um
>> quizzes about the topics that they could
answer.
>> Super cute. So, I think it was a very common use case. But what I was thinking is, no startup is going to say, you know, this is going to be a huge-TAM company: a terminal-based podcast transcript processor and thematic extraction generation engine. And I think this is such a perfect example of, yeah, there's probably something off the shelf that could do something like this, but you have gotten not only the content you want, but the user experience you want. You control it end to end, and you can build this hyper-personalized software experience, which was not possible, or it wasn't efficient to do, I would say, until very recently.
>> Yeah, it fits my workflow like a glove, right? And anytime something comes up and changes, like maybe there's a section that's out of order like we found, I can just go into Claude Code and update it, and it'll be done in 15 to 30 seconds, right? And you know, I really wanted an email of this every day, and that was straightforward. So, I agree with you. But I think we're at a place where the marginal friction to achieving a glove-like fit with little utilities that maybe you wouldn't have paid for in the past is now just so quick, right? Like you're just answering a couple of emails and it'll be done.
>> Yep.
>> You've seen the doom and gloom
headlines. AI is coming for your job.
But the reality is a little bit
brighter. In Miro's latest survey, 76%
of people say AI can boost their work.
It's just that 54% still don't know when
to use it. As a product leader and a
solo founder, I live or die by how fast
I can turn fuzzy ideas into crisp value
propositions, road maps, and launch
plans. That's why I love Miro's
innovation workspace. It drops an AI
co-pilot inside the canvas so stickies,
screenshots, and brainstorm bullets can
become usable diagrams, product briefs,
and even prototypes in minutes. Your
team can dive in, riff, and iterate. And
because the board feels like a digital
playground, everyone has fun while you
cut cycle time by a third. Miro lets
humans and AI play to their strengths so
that great ideas ship faster and
happier. Help your teams get great done
with Miro. Check out miro.com
to find out how. That's miro.com.
Okay. So, you have taken all this
content um including amazing content
from the Lenny's Podcast Network and
you're processing it. You're extracting
themes. You're extracting quotes. You're
finding companies that may be
interesting to reach out to. You're at
least drafting Twitter posts. We will
see if those actually get posted um you
know in production. And then let's talk
about your second workflow which is you
extract insights that might be
interesting for you to write about or
add your perspective on and then you
actually turn those into drafts using
AI.
>> There's a lot of stuff that's happening in the ecosystem, and every once in a while I like to write about what somebody said in a podcast, right? And I think today, I was looking, the GitHub CEO was actually interviewed. So Matt Turck, who's at another venture firm, interviews Thomas, and he talks about how AI and coding is the future. So what I really want to do here is, let's suppose I really wanted to have a blog post that was tied to this. What I can do is say, okay, I have this podcast generator, and I'll show it to you in a second. I'll take as context the transcription of that podcast, which is here, and then I'll define an output file, and then I'll give it a little prompt, which is like, you know, he said this quote, which is actually within the podcast summary: "Everything that I can easily replace with a single prompt is not going to have any value; it will have the value of the prompt and the inference and the tokens, but that's often a few dollars." And I'll tell it, okay, go look for podcasts that are related to this, and I've categorized them as AI. And then here, actually, there's a bug. So, demo fail. I was trying to fix it before I got on the video, but the searching for the relevant blog post is failing, and I need to figure that out. It's run through LanceDB; the vector embeddings are in this database.
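A sketch of that related-post lookup: past blog posts embedded into a local LanceDB table, then searched by topic to pull stylistically relevant context for a new draft. The embedding model and table schema are assumptions, not the actual pipeline.

```python
# Hypothetical LanceDB index over past blog posts, searched for related context.
import lancedb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
db = lancedb.connect("./blog_index")

def index_posts(posts: list[dict]) -> None:
    """posts: [{'title': ..., 'text': ...}] stored alongside their embedding vectors."""
    rows = [{"vector": embedder.encode(p["text"]).tolist(), **p} for p in posts]
    db.create_table("posts", data=rows, mode="overwrite")

def related_posts(topic: str, k: int = 5) -> list[dict]:
    """Nearest-neighbor search over the post embeddings for a given topic."""
    table = db.open_table("posts")
    return table.search(embedder.encode(topic).tolist()).limit(k).to_list()
```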
And then it'll generate a blog post, and I'll show you the prompt in a second. One of the techniques that I've found the most effective when generating blog posts is to ask it to grade the post like an AP English teacher. And this goes back to my history: I remember not really loving to write until I took a class with an army veteran, and he taught me to really love to write, and he was my AP English teacher. So I really like receiving feedback in that way: grade it with a letter grade and then tell me what I could improve, and then I'll iterate with the model until I get to an A minus.
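A minimal sketch of that grade-and-revise loop: draft, grade like an AP English teacher, revise, and stop once the grade reaches A-minus or after three passes. The llm(prompt) helper is hypothetical, standing in for whichever model call the real pipeline uses.

```python
# Hypothetical iterate-until-A-minus loop around a generic llm(prompt) -> str helper.
GRADE_ORDER = ["F", "D", "C-", "C", "C+", "B-", "B", "B+", "A-", "A", "A+"]

def refine(draft: str, llm, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        feedback = llm(
            "Grade this blog post like an AP English teacher. Start your reply "
            "with a letter grade on its own line, then explain what to improve.\n\n"
            + draft
        )
        grade = feedback.splitlines()[0].strip()
        if grade in GRADE_ORDER and GRADE_ORDER.index(grade) >= GRADE_ORDER.index("A-"):
            break  # good enough: A-minus or better
        draft = llm(
            "Revise the post to address this feedback while keeping its voice:\n"
            + feedback + "\n\nPost:\n" + draft
        )
    return draft
```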
>> Got it. And so just before we go into the actual writing, and I'd love to see a little bit of this AP English prompt, are these two pieces connected? Your podcast summaries, do those go into this vector DB that can then be searched through for relevant other podcasts if you're writing on a topic? Like how does this all come together?
>> Yeah. So, right now it's just the blog posts that I've written in the past, the 2,000 blog posts or so, that go in. And the major reason I add those as context is I'm trying to capture my style. And I have to tell you, that's really hard. Like, I have fine-tuned OpenAI models, I have fine-tuned Gemma models, and getting the voice, and you'll see it in the output, it sounds like a computer when it writes, even with that additional context. And the other thing that I have not been able to figure out is, I think it's really important in one blog post to link to other blog posts that I've written, just because the knowledge builds on itself, and obviously outside as well. But I haven't been able to figure out how to get it to link effectively.
>> Well, I think this is a common feeling with AI-generated writing. No one is satisfied with the style, even when the style is exceptional. I think I've seen examples, especially from some of the newer commercial models, of actually really lovely prose and really lovely language. It's just that what your style is and how you would write something, the rhythm in which you would write it, how you would punctuate and break lines, all that kind of stuff is so personal that I have, like you, had a very, very hard time getting it to write like me. And I think even harder, which is why I appreciate that you're not yet posting this: it can't tweet like me.
>> No, the short ones are the hardest, you know. I guess they say that about writing generally. Have you felt like any of the models have done better or worse at writing like you, or is it just like they only get 70 to 80% there and I just accept the fact that I'm going to have to rewrite things?
>> Well, they have different voices. I don't think any of them are close. Like, I think Gemini is more clinical, is the way that I'd put it.
>> I agree.
>> Claude is more warm and verbose, you know, very garrulous, like it just wants to keep talking and wants really long sentences and really long paragraphs. And OpenAI, I think the models each have slightly different personalities. So I don't think there's a single characterization.
So I've been iterating. I used to use Claude 3.5 a ton, and I uploaded all of my blog posts into a project and then I'd have it iterate there. Now I can kind of do it with Claude Code or using this prompt, so that's a little less useful. But what I found is you really need to add your own voice, and then you need to tell the AI to keep the things that are wrong, right? It's kind of a funny thing to say, but as you were saying, Claire, before, the way that you punctuate: I really like ampersands, right? And I like adding spaces before colons. And I like starting certain sentences with, or having, little incomplete clauses, because I think they keep the reader moving. But an AI won't do that. An AI will only deliver you a grammatically perfect specimen.
>> Yeah. We're gonna have one very nerdy English language moment, which is, I like to start paragraphs with a conjunction. I love an "and" or a "but". Oh, it pulls you in. So, okay, you and I are going to work together; we'll build like a micro-SaaS on good writing models and prompts that people can use. So, okay. So, we accept that it's not going to write exactly like you, but you've created this grading process to say, well, is it at least good? And so I'm curious, can you walk us through how it gets to an A minus?
>> Yeah.
>> But as an A+ student, I don't know, a 91 would really stress me.
>> Tell me how you kind of wrote the prompt and then why you picked an A minus as your bar.
>> Yeah, for sure. Okay, so the way I wrote the prompt, I told it what I wanted, and I asked an AI to critique it; I think I asked Gemini to critique Claude's output. So it's kind of using a student-teacher, or critique, model. And then what it does is, we'll walk through the prompt in a second, but it goes through three grading attempts. So it reads a file, gives it a grade and a score, and the things that I found are the most important, particularly for readers, are the hook, which is the first few sentences, or the lead you might call it, and then the last is the conclusion, making sure it ties back, because then you have a complete post. And so it goes through this three times, right? And you can actually see, like, here it gave itself a 90 and then a 91, and at that point it basically was good enough; it was satisfied with the hook. So if we read the blog post generator, you can see what it does at a high level, right? It finds the blog posts, it generates an initial blog post, grades it like an AP English teacher, improves it, and then autogenerates a URL-friendly slug so it actually writes it in the right format. And then it can use OpenAI or... The prompt is here: you are an expert blog writer specializing in technology and business content. And
then here I add in the blog posts and it kind of shows the patterns. What it also does is dynamically calculate the number of paragraphs from relevant posts, and it uses Llama to summarize the stylistic patterns of those related posts. So, I might write a little bit differently when I'm targeting a web3 or a crypto audience than, say, when I'm analyzing the public disclosures of a company; Snowflake just announced earnings, let's say. And so it's dynamically injecting that here. It shows a bunch of different examples. And then, you know, here's what I think makes my blog posts tick, right? 500 words or less; I have like 49 seconds with a reader. No section headers: I ran an analysis of dwell time as a function of how many headers there were, and it turns out headers were terrible for dwell time. People just bailed. Flowing paragraphs, each paragraph transitions smoothly to the next. Actually, the AI consistently critiques my transitions and says they're too harsh, and going back to the A minus point that you made before, I think I lose five or six points because of my transitions, because they're abrupt. And then, you know, limit each paragraph to at most two long sentences, and then the structure of the blog post.
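A sketch of how that generation prompt might be assembled: related posts are summarized for their stylistic patterns (for example via a local Llama call), then injected alongside the fixed rules mentioned here (500 words, no headers, at most two long sentences per paragraph). The helper names and wording are illustrative.

```python
# Hypothetical prompt assembly with dynamically injected style notes.
def build_prompt(topic: str, related: list[dict], summarize_style) -> str:
    # summarize_style(texts) -> str is a hypothetical helper that asks a local
    # model to describe the shared stylistic patterns of the related posts.
    style_notes = summarize_style([p["text"] for p in related])
    examples = "\n\n---\n\n".join(p["text"] for p in related[:3])
    return f"""You are an expert blog writer specializing in technology and business content.

Stylistic patterns of related posts:
{style_notes}

Example posts:
{examples}

Rules:
- 500 words or less
- No section headers; flowing paragraphs that transition smoothly to the next
- At most two long sentences per paragraph

Write a blog post about: {topic}"""
```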
>> I think this is really interesting, towards the top, and I want to make sure people don't miss it. I've seen this before, which is like, take this example and describe it back to me and use it. And so you're saying, I'm writing on this topic, go find the blog posts like this topic, analyze them for format, like what is the structure, how am I writing things, and stylistically match this subset of my blog posts, because I do vary style by topic.
>> Exactly right. Exactly right.
>> Okay. And then, I was not expecting this two-sentences-per-paragraph thing. I like it.
>> Yeah.
>> I have one more question for you, as somebody who did take AP English, so this is perfect for you. Do they actually publish the AP English grading standards for the tests? Like, did you integrate any of that, or is it just sufficient to say AP English teacher? I'm just curious how deep you went.
>> Yeah, I just said AP English teacher. I figure there are enough people leaking either the scoring rubrics or essays that scored fives or whatever it was.
>> Got it.
>> That there's good underlying data.
>> Okay. So, this is for writing it. And
then what about for grading it? Do you
have that prompt?
>> Here's the grading prompt. So you're an
experienced English teacher. Here's a
letter grade, numerical score, and then
here are the evaluations, the hook,
which you know, argument clarity,
evidence and examples, paragraph
structure, conclusion strength, overall
engagement.
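A short sketch of that grading prompt, using the rubric dimensions Tomasz lists; the exact wording is an approximation, not the real prompt.

```python
# Hypothetical grading prompt built from the rubric described above.
GRADING_PROMPT = """You are an experienced AP English teacher. Grade this blog post.
Give a letter grade and a numerical score out of 100, then evaluate:
- The hook (the opening sentences)
- Argument clarity
- Evidence and examples
- Paragraph structure
- Conclusion strength (does it tie back to the hook?)
- Overall engagement

Post:
{post}"""
```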
>> Got it. And have you ever gotten B's and
C's on
>> Yeah, for sure.
>> You're consistently getting like 91%? I always wonder about this, because I do think these models are positively inclined towards telling you you've done good work. I've found that consistently. I've always had to say, be more harsh, be more critical, call out where I'm doing things wrong. So I'm curious, do you actually get high variability in these gradings, or, you know, what has been your experience?
>> Yeah, absolutely. So this is one pathway; I mean, the podcast-to-blog-post data pipeline is one pathway for generating blog posts. Another one is just an idea comes to me. And so then what I'll do is I'll just literally dictate it, put it in, and pass it into the blog post generator, and then have it grade it. And there I've seen C minuses. Right.
>> Got it.
>> Um yeah.
>> So it's easier when it's grading itself
and a little harder when it's grading
you.
>> This is super interesting. And then you do it in three loops. Do you also get high variability between the loops? Do you find that that three-time process is actually additive to the evaluation?
>> I do. I think I often see the first one at like a 91, and then the second one will dip into the B/B+ range, and then it'll pop back up.
>> Yep.
>> So it's a little bit explore-exploit. Again, most of the time for me it's around those transitions, and most of the time the verbosity of those transitions that the AI injects is just catastrophic; I mean, it doubles the length of the blog post. And then the third iteration tends to kind of reinforce the brevity.
>> Got it. And my kids are too small for AP English to be something that I have to worry about yet. But meta question: you know, everybody's so worried about students using AI to write. This seems like such a more fair way to evaluate writing. I'm curious, do you think we're going to see more and more of this type of evaluation in academic settings? And do you think teachers could benefit from, you know, checking their own work when they're grading these things that are a little harder to put quantitative or qualitative feedback against?
>> Yeah, I think it's a great first-pass filter. Like, 80% of the work: what's going on grammatically? Are you using sentences and conjunctions and dangling modifiers and all that stuff? I think that the rote analysis of the logic of that language should be handled by an AI,
>> right?
>> And then I think there's this other part, which is the stylist. I mean, I was reading E. E. Cummings poems last week, and you look at the creativity of some of those poems. And, you know, I think it only comes after you have mastery of the language, but you'd want teachers to be free to champion that, or encourage it. I think it's really just as important.
>> Yeah. So, for the students listening, you know, I still think it's good to learn to write: to read a lot, and to write yourself. And if you're looking for a place to practically apply AI to your writing work, maybe it's as a first-pass grade. Say, if you were my teacher, how would you grade this, and what feedback would you give me? As opposed to, if you were me, how would you write this? Maybe that's the right way to get students starting to use AI in a practical way that still allows you to develop these hard skills that I think are going to continue to be super relevant.
>> Could not agree with you more. I mean, oftentimes, I don't know about you, but I'll run into writer's block, or I'll have an idea that I really want to convey, but it's just a soup in my mind. And there an AI will help you iterate and refine. And often it'll give you the germ of an idea, and then you'll take it and add your specific lens to it. But yeah, I think it's a wonderful learning tool, because you have the feedback so quickly.
>> Yep. Exactly. Okay. So, you have shown us, just zooming back out, 30-something podcasts you process on a daily basis. You create summaries, you extract themes, you extract tweets, you extract topics. Those topics then go into another Python script that writes a blog post based on some other relevant blog posts on your own blog, writes the blog post on demand, gets an AP English teacher to grade it three times, and then you take the final pen. And then does AI post it? Like, do you have an agent going and sending it for you?
>> That I don't. That would be awesome. But no, that's still done the artisanal way. Point and click.
>> You are still copying and pasting with your human fingers.
>> Yeah.
>> Okay. This is a great, super practical process. I'm even thinking about ways I can do this to identify future podcast guests, or topics that people might want to see. So you've given me some inspiration. I'm going to ask you two wrap-up questions and then get you out of here, back into your terminal. First question: I was reading your 2025 predictions, and you said this is going to be the year we see a 30-person, hundred-million-dollar company. I'm curious, when you imagine that company in your mind's eye, what is it? Who's in it? What are they doing? How are they operating? What do you imagine that company looks like?
>> Yeah, I think there's probably a CEO who's a product person. There's an engineering team of 12 to 15, and then there's probably a couple of customer support people, and maybe there's a salesperson who's closing some of those bigger contracts, and then a solutions architect, as a function of the kind of company. But it will be predominantly software engineering, and then I think the go-to-market motion is PLG, bottoms-up, just massive adoption.
>> And do you think those software engineers are largely still focused on product building, or do you imagine that those software engineers are also enabling the company with tooling and automations and figuring out how one salesperson can do the work of 20? I'm just curious how you think that's going to shake out.
>> Oh, absolutely. I think that's right. I mean, we were kind of talking about this, but the ability of a person to come up with a demo and then use AI to critique the demo and test it is now so fast, and the ability to take that code and basically move it into production really quickly is also incredibly fast. So I do think there will be a pretty significant internal platforms and enablement function, and whether that's kind of 20% time for a bunch of engineers or a dedicated team of two or three people, there's a huge amount of leverage there.
>> Yeah, I completely agree. Okay. And then last question: when your AI is grading you unfairly, or writing terribly, or making very long transitions that do not sound like you, what is your prompting technique to get AI to listen?
>> I have two AIs duke it out. And so I have like a little example of, this is the input, this is the output that you gave me, this is the output that I want, and then I have Gemini and Claude duke it out and finally kind of decide, and I'll use a little script to do that where they'll finally polish a script. It doesn't work all of the time, but I do think switching models helps a ton. It creates a level of generalizability that I haven't been able to replicate as a human.
>> I agree, and I will give you a How I AI tip from a previous guest, Hillary, who negs the models to each other. So they're like, Gemini, look at this garbage.
>> No way.
>> And then they're like, Claude, look at this trash OpenAI gave me, surely you can do better than this. That's what she calls it: mean girls. She's like, I mean-girl the models and get them to compete with each other. And maybe you can create a Python-based terminal script to do that and then share it with our audience, open source that thing.
>> Great idea for a weekend project this Saturday.
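A rough sketch of what that "two AIs duke it out" script might look like: show each model the input, the output you got, and the output you wanted, then let the second model critique the first model's attempt. Model names, prompt wording, and the single critique round are assumptions, not the actual script.

```python
# Hypothetical two-model cross-critique: Gemini drafts, Claude critiques and revises.
import os
import anthropic
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

def duke_it_out(example_input: str, bad_output: str, desired_output: str) -> str:
    brief = (
        f"Input:\n{example_input}\n\nOutput I got:\n{bad_output}\n\n"
        f"Output I wanted:\n{desired_output}\n\n"
        "Rewrite the output so it matches what I wanted."
    )
    gemini_draft = genai.GenerativeModel("gemini-1.5-pro").generate_content(brief).text
    review = claude.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"{brief}\n\nHere is another model's attempt:\n{gemini_draft}\n\n"
                       "Critique it and produce an improved version.",
        }],
    )
    return review.content[0].text
```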
>> Well, this is so helpful. Uh where can
we find you? How can we be helpful to
you?
>> Oh, I'm at tomtunguz.com, and if you're starting a company within the AI ecosystem, I'd love to hear from you.
>> Great. Well, thank you so much for being
here.
>> Thanks for having me, Claire.
>> Thanks so much for watching. If you
enjoyed this show, please like and
subscribe here on YouTube, or even
better, leave us a comment with your
thoughts. You can also find this podcast
on Apple Podcasts, Spotify, or your
favorite podcast app. Please consider
leaving us a rating and review, which
will help others find the show. You can
see all our episodes and learn more
about the show at howiaipod.com.
See you next time.