
Ethan Mollick: What Leaders Need to Know About AI

By FranklinCovey

Summary

Key Takeaways

  • **Play Unlocks AI Mastery**: Play is one of the best ways to get to know AI; using it for fun alongside work reveals its capabilities. A playful attitude, like asking for an otter on a plane or bizarre poetry, helps you see the serious side. [03:27], [04:14]
  • **Prompting Tricks Obsolete**: Rigorous testing shows old prompting techniques like chain of thought, bribing, or threatening the AI no longer matter for accuracy. Models now better understand human intent, making AI easier to use. [05:05], [06:36]
  • **Leadership-Lab-Crowd Framework**: Successful AI integration requires leaders who articulate a vision and use AI themselves, crowd-wide access with incentives (45% of Americans already use AI at work, often without telling anyone), and a lab for R&D experimentation. Treat it as management innovation, as in the 1900s. [15:22], [16:15]
  • **AI Boosts Low Performers Most**: In a BCG study, consultants with GPT-4 saw 30% quality and 25% speed gains, with the biggest boost for low performers; in a P&G study, individuals with AI matched teams. Validated repeatedly. [09:46], [10:19]
  • **Jagged AI Frontier**: AI has a jagged frontier: it excels unexpectedly, like a gold medal at the Math Olympiad or outperforming doctors in diagnosis, and fails unexpectedly elsewhere. Expect jagged AGI, not smooth superiority. [13:27], [13:44]
  • **Bundled Jobs Safer**: Seek bundled jobs that combine many tasks, like doctor or podcaster; they're harder for AI to replace fully than narrow ones like press-release writer. AI handling subtasks lets you focus on what you love. [30:43], [31:01]

Topics Covered

  • Play Unlocks AI's Serious Potential
  • Prompting Tricks Now Obsolete
  • AI Advances Far From Plateau
  • Leadership-Lab-Crowd Framework Wins
  • Bundled Jobs Resist AI Disruption

Full Transcript

Hello and welcome to FranklinCovey On Leadership. I'm your host Will Houghteling, FranklinCovey's head of product. In these conversations, we'll speak with many of the world's greatest leaders on the mistakes, methods, and mindsets that helped them become the people they are today. We hope that hearing their hard-earned insights accelerates you on your journey. Today's guest is Ethan Mollick, professor at the Wharton School at the University of Pennsylvania, where he is the co-director of the Generative AI Labs, and his research focuses on AI's impact on work, entrepreneurship, and education.

Ethan has become widely regarded as one of the world's experts on the influence of AI on the world and work. He has hundreds of thousands of followers across X and LinkedIn. He writes a popular Substack, One Useful Thing, and his recent book Co-Intelligence is a New York Times bestseller that was named by both the Financial Times and the Economist as one of the best business books of the year. Ethan was recently named by Time magazine one of the most influential voices in AI, and he received the MBA Professor of the Year award from Poets & Quants. If you are interested in how AI should influence your world of work, today is your podcast. Professor Ethan Mollick, welcome to FranklinCovey On Leadership.

>> Really excited to be here.

>> Wonderful. So, I have been following you on Twitter, I guess now X, for many years, and was following before you became the AI guru. You were working at Wharton as a professor. You were teaching about innovation, entrepreneurship, how games can teach leaders. I followed you because you did an amazing job summarizing journal abstracts on Twitter, and it gave me a window into the academic world. I'm curious to hear from you to start: what was your motivation for diving so deeply into AI? Or, pun semi-intended, what prompted you into this world?

>> So I've actually been an AI person for a while. When I was at MIT in grad school, I worked in the Media Lab in their AI group with Marvin Minsky, who's one of the founders of the field. But I was always the nontechnical person, so I've always been the explainer for AI, making connections about what it can do. I was involved in using it in games and such, and kept an eye on what was happening. So essentially, when the LLM revolution hit, I had already been playing with and talking about these systems for a long time for their educational potential, and I think I was one of the first people in this world to see what was going on with first GPT-3 and then ChatGPT. It put me in a position where there was a lot of interest in what I was doing, which led to a positive feedback loop where now I'm in contact with lots of people about this, have written a book on it, and have just gotten very deep in this topic as a node, I think.

>> One thing that I find refreshing and distinct about your AI commentary online is that it's fun. A lot of AI commentary is very dense, and you do go into the deep technical issues, but take your canonical otter examples: every time a new model comes out, you have a traditional prompt that you ask it. So maybe talk to me about these zany examples you give and how you keep it light-hearted and fun. Is that intentional, or is it just your personality, and you do the thing you enjoy?

>> So I think, just as a general lesson on AI, play is one of the best ways to get to know this. There are two avenues: using it for everything you do for work, and then using it for fun. I've always been a fan of play, right? Games are something I build. Also, a lot of people in this world are computer scientists, and they care about code, but they've built a machine built on human words, and what this machine is trained on is the entire corpus of human language. So making it do interesting things kind of requires approaching it from a playful, almost humanities standpoint, or a managerial standpoint, of: what can this thing do? So I have it write bizarre poetry. I give it impossible challenges. I ask it to create a version of Missile Command where relativity actually takes place and works. I ask every image generator to create an otter on a plane using Wi-Fi, because at the time I started doing this, I was on a plane with my daughter, who loves otters, and I thought it would be a fun joke to send her. If you have a playful attitude towards these things, it can often help you see the more serious side as well.

>> Outside of your experimentation, in the way that you use AI for work, which might come with distinct challenges other people don't face: how do you use AI? Which models are you using most and least commonly? And what kind of prompting guidelines do you follow?

>> So the big revolution that's been happening in the AI space has been taking all of the stuff that used to be very hard about AI and making it easier. One thing we're doing, and I run the Generative AI Lab at Wharton along with my wife, is rigorous testing of prompting techniques with a bunch of other great researchers. It turns out all the stuff that used to matter doesn't matter anymore, right? It turns out bribing the AI or threatening it, none of that matters; being nice to it, not being nice to it. More seriously, things like chain-of-thought prompting, a very common technique that everyone still teaches for AI, don't have any effect on accuracy in the studies that we're doing. So this stuff is getting easier to use. Similarly, right around the time we're recording this, GPT-5 was released a couple weeks ago, and it is two things. It's a very good model, a set of models, but it's also a router that automatically decides which AI model you should use in any circumstance. It's not always good at that, but for a hard problem it uses a more complicated AI, and for a simple problem it uses a smaller AI. And so you can start to see this move towards making things easier. But my general advice is pick one of the big three, Google's Gemini, Anthropic's Claude, or OpenAI's GPT models, and then generally use the most advanced model you can. At the time we're talking, that's Gemini 2.5, Claude Opus 4.1, and GPT-5 Thinking, but that will always change; generally use the most advanced model you can.
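To make the router idea concrete, here is a minimal sketch of what difficulty-based model routing could look like. This is purely illustrative, not OpenAI's actual routing logic; the heuristic and the model names are assumptions made up for the example.

```python
# Illustrative sketch of difficulty-based routing, in the spirit of what
# Mollick describes GPT-5 doing: hard prompts go to a slower reasoning
# model, easy prompts to a fast, cheap one. Heuristic and names are made up.

def estimate_difficulty(prompt: str) -> float:
    """Crude difficulty proxy: long prompts and reasoning-heavy words score higher."""
    signals = ["prove", "debug", "optimize", "analyze", "step by step"]
    score = min(len(prompt) / 2000, 1.0)
    score += 0.2 * sum(1 for word in signals if word in prompt.lower())
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Pick a model tier based on estimated difficulty."""
    if estimate_difficulty(prompt) > 0.5:
        return "large-reasoning-model"  # hypothetical model name
    return "small-fast-model"          # hypothetical model name

print(route("What is the capital of France?"))                 # small-fast-model
print(route("Prove this bound and debug the solver. " * 60))   # large-reasoning-model
```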

>> It's refreshing that prompting doesn't really matter very much anymore. I felt like that was one of the hurdles for a lot of people using AI for a long time: feeling like, oh, I'm not using it the right way, I don't know how to prompt it. You read these articles online about prompt engineers. So I think it's a major innovation by the frontier labs to get a standard use case working, making the entry point a lot easier for a lot of people.

>> I think, and you'll see this as a theme, it's less the labs consciously doing this and more that as models get larger and better trained, they just start to get human intent better. So it becomes less important to put them in a headspace, right, than it used to be. When I wrote my book, it was all about giving personas to AI: you're a helpful adviser. That still can be useful. But the AI now has enough theory of mind that you can say, here's the problem I'm trying to solve, and it anticipates the problem without having to pretend to be somebody.

>> So maybe it's the frontier labs solving it through model advancements, and not necessarily solving it intentionally from a product perspective.

>> Which I think is a big point overall, right? These are still coding people creating coding applications, and they don't know your use case or your business. That's where there's a lot of opportunity to explore and create: figuring out what it does for you and your use case.

>> Yeah. You referenced the launch of GPT-5. In the couple of weeks since GPT-5 launched, there have been critiques saying we have reached the plateau of the S-curve, that we're now going to see more incremental improvements in model performance, and that the next phase is going to be about AI application and not AI foundational innovation. What do you think about the advancement in the foundational models themselves? Have we reached the plateau of that S-curve?

>> I don't see any reason to suspect that we have. I don't understand the online chatter that veers between the apocalyptic view that AI will never advance and the apocalyptic view that AI is going to murder us all. If you just look at the advancement curves, they're exactly on a fairly fast doubling path on anything you look at: a seven-month doubling path, a 14-month doubling path, a three-month one. I just don't see, either talking to the labs or in the results or in the experiences, that a plateau is what's happening right now. There are actually two things the labs are trying to maximize. One is lowering cost, which is the same thing as lowering environmental impact and everything else; people care about speed, so increasing speed. And the other is increasing ability. And so you're going to see advances in both of those. I just don't see any sign this has run out of steam. And it's a weird argument to make when we've just seen models get the gold medal at the International Math Olympiad. So it seems like a strange story of the moment. It's not that we couldn't plateau; I just don't see any indication that's happening.

>> Yeah. One of the things I've enjoyed so much about your commentary and thought leadership on this topic is that you break out of the Twitter hype cycle of the moment. There's such an echo chamber following whatever the topic du jour is on AI, as you said, the AI-apocalypse take or the belief that the models have slowed down. I think you take a really empirical view, and you still summarize a lot of very interesting journal articles. So I'm curious if you could share with our audience, or people who maybe aren't following you regularly: what is some academic research on AI applications that has really stuck with you?

>> So, being an academic, I'll start with two papers that I've written with a team of fellow researchers at MIT, Harvard, and the University of Warwick. One kind of kicked off some AI investment in the early days. We did a study at Boston Consulting Group with consultants, about 8% of the global workforce; we gave some GPT-4, which is now obsolete, and some not, and we saw quality improvements of 30% and speed improvements of 25%, with the low performers getting the biggest boost. That performance gain has now been validated over and over again. More recently, we've done some work on teams using AI at Procter & Gamble. They gave us 776 of their employees, and we had them work either individually or in cross-functional teams of two. Some got AI and some did not. Individuals using AI, again the obsolete GPT-4, performed as well as teams, right? So really big performance gains. Also, to talk about research I've done before we zoom out: innovation. It turns out AI is really good at generating ideas and probably should be part of your innovation ideation process. But there's also weirdness. I have another paper with Robert Cialdini, whose principles of influence I'm sure a lot of the audience here is familiar with. In a paper with him, Angela Duckworth, and a few other researchers along with me, Lennart in particular did some interesting work, and we found the AI is persuadable by the same persuasion techniques you can use on humans: foot-in-the-door, all the techniques you learn. So I think there's a lot of really interesting work out there, and that's just my piece. Just a couple of days ago, research came out showing that a voice-powered LLM in the Philippines was preferred as a recruiter: people selected it more than talking to a human, it did a better job placing people and giving out offers, and it had higher successful match rates. There's also really interesting work in sub-Saharan Africa showing that AI can be a really effective tutor when used properly. There's just a lot of exciting work out there right now.

>> I appreciate you summarizing all of those pieces. There was another piece that you cited recently around diversity and creative writing. One thing that people critique AI for is: oh, it's going to create all this AI slop, or it is not as diverse and not as creative as humans. Maybe you could talk briefly about that paper you were summarizing.

>> The creativity question is generally an interesting one. This paper found that if you gave AI the beginning of a story, it would complete it with as much diversity, or more, in writing style, approach, and theme as humans would, especially if you gave it some random words: here's a story, and here's some random words to inspire you. There's this view... I mean, AI does produce slop, right? The median content is not great. That can be changed by asking it more interesting questions, having more dialogue, and giving it more prompting to work with: less magic, more giving it feedback like an editor, which fits a general theme of how to work with AI. Give it feedback, and that's how you get better results.

>> Yeah. In all these examples you've been sharing about the dramatic impact of AI, and in many ways AI outperforming humans, like the AI-as-recruiter example in the Philippines, there's a lot of discussion in the AI community about the path towards AGI, towards superintelligence. How do you define AGI? I feel like there are a lot of different definitions floating around. So how do you define AGI, and do you think we will get there?

>> So this is the flip side of the question you asked earlier about whether we've plateaued. AGI is this nebulous term, artificial general intelligence, which is something like the machine being better than a human at every intellectual task, and it's sometimes defined as better than experts across 80% of tasks. We don't have a good definition. Some people like Tyler Cowen have said we've already achieved AGI with the level of the models we have today; we just haven't integrated it in. I don't think that's a very useful definition, partially because in our first paper we coined the idea of the jagged frontier of AI: it's good at some stuff you wouldn't expect and bad at some stuff you wouldn't expect. I think we're going to see a jagged AGI. We already do. The AI is better than most humans at a bunch of topics, and you have to be a really good expert to be better than it in a lot of areas. It already outperforms doctors in diagnosis in controlled experiments. But integrating that with organizations, thinking about how we work with that system, that's the challenge. Now, if you talk to the AI labs, they're actually skipping AGI and going for superintelligence. They think they can build a machine that's a super genius, smarter than any human. That would probably change things in a much more direct way than a kind of human-equivalent system. But I think we've got 5 to 10 years of change, even if there were a plateau, just from trying to integrate the AI of today into our systems.

>> Yeah. So speaking of integrating AI into our systems, and the challenges of applying AI: NVIDIA CEO Jensen Huang recently said the last 10 years were really about the science of AI, but the next 10 years are going to be about the application science of AI. And an MIT study, from your PhD alma mater, came out recently finding that 95% of corporate AI initiatives are failing today: they're not achieving the goals they set out to, and they're not having the financial impact they expected. So I'm curious how leaders should be thinking about AI application and adoption. Given all the unbelievable use cases and all the wow moments we've all had personally with it, why has it been so hard to realize significant gains at an enterprise, organizational level? Or do you disagree with that study?

>> I mean, I don't think that study is a particularly great example. It's not very clear how it was done; it was some sort of survey work. But I don't disagree that it's a hard road, and you should be thinking of this as an R&D effort, right? We don't know how to integrate AI in, and I worry that people will just say, okay, vendors will figure it out. They have the same problem as everyone else. So my framework for thinking about AI integration in organizations, and I work with lots and lots of large companies on this, and this is not gospel, just my viewpoint, is leadership, lab, and crowd. You need those three things to make your integration successful. You need leaders who are actually articulating a view. I find Jensen Huang's stuff to be very interesting, but it's always this vague view of what AI will do: it'll work beside us as agents. What specifically do you want to change in your organization? What incentives do you need? What process changes do you need to make? Are you leading from the front? I was talking to Nicolai Tangen, who is the CEO of the Norwegian sovereign wealth fund, and the way he got AI transformation is that he personally requires everyone to use AI in his presence: what have you used AI for? He has pushed it through at the organizational level. That's the leadership piece: you have to actually be pushing for use. Crowd means everyone in the system, everyone in your company, is going to be exploring AI uses, or almost everyone using tools. Do they have access to tools, and are they incentivized? Because most people are using AI right now to get performance gains. They're not fools. They're never going to show the company the performance gains, right? 45% of Americans are using AI at work. And the issue ends up being: okay, well, why would I show you, when I look smarter using AI and I know I save time?

>> Totally.

>> Yeah. If I show you performance gains, what happens when there are performance gains? People get fired. I'm not going to get incentivized for this. And I'm working 90% less. So unless you incentivize the crowd, they're not going to do stuff. And then you need a lab. You need to be doing an R&D effort. I think that 95% number is not real at all, but I actually wouldn't mind if 95% of these efforts were failing, because if you're not doing R&D, what are you going to do? This is the time for leadership to experiment and figure out what's going on. If you look throughout the 1900s, the way US companies succeeded was management innovation, experiments in management styles. That has actually not happened as much in the last 20 years, and I think we need to restart that and spend some R&D money on AI.

>> Okay, I love that framework: leadership, crowd, and lab. On the crowd front, are you encouraging general-purpose tools, so give every employee at your company access to ChatGPT Team or Copilot or Gemini and so on? Or are you encouraging vertical-specific tools, so in marketing use Jasper and in engineering use Claude Code? Or both? Where do you suggest companies start?

>> So I think people need general-purpose tools, because they want to be able to build solutions. That doesn't mean you won't find, you know, Harvey useful, or Claude, which everybody who's coding should probably be using...

>> Harvey being a legal assistant, for those unfamiliar.

>> Right, a legal assistant. But you should know enough internally to know where vendors have advantages. Every vendor you talk to, with one or two exceptions, is essentially using the same off-the-shelf LLMs that you have access to. So I think people attribute magic to vendors, but software engineers are no longer the magic piece here; you need to figure out how to give these things instructions and talk to them. I think a lot of people whitewash their own AI usage by saying, I'll buy it from a vendor. Why would that vendor know more about the problem space you're facing than your experts? Why would they be the ones to validate what the use is, and where do you get competitive advantage in that case? So I do think there is room for vendors, especially in narrow areas that aren't your area of expertise. But in your core business operations, oh man, I think you need to be doing experimentation and getting as close to the metal as possible. And that tends to mean picking one of the major model makers and getting internal use going.

>> Have you seen interesting approaches to incentivize trial and the sharing of best practices? Really the care and feeding post-implementation: not just rolling the balls out and letting the kids play, but saying, okay, now that everybody has access to GPT-5, let's really get adoption across the company.

>> So, there are lots of interesting plans out there. I've talked to leaders who have made it so that whenever you do a hire, the team doing the hiring has to spend two hours using AI to try to do the job, and then rewrite the job description. I've seen cases where anytime you ask for budget, you're asked to do it with AI first for an hour and then rewrite your budget description. Moderna did a really interesting thing where they used HR as a sort of choke point: when you went through a review process at the end of the year, there was an elaborate set of AI tools built to help you with your own performance reviews, and people who used it would be more likely to get more money because they wrote better performance reviews, but it took you through a very elaborate process. That was a nice choke point to get people using these systems. I've seen cash prizes at the end of every week, hackathons. There are lots of approaches. You kind of need that lab there, though, because the lab takes the ideas from the crowd and helps turn them into reality, and often the best people in the crowd are the people who should be populating the lab.

>> That was the next question I was going to ask. Talk to me about the ideal lab structure. Do you need to take people out of the engineering team, or take product managers? Who are you putting in this lab, and how do you structure them to have the maximum impact?

>> Yeah, the lab has three functions. First, immediate productization. Taking ideas: a good prompt comes out, and everyone should be using that prompt. You should go test the prompt, build a better one, and then send it out through whatever system you're using so everyone can use it. So, immediate stuff. The second thing you're doing in the lab is benchmarking. Do you have benchmarks on actual problems that you're facing, and on how good the AI is, so you know how it's doing? And the third is building for a future that doesn't exist yet: agentic end-to-end systems, something that doesn't quite work yet, trying to build future stuff. I don't have any numbers on this, but we do have some early evidence suggesting that coders are often actually worse at using AI than non-coders, because it works more like a person than a machine. So I would say you want subject matter experts mixed with some coders and developers who are AI-savvy. But you do need those people. Every organization I talk to knows the people in the crowd who are desperately evangelizing AI to everybody while no one listens to them. Those are the people who are really good inside the lab.

>> What mistakes have you seen companies make in AI application that should be avoided?

>> I think there's a danger in starting with... I mean, responsible AI is an important thing to do, but people launched responsible AI initiatives before AI got to where it is, and they tend to lag. So there are often very elaborate committees set up with a bunch of rules based on how AI worked in 2023. Right now, for example, the information security risk level is at the level of another cloud app. When JPMorgan and Novartis and Moderna are all using AI, there are ways forward to use these systems, and creating unnecessary blockers is a problem. You want to be responsible, but you have to revisit this; you can't just have static policies. That's one angle. Second is leaders not articulating a clear vision themselves and not using AI. I don't like the emails that came out of, say, Stripe or Amazon saying AI is really important to the future of our business. Where was the vision there about how AI will make the business different? What does the organization look like five years from now? What's your vision? Dodging the question of vision at the leadership level is a big problem. And then not giving people access to tools, or giving them inferior tools, can be a real problem as well.

>> First off, thank you for those very concrete, practical examples of where AI adoption and application go wrong. You brought up environmental impact earlier: one goal of the AI frontier labs is making delivery cheaper, which also has a positive impact on the environment. There are a number of common critiques of AI that I wanted to run by you to hear your perspective, because you're obviously, broadly speaking, a real AI optimist. The first is environmental impact. For people who are worried about AI energy usage and whatnot, what do you say?

>> By the way, I like to think of myself as an optimistic pragmatist on this, right? Optimists tend to think AI is going to save the world, and pessimists tend to think it'll kill us all, so you have to calibrate between those poles. On environmental impact, look, I think what has happened is, again, people indexing on 2023 numbers. At this point we have audited reports, and we have self-reports from OpenAI, and basically an individual AI query is negligible. In fact, the average chatbot query uses about as much energy as a Google query did in 2008, and creating an image is about the same as using your laptop for 15 seconds. Water use, which is for cooling and doesn't disappear, is about a fifth of a teaspoon per query. If you're really interested in environmental impact at the individual level, giving up a couple of hamburgers a month would be a more significant energy gain than not using AI. At the aggregate level, though, this obviously matters a lot. People are building data centers. They're restarting nuclear plants. So it's like everything else: at the individual level, I'm not very concerned about AI use, and I think people are indexing on older numbers. But at the aggregate level, we're building out AI to be a major part of society, and that's going to use a lot of energy. Right now it's not a very large portion of American energy use, but data centers are going to be a growing piece. So: individual impact versus aggregate impact. I think people tend to confuse those two a lot, and we need to think about green energy and other policies at the large scale.

>> But the individual aggregates up to the aggregate. If every single person gets obsessive about AI and uses a lot more of it, then in aggregate we're going to be using a lot more AI and have a larger energy impact.

>> I mean, yes, but it's a very similar argument to anything else you might apply it to, right? Again, a hamburger is 150,000 times more water than your weekly ChatGPT use. If we are trying to do aggregate decision-making, which is a reasonable thing to do, it's a weird thing to apply to AI but not to other areas that have a huge impact as well.

>> That makes sense. A second common critique is that AI is going to come and take all of our jobs. What is your reaction to that fear about AI leading to massive job displacement?

>> I don't know. This is a case where economists are a little over-optimistic, because we can only model the past, and every time there's been a wave of disruption, it has created new jobs. That may very well be the case again. And to be clear, jobs are bundles of tasks, and AI is not good at everything. My job, business school professor, is one of the most overlapping with AI in every research project that people have done, and I already see it. I know AI is a better grader than me, but I don't use it for grading. Why not? Because my students would hate it if I did. So even though it could replace that part of my job, I'm not doing it yet. So I think it's going to be very uneven to see how this happens. There will be job changes, and I don't know what the end result is. We've never seen a technology like this aimed at white-collar work in such a general way. Will we need as many coders? Will we need more coders? I have no way of predicting that. I think it could be a little Pollyannaish to say that it always leads to more jobs. I also think "no one will have a job by 2035" seems unlikely, but we should be taking these scenarios seriously. And again, at the policy level, at the aggregate level, we need to start making some choices about how to make AI more human in a useful way for everybody.

>> Yeah, I appreciate the intellectual humility on that. Oftentimes people will come out with very strong proclamations, and you're saying it seems relatively unknowable, so I appreciate your on-the-one-hand, on-the-other-hand answer there. Another critique of AI is its treatment of intellectual property. People have spent years, decades... I mean, you've now written a New York Times bestselling book, congratulations, and FranklinCovey obviously has a corpus of many New York Times bestselling books. We've spent years and millions of dollars, and a lot of that information is now powering the genius, the intelligence, of these AI systems. So what do you say about AI's treatment of intellectual property?

>> I mean, I think the lawyers are deciding what this means, but these models were clearly trained on tons of our information without permission, and the question is what that means. They're not directly reproducing work, so it's not plagiarizing in a conventional sense, but it also seems disingenuous to say, oh, it's just like training a child, it doesn't matter what we do. I think it's an uncomfortable situation to be in. The labs did not just train on open-source material; they trained on the open web, and there are obviously pirated books that, whether by hook or by crook, ended up in there. In some ways it's a legal matter to decide, and you can have your own personal opinions on it. I'm glad my content is in there, because the AI actually is more likely to refer to my stuff. And, you know, when Google digitized books for Google Books, which basically got shut down over copyright concerns, it turns out retrospectively, from some pretty good research, that being digitized in Google Books actually increased book sales. I don't know what the long-term effect is here. I don't think it was a fair decision to just decide to do this. But now it is a legal one, and I understand people have ethical concerns about that. They have every right to; you have reasons to have ethical concerns about how energy is used and about all these other factors. These are personal matters that have to be decided, and people fall different ways on this.

>> What recommendation do you have for content companies over the medium term, let's say the next 2 to 5 years? Is there an opportunity to be a breakthrough IP company in a world in which so much of this IP gets leaked into these AI models and is broadly available?

>> I think the nature of IP is going to change dramatically. The models are better at writing than they were before. They're better at synthesizing information and doing research. Increasingly you'll be using agents to do your work and your searches, and you'll want your system to know your content. I'm very happy the AI knows my stuff, because some of its ideas are now my ideas, and it actually refers to me. So I think that's a good thing. But I think conventional IP is going to be under stress. For a lot of what people think of as IP, it's not really the patent holding someone back; it's the ability to build something that is still too complicated, with too many pieces and too many jagged edges, for the AI to do well. But if you're doing pure IP, I think there's going to be a threat to many writers. There already is. If you go look at my book... I just got an email from a university in Chile saying, "We bought your book, and we also bought the workbook that went with it." And I'm like, "What workbook?" There's an AI-generated workbook that somebody put together, and they're selling it. We're already seeing that stuff happen. And I think high-quality slop is going to be out there soon.

>> Yeah. Normally I end these interviews with a lightning round, but you've been a lightning round throughout; you're an incredibly quick thinker. Still, I have a couple of questions that didn't naturally fit into one of the buckets I was thinking about in organizing today's conversation, so I'm going to fire out a few quick ones for you. First question: what advice do you have for college seniors? You mentioned before we started recording that you're going to be driving soon to drop your daughter off at college. What advice do you have for her as she's preparing to enter the workforce and a totally changing world?

>> So, we don't have answers. My two pieces of advice that I give to everybody: first, do what you love, because you're more likely to do better at that and be in the top percentage, and what you're good at tends to be what you love and what you get paid for. And second, I think bundled jobs are the right way to go: jobs that bundle many tasks. Being a doctor involves an impossible set of tasks. You have to be a good diagnostician, have good hand skills, have good empathy, and be able to organize things and run paperwork. That's an impossible job. If AI takes some of that and you can concentrate on the stuff you like, that feels like a good thing, right? So I think narrow jobs where you do one thing, where you produce a press release, are under more threat from AI than teacher, professor, doctor, podcaster maybe: jobs that are very diverse across a set of tasks. I think there's more value in doing those.

>> Relatedly, how do you view AI changing leadership in the future? What will the job of managers and leaders be? How will that change, and how should they be preparing for it now?

>> I mean, I think it changes leadership right now, and if you're not getting leadership changes out of AI, you're doing it wrong. At the leadership level, we know that AI gives pretty good advice, and leaders are often lonely at the top. They're listening to podcasts like this one, taking courses, buying books that your team writes, that I write, going off to get MBAs. The AI is a good adviser on this. That's not to say it isn't worth learning this stuff and reading the books, obviously, for both of our sakes. But if you're not getting advice from it, asking it to simulate possible outcomes, having it push you on topics, doing research, all the stuff that you would otherwise have to delegate down, you're missing something. And if nothing else, think about this story: I ran into a Harvard quantum physicist who told me all his best ideas come from AI. I asked him, is AI good at quantum physics? He said, "No, no, but it's really good at asking me the right questions." So I think just having someone to talk to, to get you through this and help you think through ideas, is incredibly helpful.

>> Very tactically, on that example: how do you get the AI to be a good thinking partner? What kind of workflows do you recommend?

>> It differs for everybody. I know some people who absolutely love voice mode with custom instructions. You explain to it who you are, or you keep memory on, and then you just talk to it on your way to work: I'm working through this problem, help me out. The more formal way is to set up a project in ChatGPT, or in Anthropic's Claude, or any of the other systems that do this, with all of your decision-making papers, and have an ongoing conversation with it. But it's really about interaction. Give it context. Explain what the issue is and ask: what questions should I be asking? What am I missing? Give me 20 ways that somebody might do this. I recommend abundance in everything. Just ask the AI for tons of stuff: give me 50 different possibilities I should consider. Let it spark ideas for you.
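For readers who want to reproduce this workflow in code rather than in the ChatGPT interface, here is a minimal sketch using the OpenAI Python SDK. The model name and the context string are placeholder assumptions; the point is the pattern described above: load your context once, then ask for abundance.

```python
# Minimal sketch of the "thinking partner" pattern: give the model your
# context up front, then ask the abundant, open-ended questions Mollick
# suggests. Assumes the OpenAI Python SDK and an OPENAI_API_KEY env var;
# the model name and context below are placeholders.

from openai import OpenAI

client = OpenAI()

# Stand-in for the "project" context: your decision memos, constraints, etc.
context = """We are a 200-person training company deciding whether to build
AI tutoring features in-house or buy them from a vendor. Budget: $2M."""

questions = [
    "What questions should I be asking that I'm not?",
    "What am I missing?",
    "Give me 20 different ways somebody might approach this decision.",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use the most advanced model available
        messages=[
            {"role": "system",
             "content": "You are a thinking partner. Push back, ask hard "
                        "questions, and favor abundance over single answers.\n\n"
                        "Context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {question}\n{response.choices[0].message.content}\n")
```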

>> I love that concept of creating the project and just giving it context, context, context. One of the encouragements you've always given is to play, iterate, test, trial, and I think that's such an important mindset for everybody to adopt. The second-to-last question for you: how should people stay up to speed on the advancements in AI? It feels like everything is moving so quickly. You're an incredible follow, you have an amazing Substack, One Useful Thing, and you wrote a bestselling book on the topic. Who are the other Ethan Mollicks out there? Are there courses people should be taking? How do we all stay up to speed?

>> So it's a little bit of a hard question, because there's a thin line between staying up to speed and joining the churning hype cycle, which probably is not interesting to most people. I have to be part of this whole thing of what's breaking, and 95% of it is people just saying AI is dead, AI is amazing. That's not that useful. So, you know, I've got a Substack. There are lots of interesting people posting stuff; Allie Miller on LinkedIn is a smart person doing things. And I think social media, with a few follows, is a kind of useful way to go. I try to be helpful with my Substack, and there are other people who do this stuff. But staying up to date is really about playing with these systems as they come out. The most useful thing you can do is actually use these things. I really do think the difference between being good at AI and bad at AI is not taking hundreds of courses; it's using these systems for everything you do and getting a sense of what they do. I find people have a psychological barrier: either they're worried about what AI can do, or they find it unnerving, or they don't know how to start. You just start by starting. You begin by beginning. You just start using this to do things, and that's the way to stay up to date. It's much more useful than reading hundreds of articles.

>> That's great. I totally agree. So, final question. FranklinCovey is a leadership company, so we're going to end on leaders and people and not on AI. Who is a leader that has greatly influenced you in your career and life, and what lessons can we all learn from them?

>> Oh, that is a really interesting question, and one that I should have been better prepared for than I was. I've been lucky enough to be mentored by lots of people in different areas. I've also been an entrepreneur and realized how bad I am, or was, as a leader; hopefully I've gotten better at it. But I've met a lot of people I really admire along the way. In the academic world, I've been lucky to have a leader who's a little different than others: my mentor Ezra Zuckerman, a professor and dean at MIT. What I found really interesting about him was not only leading by example, but the idea of mentoring around principles of truth as opposed to just getting things done. We can do that in academia in ways we can't in other places: focusing on, is this meaningful? Does every word count? Are you doing something that is going to advance the state of knowledge and the state of humanity? I meet lots of corporate leaders I really admire all the time, but there's just something very different about that kind of intellectual leadership: thinking about whether the work you're producing is meaningful and good, and saying, I want to help you produce better work. I find that different from what you find in other environments.

>> Well, I think he has been successful: you have created very impactful work. Dr. Mollick, thank you so much for joining today. This was a fascinating conversation, and I learned a lot throughout. I also want to thank you for being such a public intellectual on this topic, and for doing it in a way that is both interesting and engaging. I have learned a lot following you over the years, and I'm sure many of our listeners have as well. If they haven't been following you, they will be now. So thank you so much. We appreciate your time.

>> Thank you.

>> And to all of our listeners, I hope you enjoyed today's episode as much as I did. Please join us next week for another episode of FranklinCovey On Leadership.

[Music] [Applause] [Music]
