AI and the Politics of Extraction | Centre for Climate Justice
By Centre for Climate Justice
Summary
## Key takeaways

- **AI redefines tasks for machines**: Every time we celebrate computers doing tasks like writing poems or identifying faces, we have transformed the task into something a computer can do, translating it into machine languages and formalisms. We don't pay enough attention to these transformations while celebrating technology's accomplishments. [08:12], [08:38]
- **Data are a sticky human rights threat**: Digital data about people are sticky like gum on your shoe, enduring and moving through information ecosystems without our knowledge, often generated unbeknownst to us. AI supercharges this datafication of our thoughts and movements. [11:04], [11:36]
- **Data centers match Japan's power use**: By 2026, data centers worldwide will use roughly as much electricity as Japan or Russia. ChatGPT consumes about half a liter of fresh water for every 100 words of output. [19:15], [19:25]
- **AI is a fossil fuel lifeline**: AI represents a massive subsidy and life raft for the fossil fuel industry, with tech CEOs at Trump's inauguration getting better seats than members of his family. Tech's net zero claims were incompatible with pursuing the AI dream, and that incompatibility has now been unmasked. [14:24], [16:38]
- **AI was born from factory control of the mind**: AI is the culmination of a managerial project dating to 19th-century factories that applied the division of labor to intellectual work, reimagining thought as computation available for economic optimization. Human "computers" doing additions made this possible. [22:46], [23:22]
- **Bipartisan backlash is blocking data centers**: Communities in Tucson, Memphis, and Louisiana are organizing against data centers with slogans like "your cloud is not more important than our river," uniting workers, environmentalists, and migrant rights activists. These factories without workers promise few jobs while driving up electricity costs and flipping elections. [36:31], [37:50]
Topics Covered
- AI Redefines Tasks for Machines
- Sticky Data as a Human Rights Threat
- Tech-Fossil Alliance Unmasks Authoritarianism
- AI Data Centers Spark Bipartisan Resistance
Full Transcript
Good afternoon. My name is Carol Liao. I'm an associate professor of law here at UBC and co-director of the Centre for Climate Justice alongside my dear friend and colleague, Professor Naomi Klein. We are so thrilled to have you join us here today for this important discussion. I want to begin by acknowledging that those of us here in person are situated on the traditional, ancestral, and unceded territories of the Musqueam people. I also want to acknowledge the over 500 registered attendees who are joining online, acknowledging the traditional owners and caretakers of those lands where you are. And as a lawyer and also a UBC law professor, I also feel an obligation to mention my awareness of the ongoing lawsuit by a group of UBC professors against the university claiming these acknowledgements are too political. I want to say that silence and silencing is also political, which has largely contributed to this climate emergency we're in. And so I double down here to recognize the significance of the lands we're on, the historical and ongoing injustices that Indigenous people still face, and the importance of our shared commitment to working towards decolonial solidarity and the full realization of Indigenous rights and rights holders here. So I want to acknowledge that.
I also want to thank the Chan Centre for the Performing Arts for gifting us this time and space, specifically Pat Carrabré, Jarrett Martino, David Humphrey, and Lloyd on sound, for their incredible support.
Today we have an important discussion with some fantastic researchers and experts talking about artificial intelligence and what it holds for our future. And I want to take a moment to provide some context for why the Centre for Climate Justice is hosting this event. As with any crisis, climate change is not an equal opportunity offender. Vulnerable groups bear the highest cost while having the least say in the implementation of solutions.

In 2021, the Centre for Climate Justice was formed to better understand and emphasize how social inequities interact with climate change and climate solutions. Our focus is in bringing together interdisciplinary researchers, community members, and movements to share knowledge and disseminate research that guides policy and enacts meaningful change. And a major component of our mandate is to communicate knowledge in a way that increases public literacy on climate justice issues that are vital to an equitable and sustainable future.
We have 60 faculty affiliates and 27 postdoc and graduate student affiliates, and over the past two years we have hosted and co-hosted 67 events. We've engaged over 30 community partners and we've widely disseminated a dozen reports on housing, mining, Indigenous rights, and climate policy, engaging thousands, thanks to Naomi, sometimes millions. And we have held several high-level policy discussions with those holding power and influence in political offices. And I want to give a shout out and thanks to our dream team at the CCJ: Amy Harris, Evani Mativ, and Margaretta Kernotova.

At the CCJ, we believe that all of us, all of you, are decision makers, whether it be in our work, our day-to-day lives, or with our votes. And our hope is that with events like this, we can help each other make informed and salient decisions about climate change and its related topics, like the rise of AI.
Now, artificial intelligence, and especially generative AI, has permeated society, and its implementation has been rapid, largely unregulated, and often without the consent of users. This year alone, the International Energy Agency estimates that $580 billion is being invested in data centers. This is $40 billion more than the total global investment in the oil supply. And proponents are hailing AI as the tech of the future, an unavoidable eventuality that will solve major problems, including climate change. It's important to acknowledge that in some contexts AI can be transformative, but it is equally important to make sure that this technology does not come at the cost of already struggling communities and ecosystems, or upend the values that we have and hold as a society.

In our registration we asked you what your most pressing AI concerns are, and we received over 400 responses. On the graph you can see the top categories. They range from impacts of data centers on the environment, to geopolitical issues of energy and resource extraction, to the use of AI in widening power and wealth inequality, and applications to surveillance and militarism.
These climate justice issues are what we are here to discuss today. Now, let me introduce our incredible panel of experts. First, Dr. Stephanie Dick is an assistant professor of communication joining us from Simon Fraser University. She's a historian of science and technology focusing on how mathematics, computing, and AI have shaped knowledge, labor, and power in the 20th century. Dr. Wendy Wong is a professor of political science and principal research chair at UBC Okanagan. Among her many publications, she's the author of the 2024 Balsillie Prize-winning book We the Data: Human Rights in the Digital Age, published by MIT Press. I have a physical copy here. Dr. Hamish van der Ven is an assistant professor of sustainable business management of natural resources at the UBC Faculty of Forestry. His research focuses on sustainable supply chain governance and the impacts of online activism on business behavior. And finally, UBC professor of climate justice Naomi Klein: columnist at The Guardian, New York Times best-selling and multiple award-winning author of nine books translated into 35 languages, including Doppelganger, The Shock Doctrine, and This Changes Everything, who is working on her 10th book, related to this topic and to end times fascism. Please join me in welcoming these wonderful panelists.
We're going to start with some framing here, and start by talking about how you ended up in this research. And while you do that, help us understand this moment that we're in right now. And I thought we'd start maybe at the end there, Stephanie.
>> Sure. Happy to start. It's such an honor to be here. Thank you, Carol. And I can't wait to be in conversation with these panelists and with all of you. So as Carol mentioned, I come with a background in the history of science and technology, and our business is trying to make sense of how we think we know what we know in different times and places, and the role that certain kinds of technologies and institutions play in the knowledge that does so much work for us in our world. And I started studying the history of artificial intelligence in 2007, before it was a very popular topic, and was actually warned off of it for the reason that it might be too esoteric to have much traction in broader discourse. And of course that turned out not to be the case at all. But probably the biggest through line in all of the research that I've done is that every time we say the computer has done a thing, anything from writing a poem to identifying a face to proving a mathematical theorem or running a simulation in science, every time we celebrate this kind of automation, we have fundamentally transformed the task. We have had to transform it, translate it, reimagine it, reconceptualize it into the kind of thing that a computer can do, right? Into its languages and formalisms and its operations and constraints. And I don't think we pay nearly enough attention to what goes into those transformations while we celebrate the supposed accomplishments of our technology. So I don't see this history as one in which AI has been getting better and better and here we are with generative AI, what do we do? I see it as a history in which we've been redefining more and more of our world and our concerns in the terms of the machine, and that's something we need to pay a lot more attention to. So that's the research that's brought me to this conversation.
>> Fantastic. Thank you. Wendy?
>> So, thank you all for being here. Thank you to the CCJ for putting this on. I am a political scientist, and before I got into issues around AI, I studied civil society and global governance issues. I wrote a couple of books about non-governmental organizations. So I'm very much embedded in a tradition of thinking about advocacy at the global level and how that works. And I sort of fell into AI by accident. I think I started reading some of the work around 2019, so fast forward well past when Stephanie got involved. Right before the pandemic, I started reading about AI and how transformative it was going to be for humanity. It was going to change everything about how we live and how we function. And I thought, gee, you know, given my human rights background, you would think they would talk about human rights as well in this process, but it was largely left out, except that some people would talk about privacy, which by the way is just one of many, many, many human rights we have, right?

So I thought this was pretty inadequate, and I realized it's because people tended to focus on the computing resources, or what they call compute, or the algorithms themselves, the code, and how biased and discriminatory they were. And that is part of the problem, but the data are where we are. The data describe what we think, what we do, what we want, what we value. Those are where the information about all of us is, and that's what makes LLMs, large language models, and other generative AI so powerful. So in this book that Carol introduced, We the Data, I really wanted us to think through the importance of human rights, not as singular rights that are affected, which they all are, but the human rights values that underpin that structure: thinking about dignity, autonomy, and equality in our communities as human beings, and how they're affected by AI. And I think one of the big messages from the book that I want to convey to everyone is that data, digital data about people, are fundamentally different from other types of political and social challenges because they're sticky. They're sticky in a way that is literally like gum on the bottom of your shoe. They endure for a long time. They move around in our information ecosystem without much of our own knowledge, typically. And we often don't even know when the data are being generated. And so I think that's a real issue that requires that we think about this process of datafication that has happened. Datafication means digitizing all of our activities, our everyday thoughts and movements, making that into digital data. That has been an abrupt shift, I would say in the past 20 years or so. Before that we lived largely analog lives. Today we are living digital and analog lives simultaneously. And so that is a very different way to think about human rights, which is not to say human rights aren't important, but we need to re-envision our humanity and our agency in light of that. And I would say that AI just supercharges all of that datafication.
>> Yeah.
>> Um, yeah. Well, thank you, Carol. Thank you all for coming. I'm thrilled to be here with these brilliant people. So yeah, the throughline of the books that Carol mentioned is really following the rise of corporate power and consolidated wealth and looking at the impacts: on labor, primarily, in No Logo; on climate in This Changes Everything; on democracy in The Shock Doctrine; and so on. And so I have some background here. Before I came to UBC I was at Rutgers, and I was doing a lot of work with tech workers in that area. Right around Rutgers was just a sea of big box Amazon warehouses. It was really a hub. And the position I had at Rutgers allowed me to bring a lot of people together, including tech worker organizers, people like Meredith Whittaker, who's now CEO of Signal but at the time was at AI Now. This was a period around 2018, 2019, when a lot of these tech companies wanted to have a certain kind of criticism in-house as it related to AI. They kind of knew what they were doing was dangerous, but it was still early stages, and it was kind of nice to have somebody at the conference saying, you know, this could be dangerous, or what about this, or talking about gender, talking about racial justice. And at the time that we started having a bunch of these convenings at Rutgers, a lot of these people started to get fired. And they started to get fired because AI was becoming more and more of a real economic model, and their critiques were more threatening, and so funding was being pulled from centers like AI Now and so on.

So, ever since I wrote This Changes Everything, I've been very engaged in climate justice. And I think what I'm seeing with AI is all of these worlds collide, right? Because, and we could talk more about this, I think, for somebody like Donald Trump, what AI represents is really a massive subsidy and life raft for the fossil fuel industry, right? I mean, this is a person who convened the CEOs of the major oil and gas companies at Mar-a-Lago during the campaign and said, "If you give me a billion dollars, I will deliver everything you could ask for and more." And he did do that in terms of deregulating, rolling back Biden's climate agenda. But what he couldn't do is deliver a market for them. And so what I've been finding in my research is that, if we think back to inauguration day, Trump is sort of upstaged by these two images, right? One of them is the lineup of the tech CEOs, with better seats than members of his family and former presidents, just sort of watching over his swearing in. And then a little while later, we saw Elon Musk give what looked very much like a Hitler salute. Of course, he then claimed that it wasn't and went after Wikipedia and everyone who said that it was. And that is something that is quite telling about the interests. We can come back to that too, but about what AI represents in terms of the ability to centralize knowledge, right? He then launches Grokipedia, goes to war. I mean, the ability to control truth, I think, is at the heart of this, and this is why this should be of great interest to all of us at the university.

But yeah, I think that we can't understand this moment without understanding the fossil fuel side of it, and understanding what these tech companies had said about who they were as it related to climate: all the net zero targets, all of the "we are not an industrial industry, we are something else, something more ephemeral, something green, something clean," and understanding that actually it was impossible to be that and also pursue the AI dream. And so that image of that lineup of all of that tech power and Donald Trump was, I think, an unmasking of a new political age that we're in right now, where tech and fossil fuels have come together and really understand that this project is not compatible with democracy, human rights, any of the nice things that may have been claimed in the past. So it's really a mask-off moment. And, you know, I think there's been a lot of sort of psychological analysis of what has happened to Silicon Valley. Was it that Elon Musk had a trans daughter? Were they upset that their workers wouldn't come back to the office? And I think that there's things like that at work, but I believe that the most important driver is that they know that the industry that they are all chasing is really not compatible with a democratic society, and so they have to align themselves with an authoritarian agenda, you know. And MBS's visit to the White House is just the latest little bit of evidence.
>> That is some definite framing. That's good. Thank you. We'll certainly unpack that in a minute. Hamish?
>> Thank you, Carol. And wonderful to be here as well, too. So, I guess, similar to Wendy, I probably started thinking a little bit more about the environmental impacts of AI around the pandemic. I have this very clear memory of being at a climate justice march in 2019, and then all of a sudden, a short time later, it felt like everybody was confined to their houses and buying things on Amazon and binging Tiger King and doom scrolling, right? And so at that point I was really thinking about some of the impacts of the digitalization of our lives on the environmental movement as a whole, and what it meant to be moving from the streets to a platform like Twitter as an environmental movement, and whether that was going to be as effective. And so I've kind of followed the environmental impacts of digital technologies from social media through cryptocurrency and now to AI. And I guess the access point for thinking about the environmental impacts of AI is really the material side, which I'm sure most people in this audience know pretty well, and Carol, you mentioned it in your preface as well too. The energy consumption is obviously the one most people talk about, but not to marginalize it, right? We are talking, by 2026, about all the data centers in the world using roughly the same amount of electricity as Japan or Russia. We are talking about fresh water consumption: roughly a half liter of fresh water for every 100 words of output from ChatGPT. Right? So this is a massive drain on our research... or our resources, sorry.

>> And also our research as well too.

>> Yes, yes. So I think where I am now, and where I want to steer the conversation today, picking up maybe on some of the themes that Naomi brought up as well too, is on some of the indirect environmental impacts. We are moving rapidly towards a post-truth society, and I think AI has played a huge role in accelerating that change. A few weeks ago, OpenAI launched Sora, a text-to-video platform, and within hours you had videos of Martin Luther King wrestling Malcolm X. Right? If we are now in a society where people can no longer trust what they see, where the erosion of science is increasing, where we no longer have a common consensus about the climate crisis that we are in, then we are in a very dangerous place that allows us to elect and believe in demagogues like Donald Trump. And I think these kinds of indirect impacts are potentially going to be more profound than all of the life-cycle impacts associated with AI's very long value chain.
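To put the water figure in concrete terms, here is a rough back-of-the-envelope sketch in Python. It assumes the roughly 0.5 liters of fresh water per 100 words of ChatGPT output cited above; the usage scenarios are invented for illustration, not measurements.

```python
# Back-of-the-envelope sketch of the water estimate cited in the panel:
# ~0.5 L of fresh water per 100 words of ChatGPT output.
# The scenarios below are hypothetical usage levels, not measurements.

LITERS_PER_100_WORDS = 0.5  # estimate cited in the discussion

def water_liters(words: int) -> float:
    """Estimated fresh water consumed to generate `words` of output."""
    return words / 100 * LITERS_PER_100_WORDS

scenarios = [
    ("one 300-word reply", 300),
    ("one user, 10,000 words per month", 10_000),
    ("a million such users per month", 10_000 * 1_000_000),
]
for label, words in scenarios:
    print(f"{label}: ~{water_liters(words):,.0f} L")
# one 300-word reply: ~2 L
# one user, 10,000 words per month: ~50 L
# a million such users per month: ~50,000,000 L
```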
>> Wow. Okay. Thank you so much. There are so many things that I want to pull on here. Maybe we'll start with you, Stephanie, talking about the future of work. I know you've done some research there, and...

>> Oh, sure.

>> Help us...

>> Yeah, of course. Help us crystallize.
It's hard to have these conversations in part because they're all so profoundly entangled, right? We can't disconnect the conversations about climate justice from conversations about work and power and wealth. And Naomi made this excellent point that the tech industry has worked really hard and intentionally to distance themselves from industrialization. They have a shining, glowing, blue aesthetic that is the exact opposite of the reality of open pit rare earth mining and so forth, and they're trying to sell us on an aesthetic and a narrative about technology that is quite distinct from the industrial history that we're most familiar with. Which is why I think it's so important to resituate artificial intelligence, and computing in general, in an industrial history, which is exactly where it was born: in the 19th century, when the factory is taking on physical labor and this new managerial class is trying to imagine how to realize Adam Smith's dream of the most optimally wealth-producing nation by applying the division of labor to all of our manufacturing and all of our work. And I think most people are quite familiar with this part of the story and some of the social transformations that emerged from our attempt to automate elements of physical labor in the context of industrialization.

But less well known, I think, is that at that same time, there were a number of philosophers and technologists, people like Charles Babbage, who were going around touring the factories of industrializing Europe, saying, "I wish we could do for the mind what the factory has done for the body." There's this class of managerial thinkers who want to be able to apply industrial and managerial optimization to thinking, newly reimagined as a kind of work, and do for the mind what the factory did for the body. And so they start trying to imagine how you might break down intellectual labor into more elementary kinds of steps that less well-educated people could do, or that machines might be able to do. And they start with mathematical calculations. Can we do hard calculations using only addition? And then we can hire people who have less education in order to do those additions. And so they were called computers, those people. This very idea of computation comes from the application of industrial thinking to the mind, and a reimagining of thought as work. And in the history of science and technology, we've largely converged on a consensus that AI is not the culmination of a history of IT and digital technologies. It's actually the culmination of a managerial project that is trying to make knowledge and skill accessible to managerial control as much as possible. It's very explicit, right? Even before generative AI, there's explicit language in reports coming out of the Air Force and so on saying we need to find the most efficient tool for knowledge extraction so that busy managers can get on with their important business. So I think as we now see, at scale, a desire to implement our newer AI technologies, what we are also seeing is the subjection of all our data, our collective outputs, our supposed shared repositories of knowledge, now also being subjected to a managerial way of thinking. Can we use this data to be as efficient as possible at all forms of work that have economic relevance? So I've been working lots and speaking lots with labor organizers, who I think are going to be an incredibly important part of how we reimagine labor going forward. But I think it's helpful for everyone to know that we are living in the long shadow of this industrial moment where we desired to make thinking and knowledge available for economic optimization. That was very explicit and intentional, and it is a strong throughline through every paradigm and iteration of AI that we've seen.
>> Thank you. I'm just getting a lot of questions here about whether we can have the webinar focusing on our beautiful faces as opposed to the poster. So I don't know if that's something that you can...

>> Double click on the...

>> ...the panel. And they've said that doesn't work. So that happened too. All right. Sorry. Continuing talking about tech. Wendy, did you want to build on that? I know that you and Stephanie have had a lot of mind melds of late.
>> Mind meld, I know. Yeah. I just want to add one word to your "repository of knowledge optimization," I think that's what you said. It's also exploitation, right? Because, and I think that's the part where the human rights side of me gets really upset, we don't know that it's happening and we can't get out of the system, right? And I think when I first started working on this project, when I was first presenting my book, I'd have people come up to me and say, "Well, I don't do social media, so, you know, am I in the clear?" And I struggled with how to answer, because the systems of datafication are so extensive. It's not just what you experience or what you upload or what you provide, right? That's sort of our social media mental model. It's also on the back end: what are those social media platforms and other platforms doing with those data, and how are they sharing those data, and how are they exploiting and creating knowledge and extracting further insights?

And it's also an entire economic system, right? So it's not just business to consumer, it's business to business. Think about one of the biggest tech companies out there, Salesforce. Salesforce is not something you and I typically interact with unless you are a customer manager in your day job, and yet that sort of system pervades how people understand customer relationships, and you may be affected by it and not know, right? Or we think about other types of systems where you have what we might think of as a typical bricks-and-mortar type of company, like John Deere. John Deere also sells a substantial amount of data. It gathers a lot of data from its machines, and it sells those data for its own purposes and for its partners' purposes. So the reason it's exploitative is because we don't have a logical exit, because we're kind of just stuck in this computational logic. And I think this is where a lot of times it is difficult, when I talk about my research and people go, "What do we do?", and I know we'll get to that, but part of it is really recognizing that the power is what, you know, Foucault might have called capillary, in a sense: it reaches everywhere. And it is a logic that is premised on the supremacy of computation versus human thought. It is one that is steeped deeply in something called automation bias, right? Which is the assumption that a machine, if it produces an answer, is maybe more accurate than even a human expert. And I think that's also part of what I want to identify here.
>> Hamish, you're a political economist. Did you want to talk a bit more about this reconfiguration of power and concentration of wealth, and then we'll let Naomi have the floor?
>> Sure. Yeah. So, I think, pulling back to the point about indirect impacts on the environment, maybe the biggest impact is this reconfiguration of political power that we've seen with the rise of AI. Because AI is perceived as being a dual-use technology, one that has both military applications and economic applications, this affords the CEOs and prominent investors in AI a lot of access to the seats of power around the world. Right? So this is why we see the famous photo that Naomi referenced of all the tech CEOs with Donald Trump in the White House. It's why we have the Peter Thiels and the Jeff Bezoses and the Larry Ellisons of the world now essentially writing regulations to govern their own industries.

And I don't think there's been a full reckoning yet of the types of ideological baggage that these CEOs are bringing to the table. Right? These are beliefs that are fundamentally antithetical to the principles of climate justice. If you read Marc Andreessen's techno-optimist manifesto, he talks about the sustainable development goals as being a mass demoralization campaign. He says the state is the enemy of technological progress, right? You have a widespread belief in Silicon Valley in what have been called the TESCREAL ideologies: transhumanism, the idea that humans are destined to merge with machines; the idea that in some way we will colonize outer space; and long-termism, right? This belief that a human life today in 2025 has the same moral value as one a million years in the future. So if these are the people in power and these are what they believe, then you can see how they have moral justification to do things like invest in building the most sophisticated AI models of all time, because they believe that somewhere in the future we're going to have a trillion people living in outer space, and Jeff Bezos believes we'll have a million Mozarts living in outer space, and that we shouldn't be concerned or care about the vulnerable populations that are going to suffer the worst effects of climate change in the here and now. Right? And so I really do think that placing these particular ideologies so close to the levers of power is arguably going to be one of the most detrimental impacts on climate justice as a movement.
>> Yeah. Um, where to start on that? I mean, I think it's really important to understand that they aren't just close to power. They are power. They are there. They are running the show. You know, Peter Thiel hired JD Vance right out of law school, or helped him get his first job in Silicon Valley. JD Vance saw him speak at Yale and said it was the most transformative experience that he had at Yale, decided to get out of law and go become a VC like his mentor. And then eventually he was set up with his own venture capital firm, which was funded by Peter Thiel, Marc Andreessen, and Eric Schmidt, the former CEO of Google, who actually more than anyone else has been beating the drums on this idea of an AI arms race to beat China. He has been working with defense firms now for many years. He spent the pandemic giving PowerPoint presentations to lawmakers about how disadvantaged countries like the United States and Canada were because of our privacy laws, because, you know, we couldn't have AI doctors like in China, because people in Toronto organized to kick out Sidewalk Labs because they didn't want a big part of their city to be under constant surveillance. And they were very sore about that, because China had smart cities everywhere and the US was falling behind. So it was all of this AI arms race model. This was very early in the pandemic. So I wrote a piece called "The Screen New Deal," because I really saw it sort of like: we had been talking about a Green New Deal, and suddenly we were talking about a Screen New Deal. His whole pitch was, what is happening now with lockdown can be our lives forever. We can have online education, we can have telehealth, we can have everything delivered. Everything can be mediated by apps. And what actually happened is that they did get pretty far with that. A lot of stuff got locked in, but also a lot of people realized they didn't like living like that. And there was more backlash.

And so we're in this very tricky moment, I think, because, if we think about the bait and switch of OpenAI, it's very telling, right, that the first wave of OpenAI, and what we heard from Sam Altman when ChatGPT was launched to the public on November 30th, 2022, was: okay, this is a kind of nonprofit company, right? And we need to trust them, because this is such a powerful technology that it can't be in the private sector, right? It can't just be driven by profit. So we were supposed to trust OpenAI with this very powerful thing precisely because it wasn't all about profit, if you remember that, right? And they were also openly saying, "Please regulate us." Remember, they were all signing letters: this is so dangerous, right? Please, please regulate us. So, actually, a lot of governments did start regulating, right? Maybe not enough, but they were like, okay, well, if you're saying this could destroy humanity, maybe we should regulate it, you know? And so the moment that we're in now is the rebellion against that, because that was never the plan. It was always going to be this massive bubble that we're in right now.

So, yeah. I do think that thinking about that pandemic moment, the shift to online, all of the negative effects of that, the fact that people don't actually want to live like that, is something for us to really hold on to. You know, the rebellion against online education, right? Eric Schmidt was out there going, "This is great. Everyone's using Google Classroom. This is fantastic." Right? But most people did not agree, especially parents. And students. And so the other thing that I would say, just because I know this is getting quite bleak, is that people generally hate this. This is the good news.
The good news, I think, is that this is an organizing challenge, and, Stephanie, what you're saying about the role of labor is absolutely crucial. Because what this reminds me most of, in terms of the kind of backlash that we're seeing against data centers, is the wave of what is sometimes called Blockadia, right? Where the rapid expansion of the tar sands, fracking, that sort of unconventional fossil fuel frenzy, and all of the pipelines and all the fracking infrastructure led to a huge community-based backlash, right? And people started organizing against it. But the weakness was that it often, or invariably, pitted environmentalists against labor, right? What we're seeing in communities that have big data centers sited, whether it's Tucson or Memphis, and this is happening across Louisiana, I mean, it's happening all over, is people organizing, and they're organizing with slogans like "your cloud is not more important than our river," and they are bipartisan. It is workers, it is environmentalists, it is migrant rights activists. And what it's bringing together is the local environmental impacts, the climate impacts, the labor impacts, because this is a factory without workers that is being built, right? The usual industrial deal is: we're going to build this thing, it's going to use a lot of energy, use a lot of water, but you're going to get a few thousand jobs for your communities. Now, at the end of the day, you get maybe 12 jobs. And the whole machine, the whole bubble that we're in right now, is based on the promise that eventually it will eliminate jobs on a massive scale. Now, so far it's not funny, but it's a wild idea that they think they could get away with this. So I believe this is an organizing challenge, and if we rise to this, we can build coalitions that are larger than anything we've experienced before, because this touches all of our lives. It is not a partisan issue. I can tell you that Trump and Steve Bannon are sweating bullets, because in the last election cycle they saw a flip from Republican to Democrat in areas like Virginia and Georgia, and this was a decisive issue, where people were enraged at their electricity costs because data centers were driving their electricity costs up. So I think this issue is ours to seize. I think it is a truly intersectional issue. I think it touches everything, and I think we should organize.
>> I just want to pick up on the part where you said bubble. We are in a bubble, right? There's no question about that. I mean, is that the case? Because Nvidia, I know, owns 90% of the market share in terms of computer chips, right? And I think just three weeks ago they announced they're the first publicly traded company to be worth more than $5 trillion.

Five trillion.
>> So could we unpack that a little bit too, in terms of the financial risk that's happening right now? Stephanie or Wendy?

>> I mean, they're holding up the global economy right now, right? The Magnificent Seven: Google, Amazon, Meta, Apple, Microsoft, Tesla, Nvidia. I got all seven. Okay. They've been holding up the S&P 500 for a long time. And were it not for AI investment right now, the US economy would not be looking so great. This is from reporting by Derek Thompson, among other people. And so, you know, I'm not an economist. I don't know. And as a political scientist, I never make predictions. So I don't know if this is a bubble. I'll tell you after the fact. Okay.

But what I can say is Nvidia also just yesterday announced better-than-expected earnings, and between them and Walmart they have sort of bolstered the projections for this year in terms of finances. And what was really striking to me wasn't that, it was reading further down the article: Jensen Huang, who runs Nvidia, he's the CEO, wants to go beyond this current market of selling to AI companies. He wants GPUs to be everywhere. And so now we're just hypercharging this computation capacity, and for what end? I was really struck, Naomi, when you were talking about organizing from the oppositional end. And I think, as someone who has studied a lot of advocacy groups, one thing that is really interesting is that the AI folks can't seem to make up their mind about which narrative they want us to believe. Is it that AI is going to destroy us, or that we will never have to work again? I feel like a lot of times I'm ping-ponging between these explanations, because you've also got people like Eric Schmidt who are like, AI is going to fix climate change, even though climate change is a political problem, but we're going to fix it with AI. And so I do find that, for organizers, potentially, it's fairly straightforward in a way, because the opposition is quite disorganized in its narrative. And I wonder how that plays in. I'd love to hear thoughts about that.
>> Yeah. I do want to also ask, though: there is definitely that narrative that AI is going to help us solve climate change.

>> Yeah.

>> And so can you talk a little bit about that? What's the likelihood of that? And what are, I guess, the risks in that narrative too?
>> I mean, tech loves to set up a problem that only tech can solve, right? And that's what happened with so-called AI ethics. It is widely known now that most of our data is extraordinarily problematic. All data are partial, incomplete. All data reflect the values and concerns of the people who created the conditions through which that data could be created. The data is problematic, right? And then we start discovering there's a lot of bias in the data, and that's not a surprise to anyone. And then we see the emergence of this whole tech field called AI ethics that's trying to solve this problem. But what AI ethics essentially does is demand that we figure out how to turn our social values into a mathematical definition that can be imposed on the behavior of our AI. So if we want our AI to treat everybody fairly or equally, suddenly we now need a mathematical definition of fairness and equality that can be applied to the system. And just like that, tech took the problem of inequity, social inequality, and harm and turned it into a mathematical engineering problem that only computer scientists could solve.
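To make concrete what "a mathematical definition of fairness" looks like, here is a minimal sketch of one such formalization, demographic parity, a common metric in the AI ethics literature. The loan-approval data are invented for illustration; the point is how much social context a single number like this leaves out.

```python
# A minimal sketch of "fairness as a formula": demographic parity,
# one common mathematical definition used in AI ethics work.
# The predictions and group labels below are invented for illustration.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-outcome rates between two groups.

    A gap of 0 satisfies demographic parity; note how much social
    context this single number leaves out.
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Hypothetical loan-approval predictions (1 = approved) for two groups:
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # |0.75 - 0.25| = 0.5
```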
And I think, similarly, we will be able to talk about certain climate problems in a technical way that we will be able to use AI to explore. I think that will happen. But that's not the solution to a climate crisis or catastrophe, given that the entire under-the-hood infrastructure of artificial intelligence is militias warring over coltan deposits in the Democratic Republic of Congo, where children are mining materials that are in probably every iPhone in this room. This is also about the war in Ukraine, which is in large part motivated and structured by Ukraine's rare earths. It's not just about the power and the water that AI consumes and the carbon it emits. It is also about the way that rare earth extraction structures geopolitics and conflict and reinforces colonial supply chains. And tech, just as with ethics, is not interested in solving that problem. It is always looking at what it can put into its terms, namely the terms of the technology, the terms of engineering and mathematics. And engineering and mathematics, as we know, aren't going to be the solution to the climate catastrophe, which is a social and political problem. So again, there's this desire to frame a problem in the terms that tech can solve, which further empowers them, which is what AI ethics did. It gave them more money, more postdocs, more data, more researchers, more power, because they had this really hard problem they had to solve mathematically, like only they could, right? And questions about labor, and the racial subjection of the Black women who were leading the conversation about that bias in the data and getting fired from their positions, as Naomi pointed out, suddenly AI ethics had nothing to do with that. So I think we're going to see tech redefine the climate problem into the kind of problem that tech can claim to solve, but in doing so it will exclude these realities of geopolitics and conflict and mining that can't be sidestepped as mathematical engineering problems.

>> Yeah. Yeah. Go ahead, Hamish.
>> I was just going to piggyback on that a little bit and say AI is going to fix climate the same way that crypto fixed economic inequality and social media brought people closer together, right?

>> Give it some time. Give it some time.

>> Give it some time. Yeah. Of course there are applications. There are climate applications, you know. But the question that I would pose is: is that the majority use case? Are the majority uses of AI going to be about clean-tech innovation? No, it's going to be about YouTube launching a new AI-generated video platform that gets more eyeballs, contributes to the attention economy, and asks people to buy more goods, or makes you a more effective target for advertisers as well. So I think when we have these conversations about unlocking or solving the climate crisis, yes, there are going to be AI applications, and yes, Silicon Valley is going to hype those applications, but that's not the better part of what this technology is going to be used for, right? And so, important caveat.

>> Naomi?
>> Yeah, I mean, I agree entirely, and I do want to come back to the bubble question, because I think it's really important. We are in a bubble, and everybody knows it. What happened with Nvidia's stock is the bubble just got bigger. It doesn't mean we're not in a bubble. Sam Altman has said we're in a bubble. Mark Zuckerberg has said we're probably in a bubble. That's okay, right? And we are in an arms race, a tech arms race, as they see it, and their business model has always been to spend astronomical amounts of money before they have a business model at all, right? And this is why they're so frantic. They know that at the end of the bubble, it doesn't all end. What ends is that a lot of pension funds lose their worth. A lot of people go broke, but there will be a couple of titans standing, right? I mean, Sam Altman has said things like, you know, there's going to be just one company left in the entire world and he wants to be that company, right? And we can all get some shares or something. I mean, he's talked about different things; it changes all the time. But, yeah, I don't think anyone believes that what we're going through right now, where we have all seven of those companies competing for who's going to win, is going to go on forever. They're fighting for who is going to be the one or two left at the end of it. Right? And also, the reason why we know it's a bubble is that there is no demand for what they are supplying, right? If you look at their sales for the AI products, in no way does it match the buildout. And this is the most wasteful way, even if you believe that generative AI is a great idea, this is the most wasteful way you could possibly do it, because everybody is building parallel infrastructure in a race to see who's going to dominate at the end, right? And then they'll all buy each other up and they'll consolidate; they're all going to win, right? So I do think it is important for us to think about how we want to prepare for the bubble bursting. As you know, I've written a lot about shocks, right? And I'm not going to quote Milton Friedman again, but you know the quote I'm talking about. We actually have to have our ideas lying around for when this happens. I was very happy to see AOC, I think it was yesterday, say, "We're not going to bail them out." Right? But I think we really need to think about this, because some of these companies don't have valuable assets when the bubble bursts, right? But some of them do. Some of them do. And, you know, Wendy's research is about the way we have been the product for a long time for these companies, right? They have enclosed us with their appropriation of our data. Maybe there's a way to navigate the bubble bursting where we end up owning what we should have owned in the first place, like our information commons. Those are the kinds of ideas I want us to have lying around, because I think it's a kind of go-big-or-go-home moment. I think this bubble bursting is bigger than 2008.
>> There are so many things I want to talk about stemming from that. One is the connection to militarism, and the other is, I guess, just talking about big data and also trash data, and how all that relates right now. So maybe we'll first pick up on the militarism thing, because there are a few questions that came in through the responses, and I think I saw one online. Right now there is so much money getting poured into defense, that's what they call it, defense, right? And with the nationalism and these kinds of talks about elbows up, everybody can kind of feel like they can get behind defense. I was listening very closely at a conference I was at yesterday, where they used that word to almost sanitize the words weapons, armament, and the military, but they also find it connected to climate. Do you have any comments? Right now they have so much money they can't even spend it all, in terms of the billions of dollars that we are putting towards national defense. How, in that sense, is AI connected to all this, in terms of the geopolitics happening right now and these perpetual wars that we're having?
>> Yeah, I mean, AI has been a military project from its inception, in the 1950s, when we first start to see systems and computer programs being called this. They are specifically designed to serve a centralized command and control function. That's what computers are, right? And there's this really strange moment in the 1980s and 90s when the counterculture, the social justice organizers who had been opposed to the computer for a really long time, because they saw it as an extension of this cold calculating gaze of the state, turn around and embrace it. And they embrace it because of the narratives that are coming out of Silicon Valley at the end of the 20th century, saying: we have taken this military artifact and we're going to give it to you to take home to your house in the form of the personal computer, which got framed by Silicon Valley as an inherently democratizing and an inherently liberating technology. But even the very first supposedly libertarian, democratically aligned Silicon Valley tech companies that emerged in the 80s and 90s were all receiving DARPA funding from the Defense Advanced Research Projects Agency of the American military. Xerox PARC: you see these guys sitting around on their bean bag chairs selling us all on this inherently democratizing and liberatory technology that's not actually the military device you thought it was. And their startup funding comes from the exact same military streams as their predecessors had in the 50s and 60s. So I want to emphasize both that artificial intelligence was born in a military context, at defense think tanks like the RAND Corporation, and that the military has been the single largest investor in artificial intelligence development in the United States. And, just to try to connect some of these ideas together: I see a lot of these technologies as serving the same kind of function as our state and carceral institutions themselves, right? Data is state power in a lot of ways. A lot of our data-driven methods have been developed by governments in the past, not scientists or statisticians, because data-driven population management has been the purview of the state for a very long time. But insofar as the state wants to maintain a status quo, so too do our AI systems. They're meant to police difference. They're meant to punish outliers. They're meant to reinforce the status quo. And it is so ironic, I think, that so much of what gets called innovative today is actually profoundly conservative in that way, right? It gives more power to people who already have it, reinforces control infrastructure. And every single artificial intelligence tool that we have seen developed in the last 10 years has been used to scale up human-against-human violence in really profound ways, most especially in Gaza, where we saw artificial intelligence tools developed by and for the Israeli military leading to the deaths of entire families, because they encoded a logic of identification and destruction in any context. And so we can see military logics at work across artificial intelligence systems, and there is a deep history of investment and development. And my take is that the other story, that there was something inherently democratizing or liberatory about these tools, was always floating on top of what was essentially a managerial and militaristic control project throughout the whole last century.
Wendy, can you talk more about big tech and how we'd govern big tech? Or not? Or... I don't know.
>> Yeah. So thanks for this question. It's venturing into a project I'm working on now, which I talked about yesterday at the law school. So I'll just say a little bit about our mental models around big tech companies. First, just to ride on Stephanie's point here: Silicon Valley is only possible because of the state, not just because of military might but because of state funding. Mariana Mazzucato, who's an economist in the UK, has pointed out that this device is only possible because of massive amounts of government funding, right? And so it has been privatized, but it is fundamentally a government-issued item, or made possible by government generosity in science research. So, you know, one of the things that I think is really problematic is the way that companies have tried to say, you know, we're for innovation and states are for regulation and therefore anti-progress. And I think that's a dichotomy that really needs to stop. But I also think about our mental model around what big tech is, and I might have, you know, sort of fed into that by pointing out the Magnificent Seven, because that's usually what we think about as big tech. But I think that's actually a really misleading model in a way, because if you think about all those companies, those seven magnificent companies, they all do very different things. So how do we think about them as one type of entity or one type of business model when they're actually providing... you know, Nvidia is the biggest outlier: they provide hardware, right? Everybody else provides some combination of hardware and platforms. And so I've started thinking about big tech companies not in terms of names or market cap or even number of users, and really started to think about what gives them power. There are alternatives out there already in terms of state regulations that try to get at big tech: in the US, of course, we have the floundering anti-monopoly cases; in the European Union, there is a system of taxation and penalties for harm. I think what gives all big tech companies power, or at least almost all of what we call big tech companies, is their control of data-collecting platforms, right? They engage us through these platforms, and while we're engaged they suck up a ton of data, and then they do what they wish with it. And what makes a platform a really handy device is that the companies that own those platforms can, you know, sell those data to other people, they can use those data to improve their products, and they can also charge you, the user, for the privilege of using their platform. Think about Netflix, for example. That's called a multi-sided market in economics terms. That's what gives them power. But also, in the AI age, they are increasingly controlling the means of computation, which is, you know, the dominant model that we're using for advancing knowledge. And if we look at who's providing the means of computation, typically with advanced AI models it's done through cloud computing, right? And there are three dominant players in the world with regard to that: the top three are Amazon, Microsoft, and Google. Okay, so when we think about who's controlling the means of computation, how that affects power distribution and all
the things that we've been talking about here: they're all driven by data-collecting platforms.

I want to now ask a little bit about the social impacts. I mean, we're all absorbing this, and as a citizen I'm also like, what do I do? I think we can all get a bit of a sense now that AI's been in play for a while, watching how it's really impacting a lot of industries. Naomi and I were just talking to fine arts deans who are talking about their programs being decimated. There are concerns about what AI is going to do to our ability to think critically. Maybe we can talk a bit more about that, and then I'd like to talk about actions that we can take moving forward. And also, for those online and in the room, our Slido is up; feel free to pop in any questions that you may have now too, and I'm going to be taking a look at them and feeding them to our speakers here.

Social impacts, the ability to think critically: how are we going to preserve that?
>> Yeah. I mean, I see some students here from my graduate seminar on fascism, and we've talked a lot about this. You know, when people can't think, as Hannah Arendt reminded us, fascism spreads. And what is happening is that I think people are robbing themselves of a really valuable, painful process, which is just, you know, the chaos that eventually leads to some clarity, the wrestling that eventually leads to some sense-making. And when you are working with an interface that can just spew out instant synthesis, right, we miss all of that. And I actually think that many of us who write articles and books and papers maybe haven't been honest enough about the stages of chaos that we all go through, and the sort of pain of it, before you get to that point of clarity. It looks so easy, in a way, from the outside. You know, I'm deep in research right now, so I'm just remembering: it's always so hard, and then at the end of it you have this sense of calm and order that is really quite hard-won. But I also think that we have to find a way of talking about it that really recognizes that we are very deep in the wreckage of neoliberalism, and people's lives are extremely hard, overburdened, overstressed. Even, you know, at the university, we all complain at the faculty level about all of the bureaucracy and computerization. And so there's the promise that somebody could do it for you: if you just give all your personal information to this AI agent, it'll take all the stuff that you hate about your job away from you, right? But you have to give up everything in order to get that benefit. So I do think we should be careful not to repeat some of the mistakes that we have made in the past in the climate movement, where we've made people feel very accused, right? As if they can't be a climate activist if they're not already living entirely fossil-free, perfect lives, and they're like, "Well, no, but, you know, I know I'm not good enough to be a climate activist, right?" So I think it's really important that we find a way to talk about these systems that acknowledges that probably the person you're talking to has used one of these shortcuts, and that doesn't make them a bad person who wants to burn the planet. They are not equivalent to Peter Thiel, right?
Um, so I do think, you know... my co-author Astra Taylor and I were interviewing a group of organizers in the Tucson area who had successfully fought off a massive Amazon data center. And they said to us: we believe this is spiritual work, this is spiritual organizing. There's a lot of Indigenous leadership in that campaign. It is all about land and water and what it means to be in face-to-face community. So in doing that organizing and finding that common ground, it goes deep. It really goes deep. Because, you know, when people are confiding their deepest desires and fears to a chatbot, right, because that's, like, the only way they can get any support, and it feels like friendship and acceptance, we're pretty far down, right? So I think we just have to find a way of talking about this that feels like someone's reaching out to you in solidarity and compassion and friendship, and not in accusation. I think we can lose a lot of people that way.
>> If I could... Sorry, you go ahead. Go ahead.
>> You go. Are you sure?
>> Okay. Oh, yeah.
>> So nice.
>> Okay. I'm gonna go. I'm gonna go. Um, I love everything Naomi just said, and I want to build on this sense that we're kind of in a perfect storm, right? Even before COVID, people were reporting highest-ever levels of loneliness and isolation. People are burnt out, anxious, and exhausted. I hear from students that the very prospect of coming to class is a source of social anxiety for them. Even just things that, you know, when I was growing up, you just did, even if it felt awful; now we're talking about how it feels awful, and that's shifting some of our narrative in all kinds of institutions. But you take a situation like that and you drop in not only something that will listen to you and talk to you, but something, especially ChatGPT, that is designed to be a sycophant, right? In this moment where we don't know how to navigate conflict, people are so afraid of saying the wrong thing, doing the wrong thing, upsetting somebody. You know, a friend of mine has a daughter who's using AI to text her friends because she's worried she's going to upset someone, say the wrong thing, use the wrong word. We're terrified of each other in some really basic ways.
Drop something like ChatGPT into that situation and we are, I think, facing some trouble. And in addition to learning and thinking about how we talk about this, and being willing to talk about it, which does involve some vulnerability, and I think that is anathema to our economic and academic institutions in a lot of ways: it's pluralism that I'm afraid for and that I want to organize for. The sort of dominant finding of my home discipline, the history of science, is that there is more than one way to know, right? There are so many different ways to know. And there's this really ugly history, I think, that is essential to consider when talking about AI, which is that intelligence has been used as one of the bluntest instruments of colonial violence for centuries, where enslavement and colonization were often justified by Western powers on the grounds that the people they colonized were incapable of what they called the higher cognitive faculties. Right? So much violence has been done in the past on the grounds that there are people deemed not intelligent, or whose intelligence was not recognized by Western powers. And the more all of us are attuning our attention and our work and our thinking and our writing and our communication to this one system, right, the more we are all going to be learning and attuning and calibrating to the same forms of thinking and problem-solving and communication, and we will lose pluralism. And then we might lose some of the conflict we're all dealing with, but only because we've all been inducted into one shared system, right? And so I think we have to fight for all kinds of intellectual pluralism right now, really hard, and that's partly the job of the university. I'm a humanist, and the data-driven, quantitative approach to understanding doesn't work for me. Right? Like Naomi, I want to be in the chaos of it. And I categorically reject the laziness of this suggestion that somehow all tedious work is bad, that we don't want to do it and we shouldn't have to. It's so lazy, right? Tedium is the source of wisdom. We know this. So, yeah, that's all. That's me.
>> Well, that's what I tell my kids, right? Tedium is supposed to teach you something.
>> Yes. And it does.
>> Workday is out of control. It's not tedious. It's something else. Inside UBC joke. Anyway, I was going to say that one of the things that I think Stephanie and Naomi are really reflecting on is this comfort that AI gives us, right? We don't have to confront people we don't agree with, or people we might not know how to talk to. And that's part of, to answer Carol's original prompt, what critical thinking is, right? It's not a set of beliefs. It's a way of engaging with the world, which is: go out there and find someone you disagree with to figure out what you really think, right? Or: is there something wrong with the way I think, in light of what someone else has said? And when we rely on these systems that are based on the aggregation of all of human internet knowledge, we are getting the sort of average, right? And so we're not learning these skills if we're relying on these machines to tell us things. And I also want to say that these beautiful interfaces that these, you know, commercially available LLMs present really belie the fact that there's an awful lot of elbow grease and hard labor that goes into many of these systems. Now, the most sophisticated AI systems are not hand-labeled. But many AI systems have droves of people in the Global South working for a pittance to label the data so that the machine can learn from it: uncredited, largely still unseen human labor, something many academics have talked about. Karen Hao talks about it a lot in her book Empire of AI. We don't recognize that enough, right?
And I feel like, you know, Naomi, we're sort of in this together: I'm also in the middle of writing, right? And it's painful. It's messy. And you throw away a lot of stuff you write. And that's the point, right? That's learning. That's knowledge. That's synthesis. And if we're not willing to engage that way, I think we are in a real bind.
Hamish?
>> Can I just... I'll jump in on this great point, I think, about the atrophying of our ability to think critically and think creatively. So my students and I wrote a paper where we asked a bunch of the most prominent chatbots to describe environmental challenges and what to do about them. And what we found was that, especially when we asked, "How do you solve a challenge like climate change?", they will give the very average answer across the past data, right? So it's all about solutions that were rooted in past human experience, right? Public awareness campaigns, or incremental carbon pricing, and these are all solutions that are radically incommensurate with the scale and scope of the challenge as it exists today, in 2025. So I think a big risk of what's being lost in outsourcing our creative thinking to AI is this ability to think beyond past experience, to think creatively, to think outside the box to confront the nature of the challenges that we're dealing with today.
>> Did you... uh, well, okay.
>> I mean, I could tell.
>> I did.
No, I know people have more questions, so I should probably be driven by that. But I just wanted to return to what you were saying earlier, Hamish, about some of the ideology that underpins this. I mean, even though it seems really upsetting to, like, really dig into what the belief systems are... you know, what I'm writing is called end times fascism. It's not fun. But there's a way in which, when you really look at it, it's like it sears your retinas. It's just so bleak, right?
You know, Peter Thiel: I think I first heard about him because I found out he had this bunker in New Zealand. He was, like, the first of the bunker boys, you know. He bought this big piece of land, and then it turned out that Sam Altman was invited. This was way before any of us had heard of Sam Altman, when this piece came out. And I think it's important that so many of these tech executives that we're talking about have invested very seriously in bunkers, including Zuckerberg. And, you know, even the idea of space colonization: what actually is the vision? I think it's less about colonizing Mars and maybe more about space hotels, or just kind of an exodus. But I think that there's a way in which it's less about taking their ideas super seriously and more about taking their thought patterns seriously, in that this is how they propose to respond to global crisis. They're intensely aware that the crises are real, right? I mean, Eric Schmidt, who's been pushing this massive, you know, energy acceleration: he has a foundation that funds climate action, right? I mean, think about Bezos and the Earth Fund. It's not like they know a little. They know a lot. They know a lot, and they have decided to double and triple down. And so in a moment like that, ideologies that triage human life, that are actually openly eugenic, right, that believe that some people deserve to live and for other people it's okay if they die, and actually maybe we will kill them with our AI weapons... we're getting to this point, right?
I mean, Thiel co-founded Palantir, which is, like, rebranding as hard tech, right? Elon Musk, God help us, calls his data centers Macrohard, instead of Microsoft, right? It's this idea of hard tech, patriotic tech. But I think the endgame is about hollowing out the government, right? They have these ideas that the sovereign state is outdated. I think what this play is, is, like, the final pillage. And what they want to end up with... I think it's helpful to look at somebody like Erik Prince, who founded Blackwater, which ended up having to be shut down, but he still has mercenary armies. You know, he got these huge contracts under the Bush administration and went down in scandal, but he still has all the weaponry. He managed to use those contracts to build an extra-state army. And so when I look at all of these AI contracts that are being handed out right now, I do think that they're building something outside of the nation-state as we traditionally understand it. And that's why it really does matter. I mean, among other reasons: what happens when this bubble bursts, and what the demands are, and whether we want to say this is an illegitimate business model. Astra calls it theft tech: the whole thing has been about appropriating our intellectual content, the combined creativity of all of human existence, enclosing it and saying, "Mine." Why are we accepting this? They're losing lawsuits. You know, the whole idea is still based on move fast and break things, put your facts on the ground. It's why they all love Israel, right? They're trying to outrun the law while they simultaneously detonate the law. They want to go after the International Criminal Court. There's a reason Francesca Albanese is facing sanctions immediately after she published a report that names Palantir, that names Amazon, that goes after their complicity with the genocide. So I don't think they think that they can coexist with an international legal architecture, and I think they're scared of it, the more lawless they are. Right? So it is, you know, a knife-edge moment, but I believe that's motivating.

>> Right. I told you it would go very fast.
We are pretty much out of time. I've tried to incorporate as many of the online questions as possible while I was chatting with you. I will say that a majority of the questions are largely: can AI get greener and better, and what can we do, or what do we do? I want to give everyone a chance to give a last thought, like a takeaway maybe, because this has been a lot, right? It's a lot to absorb. And when we think also about public literacy on what's going on: how do we convey that, I suppose? How do we identify it, first of all, but also, what do we do going forward? So I'll give you each, I'd say, 30 seconds to finish up, or a minute each. How about a minute? To just give us your final thoughts. And why don't we start... well, I've been...
>> I'm going to donate my minute because I think...
>> No, you're not going to donate your minute. You're going to get the last minute. Are you crazy? Well, we'll go Stephanie, Wendy, Hamish, and then Naomi.
>> So I think for me the takeaway is that, like every single technology, we need to do a huge amount of social and political work around artificial intelligence if we're going to have any kind of version of it that we want to live with. And like Naomi, I actually am a bit optimistic too, because I think we're long overdue. We're long overdue for a real reckoning about what we do when we're in the classroom, what we do when we're at work, how we live together. We're overdue, and we're going to be forced to have that reckoning whether we want to or not. And I guess we should have our strategies ready.
>> Uh, I have two thoughts. So, in answer to all those questions about can AI get greener: AI is not a thing. AI is many things. There are different types of AI out there that are not neural-net-based; we just happen to be really captivated with neural nets right now. We can thank Geoff Hinton and all the other godfathers of AI for getting us to this neural-net moment. So there are alternatives. There are people working on things that are not so data-intensive or so compute-intensive, and maybe those are the places we should be looking. But, since Carol asked us to prepare a takeaway thought, one thing I would say is: get yourselves familiar with data. Now, most people don't deal with data on a daily basis, and I know some people are really weirded out by data, because it's scary. It's analytical and numbers and confusing and scary. I encourage you to think about data not as binary numbers, or even any numbers, but just as the way that we categorize the world. And we are actually making data all the time, right? We take in information, we sort it with the other things that we've experienced before, and we make predictions. Okay? So really get comfortable with data, because this is a data-driven world, and I think we need to have the mindset that we can all be data-literate and data-competent. And also, I'm going to hide behind both Stephanie and Naomi when the movement happens. So I will follow you.
>> Um, I guess my takeaway thought is: the AI revolution is not a fait accompli.
There's this wonderful saying from an Australian scholar named Jathan Sadowski, who says that the whole AI sector relies on the Tinkerbell effect:
If you stop clapping, it goes away.
Right? And so I think it's really a call for all of us to interrogate when we're being asked to use AI in our workplace, to question that, to not assume that we should be asking how we will integrate AI into our workforce, but rather whether we should integrate AI into our workforce, right? And I think, as we've established tonight, it's a very fragile, maybe non-existent business model. There's a reason OpenAI is trying to launch erotica for verified adults: it's because nobody will pay to use their service otherwise, right? So if we stop using it, it takes away all of the rationale for the massive capital expenditures that we're seeing right now. So just push back, interrogate, go listen to your friendly neo-Luddite podcast.
>> Awesome. Naomi?
>> I love that. Um, yeah, I mean, I would just say that, you know, I agree that there are forms of AI that could end up being really valuable. And they aren't these massive large language models. You know, they are smaller, closed-loop systems that are fed with good research, good data. We should fund universities. And whenever we, you know, find ourselves wondering whether what we're doing is valuable because it's possible for a machine to do it, I think we have to really flip that: these are only as valuable as the inputs, right? And there's a reason why we're being made to feel sort of replaceable and irrelevant. Because, you know, they're kind of in competition. What these companies want is to be the only knowledge arbiters, right? They want it to be that centralized. So, yeah, I think we need more spaces like this. I feel really grateful to you, Carol, and the whole team at CCJ for convening us. There was a really overwhelming response to this. I think this has been a long time coming. So, more face-to-face conversations. But I'm also really just so happy that there are hundreds of people who decided to tune in online, too. So, more soon. Stay tuned. And, yeah, thanks.
>> Thank you. And I'll do my final thought. I thought you were going to plug it, but Wendy mentioned Karen Hao. We'll be hosting Karen Hao on March 12th at the concert hall here in this building, so save the date; she'll be in conversation with Naomi. This is part one of, I hope, many, many conversations about where we are right now in terms of AI and its connection to climate justice. So if you all could just join me in thanking these wonderful panelists today. Thank you so much for being here. We really appreciate it.
Thank you.
Have a great day.