
Interview with Former FAIR Research Director Tian Yuandong: After the Meta Layoffs, Some Regrets and Reflections on AI

By Silicon Valley 101

Summary

## Key takeaways

- **Layoff as Accelerated Choice**: The speaker had an offer before the layoff and had told his superiors he was unhappy, so the layoff wasn't surprising; it simply accelerated a personal decision to leave after more than ten years at Meta. [03:01], [04:22]
- **AI Automates Execution Roles**: Because AI is highly automatable, fewer people are needed for data labeling and model-training pipelines, so execution-layer jobs shrink as tasks are handled by automated tools and agents. [05:31], [07:36]
- **Scaling Law's Pessimistic Future**: The Scaling Law requires exponentially more samples and compute for linear gains, a pessimistic future in which Earth's resources could be exhausted training models unless deeper efficiency improvements are found. [22:18], [23:16]
- **LLM's 1000-Fold Inefficiency**: Large language models need 10-30 trillion tokens to train, roughly 1000 times a human's lifetime intake of about 10 billion tokens, highlighting how inefficiently they learn compared with humans, who learn well from few samples. [14:28], [15:00]
- **RL's Active Learning Edge**: Reinforcement learning excels through active learning: by searching for solutions it generates higher-quality data than supervised methods, enabling better reasoning and generalization on difficult problems. [16:30], [17:16]
- **Chase Interest Over Scarcity**: In fast-changing AI, chasing market-scarce skills means always following trends; instead, pursue genuine interests and innovate, since definitions of scarcity shift quickly, as Yann LeCun's long-delayed Turing Award shows. [31:47], [33:46]

Topics Covered

  • AI Automation Will Eliminate Traditional Jobs
  • Open Source Thrives in Niche AI Models
  • LLMs Inefficiently Consume Vast Data Unlike Humans
  • Reinforcement Learning Excels Through Active Data Generation
  • Develop Research Taste for Enduring AI Careers

Full Transcript

I actually had an offer before I was laid off.

I'd been with the company for over ten years, so maybe this was the perfect opportunity to step out and see what the future holds.

Do you think LLM (Large Language Model) is the right path?

I think LLM is a very interesting path, but I don't know if it's the right one.

Scaling Law is a pessimistic future because, frankly, the topic of Scaling Law itself is quite strange.

So, what's the biggest problem with large language models right now?

The biggest problem is that they require a lot of data.

It's the same as with autonomous driving before.

Initially, progress was very fast, and everyone thought it would soon replace humans, but the further you go, the bigger the problems become.

Why?

Because good insight and good data are becoming increasingly scarce and difficult to find.

With less and less data, your model can't be trained.

What do you think of the RL (Reinforcement Learning) path?

The biggest advantage of reinforcement learning is that it's active learning; it can have a very positive impact on the distribution of data.

This is its core.

Do you have any regrets about FAIR?

I should have done more engineering work at FAIR, and maybe done it even better.

My biggest gains came after 2018.

I did a lot of research during that period.

Having research taste means setting a path for yourself that you can keep moving forward on.

What's your next step?

Hello everyone, welcome to "Silicon Valley 101," I'm Chen Qian.

On October 22, 2025, Meta CEO Mark Zuckerberg approved a plan to lay off approximately 600 employees from the company's artificial intelligence division.

This is Meta's largest layoff in the AI field this year, mainly targeting the core R&D department known as the "Superintelligence Lab."

So why is Meta carrying out this layoff?

How did the company's open-source AI approach run into obstacles, and how will the new AI head parachuted in, Alexandr Wang, reshape Meta's AI strategy? We discussed all of this in the previous episode, which you can find on our homepage.

We also interviewed Tian Yuandong, a key figure in the recent layoffs, former FAIR Research Director and AI scientist.

Our interview covered more than just Meta; I think what's more interesting and valuable is the reflection of these senior AI scientists on AI roadmaps and future cutting-edge research, beyond the company level.

So, in this video, I'm sharing the full interview.

This version removes the repetition from the previous video and focuses more on AI development itself, especially the roadmap for large language models (LLMs), the existence of open- and closed-source research labs, and the choices AI talent makes between research and engineering.

I hope this is helpful.

Here's the interview content: You're still wearing that FAIR uniform.

I think generally, people like us don't care much about clothing, right?

So we wear whatever the company hands out, and may not even bother to change.

How have the past few days been for you?

I know many people have actually been reaching out to you... (Yes, to contact me.) Whether it's the media or companies, they've all come to you.

What was your mindset?

I think it was like this: because I actually already had an offer before I was laid off.

Before I was laid off, I had already told my superiors that I wasn't very happy and that I might want to look around.

They knew that, so I wasn't particularly surprised by the layoff.

It didn't matter, since I had an offer anyway.

I had told them that before. Of course, after receiving the offer, I thought I would stay at Meta for a while longer, because I still had GPU compute, right?

I can still do some more things.

But since they laid me off, well, that's it, right?

So, in short, these past couple of days... I've been contacted by a lot of people, including many from large companies, many of them chatting with me about job opportunities.

Almost every company you can think of has reached out, and at a fairly senior level. There are also many smaller companies and co-founding opportunities.

So, there are many opportunities.

Right now, I'm still thinking about it and haven't decided yet.

But since it's been less than a week, less than 168 hours, since the layoffs, I still need to think it over.

Was the layoff something you expected?

Did you sense it was coming?

Otherwise, I wouldn't be looking for a job.

So, I had some sense of it... I feel that, personally, at some point this was going to be a good opportunity for me to step out and see the world, at least for me, since I've been with the company for over ten years.

As for the situation within the company, I'm not in a position to comment right now, but it's a personal choice, and this round of layoffs has accelerated that decision.

I might have stayed with the company a little longer, maybe another six months, and then reconsidered.

But since I've already left, I've left.

I think laying off 600 people is quite shocking; it felt like a lot, even though it wasn't a full layoff, just that some people were given the opportunity to transfer to other groups. It's that the AI department feels it doesn't need so many positions here and needs to be restructured.

I think we should actually talk about industry trends.

We won't go into the specifics of what recently happened at Meta, because I can't reveal too much.

I think the industry trend is clear: AI itself has the highest degree of automation. Today we have many people labeling data; tomorrow the model may be stronger and we won't need as many people labeling data; the day after, the model will be stronger still and we'll need even fewer.

And in the past, I've heard all sorts of stories, though I haven't experienced it myself.

For example, there used to be on-call rotations where, if the model crashed halfway through training, someone could be paged to immediately fix it, adjust parameters, and see if the run could be recovered.

But now, because there are many automated tools and the whole system is well-designed, these kinds of things have become much less common.

So, you can believe that...

Then, as the various pipelines gradually mature and become automated, do you think a large number of people are still needed?

Not necessarily.

So I think the general trend is that more and more people will be laid off, or rather, fewer and fewer people will be doing this kind of work.

So you think this round of layoffs isn't just a problem with Meta, but rather a general trend where more and more engineers, or those working in AI, will be laid off.

The general trend is that one day, everyone will be unemployed.

I think this is quite an alarming trend. Or rather, there won't be traditional jobs where I'm employed by a company and help that company do its work.

Maybe in the future, that won't be necessary.

For example, if I were to become a CEO, a leader of a small company, or start my own business, with these tools at my disposal, I would realize that I wouldn't need as many people to do many things.

Many tasks are automated, and to a very high degree.

So, what might have previously required a team of hundreds or thousands of people to do something now might not require that many.

Many tasks can be automated using agents.

Therefore, I think that in general, fewer people will be working on AI itself, but more and more people will be using AI as a tool to explore other things.

That's roughly the process.

Do you think there will be fewer people researching foundation models?

Yes, that's true.

There will likely be more and more exploratory research on base models, but fewer and fewer people simply building and training models according to our previous engineering logic.

This is because we'll find that everyone follows the same logic to train the model, and the code will all run and be effective.

Why would we need so many people?

Many will say we can do research or other exploratory work, and those people will increase.

And there will also be more and more people developing applications.

But these applications aren't general-purpose applications; they'll often be implemented in a specific vertical field or use this technology...

There will likely be more and more people doing that kind of work. But this is about the middle layer, the execution teams.

For those doing execution, their work is repetitive, right?

Many things need fixing or processing.

But as tools become more automated, repetitive labor will decrease.

That's the general feeling.

Before this layoff, what were you researching at FAIR?

Before the layoffs, actually, in January of this year (2025), I went to GenAI to help out.

During that time, we weren't doing research most of the time; we were doing various emergency response tasks.

Right, that was Llama 4.

Llama 4, yes. Of course, I personally still had some collaborative work going with other friends.

For example, in April or May of this year, we published an article analyzing the theoretical strengths of our earlier continuous chain of thought.

This analysis was quite effective and influential.

People felt that it added a supporting note to the continuous chain-of-thought (Coconut) paper, showing that we had indeed done a more in-depth theoretical analysis.

This analysis made the continuous chain-of-thought approach more convincing, and more work may be done on it.

Can you talk about the future development of open source and closed source?

Many outsiders say that open source isn't sustainable within a large company's structure, because competition on cutting-edge models is too fierce and others are going closed source, so you may not be able to keep doing open source.

Do you think the gap between open-source and closed-source models will keep widening, and will anyone still do open source?

Many companies, especially in China, are doing open source.

But I think there will still be open source in Silicon Valley.

For example, I know some companies like Reflection AI are likely working on open-source models, right?

They have many requirements and ideas to explore these things.

OpenAI previously released an open-source model, GPT-OSS, so I think open source will continue, and it certainly will.

Ai2 is also working on open-source projects.

The bigger question is, what are the uses of these models?

Whether open-source or closed-source, once a model is available, it can be used as a chat tool, a search tool, or a productivity tool.

Large companies might work on these, but there are many other directions.

For example, the model can be used for scientific research, scientists' work, or work in vertical fields.

Small companies can do this.

That's roughly it.

So, at a certain point, how powerful does the model need to be to solve this problem?

That's probably the question. This is a problem that varies from person to person or problem to problem.

Ultimately, we find that in different fields, do we really need a model that is strong in all aspects?

Not necessarily.

It might only be strong in the areas you care about.

At this point, differentiation may begin.

Each person and each model may have their own ideas, and each company may have its own purpose in developing this model.

As a result, there will be all sorts of different models doing different things.

In this situation, there may be different strategies, right?

Some models may want to be open source because, after being open sourced, everyone can use them to build a community, right?

Or as a tool platform.

In this case, open source is very reasonable.

For example, I have a model that, after being trained, can call a certain standard toolkit, and then I can use the standard toolkit...

If I could use this model to create a platform for everyone to use, then it would definitely be open source.

However, for other fields, such as personalized search or personalized recommendations, I'd be less willing to open source such models, right?

Or perhaps everyone trains their own model but doesn't open source it.

So ultimately, it depends on the ultimate goal, not on whether open source or closed source is better or worse.

Ultimately, it depends on the company's strategy, because every company and every individual has different strategies.

So, you might think that in state-of-the-art (SOTA) models, it's difficult for an open-source model to directly compete with a closed-source model, but in many smaller, niche models, there are still many, many opportunities for open source.

That's how it is, right?

Do you think LLM (Large Language Model) is the right path?

I think LLM is a very interesting path, but I don't know if it's the right one.

Because I think, on this point, you ultimately agree with Yann LeCun?

That's hard to say.

I think we're all scientists, so people with a scientist's mindset always feel that they want to find something better, rather than being satisfied with the current framework and working on it until the end.

That's definitely not the way I'm going to be.

So I always say there are all sorts of open problems, and how to solve them in other ways is a huge question.

So the biggest problem with large language models right now is that they require a lot of data.

And while the quality of the trained model is certainly very good, it's definitely not as efficient as a human's.

This is a huge problem, because for humans the number of samples you learn from is very small; the number of tokens you can take in over a lifetime is probably at most on the order of 10 billion, especially text tokens.

I've mentioned this before; I calculated this number on a slide. But the training data for large language models can easily reach 10 or 30 trillion tokens, right?

There's a 1000-fold difference.
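A quick back-of-envelope check of those figures (a sketch using the interview's own estimates; both counts are order-of-magnitude guesses, not measurements):

```python
# Rough sanity check of the gap described above, using the interview's figures.
human_lifetime_tokens = 10e9                 # ~10 billion text tokens in a lifetime
llm_pretraining_tokens = [10e12, 30e12]      # 10-30 trillion tokens for current LLMs

for t in llm_pretraining_tokens:
    ratio = t / human_lifetime_tokens
    print(f"{t / 1e12:.0f}T training tokens is {ratio:,.0f}x a human's lifetime intake")
# 10T training tokens is 1,000x a human's lifetime intake
# 30T training tokens is 3,000x a human's lifetime intake
```

At 10 trillion tokens the ratio is the roughly 1000-fold gap cited here; at 30 trillion it is closer to 3000-fold.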

How can you use human learning ability to bridge this 1000-fold gap?

It's very difficult.

But humans can learn very well.

We know that throughout human history, there have been all sorts of incredibly talented scientists, right?

Their ideas and approaches were unique.

They didn't have access to many books or much data at the time, yet they were able to discover interesting new theorems, new proofs, new findings, or new inventions.

So where did they get these abilities?

Now, with so many tokens being put into large language models, have they reached human capabilities?

This is actually a big question mark right now.

So, if that's the case, maybe our current training algorithms haven't reached their optimal state, right?

There might be better algorithms, better logic, and better ways to learn the representations that emerge from the data and use them to solve problems.

Gradient descent might not be a particularly good solution.

Maybe one day we won't use gradient descent anymore; there might be other methods.

This is just a wild guess, right?

In that case, maybe our entire training framework might need to change.

Of course, this might not happen now, but I think it might be an interesting direction to experiment with in the future.

I've seen some debate in the industry recently about reinforcement learning, especially with Andrej Karpathy.

He did a podcast interview and expressed some rather negative views.

What do you think of the RL (reinforcement learning) route?

I've been working in this area for a long time, and I also think that the good thing about RL (reinforcement learning) is that it's essentially a search process.

So, you give it some difficult problems and let it search for solutions.

The data you learn and the information you gain during the search process are of higher quality than the data you were fed.

It's like one person being supervised by another; for example, attending a teacher's lecture can be considered the supervised setting.

Compared with supervised learning, the other person solves problems independently without attending lectures.

I believe the latter approach yields a more fundamental problem-solving ability.

Therefore, I think reinforcement learning (RL) is superior to supervised fine-tuning (SFT) in this regard. Indeed, many papers demonstrate that reinforcement learning is stronger than SFT on many problems, especially reasoning.

You need Reinforcement Learning to truly enable the model to learn reasoning.

Supervised fine-tuning (SFT) might simply memorize previous reasoning processes, but it doesn't develop generalization ability.

On new problems, its generalization ability might be weaker.

Especially with extensive SFT, the model's quality may decline.

This is the key difference between the two.

However, Reinforcement Learning is merely a paradigm; it doesn't involve any mysterious elements.

Its ultimate goal is still to change weights, just like SFT, only the method of changing weights differs.

Ultimately, perhaps a unified approach exists that can unify Reinforcement Learning and SFT.

Reinforcement learning and supervised fine-tuning (SFT), right?

Unifying them makes sense because the ultimate goal is to change weights.

Perhaps there are better methods for that. For the most part, reinforcement learning is simply a different data-acquisition method.

It collects data while searching, puts the data together, and then trains it.

This is essentially an active learning method, different from SFT.
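As a rough illustration of that "collect while searching, then train" loop, here is a minimal sketch in the spirit of rejection-sampling-style RL fine-tuning. The `policy`, `prompts`, and `reward_fn` objects and their methods are hypothetical placeholders for illustration, not any particular library's API:

```python
# Minimal sketch of "search, keep what scored well, then train".
# All names here (policy.sample, policy.train_on, reward_fn) are assumed
# interfaces for illustration only.
def active_learning_round(policy, prompts, reward_fn, n_samples=8):
    collected = []
    for prompt in prompts:
        # Active data generation: the model searches for solutions itself.
        candidates = [policy.sample(prompt) for _ in range(n_samples)]
        scored = [(reward_fn(prompt, c), c) for c in candidates]
        best_reward, best = max(scored, key=lambda pair: pair[0])
        # Keeping only high-reward attempts is what reshapes the training
        # distribution, compared with a fixed SFT dataset.
        if best_reward > 0:
            collected.append((prompt, best))
    # The weight update itself can look just like supervised fine-tuning;
    # what differs is where the data came from.
    policy.train_on(collected)
    return collected
```

The structure mirrors the point being made: the gradient step at the end is ordinary, and the value comes from the model generating and filtering its own data.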

Therefore, I think the biggest advantage of reinforcement learning is that it's active learning; it can have a very positive impact on the distribution of the data.

This is its core strength, not that its objective function or training algorithm is different.

Ultimately, it depends on the data itself.

The quality of the collected data is different from SFT.

That's why it can solve some more difficult problems. Andrej Karpathy's previous points are actually quite good in some ways.

The assertion that AGI (Artificial General Intelligence) is still 10 years away implies that we've entered an era measured in decades, not a world where AGI capabilities can be acquired immediately.

I believe this.

I myself have used GPT-5 before, and it helped me with a paper.

My most recent paper was actually the result of self-play between GPT-5 and me.

Essentially, I had no students, and I just talked to GPT-5 every day, telling it about problems I needed to solve and how we should develop research methods.

It would provide a plan, but you'll find that without domain knowledge, the plan it produces is similar to everyone else's, lacking innovation and originality.

However, as a researcher, having a deep understanding of the problem, knowing where the plan, its impact, or the way of thinking is flawed or has fatal problems, allows you to push GPT-5 to dig deeper and ultimately achieve better results.

So this kind of high-level human insight... human knowledge and unique insights into the problem are what current models lack.

You need these things to make the model stronger.

So, would it be accurate to say that AGI will never achieve top-tier insight, because insight will always be led by humans?

Yes, that's the problem.

I've mentioned this before; it's similar to the early days of autonomous driving.

Initially, progress was very rapid, and people thought it would soon replace humans.

But the further we go, the bigger the problems become.

Why?

Because good insights and good data are becoming increasingly scarce and difficult to find.

With less data, your model can't be trained properly.

Humans' ability to acquire and deeply mine data will always surpass that of computers; at least for now, it surpasses all models.

For the same problem, humans might only need one or two samples to see the essence, while computers, or today's large models, need at least hundreds or thousands of samples to roughly grasp its contours.

Pre-training may require even more samples.

In this situation, if the number of samples is insufficient, humans will always be better than current large models, especially experts in specific fields.

And they themselves cannot present the samples they've learned from to the computer, because those samples are experience in their minds, which is difficult to put into words.

If this is the case, AI can only forever follow behind humans.

Humans gain insights through some better information processing methods and then feed them to computers and AI to make AI perform better in this direction.

This is the current state.

So I think this is quite close to some of my previous arguments.

I have also been interviewed before and said that the Scaling Law is a pessimistic future.

The Scaling Law is, frankly, a very strange topic.

In the past, if you had told people that adding exponentially more samples or exponentially more computing power would improve performance linearly, earlier machine learning scientists might have considered that trivial, because for almost any model you can conclude that simply feeding in more data yields better results. But I think what we should truly pursue is a model that can move along this path more efficiently, effectively, and quickly, rather than simply being satisfied with this "law." That's right, because this "law" implies a rather pessimistic future: you need to feed in exponentially more samples to get a decent improvement.
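One way to write down that claim (an illustrative idealization consistent with the argument above, not a formula from the interview): if performance grows roughly logarithmically with training data, linear gains cost exponential data.

```latex
% Suppose performance P grows logarithmically with training data D
% (which is how a power-law loss with a small exponent behaves in practice):
\[
  P(D) = a \log D + b
  \quad\Longrightarrow\quad
  P(kD) = P(D) + a \log k .
\]
% Each fixed increment in P costs a constant multiplicative factor k in data,
% so n such increments require
\[
  D_n = k^{n} D_0 ,
\]
% i.e. exponentially more samples (and compute) for linear improvement.
```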

If that's the case, one day all of Earth's resources will be exhausted, and all of Earth's energy and electricity will be used to train large models.

In that situation, will we still rely on this ability to change our world?

That's a huge question.

I think at some point, people will realize that computational power isn't everything; we might need a deeper understanding of models.

I think this change will gradually happen.

That's one of my thoughts.

Yes, but we need a more efficient way to develop intelligence.

But do you think it will take a long time to find this solution?

I think everyone is working on it.

So it will take some time to do these things. At least for now, with large language models, their capabilities are already incredibly strong.

Even if our model's capabilities stagnate now, its impact on various industries is still enormous.

I think it can automate a large part of things and enhance the capabilities of many people.

I feel that, with large language models, what I can do has far surpassed my previous abilities.

This makes me feel there's a lot of room for development in this area, which is a major realization for me.

I believe this marks the arrival of a new era.

So even if the progress of large language models isn't rapid, I think there will be many opportunities in the next two to three years.

So, if you still want to do cutting-edge research or try application development, it would be best to combine both.

If I could do cutting-edge research that is automated, that would be amazing, right?

I already feel that my research paradigm might be partially replaced by automated pipelines.

You mean agents?

Not necessarily agents, but agents are definitely a very important factor.

Using agents can help you do many things.

For example, you might not need to reply to emails yourself, or manage your to-do list, or handle other tasks that you don't need to do yourself; computers can automate those.

This is definitely going to happen.

But the more important question is whether AI can replace humans in some advanced activities.

This is a more complex issue, especially considering the challenges of advanced human thought processes.

The key is the need for human insights.

To what extent can AI help solve many difficult scientific problems?

We don't yet know if AI can accomplish this.

If it can, it could, in turn, impact my research.

From a research perspective, I might become a super researcher.

With the addition of AI, I can conduct better research, and these tools can also benefit other things.

That would be very interesting.

Before you were pulled in to help with Llama 4, what were you researching?

We were doing some research on reasoning, mainly on thought chains, their forms, and training methods.

Before o1 came out (last September), we noticed that very long chains of thought affect the model's scaling law.

If you don't have many long chains of thought, the scaling law isn't ideal; you need many samples to get a good result.

But with long chains of thought, the model's scaling changes: the scaling-law curve becomes very favorable.

I can get better results with, for example, one-tenth of the samples, and one-tenth of the parameters.

It's something like that.

We actually discovered this, and then we did all sorts of transformations and explorations on the chain of thought, right?

Including our work at the end of last year, the continuous chain of thought (Coconut), which does reasoning in a continuous latent space.
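As a rough sketch of that idea (my reading of "reasoning in a continuous latent space", not the paper's exact recipe; the backbone model, number of latent steps, and prompt are arbitrary placeholders), the core move is to feed the last hidden state back in as the next input embedding instead of decoding a discrete token:

```python
# Hedged sketch of latent-space ("continuous") chain of thought:
# instead of sampling a token at each reasoning step, feed the final
# hidden state back in as the next input embedding. Simplified; the
# actual training procedure is a separate matter not shown here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder backbone for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Question: ... Let's think step by step."
ids = tok(prompt, return_tensors="pt").input_ids
emb = model.get_input_embeddings()(ids)              # (1, T, d) token embeddings

with torch.no_grad():
    for _ in range(4):                                # a few "continuous thoughts"
        out = model(inputs_embeds=emb, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]  # (1, 1, d) latent thought
        # Reasoning stays in continuous space: no token is decoded here.
        emb = torch.cat([emb, last_hidden], dim=1)

    # Switch back to ordinary token decoding for the visible answer.
    out = model(inputs_embeds=emb)
    next_token_id = out.logits[:, -1, :].argmax(dim=-1)
    print(tok.decode(next_token_id))
```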

This paper has indeed received a lot of attention, probably over 200 citations in just six months, and many people are willing to follow it.

We've been doing some exploratory work and have seen some progress, so I think these things are very interesting.

Last year, we also published a paper called "Dualformer," which was one of the earliest to propose hybrid reasoning models: how to train long and short thinking together.

We found that this model is actually more effective than training only long or only short thinking.

Now, this has become standard practice; all reasoning models have this adaptive property of combining long and short thinking. So, last year's research was quite ahead of the curve.

Do you have any regrets about FAIR?

That's an interesting question.

I think my regret might be this: I should have done more engineering work at FAIR.

Actually, when I first joined FAIR, in the first few years, I did a lot of engineering work.

For some of my previous projects, like Go, I did a lot of engineering work myself.

At the time, I was even criticized for it: I came in as a research scientist but was always doing engineering, and people told me that while others' screens were full of papers, mine was full of code.

I was criticized like that, so I said, okay, if research scientists aren't supposed to be doing engineering, then I'll write less code and read more papers.

So, you'll find that from 2015 to 2018 I was mostly doing engineering, and from 2018 until now I've been doing more research.

That's roughly the pattern.

This is certainly related to FAIR's policy at the time, and also because I had research interests and wanted to do more research, so I switched over.

But now you'll find that in this era, people with strong engineering skills are more sought after, right?

So it's interesting: people with strong research skills are also in demand, but ideally you'd have both strong engineering and strong research skills, and that's extremely difficult.

But I think I can achieve that, so I'm doing more engineering work now.

I can pick up a lot of things again and do these engineering things well.

I think my biggest gain from FAIR came after 2018; I did a lot of research during that period, and that's what built up my research taste.

Research taste refers to an appreciation for research and an understanding of research methods.

This appreciation can be developed gradually, and it has shown up more and more in my publications in recent years.

Therefore, having research taste is very helpful for one's future career path.

This is crucial because a person who only does engineering has a significant problem: they might only tackle difficult engineering problems without understanding their applications.

However, having research taste means setting a path for oneself that can be continuously advanced.

This is extremely beneficial for one's life.

Yes, I have another question I'm very curious about.

Given the fierce competition in AI among companies and the intense talent war, including Meta's newest lab spending enormous sums on individual hires, what kind of AI talent do you think is most scarce at this stage?

I think it completely depends on each person's positioning.

First, I want to correct a premise: don't think about who is most scarce right now.

Because the definition of scarcity might change in a couple of years, right?

So, think about Yann LeCun sitting on the sidelines for so many years and then suddenly winning the Turing Award.

So I think everyone should think about what they truly want to do, rather than doing what companies might like.

I think that's more important because the whole process is different now.

In the past, the market would send a signal saying what kind of talent we needed.

This signal would then spread through universities, saying what kind of talent would be most sought after in the next ten years.

Universities would then expand enrollment in the corresponding departments and hire more professors.

Students would apply to those departments, and after four or more years of training, these students would finally meet the market's requirements.

That's roughly how it worked before because the whole logic and speed were relatively slow, right?

The industry cycle might have been...

For example, the fluctuations used to occur over 10 or 20 years, so this process was possible.

But now, the entire cycle might be very fast.

By the time you want to learn a hot technology in the market, everyone in the world is learning it, right?

You've thought of it, and others have thought of it too, right?

Everyone in the world is learning it.

There will always be someone who learns faster than you, someone who learns better than you, and someone who can immediately get started and make things work.

So you might find that after studying for half a year or a year, you can't compete with others, and you still can't stand out.

In this case, the market has changed.

Maybe next year won't be the era where this particular skill is most important.

Maybe something else has taken its place.

If you start learning then, you might always be following in others' footsteps.

So maybe in the future, everyone will suddenly realize that instead of following the market's orders, it's better to do what you want to do.

You'll be happy doing it, and also, once this thing is discovered... the benefits are huge. Of course, that's the ideal situation, right?

Because in reality, you definitely need to combine both sides.

You'll definitely want to judge for yourself whether this thing will be useful in the future, plus your own interests.

Finally, you can put more effort into it after combining the two.

Yes, that's roughly it.

So it's very difficult to make a judgment because it completely depends on your own ability.

I feel you are a very idealistic person.

Yes, and I feel that FAIR was a very idealistic team before, as we talked about in the last podcast.

But you feel that the market is a bit distorted now because when the competition is particularly fierce, many cultures and beliefs may deviate.

Do you think that in the current situation, there are still relatively idealistic research labs?

Maybe Ilya Sutskever's team or Mira's team are considered relatively idealistic.

Their counterpart is Sam Altman, who is very commercial and aggressive.

How do you view this balance?

I think firstly, you shouldn't treat large companies as monolithic.

In fact, there are many groups, and many of these groups have research teams. These teams themselves have a research spirit and research freedom.

This will always exist.

FAIR is just a very famous and well-known place.

But there are many places that are not as famous as FAIR that also have the freedom to do research.

Even within Meta, there are many groups that have space to do research.

I have many collaborators in Meta who also do some research.

I don't think this is a problem.

Maybe FAIR won't be as research-oriented in the future, for this or other reasons, right?

But there will still be many places where you can do research.

Even when you are a startup, you might find that the problem is very cutting-edge, so there will definitely be things you can do there.

Because when we talk about research, we mean the process of finding new solutions to difficult problems; that's what research, or re-search, is.

Right?

It's really about exploration, so it's not an abstract concept.

I think there are many areas where it can be done.

It's not a monolithic concept; it's not that big companies can't do it and only small companies can.

It's not that simple.

It completely depends on which group, which people, what resources, what kind of things, and what kind of chemical reaction occurs when these people come together.

Maybe it can be done today but not tomorrow, or maybe there's room for it for a period of time but not at other times.

So countless people are thinking about this problem, and new work will surely emerge during this period, influencing the entire field.

So research will always continue, it's just that its form may become more like guerrilla warfare.

It's not that some very famous research institution will say, "We'll dedicate all our time and energy to research." Maybe not.

But you will always find many idealistic people and small organizations continuing to do what they want to do.

It's roughly like this process.

Yes, it's not 0 or 1; there will be many gray areas.

The last question is, what is your next step?

As I just said, the next step is not yet determined, so it's still under discussion.

Because it hasn't even been a week since I was laid off, so... I have some considerations and ideas.

The question you just asked was whether I want to work on applications or continue my scientific research, right?

My answer is, of course, it's best to combine both.

We want to find a way to empower my scientific research while also being able to do many other things.

Does such an opportunity exist?

I don't know, but generally speaking, we set a high goal first and then look at the options.

Because generally, people are more realistic; they think, "Such an opportunity probably doesn't exist, so I don't need to think about it." But actually, it should be the other way around.

First, think of an impossible goal, and then think about what can support it.

This might give you a better direction to take.

Okay, then we look forward to your next announcement.

Okay, that concludes our interview with Tian Yuandong.

We also look forward to his next move.

I sincerely hope he can find a new role that balances cutting-edge research and engineering applications.

I think this is the path that cutting-edge AI engineers are exploring.

Good luck to him!

Do you think such AI work exists?

Welcome to leave us comments, share, and like!

Your support is the best motivation for "Silicon Valley 101" to produce in-depth technology and business content.

See you in the next video! Bye!
