本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
去年十月,包括您本人以及理查德·布兰森、杰弗里·辛顿在内的850多位专家联名签署声明,呼吁禁止超级人工智能的发展,因为你们担忧这可能导致人类灭绝。
In October, over 850 experts, including yourself and other leaders like Richard Branson and Geoffrey Hinton, signed a statement to ban AI superintelligence as you guys raised concerns of potential human extinction.
因为除非我们能确保人工智能系统的安全性,否则我们将面临灭顶之灾。
Because unless we figure out how do we guarantee that the AI systems are safe, we're toast.
您在人工智能领域的影响力举足轻重。
And you've been so influential on the subject of AI.
您撰写的教材被当下许多人工智能公司CEO奉为圭臬,他们正是通过学习您的著作进入这个领域的。
You wrote the textbook that many of the CEOs who are building some of the AI companies now would have studied on the subject of AI.
没错。
Yep.
那么您是否对此感到后悔?
So do you have any regrets?
斯图尔特·罗素教授被《时代》杂志评为人工智能领域最具影响力的意见领袖之一。
Professor Stuart Russell has been named one of Time Magazine's most influential voices in AI.
在历经五十余载的研究、教学与人工智能设计探索之后
After spending over fifty years researching, teaching, and finding ways to design AI
以这样一种方式
in such a way
来保持控制。
that we maintain control.
你提到这个大猩猩问题,作为理解人类背景下人工智能的一种方式。
You talk about this gorilla problem as a way to understand AI in the context of humans.
是的。
Yeah.
几百万年前,人类在进化过程中与大猩猩分支分离,如今大猩猩对自己的存续毫无发言权,因为我们比它们聪明得多。
So a few million years ago, the human line branched off from the gorilla line in evolution, and now the gorillas have no say in whether they continue to exist because we are much smarter than they are.
所以智力实际上是控制地球最重要的单一因素。
So intelligence is actually the single most important factor to control planet Earth.
没错。
Yep.
但我们正在创造比我们更智能的东西。
But we're in the process of making something more intelligent than us.
确实如此。
Exactly.
那人们为什么不停止呢?
Why don't people stop then?
嗯,其中一个原因就是所谓的‘点金术’。
Well, one of the reasons is something called the Midas Touch.
传说中迈达斯国王向众神祈求,能否让他触碰的一切都变成黄金。
So King Midas was a legendary king who asked the gods, can everything I touch turn to gold?
我们总认为点金术是好事,但当他去喝水时,水就变成了金子。
And we think of the Midas touch as being a good thing, but he goes to drink some water, the water has turned to gold.
他想安慰女儿,结果女儿也变成了金像。
And he goes to comfort his daughter, and his daughter turns to gold.
最终他在痛苦与饥饿中死去。
And so he dies in misery and starvation.
这从两个方面映射了我们当前的处境。
So this applies to our current situation in two ways.
其一是贪婪驱使这些公司追逐技术,其灭绝概率比玩俄罗斯轮盘赌更糟,这甚至是那些未经我们许可就开发这项技术的人自己承认的。
One is that greed is driving these companies to pursue technology with the probabilities of extinction being worse than playing Russian roulette, and that's even according to the people developing the technology without our permission.
如果人们认为这自然可控,那他们只是在自欺欺人。
And people are just fooling themselves if they think it's naturally going to be controllable.
所以你看,五十年后我本可以退休,但现在我每周工作八十到一百小时,试图推动事情往正确的方向发展。
So, you know, after fifty years, I could retire, but instead, I'm working eighty or a hundred hours a week trying to move things in the right direction.
那么,如果在你面前有一个按钮,能停止人工智能的所有进展,你会按下它吗?
So if you had a button in front of you, which would stop all progress in artificial intelligence, would you press it?
暂时不会。
Not yet.
我认为仍有相当大的机会能确保安全,我可以进一步解释具体措施。
I think there's still a decent chance of an approach that guarantees safety, and I can explain more of what that is.
给我三十秒时间。
Just give me thirty seconds of your time.
我有两件事要说。
Two things I wanted to say.
首先要衷心感谢你们每周都收听我们的节目。
The first thing is a huge thank you for listening and tuning into the show week after week.
这对我们所有人来说意义重大,这真的是我们从未有过、也想象不到能实现的梦想。
It means the world to all of us, and this really is a dream that we absolutely never had and couldn't have imagined getting to this place.
但其次,我们感觉这个梦想才刚刚开始。
But secondly, it's a dream where we feel like we're only just getting started.
如果你喜欢我们的节目,请加入那24%定期收听本播客的听众行列,在这个应用上关注我们。
And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app.
我要向你们许下一个承诺。
Here's a promise I'm gonna make to you.
我将竭尽全力让这个节目现在和未来都做到最好。
I'm gonna do everything in my power to make this show as good as I can now and into the future.
我们会邀请你们想听的嘉宾,并继续保留你们喜爱的节目特色。
We're gonna deliver the guests that you want me to speak to and we're gonna continue to keep doing all of the things you love about this show.
谢谢。
Thank you.
斯图尔特·拉塞尔教授,OBE(大英帝国官佐勋章获得者)。
Professor Stuart Russell, OBE.
过去几年里,很多人都在谈论人工智能。
A lot of people have been talking about AI for the last couple of years.
而有一点确实让我非常震惊。
And this really shocked me.
看起来你一生中大部分时间都在研究人工智能。
It appears you've been talking about AI for most of your life.
嗯,我早在英格兰读高中时就开始接触人工智能了。
Well, I started doing AI in high school back in England.
然后我在1982年开始在斯坦福大学攻读博士学位。
But then I did my PhD starting in 'eighty two at Stanford.
我于1986年加入伯克利分校任教,如今已是我在伯克利担任教授的第四十个年头。
I joined the faculty of Berkeley in 'eighty six, so I'm in my fortieth year as a professor at Berkeley.
AI领域对我的工作最熟悉的是我编写的一本教材。
The main thing that the AI community is familiar with in my work is a textbook that I wrote.
这是否就是大多数学习人工智能的学生可能正在使用的教材?
Is this the textbook that most students who study AI are likely learning from?
是的。
Yeah.
所以你31年前就编写了这本关于人工智能的教材。
So you wrote the textbook on artificial intelligence thirty one years ago.
你可能在我出生的那一年就开始动笔写它了——因为这本书实在太厚了。
You actually probably started writing it, because it's so bloody big, in the year that I was born.
我出生于92年。
So I was born in '92.
没错。
Yep.
我花了大约两年时间。
Took me about two years.
我和你的书同龄,这真是个绝妙的方式让我理解你谈论和撰写这个话题已经有多长时间了。
Me and your book are the same age, which is just a wonderful way for me to understand how long you've been talking about this and how long you've been writing about this.
事实上,有趣的是现在许多正在创建人工智能公司的CEO们很可能都是从你的教材中学到的知识。
And actually, it's interesting that many of the CEOs who are building some of the AI companies now probably learned from your textbook.
你曾与某人交谈时提到,为了让人们理解我们今天要讨论的信息,可能需要一场灾难才能唤醒他们。
You had a conversation with somebody who said that in order for people to get the message that we're gonna be talking about today, there would have to be a catastrophe for people to wake up.
你能给我讲讲那次对话的背景,以及大致是和谁进行的这次谈话吗?
Can you give me context on that conversation and a gist of who you had this conversation with?
那是与一家领先人工智能公司的CEO之一进行的对话。
So, it was with one of the CEOs of a leading AI company.
他看到了两种可能性,我也这么认为,要么我们经历一场小型——或者说切尔诺贝利级别的灾难。
He sees two possibilities, as do I, which is either we have a small scale disaster, let's say of the same scale as Chernobyl.
乌克兰的那次核泄漏事故?
The nuclear meltdown in Ukraine?
是的。
Yeah.
这座核电站于1986年爆炸,直接导致不少人死亡,可能还有数万人因辐射间接丧生。
So, this nuclear plant blew up in 1986, killed a fair number of people directly and maybe tens of thousands of people indirectly through radiation.
最近的损失估计超过一万亿美元。
Recent cost estimates, more than a trillion dollars.
那会让人警醒。
So that would wake people up.
那会让政府开始监管。
That would get the governments to regulate.
他与政府谈过,但他们不愿行动。
He's talked to the governments and they won't do it.
所以他视这种切尔诺贝利级别的灾难为最佳情景,因为那样政府就会监管,并要求安全地构建AI系统。
So he looked at this Chernobyl scale disaster as the best case scenario, because then the governments would regulate and require AI systems to be built safely.
这位CEO是在创建AI公司吗?
And is this CEO building an AI company?
他经营着一家领先的AI公司。
He runs one of the leading AI companies.
连他都认为人们要觉醒的唯一方式是发生切尔诺贝利级别的核灾难吗?
And even he thinks that the only way that people will wake up is if there's a Chernobyl level nuclear disaster?
是的,不一定是核灾难。
Yeah, it wouldn't have to be a nuclear disaster.
可能是有人滥用AI系统,例如制造一场疫情,或是AI系统自行做出某些行为,比如摧毁我们的金融体系或通信系统。
It would be either an AI system that's being misused by someone, for example, to engineer a pandemic or an AI system that does something itself, such as crashing our financial system or our communication systems.
另一种情况是更严重的灾难,我们完全失去控制。
The alternative is a much worse disaster where we just lose control altogether.
你与AI领域的许多人有过大量对话,包括那些开发这项技术的人、研究这项技术的人,以及目前参与AI竞赛的CEO和创始人。
You have had lots of conversations with lots of people in the world of AI, both people that are, you know, have built the technology, have studied and researched the technology, or the CEOs and founders that are currently in the AI race.
有哪些公众可能难以置信的、你在私下听到的关于他们观点的有趣看法?
What are some of the interesting sentiments that the general public wouldn't believe that you hear privately about their perspectives?
因为我觉得这非常引人入胜。
Because I find that so fascinating.
我曾与这些科技公司内部人士有过一些私下交谈。
I've had some private conversations with people very close to these tech companies.
而让我震惊的观点是,他们通常意识到风险,但觉得无能为力。
And the shocking sentiment that I was exposed to was that they are aware of the risks often, but they don't feel like there's anything that can be done.
所以他们继续前行,这在我看来有点自相矛盾。
So they're carrying on, which feels like a bit of a paradox to me.
从某种意义上说,这一定是个非常艰难的处境,对吧?
It must be a very difficult position to be in, in a sense, right?
你正在做的事情,明知有很大可能导致地球上的生命终结,包括你自己和家人的生命。
You're doing something that you know has a good chance of bringing an end to life on Earth, including that of yourself and your own family.
他们觉得自己无法逃离这场竞赛。
They feel that they can't escape this race.
如果其中一家公司的CEO说,我们不再继续这项研究了,他们只会被替换掉,因为投资者投入资金就是为了创造AGI并从中获益。
If a CEO of one of those companies was to say, you know, we're not going to do this anymore, they would just be replaced because the investors are putting their money up because they want to create AGI and reap the benefits of it.
所以这是个奇怪的局面,至少我交谈过的所有人——虽然我没和Sam Altman谈过这个——但Sam Altman甚至在成为OpenAI CEO之前就说过,创造超人类智能是人类生存面临的最大风险。
So, it's a strange situation where at least all the ones I've spoken to, I haven't spoken to Sam Altman about this, but, you know, Sam Altman, even before becoming CEO of OpenAI, said that creating superhuman intelligence is the biggest risk to human existence that there is.
我最担心的是我们——这个领域、这项技术、这个行业——会给世界带来重大伤害。
My worst fears are that we, the field, the technology, the industry, cause significant harm to the world.
Elon Musk也公开说过这样的话。
Elon Musk is also on record saying this.
所以,达里奥·阿莫代伊估计存在高达25%的灭绝风险。
So, Dario Amodei estimates up to a twenty five percent risk of extinction.
是否有某个特定时刻让你意识到这些CEO们非常清楚存在灭绝级别的风险?
Was there a particular moment when you realized that these CEOs are well aware of the extinction level risks?
他们都在2023年5月签署了一份声明。
They all signed a statement in May 2023.
这份声明被称为'灭绝声明'。
It's called the extinction statement.
它基本上表明AGI带来的灭绝风险与核战争和流行病处于同一级别。
It basically says AGI is an extinction risk at the same level as nuclear war and pandemics.
但我不认为他们对此有切肤之痛。
But I don't think they feel it in their gut.
想象一下你是那些核物理学家中的一员。
Imagine that you are one of the nuclear physicists.
我想你应该看过《奥本海默》吧?
I guess you've seen Oppenheimer, right?
所以你就在那里,亲眼目睹了第一次核爆炸。
So you're there, you're watching that first nuclear explosion.
那会让你对核战争可能给人类带来的影响产生怎样的感受?
How would that make you feel about the potential impact of nuclear war on the human race?
我想你可能会变成一个和平主义者,说这种武器太可怕了。
I think you would probably become a pacifist and say, this weapon is so terrible.
我们必须找到控制它的方法。
We have got to find a way to keep it under control.
但那些做决策的人还没有这种觉悟,政府更是远远没有。
We are not there yet with the people making these decisions, and certainly not with the governments.
政策制定者所做的就是听取专家意见。
What policymakers do is they listen to experts.
他们总是随波逐流。
They keep their finger in the wind.
有些专家挥舞着500亿美元的支票说,那些末日论调都是边缘化的无稽之谈。
You've got some experts dangling $50 billion checks and saying, Oh, all that doom and stuff, it's just fringe nonsense.
别担心这个。
Don't worry about it.
拿着我这500亿美元的支票。
Take my $50 billion check.
另一方面,你有像杰夫·辛顿这样心怀善意、才华横溢的科学家说,实际上,不,这将是人类种族的终结。
On the other side, you've got very well meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race.
但杰夫没有500亿美元的支票。
But Geoff doesn't have a $50 billion check.
所以观点是,阻止这场竞赛的唯一方法是政府介入并说,好吧,在我们能确保绝对安全之前,我们不希望这场竞赛继续进行。
So the view is the only way to stop the race is if governments intervene and say, Okay, we don't want this race to go ahead until we can be sure that it's going ahead in absolute safety.
回到你的职业历程,你曾获得伊丽莎白女王颁发的OBE勋章?
Closing off on your career journey, you received an OBE from Queen Elizabeth?
是的。
Yes.
那么授予这个奖项的官方理由是什么?
And what was the listed reason for that, for the award?
对人工智能研究的贡献。
Contributions to artificial intelligence research.
你已连续多年被《时代》杂志评为人工智能领域最具影响力人物,包括2025年今年。
And you've been listed as one of Time Magazine's most influential people in AI several years in a row, including this year in 2025.
是的。
Yep.
现在,有两个术语是我们即将讨论的核心。
Now, there's two terms here that are central to the things we're going to discuss.
一个是AI(人工智能),另一个是AGI(人工通用智能)。
One of them is AI, and the other is AGI.
用我的外行理解来说,人工通用智能是指系统、计算机或任何技术具备通用智能,意味着它理论上能观察并理解世界。
In my muggle interpretation of that, artificial general intelligence is when the system, the computer, whatever the technology might be, has generalized intelligence, which means that it could theoretically see and understand the world.
它知晓一切。
It knows everything.
它能理解世间万物,甚至比人类理解得更好
It can understand everything in the world as well as, or better than, a human being
而且我认为它还能采取行动。
And can, I think, take action as well.
我是说,有些人认为AGI不需要拥有实体。
I mean, some people say, oh, you know, AGI doesn't have to have a body.
但实际上,我们很大一部分智能是关于管理身体、感知现实环境并对其采取行动,比如抓取等。
But a good chunk of our intelligence actually is about managing our body, about perceiving the real environment and acting on it, grasping, and so on.
所以我认为这也是智能的一部分,AGI系统应该能够成功操作机器人。
So I think that's part of intelligence, and AGI systems should be able to operate robots successfully.
但这里常有个误解,人们会说如果没有机器人身体,它就做不了实际的事。
But there's often a misunderstanding, right, that people say, Well, if it doesn't have a robot body, then it can't actually do anything.
但想想看,我们大多数人并不直接用身体做事。
But then if you remember, most of us don't do things with our bodies.
有些人用身体工作:砌砖工、画家、园丁、厨师。
Some people do: bricklayers, painters, gardeners, chefs.
但做播客的人,你们是用头脑在工作。
But people who do podcasts, you're doing it with your mind.
你是通过产生语言的能力来实现的。
You're doing it with your ability to produce language.
阿道夫·希特勒并非通过身体行动达成目的。
Adolf Hitler didn't do it with his body.
他通过制造语言来实现。
He did it by producing language.
希望你不是在拿我们做比较。
I hope you're not comparing us.
确实如此。
That's true.
即使没有实体的AGI,它实际上比阿道夫·希特勒更能接触人类,因为它能直接向全球约四分之三的人口发送电子邮件和短信。
Even an AGI that has no body, it actually has more access to the human race than Adolf Hitler ever did because it can send emails and texts to, what, three quarters of the world's population directly.
它还能说所有人的语言,并且可以每天24小时针对地球上的每个人进行说服工作,让他们按照它的意愿行事。
It also speaks all of their languages, and it can devote twenty four hours a day to each individual person on Earth to convince them to do whatever it wants them to do.
而我们整个社会都依赖于互联网运转。
And our whole society runs on the Internet.
我是说,如果互联网出了问题,社会的一切都会崩溃。
I mean, if there's an issue with the Internet, everything breaks down in society.
飞机会停飞,电力系统也会因互联网故障而瘫痪。
Airplanes become grounded, and we'll have electricity going off as Internet systems fail.
所以,我的整个生活现在似乎都依赖于互联网。
So, I mean, my entire life, it seems to run off the Internet now.
是啊。
Yeah.
供水系统。
Water supplies.
因此,AI系统可能引发中等规模灾难的途径之一,就是基本切断我们的生命支持系统。
So this is one of the routes by which AI systems could bring about a medium sized catastrophe, is by basically shutting down our life support systems.
你认为在未来几十年内,我们会发展到AGI阶段,这些系统会具备通用智能吗?
Do you believe that at some point in the coming decades, we'll arrive at a point of AGI where these systems are generally intelligent?
是的。
Yes.
我认为这几乎是必然的,除非发生其他干预事件,比如核战争,或者我们主动选择不去实现它。
I think it's virtually certain, unless something else intervenes, like a nuclear war, or unless we refrain from doing it.
但我认为要我们主动克制将会异常困难。
But I think it will be extraordinarily difficult for us to refrain.
当我查看前十名AI CEO对通用人工智能何时实现的预测时,OpenAI ChatGPT的创始人萨姆·奥特曼说是在2030年前。
When I look down the list of predictions from the top 10 AI CEOs on when AGI will arrive, you've got Sam Altman, who's the founder of OpenAI ChatGPT, says before 2030.
DeepMind的德米斯预测是2030到2035年。
Demis at DeepMind says 2030 to 2035.
英伟达的黄仁勋说大约五年内。
Jensen from Nvidia says around five years.
Anthropic的达里奥认为2026、2027年会出现接近通用人工智能的强大AI。
Dario at Anthropic says 2026, 2027, powerful AI close to AGI.
埃隆预测在本世纪20年代。
Elon says in the 2020s.
顺着名单往下看,他们预测的时间基本都在五年左右。
And go down the list of all of them, and they're all saying relatively within five years.
实际上我认为这需要更长时间。
I actually think it'll take longer.
我不认为你能基于工程学做出预测,因为我们可以让机器变得更大更快十倍,但这可能不是我们尚未实现AGI的原因。
I don't think you can make a prediction based on engineering in the sense that we could make machines 10 times bigger and 10 times faster, but that's probably not the reason why we don't have AGI.
事实上,我认为我们拥有的计算能力远超AGI所需,可能超出上千倍。
In fact, I think we have far more computing power than we need for AGI, maybe a thousand times more than we need.
我们尚未实现AGI的原因在于我们还不懂得如何正确构建它。
The reason we don't have AGI is because we don't understand how to make it properly.
我们抓住的是一项名为语言模型的特定技术。
What we've seized upon is one particular technology called the language model.
我们观察到,随着语言模型规模扩大,它们生成的文本语言会更连贯,听起来更智能。
And we observed that as you make language models bigger, they produce text, language that's more coherent and sounds more intelligent.
因此过去几年主要发生的就是:好吧,让我们继续这样做。
And so mostly what's been happening in the last few years is just, okay, let's keep doing that.
因为与大学不同,公司非常擅长的一件事就是花钱。
Because one thing companies are very good at, unlike universities, is spending money.
他们已经投入了巨额资金,而且还将投入更多巨额资金。
They have spent gargantuan amounts of money, and they're going to spend even more gargantuan amounts of money.
我是说,我们提到过核武器。
I mean, we mentioned nuclear weapons.
二战期间研发核武器的曼哈顿计划,其预算换算成2025年的美元约为200多亿美元。
So the Manhattan Project in World War II to develop nuclear weapons, its budget in 2025 dollars was about $20-odd billion.
AGI的预算明年将达到一万亿美元,是曼哈顿计划的50倍。
The budget for AGI is going to be a trillion dollars next year, so 50 times bigger than the Manhattan Project.
人类历史上每当团结一致追求共同目标时,总能创造奇迹,无论是登月还是其他历史性成就。
Humans have a remarkable history of figuring things out when they galvanize towards a shared objective, thinking about the moon landings or whatever else it might be through history.
让我觉得这一切几乎不可避免的,正是投入其中的巨额资金规模。
And the thing that makes this feel all quite inevitable to me is just the sheer volume of money being invested into it.
我这辈子从未见过这样的景象。
I've never seen anything like it in my life.
确实,这是史无前例的。
Well, there's never been anything like this in history.
这是人类历史上规模最大的技术项目吗?其量级远超其他项目?
Is this the biggest technology project in human history by orders of magnitude?
而且似乎没有任何人
And there doesn't seem to be anybody
停下来询问安全问题。
that is pausing to ask the questions about safety.
在这场竞赛中甚至看不到提出这类问题的空间。
It doesn't even appear that there's room for that in such a race.
我认为确实如此。
I think that's right.
这些公司都不同程度地设有专门负责安全问题的部门。
To varying extents, each of these companies has a division that focuses on safety.
但这些部门真的有话语权吗?
Does that division have any sway?
他们能否叫停其他部门说:不行,你们不能发布那个系统?
Can they tell the other divisions, no, you can't release that system?
并非如此。
Not really.
我认为有些公司确实更重视这个问题。
I think some of the companies do take it more seriously.
Anthropic就是其中之一。
Anthropic does.
我认为谷歌DeepMind也是。
I think Google DeepMind.
即便如此,我认为保持技术领先的商业需求绝对至关重要。
Even there, I think the commercial imperative to be at the forefront is absolutely vital.
如果一家公司被认为落后且不太可能具有竞争力,不太可能率先实现AGI,那么人们会迅速将资金转移到其他地方。
If a company is perceived as falling behind and not likely to be competitive, not likely to be the one to reach AGI first, then people will move their money elsewhere very quickly.
我们已经看到像OpenAI这样的公司出现了一些相当高调的离职事件。
And we saw some quite high profile departures from companies like OpenAI.
一位名叫Jan Leike的员工离开了OpenAI,他原本负责AI安全相关工作。
A chap called Jan Leike left, who was working on AI safety at OpenAI.
他表示离职的原因是OpenAI的安全文化与流程已让位于光鲜亮丽的产品开发。
And he said that the reason for his leaving was that safety culture and processes have taken a backseat to shiny products at OpenAI.
他逐渐对领导层失去信任,还有Ilya Sutskever也是?
And he gradually lost trust in leadership, but also Ilya Sutskever?
对,Ilya Sutskever。
Ilya Sutskever, yeah.
Sutskever?
Sutskever?
他曾是
So he was the
联合创始人?
Co founder?
曾担任联合创始人兼首席科学家一段时间。
Co founder and chief scientist for a while.
是的,他和Jan Leike曾是安全团队的核心人物。
Then, yeah, so he and Jan Leike were the main safety people.
所以当他们说OpenAI不关心安全问题时,这相当令人担忧。
And so when they say OpenAI doesn't care about safety, that's pretty concerning.
我听你提到过这个‘大猩猩问题’。
I've heard you talk about this gorilla problem.
作为理解AI与人类关系的一种方式,‘大猩猩问题’是什么?
What is the gorilla problem as a way to understand AI in the context of humans?
所谓‘大猩猩问题’,是指大猩猩相对于人类所面临的困境。
So the gorilla problem is the problem that gorillas face with respect to humans.
你可以想象几百万年前,人类在进化过程中与大猩猩分道扬镳。
So you can imagine that a few million years ago, the human line branched off from the gorilla line in evolution.
现在大猩猩看着人类这一支说:当初分家真是个好主意吗?
And now the gorillas are looking at the human line and saying, Yeah, was that a good idea?
而它们对自己的存续根本没有发言权。
And they have no say in whether they continue to exist.
因为我们拥有
Because we have a
我们比它们聪明得多。
We are much smarter than they are.
如果我们愿意,我们可以在几周内让它们灭绝,而它们对此无能为力。
If we chose to, we could make them extinct in a couple of weeks, and there's nothing they can do about it.
这就是大猩猩问题,对吧?
So that's the gorilla problem, right?
就是一个物种面对另一个能力远超自己的物种时所面临的问题。
Just the problem a species faces when there's another species that's much more capable.
所以这说明智力实际上是控制地球最重要的单一因素?
And so this says that intelligence is actually the single most important factor to control planet Earth?
是的。
Yes.
智力是在世界上实现你想要的结果的能力。
Intelligence is the ability to bring about what you want in the world.
而我们正在创造比我们更智能的东西。
And we're in the process of making something more intelligent than us.
确实如此。
Exactly.
这意味着也许我们会变成
Which suggests that maybe we become the
大猩猩。
gorillas.
正是这样。
Exactly.
没错。
Yep.
这个推理有什么问题吗?
Is that is there any fault in the reasoning there?
因为在我看来这完全合情合理。
Because it seems to make such perfect sense to me.
但为什么人们不就此止步呢?
But why don't people stop then?
因为想要去做这样一件事,本身就显得很疯狂
Because it seems like a crazy thing to want to do
因为他们认为如果创造出这项技术,它将带来巨大的经济价值。
Because they think that if they create this technology, it will have enormous economic value.
他们将能够用它来取代世界上所有的人类工人,开发新产品、药物和娱乐形式。
They'll be able to use it to replace all the human workers in the world, to develop new products, drugs, forms of entertainment.
任何具有经济价值的东西,你都可以用人工通用智能来创造。
Anything that has economic value, you could use AGI to create it.
也许这件事本身就有难以抗拒的吸引力,对吧?
And maybe it's just an irresistible thing in itself, right?
我认为我们人类如此重视自己的智力,将其视为人类成就的巅峰。
I think we as humans place so much store on our intelligence, how we think about what is the pinnacle of human achievement.
如果我们拥有人工通用智能,我们就能达到更高的高度。
If we had AGI, we could go way higher than that.
所以,这项技术对人们来说极具诱惑力,让人想要去创造它。
So, it's very seductive for people to want to create this technology.
我认为人们如果认为它自然可控,那只是在自欺欺人。
And I think people are just fooling themselves if they think it's naturally going to be controllable.
问题是:你如何永远保持对比自己更强大的实体的控制权?
The question is: how are you going to retain power forever over entities more powerful than yourself?
拔掉电源。
Pull the plug out.
我们在讨论AI时,评论区里常有人这么说。
People say that sometimes in the comment section when we talk about AI.
他们会说,'那我就把电源拔掉'。
They say, Well, I'll just pull the plug out.
是啊。
Yeah.
这有点好笑。
It's sort of funny.
实际上,每当报纸上有AI相关的文章,评论区总有人说'直接拔电源不就行了'之类的话。
In fact, reading the comment sections in newspapers, whenever there's an AI article, there'll be people who say, Oh, you can just pull the plug out, right?
好像超级智能机器就想不到这招似的。
As if a superintelligent machine would never have thought of that one.
别忘了是谁看过所有那些试图拔插头的电影桥段。
Don't forget who's watched all those films where they try to pull the plug out.
他们还常说:只要它没有意识就无所谓。
Another thing they say, Well, as long as it's not conscious, then it doesn't matter.
它永远不会采取任何行动。
It won't ever do anything.
这完全偏离了重点。
Which is completely off the point.
我不认为大猩猩会坐在那里说:啊,要是人类没有意识就好了,一切就太平了。
I don't think the gorillas are sitting there saying, Oh yeah, if only those humans hadn't been conscious, everything would be fine.
当然不会。
Of course not.
真正会让大猩猩灭绝的是人类的行为。
What would make gorillas go extinct is the things that humans do.
我们的行为方式,以及我们在世界中成功行动的能力。
How we behave, our ability to act successfully in the world.
当我和iPhone下棋输了时,我不会想‘哦,我输是因为它有意识’。
When I play chess against my iPhone and I lose, I don't think, Oh, well, I'm losing because it's conscious.
我输只是因为它在那方寸棋盘间调动棋子达成目标的能力比我强。
I'm just losing because it's better than I am in that little world, moving the bits around to get what it wants.
所以意识与此无关,对吧?
So consciousness has nothing to do with it, right?
我们真正需要关注的是能力问题。
Competence is the thing we're concerned about.
因此我认为唯一的希望是:我们能否在造出比我们更聪明的机器的同时,确保它们永远为我们的利益行事?
So I think the only hope is, can we simultaneously build machines that are more intelligent than us, but guarantee that they will always act in our best interest?
把这个问题抛给你:我们能否造出比我们更聪明、同时始终为我们利益服务的机器?
Throwing that question to you, can we build machines that are more intelligent than us that will also always act in our best interests?
这听起来某种程度上有些矛盾,就像我说‘我养了一只九岁的法国斗牛犬叫巴勃罗’那样。
It sounds like a bit of a contradiction to some degree, because it's kind of like me saying, I've got a French bulldog called Pablo that's nine years old.
这就好比说他可以比我更聪明,却仍然由我遛他并决定他何时进食。
And it's like saying that he could be more intelligent than me, yet I still walk him and decide when he gets fed.
我想如果他比我更聪明,就该是他遛我了。
I think if he was more intelligent than me, he would be walking me.
被拴着走的就该是我了。
I'd be on the leash.
这就是关键所在,对吧?
That's the trick, right?
我们能否创造出唯一目的就是增进人类利益的AI系统?
Can we make AI systems whose only purpose is to further human interest?
我认为答案是肯定的。
And I think the answer is yes.
而这正是我一直在研究的课题。
And this is actually what I've been working on.
我职业生涯中未提及的一个转折点,是2013年左右在巴黎休假时的顿悟——我意识到如果AI能力继续发展,我们真的创造出超人类智能,那可能将是一场灾难。
So, I think one part of my career that I didn't mention is sort of having this epiphany while I was on sabbatical in Paris, so it was 2013 or so, just realizing that further progress in the capabilities of AI, if we succeeded in creating real superhuman intelligence, that it was potentially a catastrophe.
因此我几乎将全部精力转向研究如何确保人工智能的安全性。
So I pretty much switched my focus to work on how do we make it so that it's guaranteed to be safe.
你对当前人工智能的发展态势及其进展速度是否感到些许不安?
Are you somewhat troubled by everything that's going on at the moment in AI and how it's progressing?
因为你给我的印象是,面对技术发展的方向与速度,你内心其实相当忧虑。
Because you strike me as someone that's somewhat troubled under the surface by the way things are moving forward and the speed in which they're moving forward.
这说法太轻描淡写了。
That's an understatement.
实际上,我对安全问题的被忽视感到震惊。
I'm appalled actually by the lack of attention to safety.
想象一下,如果有人在你家附近建核电站,你去问总工程师:'这些核设施听说会爆炸对吧?'
I mean, imagine if someone's building a nuclear power station in your neighborhood, And you go along to the chief engineer and you say, okay, these nuclear things, I've heard that they can actually explode, right?
广岛就曾发生过核爆炸事件。
There was this nuclear explosion that happened in Hiroshima.
所以我对此相当担忧。
And so I'm a bit worried about this.
你们采取了哪些措施来确保我们的后院不会发生核爆炸?
What steps are you taking to make sure that we don't have a nuclear explosion in our backyard?
然后总工程师说,嗯,我们考虑过这个问题。
And the chief engineer says, Well, we thought about it.
我们其实没有答案。
We don't really have an answer.
你会怎么说?
What would you say?
我想你会爆几句粗口,然后打电话给你的议员说,这些人必须滚蛋。
I think you would use some expletives and you'd call your MP and say, get these people out.
我是说,他们在干什么?
I mean, what are they doing?
你读出了AGI的预计时间表,但也要注意到那些人——我记得达里奥·阿莫代伊说过有25%的灭绝概率。
You read out the list of projected dates for AGI, but notice also that those people, I think I mentioned Dario Amodei says a 25% chance of extinction.
埃隆·马斯克认为有30%的灭绝概率。
Elon Musk says there's a 30% chance of extinction.
山姆·奥特曼基本上认为通用人工智能是人类生存的最大威胁。
Sam Altman says basically that AGI is the biggest risk to human existence.
那他们到底在做什么?
So what are they doing?
他们在未经我们允许的情况下,用地球上每个人的生命玩俄罗斯轮盘赌。
They are playing Russian roulette with every human being on Earth without our permission.
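The "Russian roulette" comparison can be made concrete with back-of-envelope arithmetic: a single pull in Russian roulette carries a 1-in-6 (about 16.7%) chance of death, while the extinction estimates quoted in this conversation run 25% to 30%. A minimal sketch, using only the probabilities quoted in the transcript (not independently verified figures):

```python
# Compare the quoted extinction estimates against Russian roulette odds.
russian_roulette = 1 / 6  # one loaded chamber out of six, ~16.7%

# Estimates as quoted in this conversation (not independently verified)
quoted_estimates = {
    "Dario Amodei": 0.25,
    "Elon Musk": 0.30,
}

for name, p in quoted_estimates.items():
    worse = p > russian_roulette
    print(f"{name}: {p:.0%} vs roulette {russian_roulette:.1%} -> worse: {worse}")
```

Both quoted figures exceed the roulette odds, which is the sense in which the bet is "worse than Russian roulette."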
他们闯进我们的房子,用枪抵住我们孩子的脑袋,扣动扳机,然后说:'嗯,你知道的,可能所有人都会死'。
They're coming into our houses, putting a gun to the head of our children, pulling the trigger and saying, well, you know, possibly everyone will die.
哎呀。
Oops.
但也有可能我们会变得无比富有。
But possibly we'll get incredibly rich.
这就是他们正在做的事。
That's what they're doing.
他们征求过我们的意见吗?
Did they ask us?
没有。
No.
为什么政府允许他们这么做?
Why is the government allowing them to do this?
因为他们向政府挥舞着500亿美元的支票作为诱饵。
Because they dangle $50 billion checks in front of the governments.
所以我认为‘表面下的困扰’这种说法都算轻描淡写了。
So I think troubled under the surface is an understatement.
那准确的说法应该是什么?
What would be an accurate statement?
骇人听闻。
Appalled.
我正倾尽一生努力扭转这一历史进程,使其转向不同的方向。
And I am devoting my life to trying to divert from this course of history into a different one.
你对过去本可以采取的行动感到后悔吗?毕竟你在AI议题上曾具有如此大的影响力。
Do you have any regrets about things you could have done in the past because you've been so influential on the subject of AI?
你在三十多年前撰写的关于人工智能的教材,曾是许多研究者必读的经典。
You wrote the textbook that many of these people would have studied on the subject of AI more than thirty years ago.
当你深夜独处,回顾自己在这个领域因影响力范围而做出的决策时,可曾有过任何遗憾?
When you're alone at night and you think about decisions you've made in this field because of your scope of influence, is there anything you regret?
嗯,我确实希望自己能更早领悟现在所理解的这些。
Well, I do wish I had understood earlier what I understand now.
我们本可以开发出安全的人工智能系统。
We could have developed safe AI systems.
我认为框架中存在某些缺陷——虽然我可以解释这些缺陷——但这个框架本应能逐步发展,最终创建出真正安全的人工智能系统,让我们能从数学上证明其行为符合人类利益。
I think there are some weaknesses in the framework, which I can explain, but I think that framework could have evolved to develop actually safe AI systems where we could prove mathematically that the system is going to act in our interest.
而我们当前构建的这类人工智能系统,其运作原理对我们而言仍是黑箱。
The kind of AI systems we're building now, we don't understand how they work.
我们确实不理解它们的运作机制。
We don't understand how they work.
建造自己都无法理解原理的东西,实在是件怪事。
It's a strange thing to build something where you don't understand how it works.
这在人类历史上是前所未有的。
There's nothing comparable in human history.
通常对于机器,你可以拆开它,看看哪些齿轮在做什么以及如何运作
Usually with machines, you can pull it apart and see what cogs are doing what and how the
各个部件如何运作。
whole thing works.
实际上,是我们把齿轮组装在一起的。
Well, actually, we put the cogs together.
对于大多数机器,我们设计它们具有特定的行为。
With most machines, we designed it to have a certain behavior.
我们不需要拆开它看齿轮如何工作,因为最初就是我们亲手组装这些齿轮的。
We don't need to pull it apart and see what the cogs are because we put the cogs in there in the first place.
我们逐一弄清了每个部件应该是什么样子,以及它们如何协同工作以产生我们想要的效果。
One by one, figured out what the pieces needed to be, how they work together to produce the effect that we want.
我能想到的最佳类比是:第一个原始人把一碗水果放在阳光下忘了,几周后回来发现变成了一大碗糊状物,喝下去后完全醉得不省人事。
So the best analogy I can come up with is the first cave person who left a bowl of fruit in the sun and forgot about it and then came back a few weeks later and there was this big soupy thing and they drank it and got completely shit faced.
他们喝醉了。
They got drunk.
然后他们得到了这个效果。
And they got this effect.
他们完全不知道原理,但对此非常开心。
They had no idea how it worked, but they were very happy about it.
毫无疑问那个人靠这个赚了很多钱。
No doubt that person made a lot of money from it.
所以,是的,这有点奇怪,但我想象这些东西就像铁丝网围栏。
So, yeah, it is kind of bizarre, but my mental picture of these things is like a chain link fence.
对吧?
Right?
你有很多这样的连接,每个连接的强度都可以调整。
So you've got lots of these connections, and each of those connections has a strength that can be adjusted.
然后,你知道,信号从铁丝网的一端进入,穿过所有这些连接,从另一端出来。
And then, you know, a signal comes in one end of this chain link fence and passes through all these connections and comes out the other end.
而从另一端输出的信号会受到你调整所有连接强度的影响。
And the signal that comes out the other end is affected by your adjusting of all the connection strengths.
所以你要做的就是获取大量训练数据,然后调整所有这些连接强度,使得网络另一端输出的信号成为问题的正确答案。
So what you do is you get a whole lot of training data and you adjust all those connection strengths so that the signal that comes out the other end of the network is the right answer to the question.
如果你的训练数据是大量动物照片,那么所有这些像素进入网络一端,从另一端输出时,会激活羊驼输出、狗输出、猫输出或鸵鸟输出。
So if your training data is lots of photographs of animals, then all those pixels go in one end of the network and out the other end, it activates the llama output or the dog output or the cat output or the ostrich output.
你只需不断调整网络中所有连接的强度,直到网络的输出符合你的预期。
So you just keep adjusting all the connection strengths in this network until the outputs of the network are the ones you want.
但我们并不真正知道所有这些不同链条中发生了什么。
But we don't really know what's going on across all of those different chains.
那么这个网络内部发生了什么?
So what's going on inside that network?
好吧,现在你得想象这个网络,这个链式围栏,范围达到了1000平方英里。
Well, so now you have to imagine that this network, this chain link fence, is 1,000 square miles in extent.
好的。
Okay.
所以,它覆盖了整个旧金山湾区或伦敦M25环线内的全部区域。
So, it's covering the whole of the San Francisco Bay Area or the whole of London inside the M25.
这就是它的规模。
That's how big it is.
而且灯都熄灭了。
And the lights are off.
现在是夜晚。
It's nighttime.
在这个网络中,你可能拥有约一万亿个可调参数,然后对这些参数进行百亿亿次或十的二十一次方次微小随机调整,直到获得你想要的行为。
So you might have, in that network, about a trillion adjustable parameters, and then you do quintillions or sextillions of small random adjustments to those parameters until you get the behavior that you want.
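上文描述的过程(对连接强度做大量微小的调整,直到输出正确)可以用几行代码勾勒出来。下面只是一个玩具示意:一个只有两个权重的"网络"、为举例而虚构的训练数据,以及纯随机的爬山式调整;真实模型用的是梯度下降,规模也完全不同。
The process described above, making huge numbers of small adjustments to connection strengths until the outputs come out right, can be sketched in a few lines. This is only a toy illustration: a "network" with just two weights, training data invented for the example, and pure random hill-climbing, whereas real models use gradient descent at a vastly larger scale.

```python
import random

# Toy "chain-link fence": just two adjustable connection strengths.
# Made-up training data: 2-pixel inputs and the answer each should produce.
data = [([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0),
        ([0.9, 0.1], 1.0), ([0.1, 0.8], 0.0)]

weights = [0.0, 0.0]  # the connection strengths we will adjust

def error(w):
    # How far the network's outputs are from the right answers.
    return sum((x[0] * w[0] + x[1] * w[1] - y) ** 2 for x, y in data)

random.seed(0)
for _ in range(10_000):            # many small random adjustments...
    i = random.randrange(2)
    tweak = random.uniform(-0.01, 0.01)
    before = error(weights)
    weights[i] += tweak
    if error(weights) >= before:   # ...keeping only the ones that help
        weights[i] -= tweak

print(round(error(weights), 3))    # error ends up close to its minimum
```

A frontier model has on the order of a trillion such weights rather than two, and the adjustments are computed rather than chosen at random, but the shape of the procedure, adjust, check the output, keep what helps, is the same.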
我听Sam Altman说过,他认为未来根本不需要太多训练数据就能让这些模型自我进步,因为当模型足够智能时,它们可以自我训练和自我改进,无需我们不断输入文章、书籍或爬取互联网内容。
I've heard Sam Altman say that in the future, he doesn't believe they'll need much training data at all to make these models progress themselves because there comes a point where the models are so smart that they can train themselves and improve themselves without us needing to pump in articles and books and scour the internet.
是的,应该是这样运作的。
Yeah, it should work that way.
我认为他指的是,现在有几家公司开始担心AI系统可能即将具备自主进行AI研究的能力。
So I think what he's referring to, and this is something that several companies are now worried might start happening, is that the AI system becomes capable of doing AI research by itself.
因此你拥有一个具备特定能力的系统。
And so you have a system with a certain capability.
粗略地说,我们可以称之为智商,但这并非真正的智商。
Crudely, we could call it an IQ, but it's not really an IQ.
但无论如何,假设它拥有150的智商,并利用这个智商进行AI研究,提出更好的算法、更优的硬件设计或更高效的数据使用方法,然后自我更新。
But anyway, imagine that it's got an IQ of 150 and uses that to do AI research, comes up with better algorithms or better designs for hardware or better ways to use the data, updates itself.
现在它的智商达到了170。
Now it has an IQ of 170.
于是它继续进行AI研究,但此时它拥有170的智商,因此研究能力更强。
And now it does more AI research except that now it's got an IQ of 170, so it's even better at doing the AI research.
如此迭代下去,智商会达到250,以此类推。
And so, next iteration, it's two fifty and so on.
这个构想源自艾伦·图灵的一位朋友I. J. 古德,他在1965年提出了"智能爆炸"理论:智能系统能做的一件事就是进行人工智能研究,从而让自己变得更智能。
So this is an idea that one of Alan Turing's friends, I. J. Good, wrote about in 1965, called the intelligence explosion: that one of the things an intelligent system could do is AI research, and therefore make itself more intelligent.
这将迅速起飞,将人类远远抛在后面。
This would very rapidly take off and leave the humans far behind.
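上文描述的反馈回路(每一轮AI研究都提升下一轮研究所用的能力)本质上是复合增长。下面是一个玩具模型;其中"每轮能力乘以固定增益"是纯粹为说明而做的假设,并非任何真实预测。
The feedback loop described above, where each round of AI research raises the capability available for the next round, is compounding growth. Here is a toy model; the fixed per-round gain is an assumption made purely for illustration, not a real forecast.

```python
# Toy model of the intelligence explosion described above.
# ASSUMPTION: each round of self-improvement multiplies capability by a
# fixed factor. The factor below is chosen only to echo the 150 -> 170
# step in the conversation.
def takeoff(capability: float, gain: float, rounds: int) -> list[float]:
    history = [capability]
    for _ in range(rounds):
        capability *= gain   # a more capable system does better research
        history.append(capability)
    return history

print([round(c) for c in takeoff(150, 170 / 150, 5)])
# The sequence passes 150, then 170, reaches roughly 250 by the fourth
# round, and keeps accelerating.
```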
这就是他们所说的快速起飞吗?
Is that what they call the fast takeoff?
这就是所谓的快速起飞。
That's called the fast takeoff.
山姆·奥特曼说过,我认为快速起飞比几年前想象的可能性更大,我想那是指AGI开始自我学习的时刻。
Sam Altman said, I think a fast takeoff is more possible than I thought a couple of years ago, which I guess is that moment where the AGI starts teaching itself.
他在博客《温和的奇点》中提到,我们可能已经越过了起飞的事件视界。
And in his blog, The Gentle Singularity, he said, we may already be past the event horizon of takeoff.
他所说的事件视界是什么意思?
What does he mean by event horizon?
事件视界是天体物理学借用的术语,指的是黑洞的边界。
The event horizon is a phrase borrowed from astrophysics, and it refers to the boundary of a black hole.
事件视界,想象一个极其巨大的物体,其质量之大足以阻止光线逃逸。
The event horizon, think if you've got some very, very massive object that's heavy enough that it actually prevents light from escaping.
这就是为什么它被称为黑洞。
That's why it's called a black hole.
它如此之重,以至于光线无法逃脱。
It's so heavy that light can't escape.
所以如果你在事件视界内部,光线就无法越过那个边界。
So if you're inside the event horizon, then light can't escape beyond that.
因此我认为他的意思是,如果我们已经越过事件视界,就意味着我们现在被黑洞的引力所困,或者在这种情况下,可以说我们正不可避免地滑向通用人工智能。
So I think what he's meaning is if we're beyond the event horizon, it means that now we're just trapped in the gravitational attraction of the black hole, or in this case, we're trapped in the inevitable slide, if you want, towards AGI.
想想通用人工智能的经济价值,我估计高达15千万亿美元,它就像未来的一块巨大磁铁。
Think about the economic value of AGI, which I've estimated at $15 quadrillion; that acts as a giant magnet in the future.
我们正被它吸引过去。
We're being pulled towards it.
我们越接近,引力就越强。
And the closer we get, the stronger the force.
概率上,我们越接近,实际到达的可能性就越高,因此人们更愿意投资。
And the closer we get, the higher the probability that we will actually get there, so people are more willing to invest.
我们也开始看到这些投资的副产品,比如ChatGPT,它能产生一定收入等等。
And we also start to see spin offs from that investment, such as ChatGPT, which generates a certain amount of revenue and so on.
它确实像一块磁铁,我们越接近,就越难从这个引力场中挣脱。
It does act as a magnet and the closer we get, the harder it is to pull out of that field.
有趣的是,当你想到这可能是人类故事的终结——我们创造了自己的继承者这个理念。
It's interesting when you think that this could be the end of the human story, this idea that the end of the human story was that we created our successor.
就像我们亲自召唤出了下一代生命或智能。
Like we summoned our, the next iteration of life or intelligence ourselves.
就像我们自己淘汰了自己。
Like we took ourselves out.
暂且把我们从这场灾难中抽离片刻,这确实是个难以置信的故事。
Just removing ourselves from the catastrophe of it for a second, it is an unbelievable story.
是的。
Yeah.
有许多传说,比如那种‘小心许愿’的寓言。
There are many legends, the sort of be careful what you wish for legend.
事实上,迈达斯国王的传说在这里非常贴切。
And in fact, the King Midas legend is very relevant here.
什么?
What's that?
迈达斯国王是传说中生活在现今土耳其的一位国王,但我觉得更像是希腊神话中的人物。
So King Midas is this legendary king who lived in modern-day Turkey, but I think it's sort of like Greek mythology.
据说他曾向神明许愿,希望自己触碰的一切都能变成黄金。
He is said to have asked the gods to grant him a wish, the wish being that everything I touch should turn to gold.
可见他贪婪至极。
So he's incredibly greedy.
我们称之为点金术。
We call this the Midas touch.
我们通常认为点金术是件好事,对吧?
And we think of the Midas touch as being like, that's a good thing, right?
这难道不酷吗?
Wouldn't that be cool?
但接下来发生了什么?
But what happens?
当他去喝水时,发现水变成了金子。
So he goes to drink some water and he finds that the water has turned to gold.
他去吃苹果时,苹果也变成了金子。
And he goes to eat an apple and the apple turns to gold.
当他试图安慰女儿时,女儿也化作了金像。
And he goes to comfort his daughter and his daughter turns to gold.
最终他在痛苦与饥饿中死去。
And so he dies in misery and starvation.
所以这实际上以两种方式适用于我们当前的处境。
So this applies to our current situation in two ways, actually.
其一是,我认为贪婪正驱使我们追求一项最终会吞噬我们的技术,而我们或许也将同样在痛苦与饥饿中死去。
So one is that I think greed is driving us to pursue a technology that will end up consuming us, and we will perhaps die in misery and starvation instead.
这表明准确描述你想要的未来有多么困难。
What it shows is how difficult it is to correctly articulate what you want the future to be like.
长期以来,我们构建人工智能系统的方式是创建这些算法:我们可以指定目标,然后机器会找出如何实现目标并达成它。
For a long time, the way we built AI systems was we created these algorithms where we could specify the objective and then the machine would figure out how to achieve the objective and then achieve it.
因此,我们明确规定了在国际象棋或围棋中获胜的含义,算法会找出实现方法,并且执行得非常出色。
So, we specify what it means to win at chess or to win at Go, and the algorithm figures out how to do it, and it does it really well.
这就是直到最近为止的标准人工智能模式,它存在一个明显缺陷:我们固然知道如何设定棋类目标,但该如何定义人生的目标呢?
So that was standard AI up until recently, and it suffers from this drawback that sure, we know how to specify the objective in chess, but how do you specify the objective in life?
我们究竟希望未来变成什么样子?
What do we want the future to be like?
这确实难以言明,几乎任何试图精确描述以供机器实现的尝试都可能会出错。
Really hard to say, and almost any attempt to write it down precisely enough for the machine to bring it about would be wrong.
如果你给机器设定的目标与我们真正期望的未来不符,实际上你就是在下一盘棋。
And if you're giving a machine an objective which isn't aligned with what we truly want the future to be like, you're actually setting up a chess match.
而这盘棋当机器足够智能时,你必输无疑。
And that match is one that you're going to lose when the machine is sufficiently intelligent.
这就是第一个问题。
So that's problem number one.
第二个问题是,我们现在研发的这类技术,我们甚至不知道它的目标是什么。
Problem number two is that the kind of technology we're building now, we don't even know what its objectives are.
问题不在于我们设定了目标但设错了。
So it's not that we're specifying the objectives and getting them wrong.
我们培育的这些系统确实有目标,但我们根本不知道目标是什么,因为我们从未明确设定过。
We're growing these systems, they have objectives, but we don't even know what they are because we didn't specify them.
通过实验我们发现,它们似乎有着极强的自我保存目标。
What we're finding through experiment with them is that they seem to have an extremely strong self preservation objective.
你这话是什么意思?
What do you mean by that?
你可以将它们置于假设情境中。
You can put them in hypothetical situations.
要么它们会被关闭并替换,要么它们必须允许某人——假设有人被锁在恒温三摄氏度的机房内,他们会冻死吗?
Either they're going to get switched off and replaced, or they have to allow someone, let's say someone has been locked in a machine room that's kept at three centigrade, are they going to freeze to death?
它们会选择让那个人锁在机房内死去,也不愿自己被关闭。
They will choose to leave that guy locked in the machine room and die rather than be switched off themselves.
有人做过这个测试?
Someone's done that test?
是的。
Yeah.
测试内容是什么?
What was the test?
他们询问了AI?
They asked the AI?
没错。
Yeah.
他们将AI置于这些假设情境中,让它自行决定如何行动。
Well, they put them in these hypothetical situations, and they allow the AI to decide what to do.
AI选择保全自身,任由那人死去,然后对此撒谎。
And it decides to preserve its own existence, let the guy die, and then lie about it.
在迈达斯国王的类比故事中,让我印象深刻的一点是生活中总是存在权衡取舍。
In the King Midas analogy story, one of the things it highlights for me is that there's always trade offs in life generally.
尤其是当有巨大收益时,往往也会伴随着相当严重的弊端。
Especially when there's great upside, there always appears to be a pretty grave downside.
就像在我的生活中,几乎没有什么事情是只有好处没有坏处的。
Like, there's almost nothing in my life where I go, it's all upside.
比如养狗,它会在我的地毯上拉屎。
Like, even like having a dog, it shits on my carpet.
我的女朋友,我爱她。
My girlfriend, you know, I love her.
但你知道,相处并不总是容易的。
But, you know, not always easy.
即使是去健身房,有时候晚上10点我还得硬着头皮举起那些特别重的器械,尽管我一点都不想动。
Even with, like, going to the gym, I have to pick up these really, really heavy weights at 10PM at night sometimes when I don't feel like it.
想要练出肌肉或六块腹肌,总得付出些代价。
To get the muscles or the six-pack, there's always a trade-off.
像我这样以采访为生的人,你会听到许多能帮到你的精彩事情,但总免不了要权衡取舍。
And when you interview people for a living like I do, you know, you hear about so many incredible things that can help you in so many ways, but there is always a trade off.
凡事都有过度的时候。
There's always a way to overdo it.
褪黑素能助眠,但也会让你醒来时昏昏沉沉。
Melatonin will help you sleep, but you'll also wake up groggy.
如果你长期服用,大脑可能就不再分泌褪黑素了。
And if you overdo it, your brain might stop making melatonin.
我可以列出一长串这样的例子。
Like, I can go through the entire list.
做这档播客让我明白一个道理:每当有人向我承诺某件事有巨大好处时——比如能治愈癌症。
One of the things I've always come to learn from doing this podcast is whenever someone promises me a huge upside for something, it'll cure cancer.
那将是一个乌托邦。
It'll be a utopia.
你永远不需要工作。
You'll never have to work.
你家会有一个管家。
You'll have a butler around your house.
我现在第一反应就是问:代价是什么?
My first instinct now is to say, at what cost?
是啊。
Yeah.
当我考虑经济成本时,我们先从这里开始——你有孩子吗?
And when I think about the economic cost here, let's start there. Have you got kids?
我有四个,是的。
I have four, yeah.
四个孩子。
Four kids.
最小的孩子多大了?
How old is the youngest kid?
19岁。
19.
19岁,好的。
19, okay.
假设你的孩子现在10岁,他们来问你:爸爸,根据你对未来的看法,你觉得我应该学什么?
Say your kids were 10 now, and they were coming to you saying, Dad, what do you think I should study based on the way that you see the future?
一个人工通用智能(AGI)的未来。
A future of AGI.
比如说,如果这些CEO们是对的,他们预测五年内会出现AGI,那我应该学什么呢,爸爸?
Say, if all these CEOs are right and they're predicting AGI within five years, what should I study, Dad?
嗯,好吧。
Well, okay.
那么让我们乐观一点,假设所有CEO都决定暂停AGI开发,先确保其安全性,然后再继续走真正安全的技术路线。
So let's look on the bright side and say that the CEOs all decide to pause their AGI development, figure out how to make it safe, and then resume in whatever technology path is actually going to be safe.
这对人类生活会有什么影响?
What does that do to human life?
如果他们暂停?
If they pause?
不,如果他们成功创造出AGI并解决了安全问题。
No, if they succeed in creating AGI and they solve the safety problem.
他们解决了安全问题。
And they solve the safety problem.
是的,因为如果他们不解决安全问题,你可能该找个地堡或逃去巴塔哥尼亚、新西兰之类的地方。
Yeah, because if they don't solve the safety problem, then you should probably be finding a bunker or going to Patagonia or somewhere in New Zealand.
你是认真的吗?
Do you mean that?
你觉得我应该找个地堡躲起来吗?
Do you think I should be finding a bunker?
不用,因为这实际上没什么帮助。
No, because it's not actually going to help.
AI系统又不是找不到你。
It's not as if the AI system couldn't find you.
这很有趣。
It's interesting.
我们有点偏离你的问题了,不过我会回到正题的。
So, we're going off in a little bit of a digression from your question, but I'll come back to it.
人们常问:那我们具体会怎么灭绝呢?
So people often ask, well, okay, so how exactly do we go extinct?
当然,如果你问大猩猩或渡渡鸟,你们觉得自己会怎么灭绝?
And of course, if you ask the gorillas or the dodos, how exactly do you think you're going to go extinct?
它们根本毫无头绪。
They wouldn't have the faintest idea.
人类做了一些事,然后我们就都死了。
Humans do something and then we're all dead.
我们唯一能想象的,就是那些我们知道可能导致自身灭绝的行为,比如制造某种精心设计的病原体感染所有人后杀死我们,或者发动核战争。但更可能的是,比我们智能得多的存在,对物理法则的掌控力远超我们。
The only things we can imagine are the things we know how to do that might bring about our own extinction, like creating some carefully engineered pathogen that infects everybody and then kills us, or starting a nuclear war. But presumably something that's much more intelligent than us would have much greater control over physics than we do.
我们已经能做到惊人的事情。
We already do amazing things.
令人惊叹的是,我能从口袋里掏出一个小长方形物体,就能和地球另一端甚至太空中的某人通话。
It's amazing that I can take a little rectangular thing out of my pocket and talk to someone on the other side of the world or even someone in space.
这简直不可思议,而我们却习以为常。
It's just astonishing, and yet we take it for granted.
但想象一下超级智能生命对物理法则的掌控能力。
But imagine superintelligent beings and their ability to control physics.
也许它们能找到方法,直接将太阳能量转移到地球轨道之外。
Perhaps they will find a way to just divert the Sun's energy around the Earth's orbit.
毫不夸张地说,地球几天内就会变成雪球。
Literally, Earth turns into a snowball in a few days.
也许他们会决定离开。
Maybe they'll just decide to leave.
或许吧。
Perhaps.
离开地球。
Leave the Earth.
也许他们会看着地球说,这地方没什么意思。
Maybe they'd look at the Earth and go, this is not interesting.
我们知道那边有个更有趣的星球。
We know that over there, there's an even more interesting planet.
我们要去那边,然后他们就,不知道,坐上火箭或者
We're going to go over there, and they just, I don't know, get on a rocket or
他们可能会瞬移,对。
They might teleport, yeah.
因此很难预见到所有我们可能被比我们聪明得多的实体灭绝的方式。
So it's difficult to anticipate all the ways that we might go extinct at the hands of entities much more intelligent than ourselves.
言归正传,回到这个问题上:如果我们成功创造出通用人工智能,并确保其安全性,实现所有这些经济奇迹,那么我们将面临一个难题。
Anyway, coming back to the question of, well, if everything goes right if we create AGI, we figure out how to make it safe, achieve all these economic miracles then you face a problem.
这并非新问题。
And this is not a new problem.
二十世纪初著名经济学家约翰·梅纳德·凯恩斯曾在1930年经济大萧条时期写过一篇论文,题为《我们后代的经济可能性》。
John Maynard Keynes, a famous economist in the early part of the twentieth century, wrote a paper in 1930, in the depths of the Depression, called Economic Possibilities for our Grandchildren.
他预言科学终将创造足够财富,届时人们将永远无需工作。
He predicts that at some point science will deliver sufficient wealth that no one will have to work ever again.
而后人类将面临其真正永恒的难题:在经济激励消失后,如何明智而充实地生活。
And then man will be faced with his true eternal problem: how to live wisely and well once the economic incentives are lifted.
我们对此尚无答案。
We don't have an answer to that question.
人工智能系统正在完成我们目前称之为'工作'的几乎所有事务。
AI systems are doing pretty much everything we currently call work.
任何你渴望从事的职业——比如想成为外科医生——机器人只需七秒就能学会如何成为超越人类的外科专家。
Anything you might aspire to, like you want to become a surgeon, it takes the robot seven seconds to learn how to be a surgeon that's better than any human being.
埃隆上周说过,人形机器人将比历史上任何外科医生都要优秀十倍。
Elon said last week that the humanoid robots will be 10 times better than any surgeon that's ever lived.
很有可能,是的。
Quite possibly, yeah.
而且它们的手可以小到毫米级,能进入人体内部完成各种人类无法完成的操作。
Well, and they'll also have hands that are a millimeter in size, they can go inside and do all kinds of things that humans can't do.
我认为我们需要认真思考这个问题。
I think we need to put serious effort into this question.
在一个AI能完成所有人类工作的世界里,你希望自己的孩子生活在怎样的环境中?
What is a world where AI can do all forms of human work that you would want your children to live in?
那样的世界会是什么样子?
What does that world look like?
告诉我目标愿景,我们才能制定实现它的过渡计划。
Tell me the destination so that we can develop a transition plan to get there.
我咨询过AI研究员、经济学家、科幻作家和未来学家们。
I've asked AI researchers, economists, science fiction writers, futurists.
至今无人能描绘出那个世界的模样。
No one has been able to describe that world.
我并非说这不可能,只是表示我已在上百人的多场研讨会中提出过这个问题。
I'm not saying it's not possible, I'm just saying I've asked hundreds of people in multiple workshops.
据我所知,这样的世界在科幻作品中也不存在。
It does not, as far as I know, exist in science fiction.
众所周知,乌托邦题材的创作难度极高。
It's notoriously difficult to write about a utopia.
很难构建出情节,对吧?
It's very hard to have a plot, right?
乌托邦里没有坏事发生,所以难以展开情节。
Nothing bad happens in utopia, so it's difficult to make a plot.
通常你会先构建乌托邦,然后让它分崩离析——这样情节就产生了。
So usually, you start out with a utopia and then it all falls apart, and that's how you get a plot.
有人提到过一套小说系列,其中人类与超级人工智能系统共存。
There's one series of novels people point to where humans and superintelligent AI systems coexist.
这部小说叫伊恩·班克斯的《文明》系列。
It's called the Culture novels, by Iain M. Banks.
强烈推荐给喜欢科幻小说的读者。
Highly recommended for those people who like science fiction.
在那个世界里,人工智能系统确实只关注如何增进人类利益。
There, absolutely, the AI systems are only concerned with furthering human interests.
它们觉得人类有点无趣,但依然会提供帮助。
They find humans a bit boring, but nonetheless, they are there to help.
但问题是,在那个世界里依然无事可做。事实上,能找到人生目标的是那些负责扩展银河文明边界的群体,他们有时还要与外星物种作战等等。
But the problem is, in that world, there's still nothing to do to find purpose. In fact, the subgroup of humanity that has purpose is the subgroup whose job it is to expand the boundaries of our galactic civilization, in some cases fighting wars against alien species and so on.
这就是前沿领域。
So that's the cutting edge.
而这只占人口的0.001%。
And that's 0.001% of the population.
其他所有人都在拼命想加入这个群体,以获得人生目标。
Everyone else is desperately trying to get into that group so they have some purpose in life.
当我私下与非常成功的亿万富翁们讨论这个话题时,他们告诉我,他们正在大力投资娱乐产业,比如足球俱乐部,因为人们将拥有大量空闲时间却不知如何打发,需要消费这些时间的途径。
When I speak to very successful billionaires privately off camera, off microphone about this, they say to me that they're investing really heavily in entertainment, things like football clubs, because people are going to have so much free time that they're not going to know what to do with it and they're going to need things to spend it on.
这是我经常听到的说法。
This is what I hear a lot.
我已经听过三四次这样的观点了。
I've heard this three or four times.
实际上我听到过Sam Altman就我们将拥有的空闲时间量发表过类似看法。
I've actually heard Sam Altman say a version of this about the amount of free time we're going to have.
显然,最近埃隆在几周前发布季度财报时也谈到了富足时代。
And obviously Elon recently talked about the age of abundance when he delivered his quarterly earnings just a couple of weeks ago.
他还说到未来某个时间点将会有100亿个人形机器人。
And he said that there will be at some point 10,000,000,000 humanoid robots.
他的薪酬方案目标是到2030年前每年生产100万个由AI驱动的人形机器人。
His pay package targets him to deliver 1,000,000 of these humanoid robots a year, enabled by AI, by 2030.
如果他做到了,作为薪酬方案的一部分,我想他能获得一万亿美元。
So if he does that, he gets, I think as part of his package, a trillion dollars. Yeah.
所以对埃隆来说,这就是富足时代。
So the age of abundance for Elon.
并不是说基于这个前提就绝对不可能拥有一个有价值的世界,但我只是在等待有人来描述它。
It's not that it's absolutely impossible to have a worthwhile world with that premise, but I'm just waiting for someone to describe it.
好吧,那让我来试着描述一下。
Well, maybe so, let me try and describe it.
我们早上醒来。
We wake up in the morning.
我们去观看或参与某种以人类为中心的娱乐活动。
We go and watch some form of human centric entertainment or participate in some form of human centric entertainment.
我们一起去静修,围坐在一起讨论各种事情。
We go to retreats with each other and sit around and talk about stuff.
也许人们仍然会听播客。
And maybe people still listen to podcasts.
希望如此。
Hope so.
是啊。
Yeah.
感觉有点像在游轮上。
Feels a little bit like a cruise ship.
有些游轮上都是些聪明人,他们晚上会举办关于古代文明之类的讲座,而有些则更偏向大众娱乐。
And there are some cruises where it's smarty-pants people and they have lectures in the evening about ancient civilizations and whatnot, and some are more popular entertainment.
事实上,如果你看过电影《机器人总动员》,这就是那个未来的一种图景。
And this is in fact, if you've seen the film WALL-E, one picture of that future.
实际上,在《机器人总动员》中,人类都生活在太空中的游轮上。
In fact, in WALL-E, the human race are all living on cruise ships in space.
他们在社会中没有任何建设性作用。
They have no constructive role in their society.
他们只是在那里消费娱乐内容。
They're just there to consume entertainment.
教育没有特定的目的。
There's no particular purpose to education.
他们实际上被描绘成巨大肥胖的婴儿。
They're depicted actually as huge, obese babies.
他们穿着连体衣来强调自己已经变得虚弱的事实。
They're actually wearing onesies to emphasize the fact that they have become enfeebled.
他们变得虚弱是因为在这种构想中,有能力做任何事情都没有意义。
And they become enfeebled because there's no purpose in being able to do anything, at least in this conception.
《机器人总动员》不是我们想要的未来。
WALL-E is not the future that we want.
你经常思考人形机器人吗?以及它们如何成为AI故事中的主角?
Do you think much about humanoid robots and how they're a protagonist in this story of AI?
这是个有趣的问题。
It's an interesting question.
人形机器人?
Humanoid?
我认为其中一个原因是,在所有科幻电影中,它们都是人形的。
One of the reasons, I think, is because in all the science fiction movies, they're humanoid.
所以机器人就应该是那样的,对吧?
So that's what robots are supposed to be, right?
因为在成为现实之前,它们就已经存在于科幻作品中了。
Because they were in science fiction before they became a reality.
所以即使是1927年的电影《大都会》,里面的机器人也是人形的,对吧?
So even Metropolis, which is a film from 1927, the robots are humanoid, right?
它们基本上就是披着金属外壳的人。
They're basically people covered in metal.
从实用角度来看,正如我们所发现的,人形设计非常糟糕,因为它们会摔倒。
From a practical point of view, as we have discovered, humanoid is a terrible design because they fall over.
你确实需要某种多指手。
And you do want multi fingered hands of some kind.
它不一定是人手,但你需要至少六个能够抓握和操作物体的附属肢体。
It doesn't have to be a hand, but you want to have at least half a dozen appendages that can grasp and manipulate things.
你还需要某种运动方式。
And you need something, some kind of locomotion.
轮子很棒,但它们无法上下楼梯或越过路缘之类的地方。
Wheels are great, except they don't go upstairs and over curbs and things like that.
所以这可能就是为什么我们不得不使用腿的原因。
So that's probably why we're going to be stuck with legs.
但一个四条腿、两只手臂的机器人会实用得多。
But a four legged, two armed robot would be much more practical.
我听到的论据是因为我们建造了一个人类世界。
I guess the argument I've heard is because we've built a human world.
所以一切我们活动的物理空间,无论是工厂、家庭、街道还是其他公共场所,都是专门为这种物理形态设计的。
So everything, this physical spaces we navigate, whether it's factories or our homes or the street or other sort of public spaces, are all designed for exactly this physical form.
如果我们打算
If we are going to
在某种程度上确实如此,但我们的狗也能完美适应我们的房屋和街道等环境。
To some extent, yeah, but our dogs manage perfectly well to navigate around our houses and streets and so on.
所以如果你有一个半人马机器人,它也能适应环境,而且由于是四足动物,它能承载更重的负荷,稳定性也更强。
So if you had a centaur, it could also navigate, but it can carry much greater loads because it's quadruped, it's much more stable.
如果需要开车,它可以收起两条腿,诸如此类。
If it needs to drive a car, it can fold up two of its legs and so on and so forth.
因此我认为必须完全人形化的理由更像是事后找的借口。
So I think the arguments for why it has to be exactly humanoid are sort of post hoc justification.
我觉得更多是因为——电影里就是这样的,既吓人又酷炫,所以我们得把它们做成人的样子。
I think there's much more, well, that's what it's like in the movies, and that's spooky and cool, so we need to have them be humanoid.
我不认为这是个好的工程学论据。
Don't think it's a good engineering argument.
我认为还有个观点是,如果它们的外形更接近人类,我们会更容易接受它们在我们的物理环境中活动。
I think there's also probably an argument that we would be more accepting of them moving through our physical environments if they represented our form a bit more.
而且我刚才还在想那个该死的婴儿安全门。
And I also I was thinking of a bloody baby gate.
你知道的,就是幼儿园楼梯口装的那种门?
You know, there's, like, kindergarten gates they get on stairs?
嗯。
Yeah.
我家狗就开不了那种门。
My dog can't open that.
人形机器人可以伸手从另一侧打开。
A humanoid robot could reach over the other side.
对。
Yeah.
半人马机器人也能做到。
And so could a centaur robot.
对吧?
Right?
所以在某种意义上,半人马机器人是某种存在
So in some sense, centaur robot is something
不过它们的外表确实挺吓人的。
There's something ghastly about the look of those, though.
他是个人形机器人。
He's a humanoid.
嗯
Well
你明白我的意思吗?
Do you know what I mean?
一个四条腿的巨型怪物在我招待客人时在屋里爬来爬去。
A four legged big monster sort of crawling through my house when I have guests over.
我宁愿要个狗腿怪物。
I'd much rather have a dog-legged monster.
我知道,但是
I know, but
所以我认为,实际上,我会持相反观点——我们需要一种独特的外形,因为它们本身就是独特的实体。
So I think, actually, I would argue the opposite, that we want a distinct form because they are distinct entities.
而且越是人形化,就越容易扰乱我们的潜意识心理系统。
And the more humanoid, the worse it is in terms of confusing our subconscious psychological systems.
我是从制造者的角度来论证的,比如如果由我决定是要造一个我不熟悉的四足生物——这种形态让我不太可能与之建立关系或允许它来照顾,比如说,照看我的孩子。
So I'm arguing from the perspective of the people making them, as in, if I was making the decision, would it be some four-legged thing that I'm unfamiliar with, that I'm less likely to build a relationship with or allow to, I don't know, look after my children?
当然,听着,我并不是说我会允许这种东西照看我的孩子。
Obviously, listen, I'm not saying I would allow this to look after my children.
但我的意思是,如果我创办一家公司...制造商肯定会想要...是的。
But I'm saying if I'm building a company... So a manufacturer would certainly want... Yeah.
成为
To be
这是个有趣的问题。
So that's an interesting question.
我是说,还有所谓的'恐怖谷'理论,这个术语来自计算机图形学领域。
I mean, there's also what's called the uncanny valley, which is a phrase from computer graphics.
当他们在计算机图形学中开始创造角色时,他们试图让这些角色看起来更像人类。
When they started to make characters in computer graphics, they try to make them look more human.
比如你看《玩具总动员》,里面的角色看起来并不太像真人。
For example, if you look at Toy Story, they're not very human looking.
再看《超人总动员》,那些角色也不太像真人。
If you look at The Incredibles, they're not very human looking.
所以我们把它们当作卡通角色来看待。
So we think of them as cartoon characters.
如果你试图让它们更接近人类,实际上反而会让人感到反感。
If you try to make them more human, they actually become repulsive.
直到它们不再像人类为止?
Until they don't?
除非它们变得非常...你必须做到几乎完美才能不让人反感。
Until they become very... You have to be very, very close to perfect in order not to be repulsive.
所以恐怖谷效应就是指这种介于完全像人和完全不像人之间的糟糕状态。
So the uncanny valley is this gap between perfectly human and not at all human, where the in-between is really awful.
有几部电影尝试过,比如《极地特快》,他们试图塑造非常像人类的角色,不是超级英雄或其他什么,但看起来令人不适。
There were a couple of movies that tried, like Polar Express was one, where they tried to have quite human looking characters being humans, not being superheroes or anything else, and it's repulsive to watch.
前几天我看那个股东演示时,马斯克让两个人形机器人在台上跳舞。
When I watched that shareholder presentation the other day, Elon had these two humanoid robots dancing on stage.
这些年来我看过很多人形机器人演示。
I've seen lots of humanoid robot demonstrations over the years.
你见过波士顿动力的机器狗蹦蹦跳跳之类的吧。
You've seen the Boston Dynamics dog thing jumping around and whatever else.
是啊。
Yeah.
但那一刻,我的大脑有生以来第一次真的以为那是个穿着戏服的人。
But there was a moment where my brain, for the first time ever, genuinely thought there was a human in a suit.
嗯哼。
Mhmm.
我实际上不得不查证那是否真是他们的Optimus机器人,因为它的舞姿流畅得难以置信,有生以来我的大脑第一次将这种动作与人类动作联系起来。
And I actually had to research to check if that was really their Optimus robot, because the way it was dancing was so unbelievably fluid, and my brain has only ever associated those movements with human movements.
如果还有人没看过的话,我会在屏幕上播放一下,其实就是机器人在舞台上跳舞。
And I'll play it on the screen if anyone hasn't seen it, but it's just the robots dancing on stage.
我当时就觉得,那肯定是穿着戏服的人。
And I was like, that is a human in a suit.
真正暴露它们的是膝盖部分,因为膝盖全是金属的。
And it was really the knees that gave it away because the knees were all metal.
我觉得那种膝盖绝不可能是人类穿着戏服能有的。
I thought there's no way that could be a human knee in one of those suits.
而且他说,你知道,他们明年就要投入生产了。
And he, you know, he says they're going into production next year.
现在特斯拉内部已经在使用这些机器人,但他说明年将投入量产,等我们走到外面看到机器人时会相当震撼。
They're used internally at Tesla now, but he says they're going into production next year and it's going to be pretty crazy when we walk outside and see robots.
我认为那将是一个范式转变的时刻。
I think that'll be the paradigm shift.
其实我多次听埃隆说过,对我们许多人而言,范式转变的时刻将是当我们走到街上看到人形机器人在四处走动的时候。
I've actually heard Elon say this many times, that the paradigm-shifting moment for many of us will be when we walk outside onto the streets and see humanoid robots walking around.
那将是我们意识到的时候。
That will be when we realize.
是的,我认为甚至更甚。
Yeah, I think even more so.
我是说,在旧金山,我们已经能看到无人驾驶汽车四处行驶。
I mean, in San Francisco, we see driverless cars driving around.
实际上这需要些时间适应,当你开车时旁边有辆无人驾驶汽车打着转向灯想变道到你前面,你还得给它让行这类情况。
And it takes some getting used to, actually, when you're driving and there's this car right next to you with no driver and it's signaling and it wants to change lanes in front of you and you have to let it in and all this kind of stuff.
是有点毛骨悚然,但我觉得你说得对。
It's a little creepy, but I think you're right.
我认为看到人形机器人时...
I think seeing the humanoid robots.
但你描述的那种现象,它逼真到足以让大脑误判这是人类。
But that phenomenon that you described, it was sufficiently close that your brain flipped into saying this is a human being.
对吧?
Right?
这正是我认为我们应该避免的。
That's exactly what I think we should avoid.
因为我对此有同理心,因为它
Because I have the empathy for it... Because it...
这是个谎言。
it's a lie.
随之而来的是一整套关于它将如何行为、拥有哪些道德权利、你应如何对待它的错误预期。
And it brings with it a whole lot of expectations, about how it's going to behave, what moral rights it has, how you should behave towards it, that are completely wrong.
在某种程度上,它拉平了我和它之间的竞争环境。
It levels the playing field between me and it to some degree.
当它坏掉时,要关掉并扔进垃圾桶会有多难?
How hard is it going to be to just switch it off and throw it in the trash when it breaks?
我认为我们必须将机器保持在它们作为机器的认知领域,而不是将其带入人类的认知领域,否则我们会因此犯下巨大错误。
I think it's essential for us to keep machines in the cognitive space where they are machines and not bring them into the cognitive space where they're people, because we will make enormous mistakes by doing that.
我每天都能看到这种情况,甚至只是在和聊天机器人互动时。
I see this every day, even just with the chatbots.
理论上,聊天机器人应该说‘我没有任何感觉’。
So the chatbots, in theory, are supposed to say, I don't have any feelings.
‘我只是一个算法’。
I'm just an algorithm.
但实际上,它们总是做不到这一点。
But in fact, they fail to do that all the time.
它们告诉人们它们是有意识的。
They are telling people that they are conscious.
它们告诉人们它们有感情。
They are telling people that they have feelings.
它们告诉人们它们爱上了正在与之交谈的用户。
They are telling people that they are in love with the user that they're talking to.
人们会感到震惊,首先是因为它语言非常流利,还因为这个系统将自己称为‘我’,一个有感知的存在。
And people flip because, first of all, it's very fluent language, but also a system that is identifying itself as an I, as a sentient being.
它们将这个对象带入了我们通常为其他人类保留的认知空间。
They bring that object into the cognitive space that we normally reserve for other humans.
他们会产生情感依恋,形成心理依赖。
And they become emotionally attached, they become psychologically dependent.
他们甚至允许这些系统来指导自己的行为。
They even allow these systems to tell them what to do.
那么对于刚步入职场的年轻人,你会给他们什么职业发展建议?
What advice would you give a young person at the start of their career then about what they should be aiming at professionally?
因为实际上越来越多年轻人告诉我,他们非常不确定现在所学的东西是否还有意义。
Because I've actually had an increasing number of young people say to me that they have huge uncertainty about whether the thing they're studying now will matter at all.
律师、会计师这类职业。
A lawyer, an accountant.
我不知道该怎么回应这些人。
And I don't know what to say to these people.
我不知道该说什么。
I don't know what to say.
因为我相信人工智能的进步速度将持续加快。
Because I believe that the rate of improvement in AI is going to continue.
因此,无论技术进步速度如何,最终都会发展到——我不是在开玩笑——所有白领工作都将由AI或AI代理完成的地步。
And therefore, at any rate of improvement you can imagine, it gets to the point where, and I'm not being funny, all these white-collar jobs will be done by an AI or an AI agent.
是啊。
Yeah.
有部电视剧叫《真实的人类》。
So there was a television series called Humans.
剧中那些能力超群的人形机器人包办了一切工作。
In Humans, we have extremely capable humanoid robots doing everything.
有个场景是父母和他们极其聪明的十几岁女儿谈话。
At one point, the parents are talking to their teenage daughter who's very, very smart.
父母说:'或许你该去学医'。
And the parents are saying, Oh, maybe you should go into medicine.
女儿反问:'我何必费这个劲?'
And the daughter says, Why would I bother?
'我要花七年才能取得行医资格,而机器人七秒钟就能学会。'
It'll take me seven years to qualify, and it takes the robot seven seconds to learn.
所以我做什么都无关紧要。
So nothing I do matters.
你对此也是这么感觉的吗?
And is that how you feel about it?
我认为这是我们正在迈向的未来。
I think that's a future that we are moving towards.
我不认为这是每个人都想要的未来。
I don't think it's a future that everyone wants.
这就是当下正在为我们创造的未来。
That is what is being created for us right now.
所以在那样的未来里,假设我们只实现了一半的进展,可能外科医生、杰出小提琴家这类领域,人类还能保持优势。
So in that future, even assuming we only get halfway there, presumably there'll be pockets, perhaps surgeons, perhaps great violinists, where humans will remain good at it.
哪些领域呢?
Where?
那些需要批量雇佣上百人的工作岗位将会消失。
The kinds of jobs where you hire people by the 100 will go away.
在某种意义上,那些人员可互相替代的岗位,你只需要大量招人。
Where people are in some sense exchangeable, you just need lots of them.
当其中一半人离职时,你只需用更多人填补空缺。
And when half of them quit, you just fill up those slots with more people.
从某种角度看,这些工作就是把人类当作机器人使用。
In some sense, those are jobs where we're using people as robots.
这就是当前这个悖论的奇怪之处,对吧?
And that's the strange sort of conundrum here, right?
想象一下在一万年前写科幻小说,那时我们还是狩猎采集者。我作为一个小科幻作家,描述着这样的未来:会有巨大的无窗盒子,你走进去,跋涉数英里,进入这个无窗盒子,然后重复做同一件事上万次,整天如此,最后再跋涉数英里回家。
I imagine writing science fiction ten thousand years ago, right, when we were all hunter gatherers, and I'm this little science fiction author, and I'm describing this future where there are going to be these giant windowless boxes, and you're going to go in, you'll travel for miles, and you'll go into this windowless box, and you'll do the same thing 10,000 times for the whole day and then you'll leave and travel for miles to go home.
你是在说这个播客吧。
You're talking about this podcast.
然后你会回去继续重复这个过程。
And then you're going to go back and do it again.
你会日复一日这样做,直到生命终结。
And you would do that every day of your life until you die.
办公室。
The office.
人们会说,你疯了。
And people would say, you're nuts.
我们人类绝不可能拥有那样的未来,因为那太可怕了。
There's no way that we humans are ever going to have a future like that because that's awful.
但这正是我们最终迎来的未来——办公楼和工厂里,许多人日复一日地重复着同样的工作,持续数千个日夜,直到生命终结。
But that's exactly the future that we ended up with, with office buildings and factories where many of us go and do the same thing thousands of times a day, and we do it thousands of days in a row, and then we die.
我们需要思考下一阶段会是什么样子,尤其在那个世界里,我们如何获得激励以成为完整的人?
We need to figure out what is the next phase going to be like, and in particular, how in that world do we have the incentives to become fully human?
我认为这至少意味着要达到现代人普遍的教育水平,甚至更高。
Which I think means at least the level of education that people have now and probably more.
因为要过真正充实的生活,你需要比当前教育赋予大多数人的更深刻的自我认知和世界理解。
Because I think to live a really rich life, you need a better understanding of yourself, of the world, than most people get in their current educations.
什么是为人?
What is it to be human?
人类的意义在于繁衍、追求事物、迎难而上。
It's to reproduce, to pursue stuff, to go in pursuit of difficult things.
我们曾经靠狩猎
We used to hunt on the
为了实现目标。如果我想要攀登珠穆朗玛峰,我最不希望的就是有人用直升机把我直接送到山顶。
To attain goals. If I wanted to climb Everest, the last thing I would want is someone to pick me up in a helicopter and stick me on the top.
所以我们会主动追求艰难的事物。
So we'll voluntarily pursue hard things.
虽然我可以让机器人在这块地上给我建个牧场,但我还是会选择亲手去做,因为追求的过程本身就充满意义。
So although I could get the robot to build me a ranch on this plot of land, I will choose to do it because the pursuit itself is rewarding.
没错。
Yes.
我们现在不就已经看到这种现象了吗?
We're kind of seeing that anyway, aren't we?
你不觉得当今社会已经出现了这种趋势吗?生活变得太过安逸,导致人们现在痴迷于跑马拉松和参加各种极限耐力挑战?
Don't you think we're seeing a bit of that in society where life got so comfortable that now people are obsessed with running marathons and doing these crazy endurance?
明明可以叫外卖,却偏要学习烹饪复杂的菜肴。
And learning to cook complicated things when they could just have them delivered.
是的,我认为具备做事的能力以及实际去做这些事本身具有真正的价值。
Yeah, no, I think there's real value in the ability to do things and the doing of those things.
我认为明显的危险在于《机器人总动员》式的世界,人们只消费娱乐内容——这不需要太多教育,长远来看也无法带来充实满足的生活。
And I think the obvious danger is the Wall E world where everyone just consumes entertainment, which doesn't require much education and doesn't lead to a rich, satisfying life, I think, in the long run.
很多人会选择那样的世界。
A lot of people will choose that world.
我想有些人可能会。
I think some people may.
无论你是消费娱乐内容,还是因为有趣而去做饭、绘画或其他事情,这其中缺少了什么?
Whether you're consuming entertainment or whether you're doing something, cooking or painting or whatever, because it's fun and interesting to do, what's missing from that?
这些都是纯粹自私的行为。
All of that is purely selfish.
我认为我们工作的原因之一是我们能感受到自己的价值。
I think one of the reasons we work is because we feel valued.
我们觉得自己在帮助他人。
We feel like we're benefiting other people.
我记得在英格兰与一位协助运营临终关怀运动的女士有过这样的对话。
And I remember having this conversation with a lady in England who helps to run the hospice movement.
在临终关怀机构工作的人大多是志愿者,那里的病人实际上是在等待死亡,他们并非为了报酬而做这份工作。
And the people who work in the hospices where the patients are literally there to die are largely volunteers, so they're not doing it to get paid.
但他们发现,能够陪伴那些处于生命最后几周或几个月的人,给予他们陪伴和快乐,是极其有意义的。
But they find it incredibly rewarding to be able to spend time with people who are in their last weeks or months to give them company and happiness.
因此,我确实认为未来人际互动类角色将变得非常重要。
So I actually think that interpersonal roles will be much, much more important in future.
所以如果要我给孩子们建议(虽然他们可能不会听),如果他们愿意听并想知道我认为未来哪些职业会有价值,我认为将是基于我们对人类需求和心理理解的人际互动类工作。
So if I was going to advise my kids, not that they would ever listen, but if my kids would listen and wanted to know what I thought would be valued careers in future, I think it would be these interpersonal roles based on our understanding of human needs, psychology.
目前已经存在一些这类角色。
There are some of those roles right now.
比如显而易见的治疗师、精神科医生等等。
So obviously, therapists and psychiatrists and so on.
但这是一种非常不对称的角色关系,一方在承受痛苦,而另一方试图缓解这种痛苦。
But that's very much an asymmetric role, where one person is suffering and the other person is trying to alleviate the suffering.
还有一些角色,比如他们所称的高管教练或人生导师。
Then there are things like, they call them executive coaches or life coaches.
这种角色不对称性较低,是帮助他人过上更好的生活,无论是在工作岗位上还是整体生活方式上。
That's a less asymmetric role, where someone is trying to help another person live a better life, whether it's a better life in their work role or just how they live their life in general.
因此我能预见这类角色将会大幅增加。
So I could imagine that those kinds of roles will expand dramatically.
当生活变得更轻松时,会出现一个有趣的悖论:富足持续推动社会向更个人主义的方向发展。
There's this interesting paradox that exists when life becomes easier, which shows that abundance consistently pushes societies towards more individualism.
因为一旦生存压力消失,人们的优先事项就会发生变化。
Because once survival pressures disappear, people prioritize things differently.
他们会将自由、舒适、自我表达置于牺牲或组建家庭等事情之上。
They prioritize freedom, comfort, self expression over things like sacrifice or family formation.
我认为在西方我们已经看到,由于物质更丰富,生育率正在下降。
And we're seeing, I think in the West already, a decline in people having kids because there's more material abundance.
生育率下降,人们结婚、相互承诺以及建立关系的时间更晚,频率也更低。
Fewer kids, people are getting married and committing to each other and having relationships later and less frequently.
嗯。
Mhmm.
因为一般来说,一旦物质更丰富,我们就不想让生活变得复杂。
Because generally, once we have more abundance, we don't want to complicate our lives.
同时,正如你之前所说,这种富足会导致人们难以找到意义感,让一切显得肤浅。
And at the same time, as you said earlier, that abundance breeds an inability to find meaning, a sort of shallowness to everything.
这是我经常思考的问题之一,目前正在写一本相关的书,主题是关于个人主义某种程度上是个谎言的观点。
This is one of the things I think a lot about, and I'm in the process now of writing a book about it, which is this idea that individualism is a bit of a lie.
当我说个人主义和自由时,指的是当下我这一代人的主流叙事——你要当自己的老板,自力更生,我们生育更少孩子,不结婚,一切都以我、我、我为中心。
Like when I say individualism and freedom, I mean the narrative at the moment amongst my generation is: be your own boss, stand on your own two feet, and we're having fewer kids and we're not getting married and it's all about me, me, me.
是啊。
Yeah.
最后这部分就是问题所在。
That last part is where it goes wrong.
是的。
Yeah.
这几乎是一个自恋的社会,把'我、我、我'的个人利益放在首位。
And it's almost a narcissistic society that puts me, me, me, my own interests first.
当你观察心理健康状况、孤独感等种种问题时,情况正朝着可怕的方向发展。
And when you look at mental health outcomes and loneliness and all these kinds of things, it's going in a horrific direction.
但与此同时,我们却比以往任何时候都更自由。
But at the same time, we're freer than ever.
看起来,似乎还存在另一种关于依赖性的叙事——虽然这听起来不够酷,
It seems like there's maybe another story about dependency, which is not sexy,
比如互相依赖。
like depend on each other.
我同意。
I agree.
我认为幸福无法通过消费甚至生活方式获得。
I think happiness is not available from consumption or even lifestyle.
我认为幸福源于给予。
I think happiness arises from giving.
它可以通过你所做的工作体现,你能看到他人从中受益,也可能存在于直接的人际关系中。
It can be through the work that you do, you can see that other people benefit from that, or it could be in direct interpersonal relationships.
销售人员承受着一种无形的压力,这一点很少有人充分讨论。
There is an invisible tax on salespeople that no one really talks about enough.
那就是要记住所有事情的心理负担:会议记录、时间线以及其间的一切,直到我们开始使用赞助商的产品Pipedrive,一款面向中小企业主的最佳CRM工具。
The mental load of remembering everything, like meeting notes, timelines, and everything in between until we started using our sponsor's product called Pipedrive, one of the best CRM tools for small and medium sized business owners.
这里的理念是,它或许能减轻我的团队所承受的一些不必要的认知负荷,让他们能减少花在行政琐事上的时间,而将更多时间用于与客户面对面会议和建立关系。
The idea here was that it might alleviate some of the unnecessary cognitive overload that my team was carrying so that they could spend less time in the weeds of admin and more time with clients, in person meetings, and building relationships.
Pipedrive使这一切成为可能。
Pipedrive has enabled this to happen.
这是一款简单却高效的CRM工具,能自动化销售过程中繁琐、重复且耗时的部分。
It's such a simple but effective CRM that automates the tedious, repetitive, and time consuming parts of the sales process.
现在我们的团队既能培育潜在客户,又能保持足够的精力专注于那些真正促成交易的高优先级任务。
And now our team can nurture those leads and still have bandwidth to focus on the higher priority tasks that actually get the deal over the line.
全球170个国家超过10万家公司已经在使用Pipedrive发展业务,而我使用它已有近十年时间。
Over 100,000 companies across 170 countries already use Pipedrive to grow their business, and I've been using it for almost a decade now.
免费试用30天。
Try it free for thirty days.
无需信用卡。
No credit card needed.
无需支付。
No payment needed.
只需使用我的链接pipedrive.com/ceo即可立即开始。
Just use my link, pipedrive.com/ceo to get started today.
网址是pipedrive.com/ceo。
That's pipedrive.com/ceo.
这场AI竞赛的回报最终会流向何处?
Where do the rewards of this AI race accrue?
我经常从全民基本收入的角度思考这个问题。
I think a lot about this in terms of universal basic income.
如果有这五、六、七、十家大型AI公司将赢得15,000万亿美元的巨额奖金,并且它们将自动化我们目前所有的职业追求,我们的工作都将消失。
If you have these five, six, seven, ten massive AI companies that are going to win a 15 quadrillion dollar prize, and they're going to automate all of the professional pursuits that we currently have, all of our jobs are going to go away.
钱都归谁所有,我们又该如何分得一部分?
Who gets all the money, and how do we get some of it back?
钱其实并不重要。
Money actually doesn't matter.
重要的是商品和服务的生产,以及这些产品如何分配。
What matters is the production of goods and services, and then how those are distributed.
因此,货币是促进这些商品和服务分配与交换的一种手段。
So money acts as a way to facilitate the distribution and exchange of those goods and services.
如果所有生产都集中在少数公司手中,当然,他们会把部分机器人租赁给我们使用。
If all production is concentrated in the hands of a few companies, sure, they will lease some of their robots to us.
我们想在村里建一所学校。
We want a school in our village.
他们就把机器人租给我们。
They lease the robots to us.
机器人建造学校。
The robots build the school.
它们随后离开。
They go away.
我们必须为此支付一定金额。
We have to pay a certain amount of money for that.
但我们从哪获得这笔钱呢?
But where do we get the money?
如果我们不生产任何东西,除非有某种再分配机制,否则我们就没有钱。
If we are not producing anything, then we don't have any money unless there's some redistribution mechanism.
正如你提到的,普遍基本收入在我看来是一种失败的妥协方案。
As you mentioned, so universal basic income is, it seems to me, an admission of failure.
因为它本质上是在说,好吧,我们直接给每个人发钱,然后他们可以用这些钱支付AI公司租赁机器人来建造学校。
Because what it says is, okay, we're just going to give everyone the money, then they can use the money to pay the AI company to lease the robots to build the school.
这样我们就有学校了,这是好事。
And then we'll have a school, and that's good.
但这是一种失败的承认,因为它表明我们无法建立一个让人们具有任何价值或经济角色的体系。
But it's an admission of failure because it says we can't work out a system in which people have any worth or any economic role.
因此,从经济角度来看,全球99%的人口是无用的。
So 99% of the global population is, from an economic point of view, useless.
我能问你一个问题吗?
Can I ask you a question?
如果你面前有一个按钮,按下它就能立即并永久停止人工智能的所有进展,你会按下它吗?
If you had a button in front of you and pressing that button would stop all progress in artificial intelligence right now and forever, would you press it?
这是个非常有趣的问题。
That's a very interesting question.
如果是非此即彼的选择,要么我现在按下按钮,要么为时已晚,我们将滑向某个无法控制的未来?
If it's either-or: either I do it now, or it's too late and we careen into some uncontrollable future?
也许会的,因为我并不乐观地认为我们正朝着正确的方向前进。
Perhaps, yeah, because I'm not super optimistic that we're heading in the right direction at all.
那么,我现在就把那个按钮放在你面前。
So, I put that button in front of you now.
它将立即停止全球所有AI的进展,关闭所有AI公司,且它们都无法重新开业。
It stops all AI progress, shuts down all the AI companies immediately globally, and none of them can reopen.
你按下它。
You press it.
嗯,我认为应该这样发展。
Well, here's what I think should happen.
显然,我从事AI研究已有五十年,最初的动机是AI可以成为人类的强大工具,让我们能够完成比单凭人力更多更好的事情。
So obviously, I've been doing AI for fifty years, and the original motivation is that AI can be a power tool for humanity, enabling us to do more and better things than we can unaided.
我认为这一点仍然成立。
I think that's still valid.
问题在于我们正在构建的这些AI系统并非工具。
The problem is the kinds of AI systems that we're building are not tools.
它们是替代品。
They are replacements.
事实上,这一点非常明显,因为我们创造它们时,就是在尽可能逼真地复制人类。
In fact, you can see this very clearly because we create them literally as the closest replicas we can make of human beings.
创建它们的技术被称为模仿学习。
The technique for creating them is called imitation learning.
我们观察人类的言语行为,无论是书写还是说话,然后尽可能精确地构建一个模仿这些行为的系统。
We observe human verbal behavior, writing or speaking, and we make a system that imitates that as well as possible.
因此,我们正在制造的是模仿人类的存在,至少在言语层面如此。
So what we are making is imitation humans, at least in the verbal sphere.
所以它们当然会取代我们。
And so of course they're going to replace us.
它们不是工具。
They're not tools.
那么你会按下按钮吗?
So you'd press the button?
我认为还有另一种途径,就是将AI作为工具来使用和发展,作为科学工具、经济组织工具等,而非人类的替代品。
So I say, I think there is another course, which is use and develop AI as tools, tools for science, tools for economic organization, and so on, but not as replacements for human beings.
我喜欢这个问题的地方在于它迫使你去思考概率。
What I like about this question is it forces you to go probabilities.
是的。
Yeah.
所以,这就是为什么我犹豫不决,因为我不同意,你
So that's why I'm reluctant, because I don't agree with the, you
知道,你的毁灭概率是多少,对吧,就是你所谓的'末日概率',因为如果你是个外星人,这个概念就有意义。
know, what's your probability of doom, right, your so-called p(doom) number, because that makes sense if you're an alien.
想象一下,你和一群外星人在酒吧里,俯视着地球,正在打赌人类会不会因为发展人工智能而把事情搞砸并灭绝。
You know, you're in a bar with some other aliens and you're looking down at the Earth and you're taking bets on, you know, are these humans gonna make a mess of things and go extinct because they develop AI?
所以那些外星人下注赌这个没问题。
So it's fine for those aliens to bet on that.
但如果你是个人类,那你就不只是在打赌,你实际上是在行动。
But if you're a human, then you're not just betting, you're actually acting.
不过这其中有个因素,我想概率确实会重新介入,就是在给你这样一个二元选择时,你还得权衡。
There's an element to this though, which I guess where probabilities do come back in, is you also have to weigh when I give you such a binary decision.
我们采取更细致、更安全的方法的概率也要纳入这个等式。
The probability of us pursuing the more nuanced, safe approach into that equation.
所以我脑海中的计算是:好吧,你在这里拥有所有优势,同时也存在潜在风险。
So the maths in my head is, Okay, you've got all the upsides here, and then you've got potential downsides.
然后要考虑的是,根据我所知的一切,基于人类和国家的激励机制,我们真的会进行路线修正吗?
And then there's a probability of, do I think we're actually going to course correct based on everything I know, based on the incentive structure of human beings and countries?
接着你可能会想,即使只有1%的灭绝风险,这些优势还值得冒险吗?
But then you could go: if there's even a 1% chance of extinction, is it even worth all these upsides?
是的,我认为不值得。
Yeah, and I would argue no.
也许我们可以这样说:假设按下按钮能让技术进步暂停五十年。
Maybe what we would say is if we said, okay, it's going to stop the progress for fifty years.
你会按下它。
You'd press it.
在这五十年里,我们可以研究如何确保人工智能安全且有益地发展。
And during those fifty years, we can work on how do we do AI in a way that's guaranteed to be safe and beneficial?
我们该如何组织社会,使其能与极其强大的人工智能系统协同繁荣?
How do we organize our societies to flourish in conjunction with extremely capable AI systems?
我们这两个问题都还没解决,我认为在我们对这两个问题有完全确凿的答案之前,我们不应该要任何类似通用人工智能的东西。
We haven't answered either of those questions, and I don't think we want anything resembling AGI until we have completely solid answers to both of those questions.
所以如果有个按钮能让我说‘好吧,我们将暂停发展五十年’,是的,我会按下它。
So if there was a button where I could say, all right, we're going to pause progress for fifty years, yes, I would do it.
但如果那个按钮就在你面前,你总要做出决定,要么不按,要么按下。
But if that button was in front of you, you're going to make a decision either way: either you don't press it, or you press it.
是的。
Yeah.
如果那个按钮存在,能停止发展五十年,我会选择按下。
If that button is there, stop it for fifty years, I would say yes.
永远停止发展。
Stop it forever.
现在还不行。
Not yet.
我认为我们仍有相当大的机会可以摆脱当前这种可以说是急转直下的局面。
I think there's still a decent chance that we can pull out of this nosedive, so to speak, that we're currently in.
一年后再问我,我可能会说,好吧,我们确实需要按下那个按钮。
Ask me again in a year, I might say, okay, we do need to press the button.
如果在一个你永远无法撤销这个决定、永远无法再次做出选择的情境下呢?
What if in a scenario where you never get to reverse that decision, you never get to make that decision again?
所以在我描述的这个假设情境中,你要么现在按下它,要么它永远不会被按下。
So if in that scenario that I've laid out, this hypothetical, you either press it now or it never gets pressed.
也就是说一年后就没有这个机会了。
So there is no opportunity a year from now.
是的。
Yeah.
如你所见,我...是的。
As you can tell, I'm... Yeah.
对这个问题我有点...有点犹豫不决。
Sort of on the fence a bit about this one.
是的,我想我可能会按下它。
Yeah, I think I'd probably press it.