
史蒂芬·平克:理性时代的人工智能

Steven Pinker: AI in the Age of Reason

Episode Overview

Steven Pinker is a professor at Harvard and before that was a professor at MIT. He is the author of many books, several of which have greatly improved the way I see the world. In particular, The Better Angels of Our Nature and Enlightenment Now instilled in me an optimism grounded in data, science, and reason. Video versions are available on YouTube. For more information on this podcast, visit https://lexfridman.com/ai, or follow @lexfridman on Twitter, LinkedIn, Facebook, or YouTube to watch video versions of these conversations.

Bilingual Subtitles

Text subtitles only, without Chinese audio; to listen while reading, use the Bayt podcast App.

Speaker 0

欢迎收听人工智能播客。

Welcome to the Artificial Intelligence Podcast.

Speaker 0

我是莱克斯·弗里德曼。

My name is Lex Fridman.

Speaker 0

我是麻省理工学院的一名研究科学家。

I'm a research scientist at MIT.

Speaker 0

今天我们将与史蒂芬·平克展开对话。

Today is a conversation with Steven Pinker.

Speaker 0

他是哈佛大学教授,此前曾任教于麻省理工学院。

He's a professor at Harvard and before that was a professor at MIT.

Speaker 0

他著有多本著作,其中一些极大地改善了我看待世界的方式。

He's the author of many books, some of which had a big impact on the way I see the world for the better.

Speaker 0

尤其是《人性中的善良天使》和他的新书《当下的启蒙》,让我树立了一种乐观主义精神。

In particular, The Better Angels of Our Nature and his latest book, Enlightenment Now, have instilled in me a sense of optimism.

Speaker 0

这种乐观主义建立在数据、科学与理性之上。

Optimism grounded in data, science, and reason.

Speaker 0

我真的很享受这次对话。

I really enjoyed this conversation.

Speaker 0

希望你也一样。

I hope you do as well.

Speaker 0

你研究过人类心智、认知、语言、视觉、进化,从儿童到成人的心理学,从个体层面到整个文明层面。

You've studied the human mind, cognition, language, vision, evolution, psychology from child to adult, from the level of individual to the level of our entire civilization.

Speaker 0

所以我觉得可以从一个简单的选择题开始。

So I feel like I can start with a simple multiple choice question.

Speaker 0

生命的意义是什么?

What is the meaning of life?

Speaker 0

是a,如柏拉图所说获取知识;b,如尼采所说获取权力;c,如欧内斯特·贝克尔所说逃避死亡;d,如达尔文等人所说繁衍基因;e,如虚无主义者所说没有意义;f,如二十年前我解读史蒂芬·平克的观点,认知生命意义超出我们的能力范围;还是g,以上都不是?

Is it, a, to attain knowledge, as Plato said, b, to attain power, as Nietzsche said, c, to escape death, as Ernest Becker said, d, to propagate our genes, as Darwin and others have said, e, there is no meaning as the nihilists have said, f, knowing the meaning of life is beyond our cognitive capabilities as Steven Pinker said based on my interpretation twenty years ago, and g, none of the above.

Speaker 1

我认为选项a最接近,但我会修正为不仅追求知识,更广义地说是实现人生价值。

I'd say a comes closest, but I would amend that to attaining not only knowledge but fulfillment more generally.

Speaker 1

那就是生命、健康、精神激励,以及对鲜活文化和社会世界的接触。

That is: life, health, stimulation, access to the living, cultural, and social world.

Speaker 1

这是我们赋予生命的意义,若询问我们的基因,这并非它们所认为的生命意义。

Now this is our meaning of life, it's not the meaning of life, if you were to ask our genes.

Speaker 1

基因的意义在于自我复制,但这与它们促成的大脑为自身设定的意义截然不同。

Their meaning is to propagate copies of themselves, but that is distinct from the meaning that the brain that they lead to sets for itself.

Speaker 0

那么对你而言,知识是其中的一小部分还是主要部分?

So to you knowledge is a small subset or a large subset?

Speaker 1

它是主要部分,但并非人类追求的全部,因为我们还渴望与人交往。

It's a large subset, but it's not the entirety of human striving, because we also want to interact with people.

Speaker 1

我们渴望体验美。

We want to experience beauty.

Speaker 1

我们渴望感受自然界的丰饶。

We want to experience the richness of the natural world.

Speaker 1

但理解宇宙运行的规律确实占据重要位置。

But understanding what makes the universe tick is way up there.

Speaker 1

对某些人(尤其对我而言)来说,这绝对位列前五重要的追求。

For some of us more than others, certainly for me, that's one of the top five.

Speaker 0

那么这是一个根本属性吗?

So is that a fundamental aspect?

Speaker 0

你只是在描述个人偏好,还是说求知欲是人类天性中的根本属性?就像你在新书中提到的理性与推理的力量及其实用性。

Are you just describing your own preference, or is seeking knowledge a fundamental aspect of human nature? In your latest book, you talk about the power, the usefulness, of rationality and reason and so on.

Speaker 0

这是人类与生俱来的本质,还是我们应当努力追求的目标?

Is that a fundamental part of human nature, or is it something we should just strive for?

Speaker 1

两者皆是。

It's both.

Speaker 1

我们之所以能追求它,正是因为这是我们作为智人(Homo sapiens)的本质特征之一。

We're capable of striving for it because it is one of the things that make us what we are: Homo sapiens, "wise man."

Speaker 1

在动物界中,我们获取知识并用以生存的程度是罕见的。

We are unusual among animals in the degree to which we acquire knowledge and use it to survive.

Speaker 1

我们会制造工具、通过语言达成协议、提取毒素、预测动物行为、探究植物机理——这里的‘我们’不仅指现代西方人,而是全球各地的人类物种。正因如此,我们才能占据地球每个生态位,导致其他动物灭绝。而当前我们的主要挑战,就是通过精进理性来追求人类福祉:健康、幸福、社会繁荣与文化丰盛。

We make tools, we strike agreements via language, we extract poisons, we predict the behavior of animals, we try to get at the workings of plants, and when I say we, I don't just mean we in the modern West, but we as a species everywhere, which is how we've managed to occupy every niche on the planet, how we've managed to drive other animals to to extinction, and the refinement of reason in pursuit of human well-being of health, happiness, social richness, cultural richness is our, our main challenge in the present.

Speaker 1

即运用智慧与知识去理解世界运转规律、探索人类自身,从而做出发现并达成协议,最终实现全人类的长期共同繁荣。

That is using our intellect, using our knowledge to figure out how the world works, how we work in order to make discoveries and strike agreements that make us all better off in the long run.

Speaker 0

对。

Right.

Speaker 0

而且,你在新书中几乎无可辩驳地以数据驱动的方式做到了这一点,但我想聚焦于人工智能这个方面。

And, you do that almost undeniably and, in a data driven way in your recent book, but I'd like to focus on the artificial intelligence aspect of things.

Speaker 0

不仅是人工智能,还包括自然智能。

And not just artificial intelligence, but natural intelligence too.

Speaker 0

二十年前,你在关于心智运作的书中曾写道——

So twenty years ago, in the book, you've written on how the mind works.

Speaker 0

让我确认下理解是否正确——

You conjecture, again, am I right to interpret things?

Speaker 1

嗯。

Mhmm.

Speaker 0

如果我说错了请纠正,但你推测人脑中的思维可能是由高度互联的神经元组成的庞大网络所产生的结果。

You can correct me if I'm wrong, but you conjecture that human thought in the brain may be a result of a massive network of highly interconnected neurons.

Speaker 0

正是这种互联性催生了思想。

So from this interconnectivity emerges thought.

Speaker 0

与我们今天用于机器学习的人工神经网络相比,生物神经网络是否存在某种本质上更复杂、更神秘甚至近乎神奇的特性?毕竟过去六十年我们才开始使用人工神经网络,而真正取得成功不过是最近十年的事。

Compared to the artificial neural networks we use for machine learning today, is there something fundamentally more complex, mysterious, even magical about biological neural networks, versus the ones we've been starting to use over the past sixty years and that have come to success in the past ten?

Speaker 1

人类神经网络确实存在某种神秘之处——我们每个作为神经网络存在的个体,都知道自己具有意识。

There is something a little bit mysterious about human neural networks, which is that each one of us who is a neural network knows that we ourselves are conscious.

Speaker 1

这种意识不仅在于能感知周围环境或内部状态,而在于拥有主观的第一人称当下体验。

Conscious not in the sense of registering our surroundings or even registering our internal state, but in having subjective first person present tense experience.

Speaker 1

比如当我看到红色时,不仅与绿色不同,更重要的是我能感受到那种'红'的质感。

That is, when I see red, it's not just different from green; there's a redness to it that I feel.

Speaker 1

人工系统是否会有这种体验?我不知道,也不认为我能知道,这正是其神秘之处。

Whether an artificial system would experience that or not, I don't know and I don't think I can know, that's why it's mysterious.

Speaker 1

如果我们造出一个行为上与人类完全无法区分的完美仿生机器人,我们是否应该承认它具有意识?

If we had a perfectly lifelike robot that was behaviorally indistinguishable from a human, would we attribute consciousness to it, or ought we to attribute consciousness to it?

Speaker 1

这个问题确实非常难以回答。

And that's something that is very hard to know.

Speaker 1

但抛开这个偏哲学的问题不谈,核心在于:人类神经网络与我们正在构建的人工智能神经网络之间的差异,是否意味着按照当前发展轨迹,我们永远无法造出与人类无异的仿生机器人?因为两者的所谓神经网络组织结构存在根本不同。

But putting aside that largely philosophical question, the question is: is there some difference between the human neural network and the ones that we're building in artificial intelligence that will mean that, on the current trajectory, we're not going to reach the point where we've got a lifelike robot indistinguishable from a human, because the way their so-called neural networks are organized is different from the way ours are organized?

Speaker 1

我认为存在重叠,但当前神经网络——即所谓的深度学习系统——实际上并不那么‘深’,它们更擅长提取高阶统计规律。

I think there's overlap, but I think there are some big differences. Current neural networks, current so-called deep learning systems, are in reality not all that deep; that is, they are very good at extracting high-order statistical regularities.

Speaker 1

但大多数系统缺乏语义层面,无法真正理解谁对谁做了什么、为什么、在哪里、事物如何运作以及因果关系。

But most of the systems don't have a semantic level, a level of actual understanding of who did what to whom, why, where, how things work, what causes what else.
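Pinker's contrast between statistical regularity and a semantic level can be sketched with a toy example (purely illustrative; the `Event` representation is a hypothetical stand-in, not a system Pinker describes): bigram counts capture surface patterns, while a semantic level explicitly represents who did what to whom.

```python
from dataclasses import dataclass
from collections import Counter

# Surface statistics: bigram counts capture regularities, not meaning.
corpus = "the dog bit the man the man bit the dog".split()
bigrams = Counter(zip(corpus, corpus[1:]))
# "dog bit" and "man bit" look statistically similar in this corpus...

# A semantic level (hypothetical sketch) represents who did what to whom.
@dataclass(frozen=True)
class Event:
    agent: str
    action: str
    patient: str

e1 = Event(agent="dog", action="bite", patient="man")
e2 = Event(agent="man", action="bite", patient="dog")
assert e1 != e2  # ...but the two events are semantically distinct
```

The point of the sketch: the counts alone cannot distinguish which participant is the agent and which the patient; a structured representation can.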

Speaker 0

你认为这类能力能否像人工神经网络那样自然涌现?毕竟人工神经网络的连接数量等规模远小于当前人类生物神经网络。

Do you think that kind of thing can emerge? Artificial neural networks are much smaller than current human biological networks, in the number of connections and so on.

Speaker 0

但你认为仅靠更大规模、更复杂互联的网络,就能涌现出意识或更高层次的语义推理能力吗?

But to get to consciousness, or to get to this higher-level semantic reasoning about things, do you think that can emerge with just a larger network, with a more richly interconnected network?

Speaker 1

再次将意识问题分开讨论,因为意识甚至与复杂性无关。

Let's separate out consciousness again, because consciousness isn't even a matter of complexity.

Speaker 0

这确实是个奇怪的问题。

It's a really weird one.

Speaker 1

是的。

Yeah.

Speaker 1

比如你可以合理地问虾是否有意识。

You could sensibly ask the question of whether shrimp are conscious, for example.

Speaker 1

它们并不特别复杂,但也许能感受到疼痛。

They're not terribly complex but maybe they feel pain.

Speaker 1

所以,我们先把这部分问题搁置一边。

So let's just put that part of it aside.

Speaker 1

嗯。

Yep.

Speaker 1

但我认为,仅仅神经网络的规模不足以赋予其结构和知识。

But I think sheer size of a neural network is not enough to give it structure and knowledge.

Speaker 1

但如果经过适当设计,那为何不行呢?

But if it's suitably engineered, then why not?

Speaker 1

也就是说,我们就是神经网络。

That is, we're neural networks.

Speaker 1

自然选择对我们大脑进行了一种等效的工程设计。

Natural selection did a kind of equivalent of engineering of our brains.

Speaker 1

因此,我并不认为存在什么神秘之处,即没有任何硅基系统能够实现人脑的功能。

So I don't think there's anything mysterious in the sense that no, no system made out of silicon could ever do what a human brain can do.

Speaker 1

我认为这在理论上是可行的。

I think it's possible in principle.

Speaker 1

能否实现不仅取决于我们在设计这些系统时的聪明才智,还取决于我们是否真的愿意这么做,甚至这个目标本身是否明智。

Whether it'll ever happen depends not only on how clever we are in engineering these systems, but on whether we even want to, whether that's even a sensible goal.

Speaker 1

也就是说,你可以提出这样的问题:是否存在与人类同等优秀的运动系统?

That is, you can ask the question, is there any locomotion system that is as good as a human?

Speaker 1

其实,我们最终希望的是在腿式运动方面超越人类。

Well, we kind of want to do better than a human ultimately in terms of legged locomotion.

Speaker 1

没有理由将人类作为我们的基准。

There's no reason that humans should be our benchmark.

Speaker 1

它们是可能在某种程度上更优越的工具,也可能我们无法完全复制自然系统——因为在某些情况下,使用自然系统成本低廉得多,我们不会投入更多智力和资源。

They're tools that might be better in some ways. It may be that we won't duplicate a natural system because, at some point, it's so much cheaper to use the natural system that we're not going to invest more brainpower and resources.

Speaker 1

例如,我们至今没有找到木材的完美替代品。

So for example, we don't really have a substitute, an exact substitute for wood.

Speaker 1

我们仍然用木材建造房屋、制作家具,我们喜欢它的外观和触感,木材具有合成材料无法比拟的特性。

We still build houses out of wood, we still build furniture out of wood, we like the look, we like the feel; wood has certain properties that synthetics don't.

Speaker 1

木材并非有什么神奇或神秘之处,只是我们懒得费心去复制木材的所有特性,毕竟我们已经有现成的木材可用。

It's not that there's anything magical or mysterious about wood, it's just that the extra steps of duplicating everything about wood is something we just haven't bothered because we have wood.

Speaker 1

就像我说的棉质衣物——我现在就穿着棉质衣服,感觉比聚酯纤维舒服多了。

Like cotton: I'm wearing cotton clothing now, and it feels much better than polyester.

Speaker 1

并非棉花有什么魔力,也不是说我们永远无法合成出与棉花完全相同的材料,只是到了某个程度,这件事就不值得了。

It's not that cotton has something magic in it, and it's not that we couldn't ever synthesize something exactly like cotton; it's just that at some point it's not worth it.

Speaker 1

我们已经有棉花了。

We've got cotton.

Speaker 1

同理,在人类智能方面,制造一个与人脑完全一样的人工系统这个目标,我怀疑很可能没人会坚持到底。

And likewise, in the case of human intelligence, the goal of making an artificial system that is exactly like the human brain is a goal that probably no one is gonna pursue to the bitter end, I suspect.

Speaker 1

因为如果你想要比人类更出色的工具,你根本不会在意它是否像人类那样运作。

Because if you want tools that do things better than humans, you're not gonna care whether it does something like humans.

Speaker 1

比如说诊断癌症或预测天气,为什么要以人类作为基准呢?

So for example, you know, diagnosing cancer or predicting the weather, why set humans as your benchmark?

Speaker 0

但总的来说,我猜你也认为即使不应该以人类为基准,也不想在系统中模仿人类,但通过研究人类来学习如何创建人工智能系统仍有很多可借鉴之处。

But in general, I suspect you also believe that even if the human should not be a benchmark, and we don't wanna imitate humans in the system, there's a lot to be learned about how to create an artificial intelligence system by studying the human.

Speaker 1

是的。

Yeah.

Speaker 1

我...我觉得这是对的。

I think that's right.

Speaker 1

就像我们制造飞行器时,需要理解空气动力学原理包括鸟类飞行,但并非要模仿鸟类。

In the same way that, to build flying machines, we wanna understand the laws of aerodynamics, including birds, but not mimic the birds.

Speaker 0

没错。

Right.

Speaker 1

但它们遵循相同的物理法则。

But they're the same laws.

Speaker 0

你对人工智能及其安全的观点,在我看来既理性得令人耳目一新,更重要的是蕴含积极元素——我认为这种态度能激发力量而非令人瘫痪。

You have a view on AI, artificial intelligence, and safety that, from my perspective, is refreshingly rational or perhaps more importantly, has elements of positivity to it, which I think can be inspiring and empowering as opposed to paralyzing.

Speaker 0

对许多人(包括AI研究者)而言,AI带来的终极生存威胁是显而易见的。

For many people, including AI researchers, the eventual existential threat of AI is obvious.

Speaker 0

不仅是可能,而是显而易见。

Not only possible, but obvious.

Speaker 0

而对包括AI研究人员在内的许多人来说,这种威胁并不明显。

And for many others, including AI researchers, the threat is not obvious.

Speaker 0

因此埃隆·马斯克以高度关注AI阵营而闻名,他曾表示AI比核武器危险得多,并可能摧毁人类文明。

So Elon Musk is famously in the highly-concerned-about-AI camp, saying things like AI is far more dangerous than nuclear weapons and that AI will likely destroy human civilization.

Speaker 0

所以在二月份你曾说,如果埃隆真的重视AI威胁,就应该停止研发自动驾驶汽车——而他在特斯拉这方面做得非常成功。

So in February, you said that if Elon was really serious about the threat of AI, he would stop building self-driving cars, which he's doing very successfully as part of Tesla.

Speaker 0

然后他说,哇。

Then he said, wow.

Speaker 0

如果连平克都分不清像汽车这样的狭义AI与通用AI的区别——后者计算能力是前者的百万倍且具有开放式效用函数——那人类就真的麻烦大了。

If even Pinker doesn't understand the difference between narrow AI like a car and general AI, when the latter literally has a million times more compute power and an open ended utility function, humanity is in deep trouble.

Speaker 0

那么首先,你关于'如果埃隆·马斯克真的深表担忧就该停止研发自动驾驶汽车'的言论是什么意思?

So first, what did you mean by the statement that Elon Musk should stop building self-driving cars if he's deeply concerned about it?

Speaker 1

这可不是埃隆·马斯克第一次发表不理智的推文了。

Not the first time that Elon Musk has fired off an intemperate tweet.

Speaker 0

是啊。

Yeah.

Speaker 0

唉,我们生活在一个推特拥有权力的世界里。

Well, we live in a world where Twitter has power.

Speaker 1

是的。

Yes.

Speaker 1

对,我认为关于人工智能讨论中存在两种生存威胁,我觉得这两种说法都站不住脚。

Yeah. There are two kinds of existential threat that have been discussed in connection with artificial intelligence, and I think that they're both incoherent.

Speaker 1

其中一种是对AI接管世界的模糊恐惧,认为就像我们征服动物和技术落后民族一样,如果我们创造出比我们更先进的东西,它必然会让我们沦为宠物、奴隶或类似家畜的存在。

One of them is a vague fear of AI takeover: that just as we subjugated animals and less technologically advanced peoples, so if we build something that's more advanced than us, it will inevitably turn us into pets or slaves or domesticated-animal equivalents.

Speaker 1

我认为这混淆了智力与权力意志——在我们最熟悉的智力系统(即智人)中,我们是自然选择的产物,而自然选择是竞争性过程,因此与我们的问题解决能力相伴而生的是许多恶劣特质,如支配欲、剥削、权力与荣耀最大化以及对资源和影响力的追逐。

I think this confuses intelligence with a will to power. It so happens that in the intelligence system we are most familiar with, namely Homo sapiens, we are products of natural selection, which is a competitive process, and so bundled together with our problem-solving capacity are a number of nasty traits like dominance and exploitation and maximization of power and glory and resources and influence.

Speaker 1

没有理由认为纯粹的问题解决能力会将这些设为目标。

There's no reason to think that sheer problem solving capability will set that as one of its goals.

Speaker 1

它的目标将完全由我们设定,只要没人正在建造一个狂妄自大的人工智能,就没有理由认为它会自然朝那个方向发展。

Its goals will be whatever we set its goals as, and as long as someone isn't building a megalomaniacal artificial intelligence, then there's no reason to think that it would naturally evolve in that direction.

Speaker 1

你可能会说:那如果我们给它设定最大化自身能源的目标呢?

Now you might say, well what if we gave it the goal of maximizing its own power source?

Speaker 1

嗯,给一个自主系统设定这样的目标相当愚蠢。

Well, that's a pretty stupid goal to give an autonomous system.

Speaker 1

你不会给它设定这个目标。

You don't give it that goal.

Speaker 1

我的意思是,这显然是极其愚蠢的。

I mean, that's just self-evidently idiotic.

Speaker 0

纵观世界历史,工程师们曾多次有机会在系统中植入破坏性力量,但他们选择不这样做,因为这是工程学的自然过程。

So if you look at the history of the world, there's been a lot of opportunities where engineers could instill in a system destructive power, and they choose not to because that's the natural process of engineering.

Speaker 1

嗯,武器除外。

Well, except for weapons.

Speaker 1

我是说,如果你在制造武器,它的目标就是摧毁人类。

I mean, you're building a weapon, its goal is to destroy people.

Speaker 1

因此我认为有充分理由不去研发某些类型的武器。

And so I think there are good reasons not to build certain kinds of weapons.

Speaker 1

我认为研发核武器是一个巨大的错误。

I think building nuclear weapons was a massive mistake.

Speaker 1

但也许

But probably

Speaker 0

你确实如此。

You do.

Speaker 0

你这么认为,或许应该暂停一下,因为那是一个严重的威胁。

You think so, maybe pause on that because that is one of the serious threats.

Speaker 0

你认为这是一个本应早期就被阻止的错误,还是觉得这只是发明过程中不幸的产物?

Do you think it was a mistake in the sense that it should have been stopped early on, or do you think it's just an unfortunate event that it was invented?

Speaker 1

你认为

Do you think

Speaker 0

是否有可能阻止,我想这就是问题的关键。

it was possible to stop, I guess, is the question on that.

Speaker 1

是的。

Yeah.

Speaker 1

时光难以倒流,毕竟它是在二战背景下发明的,当时担心纳粹可能会率先研制出来。

It's hard to rewind the clock because, of course, it was invented in the context of World War two and the fear that the Nazis might develop one first.

Speaker 1

一旦基于这个原因启动,就很难停止,特别是因为战胜日本和纳粹是每个有责任感人士压倒一切的目标,当时人们会不惜一切代价确保胜利。

Then once it was initiated for that reason, it was hard to turn off, especially since winning the war against the Japanese and the Nazis was such an overwhelming goal of every responsible person that there's just nothing that people wouldn't have done then to ensure victory.

Speaker 1

如果二战没有发生,核武器很可能不会被发明——我们无法确知,但我认为这绝非必然,就像那些曾被构想但从未实施的武器系统一样,比如像喷洒农药那样在城市散布毒气的飞机,或是试图在敌国制造地震和海啸的系统,将天气武器化、太阳耀斑武器化等各种我们最终认为不妥的疯狂计划。

It's quite possible that if World War two hadn't happened, nuclear weapons wouldn't have been invented. We can't know, but I don't think it was by any means a necessity, any more than some of the other weapon systems that were envisioned but never implemented, like planes that would disperse poison gas over cities like crop dusters, or systems to try to create earthquakes and tsunamis in enemy countries, to weaponize the weather, weaponize solar flares, all kinds of crazy schemes that we thought the better of.

Speaker 1

我认为将核武器与人工智能进行类比从根本上就是错误的,因为核武器的本质是毁灭。

I think analogies between nuclear weapons and artificial intelligence are fundamentally misguided, because the whole point of nuclear weapons is to destroy things.

Speaker 1

而人工智能的本质并非毁灭。

The point of artificial intelligence is not to destroy things.

Speaker 1

所以这种类比具有误导性。

So the analogy is misleading.

Speaker 0

你提到了两种人工智能。

So there's two artificial intelligence you mentioned.

Speaker 1

首先我猜是那个……

The first one, I guess, was the

Speaker 0

高智能或权力饥渴的

highly intelligent or power hungry

Speaker 1

是的。

Yeah.

Speaker 1

在我们自己设计的系统中,我们赋予它目标。

In a system that we design ourselves where we give it the goals.

Speaker 1

目标与实现目标的手段是分离的。

Goals are external to the means to attain the goals.

Speaker 1

如果我们不设计一个以最大化支配为目标的智能系统,它就不会追求支配。

If we don't design an artificially intelligent system to maximize dominance, then it won't maximize dominance.

Speaker 1

只是我们太熟悉智人这个物种了——这两种特质(高智商与权力欲)往往捆绑出现,尤其在男性身上——以至于我们容易将高智商与权力意志混为一谈。

It's just that we're so familiar with homo sapiens where these two traits come bundled together, particularly in men, that we are apt to confuse high intelligence with a will to power.

Speaker 1

但这纯粹是个误解。

But that's just an error.

Speaker 1

另一种担忧是:若给人工智能设定某个目标(比如制造回形针),它可能因执行过于高效,在我们阻止前就把人类变成回形针原料。

The other fear is that there will be collateral damage: that we'll give artificial intelligence a goal, like make paper clips, and it will pursue that goal so brilliantly that before we can stop it, it turns us into paper clips.

Speaker 1

若让它攻克癌症,它可能把人类当作致命实验的小白鼠。

We'll give it the goal of curing cancer and it will turn us into guinea pigs for lethal experiments.

Speaker 1

或者我们会赋予它世界和平的目标,而它对世界和平的理解就是没有人类就没有争斗,于是它会杀死所有人。

Or we'll give it the goal of world peace and its conception of world peace is no people, therefore no fighting and so it will kill us all.

Speaker 1

我认为这些想法完全是异想天开,实际上它们本身就是自相矛盾的。

Now I think these are utterly fanciful; in fact, I think they're actually self-defeating.

Speaker 1

首先这些假设认为我们将聪明到能设计出治愈癌症的人工智能,却又愚蠢到无法详细说明'治愈癌症'的具体含义以避免在此过程中杀死人类。

They first of all assume that we're going to be so brilliant that we can design an artificial intelligence that can cure cancer, but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the process.

Speaker 1

它还假设系统会聪明到能治愈癌症,却又愚钝到无法理解我们所说的'治愈癌症'不包括杀死所有人。

And it assumes that the system will be so smart that it can cure cancer, but so idiotic that it can't figure out that what we mean by curing cancer is not killing everyone.

Speaker 1

因此我认为附带损害情景和价值对齐问题同样基于一种误解。

So I think that the collateral-damage scenario, the value-alignment problem, is also based on a misconception.

Speaker 0

当然,目前的挑战之一是我们还不知道如何构建这两种系统,甚至远未接近这个目标。

So one of the challenges, of course, is that we don't know how to build either system currently, nor are we even close to knowing how.

Speaker 0

当然,这些情况可能一夜之间改变,但现阶段无论朝哪个方向进行理论推演都极具挑战性。

Of course, those things can change overnight, but at this time, theorizing about it is very challenging in in either direction.

Speaker 0

所以问题的核心或许在于:缺乏对实际工程问题的推理能力时,人们的想象力就会天马行空。

So that's probably at the core of the problem: without the ability to reason about the real engineering details at hand, your imagination runs away with things.

Speaker 1

确实如此。

Exactly.

Speaker 0

但我想问问,你认为埃隆·马斯克的动机和思考过程是怎样的?

But let me sort of ask, what do you think was the motivation and the thought process of Elon Musk?

Speaker 0

哦,我我制造自动驾驶汽车。

Oh, I I build autonomous vehicles.

Speaker 0

我研究自动驾驶汽车。

I study autonomous vehicles.

Speaker 0

我研究特斯拉自动驾驶系统。

I study Tesla autopilot.

Speaker 0

我认为这是目前世界上人工智能最大规模的应用之一。

I think it is currently one of the greatest large-scale applications of artificial intelligence in the world.

Speaker 0

它对社会可能产生非常积极的影响。

It has potentially a very positive impact on society.

Speaker 0

那么,一个正在创造这种非常优秀的(姑且称之为)狭义AI系统的人,为何又会对通用AI如此担忧呢?

So how does a person who's creating this very good, quote, unquote, narrow AI system also seem to be so concerned about this other, general AI?

Speaker 0

你认为其中的动机是什么?

What do you think is the motivation there?

Speaker 0

你认为关键点是什么

What do you think is the thing

Speaker 1

嗯,可能得问他本人,但众所周知他行事浮夸冲动——正如我们所见,甚至损害了他自己的目标以及公司的健康发展。

Well, you'd probably have to ask him. He is notoriously flamboyant and impulsive, as we have just seen, to the detriment of his own goals and of the health of the company.

Speaker 1

所以我也不清楚他脑子里在想什么。

So I don't know what's going on in his mind.

Speaker 1

你可能得亲自问他。

You probably have to ask him.

Speaker 1

但我认为专用AI与所谓通用AI的区别并不重要,就像专用AI不会为实现目标而不择手段一样。

But I don't think the distinction between special-purpose AI and so-called general AI is relevant, in the same way that special-purpose AI is not going to do anything conceivable in order to attain a goal.

Speaker 1

所有工程系统在设计时都需要权衡多个目标。

All engineering systems are designed to trade off across multiple goals.

Speaker 1

我们最初造车时,可没因为‘车要跑得快’这个目标就忘记装刹车。

When we build cars in the first place, we didn't forget to install brakes because the goal of a car is to go fast.

Speaker 1

人们意识到,是的,你希望它跑得快,但并非总是如此,所以你也需要安装刹车。

It occurred to people, yes, you want it to go fast, but not always, so you build in brakes too.

Speaker 1

同理,如果一辆车要实现自动驾驶,当我们编程让它选择最短路线去机场时,它也不会因为对角线最短就走对角线、撞倒行人、树木和围栏。

Likewise, if a car is going to be autonomous and we program it to take the shortest route to the airport, it's not gonna take the diagonal and mow down people and trees and fences just because that's the shortest route.

Speaker 1

我们编程时所说的最短路线并非这个意思,而这正是一个智能系统应有的定义。

That's not what we mean by the shortest route when we program it, and that's just what an intelligent system is, by definition.

Speaker 1

它会综合考虑多种约束条件。

It takes into account multiple constraints.
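The point about multiple constraints can be sketched as a toy weighted cost function (purely illustrative; the function, weights, and numbers are hypothetical, not how any real route planner is implemented): once safety is weighted in, the "shortest" route stops being the best route.

```python
# Toy illustration: an engineered system optimizes a weighted sum of
# objectives, not a single goal. All names and weights are hypothetical.

def route_cost(distance_km, pedestrian_risk, comfort_penalty,
               w_dist=1.0, w_risk=100.0, w_comfort=5.0):
    """Lower is better; safety is weighted far above raw distance."""
    return (w_dist * distance_km
            + w_risk * pedestrian_risk
            + w_comfort * comfort_penalty)

# The 'diagonal' is shorter but mows down fences and people: huge risk term.
diagonal = route_cost(distance_km=8.0, pedestrian_risk=0.9, comfort_penalty=0.5)
road = route_cost(distance_km=11.0, pedestrian_risk=0.01, comfort_penalty=0.1)
assert road < diagonal  # the constrained route wins despite being longer
```

With safety weighted two orders of magnitude above distance, the longer road route dominates the diagonal shortcut, which is the transcript's point about brakes and shortest routes.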

Speaker 1

事实上,所谓的通用智能更是如此。

The same is true, in fact even more true, of so called general intelligence.

Speaker 1

也就是说,如果它真正智能,就不会一心只追求某个目标,而忽略其他所有考虑因素和附带影响。

That is, if it's genuinely intelligent, it's not going to pursue some goal single mindedly, omitting every other consideration and collateral effect.

Speaker 1

那不是人工通用智能,那是人工愚蠢。

That's not artificial general intelligence; that's artificial stupidity.

Speaker 1

顺便说一句,我同意你关于自动驾驶汽车有望改善人类福祉的观点。

I agree with you by the way on the promise of autonomous vehicles for improving human welfare.

Speaker 1

我认为这非常惊人,而且令我惊讶的是,在美国本土,每年约有四万人死于高速公路事故,远超恐怖袭击致死人数,却鲜有媒体报道提及这一点。

I think it's spectacular and I'm surprised at how little press coverage notes that in The United States alone, something like forty thousand people die every year on the highways, vastly more than are killed by terrorists.

Speaker 1

我们花费了上万亿美元发动战争以对抗恐怖主义造成的死亡,每年大约只有五六人因此丧生。

We spent a trillion dollars on a war to combat deaths by terrorism, which number about half a dozen a year.

Speaker 1

然而年复一年,四万人在公路上惨遭不幸,这个数字本可以降至近乎零。

Whereas, year in, year out, forty thousand people are massacred on the highways, a number which could be brought down very close to zero.

Speaker 1

所以在人道主义效益方面,我完全支持你的观点。

So I'm with you on the humanitarian benefit.

Speaker 0

请允许我以汽车研发者的身份说,认为工程师会无知到不将安全性纳入系统设计,这种说法让我有些被冒犯。

Let me just mention that, as a person who's building these cars, it is a little bit offensive to me to say that engineers would be clueless enough not to engineer safety into systems.

Speaker 0

我常常彻夜难眠地思考那四万逝去的生命,我所有的工程努力都是为了拯救这些人。

I often stay up at night thinking about those 40,000 people that are dying, and everything I try to engineer is to save those people's lives.

Speaker 0

因此让我深感兴奋的每一项新发明,所有深度学习文献、CVPR会议和NIPS论文中的突破,其根本出发点都是确保安全并造福人类。

So every new invention that I'm super excited about, everything in the deep learning literature and CVPR conferences and NIPS that I'm super excited about, is grounded in making it safe and helping people.

Speaker 0

所以我实在看不出这条发展轨迹如何会突然滑向智能技术产生严重负面影响的境地。

So I just don't see how that trajectory can all of a sudden slip into a situation where intelligence will be highly negative.

Speaker 1

你我当然对此达成共识,我认为这只是人工智能潜在人道主义效益的开端。

You and I certainly agree on that and I think that's only the beginning of the potential humanitarian benefits of artificial intelligence.

Speaker 1

人们高度关注人工智能将如何取代人类工作岗位的问题。

There's been enormous attention to what are we gonna do with the people whose jobs are made obsolete by artificial intelligence.

Speaker 1

却极少注意到那些将被淘汰的工作本身就是糟糕的工作。

But very little attention is given to the fact that the jobs that are going to be made obsolete are horrible jobs.

Speaker 1

人们不再需要采摘作物、整理床铺、驾驶卡车和开采煤矿——这些都是扼杀灵魂的工作,而我们却充斥着同情那些被困在卑微、麻木思维、危险工作中的人的文献。

The fact that people aren't going to be picking crops and making beds and driving trucks and mining coal: these are soul-deadening jobs, and we have a whole literature sympathizing with the people stuck in these menial, mind-deadening, dangerous jobs.

Speaker 1

若能消除这些工作,将是人类的一大福祉。

If we can eliminate them, this is a fantastic boon to humanity.

Speaker 1

当然,解决一个问题又会带来新问题,即如何保障这些人的体面收入。但既然我们有智慧发明能铺床、洗碗、照料病患的机器,我相信我们也有智慧解决收入再分配问题,将部分巨大的经济节省分配给那些不再需要从事铺床工作的人们。

Now granted, you solve one problem and there's another one, namely how do we get these people a decent income. But if we're smart enough to invent machines that can make beds and put away dishes and handle hospital patients, I think we're smart enough to figure out how to redistribute income, to apportion some of the vast economic savings to the human beings who will no longer be needed to make beds.

Speaker 0

好的。

Okay.

Speaker 0

萨姆·哈里斯认为人工智能终将成为生存威胁是显而易见的事。

Sam Harris says that it's obvious that eventually AI will be an existential risk.

Speaker 0

他是那些认为这显而易见的人之一。

He's one of the people who says it's obvious.

Speaker 0

我们不知道这个断言何时会成真,但最终,这是显而易见的。

We don't know when, the claim goes, but eventually it's obvious.

Speaker 0

正因为我们不知道具体时间,所以现在就该为此担忧。

And because we don't know when, we should worry about it now.

Speaker 0

在我看来,这是个非常有趣的论点。

So it's a very interesting argument in my eyes.

Speaker 0

那么我们该如何考虑时间尺度呢?

So how do we think about the time scale?

Speaker 0

当我们对威胁知之甚少时——不像核武器那样明确——该如何思考生存威胁?这种特定威胁可能明天就会发生。

How do we think about existential threats when we know so little about this particular threat, unlike nuclear weapons perhaps? It could happen tomorrow.

Speaker 0

对吧?

Right?

Speaker 0

不过,可能性很大的是,它不会发生。

But very likely, it won't.

Speaker 0

很可能,它会在百年之后发生。

Likely, it'll be a hundred years away.

Speaker 0

那么我们该如何忽视它呢?

So do we ignore it?

Speaker 0

我们该如何讨论它?

How do we talk about it?

Speaker 0

我们需要为此担忧吗?

Do we worry about it?

Speaker 0

我们该怎么看待这些问题?

How do we think about those?

Speaker 1

那是什么?

What is it?

Speaker 0

一种我们可以想象的威胁。

A threat that we can imagine.

Speaker 0

它存在于我们的想象范围内,但超出了我们准确预测的理解极限。

It's within the limits of our imagination, but not within the limits of our understanding sufficient to accurately predict it.

Speaker 1

但但我们在对抗的究竟是什么?

But what is the "it" that we're fighting?

Speaker 0

AI,抱歉。

AI, sorry.

Speaker 0

AI,即AI作为存在性威胁。

AI, AI being the existential threat.

Speaker 0

AI总能怎样?

AI can always... How?

Speaker 1

比如,奴役我们或将我们变成回形针?

Like, enslaving us or turning us into paper clips?

Speaker 0

我认为从Sam Harris的角度看,最具说服力的是回形针情景。

I think the most compelling from the Sam Harris perspective would be the paper clip situation.

Speaker 1

是啊。

Yeah.

Speaker 1

我是说,我觉得这完全是异想天开。

I mean, I just think it's totally fanciful.

Speaker 1

我是说,别构建那个系统。

I mean, don't build that system.

Speaker 1

首先,工程学的准则是不要在未经测试前就实施一个具有巨大控制力的系统。

First of all, the code of engineering is that you don't implement a system with massive control before testing it.

Speaker 1

也许工程文化会发生根本性改变,那样我才会担心。

Now perhaps the culture of engineering will radically change, then I would worry.

Speaker 1

但我没看到任何迹象表明工程师会突然做出蠢事,比如让一个未经测试的系统控制发电厂。

But I don't see any signs that engineers will suddenly do idiotic things, like putting a system they haven't tested first in control of an electric power plant.

Speaker 1

所有这些场景不仅假设存在近乎魔法般强大的智能,其目标包括治愈癌症(这可能是个不连贯的目标,因为癌症种类太多)或实现世界和平。

All of these scenarios not only imagine an almost magically powerful intelligence, with goals like curing cancer, which is probably an incoherent goal because there are so many different kinds of cancer, or bringing about world peace.

Speaker 1

这种目标本身要如何定义呢?

I mean how do you even specify that as a goal?

Speaker 1

这些场景还假设能控制宇宙中每个分子,这本身就不现实。而且我们不会像对待任何工程系统那样,未经测试就把这些系统接入基础设施。

But the scenarios also imagine some degree of control of every molecule in the universe, which is not only itself unlikely, but we would not start to connect these systems to infrastructure without testing, as we would with any engineering system.

Speaker 1

或许有些工程师会不负责任,所以我们需要通过法律和监管责任来确保工程师不会做出他们自己标准下的愚蠢行为。

Now maybe some engineers will be irresponsible, and we need legal and regulatory responsibility implemented so that engineers don't do things that are stupid by their own standards.

Speaker 1

但我从未见过足够可信的、关于存在性威胁的情景,值得投入大量脑力去预防它。

But I've never seen a plausible enough scenario of existential threat to justify devoting large amounts of brainpower to forestalling it.

Speaker 0

所以你相信工程界整体的理性力量,也就是你在新书中所论证的理性与科学,正是这种力量引导新技术的开发,使其安全并持续保持安全。

So you believe in the power, en masse, of engineering, of reason and science as you argue in your latest book, to be the very thing that guides the development of new technology so that it's safe and also keeps us safe.

Speaker 1

是的。

Yeah.

Speaker 1

这与当前工程文化中的安全理念相同,比如飞机设计中的安全思维。

Yes, granted, the same culture of safety that is currently part of the engineering mindset for airplanes, for example.

Speaker 1

所以,我认为我们不应该抛弃这种理念,突然实施未经测试的全能系统。

So, yeah, I don't think that should be thrown out the window, and that untested, all-powerful systems should suddenly be implemented.

Speaker 1

但没有理由认为会这样,事实上,如果你观察人工智能的进展,它一直令人印象深刻,尤其是近十年来。

But there's no reason to think it would be, and in fact, if you look at the progress of artificial intelligence, it's been impressive, especially in the last ten years or so.

Speaker 1

但那种认为会突然出现阶跃式发展,在我们意识到之前它就变得无所不能,发生某种递归式自我改进、某种"foom"式智能爆炸的想法,同样是异想天开。

But the idea that suddenly there'll be a step function, that all of a sudden, before we know it, it will be all-powerful, that there'll be some kind of recursive self-improvement, some kind of "foom", is also fanciful.

Speaker 1

当然,如今让我们印象深刻的技术,比如深度学习,需要用数十万乃至数百万个示例来训练系统;而世界上并没有数十万个以治愈癌症为典型代表的那类问题可供训练。

Certainly the technology that now impresses us, such as deep learning, where you train a system on hundreds of thousands or millions of examples: there aren't hundreds of thousands of problems of which curing cancer is a typical example.

Speaker 1

因此,过去五年推动AI进步的技术类型,并不会导致这种指数级突然自我改进的幻想。

And so the kinds of techniques that have allowed AI to improve in the last five years are not the kind that are gonna lead to this fantasy of exponential, sudden self-improvement.

Speaker 1

我认为这更像是一种魔法思维。

I think it's a kind of magical thinking.

Speaker 1

这并非基于我们对AI实际运作方式的理解。

It's not based on our understanding of how AI actually works.

Speaker 0

现在给我个机会说说。

Now give me a chance here.

Speaker 0

你刚才用了'异想天开'、'魔法思维'这样的词。

So you said fanciful, magical thinking.

Speaker 0

Sam Harris在他的TED演讲中说,思考AI灭绝人类文明在某种程度上是种智力游戏。

In his TED Talk, Sam Harris says that thinking about AI killing all human civilization is somehow fun intellectually.

Speaker 0

现在我必须说,作为一名科学家和工程师,我并不觉得这有趣。

Now I have to say, as a scientist and engineer, I don't find it fun.

Speaker 0

但当我和非AI领域的朋友喝啤酒时,这个话题确实有种趣味和吸引力,就像讨论一集《黑镜》,或者设想我们刚被告知一颗巨大的陨石正朝地球飞来,诸如此类。

But when I'm having a beer with my non-AI friends, there is indeed something fun and appealing about it, like talking about an episode of Black Mirror, or considering that we were just told that a large meteor is headed towards Earth, something like that.

Speaker 0

你能理解这种趣味感吗?你明白其中的心理机制吗?

And can you relate to this sense of fun, and do you understand the psychology of it?

Speaker 1

是的。

Yeah.

Speaker 1

没错。

That's right.

Speaker 1

好问题。

Good question.

Speaker 1

我个人并不觉得这有趣。

I personally don't find it fun.

Speaker 1

实际上我觉得这有点浪费时间,因为我们应该关注真正的威胁,比如流行病、网络安全漏洞、核战争的可能性,当然还有气候变化。

I actually find it kind of a waste of time, because there are genuine threats that we ought to be thinking about, like pandemics, like cybersecurity vulnerabilities, like the possibility of nuclear war, and certainly climate change.

Speaker 1

这些话题已经足够我们讨论很久了。不过我认为萨姆确实指出了关键一点:存在一个所谓的理性主义社群,他们热衷于运用脑力构想出普通人想不到的复杂场景。

That's enough to fill many conversations. And I think Sam did put his finger on something, namely that there is a community, sometimes called the rationality community, that delights in using its brainpower to come up with scenarios that would not occur to mere mortals, to less cerebral people.

Speaker 1

所以发现前人未曾担忧过的新问题,确实能带来某种智力上的刺激感。

So there is a kind of intellectual thrill in finding new things to worry about that no one has worried about yet.

Speaker 1

不过,我实际上认为这不仅是一种无法带给我特别愉悦的乐趣,而且它还可能存在有害的一面,即让人们陷入如此多的恐惧与宿命论中——觉得有太多方式会导致死亡、毁灭我们的文明,以至于我们不如趁还能享受时及时行乐。

I actually think, though, that not only is it a kind of fun that doesn't give me particular pleasure, but there can be a pernicious side to it: namely, that you overcome people with such dread, such fatalism, that there are so many ways to die, to annihilate our civilization, that we may as well enjoy life while we can.

Speaker 1

我们对此无能为力。

There's nothing we can do about it.

Speaker 1

如果气候变化没能终结我们,失控的机器人也会,所以现在及时行乐吧。

If climate change doesn't do us in, then runaway robots will, so let's enjoy ourselves now.

Speaker 1

我们必须分清主次。

We've got to prioritize.

Speaker 1

我们需要关注那些几乎确定会发生的威胁,比如气候变化,并将它们与那些仅存在于想象中且概率微乎其微的威胁区分开来。

We have to look at threats that are close to certainty, such as climate change, and distinguish those from ones that are merely imaginable but with infinitesimal probabilities.

Speaker 1

我们还必须考虑人们的'忧虑预算'。

And we have to take into account people's worry budget.

Speaker 1

你不可能为所有事情担忧。

You can't worry about everything.

Speaker 1

而如果你播撒恐惧、惊慌、恐怖和宿命论,可能会导致一种麻木不仁的状态。

And if you sow dread and fear and terror and fatalism, it can lead to a kind of numbness.

Speaker 1

嗯,这些问题确实令人不堪重负,工程师们迟早会害死我们所有人。

Well, these problems are just overwhelming, and the engineers are gonna kill us all.

Speaker 1

所以,我们要么摧毁整个科技基础设施,要么就趁还能享受时及时行乐。

So let's either destroy the entire infrastructure of science and technology, or let's just enjoy life while we can.

Speaker 0

确实存在一种担忧的界限,我对工程领域的许多问题都忧心忡忡。

So there's a certain line of worry. I'm worried about a lot of things in engineering.

Speaker 0

当这种忧虑越过某个界限,任其发展时,就会变成令人瘫痪的恐惧而非有益的警惕,这正是他们所强调的。

There's a certain line, and when you allow the worry to cross it, it becomes paralyzing fear as opposed to productive fear, and that's kind of what they're highlighting.

Speaker 1

完全正确。

Exactly right.

Speaker 1

我们也看到一些...我们知道人类的应对措施与风险并不匹配,因为认知心理学的基本原则是:对风险的感知——进而对恐惧的感知——是由可想象性驱动的,而非数据。

And we know that human effort is not well calibrated against risk, because a basic tenet of cognitive psychology is that the perception of risk, and hence the perception of fear, is driven by imaginability, not by data.

Speaker 1

因此我们错误地投入大量资源防范恐怖主义——平均每年仅造成约六名美国人死亡(除911事件外)。

And so we misallocate vast amounts of resources to avoiding terrorism, which kills on average about six Americans a year, with the one exception of nine eleven.

Speaker 1

我们入侵他国,设立全新的政府部门,耗费巨额资源和生命,只为防范微不足道的风险。

We invade countries, we invent entire new departments of government, with massive expenditures of resources and lives, to defend ourselves against a trivial risk.

Speaker 1

而那些确定存在的风险,比如你提到的交通事故死亡,甚至那些尚未发生但足以令人担忧的潜在威胁,如大流行病、核战争,却受到远远不足的关注。

Whereas guaranteed risks, and you mentioned traffic fatalities as one of them, and even risks that are not here yet but are plausible enough to worry about, like pandemics and nuclear war, receive far too little attention.

Speaker 1

在总统辩论中,从未讨论过如何降低核战争的风险。

In presidential debates, there's no discussion of how to minimize the risk of nuclear war.

Speaker 1

比如,关于恐怖主义的讨论倒是很多。

Lots of discussion of terrorism, for example.

Speaker 1

因此,我认为必须根据实际伤害发生的概率,来调整我们对恐惧、忧虑和防范计划的投入比例。

And so I think it's essential to calibrate our budget of fear, worry, concern, and planning to the actual probability of harm.

Speaker 0

是的。

Yep.

Speaker 0

那么让我问这个问题。

So let me ask this question, then.

Speaker 0

说到可想象性,你提到理性思考很重要,而我最喜欢的一位喜欢通过想象力进行迷人探索、触及理性边缘的人物是乔·罗根。

So speaking of imaginability, you said that it's important to think with reason, and one of my favorite people who likes to dip into the outskirts of reason through fascinating exploration of his imagination is Joe Rogan.

Speaker 1

哦,是的。

Oh, yes.

Speaker 0

他曾一度相信许多阴谋论,后来通过理性逐渐摒弃了那些信念。

So, someone who used to believe a lot of conspiracies, and through reason has stripped away a lot of those beliefs.

Speaker 0

所以看他如何通过理性思维抛弃对大脚怪和911事件的阴谋论,这过程其实相当引人入胜。

So it's actually fascinating to watch him, through rationality, kind of throw away the ideas of Bigfoot and the nine eleven conspiracies.

Speaker 0

我不太确定具体细节。

I'm I'm not sure exactly.

Speaker 1

化学尾迹。

Chemtrails.

Speaker 1

我不清楚他相信什么。

I don't know what he believes in.

Speaker 1

是啊。

Yeah.

Speaker 1

没错。

Yes.

Speaker 1

好的。

Okay.

Speaker 0

但他不再相信了。

But he no longer believes in them.

Speaker 0

不。

No.

Speaker 0

不。

No.

Speaker 0

没错。

That's right.

Speaker 1

不。

No.

Speaker 1

他已经成为了一股真正的向善力量。

He's become a real force for good.

Speaker 0

是的。

Yep.

Speaker 0

你二月份上了乔·罗根的播客,进行了场精彩对话,不过我记得你们没怎么聊人工智能。

So you were on the Joe Rogan podcast in February and had a fascinating conversation, but as far as I remember, didn't talk much about artificial intelligence.

Speaker 0

我几周后会上这个播客节目。

I will be on this podcast in a couple weeks.

Speaker 0

乔非常担心人工智能的生存威胁。

Joe is very much concerned about existential threat of AI.

Speaker 0

所以我才希望你能深入探讨这个话题。

That's why I was hoping that you'd get into that topic.

Speaker 0

从这个角度看,他代表了很多从宏观层面看待AI话题的人。

And in this way, he represents quite a lot of people who look at the topic of AI from the 10,000-foot level.

Speaker 0

作为沟通练习,你说过保持理性思考这些问题很重要。

So as an exercise of communication, you said it's important to be rational and reason about these things.

Speaker 0

让我请教一下,如果你要指导我作为AI研究人员如何向乔和公众谈论AI,你会给出什么建议?

Let me ask, if you were to coach me as an AI researcher about how to speak to Joe and the general public about AI, what would you advise?

Speaker 1

简短的答案是阅读我在《当下的启蒙》中关于AI的章节。

Well, the short answer would be to read the sections that I wrote in Enlightenment Now about AI.

Speaker 1

但更详细的解释是,我认为应该强调——作为工程师你很适合提醒人们工程文化的本质就是安全导向的,我在《当下的启蒙》中还绘制了各类意外事故的死亡率图表。

But a longer answer would be, I think, to emphasize, and I think you're very well positioned as an engineer to remind people about this, that the culture of engineering really is safety oriented. In another discussion in Enlightenment Now, I plot rates of accidental death from various causes.

Speaker 1

飞机失事、车祸、职业事故,甚至雷击致死事件,这些死亡率都大幅下降,因为工程文化的核心就是如何消除致命风险。

Plane crashes, car crashes, occupational accidents, even deaths by lightning strikes, and they all plummet, because the culture of engineering is about how you squeeze out the lethal risks.

Speaker 1

火灾致死、溺水窒息等各类事故死亡率都因工程技术的进步而急剧下降——我必须承认,直到看到那些数据图表,我才真正理解这一点。

Deaths by fire, by drowning, by asphyxiation, all of them drastically declined because of advances in engineering that, I've got to say, I did not appreciate until I saw those graphs.

Speaker 1

而这正是因为有像你这样的人才在不断反思:天啊,我的发明会不会伤害到人们?并运用智慧来预防这种情况发生。

And it is exactly because of people like you, who stop and think, oh my god, is what I'm inventing likely to hurt people, and who deploy ingenuity to prevent that from happening.

Speaker 1

虽然我不是工程师,但在麻省理工度过了22年,所以对工程文化还是有所了解的。

Now, I'm not an engineer, although I spent twenty-two years at MIT, so I know something about the culture of engineering.

Speaker 1

据我所知,这就是工程师的思维方式。

My understanding is that this is the way you think if you're an engineer.

Speaker 1

至关重要的是,当涉及人工智能时,这种工程文化绝不能突然被抛弃。

And it's essential that that culture not be suddenly switched off when it comes to artificial intelligence.

Speaker 1

所以,我的意思是,这确实可能是个问题,但有什么理由认为这种文化会被抛弃吗?

So, I mean, that could be a problem, but is there any reason to think it would be switched off?

Speaker 0

我不这么认为。

I don't think so.

Speaker 0

首先,没有足够多的工程师为这种思维方式发声,为这种热情、对人性的积极看法,以及你们努力创造事物的正面价值发声。

And for one, there are not enough engineers speaking up for this way of thinking, for the excitement, for the positive view of human nature, and for the positivity of what you're trying to create.

Speaker 0

就像,我们试图发明的每样东西都是为了造福世界。

Like, everything we try to invent is trying to do good for the world.

Speaker 0

但让我问问你关于消极心理的问题。

But let me ask you about the psychology of negativity.

Speaker 0

看起来客观地说,不考虑具体话题,对未来持悲观态度似乎比乐观态度让你显得更聪明,无论话题是什么。

It seems, just objectively, not considering the topic, that being negative about the future makes you sound smarter than being positive about the future, regardless of topic.

Speaker 0

我的这个观察正确吗?

Am I correct in this observation?

Speaker 0

如果是这样,你认为原因是什么?

And if so, why do you think that is?

Speaker 1

是的。

Yeah.

Speaker 1

我认为确实存在这种现象,就像讽刺作家汤姆·莱勒说的那样:总是预测最坏的情况,你就会被奉为先知。

I think there is that phenomenon that, as the satirist Tom Lehrer said, always predict the worst and you'll be hailed as a prophet.

Speaker 1

这可能是我们整体负面偏见的一部分。

It may be part of our overall negativity bias.

Speaker 1

作为一个物种,我们对负面信息比正面信息更为敏感。

We are as a species more attuned to the negative than the positive.

Speaker 1

我们对损失的恐惧超过对收获的喜悦,这可能为先知们创造了空间,让他们提醒我们注意那些被忽视的危害、风险和损失。

We dread losses more than we enjoy gains, and that might open up a space for prophets to remind us of harms and risks and losses that we may have overlooked.

Speaker 1

所以我认为确实存在这种不对称性。

So I think there is that asymmetry.

Speaker 0

你写过一些我最喜欢的书,题材广泛。

So you've written some of my favorite books, all over the place.

Speaker 0

从《当下的启蒙》到《人性中的善良天使》、《白板》、《心智探奇》,还有那本关于语言的书——《语言本能》。

Starting from Enlightenment Now, to The Better Angels of Our Nature, The Blank Slate, How the Mind Works, and the one about language, The Language Instinct.

Speaker 0

比尔·盖茨也是你的忠实粉丝,他评价你最新著作时说‘这是我新的有史以来最喜欢的书’。

Bill Gates, a big fan too, said of your most recent book that it's "my new favorite book of all time."

Speaker 0

那么作为作家,在你早年生活中,哪本书对你世界观的形成产生了深远影响?

So for you as an author, what was a book early on in your life that had a profound impact on the way you saw the world?

Speaker 1

确实,《当下的启蒙》这本书受到了大卫·多伊奇的《无限的开端》影响。

Certainly this book, Enlightenment Now, was influenced by David Deutsch's The Beginning of Infinity.

Speaker 1

这是一部对知识及其改善人类境况力量的深刻反思。

A rather deep reflection on knowledge and the power of knowledge to improve the human condition.

Speaker 1

书中以这样的智慧箴言作结:问题不可避免,但只要有正确的知识就能解决;而解决方案又会催生新的问题,需要继续解决。

It ends with bits of wisdom, such as that problems are inevitable but solvable given the right knowledge, and that solutions create new problems that have to be solved in their turn.

Speaker 1

我认为这种关于人类处境的智慧影响了这本书的创作。

That's, I think, a kind of wisdom about the human condition that influenced the writing of this book.

Speaker 1

有些书非常优秀但鲜为人知,其中部分书目列在我网站的个人页面上。

There are some books that are excellent but obscure, some of which I list on a page of my website.

Speaker 1

我曾读过一本名为《暴力史》的自费出版著作,作者是政治学家詹姆斯·佩恩,书中论述了暴力的历史性衰退,这正是《人性中的善良天使》的灵感来源之一。

I read a book called The History of Force, self published by a political scientist named James Payne on the historical decline of violence, and that was one of the inspirations for the better angels of our nature.

Speaker 0

关于更早期的呢?

What about early on?

Speaker 0

如果回顾你青少年时期,

If you look back to when you were maybe a teenager...

Speaker 1

那时我特别喜欢一本叫《从一到无穷大》的书。

I loved a book called One, Two, Three... Infinity.

Speaker 1

我年轻时读过物理学家乔治·伽莫夫写的这本书。

When I was a young adult, I read that book by George Gamow, the physicist.

Speaker 1

嗯。

Mhmm.

Speaker 1

这本书用通俗幽默的方式解释了相对论、数论、维度理论以及高维空间,我认为即使出版七十年后读来依然令人愉悦。

It had very accessible and humorous explanations of relativity, of number theory, of dimensionality and higher-dimensional spaces, in a way that I think is still delightful seventy years after it was published.

Speaker 1

我喜欢《时代生活》科学系列丛书。

I like the Time Life Science series.

Speaker 1

这些是我母亲订阅的月刊,每个月都会收到一本不同主题的书籍。

These are books that would arrive every month that my mother subscribed to, each one on a different topic.

Speaker 1

有讲电学的,有讲森林的,有讲进化论的,还有一本专门讲心智科学。

One would be on electricity, one would be on forests, one would be on evolution, and then one was on the mind.

Speaker 1

我对'心智可以成为科学研究对象'这个观点特别着迷,这本书对我影响也很深。

And I was just intrigued that there could be a science of mind, and I would cite that book as an influence as well.

Speaker 1

后来……

Then later...

Speaker 0

你爱上了研究心智这个想法吗?

you fell in love with the idea of studying the mind?

Speaker 1

是那本

Was that one

Speaker 0

吸引你的东西吗?

thing that grabbed you?

Speaker 1

可以说是其中之一吧。

It was one of the things, I would say.

Speaker 1

大学时我读了诺姆·乔姆斯基的《语言反思》,他职业生涯大部分时间都在麻省理工学院度过。

I read as a college student the book Reflections on Language by Noam Chomsky, who spent most of his career here at MIT.

Speaker 1

理查德·道金斯的《盲眼钟表匠》和《自私的基因》两本书影响巨大,部分原因在于内容,但更因其文风——能用生动散文阐述抽象概念的能力。

Richard Dawkins' two books, The Blind Watchmaker and The Selfish Gene, were enormously influential, mainly for the content but also for the writing style, the ability to explain abstract concepts in lively prose.

Speaker 1

斯蒂芬·杰伊·古尔德的首部文集《自达尔文以来》,同样是生动写作的绝佳范例。

Stephen Jay Gould's first collection, Ever Since Darwin, is also an excellent example of lively writing.

Speaker 1

乔治·米勒是大多数心理学家都熟悉的一位心理学家,他提出了人类记忆容量为7±2个信息块的理论。

George Miller, a psychologist that most psychologists are familiar with, came up with the idea that human memory has a capacity of seven plus or minus two chunks.

Speaker 1

这可能是他最著名的学术主张。

That's probably his biggest claim to fame.

Speaker 1

但他还写过几本关于语言与沟通的著作,我在本科时读过。

But he wrote a couple of books on language and communication that I read as an undergraduate.

Speaker 1

同样文笔优美且思想深邃。

Again, beautifully written and intellectually deep.

Speaker 0

太棒了。

Wonderful.

Speaker 0

史蒂文,非常感谢你今天抽空接受采访。

Steven, thank you so much for taking the time today.

Speaker 1

这是我的荣幸。

My pleasure.

Speaker 1

非常感谢,莱克斯。

Thanks a lot, Lex.
