本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
人类智能会成为可能性的上限吗?
Is human intelligence going to be the upper limit of what's possible?
我认为绝对不是。
I think absolutely not.
随着我们对构建智能系统理解的深入,我们将看到这些AI远超人类智能。
As our understanding of how to build intelligent systems develops, we're gonna see these AIs go far beyond human intelligence.
欢迎收听谷歌DeepMind播客,我是主持人汉娜·弗莱教授。
Welcome to Google DeepMind, the podcast with me, your host, professor Hannah Fry.
通用人工智能(AGI)即将到来。
AGI is coming.
这似乎是所有人都在说的。
That's what everyone seems to be saying.
今天播客的嘉宾是谷歌DeepMind首席AGI科学家兼联合创始人肖恩·莱格。
Well, today, my guest on the podcast is Shane Legg, chief ADI scientist and cofounder of Google DeepMind.
肖恩数十年来一直在谈论AGI,甚至在他口中那个被视为'疯狂边缘'的年代就开始了。
Shane has been talking about AGI for decades, even back when it was considered, in his words, the lunatic fringe.
他被认为是推广这一术语的先驱,并最早尝试探索其真正含义的人之一。
He is credited with popularising the term and making some of the earliest attempts to work out what it might actually be.
今天我们将与他探讨如何定义AGI,如何在它到来时识别它,如何确保其安全性和伦理性,以及最关键的是——实现AGI后的世界会是什么样子。
Now, in the conversation today, we're going to talk to him about how AGI should be defined, how we might recognise it when it arrives, how to make sure that it is safe and ethical, and then crucially, what the world looks like once we get there.
我必须告诉你们,肖恩对社会在未来十年将受到的影响表现得异常坦诚。
And I have to tell you, Shane was remarkably candid about the ways that the whole of society will be impacted over the coming decade.
这场讨论绝对值得您继续收听。
It's definitely worth staying with us for that discussion.
欢迎来到播客节目,肖恩。
Welcome to the podcast, Shane.
我们上次与您对话是在五年前,当时您向我们阐述了您对AGI形态的构想。
We last spoke to you five years ago, and then you were telling us your your sort of vision for what AGI might look like.
就目前我们拥有的人工智能系统而言,您认为它们是否已展现出AGI的零星火花?
In terms of the AI systems that we've got now today, do you think that they're showing little sparks of being AGI?
是的,我认为远不止是零星火花。
Yeah, I think it's a lot more than sparks.
不仅仅是火花?
More than sparks?
哦,是的,没错。
Oh, yeah, yeah.
我对AGI的定义,有时也称为最小AGI,是指至少能完成人类典型认知任务的人工智能体。
So my definition of AGI, or sometimes called minimal AGI, is an artificial agent that can at least do the kinds of cognitive things people can typically do.
对吧?
Yeah?
我喜欢这个标准,因为如果低于这个水平,就会让人觉得它未能完成我们期望人类能完成的认知任务,感觉我们还没真正实现目标。
And I like that bar because if it's less than that, it feels like, well, it's failing to do cognitive things that we'd expect people to be able to do, so it feels like we're not really there yet.
另一方面,如果我把最低标准定得远高于这个水平,就会达到一个很多人实际上也无法完成我们对AGI要求的某些任务的程度。
On the other hand, if I set the minimal bar much higher than that, I'm sitting at a level where a lot of people wouldn't actually be able to do some of the things we're requiring of the AGI.
所以我们认为人类具有某种,我不知道该怎么称呼它,你可以称之为通用智能。
So we believe people have some sort of, I don't know, general intelligence, you might call it.
因此,如果一个AI至少能完成人类典型的认知任务,甚至可能更多,那么我们就应该认为它属于这一类。
So it feels like if an AI can do the kinds of cognitive things people can typically do, at least, possibly more, then we should consider it within that kind of a class.
我们现在拥有的这些东西,在这些级别中处于什么位置?
This stuff that we have now, where is it on those levels?
所以,这是不均衡的。
So, it's uneven.
在某些方面,比如语言能力,它已经远超人类。
So, it's already much, much better than people, say, speaking languages.
它能说150种语言之类的。
So, it'll speak 150 languages or something.
没人能做到这一点。
Nobody can do that.
而且它的常识储备非常惊人。
And its general knowledge is phenomenal.
我可以问它关于我长大的新西兰小镇郊区的事,它碰巧知道一些相关信息。
I can ask it about, you know, this suburb I grew up in, a small town in New Zealand, and it happens to know things about it.
对吧?
Right?
另一方面,它们仍然无法完成我们通常认为人类能够做到的事情。
On the other hand, they still fail to do things that we would expect people to typically be able to do.
它们在持续学习方面表现不佳,难以在较长时间内掌握新技能。
They're not very good at continual learning, learning new sort of skills over an extended period of time.
而这恰恰至关重要。
And that's incredibly important.
举个例子,当你开始一份新工作时,没人指望你刚入职就能完全胜任,但你需要逐渐学习提升。它们在推理方面也存在弱点,尤其是视觉推理这类任务。
For example, if you're taking on a new job, you know, you're not expected to know everything to be performant in the job when you arrive, but you have to learn over time to do They're also have some weaknesses in reasoning, particularly things like visual reasoning.
嗯。
Mhmm.
人工智能非常擅长识别物体。
So the AIs are very good at, say, recognizing objects.
它们能识别猫、狗等各种动物。
They can recognize cats and dogs and all these sorts of things.
这项能力已经发展多年了。
They've done that for a while.
但如果你让它们对场景中的事物进行推理,它们就会显得力不从心。
But if you ask them to reason about things within a scene, they get a lot more shaky.
比如你可能会说,你能看到一辆红色汽车和一辆蓝色汽车,然后问它们哪辆车更大。
So you you might say, well, you can see a red car and a blue car, and you ask them which car is bigger.
人们明白这其中涉及透视关系。
People understand that there's perspective involved.
也许蓝色汽车实际上更大,但由于距离更远,所以看起来更小。
And maybe the blue car is bigger, but it looks smaller because it's further away.
对吧?
Right?
人工智能在这方面表现不佳。
AIs are not so good at that.
或者当你面对某种由节点和连接线构成的图表时,比如一个网络。
Or if you have some sort of diagram with nodes and edges between them Like a network.
这里说的网络,或者数学家所称的图结构,当你针对它提问时,系统需要计算其中的数量
A network here or a graph, as a mathematician would say, and you ask questions about that, and it has to count the number of
可以称之为辐条。
Spokes, as it were.
那些辐条
Spokes that
从图上的某个节点延伸出来。
are coming out of, you know, one of the nodes on the graph.
人类通过关注不同的点,然后在心理上或许进行计数等方式来完成这一过程。
A person does that by paying attention to different points and then actually mentally maybe counting them or what have you.
人工智能不太擅长处理这类事情。
The AI's not very good at doing that type of thing.
我认为这些方面并不存在根本性的障碍,我们已有关于如何开发能处理这些问题的系统的构想,并且看到多个相关领域的指标正随时间逐步提升。
I don't think there are fundamental blockers on any of these things, and we have ideas on how to develop systems that can do these things, and we see metrics improving over time in a bunch of these areas.
因此我的预期是,经过若干年后,这些问题都将得到解决,但目前尚未实现。
So my expectation is over a number of years, these things will all get addressed, but they're not there yet.
我认为这需要一些时间来完成,因为人类能执行的各种认知任务构成了一条相当长的尾巴,而人工智能在这些方面仍低于人类的表现水平。
And I think it's gonna take a little bit of time to go through that because it's quite a long tail of all sorts of cognitive things that people can do where the AIs are still below human performance.
当我们实现这一目标时——我认为这将在几年内到来,具体时间尚不明确——人工智能将变得更加可靠,这将从多方面大幅提升它们的价值。
As we reach that, and I think that's coming in a few years, unclear exactly, the AIs will be a lot more reliable, and that will increase their value quite a lot in many ways.
但在此期间,它们的能力也将持续增强,达到专业水平甚至更高,比如在编程、数学领域,以及已知语言、世界常识等方面。
But they will also, during that period, become increasingly capable, like, to professional level and beyond, and maybe in coding, mathematics, already in, you know, known languages, general knowledge of the world, and stuff like this.
所以这其实是一个发展不均衡的过程。
So it's kind of a it's an uneven thing.
如果你认为它们会随着时间的推移变得更可靠,这是否仅仅意味着需要扩大模型规模、进行更大范围的训练?
If you think that they will become more reliable over time, is it just a question of making the models bigger, doing things at a larger scale?
还是需要更多数据?
Is it more data?
我的意思是,你们是否有明确的路径让它们变得更可靠?
I mean, do you have a clear path to make them more reliable?
我认为我们有。
I think we do.
这并不是单一因素能决定的。
And it's not one particular thing.
这不仅仅是更大的模型或更多的数据。
It's just not bigger models or more data.
在某些情况下,是需要特定类型的更多数据。
In some cases, it's more data of a particular kind.
然后当你收集需要视觉推理的数据时,模型就能学会如何处理。
And then when you collect data that requires that, say, visual reasoning, then the models learn how to do it.
在某些情况下,它需要算法层面的改进,比如内部的新流程。
In some cases, it requires algorithmic things, like new processes within.
举个例子,如果你想实现持续学习,让AI随时间不断学习,你可能需要建立某种流程,将新信息存储在某种检索系统或情景记忆中。
So, example, if you want to do continual learning, so the AI keeps learning over time, you might need some process whereby new information is maybe stored in some sort of retrieval system, episodic memory, if you like.
然后你可能需要建立系统,将这些随时间积累的信息重新训练到底层模型中。
And then you might have systems whereby that information over time is trained back into some underlying model.
所以这需要的不仅仅是更多数据。
So that requires more than just more data.
它需要某种算法和架构上的改变。
It requires some sort of algorithmic and architectural changes.
所以我认为答案是这些因素的结合,具体取决于问题的性质。
So I think the answer is a combination of these things, and it depends on what the particular issue is.
我知道你不认为AGI应该是一个简单的二选一问题,比如跨越某个阈值,而更像是某种程度上的连续谱系,存在不同等级。
I know that you don't think that AGI should be this single yesno, like a threshold that you cross, but more of a sort of spectrum, as it were, that you have these levels.
请详细解释一下这个观点。
Just talk me through that.
是的,我提出了所谓的'最小AGI'概念,指的是一个人工智能体至少能完成我们通常期望人类能够完成的各种认知任务。
Yeah, so I have what I call minimal AGI, and that's when you have an artificial agent that can at least do all sorts of cognitive things that we would typically expect people to be able to do.
我们目前尚未达到这个水平,可能还需要一年,也可能是五年。
We're not there yet, and it could be one year, it could be five years.
我猜测大概两年左右吧。
I'm guessing probably about two or so.
那么这就是最低等级了?
So that's the lowest level then?
这就是你所说的'最小AGI'。
That's my Minimal AGI.
我称之为最小AGI。
What I call minimal AGI.
这个节点意味着,当AI执行认知任务时,其失败方式不会让我们感到意外——就像人类执行时一样。
That's the point at which I'd say, okay, this AI is no longer failing in ways that we would find surprising if we gave a person that cognitive task.
我认为这是最低标准。
And I think that's the minimum bar.
但这并不意味着我们完全掌握了实现人类智能所有能力的方法,因为现实中存在非凡的人类——他们能完成惊人的认知壮举,比如创立新的物理或数学理论,创作出宏伟的交响乐,写出杰出的文学作品等等。
Now that doesn't mean we understand fully how to reach the capabilities of human intelligence, because you could have extraordinary people who go and do amazing cognitive feats, inventing new theories in physics or maths or developing, you know, incredible symphonies or doing all the writing, amazing literature, and so on.
即使我们的AI能做到典型的人类认知行为,也不代表我们掌握了实现人类非凡认知成就所需的所有配方和算法。
And just because our AI can do what's typical of human cognition doesn't necessarily mean we know all the recipes and algorithms, everything required to achieve very extraordinary feats of human cognition.
当我们的AI能够实现人类认知的全部可能性时,我们才能真正确定至少达到了完全的人类水平。
Once we can, with our AI, achieve the full spectrum of what's possible with human cognition, then we really know that we've nailed, you know, at least a fully human level.
因此我们称之为完全AGI。
And so we call that full AGI.
那么是否存在超越这个水平的阶段?
And then is there a level beyond that?
是的。
Yeah.
我认为一旦开始超越人类认知能力的范畴,就进入了所谓的人工超级智能(ASI)领域。
So I think once you start going beyond what is possible with human cognition, you start heading into something that's called artificial superintelligence or ASI.
目前对此并没有真正完善清晰的定义。
There aren't really good, clear definitions of that.
实际上我曾多次尝试为这个概念提出一个合理的定义。
I've actually tried on a number of occasions to come up with a good definition of that.
但我提出的每个定义都存在某些重大问题。
Every definition I've ever come up with has some sort of significant problems.
不过粗略来说,它意味着这是一种AGI,具有AGI的通用性,但其能力已远超人类所能达到的水平。
But at least in vague terms, it means something like it's an AGI, so it has the generality of an AGI, but it's now so capable in general, it's somehow far beyond what humans are capable of reaching.
因为我知道你是最早提出AGI这个术语的人之一。
Because I know that you were one of the people who helped to coin that phrase AGI.
你认为这个术语现在仍然有用吗?
Do you think that it's still useful as a phrase?
我是说,现在有太多相互竞争的定义了。
I mean, there's so many competing definitions now.
这有点像大家都在用的流行词。
It's sort of like the buzzword that everyone's using.
而且你是对的。
And and you're right.
它的描述方式几乎就像是非此即彼的离散选择。
The way that it's described is almost like a yes, no, like a kind of discrete Yeah.
更像是需要跨越的一条界限,而不是你描述的这种连续的等级体系。
Line that gets crossed rather than this continuum almost of levels as you're describing.
是的。
Yeah.
所以当我提出这个术语时,我更多是把它当作一个研究领域来考虑的。因为我当时在和本·戈策尔交谈,我之前和他共事过一年左右,他想写一本关于AI传统愿景的书——那种能执行多种任务的思维机器,而不是当时典型的单一功能系统,比如只会打扑克或只做文本转语音。
So when I proposed the term, I was thinking of it more as a field of study, because I was talking to a guy, Ben Goertzel, who I'd worked a year or so before, and he wanted to write a book on sort of the old vision of AI, this sort of thinking machines, these machines that can do lots and lots of different things, rather than it's just specialized, it just plays poker, it just does text to speech, which was sort of typical at the time.
我在想,这个术语现在还有用吗?
And I was like, what about the old dream of AI?
构建一个具备广泛通用能力的系统,它能学习、推理、处理语言、写诗、做数学题、甚至可能画画,总之能做各种各样的事情。
Building a system that has a very general capability, and it can learn and reason and do language and write poetry or do maths or maybe paint a picture or, you know, all sorts of different things.
我们该叫它什么?
What do we call that?
于是我对他说,如果重点在于我们追求的通用性,何不直接把'通用'这个词放进名称里,就叫它'人工通用智能'呢?
And I said to him, well, if it's really about the generality we want, why don't we just put the word general in the name and call it artificial general intelligence?
AGI这个缩写还挺顺口的。
AGI kinda rolls off the tongue.
或许我们就该这么叫。
Maybe we do that.
但后来发生的情况是,很多人在网上开始使用这个术语,很快人们就开始讨论'我们何时能实现AGI'这样的问题。
But then what happened is that a number of people started using the term online, and then very quickly, people started talking about, well, when will we have AGI?
于是AGI就从研究领域或子领域的概念,转变成了人工制品的分类标准。
And so then AGI moved from being a sort of field of study or a subfield to a category of artifacts.
然后就需要给它下定义了。
And then it needs a definition.
也许我当时应该进去给它下个定义,这是个失误。
So perhaps it was a mistake that I should have gone in and and defined it.
几年后我们发现,有个叫Michael Brudd的人其实在97年就写过相关论文。
Now it turned out a few years later, we found there was a guy, Michael Brudd, who had actually written a paper in '97.
我们用了这个术语,但那是在一个纳米技术安全会议上,我们没人知道这件事。
We had used the term, but it was in a nanotech security conference, and none of us knew about this.
但他定义的方式其实是参照人们在工业等领域从事的各类认知活动。
But the way he defined it was actually in reference to the sorts of cognitive things people do in industry and other places like that.
所以这和我现在使用的定义在风格上非常相似。
So it's quite similar flavor to even what I'm I'm using now.
是的,如果早期能更明确地固定下来,会很有帮助。
Now, yeah, if it had been fixed more clearly early on, that would be helpful.
你后悔提出这个说法吗?不
Do you regret pointing No.
这个
This
不。
No.
因为我认为它提供了一种方式让人们来指代这种构建真正通用AI的理念。
Because I think it gave a way for people to refer to this idea of building AIs that were actually general.
通用程度达到人类智能的通用水平。
General to the extent that people's intelligence is general.
我认为这是有需求的。
There was a need for that, I think.
这就是为什么我认为这个术语会流行起来,因为某种程度上,你知道,如果不这样称呼它,你该怎么指代这个概念呢?
And that's why I think the term caught on because there was sort of, you know, how do you refer to that if you're not referring to this?
如果人们使用像'先进AI'这样的短语,从某种意义上说AlphaFold也是一种先进AI。
If people use phrases like advanced AI, well, AlphaFold is an advanced AI in some sense.
对吧?
Right?
它非常有影响力,但应用范围非常非常狭窄。
And it's very impactful, but it's very, very narrow.
对吧?
Right?
或者AlphaGo,它同样非常专一,是某种高级AI系统。
Or AlphaGo, again, it's very narrow, it's some sort of advanced AI system.
那么你如何称呼那些非常通用的系统呢?
So how do you refer to systems that are very general?
但后来发生的是,不同的人看到这个术语后,以不同方式进行了调整,他们通过不同的视角来看待它。
But then what's happened is that different people saw the term, and they adapted in different ways, so they looked at it through different lenses.
因此,对某些人来说,早在早期,当他们想到AGI时,想到的是几十年后才会出现的、具有变革性的东西。
So, for some people, back even in the early days, when they thought of AGI, thought of something in the future, decades away, and that this would be very transformative.
于是他们开始从AGI将给社会带来的变革角度来思考。
And so they started thinking about AGI in terms of the transformation it would create in society.
所以当他们试图定义它时,往往会想到,哦,因为它能带来经济增长,或者它能实现所有这些事情。
And so then they started if they try to define it, they tend to think about, oh, it's because it can lead to economic growth, or it's gonna do all these sorts of things.
对吧?
Right?
我更倾向于将其视为一个历史性的时间节点。
I tend to think of it as more of a historical point in time.
这个时间节点标志着我们不得不承认,这些AI在某种意义上与我们的智能属于同一类别,能够完成我们通常能做的认知任务。
It's the point in time at which we sort of have to say, well, these AIs in some sense belong in a similar category to our intelligence and that they can do cognitive things that we typically can do.
这并不一定会彻底改变世界。
Now that doesn't necessarily revolutionize the world.
普通人在街上走时,不会突然变成莫扎特或爱因斯坦,发明出量子理论的继承者之类的。
The typical person walking around isn't going to be a Mozart or an Einstein and invent the successor to quantum theory or whatever.
对吧?
Right?
但这是个非常有趣的时刻,因为十年前、二十年前,我们还没有任何AI能够接近完成人类通常能做的认知任务。
But it's a very interesting point in time because ten years ago, 20 you know, whatever, we did not have AIs that were anywhere close to being able to do the cognitive things that people can typically do.
因此我认为这是一个重要的历史时刻,标志着AI在某种程度上与我们属于同一类别。
So I think this is an important sort of historical moment in that AIs are somehow in a similar category to us.
我还认为尝试定义它是有用的,因为其中一个问题是人们对时间线有不同的看法。
I also think and I think it's useful to try to define it a bit because one of the issues that come up is people have these different timelines.
对吧?
Right?
有人说,哦,通用人工智能,我觉得三年内就会实现。
Some people say, oh, AGI, I think it's gonna be here in three years.
哦,我觉得还要十五年或二十年才能实现。
Oh, I think it's gonna be fifteen years away or twenty years or whatever.
而当我与他们讨论这个问题时,经常发现他们使用的是不同的定义。
And often, when I go and talk to them about that, I find that they're using a different definition.
这就会导致很多混淆,因为人们用这个词表达不同的含义。
And so that just leads to a lot of confusion because people use the term to mean different things.
在某些情况下,我其实同意他们对未来发展的预测,只是他们用词的方式不同,这就造成了相当大的混淆。
And in some cases, I actually agree with what they think is going to happen, they're just using the word in a different way, and that just creates quite a lot of confusion.
我想比较一下人们使用的其他定义。比如有人提出它应该像一份任务清单,或者类似人文综合考试——这种语言模型基准测试包含两千五百个跨学科问题,涵盖人文和自然科学。
I just want to compare some of the other definitions that people are using for So, some people have suggested that it's like, there's a checklist of tasks, or maybe there's a humanities last exam, which is this sort of language model benchmark of two and a half thousand questions across different subjects, so humanities and natural sciences.
还有人说它需要能在厨房里干活。
There's other people that have said it needs to be able to perform in a kitchen.
它需要像厨师一样接受训练,能够被投放到不同的厨房并胜任工作。
It's to sort of be trained as a chef and be able to be dropped into a different kitchen and perform.
甚至还有一种定义是,它能否用10万美元赚到100万美元?
Or there's even one which is could it be able to make a million dollars from a $100,000?
是的。
Yes.
你对这些定义有什么看法?
What's your take on those definitions?
嗯,我对每一个定义都有自己的看法。
Well, each one I have a take on.
好的。
Alright.
请讲。
Go ahead.
抱歉。
Sorry.
请继续。
Go ahead.
详细说说看。
Go through it.
我是说,比如用一千美元赚到一百万美元之类的。
I mean, make was a million dollars from a thousand dollars or something like that.
我觉得这显然是从非常经济学的角度来看这个问题。
I mean, that that's obviously a very economic kind of perspective on it.
我认为很多人很难做到这一点。
I think a lot of people would struggle to do that.
在某种程度上,我觉得这是个非常狭隘的视角。
It's a very, I think, in some ways, narrow perspective on this.
也许可以...我不知道...用交易算法在市场上操作实现这个目标,但它也就只能做这个。
Mean, maybe you could have, I don't know, a trading algorithm that trades on the markets that could do that, but that's all it can do.
这不是我要讨论的重点。
That's not what I'm talking about.
所以我认为关键在于g。
So I think it's the g.
这就是AGI中的g。
That's the g in AGI.
我觉得有趣的是它的通用性。
It's the generality that I find interesting.
我认为人类心智最不可思议的特质之一,就是我们处理多种多样事务的灵活性和通用性。
And I think that's one of the incredible things of the human mind, is our flexibility and generality to do many, many different things.
如果只是针对特定任务集,或许可以构建一个能完成这些任务的系统,但如果它连我们期待普通人应具备的基本认知能力都达不到,那就不够理想。
If you have a particular set of tasks, well, okay, maybe you can build a system that can do those tasks, but maybe it's still failing to do basic cognitive things that we'd expect almost anybody to be able to do, I think that's unsatisfying.
因此,我会这样具象化我的定义:设计一组我知道人类典型表现的任务集
So the way I would operationalize my definition is I would have a suite of tasks where I know what typical performance is
以人类为基准。
From humans.
以人类为基准,然后观察AI是否能完成所有这些任务。
From humans, And I would see whether the AI can do all those tasks.
如果它在任何一项任务上失败,那它就不符合我的定义。
Now if it fails at any of those tasks, it fails to meet my definition.
因为它不够通用。
Because it's not general enough.
是的。
Yeah.
它未能完成某些我们期望人类能够做到的认知任务。
It's failing to do some cognitive thing that we expect people to be able to do.
如果它通过了,我建议我们进入更具对抗性的第二阶段。
If it passes that, I would propose we then go into a second phase, is more adversarial.
我们会说,好吧,它通过了整套测试,所以在我们标准收集的几千项测试中,它没有在任何一项上失败。
And we say, Okay, it passed the battery of tests, so it's not failing at anything in our standard collection of however many thousands of tests or whatever we have.
现在,让我们进行对抗性测试。
Now, let's do an adversarial test.
组建一个团队,给他们,比如说一两个月的时间。
Get a team of people, give them, I don't know, a month or two or whatever.
他们被允许查看AI的内部。
They're allowed to look inside the AI.
他们可以做任何他们想做的事。
They're allowed to do whatever they like.
他们的任务是找出我们认为人类通常能做、且具有认知性的事情,而AI却无法完成。
Their job is find something that we believe people can typically do, and it's cognitive, where their AI fails at.
如果他们能找到这样的例子,那么根据定义,AI就是失败的。
If they can find it, it fails by definition.
如果他们经过数月的探测、测试、绞尽脑汁仍无法找到这样的例子,我认为从实际应用的角度来看,我们已经达到了目标。
If they can't, after a few months of probing it and testing it and scratching their heads and trying to find it, I think for intensive purposes, most practical purposes, we're there.
因为这些失败案例现在如此难以发现,甚至专业团队经过长时间探索也无法发现这些缺陷。
Because these failure cases now are so hard to find, even teams of people, after an extended period of time, can't even find these failure cases.
你认为我们最终能就智能或AGI的定义达成共识吗?
Do you think that we'll ever agree on a definition of what intelligence is, or AGI is, indeed?
就AGI本身而言,我猜测几年后,AI将在众多领域变得如此通用。
In terms of AGI itself, my guess is that some years from now, AIs will become so generally capable in so many different ways.
人们会直接把它们称为AGI,而AGI恰好就是指这些功能。
People will just talk about them as being AGI, and AGI will just happen to mean those things.
也许人们就不会那么担心了,关于这是否算AGI的争论也会减少。
And maybe people will be less worried about they will have less arguments about whether this is AGI or not.
人们会说,哦,我有了最新版的Gemini九号还是什么的。
People will say, oh, I've got the latest Gemini nine or whatever it is.
它确实很棒。
And it is really good.
它能写诗。
It can write poetry.
你可以教它玩纸牌游戏,它就能陪你一起玩。
You can teach it a card game, and it can play with you.
就是你刚编出来的那种。
You that you just made up.
它能做数学题,能翻译内容,能和你一起规划假期,诸如此类。
It can do math, it can translate things, it can plan a holiday with you, or whatever.
它确实非常全能,人们会明显感觉到它具有某种普遍智能。
It's really, really generally capable, and it'll just seem obvious to people that it has some sort of generality of intelligence.
但就目前而言,在实现这一目标之前,拥有这样一条通往AGI的明确路径,你谈到了没有路径的风险,比如它可能先掌握某些知识而非其他。
But then for now, in terms of before we get there, having this kind of defined path on the route to AGI, you talk about the risks of not having one, that it could acquire a certain piece of knowledge before another.
举例来说,比如在精通伦理之前先擅长化学工程。
For instance, I don't know, like, being good at chemical engineering before it gets really good at ethics.
我的意思是,在实现AGI之前开展这项工作有多重要?
I mean, how important is it to have this work now in advance of getting there?
所以需要围绕理解它在不同维度上的能力展开工作。
So work around understanding its capabilities in different dimensions.
我认为这非常重要,因为我们必须思考社会该如何应对强大机器智能的到来。
I think it's very important because we have to think about how do we, being society, navigate the arrival of powerful, capable machine intelligence.
你不能简单地用单一维度来衡量它。
And you can't just put it on a single dimension.
在某些方面它可能具备超乎人类的能力。
It may be superhumanly capable at some things.
它可能在其他某些领域非常脆弱和薄弱。
It may be very fragile and weak in some other areas.
如果你不了解这种能力分布情况,你就无法认识到存在的机遇。
And if you don't understand what that distribution looks like, you're going to not understand the opportunities that exist.
你也无法理解风险或被误用的可能性,因为虽然它在这里非常强大,但你需要明白它在那边非常非常薄弱,所以某些环节可能会出错。
You're also not gonna understand the risks or the ways in which it could be misapplied because, oh, it's super capable over here, but you need to understand that it's very, very weak over here, and so certain things can go wrong.
因此我认为这是社会应对和了解当前形势的重要部分。
So I think it's just an important part of society navigating and understanding what the current situation is.
你看,我觉得很多关于AI的讨论已经倾向于要么说它无所不能,要么说它其实没那么厉害、被过度炒作之类的。
So, you know, I think a lot of the dialogue around AI already tends to talk about us being so so capable or sort of being not really that capable and it's overhyped or whatever.
我认为现实情况要复杂得多。
I think the reality is much more complicated.
它在某些方面确实强大得不可思议,而在其他方面却相当脆弱。
It is incredibly capable in some ways, and it is quite fragile in others.
本质上你必须全面看待这个问题。
You have to take the whole picture, essentially.
你得看整体情况。
You gotta take the whole picture.
是啊。
Yeah.
这就像,你知道的,人类智能也是如此。
And it is like, you know, human intelligence as well.
有些人非常非常优秀。
Some people are really, really good.
他们会说很多种语言。
They speak a whole bunch of languages.
有些人数学特别好。
Some people are really good at math.
有些人音乐天赋极高。
Some people are really good at music.
但他们在其他方面可能就不那么擅长了。
But maybe they're not so good at something else.
那么,好吧。
So, okay.
我想和你讨论的另一个方面是伦理问题。
The other sort of arm of this that I wanna talk to you about is ethics.
这如何融入整个体系中?
How does that fit into all of this?
AI伦理包含许多方面。
There are many aspects to ethics in AI.
其中一个方面很简单:AI本身是否对道德行为有良好的理解?
One aspect is simply, does the AI itself have a good understanding of what ethical behavior is?
它能否根据这种道德行为分析可能的行动方案,并以我们能够信任的方式稳健地执行。
And is it able to analyze possible things it can do in terms of this ethical behavior, and do that robustly in a way that we can trust.
所以AI本身能够推理其行为的道德性。
So the AI itself can reason about the ethics of what it's doing.
是的。
Yes.
那这是如何运作的呢?
How does that work then?
你们是如何将其嵌入其中的?
How do you embed that within it?
对此我有一些想法,但这并非唯一的问题。
I have a few thoughts on that, but that's not a sole problem.
但我认为这是一个极其重要的问题。
But I think it's a very, very important problem.
我喜欢有些人称之为思维链监控的方法。
I like something which some people call chain of thought monitoring.
我之前讨论过这个。
I've talked about this.
我做过一些简短的演讲等等。
I've given some short talks on it and so on.
我称之为系统二安全机制。
I call it system two safety.
这就是丹尼尔·卡尼曼提出的系统一和系统二思维。
This is the Daniel Kahneman system one, system two thinking.
没错。
Exactly.
基本思路是这样的:作为一个人,当你面临困难的道德困境时,仅凭直觉往往是不够的。
And so the basic idea is something like this: Say, as a person, if you're faced with a difficult ethical situation, it's often not sufficient just to go with your gut instinct.
对吧?
Right?
实际上你需要坐下来思考:好,这是当前的情况。
You actually need to sit down and think about, okay, this is the situation.
这些是各种复杂因素和细微差别。
These are the various complexities and nuances.
这些是可以采取的可能行动。
These are the possible actions that can be taken.
这些是采取不同行动可能带来的后果。
These are the likely consequences of taking different actions.
然后根据你所拥有的某种伦理、规范和道德体系来分析所有这些。
And then analyze all of that with respect to some system of ethics and norms and morals and what have you that you have.
也许你需要进行相当多的推理才能真正理解这一切如何相互关联。
And maybe you have to reason about that quite a bit to really understand how all this fits together.
然后利用这种理解来决定应该采取什么行动。
And then use that understanding to decide what should be done.
那么假设人脑在这种情况下的运作方式,我是说,这就是卡尼曼的理论,对吧?
So let's say that the way that the human brain works in this situation, I mean, this is the Kahneman stuff, right?
就是,当有人惹恼你时,你会突然感到愤怒,想要立即反应,那是你的系统一快速思考的本能。
Is that, you know, someone annoys you, say, you have a rush of anger, you want to react, that's your system one sort of quick thinking instinctive.
但你深吸一口气,仔细思考,考虑后果,那是你的系统二思维,然后你可能会选择不同的做法。
But you take a breath, you think it through, consider the consequences, that's your system two thinking, then you might choose a different path.
是的。
Yes.
所以你可能会说,比如说,我也不知道。
So you might say, for example, I don't know.
撒谎是不对的。
Lying is bad.
对吧?
Right?
所以我们不会撒谎。
So we're not gonna lie.
但你可能遇到特殊情况,比如有坏人要来抓某人,如果你撒个谎就能救他们的命,那么从道德角度来说,或许撒谎才是正确的选择。
But you could be in a particular situation where, I don't know, you know, there's some bad people coming to get somebody, and if you tell a lie, you can save their life, and then the ethical thing to do is maybe to lie.
对吧?
Right?
因此简单的规则并不总能帮助我们做出正确决定。
And so the simple rule is not always adequate to really make the right decision.
有时你需要运用一些逻辑推理来仔细思考——在这种情况下,实际上符合道德的做法是撒个谎来救人一命之类的。
Sometimes you need a little bit of logic and reasoning to really think through, well, in this case, it is actually the ethical thing to do, is to tell a lie and maybe save someone's life or what have you.
对吧?
Right?
但这会变得非常复杂。
But it gets very complicated.
你可能听说过所有这些电车难题之类的问题,对吧?在某些情况下,我们的直觉和分析开始产生分歧,导致很多困惑。
And you've probably heard of all these trolley problems and all these sorts of things, right, where our instincts and the analysis, in some cases, start diverging and causes a lot of confusion.
对吧?
Right?
所以这绝不是简单的领域。
So this is not simple territory at all.
我们现在有了能进行这种思考的人工智能。
And we have AIs now that do these thinking AIs.
对吧?
Right?
因此你实际上可以看到人工智能使用的思维链。
And so you can actually see the chain of thought that the AIs use.
所以当你给AI提出一个具有道德或伦理层面的问题时,你实际上可以看到它如何分析推理这种情况。
And so when you give an AI some question, has a moral aspect to it, some ethical aspect, you can actually see it go away and reason about the situation.
如果我们能让这种推理变得非常严密,并且让它对我们希望它遵循的某些伦理道德有非常深刻的理解,我认为原则上它实际上应该变得比人类更道德。
And if we can make that reasoning really, really tight and it has a really strong understanding of some ethics and morals that we want it to adhere to, I think it should, in principle, actually become more ethical than people.
因为它能够更一致地应用推理,或许在超人类水平上处理所面临的选择等等。
Because it can more consistently apply and reason at maybe a superhuman level, the choices that it's faced with and so on.
因为这实际上将伦理问题转变为了一个推理问题,而不仅仅是一种感觉上的事情。
Because that switches ethics into a reasoning problem, as it were, rather than just a sort of feeling thing.
是的。
Yeah.
但同时,当你这么说的时候,我确实有点好奇关于基础的问题。
But then at the same time, I do wonder when you're saying that, do wonder a bit about grounding.
我的意思是,这些东西目前肯定不像人类那样生活在世界上。
I mean, these things, certainly for now, are like not living in the world as humans.
是否有可能从人类视角出发,真正将这些机器基于某种人类伦理?
Is it possible to sort of take what it feels like to experience the world from a human perspective and truly ground these machines in sort of human ethics?
嗯,这里有几个复杂之处。
Well, there's a few complexities.
其中一个复杂性在于,并不存在单一的人类伦理标准。
One complexity there is that there is not one human ethics.
同意。
Agree.
关于这一点存在不同的观点,不仅因人而异,也因文化和地区等有所不同。
And there are different ideas about this that vary between people, but also between cultures and regions and so on.
因此它必须理解在某些地方,规范和期望会有些不同。
So it'll have to understand that in certain places, the norms and expectations are a bit different.
实际上在某种程度上,模型确实已经了解很多这方面的内容,因为它们吸收了来自世界各地的数据。
And to some extent, the models do know quite a lot of this, actually, because they absorb data from all around the world.
不过,是的,它在这方面确实需要表现得非常出色。
But, yeah, it will need to be really good at that.
关于现实基础的问题,目前我们通过收集大量世界数据来构建这些智能体,将其训练成大型模型,然后它们就变成了我们与之交互的相对静态对象。
In terms of grounding in reality, at the moment, we're building these agents by collecting lots of data from the world, training them into these big models, and then they become relatively static objects that we then interact with.
它们实际上不会学习太多新东西或类似的内容。
And they don't really learn much new or anything like that.
这种情况正在改变,我们正在引入更多学习算法等类似技术。
That's shifting, and we're bringing in more learning algorithms and all that kind of thing.
但我们也在让系统更具主动性。
But we're also making the systems more agentic.
它们不再只是你与之交谈后处理并给出回应的系统,而是可以主动执行任务的系统。
So they're not just a system that you talk to and then it processes and gives a response, but there may be a system that can go and do something.
比如你可以对它说:好的,我需要你编写一个实现某某功能的软件。
So you can say to it, okay, I want you to write some software that does such and such.
噢,我希望你去...比如说为我的墨西哥之行制定一个计划。
Oh, I want you to go and, I don't know, come up with a plan for my trip to Mexico.
我想参观这些景点,但不喜欢那些安排之类的。
And I wanna see this and this, but I don't like this or whatever.
这些智能体未来还会更多地体现在机器人技术等领域。
And then those agents will also start to become more embodied in robotics and things like that.
其中一部分将是软件智能体。
Some of them will be software agents.
它们将会完成
They'll do those sorts of things.
不过随着时间的推移,我认为它们会更多地出现在机器人这类载体中
But though, with time, I think they'll become more they'll turn up in robots and all that kind of thing.
随着这条发展路径的推进,人工智能将通过多种方式与现实建立更紧密的联系,它们必须通过互动和经验来学习,而不仅仅是依赖初始阶段输入的大型数据集
And as you keep going along this track, the AIs become more connected to reality through all sorts of different things, and they actually have to learn through interaction and experience rather than just sort of a large dataset that sort of goes in at the beginning.
这正是与现实连接变得更加紧密的关键所在
That's where the connection to reality tightens up a lot.
话虽如此,这些系统初期输入的大量数据中,有很大一部分是来自人类的
That said, a lot of this data that was poured into them at the beginning, a lot of it came from people.
因此通过这一过程,AI也获得了与现实接轨的基础
So there is a grounding to reality that comes via that process as well.
关于AI在伦理方面比人类更优秀的观点,在你实现这个目标之前,在AI的推理能力达到人类水平之前,如何确保它以安全的方式被应用?
This idea of the AI being better at ethics than humans themselves, until you get there, until the reasoning is as good as ours, how do you make sure that it's implemented in a safe way?
我是说
I mean
是啊。
Yeah.
这很有道理。
It's a very reason.
停下,我不知道。
Stop I don't know.
比如功利主义的论点,对道路上的无人驾驶汽车很适用,就是要尽可能多地挽救生命。
Like so for example, a utilitarian argument, right, that works quite well for driverless cars on the roads is, you wanna save as many lives as possible.
但在医学领域,同样的理念就不适用了。
But then in medicine, that same idea, right, it it doesn't work anymore.
我们不能牺牲一个健康病人去救其他五个人的命。
We can't sacrifice one healthy patient to save the lives of five others.
如何确保它最终能朝正确的方向推理?
How do you make sure that it ends up reasoning in the correct direction?
你无法保证所有事情。
You can't guarantee everything.
世界上的行动可能性空间如此之大,100%的可靠性根本不存在。
The space of possibilities of action in the world is so huge that 100% reliability is not a thing.
但现实中,世界上很多领域都不存在绝对可靠的事物。
But it's not a thing in a lot of the world as it exists.
如果你需要做手术,去和外科医生交谈,你说'我要切除某个东西或其他什么',而医生告诉你'这100%安全',作为数学家,你知道他们没说实话。
If you need a surgery and you go and talk to the surgeon, and you say, Well, you know, I'm gonna get something removed or whatever, and the surgeon says to you, It's 100% safe, as a mathematician, you know that they're not telling you the truth.
从来没有什么事情是100%的。
Nothing is ever 100%.
所以我们必须做的是测试这些系统,尽可能使其安全可靠,并权衡利弊与风险。
So what we have to do is we have to test these systems and make them as safe and reliable as possible, and we have to trade off the benefits and the risks.
我们还需要做其他事情,比如监控它们。
And we also have to do other things like monitor them.
这样在部署时,我们就能跟踪正在发生的情况。
So when they're in deployment, we keep track of what's going on.
因此如果我们开始发现故障案例超出了可接受范围,可能就需要回滚并停止它们或采取其他措施。
So if we start seeing that there are failure cases that are beyond what we consider acceptable, we may have to roll back and stop them or do whatever.
展开剩余字幕(还有 281 条)
对吧?
Right?
所以我们需要做一系列不同的事情。
So there's a whole range of different things we need to do.
我们需要在系统上线前进行测试。
We need to do testing before it goes out.
我们需要在系统运行时进行监控。
We need to monitor when they are out there doing things.
我们需要做可解释性研究,这样我们才能深入系统内部观察。
We need to do things like interpretability where we're able to look inside the system.
这是系统二的一个优点。
That's one nice thing about system two.
如果是安全性问题,只要实现方式正确,你实际上可以看到它在进行推理。
If it's safety, if it's implemented the right way, you can actually see it reasoning about things.
但你必须验证这种推理是否真实反映了它实际想要做的事情。
But you gotta check that this reasoning is actually an accurate reflection of what it's really trying to do.
但你要知道,如果能深入系统内部观察它们的行为动机,或许能让你多一层确信——它们确实在努力以正确方式行事。
But, you know, if you have ways to look inside the system and really see why they're doing things, that can maybe give you another level of reassurance that they are trying to act in the right way.
因为这涉及到另一个重要的微妙之处。
Because that's another important subtlety.
关键并不总是结果,而可能是意图。
It's not always just about the outcome, but maybe the intention.
对吧?
Right?
就像有人故意伤害你,和有人不小心撞到你导致疼痛,这两者有天壤之别,明白吗?
So there's a big difference between, I don't know, somebody hurting you intentionally, and somebody accidentally bumping you and it hurts or something, right?
我们的解读方式也会截然不同。
And we interpret that very, very differently.
所以如果能透视AI的思考过程,我们或许能理解:'它当时在处理复杂状况'。
So if we can see inside our AIs, we might accept that, well, you know, it was dealing with a tricky situation.
根据它的分析,它已尽力做出最佳选择,只是产生了某些负面副作用——这种情况我们或许能够接受,因为即使换作人类身处那种处境,可能也很难做得更好。
It tried to do the best thing it could according to its analysis, but there was some negative side effect, we might be sort of okay with that, because maybe even as people in that treating situation, it would be very difficult for us to do the right thing.
但如果它是故意做错事,那就完全是另一回事了。
But if it did the wrong thing intentionally, that's a whole different thing.
这些都是人工智能、通用人工智能安全的各个方面,我们有团队在研究所有这些课题。
So these are all aspects of AI, AGI safety, and we have people working on all of these topics.
那么你们是否会限制这些系统与现实世界的交互程度、发布速度等等,直到你们确信它们达到了安全阈值?
So then do you sort of limit the amount that these things can interact with the real world, how quickly you release them, and so on and so on, until you feel confident that they're at this safety threshold?
是的,我们有各种测试基准和测试方法,会在内部运行一段时间。
Yeah, so we have all kinds of testing benchmarks and tests, we run them internally for a while.
我们还会针对特定风险领域进行专项测试。
And we have particular things that we test for, are risky areas.
比如哪些方面?
Like what?
我们会测试系统是否会帮助开发,比如说生物武器之类的东西。
We try to see if the system will help develop, I don't know, like a bioweapon or something like that.
没错。
Right.
而且,显然它不应该这样做。
And, obviously, it should not.
是的。
Yes.
所以如果我们发现能以某种方式欺骗或迫使它在该领域提供帮助,那就是个问题。
And so if we start seeing that we can somehow trick it or force it into being helpful in that area, that's a problem.
没错。
Right.
黑客攻击是另一个测试领域。
Hacking is another one.
它会帮助人们进行黑客攻击之类的事情吗?
Will it help people hack things and so on and so on?
是的,我们目前有一系列这样的测试,而且这个测试集随着时间的推移在不断扩充,然后我们会评估它在某些领域的能力有多强。
So, yeah, we have, at the moment, a collection of these tests, and this collection keeps growing over time, and then we assess how powerful it is in some of these areas.
然后我们会根据观察到的每个能力级别采取相应的缓解措施。
And then we have mitigations appropriate to each level of capability that we see.
这可能意味着我们不会发布这个模型。
It could mean that we don't release the model.
根据我们的发现,这可能意味着各种不同的情况,是的。
It could mean various different things, depending on what we find, yeah.
那么,我们来谈谈这类技术对社会的影响吧。
Well, let's talk about the impact on society of some of this stuff.
比如,当我们真正实现强大的人工通用智能时。
Like, once we get to really capable AGI.
我知道这是你思考了很多的问题。
And I know that this is something that you've thought an awful lot about.
这么说对吗?
Is that fair to say?
是的。
Yeah.
我现在主要关注的是试图理解,如果我们获得了人工通用智能,并且在其能力水平上相对安全,会怎样。
My main focus now is trying to understand what if we get AGI, and it's reasonably safe for its level of capability.
其他一切又当如何?
What about everything else?
而其他一切的清单是极其庞大的。
And the list of everything else is enormous.
存在诸多问题,比如:好,我们拥有了强大的AGI,我们称之为理性,那它是否具有意识?
There are questions like, so, okay, we've got powerful AGI, and it's reason we say, is it conscious?
这甚至算是个有意义的问题吗?
Is that even a meaningful question?
你对这个问题持...哦,就是这个立场
You have a stance on Well, that right
我们有个团队在研究这个,已经与世界上许多研究该领域的顶尖专家交流过。
we've got a group looking at that, and we've talked to a lot of leading experts in the world who study this.
我认为简短的答案是:没人真正知道。
And I think the short answer is nobody really knows.
必须明确说明,我们这里讨论的是完全体AGI,而非当前拥有的技术。
To be absolutely clear, we're talking about full AGI here rather than the stuff we have at the moment.
是的。
Yes.
你对目前的技术不具备意识这一点感到放心吗?
Are you comfortable that the stuff at the moment is not?
我不认为它有意识。
I don't think it is.
当我们展望未来——比如十年后的AGI,那种能力极强的系统——它会具有意识吗?
As we go into some future AGI years in the I don't know, ten years in the future or something, which is very, very capable, will that system be conscious?
当我与世界上研究这个领域最顶尖的专家交流时,有些人提出了赞成的论据。
When I talk to some of the most famous experts in the world that study this, there are various people who have arguments for.
也有些专家提出了反对的论据。
There are various people who have arguments against.
但当我向他们提出具体场景时,我会说:
But when I actually put a concrete scenario to them and I say, look.
假设我们有了Gemini 10系统,它被植入一个人形机器人中,能够学习、整合跨传感器信息,记住自己作为世界主体的历史,还能完成这类复杂行为。
We've got Gemini 10 here, and it's embodied in a, you know, humanoid robot, and it it learns and it integrates information across sensors, and it can remember its own history as an agent in the world, and do all these sorts of things.
而且它还会谈论自己的意识,因为现在只要你用正确的方式提示,确实可以让AI模型讨论它们的意识。
And also talks about its own consciousness, because you can actually get AI models to talk about their consciousness now if you prompt them in the right kind of way.
它有意识吗?
Is it conscious?
当我向该领域的专家提出这个问题时,他们的反应通常是:嗯,我认为可能没有,或者我认为可能有,但实际上我并不完全确定。
And when I put that to people in the field, they're like, well, I think probably not, or I think probably yes, but actually, I'm not absolutely sure.
谁知道呢?
And who knows?
也许我们终将找到答案。
Maybe we will have an answer to that.
我认为这是个由来已久的问题,甚至很难将其转化为一个严格的科学问题,因为我们不知道如何将其框架为可测量的东西。
I think it's a long standing question, and it's a very difficult question to even make into a strict scientific question, because we don't know how to frame this as a measurable thing.
我能确定的是,有些人会认为它们有意识,而另一些人则认为没有。
What I am sure is gonna happen is that some people will think they are conscious, and some people will think they are not.
这种情况必定会发生。
That is certainly gonna happen.
尤其是在缺乏一个被广泛认可的科学定义和测量方法的情况下。
Particularly in the absence of a really well accepted scientific definition and way of measuring it.
那我们该如何应对这种情况呢?
And then how are we gonna navigate that?
这也是个非常有趣的问题。
That's a very interesting question as well.
但这只是众多问题中的一个,对吧?
But this is just one question of know?
还有很多。
Many.
我们会面临诸如:是否能实现完全的通用人工智能?
We have things like, are we gonna go from full AGI?
我们是否会发展出远超人类智能的超级智能?
Are we gonna go towards superintelligence that's far, far beyond human intelligence?
这会快速发生还是缓慢进行?
Is it gonna happen quickly, slowly?
永远不会。
Never.
如果真的发展出超级智能,这种超级智能的认知特征会是什么?
And if it does go to superintelligence, what's the cognitive profile of that superintelligence?
是否在某些方面它会远超人类?
Are there certain things where it's gonna be far, far beyond human?
我们已经看到它能说200种语言之类的。
We already see it can speak 200 languages or something.
这很明显。
That's clear.
而在其他方面,也许由于计算复杂性或其他原因,它实际上并不会比人类强多少?
And are there other things where maybe because of the computational complexity or whatever, it's not actually gonna be much better than humans?
对吧?
Right?
我们对这方面有任何了解吗?
Do we have any idea of that?
这似乎是人类需要思考的一个极其重要的问题。
That seems like a really important question for humanity to be thinking about.
我们会在十年或二十年内进入超级智能时代吗?或者类似的时间框架?
Are we going to go into superintelligence in a decade or two decades or something like that?
你对此有什么立场吗?
Do you have a stance on that?
你认为它会发展成超级智能吗?
Do you think it will go to superintelligence?
比如说,我想到爱因斯坦提出了广义相对论。
I mean, I'm sort of thinking here about Einstein, for example, came up with general relativity.
我们是否会处于这样一种境地:拥有通用人工智能,它能对世界进行理论化,提出超越人类认知的真正科学理解?
Will we be in a position where you have AGI that can theorize about the world, come up with genuine scientific understanding that goes beyond what humans have managed?
我认为这会基于计算能力。
I think it will based on computation.
而人类大脑就像移动处理器。
And the human brain is a mobile processor.
它重约几磅。
It weighs a few pounds.
它的功耗大约在20瓦左右。
It consumes, I think, around 20 watts.
信号通过树突在大脑内传递。
Signals are seen within the brain through dendrites.
通道频率大约超过100赫兹,或者在皮层中可能达到200赫兹。
The frequency on the channel is about over 100 Hz, or maybe 200 Hz in cortex.
这些信号本身是电化学波的传播。
And the signals themselves are electrochemical wave propagations.
它们的传播速度约为每秒30米。
They move at about 30 meters per second.
如果将其与数据中心相比,功耗可能从20瓦变成200兆瓦。
So if you compare that to what we see in a data center, instead of 20 watts, you could have 200 megawatts.
重量则可能从几磅增加到数百万磅。
Instead of a few pounds, you could have several million pounds.
相比通道上的100赫兹,你可以拥有100亿赫兹的通道频率。
Instead of a 100 hertz on the channel, you can have 10,000,000,000 hertz on the channel.
相比每秒30米的电化学波传播速度,你可以达到光速——每秒30万公里。
And instead of electrochemical wave propagation at 30 meters per second, you can be the speed of light, 300,000 kilometers per second.
因此在能耗、空间占用、通道带宽和信号传播速度方面,你同时在四个维度上获得了六到八个数量级的提升。
So in terms of energy consumption, space, bandwidth on the channel, speed of signal propagation, you've got six, seven, maybe eight orders of magnitude in all four dimensions simultaneously.
那么人类智能会成为可能性的上限吗?
So is human intelligence going to be the upper limit of what's possible?
我认为绝对不是。
I think absolutely not.
因此我相信,随着我们对构建智能系统理解的深入,我们将看到这些AI远远超越人类智能。
And so I think as our understanding of how to build intelligence systems develops, we're gonna see these AIs go far beyond human intelligence.
就像人类无法在100米赛跑中胜过顶级加速赛车一样。
In the same way that humans, you know, we can't outrun a top fuel dragster over a 100 meters.
对吧?
Right?
我们的力气比不上起重机。
We can't lift more than a crane.
我们的视野不及哈勃望远镜。
We can't see further than the Hubble telescope.
我是说,我们已经看到在某些领域,机器能比最快的鸟飞得更快,诸如此类的事情。
I mean, we already see machines in particular areas that can fly faster than the fastest bird and all these sorts of things.
对吧?
Right?
我认为在认知领域我们也会看到类似情况。
I think we'll see that in cognition as well.
在某些方面我们已经看到,人类掌握的知识不可能比谷歌更多。
We've already seen in some aspects, you don't know more than Google.
诸如此类,比如信息存储等方面,我们已经超越了人脑的能力范围。
And so on, like, information storage and stuff like that, we've already gone beyond what the human brain is capable of.
我认为我们将开始在推理等各种其他领域看到这种超越。
I think we're gonna start seeing that in reasoning and all kinds of other domains.
是的,我认为我们正朝着超级智能的方向发展。
So yes, I think we are gonna go towards superintelligence.
这就是为什么我对诸如系统二安全性这类问题非常感兴趣,因为如果我们无法阻止全球竞争动态等因素推动超级智能的发展,那么我们就必须认真思考如何让超级智能变得超级道德。
So that's why I'm very interested in things like system two safety, because if we can't stop the development towards superintelligence because of competitive dynamics globally and all these sorts of things, then we need to think really hard about how do we make a superintelligence super ethical.
如果你有一个系统,不仅能将智能能力应用于实现目标和执行任务,还能将其用于做出道德决策,那么它的能力可能会以某种方式同步提升。
And if you have a system that can apply the capabilities of its intelligence, not just to achieving goals and doing things, but actually applying it to making ethical decisions as well, then it might scale with its capabilities in some way.
我确实想知道这一切对人类意味着什么。
I do wonder what all of this means for people.
我是说,如果我们发展到人类智力被超级智能彻底碾压的地步,那对社会意味着什么?
I mean, if we are getting to a point where essentially, human intelligence is dwarfed by super intelligence, What does that mean for society?
这是否意味着会出现巨大的不平等——那些本质上无法再为经济提供价值的人将被彻底抛弃?
Does that mean just massive inequality that you have the people who no longer have value essentially in in what they can offer the economy being completely left behind?
这意味着一次巨大的变革。
It means a massive transformation.
我认为当前人们通过贡献脑力和体力劳动来换取经济资源分配权的体系,可能不再适用了。
I think the current system where people contribute their mental and physical labor in return to access to resources that are generating the economy, that may not work the same anymore.
我们可能需要不同的做事方式。
And we may need different ways of doing things.
现在这块蛋糕应该会变得大得多。
Now the pie should get much bigger.
所以不存在生产商品和服务不足的问题。
So there's not a problem of a lack of goods and services that are produced.
如果说有什么变化的话,这方面正在变得好得多。
If anything, that's getting much, much better.
但我们需要仔细思考人们所处的体系。
But we need to think carefully about what's the system for people.
我们该如何分配社会中存在的财富?
What is how do we distribute the wealth that exists in society?
我认为需要更多思考后AGI时代的经济如何运作,以及后AGI社会的结构如何运作。
I think there needs to be a lot more thought going into this of how a post AGI economy works and how the structure of a post AGI society works as well.
我曾向罗素集团的副校长们做过一次演讲。
I gave a talk to the Russell Group vice chancellors.
在英国,罗素集团代表着顶尖大学。
So in The UK, the Russell Group is the top universities.
我对他们说,听着,AGI即将到来,而且并不遥远。
I said to them, look, this AGI thing's coming, and it's not that far away.
十年之内,我们就会拥有它。
In ten years, we're gonna have it.
它将开始能够完成各种认知劳动和工作,处理人类所做的许多事情。
And it's gonna start being able to do a significant fraction of all kinds of cognitive labor and work and things that people do.
对吧?
Right?
实际上,我们需要社会各领域、各阶层的人们思考这对他们特定领域意味着什么。
We actually need people in all these different aspects of society and how society works to think about what that means in their particular area.
因此,我们真的需要你们大学里的每个院系、每个部门都认真对待这个问题,思考这对教育意味着什么。
So we really need every faculty and every department that you have in your university to take this seriously and think, what does it mean for education?
这对法律又意味着什么?
What does it mean for law?
这对工程学意味着什么?
What does it mean for engineering?
数学呢?
Mathematics?
经济学呢?
Economics?
金融领域呢?
Finance?
医学领域。
Medicine.
基本上,每个院系研究的领域都离不开人类智能这个核心要素。
So basically, every department studies something where human intelligence is a really important thing.
因此当廉价、普及且强大的机器智能出现时,每个领域都需要重新思考这个问题。
And so if you have the presence of cheap, abundant, capable machine intelligence turning up, that thing needs to be thought about again.
这会产生什么影响?
What is the implications of this?
是否应该用不同的方式来处理?
Should it be done in a different way?
有哪些机遇?
What are the opportunities?
存在哪些风险?
What are the risks?
诸如此类。
And so on.
我认为这里蕴藏着巨大的机遇,但就像工业革命等任何变革一样,情况很复杂。
So I think there's an enormous opportunity here, But just like any revolution, like the Industrial Revolution or anything, it's complicated.
它将以各种方式对社会产生全方位的影响。
It has all kinds of effects on society in all kinds of ways.
为了从中获益并最小化风险和代价,我们需要谨慎应对。
And to get the benefits of that and minimize the risks risks and the costs of that, we need to navigate this carefully.
而目前我认为,几乎没有人认真思考过通用人工智能对这件事意味着什么。
And at the moment, I think nowhere near enough people are thinking about what AGI means for this particular thing.
我们需要更多人来做这件事。
And we need a lot more people doing that.
你还记得2020年3月专家们说疫情即将来袭的时候吗?
Do you remember in March 2020 when the experts were saying, there's this pandemic coming?
我们确实正站在指数级爆发的边缘。
We're really standing on the edge of an exponential Yes.
当时人们还在酒吧聚会、看足球比赛什么的。
And then everyone was still sort of in pubs and going to football games and things.
专家们越来越大声地警告即将发生的事。
And the experts were increasingly shouting about what was coming.
是啊。
Yeah.
你是不是
Do you sort
有点感觉
of feel a little bit
像那样吗?
like that?
那些日子我记得很清楚。
I remember those days well.
确实有点那种感觉。
It does feel a bit like that.
人们很难相信重大变革即将来临,因为大多数时候,那些声称将有大事发生的故事最终都
People find it very hard to believe that a really big change is coming because most of the time, the story that something really huge is about to happen is not a
不了了之。
to nothing.
物理规律终将归于虚无。
The physicals out to nothing.
对吧?
Right?
所以作为一种经验法则,如果有人告诉你一些疯狂的大事即将发生,你大可以忽略其中的大部分。
And so as a kind of a heuristic, if somebody tells you some crazy, crazy, big, big things are gonna happen, probably you can ignore most of those.
但你确实需要保持关注。
But you do have to pay attention.
有时是这些基本因素在推动着事情发展。
Sometimes there are fundamentals that are driving these things.
如果你理解了这些根本因素,就需要认真考虑重大变革确实会发生的可能性。
And if you understand the fundamentals, you need to take seriously the idea that sometimes big changes do come.
但这具体意味着什么呢?
What does this mean though?
因为,我是说,好吧,你描述了一个长期愿景,那里有完全的通用人工智能,还有可能共享的繁荣等等,但要实现它。
Because, I mean, okay, you describe a sort of a long term vision where you have full AGI, and there's, like, prosperity that can potentially be shared and so on, but getting there.
我是说,我们正在谈论的是巨大的经济动荡和结构性风险。
I mean, we're talking about some massive economic disruption, structural risks here.
给我们讲讲你预计未来几年会是什么样子。
Just talk us through what you expect the next few years to look like.
我是说,告诉我们2020年3月时我们不知道的事情。
I mean, tell us what we didn't know in March 2020.
我
I
认为未来几年我们不会看到你所谈论的那些重大颠覆。
think what we'll see in the next few years is not those big disruptions you're talking about.
我认为未来几年我们将看到AI系统从非常有用的工具,转变为真正承担更多具有经济价值的工作。
I think what we'll see in the next few years is AI systems going from being very useful tools to actually taking on more of a load in terms of doing really economically valuable work.
而且我认为这种转变会相当不均衡。
And I think it'll be quite uneven.
在某些领域会比其它领域发展得更快。
It'll happen in certain domains faster than others.
例如在软件工程领域,我认为未来几年由AI编写的软件比例将会上升。
So for example, in software engineering, I think in the next few years, the fraction of software being written by AI is, like, is gonna go up.
因此几年后,原本需要100名软件工程师的工作,可能只需要20人。
And so in a few years, where prior you needed a 100 software engineers, maybe you need 20.
而这20人将使用先进的AI工具。
And those 20 use advanced AI tools.
在未来几年里,我们将看到人工智能从仅仅是一种实用工具,转变为真正从事有意义、高效的工作,并提升相关领域工作者的生产力。
Over a few years, we'll see AI going from kind of just a sort of a useful tool to doing really meaningful, productive work, and increasing the productivity of people that work in those areas.
这也会在某些领域对劳动力市场造成一定冲击。
It'll also create some disruption in the labor market in certain areas.
随着这种情况发生,我认为关于人工智能的讨论将发生转变,变得更加严肃。
And then as that happens, I think a lot of the discussion around AI is going to shift and become a lot more serious.
因此,讨论焦点将从'这很酷'这样的层面转移。
And so it's gonna shift from being just sort of like, oh, this is really cool.
你可以让它帮你规划假期,或者当孩子作业遇到困难时提供帮助,到更实质性的层面。
You can ask it to plan your holiday and help you with your children's they're stuck in something and they don't understand their homework or whatever, things like this, through to something that's like, okay.
这不再只是某种新奇工具。
This is not some nice new tool.
这实际上是会从结构上改变经济、社会等方方面面的技术。
This is actually something which is gonna structurally change the economy and society and all kinds of things.
我们需要思考如何构建这个新世界。
And we need to think about how do we structure this new world?
因为我坚信,如果我们能驾驭这种能力,这将是一个真正的黄金时代。
Because I do believe that if we can harness this capability, this could be a real golden age.
因为我们现在拥有的机器能大幅提升多种产品的产量,推动科学发展,并让我们从各种可能无需人力完成的劳动中解放出来——既然机器可以代劳。
Because we now have machines that can dramatically increase production of many types of things and advance science and relieve us of all kinds of labor that maybe we don't need to be doing if the machines can do it.
对吧?
Right?
所以这里存在一个机遇。
So there's an opportunity here.
但只有当我们能将机器的这种惊人能力转化为社会愿景,让社会中的民众因此蓬勃发展并从中受益时,这种机遇才有意义。
But that is only good if we can somehow translate this incredible capability of machines into a vision of society where there is some flourishing of people in society that benefit from all this capability.
因为与此同时,那80名不再被需要的软件工程师,以及所有其他初级员工——那些刚毕业正意识到自己首当其冲受影响的人。
Because in the meantime, you have those 80 software engineers who are no longer needed, and all of the other people, the entry level employees at the moment, you know, graduates who are sort of noticing that they're the first ones to be affected by this.
有没有哪些行业不会受到冲击?
Are there any industries that are not gonna be impacted by this?
中短期内,我认为实际上会有相当多的领域不受影响。
In the short to medium term, I think there'll actually be quite a lot of things.
因此,即便人工智能发展得相当迅速,在纯粹的认知层面,我认为机器人技术还远未达到能胜任水管工工作的水平。
So even if the AI does develop quite quickly, then it's purely cognitive sense, I don't think robotics will be at the point at which it could be a plumber.
而且即便技术上可行,我认为要让机器人在价格上比人类水管工更具竞争力,还需要相当长的时间。
And then even when that is possible, I think it's gonna take quite a while before it's price competitive with a human plumber.
对吧?
Right?
所以我认为,所有非纯粹认知类型的工作相对而言都能在一定程度上免受这类技术冲击。
And so I think there are all kinds of work which is not purely cognitive that will be relatively protected from some of this stuff.
有趣的是,当前许多高薪工作恰恰属于精英认知工作的范畴。
The interesting thing is that a lot of work which currently commands very high compensation is sort of elite cognitive work.
比如那些处理全球复杂并购案的高级律师、从事尖端金融工作的人员,还有现在从事高级机器学习、软件工程等各类工作的人才。
It's people doing, I don't know, sort of high powered lawyers that are doing complex merger and acquisition deals across the globe and people doing advanced stuff in finance or now people doing, you know, advanced machine learning, software engineering, all these types of things.
数学家。
Mathematicians.
数学家。
Mathematicians.
我挺喜欢的一个经验法则是:如果你能仅用笔记本电脑通过互联网远程完成工作——不需要什么全身触觉套装或机器人操控设备,就是普通的键盘、屏幕、摄像头、扬声器和麦克风这样的常规接口。
One rule of thumb that I quite like is if you can do the job remotely over the Internet just using a laptop, so you're not some full haptic bodysuit with some robot controlling whatever, just normal interface, keyboard, screen, camera, speaker, microphone.
如果你的工作能完全以这种方式完成,那很可能就是高度认知型的工作。
If you can do your work completely that way, then it's probably very much cognitive work.
所以如果你属于这类职业,我认为先进的人工智能将能在某种程度上涉足这个领域。
So if you're in that category, I think that advanced AI will be able to operate in that space, to some extent.
另一个我认为具有保护性的因素是:即便是认知型工作,某些工作和人类行为中仍可能存在人性化的层面。
The other thing that is, I think, protective is even if it is cognitive work, there can be a human aspect to some types of work and things that people do.
比如说,假设你是个网红。
So for example, let's say you are, I don't know, an influencer.
对吧?
Right?
你或许可以远程完成这类工作,但关键在于你是个有独特个性的真实个体,人们知道这些内容背后是个活生生的人——这在很多情况下可能正是价值所在。
You can do that work maybe remotely, but the fact that you're a particular person with a particular personality and people know there is a person behind what's going on there, that may be valuable in many cases.
不过这样还是有很多人会被波及,不是吗?
That leaves a lot of people, though, doesn't it?
我认为我们需要的是类似我向罗素集团建议的方向,即需要研究社会各方面的人认真对待人工通用智能。
I think what we need is sort of along the lines of what I suggested to the Russell Group, is we need people who study all these different aspects of society to take AGI seriously.
而我的印象是,这些人中有很多并没有重视起来。
And my impression is that a lot of these people are not.
当我去和那些对其中某个特定领域感兴趣的人交谈时,他们通常会说'哦,是啊'。
And when I go and talk to people who are interested in one of these particular things, like, oh, yeah.
这是个有趣的工具。
It's an interesting tool.
挺有意思的,诸如此类。
It's kind of amusing, whatever.
但他们还没有真正意识到,他们现在所看到的以及目前所知的所有局限性——顺便说一句,这些认知往往已经过时了。
But they haven't internalized the idea that what they're seeing now and any current limitations that they currently know of, which, by the way, are often out of date.
这些人经常说'哦,我一年前尝试用它做过一些事'。
Often, these people say, oh, I tried to do something with it a year ago.
要知道,与当前模型的能力相比,一年前简直就是远古历史了。
It's like a year ago is now ancient history compared to what the current models are doing.
再过一年,它会变得更好得多。
And one year from now, it's gonna be a lot better.
他们没有看到这种趋势。
They're not seeing that trend.
从某些方面来说,我其实觉得普通大众中的许多人比专家更有先见之明,因为我觉得这是人类的一种天性。
In some ways, I actually think many people in the general public are ahead of the experts because I think there's a human tendency.
你知道,当我跟非技术人员聊起当前的人工智能系统时,有些人会对我说:'它不已经具备类似人类的智能了吗?'
You know, if I talk to non tech people about current AI systems, some of the people say to me, oh, well, doesn't it already have, like, human intelligence?
它会说的语言比我还多。
It speaks more languages than me.
它能解答的数学和物理问题比我高中时强多了。
It can do math and physics problems better than I could ever do in high school.
它知道的食谱比我还丰富。
It knows more recipes than me.
我之前对报税单搞不明白,它还能给我解释清楚。
I was confused about my tax return and explained something to me or whatever.
他们会问,那它在哪些方面不算智能呢?
They're like, so in what way is it not intelligent?
你知道吗,这就是我和许多非技术人员交谈时经常听到的反应。
You know, this is the sort of thing that I get when I talk to a number of non tech people.
但通常,某个特定领域的专家们总喜欢认为自己的领域非常深奥特殊,觉得AI不会真正触及他们。
But often, people who are experts in a particular domain, they really like to feel that their thing is very deep and special, and this AI is not really gonna touch them.
我想用你那个关于AGI的著名预测来结束讨论。
I think I want to end with your now quite famous prediction about AGI.
事实上,你在这个预测上保持了惊人的一致性,已经超过十年了。
And you have stayed incredibly consistent on this for over a decade, in fact.
你曾说过到2028年实现AGI的概率是五五开。
You have said that there is a fifty fifty chance of AGI by 2028.
没错。
Yes.
是指最低限度的AGI吗?
Is that that's minimal AGI?
是的。
Yes.
你仍然认为2028年有五成概率吗?
Are you still fiftyfifty by 2028?
是的。
Yes.
2028年。
2028.
你可以在我2009年的博客上看到这个预测。
And you can see that on my blog from 2009.
那你对完全通用人工智能怎么看?
And what do you think about full AGI?
你对这个的时间预估是怎样的?
What's your timeline for that?
还要再过几年。
Some years later.
那会是三、四、五、六年之后。
It would be three, four, five, six years later.
是啊。
Yeah.
但在十年之内?
But within a decade?
对。
Yeah.
我认为会在十年内实现。
I think it'll be within a decade.
拥有这么多知识时,你是否曾感到有些虚无?
Do you ever just feel a bit nihilistic with all of this knowledge?
我认为这里蕴藏着巨大的机遇。
I think there's an enormous opportunity here.
许多人投入大量精力做了很多工作,但并非所有事都那么有趣。
A lot of people put a lot of effort into doing a lot of work, and not all of it is that much fun.
就像工业革命通过驾驭能量来完成各种机械工作,从而为社会创造了更多财富一样,现在我们也能利用数据、算法和计算来完成各种认知工作。
And just like the Industrial Revolution took the harnessing of energy to do all sorts of mechanical work, which created a lot more wealth in society, now we can harness data and algorithms and computation to do all kinds of more cognitive work as well.
因此这能让人们拥有大量财富。
And so that can enable a huge amount of wealth to exist for people.
而财富不仅仅是商品和服务的生产,还包括新技术、新药物等各种类似的东西。
And wealth, not just news of production of goods and services and so on, you know, new technologies, new medicines, and all kinds of things like this.
所以这是一项具有巨大潜在利益的技术。
So this is technology that has an incredible potential for benefit.
挑战在于我们如何在应对风险和潜在成本的同时获得这些收益?
The challenge is how do we get those benefits while dealing with the risks and potential costs and so on?
我们能否想象一个未来世界,在那里我们真正受益于智能的帮助而蓬勃发展?
Can we imagine a future world where we're really benefiting from having intelligence really helping us to flourish?
那会是什么样子?
And what does that look like?
我无法直接回答这个问题。
I can't just answer that.
我对这个非常感兴趣。
I'm very interested in that.
我会尽我所能去理解。
I'm gonna try and understand the best I can.
但这确实是个非常深刻的问题。
But this is a really profound question.
它涉及哲学、经济学、心理学、伦理学等各类问题。
It touches on philosophy and economics, and psychology, and ethics, and all kinds of questions.
对吧?
Right?
我们需要更多人思考这个问题,并尝试构想那个积极的未来会是什么样子。
And we need a lot more people thinking about this, and trying to imagine what that positive future looks like.
肖恩,非常感谢你。
Shane, thank you so much.
说真的,这番谈话让人茅塞顿开——人类确实不太擅长理解指数级增长。
That was mind expanding, to say Humans the are not very good at exponentials.
此时此刻,我们正站在曲线的拐点上。
And right now, at this moment, we are standing right on the bend of the curve.
人工通用智能已不再是遥远的思维实验。
AGI is not a distant thought experiment anymore.
与Shane的对话中我发现最有趣的是,他认为公众比专家更理解这一点。
What I found so interesting about that conversation with Shane is that he thinks the general public understand this better than the experts.
如果他的时间线预测大致准确,我们可能没有时间进行缓慢的反思和认识。
And if his timelines are anything like correct, and we might not have the luxury of time for slow reflection and realisation here.
我们面临着困难、紧迫且可能真正令人振奋的问题,这些问题现在就需要认真关注。
We have got difficult, urgent and potentially genuinely exciting questions that need some serious attention now.
您正在收听的是由主持人汉娜·弗莱为您带来的《谷歌DeepMind播客》。
You have been listening to Google DeepMind the podcast with me, your host, Hannah Fry.
如果您喜欢这次对话,请订阅我们的播客或留下评论。
If you enjoyed that conversation, please do subscribe to our podcast or leave us a review.
下一期节目,我们将与DeepMind联合创始人德米斯·哈萨比斯对话。
Next episode, we are gonna be sitting down with DeepMind cofounder, Demis Esarbeis.
所以请相信我们,你不会想错过那一期的。
So trust us when we tell you, you don't wanna miss that one.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。