About this episode
Welcome back to the final episode in this season of the DeepMind Podcast. And boy, have we covered a lot of ground. From protein-folding AIs to sarcastic language models, sauntering robots, synthetic voices and much more, it has been quite the journey. But we do have one more treat in store for you: a chance to hear from DeepMind CEO and co-founder, Demis Hassabis.
The outcome I've always dreamed of is AGI has helped us solve a lot of the big challenges facing society today, be that health, or creating a new energy source. So that's what I see as happening: a sort of amazing flourishing, to the next level of humanity's potential, with this very powerful technology.
This was my opportunity to ask Demis all the things that have popped into my head during the making of the series. Well, most things. We'll see how far I can push it. As luck would have it, the day I sat down with Demis coincided with the opening of DeepMind's sparkling new premises in London's Kings Cross. There weren't many people about yet, so it felt like an exclusive preview.
I feel like I'm in a high-end furniture catalog. Let me set the scene for you. This new building is rather beautifully appointed. It's got a double-helix staircase running through the middle. There are fiddle-leaf fig trees in practically every corner, and there are stylish fluted-glass Crittall doors between offices.
And, yes, those meeting rooms christened after great scientists, Galileo, Ada Lovelace, Leonardo, they are all still featured. Thank you. Would you like anything to drink? Sparkling water would be lovely. Thank you.
Is it all sparkling? Oh, sparkling. Push the boat out. While sipping on my beverage of choice, some memorabilia outside Demis's office caught my eye: a nod to AlphaGo's famous victory over Lee Sedol at the game of Go.
There is, sitting underneath two extremely fancy black spotlights, a chessboard in a black frame. And if I go over to it, there's a picture of Garry Kasparov, the legendary chess player who was beaten by Deep Blue, the IBM computer. He signed the chessboard, and it says: for the AlphaGo team, keep conquering new heights. I mean, just a chessboard signed by Kasparov on the wall, perfectly standard. Oh, we're going in.
After you. And let's see
your left. Hi. Hi. Great to see you. How you doing?
Here's my legs. Are you? Pleasure. Should we take a seat? Yeah.
After settling down inside Demis's office, I started by asking him about DeepMind's long term vision of building AGI, or Artificial General Intelligence. It's an ambition that has been baked into DeepMind's DNA from the very beginning. I think it's fair to say that there's some people in the field who don't think that AGI is possible. They sort of say that it's a distraction from the actual work of building practical systems. What makes you so sure that this is something that's possible?
I think it comes down to the definition of AGI. So if we define it as a system that's able to do a wide variety of cognitive tasks to a human level, that must be possible, I think, because the existence proof is the human brain. And unless you think there's something non-computable in the brain, which so far there's no evidence for, then it should be possible to mimic those functions on, effectively, a Turing machine, a computer. And then the second part of that, which is that it's a distraction from building practical systems. Well, I mean, that may be true in the sense that what you're mostly interested in is the practical systems.
AGI itself is a big research goal and a long term one. It's not gonna happen anytime soon. But our view is that if you try and shoot for the stars, so to speak, then any technologies that you sort of build on the way can be broken off in components and then applied to amazing things. And so we think striving for the long term ambitious research goal is the best way to create technologies that you can apply right now.
How will you recognise AGI when you see it? Will you know it when you see it?
What I imagine is going to happen is some of these AI systems will start being able to use language. And, I mean, they already are, but better. Maybe we'll start collaborating with them, say scientifically. And I think more and more, as you put them to use at different tasks, slowly that portfolio will grow. And then eventually we could end up with it controlling a fusion power station.
And eventually, I think one system or one set of ideas and algorithms will be able to scale across those tasks and everything in between. And then once that starts being built out, there will be, of course, philosophical arguments about, is that covering all the space of what humans can do? And I think in some respects, it will definitely be beyond what humans are able to do, which will be exciting as long as that's done in the right way. And, you know, there'll be cognitive scientists that will look into, does it have all the cognitive capabilities we think humans have, creativity? What about emotion, imagination, memory?
And then there'll be the subjective feeling that these things are getting smarter. But I think that's partly why this is the most exciting journey, in my opinion, that humans have ever embarked on, which is I'm sure that trying to build AGI with a sort of neuroscience inspiration is going to tell us a lot about ourselves and the human mind.
The way you're describing it there is as though there's this big goal in the future that you steadily approach. I'm wondering whether, in your mind, there's also, like, a day when this happens. You know how children dream of lifting the World Cup? Have you thought about the day when you walk away from the office and you're like, it happened today?
Yeah. I have dreamed about that for a very long time. I think it would be more romantic in some sense if it happened that way: you know, one day you're coming in and this lump of code is just executing, then the next day you come in and it sort of feels sentient to you. It'd be quite amazing.
From what we've seen so far, it will probably be more incremental, and then a threshold will be crossed. But I suspect it will start feeling interesting and strange in this middle zone as we start approaching that. We're not there yet, I don't think. None of the systems that we interact with or have built have that feeling of sentience or awareness, any of those things; they're just kind of programs that execute, albeit ones that learn. But I could imagine that one day that could happen.
You know, there's a few things I look out for, like perhaps coming up with a truly original idea, creating something new, a new theory in science that ends up holding, maybe coming up with its own problem that it wants to solve. These kinds of things would be sort of activities that I'd be looking for on the way to maybe that big day.
If you're a betting man then, when do you think that will be?
So I think that the progress so far has been pretty phenomenal. I think that it's coming relatively soon in the next, you know, I wouldn't be super surprised in the next decade or two.
Shane said that he writes down predictions and his confidence on them and then checks back to see how well he did in the past. Do you do the same thing?
I don't do that. No, I'm not as methodical as Shane. And he hasn't shown me his recent predictions, so I don't know where he's secretly putting them down. I'll have to ask him.
It's just
a drawer in his house. Yes. Exactly. Like
Like Shane Legg, DeepMind's co-founder and chief scientist, who we heard from in an earlier episode, Demis believes that there are certain abilities that humans have but are missing from current AI systems.
Today's learning systems are really good at learning in messy situations, so dealing with vision, or intuition in Go. So pattern recognition, they're amazing at that. But we haven't yet got them, satisfactorily, to be able to use symbolic knowledge. So doing mathematics, or language even: we have some language models, of course, but they don't yet have a deep understanding of the concepts that underlie language.
And so they can't generalise or write a novel or make something new.
How do you test whether, say, a language model has a conceptual understanding of what it's coming out with?
That's a hard question and something that we're all wrestling with still. So we have our own large language model, just like most teams these days. And it's fascinating probing it. You know, at three in the morning, that's one of my favorite things to do, just have a little chat with the AI system.
Does it really tell you something interesting?
Sometimes. But I'm generally trying to break it to see exactly this. Like, does it really understand what you're talking about? One of the things I suspect they don't understand properly is quite basic real-world situations that rely on maybe experiencing physics or acting in the world, because, obviously, these are passive language models. Right?
They just learn from reading. So you can say things like: Alice threw the ball to Bob, Bob threw it back to Alice, Alice throws it over the wall, Bob goes and gets it. Who's got the ball? And, you know, obviously, in that case, it's Bob, but the model can get quite confused. Sometimes it'll say Alice, or it'll say something random. So it's those types of things, you know, almost like a kid would understand that.
And there are basic things like that that it can't get about the real world, because it sort of only knows it from words. But that in itself is a fascinating philosophical question. I think what we're doing is actually philosophy, in the greatest tradition of that: trying to understand philosophy of mind, philosophy of science.
When it's 3AM and you're talking to your language model, do you ever ask it if it's an AGI?
I think I must've done that. Yes, with varying answers.
But it has responded yes at some point.
Yeah, it does sometimes respond yes, and that it's an artificial system, and it knows what AGI is to some level. I don't think it really knows anything, to be honest. That would be my conclusion. It knows some words.
Knows some words. A clever parrot.
Yes, exactly.
For the moment at least, AI systems like language models show no signs of understanding the world. But could they ever go beyond this in future? Do you think that consciousness could emerge as a sort of natural consequence of a particular architecture? Or do you think that it's something that has to be intentionally created?
I'm not sure. I suspect that intelligence and consciousness are what's called double dissociable: you can have one without the other, both ways. My argument for that would be that if you have a pet dog, for example, I think they quite clearly have some consciousness. You know, they seem to dream, they're sort of self-aware of what they want to do. But, you know, dogs are smart, but they're not that smart.
Right? At least my dog isn't, anyway. But on the other hand, if you look at intelligent systems, the current ones we've built, okay, they're quite narrow, but they are very good at, say, games. I could easily imagine carrying on building those types of AlphaZero systems, and they get more and more general, more and more powerful, but they just feel like programs. So that's one path.
And then the other path is that it turns out consciousness is integral with intelligence. So at least in biological systems, they seem to both increase together. So it suggests that maybe there's a correlation. It could be that it's causative. So it turns out if you have these general intelligence systems, they automatically have to have a model of their own conscious experience.
Personally, I don't see why that's necessary. So I think by building AI and deconstructing it, we might actually be able to triangulate and pin down what the essence of consciousness is, and then we would have the decision of whether we want to build that in or not. My personal opinion is that, at least in the first stage, we shouldn't if we have the choice, because I think that brings in a lot of other complex ethical issues.
Tell me about some of those.
Well, I mean, I think if an AI system was conscious, and you believed it was, then you'd have to consider what rights it might have. And then the other issue as well is that conscious systems or beings have generally come with free will and wanting to set their own goals. And I think, you know, there are some safety questions about that as well. And so I think it would fit into a pattern that we're much more used to with our machines around us to view AI as a kind of tool or, if it's language-based, a kind of oracle. It's like the world's best encyclopedia.
Right? You ask a question and it has, like, you know, all the research to hand, but not necessarily an opinion or a goal to do with that information. Right? Its goal would be to give that information in the most convenient way possible to the human interactor.
Wikipedia doesn't have a theory of mind. No. Maybe it's best to keep
it like that, exactly.
Okay, how about a moral compass then? Can you impart a moral compass into AI and should you?
I mean, I'm not sure I would call it a moral compass, but definitely it's going to need a value system, because whatever goal you give it, you're effectively incentivizing that AI system to do something. And so as that becomes more and more general, you can sort of think about that as almost a value system. What do you want it to do in its set of actions? What do you want to sort of disallow? How should it think about side effects versus its main goal?
What's its top-level goal? If it's to keep humans happy, which set of humans? What does happiness mean? And we'll definitely need help from philosophers and sociologists, and psychologists probably, in defining what a lot of these terms mean. And, of course, a lot of them are very tricky for us as humans to figure out, our collective goals.
What do you see as the best possible outcome of having AGI?
The outcome I've always dreamed of, or imagined, is AGI has helped us solve a lot of the big challenges facing society today, be that health, cures for diseases like Alzheimer's. I would also imagine AGI helping with climate, creating a new energy source that is renewable. And then what would happen after those kinds of first-stage things is you kind of have this, sometimes people describe it as, radical abundance.
If we're talking about radical abundance of, I don't know, water and food and energy, how does AI help to create that?
So it helps to create that by unlocking key technological breakthroughs. Let's take energy, for example. We are looking for, as a species, renewable, cheap, ideally free, non polluting energy. And to me, there's at least a couple of ways of doing that. One would be to make fusion work much better than nuclear fission.
It's much safer. That's obviously the way the sun works. We're already working on one of the challenges for that, which is containing the plasma in a fusion reactor, and we already have the state-of-the-art way of doing that, sort of unbelievably. The other way is to make solar power work much better. If we had solar panels tiling something, you know, half the size of Texas, that would be enough to power the whole world's uses of energy.
So it's just not efficient enough right now. But if you had superconductors, you know, a room-temperature superconductor, which is obviously the holy grail in that area, if that was possible, suddenly that would make it much more viable. And I could imagine AI helping with materials science. That's a big combinatorial problem: a huge search space, all the different compounds you can combine together, which one's the best.
And, of course, Edison sort of did that by hand when he found tungsten for light bulbs. But imagine doing that at enormous scale on much harder problems than a light bulb. That's kind of the sorts of things I'm thinking an AI could be used for.
I think you probably know what I'm gonna ask you next. Because if that is the fully optimistic, utopian view of the future, it can't all be positive. When you're lying awake at night, what are the things that you worry about?
Well, to be honest with you, I do think that is a very plausible end state, the optimistic one I painted you. And, of course, that's the reason I work on AI: because I hoped it would be like that. On the other hand, one of the biggest worries I have is what humans are going to do with AI technologies on the way to AGI. Like most technologies, they could be used for good or bad. And I think that's down to us as a society, and governments, to decide which direction they're gonna go in.
Do you think society is ready for AGI?
I don't think yet. I think that's part of what this podcast series is about as well: to give the general public more of an understanding of what AGI is, what AI is, and what's coming down the road. And then we can start grappling, as a society, not just the technologists, with what we want to be doing with these systems.
You said you've got this sort of twenty-year prediction, and then simultaneously there's where society is in terms of understanding and grappling with these ideas. Do you think that DeepMind has a responsibility to hit pause at any point?
Potentially. I always imagined that as we got closer to the sort of gray zone you were talking about earlier, the best thing to do might be to pause pushing the performance of these systems, so that you can analyze, down to minute detail, exactly what they do, and maybe even prove things mathematically about the system, so that you know the limits, and otherwise, of the systems that you're building. At that point, I think all the world's greatest minds should probably be thinking about this problem. So that's what I would be advocating to, you know, the Terence Taos of this world, the best mathematicians. I've even talked to him about this: I know you're working on the Riemann hypothesis or something, which is the best thing in mathematics, but, actually, this is more pressing. I have this sort of idea of, like, almost an Avengers assembled of the scientific world. That's a bit of, like, my dream.
Did Terence Tao agree to be one of your Avengers?
I didn't quite tell him the full plan of that.
I know that some quite prominent scientists have spoken in quite serious terms about this path towards getting AGI. I'm thinking about Stephen Hawking here. Do you ever have debates with those kind of people about what the future looks like?
Yeah. I actually talked to Stephen Hawking a couple of times. I went to see him in Cambridge. It was supposed to be a half-hour meeting, but we ended up talking for hours. He wanted to understand what was going on at the coalface of AI development.
And I explained to him what we were doing, the kinds of things we've discussed today, what we're worried about, and he felt much more reassured that people were thinking about this in the correct way. And at the end, he said, I wish you the best of luck, but not too much. Then he looked me right in the eye, with a twinkle in his eye. It was just amazing. That was literally his last sentence to me.
Best of luck, but not too much.
That's lovely.
Which I thought was perfect.
It is perfect. Along the road to AGI, there have already been some significant breakthroughs with particular AI systems, or narrow AI as it's sometimes known. Not least the DeepMind system known as AlphaFold, which we heard about in episode one. AlphaFold has been shown to accurately predict the 3D structures of proteins, with implications for everything from the discovery of new drugs to pandemic preparedness. I asked Demis how a company known for getting computers to play games to a superhuman level was able to achieve success in some of the biggest scientific challenges in the space of just a few short years.
The idea was always, from the beginning of DeepMind, to prove our general learning ideas, reinforcement learning, deep learning, combining them, on games, tackling the most complex games out there: so StarCraft in terms of computer games, and Go in terms of board games. And then the hope was we could start tackling real-world problems, especially in science, which is my other huge passion. My personal reason for working on AI, at least, was to use AI as the ultimate tool, really, to accelerate scientific discovery in almost any field. Because if it's a general tool, then it should be applicable to many, many fields of science. And I think AlphaFold, which is our program for protein folding, is our first massive example of that.
And I think it's woken up the scientific world to the possibility of what AI could do.
What impact do you hope that AlphaFold will have?
I hope AlphaFold is the beginning of a new era in biology, where computational and AI methods are used to help model all aspects of biological systems and therefore accelerate our discovery process in biology. So I'm hoping that it will have a huge impact on drug discovery, but also on fundamental biology, understanding what these proteins do in your body. And I think that if you look at machine learning, it's the perfect description language for biology, in the same way that maths was the perfect description language for physics. Many people, obviously, in the last fifty years have tried to apply mathematics to biology with some success, but I think it's too complex to describe in a few equations. But it's the perfect regime for machine learning to spot patterns.
Machine learning is really good at taking weak signals, messy signals, and making sense of them, which is I think the regime that we're in with biology.
How could AI be used for a future pandemic?
So one of the things we're actually looking at now is the top 20 pathogens that biologists have identified as likely to cause the next pandemic, to fold all the proteins involved in those viruses, assuming, you know, it's feasible. So that drug discovery and pharma can have a head start at figuring out what drugs or antidotes or antivirals they would make to combat those, if those viruses ended up mutating slightly and becoming the next pandemic. I think in the next few years we'll also have automated drug discovery processes. So we won't just be giving the structure of the protein; we might even be able to propose what sort of compound might be needed.
So I think there's a lot of things AI can potentially do. And then on the other side of things, maybe on the analysis side, to track trends and predict how spreading might happen.
Given how significant the advances are for science that are being created by these AI systems, do you think that there will ever be a day where an AI wins a Nobel Prize?
I would say that, just like with any tool, it's the human ingenuity that's gone into it. You know, it's sort of like saying, who should we credit with spotting Jupiter's moons? Is it his telescope? No, I think it's Galileo.
And, of course, he also built the telescope, right? Famously, as well as it being his eye that saw it, and then he wrote it up. So I think it's a nice sort of science-fiction story to say, well, the AI should win it. But at least until we get to full AGI, unless it's picked the problem itself, come up with a hypothesis, and then solved it.
That's a little bit different. But for now where it's just a fairly automated tool effectively, I think the credit should go probably to the humans.
I still quite like the idea of giving Nobels to inanimate objects, like the Large Hadron Collider can have one.
Well, regression can have one. Exactly.
I just quite like
that idea. Even
before AGI has been created, it's clear that AI systems like AlphaFold are already having a significant impact on real world problems. But for all their positives, there are also some tricky ethical questions surrounding the deployment of AI, which we've been exploring throughout this series. Things like the impact of AI on the environment and the problem of biased AI systems being used to help make decisions on things like access to health care or eligibility for parole. What's your view on AI being used in those situations?
I just think we have to be very careful that the hype doesn't get ahead of itself. There are a lot of people who think AI can just do anything already. And actually, if they understood AI properly, they'd know that the technology is not ready. And one big category of those things is very nuanced human judgment about human behavior. So a parole board hearing would be a good example of that.
There's no way AI is ready yet to model the balance of factors that an experienced, say, parole board member is balancing up across society. How do you quantify those things mathematically, or in data? And then if you add in a further thing, which is how critical that decision is either way, then all those things combined mean, to me, that it's not something AI should be used for, certainly not to make the decision. At the level AI is at the moment, I think it's fine to use it as an analysis tool, to triage, like, a medical image, but the doctor needs to make the decision.
在讨论语言模型的专题中,我们曾涉及某些令人担忧的潜在用途。DeepMind能否真正阻止语言模型被用于传播虚假信息等恶意目的?
In our episode on language models, we talk about some of the more concerning potential uses of them. Is there anything that DeepMind can do to really prevent some of those nefarious purposes of language models like spreading misinformation?
我们正在自主开展多项关于语言模型问题的研究。要开发出能解析系统行为动机的分析工具还有很长的路要走。核心在于理解系统为何产生特定输出,继而解决偏见、公平性等问题——当然必须以真相为基准。但某些主观领域确实存在分歧,比如不同政治立场人群对同一事物的看法差异。
We're doing a bunch of research ourselves on, you know, the issues with language models. I think there's a long way to go, like in terms of building analysis tools to interpret what these systems are doing and why they're doing it. I think this is a question of understanding why are they putting this output out, and then how can you fix those issues like biases, fairness, and what's the right way to do that. Of course, you want truth at the heart of it. But then there are subjective things where people from different, say, political persuasions have a different view about something.
你打算在那一刻宣称什么是真相?这进而会影响到社会对此的看法,而具体是哪个社会呢?这些都是极其复杂的问题。正因如此,我认为在将这类系统部署到产品等领域时,我们必须谨慎行事。
What are you going to say is the truth at that point? So then it sort of impinges on, like, well, what does society think about that? And then which society are you talking about? And these are really complex questions. And because of that, this is an area I think that we should be proceeding with caution in terms of deploying these systems in products and things.
如何减轻人工智能对环境的影响?是否存在这样的风险:我们不断建造越来越庞大、能耗越来越高的系统,最终产生负面影响?
How do you mitigate the impact that AI is having on the environment? Is there just a danger of building larger and larger and larger energy-hungry systems and having a negative impact?
确实。我们必须考虑这一点。我认为AI系统仅占全球能源消耗的极小部分,即便是大型模型,与在线观看视频相比也相形见绌。所有这些活动消耗的计算机资源和带宽要多得多。第二点是,实际上现在大多数大型数据中心,尤其是像谷歌这样的企业,几乎已经实现了100%的碳中和。
Yeah. I mean, we have to consider this. I think that AI systems are using a tiny sliver of the world's energy usage, even the big models, compared to watching videos online. All of these things are using way more compute and bandwidth. Second thing is that actually most of the big data centers now, especially things like Google, are pretty much 100% carbon neutral.
但我们应该延续这一趋势,实现完全绿色的数据中心。当然,你还需要权衡所构建系统带来的收益。以医疗系统为例,与其能源消耗相比,大多数AI模型都产生了巨大的净正面效益。最后,构建AI模型的过程本身就能被用于优化能源系统。
But we should continue that trend to become fully green data centers. And then, of course, you have to look at the benefits of what you're trying to build. So let's say a health care system or something like that relative to energy usage. Most AI models are hugely net positive. And then the final thing is that actually the AI models we build can then be used, you know, to optimize the energy systems themselves.
例如,我们AI系统最成功的应用之一就是控制数据中心的冷却系统,节省了约30%的能源消耗。这种节约很可能超过了我们所有AI模型的总能耗。所以关键是要保持警惕,确保情况不会失控。但我认为目前这些担忧可能被略微夸大了。
So for example, one of the best applications we've had of our AI systems is to control the cooling in data centers and save, like, 30% of the energy they use. You know, that saving is way more than we've ever used for all of our AI models put together, probably. So it's an important thing to bear in mind to make sure it doesn't get out of hand. But I think right now, those particular worries are sort of slightly overhyped.
虽然Demis和他在DeepMind的同事们正在认真思考AI在现实世界中可能出错的情况,但我们的谈话中最突出的是Demis对AI和AGI终将为整个社会带来净效益的坚定信念。
While Demis and his colleagues at DeepMind are thinking hard about what could go wrong when AI is deployed in the real world, what really shone through during our conversation was Demis' faith in the idea that ultimately building AI and AGI will be a net positive for the whole of society.
纵观当今人类面临的挑战——气候、可持续性、不平等、自然环境等问题,在我看来都在不断恶化。未来还将出现新的挑战,比如水资源获取等,我认为这些在未来五十年将成为重大议题。如果没有AI这样的技术即将问世,我对我们解决这些问题的能力会极度担忧。但我持乐观态度,因为我相信AI即将到来,它将成为我们有史以来创造的最佳工具。
If you look at the challenges that confront humanity today, climate, sustainability, inequality, the natural world, all of these things are, in my view, getting worse and worse. And there's gonna be new ones coming soon down the line, like access to water and so on, which I think are gonna be really major issues in the next fifty years. And if there wasn't something like AI coming down the road, I would be extremely worried for our ability to actually solve these problems. But I'm optimistic we are gonna solve those things because I think AI is coming, and I think it will be the best tool that we've ever created.
从某些方面来说,很难不被德米斯的乐观所吸引,很难不对他描绘的未来图景感到兴奋。而且越来越明显的是,随着这项技术的成熟,将带来巨大的收益。但随着研究朝着通用人工智能这一北极星目标不断推进,同样显而易见的是,这种进步也伴随着严重的风险。既有需要解决的技术挑战,也有不容忽视的伦理和社会挑战。其中许多问题单靠人工智能公司是无法解决的。
In some ways, it's hard not to be drawn in by Demis' optimism, to be enthused by the tantalizing picture he paints of the future. And it's becoming clearer that there are serious benefits to be had as this technology matures. But as research swells behind that single North Star of AGI, it's also evident that this progress comes with its own serious risks too. There are technical challenges that need resolving, but ethical and social challenges too that can't be ignored. And much of that can't be resolved by AI companies alone.
这需要更广泛的社会讨论,我希望这期播客至少能在某种程度上推动这种讨论。但最让我震撼的是,这个领域在如此短的时间内取得了如此大的进展。在上季结束时,我们还在热烈讨论AI玩雅达利游戏、围棋和国际象棋。而现在,随着这些构想逐渐成熟,我们已经可以合理期待AI将在药物研发、核聚变和基因组理解等领域发挥作用。我不禁好奇,当我们再次相见时,又会有什么新发现等待着我们。
They require a broader societal conversation, one which I hope, at least in some small way, is fueled by this podcast. But I'm struck most of all by how far the field has come in such a short space of time. At the end of the last season, we were talking enthusiastically about AI playing Atari games and Go and chess. And now, all of a sudden, as these ideas have found their feet, we can reasonably look forward to AI making a difference in drug discovery and nuclear fusion and understanding the genome. And I do wonder what new discoveries might await when we meet again.
《DeepMind播客》由Whistle Down制作。系列制片人是丹·哈杜恩,制作支持来自吉尔·阿奇纳库。编辑是大卫·普雷斯特。音效设计由艾玛·巴纳比负责,奈杰尔·阿普尔顿担任音响工程师。本季原创音乐由伊莱妮·肖特别创作,这些音乐真是美妙绝伦。
DeepMind: The Podcast has been a Whistledown production. The series producer is Dan Hardoon, with production support from Jill Achinayku. The editor is David Prest. Sound design is by Emma Barnaby, and Nigel Appleton is the sound engineer. The original music for this series was specially composed by Elainie Shaw, and what wonderful music it was.
我是汉娜·弗莱教授。感谢您的收听。
I'm professor Hannah Fry. Thank you for listening.