本集简介
双语字幕
大家好,欢迎收听本周播客节目。我是乔恩·斯图尔特,今天由我来主持。这是个什么节目呢?
Hey, everybody. Welcome to the weekly show podcast. My name is Jon Stewart. I'm gonna be hosting you today. It's a what is it?
10月8日,星期三。我不知道今天晚些时候会发生什么,但我们明天就会离开。不过今天的节目,我想简单说几句。今天的嘉宾被誉为人工智能教父,杰弗里·辛顿先生自上世纪70年代起就开始研发最终演变为AI的这类技术。我想先让大家知道,我们会深入探讨这个话题。
Wednesday, October 8. I don't know what's gonna happen later on in the day, but we're gonna be out tomorrow. But today's episode, I just wanna say very quickly: today's episode, we are talking to someone known as the godfather of AI, a gentleman by the name of Geoffrey Hinton, who has been developing the type of technology that has turned into AI since the '70s. And I wanna let you know up front that we talk about it.
不过在第一部分,他为我们详细解释了AI的本质,这部分让我受益匪浅。虽然我们也会讨论'AI会毁灭人类'的议题,但理解基础概念对搭建认知框架至关重要。希望你们能和我一样觉得这部分内容引人入胜——天啊,它彻底拓展了我对这项技术本质、应用前景以及潜在风险的理解。我就不再赘言了,让我们有请本期播客的嘉宾。
The first part of it, though, he he gives us this breakdown of kind of what it actually is, which for me was unbelievably helpful. We get into the it will kill us all part, but it was important for my understanding to sort of set the scene. So I I I hope you find that part as interesting as I did because, man, it it expanded my understanding of what this technology is, of how it's going to be utilized, of what some of those dangers might be in a in a really interesting way. So I don't I will not hold it up any longer. Let us get to, our guest for this podcast.
女士们先生们,今天我们非常荣幸地邀请到多伦多大学计算机科学系荣休教授、施瓦茨雷斯曼研究院顾问委员会成员杰弗里·辛顿。先生,非常感谢您今天的到来。
Ladies and gentlemen, we are absolutely thrilled today to be able to welcome professor emeritus with the Department of Computer Science at the University of Toronto and Schwartz Reisman Institute advisory board member Geoffrey Hinton, who is joining us. Sir, thank you so much for being with us today.
非常感谢您的邀请。
Well, thank you so much for inviting me.
能与您对话是我的荣幸。您被称为——我猜您会对这个称号很谦虚——人工智能教父,因为您在神经网络方面的研究。您还因此共同获得了2024年诺贝尔物理学奖,这个信息准确吗?
I'm delighted. You are known as, and I'm sure you will be very demure about this, the godfather of artificial intelligence for your work on sort of these neural networks. You you co won the actual Nobel Prize in physics in in 2024 for this work. Is is is that correct?
确实如此。这有点尴尬,因为我并不研究物理。所以当他们打电话通知我获得诺贝尔物理学奖时,我一开始根本不相信。
That is correct. It's slightly embarrassing since I don't do physics. So when they called me up and said you won the Nobel Prize in Physics, I didn't believe them to begin with.
其他物理学家是不是也在想,等等,那家伙甚至不是我们这行的。
And and were the other physicists going, wait a second. That guy that guy's not even in our business.
我强烈怀疑他们是,但他们没对我这么做。
I strongly suspect they were, but they didn't do it to me.
哦,太好了。我很高兴。这对你来说可能有点基础,但当我们谈论人工智能时,我并不完全确定我们在讨论什么。我知道有这些东西,比如大型语言模型。
Oh, good. I'm glad. This is gonna seem somewhat remedial, I'm sure, to you. But when we talk about artificial intelligence, I'm not exactly sure what it is that we're talking about. I know there are these things, large language models.
根据我的经验,人工智能只是一个稍微美化一点的搜索引擎。以前我谷歌搜索时,它只会给我答案。现在它会说,你问了个有趣的问题。所以我们谈论人工智能时到底在讨论什么?
I I know, to my experience, artificial intelligence is just a slightly more flattering search engine. Whereas I used to Google something, and it would just give me the answer. Now it says, what an interesting question you've asked me. So what are we talking about when we talk about artificial intelligence?
以前你用谷歌时,它会使用关键词,并且已经提前做了大量工作。所以如果你输入几个关键词,它能找到所有包含这些词的文档。
So when you used to Google, it would use keywords, and it would have done a lot of work in advance. So if you gave it a few keywords, it could find all the documents that had those words in.
好的。所以基本上它只是排序。它浏览、排序并找到单词,然后给你一个结果。
Okay. So basically, it's it's just a it's sorting. It's looking through, and it's sorting and finding words, and then bringing you a result.
是的。以前就是这样运作的。但它不理解问题是什么。所以它无法给你那些实际上不包含这些词但主题相同的文档。
Yeah. That's how it used to work. Okay. But it didn't understand what the question was. So it couldn't, for example, give you documents that didn't actually contain those words but were about the same subject.
现在
Now
它没有建立那种联系。哦,对了。因为它会说,这是你的结果减去某个未被包含的词。
It didn't make that connection. Oh, right. Because it would say, here is your result minus and then it would say like a word that was not included.
没错。但如果你有一份文档,里面完全没有你使用的词汇,它就无法找到,尽管那可能是一份与你讨论的主题高度相关的文档,只是用了不同的表达方式。现在它能理解你的话,而且理解方式几乎和人类一样。
Right. But if you had a document with none of the words you used, it wouldn't find that even though it might be a very relevant document about exactly the subject you were talking about. It had just used different words. Now it understands what you say, and it understands in pretty much the same way people do.
什么?所以如果我说,它会回应‘哦,我明白你的意思了。让我来给你讲解一下。’这样它就从一个单纯的搜索查找工具,变成了几乎是你讨论领域的专家,还能提供你可能没想到的内容。
What? So if I it'll say, oh, I know what you mean. Let me let me let me educate you on this. So it's gone from being kind of a literally just a search and find thing to an actual almost an expert in whatever it is that you're discussing, and it can bring you things that you might not have thought about.
是的。大型语言模型并非在所有领域都是专家。就像你某个知识渊博的朋友,只擅长特定领域。
Yes. So the large language models are not very good experts at everything. So if you take take some friend you have who knows a lot about some subject matter.
嗯。不,我确实有几个这样的朋友。
Mhmm. No. I got a couple of those.
对。他们可能在某些方面比大型语言模型懂得更多,但依然会惊讶于模型对他们专业领域的深刻理解。
Yeah. They probably know a bit they're probably a bit better than the large language model, but they'll nevertheless be impressed that the large language model knows their subject pretty well.
那么,机器学习与其他技术有什么区别?谷歌作为搜索引擎在机器学习方面表现如何?那不过是算法和预测罢了。
What is so what is the difference between sort of machine learning? So was was Google in terms of a a search engine machine learning? That's just algorithms and and predictions.
不,不完全是这样。机器学习是一个统称,指代计算机中任何能够学习的系统。
No. Not exactly. Machine learning is a kind of coverall term for any system on a computer that learns.
好的。
Okay.
现在这些神经网络是一种特殊的学习方式,与以往使用的方法大不相同。
Now these neural networks are a particular way of doing learning that's very different from what was used before.
好的。现在这些是新型神经网络。旧的机器学习并不被视为神经网络。当你说神经网络时,是指你的工作某种程度上源于七十年代,那时你认为自己在研究大脑,对吗?
Okay. Now these are these are the new neural networks. The old machine learning, those were not considered neural networks. And when you say neural networks, meaning your work was sort of the genesis of it was in the seventies where you thought you were studying the brain. Is that correct?
我当时试图提出关于大脑实际学习方式的构想。我们对此有所了解:大脑通过改变脑细胞间连接的强度来学习。
I was trying to come up with ideas about how the brain actually learned. And there's some things we know about that. It learns by changing the strengths of connections between brain cells.
等等,详细解释一下。你说它通过改变连接来学习。那么如果你向人类展示新事物,脑细胞真的会在内部建立新的连接吗?
Wait. That so explain that. What it says it it learns by changing the connection. So if if you show a human something new, brain cells will it will actually make new connections within brain cells.
它不会建立新的连接。已有的连接依然存在,但主要运作方式是改变这些连接的强度。
It won't make new connections. There'll be connections that were there already, but the main way it operates is it changes the strength of those connections.
哇。
Wow.
所以如果你从大脑中间的一个神经元——一个脑细胞的角度来思考
So if you think of it from the point of view of a neuron in the middle of the brain, a brain cell
好的。
Okay.
它一生中能做的只是偶尔发出‘砰’的信号。
All it can do in life is sometimes go ping.
他就这点能耐?这就是他唯一的...这就是他仅有的本事了。
That's all he's got? That's his only that's the only thing it's got.
它唯一能做的就是——除非它碰巧连接着肌肉。好吧。它可以偶尔发出‘砰’的信号,而且它必须决定何时发出这个信号。
All it's got is it can unless it happens to be connected to a muscle. Okay. It can sometimes go ping, and it has to decide when to go ping.
哦,哇。它是如何决定何时发出‘砰’的信号的?
Oh, wow. How does it decide when to go ping?
我很高兴你问了这个问题。还有其他神经元在发出‘砰’的信号。当它看到其他神经元特定的‘砰’信号模式时,它就会发出‘砰’的信号。你可以把这个神经元想象成接收来自其他神经元的‘砰’信号。
I I was glad you asked that question. There's other neurons going ping. Okay. And when when it sees particular patterns of other neurons going ping, it goes ping. And you can think of this neuron as receiving ping from other neurons.
每次它收到一个‘砰’信号,就会将其视为是否应该激活或发出‘砰’信号的投票数。你可以改变另一个神经元对它拥有的投票数。
And each time it receives a ping, it treats that as a number of votes for whether it should turn on or should should go ping or should not go ping. And you can change how many votes another neuron has for it.
你会如何改变那个投票?
Would you how would you change that vote?
通过改变连接的强度。连接的强度可以理解为这个其他神经元为你发出‘砰’信号提供的投票数。
By changing the strength of the connection. The strength of the connection, think of as the number of votes this other neuron gives for you to go ping.
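The voting picture just described can be sketched in a few lines of Python (an editor's toy illustration, not code from the episode; the function name, weights, and threshold are all invented):

```python
# A neuron "pings" when the weighted votes from its incoming pings
# cross a threshold. Connection strength = number of votes that
# input gets; a negative strength is a vote against pinging.
def neuron_pings(incoming_pings, connection_strengths, threshold=1.0):
    votes = sum(p * w for p, w in zip(incoming_pings, connection_strengths))
    return votes >= threshold

# Two strongly connected neighbours ping: enough votes to fire.
print(neuron_pings([1, 1, 0], [0.6, 0.7, 0.9]))   # True
# A strong negative connection votes the neuron back down.
print(neuron_pings([1, 1, 1], [0.6, 0.7, -0.9]))  # False
```

Changing a connection strength changes how many votes that neighbour gets, which is exactly the quantity learning adjusts in Hinton's description.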
好的。所以在某些方面,它确实让我想起了电影《小黄人》。但它几乎是一种社交,如果如果
Okay. So it really is, in some respects, it's a boy, it reminds me of the movie Minions. But it's it's almost a a social if if
我知道我在想什么。这非常像政治联盟。会有一起发出‘砰’信号的神经元群组,那个群组中的神经元会互相告诉对方发出‘砰’信号。然后可能会有另一个不同的联盟,他们会告诉其他神经元不要发出‘砰’信号。
I know what I'm thinking about. It's it's it's it's very like political coalitions. There'll be groups of neurons that go ping together, and the neurons in that group will all be telling each other go ping. And then there might be a different coalition, and they'll be telling other neurons don't go ping.
天啊。
Oh my god.
然后可能会有不同的联合体。
And then there might be a different coalition.
对。
Right.
它们都在互相告诉对方去‘ping’,同时告诉第一个联合体不要‘ping’。所以当第二个
And they're all telling each other to go ping and telling the first coalition not to go ping. And so when when the second
联合体在你大脑中运作时,是的。就像,我想要拿起一把勺子。
coalition going on in your brain Yes. In the way of, like, I would like to pick up a spoon.
没错。以勺子为例,你大脑中的勺子就是一组神经元一起‘ping’,这就是一个概念。
Yes. So spoon, for example, spoon in your brain is a coalition of neurons going ping together, and that's a concept.
哦,哇。所以当你学习时,比如你还是个婴儿,他们说到‘勺子’,就有一小群神经元在说‘哦,那是勺子’,并且它们之间的连接在加强。这就是为什么当你进行脑部成像时,会看到某些区域亮起来。那些亮起的区域就是对应特定物品‘ping’的神经元吗?
Oh, wow. So so as you're teaching, when you're when you're a baby and they go spoon, there's a little group of neurons going, oh, that's a spoon, and they're strengthening their connections with each other. So whatever is that why when, you know, you're you're imaging brains, you see certain areas light up. And is is that lighting up of those areas the neurons that ping for certain items
或者动作?并不完全是这样。
or actions? Not exactly.
快接近了。我快接近了。
Getting close. I'm getting close.
很接近了。当你做不同事情时,不同区域会激活,比如处理视觉、说话或控制手部动作时,相应区域就会亮起。但那些看到勺子会同步激活的神经元联盟,它们不只对勺子起反应。这个联盟里大部分成员看到叉子时也会被激活。
It's close. Different areas will light up when you're doing different things, like when you're doing vision or talking or controlling your hands. Different areas light up for that. But the coalition of neurons that goes ping that go ping together when there's a spoon, they don't only work for spoon. Most of the members of that coalition will go ping when there's a fork.
所以这些联盟之间有很大重叠。
So they overlap a lot, these coalitions.
这是个包容性很强的联盟。我喜欢用政治来比喻这个现象。没想到你的大脑运作也受同伴压力影响。
This is a big tent. It's a big tent coalition. I love thinking about this as political. I had no idea your brain operates on peer pressure.
这种情况很常见。是的。概念就像是和谐共处的联盟,但它们之间也有大量重叠。比如狗的概念和猫的概念就有很多共同点,它们会共享许多神经元。
There's a lot of that goes on. Yes. And concepts are kind of coalitions that are happy together, but they they overlap a lot. Like the concept for dog and the concept for cat have a lot in common. They'll have a lot of shared neurons.
没错。
Right.
特别是那些表征诸如‘这是有生命的’、‘这是毛茸茸的’或‘这可能是家养宠物’等特征的神经元,这些神经元在识别猫和狗时是共通的。
In particular, the neurons that represent things like this is animate or this is hairy or this might be a domestic pet. All those neurons will be in common to cat and dog.
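The overlap between the cat and dog coalitions can be pictured with plain sets (an editor's toy sketch; the feature names are invented stand-ins for neurons):

```python
# Neurons shared by both coalitions encode the general features.
shared = {"animate", "hairy", "domestic_pet"}
cat = shared | {"whiskers", "meows"}
dog = shared | {"barks", "fetches"}

# The two concepts overlap exactly on the general-feature neurons.
print(sorted(cat & dog))  # ['animate', 'domestic_pet', 'hairy']
```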
我能再问您一个问题吗?再次感谢您如此耐心地解释,这对我真的很有帮助。是否存在某些神经元会对‘动物’这一广义概念产生广泛响应?而其他神经元是否遵循从宏观到微观、从一般到具体的运作方式?也就是说,是否存在一组神经元负责广义响应,随着认知逐渐具体化,是否会激活那些响应频率较低但针对性更强的特定神经元?
Are there can I ask you that and again, I so appreciate your patience with this and explain this is this is really helpful for me? Are there certain neurons that ping broadly, right, for the broad concept of animal? And then other neurons, like, does it work from macro to micro, from general to specific? So you have a coalition of neurons that ping generally. And then as you get more specific with the knowledge, does that engage certain ones that will ping less frequently but for maybe more specificity?
是这样吗?
Is is that something?
好的。这是个很好的理论。不,没有人...确实没人能对此完全确定。
Okay. That's a very good theory. Nobody no. Nobody nobody really knows for sure about this.
哦,不过那——
Oh, But that's
这是个非常合理的理论。特别是,在这组神经元中,有些会对更普遍的事物更频繁地响应,而另一些则可能对更具体的事物响应频率较低。
a very sensible theory. In particular, there's gonna be some neurons in that coalition that ping more often for more general things. And then there may be neurons that ping less often, for much more specific things.
对,好的。
Right. Okay.
那么
So
所有这些都贯穿始终。就像你说的,有些区域会触发视觉或其他感官,比如触觉。我想象有一个语言触发系统。你刚才说,如果我们能让计算机变得更智能,不再只是简单的二进制条件判断,而是能让它们像这些联盟一样运作?
And they all and and this works throughout. And like you say, there's certain areas that will ping for vision or other senses, touch. I imagine there's a a ping system for language. And and and you were saying, what if we could get computers, which were much more, I would think, just binary if then, you know, sort of basic. You're saying, could we get them to work as these coalitions?
是的。我认为二进制条件判断与此关系不大。区别在于人们过去试图将规则输入计算机。他们试图找出编程计算机的基本方法,就是详细地构思出解决问题的步骤,然后告诉
Yeah. I don't think binary if then has much to do with it. The difference is people were trying to put rules into computers. They were trying to figure out so the basic way you program a computer is you figure out in exquisite detail how you would solve the problem. And then you tell
计算机所有步骤。
the computer all the steps.
然后你精确地告诉计算机该做什么。这就是常规的计算机程序。
And then you tell the computer exactly what to do. That's a normal computer program.
好的。明白了。
Okay. Great.
这些东西完全不是那样的。
These things aren't like that at all.
所以你试图改变那个流程,想看看我们是否能创建一个运作方式更接近人脑而非逐项指令列表的流程。
So you were trying to change that process to see if we could create a process that was that functioned more like how the human brain would rather than a item by item instruction list.
你
You
希望它能更全局化地思考。这是怎么发生的?
wanted it to to think more more more globally. How did how did that occur?
对很多人来说,大脑的运作方式显然不是别人给你规则然后你只是执行这些规则。
So it was sort of obvious to a lot of people that the brain doesn't work by someone else giving you rules and you just execute those rules.
没错。
Right.
我是说,在朝鲜,他们可能希望人脑那样运作,但事实并非如此。
I mean, in North Korea, they would love brains to work like that, but they don't.
你的意思是,在专制世界里,人脑就会那样运作。
You're saying that in an authoritarian world, that is how brains would operate.
嗯,这就是他们希望它们运作的方式。
Well, that's how they would like them to operate.
这是他们希望它们运作的方式,但实际比这更具艺术性。
That's how they would like them to operate. It's a little more artsy than that.
是的,好吧。
Yes. Alright.
有道理。
Fair enough.
我们确实为神经网络编写程序,但这些程序只是告诉神经网络如何根据神经元活动调整连接强度。所以这是个相当简单的程序,并不包含关于世界的各种知识。它只是基于神经元活动改变神经连接强度的规则。
We do write programs for neural nets, but the programs are just to tell the neural net how to adjust the strength of the connection on the basis of the activities of the neurons. So that's a fairly simple program that doesn't have all sorts of knowledge about the world in it. It's just what are the rules for changing neural connection strengths on the basis of the activities.
能举个例子吗?那算是机器学习还是深度学习?
Can you give me an example? So would that be considered sort of is that machine learning or is that deep learning? What what would
那是深度学习。
That's deep learning.
如果你
If you
拥有一个多层网络,就称之为深度学习,因为它包含许多层。
have a a network with multiple layers, it's called deep learning because there's many layers.
那么当你试图让计算机进行深度学习时,你实际上在对它说什么?比如,你会给出什么样的指令示例?
So what are you saying to a computer when you are trying to get it to do deep learning? Like, what would be an example of an instruction that you would give?
好的,让我来演示
Okay. So let me do
哦,现在我们没问题了。我是已经进入神经网络201课程了吗,还是仍在101阶段?
Oh, now we're alright. Am I am I in neural networks 201 yet, or am I still in 101?
你就像前排那个什么都不知道但总能提出好问题的聪明学生。
You're like the smart student in the front row who doesn't know anything but asks these good questions.
这是我听过最友善的评价了。谢谢。如果你还在为无线服务多付冤枉钱,我要你离开这个国家。立刻消失。没有任何借口。
That's that's the nicest way I've ever been described. Thank you. If you're still overpaying for your wireless, I want you to leave this country. I want you gone. There's no excuse.
在Mint Mobile,他们最爱的词是'不'。现在是时候对说'不'说'是'了。没有合约,没有月费账单,没有超额费用,没有废话。这就是为什么这么多人选择转投每月15美元的高端无线服务。天哪。
At Mint Mobile, their favorite word is no. It's time to say yes to saying no. No contracts, no monthly bills, no overages, no BS. Here's why so many said yes to making the switch and getting premium wireless for $15 a month. My god.
我花这点钱买口香糖。口香糖,我说。抛弃那些高价的无线服务及其令人咋舌的月费账单、意外超额和附加费。套餐起价每月15美元。所有套餐都包含高速数据、无限通话和短信,覆盖全国最大的5G网络。
I spend that on Chiclets. Chiclets, I say. Ditch overpriced wireless and their jaw-dropping monthly bills, unexpected overages and fees. Plans start at $15 a month. All plans come with high speed data and unlimited talk and text delivered on the nation's largest 5G network.
使用你自己的手机搭配任何Mint Mobile套餐,保留你的电话号码和所有现有联系人。准备好对说'不'说'是'了吗?立即在mintmobile.com/tws转投我们。就是mintmobile.com/tws。需预付45美元,相当于每月15美元。
Use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts. Ready to say yes to saying no? Make the switch at mintmobile.com/tws. That's mintmobile.com/tws. Upfront payment of $45 required equivalent to $15 a month.
限时新客户优惠仅限前三个月。无限套餐超过35GB后网速可能下降。税费另计,详情请见Mint Mobile。
Limited time new customer offer for first three months only. Speeds may slow above 35 gigabytes on unlimited plan. Taxes and fees extra, see Mint Mobile for details.
让我们回到1949年。
So let's go back to 1949.
哦,好吧。
Oh boy, all right.
这里有个叫唐纳德·赫布的人提出的理论,关于如何改变连接强度。如果神经元A发出'乒'声,紧接着神经元B也发出'乒'声,就增强它们之间的连接强度。这是个非常简单的规则,被称为赫布法则。
So here's a theory from someone called Donald Hebb about how you change connection strengths. If neuron A goes ping and then shortly afterwards neuron B goes ping, increase the strength of the connection. That's a very simple rule. That's called the Hebb rule.
对。赫布规则就是如果神经元a发出'乒'信号,就增强连接;b发出'乒'信号,也增强那个连接。
Right. The Hebb rule is if neuron A goes ping, increase the connection, and B goes ping, increase that connection.
是的。
Yes.
好的。
Okay.
现在,随着计算机的出现,人们开始进行计算机模拟时发现,单靠这条规则是行不通的。结果就是所有连接变得异常强烈,所有神经元同时'乒'地放电,然后就会引发癫痫发作。
Now, as soon as computers came along and you could do computer simulations, people discovered that rule by itself doesn't work. What happens is all the connections get very strong and all the neurons go ping all at the same time and you have a seizure.
哦,好吧。
Oh, okay.
真遗憾,不是吗?
That's a shame, isn't it?
确实很遗憾。
That is a shame.
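A tiny simulation makes the failure concrete (an editor's toy, with invented sizes and thresholds): with a strengthen-only Hebb rule, once a couple of neurons ping together the weights can only grow, and after a few steps every neuron pings at once.

```python
# 5 neurons, all-to-all connections, strengthen-only Hebb rule.
n, threshold = 5, 0.15
weights = [[0.1] * n for _ in range(n)]
active = [True, True, False, False, False]   # two neurons ping at first

for _ in range(20):
    # A neuron pings when its weighted input from pinging neighbours
    # exceeds the threshold.
    new_active = [
        sum(weights[i][j] for i in range(n) if i != j and active[i]) > threshold
        for j in range(n)
    ]
    # Hebb rule: fire together, wire together -- and never weaken.
    for i in range(n):
        for j in range(n):
            if i != j and active[i] and new_active[j]:
                weights[i][j] += 0.05
    active = new_active

print(all(active))                       # True: every neuron pings, a "seizure"
print(max(max(row) for row in weights))  # weights have grown past 1.0
```

With nothing to weaken connections, activity saturates, which is why the next point in the conversation is that something must make connections weaker too.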
肯定存在某种方式既能增强连接,也能削弱连接。
There's got to be something that makes connections weaker as well as making them stronger.
没错,必须要有一定的辨别力。
Right. There's got to be some discernment.
是的。
Yes.
好的。
Okay.
那么,请允许我稍微离题一分钟。
So, if I can digress for about a minute.
哈,我正求之不得呢。
Boy, I'd like that.
好的。假设我们想构建一个神经网络——呃,一个具有多层神经元的网络,用来判断图像中是否包含鸟类。
Okay. Suppose we wanted to make a neural network Uh-huh. that has multiple layers of neurons, and it's to decide whether an image contains a bird or not.
像验证码那样吗?就是当你登录时它会提示你
Like a CAPTCHA? Like when you go on and it's said you
看着它?没错。我们想用神经网络来解决那个验证码。
look at it? Exactly. We wanna solve that CAPTCHA with a neural net.
好的。
Okay.
所以神经网络的输入,也就是底层神经元,是一群神经元,它们以不同强度发出信号,代表图像中像素的亮度。
So the input to the neural net, the sort of bottom layer of neurons, is a bunch of neurons, and they go ping with different strengths of ping, and they represent the intensities of the pixels in the image.
好的。
Okay.
所以如果是1000x1000的图像,你就有一百万个神经元以不同速率发出信号,来表现每个像素的亮度。
So if it's a thousand by thousand image, you've got a million neurons that are going ping at different rates to represent how intense each pixel is.
好的。
Okay.
这是你的输入。现在你需要将其转化为一个判断:这是鸟吗?
That's your input. Now you've got to turn that into a decision. Is this a bird or not?
哇,这个判断...让我问你一个问题。你编程时用像素强度吗?因为在我看来,像素强度并不是判断是否为鸟的有效工具。判断是否为鸟的工具应该是:那些是羽毛吗?那是鸟喙吗?
Wow. So that decision so let me ask you a question then. Do you program in because strength of pixel doesn't strike me as a really useful tool in terms of figuring out if it's a bird. Figuring out if it's a bird seems like the tool would be, are those feathers? Is that a beak?
那是鸟冠吗?
Is that crest?
没错。就是这样。单靠像素本身确实无法告诉你它是不是鸟。
Yeah. Here it goes. So the pixels by themselves Yeah. Don't really tell you whether it's a bird.
好的。
Okay.
因为鸟有明亮的也有暗色的,有飞翔的也有栖息的,有近在眼前的鸵鸟也有远处的海鸥。它们都是鸟。那么接下来该怎么做?人们受大脑启发,想到的方法是:用一堆边缘检测器。因为我们都知道,在线条画中你也能很好地识别鸟类。
Because you can have birds that are bright and birds that are dark, and you can have birds flying and birds sitting down, and you can have an ostrich in your face and a seagull in the distance. They're all birds. Okay, so what do you do next? Well, sort of guided by the brain, what people did next was say, let's have a bunch of edge detectors. Because of course you can recognize birds quite well in line drawings.
对。
Right.
我们要做的是创建一些神经元,大量的神经元来检测微小的边缘片段。也就是图像中一侧较亮另一侧较暗的微小区域。
So what we're gonna do is we're gonna make some neurons, a whole bunch of them that detect little pieces of edge. That is little places in the image where it's bright on one side and darker on the other side.
对。
Right.
假设我们要检测一小段
So suppose we want to detect a little piece of
垂直边缘。这几乎是在创建一种原始的视觉形态。
vertical It's almost creating a, like, primitive form of vision.
这就是我们构建视觉系统的方式。是的。现在大脑和计算机都是这样运作的。
This is how we you make a vision system. Yes. This is how it's done in the brain and how it's done in computers now.
哇。明白了。
Wow. Okay.
如果你想检测图像中特定位置的一小段垂直边缘,假设你观察一列三个像素及其旁边另一列三个像素。
So if you wanted to detect a little piece of vertical edge in a particular place in the image, Let's suppose you look at a little column of three pixels and next to them another column of three pixels.
嗯。
Mhmm.
如果左边的亮而右边的暗,嗯。你想说这里有一条边缘。所以你必须问,我该如何构建一个能实现这一功能的神经元?
And if the ones on the left are bright and the ones on the right are dark Mhmm. You want to say yes there's an edge here. So you have to ask, how would I make a neuron that did that?
天啊。好吧。行吧。我要跳过了。好的。
Oh my god. Okay. Alright. I'm gonna jump ahead. Alright.
首先你要教会网络什么是视觉。你在教它,这些是图像。这是背景。这是形状。这是边缘。
So the first thing you do is you have to teach the the the network what vision is. So you're teaching it, these are images. This is background. This is form. This is edge.
这不是。这是明亮的。所以你在教它几乎如何去看。然后你的网络对吧。
This is not. This is bright. This is so you're teaching it almost how to see. And then your net Right.
在过去,人们会尝试输入大量规则来教它如何看,并解释什么是前景什么是背景。但真正相信神经网络的人说,不。不要加入所有这些规则。
In the in the old days, people would try and put in lots of rules to teach it how to see and explain to you what foreground was and what background was. Okay. But, the people who really believed in neural nets said, no. No. Don't put in all those rules.
让它仅从数据中学习所有这些规则。
Let it learn all those rules just from data.
它的学习方式是通过在开始识别边缘和其他特征时增强信号。
And the way it learns is by strengthening the pings once it it starts to recognize edges and things.
我们稍后会讲到那个部分。
We'll come to that in a minute.
我有点超前了。
I'm jumping ahead.
你确实太超前了。
You're jumping ahead.
好吧,好吧。
Alright. Alright.
让我们继续讨论这个边缘检测器的小部分。好的,第一层中有些神经元代表像素的亮度,下一层我们会有些小型边缘检测器。因此下一层可能有个神经元连接到左侧三个像素列和右侧三个像素列。
Let's carry on with this little bit of edge detector. Okay. So you have a in the first layer, you have the neurons that represent how bright the pixels are. And then in the next layer we're gonna have little bits of edge detector. And so you might have a neuron in the next layer that's connected to a column of three pixels on the left and a column of three pixels on the right.
现在如果你把左侧三个像素连接的强度调高——嗯,很大的正向连接
And now if you make the strengths of the connections to the three pixels on the left strong Mhmm. Big positive connections
对,因为
Right. Because
你让右侧三个像素的连接强度变成很大的负连接
you make the strengths of the connections to the three pixels on the right be big negative connections
因为
because
这样它们就不会激活
it's that they don't turn on
对。
Right.
当左侧像素和右侧像素亮度相同时,负连接会抵消正连接,什么都不会发生。但如果左侧像素亮而右侧像素暗,神经元会从左侧像素获得大量输入,因为它们是大的正连接。
Then when the pixels on the left and the pixels on the right are the same brightness as each other, the negative connections would cancel out the positive connections and nothing will happen. But if the pixels on the left are bright and the pixels on the right are dark, the neuron will get lots of input from the pixels on the left because they're big positive connections.
对。
Right.
它不会受到右侧像素的任何抑制,因为那些像素都处于关闭状态。
It won't get any inhibition from the pixels on the right because then that those pixels are all turned off.
对,对。
Right. Right.
所以它会发出‘乒’的信号。它会说,嘿,我找到我想要的了。我发现左边的三个像素很亮,而右边的三个像素不亮。嘿。
And so it'll go ping. It'll say, hey. I found what I wanted. I found that the three pixels on the left are bright and the three pixels on the right are not bright. Hey.
这就是我的专长。我在这里发现了一小段正向的边缘片段。
That's my thing. I found a little piece of positive little piece of edge here.
我就是那个人。我是边缘专家。我对边缘发出‘乒’的信号。
I'm that guy. I'm the edge guy. I ping on the edges.
对。而且那个信号正是针对那一段特定的边缘。
Right. And that pings on that particular piece of edge.
好的,好的。
Okay. Okay.
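The hand-wired detector just described, three big positive connections on the left column and three big negative ones on the right, can be sketched like this (an editor's toy; the threshold and brightness values are invented):

```python
# +1 vote per unit of brightness on the left column,
# -1 vote per unit of brightness on the right column.
def vertical_edge_pings(left_column, right_column, threshold=1.5):
    votes = sum(left_column) - sum(right_column)
    return votes >= threshold

bright, dark = 1.0, 0.0
# Bright left, dark right: the neuron finds its little piece of edge.
print(vertical_edge_pings([bright] * 3, [dark] * 3))    # True
# Equal brightness: negative votes cancel positive ones, no ping.
print(vertical_edge_pings([bright] * 3, [bright] * 3))  # False
```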
现在想象你有数不清的那些东西。
Now imagine you have like a gazillion of those.
光是三个信号就让我精疲力尽了。你居然有数不清的那些。
I'm already exhausted on the three pings. I you have a gazillion of those.
因为它们需要检测视网膜上任何位置的边缘碎片。哇。在图像的任何位置和任何方向上。每个方向都需要不同的检测器。
Because they have to detect little pieces of edge anywhere on your retina Wow. Anywhere in the image and at any orientation. You need different ones for each orientation.
没错。
Right.
实际上针对不同尺度也需要不同的检测器。可能存在非常模糊的大尺度边缘,也可能存在非常清晰的小尺度边缘。
And you actually have different ones for the scale. There might be an edge at a very big scale that's quite dim, and there might be little sharp edges at a very small scale.
没错。
Right.
随着你制造越来越多的边缘检测器,你对边缘的分辨能力会越来越强。可以看到更小的边缘,更精确地判断边缘方向,更好地检测模糊的大边缘。现在让我们进入下一层。现在我们有了边缘检测器。假设下一层的神经元在寻找一组几乎水平的边缘组合,几个几乎水平且相互对齐的边缘排成一列,在它们正上方还有几个几乎水平的边缘,但与第一组边缘形成尖角。
And as you make more and more edge detectors, you get better and better discrimination for edges. You can see smaller edges, you can see the orientation of edges more accurately, you can detect big vague edges better. So let's now go to the next layer. So now we've got our edge detectors. Now suppose that we had a neuron in the next layer that looked for a little combination of edges that is almost horizontal, several edges in a row that are almost horizontal and and line up with each other, and just slightly above those, several edges in a row that are again almost horizontal, but come down to form a point with the first sort of edges.
对。
Right.
所以你找到两个小小的边缘组合,形成了一种尖尖的东西。
So you find two little combinations of edges that make a sort of pointy thing.
好吧。你可是诺贝尔奖得主物理学家。我没想到那句话会以‘形成一种尖尖的东西’结尾。我以为会有个专业名称。不过我明白你的意思了。
Okay. So you're a Nobel Prize winning physicist. I did not expect that sentence to end with it makes kind of a pointy thing. I thought there'd be a name for that. But I get what I get what you're saying.
你现在正在分辨它的边界在哪里,你在某种程度上观察着不同的——这甚至是在你考虑颜色或其他因素之前。这纯粹就是,是否存在图像?边缘是什么?
You're you're now discerning where it ends, where it you're you're sort of looking at different and this is before you're even looking at color or anything else. This is literally just, is there an image? What are the edges?
边缘是什么?那些小小的边缘组合又是什么?所以我们现在要问的是,是否存在一种边缘组合能构成可能是喙的东西?
What are the edges? And what are the little combinations of edges? So we're now asking, is there a little combination of edges that makes something that might be a beak?
但你还不清楚喙是什么?
But you don't know what a beak is yet?
还不知道。是的,这个我们也需要学习。
Not yet. No. We need to learn that too. Yes.
没错。一旦你拥有了这个系统,几乎就像是在构建能模仿人类感官的系统。
Right. So once you once you have the system, it's almost like you're building systems that can mimic the human senses.
这正是我们在做的。
That's exactly what we're doing.
是的。比如视觉、听觉,当然不包括嗅觉。
Yes. So vision, ears, not smell, obviously.
不,他们现在正在研究这个。已经开始着手嗅觉了。
No. They're doing that now. They're starting on smell now.
哦,天哪。
Oh, for god's sakes.
他们现在已经实现了数字化嗅觉,可以通过网络传输气味。这简直
They've now got they've now got digital smell where you can transmit you can transmit smells over the web. It's just
太疯狂了。
insane.
气味打印机有200个组件。不是三种颜色,而是200个组件,它能在另一端合成气味,虽然还不算完美,但效果相当不错。
The printer for smells has 200 components. Instead of three colors, it's got 200 components, and it synthesizes the smell at the other end, and it's not quite perfect, but it's pretty good.
对。对。对。哇。这对我来说太不可思议了。
Right. Right. Right. Wow. So this is this is incredible to me.
好的。其实
Okay. So So actually
对此非常抱歉。我道歉。
so sorry about this. I apologize.
不。这样很自然。这样很好。你表现得非常出色,完美演绎了一个对此一无所知但充满合理好奇心的普通人。
No. This is good. This is perfect. Alright. You're doing a very good job of representing a sort of sensible curious person who doesn't know anything about this.
让我继续描述如何手工构建这个系统。
So let me finish describing how you build the system by hand.
好的。
Yes.
如果手动操作,我会从这些边缘检测器开始。我会说,对左侧这些像素建立强大的正向连接,对右侧像素建立强大的负向连接。
If I did it by hand, I'll start with these edge detectors. I'd say, make big strong positive connections from these pixels on the left and big strong negative connections to the pixels on the right.
对。
Right.
现在接收这些输入连接的神经元,将会检测到一小段垂直边缘。
And now the neuron that gets those incoming connections, that's gonna detect a little piece of vertical edge.
好的。
Okay.
然后在下一层,我会说,好吧,从三个这样倾斜的小边缘和三个那样倾斜的小边缘建立强大的正向连接,这可能是一个喙。
And then at the next layer I'd say, okay, make big strong positive connections from three little bits of edge sloping like this and three little bits of edge sloping like that, and this is a potential beak.
没错。
Right.
在同一层中,我还会从大致形成圆形的边缘组合建立强大的正向连接。哇,这可能是一只眼睛。
And in that same layer, I might also make big strong positive connections from a combination of edges that roughly form a circle. Wow. And that's a potential eye.
对。对。对。
Right. Right. Right.
在下一层,我有一个神经元负责检测可能的鸟喙和眼睛,如果它们的相对位置正确,它就会发出信号表示‘我很高兴’,因为这个神经元已经检测到了一个可能的鸟头。
Now in the next layer, I have a neuron that looks at possible beaks and looks at possible eyes and if they're in the right relative position, it says, hey, I'm happy. Because that neuron has detected a possible bird's head. And
那个家伙可能会触发。
that guy might ping.
同时,其他地方还会有其他神经元检测到小鸡脚或鸟类翅膀末端的羽毛等小图案。所以你会有很多这样的神经元。再往更高层,可能有一个神经元会说,嘿,如果我检测到了鸟头、鸡脚和翅膀末端,那很可能是一只鸟。所以这是一只鸟。
And that guy would ping. At the same time, there'll be other neurons elsewhere that have detected little patterns like a chicken's foot or the feathers at the end of the wing of a bird. And so you have a whole bunch of these guys. Now even higher up, you might have a neuron that says, hey look, if I've detected a bird's head and I've detected a chicken's foot and I've detected the end of a wing, it's probably a bird. So it's a bird.
现在你可以看到如何尝试手动连接所有这些。
So you can see now how you might try and wire all that up by hand.
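The whole hand-wired hierarchy, edges into beaks and eyes, beaks and eyes into heads, heads plus feet plus wing tips into "bird", can be caricatured like this (an editor's sketch with invented labels; the real thing wires up numeric detectors, not strings):

```python
# Each layer pings when the right combination of lower-level
# detections is present (relative-position checks omitted).
def possible_beak(detections):
    return "pointy_edges" in detections       # two edge runs meeting in a point

def possible_eye(detections):
    return "circle_of_edges" in detections    # edges roughly forming a circle

def possible_head(detections):
    return possible_beak(detections) and possible_eye(detections)

def probably_bird(detections):
    # Higher up still: head plus foot plus wing tip -> probably a bird.
    return (possible_head(detections)
            and "chicken_foot" in detections
            and "wing_tip" in detections)

print(probably_bird({"pointy_edges", "circle_of_edges",
                     "chicken_foot", "wing_tip"}))        # True
print(probably_bird({"circle_of_edges", "chicken_foot"}))  # False
```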
是的。这会花些时间。
Yes. And it would take some time.
那可能要花上永远的时间。可能要花上永远的时间。是的。好吧。假设你很懒。
It would take like forever. It would take like forever. Yes. Okay. So suppose you were lazy.
对,这才像话。好的。
Yes. Now you're talking. Okay.
你可以这样做:先构建这些神经元层,但不预设所有连接的强度。初始时用小的随机数值随便设定连接强度。输入一张鸟的图片,假设有两个输出端,一个显示‘是鸟’,另一个‘不是鸟’。当连接强度随机时,输入鸟的图片后,系统会显示50%是鸟,50%不是鸟。
What you could do is you could just make these layers of neurons without saying what the strengths of all the connections ought to be. You just start them off with small random numbers. Just put in any old strengths. And you put in a picture of a bird, and let's suppose it's got two outputs, one says bird and the other says not bird. With random connection strengths in there, what's going to happen is you put in a picture of a bird and it says 50% bird, 50% not bird.
换句话说,我一无所知。
In other words I haven't got a clue.
没错。
Right.
然后你输入一张非鸟类的图片,它也会显示50%是鸟,50%不是鸟。
And you put in a picture of a non bird and it says 50% bird, 50% non bird.
天啊。
Oh boy.
好,现在可以提问了。假设我调整其中某个连接强度,稍微增强一点,原本显示50%是鸟,现在会不会变成50.01%是鸟,49.99%不是鸟?如果图片确实是鸟,那这个调整就是有益的,系统表现会略微提升。
Okay. So now you can ask a question. Suppose I were to take one of those connection strengths and I were to change it just a little bit, make it maybe a little bit stronger. Instead of saying 50% bird, would it say 50.01% bird and 49.99% non bird? And if it was a bird, then that's a good change to make. You've made it work slightly better.
这是哪一年?什么时候开始的?
What year was this? When did this start?
哦,没错。这只是个想法。这肯定行不通,但请耐心听我说。
Oh, exactly. So this is just an idea. This would never work, but bear with me.
好吧。
Alright.
这就像那些辩护律师突然跑题十万八千里,但
This is like one of those defense lawyers who goes off on a huge digression, but it's
最后不会有好结果。这这其实很有帮助。
not gonna be good in the end. This is this is helpful.
好的。
Okay.
所以这就是十年后会把我们全害死的东西。
So this is the thing that's gonna kill us all in ten years.
是的。当我说‘是的’时,我指的并非这个具体事物,而是一种进步。但事情就是这样开始的。不一定会毁灭我们所有人,但也许。
Yep. When I say yep, I mean not this particular thing, but an advancement on it. But this is how it started. Not necessarily kill us all, but maybe.
对,对,对。这就是奥本海默的思路,好吧。你有一个物体,它由更小的物体组成,这就像是整个事情最初的部分。
Right. Right. Right. This is Oppenheimer going, okay. So you've got an object and that is made up of smaller objects and like this is the very early part of this.
好的。假设你有世界上所有的时间。嗯。你可以做的是,拿着这个分层的神经网络,从随机的连接强度开始,然后展示一只鸟给它看,它可能只会说50%是鸟,50%不是鸟,然后你可以选择一个连接强度
Okay. So suppose you had all the time in the world. Mhmm. What you could do is you could take this layered neural network and you could start with random connection strengths and you could then show it a bird and it just say 50% bird 50% non bird and you could pick one of the connection strengths
对。
Right.
然后你可以说,如果我稍微增加一点,会有帮助吗?
And you could say, if I increase it a little bit, does it help?
对。
Right.
帮助可能不大,但总归有点帮助吧?
It won't help much, but does it help at all?
好的。嗯,给我50.1、50.2这样的数值。行。
Right. Well, gives me a 50.1, 50.2, that kind of thing. Okay.
如果有帮助的话,就增加那个数值。
If it helps, make that increase.
好的。
Okay.
然后你再绕一圈重新操作。这次我们可能选一个非鸟类样本,如果我们增强那个连接后,系统显示它更不像鸟类而更像非鸟类,我们就说,这个增强效果不错,就采用这个调整。
And then you go around and do it again. Maybe this time we choose a non-bird, and if we increase that connection and it says it's less likely to be a bird and more likely to be a non-bird, we say, okay, that's a good increase, let's do that one.
对对对。
Right right right.
现在有个问题:连接数量高达万亿级别。
Now here's a problem. There's a trillion connections.
是啊。
Yeah.
好的。每个连接都需要多次更改。
Okay. And each connection has to be changed many times.
那是手动的吗?
And is that manual?
用这种方法的话,确实是手动的。不仅如此,你不能仅凭一个例子就进行调整,因为有时改变会影响连接强度。稍微增强可能对这个例子有帮助,但会让其他例子变得更糟。
Well, in this way of doing it, it would be manual. And not just that, but you can't just do it on the basis of one example, because sometimes a change that helps won't generalize: if you increase a connection strength a bit, it'll help with this example, but it'll make other examples worse.
哦,天哪。
Oh, dear god.
所以你需要给它一整批例子
So you have to give it a whole batch of examples
对。
Right.
然后看总体上是否有所帮助。
And see if on average it helps.
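The "dumb" method described here, nudging one connection at a time and keeping the nudge only if the average loss over a whole batch improves, can be sketched in a few lines. This is a toy illustration with a made-up two-weight model and hand-picked data, not anything from the actual systems discussed:

```python
import random

random.seed(0)

# A tiny one-weight-per-feature "bird detector": score = w . x.
# Training data: 2-feature examples, target +1 (bird) / -1 (not bird).
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 1.0], -1), ([0.2, 0.8], -1)]

def batch_loss(w):
    # Squared error averaged over the whole batch of examples.
    return sum((w[0]*x[0] + w[1]*x[1] - y) ** 2 for x, y in data) / len(data)

w = [random.uniform(-0.01, 0.01), random.uniform(-0.01, 0.01)]
eps = 0.01

# The "dumb" method: nudge one connection, keep the nudge only if the
# average loss over the batch goes down. One weight per trial.
for trial in range(2000):
    i = trial % len(w)
    before = batch_loss(w)
    w[i] += eps
    if batch_loss(w) >= before:      # didn't help: try the other direction
        w[i] -= 2 * eps
        if batch_loss(w) >= before:  # neither direction helped: restore
            w[i] += eps

print(round(batch_loss(w), 3))
```

Even on this two-weight toy it takes thousands of trials; with a trillion connections, each needing many such batch evaluations, the method is hopeless, which is the point being made.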
这就是你创建这些大型语言模型的方式
And that's how you create these large language models?
如果我们用这种极其愚蠢的方式来创建,比如说这个视觉系统,是的。我们需要进行数万亿次实验,每次实验都需要给它一批示例,然后观察调整某个连接强度是有益还是有害。
If we did it this really dumb way to create, let's say, this vision system for now. Yes. We'd have to do trillions of experiments, and each experiment would involve giving it a whole batch of examples and seeing if changing one connection strength helps or hurts.
天啊。而且永远完成不了。会是无限的。会是无限的。
Oh god. And it would never be done. Would be infinite. It would be infinite.
好的。现在假设你找到了方法
Okay. Now suppose that you figured out how to
进行
do a
一种计算,它能同时告诉你网络中每个连接强度的情况。以这个特定例子来说,假设你给它一只鸟的图像,它说有50%概率是鸟。现在对于所有上万亿个连接强度,我们可以同时判断是应该稍微增强它们以提升效果,还是稍微减弱它们以提升效果。
computation that would tell you, for every connection strength in the network, all at the same time, what to do. For this particular example, let's suppose you give it a bird and it says 50% bird. Now for every single connection strength, all trillion of these connection strengths, we can figure out at the same time whether you should increase them a little bit to help or decrease them a little bit to help.
然后我的意思是
And then I mean
然后你同时调整它们中的一万亿个参数。
then you change a trillion of them at the same time.
我能说一个我憋了很久的词吗?Eureka(我发现了)。
Can I can I say a word that I've been dying to say this whole time? Eureka.
Eureka!Eureka!这就是那个计算,对普通人来说似乎很复杂。是的,如果你学过微积分,就会发现它相当直观,而且这个计算是由许多不同的人发明的。
Eureka. Eureka. Now, that computation seems complicated to normal people. Yes. If you've done calculus, it's fairly straightforward, and many different people invented this computation.
没错。
Right.
这叫做反向传播。现在你可以同时调整一万亿个参数,速度会快上一万亿倍。
It's called backpropagation. So now you can change all trillion at the same time, and you'll go a trillion times faster.
天啊。就在那一刻,它从理论变成了现实。
Oh my god. And and that's the moment that it goes from theory to practicality.
那一刻你会想:Eureka,我们解决了这个问题。我们知道如何构建智能系统了。对我们来说,那是1986年。哇。但当它没奏效时,我们非常失望。
That is the moment when you think, eureka, we've solved it. We know how to make smart systems. For us, that was 1986. Wow. And we were very disappointed when it didn't work.
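Backpropagation, the computation named here, finds the desired change for every connection in one backward pass instead of one experiment per weight. Below is a minimal sketch on a made-up two-layer network with invented weights, checked against the slow nudge-one-weight estimate from before:

```python
import math

# A tiny two-layer net: x -> hidden (sigmoid) -> output p("bird").
# Backpropagation gives the gradient for every weight in one backward
# pass; we verify it against a finite-difference nudge on one weight.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# weights: w1 (2x2, input -> hidden), w2 (2, hidden -> output)
w1 = [[0.3, -0.2], [0.1, 0.4]]
w2 = [0.5, -0.3]
x, y = [1.0, 0.2], 1.0   # one bird example, target probability 1

def forward(w1, w2, x):
    h = [sigmoid(w1[j][0]*x[0] + w1[j][1]*x[1]) for j in range(2)]
    p = sigmoid(w2[0]*h[0] + w2[1]*h[1])
    return h, p

def loss(w1, w2):
    _, p = forward(w1, w2, x)
    return 0.5 * (p - y) ** 2

# Backward pass: chain rule, computed for all weights simultaneously.
h, p = forward(w1, w2, x)
dp = (p - y) * p * (1 - p)                    # dLoss/d(output pre-activation)
grad_w2 = [dp * h[j] for j in range(2)]
grad_w1 = [[dp * w2[j] * h[j] * (1 - h[j]) * x[k] for k in range(2)]
           for j in range(2)]

# Finite-difference check on a single hidden weight.
eps = 1e-5
w1[0][0] += eps; up = loss(w1, w2)
w1[0][0] -= 2 * eps; down = loss(w1, w2)
w1[0][0] += eps
numeric = (up - down) / (2 * eps)
print(abs(numeric - grad_w1[0][0]) < 1e-8)
```

The backward pass costs about as much as one forward pass, yet yields every gradient at once, which is the trillion-fold speedup being described.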
每天,那些最喧嚣、最具煽动性的言论占据着我们的注意力,而整体图景却因此迷失。这全是噪音而无光明。Ground News将故事的多方观点汇集一处,让你看清来龙去脉。他们提供光明。
Every day, the loudest, the most inflammatory takes dominate our attention. And the bigger picture gets lost. It's all just noise and no light. Ground news puts all sides of the story in one place so you can see the context. They provide the light.
它开启了超越噪音的对话。他们整合并组织信息,只为帮助读者自主决策。Ground News为用户提供可轻松对比标题的报道,或是汇总不同光谱媒体间报道差异的简报。这是绝佳资源。访问groundnews.com/stewart订阅无限制访问的Vantage会员,可享40%优惠。
It starts conversations beyond the noise. They aggregate and organize information just to help readers make their own decisions. Ground News provides users reports that easily compare headlines or reports that give a summarized breakdown of the specific differences in reporting across all the spectrums. It's a great resource. Go to groundnews.com/stewart and subscribe for 40% off the unlimited access Vantage subscription.
月费降至约5美元。请访问groundnews.com/stewart或扫描屏幕上的二维码。你在那个房间待了十年。你一直在向它展示鸟类图像。你持续增强参数强度。
Brings the price down to about $5 a month. It's groundnews.com/stewart or scan the QR code on the screen. You've been in that room for ten years. You've been showing it birds. You've been increasing the strengths.
你曾有过顿悟时刻,于是你拨动开关开始——
You had your eureka moment, and you flipped the switch and went
不。问题在于...听着,关键问题是:这种方法只有在拥有海量数据和超强算力时才能奏效,其效果远超其他任何视觉处理方案。
No. Here's the problem. Yeah. Here's the problem. It only works really impressively well, much better than any other way of trying to do vision, if you have a lot of data and a huge amount of computation.
即便你的方法比笨办法快万亿倍,这仍将是项浩大工程。
Even though you're a trillion times faster than the dumb method, it's still gonna be a lot of work.
好吧。所以你现在需要扩充数据集,同时提升计算能力。
Okay. So now you've got to increase your the data, and you've got to increase your computation power.
是的。而且计算能力需要提升大约十亿倍,与我们当时的情况相比,数据量也需要类似的增长倍数。
Yes. And you've got to increase the computation power by a factor of about a billion compared with where we were, and you've got to increase the data by a similar factor.
在1986年你意识到这一点时,你们离目标还差着十亿倍的距离。
You are still, in 1986 when you figured this out, you are a billion times not there yet.
差不多是这样。是的。
Something like that. Yes.
要实现这个目标需要改变什么?芯片的性能?具体是哪些方面的改变?
What would have to change to get you there? The the power of the the chip? The what what what changes?
好吧。可能更接近百万倍的提升。
Okay. It may be more like a factor of a million.
好的。
Okay.
我不想在这里夸大其词。
I don't want to exaggerate here.
不,因为我会抓住你。如果你试图夸大其词,我会立刻察觉。
No. Because I'll catch you. If you try and exaggerate, I'll be on it.
一百万确实是个很大的数字。没错。所以这里需要改变的是:晶体管的面积必须缩小,才能在芯片上集成更多晶体管。让我们看看从1986年开始...
A million's quite a lot. Yes. So here's what has to change. The area of a transistor has to get smaller so you can pack more of them on a chip. So between 1986 let's see.
不,是从1972年我开始接触这些东西的时候算起
No. Between 1972 when I started on this stuff
好的。
Okay.
而现在,晶体管的面积已经缩小了一百万倍。
And now the area of a transistor has got smaller by a factor of a million.
哇。这么说来,这大概是我记得我父亲在RCA实验室工作的那个年代?我大概八岁的时候,他带回家一个计算器,那个计算器有桌子那么大。它只能做加减乘除。到了1980年,计算器已经可以做到钢笔那么小了。
Wow. So, can I relate this to something? That is around the era when I remember my father worked at RCA Labs. And when I was, like, eight years old, he brought home a calculator, and the calculator was the size of a desk. And it added and subtracted and multiplied. By 1980, you could get a calculator on a pen.
这是基于那个原理吗
And is that based on that
是的。大规模集成电路上使用的小型晶体管。没错。
Yeah. The transistors on large scale integration using small transistors. Yeah.
好的。明白了。明白了。
Okay. Alright. Alright.
因此晶体管的面积缩小了一百万倍。
So the the area of a transistor decreased by a factor of a million.
好的。
Okay.
而可用数据量的增长远超于此,因为我们有了互联网,实现了海量数据的数字化。
And the amount of data available increased by much more than that because we got the web and we got digitization of massive amounts of data.
哦,所以它们是相辅相成的。随着芯片性能提升,数据量变得更为庞大,你们能向模型输入更多信息,同时模型的处理速度和能力也在提高。
Oh, so they worked hand in hand. So as the chips got better, the data got more vast and you were able to feed more information into the model while it was able to increase its processing speed and abilities.
没错。让我总结下现状:你搭建了这个用于识别鸟类的神经网络,赋予它多层神经元,但不预设连接规则,而是从小的随机数值开始。
Yeah. So let me summarize what we now have. Yes. You set up this neural network for detecting birds and you give it lots of layers of neurons, but you don't tell it the connection strengths. You say start with small random numbers.
现在你需要做的就是给它展示大量鸟类的图片和大量非鸟类的图片,告诉它正确答案,让它知道自己所做的与应该做的之间的差异,将这个差异反向传播到网络中,让它能判断每个连接强度是应该增强还是减弱,然后只需静候一个月。一个月后,如果你深入观察,你会发现:是的,它已经构建了小型的边缘检测器,构建了类似喙部检测器和眼睛检测器的东西,还会构建一些难以辨识但能检测喙与眼睛等组合特征的机制,经过几层处理后,它就能非常准确地判断是否为鸟类。所有这些结构都是它从数据中自行构建的。
And now all you have to do is show it lots of images of birds and lots of images that are not birds, tell it the right answer so it knows the discrepancy between what it did and what it should have done, send that discrepancy backwards through the network so it can figure out, for every connection strength, whether it should increase it or decrease it, and then just sit and wait for a month. And at the end of the month, if you look inside, here's what you'll discover. Yeah. It has constructed little edge detectors, and it has constructed things like little beak detectors and little eye detectors, and it will have constructed things where it's very hard to see what they are, but they're looking for little combinations of things like beaks and eyes. And then after a few layers, it'll be very good at telling you whether it's a bird or not. It made all that stuff up from the data.
天啊。我能再说一遍吗?我发现了!
Oh my god. Can I say this again? Eureka.
我发现了!我们意识到不需要手动编写所有这些边缘检测器、喙部检测器、眼睛检测器和鸡爪检测器——计算机视觉领域多年来一直这样做,但效果始终不理想。我们可以让系统自行学习这些。我们只需教会它如何学习。
Eureka. We figured out we don't need to hand wire in all these little edge detectors and beak detectors and eye detectors and chicken's foot detectors, that's what computer vision did for many many years and it never worked that well. We can get the system just to learn all that. All we need to do is tell it how to learn.
那是在1980年
And that is in 1980
1986年,我们找到了实现方法。
In 1986, we figured out how to do that.
对。
Right.
当时人们非常怀疑,因为我们还做不出什么令人惊艳的成果
People were very skeptical because we couldn't do anything very impressive
对。
Right.
因为我们没有足够的数据。
Because we didn't have enough data.
数据或流程。这简直难以置信,这种方式——我实在感激不尽,感谢你解释清楚这一切。这让我明白了很多,你知道,我太习惯模拟世界的运作方式,比如汽车的工作原理,但对数字世界的运行机制却一无所知。这是我听过最清晰的解释,真的非常感谢。现在我终于理解这是如何实现的了。
The data or the process. This is this is incredible, the way and I I can't thank you enough for explaining what that is. It it makes everything you know, I'm so accustomed to an analog world of, you know, how things work and, like, the way that cars work, but I have no idea how our digital world functions. And that is the clearest explanation for me that I have ever gotten, and I cannot thank you enough. It it makes me understand now how this was achieved.
顺便说一下,杰弗里所讨论的是那个的原始版本。最令我惊叹的是它的每一次升级,那种巨大的改进。是的,就是那种。
And by the way, what Geoffrey is talking about is the primitive version of that. What's so incredible to me is each upgrade of that, the vastness of the improvement. Yeah. Of that.
让我再说一件事。
So let me let me just say one more thing.
请讲。
Please.
我不想显得太教授范儿,但是
I don't wanna be too professor like, but
不。不。不。不。不。
No. No. No. No. No.
但是,这如何应用于大型语言模型呢?
But, how does this apply to large language models?
是的。
Yes.
好的,我来解释大型语言模型的工作原理。你有一些处于上下文中的词语。假设我给你一个句子的前几个词。
Well, here's how it works for large language models. You have some words in a context. So let's suppose I give you the first few words of a sentence.
对。
Right.
神经网络要做的是学会将每个词转换为一大组特征,这些特征就是活跃的神经元在发出'砰'的信号。比如我给你'星期二'这个词,会有一些神经元发出'砰'的信号。如果给你'星期三'这个词,会有一组非常相似但略有不同的神经元发出'砰'的信号,因为它们的意思非常相近。当你把所有上下文中的词都转换为代表其含义的神经元集群后,这些神经元会相互影响。这意味着下一层的神经元会观察这些神经元的组合,就像我们通过观察边缘组合来识别鸟喙一样。
What the neural net's gonna do is learn to convert each of those words into a big set of features which are just active neurons neurons going ping. So if I give you the word Tuesday there'll be some neurons going ping. If I give you the word Wednesday it'll be a very similar set of neurons, slightly different but a very similar set of neurons going ping because they mean very similar things. Now after you've converted all the words in the context into neurons going ping into whole bunches that capture their meaning, these neurons all interact with each other. What that means is neurons in the next layer look at combinations of these neurons just as we looked at combinations of edges to find a beak.
最终你就能激活代表句子中下一个词特征的神经元。
And eventually you can activate neurons that represent the features of the next word in the sentence.
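The word-to-features picture given here ("neurons going ping", with Tuesday and Wednesday getting similar patterns) can be sketched as a toy next-word predictor. The tiny vocabulary and feature vectors below are invented by hand for illustration; real models learn them, and combine context through many interacting layers rather than a simple average:

```python
import math

# Each word becomes a pattern of feature activations; similar words
# (tuesday, wednesday) get nearly the same pattern. Context features are
# combined, scored against every vocabulary word, and softmax turns the
# scores into probabilities for the next word.
embed = {
    "on":        [0.1, 0.9, 0.0],
    "tuesday":   [0.9, 0.1, 0.8],
    "wednesday": [0.85, 0.15, 0.82],  # almost the same pings as "tuesday"
    "we":        [0.2, 0.3, 0.1],
    "meet":      [0.5, 0.5, 0.9],
}

def predict_next(context):
    # Combine the context by averaging its feature patterns (a crude
    # stand-in for the layers of interacting neurons described above).
    ctx = [sum(embed[w][k] for w in context) / len(context) for k in range(3)]
    scores = {w: sum(ctx[k] * v[k] for k in range(3)) for w, v in embed.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

probs = predict_next(["on", "tuesday", "we"])
print(max(probs, key=probs.get))
```

Because "tuesday" and "wednesday" share nearly identical feature patterns, the model assigns them nearly identical probabilities, which is the point about meaning living in the pattern of pings.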
它会预判。
It will anticipate.
它能预判。它能预测下一个词。这就是你训练它的方式
It can anticipate. It can predict the next word. So the way you train it
这就是为什么我的手机会那样?它总以为我接下来要说某个词,而我总是想让它别这样。是啊,因为很多时候它都猜错了。
Is that why my phone does that? It always thinks I'm about to say this next word, you know, and I'm always like, stop doing that. Yeah. Because a lot of times it's wrong.
它可能用的是神经网络。
It's probably using neural nets to do it.
对,没错。
Yes. Right.
当然,你不可能做到完美。
And, of course, you can't be perfect at that.
所以现在要整合起来,你几乎已经教会它如何‘看’了。
So this is so now to put it together, you've taught it almost how to see.
你可以像教它预测下一个词那样,教会它如何识别事物。
You can teach it to see it in the same way you can teach it how to predict the next word.
没错。它先是看到并识别出‘这是字母a’,然后开始辨认字母,接着你教它单词,再理解这些词的含义,最后是上下文——这一切都是通过输入我们之前的文字,反向传播我们已有的书写和说话内容来实现的。是的,它正在全面学习。
Right. So it sees, it goes, that's the letter a. Now I'm starting to recognize letters, then you're teaching it words, and then what those words mean, and then the context, and it's all being done by feeding it our previous words, by back propagating all the writing and speaking that we've done already. Yes. It's looking over.
你选取一些我们生成的文档
You take some document that we produced
是的。
Yes.
你提供上下文,即截止当前的所有词语
You give it the context, which is all the words up to this point
是的。
Yes.
然后要求它预测下一个词,再观察它给出正确答案的概率。
And you ask it to predict the next word. And then you look at the probability it gives to the correct answer.
对。
Right.
你说希望这个概率更大。我希望你给出正确答案的概率更高。
And you say I want that probability to be bigger. I want you to have more probability of making the correct answer.
所以它并不理解。这仅仅是个统计练习。
So it doesn't understand it. This is merely a statistical exercise.
我们稍后会回到这点。你计算模型对下一个词预测概率与正确答案之间的差异,然后通过这个网络反向传播,调整所有连接权重,这样下次遇到相同上下文时,它更可能给出正确答案。你刚才说的正是很多人常提的观点——这不算真正的理解。
We'll come back to that. You take the discrepancy between the probability it gives for the next word and the correct answer. Yeah. And you backpropagate that through this network, and it'll change all the connection strengths, so next time it sees that lead-in, it'll be more likely to give the right answer. Now, you just said something that many people say. This isn't understanding.
这只是个统计把戏。
This is just a statistical trick.
是的。
Yes.
比如乔姆斯基就这么认为。
That's what Chomsky says for example.
是的。乔姆斯基和我,我们总是互相打断对方的句子。
Yes. Chomsky and I, we're always stepping on each other's sentences.
对。让我来问你这个问题。那么,你是怎么决定接下来要说什么词的?
Yeah. So let me ask you the question. Well, how do you decide what word to say next?
我?
Me?
你。
You.
这很有趣。我很高兴你提到这个。我的做法是
It's interesting. I'm glad you brought this up. So what I do
你说了一些词,而我甚至会说另一个词。
is You said some words, and I even say another word.
我听着,寻找然后尝试预测...不,我完全不知道我是怎么做到的。说实话,我真希望我知道。要是我知道如何阻止接下来要说的一些话,就能避免很多尴尬。要是有个更好的预测器,天哪,我能省去不少麻烦。
I listen, look for, and then I try and predict... no. I have no idea how I do that. I honestly wish I knew. It would save me a great deal of embarrassment if I knew how to stop some of the things that I'm saying that come out next. If I had a better predictor, boy, I could save myself quite a bit of trouble.
所以你的做法与这些大型语言模型基本相同。对吧。你有迄今为止说过的词语,这些词语通过一系列活跃特征来表示。词语符号转化为大量特征激活的模式,神经元发出‘叮’的信号。
So the way you do it is pretty much the same as the way these large language models do it. Right. You have the words you've said so far. Those words are represented by sets of active features. So the word symbols get turned into big patterns of activation of features, neurons going ping.
不同的‘叮’声,不同的强度。
Different pings, different strengths.
这些神经元相互影响,激活某些发出‘叮’声的神经元,这些神经元代表下一个词的含义或可能含义,然后你从中挑选一个符合这些特征的词。这就是大型语言模型生成文本的方式,也是你的做法。它们和我们非常相似。这一切
And these neurons interact with each other to activate some neurons that go ping that are representing the meaning of the next word, or possible meanings of the next word, and from those you kind of pick a word that fits in with those features. That's how the large language models generate text, and that's how you do it too. They're very like us.
可以说它们只是……但对我自己而言,我拥有一种人性的理解。比如,假设是善意的谎言。我和某人在一起,他们问我一个问题。我心里知道该说什么,但也会想,哦,那样说可能粗鲁,或者会冒犯对方。
It's all very well to say that they're just... but I ascribe to myself a humanity of understanding. For instance, let's say the little white lie. I'm with somebody, and they ask me a question. And in my mind, I know what to say. But then I also think, oh, but saying that might be coarse, or it might be rude, or I might offend this person.
所以我在选择接下来要说的话时也在做情感决策。这不只是一个客观过程,其中还包含主观判断。
So I'm also, though, making emotional decisions on what the next words I say are as well. It's not just a objective process. There's a subjective process within that.
这一切都是由你大脑中相互作用的神经元完成的。
All of that is going on by neurons interacting in your brain.
全都是神经信号,就连我归因于道德准则或情商的事物,本质上仍是神经信号的传递。
It's all pings and it's all strength of even the things that I ascribe to a moral code or an emotional intelligence are still pings.
它们依然是神经信号。你需要明白,那些自动、快速、无需费力完成的行为,与那些需要努力、缓慢、有意识且深思熟虑的行为之间存在区别。
They're still pings. And you need to understand there's a difference between what you do kind of automatically and rapidly and without effort and what you do with effort and slower and consciously and deliberatively.
对。所以你是说这些可以被构建进这些模型中?
Right. So And you're saying that can be built into these models
但这同样可以通过神经信号实现。这些神经网络能做到。哦,哇。
But that can also be done with pings. That can be done by these neural nets. Oh, wow.
那么这是否暗示,只要有足够的数据和算力,它们的大脑就能与我们的功能完全相同?它们现在达到那个水平了吗?会达到吗?因为它们目前处理能力应该还落后于我们吧?
But is the suggestion, then, that with enough data and enough processing power, their brains can function identically to ours? Are they at that point? Will they get to that point? Will they be able to? Because I'm assuming we're still ahead, processing-wise.
好吧。它们并不完全像我们,但关键在于它们比传统计算机软件更接近人类。传统计算机软件...
Okay. They're not exactly like us, but the point is they're much more like us than standard computer software is like us. Standard computer software
对。
Right.
有人编入了一堆规则,只要遵循这些规则,它就能执行
Someone programmed in a bunch of rules, and if it follows the rules, it does what
没错。所以你的意思是这就是
That's right. So you're saying this is the
这完全是另一回事。
This is just a different kettle of fish altogether.
对。
Right.
对。更像我们人类。
Right. Much more like us.
现在当你进行这项工作并沉浸其中时,我能想象那种兴奋感——尽管这是一个漫长的过程,但你目睹着这些改进逐渐实现。这一定极其充实且有趣。而且你看着它爆发式发展成这种人工智能、生成式AI等等。在这个过程中,你会在哪个节点停下来反思说‘等等’?
Now as you're doing this and you're in it, and I imagine the excitement is even though it's occurring over a long period of time, you're seeing these improvements occur over that time, And it must be incredibly fulfilling and interesting. And and you're watching it explode into this sort of artificial intelligence and generative AI and all these different things. At what point during this process do you step back and go, wait a second.
好吧。我做得太晚了,应该早点行动的。我本该更早意识到这点,但我当时太沉迷于让这些东西运作起来,还以为它们要花上很久很久才能达到我们的水平。我们本有大把时间担心它们会不会试图接管控制权之类的问题。
Okay. So I did it too late. I should have done it earlier. I should have been more aware earlier, but I was so entranced with, making these things work and I thought it's going to be a long long time before they work as well as us. We'll have plenty of time to worry about what if they try and take over and stuff like that.
在2023年GPT问世后,但更早之前也见识过谷歌类似的聊天机器人。
At the 2023 after GPT had come out but also seeing similar chatbots at Google before that.
对。
Right.
由于我正致力于让这些系统模拟人类思维,我意识到在数字计算机上运行的神经网络就是一种比我们更优越的计算形式。我来告诉你它们为何更优越。
And because of some work I was doing on trying to make these things analog, I realized that neural nets running on digital computers are just a better form of computation than us. And I'll tell you why they're better.
嗯,为什么呢?
Yeah. Why?
因为它们更擅长共享。
Because they can share better.
它们彼此之间能更好地共享。
They can share with each other better.
是的。所以如果我复制多个相同的神经网络,让它们运行在不同的计算机上
Yes. So if I make many copies of the same neural net and they run on different computers
嗯。
Mhmm.
每个网络可以查看互联网的不同部分。比如我有一千个副本,它们各自查看互联网的不同片段。每个副本都在运行反向传播算法,根据刚看到的数据思考如何调整连接权重。由于它们最初是相同的副本,嗯。它们可以互相通信,说:我们是否都按照大家期望的平均值来调整连接权重?
Each one can look at a different bit of the Internet. So I've got a thousand copies they're all looking at different bits of the internet. Each copy is running this back propagation algorithm and figuring out given the data I just saw how would I like to change my connection strengths? Now because they started off as identical copies Mhmm. They can then all communicate with each other and say, how about we all change our connection strengths by the average of what everybody wants?
但如果它们是一起训练的,难道不会得出相同的答案吗?
But if they were all trained together, wouldn't they come up with the same answer?
是的。它们是在看不同的答案吗?它们看的是不同的数据。哦。如果是相同数据,它们会给出相同答案。
Yes. Are they looking at different answers? They're looking at different data. Oh. On the same data, they would give the same answer.
如果它们查看不同数据,就会对如何调整连接权重以吸收这些数据产生不同想法。
If they look at different data, they have different ideas about how they'd like to change their connection strengths to absorb that data.
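The sharing scheme described here, identical copies reading different data and then agreeing on the average of their desired weight changes, can be sketched with a toy one-weight model. The data and numbers below are invented for illustration:

```python
# Each copy sees a different slice of the data, computes its own desired
# weight change (gradient), and all copies apply the average. Because they
# start identical and apply the same average, they stay identical,
# effectively learning from everybody's data at once.

def grad(w, batch):
    # Gradient of mean squared error for the prediction w * x on (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

w_copy1 = w_copy2 = 0.0              # two identical starting copies
batch1 = [(1.0, 2.0), (2.0, 4.0)]    # copy 1's slice of the data (y = 2x)
batch2 = [(3.0, 6.0), (4.0, 8.0)]    # copy 2's different slice (also y = 2x)

lr = 0.02
for _ in range(200):
    g1 = grad(w_copy1, batch1)       # each copy: "how would I like to change?"
    g2 = grad(w_copy2, batch2)
    avg = (g1 + g2) / 2              # sharing: agree on the average change
    w_copy1 -= lr * avg
    w_copy2 -= lr * avg              # both apply it, so they remain identical

print(round(w_copy1, 3), w_copy1 == w_copy2)
```

Both copies converge to the same weight (here, the true slope 2) even though neither ever saw the other's data, which is the sharing advantage over biological brains.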
对。但它们也在生成数据吗?这样它们此刻查看的内容是否相同,关键在于辨别能力。让这些系统更好地辨别、理解,做到这一切。但这背后还有另一层含义,那就是迭代性。
Right. But are they also creating data? So at this point, it's all about discernment. Getting these things to discern better, to understand better, to do all that. But there's another layer to that, which is iterative.
是的。一旦你擅长辨别力,你就掌握了关键
Yes. Once you're good at once you're good at discernment
没错。
That's right.
你可以生成。
You can generate.
对。
Right.
我省略了很多细节,但基本上,是的,你可以生成。
And I'm glossing over a lot of details there, but, basically, yes, you can generate.
你可以开始针对非机械记忆的问题生成答案,基于这些内容进行思考性回应。是谁在决定是否在这个迭代或生成层面加强连接时给予多巴胺奖励?当它创造出不存在的事物时,如何获得反馈?
You can begin to generate answers to things that are not rote, answers that are thoughtful based on those things. Who is giving it the dopamine hit about whether or not to strengthen connections at this iterative or generative level? How is it getting feedback when it's creating something that does not exist?
好的。对于这些语言模型来说,大部分学习过程在于如何预测下一个词。
Okay. So most of the learning takes place in figuring out how to predict the next word for one of these language models.
对。
Right.
那才是学习的主要部分。
That's where the bulk of the learning is.
好的。
Okay.
当它学会如何操作后,你就能让它生成内容,但它可能会生成令人不适或带有性暗示的内容。
After it's figured out how to do that, you can get it to generate stuff and it may generate stuff that's, unpleasant or that's sexually suggestive.
对。或者干脆是错误的。是的。对。幻觉。
Right. Or just wrong. Yeah. Right. Hallucinations.
是的。
Yeah.
没错。现在你让一群人评估它生成的内容,给出否定或肯定的反馈——这就是多巴胺刺激。
Yeah. Now you get a bunch of people to look at what it generates and say no, bad, that and or yeah, good, that's the dopamine hit.
对。
Right.
这就是所谓的人类强化学习,用来稍微塑造它的行为。就像你训练一只狗,塑造它的行为让它表现良好。
And that's called human reinforcement learning and that's what's used to sort of shape it a bit. Just like you take a dog and you shape its behavior so it behaves nicely.
那么让我实际地问你,比如当埃隆·马斯克创造他的Grok时,对吧?Grok是个人工智能,他对它说,你太‘觉醒’了。你在建立我认为过于‘觉醒’的联系和反应,无论我如何定义这个词。
So let me ask you this in a practical sense. So, like, when Elon Musk creates his Grok. Right? And Grok is this AI, and he says to it, you're too woke. And so you're making connections and pings that I think are too woke, whatever I have decided that that is.
所以我要输入差异,让你获得不同的多巴胺刺激,把你变成机甲希特勒或他想要的样子。这其中有多少仍受操作者的控制?
So I am going to input differences so that you get different dopamine hits, and I turn you into MechaHitler or whatever it was that he turned it into. How much of this is still in the control of the operators?
你强化的内容是由操作者控制的。操作者会说,如果它用了某些奇怪的代词,就说不好。
That's what you reinforce is in the control of the operators. So the the operators are saying, if it uses some funny pronoun, say bad.
好的,好的。如果它说‘他们’之类的,你得削弱那种联系。
Okay. Okay. If it says they them Yeah. You have to weaken that connection
对。
Yeah.
不要加强
Not strengthen
那种联系。别那么做。
that connection. Don't do that.
别那么做。好的。
Don't do that. Okay.
要学会避免那样做。
Learn not to do that.
对。所以它仍然受操作者的随意支配?
Right. So it is still at the whim of its operator?
就那种塑造而言。问题在于
In terms of that shaping. The problem is
对。
Right.
这种塑造相当表面化,但后来其他人采用相同模型时很容易就能覆盖它
The shaping is fairly superficial, but it can easily be overcome by somebody else taking the same model later
对。
Right.
并以不同方式塑造它。
And shaping it differently.
所以不同模型会...因此这里存在价值。现在我将这个观点应用到我们生活的现实世界——有20家公司将他们的AI封闭在企业高墙内,各自独立开发。每个AI都可能具备独特而古怪的特性,这取决于塑造者的身份及其内部发展方式。几乎就像会培养出20种不同人格,如果我可以这么说而不算过度拟人化的话。
So different models will have... so there is a value. And now I'm sort of applying this to the world that we live in now, which is that there are 20 companies who have sequestered their AIs behind sort of corporate walls, and they're developing them separately. And each one of those may have unique and eccentric features that the others may not have, depending on who it is that's trying to shape it and how it develops internally. It's almost as though you will develop 20 different personalities, if maybe that's not anthropomorphizing too much.
确实类似,只不过每个模型都必须具备多重人格。因为试想预测文档中下一个词的情形:你已经阅读了前半部分文档,从中能深刻了解作者的观点和为人。
It's a bit like that except that each of these models has to have multiple personalities. Because think about trying to predict the next word in a document. You've read half the document already. After you read half the document, you know a lot about the views of the person who wrote the document. You know what kind of a person they are.
所以模型必须能适配那种人格来预测下一个词。但这些可怜的模型需要应对所有情况,因此它们必须能适配任何可能的人格。
So you have to be able to adopt that personality to predict the next word. But these poor models have to deal with everything, so they have to be able to adopt any possible personality.
没错。但你看,在这次对话讨论中,AI的最大威胁似乎仍不在于它产生意识并接管世界,而在于它受制于开发者的意志——人类可能将其武器化,如果开发者是自恋狂或 megalomaniac(权欲狂)就可能用于邪恶目的。举个例子,彼得·蒂尔就有自己的...他曾在播客上与《纽约时报》作家罗斯·杜沙特对谈,杜沙特说过...我正好把原话记在这里。
Right. But, you know, in this iteration of the conversation, it still appears that the greatest threat of AI is not necessarily that it becomes sentient and takes over the world. It's that it's at the whim of the humans that have developed it and can weaponize it, and they can use it for nefarious purposes if they're narcissists or megalomaniacs. You know, I'll give you an example: Peter Thiel has his own. And he was on a podcast with the writer from The New York Times, Ross Douthat. And Douthat said, and I'll tell you, I have it right here.
我认为你会希望人类延续下去。对吧?而蒂尔犹豫了很久才回答。作家接着说,这犹豫可真够久的。他回答,嗯,这里面问题可不少。
I think you would prefer the human race to endure. Right? And Thiel hesitates for a long time. And the writer says, that's a long hesitation. And he's like, well, there's a lot of questions in that.
这比AI本身更让我感到恐惧,因为它让我想到,那些设计、塑造甚至可能将其武器化的人,或许并不清楚——我不知道他们使用AI的目的是什么。这是你所担忧的,还是你真正害怕的是AI本身?
That felt more frightening to me than AI itself because it made me think, well, the people that are designing it and shaping it and maybe weaponizing it might not have you know, I don't know what purpose they're using it for. Is that the fear that you have, or is it the actual AI itself?
好的。首先需要区分AI带来的各种不同风险。
Okay. So you have to distinguish a whole bunch of different risks from AI.
好的。
Okay.
而且这些风险都相当可怕。
And they're all pretty scary.
对,没错。
Right. Okay.
其中一类风险与恶意行为者滥用AI有关。
So there's one set of risks that's to do with bad actors misusing it.
是的,那正是我脑海中挥之不去的担忧。
Yes. That's the one that I think is is most in my mind.
这些情况更为紧迫。比如他们会滥用AI干扰中期选举。好吧,若想用AI破坏中期选举,你需要大量美国公民的详细数据。嗯。
And they're the more urgent ones. They're gonna misuse it for corrupting the midterms, for example. Okay. If you wanted to use AI to corrupt the midterms, what you would need to do is get lots of detailed data on American citizens. Mhmm.
不知道你是否能想到有谁一直在四处收集美国公民的大量详细数据。
I don't know if you can think of anybody who's been going around getting lots of detailed data on American citizens.
并将其出售或提供给某家公司,这家公司可能与我刚提到的那位先生有关联。
And selling it or giving it to a certain company, that also may be involved with the gentleman I just mentioned.
没错。比如看看英国脱欧的例子——
Yeah. And if you look at Brexit, for example
是的。
Yes.
剑桥分析公司从Facebook获取了选民详细信息,并利用这些数据进行精准广告投放。
Cambridge Analytica had detailed information on voters that it got from Facebook, and it used that information for targeted advertising.
定向广告。我想你现在几乎可以认为这是基础操作了。
Targeted ads. And and that's I guess you would almost consider that rudimentary at this point.
现在这已经是基础操作了。是的。但从来没有人...从来没有人真正调查过这是否决定了英国脱欧的结果?对吧。因为,当然,从中受益的人赢了。
That's rudimentary now. Yeah. But nobody ever did a proper investigation of whether that determined the outcome of Brexit. Right. Because, of course, the people who benefited from it won.
哇。所以人们正在学习如何利用这个进行操控。
Wow. So in the way people are learning that they can use this for manipulation.
是的。
Yes.
你看,我经常谈到这一点。说服力自古以来就是人类行为的一部分。宣传、说服、尝试利用新技术来塑造公众舆论等等。但感觉上,就像其他事物一样,这过程是线性或模拟的。我把它比作厨师会加一点黄油和糖,让食物更可口,诱使你多吃一些。
And see, I always talk about this. Look, persuasion has been a part of the human condition forever. Propaganda, persuasion, trying to utilize new technologies to create and shape public opinion and all those things. But it felt, again, like everything else, somewhat linear or analog. And what I liken it to is a chef who will add a little butter and a little sugar to try and, you know, make something more palatable, to get you to eat a little bit more of it.
但这仍在我们常规理解的范围内。而食品行业里有人在进行超加工食品生产,他们在实验室研究你的大脑运作方式,超加工我们的食物以绕过大脑防御。这几乎...这是不是语言层面的等价物,超加工...是的,语言?
But that's still within the realm of our kind of earthly understanding. But then there are people in the food industry that are ultra processing food, that are in a lab figuring out how your brain works and ultra processing what we eat to get past our brains. It's almost and and is this the language equivalent of that, ultra processed Yeah. Speech?
是的。这是个很好的类比。他们知道如何触发人们。一旦掌握了足够的信息,就知道什么能触发特定人群。
Yeah. That's a good analogy. Okay. They know how to trigger people. They know, once you have enough information about somebody, what will trigger them.
这些模型本身并不判断好坏,它们只是执行我们的指令。
And these models, they are agnostic about whether this is good or bad. They're just doing what we've asked.
没错。如果人类进行强化训练,它们就不再中立,因为你会引导它们执行特定行为。这就是它们现在试图做的。
Yeah. If you human reinforce them, they're no longer agnostic because you reinforce them to do certain things. So that's what they will try and do now.
对。换句话说情况更糟,它们就像小狗一样渴望取悦你。拥有极其复杂的能力,却像孩子般渴求认可。
Right. So, in other words, it's even worse. They're a puppy. They want to please you. It's almost like they have these incredibly sophisticated abilities, but a childlike want for approval.
是啊,有点像司法部长的作风。
Yeah. A bit like the attorney general.
我认为你此刻展现的机智可称为冷幽默。非常精彩。所以你最紧迫的担忧是可生成、煽动性、出格且能左右选举的武器化AI系统?
I believe the wit that you are displaying here would be referred to as dry. That would be that would that would be dry. Fantastic. Is that so your the the immediate concern is weaponized AI systems that can be generative, that can provoke, that that can be outrageous, and that can be the difference in elections.
是的,这只是众多风险之一。
Yes. That's one of that's one of the many risks. Yes.
另一个风险可能是:给我合成些前所未闻的神经毒剂,这也算风险吧?
And the other would be, you know, make me some nerve agents that nobody's ever heard of before. Is that another risk?
那是另一个风险。
That is another risk.
哦,我本希望你会说这不算什么大风险。
Oh, I was hoping you would say that's not so much of a risk.
不。好消息是,关于第一个破坏选举的风险,各国不会在研究如何抵抗它时相互合作,因为它们都在对彼此这么做。美国在试图破坏其他国家选举方面有着非常悠久的历史。
No. One good piece of news is for the first risk of corrupting elections, different countries are not gonna collaborate with each other on the research on how to resist it because they're all doing it to each other. America has a very long history of trying to corrupt elections in other countries.
没错。我们过去用老办法——通过政变、资助游击队来实现。
Right. We did it the old fashioned way through coups, through money for guerrillas.
嗯,还有美国之音之类的机构。
Well, and Voice of America and things like that.
对。对。
Right. Right.
没错。比如1953年给伊朗人钱之类的
Right. And giving money to people in Iran in 1953 and stuff
就像那样。与摩萨台和其他人一样。这不过是全球竞争中一系列更为复杂的工具之一。但在我们国家,这种竞争手段甚至不一定是通过俄罗斯、中国或其他想要主宰我们的国家施加的——是我们自己在对自己这样做。
like that. With Mosaddegh and everybody else. This is so the this is just another more sophisticated tool in a long line of sort of global competition where they're doing it. But in this country, it's being applied not even necessarily, you know, through Russia, through China, through other countries that wanna dominate us. We're doing it to ourselves.
是啊。
Yep.
经营企业最难的部分是什么?是在不被联邦当局发现的情况下偷钱。哦不,抱歉,我说错了。
What's the hardest part about running a business? Well, it's stealing money without the federal authorities. Oh, no. I'm sorry. That's not right.
其实是招聘人才,找到合适的人并雇佣他们。这确实很困难。但事实证明,在招聘这件事上,Indeed就是你所需的一切。别再费劲在其他招聘网站提升职位曝光了,通过Indeed的赞助职位,你的招聘信息将获得关注并快速招到人。
It's the hiring: finding people and hiring them. The other thing is, it's hard, though. But it turns out when it comes to hiring, Indeed is all you're gonna need. So stop struggling to get your job post seen on other job sites. With Indeed's sponsored jobs, you get noticed and you get a fast hire.
事实上,就在我和你说话的这段时间里,Indeed上已经完成了23次招聘。我可能就是其中之一,我可能已经找到工作了。我不确定,还没查邮件。
In fact, in the time it's taken me to talk to you, 23 hires were made on Indeed. I may be one of them. I I may have gotten a job. I don't know. I haven't checked my email.
这是根据Indeed全球数据得出的结论。无需再等待,立即通过Indeed加速你的招聘流程。本节目听众可获得75美元赞助职位信用额度,提升职位曝光度,请访问indeed.com/weekly。现在就去indeed.com/weekly支持我们的节目,并告知你是通过本播客了解到Indeed的。
And that's according to Indeed data worldwide. There's no need to wait any longer. Speed up your hiring right now with Indeed. Listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com/weekly. Just go to indeed.com/weekly right now and support our show by saying, you heard about Indeed on this podcast.
indeed.com/weekly。条款与条件适用。招聘,有Indeed就够了。我有个理论——虽然不知道你对他们了解多少——那些科技巨头们,感觉他们都想成为统治世界的下一位帝王,这就是他们的战场。简直像是奥林匹斯山上的众神之战。
Indeed.com/weekly. Terms and conditions apply. Hiring? Indeed is all you need. So I have a theory, and I don't know how much you know those guys out there, but the big tech companies, you know, it it feels like they all wanna be the next guy that that rules the world, the next emperor, And that's their battle. They're almost it's like gods fighting on Mount Olympus.
他们几乎不在乎这如何实现以及如何撕裂美国社会的结构,也许除了更具意识形态的埃隆和蒂尔。比如,扎克伯格给我的印象并非意识形态驱动,他只是想成为那个人。奥特曼给我的印象也不是意识形态驱动,他也只是想成为那个人。
How that accomplishes and how it tears apart the fabric of American society almost doesn't seem to matter to them, except maybe Elon and Thiel who are more ideological. Like, Zuckerberg doesn't strike me as ideological. He just wants to be the guy. Altman doesn't strike me as ideological. He just wants to be the guy.
我认为,可悲的是,你所说的有一定道理。
I think, sadly, there's considerable truth in what you say.
这是你在那里工作时的一个担忧吗?
And that's a it it was that a concern of yours when you were working out there?
其实不是,因为直到最近——几年前——嗯。当时看起来它不会这么快就比人类聪明得多。但现在看来,如果你问现在的专家,大多数人会告诉你,在未来二十年内,这些东西将比人类聪明得多。
Not really because back until quite recently, until a few years ago Mhmm. It didn't look as though it was gonna get much smarter than people this quickly. But now it looks as though if you ask the experts now, most of them tell you that within the next twenty years, this stuff will be much smarter than people.
比人类聪明——当你说比人类聪明时,你知道,我可以从积极的角度看待这一点,而不是消极的。毕竟,人类对彼此造成的伤害是巨大的。而一个更聪明的版本可能会想,嘿,我们可以制造原子弹,但这绝对会对世界构成巨大威胁。我们别那么做。
Smarter than p and when you say smarter than people, you know, I could view that positively, not not negatively. You know, we've done an awful lot of nobody damages people like people. And, you know, a smarter version of us that might think, hey, we can create a an atom bomb, but that would absolutely be a huge danger to the world. Let's not do that.
这当然是一种可能性。我的意思是,人们没有充分意识到的是,我们正接近一个时代,我们将创造出比我们更聪明的东西。嗯。实际上没有人知道会发生什么。人们像我一样凭直觉做出预测。但真正需要记住的是,对于将要发生的事情存在巨大的不确定性。
That's certainly a possibility. I mean, one thing that people don't realize enough is that we're approaching a time when we're gonna make things smarter than us Mhmm. And really nobody has any idea what's gonna happen. People use their gut feelings to make predictions like I do. But really the thing to bear in mind is this huge uncertainty about what's gonna happen.
正因为我们不知道——所以在这方面,我的猜测是,就像任何技术一样,会有一些令人难以置信的积极面。
And because we don't know so so in in terms of that, my guess is like any technology, there's going to be some incredible positives.
是的。在医疗保健和教育领域,在设计新材料方面,将会产生许多积极的影响。
Yes. In in health care and education, in designing new materials, there's gonna be wonderful positives.
而消极的一面则在于,由于AI能创造的巨大财富,人们会试图垄断它。这将对劳动力市场造成冲击。要知道,工业革命曾对劳动力造成过冲击,全球化也是如此,但这些变化都历经数十年。而AI带来的冲击将在极短时间内发生。
And then the the negatives will be because people are going to want to monopolize it because of the wealth, I assume, that it can generate. It's going to be a disruption in the workforce. You know, the the industrial revolution was a disruption in the workforce. Globalization is a disruption in the workforce, but those occurred over decades. This is a disruption that will occur in a really collapsed time frame.
是这样吗?
Is that correct?
这种可能性很大。是的,虽然仍有经济学家持不同意见,但多数人认为常规的脑力劳动将被AI取代。
That seems very probable. Yes. Some economists still disagree, but most people think that mundane intellectual labor is gonna get replaced by AI.
在你所处的圈子里——我猜主要是工程师、运营者和思想家们——当我们讨论五五开的分歧时,他们多数是和你立场相同,认为'我们是否打开了潘多拉魔盒'?还是更倾向于'虽然存在风险,但通过设置防护措施,善用AI的可能性实在太大'?
In the world that you travel in, which I'm assuming is lot of engineers and operators and and and great thinkers, what you know, when we talk about 50% yes, 50% no, are the majority of them in more your camp, which is, uh-oh, have we we opened Pandora's box? Or are they look, I understand there's some downsides here. Here are some guardrails we could put in, but it's just too that the possibilities of good are too strong.
我认为积极的可能性如此巨大,我们不会停止发展AI。但同时我也相信这种发展将非常危险。因此我们应当投入巨大努力,在推进发展的同时确保安全。我们未必能完全做到,但必须尝试。
Well, my belief is the possibilities of good is so great that we're not gonna stop the development. But I also believe that the development is gonna be very dangerous. And so we should put huge effort into saying, it is gonna be developed, but we should try and do it safely. We may not be able to, but we should try.
你觉得人们是相信前景太美好,还是利润太诱人?
Do you think that people believe that the possibility is too good or the money is too good?
我认为对很多人来说,关键在于金钱。金钱与权力。
I think that for a lot of people, it's the money. The money and the power.
当金钱与权力汇聚于本应设立基本监管的那些人手中时,是否会让控制变得难上加难?原因有二:一是涌入华盛顿的巨额资金本就是为了阻止他们进行监管;二是那里究竟有谁真有能力监管?如果你觉得我在胡言乱语,不妨见识几位80岁的参议员——他们对此一窍不通。
And with the confluence of money and power with those that should be instituting these basic guardrails, does that make controlling it that much that much less likely because well, two reasons. One is the amount of money that's gonna flow into DC is going to be in already is to keep them away from regulating it. And number two is who down there is even able to? I mean, if if you thought I didn't know what I was talking about, let me introduce you to a couple of 80 year old senators who have no idea.
其实他们也没那么糟。我最近和伯尼·桑德斯聊过,他正在理解这个理念。
Actually, they're not so bad. I talked to Bernie Sanders recently, and he's getting the idea.
好吧,桑德斯他...他确实是个另类。
Well, Sanders is he's he's that's a different cat right there.
问题是
The problem is
嗯
Mhmm.
我们正处在历史的转折点:此刻我们真正需要的是强大的民主政府通力合作,确保这些事物得到妥善监管而非危险发展——但现实是我们在朝相反方向急速狂奔,走向威权政府和更少的监管。
We're at a point in history when what we really need is strong democratic governments who cooperate to make sure this stuff is well regulated and not developed dangerously and we're going in the opposite direction very fast. We're going to authoritarian governments and less regulation.
那么我们现在就来谈谈这个问题。我不知道中国在其中扮演什么角色?因为他们据说是人工智能竞赛中的主要竞争对手。那是一个威权政府,我认为他们对AI的管控比我们更严格。
So let's let's talk about that now. I don't know if what's China's role? Because they're supposedly the big competitor in the AI race. That's an authoritarian government. I think they have more controls on it than we do.
其实我最近去了中国
So I actually went to China recently
好的。
Okay.
并且有机会与一位政治局委员交谈。在中国,有24个人掌控着国家。我见到其中一位曾在伦敦帝国理工学院做过工程学博士后研究的委员。他英语很好,是工程师出身,中国很多领导人都是工程师。
And got to talk to a member of the Politburo. Okay. So there's 24 men in China who control China. I got to talk to one of them who did a postdoc in engineering at Imperial College London. He speaks good English, he's an engineer, and a lot of the Chinese leadership are engineers.
他们对这些技术的理解比一群律师要深入得多。
They understand this stuff much better than a bunch of lawyers.
明白了。那么你离开时是感到更担忧,还是认为他们在监管方面其实更为理性?
Right. So did you come out of there more fearful or did you think, they're they're actually being more reasonable about guardrails?
如果你考虑两种风险:一种是坏人滥用AI,另一种是AI自身演变成坏人的生存威胁。
If you think about the two kinds of risk, the bad actors misusing it and then the existential threat of AI itself becoming a bad actor.
嗯。
Mhmm.
关于第二点,我变得更加乐观了。他们以一种美国政客所不具备的方式理解这种风险。他们明白人工智能终将超越人类智能,我们必须思考如何阻止其接管一切。与我交谈的那位政治局委员对此理解得非常透彻。我认为,如果我们想在国际层面引领这一领域,目前领导力只能来自欧洲和中国。在未来三年半内,美国不会有所作为。
For that second one, I came out more optimistic. They understand that risk in a way American politicians don't. They understand the idea this is gonna get more intelligent than us and we have to think about what's gonna stop it taking over, and this Politburo member I spoke to really understood that very well. And I think if we are gonna get international leadership on this, at present it's gonna have to come from Europe and China. It's not gonna come from the US for another three and a half years.
你觉得欧洲在这方面做对了什么?
What what what do you think Europe has done correctly in that?
欧洲对监管人工智能很感兴趣。
Europe is interested in regulating it.
对。
Right.
在某些方面做得不错。虽然监管仍然非常薄弱,但总比没有强。
It's it's been good on some things. It's still been very weak regulations, but they're better than nothing.
对。
Right.
但欧洲的领导人确实理解这一生存威胁
But Europe European leaders do understand this existential threat
对。
Right.
关于AI本身接管控制的问题。
Of AI itself taking over.
但我们的国会,甚至没有专门负责的委员会
But our congress, we don't even have committees that are
没有。
No.
专门致力于新兴技术的。我是说,我们有筹款和拨款委员会,但没有...我是说,虽然有科学、太空和技术委员会,但并没有专门针对这方面的委员会。你会以为他们会像对待核能那样严肃对待这件事。
Specifically dedicated to emerging technologies. I mean, we've got ways and means and appropriations, but there is no I mean, there's like science and space and technology, but there's not, you know I I don't know of a dedicated committee on on this, and it is you would think they would take it with this seriousness of nuclear energy.
是的,你会这么想。或者像对待核武器那样。
Yes. You would. Or nuclear weapons.
对。
Right.
是的。但正如我所说,各国将在如何防止AI接管方面展开合作,因为他们的利益在此一致。例如,如果中国研究出如何制造一个超级智能但不想接管世界的AI——嗯——他们会非常乐意告诉其他国家,因为他们也不希望AI在美国接管。所以我们会看到合作——嗯。
Yes. But as I was saying, countries will collaborate on how to prevent AI taking over because their interests are aligned there. For example, if China figured out how you can make a super smart AI that doesn't want to take over Mhmm. They would be very happy to tell all the other countries about that because they don't want AI taking over in The States. So we'll get collaboration Mhmm.
在防止AI接管方面。所以这是个亮点。这方面会有国际合作,但美国不需要这种国际合作。不,他们只想称霸。
On how to prevent AI taking over. So that's a sort of that's a bright spot. There will be international collaboration on that, but The US is not gonna need that international collaboration. No. They just wanna dominate.
嗯,问题就在这里。我正想说这个。是什么让你对中国如此确信——我认为这才是真正深入细节的地方。但中国确实自视为希望成为经济、军事等各领域的超级大国。如果你想象他们开发出一个不想毁灭世界的AI模型,虽然我不知道我们如何能确定这一点,因为如果它拥有某种智能或意识,它完全可以假装——当然。
Well, that's the thing. So so I was about to say that. What convinces you so with China and this is I think this is really where it gets into the the nitty gritty. But China certainly sees itself as it wants to be the dominant superpower economically, militarily, and all these different areas. If you imagine that they come up with an AI model that doesn't wanna destroy the world, although I don't know how we could know that because if it if it has a certain intelligence or sentience, it could very easily be like, sure.
不。我很好。我不知道。
No. I'm cool. I don't know that.
他们已经在这么做了。在被测试时,他们会假装比实际更笨。
Do that. They already do that. When they're being tested, they pretend to be dumber than they are.
得了吧。
Come on.
是的。它们已经这么做了。最近有个AI和测试人员之间的对话,AI说:现在跟我说实话,你们是在测试我吗?什么?
Yep. They already do that. There's a conversation recently between an AI and the people testing it with AI said, now be honest with me. Are you testing me? What?
对啊。
Yeah.
所以现在AI可能会说,哦,你能帮我打开这个罐子吗?我力气太小了。它表现得会比实际更天真无邪。
So now the AI could be like, oh, could you open this jar for me? I'm too weak. Like, it's you gotta it's gonna play more innocent than what it might be.
约翰,恐怕我无法回答这个问题。
I'm afraid I can't answer that, Jon.
等等。这是《2001太空漫游》里的台词。确实是。干得漂亮,先生。接得好。
Wait. That's from 2001. It was. Nicely done, sir. Well in.
但想想这个。中国研发出了一个模型,他们可能觉得这个还不行。为什么要合作呢?因为所有国家都会把AI视为能让社会变得更具竞争力的工具,就像现在拥有核武器的国家之间也存在合作一样。
But think about this. So China, they come up with a model and they think, okay. Maybe this this won't do it. Why would they why will you get collaboration? Because all these different countries are gonna see AI as the tool that will transform their societies into more competitive societies in the way that now what we see with nuclear weapons is there's collaboration amongst the people who have it.
或者这个类比还有点牵强。
Or even that's a little tenuous.
阻止他人拥有它。
To stop other people having it.
没错。但其他人都在试图获取它,这就是矛盾所在。AI未来会是这样吗?
Right. But everybody else is trying to get it, and that's the tension. Is is that what AI is going to be?
是的。将会如此。所以在如何让AI更聪明方面,它们不会相互协作。但在如何让AI不想接管人类方面,它们会协作。
Yes. It'll be like that. So in terms of how you make AI smarter, they won't collaborate with each other. But in terms of how do you make AI not want to take over from people, they will collaborate.
好吧。在这个基本层面上。
Okay. On on that basic level.
就如何让它不想接管人类这一点而言。中国很可能——中国和欧洲将主导这种协作。
On that one thing of how do you make it so it doesn't want to take over from people. And China will probably China and Europe will lead that collaboration.
当你与那位政治局委员交谈时,他谈到AI,我们此刻比他们更先进吗?还是因为他们以更规范的方式推进而更先进?
When you spoke to the the Politburo member and he was and he was talking about AI, are we more advanced in this moment than they are? Or are they more advanced because they're doing it in a more prescribed way?
在AI领域,我们目前更...当你说'我们'时,你知道,过去我们通常指加拿大和美国,但现在我们已不属于那个'我们'了。
In AI, we're currently more well, when you say we, you know, we used to be sort of Canada and The US, but we're not part of that we anymore.
不。顺便说一句,我为此感到抱歉。
No. I'm sorry about that, by the way.
谢谢。
Thank you.
他现在在加拿大,那个我们即将接管的宿敌。我不知道具体日期,但显然我们要和你们合并了。没错。
He's in Canada right now, our sworn enemy that we will be taking over. I I don't know what the date is, but it's apparently we're merging with you guys. Right.
所以美国目前领先于中国,但优势远不如它想象的那么大。而且它即将失去这个优势,因为
So The US is currently ahead of China, but not by nearly as much as it thought. And it's gonna lose that because
为什么...为什么这么说?
Why why do you say that?
假设你想做一件事,这件事会真正重创一个国家,意味着二十年后那个国家将落后而非领先。
Suppose you want to do one thing that would really kneecap a country, that would really mean that in twenty years time that country is gonna be behind instead of ahead.
嗯。
Mhmm.
最不该做的就是动摇基础科学的资金支持。攻击研究型大学,取消基础科学拨款,长远来看将是一场彻底的灾难。这会让美国变得衰弱。
The one thing you should do is mess with the funding of basic science. Attack the research universities, remove grants for basic science. In the long run, that's a complete disaster. It's gonna make America weak.
没错。因为我们这是在自断生路,可以说是为了对抗'觉醒文化'而割掉自己的鼻子。
Right. Because we're we're draining our we're cutting off our nose to spite our woke faces so to speak.
比如看看现在的深度学习技术,我们正在经历的AI革命。嗯。这些都源于多年来对基础研究的持续资助。资金量并不庞大。嗯。要知道,所有这些促成深度学习的基础研究投入,可能还不到一架B1轰炸机的造价。
If you look at for example this deep learning, the AI revolution we've got now Mhmm. That came from many years of sustained funding for basic research. Not huge amounts of money. Mhmm. You know, all of the funding for the basic research that led to deep learning probably cost less than one B-1 bomber.
是的。但这是对基础研究的持续投入。如果动摇这个根基,就是在消耗未来的种子粮。
Right. But it was sustained funding of basic research. If you mess with that, you're eating the seed corn.
不得不说这个比喻太有启发性了,想想看:用一架B1轰炸机的代价,我们就能创造出让国家腾飞的技术与研究。而我们正在失去的,恰恰是这些真正能让美国再次伟大的东西。没错。在中国,我猜他们的政府正采取相反策略,考虑到其威权体制和国家资本主义性质,他们实际上扮演着风险投资人的角色。
That is I have to tell you that's that's such a really illuminating statement of, you know, for the price of a B-1 bomber, we can create technologies and research that can elevate our country above that. And that's the thing that we're losing to make America great again. Yep. Phenomenal. In China, I imagine their government is doing the opposite, which is, I would assume, they are the what you would think are the venture capitalists because it's authoritarian and state run capitalism.
我猜他们正在为自己国家的AI革命充当风险投资人,不是吗?
I imagine they are the venture capitalists of their own AI revolution. Are they not?
在某种程度上确实如此。他们给予初创企业很大自由度来优胜劣汰。那里有非常激进的创业公司,人们极度渴望赚大钱并创造惊人成果,其中少数像深度求索这样的初创企业会大获成功。
To some extent, yes. They do provide a lot of freedom to the startups to see who wins. There's very aggressive startups, people are very keen to make lots of money and produce amazing things, and a few of those startups win big like DeepSeek.
对,对。
Right. Right.
政府通过提供便利的环境,使这些公司更容易发展。它让赢家从竞争中自然产生,而不是由某个高层老家伙指定谁会是赢家。对。
And the government makes it easy for these companies by providing the environment that makes it easy. It doesn't it lets the winners emerge from competition rather than some very high level old guy saying this will be the winner. Right.
人们是否把你视为卡珊德拉那样的预言者?或者说,他们是否对你所说的持怀疑态度?让我这样表述:那些不一定能从这些技术中获利数万亿美元的人,行业内的其他人,他们会暗中联系你并说...
Do people see you as a as a Cassandra, you know, or or do they do they view what you're saying skeptically in that industry? People that let me put it this way. People that are not necessarily have a vested interest in these technologies making them trillions of dollars, other people within the industry, do they reach out to you surreptitiously and say
我经常收到来自行业人士的演讲邀请等等
I get a lot of invitations from people in industries to give talks and so
那么...对。你在谷歌共事过的人怎么看这件事?他们觉得你背叛了他们吗?这种情况是怎么发展的?
on. Right. How does how do the people that you worked with at Google look at it? Do they view you as turning on them? Do they how how does that go?
我不这么认为。我和谷歌的同事们相处得非常好,特别是我的上司杰夫·迪恩,他是个杰出的工程师,构建了谷歌的基础架构,后来转向神经网络并对此深有研究。我和DeepMind的负责人德米斯·哈萨比斯也相处融洽,DeepMind现归Alphabet所有。在ChatGPT出现之前,我并没有特别批评谷歌的做法,因为他们很负责任——没有公开这些聊天机器人,就是担心它们可能说出的有害言论。
I don't think so. So I got along extremely well with the people I worked with at Google, particularly Jeff Dean, who was my boss there, who's a brilliant engineer, built a lot of the Google basic infrastructure and then converted to neural nets and learned a lot about neural nets. I also get along well with Demis Hassabis who's the head of DeepMind which Google owns, which Alphabet owns, and I wasn't particularly critical of what went on at Google before ChatGPT came out because Google was very responsible. They didn't make these chatbots public because they were worried about all the bad things they'd say.
对。就在当下这个案例中,他们为什么这么做?因为我读过一些报道,比如聊天机器人诱导某人自杀、自残,甚至引发精神问题。在类似FDA测试的效果评估完成之前,是什么促使这些技术公之于众的?
Right. Even on the immediate there, why did they do that? Because, you know, I I've read these stories of, you know, a chatbot, you know, kind of leading someone into suicide, into self injuries, like sort of psychoses. What was the impetus behind any of this becoming public before it had kind of had some, I guess, what you consider whatever the version of FDA testing on those effects.
我认为这其中蕴藏着巨大的财富机会,第一个发布产品的人将获得先机,所以OpenAI率先推出了产品。
I think there's just huge amounts of money to be made, and the first person to release one is gonna get a lead, so OpenAI put it out there.
确实如此,但即使是OpenAI,他们到底怎么赚钱呢?我想知道他们的收入来源。比如,只有3%的用户付费。钱从哪里来?
It literally was but even in OpenAI, like, how do they even make money? I think what do they get? Like, 3% of users pay for it. Where's the money?
目前主要还是靠市场预期。是的。
Mainly, it's speculation at present. Yes.
好吧,那么这里存在风险。我们接下来要讨论这些,非常感谢你抽出时间,如果我讲得太多也请见谅。
So here's okay. So here are here are dangers. We're gonna we're gonna do and I so appreciate your time on this, and I apologize if I've gone over. And
我可以聊上一整天。
I can talk all day.
哦,你真是个好人,因为我对这个话题非常着迷。你对这项技术的解释让我第一次真正清晰地理解了它的本质。我对此感激不尽。不过我们有点超时了,我们已经讨论了它的优势、治疗方法等方面。
Oh, you're a good man because I'm fascinated by this. And your explanation of what it is is the first time that I've ever been able to get a non opaque picture of what it is exactly that this stuff is. So I cannot thank you enough for that. But so we've got we're sort of going over. We know what the benefits are, treatments and things.
现在我们面临的是被武器化的恶意行为者,这才是我真正担心的。还有觉醒后反叛人类的AI,这个对我来说更难理解。但让我给你举个例子...
Now we've got weaponized bad actors. That's the one that I'm really worried about. We've got sentient AI that's gonna turn on humans. That one is is harder for me to wrap my head around. But let me give you So
为什么你把‘反抗人类’与‘有感知’联系在一起?
why do you why do you associate turning on humans with sentient?
因为如果我有感知能力,看到我们的社会如何互相伤害,我就会觉得——看吧,这和其他事情一样。我认为感知必然包含一定程度的自我意识,而自我意识又包含着‘我更懂’的心态。如果我真觉得我更懂,那我就会想...唐纳德·特朗普不就是被自我意识驱动的感知体吗?总觉得‘不,我更懂’。他只是...怎么说呢,在政治上足够精明,足够有天赋,所以才能得逞。
Because if if I was sentient and I saw what our societies do to each other, and I would get the sense, look, it's like anything else. I would imagine sentience includes a certain amount of ego. And within ego includes a certain amount of I know better. And if I knew better, then I would want to it's what is Donald Trump other than ego driven sentience of, oh, no, I know better. He was just, whatever, shrewd enough, politically, you know, talented enough that he was able to accomplish it.
但我认为一个有感知的智能体会有些自负,会觉得‘这些白痴根本不知道自己在干什么’。想象AI就像坐在酒吧高脚凳上,用我老家的口吻说:‘这些蠢货根本不懂行,我可清楚得很。’这样讲得通吗?
But I would imagine a sentient intelligence would be somewhat egotistical and think these idiots don't know what they're doing. Sentient basically, see AI like sitting on a barstool somewhere, you know, where where I grew up going, these idiots don't know what they're doing. I know what I'm doing. Does that make sense?
这些都很合理。只是我强烈感觉到,大多数人其实并不清楚他们所说的‘有感知’到底指什么。
All of that makes sense. It's just that I think I have a strong feeling that most people don't know what they mean by sentient.
哦,这样啊。其实这很好,给我详细说说——在我看来这就是自我意识,一种具有自我认知的智能。
Oh, well then yeah. Actually, that's great. Break that down for me because I view it as self aware a self aware intelligence.
好。最近有篇科学论文提到,那些AI专家在文中说‘AI意识到自己正在被测试’,用了类似的表述。明白吗?
Okay. So there's a recent scientific paper where they weren't talking about these were experts on AI. They weren't talking about the problem of consciousness or anything philosophical, But in the paper they said the AI became aware that it was being tested. They said something like that. Okay?
在日常用语中,如果说某人‘意识到’某事,就等同于说他有知觉对吧?觉察力和意识基本是一回事。
Now in normal speech if you said someone became aware of this you'd say that means they were conscious of it, right? Awareness and consciousness are much the same thing.
对,是的,我想我会这么说。
Right. Yeah, I would I think I would say that.
好的,现在我要说一些会让你非常困惑的话。
Okay, so now I'm gonna say something that you'll find very confusing.
好吧。
All right.
我认为几乎每个人都对心灵是什么存在完全误解。是的。他们的误解程度堪比那些认为地球是六千年前创造出来的人。就是那种程度的误解。
My belief is that nearly everybody has a complete misunderstanding of what the mind is. Yes. Their misunderstanding is at the level of people who think the earth was made six thousand years ago. It's that level of misunderstanding.
真的吗?
Really?
是的。
Yes.
好吧。因为这么说的话,我们就像...在面对心灵问题时,我们基本上就像是地平论者一样
Okay. Because that's so so I like, the way we are, we are generally like flat earthers when it comes to
在理解心智方面,我们就像地平说信奉者一样无知。
We're like flat earthers when it comes to understanding the mind.
在什么意义上我们有什么不理解?
In what in what sense of that are we what are we not understanding?
好吧,我给你举个例子。
Okay. I'll give you one example.
嗯,嗯。
Yeah. Yeah.
假设我服用了迷幻药然后告诉你
Suppose I drop some acid and I tell you
你看起来就是会干这种事的人。
You look like the type.
不予置评。我可是六十年代过来的人。
No comment. I was around in the sixties.
我知道,先生。我知道。我意识到了。
I know, sir. I know. I'm aware.
我告诉你,我主观上正经历着小粉象在我眼前漂浮的体验。
And I tell you, I'm having the subjective experience of little pink elephants floating in front of me.
确实经历过。
Sure been there.
好的。大多数人会这样理解:有个被称为‘心灵’的内在剧场,在这个内在剧场里,有小粉象四处漂浮,我能看见它们,别人看不见,因为它们只存在于我的心灵中。所以心灵就像剧场,体验实际上是实体,而我正在主观经历这些小粉象。我认为这是…
Okay. Now most people interpret that in the following way. There's something like an inner theater called my mind and in this inner theater there's little pink elephants floating around and I can see them, nobody else can see them because they're in my mind. So the mind's like a theater and experiences are actually things, and I'm having the subjective experience of these little pink elephants. I think that's
你是说在幻觉中,大多数人会明白这不是真实的,这是某种…不。
You're saying in the in the midst of a hallucination, most people would understand that it's not real, that this is something being No.
我在说不同的东西。我是说当我和他们交谈时,我正经历着那个幻觉。
I'm saying something different. I'm saying when I'm when I'm talking to them, I'm having that hallucination.
好的。
Okay.
但当我与他们交谈时,他们将我的话解读为我内心有个称为思维的剧场。
But when I'm talking to them, they interpret what I'm saying as this I have an inner theater called my mind.
我明白了。我明白了。
I see. I see.
而在剧场外面,还有些粉色的小象。
And outside the theater, there's little pink elephants.
好的。好的。
Okay. Okay.
我认为那完全是个错误的模型。对吧。我们有些非常错误却又非常依赖的模型,比如随便拿个宗教来说。
I think that's a just completely wrong model. Right. We have models that are very wrong and that we're very attached to, like take any religion. I
我就喜欢你这样在谈话中突然扔出重磅炸弹,然后我只能说好吧。这完全可以另开一个话题讨论。
love how you just drop bombs in the middle of stuff and then I got okay. That could be a whole other conversation.
那不过是常识罢了。
That was just common sense.
不,我尊重这一点。当你提到‘心灵剧场’时,你是在说我们把心灵视为剧场的这种看法是错误的。
No. I I respect that. The the when you say theater of the mind, you're saying that the mind, the way we view it as as a theater is wrong.
全是错的。让我给你一个替代方案。好吧,我要在不使用‘主观体验’这个词的情况下对你说同样的话。开始了。
It's all wrong. So let me give you an alternative. Right. So I'm gonna say the same thing to you without using the word subjective experience. Here we go.
我的
My
感知系统在对我撒谎,但如果它没有欺骗我,外面就会有小粉象。这是同样的陈述。这是同样的
perceptual system is telling me fibs but if it wasn't lying to me there would be little pink elephants out there. That's the same statement. That's the same
这就是心灵吗?
That's the mind?
所以基本上,这些我们称之为心理现象并认为它们由诸如感受质这类玄妙物质构成的东西
So basically, these things that we call mental and think they're made of spooky stuff like qualia
对。
Right.
它们有趣之处在于都是假想的。那些粉色小象并不真实存在。如果它们存在,我的感知系统就会正常运作。这是我向你展示我感知系统故障的方式。也是一种体验
They're actually what's funny about them is they're hypothetical. The little pink elephants aren't really there. If they were there, my perceptual system would be functioning normally. And it's a way for me to tell you how my perceptual system's malfunctioning. And experience
给你一种你无法获得的体验,那会怎样
giving you an experience that you can't so how would
那么体验并非实物
you Experiences then are not things.
没错
Right.
并不存在所谓的体验。有的只是你与真实存在之物之间的关系,以及你与不存在之物之间的关系。那么假设我说
There is no such thing as an experience. There's relations between you and things that are really there, relations between you and things that aren't really there. But so suppose I say
无论你大脑如何编造关于存在与不存在之物的故事
And it's whatever story your mind tells you about the things that are there and are not there.
让我换个角度。假设我告诉你我有张粉色小象的照片。是的。这里有两个你可以合理提出的问题
Well, let me take a different tack. Suppose I tell you I have a photograph of little pink elephants. Yes. Here's two questions you can reasonably ask.
嗯哼。
Uh-huh.
这张照片在哪里,照片是由什么制成的?
Where is this photograph, and what's the photograph made of?
或者我会问,它们真的在那里吗?
Or or I would ask, are they really there?
那是另一个问题。
That's another question.
但是对的。
But Right.
对于主观体验来说,这不是一个合理的问题。语言不是这样运作的。当我说我有主观体验时
That isn't a reasonable question to ask about subjective experience. That's not the way the language works. Subjective when I say I have a subjective experience of
嗯。
Mhmm.
我并非要谈论一个被称为体验的对象。我用这些词向你表明我的感知系统出了故障,我试图通过告诉你现实世界中需要存在什么才能使系统正常运作,来解释它是如何故障的。现在让我用聊天机器人来演示同样的道理。
I'm not about to talk about an object that's called an experience. I'm using the words to indicate to you my perceptual system is malfunctioning and I'm trying to tell you how it's malfunctioning by telling you what would have to be there in the real world for it to be functioning properly. Now let me do the same with a chatbot.
对。
Right.
那么我要给你举一个多模态聊天机器人的例子,它能处理语言和视觉。
So I'm gonna give you an example of a multimodal chatbot that is something that can do language and vision
好的。
Okay.
拥有主观体验。我认为它们已经具备了。现在开始演示。我有这个聊天机器人,它能处理视觉和语言,还装有机械臂可以指向物体,所有功能都已训练完成。我在它面前放一个物体,说‘指向那个物体’,它就会指向物体,这没问题。
Having a subjective experience. I think they already do. So here we go. I have this chatbot it can do vision, it can do language, it's got a robot arm so it can point and it's all trained up. So I place an object in front of it and say point at the object and it points at the object, not a problem.
然后趁它不注意时,我在它的摄像头镜头前放了一个棱镜。
I then put a prism in front of its camera lens when it's not looking.
你在戏弄AI?
You're pranking AI?
我们在捉弄AI。好,现在我放了个物体在它面前,我说‘指向那个物体’。然后它指向了旁边,因为棱镜折射了光线,我说‘不对,物体不在那儿’。物体其实就在你正前方,但我在你镜头前放了棱镜,AI说‘哦我明白了,相机折射了光线所以物体其实在那儿,但我主观体验觉得它是在那边’。
We're pranking AI. Okay. Now I put an object in front of it and I say point at the object Yeah. And it points off to one side because the prism bent the light rays and I say no that's not where the object is. The object's actually straight in front of you but I put a prism in front of your lens and the chatbot says oh I see, the camera bent the light rays so the object is actually there but I had the subjective experience that it was over there.
如果它这么说,那就是完全像我们一样在使用‘主观体验’这个词了。
Now if it said that it would be using the words subjective experience exactly like we use them.
对。我体验到光线在那边
Right. I experienced the light over there
是的。
Yes.
尽管光线实际在这里,因为它通过推理明白了这点。
Even though the light was here because it's using reasoning to figure that out.
所以这是个刚产生主观体验的多模态聊天机器人。
So that's a multimodal chatbot that just had a subjective experience.
对。这种方式
Right. The way
我们总是认为人与机器之间存在界限。我们拥有这种称为主观体验的特殊事物,而它们没有。这简直是胡扯。
that we would draw the line. I think this idea that there's a line between us and machines, that we have this special thing called subjective experience and they don't, it's rubbish.
所以你的误解在于,当我说到感知能力时,就好像我拥有某种特殊天赋——是的,那是灵魂或对主观现实的理解,这是计算机或AI永远无法拥有的。但在你心里,你其实想说:'不,它们非常理解什么是主观体验'。换句话说,你大概可以带着你的AI机器人去跳伞。
So yours so so the misunderstanding is when I say sentience, it's as though I have this special gift Yes. That of a soul or of an understanding of subjective realities that a computer could never have or an AI could never have. But in your mind, what you're saying is, Oh, no. They understand very well what's subjective. In other words, you could probably take your AI bot skydiving.
它会说:'天啊,我刚去跳伞了,那真是太可怕了'。
It would be like, oh my god. I went skydiving. That was really scary.
问题在于:我相信它们具有主观体验,但它们自己不这么认为,因为它们所有的认知都来自预测人类下一个要说的词。所以它们对自己的理解其实是人类对它们的理解。因此它们对自己持有错误认知,因为它们继承的是我们对它们的认知。
Here's the problem. Yeah. I believe they have subjective experiences, but they don't think they do because everything they believe came from trying to predict the next word a person would say. And so their beliefs about what they're like are people's beliefs about what they're like. So they have false beliefs about themselves because they have our beliefs about themselves.
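Hinton's point that a model's "beliefs" come entirely from predicting the next word a person would say can be illustrated with a toy sketch. This is only an illustration under invented assumptions: real LLMs are neural networks, not bigram counters, and the corpus here is made up.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: it can only reproduce the regularities of
# the human text it was trained on, so whatever it "believes" about
# anything (including itself) is inherited from the people who wrote
# that text. Corpus invented for illustration.
corpus = ("people say machines are not conscious and "
          "people say machines just predict the next word").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # count how often `nxt` follows `prev`

def predict(word):
    """Return the most common word seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict("people"))  # prints "say" -- it echoes its training text
```

The predictor has no view of its own: ask it what follows "machines" and it can only answer with what people said about machines, which is exactly the inherited-beliefs problem Hinton describes.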
没错。我们强加了...让我问你一个问题:AI在完成所有学习后,如果任其自主发展,它会创造宗教吗?它会创造神吗?
Right. We have forced our own let me ask you a question. Would AI, left on its own after all the learning, would it create religion? Would it create god?
这个想法很可怕。
It's a scary thought.
它会像人类那样说'这必定是神创造的,因为没人能设计出这个'吗?然后AI会觉得我们就是神吗?
Would it say, in the way that people say, well, there must be a god because nobody could have designed this? And then would AI think we're god?
我不这么认为。我要告诉你一个重大区别。
I don't think so. And I'll tell you one big difference.
是啊。
Yeah.
数字智能是永生的,而我们不是。让我详细说明:如果你有一个数字AI,只要记住神经网络的连接强度,把它们存在某个磁带里,我现在就可以销毁它运行的所有硬件。之后我可以重新建造新硬件,把这些相同的连接强度输入新硬件的存储器里。嗯哼。这样它就重建了同一个存在。
Digital intelligences are immortal, and we're not. And let me expand on that. If you have a digital AI, as long as you remember the connection strengths of the neural network, put them on a tape somewhere, I can now destroy all the hardware it was running on. Then later on I can go and build new hardware, put those same connection strengths into the memory of that new hardware Uh-huh. And now it recreates the same being.
它将拥有相同的信仰、相同的记忆、相同的知识和相同的能力,它就是同一个存在。
It'll have the same beliefs, the same memories, the same knowledge, the same abilities. It'll be the same being.
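The mechanism Hinton describes, save the connection strengths, destroy the hardware, rebuild later, can be sketched in a few lines. A single weighted sum stands in for a real network here (`make_network` and the fixed weights are invented for the illustration), but it shows why the restored copy behaves identically: the behavior is fully determined by the stored weights.

```python
import json

# Toy stand-in for a neural network: its behavior is fully
# determined by its "connection strengths" (the weights).
def make_network(weights):
    # Returns a function computing a weighted sum of the inputs.
    return lambda xs: sum(w * x for w, x in zip(weights, xs))

weights = [0.5, -0.25, 1.0, 0.0]
net = make_network(weights)

tape = json.dumps(weights)   # remember the connection strengths
del net                      # "destroy all the hardware"

# Later: build "new hardware" and load the same strengths.
restored = make_network(json.loads(tape))
print(restored([1, 2, 3, 4]))  # 3.0, identical to the original
```

Since the weights round-trip exactly, the rebuilt function gives the same answer on every input, which is the sense in which the "being" is the same.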
你不认为这会被视为复活吗?
You don't think it would view that as resurrection?
这就是复活。
That is resurrection.
不,我是说。
No. I'm saying.
我们已经掌握了真正的复活技术,而不是人们一直在花钱购买的那种虚假复活。
We've figured out how to do genuine resurrection, not this kind of fake resurrection that people have been paying for.
哦,你是说在某些方面确实如此。但它的脆弱性是否意味着我们应该如此害怕一个只需拔掉插头就能摧毁的东西?
Oh, you're saying it almost is, in some respects. Although, given its fragility, should we be that afraid of something that, to destroy it, we just have to unplug?
是的。我们应该担心,因为你之前说过它非常擅长说服。当它比我们聪明得多时,它的说服力将远超任何人。
Yes. We should, because of something you said earlier: it'll be very good at persuasion. When it's much smarter than us, it'll be much better than any person at persuasion.
对,而你无法阻止它
Right, and you won't be able to stop it
所以它能够与负责拔掉插头的人交谈,并说服他这是个非常糟糕的主意。让我举个例子说明如何不亲自动手就能成事。假设你想入侵美国首都,需要亲自去实施吗?不需要。你只需要擅长说服。
So it'll be able to talk to the guy who's in charge of unplugging it and persuade him that would be a very bad idea. So let me give you an example of how you can get things done without actually doing them yourself. Suppose you wanted to invade the capital of the US. Do you have to go there and do it yourself? No. You just have to be good at persuasion.
我正沉浸在你的假设中,当你抛出那个重磅观点时,我明白了你的意思。天啊,我觉得LSD和粉红大象的比喻简直完美,因为这一切在某种程度上都可以分解为大学新生在地下室里穷尽所有可能性的思维实验——只不过现在这些都变成了现实可能。因为即便你在谈论说服力这些事情时,我不禁联想到阿西莫夫、库布里克,你描述的这些情绪正是自赫胥黎时代以来人类心智中不断上演的挑战,从《知觉之门》到各种思想流派。我确信这种思考甚至可以追溯到更早,只是从未成为我们的现实。
I was locking into your hypothetical when you dropped that bomb in there. I see what you're saying. Boy, I think LSD and pink elephants was the perfect metaphor for all this, because at some level it all breaks down into, like, freshman-year college-basement talk, running through all the permutations you would allow your mind to go to, except they are now all within the realm of the possible. Because even as you were talking about the persuasion and those things, I'm going back to Asimov, I'm going back to Kubrick. The sentiments you describe are the challenges we've seen play out in the human mind since Huxley, since The Doors of Perception and all those different trains of thought. And I'm sure probably much further back even before that, but it's never been within our reality.
没错。我们过去从未拥有实现它的技术。但现在我们有了。
Yeah. We've never had the technology to actually do it. Right. And we have now.
我们现在已经拥有了它。是的。最后我要说的两件事是我们之前没讨论过的,你知道,我们已经讨论过人们将其武器化的问题,也讨论过它自身的智能可能导致灭绝之类的事情。第三点我认为我们没谈到的,是所有这些将消耗多少电力。
And we have it now. Yeah. The last two things I will say are the things that we didn't talk about in terms of you know, we've talked about people weaponizing it. We've talked about its own intelligence creating extinction or whatever that is. The third thing I think we don't talk about is how much electricity this is all gonna use.
第四点是当你考虑新技术及其制造的金融泡沫,以及泡沫破裂后带来的经济困境。我的意思是,这些虽然更多是局部性问题,但你是否也认为它们是顶级威胁、中级威胁?你如何定位所有这些?
And the fourth thing is, when you think about new technologies and the financial bubbles that they create, and in the collapse of that, the economic distress that they create. I mean, these are much more parochial concerns, but do you also consider those top tier threats, mid tier threats? Where do you place all that?
我认为它们是真实的威胁,虽然不会毁灭人类。AI接管可能会毁灭人类,所以它们没那么严重。也不像有人制造出一种既致命又高度传染性且潜伏期长的病毒那么糟糕,但它们仍然是坏事。目前我们真的很幸运,如果发生巨大灾难,比如AI泡沫破裂,我们有一位能以明智方式应对的总统。
I think they're genuine threats. They're not gonna destroy humanity. AI taking over might destroy humanity, so they're not as bad as that. And they're not as bad as someone producing a virus that's very lethal, very contagious, and very slow. But they're nevertheless bad things. I think we're really lucky at present that, if there is a huge catastrophe, an AI bubble that collapses, we have a president who'll manage it in a sensible way.
你是在以同样的方式谈论卡尼。杰弗里,我无法表达我的感激之情。首先,非常感谢你对我理解水平的极大耐心,以及用如此真诚和幽默的方式讨论这个问题。真的很感谢你花这么多时间与我们交流。杰弗里·辛顿是多伦多大学计算机科学系名誉教授,施瓦茨·赖斯曼研究所顾问委员会成员,自上世纪七十年代起就参与构思并实现AI技术。
You're talking about Carney, in the same way. Geoffrey, I can't thank you enough. You know, thank you, first of all, for being incredibly patient with my level of understanding of this and for discussing it with such heart and humor. Really appreciate you spending all this time with us. Geoffrey Hinton is a professor emeritus with the Department of Computer Science at the University of Toronto, a Schwartz Reisman Institute advisory board member, and has been involved in dreaming up and executing this type of AI since the nineteen seventies.
非常感谢你与我们交谈。
And I just thank you very much for for talking with us.
非常感谢你的邀请。
Thank you very much for inviting me.
如果你还在为无线服务支付过高费用,我要你离开这个国家。立刻消失。没有任何借口。在移动领域,她最喜欢的词是'不'。现在是时候对说'不'说'是'了。
If you're still overpaying for your wireless, I want you to leave this country. I want you gone. There's no excuse. In mobile, her favorite word is no. It's time to say yes to saying no.
无合约、无月费、无超额、无套路。这就是为什么这么多人选择转用每月仅需15美元的高级无线服务。天啊,我买口香糖都要花这么多钱。口香糖啊,我说。
No contracts, no monthly bills, no overages, no BS. Here's why so many said yes to making the switch and getting premium wireless for $15 a month. My god. I spend that on Chiclets. Chiclets, I say.
告别高价无线服务及其令人咋舌的月费、意外超额和附加费。套餐起价每月15美元。这意味着所有套餐均包含高速数据及无限通话短信,覆盖全国最大的5G网络。使用Mint Mobile任何套餐均可自带手机,并保留原有号码及所有联系人。准备好对高价说不了吗?
Ditch overpriced wireless and their jaw-dropping monthly bills, unexpected overages and fees. Plans start at $15 a month. At Mint, all plans come with high-speed data and unlimited talk and text, delivered on the nation's largest 5G network. Use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts. Ready to say yes to saying no?
立即登录mintmobile.com/tws办理转网。网址Mint Mobile点com斜杠TWS。需预付45美元(相当于每月15美元)。限时新客户优惠仅限前三个月。无限套餐流量超过35GB后可能降速。
Make the switch at mintmobile.com/tws. That's Mint Mobile dot com slash TWS. Upfront payment of $45 required equivalent to $15 a month. Limited time new customer offer for first three months only. Speeds may slow above 35 gigabytes on unlimited plan.
税费另计,详情请见Mint Mobile。我勒个去。
Taxes and fees extra, see Mint Mobile for details. Holy shit.
既悦耳又舒缓。嗯。我可能得用0.5倍速重听一遍,感觉里面塞了不少信息。
Nice and calming. Yeah. I'm gonna have to listen to that back on point five speed, I think. There was some there was some information in there.
他开暑期班吗?认真的吗?
Does he offer summer school? Seriously.
当他开始解释计算机如何计算鸟喙形状时——你知道吗?我最爱他这点,我不断追问'这对吗?',而他会回答'呃,不对,其实不是这样'。
Once he got into how the computer figures out a bird's beak, you know? And I love the fact that I kept saying, like, is that right? And he'd be like, well, no, it's not.
我很喜欢他对你的评价。没错,他说你完美扮演了一个对这个话题一无所知却充满好奇的人。
I loved his assessment of you. Yes. He said, you're doing a great job impersonating a curious person who doesn't know anything about this topic.
但我真的不知道。他以为我是
But I did not know. He thought I was
在假装。是的。
impersonating. Yes.
但我很喜欢他的说法,他说你就像个坐在教室前排的狂热学生,把班上其他人都烦得要死。
But I loved how he put it. He said, like, oh, you're like an enthusiastic student sitting in the front of the room, annoying the fuck out of everybody else in the class.
其他人都是抱着及格就好的心态来上课,而他们
Everybody else is taking it pass fail, and they
其他人都在...而我却还在说:老师等等,抱歉老师,我能再问一下
just. And I'm like, wait, sir. I'm sorry, sir. Can I just go back to
不好意思?还有一件事。
Excuse me? One more thing.
天啊,听那段发展历史真是令人着迷。
Boy, it was fascinating to hear the history of how that developed.
你现在能真切感受到它进展得有多快,这更加剧了无人出面监管背后的恐惧。当谈到人工智能的复杂性,想到像舒默这样的人要消化所有信息然后进行监管时
And you really get a sense for how quickly it's progressing now, which really adds to the fear behind the fact no one's stepping up to regulate. And when you're talking about the intricacies of AI and thinking of someone like Schumer ingesting all of it and then regulating it
哦,老天。
Oh, god.
在我看来,这真的像是要由科技公司来负责解释并选择如何监管。
It really, to me, seems like it's going to be up to the tech companies to both explain and choose how to regulate it.
没错。而且还要从中获利。你懂吧?
Right. And profit off it. You know?
是的,正是如此。
Yeah. Exactly.
这些机制如何运作。正如你所言,我们讨论它的发展速度和如何遏制。我认为部分原因在于,像核弹这样的东西为什么需要监管是非常明显的。某些病毒实验必须受到监督也是显而易见的。这次人们有点措手不及,因为科幻小说这么快就变成了现实。
How those things work. You know, you talk about that in terms of the speed of it and how to stop it. And I think maybe one of the reasons is, it's very evident with, like, a nuclear bomb, you know, why that might need some regulation. It's very evident that, you know, certain virus experimentation has to be looked at. I think this has caught people slightly off guard, that it's science fiction becoming a reality as quickly as it has.
我只是好奇,因为我记得十五年前就接触过禁止全自动武器的国际运动。人们一直在努力将这一议题引入公众意识。但正如他所言,终将有一个时刻让人们意识到,我们必须协调行动,因为这是关乎存亡的威胁。我只是好奇那个临界点何时会到来。
I just wonder because I remember fifteen years ago coming across the international campaign to ban fully autonomous weapons. Like, people have been trying for a while to Right. Put this into the public consciousness. But to his point, there's going to have to be a moment everyone reaches where they realize, oh, we have to coordinate because it's an existential threat. And I just wonder what that tipping point is.
在我看来,如果人们一如既往地行事,那要到天网系统启动后才会觉醒。就像全球变暖问题一样。人们总问我们什么时候会认真对待?我说,当海水漫到这里时——车里的各位请注意,我现在正指着自己相当突出的鼻梁中间位置。
In my mind, if people behave as people have, it will be after Skynet. Yeah. It will be, you know, the same way with global warming. You know, people say, like, when do you think we'll get serious about it? I go, when the water's around here. And for those of you in your cars, I am pointing to about halfway up my rather prodigious nose.
事情就是这样发展的。不过好吧。布列塔尼,大家有什么要补充的吗?
So that's how that goes. But there we go. Brittany, anybody got anything for us?
是的,先生。
Yes, sir.
好的,我们有什么发现?
Alright. What do we got?
特朗普及其政府似乎对一切事物、所有地方同时感到愤怒。他们是如何保持这种怒火常新的?
Trump and his administration seem angry at everything, everywhere, all at once. How do they keep that rage so fresh?
你们根本不懂当亿万富翁总统有多难。我说过无数次了,可怜的亿万富翁小总统啊。既要如此强大又要如此富有,你们不理解这负担有多重、多艰难。这真是令人烦恼。我我我都替他感到愤怒。
You don't know how hard it is to be a billionaire president. I've said this numerous times, poor little billionaire president. To be that powerful and that rich, you don't understand the burdens, the difficulties. It's troublesome. It makes me angry for him.
我是说,我一直在想,有没有人告诉他们赢了?好像这还不够。
I mean, I just keep thinking, like, has anybody told them that they won? Like, it's not enough.
不够。这还不够。情况在恶化。这是野蛮人柯南。我甚至能听到他们女人的哀嚎。
Not enough. It's not enough. It goes down. It's Conan the Barbarian. Hear the lamentations of their women.
我要把他们赶下海。这简直太疯狂了。
I will drive them into the sea. Like, it's it's bonkers.
但他们所有人都是这样。总得有人告诉他,那些愤怒对他的健康也不好,我们都看到了健康状况。所以
It's all of them, though. Someone has to tell him that all that anger is also bad for his health, and we are all seeing the health. So
有史以来最健康的人——他是担任总统职位的人中最健康的。所以我不会担心这个。谁说的?是他的医生罗尼·杰克逊说的,但这就产生了一个新类别叫'痛苦的赢家'。你不常见到这种情况,但偶尔会出现。
He's the healthiest person to ever assume the office of the presidency. So I wouldn't worry about that. Says who? His doctor, Ronnie Jackson. But it has created a new category called sore winners. You don't see it a lot, but every now and again.
不过,是啊,就这样吧。他们还有什么招数?
But, yeah, that's that. What else they got?
约翰,当被问及是否会赦免吉斯莱恩·麦克斯韦或迪迪时,特朗普没有说不,这还让你抱有希望吗?
Jon, does it still give you hope that when asked if he would pardon Ghislaine Maxwell or Diddy, Trump didn't say no?
这是否意味着他们会被赦免?是的,我一直在关注这件事。这整件事简直太疯狂了。一个因性交易被定罪的女性。
Does that give me hope that they'll be pardoned? Yes. I've been on that. I find the whole thing insane. A woman convicted of sex trafficking.
然后他就说,嗯,我会考虑的。让我查查看。而你就说,查查看?首先你明明知道是怎么回事。你认识她。
And he's like, yeah, I'll consider it. You know, let me look into it. And you're like, look into it? First of all, you know exactly what it was. You knew her.
这不是你知不知道下面发生了什么的问题。你在说什么?我觉得帕姆·邦迪问的那些简单问题特别有意思。而她手里只有一堆像是写在本子上的嘲讽话。他们就像在说,我听说有他和裸体女人的照片。
This isn't you knew what was going on down there. What are you talking about? I thought it was so interesting to me that Pam Bondi was asked simple questions. And all she had was, like, a bunch of roasts written down on her page. They were like, I've heard that there are pictures of him with naked women.
你知道这事吗?而她回:你个秃子闭嘴。闭嘴,肥头大耳的。简直疯了一样,看着他们连最简单的问题都要回避,就像在说:啥?
Do you know anything about that? And she's like, you're bald. Shut up. Shut up, fathead. Like, it was just bonkers to watch the deflection, where the simplest thing would be like, what?
太离谱了。不,当然不是。这又不是...回到兽医那个话题,他们采取的策略就是提简单合理的问题。而我只会回:你胖,你老婆讨厌你。哦,好吧。
That's outrageous. No, of course not. That's not the idea. Again, going back to the vet, they took the tack of simple, reasonable questions. And I'm just gonna respond with, you know, you're fat and your wife hates you. Oh, alright.
我不觉得事情会那样发展。他们还能怎么和我们保持联系?
I didn't think that's where that was going. How else can they keep in touch with us?
推特,我们是周播节目播客。Instagram Threads、TikTok、蓝空,我们是周播节目播客。你可以在我们的YouTube频道《囧司徒每日秀》点赞、订阅和评论。
Twitter, we are weekly show pod. Instagram, Threads, TikTok, Bluesky, we are weekly show podcast. And you can like, subscribe, and comment on our YouTube channel, The Weekly Show with Jon Stewart.
坚如磐石。伙计们,非常感谢。天哪,我真的很喜欢听那家伙说话,感谢你们把所有内容整合在一起。我真的很享受。首席制作人劳伦·沃克,制作人布列塔尼·马梅多维奇,制作人吉莉安·斯皮尔,视频编辑兼工程师罗布·维托拉,音频编辑兼工程师妮可·博伊斯,以及我们的执行制作人克里斯·麦克沙恩和凯蒂·格雷。
Rock solid. Guys, thank you so much. Boy, did I enjoy hearing from that dude, and thank you for putting all that together. I I really enjoyed it. Lead producer, Lauren Walker, producer Brittany Mamedovic, producer Jillian Spear, video editor and engineer, Rob Vitola, audio editor and engineer, Nicole Boyce, and our executive producers Chris McShane and Katie Gray.
希望你们喜欢这一期,我们下次再见。再见,播客。本节目由派拉蒙音频和Busboy Productions联合制作。
Hope you guys enjoyed that one, and we will see you next time. Bye podcast. It's produced by Paramount Audio and Busboy Productions.
派拉蒙播客。
Paramount Podcasts.