本集简介
双语字幕
你好。本期《巴贝奇》节目可免费收听。但如果你想每周收听,需成为《经济学人》订阅用户。获取完整详情,请点击节目说明中的链接,或在线搜索经济学人播客。
Hello. This episode of Babbage is available to listen for free. But if you want to listen every week, you'll need to become an Economist subscriber. For full details, click the link in the show notes or search online for Economist podcasts.
全球食品系统不仅存在裂缝,已然支离破碎。我们正以前所未有的速度和低成本生产食物,却仍有超过8亿人忍饥挨饿。《经济学人影响力》发起的全新全球倡议'食品使命',为决策者提供构建更安全、更具韧性食品系统所需的数据与工具。因为养活世界不该以耗尽地球为代价。
The global food system isn't just cracked, it's broken. We're producing food faster and cheaper than ever before. Yet over 800 million people go hungry. The Food Imperative, a new global initiative from Economist Impact, equips decision makers with the data and tools needed to build a more secure and resilient food system. Because feeding the world shouldn't starve the planet.
访问impact.economist.com/foodimperative了解更多。
Visit impact.economist.com/foodimperative to learn more.
《经济学人》。
The Economist.
我们刚抵达曼彻斯特郊外的一处工业区。眼前是栋低矮的灰色工业建筑,外面矗立着几个巨型储罐。外观看似平平无奇,但内部正在进行着非凡之事。
So we've just arrived at an industrial estate outside Manchester. We're looking at a sort of low grey industrial building with some huge tanks outside. Looks very unassuming from the outside, but there's something pretty special going on inside.
安斯利·约翰斯顿是《经济学人》的数据记者兼科学通讯员。你好。
Ainsley Johnston is a data journalist and science correspondent for The Economist. Hello.
你好,安斯利。嗨。嗨。初次见面。嗨。
Hello, Ainsley. Hi. Hi. Nice to meet you. Hi.
很高兴认识你。
Very nice to meet you.
这位是史蒂夫。
This is Steve.
嗨,我是史蒂夫。
Hi. I'm Steve.
嗨,史蒂夫。
Hi, Steve.
欢迎来到英国生物银行影像中心。
Welcome to the UK Biobank Imaging Center.
谢谢。她最近去参观了英格兰北部的一个脑成像实验室。
Thanks. She recently went to visit a brain imaging lab in the North of England.
英国生物银行影像研究为每位参与者贡献了约9000张图像。
The UK Biobank imaging study ends up with each participant contributing about 9,000 images.
达维德·达苏是英国生物银行影像运营的负责人。
Dawid Dasu is the head of imaging operations at UK Biobank.
这些数据不仅能告诉你大脑的大小、体积和结构,还能揭示大脑功能。比如在执行特定任务时哪些脑区活跃。此外,我们还能测量关键脑区的血流情况。每位参与者仅大脑数据就为研究数据库贡献了约2500个变量,供科研人员使用。
So these are things that tell you about the size, volume, the structure of the brain, but also tells you about brain function as well. So which parts of the brain are active during certain tasks. And, we also have something which gives us a measure of flow of blood in key parts of the brain as well. So each participant contributes just from brain around two and a half thousand variables to the data set that we upload for researchers to use.
英国生物银行拥有庞大的生物医学数据库,收集从基因组序列到饮食信息等各类数据。达维德提到的影像研究旨在扫描所有参与者的心脏、骨骼和腹部。这些扫描将帮助科学家探索宇宙中最复杂物体——人类大脑的奥秘。参与者在MRI扫描仪内会执行简单任务。
The UK Biobank maintains a huge database of biomedical data. It collects everything from genome sequences to information on people's diets. The imaging study that Dawid is talking about here aims to scan everything from the hearts to the bones and abdomens of all the participants. Those scans will help scientists delve into the intricacies of one of the most complicated objects in the entire universe, the human brain. While participants lie inside an MRI scanner, they're given a quick task to do.
这是个快照游戏:给参与者看三张人脸图像,需将顶部人脸与左右两侧匹配。用外行话说,这会激活大脑的决策区域。研究人员会对比受试者在相同磁场环境下无任务时的脑部活动。
It's a snap game. So you got three images, three faces. They have to match the top face to either the left or the right hand side. And that, in my layman's language, it lights up the parts of the brain that are involved decision making. They'll compare that to what was happening earlier on when the same magnetic fields were being applied but there was no task.
众所周知,人类大脑非凡绝伦。数十亿脑细胞与化学反应交织,竟能涌现语言、记忆、视觉、信息处理乃至肌肉控制等无数能力。其整体远胜部分之和,因为人类大脑正是我们所谓'智能'的核心。人类智能成就了我们这个物种的辉煌,但奇怪的是,我们对人类智能(乃至任何智能)究竟为何物仍知之甚少。而若想理解人工智能,理解人类智能必然是起点。
We all know that human brains are remarkable. Somehow from a tangle of billions of brain cells and a soup of chemical reactions emerges a vast range of skills: language, memory, vision, the ability to process information and even control muscles and much much more. And the sum is much greater than the parts because human brains are also at the centre of what we call intelligence. Human intelligence has driven the success of our species which perhaps makes it odd that we still have so much to learn about what human intelligence, in fact any intelligence, actually is. But understanding human intelligence has to be the starting point if you want to understand the artificial type too.
这正是我们这档四集特别节目的目标——探索构建AI革命的科学。我是阿洛克·贾,《经济学人》的巴贝奇。本期节目我们将追溯最早期的AI系统如何从人脑获得灵感。在四集系列中,我们将剖析造就当今AI时刻的科学思想与创新,穿透炒作与术语迷雾,解读理解生成式AI起源必知的八大核心理念。
That's our goal in this special four part series on the science that built the AI revolution. I'm Alok Jha and this is Babbage from The Economist. In today's show, we'll look at the very earliest AI systems and how they took inspiration from the human brain. This is the first of four episodes in which we'll examine the scientific ideas and innovations that have led to the current moment in AI. We're gonna get behind the hype, buzzwords and jargon, and explore eight ideas that we think you need to know if you want to understand how the generative AI of today came to be.
我们将探讨人工神经网络的本质。
We'll explore what artificial neural networks really are.
神经细胞会发出‘吧吧吧吧吧’的信号。当我们设计神经启发的人工系统时,关键在于它是否发送脉冲——这一洞见赋予了我们现今使用的全部信息处理能力。
The nerve cell will go ba ba ba ba ba ba ba ba. When we come to think about neurally inspired artificial systems, the fact that it's sending a pulse, or isn't, is the critical insight that gives us all the information processing power that we use now.
从最早尝试用硅材料模拟人脑
From the earliest attempts to model the human brain in silicon
当时我们构建的系统如此愚笨、脆弱且难以训练。
The systems we were building were so dumb and so weak and so difficult to train.
到实现这些模型规模化应用的技术突破
To the technologies that enabled those models to be scaled up.
ImageNet是AI历史的转折点,它让人们认识到大数据应用的决定性意义。
ImageNet was the turning point of AI's history, recognizing how critical it is to use big data.
我们将了解为何约十年前,AI突然取得了惊人进步。
We'll hear why finally, around a decade ago, AI got astonishingly good.
突然间一切开始奏效,人们开始关注我们的成果。已有多个案例证明,计算机视觉系统能在专家领域超越人类。
All of a sudden things are working and people pay attention to what we do. We have a number of examples where computer vision systems can beat human experts at their own game.
以及这些系统如何持续精进。
And how those systems just kept on getting better.
从GPT-2到GPT-3的跨越是巨大的,而从GPT-3到GPT-4的飞跃同样惊人。
The change from, say, GPT-2 to GPT-3 was huge. The change from GPT-3 to GPT-4 was huge.
我原本不认为大型语言模型能像现在这样运作得如此出色。我只是想,总不能直接把整个互联网数据扔给它,就能实现下一个词的预测,并让它表现得像人类一样。你知道吗?我大错特错了。事实是你确实可以。
I did not think the large language models would work as well as they do. I just thought, you can't just throw the whole internet at it and be able to get next word prediction and have that seem like a human. You know what? I was dead wrong. You can.
若要理解人工智能的起源,最好从‘智能’这个词入手。因此我们这个系列的第一个问题是:什么是智能?为了精确探究人类大脑的工作机制,让我们跟随记者安斯利·约翰斯顿继续之前未完成的研究。
If you want to understand the origins of artificial intelligence, it's best to start with the second of those words. So our first question in this series is this: what is intelligence? To figure out exactly how the human brain works, let's pick up where we left off with our correspondent Ainsley Johnston.
位于曼彻斯特郊外的英国生物样本库中心每周七天为患者进行脑部扫描,他们的目标是完成10万人的影像采集。影像项目主管史蒂夫·加勒特解释了参与者需要经历的流程。
The UK Biobank Centre just outside of Manchester scans patients' brains seven days a week as they work towards their goal of imaging 100,000 people. Steve Garrett, the imaging program manager, explained the process the participants undergo.
我们现在身处影像诊所。来这里参与的访客将进行约四到五小时的检查。
We are in the imaging clinic. We've got participants in here who are coming for around a four to five hour visit.
他们需要在电脑上做测试之类的吗?
Are they doing tests and things on the computers?
我们有触屏问卷系统,参与者需要详细回答关于健康和生活方式的问题,同时他们也要完成认知测试。
We have a touchscreen questionnaire where they give really comprehensive answers to anything about their health and lifestyle, but they also do the cognition tests.
谁想坐下试试?哦当然,我来坐吧,太好了。这是诊所的健康研究助理乔安妮·诺里斯。
Who wants to have a seat? Oh sure, I'll have a seat, yeah great. That's Joanne Norris, one of the health research assistants at the clinic.
我们会明确告知他们将进行25分钟的计时环节,包括游戏、谜题和记忆测试。请先阅读黄色部分的说明,准备好后点击旁边的笑脸按钮。
We explain obviously that they're going to go through a twenty five minute timed section on some games, puzzles, and memory tests as well. So have a read in the yellow, and then when you're ready, press that smiley next button.
好的。好的。好的。这个游戏会有三组配对。对吧?
Okay. Okay. Okay. The game will have three pairs. Right?
明白了。现在我面前有六张卡片,它们已经被翻过来了。我需要找出...这张是个男人,这张也是个男人。我想这张应该是个方块。
Okay. So I can see six cards in front of me, and now they've been turned over. And so I've got to find okay. This one was a man, and this one was a man. I think this one was a square.
哦,不。哦,不。这太尴尬了。好吧。这是一个风筝,这也是一个风筝。
Oh, no. Oh, no. It's embarrassing. Okay. This one is a kite, and this one is a kite.
很好。正方形。正方形。好的。很好。
Great. Square. Square. Okay. Great.
好的。现在我被要求把以下数字相加:一、二、三、四、五。明白吗?所以总和是15。
Okay. So I'm being asked to add the following numbers together. One, two, three, four, five. Okay? So that equals 15.
如果特鲁达母亲的兄弟是蒂姆姐姐的父亲,那么特鲁达与蒂姆是什么关系?特鲁达母亲的兄弟。对,那是特鲁达的舅舅。他是蒂姆姐姐的父亲。
If Truda's mother's brother is Tim's sister's father, what relation is Truda to Tim? Truda's mother's brother. Right. So that's Truda's uncle. He's Tim's sister's father.
那就是蒂姆的父亲。特鲁达的舅舅是蒂姆的父亲。那么特鲁达应该是他的姑姑,我想。哦,天哪。
That's Tim's father. Truda's uncle is Tim's father. Then Truda must be his aunt, I think. Oh, god.
安斯利,这时候我们的一些参与者要了纸和笔。
And Ainsley, at this point, some of our participants asked for a pen and paper.
我觉得我需要纸和笔。我感觉我们可能已经收集得够多了,而且我可能正在出洋相。不过这些测试的意义远不止是拿记者开玩笑。每项测试的分数有助于描绘出参与者认知能力的独特画像。这对研究人员来说是强有力的数据,尤其是与即将收集的生物医学数据相结合时。
I think I need a pen and paper. I feel like we've probably got enough of this, and I think I'm probably embarrassing myself. These tests are about a lot more than just making fun of journalists, though. The scores from each of the tests help to paint a unique picture of participants' cognitive abilities. This is powerful data for researchers, particularly in combination with the biomedical data that's about to be collected.
然后他们会去换衣服。换好衣服后,其中一人会去做脑部扫描。
And then they'll go and get changed. And after they're changed, one of them will go for their brain scan.
在一条贴满强磁场警告标志的走廊尽头,是脑部核磁共振成像仪。这些机器看起来像巨大的甜甜圈。参与者躺在一张床上,然后他们的头部和肩膀被移入扫描器的孔道中。我们进入了隔壁的控制室。从这里,放射技师控制扫描仪,检查正在采集的脑部图像质量,并确保参与者感到愉快和舒适。
At the end of a corridor full of warning signs for a strong magnetic field is the brain MRI machine. These machines look like giant donuts. The participant lies down on a bed, and then their head and shoulders are moved inside the bore of the scanner. We entered the control room next door. From here, a radiographer controls the scanner, checks the quality of the brain images that are being collected, and makes sure that the participant is happy and comfortable.
任务现在开始。
The task is starting now.
其中一位放射技师安吉拉·埃蒙斯带我了解了整个流程。
Angela Emmons, one of the radiographers, took me through the process.
这是半小时的脑部扫描。前二十五分钟,你只需要保持静止不动。然后会有一个任务出现。这个任务就是在脑部实际工作时观察它。我们先在他们静息时运行一个早期序列,然后在他们玩快速配对游戏时运行两分钟该序列。
It's a half hour scan of the brain. First twenty five minutes, you just need to keep nice and still. Then there's a task coming up. The task is just to look at the brain when it's actually working. We run an earlier sequence when they are at rest and then just run two minutes of that when they're undertaking a game of snap.
我们会向他们展示一系列形状和一系列面孔。整个过程大约持续两分半钟,结束后他们在扫描仪里还有大约两分钟时间。
We show them a series of shapes and we show them a series of faces. Runs about two and a half minutes, and then when that comes to an end, they've got about another two minutes left in the scanner.
当参与者在扫描仪里时,你们在控制室能看到什么?
While the participant's in the scanner, what can you see in the control room?
会显示大量图像。这些图像是实时显示的。我们检查分辨率,确保获得清晰的图像,参与者状态稳定,然后按照序列流程进行操作。
Lots of images come up. The images come up in real time. We check the resolution, make sure we've got good images, participants settled, and then just follow that through the sequences.
这能让你对智力有所了解。
This gives you an idea of intelligence.
这是达维德·达苏,我们在播客开头听到过他的声音。
That's Dawid Dasu, who we heard at the start of the podcast.
你可以研究基因组数据能解释参与者间多少差异。你可以查看我们的影像数据。你甚至还可以考察历史因素,比如生活方式、工作、饮食等。所有这些都可以纳入研究,我相信有人会找到方法将这些因素综合起来分析。
You can look at how much of that variation amongst participants is explained by genome data. You can look at our imaging data. You might even be looking at history as well, so lifestyle, job, diet, and things like that. You could look at all of that as well, and I'm sure somebody will figure out a way of looking at all of that together.
利用生物银行数据,科学家们发现更大的脑容量——尤其是更大的前额叶皮层——与更高的智力相关。大脑不同区域之间的特定交流模式也能预测人们在认知测试中的得分。不过,智力仍有很大变异性是科学家无法通过这些脑部测量来解释的。但像英国生物银行这样庞大数据库的访问权限,正让科学家得以解析我们脑中错综复杂的神经元网络如何使我们能够研发疫苗、将人类送上月球,甚至创造人工智能。
Using the biobank data, scientists have discovered that having a larger brain, and in particular a larger frontal cortex, is associated with higher intelligence. There are also certain patterns in how different parts of the brain communicate with each other that can predict people's scores on cognitive tests. There's still a lot of variability in intelligence that scientists can't explain using these measures of the brain, though. But access to enormous datasets like the UK Biobank is allowing scientists to pick apart how the tangle of neurons inside our heads has enabled us to develop vaccines, send a man to the moon, and even create AI.
全球许多研究者使用英国生物银行及其他来源的数据来研究大脑中的智力。但要明确界定人类大脑中的智力并非易事。比如,大脑中并没有单独负责智力的特定区域。而且越是深入研究,就越难定义智力究竟是什么。因此让我们退一步,从更基础的层面看看大脑如何运作。
Lots of researchers from around the world use data from the UK Biobank and other sources to investigate brain intelligence. But intelligence in human brains is not something that's easy to pinpoint. There isn't one bit of the brain that's responsible for it, for example. And the more you get into it, the harder it gets to define what intelligence even is. So let's take a step back and look at how the brain works at a more basic level.
为此我采访了丹尼尔·格雷泽。他是伦敦大学哲学研究所的神经科学家,研究方向是神经科学与人工智能的交叉领域。
To do that I spoke to Daniel Glaser. He's a neuroscientist at the Institute of Philosophy, part of the University of London. He works at the intersection of neuroscience and AI.
我们对大脑结构了解颇深,在分子运作层面也掌握大量知识。我能详尽描述单个神经元的结构,也能说明大脑前额与后部的功能区别。但微观层面的变化如何引发宏观层面的差异,这仍是个谜。尽管我通晓大脑各层级的运作机理,却无法将这些分子层面的精妙细节整合成解释整体行为的完整叙事。
We know a lot about how the brain is structured and we know a lot about how it works in the sense of how the molecular level works. I can tell you in exquisite detail about the structure of the individual neurons and, at the level of the whole brain, I can tell you what the front does and what the back does. What I can't tell you is what difference at the microscopic level makes the difference at the macroscopic level. So although I know all of these levels of description of the brain, I can't give you a coherent story that tells you how the overall behaviour derives from all this exquisite detail that I do know about the molecules.
那让我们深入这些精妙细节吧。请为我描述下解剖结构及其功能原理。
Let's go into a bit of that exquisite detail then. Just describe the anatomy for me and how the anatomy functions.
大脑是神经元的集合体,也就是神经细胞。虽然全身都分布着神经细胞(如痛觉感受器等),但在大脑中它们密集聚集成团。几乎所有神经细胞的核心特性都是利用电信号进行远距离传导,这衍生出两个特征:一是这类细胞通常呈细长形态。
So brains are collections of neurons, which are nerve cells. And while nerve cells exist throughout the body, pain detectors and all sorts of things like that, in the brain they're all clumped together in a big mass. And the principal property that almost all nerve cells have is that they use electricity to send signals over a distance. There are two things that derive from that. One is that these cells are often elongated.
人体大多数细胞呈圆团状,而神经细胞特征性地具有被称为轴突的细长突起。你可以将其想象成电线,神经细胞正是借助电信号沿着这条'电线'传递信息。因此神经细胞是一种信号装置,通过轴突这条长长的通道将信息从细胞一端传至另一端,依靠的正是电。
Most cells in the body are roundish, clumpy, they have a shape like that. Nerve cells characteristically have a long extended process, which we tend to call an axon. You really can think about this extended process like a wire. And like a wire, nerve cells send information along it using electricity. So nerve cells are signaling devices that get, if you like, information from one bit of the cell to the other bit of the cell along a long bit called the axon, and they use electricity to do that.
从感知世界的角度,请解释这些细胞网络如何识别气味或进行学习。
Just in terms of how that manifests in sensing the world, just explain to me how the network of these cells smells something or learns something.
要理解这个机制,我们需要回溯约七亿年前的进化历程。生物最初用化学物质传递信息(比如感知危险后收缩触须),但这仅限于极短距离。当生物体逐渐增大时,它们需要远距离传递气味、天敌和食物等信息。于是进化(姑且这么说)在约七亿年前改造了细胞内原本用于信号传递的蛋白质,将其与电信号传导系统耦合。
I think to understand how this works, you can actually go back in evolution around about seven hundred million years. You could use chemicals to send information: oh, there's something nasty there, pull back, and you could retract your feelers. But that only works at very short distances. And for animals and cells to get bigger, for organisms to get bigger, they needed to communicate information about smells, about predators, about food over longer distances. And so what evolution did, if we can say it that way, about seven hundred million years ago, is to use some of these proteins that were being used for signalling within cells and wire them up to an electrical signal.
在传导终端,电信号又被转化回化学信息,从而激活网络中的其他细胞。有趣的是,这种信号传导机制的进化,与人类电报技术的发展如出一辙。要实现远距离可靠通信,就需要编码系统(比如摩斯电码)。第一条跨大西洋电缆正是通过脉冲实现可靠解码,而七亿年前,进化早已悟出这个原理。
And then at the other end, they turn them back into chemical information, which they then use to set off other cells in the network. And that insight interestingly, which was about signaling, is paralleled in the evolution, in human terms, of what we would call telegraphy. If you want to send a signal reliably over long distances, you want to be using some kind of code, for example Morse code. So the first transatlantic cable used pulses which could be reliably read out at the other end. And it turns out that seven hundred million years ago, evolution came up with the same insight.
神经细胞的关键在于用电脉冲传导信号——这不是连续渐变信号,而是离散脉冲。这种脉冲式信息传导是理解神经细胞的要害。当我们设计神经启发的人工系统时,这种阈值触发机制(要么激发脉冲要么不激发)正是赋予现代信息处理能力的核心洞见。
So the critical thing about nerve cells is that they use electricity to signal, but that code is not a kind of more-or-less, continuously modulated signal: it's pulses. And this transmission of information by pulses, it either fires or it doesn't, is the critical thing you need to know about nerve cells. When we come to think about neurally inspired artificial systems, the fact that it's thresholding, that it's sending a pulse or it isn't, doing a yes/no firing, is the critical insight that gives us all the information processing power that we use now.
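The all-or-nothing pulse described here can be sketched as a toy function in Python (the function name and threshold value are our own illustrative choices, not anything from the episode):

```python
def fires(membrane_potential: float, threshold: float = 1.0) -> int:
    """A nerve cell either sends a pulse (1) or it doesn't (0).

    There is no half-strength spike: the output is all-or-nothing,
    which keeps the signal readable over long distances, much like
    the discrete dots and dashes of Morse code over a telegraph cable.
    """
    return 1 if membrane_potential >= threshold else 0

# A weak input stays silent; a strong one produces a full-size pulse.
print(fires(0.4))  # below threshold: no spike
print(fires(1.7))  # above threshold: one full-size spike
```

Whether the input is just past the threshold or far past it, the pulse itself is identical; only whether and how often the cell fires carries information.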
这么说来,某种程度上它算是原始版本的数码信号?
And it's kind of, in a very sort of crude way, a kind of a digital signal in that respect?
一点也不粗糙。从信息是1或0的角度来看,这是一个数字信号。神经元要么放电,要么不放电。理解神经细胞的好方法或许不是从大脑开始,而是想象收缩肌肉。你想从脊髓向手臂肌肉发送信号。
Not crudely at all. It's a digital signal in the sense that the information is a one or a zero. The cell either fires or it doesn't. A good way to think about nerve cells probably is not so much to start with in the brain, but to think about flexing a muscle. You want to send a signal from your spinal cord to your muscle in your arm.
如果你想更用力收缩肌肉,阿洛克——我现在要模仿神经元说话——神经细胞会这样:哒哒哒哒哒。如果想轻微收缩,它会这样:哒...哒...哒。同理,如果是痛觉感受器,当感到轻微疼痛时,你会接收到哒、哒、哒的信号;当剧痛时,神经元就会哒哒哒哒哒哒哒哒连续放电。这种速率编码——即放电频率的差异(还存在更精妙的编码方式)——本质上是通过时间维度传递的是/否信号,而非像声音那样通过振幅调制的连续信号。
If you want the muscle to contract more, Alok, I'm going to sound like a neuron for a second, the nerve cell will go ba-ba-ba-ba-ba. If you want to contract a little bit, it will go ba... ba... ba. Similarly, if you have a pain receptor and something's a bit painful, you'll get that, that, that. If something's really, really painful, the nerve cell signals that by going ba ba ba ba ba ba ba ba. So the rate coding, we would say, the rate at which things fire, and there can be more subtle codes, is a yes/no signal that contains information over time rather than an amplitude-modulated smooth signal as you might have in the nuances of your voice.
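That rate coding idea, intensity carried by how often identical pulses arrive rather than by how big they are, can be sketched in a few lines (the duration and maximum firing rate below are invented parameters for illustration):

```python
def spike_train(intensity: float, duration_ms: int = 100, max_rate_hz: float = 200.0) -> list[int]:
    """Rate coding: a stronger stimulus does not make each spike bigger,
    it makes identical spikes arrive more often."""
    rate = max(1e-9, intensity * max_rate_hz)   # spikes per second for this intensity
    interval = max(1, round(1000.0 / rate))     # milliseconds between spikes
    # One time step per millisecond; a 1 marks a spike, a 0 marks silence.
    return [1 if t % interval == 0 else 0 for t in range(duration_ms)]

mild = spike_train(0.1)    # "a bit painful": sparse pulses
severe = spike_train(0.9)  # "really painful": dense pulses
print(sum(mild), sum(severe))  # the spike counts differ; the spike size never does
```

Every entry in either train is a 0 or a 1; only the firing rate distinguishes mild from severe.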
那么在大脑里,神经元紧密相连。它们形成的网络承载着各种功能、记忆等等。大脑中的神经元是如何协同工作来学习事物的?无论是语言、捕食者的模样还是其他任何东西?
So in your brain, the neurons are very close together. They exist in networks which represent all sorts of functionality and memory, etcetera. So in your brain, how do the brain cells, the neurons work together to learn something, whether it's a language or what a predator looks like or whatever else?
当神经元以个体或网络形式连接时,连接强度各不相同。并非每个输入细胞对目标细胞的影响都相同。想象一个细胞接收着上千个细胞的输入,这些细胞都在放电,但每个脉冲对目标细胞的输入强度不同。我们通过突触(传入的神经连接)强度来控制输入量。
When neurons are connected to each other individually or in networks, there is a strength of the connection. So, you don't get the same bang for your buck from each of the cells that connects into a particular cell. Imagine you've got a cell and it's got thousands of other cells connecting into it. Those thousands of other cells are firing, but each of the pulses from those cells does not give you the same input to the cell that's the target. We can control the amount of input that you get from a cell by the strength of what we call a synapse, the wire that comes in.
用人类世界类比的话,你可能会向所有朋友征求餐厅推荐,但会更重视其中某位美食达人、或熟悉某类菜系、或了解城市的朋友。虽然他们都在说披萨、汉堡、该去那家印度菜、那家亚洲餐厅等等。但你会选择性倾听——接收所有建议,却上调某些输入的权重,下调其他的。这就是连接强度的概念。如果事实证明你选的餐厅确实不错...
And if you like, to take an analogy from humankind, you might ask all of your mates for a restaurant recommendation, but you're gonna pay more attention to one of your friends who's good with food, or likes that kind of cuisine, or knows the city better than another. So they're all saying pizza, burger, we should go to that Indian place, we should go to that Asian restaurant, whatever. But you're listening selectively: you might say, well, I hear all of those inputs, but I'm gonna upregulate one and downregulate the other. So that's the strength of connections. In learning, if it turns out that the restaurant you chose was a good one
根据你的经验。
From your experience.
你去餐厅后发现太棒了,就会感叹'这家店太惊艳了',接着问'是谁推荐来着?'然后想起来'是阿洛克'。
You go to the restaurant. It was great. You'll say, oh, that restaurant was amazing. And then you say, who was it that recommended that restaurant? And you go, Alok.
知道吗?下次再找餐厅推荐时,我会提高阿洛克建议的权重,降低那个推荐...(你知道的)不靠谱建议的人的权重。
Well, do you know what? Next time I'm looking for a recommendation for a restaurant, I'm gonna upweight Alok's signal compared to the other guy who, you know, didn't recommend it.
在上千条推荐中筛选。
Within thousands.
没错。共同放电的神经元会强化连接。当神经元放电时,它就像在说:'我被激活了,现在要追溯是哪些输入导致了这个结果'。
Yeah. So things that fire together wire together. When a neuron fires, it says, okay. I got excited. Now I'm asking, what was the input that got me to the place I am?
我会微妙地增强那些输入信号,这样在未来,那些曾引导我到达美好境地的路径更有可能再次激活。
And I'm gonna subtly upregulate those inputs so that in future, the ones that got me to this good place are more likely to get me going again.
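The restaurant analogy is a plain-language account of Hebbian learning ("fire together, wire together"). A minimal sketch, with made-up weights and learning rate:

```python
def hebbian_update(weights: list[float], inputs: list[float], output_fired: bool, lr: float = 0.1) -> list[float]:
    """When the neuron fires, nudge up the weight of each input that was
    active, so the inputs that led to a good outcome count for more next time."""
    if not output_fired:
        return list(weights)  # no firing, no change
    return [w + lr * x for w, x in zip(weights, inputs)]

# Alok's tip (input 0) was active when the restaurant turned out well;
# the other friend's (input 1) was not, so only Alok's connection strengthens.
weights = [0.5, 0.5]
weights = hebbian_update(weights, inputs=[1.0, 0.0], output_fired=True)
print(weights)  # Alok's weight is upregulated, the other is unchanged
```

The update only ever touches inputs that were active alongside the firing, which is exactly the "subtly upregulate the ones that got me here" behaviour described above.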
这种学习过程,这种在化学层面上加强神经连接的现象,具体发生了什么?
That learning, that strengthening the connection, at a chemical level, what's happening?
在化学层面,神经递质会发生变化,通常这会改变树突结构——那些被称为树突棘的微小突起能让每个激活的神经元向目标细胞释放更多神经递质。这既改变了神经化学环境,也在小程度上改变了神经解剖结构,实质上是调整了神经元的微观结构,使得特定细胞能从前序激活细胞获得更多输入信号。
At a chemical level, there are neurotransmitters which change, generally speaking, the structure of the dendrites so there are things called spines that basically allow each neuron that fires to release more neurotransmitter to that cell. So it changes the neurochemistry and to a small extent the neuroanatomy. It really just changes the microstructure of the neurons so that you get more input to a particular cell from the cells that fired previously.
让我们宏观来看。人们总追问关于智力的问题:人类大脑真的智能吗?但这一切现象背后的根源是什么?
Let's zoom out. People always ask this question about intelligence and are human brains intelligent? But where does that come from in all of this?
顺便说,就像我在任何重要访谈前会做的那样,今早出门前我查了维基百科。如果你搜索'什么是智力',它会说'这是人类擅长的事'——这说法有点戏谑,但确实存在某种循环论证。比如当我们寻找动物甚至植物的智力时,有些有趣理论认为森林具有智能。
By the way, as I would before any respected interview, I looked it up on Wikipedia before I came out this morning. If you look up on Wikipedia, what is intelligence? It says it's that thing which humans are good at, right? That's a bit facetious, but there is a sort of circularity. And so, for example, when we look for intelligence in animals or indeed in plants, there's some nice stuff about forests being intelligent.
本质上,如果你加速观察森林生态系统,它们会展现出深思熟虑的特质:它们慷慨互助,当同伴被砍伐时会'感到痛苦'。我们在动物身上寻找智力时,大体是在寻找类似人类行为的表现。当然这个定义可以更精准,但最初级的理解就是:智力即'像我们一样思考'的能力。
Basically, if you just speed up forests, then they kind of think things through, they're generous, they look after each other, and they feel pain when their fellows are chopped down. When we look for intelligence in animals, broadly speaking, we're looking for things that they do that are like things that we do. Right? Now, I can do better than this. But actually, as a starting point, intelligence is what thinks like us.
那么具体拆解一下。智力的内涵是什么?即便无法精确定义,我们通常认为它包含哪些要素?
And so just break that down. What does intelligence mean? Even if we can't define it exactly, what are the kind of components of what we think of as intelligence?
智力是深度思考的能力,其证据在于可跨领域应用:能抽象分析事物结构并迁移至其他场景,能整合多领域知识解决问题——这需要记忆广度与理解深度。语言作为抽象符号系统,对智力发展至关重要。很难想象没有语言这类抽象思维工具的生物能具备真正智力。
So intelligence is the ability to think things through. And the evidence for that is that you can apply it to different domains. You can abstract things, look at something and see its structure, and apply it to other things; you can bring knowledge of different domains to bear on certain problems. That requires a kind of memory and breadth of reference and understanding. It turns out that language is quite a useful tool in helping one to be intelligent. It's difficult, maybe, to imagine a human or a creature that doesn't have any kind of symbolic abstract thought like language and is still intelligent.
虽然当我们观察章鱼等生物时,它们展现出问题解决、经验学习、深思熟虑等智能行为,但它们很可能不具备内在语言思维。这很有趣——
It seems to be very helpful to do that. Although, when we start to look at other organisms like octopuses, they exhibit behaviors which we might think of as intelligent. They solve problems. They learn from experience. They think things through.
它们会尝试不同方法解决问题。假设某项智力标准是'预判未来'的能力,你很快会发现总有动物具备该特质。比如某些鸟类会为越冬提前储藏食物,但这不意味着它们具备人类级智力。
They try stuff, and try things again differently after that. And they probably don't have an internal language of thought. It's interesting, Alok, if you think of any given thing. So, for example, the ability to project into the future, to think about a future; we might think of that planning as an intelligent thing. The problem is, as soon as you write down a single thing that's about intelligence, you can usually find an animal that does that particular thing.
对吧?所以如果你要找会做计划的动物,那就选鸦科鸟类,比如乌鸦这类生物。
Right? So if you want planning, go for corvids, like crow like creatures.
我们总说乌鸦很聪明,人们经常这么评价。
We call crows intelligent. People say it all the time.
确实如此。这是因为它们具备我们人类自认为聪明的特质——规划能力。举例来说,当乌鸦藏食物时,如果被同类或其他物种看见,它们会假装离开。等确认观察者离开后,再返回将食物转移到新地点。这种行为背后的逻辑是什么?
Quite so. And that's because they share a thing which we think of as intelligent ourselves, which is the ability to plan. So, for example, when crows hide stuff, if they're observed hiding a thing by another crow, or sometimes by a different species, they'll kind of wander away. And then when they're sure the one who saw them hide the piece of food is gone, they'll go back and move the food from its hiding place to somewhere else. Now why would you do that?
因为它们推演过:如果自己不尽快回来,目睹藏食过程的家伙就会来偷走食物。过去我们认为只有人类才有这种思维。但问题在于——就像你引导我思考的那样,Alok——当我们用单一标准定义智力时,总能找到符合的动物案例。
It's because you kind of have thought through that when your back is turned, if you don't come back soon, the one who saw you hiding it is gonna come and move it. So we used to think that only humans could do that. The problem is, as you're encouraging me to do, Alok, once you define a single thing, which is, yeah, do you know what, intelligence is that, I can probably find you an animal that can do something like that.
真正难以找到的,是能完成所有人类智力行为的动物。但这又陷入循环论证——我们称之为智力行为,恰恰因为这是人类擅长的。
What I can't find you an animal that can do is all the things that we count off as intelligent. But that's a bit circular again, because we call them intelligent because we do them.
是的。这种定义既过于简化,又不够全面。但作为科学家,我们仍需要在这个复杂世界里检验假设、测量特定指标。那么针对人类智力,神经科学家通常采用哪些测量或测试方法呢?
Yeah. So it is a bit reductive, and it's not at all comprehensive in the way that you can define intelligence. But as scientists, we want to try and test hypotheses, we want to try and measure specific things in this sort of slightly confusing world. So in terms of intelligence in humans, what are the ways that neuroscientists or others would try and measure that or test it?
我们可以观察人类进行智力活动时的脑部反应,特别是那些比低等动物更发达的脑区。通过对比人类与猴子的脑部差异,我们能解析复杂思维的神经回路。我认为智力本质上是操控外部系统的能力——很少有人能在不借助任何外部工具的情况下展现智力,即便这些工具已被内化。
So we can certainly look at what's going on in people's brains when they do things that we would consider intelligent. And we can also particularly do that in the bits of brains of which we have more than animals that are less intelligent than we are. So, we can learn by looking at the bits of the brain which are different in us from monkeys, and we can draw out the circuits which enable us to do that kind of complex thought. I do think that intelligence is something that allows us to manipulate objects. It's very rare for somebody to be just intelligent without using some kind of external system, even if they've internalized it.
语言就是内化的外部系统典范。但人类更擅长使用工具——就像此刻,房间里有操作录音设备的同事,你用Mac电脑组织思路查阅问题,这都是智力的体现。我们依赖这些'体外器官'。
Language would be an example of an external system which you put in your head. But actually, people use tools well. While we're talking, we've got somebody friendly in the room who's operating some complex sound recording equipment, and you're using a Mac to structure your thinking and look at the questions. That's intelligence. We use these prosthetics.
说到大型语言模型和当代AI发展,我们这类聪明人的特质就是善用工具。虽然我们会误以为工具也有智能(没人真觉得手机有智能),但确实通过它们扩展了自身智力——当然过度刷手机反而会削弱智力,但合理利用维基百科或信息存储功能就是智力延伸。这种工具使用能力,正是人类前额叶进化时期智力突飞猛进的重要标志。
And actually, again, when we come to think about large language models and the contemporary developments in AI, one of the things that intelligent people like us do is to make good use of these tools. Now, we also fool ourselves that they might be intelligent too. Nobody thinks that their phone is intelligent, really, but they use it to enhance their own intelligence. It can often defeat your intelligence through too much scrolling, but you can use it to extend yourself by judicious use of Wikipedia on the fly or by storing information in a helpful way. This ability to use tools is something that we observe in the history of humans, actually, when these frontal lobes developed, as a marker of a time when our intelligence probably really took off.
手机的例子很有趣不是吗?联网手机本质上是微型计算机,具备记忆和某种推理能力——这些确实符合智力特征,但它缺乏人类式的规划能力和抽象思维。
It's interesting with the phone example actually, isn't it? A mobile phone that's connected to the internet is basically a small computer. It has memory. It has some sorts of reasoning capabilities too. These are markers, as you say, of intelligence, but it doesn't have all of the things. It doesn't plan or abstract things in the way that humans do.
但我想从这方面来说,这是一种不同类型的智能,但我们绝不会称其为智能。你说得对。
But I guess it's a different type of intelligence in that respect, but we would never call it intelligent. You're right.
大体上是这样。我认为关于赋予智能的有趣问题值得稍加思考。快进到LLMs时代,像大型语言模型和机器学习这样的人工神经网络,我们无法关闭的那种必然倾向——让它们显得智能——反而使我们能更有效地使用这些工具。这并不意味着它们真有智能,但将其视为智能体能使我们以更有效的方式与之互动。当我们像你必然会问的那样,Alok,去评判这些机器是否聪明时,必须时刻警惕人类这种将智能赋予他人和机器的先天倾向。
In general, that's right. I think the interesting question about ascribing intelligence is worth pondering for a second. Fast-forwarding to LLMs, artificial neural networks like large language models and machine learning: our inevitable tendency, we can't turn it off, to make them seem intelligent allows us to use these tools more effectively. It doesn't mean they are intelligent, but treating them like they're intelligent enables us to engage with them in more effective ways. And when we come to ask, as I'm sure you will, Alok, whether these machines are smart or not, we must always beware of this innate capacity of humans to ascribe intelligence to others and to machines.
而当我们试图对我们建造的新机器做出判断时,这种倾向会误导我们。
And that will mislead us when we try to make judgments about the new machines that we've built.
好的。我们已经讨论了定义人类智能的困难,也探讨了从整体层面到细胞层面理解智能的多重难度。显然还有海量的未知领域。但我想,如果我们试图理解所有这些知识如何导向人工智能中'人工'部分的实现。
All right. Well, we've talked about the difficulty of defining human intelligence. We've talked about the difficulty of actually trying to understand it at all the different levels from the sort of whole level to the cellular level. There's clearly huge amounts still to learn. But I guess if we try and understand where all of this knowledge leads into how to do the artificial bit of the artificial intelligence.
那么当我们谈论那些寻求智能灵感以制造人工版本计算机科学家时,基于人脑构建人工智能是个好主意吗?我想这是他们唯一的选择,对吧?
So when we're talking about computer scientists who were looking for ways of being inspired by intelligence to make artificial versions of it, was it a good idea to try and build artificial intelligences on the model of the human brain? I suppose it was the only option they had, right?
当计算机科学家试图制造更聪明的机器时,他们观察到人类思维方式的关键可能在于那些湿漉漉的东西——神经元。我们可以探究他们关注了神经元的哪些特性,又是如何实现这些特性的。实际上他们确实回归了基础。要理解计算机领域的神经网络(大多数机器学习算法的工作方式),你真正需要从单个神经元开始:这是一个接收来自其他神经元输入的装置。
Well, when computer scientists tried to make smarter machines, one of the observations they made is that maybe what's important about the way that humans think is the wet stuff, the neurons. We can ask which properties of neurons they lit upon and how they implemented them. Actually, they did go right back to basics. To understand a neural network in the computing sense, which is the way most machine learning algorithms work, you really just start with a neuron: a device which takes inputs from a bunch of other neurons.
并非所有神经元对它的影响程度相同,这些被称为权重。就连微小如蠕虫的生物也是如此——每个作用于另一个神经元的神经元都以不同强度激发它。它会根据这些输入判断是否超过兴奋阈值。
Not all the neurons affect it to the same extent; those influences are called weights. This is true even of a tiny little worm: each neuron that comes onto another neuron excites it to a different extent. On the basis of those inputs, it works out whether it's past a threshold for excitement or not.
如果超过阈值,它就会'砰'地发出脉冲信号传递给下一个神经元。在这种架构基础上叠加学习规则(如我们之前所说'共同激活的神经元会加强连接'),通过调整神经元间的权重来增强那些在良好情境下共同激活的连接——这两个简单洞见就能构建出相当强大的计算学习机器。如今这些神经网络实际上是在数字架构中实现的,有趣的是,你用的仍是传统数字计算机(就像台式机或手机里的那种),但它运行的是对这些简单神经元的模拟。
If it does, it goes boom, and that ping, that spike, goes to the next one. Take that architecture and layer upon it a learning rule, which, as we said before, is that things that fire together wire together: by adjusting the weights between the neurons, you upregulate connections that tended to fire together in a good context. Those two simple insights give you quite a powerful computational learning machine. Now, when we talk about these neural networks, they're actually implemented in digital architecture. Funnily enough, you've got a good old-fashioned digital computer, like the kind that works in your desktop PC or in your phone, but it's running a simulation of these very simple neurons.
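The neuron Dan Glaser describes can be sketched in a few lines of code. This is a toy illustration, not anything from the episode: the activity values, weights, threshold and learning rate below are all invented numbers chosen to show the mechanism.

```python
def fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs reaches the threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

def hebbian_update(inputs, weights, fired, rate=0.1):
    """Crude 'fire together, wire together' rule: strengthen the weights on
    inputs that were active when the neuron fired."""
    if not fired:
        return weights
    return [w + rate * i for i, w in zip(inputs, weights)]

activity = [1.0, 0.0, 1.0]    # signals arriving from three other neurons
weights = [0.4, 0.9, 0.3]     # how strongly each input excites this one
fired = fires(activity, weights, threshold=0.5)   # 0.4 + 0.0 + 0.3 = 0.7
weights = hebbian_update(activity, weights, fired)
print(fired)   # True: the two active connections get slightly stronger
```

Running this once makes the neuron fire and nudges the weights on the two active inputs upward, exactly the "adjusting the weights" step described above.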
再想想人类神经元精妙的微观结构,光是描述单个人类神经元就需要数年时间。而我们将其抽象为若干输入、权重和放电模式——这种极度简化的神经元正是所有机器学习与当代AI背后人工神经网络的基础。
Again, if you think about the exquisite microarchitecture of human neurons, it would take years to describe even a single human neuron. Now, we abstract it into some inputs, some weights and a firing pattern. This very simplified neuron is the basis of all the artificial neural networks that underlie machine learning and current AI.
接下来我们将延续这个思路,从人类细胞转向硅芯片,看看创建人脑人工版本的最初尝试。现代AI教父之一将向我们讲述他的计算机系统首次展现出Dan Glaser所述能力的时刻。这些内容即将呈现。不过首先提醒,这是《Babbage》的免费试听集,要继续收听AI专题系列,您需要订阅《经济学人播客+》。
Next, we'll continue that thought and move from human cells to silicon chips and look at the first attempts to create artificial versions of the human brain. And one of the godfathers of modern AI will tell us about the first time his computer systems showed some of the skills that Dan Glaser has been telling us about. That's all coming up. First, though, just a quick reminder that this is a free episode of Babbage. To continue listening to our special series on AI, you'll need to sign up to Economist Podcast Plus.
现在正是最佳时机。我们正在进行促销活动,订阅每月不到2.5美元,但请抓紧,优惠将于3月17日星期日结束。作为订阅用户,您不仅能收听我们所有专业周播节目,
And now's the perfect time to do so. We've got a sale on. Subscribe for less than $2.50 a month, but hurry. The offer ends on Sunday, March 17. And as a subscriber, you won't only have access to all of our specialist weekly podcasts.
还能参与本系列完结后《巴贝奇》的首场直播活动。活动定于4月4日星期四举行,届时我们将尽可能解答您关于人工智能科学原理的疑问。别错过机会,您可提交问题、查询当地开始时间并预约席位,请访问economist.com/aievent(连写无空格),链接详见节目备注。
You'll be able to join us for Babbage's first ever live event following the conclusion of this very series. That's gonna be held on Thursday, April 4, where we're going to answer as many of your questions as we can on the science behind artificial intelligence. Don't miss out. You can submit your questions, check the start time in your region, and book your place by going to economist.com/aievent, all one word. The link is in the show notes.
人工智能、量子计算等技术领域的快速发展正在改变世界。如何实现技术的可持续与负责任发展?《2030进展》——经济学人影响力的新全球倡议,探讨技术如何重塑我们的工作、生活与繁荣。了解更多请访问impact.economist.com/progress2030。
Rapid advances in artificial intelligence, quantum computing, and other areas of technology are transforming our world. How can technology be made sustainable and responsible? Progress twenty thirty, a new global initiative from Economist Impact, examines how technology can redefine how we work, live, and thrive. Visit impact.economist.com/progress2030 to learn more.
本期《巴贝奇》探讨了大脑运作机制,并解读计算机科学家如何受神经科学发现启发构建智能系统。科学家们并未直接复制物理神经细胞,而是致力于构建虚拟神经元。这引领我们进入理解现代AI科学基础的下一阶段。问题二:第一个人工神经元是什么?为寻找答案,我们横跨大西洋来到马萨诸塞州波士顿。
Today on Babbage we've heard about how the brain works and we're trying to unpack how computer scientists who wanted to build intelligent systems were inspired by what neuroscientists had already found. But rather than building an artificial version of a physical nerve cell, scientists wanted to build virtual ones. And that leads us to the next step as we build our understanding of the science behind modern AI. Question two: What was the first artificial neuron? To answer this question, we traveled across the Atlantic Ocean to Boston in Massachusetts.
从波士顿市中心穿过查尔斯河的主干道两旁,聚集着谷歌、脸书和IBM等科技巨头的办公室。这片区域被称为全球最具创新力的平方英里,其吸引力源于两大顶尖学府——哈佛大学与麻省理工学院(MIT)。地球上少有地方能在现代人工智能发展史中占据如此核心地位。
Main Street, which carries traffic after crossing the Charles River from Central Boston, is awash with offices of some of the world's biggest tech firms, Google, Facebook, and IBM. This part of the city has been called the most innovative square mile on the planet. Companies are lured in because the area is dominated by two institutions, Harvard University and the Massachusetts Institute of Technology or MIT. Few places on the planet have played a more central role in the evolution of modern artificial intelligence.
我们对智能进行数学建模的探索可追溯至1943年,沃伦·麦卡洛克与沃尔特·皮茨首次提出神经网络概念。
Our quest to think mathematically about intelligence and to model our brains goes back to 1943, when Warren McCulloch and Walter Pitts introduced the concept of neural networks.
丹妮拉·鲁斯是麻省理工学院计算机科学与人工智能实验室(CSAIL)主任。
Daniela Rus is the director of the MIT Computer Science and Artificial Intelligence Laboratory, also known as CSAIL.
他们发表了首个数学模型,当时被认为能够模拟人脑运作机制。
And they published the first mathematical model that at that time was believed to capture what is happening in our brain.
既然神经元工作机制可用数学解释,那么通过代码复现大脑网络便成为可能。麦卡洛克与皮茨教授认为,类脑架构的机器将具备强大计算能力。
If the way that neurons work in the brain can be explained by mathematics, then the brain's network could surely be replicated using computer code. Professors McCulloch and Pitts thought that machines with brain-like architecture could have a lot of computational power.
早期人工神经元是极其简化的数学模型:这个计算单元接收来自其他单元的数据输入,输入值通过参数加权,在神经元内部仅进行阈值计算的简单处理。
The early artificial neuron was a very simple mathematical model. You had a computational unit that took as input data from other sources, maybe other units. The input was weighted by parameters. And then inside the artificial neuron, the computation was very simple. It was a thresholding computation.
本质上,如果加权输入的总和大于给定阈值,神经元就输出1;否则输出0。因此,这种计算是离散且极其简单的,实质上是一个阶跃函数。你只需判断数值是否超过某个界限。
Essentially, if the sum total of the weighted inputs was larger than a given threshold, the neuron output one. Otherwise, the neuron output zero. So the computation was discrete and very simple, essentially a step function. You're either above or below a value.
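Daniela Rus's description of the early artificial neuron maps directly onto a few lines of code. A minimal sketch of such a threshold unit; the AND-gate weights are a standard textbook illustration, not something from the episode:

```python
def mcp_neuron(inputs, weights, threshold):
    """Weighted sum pushed through a step function: output 1 if the
    sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2, the unit behaves like an AND gate:
# only when both inputs are on does the sum reach the threshold.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_neuron([a, b], [1, 1], threshold=2))
```

The whole computation is the discrete step function described above: the unit is either above the threshold (output 1) or below it (output 0).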
正如Dan Glaser早前提到的,人类大脑中的神经元也通过离散函数运作——它们要么激活,要么不激活。康奈尔大学的心理学家Frank Rosenblatt进一步发展了这个模型,创造出他称为'感知机'的人工神经元数学模型。最初,感知机展现出良好前景:通过学习示例后,它能对未经机器分析过的输入给出是非判断,完成一些基础任务。
Neurons in the human brain also operate using discrete functions, which Dan Glaser mentioned earlier. They either fire or they don't fire. A psychologist at Cornell University called Frank Rosenblatt went on to develop this model to create an artificial neuron, a mathematical function that he called a perceptron. At first, the perceptron seemed promising. After learning some examples, perceptrons could do some basic things, giving a yes or no answer to an input that hadn't been previously analyzed by the machine.
假设你向模型输入某运动队运动员的力量与速度数据。通过学习这两个变量,模型可以判断新运动员是否可能被团队录取。但随着领域发展,感知机的缺陷逐渐显现:由于仅相当于单个人工神经元,它无法识别更复杂的模式。比如那些速度力量平平但技术精湛的运动员该如何评估?
Let's say you fed the model some data about the strength and speed of athletes in a sports team. Learning from those two variables, the model could answer whether or not a new athlete would be likely to be accepted into a team. As the field matured, however, flaws in the perceptron became clearer. Because perceptrons only worked like a single artificial neuron, they couldn't be trained to recognize patterns that were more complex. What about, for example, athletes who were neither particularly fast nor strong but had really good technique?
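Rosenblatt's perceptron learning rule can be sketched against the athlete example in the text. The data points, learning rate and epoch count below are invented for illustration; the rule itself (nudge the weights whenever a prediction is wrong) is the standard perceptron update.

```python
def predict(x, w, b):
    """Yes/no answer from a single artificial neuron with weights w, bias b."""
    return 1 if x[0] * w[0] + x[1] * w[1] + b >= 0 else 0

def train(data, epochs=20, rate=0.1):
    """Perceptron rule: move the weights toward examples we got wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(x, w, b)   # 0 if correct, +1 or -1 if wrong
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# Invented (strength, speed) data, labelled 1 for "accepted into the team".
athletes = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.3), 0), ((0.3, 0.1), 0)]
w, b = train(athletes)
print(predict((0.85, 0.7), w, b))  # 1: a strong, fast newcomer is accepted
```

Because this toy data is linearly separable, the rule settles on weights that classify every example; the next section explains what happens when the pattern is not that simple.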
1969年,Marvin Minsky和Seymour Papert合著的《感知机》从数学上证明:若仅使用单层神经网络,则只能计算线性函数。而线性函数存在闭式解,根本不需要机器学习。这项研究实际上引发了第一次AI寒冬,因为人们对此技术的可能性丧失了信心。
In 1969, Marvin Minsky and Seymour Papert co-authored Perceptrons, a book that demonstrated mathematically that if all you have is a single-layer neural network, then you can only compute linear functions. And if all you can compute is a linear function, then you have a closed-form solution; there's no need for machine learning. This work actually triggered the first AI winter, because people lost faith in what would be possible.
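The Minsky-Papert limitation can be checked by brute force. This sketch searches thousands of weight/threshold settings for a single threshold unit: the search finds a setting that computes AND, but none that computes XOR, because XOR is not linearly separable. The grid of candidate values is an arbitrary choice for illustration.

```python
from itertools import product

def unit(x, w1, w2, t):
    """A single threshold unit over two inputs."""
    return 1 if x[0] * w1 + x[1] * w2 >= t else 0

def computes(table, w1, w2, t):
    """True if these fixed weights and threshold reproduce the truth table."""
    return all(unit(x, w1, w2, t) == y for x, y in table.items())

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = [i / 4 for i in range(-8, 9)]   # candidate values in [-2, 2]

settings = list(product(grid, grid, grid))
print(any(computes(AND, *s) for s in settings))  # True: AND is linearly separable
print(any(computes(XOR, *s) for s in settings))  # False: XOR never is
```

No grid, however fine, would change the second result: no single line through the plane puts (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.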
显然,人工神经网络若要有效运作,必须通过多层感知机应对现实世界的复杂性。在Daniela提到的AI寒冬期间,资金枯竭,人们对构建神经网络的热忱消退。直到1980年代才重现曙光,但仍有研究者坚持探索其他智能机器实现路径。
It became clear that if artificial neural networks were to work, they'd have to have more layers of perceptrons to deal with the complexity of the real world. During the AI winter that Daniela mentioned, funding dissipated and interest in the very idea of creating artificial neural networks dwindled. There was very little progress until the 1980s. But some researchers did persist within that time and they found other pathways to creating intelligent machines.
1950年,艾伦·图灵在《计算机器与智能》论文中提出了重要的机器智能基准——图灵测试:如果你与机器对话时无法分辨对方是人还是计算机,即可判定其具有智能。MIT等机构为此付出诸多努力,早期系统ELISA曾长期令人着迷。
In 1950, Alan Turing introduced a very important benchmark for machine intelligence in his paper Computing Machinery and Intelligence. This is the paper that introduced the Turing test: the idea that you know whether a machine is intelligent or not if, when you converse with it, you cannot tell whether on the other side you're talking to a machine or to a person. So there have been many efforts to build intelligent machines that meet the Turing test, in particular at MIT. One of the earliest systems, called ELIZA, was introduced, and ELIZA enchanted people for a long time.
1966年,首个聊天机器人Eliza以心理治疗师角色横空出世。其创造者Joseph Weisenbaum让用户通过电动打字机与机器对话,引发全球轰动。
Eliza was one of the first so-called chatterbots, and it took the world by storm in 1966. Its creator, Joseph Weizenbaum, put the bot in the role of a psychotherapist. A user would type a message on an electric typewriter and the machine would reply.
男人都一个样。哪方面?他们总为各种事烦我们。
Men are all alike. In what way? They're always bugging us about something or other.
能举个具体例子吗?
Can you think of a specific example?
比如,我男朋友硬要我来这儿。
Well, my boyfriend made me come here.
是你男朋友让你来这里的。
Your boyfriend made you come here.
他说,我大部分时间都很抑郁。
He says I'm depressed much of the time.
听到你抑郁我很难过。
I am sorry to hear you are depressed.
是真的。我不快乐。
It's true. I am unhappy.
早期的人工智能系统大量使用基于规则的系统。要解决问题时,你先确定一些规则,然后编写程序根据这些规则做决策。这有时被称为老派人工智能。
The early AI systems used a lot of what are called rule-based systems. So if you need to solve a problem, you identify some rules, and then you write a program that makes decisions according to that set of rules. This is sometimes called good old-fashioned AI.
ELISA没有使用人工神经网络,也不会从输入中学习。相反,这个语言模型识别关键词,并以简单短语或问题的形式反馈回来,试图模拟与心理治疗师的对话。它几乎就像一面镜子。
ELIZA didn't use an artificial neural network, and it didn't learn from its inputs. Instead, the language model recognized keywords and reflected them back in the form of simple phrases or questions, supposedly modeling the kind of conversation that you'd expect with a therapist. It was almost like a mirror.
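The keyword-and-reflection trick can be shown in a toy sketch. The patterns below are invented and far simpler than Weizenbaum's actual script, but the mechanism is the same: no learning, no understanding, just matching a keyword and mirroring the user's own words back.

```python
import re

# Each rule is a keyword pattern plus a reflection template (invented here).
RULES = [
    (r"i am (.*)", "Why do you say you are {}?"),
    (r"my (.*) made me (.*)", "Your {} made you {}?"),
    (r"i feel (.*)", "Do you often feel {}?"),
]

def eliza(message):
    """Match the first keyword pattern and mirror the captured words back."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."   # stock reply when nothing matches

print(eliza("My boyfriend made me come here."))  # Your boyfriend made you come here?
print(eliza("I am unhappy."))                    # Why do you say you are unhappy?
```

Note how the replies in the dialogue above fall straight out of this pattern: the program never models what "boyfriend" or "unhappy" mean, it only rearranges the user's words.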
伊丽莎没有通过图灵测试。
Eliza did not pass the Turing test.
这实际上正是设计目的。开发这个机器人的研究人员旨在展示人机对话的表浅本质。但现实中却产生了相反效果——人们开始与这个计算机程序进行长时间深入交流。
Which was in fact the point. The researchers behind the bot designed Eliza to show how superficial the state of human-to-machine conversation really was. But in reality, it had the opposite effect: people became engaged in long, deep conversations with the computer program.
你知道吗,这真的太不可思议了。就好像它真的理解我在说什么。
You know, that's really incredible. It's as if it really understood what I was saying.
但它当然没有理解。这只是些小把戏罢了。
But it doesn't, of course. It's just a bag of tricks.
哦,我明白了。它完全不知道我在说什么。
Oh, I get it. It hasn't the faintest idea what I'm talking about.
伊丽莎并非智能机器,但它让人们停下来思考,如果人工智能真的出现,世界会是什么样子。这或许也是人类首次展现出我们多么愿意相信,只要计算机用我们的语言与我们交流,它们就可能具有智能。这是丹·格拉泽早前描述的人类天生倾向的又一例证——将周围世界的一切拟人化。当然,在伊丽莎之后的几十年里,聊天机器人从chatterbots变成了chatbots。而这还不是全部。
Eliza was not an intelligent machine, but it made people stop and think about what the world might be like if artificial intelligence did come along. It was perhaps also the first time that humans showed how willing we all are to believe that computers could be intelligent if they spoke to us in our own language. It's another example of what Dan Glaser described earlier as the innate desire of humans to anthropomorphise everything in the world around us. Of course, in the decades since Eliza, chatterbots became chatbots. And that's not all.
如今,我们与聊天机器人的对话轻松通过了图灵测试。
These days, our conversations with chatbots easily pass the Turing test.
但今天我们所见聊天机器人的技能是如何从1960年代的原始人工智能发展而来的?是什么让人工神经网络理论真正在实践中发挥作用?核心答案在于认识到人工神经元必须像人脑中的神经网络那样分层堆叠。因此,在1960年代末,研究人员提出了深度神经网络的概念。几十年后爆发的深度学习革命,很大程度上要归功于后来被称为人工智能教父的三位科学家。
But how did the skills of chatbots that we see today emerge from the primitive AI of the 1960s? What was it that made the theory of artificial neural networks actually work in practice? At its core, the answer lies in the insight that artificial neurons had to be layered on top of each other like neural networks are in the human brain. And so at the end of the 1960s researchers came up with the idea of the deep neural network. The deep learning revolution that came several decades later happened in no small part thanks to three scientists who would later become known as the godfathers of AI.
我们当时构建的系统如此愚笨、脆弱且难以训练。
The systems we were building were so dumb and so weak and so difficult to train.
这是所谓的教父之一约书亚·本吉奥所言。他是蒙特利尔大学的计算机科学家,也是深度学习发展中的关键人物。
That's one of the so-called godfathers, Yoshua Bengio. He's a computer scientist at the University of Montreal, and he was a key figure in the development of deep learning.
当我开始阅读八十年代初的一些早期神经网络论文时,真正让我兴奋的是这样一种理念:我们自身通过大脑实现的智能可以用几条原理来解释。就像物理学的工作原理那样。我们是否有可能用类似的方法来理解智能,并利用这些原理设计智能机器?事实上,这种影响是双向的,因为有些实验我们可以在计算机上运行,却无法在真实大脑上进行。因此,我们在人工智能领域的工作也在为大脑运作理论提供参考。
What got me really excited when I started reading some of the early neural net papers from the early eighties is the idea that our own intelligence, with our brain, could be explained by a few principles, just like how physics works. Could it be possible that we would do something similar for understanding intelligence, and of course take advantage of those principles to design intelligent machines? And in fact, it goes in the other direction too, because there are experiments we can run in computers that we can't run on real brains. And so the work we've been doing in AI is also informing theories of how the brain works.
所以这是双向的。那种协同效应,以及可能存在一种我们可以作为科学理论来阐述的智能解释的想法,正是吸引我进入这个领域的原因。
So it's a two way street. So that synergy and that idea that maybe there is an explanation for intelligence that we can communicate as a scientific theory is really what got me into this field.
请谈谈在硅片中模拟人脑面临哪些挑战。
Talk to us about what the challenge was in trying to model the human brain in silicon.
我们并未尝试在硅片中模拟人脑,因为那看起来任务过于艰巨。相反,我们研究了神经科学中最简单的模型,探索如何调整它们。在我攻读博士的早期,我们试图用这些系统来分类简单模式,比如字符形状或音素。利用缺失的元音a、e、o的录音,神经网络——这种受大脑神经元启发的极简化计算模型——能否学会区分输入中这些不同类别的对象?我从八十年代中期到两千年代中期一直在研究这个问题。
Well, we didn't try to model the human brain in silicon, because that would have seemed too daunting a task. Instead, we looked at the simplest possible models that come from neuroscience and saw how we could tweak them. In the early days, when I was doing my PhD, we were trying to use these systems to classify simple patterns, like the shapes of characters or phonemes. Using sound recordings of phonemes like ah, e, o, can a neural network, which is this very simplified calculation inspired by neurons in the brain, learn to distinguish between those different categories of objects in the input? I worked on this from the mid-eighties to the mid-two-thousands.
你最初尝试用神经网络做哪些事情来证明它们是有用的?
What were some of the first things you tried to do with the neural networks to sort of prove that they could be useful?
九十年代时,我从事语音和图像分类等模式识别任务的研究。随后工业应用开始涌现。比如我曾参与一个项目,利用神经网络对支票金额进行分类,以自动化核验银行存款金额正确性的流程。该系统实际在90年代就被银行采用,处理了大量支票。此前尝试的所有方法效果都不理想,因为不同人的书写差异实在太大。
So in the nineties, I worked on these pattern recognition tasks, both speech and image classification. And industrial applications emerged. For example, I worked on a project to use neural nets for classifying amounts on checks to automate the process of making sure that a check you deposit at the bank has the right amount. And that was actually deployed in banks in the 90s and processed a large number of checks. All of the approaches that had been tried before didn't do very well because there is so much variability between people.
我们的书写方式各不相同。所以这个挑战并不简单,而解决它当时已具有巨大的经济价值。
We write in different ways. So it was not trivial, and it is something that had a lot of economic value already to address that challenge.
下周我们将具体探讨人工神经网络如何让机器实现学习,同时解析促成这一切的精妙数学原理。
Next week we'll look at exactly how artificial neural networks allowed machines to learn and we'll also examine the clever maths that allowed all of this to happen.
人们意识到,如果加入一个中间层(有时称为隐藏层),这类系统就能计算更复杂的函数。
People realized that if you could insert a middle layer, which is sometimes called a hidden layer, these systems could actually compute many more functions.
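That hidden-layer idea can be previewed with the same discrete threshold units as before: with one hidden layer and hand-picked (not learned) weights, a tiny network computes XOR, which no single layer can. The particular weights here are an illustrative choice, not from the episode.

```python
def step(total, threshold):
    """The same discrete threshold unit as earlier: fire or don't fire."""
    return 1 if total >= threshold else 0

def xor_net(a, b):
    h1 = step(a + b, 1)        # hidden unit fires on "a OR b"
    h2 = step(a + b, 2)        # hidden unit fires on "a AND b"
    return step(h1 - h2, 1)    # output fires on "OR but not AND", i.e. XOR

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The hidden units recode the inputs so that the final unit sees a linearly separable problem; learning such weights automatically is the subject of the next episode.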
这就是下期《巴贝奇》的内容。感谢丹尼尔·格雷泽、丹妮拉·鲁兹、约书亚·本吉奥、《经济学人》的安斯利·约翰斯顿,以及她在英国生物样本库采访的所有人士。感谢各位收听。欲继续了解现代AI探索之旅,请订阅《经济学人播客+》。点击节目备注中的链接获取详情。
That's next time on Babbage. Thanks to Daniel Glaser, Daniela Rus, Yoshua Bengio, The Economist's Ainsley Johnston, and all of the people she spoke to at the UK Biobank. And thank you for listening. To follow the next stage of our journey to understand modern AI, subscribe to Economist Podcast Plus. Find out more by clicking the link in the show notes.
《巴贝奇》由杰森·霍斯金和卡纳尔·帕特尔制作,混音与音效设计由尼科·罗法斯特完成。执行制片人是汉娜·马里诺。我是阿洛克·杰哈,这里是伦敦《经济学人》电台。
Babbage is produced by Jason Hoskin and Kanal Patel, with mixing and sound design by Nico Rofast. The executive producer is Hannah Marino. I'm Alok Jha, and in London, this is The Economist.
25年后,全球每三人中就有两人居住在城市。这些城市将呈现何种面貌?它们能否具备韧性、可持续性和公平性?通过《经济学人影响力》的新全球倡议"城市未来",我们探索城市如何在快速变革的世界中蓬勃发展。因为未来的城市不仅是我们生活的场所,更将塑造我们的生活方式。
Twenty five years from now, two out of every three people on the planet will live in urban areas. What will these cities look like? Will they be resilient, sustainable, equitable? With Urban Futures, a new global initiative from Economist Impact, we examine how cities can flourish in a world of rapid transformation. Because the cities of the future won't just be where we live, they'll shape how we live.
访问impact.economist.com/urbanfutures了解更多。
Visit impact.economist.com/urbanfutures to learn more.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。