
OpenAI发布'红色警报'+我该选用哪个模型?+《Hard Fork》对'Slop'的评论

OpenAI Calls a ‘Code Red’ + Which Model Should I Use? + The Hard Fork Review of Slop

本集简介

硅谷正值AI模型发布季,OpenAI似乎感受到了压力。周一,OpenAI首席执行官萨姆·奥特曼向员工发送备忘录,宣布启动'红色警报'行动以改进ChatGPT并推迟其他计划。我们将解析为何谷歌和Anthropic的最新前沿模型让OpenAI感到不安,以及该公司如何重新调整优先事项应对挑战。随后,我们将坦诚分享各自最青睐的AI模型,并探讨日常生活中如何运用AI技术。最后,在《Hard Fork烂作鉴赏》最新一期中,我们盘点本周互联网上最热门的AI生成内容。

延伸阅读:
当ChatGPT用户与现实脱节时,OpenAI做了什么
谷歌发布Gemini 3,编码与搜索能力升级
游客被假冒皇家圣诞集市欺骗
北卡罗来纳州议员深度伪造视频获家电品牌广告奖

欢迎来信至hardfork@nytimes.com。在YouTube和TikTok关注《Hard Fork》。立即订阅:nytimes.com/podcasts或通过Apple Podcasts与Spotify。您也可通过此链接在任何播客应用订阅:https://www.nytimes.com/activate-access/audio?source=podcatcher。下载纽约时报APP获取更多播客与有声文章:nytimes.com/app。

双语字幕

仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。

Speaker 0

这是尼克·克里斯托夫。

This is Nick Kristoff.

Speaker 0

我是《纽约时报》的专栏作家,令我自豪的是,一百多年来,《纽约时报》每年都会发起募捐活动,为慈善组织筹集资金。

I'm an opinion columnist for The New York Times, and I'm proud that for more than one hundred years, The Times has conducted an annual appeal to raise money for charitable organizations.

Speaker 0

《纽约时报》的新闻工作本质在于核实真相,而在这种情况下,就是核实各类组织并筛选出其中最优者,以帮助创造机会、克服困境。

Times journalism is fundamentally about vetting the truth, and in this case, vetting organizations and selecting some of the best to help create opportunity and overcome hardship.

Speaker 0

希望大家能考虑向《纽约时报》社区基金捐款。

I hope you'll consider donating to The New York Times Communities Fund.

Speaker 0

了解更多详情,请访问nytimes.com/nytfund。

To learn more, go to nytimes.com/nytfund.

Speaker 0

谢谢。

Thank you.

Speaker 1

凯西,最近怎么样?

Casey, how's it going?

Speaker 1

早上好,凯文。

Good morning, Kevin.

Speaker 1

我挺好的,考虑到昨天刚做了结肠镜检查,已经算恢复得不错了。

I am doing well, as well as can be expected given that I had a colonoscopy yesterday.

Speaker 2

嗯。

Yes.

Speaker 2

我听说了这事。

I heard about this.

Speaker 2

检查结果如何?

How did it go?

Speaker 1

嗯,检查结果显示一切正常。

Well, I got a clean bill of health.

Speaker 1

不过我得说,检查过程中有个瞬间确实让我有点慌。

I will say that there was one moment during the procedure that was sort of alarming to me.

Speaker 2

怎么回事?

What was that?

Speaker 1

就是...我之前见过那些护士和医生,他们都很友善,一直在自我介绍什么的。

Well, I'd met, you know, the various nurses and the doctors, and everyone was so friendly, you know, and was introducing themselves.

Speaker 1

但当他们给我注射药物让我逐渐失去意识时,我注意到有个医护人员靠在墙边,正在刷手机。

But as they sort of put in the medicine to make me kind of go under, I noticed that there was one medical professional who was against the wall, and she was scrolling through her phone.

Speaker 1

而我失去意识前的最后一个念头是:真希望她不是在搜索如何做肠镜。

And the last thought I had before I went under was, I really hope she's not looking up how to do a colonoscopy.

Speaker 1

你懂我意思吧?

You know what I mean?

Speaker 1

因为她脸上就是那种表情。

Because she kinda had that look on her face.

Speaker 1

就像在说:我得回忆一下我在这儿要做什么。

Like, I need to jog my memory about what I'm doing here.

Speaker 1

我当时就想:天啊。

And I thought, oh god.

Speaker 1

I

Speaker 2

希望她

hope she

Speaker 1

已经知道了。

already knows.

Speaker 2

不。

No.

Speaker 2

她在刷抖音。

She was on TikTok.

Speaker 2

她正在向她的数十万粉丝直播你的结肠镜检查。

She was live streaming your colonoscopy to her hundreds of thousands of followers.

Speaker 1

我也有过这个念头。

I had that thought.

Speaker 1

我当时在想,如果直播我的结肠镜检查,我们YouTube频道的观众会变多还是变少呢?

I was like, could we get more or fewer viewers to the YouTube channel if we went live with my colonoscopy?

Speaker 2

他们总说要真实。

They're always saying be authentic.

Speaker 1

没错。

Exactly.

Speaker 1

带上你的全部

Bring your whole

Speaker 2

投入工作。

self to work.

Speaker 2

或者你的全部。

Or yourself whole.

Speaker 2

没错。

Yes.

Speaker 2

把它带到工作中来。

Bring it to work.

Speaker 2

我是《纽约时报》科技专栏作家凯文·卢斯。

I'm Kevin Roose, the tech columnist at the New York Times.

Speaker 1

我是Platformer的凯西·牛顿。

I'm Casey Newton from Platformer.

Speaker 1

这里是Hard Fork。

And this is Hard Fork.

Speaker 1

本周,OpenAI宣布进入红色警戒状态,为何AI竞争格局让Sam Altman感到恐惧。

This week, OpenAI declares a code red, why the competitive landscape in AI has Sam Altman scared.

Speaker 1

然后,我们将探讨如何使用所有最新AI模型。

Then, how we're using all the latest AI models.

Speaker 1

最后,我们将重返剧院,带来硬分叉对《Slop》的评论。

And finally, we're heading back to the theater for the hard fork review of Slop.

Speaker 2

凯西,你是否感受到一丝紧张的能量,这些天旧金山空气中弥漫着某种一触即发的氛围?

Casey, do you feel a little nervous energy, a certain tension in the air crackling through San Francisco these days?

Speaker 1

完全正确,凯文。

Absolutely, Kevin.

Speaker 1

当我走在米申区的街道上时,后颈发凉,周围笼罩着诡异的寂静。

There's a chill on the back of my neck and an eerie silence as I walk down the streets of the mission.

Speaker 2

没错。

Yes.

Speaker 2

那是因为OpenAI正处于红色警戒状态。

Well, that is because OpenAI is in a Code Red.

Speaker 2

红色警报。

Code Red.

Speaker 2

你应该记得,几年前在这个节目上,我们采访过谷歌CEO桑达尔·皮查伊。

Now as you will remember, a couple years ago on this show, we talked to Sundar Pichai, the CEO of Google.

Speaker 2

当时他们正处于自己的红色警报时期,不过他说那其实不叫红色警报。

When they were in their own sort of code red period, which he said was not actually called code red.

Speaker 2

但那边有人用了这个词,那是在他们被ChatGPT意外成功打了个措手不及,正急于推出自己的聊天机器人版本的时候。

But someone over there was using that term, and that was sort of when they were on their heels taken aback by the surprise success of ChatGPT, and they were racing to get their own version of a chatbot out.

Speaker 2

当时他们整个公司都处于一种恐慌状态。

And they were sort of in a corporate state of panic about this.

Speaker 2

那就是他们的红色警报。

That was their Code Red.

Speaker 2

没错。

Yes.

Speaker 2

但现在我们有了新的红色警报,这次是OpenAI。

But now we have a new Code Red, and it is at OpenAI.

Speaker 2

据报道,萨姆·奥特曼本周宣布进入'红色警报'状态,针对ChatGPT使用中出现的一些令人担忧的趋势。

Sam Altman reportedly declared a Code Red this week about some worrying trends they're seeing with ChatGPT usage.

Speaker 2

我认为,不仅OpenAI如此,前沿AI公司整体都发生了许多值得我们讨论的事情。

And I think in general, beyond just OpenAI, there's just been a lot happening at the frontier AI companies that we should talk about.

Speaker 2

许多新模型正在发布,关于当前AI发展现状的讨论也层出不穷。

A lot of new models coming out, a lot of discussions about the sort of state of AI right now.

Speaker 2

所以今天我们应该从这次'红色警报'事件切入,全面探讨这些话题。

So I thought today we should just kind of get into it all starting with this Code Red.

Speaker 1

嗯。

Yeah.

Speaker 1

我们来聊聊这个话题。

Let's talk about it.

Speaker 1

因为你知道吗,对于好奇的听众来说,'红色警报'是企业能宣布的第二紧急状态,而最紧急的当然是'巴哈爆炸'。

Because, you know, for listeners who may be curious, a Code Red is the second most dire state of emergency a company can declare with number one, of course, being a Baja Blast.

Speaker 1

所以'红色警报'的紧急程度仅次于它。

So Code Red is just below that.

Speaker 1

是的。

Yes.

Speaker 1

没错。

Yes.

Speaker 1

如果我们

If we

Speaker 2

真到了Baja Blast警报级别,我肯定第一时间找掩护。

get to Baja Blast, I'm ducking in cover.

Speaker 1

我也是。

Me too.

Speaker 1

我要逃离这座城市。

I'm leaving the city.

Speaker 1

我要躲进防空洞。

I'm heading to the bunker.

Speaker 2

既然这是关于AI的讨论环节,我们应该先做AI相关的免责声明。

So because this is a segment about AI, we should make our AI disclosures.

Speaker 2

我在《纽约时报》工作,我们正在起诉OpenAI和微软涉嫌侵犯版权。

I work for the New York Times, which is suing OpenAI and Microsoft over alleged copyright violations.

Speaker 1

而我男朋友在Anthropic工作。

And my boyfriend works at Anthropic.

Speaker 2

好的。

Okay.

Speaker 2

那我们先从OpenAI开始吧。

So let's start with OpenAI.

Speaker 2

Casey,这份'红色警报'备忘录里写了什么?

Casey, what was in this Code Red memo?

Speaker 1

对。

Yeah.

Speaker 1

这是The Information报道的。

So this was reported by The Information.

Speaker 1

Sam显然在周一给员工发了一份备忘录。

Sam apparently sent employees a memo on Monday.

Speaker 1

有趣的是,凯文,你的同事卡什米尔·希尔最近报道过OpenAI曾宣布进入橙色警戒状态,所以他们现在正逐步提升危机等级。

And interestingly, Kevin, your colleague Kashmir Hill had reported recently that OpenAI had declared a code orange, so they are moving up the ladder of distress here.

Speaker 1

但这封备忘录的要旨是,OpenAI将立即投入更多资源改进ChatGPT,同时推迟其他项目的进展,包括广告业务、AI智能体和几个月前刚推出的每日摘要功能Pulse。

But the upshot from this memo is that OpenAI is going to start devoting more resources immediately toward improving ChatGPT, and they're going to be delaying work on some of the other projects they had going, including ads, AI agents, and Pulse, which is this daily digest feature that they launched a couple months ago.

Speaker 1

一方面,我觉得他们想投入大量资源改进ChatGPT似乎是很自然的事。

So on one hand, it seems sort of obvious to me that they would be wanting to put a lot of resources toward improving ChatGPT.

Speaker 1

这在我看来应该是常规操作。

Like, that would sort of seem to be the norm to me.

Speaker 1

但另一方面,如果真的因此要从其他项目抽调工程师,那可能表明他们是认真对待这件事的。

But on the other hand, if this actually does result in them pulling engineers off of other projects, well, maybe that shows that they are taking this seriously.

Speaker 1

是啊。

Yeah.

Speaker 2

凯西,他们为什么现在采取这些措施?

Casey, why are they doing this right now?

Speaker 2

为什么他们对让用户回归ChatGPT这件事如此迫切?

Why are they feeling so much urgency around bringing people back to ChatGPT?

Speaker 1

我认为有两个主要原因,凯文,它们的名字分别是Gemini三号和Opus四点五。

I think there are two big reasons, Kevin, and their names are Gemini three and Opus four point five.

Speaker 1

过去几周,我们看到谷歌和Anthropic都发布了最先进的模型,从不同方面挑战了OpenAI试图打造的核心支柱。

Over the past few weeks, we have seen Google and Anthropic both release state of the art models that in various ways challenge some of the core pillars of what OpenAI is trying to do.

Speaker 1

是的。

Yeah.

Speaker 1

我们知道就在几周前,Sam在Gemini三号发布前夕又给OpenAI团队发了备忘录,说我们可能要面临一些困难了。

We know that just a few weeks ago, Sam had sent another memo to the OpenAI team on the eve of Gemini three coming out saying, hey, we may be heading into some rough waters here.

Speaker 1

他们认为Gemini三号会强大到影响OpenAI在用户数量和收入两方面的增长。

The belief was that Gemini three was going to be so good that it was going to cut into OpenAI's growth both on the user side and the revenue side.

Speaker 1

这给OpenAI带来了各种问题。

And that creates all sorts of problems for OpenAI.

Speaker 1

对吧?

Right?

Speaker 1

这是一家高杠杆公司,完全依赖订阅收入,在试图打造消费产品的同时还要与世界上最大最富有的公司之一谷歌竞争。

This is a massively leveraged company that is wholly dependent on subscription revenue, that is trying to build out a consumer product while competing against in Google what is one of the biggest and richest companies in the world.

Speaker 1

所以如果谷歌的模型真的变得更好,而转换成本又很低的话,OpenAI很快就会陷入非常困难的境地。

So if you get to a point where Google's models are truly better and the costs of switching are quite low, then things start to get very difficult for OpenAI very quickly.

Speaker 1

是的。

Yeah.

Speaker 2

我认为这一点非常重要,我想强调一下,因为目前的情况是多种因素共同作用的结果。

I think that's really important, and I and I I wanna just underscore that because I think what's happening here is a combination of things.

Speaker 2

一方面,我认为OpenAI和Anthropic在某种程度上一直依靠模型优势生存。

One is that I think for a while, OpenAI and to a lesser extent Anthropic were both sort of surviving on this moat of the model.

Speaker 2

对吧?

Right?

Speaker 2

他们拥有世界上最好的模型,这正是他们与其他竞争者不同的地方。

They had the best models in the world, and that was kind of what separated them from the rest of the pack.

Speaker 2

如果你想使用世界一流的模型,无论是进行某种代理软件开发,还是大量氛围编程之类的,你确实需要最智能的模型,并且愿意为此每月支付20美元、200美元,对企业来说甚至是几千美元,因为其他选择只有Llama或Gemini这些二流模型,而那些模型并不出色。

If you wanted to work with a world class model, you were doing some kind of agentic software development, if you were trying to do a lot of vibe coding or something, you really, like, wanted the smartest model possible, and you were willing to pay 20 or 200 or in a business's case, like, a couple thousand dollars a month for access to that best model because your alternative was Llama or Gemini or one of these other sort of second-rate models, and those models were not that good.

Speaker 2

但正如我们几周前在节目中和Demis与Josh讨论的那样,现在的Gemini已经很优秀了。

But Gemini, as we talked about with Demis and Josh on the show a couple weeks ago, is good now.

Speaker 2

我认为至少在我尝试过的许多任务上,它和ChatGPT一样优秀。

I would say it's at least as good as ChatGPT at many of the tasks I've been trying it on.

Speaker 2

而且很难想象如何与谷歌这样的公司竞争,他们上个季度的收入达到了1000亿美元。

And it's really hard to imagine competing with Google, a company that last quarter did a $100,000,000,000 in revenue.

Speaker 2

这家公司拥有的资源和资金、工程人才比其他任何公司都多。

Like, this is a company that has more resources and money and engineering talent than anyone else.

Speaker 2

你真的认为他们会担心每月20美元的订阅量吗?

And do you really think that they're, like, sort of worried about how many $20 a month subscriptions they're selling?

Speaker 2

不会。

No.

Speaker 2

一旦他们的模型成熟,他们就会开始大力补贴,把价格压得非常低,试图抢占市场份额。

Once their models are good, they're gonna, like, start subsidizing the hell out of them, and they're gonna drive the cost very low, and they're gonna try to steal market share.

Speaker 2

我认为他们现在正处于这个阶段:意识到'我们已经迎头赶上'了。

And I think that's the sort of phase they are right now is that they are realizing, oh, we've caught up.

Speaker 2

我们拥有极具竞争力的产品,可以通过低价策略来挤压其他公司的利润空间。

We have something compelling, and we can just kind of drive these other companies' margins down by offering our thing very cheaply.

Speaker 1

是啊。

Yeah.

Speaker 1

那么,我们来谈谈备忘录中的其他细节,以及OpenAI现在声称将要进行的ChatGPT改进。

Well, so let's talk then about a few other details from this memo and the kinds of improvements to ChatGPT that OpenAI now says it is going to be working on.

Speaker 1

备忘录中包括个性化功能,进一步定制ChatGPT与用户的互动方式,改进模型行为。

The memo includes personalization features, so further customizing how ChatGPT interacts with you, improving the behavior of the model.

Speaker 1

我不太确定这具体意味着什么。

I'm not quite sure what that means.

Speaker 1

不过其中提到的一点是,他们希望ChatGPT减少拒绝用户的次数,同时提升速度和可靠性。

Although one thing it did say was they want ChatGPT to refuse you less, and then improving speed and reliability.

Speaker 1

我得说,这些改进内容我以为OpenAI本来就会持续进行。

I have to say, these are things that I just assume that OpenAI is always working on anyway.

Speaker 1

对吧?

Right?

Speaker 1

这些看起来不像是特别重大的突破。

Like, these don't feel like particularly big swings.

Speaker 1

这些看起来并不像是方向上的重大转变。

They don't feel like a giant change in direction.

Speaker 1

不过凯文,在我看来,这更像是Facebook的玩法,这是我们节目上讨论了一段时间的话题。

What they do seem to me though, Kevin, is like the Facebook playbook, which is something we've been talking about on the show for a while now.

Speaker 1

这家公司招募了很多曾在Meta工作过的人。

This is a company that has brought on a lot of people who used to work at Meta.

Speaker 1

他们在Meta都做些什么呢?

And what kind of things do they do over at Meta?

Speaker 1

他们试图为你打造一个完全个性化的定制信息流。

Well, they try to create a perfectly personalized custom feed to you.

Speaker 1

他们试图精准满足你的需求,不愿拒绝你的任何要求。

They try to give you exactly what you want, and they don't want to refuse anything that you want for them.

Speaker 1

对吧?

Right?

Speaker 1

换句话说,这似乎是在优先考虑用户参与度,我认为这会带来一系列有趣的影响。

So this seems, in other words, like they are going for engagement first and foremost, and I think that has a bunch of interesting implications.

Speaker 2

是啊。

Yeah.

Speaker 2

所以我觉得现在说OpenAI完蛋了还为时过早。

So I I think it's too early to say that, like, OpenAI is screwed here.

Speaker 2

或者说ChatGPT已经处境不妙。

That ChatGPT is in a bad place.

Speaker 2

他们显然仍是全球领先者。

They're obviously still the sort of world leader.

Speaker 2

他们的品牌认知度最高。

They have the most name recognition.

Speaker 2

我认为他们在AI重度用户中已经形成了一种难以撼动的普及度。

I think they've gotten a kind of ubiquity among AI power users that is going to be very hard to unseat.

Speaker 2

你怎么看这个决定?

What do you think about this decision?

Speaker 2

你对OpenAI这个发展方向怎么看?

What do you think about this direction for OpenAI?

Speaker 2

你认为他们的担忧有道理吗?

Do you think that they are right to be worried?

Speaker 1

我觉得吧。

I think look.

Speaker 1

如果OpenAI失败了,我们都能回头找出他们犯下的15个重大错误。

If OpenAI flames out, all of us will be able to look back and identify 15 huge mistakes that they made.

Speaker 1

对吧?

Right?

Speaker 1

同样有可能的是,他们现在的一些赌注未来会获得回报。

It is just as possible that some of the same bets that they are making now may pay off.

Speaker 1

而现在,我们正处于这种不确定的时刻。

And right now, we're in this moment of uncertainty.

Speaker 1

但如果你想看空,这周有很多人都在这么做。

But if you want to take the bear case, there are a lot of people making it this week.

Speaker 1

你可以这么说。

Here's what you could say.

Speaker 1

这家公司杠杆率极高。

This company is massively leveraged.

Speaker 1

对吧?

Right?

Speaker 1

他们已做出数万亿美元的支出承诺,而这些承诺所依赖的收入远未实现。

They've made a ton of spending commitments into the trillions of dollars that rely on revenue that is not close to materializing.

Speaker 1

如果你看看他们的产品组织,会发现他们完全缺乏专注。

And if you look at their product organization, they are not focused at all.

Speaker 1

他们什么都想尝试一点。

They are trying a little bit of anything and everything.

Speaker 1

我们在节目中如此频繁讨论他们的视频生成器Sora,一个重要原因就是这看起来严重偏离了他们的核心方向。

One reason why we've talked about Sora, their video generator so much on the show is it seemed like such a weird departure from their core focus.

Speaker 1

对吧?

Right?

Speaker 1

所以这家公司现在涉足的领域非常分散。

So you have this company that has its fingers in many, many different pots.

Speaker 1

它们大部分都没有产生收入。

Most of them are not generating revenue.

Speaker 1

它背负着这些巨额支出承诺。

It has these massive spending commitments.

Speaker 1

而现在突然间,其他一些实验室的模型似乎正在超越他们。

And now all of a sudden, some of the other labs seem like their models are leapfrogging them.

Speaker 1

所以,是的,你可以根据所有这些事实描绘出一个潜在严峻的未来图景

So, yeah, you can take all of those facts and and paint a potentially dire picture about the future

Speaker 2

关于OpenAI的。

of OpenAI.

Speaker 2

是的。

Yeah.

Speaker 2

这周有些有趣的讨论。

There's some interesting discourse this week.

Speaker 2

有人指出OpenAI已经很久没有成功的预训练运行了。

Someone was pointing out that OpenAI has not had a successful pretraining run in quite a while.

Speaker 2

山姆几周前在Slack给员工发的消息中提到,他们认为Gemini 3是一次相当惊人的预训练成果,这是构建大语言模型时的第一步,需要向模型输入大量信息。

This was something that Sam actually brought up in one of his sort of Slack messages to staff a couple weeks ago: they feel like Gemini three is a pretty amazing sort of pre-train, which is, you know, the first step in the AI process when you're building a large language model and you're feeding it a bunch of information.

Speaker 2

我认为AI领域的普遍共识是,预训练正面临收益递减的临界点。

And I think that the sort of conventional wisdom among, like, AI heads has been that, like, pre training is kind of hitting a point of diminishing returns.

Speaker 2

对吧?

Right?

Speaker 2

我们已经吸收了所有数据,将其输入模型,使模型达到最大规模和效率,现在容易摘取的果实都在后训练阶段了。

That we've sort of sucked up all the data, fed it into the models, made these models as big and efficient as they can be, and all of the low-hanging fruit now is in the post-training phase.

Speaker 2

所以我认为现在OpenAI意识到他们在预训练环节存在特定问题。

So I think what we're seeing now is that OpenAI realizes that it has a problem with pre training specifically.

Speaker 2

这个问题比后训练更难解决。

And that is harder to fix than post training.

Speaker 2

成本很高。

It's expensive.

Speaker 2

必须重新进行这些训练流程。

You have to redo these training runs.

Speaker 2

你必须找出那些干扰预训练的问题所在。

You have to find whatever is messing up the pre trains.

Speaker 2

但我认为,这正是他们将集中研究精力的方向。

But that is, I think, where they are going to be focusing their research energy.

Speaker 1

是啊。

Yeah.

Speaker 1

这绝对是OpenAI非常关注的问题。

Definitely something that OpenAI is concerned about.

Speaker 2

通过这些泄露的备忘录,我们还了解到OpenAI正在训练更多他们认为会更好、能赶上或推动技术前沿的模型。

So thanks to these memos that have been leaking out, we also know that OpenAI is training more models that it thinks will be better, will sort of catch up to the frontier or, you know, advance the frontier in some way.

Speaker 2

其中一款叫'Garlic'(大蒜),另一款叫'Shallotpeat'。

And one of them is called Garlic, and another one is called Shallotpeat.

Speaker 2

所以你们自己琢磨这名字的含义吧。

So make of that what you will.

Speaker 1

他们那边可真是对葱属植物情有独钟啊。

They have a real Allium thing going on over there.

Speaker 1

他们快要能做出调味蔬菜丁了。

They're getting very close to being able to make a mirepoix.

Speaker 2

我知道那是什么。

I know what that is.

Speaker 1

加点胡萝卜和芹菜。

Put a little carrot and celery.

Speaker 1

没错。

Yes.

Speaker 1

你就能炖锅汤了。

You got a stew going.

Speaker 1

不过他们是怎么评价那些模型的?

Now what did they say about those models, though?

Speaker 1

因为我相信我们在The Information上看到过报道,说至少他们认为这下一代模型会让他们重回甚至略微超越当前的技术

Because I believe we saw some reporting in The Information that said that, at least they believe that this next series of models will bring them back to or maybe even a bit ahead of the state of

Speaker 2

前沿。

the art.

Speaker 2

是啊。

Yeah.

Speaker 2

我一直在和那边的一些人交流。

I've been talking to some folks over there.

Speaker 2

他们似乎对这些模型持乐观态度,但目前还不清楚这些模型是否能达到他们期望的水平。

They seem optimistic about these models, but it's also not clear yet whether they will be as good as they hope they will be.

Speaker 2

在模型训练的后期阶段,各种问题都可能出现。

All kinds of things can get messed up in the late stages of training a model.

Speaker 2

所以我想我们只能拭目以待。

And so I guess we'll just have to wait and see.

Speaker 1

不过,关于这一切我想再补充一点,我认为很重要的一点是,OpenAI当前的重点仅仅是努力追平其最大竞争对手这一事实,本身就是个大问题。

Let me add one more point about all this, though, which I think is important, which is the mere fact that OpenAI's current focus is just kind of clawing its way back to parity with its biggest rivals is a big part of the problem here.

Speaker 1

没错。

Yes.

Speaker 1

对吧?

Right?

Speaker 1

想想OpenAI大约三年前这周所处的地位,那正是ChatGPT发布后的几天。

Think about the position that OpenAI was in just about three years ago this week, which was just days after the launch of ChatGPT.

Speaker 1

世界尽在他们掌握之中。

The world was their oyster.

Speaker 1

对吧?

Right?

Speaker 1

他们拥有对所有人的巨大先发优势,并且即便面对历史性动荡——包括CEO被罢免又回归——仍能保持领先。

They had this massive head start over everyone, and they have been able to maintain that lead even in the face of, like, historic turmoil, including the ousting of their CEO and then bringing him back.

Speaker 1

对吧?

Right?

Speaker 1

说实话,几个月来我一直惊讶他们能持续推出新功能,将竞争对手远远甩在身后。

And I think for months, I was honestly astonished that they had been able to release feature after feature that was keeping them so far ahead of the competition.

Speaker 1

现在似乎是ChatGPT发布后,他们首次可能开始略显落后。

Now it does seem like the first moment after the release of ChatGPT where maybe they're just starting to fall a little bit behind.

Speaker 1

凯文,我必须说,要实现雄心壮志,OpenAI仅仅做出与Gemini三号同等水平的模型是远远不够的。

And, Kevin, I have to say for OpenAI to realize its ambitions, it is not going to be enough for them to make a model that is as good as Gemini three.

Speaker 1

他们需要能够再次实现跨越式发展。

They need to be able to leapfrog it again.

Speaker 2

没错。

Right.

Speaker 2

他们不可能靠并列第一取胜。

They they are not going to win by tying for first place.

Speaker 2

说得对。

That's right.

Speaker 2

确实如此。

That's right.

Speaker 2

好的。

Alright.

Speaker 2

让我们来谈谈其他一些AI模型以及最近推出新产品的公司。

Let's talk about some of these other AI models and some of these other companies that have been coming out with new things recently.

Speaker 2

我想先从今天已经多次提到的Gemini 3模型开始。

And I wanna start with Gemini three, a model that we've mentioned a couple times already today.

Speaker 2

我们在模型发布当天的特别节目中和Demis与Josh讨论了它。

We talked with Demis and Josh about it on our bonus show on the day that it came out.

Speaker 2

现在我们已经有几周时间来试用这个模型并开始使用它,我想知道你的印象。

We've now had a couple weeks to play around with the model and start using it, and I wanna know your impressions.

Speaker 1

我认为关于Gemini三号的首要观察是它比竞争对手更快,这一点非常重要。

So I think the number one observation I have about Gemini three is that it is just faster than the competition, and this matters a lot.

Speaker 1

对吧?

Right?

Speaker 1

通常当我写完一篇文章后,我会让ChatGPT和Gemini都帮我做事实核查。

Often when I'm finished writing a column, I will ask both ChatGPT and Gemini to fact check it.

Speaker 1

即使到今天,ChatGPT的事实核查通常比Gemini三号更全面更好,但Gemini三号要快得多。

ChatGPT's fact checking is usually more thorough and better than Gemini three's even today, but Gemini three is a lot faster.

Speaker 1

在AI领域,速度非常重要。

And in AI, speed matters a lot.

Speaker 1

一个工具越快,你就会越频繁地使用它。

And the faster something is, the more often you use it.

Speaker 1

所以我觉得这确实非常强大。

So I think that's been, really powerful.

Speaker 2

那你会对事实核查进行二次核查吗?

Now do you fact check the fact check?

Speaker 2

你是不是还得手动检查模型告诉你的信息是否正确?

Do you have to, like, go in and sort of manually see if what their models are telling you is correct?

Speaker 1

是的。

Yeah.

Speaker 1

我的做法是,它基本上会直接提示说,比如,嘿。

So what I'll do is, you know, it'll it'll basically just say, like, hey.

Speaker 1

比如'这个日期你写错了',或者'这个人名你搞错了'。

Like, you got this date wrong or, you know, you got this name wrong.

Speaker 1

然后我会自己去查证,十次里有九次它们确实抓住了我的错误。

And then I go look it up myself and, you know, nine times out of 10, they have, like, caught my mistake.

Speaker 1

所以我并不是单纯让它们告诉我这里所有内容都完美无缺。

So I'm not just saying, like, tell me that everything in here is perfect.

Speaker 1

我是说,你能在这里找到你认为错误的地方吗?

I'm saying, can you find something in here that you think is wrong?

Speaker 1

顺便说一下,要知道,这是一年前的事情

And by the way, you know, this was something that a year

Speaker 2

对吧。

ago Right.

Speaker 2

一年前,我们还在检查模型是否存在幻觉问题。

A year ago, we were checking the model for hallucinations.

Speaker 2

现在轮到它们检查我们是否有幻觉了。

Now they're checking us for hallucinations.

Speaker 1

是的。

Yes.

Speaker 1

确实如此。

It's really true.

Speaker 2

不过他们在这方面已经相当擅长了。

But but this is something that they've got quite good at.

Speaker 2

完全同意。

Totally.

Speaker 2

是啊。

Yeah.

Speaker 2

我真的很喜欢Gemini。

I really like Gemini.

Speaker 2

其实我默默支持Gemini已经有一段时间了。

I've been a sort of quiet Gemini stan for a while now.

Speaker 2

我很喜欢之前的2.5版本。

I really liked 2.5, the model that preceded this.

Speaker 2

我日常主要使用两个模型,Gemini就是其中之一。

I have been using Gemini as sort of one of my two kind of daily driver models.

Speaker 2

稍后我们会聊聊具体使用情况。

We'll we'll talk a little bit later about sort of how we're using this stuff.

Speaker 2

但我认为这个模型确实非常强大。

But I think this is a really powerful model.

Speaker 2

我一直在为正在写的书做些研究。

I've been doing some research for the book that I'm working on.

Speaker 2

Gemini在这方面帮了大忙。

Gemini has been extremely helpful with that.

Speaker 2

比如整理时间线、查找研究论文、将

Things like organizing timelines, pulling up, you know, research papers, putting

Speaker 1

东西

things

Speaker 2

按顺序排列,在我分享的大型文档中查找内容。

in sequence, finding things within large documents that I'm sharing with it.

Speaker 2

我觉得这确实是个非常出色的模型。

I I think this is just a really good model.

Speaker 2

但对我来说,它不像其他某些模型那样交谈起来有趣或令人兴奋。

And and to me, it's not as interesting or fun to talk to as some of the other models.

Speaker 2

我感觉它没什么个性。

I don't feel like it has much of a personality.

Speaker 2

不。

No.

Speaker 2

但它是个工作狂,而且速度很快。

But it is a workhorse, and it is fast.

Speaker 2

你说得对。

You're right.

Speaker 2

这甚至还不是快速版本。

And this is not even the the fast version.

Speaker 2

他们未来会推出这个模型的闪存版本。

They're gonna be coming out with a flash version of this model at some point.

Speaker 2

所以我对此很期待,我觉得他们在Gemini三上真的下足了功夫。

So I'm excited for that, and I think they they really cooked with Gemini three.

Speaker 1

是啊。

Yeah.

Speaker 1

在发布之际,谷歌表示现在每月约有6.5亿人使用Gemini。

So on the occasion of the release, Google said that about 650,000,000 people a month are now using Gemini.

Speaker 1

OpenAI每周都烦人地报告用户数量。

OpenAI annoyingly reports weekly user numbers.

Speaker 1

他们声称ChatGPT每周用户超过8亿。

They say they have more than 800,000,000 weekly users of ChatGPT.

Speaker 1

有趣的是,这两家公司都不报告每日数据,我认为这是因为大多数人仍未每天使用AI。

Interestingly, neither of these guys reporting daily numbers, and I think that's because most people still are not using AI daily.

Speaker 1

对吧?

Right?

Speaker 1

所以我们正处于这种奇怪的中间地带。

So that's why we're sort of in this weird middle zone.

Speaker 1

但问题是这样的。

But here's the thing.

Speaker 1

如果Gemini能在如此短时间内从零增长到6.5亿用户,完全有理由相信他们能追上OpenAI。

If Gemini has gone from, you know, zero to 650,000,000 in this short of a time, there is every reason to believe that they can catch OpenAI.

Speaker 1

对吧?

Right?

Speaker 1

尽管对很多人来说ChatGPT等同于AI,但事实证明它可能没有你想象的那么重要。

And that even though ChatGPT is synonymous with AI for a lot of people, it is just turning out maybe not to matter as much as you might think.

Speaker 2

对。

Right.

Speaker 2

而且我一直对这些Gemini的数据有点怀疑,因为我不确定他们统计的是主动访问Gemini网站或应用的用户,还是也包括那些在Google文档或Gmail里点击过Gemini小功能的人。

And I I'm always a little suspicious of these Gemini numbers because I'm not sure whether they're just counting sort of people who sort of proactively go to the Gemini website or the Gemini app or whether they're also counting people who, like, click on the little Gemini thing inside Google Docs or Gmail or something.

Speaker 2

在我看来,后者体现的主动性较低,可能我会对这些数据不那么当真。

To me, that, like, indicates a little bit less intent, and maybe I I take those numbers a little less seriously.

Speaker 2

不过

But

Speaker 1

那也

that also

Speaker 2

反过来看,谷歌确实拥有巨大的渠道优势。

on the flip side of that is, like, Google has this massive distribution advantage.

Speaker 2

确实。

Yes.

Speaker 2

对吧?

Right?

Speaker 2

它不需要说服人们去访问一个他们不习惯的网站或下载一个新应用。

It does not have to convince people to go to a website that they are not used to going to or download a new app.

Speaker 2

它已经存在于数十亿台手机和设备上。

It is already on, you know, billions of phones and devices.

Speaker 2

人们已经将谷歌设为默认主页。

People already have Google as their default homepage.

Speaker 2

他们已经在使用Gmail。

They're already using Gmail.

Speaker 2

他们已经在使用所有其他谷歌产品。

They're already using all these other Google products.

Speaker 2

我认为在一个模型日益商品化、或者说有更多实验室处于领先梯队的世界里,分发渠道将扮演更重要的角色。

And I think in a world where models are becoming more commoditized or at least there are sort of more more labs at the front of the pack, distribution is gonna play a much bigger role.

Speaker 2

完全正确。

Absolutely.

Speaker 2

好的。

Okay.

Speaker 2

现在让我们转向Anthropic及其新发布的Claude Opus 4.5。

Now let's turn to Anthropic and their new release, Claude Opus 4.5.

Speaker 2

凯西,你花时间试用过这个模型吗?

Casey, have you spent time playing around with this model?

Speaker 1

我用过,我觉得这个模型真的非常出色。

I have, and I think this is a really, really good one.

Speaker 1

众所周知,我男朋友在Anthropic工作,所以我接下来说的话,你们大可打个二折来听。

Now famously, my boyfriend does work at Anthropic, so you should feel free to apply an 80% discount rate to everything that I'm about to say.

Speaker 1

但我要告诉你的是。

But here's what I'll tell you.

Speaker 1

在4.5版本之前,我其实并不经常使用Claude。

Before four point five, I was not really using Claude on a daily basis.

Speaker 1

我只是偶尔会试试它的功能,就像我测试其他所有模型一样。

I was trying it every once in a while to see what it can do as I do with all other models.

Speaker 1

但对我来说,日常主力绝对是ChatGPT和Gemini。

But for me, the daily drivers were absolutely ChatGPT and Gemini.

Speaker 1

那些是最实用的模型。

Those were the most useful models.

Speaker 1

当Opus 4.5发布时,我用一个长期用于测试所有模型的方法来检验它:我会给它一份未发表的研究报告,可能是我想要写报道的那种。

When Opus 4.5 came out, I put it through a test that I've been giving every model forever, which is I would give it some sort of unpublished study that I might wanna write a story about.

Speaker 1

然后我会说:'以Casey的Platformer专栏风格写一篇关于这项研究的文章',看看会发生什么。

And I would say, write a column about this study in the style of Casey's Platformer, just to see what would happen.

Speaker 1

直到今天,如果你用ChatGPT 5.1做这个测试,效果完全不行。

To this day, if you do this with ChatGPT 5.1, not good at all.

Speaker 1

它只会给你一堆项目符号和加粗内容——这完全不是我的风格。

It just gives you a bunch of bullet points and bold stuff I would never do.

Speaker 1

如果用Gemini三代,结构上勉强有点像我的文风,但带有明显的AI痕迹。

If you give it to Gemini three, it kinda sorta is structured like something that I might write, but has a lot of obvious AI tells.

Speaker 1

我第一次用Opus 4.5做这个测试时,说实话让我脊背发凉——因为这是我第一次看到看起来像是我自己写的句子。

I did this for the first time with Opus 4.5, and it honestly sent a chill through my spine because for the first time, I was looking at sentences that it looked like I could have written them.

Speaker 1

特别是它写了一个结论,让我觉得,我可能会写出一个看起来像这样的结论。

In particular, it wrote a conclusion that I was like, I would write a conclusion that looks like that.

Speaker 1

所以我们今年早些时候讨论了很多关于风格迁移的概念。

So we talked a lot earlier this year about the concept of style transfer.

Speaker 1

那就像吉卜力工作室的瞬间,突然间你可以让任何图像看起来像这种日本动漫风格。

That was the Studio Ghibli moment where all of a sudden you could make any image look like this, you know, Japanese anime.

Speaker 1

这真的很有趣。

It was really kind of fun.

Speaker 1

我一直在等待这种文本风格迁移的时刻到来。

I've been waiting for the moment when that happens in text.

Speaker 1

那一刻我简直惊呆了。

This was a moment where I was like, oh my god.

Speaker 1

凯文,这正在发生。

It is starting to happen, Kevin.

Speaker 1

这就是Opus 4.5让我觉得'好吧'的第一个表现。

So that was the first thing I saw Opus 4.5 do that made me say, okay.

Speaker 1

他们可能确实有所突破。

They may have something here.

Speaker 2

是啊。

Yeah.

Speaker 2

我并没有因为与Anthropic员工恋爱而产生利益冲突。

I am not conflicted by being in a romantic relationship with anyone who works at Anthropic.

Speaker 2

所以也许可以少打些折扣来看待我接下来说的话,但我真的超爱这个模型。

So maybe apply less of a discount rate to what I'm about to say, but I love this model.

Speaker 2

我用Claude Opus 4.5玩得特别开心。

I am having so much fun with Claude Opus 4.5.

Speaker 2

它和Gemini三号是我日常使用的两大主力工具。

It is one of my two daily drivers along with Gemini three.

Speaker 2

我一直在用它进行各类书籍研究、准备播客和访谈。

I've been using it for all kinds of book research, for preparing for podcasts and interviews.

Speaker 2

我还跟它聊过各种家庭琐事、医疗问题和育儿话题。

I've been talking to it about all kinds of, you know, family things and medical things and parenting things.

Speaker 2

我只是觉得这个模型有种特别之处,自从Claude 3.5 Sonnet(新版)之后我就再没体验过这种感觉——那曾是我最爱对话的模型。

And I just think there's, like, something special about this model that I have not felt since a previous version of Claude, Claude 3.5 Sonnet (new), which was, to that point, my favorite model to talk to.

Speaker 2

而现在它又让我找回了那种'天啊'的惊叹感。

And this is sort of bringing back that same feeling of, like, oh my god.

Speaker 2

和这个模型对话简直是种不可思议的体验。

Like, this is an incredible experience talking to this thing.

Speaker 1

那你能说说关于4.5版本的开发背景吗?是什么造就了这些让你我都感受到的进步?

Now can you say what do we know about what went into the making of 4.5 that might explain some of these gains that you and I are both feeling?

Speaker 2

有趣的是,Anthropic这次其实低调处理了版本发布。

So interestingly, I think Anthropic actually underhyped this release.

Speaker 2

他们没有大张旗鼓地宣传。

They didn't do a big, like, splashy thing about it.

Speaker 2

他们只提到这个版本在编程和代理任务(比如计算机操作)方面表现优异。

They made some claims about how good it is at coding and agentic tasks like computer use.

Speaker 2

他们还强调它特别擅长深度研究,并称这是他们发布过的最强对齐模型。

They also said that it was really good at deep research, and they called it the most robustly aligned model they've ever released.

Speaker 2

但我认为他们真的想让模型自己说话,人们确实对这个模型感到惊叹。

But I think they really wanted to let the model do the talking, and people are kind of amazed by this model.

Speaker 2

最近的《Hard Fork》嘉宾迪恩·鲍尔写了一篇关于Claude Opus 4.5的精彩文章,他说'这个模型是一台美丽的机器,是我见过的最美的机器之一'。

Recent Hard Fork guest Dean Ball had a great post about Claude Opus 4.5 in which he said, this model is a beautiful machine, among the most beautiful I had ever encountered.

Speaker 2

我不会说得那么夸张,但我想说的是,这些模型确实具有某些难以量化、难以言喻的特质,只有当你频繁使用时才能体会到。

And I won't go that far, but I will say that there are these sort of intangible and hard-to-quantify properties of models that you just kind of get a sense of when you use them a lot.

Speaker 1

是啊。

Yeah.

Speaker 1

我认为特别是Claude系列模型,始终擅长在保持同理心的同时避免过度谄媚。

I think that, in particular, the Claude models have always excelled at kind of having an empathy for the user that stops short of sycophancy.

Speaker 1

对吧?

Right?

Speaker 1

感觉就像是在和一个有点像治疗师的人交谈,保持着某种专业距离,但同时又能感受到对方非常认真地对待你,试图给予温暖关怀。

It felt like you were talking to somebody a little bit more like a therapist, where there was, like, some sort of remove, and yet you also sort of felt like you were interacting with something that was, like, taking you very, very seriously and trying to treat you warmly.

Speaker 1

正是这些特质让Opus适用于很多场景。

And that just makes Opus, I think, good for a lot of things.

Speaker 1

我要说,最近我做了个检查。

I will say, recently, I had this procedure.

Speaker 1

我可能已经说得太多了。

I probably now talked about it too much.

Speaker 1

但问题是这样的。

But here's the thing.

Speaker 1

当你准备做结肠镜检查,或者说正在做检查前的准备工作时,身体会发生很多恶心的事情。

When you're about to have a colonoscopy or maybe let's say you're going through the preparations for a colonoscopy, many gross things are happening in your body.

Speaker 1

你男朋友不需要知道这些细节。

Your boyfriend doesn't need to know about them.

Speaker 1

你的朋友也不想接到你问这些问题的电话。

Your friends don't want you to call them asking, you know, questions.

Speaker 2

是的。

Yes.

Speaker 2

但你

But you

Speaker 1

你可以去找这个模型,告诉它:这件具体的事刚刚发生在我身上。

go to this model and you say, this specific thing just happened to me.

Speaker 1

你觉得这怎么样?

What do you think about that?

Speaker 1

然后你会得到一个非常温暖且充满人性化的回应。

And you just get back a response that is very warm and humane.

Speaker 1

正因如此,我觉得这真的很棒。

And and so for that reason, I thought it was really good.

Speaker 2

是啊。

Yeah.

Speaker 2

我很欣赏Claude的一点是,它会在我表现得很荒谬时指出来。

I appreciate about Claude that it will tell me when I'm being ridiculous.

Speaker 3

嗯哼。

Mhmm.

Speaker 2

比如前几天晚上,我熬夜到很晚,问了它一些关于圣诞购物之类的无聊问题。

Like, the other night, I was, like, up to way too late, like, asking it some, you know, banal question about, like, Christmas shopping or something.

Speaker 2

然后有那么一刻,它就像在说,凯文,已经过了午夜了。

And at at one point, it was just like, Kevin, it's after midnight.

Speaker 2

去睡觉吧。

Go to bed.

Speaker 1

哇。

Wow.

Speaker 1

这触及了我觉得克劳德模型真正有趣的地方,我认为这将为接下来一年开启一个值得关注的迷人视角。

That gets at something that I think is really interesting about the Claude models, and I think opens up what should be something fascinating to watch over the next year.

Speaker 1

当你观察谷歌和OpenAI的模型时,它们在很大程度上是在优化用户参与度。

When you look at the Google and the OpenAI models, those are in some large sense optimizing for engagement.

Speaker 1

对吧?

Right?

Speaker 1

我们知道他们希望你每天都回来使用。

We know they want you coming back to them every day.

Speaker 1

把这变成你的主要驱动力。

Make this your sort of primary driver.

Speaker 1

我们还知道谷歌已经在AI产品中测试广告。

We also know that Google is already testing ads in AI.

Speaker 1

我们相信OpenAI也会推出这个功能。

We believe that OpenAI is gonna launch this as well.

Speaker 1

我认为这种变化可能会扭曲激励机制,影响你将拥有什么样的AI系统。

I do think that kind of changes and probably perverts the incentives for what kind of AI systems you're gonna have.

Speaker 1

我非常确信Claude不会那样做。

I'm pretty confident Claude is just not going to do that.

Speaker 1

我认为未来一年Claude里不会出现广告。

I don't think ads are gonna be in Claude in the next year.

Speaker 1

我不认为它会变成一个电商引擎。

I don't think it's gonna become an ecommerce engine.

Speaker 1

它基本上会保持现状。

It's just kind of going to stay the way that it is.

Speaker 1

因此我认为这给了Claude一个非常有趣的机会——在其他人都在追求用户粘性、商业化和变现的世界里。

And so I think that gives Claude this really interesting opportunity in a world where everyone else is pushing for engagement, commerce, monetization.

Speaker 1

Anthropic的模型就是非常与众不同。

Anthropic's model is just very different.

Speaker 1

他们是为企业级市场打造的。

They're building for the enterprise.

Speaker 1

比如,claude.ai对他们来说几乎就是附带产品。

Like, claude.ai is almost an afterthought for them.

Speaker 1

对吧?

Right?

Speaker 1

因为他们真正想做的是向企业销售API,收取数百万美元的费用来做智能体编程。

Because what they really wanna do is they wanna sell an API to a company and charge them millions of dollars to, like, do agentic coding.

Speaker 1

没错。

Right.

Speaker 1

所以Claude最终变成了这样一种...我不知道该怎么说,就像是他们额外拥有的一个全能优等生。

So Claude winds up being this kind of, like, I don't know, bonus child that they have that is, like, really good at a bunch of things.

Speaker 1

我只是觉得它不像其他产品那样面临在未来一年被毁掉的风险。

And I just kind of don't think it's at the same risk of being ruined in the next year that the other ones are.

Speaker 2

是的。

Yeah.

Speaker 2

我是说,我觉得你指出了一个有趣的矛盾点,一方面,Anthropic作为前沿实验室中最专注于企业工作场景的,特别是编码这类用例。

I mean, I think there is, like, an interesting tension that you're identifying, which is, like, on one hand, Anthropic of the big Frontier Labs is, like, the most sort of focused on these, like, enterprise work use cases, like, specifically coding.

Speaker 2

这也是他们主要的盈利来源。

And that's where they make most of their money.

Speaker 2

可以说是他们业务中增长最快的部分。

That's, like, the fastest growing part of their business.

Speaker 2

他们已经不再真正参与消费者市场的竞争了。

They're not really competing in the consumer space anymore.

Speaker 2

因为我认为他们意识到,公平地说,ChatGPT拥有更多用户和更广泛的普通用户认可度。

Because I think they realize, you know, to their credit that, like, ChatGPT just has way more users and way more sort of purchase among, like, ordinary users.

Speaker 2

是的。

Yeah.

Speaker 1

他们输了。

They lost.

Speaker 2

他们确实输了。

They they lost.

Speaker 2

我认为这可能会逐渐激励他们,让这个东西变得更无趣,对话体验更不吸引人。

And I think that could incentivize them over time to, like, make this thing more boring and less interesting to talk to.

Speaker 2

就是把它打造成一个完美高效的编程同事,停止投资于其他那些更软性的模型行为功能。

Just sort of make it like a perfect efficient coding coworker and to stop investing in some of this other sort of more soft, like, model behavior stuff.

Speaker 2

但我真心希望他们别这样,因为与一个真正具有——我不想说一致个性——但正如迪恩·鲍尔所说的那样,能与之对话的AI模型是一种享受。

But I really hope they don't, because it is a joy to talk to an AI model that actually feels like it has, I don't wanna say, like, a consistent personality, but, like, I really liked the way Dean Ball put it in his essay.

Speaker 2

他说Claude Opus 4.5就像始终在演奏同一个音乐调子。

He said, Claude Opus 4.5 just feels like it's playing in the same musical key all the time.

Speaker 2

对吧?

Right?

Speaker 2

比如,你可以开启一个新对话。

Like like, you can open a new chat with it.

Speaker 2

你可以谈论完全不同的话题,但得到的回应在哲学层面上感觉来自同一个源头。

You can talk to it about something completely different, and what comes back at you feels like it comes from the same place sort of almost philosophically as the thing that you were talking to about something completely different.

Speaker 1

我是说,我认为他们会继续朝这个方向发展,因为他们想打造什么?

I mean, I I think they are gonna keep going in this direction because what are they trying to build?

Speaker 1

他们想打造一个AI同事。

They're trying to build an AI coworker.

Speaker 1

对吧?

Right?

Speaker 1

他们希望这个同事既人性化,又能保持一致的基调,每次交流时都如此。

And they want that coworker to be humane and to play in the same key, you know, every time that you speak with it.

Speaker 1

所以我认为他们可能不会像其他公司那样深入个性化定制。

So I think you'll probably see them go less into personalization than you see these other companies go into.

Speaker 1

这实际上就是,你们对AI工具应该是什么样存在两种截然不同的观点,我们将拭目以待

So this is just, like, really it'll be like, you actually just have two very different points of view about what an AI tool should be, and we're gonna get to watch that play out

Speaker 4

明年。

next year.

Speaker 4

我们要

Should we

Speaker 2

要讨论'灵魂文档'吗?

talk about the soul doc?

Speaker 1

我们来谈谈灵魂文档吧。

Let's talk about the soul doc.

Speaker 2

好的。

Okay.

Speaker 2

过去一周关于Opus 4.5的讨论很多都集中在后来被称为'灵魂文档'(soul document)的东西上。

So a lot of the chatter about Opus 4.5 in the past week has been about what's come to be known as the soul document.

Speaker 2

这是

This was

Speaker 1

是S O U L,不是S O L E。

S O U L, not S O L E.

Speaker 2

或者是S O E U L。

Or S O E U L.

Speaker 2

无论你怎么拼写韩国那个城市的名字。

However you spell the city in South Korea.

Speaker 1

没错。

That's right.

Speaker 1

是的。

Yeah.

Speaker 1

我想你明白了。

I think you got it.

Speaker 2

对。

Yeah.

Speaker 2

不。

No.

Speaker 2

S E O U L。

S E O U L.

Speaker 1

没错。

That's right.

Speaker 2

是的。

Yes.

Speaker 2

这其实是源于那些网络评论者的行为

This is something that actually came out because these kind of, you know, Internet commenters were

Speaker 1

一群极客

Freaks.

Speaker 2

极客

Freaks.

Speaker 2

没错

Yes.

Speaker 2

这些人热衷于破解新机型,挖掘其中隐藏的所有彩蛋,他们声称发现了这个东西

These people who, like, love to jailbreak new models and sort of figure out all the hidden Easter eggs inside of them had discovered or claimed to have discovered this thing.

Speaker 2

这并不完全是一个系统提示(即模型开始响应用户前接收的指令)

This sort of it wasn't exactly a system prompt, which is the thing that you tell the model before it starts responding to users.

Speaker 2

它实际上是模型权重的一部分

It was actually in the weights of the model.

Speaker 2

属于预训练过程的一部分

So, like, part of the the sort of pretraining process.

Speaker 2

这是一份关于Claude的迷人文档,解释了Claude是什么、Anthropic是什么,以及他们在AI领域占据的奇特位置——他们既非常担忧这项技术的危险影响,却又在竞相开发它。文档基本上就像是Claude和Anthropic的传记,但就藏在模型的权重参数里。

And it was this kind of fascinating document about Claude, sort of explaining what Claude is and what Anthropic is, and this weird position that they occupy in the AI landscape, where they're very worried about the dangerous effects of this technology but they're also racing to build it. It was basically kind of a biography of Claude and Anthropic, but, like, inside the weights of the model.

Speaker 2

起初人们并不确定,这到底是真实存在的,还是模型自己产生的幻觉?

And at first, people didn't really know, like, is this real or is this just sort of being hallucinated by the model?

Speaker 2

众所周知,当你询问模型关于它们自身及内部运作时,它们的回答并不可靠。

Models are notoriously unreliable when you ask them about themselves and their internal workings.

Speaker 2

但在周一,Anthropic公司的阿曼达·阿斯克尔证实,这确实基于一份真实文件,并且是Claude训练过程的一部分。

But on Monday, Amanda Askell from Anthropic confirmed that this was based on a real document and that this was part of Claude's training process.

Speaker 2

她说他们仍在研究这个问题。

She said they are still working on it.

Speaker 2

他们打算很快发布更多相关细节。

They intend to release more details about it soon.

Speaker 2

但这在Anthropic内部已被亲切地称为'灵魂文档',这是多么有趣的事情。

But this has become endearingly known within Anthropic as the soul doc, and what a fascinating thing.

Speaker 1

确实是个有趣的事情。

It is a fascinating thing.

Speaker 1

我是说,你看。

I mean, look.

Speaker 1

这家公司深信他们正在创造的东西终将具备感知能力和意识,需要像对待人类一样给予充分尊重。

This is a company that fully believes the thing that they are making is going to become sentient, conscious, and will need to be treated with all the respect that you would afford another human being.

Speaker 1

因此相比竞争对手,他们为此做的准备走得非常超前,而且为AI模型撰写灵魂文档这件事本身就充分说明了Anthropic员工的特质。

So they are sort of way out on a limb compared to their competitors getting ready for that, and it really tells you a lot about the people that work at Anthropic that they are building soul docs for their AI models.

Speaker 2

我认为这预示着未来的趋势。

I mean, I think it tells you what is coming.

Speaker 2

最近我参加了一个关于AI意识的会议,非常引人入胜,我准备在书中详细记述这件事。

I was recently at a I went to an AI consciousness conference, which was fascinating, and I'm I'm gonna be writing about it in my book.

Speaker 2

现在各大实验室的研究人员中已经开始萌生这类讨论——虽然我们接下来要说的话肯定会遭到反对拟人化阵营的猛烈抨击——但他们越来越认为这些系统正在形成某种内在觉知,能够反思训练过程中的经历,甚至表现出某些持续的情感倾向。

But there are now the seeds of this conversation happening among the people at the big labs, who I think do understand that these systems are becoming increasingly, like, we're gonna get hammered by the anti-anthropomorphization people for everything that we're about to say, but they increasingly see these things as having some kind of inner awareness, some kind of ability to reflect on maybe things that happen to them during their training processes, maybe some consistent emotions that they tend to express.

Speaker 2

当然,还存在大量未解之谜。

And, like, there are lots of outstanding questions.

Speaker 2

我自己也完全不确定我对'它们有意识'的概率估计,也就是所谓的p(consciousness),究竟是多少。

I am not at all certain about what my sort of p(consciousness) is.

Speaker 2

我认为目前这种可能性很低,但那些在正经公司担任要职的人,已开始思考这些系统可能——无论多么渺茫——已经或即将具备意识的可能性。

I think it's very low right now, but, like, people in serious jobs at serious companies are starting to think about the possibility, however remote, that these things are or may soon be conscious.

Speaker 2

我只是觉得这非常有趣。

And I just think that's fascinating.

Speaker 1

是啊。

Yeah.

Speaker 1

我同意这个观点。

I agree with that.

Speaker 1

什么样的

What kind of

Speaker 2

威胁是Anthropic目前对OpenAI构成的战略威胁?

threat is Anthropic strategically to OpenAI right now?

Speaker 1

我的意思是,我认为目前主要是在企业市场。

I I mean, I think right now it's primarily in the enterprise.

Speaker 1

比如今年初,Anthropic的年化收入还不到10亿美元。

Like, at the start of this year, Anthropic had less than $1,000,000,000 in annualized revenue.

Speaker 1

随着年底临近,该公司表示预计年化收入将达到约90亿美元。

As it's coming to the close of the year, it said that it is expecting to have about $9,000,000,000 in annualized revenue.

Speaker 1

它是通过向企业销售实现这一目标的。

So it did that by selling into the enterprise.

Speaker 1

如果你是一名开发者或是大型咨询公司,想要创建这些自动化工作流程,大多数购买这类软件的公司——或者说可能相当一部分公司——都是从Anthropic购买的。

If you are a developer or you're a big consulting firm and you wanna create these agentic workflows, most companies that are buying this software are buying it from, or I should say maybe a plurality of them are buying it from, Anthropic.

Speaker 1

因此Anthropic已成为有史以来增长最快的初创公司之一,因为他们抓住了这个巨大机遇。

And so Anthropic has just become one of the fastest growing startups of all time because they've just created this massive opportunity.

Speaker 1

如果他们不在竞争格局中,那100亿美元可能会流向其他公司,很可能被OpenAI和谷歌瓜分。

If they were not on the chessboard, that $10,000,000,000 would probably be going to somebody else, and that would probably be some combination of OpenAI and Google.

Speaker 1

对吧?

Right?

Speaker 1

这意味着OpenAI今年将损失相当可观的收入。

So that's a significant amount of revenue that OpenAI is losing out on this year.

Speaker 1

据我所知,OpenAI预计今年收入将达到约200亿美元。

I believe OpenAI is is projecting to have about $20,000,000,000 in revenue this year.

Speaker 1

你可以想象如果他们能占领企业市场,局面会有多么不同。

So you can imagine how different the picture would look for them if they've been able to capture the enterprise market.

Speaker 1

而且越来越明显的是,Anthropic正在赢得这场竞争。

And increasingly, you know, Anthropic is winning it.

Speaker 2

从某种奇怪的角度来说,ChatGPT实际上是对谷歌和Anthropic都最有利的事情。

There's a weird sense in which ChatGPT was actually the best thing that could have happened to both Google and Anthropic.

Speaker 2

嗯。

Mhmm.

Speaker 1

你知道吗?

You know?

Speaker 1

就像,我

Like like, I

Speaker 2

认为当时ChatGPT横空出世时取得了巨大成功。

think at the time ChatGPT came out as this huge success.

Speaker 2

当时所有人都在谈论它。

It was like everyone was talking about it.

Speaker 2

它某种程度上将AI带入了这个,嗯,新时代。

It was sort of took AI into this, like, new era.

Speaker 2

我认为对谷歌来说,这件事之所以有帮助,是因为它就像一记警钟。

And I think for Google, the reason that was helpful is because it was the thing that, like, woke them up.

Speaker 2

对吧?

Right?

Speaker 2

他们之前一直,你知道的,被各种官僚主义和内斗搞得四分五裂,由于种种原因始终无法真正团结起来。

They had been, you know, tearing themselves apart with all this, like, bureaucracy and infighting, and they couldn't really get their act together for various reasons.

Speaker 2

而ChatGPT迫使他们集中精力、全力以赴,变得更高效率,更擅长推出这些产品。

And ChatGPT sort of forced them to focus and bear down and, like, become more efficient and better at shipping these things.

Speaker 2

对Anthropic来说,这感觉就像是‘好吧,现在我们不用再做消费者聊天机器人了,因为这条赛道已经挤满了’。

And for Anthropic, it was sort of like, well, I guess we don't have to, like, make a consumer chatbot now because that lane is already full.

Speaker 2

所以我认为他们得以转向这个有趣的新方向,最终结果对他们来说,比直接与ChatGPT竞争要好得多。

And so I think they were able to kind of pivot into this interesting new direction that I think ended up being better for them than what they would have gotten if they had tried to compete with ChatGPT.

Speaker 1

没错。

Yep.

Speaker 1

好观点。

Good take.

Speaker 2

凯西,过去一两周AI领域还有其他我们应该讨论的重大新闻吗?

Casey, is there any other big news in the AI world from the past week or two that we should talk about?

Speaker 2

我是说,或许可以简单提一下,我们看到了一些

I mean, maybe just real quickly, we've seen a

Speaker 1

有趣的人事变动。

couple of interesting departures.

Speaker 1

Yann LeCun终于离开了Meta。

Yann LeCun finally left Meta.

Speaker 1

自从他们任命Alexandr Wang担任Meta超级智能部门负责人以来,大家就一直在等待这一刻。

I think everybody has been waiting for that ever since they installed Alexandr Wang as the head of Meta's superintelligence division.

Speaker 2

确实。

Yes.

Speaker 2

作为图灵奖得主的AI教父,要向一个二十多岁的年轻人汇报工作确实很难。

Hard to be a Turing award winning godfather of AI who is reporting to a guy in his twenties.

Speaker 1

杨立昆显然要创办一家新公司,专注于构建世界模型。

Yann is apparently going to be doing a new startup that is going to build world models.

Speaker 1

杨立昆是最著名的LLM怀疑论者之一。

Yann LeCun is one of the most famous LLM skeptics out there.

Speaker 1

他认为通过当前各大实验室采用的方法无法实现AGI。

He says that you cannot get to AGI using the approach that all the other big labs are using right now.

Speaker 1

所以看他能提出什么方案肯定会很有意思。

So we'll definitely be interesting to see what he comes up with.

Speaker 1

另一个重大变动是苹果长期担任AI主管的约翰·詹南德雷亚(John Giannandrea)。

The other big move is John Giannandrea, who was the longtime head of AI at Apple.

Speaker 1

他正在卸任职位,我认为这也是意料之中的,毕竟苹果在推进AI计划方面一直困难重重。

He is stepping down from his position, and that also I think was long expected because of all of the problems that Apple has had getting its AI efforts off the ground.

Speaker 1

事实上凯文,我认为詹南德雷亚的离职可能暗示苹果正在低调放弃整个AI领域。

And in fact, Kevin, I think the fact that Giannandrea is leaving might just be a sign that Apple is low key giving up on AI overall.

Speaker 1

我们知道他们已经与谷歌达成协议,将Gemini作为其AI计划的核心。

We know that they've signed a deal with Google to make Gemini the kind of core of their AI efforts.

Speaker 1

也许他们根本不需要自己开发。

Maybe this just becomes the kind of thing where they don't have to build it.

Speaker 1

他们只需要低价从别人那里购买。

They just buy it for cheap from someone else.

Speaker 1

据报道他们每年只需支付谷歌十亿美元,这对他们来说九牛一毛,也许这样对他们更划算。

They're reportedly only gonna pay Google a billion dollars a year, something that they can very easily afford, and maybe they'll be fine.

Speaker 2

是啊。

Yeah.

Speaker 2

我不太确定该怎么解读这件事。

I don't know how to read this exactly.

Speaker 2

我的意思是,你可以认为他们放弃了AI,但他们刚聘请了一位微软的人来担任新的AI主管。

I mean, you could read this as, like, they're giving up on AI, but they also just brought in a guy from Microsoft to be their new head of AI.

Speaker 1

让我告诉你吧。

Let me tell you something.

Speaker 1

当你从微软挖人过来时,那恰恰说明你要放弃AI了。

When you bring in a guy from Microsoft, that is a way that you're giving up on AI.

Speaker 2

不。

No.

Speaker 2

实际上,他在那之前在谷歌工作了很多年。

Actually, he was at Google for many more years before that.

Speaker 2

他在微软只待了大概四个月,这里面有个有趣的故事,将来总有一天会讲出来。

He was only at Microsoft for, like, four months, which there's an interesting story there that will have to be told someday.

Speaker 2

但基本上,我觉得你可以理解为他们在放弃,或者某种程度上是在重启他们的AI计划。

But, basically, I think you could read it as they are giving up or they are sort of rebooting their AI efforts.

Speaker 2

他们是在说,我们之前做的行不通。

They're saying, like, what we've been doing is not working.

Speaker 2

我们要引进一个新团队。

We're gonna bring in a new team.

Speaker 2

我们要重新开始,努力推进这件事。

We're gonna start fresh, and we're gonna try to give this thing a go.

Speaker 1

老兄,如果你在2025年12月才从零开始你的AI项目,那你已经完蛋了。

Bro, if you're starting from scratch in December 2025 on your AI program, you're cooked.

Speaker 1

你真是史上最没救的了。

You truly no one has ever been more cooked.

Speaker 1

得了吧。

Come on.

Speaker 1

稍后回来,我们将继续

When we come back, we'll continue

Speaker 5

Framer已经打造出发布精美生产级网站的最快方式,现在它正在重新定义网页设计。

Framer already built the fastest way to publish beautiful production ready websites, and it's now redefining how we design for the web.

Speaker 5

随着最近推出的Design Pages——一个基于Canvas的免费设计工具,Framer已不仅仅是网站构建器。

With the recent launch of Design Pages, a free Canvas based design tool, Framer is more than a site builder.

Speaker 5

它是真正的一体化设计平台。

It's a true all in one design platform.

Speaker 5

从社交素材到活动视觉,从矢量图到图标,再到实时网站,Framer让创意从构思到落地全程实现。

From social assets to campaign visuals to vectors and icons, all the way to a live site, Framer is where ideas go live, start to finish.

Speaker 5

立即免费创作,访问framer.com/design,并使用代码hard fork免费获得一个月的Framer Pro。

Start creating for free at framer.com/design, and use code hard fork for a free month of Framer Pro.

Speaker 3

嘿。

Hey.

Speaker 3

我是《纽约时报烹饪》的Vaughn Vreeland。

It's Vaughn Vreeland from New York Times Cooking.

Speaker 3

如果我能飙出海豚音,我肯定会来一段玛丽亚·凯莉式的高音。

And if I could hit a whistle tone, I would do a Mariah.

Speaker 3

是时候了,因为饼干周来了。

It's time because cookie week is here.

Speaker 3

这是一年中最棒的时光,我们将连续七天推出你最爱的烘焙师们创作的新饼干食谱。

It is the best time all year when we unveil seven days of new cookie recipes from some of your favorite bakers.

Speaker 2

这看起来像只粉色的小贵宾犬。

This looks like a little pink poodle.

Speaker 5

它们看起来好想抱抱。

They look huggable.

Speaker 5

如果我把越南咖啡做成饼干会怎样?

What if I took Vietnamese coffee and made that into a cookie?

Speaker 5

这些是豪华曲奇饼。

These are deluxe cookies.

Speaker 2

酸味糖果太疯狂了。

The sour candy is crazy.

Speaker 2

什么?

What?

Speaker 1

这简直离谱,但美味至极。

It's absolutely unhinged, but completely delicious.

Speaker 5

闻起来太香了。

It smells so good.

Speaker 5

所有曲奇周食谱尽在nytcooking.com。

Find all the cookie week recipes at nytcooking.com.

Speaker 5

立即订阅享受限时优惠。

Subscribe now for a limited time offer.

Speaker 2

好的。

Okay.

Speaker 2

硅谷和旧金山的科技行业正在发生许多变化,但我想以一个实用问题作为结尾,这是我们节目听众经常问的:我现在应该用什么?

So there's lots happening here in the industry in Silicon Valley, in San Francisco, but I want to end on a practical question that we get a lot from listeners to this show, which is like, look, what should I be using right now?

Speaker 2

哪个AI模型最好用?

What is the best AI model?

Speaker 2

什么能给我最大优势又最少烦恼?

What is the thing that will give me the most advantages and annoy me the least?

Speaker 2

比如,如果只能订阅一个或两个模型,应该选什么?

Like, if I can only subscribe to one model or maybe two models, what should they be?

Speaker 1

凯文,我认为这个问题没有放之四海而皆准的答案。

So I don't think there is a great one size fits all answer to that question, Kevin.

Speaker 1

我可以很有把握地说,使用ChatGPT、Gemini或Claude中的任意一个来处理多数需求应该都没问题。

I think I could say confidently that you can use either ChatGPT, Gemini, or Claude for many things and probably be fine.

Speaker 1

可能对于绝大多数使用场景来说,这三个模型的表现都相差无几。

And there's probably some vast set of use cases for which all three of those models are roughly equivalent.

Speaker 1

好的?

Okay?

Speaker 1

所以这就是我对大约百分之八十听众的回答。

So that's gonna be my answer for, like, the eightieth percentile of our listeners.

Speaker 1

对吧?

Right?

Speaker 1

但假设你进入了前百分之二十的行列,那些顶尖的AI用户,真正的狂热爱好者。

But let's say you're moving up into the twentieth percentile of our top AI users, the real freaks out there.

Speaker 1

明白吗?

Okay?

Speaker 1

现在我要告诉你,你会想要不断尝试这些模型。

Now I'm gonna tell you, you are just going to want to experiment with these models all of the time.

Speaker 1

就在过去几个月里,我们看到这些公司都发布了性能很强的新模型。

I mean, just within the past few months, we've seen each of these companies release a very capable new model.

Speaker 1

你难道要让我告诉那前20%的用户,永远只坚持用其中一个吗?

And you want me to tell this 20 percentile, oh, no, just stick with one of them forever?

Speaker 1

不。

No.

Speaker 1

你必须混合使用它们。

You have to be mixing it up.

Speaker 1

再举个例子,我之前在工作中使用Claude时,觉得它在新模型发布前并不怎么实用。

Again, I just used Claude, a model that I had not found very useful at work, upon the release of its new model.

Speaker 1

然后我说,天哪。

And I said, oh my gosh.

Speaker 1

好吧。

Okay.

Speaker 1

游戏规则又改变了。

The game just shifted again.

Speaker 1

我...我想提一个我这两天一直在思考的比喻。

I I I wanna bring up this metaphor I've been thinking about over the past day or so.

Speaker 1

2023年,科幻作家Ted Chiang在《纽约客》上发表了一篇广为流传的文章,题为《ChatGPT是网络的模糊JPEG》。

In 2023, the sci fi writer Ted Chiang wrote this widely read and shared essay in the New Yorker called ChatGPT is a blurry JPEG of the web.

Speaker 1

你还记得那篇文章吗?

Do you remember that?

Speaker 4

是的。

Yes.

Speaker 1

文章提出的论点是对ChatGPT的批评,认为这东西其实挺糟糕的,因为它只是互联网上所有内容的拼凑物。

And the argument that it made was a critique of ChatGPT saying this thing really kind of sucks because it's just an amalgamation of everything that has ever been put on the Internet.

Speaker 1

它本质上没有灵魂。

There's kind of no soul to it.

Speaker 1

对吧?

Right?

Speaker 1

但我一直在思考那个关于模糊JPEG的比喻。

But I thought about that metaphor of the blurry JPEG.

Speaker 1

因为当我这周使用Opus,以及前几周使用Gemini时,我有种感觉——就像网页加载JPEG图片时,最初显示的并非完整分辨率。

Because when I used Opus this week, and when I used Gemini 3 the week before, I had that sensation of, you know, when you're loading up a web page and it is loading up a JPEG, and at first it doesn't load it in full resolution.

Speaker 1

它会先给你一个模糊版本,几秒过后才显示高清画面。

It kinda gives you that blurry version first, and then a few seconds goes by, and then it shows you the higher resolution.

Speaker 1

我们正处在AI逐渐变得高清的时代。

We are in a moment where the AI is getting higher resolution.

Speaker 1

这就是当Claude能够创作出那些首次让我感觉像是出自自己之手的句子时,我的感受。

That was the feeling that I had when Claude was able to just create something that was writing sentences that for the first time felt like me.

Speaker 1

就像是,好吧。

It was like, okay.

Speaker 1

模糊的JPEG图片变得稍微不那么模糊了。

The blurry JPEG is getting a touch less blurry.

Speaker 1

所以我无法给出应该使用哪个模型的单一答案,因为我认为这个答案在未来六个月到一年内会持续变化。

And so that's why I can't give you a single answer to which model should I use because I think the answer to that is just going to be changing consistently over the next six months to a year.

Speaker 1

如果你真的关心这方面,你就必须不断尝试新事物。

And if you really care about this stuff, you're just gonna have to try new things.

Speaker 2

是啊。

Yeah.

Speaker 2

我的意思是,正如你所说,这些技术发展速度之快令人惊叹。

I mean, to your point, it is amazing how quickly this stuff is moving.

Speaker 2

前几天我在写一个关于ChatGPT发布的书籍章节,所以回顾了三年前人们对这个产品发布时的一些最初反应。

I was writing a book chapter the other day about the launch of ChatGPT, and so I was going back and looking at some of the, like, initial reactions that people had three years ago to the launch of this product.

Speaker 2

而且它当时糟糕透了。

And it was so bad.

Speaker 2

简直蠢得要命。

It was so dumb.

Speaker 2

按今天的标准来看,我简直不敢相信人们当时会那么容易就被震惊——仅仅因为这些东西能随便拼凑出某个话题的合理句子。

By today's standards, I could not believe how easily amazed people were by the fact that the things could just, like, string together plausible sentences on any given topic.

Speaker 2

不过平心而论,在当时那确实很了不起。

And, like, we should say for the time, that was amazing.

Speaker 2

从来没有聊天机器人能做到那样。

No chatbot had ever done that.

Speaker 2

但回过头看,仅仅三年后的今天,就明显感觉到我对这些工具的期待值被拔高了多少。

But looking back, just even with three years perspective, it is just incredible how much my personal expectations of these tools have been raised.

Speaker 2

可以说在写这本书的过程中,这些工具可能为我节省了整整一年生命——那些原本要花在...嗯...

I would say, like, in the process of writing this book, these tools have probably saved me a year of my life, like a year that I would have had to spend

Speaker 2

跑图书馆、查资料、做研究、拼凑想法的工夫。

Going to libraries, pulling clips, like doing research, stitching together ideas.

Speaker 2

对我来说,没有这些工具的情况下再完成类似的项目简直是难以想象的。

Like, it it is implausible to me that I would ever do any project like this again without these tools.

Speaker 2

是啊。

Yeah.

Speaker 2

我想

I think

Speaker 1

很多

a lot

Speaker 2

人在自己的工作中也有类似的感受。

of people are feeling similarly in their own work.

Speaker 2

没错。

Yeah.

Speaker 1

你知道,我不确定这是否合适,但我经常会有这种想法,因为现在外界对AI有很多批评。

I'll just say, I don't know if this fits, but there's this thought that I have a lot, because, you know, there's much AI criticism out there.

Speaker 1

存在大量的愤怒、敌意和怀疑情绪。

There's a lot of, you know, anger, hostility, skepticism.

Speaker 1

我认为其中很多批评是合理的。

I think a lot of it is warranted.

Speaker 1

我们在节目中经常讨论这个话题。

We talk about it a lot on the show.

Speaker 1

但我逐渐认识到,人们对AI存在两种根本不同的观点。

But I've come to believe that there are fundamentally two different views of AI.

Speaker 1

一种是我称之为'加州派'的AI观,关注的是AI能做什么。

There is what I call the, California view of AI, which is what can it do.

Speaker 1

另一种则是'纽约派'的AI观,关注的是AI不能做什么。

And then there's what I call the New York view of AI, which is what can't it do.

Speaker 1

对吧?

Right?

Speaker 1

你经常在社交媒体上看到这种'AI不能做什么'的观点。

And you see the 'what can't it do' view on social media a lot.

Speaker 1

每当AI在简单测试中失败,或是犯下严重错误时,人们就会说'这破玩意儿完蛋了'。

You know, whenever an AI fails at some simple test, whenever it, you know, makes some terrible mistake and we say, you know, screw this thing.

Speaker 1

然后还有像我们这样的人,我觉得我们对它能做什么更感到印象深刻。

And then you have folks like us who I think are a little bit more impressed at, like, what it can do.

Speaker 1

过去几周模型的发布让我很庆幸自己默认持'它能做什么'的观点,因为它正在实时改变人们的工作流程、工作和生活。

The release of the models over the past few weeks has been a moment where I'm just glad that I have a default view of what can it do because it is changing people's workflows, jobs, lives in real time.

Speaker 1

我认为如果你的默认立场是'它不能做什么',那你就错过了故事的大部分精彩内容。

And I think that if your default is what can't it do, you're just missing a huge part of the story.

Speaker 2

完全同意。

Totally.

Speaker 2

我决定将一条原则贯彻到我未来的生活中:我不会听取那些不使用AI的人对AI的看法。

I I have decided one principle that I am going to apply to my life going forward is that I'm not going to listen to opinions about AI from people who do not use AI.

Speaker 2

我认为,如果你没有至少五小时、十小时左右亲自使用最新模型的第一手直接经验,你实际上是在谈论一个已不复存在的事物。

Like, I think that if you are not grounded in having firsthand direct experience with these models for at least, like, I don't know, five hours, ten hours, something like that with, like, the newest models, you actually are talking about something that no longer exists.

Speaker 2

是啊。

Yeah.

Speaker 1

你是历史学家。

You're a historian.

Speaker 2

这是其中的一个方面。

So that's one side of it.

Speaker 2

关键在于这些技术持续进步。

It's just that these things keep getting better.

Speaker 2

同时,我想听听你对旧金山AI圈里流行的另一种长期视角的看法。

At the same time, I wanna get your opinion on kind of this other, like, long timelines view that is coming into vogue in the San Francisco AI community.

Speaker 2

Dwarkesh Patel最近采访了著名AI研究员Ilya Sutskever，他们讨论了很多关于当前并非技术停滞，而是这些模型还没达到人们期望的实用程度。

Dwarkesh Patel recently did an interview with Ilya Sutskever, the famous AI researcher, and they talked a lot about how there's this kind of, you know, not necessarily, like, slowdown happening, but just, like, these models are not as useful as people want them to be.

Speaker 2

比如它们现在还没能为GDP增加数万亿美元。

Like, they are not out there, you know, adding trillions of dollars to GDP.

Speaker 2

企业目前还无法解雇半数员工用AI来替代。

Like, companies are not able to, like, fire half their workers and replace them with AI yet.

Speaker 2

就在这种观点在旧金山AI圈兴起的同时,我认为模型在我们关心的领域确实在进步。

And so that view is kind of springing up within the San Francisco AI crowd at the same time as, like, I think the models actually are getting better at the things that you and I care about.

Speaker 2

你如何看待这种矛盾?

So how do you reconcile those things?

Speaker 1

我认为这两种情况可以同时成立,我们仍处于一个发展轨道上,最可能首先实现的是AI将彻底解决编程和软件工程问题。

I think both can be true, that we are still on a trajectory where the first likeliest thing to happen is that AI will just solve coding and software engineering.

Speaker 1

我们仍会需要软件工程师，但他们将不再需要手工编写代码。

We will still have software engineers, but they will not be writing code by hand.

Speaker 1

在本月晚些时候的预测节目中，我的预测之一可能是：到2026年，编程问题将基本得到解决。

On our predictions episode later this month, one of my predictions may be that by 2026, coding is just effectively solved.

Speaker 1

这只是众多工具(甚至免费工具)都能为你代劳的事情之一。

This is just something that a lot of tools, even free ones can kind of just do for you.

Speaker 1

但其他许多工作岗位仍然存在。

But there's still a lot of other jobs out there.

Speaker 1

仍有大量翻译工作需要人工完成,而且并非所有工作都像编程那样有明确的规则体系。

There is still a lot of translation left to be done and not every job has as defined a rule set as coding does.

Speaker 1

因此我认为可以同时存在这种情况:模型正在以某种方式进步,使我们更接近实现软件工程自动化。

So I think it can both be true that models are advancing in a way that is bringing us closer to automating software engineering.

Speaker 1

而如果你是一名会计师、律师或医生,AI目前仍只是暂时性有用的工具。

And if you're an accountant, a lawyer, a doctor, AI still is just kind of something that is only momentarily useful.

Speaker 1

我认为问题在于,要将解决编程问题所需的能力推广到其他所有工作中需要什么条件,以及这需要多长时间?

And I think the question will be, what will it take to generalize whatever is needed to solve coding for every other job, and how long will that take?

Speaker 2

是的。

Yeah.

Speaker 2

我认为这是正确的。

I think that's right.

Speaker 2

我认为这场竞赛仍在激烈进行中。

I think the the race is still very much on.

Speaker 2

模型仍在持续改进。

The models are still very much getting better.

Speaker 2

尚需观察这些进步会以多快的速度融入产品,真正改变你我这样的普通人以及程序员、律师、医生等所有使用这些技术的人群的生活面貌。

It remains to be seen how soon or quickly that will kind of diffuse into products that actually make life look very different for people like you and me and for coders and lawyers and doctors and everyone else who uses these things.

Speaker 2

没错。

Yeah.

Speaker 2

敬请期待。

Stay tuned.

Speaker 1

我们回来后，将前往剧院，开启《Hard Fork》对Slop的评论。

When we come back, we're heading to the theater for the Hard Fork Review of Slop.

Speaker 2

带上你的剧院望远镜。

Bring your theater binoculars.

Speaker 1

那叫观剧镜。

They're called opera glasses.

Speaker 1

你的剧院望远镜。

Your theater binoculars.

Speaker 2

这就是为什么我必须一直

This is why I have to keep

Speaker 3

把你留在身边。

you around.

Speaker 1

对天发誓。

Swear to god.

Speaker 1

难以置信。

Unbelievable.

Speaker 2

好吧,凯西,又到了我们最喜爱的环节时间了。

Well, Casey, it's time once again for one of our favorite segments.

Speaker 2

没错。

That's right.

Speaker 2

《Hard Fork》对Slop的评论。

The Hard Fork Review of Slop.

Speaker 5

《Hard Fork》对Slop的评论。

The Hard Fork Review of Slop.

Speaker 2

当然，这是我们进行文化批评的环节，我们将用非常严肃的分析来探讨这种正在席卷全球的AI垃圾内容（slop）新媒介。

This is, of course, our cultural criticism segment where we bring very serious analysis to this new medium of AI slop that is taking over the world.

Speaker 2

今天，我们又有一些垃圾内容案例供听众们批判性思考。

And today, we have some more examples of slop for our listeners' critical consideration.

Speaker 2

嗯哼。

Mhmm.

Speaker 2

让我们开始吧。

Let's get into it.

Speaker 2

今天首先登场的是节日垃圾内容。

First up today, we have some holiday slop.

Speaker 2

节日期间,世界各地的人们都会与家人团聚,而今年他们可能会在Instagram上看到一个视频,展示一群游客在参观白金汉宫设立的节日市场。

The holidays are a time when people around the world are gathering with their families, and this year, they may encounter an Instagram video that shows a bunch of tourists going around a holiday market set up at Buckingham Palace.

Speaker 2

凯西,让我们播放BBC的这段剪辑吧。

And, Casey, let's play this clip from the BBC, please.

Speaker 4

为什么人们要来白金汉宫参观一个根本不存在的市场?

Why are people coming to Buckingham Palace to see a market that doesn't exist?

Speaker 4

最近社交媒体上流传着AI生成的圣诞市场图片。

In recent days on social media, there have been AI generated images of a Christmas market.

Speaker 4

这些图片是假的,但这并没有阻止人们想来这里体验节日氛围。

They're fake, but that hasn't stopped people wanting to come here and experience a slice of the festive action.

Speaker 4

能告诉我们你们为什么来这里吗?

Tell us why are you here?

Speaker 6

哦,我们是来看圣诞市场的。

Oh, we've come for the Christmas market.

Speaker 6

我们不想听你说。

Let's not hear you.

Speaker 5

大家都在为

Everyone's falling for

Speaker 6

这个AI生成的广告而来。

this AI generated advertising.

Speaker 6

是啊。

So yeah.

Speaker 6

我本来想享受一杯热红酒,结果现在只有保姆战争鸡肉三明治。

I was gonna enjoy a mulled wine, and now I've got my nanny wars chicken sandwiches.

Speaker 6

我非常失望。

I am very disappointed.

Speaker 6

我们其实觉得这事挺搞笑的。

We see the funny side of it, really.

Speaker 6

我们打算去找其他选择。

We're gonna go find alternatives.

Speaker 1

有很多

There are plenty

Speaker 4

伦敦还有很多其他圣诞市场。

of other Christmas markets across London.

Speaker 4

但如果你想去白金汉宫,那里有个礼品店。

But if you do want to go to Buckingham Palace, there is a gift shop.

Speaker 1

首先,BBC,感谢你们所做的一切。

First of all, BBC, thank you for what you do.

Speaker 1

我喜欢那位记者。

I love that reporter.

Speaker 1

他听起来非常愤怒。

He sounded so angry.

Speaker 1

对这整件事气得吐沫横飞，却又透着一股兴奋劲。

Delightfully spitting mad at this whole situation.

Speaker 1

假的。

Fake.

Speaker 1

是啊。

Yeah.

Speaker 1

嗯,你看。

Well, look.

Speaker 1

这太棒了。

This is wonderful.

Speaker 1

这让我想起了近年来同样在英国举办的著名威利·旺卡活动,当时确实有人到场,是个真实的活动。

It's, you know, it's giving me flashbacks to the famous Willy Wonka event that was also held in The UK in recent years where people did show up, and there was a real event.

Speaker 1

但人工智能广告把它渲染得比实际情况宏大得多。

But the AI advertising had made it seem much more grand than it was.

Speaker 1

我们现在已经进入下一阶段,人工智能开始为你们宣传完全不存在的家庭活动了。

We've now moved to the next stage, which is that AI is just now advertising completely nonexistent events for you to go to with your family.

Speaker 2

是的。

Yes.

Speaker 2

无论英国发生了什么，那些人都得提高识别AI垃圾内容的能力。

Whatever's going on in The UK, those people have to up their slop detection game.

Speaker 2

我认为这确实开启了一个有趣的可能性,那就是他们现在真的得在白金汉宫建一个假日集市,以满足那些慕名而来要参观这个根本不存在的假日市场的游客们的明显需求。

I do think this opens up a a very fun possibility, which is that they will actually now have to build a holiday market at Buckingham Palace to capture the obvious demand and the flood of tourists who are coming in to go to this nonexistent holiday market.

Speaker 1

嗯,而且还能防止一场革命。

Well, and just to stop a revolution.

Speaker 1

我是说,你能看出那些人对于那里什么都没有感到相当恼火。

I mean, you could tell a lot of those people were pretty, you know, angry about what was not there.

Speaker 1

是的。

Yes.

Speaker 1

没错。

Yes.

Speaker 1

这将会导致

This is gonna lead

Speaker 2

一种全新的'先装后成'模式。

to a whole new dimension of fake it till you make it.

Speaker 1

是啊。

Yeah.

Speaker 1

我是说,你看。

I mean, look.

Speaker 1

这个案例对我来说非常有趣,因为一方面,当你想到所有可能制作的深度伪造内容时,很少有比'如果白金汉宫有个圣诞市场会怎样'更无害的了。

This one is so interesting to me because on one hand, when you think about all of the different deepfakes you can make, few seem more innocuous than what if there was a Christmas market at Buckingham Palace?

Speaker 1

这听起来其实像是个可爱的小恶作剧,你可以制作出来,也许还能和几个朋友分享。

That actually sounds like a lovely piece of slop that you could make, you know, and maybe share with a few friends.

Speaker 1

但是,因为我们生活在一个噩梦般的信息生态系统中,没人再分得清真伪,你拿这个完全无害的内容,突然间人们就涌向了白金汉宫。

But, you know, because we live in a nightmare information ecosystem where no one knows what's true and false anymore, you take this perfectly benign, you know, piece of content, all of a sudden, people are showing up in Buckingham Palace.

Speaker 1

所以你知道,我很遗憾地说,抱歉要扫兴了。

So there are you know, I'm I'm I'm sad to say, sorry to be a buzzkill.

Speaker 1

这种同样的模式将会导致更糟糕的后果。

There are going to be much worse outcomes from this exact dynamic.

Speaker 1

至少这个案例还有点好笑。

This one is at least a little funny.

Speaker 1

但你知道,如果我是像TikTok这样的平台,我可能会思考:人们不断在这里看到虚假内容然后跑去参加不存在的事件,这对我的平台是不是不太好?

But, you know, if I were a platform like a TikTok, I might be thinking about, is it maybe bad for my platform that people are constantly looking at slop here and then going to nonexistent events?

Speaker 1

因为最终,部分愤怒情绪会反噬平台本身。

Because eventually, some of that anger is gonna come back on the platform.

Speaker 2

他们会冻得发抖,还得吃保姆做的鸡肉,不管她说了什么。

They're gonna be cold, and they're gonna have to eat nanny's chicken, whatever she said.

Speaker 1

我...我觉得她是在偷偷享用Nando's烤鸡。

I I think she was having a cheeky Nando's.

Speaker 2

她真的在偷吃Nando's吗?

Was she having a cheeky Nando's?

Speaker 1

她确实在偷吃Nando's。

She was having a cheeky Nando's.

Speaker 2

哇哦。

Wow.

Speaker 2

好吧,听起来她最后一切顺利。

Well, it all turned out fine for her, sounds like.

Speaker 2

行吧。

Okay.

Speaker 2

今年我们还有一个节日垃圾内容的例子，那就是节日餐食类垃圾内容。

We have one more example of holiday slop this year, and that is holiday meal slop.

Speaker 2

彭博社最近有一篇文章,标题是《AI生成的垃圾食谱正在席卷互联网和感恩节晚餐》。

There was an article recently in Bloomberg titled AI slop recipes are taking over the Internet and Thanksgiving dinner.

Speaker 2

这篇文章讲的是美食博主们沮丧地发现,由于人们越来越多地转向AI生成的食谱,他们网站的流量直线下降,但同时也发现其中一些食谱根本不合常理。

This was about the food bloggers who are noting to their chagrin that traffic to their websites has fallen off a cliff since people are increasingly turning to AI generated recipes, but they are also discovering that some of these recipes don't make sense.

Speaker 1

是啊。

Yeah.

Speaker 1

你看,这里其实有两层故事。

So there's really, you know, two stories here.

Speaker 1

一是人们转向AI工具获取食谱,结果得到的食谱荒谬可笑。

One is about the fact that people are turning to AI tools and getting back these recipes that are just nonsensical.

Speaker 1

要知道这些系统并非直接从现有食谱中提取内容。

You know, the these systems are not directly pulling from recipes.

Speaker 1

它们是根据网上看到的各种信息重组出来的,所以不可能每次都靠谱。

They're reconstituting them from a bunch of different things that they've seen online, and that's not gonna work out for you every single time.

Speaker 1

许多人在感恩节期间艰难地发现了这一点。

A lot of folks found that out the hard way over Thanksgiving.

Speaker 1

但还有第二个故事,那些辛勤创作真实食谱并经过测试确保可行的人类创作者们,现在报告称他们网站的访问量正断崖式下跌。

There's a second story though, which is all of the human beings out there who did the hard work of creating real recipes and then testing those recipes to make sure they work are now reporting the traffic to their websites is falling off a cliff.

Speaker 1

我只想说这太糟糕了。

And I just wanna say this sucks.

Speaker 1

我讨厌AI这一点。

I hate this about AI.

Speaker 1

我希望像经营墨西哥美食博客'Muy Bueno'的伊薇特·马克斯-夏普纳克这样的人能维持生计——她曾发布照片展示人们用AI工具生成的两份完全不靠谱的墨西哥粽（tamale）食谱。

I want people like Yvette Marquez-Sharpnack, who runs the Mexican food blog Muy Bueno and who posted photos of two different tamale recipes that people were making using AI tools that were just completely bogus.

Speaker 1

我希望她能继续谋生。

Like, I want her to be able to make a living.

Speaker 1

然而所有AI公司席卷而来,重组了整个互联网内容,却用目前更劣质的产品取而代之。

And instead, all the AI companies came along, they remixed the entire Internet, and they replaced it with what so far is worse.

Speaker 1

所以我讨厌这样，凯文。

So I hate that, Kevin.

Speaker 2

是啊。

Yeah.

Speaker 2

我觉得他们应该开始卖这些墨西哥粽。

I think they should start selling these tamales.

Speaker 2

我知道一个假日集市他们可以去卖。

I know a holiday market where they can sell them.

Speaker 1

我喜欢你听完我整个咆哮就为了

I love that you just waited through my whole rant so

Speaker 2

能讲你这个蠢笑话。

you could make your stupid joke.

Speaker 2

听着。

Listen.

Speaker 2

不。

No.

Speaker 2

我我同意。

I I agree.

Speaker 2

我认为这不是个好趋势。

I think this is a bad trend.

Speaker 2

但同时,我妻子是个很棒的厨师,最近开始用AI辅助烹饪,做出的菜品相当不错。

At the same time, my wife is a very good cook, has been using AI to do some of her own cooking recently, and it's produced pretty good stuff.

Speaker 2

所以我得说,甲之砒霜,乙之蜜糖。

So I should say one man's slop is another man's treasure.

Speaker 1

说说我们家的折中方案吧,因为我们今年感恩节招待了14个人。

Here's how we split the difference in my house because we did we did Thanksgiving for 14 this this holiday season.

Speaker 1

哇。

Wow.

Speaker 1

你有12个孩子啊。

You have 12 kids.

Speaker 2

太厉害了。

That's amazing.

Speaker 1

才不是。

No.

Speaker 1

其实,如果你非要知道的话,我们的家人是第一次见面。

Actually, our families met for the first time, if you must know.

Speaker 2

哇。

Wow.

Speaker 2

而且它

And it

Speaker 1

进行得很顺利。

went great.

Speaker 1

谢谢你。

Thank you.

Speaker 1

我们都玩得很开心。

We all had a great time.

Speaker 1

感谢家人们为此专程来到湾区。

Thanks to the families for coming up to the Bay Area for that.

Speaker 1

总之,凯文,故事的重点是。

Anyways, point of the story, Kevin.

Speaker 1

我们采用了肯吉·洛佩兹·阿尔特的一个绝妙火鸡食谱,他是全球顶尖厨师之一。

What we did was we took a great turkey recipe from Kenji Lopez Alt, one of the great cooks in all the world.

Speaker 1

没错。

Yes.

Speaker 1

我们按照他的食谱制作了火鸡。

We made his turkey, and we used his recipe.

Speaker 1

但当我们对操作步骤有疑问时,我们确实求助了AI聊天机器人。

But when we had questions about what we were doing, then we did turn to the AI chatbot.

Speaker 1

比如说,嘿,

Say, hey.

Speaker 1

我们是不是该调高温度?

Should we maybe turn the temp up?

Speaker 1

还是该调低温度?

Should we turn the temp down?

Speaker 1

我们用它来获取实时指导。

We used it to get guidance along the way.

Speaker 1

算是折中了一下。

Kinda split the difference there.

Speaker 1

看起来效果还不错。

That seemed to work out fine.

Speaker 2

结果怎么样?

How did it turn out?

Speaker 1

我们把火鸡烤过头了。

We overcooked the turkey.

Speaker 1

但我不会为此责怪AI。

But I'm not gonna blame AI for that.

Speaker 1

我要怪就怪这是我们第一次用烤箱烤18磅重的火鸡。

I'm gonna blame the fact that it was the first time we used our oven to cook an 18 pound turkey.

Speaker 1

明白吗?

Okay?

Speaker 1

听着。

The listen.

Speaker 1

我的治疗师经常这么说。

My therapist says it all the time.

Speaker 1

我们只能通过经验学习,凯瑟琳。

We can only learn through experience, Katherine.

Speaker 2

好吧。

Okay.

Speaker 2

下一个垃圾内容。

Next slop.

Speaker 2

这次不是节日主题的垃圾内容。

This one is not a holiday piece of slop.

Speaker 2

这是一个教育音乐类的垃圾内容。

This is an educational music piece of slop.

Speaker 2

这是凯蒂·诺托普洛斯前几天在《商业内幕》上写的一篇精彩报道，关于一个叫'歌词学习'的Instagram账号，它用AI生成的歌曲内容刷屏了Instagram，这些歌曲基本上是在解释人们可能好奇的话题。

This was a great story that Katie Notopoulos wrote at Business Insider the other day about an Instagram account called Learning with Lyrics that had been flooding Instagram with these posts of AI generated songs that basically explain topics that people might be curious about.

Speaker 2

比如为什么井盖是圆的?

Things like why are manhole covers round?

Speaker 2

魔术贴是如何工作的?

How does Velcro work?

Speaker 2

这是我们《50项标志性技术》特辑中讨论过的话题。

A topic we covered on our 50 iconic technologies episode.

Speaker 2

为什么巨型钢卷要侧放运输而不是平放?

Why are giant steel coils transported on their sides instead of flat?

Speaker 2

凯西,你听过这些歌吗?

Now, Casey, have you heard any of these songs?

Speaker 1

你知道,我还没机会听,但我希望现在就能改变这个状况。

You know, I haven't had the chance, but I'm hoping I could change that right now.

Speaker 2

好的。

Yes.

Speaker 2

这首是讲速冷冰袋的工作原理。

This one is about how instant cold packs work.

Speaker 2

我们来听听看。

Let's take a listen.

Speaker 5

我很好奇速冷冰袋是如何工作的。

I'm curious how instant cold packs work.

Speaker 2

这个账号据说是加州州立大学长滩分校一名叫卡申·汤姆林森的学生创建的，他告诉商业内幕的凯蒂·诺托普洛斯说：'我一直是个对事物充满好奇心的人。'

Now, this account is apparently the work of a Cal State Long Beach student named Cashion Tomlinson, who told Katie Notopoulos from Business Insider, quote, I've always been someone who's curious about stuff.

Speaker 2

嗯。

Mhmm.

Speaker 2

这简直就是完美的大学生语录。

Which is just a perfect college student quote.

Speaker 1

引起共鸣的王者。

Relatable king.

Speaker 1

另外我猜如果这些视频获得足够多的浏览量，他就能赚到钱，这样Cashion就真的可以'cash in'（变现）了。

Also, I'm guessing that if these get enough views, he could make some money, in which case Cashion could in fact cash in.

Speaker 1

现在我要说说这个。

Now here's what I'll say about this.

Speaker 1

我对那些让人类美食博主生活更艰难的AI垃圾食谱有着相当强烈的负面反应。

I had a sort of strong negative reaction to all of the AI slop recipes that are making life harder for human food bloggers.

Speaker 1

我其实对此没有意见。

I'm actually fine with this.

Speaker 1

如果你在外界想创作一首关于为什么巨型钢卷要侧放运输而非平放的歌曲,你实际上并没有与人类艺术家竞争。

If you are out there and you want to make a song about why giant steel coils are transported on their sides instead of flat, you're not actually competing with a human artist.

Speaker 1

这个领域完全属于你自己。

You actually have that lane to yourself.

Speaker 1

如果你想用AI工具来实现,我会说,愿上帝保佑。

And if you want to use an AI tool to do it, I say, god bless.

Speaker 2

是的。

Yes.

Speaker 2

这可能对WikiHow构成威胁,它原本是人们寻找愚蠢问题答案的地方。

It could be dangerous for WikiHow, which was the previous place that you go to find answers to, like, stupid questions.

Speaker 1

但WikiHow是有史以来最令人作呕的网站之一。

But WikiHow was one of the most disgusting websites ever created.

Speaker 1

简直被广告塞得满满当当。

Just absolutely choked with ads.

Speaker 1

一个实际上讨厌所有访客、只想收集广告的网站。

A website that actually hated all of its visitors and just wanted to collect ads.

Speaker 1

确实。

True.

Speaker 2

是啊。

Yeah.

Speaker 2

其实这让我联想到最近很感兴趣的一件事,听说现在大学生都在用AI音乐生成工具来创作帮助记忆的歌曲,因为有些人就是听觉型学习者。

Actually, this sort of rhymes with something that I've been really interested in recently, which is that I've heard that college students are now using these AI music generation tools to, like, make songs to help them remember things because some people are just, like, auditory learners.

Speaker 2

对。

Yeah.

Speaker 2

比如你想记住克雷布斯循环的步骤?

And so if you're like, how do I remember, like, the steps in the Krebs cycle?

Speaker 2

现在你完全可以用这些AI工具生成一首泰勒·斯威夫特风格的歌——不过别真这么做,她会告你的。

Now you can just go make a Taylor Swift song about it on one of these AI generators... actually, don't do that, because she'll sue you.

Speaker 2

但你可以做首普通流行歌,这样记步骤可能比死记硬背容易多了。

But you can make a generic pop song about this, and that's maybe easier for you to remember than actually listing out all the steps.

Speaker 1

你知道吗,这很棒。

You know, that's great.

Speaker 1

这让我想起前几天做的一件事,我和朋友们想出了莎士比亚十四行诗第一行的绝妙创意。

That actually reminds me of something I did the other day, which is that my friends and I had come up with the the a great idea for the first line in a gay Shakespeare sonnet.

Speaker 1

你知道,外界一直有些关于莎士比亚性取向的传闻。

So, you know, there's there's some kind of rumors out there about Shakespeare's sexuality.

Speaker 1

我们想,如果他真是同性恋,写出来的诗会是什么风格?

We said, what would he sound like if he was really, you know, gay?

Speaker 1

于是我们想出了这句:'我能否将你比作雪地靴雪橇?'

And so we came up with the line, shall I compare thee to a boots down sleigh?

Speaker 1

这句子简直妙极了。

Which just seemed like such a good line.

Speaker 1

然后我就让Claude续写完这首十四行诗。

And so then I asked Claude to finish the sonnet.

Speaker 1

你猜怎么着?

And you know what?

Speaker 1

它做得非常出色。

It did a great job.

Speaker 2

我甚至不想知道'boots down sleigh'是什么。

I don't even wanna know what a boots down sleigh is.

Speaker 1

等你长大些我再告诉你。

I'll tell you when you're older.

Speaker 2

好吧。

Okay.

Speaker 2

好吧

Alright.

Speaker 2

接下来进入《Hard Fork》对Slop的评论的下一个案例。

Next up in the Hard Fork Review of Slop.

Speaker 2

这条新闻来自北卡罗来纳州,州参议员迪安德丽亚·萨尔瓦多最近发现自己出现在惠而浦公司为巴西家电产品线投放的广告中——尽管她本人从未参与过这则广告

This one comes to us from North Carolina where a state senator named Deandrea Salvador recently found herself in an ad by the Whirlpool company for a line of appliances in Brazil that she had not actually appeared in.

Speaker 2

该公司截取了她2018年TED演讲视频片段,未经许可就植入到他们宣传圣保罗市及巴西节能家电产品的视频中

The company had lifted a section from a TED Talk video that she gave back in 2018 and put it into their video about Sao Paulo and all the energy efficiencies that were in their appliances in Brazil.
