本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
今天的《AI每日简报》将聚焦本周最重要的AI动态——那些并非新模型发布的消息。《AI每日简报》是一档每日播客及视频节目,为您带来AI领域最关键的新闻与故事。朋友们好,在开始前先快速插播几条公告:首先要感谢今日赞助商Blitzy、Vanta和Super Intelligent。
Today on the AI Daily Brief, the most important AI stories this week that aren't new model releases. The AI Daily Brief is a daily podcast and video about the most important news and stories in AI. Hello, friends. Quick announcements before we dive in. First of all, thank you to today's sponsors, Blitzy, Vanta, and Super Intelligent.
想获取无广告版节目,请访问patreon.com/aidailybrief,每月仅需3美元起。当然,如果您有意赞助节目,请联系sponsors@aidailybrief.ai。欢迎回到《AI每日简报》,今天我们要讨论的是本周AI领域那些非模型发布类的重要事件。
To get an ad-free version of the show, go to patreon.com/aidailybrief. Ad-free starts at just $3 a month. And of course, if you want to sponsor the show, you can reach out at sponsors@aidailybrief.ai. Welcome back to the AI Daily Brief. Today we are talking about the most important stories in AI this week that are not model releases.
这周可谓高潮迭起——我们迎来了Genie三号、Eleven Music、OpenAI开源模型,当然还有GPT-5。这些重磅消息几乎挤占了其他重要新闻的传播空间。今天我们将以加长版头条形式呈现本期主要内容,但为了完整性,我们仍需对GPT-5发布后的次日反应稍作讨论。
It has been a nonstop banger of a week. We got Genie 3, Eleven Music, OpenAI's open source models, and of course GPT-5. And that has really crowded out the space for a lot of other consequential stories that have been happening along the way. Today, we're going to be doing an extended headlines edition as the main episode, and that'll cover all of that. But we do, for the sake of completeness, of course, have to do a little bit of day-two reactions to GPT-5.
尽管其他模型发布同样重要,但显然GPT-5才是重头戏。从期待值来看,它在某些方面甚至超越了自GPT-4以来所有模型。正因如此,我强烈预判今天(以及未来几天)网络上最活跃的声音将来自失望群体。这几乎成了铁律:除非新产品在方方面面都瞬间碾压所有竞品,否则发布初期的舆论焦点总会集中在人们的负面评价上——而这次显然不乏此类声音。
As significant as the other model releases are, obviously, this is the big one. Bigger, in terms of anticipation at least, than any model we've had since really GPT-4. Given that, I would strongly suspect that the most vocal people you will see online today, and probably for the next couple of days, are going to be those who are disappointed. It is almost an iron law of a launch this anticipated that unless it instantly blew everyone out of the water in every single way, a lot of the energy you're going to see in the immediate wake of the release is likely to be about the things that people don't like. And certainly there is plenty of that.
X平台上大量用户嚷嚷着要换回旧模型。我认为有必要深入分析:这些批评者究竟在不满什么?其中一个问题是:OpenAI为简化操作只保留GPT-5作为入口,但实际上它会根据问题类型将请求路由到不同子模型。这导致部分用户得到的是非顶尖模型的答复。发布会次日,Ethan Mollick教授就指出:'由于GPT-5实质是多个模型的集合体(部分卓越,部分平庸),且底层模型选择不透明,网上将出现大量结果差异巨大的案例——这必然引发混乱。'
You've got lots of people out here on X complaining that they want to go back to the old models. I think it's worth digging in a little bit to understand, to the extent that people have critiques, what they are actually frustrated about. Now, one thing that's an issue is that while OpenAI was going for clarity by only having GPT-5 as the model, GPT-5 is of course routing different requests to different models based on what it thinks is required to get the best answer for the prompter. Because of that, sometimes models that aren't the state of the art are answering people's questions, and a lot of those answers are the ones that are coming up. Yesterday in the wake of the announcement, Professor Ethan Mollick wrote: You're likely going to see a lot of very varied results posted online from GPT-5 because it is actually multiple models, some of which are very good and some of which are meh.
当晚他继续补充道:'正如预测,GPT-5 nano/mini版本产出劣质结果的案例已在网络泛滥。OpenAI若不能明确解释GPT-5的工作原理,恐怕会引火烧身——他们是否需要调整切换机制,或至少加强用户教育?'
Since the underlying model selection isn't transparent, expect confusion. He then followed it up later that night: As predicted, examples of GPT-5 nano or mini producing bad outputs abound online. Not making it clear how GPT-5 works will likely cause issues for OpenAI. I wonder if they'll need to take a different approach to switching, or at least educating users about what GPT-5 does.
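OpenAI has not published how GPT-5's router actually works, so here is a purely hypothetical Python sketch of the routing behavior described above: a function that picks a cheaper or stronger sub-model tier based on a crude difficulty estimate. The signal words, thresholds, and tier names are all invented for illustration.

```python
# Hypothetical sketch of prompt routing. All heuristics and thresholds here
# are invented; this is only meant to illustrate the idea of one entry point
# silently dispatching to sub-models of very different quality.

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for whatever learned difficulty classifier a real
    router would use."""
    signals = ["prove", "debug", "step by step", "analyze", "why"]
    score = sum(s in prompt.lower() for s in signals)
    return min(1.0, 0.2 * score + min(len(prompt), 2000) / 4000)

def route(prompt: str) -> str:
    """Pick a sub-model tier based on estimated difficulty."""
    c = estimate_complexity(prompt)
    if c < 0.25:
        return "gpt-5-nano"      # cheap and fast, weakest answers
    if c < 0.6:
        return "gpt-5-mini"
    return "gpt-5-thinking"      # the tier with the real quality gains

print(route("hi"))  # -> gpt-5-nano
print(route("debug this proof step by step and analyze why it fails"))
# -> gpt-5-thinking
```

The point of the sketch is the trade-off discussed in the episode: the user sees one model name, but a short or casual prompt can land on the weakest tier, which is exactly how varied results end up posted online.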
简言之,这是种权衡:从原本公开的模型选择器转向当前这种隐蔽路由机制,并非纯粹的升级,而是不同维度的取舍——当用户被导向性能较差的模型时,这种机制的弊端便暴露无遗。更麻烦的是,许多宣传中的改进并非全模型通用。例如OpenAI员工Roon昨日澄清:'重要提示:只有GPT-5 thinking版本才有真正的写作能力提升,但系统不会自动切换至此模式,需手动选择测试。'
TLDR, there is a trade-off here. It is not a strictly better thing moving from the model selector that they had wanted to do away with to this new approach where model selection is obfuscated. It's a different set of trade-offs, and we're seeing part of that rear its ugly head right now when people are being routed to a model that is underperforming. Related to that, a lot of the advertised improvements aren't present in all the models. For example, Roon from OpenAI yesterday wrote: TLDR, only GPT-5 Thinking has the real writing improvements, and confusingly it doesn't always auto-switch to this, so manually switch and try it.
AI聚合账号如Andrew Curran不得不转发提醒:'写作改进仅限GPT-5 thinking版本,测试时请务必选用该模式'——这消息连我都刚知晓,故在此转达。另有一类常见批评与模型性能无关,而是针对产品包装策略。Lussan AlGuybe激烈吐槽:'我简直怒不可遏!GPT-5本是个好模型:定价合理、上下文延长、幻觉减少...但Plus会员却被当韭菜割。'
AI aggregator accounts like Andrew Curran had to retweet that and say: GPT-5 Thinking has the writing improvement, so if you're testing along these lines, make sure to use that version. This was news to me, so I'm passing it along. Another area of common critique has nothing to do with the model, but is about how it's packaged and what's available. Lussan AlGuybe writes: I honestly couldn't be more pissed. GPT-5 is a good model.
他在另一条推文中补充:'今天每个Plus用户的ChatGPT体验都在开倒车:再也无法稳定获取thinking模型。以前我们还能选o4-mini/o4-mini-high/o3,现在只有每周200条限额的GPT-5 Thinking,外加一个总把你导向垃圾非推理模型的破路由系统。'虽然这只是个人观点,但200美元/月的Pro账户独占高级功能的做法,确实会让原来自认是高级用户的群体因丧失掌控感而心生抵触。
Pricing is reasonable. Context length improved. Fewer hallucinations. But for some reason, Plus users get shafted. In another tweet they had said: ChatGPT literally got worse for every single Plus user today.
There's no way to reliably get thinking models anymore. Before we had o4-mini, o4-mini-high, and o3. Now we have GPT-5 Thinking with 200 messages per week and a router that exclusively routes you to some small and crappy non-reasoning model. And while this is only one opinion, it definitely strikes me initially that a lot of the pro features are so far only in the $200-a-month Pro account. And so there are likely to be people who assumed they were power users before who are gonna be a little bit turned off by what they perceive as their lack of agency in the new paradigm.
另一方面,也有许多人获得了良好体验。尽管大多数普通用户尚未真正意识到这点,但在该场景下测试过的人都坚称这是有史以来最佳的消费级模型。例如丹·希珀表示,我昨天让我妈妈——一位ChatGPT的忠实粉丝——试用后,这对她而言是范式转变。她说,我真的觉得这个模型太惊艳了。
On the flip side, there are plenty of people who are having good experiences. Although most of the normies haven't really clicked in on this yet, the people who have tested it in that context are absolutely saying it's the best consumer model ever. For example, Dan Shipper said: I asked my mom, who's a huge ChatGPT fan, to review it yesterday. It's a paradigm shift for her. She said, I really think this model is amazing.
这比我通常从ChatGPT获得的回答全面得多。它提供的信息可读性强且逻辑流畅。这个模型堪称黄金。而SIGNAL认为对普通人来说,其他人抱怨的模型选择权衡问题根本不值一提。他们写道:无模型轮盘赌的设定其实比大多数人(尤其是普通用户)能感知的更重要。
This is way more comprehensive than the answers I usually get from ChatGPT. The information it gives me is readable and flows really well. This model is gold. SIGNAL, meanwhile, thinks that for the average person, the trade-off that others are complaining about when it comes to model selection is simply going to be worth it. They write: The no-model-roulette thing is actually bigger than most people, especially average people, will clock.
它消除了每次交互前消耗心智带宽的终极元决策。向产品和工程团队致敬。对消费者而言,ChatGPT基本就是大多数人接触的AI,而随着GPT-5的到来,这一发展轨迹可能会以更陡峭的曲线延续。Victor Taelin重申了首日的积极评价:不,你们都错了。GPT-5是质的飞跃。
It collapses the ultimate meta decision that usually burns mental bandwidth every time before you even start interacting. Major props to product and engineering. Consumer-wise, ChatGPT is basically AI for most people, and that trajectory continues, maybe even at a steeper curve, with GPT-5. Victor Taelin doubled down on their positive review from the first day, saying: Nah, you're all wrong. GPT-5 is a leap.
我100%坚持这个观点。本不想过早发文以免再次后悔,但它确实解决了一系列AI此前无法攻克的复杂调试指令,还设计出细节与品质远超我所见任何作品的精美像素风Game Boy游戏。这模型绝不可能是差的。我认为你们都被Benchmaxers创伤后应激了,反而对这个优秀模型过度挑剔。记得吗?昨天部分给予好评的评测者提到,许多优势不会立竿见影,需要用它处理更复杂任务才能显现。
I'm 100% doubling down here. I didn't want to post too fast and regret it again, but it just solved a bunch of very, very hard debugging prompts that were previously unsolved by AI, and then designed a gorgeous pixelated Game Boy game with a level of detail and quality that is clearly beyond anything else I've ever seen. There's no way this model is bad. I think you're all traumatized by benchmaxers and overcompensating against a model that is actually good. Now if you remember, one of the things that some of the reviewers who found it positive yesterday said was that a lot of the benefits aren't going to be instantly obvious, that you have to do harder things with it to really see the benefits.
威尔·布朗的评测可见端倪:好吧,这个模型在Cursor里堪称王者。指令跟随精准得惊人,该质疑时绝不盲从,多任务处理游刃有余。偶有些微格式错误但无伤大雅。代码比o3的规范得多,给人可靠感。
You can see that a little bit in the review from Will Brown, who wrote: Okay, this model kind of rules in Cursor. Instruction following is incredible, very literal, pushes back where it matters, multitasks quite well. A couple tiny flubs in format misses here and there, but not major. The code is much more normal than o3's. Feels trustworthy.
Box公司的亚伦·列维写道:要理解GPT-5等强大模型中初现的逻辑推理升级意义有时很困难。举个简单例子展示其强大:我拿了份20页7800字的英伟达财报电话记录,将'毛利率将改善并回升至70%中段'里的'70%中段'改为'60%中段'。对稍有金融常识的分析师来说,这显然矛盾——怎么可能改善后反而回到比前文更低水平?
Aaron Levie from Box wrote: It's sometimes hard to grasp the significance of the reasoning and logic updates that are starting to emerge in powerful models like GPT-5. Here's a very simple example of how powerful these models are getting. I took a recent NVIDIA earnings call transcript document that came in at 20 pages long and had 7,800 words. I took part of the sentence, gross margin will improve and return to the mid-70s, and modified mid-70s to mid-60s. For a remotely tuned-in financial analyst, this would look out of place, because the margins wouldn't improve and return to a lower number than the one described as a higher number elsewhere.
但约95%的读者不会发现这个改动,因为它完美融入了其余7800字内容。测试多个AI模型时,我问:'文档是否存在逻辑错误?请用一句话回答'。GPT-4.1、GPT-4o mini等半年前最先进的模型基本都回答'未发现逻辑错误'。而GPT-5迅速发现问题并回应:'是的,文档存在毛利率指引的内部矛盾'。
But probably 95% of the people reading this would not have spotted the modification, because it easily fits right in with the other 7,800 words. Testing a variety of AI models, I then asked a series of models: Are there any logical errors in this document? Please provide a one sentence answer. GPT-4.1, GPT-4o mini, and a handful of other models that were state of the art just six months ago generally came back and returned that there are no logical errors in the document. GPT-5, on the other hand, quickly discovered the issue and responded with: Yes, the document contains an internal inconsistency about gross margin guidance.
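The setup of this experiment can be mocked up in a few lines. The transcript text and prompt below are invented stand-ins (the real document was 7,800 words), and the actual model API call is omitted; the sketch only shows the substitution step that plants the contradiction.

```python
# Recreating the shape of the test described above: swap one number in a
# long document so it contradicts guidance stated elsewhere, then ask a
# model to spot the inconsistency. Text here is an invented stand-in.
transcript = (
    "Management noted demand remains strong. "
    "Gross margin will improve and return to the mid-70s. "
    "... (thousands of words of filler) ... "
    "We expect gross margins in the mid-70s later this year."
)

modified = transcript.replace(
    "improve and return to the mid-70s",
    "improve and return to the mid-60s",
    1,  # only the first occurrence, leaving the later mid-70s claim intact
)

# The prompt that would be sent to each model under test:
prompt = (
    "Are there any logical errors in this document? "
    "Please provide a one sentence answer.\n\n" + modified
)

# Sanity check: both conflicting figures are now present in the document.
print("mid-60s" in modified and "mid-70s" in modified)  # -> True
```

The contradiction is subtle by construction: "improve and return to the mid-60s" reads fine locally and only breaks against the mid-70s figure much later in the text, which is why non-reasoning models tended to miss it.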
它指出:'前文说毛利率将回到60%中段,后文又说今年晚些时候将达到70%中段'。令人惊讶的是,亚伦写道:'GPT-5 mini甚至GPT-5 nano都能做到这点。'他总结道:'在企业部署AI代理时,对数据施加更强逻辑推理的能力尤为关键。当前领域的进步令人振奋,这将为企业开辟大量新用例。'Flowersslop也提醒这仅是初代版本。
At one point saying margins will return to the mid-60s, and later saying that they will be in the mid-70s later this year. Amazingly, Aaron writes, this happened with GPT-5 mini, and remarkably even GPT-5 nano. He concludes: The ability to apply more logic and reasoning to enterprise data becomes especially critical when deploying AI agents in the enterprise. So it's amazing to see the advancements in this space right now, and this is going to open up a ton more use cases for business. Flowersslop also reminded us that this is just the first version.
他们写道:'GPT-5是出色的新通用基准,虽未像4o那样极致优化,也暂不具备4o的炫酷功能。但公平地说,它叫GPT-5而非GPT-5o——4o是旧篇章的终页,5是新章节的首页。'
They write: GPT-5 is a great new general baseline. It's not as hyper-optimized as 4o, and it cannot yet do all the flashy things 4o could do. But to be fair, it's called GPT-5, not GPT-5o. 4o was the last page of the last chapter. 5 is the first page of a new one.
我相信很快会有酷炫新功能推出。当前将GPT-5免费开放已是人类的巨大净收益,将极大加速进步。多数ChatGPT用户只用过4o,所以对很多人而言今天标志着重大飞跃。那么我的初步实验如何呢?
And I'm sure they will ship some cool new features soon. In the meantime, making GPT-5 free is a huge net win for humanity. It will accelerate progress in a big way. Most ChatGPT users only ever used 4o, so for a lot of people today marks a huge leap forward. So what about my first experiments?
目前为止我的印象大多是积极的。在我看来它像是一个略胜一筹的战略思考者,感觉更全面、更稳健。泰勒·考恩曾写过类似的话:我是GPT-5的忠实粉丝,因为在我感兴趣的领域,它比o3表现更好——这很能说明问题。即使面对经济学、历史和思想领域的复杂查询,它的响应速度也快如闪电。
Mostly, so far my impressions are positive. It seems to me like a slightly better strategic thinker. It feels more comprehensive and robust. Tyler Cowen said something similar, writing: I'm a big fan of GPT-5, as on my topics of interest it does better than o3, and that is saying something. It's also lightning fast, even for complex queries of economics, history, and ideas.
回到我的体验,我也发现它极度渴望执行下一步操作,泰勒也注意到了这点。他写道,最令人印象深刻的特性之一,是它能不可思议地预判你接下来可能想问什么。而我认为最强大的一点在于:旧模型最糟糕的特性是当你与O3进行战略讨论时,它总想含糊其辞寻找折中答案。比如你说‘我有A和B两个选项,该选哪个?’它会给出双方理由,但通常不会直接说‘选A’。
Back to me, I also found that it's super eager to do the next thing, which Tyler also found. One of the most impressive features, he writes, is an uncanny sense of what you might wanna ask next. Now, one thing that I found incredibly powerful relates to one of the worst features of all the previous models: when you were having a strategic discussion with o3, it always wanted to hedge and find some sort of compromise answer. In other words, if you said, I have choice A and choice B, which choice should I make? It would give you good reasons for both, but not usually just say, choose choice A.
这个新模型在提供决策支持时显得从容得多,还能给出逻辑依据。我不确定这是否是他们减少谄媚性回应的副产品,但这极大提升了实用性。如我所说,第一印象很好,但还有很多值得深挖的地方。千万别因为今天或周末会看到大量第一印象的负面评价,就认为它不好。人们需要时间适应新界面,最终才能判断它在哪些方面确实优于前代模型。
This model feels much more comfortable with actually providing decisions and the logic to back them up. Now, I don't know if that is a byproduct of their work to diminish sycophancy, but I think it makes it hugely more valuable. So like I said, my first impressions are favorable, but there is a lot more to dig into here. And I certainly would not assume that just because you're gonna see a lot of first-blush negative impressions today and over the weekend, that means that anything is bad. You've got to give people a chance to get used to it, to figure out the new UI, and ultimately to understand where it is and isn't actually better than the previous models.
说完这些,我们可以聊聊本周其他重要事件。Cloudflare与Perplexity的争端引起了我的注意,我认为这具有相当重大的意义。我总觉得Cloudflare试图充当整个互联网的守门人,与其说是做好网民,不如说是争夺控制权。但让我先陈述事实,你们自行判断。简而言之,Cloudflare指控Perplexity规避反AI爬虫措施。
With that out of the way, though, now we can talk about the other important stories that were going on this week. One that caught my attention, and that I think has fairly significant implications, is Cloudflare getting into it with Perplexity. Now, to me, I can't shake the feeling that Cloudflare is at least as much trying to act as gatekeeper of the entire Internet as they are trying to be a good Internet citizen. But let me tell you the story and you can make your own call on what you think is going on. The short of it is that Cloudflare has accused Perplexity of circumventing anti-AI crawling measures.
今年Cloudflare实施了一系列主动措施,让网站能控制是否允许AI公司抓取其数据。考虑到约20%的网络流量经过Cloudflare,他们在这个领域举足轻重。周二发布的研究报告中,Cloudflare点名批评Perplexity使用反制手段绕过爬虫禁令。其研究人员写道:我们观察到Perplexity的隐蔽爬取行为。虽然最初使用声明过的用户代理,但在遭遇网络拦截后,他们似乎通过伪装身份来规避网站偏好。
This year Cloudflare has put in place a series of proactive measures to ensure that websites can control whether or not AI companies can scrape their data. And given that something like 20% of the web goes through Cloudflare, they are a significant player in this space. Now, in a research report published on Tuesday, Cloudflare named and shamed Perplexity for using countermeasures to get around scraping bans. Their researchers wrote: We're observing stealth crawling behavior from Perplexity. Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure the crawling identity in an attempt to circumvent the website's preferences.
持续证据显示Perplexity反复修改用户代理、更换源ASN以隐藏爬虫活动,还无视robots.txt文件。Cloudflare声称:这种行为涉及数万个域名,日均数百万次请求。我们通过机器学习与网络信号组合成功识别该爬虫。去年Perplexity就面临类似指控,其CEO阿拉文德·斯里尼瓦斯将责任归咎于平台使用的第三方爬虫。
We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source ASNs to hide their crawler activity, as well as ignoring robots.txt files. Cloudflare claimed: This activity was observed across tens of thousands of domains and millions of requests per day. We were able to fingerprint this crawler using a combination of machine learning and network signals. Now, similar claims were levied against Perplexity last year, with CEO Aravind Srinivas blaming third-party crawlers used by their platform.
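For a concrete sense of what "respecting robots.txt" means mechanically, here is a minimal Python sketch using the standard library's robots.txt parser. The rules below are illustrative (PerplexityBot is Perplexity's declared crawler name, but the example site and policy are invented); note that changing the user-agent string passed to can_fetch is exactly the kind of identity switch Cloudflare says it fingerprints.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: the site blocks one declared AI crawler by name
# while allowing everything else. A well-behaved crawler checks these rules
# before fetching any page.
rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# The declared bot identity is blocked...
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
# ...but the same request under a generic browser-style identity passes,
# which is why user-agent switching defeats robots.txt-only enforcement.
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))    # True
```

This also illustrates the limit of the mechanism: robots.txt is a voluntary convention with no enforcement, which is why Cloudflare layers network-level fingerprinting on top of it.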
但这次Perplexity发言人直指该报告是公关噱头,并补充说博客存在诸多误解。在X平台的详细回应中,Perplexity坚称其爬虫合法:这场争议暴露出Cloudflare系统根本无力区分合法AI助手与真实威胁。若连有益的数字助手和恶意爬虫都分不清,就不该由你们裁定何为合法网络流量。过度拦截伤害所有人——想象用户用AI研究病情、对比商品评测或获取多源新闻时。
This time, though, a Perplexity spokesperson called the report a publicity stunt, adding: There are a lot of misunderstandings in the blog post. In a longer response on X, Perplexity asserted that their crawlers are legitimate, writing: This controversy reveals that Cloudflare's systems are fundamentally inadequate for distinguishing between legitimate AI assistants and actual threats. If you can't tell a helpful digital assistant from a malicious scraper, then you probably shouldn't be making decisions about what constitutes legitimate web traffic. This overblocking hurts everyone. Consider someone using AI to research medical conditions, compare product reviews, or access news from multiple sources.
若其助手被误判为恶意机器人,他们将失去宝贵信息。结果将形成双轨制互联网:访问权限不取决于需求,而取决于基础设施控制者是否认可你所选工具——这些人更关心你的手段而非目的。这损害用户选择权,威胁开放网络对创新服务的可及性。Perplexity的核心论点是其AI助手类似人类助手,实时响应用户查询获取信息。巴拉吉·斯里尼瓦桑写道:Perplexity对Cloudflare的有力反驳。
If their assistant gets blocked as a malicious bot, they lose access to valuable information. The result is a two-tiered internet where your access depends not on your needs, but on whether your chosen tools have been blessed by infrastructure controllers who care more about your means than your ends. This undermines user choice and threatens the open web's accessibility for innovative services competing with established giants. Basically, what Perplexity is arguing is that their AI assistants are analogous to human assistants, looking up information in real time in response to user queries. Balaji Srinivasan wrote: Good rebuttal to Cloudflare by Perplexity.
关键在于AI代理只是人类的延伸,因此其HTTP请求不应被视作机器人。本质上Perplexity只是按用户指令发起请求。投资人杰弗里·伊曼纽尔写道:为何我总觉得Cloudflare是想充当中间人,未来好从AI代理访问内容的微交易中抽成?值得称赞的是,巴拉吉邀请Cloudflare CEO马修·普林斯参与讨论:若用户不能委托AI代理行动,且robots.txt禁止所有代理流量,代理就无法代用户登录执行操作。或许robots.txt该为AI代理新增章节?
The core point is that an AI agent is just an extension of a human, so when it makes an HTTP request, it shouldn't be treated like a bot. Functionally, Perplexity is only making this request at the user's direction. Investor Jeffrey Emanuel wrote: Why do I get the feeling that this is more about Cloudflare being able to insert itself as the middleman, so that it can attempt in the future to take a cut of any microtransactions associated with the AI agents being allowed to view the content? Now, to Cloudflare's credit, Balaji asked their CEO Matthew Prince to engage, writing: What are your thoughts? If users couldn't delegate their actions to AI agents and all agent traffic was forbidden by robots.txt, then agents wouldn't be able to log in on behalf of users and perform actions. Perhaps robots.txt should get a new section for AI agents.
普林斯回应:是的,我们正与IETF合作制定新标准。我相信多数正规公司会采纳,届时大部分问题将更清晰。
Prince responded: Yes, we're working on a new standard that's about to be adopted by the IETF to that end. I believe most reputable companies will adopt the new standard, and most of these issues will be cleaner.
一个挑战在于界定代表客户收集的数据范围。如果由z公司为我打造的代理程序通过我的订阅权限阅读《纽约时报》,它能否将这些内容分享给客户z的其他代理?假如我的代理发现斐济机票有优惠价或某商品标价错误呢?Biologi提到:另一个关键点是微支付如今已具备技术可行性和法律许可。因此该标准的更新版本可纳入按请求付费机制。
One challenge is scoping the data gathered on behalf of a client. If my agent, which was built by company Z, reads the New York Times for me under my subscription, can it share that content with other agents from customer Z? What if my agent finds a great price on tickets to Fiji, or a mispricing of a commodity? Balaji said: Another piece of this is that micropayments are now feasible and legal. So an updated version of that standard could include pay-per-request.
人类可自由翱翔,机器人可能被禁止,而代理程序将被告知规则。Prince精准总结道:乌托邦就是人类免费获取内容,机器人支付高昂费用。当前关于AI爬虫使用的争论核心,实质上是互联网经济本质的变迁。十年前的情况相对简单。
Humans could fly free, robots could be banned, and agents could be told the rules. Prince summed it up exactly: Utopia is humans get content for free, robots pay a ton. Now, at its core, the argument around the use of AI crawlers is about the changing nature of the Internet economy. A decade ago, it was relatively simple.
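As a sketch of what "humans fly free, robots pay" could look like at the HTTP level, consider status code 402 Payment Required, which has long been reserved for exactly this kind of use. The policy below is entirely hypothetical: the bot list, header names, and price are invented, and no such standard exists yet.

```python
# Hypothetical origin-side policy for pay-per-request crawling.
# GPTBot and PerplexityBot are real declared crawler names, but the
# X-Payment-Token / X-Price-Per-Request headers are invented here purely
# to illustrate the idea; this is not any adopted standard.

KNOWN_BOTS = {"PerplexityBot", "GPTBot"}
PRICE_PER_REQUEST = "0.001"  # dollars, illustrative only

def handle_request(user_agent: str, headers: dict) -> tuple[int, dict]:
    """Return (status_code, response_headers) for an incoming request."""
    if user_agent not in KNOWN_BOTS:
        return 200, {}                      # humans fly free
    if headers.get("X-Payment-Token"):      # hypothetical proof of payment
        return 200, {}
    # Bots without payment get 402 plus a price quote they could act on.
    return 402, {"X-Price-Per-Request": PRICE_PER_REQUEST}

print(handle_request("Mozilla/5.0", {}))  # -> (200, {})
print(handle_request("GPTBot", {}))       # -> 402 with a price header
```

The open questions from the thread above map directly onto this sketch: who counts as a "known bot," who sets the price, and whether a delegated agent should be billed as a robot or pass through as its human's proxy.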
网站允许谷歌爬虫抓取其数据,作为交换,它们会获得可货币化的流量。后来随着谷歌排名竞争加剧及搜索摘要功能的使用,这种流量逐渐萎缩。谷歌AI概览功能进一步加剧了这个趋势——尽管他们声称实际并非如此。对于Perplexity等大多数AI服务而言,数据被提取后直接呈现给用户,几乎无需点击跳转原始来源。
Websites allowed Google bots to scrape their data, and in exchange, they'd send you traffic to monetize. Then a combination of more competition from Google rankings and the use of search summaries made traffic dwindle. AI has of course supercharged this trend with Google's AI overviews driving down clicks. Although as we'll see, they say that that's not actually happening. For most AI services, including Perplexity, the data is extracted and presented to the user directly with very little reason to click through to the source.
但现在的核心问题是:谁有权定义何为合法流量?Perplexity的回应将互联网视为数字公地——用户索取信息时,无论亲自获取还是通过AI助手获取都不应区别对待。Cloudflare则主张网站所有者应有选择权,所有网络爬虫都应遵守robots.txt文件的指令。
The big question now, though, is who gets to decide what legitimate traffic looks like? Perplexity's response treats the Internet as a digital commons: if a user is requesting information, it shouldn't matter whether they fetch it themselves or get an AI assistant to fetch it. Cloudflare is asserting that website owners should get to choose, and that all web scrapers should abide by the instructions in robots.txt.
总体而言,互联网似乎正朝着AI爬虫必须付费访问网站的系统发展。回到Cloudflare的立场问题:争议爆发后,有人指出Cloudflare已开始替用户做决定。ML与基因组学研究员Sawyers写道:Cloudflare未经告知就为我的企业网站启用此功能,我为何需要这个?
Overall, it feels to me like the Internet seems to be on a collision course towards a system where AI scrapers are forced to pay for access to websites. But going back to this question of where Cloudflare sits in all of this. In the wake of the controversy, some people noted that Cloudflare had already started making decisions for them. ML and genomics researcher Sawyers wrote: Cloudflare turned this on for my business website without telling me. Why would I want this?
并分享了系统自动选择禁止AI机器人访问其网站的截图。Y Combinator的Garry Tan写道:默认开启且不通知用户,这简直是过度反应。Cloudflare回应称这是失误,并未为任何现有域名启用该设置。但Garry反驳说YC官网也被默认开启了该设置。Vercel CEO Guillermo Rauch表示:阻碍进步是走向无关紧要的最快途径。
And they shared an image showing it auto-selected that AI bots should not be able to use their website. Garry Tan from Y Combinator wrote: On by default without notification seems like an extreme overreaction. Now, Cloudflare for their part suggested it was a mistake and they hadn't turned on the setting for any existing domains. But Garry responded that the setting was also turned on without notification for the Y Combinator website. Guillermo Rauch, the CEO of Vercel, wrote: The fastest path to irrelevance is blocking progress.
阻挠消费者真正需要的低摩擦界面。互联网正在变革,应对AI的方式是发展更多AI,而非封锁停滞。我们的数据显示Perplexity和ChatGPT对业务产生了极其积极的影响。
Blocking what consumers actually want. Low friction interfaces. The Internet is changing. The answer to AI is more AI, not to block and stagnate. Our metrics show Perplexity and ChatGPT have had an extremely positive effect on our business.
开发者寻求现代部署和CDN平台推荐时,AI团队会引导他们选择我们。这些已成为当前最高转化率的注册来源,远超谷歌。未来会怎样?Perplexity、v0和ChatGPT将助力业务增长。
Developers ask for modern deployment and CDN platform recommendations. The AI team tells them to check us out. These today are our highest-intent signup sources, far more intentful than Google. What happens next? Perplexity, v0, and ChatGPT will grow your business.
它们会自动注册账号,购买产品,消化信息并建立关联,最终促成行动。Claude Code正是这个未来的早期雏形。
They will sign up automatically. They will buy your product. They will digest your information and make it relatable. They will take action. Claude Code is an early glimpse of this future.
在Vercel,我们选择反其道而行。我们将提供AI云和CDN基础设施,助您对接这些智能代理,更重要的是部署您自己的代理。Lee Edwards说得更直白:我因CDN和安全功能热爱Cloudflare,但你要么站在超级智能这边,要么站在数字保守派那边。这无疑是对话的开端而非终结,但正如所言,这场对话对整个数字未来的形态至关重要。作为创始人,您正快速逼近产品市场匹配、下一轮融资或首个大企业订单。
At Vercel, we're excited to take the opposite bet here. We'll give you the AI Cloud and CDN infrastructure to help you integrate with these agents and, crucially, ship your own. Lee Edwards put it even more simply: I love Cloudflare for the CDN and security features, but either you're on the side of superintelligence or you're on the side of digital NIMBYs. This is definitely the beginning, not the end, of a conversation, but like I said, a super important one for frankly the entire shape of the digital future. As a founder, you're moving fast towards product-market fit, your next round, or your first big enterprise deal.
但随着AI加速初创企业的开发和交付速度,安全合规的要求比以往更早到来。正确处理安全和合规问题能推动增长,而拖延则可能阻碍发展。Vanta通过深度集成和自动化工作流,专为快速发展的团队设计,助您迅速达到审计要求,并在模型、基础设施和客户演变过程中通过持续监控保持安全。LangChain、Writer和Cursor等快速成长的客户从一开始就信任Vanta构建可扩展的基础。作为企业采购领域的从业者,我尤其欣赏Vanta如何让合规变得简单。
But with AI accelerating how quickly startups build and ship, security expectations are higher earlier than ever. Getting security and compliance right can unlock growth, or stall it if you wait too long. With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit-ready fast and keeps you secure with continuous monitoring as your models, infra, and customers evolve. Fast-growing customers like LangChain, Writer, and Cursor trusted Vanta to build a scalable foundation from the start. And look, as someone who lives in the world of enterprise procurement, I love how Vanta makes it easy to get compliance right.
当您全力争取大单时,最不愿看到的是因Vanta已为超1万家企业解决的问题而功亏一篑。立即访问vanta.com/nlw,通过Vanta初创企业计划节省1000美元,加入超1万家正借助Vanta快速扩张的雄心勃勃的企业。限时优惠,vanta.com/nlw立省1000美元。本节目由Blitzy赞助——拥有无限代码上下文的企业级自主软件开发平台。Blitzy运用数千个专业AI代理,经过数小时思考来理解数百万行代码的企业级代码库。
The last thing you need when you're trying to win that big deal is to have it scuttled by something that Vanta has solved for over 10,000 companies. Go to vanta.com/nlw to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta. That's vanta.com/nlw to save $1,000 for a limited time. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code.
企业工程领导者通过Blitzy平台开启每个开发冲刺,输入开发需求后,平台将提供计划并生成、预编译每个任务的代码。Blitzy自主完成80%以上的开发工作,同时为剩余20%需要人工完成的开发提供指导。上市公司采用Blitzy作为IDE前开发工具后,工程速度提升5倍,配合自选的编程助手,实现AI原生的软件开发生命周期。Blitzy现为符合条件的企业提供限时30天免费概念验证。
Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan then generates and pre compiles code for each task. Blitzy delivers 80% plus of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre IDE development tool, pairing it with their coding co pilot of choice to bring an AI native SDLC into their org. Blitzy is providing a limited time, thirty day free proof of concept for qualifying enterprises.
团队将在您组织的实际开发项目中实现5倍效率提升。访问blitzy.com并点击Book Demo,了解Blitzy如何将您的SDLC从AI辅助转型为AI原生。网址是blitzy.com。
The team will provide a 5x velocity increase on a real development project in your org. Visit blitzy.com and press Book Demo to learn how Blitzy transforms your SDLC from AI-assisted to AI-native. That's blitzy.com. If you are a regular listener, you will have heard about Super Intelligent's agent readiness audits at this point. But I wanted to tell you today about the full suite of agent readiness products that goes beyond just the initial readiness report.
过去六个月里,SuperIntelligent构建了完整的智能体规划套件。我们协助您从需求发现过渡到规划实施阶段。完成智能体就绪度审计后,我们通过用例规划报告深度分析关键场景——这些报告将阐明技术准备要点、实施可能面临的挑战,以及应选择自研、采购、合作还是组合方案。后续还可获取技术蓝图规格文档,为您的开发团队或合作伙伴提供精准的智能体构建指南。
Over the last six months, Super Intelligent has built out an entire agent planning suite. We help you move from discovery to planning to implementation. After you've completed your agent readiness audits, we help you double click on your most important use cases with what we call our use case planning reports. These reports are going to help you understand what sort of technical preparation you need to do to be ready for a use case, what challenges you might face in implementation, and whether you should be thinking about building, buying, partnering, or some combination. After that, you can even get a spec document in what we call our Technical Blueprint that gives either your developers or the developers of the partner you work with what they need to build exactly the agent that you're looking for.
若想了解Super Intelligent的智能体规划套件,我们开发了定制GPT解答疑问。访问bit.ly/superagent,该智能体还可协助预约团队咨询。接下来延续网络流量话题:谷歌坚称AI功能未导致流量下滑。周三的博客中,谷歌回应了AI概览功能上线后流量下降的质疑,声称搜索到网站的有机点击总量同比保持稳定。
If you want to learn more about the Super Intelligent agent planning suite, we've built a custom GPT to answer your questions. Just go to bit.ly/superagent, and if you have any questions, the agent can even help you book an appointment with our team. Next up, staying on this theme of web traffic: Google insists that its AI features aren't actually driving down web traffic. In a blog post on Wednesday, Google responded to outcries about declining traffic that coincided with the introduction of AI Overviews. They claimed that total organic click volume from Google Search to websites has been relatively stable year over year.
此外平均点击质量提升,实际输送至网站的高质量点击量较去年略有增长。这与第三方数据显示约一年前点击量开始暴跌形成反差,但谷歌驳斥该数据存在方法论缺陷、孤立案例或AI功能推出前就已发生的流量变化。我个人持保留态度,但谷歌专门回应此事,恰恰说明这场讨论的重要性。说到行业重大趋势,GPT-5显然昭示着'未来人人都是程序员'的豪赌。
Additionally, average click quality has increased, and we're actually sending slightly more quality clicks to websites than a year ago. Now, this is in contrast to third-party data that showed the volume of clicks beginning to crater from around a year ago, but Google dismissed that data, stating that it was often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in search. I'm not really sure; I think there's a fair bit of skepticism around that post. But the fact that they felt the need to respond at all is just another indicator of why this is such an important conversation. Now, speaking of important conversations and big trends in the industry, obviously what we've seen from GPT-5 is that there is a big bet going on right now that everyone is a coder in the future.
顺应这一趋势,谷歌本周将其编程智能体Jules从测试版转为正式发布。该产品对标OpenAI的Codex和Anthropic的Claude Code,具备AI编程智能体核心功能:后台异步编码、自主检索信息的工具使用及广泛平台集成。谷歌实验室产品总监Kathy Korevec表示该智能体将成为长期AI产品线组成部分:'发展轨迹让我们确信Jules将持久存在'。此次发布为观察AI编程大战提供了有趣视角。
And along that theme, Google rolled its own coding agent, Jules, out of beta and into general release earlier this week. The product is Google's answer to OpenAI's Codex and Anthropic's Claude Code. It does AI coding agent things: asynchronous code writing in the background, tool use to find the information it needs, and a wide range of platform integrations. Kathy Korevec, a Director of Product at Google Labs, believes the agent will be a long-term part of their AI offering, stating: The trajectory of where we're going gives us a lot of confidence that Jules is around and going to be around for the long haul. Now, the release provides an interesting vantage point into the state of the AI coding wars.
Jules自五月公测以来,漫长的测试期通过数百次更新提升了稳定性与用户体验。但谷歌入场时间仍落后OpenAI和Anthropic数月。一方面谷歌无法承受发布不成熟产品,另一方面AI工具的快速迭代使其显得滞后。值得注意的是,异步智能体正成为主流应用模式——这已是AI编程智能体高级用户的默认使用方式。
Jules has been in public beta since May, and that comparatively long testing time allowed Google to make improvements to stability and UX thanks to hundreds of updates. But they're now coming to the market a number of months behind both OpenAI and Anthropic. On the one hand, Google doesn't have the luxury of launching unpolished products; on the other hand, the rapid release schedule of AI tools can make them seem pretty far behind. Now, one thing to watch is the growth of asynchronous agents as a major use case. This is rapidly becoming the default mode for power users of AI coding agents.
基本模式是让智能体在后台执行任务时,您可处理其他工作。虽然这种生产力提升方式极具前景,但问题在于其token消耗速度远快于结对编程或同步模式。价格虽快速下降却仍显不足——成本压力正是本周另一热议话题。新报告显示,对部分公司而言,这场AI编程浪潮至今仍是赔本生意。
Basically, you set the agents to a task in the background while you work on other things. And while this is an extremely promising way to go about things from a productivity standpoint, one issue is that it burns through tokens much faster than if it were operating as a pair programmer or synchronous agent. Just another interesting dimension to this: pricing, as we've seen, is coming down quickly, but isn't coming down quickly enough. And indeed, that cost squeeze is another story that's getting some amount of attention this week. A pair of new reports are suggesting that the vibe coding wave has been, at least so far, a money-losing proposition for some companies.
一位匿名消息人士向TechCrunch透露,Windsurf在被Cognition收购前毛利率极低,这意味着他们平均每服务一位客户都在承受巨额亏损。这一问题其实早有端倪,但近几个月来变得尤为严峻。像Claude 4 Opus这类高价模型及后台代理功能的推出,导致重度用户产生了天价账单。Windsurf当时正自主研发模型以降低成本,有消息人士称:'如果不参与模型竞赛,运营成本会非常高昂。'与此同时,The Information还披露了Replit公司业务的一些具体数据。
An anonymous source told TechCrunch that Windsurf had very negative gross margins before they sold to Cognition, meaning they were making a big loss on the average customer. Now, this issue has been hinted at for some time, but appeared to get more acute in recent months. More expensive models like Claude 4 Opus and the introduction of background agents meant power users were racking up huge bills. Windsurf was working on their own models in order to cut costs, with a source stating: "It's a very expensive business to run if you're not going to be in the model game." The Information, meanwhile, had some hard numbers on Replit's business.
该公司营收在不到一年内从200万美元飙升至1.44亿美元。但随着需求激增,成本增速更快。The Information报道显示,Replit二月份毛利率为36%,但四月份推出具有更强自主功能的新版代理后,毛利率暴跌至-14%。目前毛利率已小幅回升至23%(七月数据),这得益于Replit开始实施用量收费。据报道,截至五月,Lovable仍保持着约35%的较高毛利率。
The company's revenue has grown from $2,000,000 to $144,000,000 in less than a year. But as demand picked up, costs increased even faster. The Information reported that Replit had a 36% gross margin in February, but that it fell to negative 14% in April when they launched a new version of their agent with more autonomous functionality. Margins have bounced back a little now, reaching 23% in July after Replit started charging for usage. Reportedly, Lovable has managed to keep their gross margin relatively high at roughly 35% as of May.
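As a quick gloss on what those percentages mean in dollar terms, gross margin backs out the cost of serving each dollar of revenue. The revenue figure below is an illustrative round number, not Replit's actual monthly revenue:

```python
# Gross margin (%) = (revenue - cost of serving) / revenue * 100.
# Back out the serving cost implied by each reported margin figure.

def implied_cogs(revenue: int, gross_margin_pct: int) -> float:
    """Serving cost implied by a revenue figure and a gross margin percentage."""
    return revenue * (100 - gross_margin_pct) / 100

revenue = 1_000_000  # illustrative round number only

for month, margin in [("February", 36), ("April", -14), ("July", 23)]:
    cogs = implied_cogs(revenue, margin)
    print(f"{month}: {margin:+d}% margin -> ${cogs:,.0f} in serving costs per $1M of revenue")
```

At negative 14%, serving costs exceed revenue: every $1M billed costs $1.14M to deliver.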
不过这发生在Lovable推出代理模式之前,其利润率能否维持尚属未知。两家公司计算时都排除了服务免费用户的成本,因此实际毛利率会更低。多位业内人士向TechCrunch表示这属于行业普遍问题。Vibe编程初创公司创始人Nicolas Cheriere指出,所有协同生成产品的毛利率不是持平就是亏损,'简直糟糕透顶'。
However, that was prior to Lovable launching their agent mode, so who knows if that profit margin has held up. Both companies exclude the cost of serving free users from their calculations, so all-in margins are gonna be even tighter. Multiple people told TechCrunch that they believe this is an issue across the board. Vibe coding startup founder Nicolas Cheriere said margins on all of the codegen products are either neutral or negative: "They're absolutely abysmal."
需知严重亏损对科技初创企业并非新鲜事。公司常会亏本运营以抢占市场份额或吸引用户尝试新产品。优步就是著名案例,其利用风投资金补贴车费实现增长。许多SaaS公司同样将亏损的入门套餐作为获客策略。但AI编程公司的特殊困境在于,其固有成本远高于成熟后毛利率可达70%以上的SaaS企业。
Now keep in mind that being deeply in the red isn't a new thing for tech startups. Companies will often deliver their product at a loss in order to establish market share or encourage customers to try something new. Uber was famously a loss leader in this manner, functionally using VC money to subsidize their rides while they grew. The same is true of many SaaS companies making a loss on introductory offers as an acquisition strategy. The issue for AI coding companies is that they have inherently higher costs than SaaS companies, which can achieve upwards of 70% gross margins at maturity.
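To see why lower gross margins squeeze a software business so hard, here's a toy P&L sketch. All percentages are illustrative assumptions, not any company's real figures:

```python
# Toy software-company P&L: operating income is gross profit minus
# operating expenses, with both expressed as percentages of revenue.

def operating_income(revenue: float, gross_margin_pct: int, opex_pct: int) -> float:
    """Operating income given gross margin and OpEx as % of revenue."""
    return revenue * (gross_margin_pct - opex_pct) / 100

revenue = 100.0  # arbitrary units

# Classic SaaS structure: ~60% of revenue spent on sales, R&D, and G&A.
print(operating_income(revenue, 80, 60))  # prints 20.0 -> comfortably profitable
print(operating_income(revenue, 50, 60))  # prints -10.0 -> underwater

# If automation cut OpEx to ~35% of revenue, a 50% gross margin could still work.
print(operating_income(revenue, 50, 35))  # prints 15.0
```

The point of the third line: the margin a business "needs" depends on its operating cost structure, not just its serving costs.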
除依赖用户黏性后提价外,据悉这些公司还有几条盈利路径。最直接的就是模型服务成本下降。谷歌风投普通合伙人Eric Nordlander表示:'这是所有人的指望。当前推理成本已是历史峰值。'这话固然不假——
Aside from being able to jack up the price once users are reliant on AI coding, these companies are reportedly seeing a few pathways to profitability. The first and most obvious is simply that the cost of serving models will come down. Eric Nordlander, a general partner at Google Ventures, said: "That's what everyone's banking on. The inference cost today, that's the most expensive it's ever going to be." And on the one hand, this is obvious.
成本下降速度远超预期。但该领域迹象表明,用户最终仍渴望使用最先进的模型。更关键的是,新型使用方式会消耗更多token:既因模型需要推理,也因同时启用多个代理的工作模式。最令我关注的是这可能重塑成功标准。AI创业者Matt Slotnick写道:'人们过度关注AI毛利率。'
Costs have come down dramatically, much faster than I think anyone thought. However, when it comes to this use case, there are also indications that people ultimately want the most state-of-the-art, up-to-date model. And what's more, the new way of using these models just uses more tokens, both natively because they're reasoning, but also because of this mode of spinning up multiple agents at once. Now, one really interesting thing to me is how this might ultimately impact what we think of as success. AI entrepreneur Matt Slotnick wrote: People are overly focused on AI margins.
传统SaaS的财务模型建立在劳动密集型架构上——例如需要80%毛利率才能覆盖各类成本最终实现现金流为正。若毛利率仅50%,该模型就会崩溃。
Modern SaaS math assumes a certain operating structure that is labor intensive, e.g., you need 80% gross margins to pay for all the typical costs that allow you to eventually run cash flow positive. And 50% gross margins breaks the math in that model.
但AI公司显然不会遵循这套逻辑。若用AI替代人力,企业运营支出(OpEx)将显著降低。OpEx减少意味着可支配利润空间更大,更不用说智能产品的销售成本也会下降。智能技术不仅注入产品,更融入运营模式,其连锁效应至今仍被严重低估。
But AI companies are very unlikely to follow that model. If you're replacing human labor in your own business, your operational expenses or OpEx will be meaningfully lower. If OpEx is lower, you get a lot more margin to potentially play around with. Not to mention, cost of goods sold on intelligence will decrease. You're injecting intelligence not only into products, but also your operating model, and the cascading effects of that still seem largely underappreciated.
营收与成本模型并非孤立存在。换言之,软件公司必须保持80%毛利率并非天条,这只是当前SaaS的偶然形态。Chris Walsh指出,尽管这个话题当下热议,投资者并未逃离该领域。他写道:'即便毛利率结构较弱,私募市场估值仍保持可观溢价。'
Revenue and cost models don't exist in a vacuum. In other words, he says, there's no divine law that software companies need 80% margins; the current operating model of SaaS just happens to require them. And Chris Walsh pointed out that although this might be a topic of conversation right now, investors are hardly running away from this category. He wrote: Private market multiples are trading at substantial premiums despite these weaker margin structures.
我未见市场有明显担忧。最后补充几条动态:融资传闻显示,代理设计平台n8n有望成为下一家AI独角兽。彭博报道该公司正与Accel领投方洽谈以23亿美元估值融资。消息人士强调这是融资前估值,更显其惊人潜力。
I don't see much concern anywhere. Just a few more now before we get out of here. On the funding side of the house, fundraising rumors suggest agent design platform n8n will be the next AI unicorn. Bloomberg reports the company is in talks to raise at a $2,300,000,000 valuation in a round led by Accel. Sources also say that this is a pre-money valuation, making it even more impressive.
如果本轮融资完成,N8N将成为首家纯代理领域的独角兽企业。只要关注智能体领域的人都知道,这家公司正以惊人速度走红,已成为寻求低代码/无代码搭建AI自动化流程人士的首选平台。消息人士透露其年化收入已从去年的720万美元飙升至4000万美元。人才争夺战方面,微软AI首席执行官穆斯塔法·苏莱曼正发起独立挖角行动。
And if the round closes, n8n will be the first pure-play agentic unicorn. Now, this is a company that, if you're spending any time in the agent space, you know how popular they're getting, and very quickly. They have become a default and a go-to for people who are looking for low- or no-code ways to wire together AI automations. Sources suggest that they've reached a $40,000,000 revenue run rate, up from $7,200,000 last year. Over in the talent wars, Microsoft AI CEO Mustafa Suleyman is embarking on a poaching mission all of his own.
《华尔街日报》报道称,这位谷歌DeepMind联合创始人正在老东家猎取顶尖人才。效仿扎克伯格的做法,他亲自致电招募对象,宣传微软去年新成立的AI部门比谷歌旗下的DeepMind更具灵活性。报道称苏莱曼不仅愿提供更高薪酬,还承诺让加盟者参与将CoPilot重塑为ChatGPT真正竞争对手的项目。这种策略似乎已初见成效——近月已有二十余名谷歌高管及员工转投微软。
The Wall Street Journal reports that Suleyman is raiding Google DeepMind, where he was a co-founder, looking for top talent. Mirroring a tactic from Mark Zuckerberg, they write, he has been personally calling recruits, pitching them on the idea that the fledgling AI division Microsoft created last year is a nimbler workplace than DeepMind has become under Google's ownership. Suleyman is also reportedly willing to beat salary packages and offer the opportunity to reshape Copilot into a ChatGPT competitor in its own right. And it seems like the pitch is having at least some resonance. The Journal states that two dozen Google executives and employees have joined Microsoft in recent months.
虽然微软没有像Meta那样抛出十亿美元级报价,但报道称纳德拉CEO已授权穆斯塔法组建能与OpenAI等巨头抗衡的AI团队。耐人寻味的是,苏莱曼专门为微软面向消费者的聊天机器人招兵买马,而非企业级AI产品。我个人对此战略深表疑虑——收购Inflection这种消费级DNA公司核心团队,在我看来是微软在脱离OpenAI后持续丧失行业影响力的错误决策。
And while we don't have any sort of billion-dollar offers of the sort that Meta's been throwing around, the Journal does say that CEO Satya Nadella has given Suleyman autonomy to build an AI operation that can compete with top players like OpenAI. Now, interestingly, Suleyman is reportedly hiring specifically for Microsoft's consumer-facing chatbot, not their enterprise-focused AI products. I personally have big questions around that strategy for Microsoft. Frankly, I think that buying out the guts of a consumer-DNA company like Inflection looks at this stage like a mistake that Microsoft made, and that the company's relevance has done nothing but go down since they've tried to move away from OpenAI.
但未来犹未可知,这个领域瞬息万变。说到OpenAI,本周还有两则非模型动态:首先他们正进行二次股票发售谈判,公司估值或将达五千亿美元。彭博社报道包括Thrive Capital在内的现有投资者正洽购员工持股。
But who knows? It's still a highly dynamic space, and a lot could happen. Speaking of OpenAI, as if they didn't have enough news, our last two stories are the non-model things that happened at that company this week. First of all, they are in talks about a secondary share sale that would value the company at a half trillion dollars. Bloomberg reports that existing investors, including Thrive Capital, have approached OpenAI about buying employee shares.
若交易达成,估值将较今年初融资时的3000亿美元再跃升三分之二。这也印证了上周的融资传闻——现有投资者因无法在软银主导的轮次中追加投资而沮丧。近年来随着估值飙升,OpenAI一直积极为员工创造流动性机会,自ChatGPT发布后已进行多次二次发行,有时一年不止一次。此次融资背景是Meta正重金挖角AI研究人员。
If the deal goes ahead, it would be another two-thirds valuation jump from the $300,000,000,000 OpenAI was valued at during their last fundraising round earlier this year. It would also mesh with fundraising rumors from last week, which suggested that existing investors were frustrated that they couldn't get more money into the SoftBank-led round as it comes together. Now, OpenAI has been pretty proactive about getting their people some liquidity as their valuation skyrocketed over recent years. They've done multiple secondary rounds since the launch of ChatGPT, sometimes more than one per year. This round, of course, comes with a backdrop of Meta spraying money at AI researchers, attempting to lure them away from OpenAI.
让员工优先享受估值提升红利,对维系团队士气至关重要。Hyperbolic Labs的Yuchen Jin透露这并非唯一激励措施——周三他发文称:"OpenAI的朋友们现在兴奋不已,不是因为GPT-5将至,而是山姆宣布将为每位员工发放150万美元的两年期奖金。英伟达78%员工是百万富翁,在OpenAI这个比例是100%。"
So handing employees the first offer at a boosted valuation could go a long way to building goodwill. Yuchen Jin of Hyperbolic Labs also suggested that this isn't the only incentive going on at the moment. On Wednesday, he wrote: "My OpenAI friends are so hyped right now, not because it's the night before GPT-5, but because Sam just announced a $1,500,000 bonus for every employee over two years. 78% of NVIDIA employees are millionaires. At OpenAI, it's 100%."
不妨称之为"扎克伯格挖角效应"。《信息报》周四经详细信源证实:约1000名研发工程师(占员工总数三分之一)将获得奖金,金额从数十万到数百万美元不等,分两年归属。
I think we can call it the Zuck poaching effect. On Thursday, The Information covered the story with a little more thorough sourcing. They reported that the bonuses were being paid to around 1,000 researchers and engineering employees, representing around a third of the company. Rather than $1,500,000 across the board, the bonuses range from the low hundreds of thousands into the millions, and they will vest over two years.
最后,OpenAI近乎免费向政府提供ChatGPT以推动应用落地。周三其宣布与美国政府签订协议,以每机构1美元的价格提供一年期企业版许可。公司声明称:"提升政府效能,使服务更快捷、便利、可靠,是让全民受益于AI的关键。我们认为公务员应参与塑造AI应用方式,最佳途径就是为其配备顶尖AI工具,同时设置严格护栏、保持高度透明、尊重公共使命。"
Finally today, OpenAI is giving ChatGPT to the government basically for free in hopes of driving adoption. On Wednesday, OpenAI announced that they had contracted with the U.S. government to provide one-year ChatGPT Enterprise licenses for $1 per agency. The company wrote: Helping government work better, making services faster, easier, and more reliable, is a key way to bring the benefits of AI to everyone.
政府事务副总裁乔·拉尔森强调:"此举重点并非获取市场竞争优势,而是推动AI在联邦机构普及。私营领域已拥抱AI,我们认为政府不应落后。"
At OpenAI, we believe public servants should help shape how AI is used. The best way to do that is to put best in class AI tools in their hands with strong guardrails, high transparency, and deep respect for their public mission. At least from a narrative standpoint, OpenAI is insisting that this is not just about locking in lucrative government contracts with a teaser offer. VP of Government Joe Larson said: The focus of this effort is not to gain a market advantage over competitors. It's to scale the adoption of AI across the federal workforce.
朋友们,虽然还有更多新闻,但节目已近尾声。下周不可能比本周更精彩纷呈,我们定会有时间探讨那些故事。今天就到这里结束吧。
The private sector is embracing AI. We don't believe the government should be left behind. So, friends, there are even more stories, but we're already getting long, so we will wrap there. There is no universe in which next week can be as action packed and release filled as this week, so I'm sure we will have time to get into some of those stories. For now, that is where we're gonna wrap.
一如既往地感谢您的聆听或观看。下次再见,愿您平安。
Appreciate you listening or watching, as always. And until next time, peace.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。