本集简介
双语字幕
仅展示文本字幕,不包含中文音频;想边听边看,请使用 Bayt 播客 App。
各位听众,欢迎收听比特币OpTech通讯第383期回顾。
Welcome, everyone, to the Bitcoin Optech newsletter number 383 recap.
本周,我们将探讨通过差分模糊测试在NBitcoin库中发现的一个共识漏洞。
This week, we're going to be covering a consensus bug in the NBitcoin library found via differential fuzzing.
我们将讨论LNhance软分叉提案,以及它如何将三种不同的契约式操作码整合为一个提案。
We're going to talk about the LNhance soft fork proposal and how it combines three different covenant-style opcodes into a single proposal.
我们呼吁在拟议的varops预算下对比特币脚本进行基准测试,随后将讨论SPHINCS+(即用于后量子签名的SLH-DSA)的一些优化方案。
We have a call to benchmark Bitcoin script under the proposed varops budget, and then we're going to talk about some optimizations to SPHINCS+, which is SLH-DSA, for post-quantum signatures.
本周,我们邀请到了朱利安加入讨论。
This week, we're joined by Julian.
朱利安,你想做个自我介绍吗?
Julian, you want to introduce yourself?
当然。
Sure.
我是朱利安,目前已经从事比特币脚本修复工作大约半年时间。
I'm Julian, and I've been working on restoring Bitcoin script for about half a year now.
当然还有varops预算的事情。
And also, of course, the varops budget.
太好了。
Excellent.
非常感谢你的加入。
Well, thank you for joining us.
我们先进入新闻环节,稍后会讨论你提出的共识变更事项。
We'll jump into the news section, but we'll get to your changing consensus item shortly.
本周唯一的一条新闻标题是《NBitcoin库中的共识漏洞》。
The first and only news item this week is titled "Consensus bug in NBitcoin library".
这源于Bruno发表在Delving Bitcoin上的一篇关于NBitcoin中OP_NIP操作码导致共识失败的文章。
This was motivated by a Delving Bitcoin post from Bruno about a consensus failure in NBitcoin involving the OP_NIP opcode.
如果我发音不准,Murch可以纠正我。
Murch can correct me if I'm pronouncing that incorrectly.
他是通过差分模糊测试发现这个问题的,或许需要补充些背景说明。
And he found it using differential fuzzing. Maybe, for some context:
而NBitcoin是一个.NET库。
NBitcoin is a .NET library.
我相信它是用C#编写的,用于处理比特币相关事务,如果你在做.NET开发的话,它就像是.NET中进行比特币操作的API。
I believe it's written in C#, and it's for working with Bitcoin if you're doing .NET development, sort of an API to do Bitcoin-y stuff in .NET.
他在文章中解释了为什么传统模糊测试无法发现这个问题,主要是因为并没有发生崩溃。
In the post, he gets into why traditional fuzzing would not have found this, namely because there wasn't a crash.
我认为传统模糊测试可能发现不了这种异常行为,因为这种偏差出现在类似try-catch的结构中,所以他们的差分模糊测试才能发现。
I think the deviation in behavior was inside a sort of try-catch structure, so traditional fuzzing might not have found that, but their differential fuzzing did.
Murch,你熟悉OP_NIP操作码吗?
Murch, are you familiar with the OP_NIP opcode?
我们在这里的文稿中稍微讨论了一下这个问题。
We sort of get into it a bit in the write up here.
我一时无法凭记忆解释清楚,但如果需要的话我可以去查资料。
I can't explain it off the top of my head, but I can look it up while you continue, if you want.
不,我甚至不确定...不过我想这确实与讨论相关。
No, I'm not even sure it's... well, I guess it is germane to the discussion.
它看起来是在原地移除某些内容;在移位操作中,NBitcoin的代码试图访问数组中已被移除的末端,引发了类似越界异常的情况,随后被静默捕获。我记得这导致该脚本在NBitcoin中被判定为有效,而在Bitcoin Core中则不然(或反之)。本质上,这造成了基于脚本验证的共识差异。
It's removing something in place, it looks like, and during the shift, in NBitcoin, it caused the code to try to access the previously removed end of the array, causing something like an out-of-bounds exception, which was then silently caught. I think that caused the script to be valid in NBitcoin while in Bitcoin Core it wasn't, or vice versa. So essentially, there was a consensus discrepancy based on that script evaluation.
按我的理解,它试图从一个已不存在的数组中移除元素,操作步骤的实现顺序似乎有误。
The way I understood it, it tried to remove an element from the array that wasn't there anymore, like the sequence of steps was implemented incorrectly.
OP_NIP操作码会移除堆栈顶部往下数的第二个元素,也就是保留顶部元素不动而移除其下方的元素。根据这个漏洞的描述,当堆栈填满时(16个元素左右),它会试图访问一个已不存在的元素,从而在NBitcoin中抛出异常。
OP_NIP removes the second element from the top of the stack, so it leaves the top element untouched and removes the element underneath. And for some reason, if the stack was full, so 16 elements or something from the description of this bug, it would try to access an element that was no longer present, and that threw an exception in NBitcoin.
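To make the stack effect concrete, here is a minimal Python sketch of OP_NIP's semantics. This is illustrative only, neither the NBitcoin nor the Bitcoin Core implementation:

```python
# Minimal sketch of OP_NIP stack semantics (illustrative only; not the
# actual NBitcoin or Bitcoin Core code). The stack top is the last element.

def op_nip(stack):
    """Remove the second-from-top element, leaving the top untouched."""
    if len(stack) < 2:
        # A correct interpreter must reject this case explicitly; letting an
        # index error escape, or silently swallowing it in a try/catch, is
        # exactly how two implementations can disagree on script validity.
        raise ValueError("OP_NIP requires at least two stack elements")
    del stack[-2]

stack = [b"a", b"b", b"c"]    # top of stack is b"c"
op_nip(stack)
assert stack == [b"a", b"c"]  # b"b", second from top, was removed
```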
顺便问下,nBitcoin的最大用户是谁?
The biggest consumer of NBitcoin, by the way, is what?
BTCPay服务器?
BTCPay server?
是的,非常重要的项目。
Yes, very important project.
由于NBitcoin并未被用作比特币的全节点实现,从这个意义上说,这并非共识关键问题。
So, because no one is using NBitcoin as a full node implementation of Bitcoin, this is not a consensus-critical issue in that sense.
我推测BTCPay服务器也不会用它来解析任何内容,或者说即使会,OP_NIP操作码本身的使用频率应该也不高。
I assume that it would not be used to interpret anything for BTCPay Server either, or maybe it would, but OP_NIP doesn't get used all that much, I think.
也许只适用于裸脚本,我猜对吧?
Maybe only in bare scripts, I guess, right?
或者我想你也可以在脚本哈希,也就是支付到脚本哈希(P2SH)中使用它。
Or I guess you could do it in a script hash, or pay-to-script-hash.
但不管怎样,这是一个不太常见的操作码。
But anyway, it's a less commonly known opcode.
是的,而且因为没有全节点,你提到他们可以很快推出修复。
Yes, and because it's not a full node, you mentioned that they could roll out the fix pretty quickly.
如果你查看时间线和深入帖子,所有事情都在同一天发生。
If you look at the timeline in the Delving post, everything happened on the same day.
布鲁诺报告了这个问题。
Bruno reported it.
尼古拉斯确认了它。
Nicholas confirmed it.
尼古拉斯提交了PR,9.0.3版本也在同一天发布。
Nicholas opened up the PR and 9.0.3 version was released all in the same day.
现在进入我们每月一次的共识变更环节,不过这次要稍微打破顺序,因为朱利安在场,可以和我们讨论共识变更环节的第二项议题:varops预算下的基准测试。
Moving to our monthly segment on changing consensus, we'll jump a little bit out of order because we have Julian here to talk about our second item from the changing consensus segment, benchmarking under the varops budget.
朱利安,你曾在比特币开发邮件列表(我想还有Delving Bitcoin)上发表了关于在拟议的varops预算下对比特币脚本执行进行基准测试的文章,该提案最初源自脚本恢复工作。
Julian, you posted to the Bitcoin-Dev mailing list and, I think, Delving Bitcoin about benchmarking Bitcoin script execution under the proposed varops budget, which was originally motivated by the script restoration effort.
听众们可能通过第374期新闻通讯和播客中对脚本恢复BIPs的讨论对此有所了解。
Listeners may be familiar with this from the discussion in newsletter and podcast 374 on the script restoration BIPs.
朱利安,或许你可以先简单介绍一下什么是varops,以及它与sigops有何不同?
Julian, maybe you can remind us a bit about what varops are and how they're different from sigops.
是的,没错。
Yeah, exactly.
我想先简单谈谈关于计算预算的整体构想。
I wanted to start a little bit with the idea of having a computational budget in general.
所以我用类似区块大小限制的方式来思考这个问题,我想大家对区块大小限制都很熟悉。
So I think about it similarly to the block size limit, which I think everyone's very familiar with.
区块大小限制的存在可能有很多原因,但对我来说,主要原因是为了保持网络去中心化。
So the block size limit probably has many reasons to exist, but for me, the main reason is to keep the network decentralized.
我们希望节点运行尽可能简单。
You want the nodes to be as simple to run as possible.
主要是为了解决计算机硬件限制问题——对于区块大小来说,可能是网络速度和存储空间,特别是如果你想运行完整的存档节点,还包括内存。
And it's to address these hardware limitations that computers have. For the block size, it's probably the Internet speed, and also, if you want to run a full archival node, the storage, and probably also memory.
本质上,我们希望区块链尽可能小以便运行节点,因此我们不希望链无限快速增长。
Essentially, you want the chain to be as small as possible to run a node, and therefore, we don't want the chain to grow infinitely fast.
所以我们有区块大小限制,数据量也受到严格控制。
So we have a block size limit, and we have a very controlled amount of data.
我认为设置区块大小限制的一个非常重要的原因是处理单个区块的速度,因为如果允许区块过大,其他矿工处理该区块会非常缓慢,这样第一个区块的矿工在后续区块开采中会获得优势。
I think one very important reason to have a block size limit is the speed at which you can process a single block, because if you allow very big blocks, it could be very slow for other miners to process that block, and then the miner of the first block would have an advantage starting on the succeeding block.
是的,没错。
Yes, right.
所以数据在网络中快速传播也很重要,而区块越小,这一点就越容易实现。
So it's also important that the data is passed around through the network as fast as possible and that works much better if the blocks are small.
好的。
Okay.
但如果你运行一个节点,你不仅仅是下载区块并存储它,还要验证其中的交易。
But then if you run a node, you're not just downloading the block and storing it away, but you are also validating the transactions inside.
为此,你需要运行脚本解释器。
And for that, you're running the script interpreter.
对于大多数交易来说,实际上只是验证签名。
And for most transactions, it really just verifies the signature.
所以我们把这些操作简称为SIGOps。
And so it runs these sigops, as we call them for short.
由于签名验证的计算速度相对较慢,正如你所说,我们已经为计算密集型操作设定了某种预算,但这只是针对SIGOps的特殊限制。
And since verification of signatures is relatively slow to compute, as you say, we already have a kind of a budget for computationally expensive operations, but it's just super specific for SIGOps.
简单来说,SIGOps预算规定每个比特币区块只能包含不超过80,000个SIGOPS。
So the sigops budget, and I'm simplifying a little bit, basically says you cannot have more than 80,000 sigops in one Bitcoin block.
就像区块大小限制一样,如果你生成的区块超出了这个限制,它就会失效。
So as with the block size limit, if you produce a block that has more than that, it will just be invalid.
现在你可能会问,我们还有其他在CPU上验证或处理速度较慢的操作吗?
So now you might ask the question, do we have any other operations which are also slow to validate or slow to process on a CPU?
因为这并不涉及网络速度,而是节点计算能力的问题。
Because this doesn't address the speed of the network, but rather the computing power of the node.
目前,如果你想通过制造一个极其缓慢的交易来攻击比特币,一个最大化缓慢的交易,你可能会选择反复哈希最大栈大小的元素,因为实际上你可以编写出比仅包含最大数量SIGOps更慢的脚本。
And right now, if you were to try to attack Bitcoin by making an absolutely slow transaction, a maximally slow transaction, you would probably actually go for repeatedly hashing maximum-stack-size elements, because you can actually produce scripts that are slower than just putting in the maximum amount of sigops.
因此,我认为提出为什么我们只有SIGOps限制而没有其他限制(例如针对哈希操作的)是非常合理的,因为它们也可能很慢。
And so I think it's quite reasonable to ask why we only have a sigops limit and not, for example, also another limit for hash operations, because they can also be slow.
虽然不如签名验证那么慢,但这确实是相同的概念。
Not as slow as sig validation, but it is really the same concept.
好的,我们现在对varops有概念了,你想把它和GSR联系起来吗?
Okay, so we have an idea of varops now. Do you want to tie it into the GSR?
比如我们能否在没有GSR的情况下实现varops?我知道它源自那项工作,不过也许可以谈谈它们之间的关系。
Like, could we do varops without the GSR? I know it came out of that effort, but maybe talk about the relation.
是的。
Yeah.
没错。
Exactly.
当然,这并不是为比特币新增约束的真正动机。
So, of course, this alone is not the real motivation to add, like, a new constraint to Bitcoin now.
我们的想法是,如果未来要像在GSR中那样添加新操作码,最好能有一个框架来规范交易中允许的计算量。
The idea is that if we were to potentially add new operations, like we would like to do in the GSR, then it would be great to have a framework which just captures how much compute should be allowed in a transaction.
具体到GSR,它包含了约15个被禁用的操作码,其中一些计算量也很大。
And specifically, the GSR contains, like, 15 opcodes that have been disabled, and some of them are also computationally expensive.
主要是乘法和除法运算。
And this is mostly the multiplication and division.
你可能认为在现代芯片上乘法运算很快,但这实际上取决于操作数的大小。
You might think that multiplication is quite fast on a modern chip, but it really depends on how large the operands are.
如果你有两个非常大的数字要相乘,就应该有某种限制。
So if you have two very, very large numbers and you want to multiply them, there should be some kind of limit.
因此,在没有这类约束的情况下直接重新激活所有操作码可能不是个好主意。
So it would probably not be a good idea to just reactivate all the opcodes without having some constraint like that in place.
是的。
Yeah.
也许这里有一点需要说明。
Maybe one comment here.
当我们引入隔离见证(SegWit)时,采用权重限制而非为见证数据和非见证数据分别设置两个独立限制的主要原因,是我们希望在一个单一维度上定义限制。因为如果在多个维度上设计限制,就会给区块构建引入装箱问题。在我看来,VAR运算的尝试正是为了以与权重限制相协调的方式取代SIG运算限制,毕竟我们仍希望区块构建者和费用估算者只需关注单一维度。
So when we introduced SegWit, one of the reasons, or the main reason, why we introduced the weight limit instead of having two separate limits for witness data and non-witness data is that you want a single dimension in which you define the limit, because if you have multiple dimensions in which you design the limit, you introduce a bin-packing problem to block building. So basically, it seems to me that varops is an attempt to replace the sigops limit in a way that is also easy to align with the weight limit, because we still only want a single dimension for people to build blocks, estimate fees, and so forth.
是的。
Yes.
完全正确。
Exactly.
Russell O'Connor曾将这个观点发布到邮件列表上。varops预算实际上是对sigops预算的泛化,并将在这个新的tapleaf版本中彻底取代它。
Russell O'Connor, I think, posted this onto the mailing list. The varops budget really tries to generalize the sigops budget, and it will completely replace it in this new tapleaf version.
今后你就不需要再考虑sigops了。
So you wouldn't think about sigops anymore.
这将是一个通用的计算预算体系,适用于所有需要执行时间的操作。
It would just be a general computational budget, which would apply to all operations, which take some amount of time to execute.
或许可以深入谈谈你们的基准测试工作?另外我注意到,这里似乎还包含了对使用不同操作系统和硬件人群的行动号召。
Maybe get into a little bit about your benchmarking, and then also, it seems like there's a call to action here for people on different operating systems and hardware.
是的。
Yeah.
也许需要理解为什么我们需要基准测试,为什么它很重要。
Maybe first, to understand why we want to have a benchmark, why it's important:
先说说varops预算的当前状态。也许我可以稍微解释一下它实际是如何运作的。
The varops budget in its current state... maybe I'll explain a little bit how it actually works.
我想这会有帮助。
I think I think that would help.
这与Tapscript中的限制类似,都是以单个输入为作用范围。
So it's similar to the limit in Tapscript, where it is on an input scope.
因此对于每个输入,根据其大小,你会获得更多的SIGOps。
So for each input, depending on its size, you get more sigops.
你至少有一个。
You have at least one.
如果你的输入非常大,里面可能包含多个SIGOps。
And then if your input is very large, you can potentially have multiple SIGOps in there.
同样地,对于varops,我们获取交易的权重,然后乘以一个预算因子,这个因子大致相当于该模型的一个自由参数。
And, very similarly, for varops, we take the weight of the transaction, then we multiply it by a budget factor, which is like a free parameter of that model, more or less.
这样就能得出整个交易的计算预算。
And then this gives you the computational budget for the whole transaction.
接着我们遍历所有输入项。
And then we iterate through all inputs.
每当执行一个操作码时,我们就直接扣除相应成本。
And as each opcode is executed, we just subtract the cost.
如果预算值跌破零,就会导致交易失败。
And if the budget ever goes below zero, that would make the transaction fail.
这会触发我们引入的新脚本错误,使交易变为无效。
It would throw a new script error that we introduced, and the transaction would be invalid.
需要说明的是,当前形式的预算值设置得非常高,所以如果你只是广播普通交易,基本不需要考虑这个问题。
Maybe to clear up also: the budget in this current form is very high, so you would not really be thinking about it if you just broadcast a normal transaction.
这个机制主要是为了防范最坏情况,比如有人试图对任意大数进行乘法运算等行为。
This is really to catch these, worst cases if you were to try to multiply, for example, some arbitrary large numbers.
这不会真正影响普通用户或仅使用常规交易签名的人。
And it wouldn't really impact the average user or someone who just uses normal transaction signing.
那么问题可能在于我们如何定义每个操作码的成本,以及我们使用什么样的因子?
So, yeah, maybe the question is how we define the cost for each opcode, and also what kind of factor we use.
我认为,让预算与权重成比例是相当直接的。
So making the budget proportional to the weight is pretty straightforward, I believe.
但接下来你还需要考虑这个因子,它目前必须具有特定的维度。
But then you have this factor, which has to have a certain dimension; currently,
它被设置为5200。
It's set to 5,200.
当然,你还需要为每个操作定义成本。
And then, of course, you need to define the cost for each operation.
举个例子,我可以使用哈希运算。
So for example, I can use hashing.
比如按当前的定价,每哈希一个字节需要消耗10点varops预算。
With the current hashing cost, for example, it takes 10 varops budget per byte hashed.
由于操作符总是作用于大小不定的操作数,因此预算也是可变的。
So since the operators always act on operands of various, unknown sizes, the budget is variable.
这就是名称的由来。
So this is kind of where the the name comes from.
它作用于各种可变大小的操作数。
It acts on various variable sized operands.
所以我们总是按字节计算成本。
And so we always calculate the cost per byte.
如果你哈希1字节,成本就是10。
So if you hash one byte, it will cost 10.
如果你哈希20字节,成本就是200,以此类推。
If you hash 20 bytes, it would cost 200 and so on.
这假设了哈希时间呈线性增长,虽然不完全准确,但数量级是正确的。
So it kind of assumes linear scaling in how long hashing takes, which is not 100% correct, but it is on the right order of magnitude.
所有这些变量,模型中的这三个参数,都必须通过基准测试的实证数据来验证。
And all these variables, these three parameters in the model, they have to be verified through some empirical data, which is the benchmark.
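The accounting just described can be sketched as a toy model. The factor of 5,200 per weight unit and 10 budget per byte hashed are the numbers mentioned in this discussion; real per-opcode costs are defined in the proposed BIPs and differ in detail:

```python
# Toy model of the varops budget accounting (illustrative sketch only;
# BUDGET_FACTOR and HASH_COST_PER_BYTE are the values quoted in the episode,
# not an authoritative specification).

BUDGET_FACTOR = 5_200      # budget granted per weight unit of the transaction
HASH_COST_PER_BYTE = 10    # cost charged per byte hashed

class BudgetExceeded(Exception):
    """Raised when script execution spends more than the budget allows."""

class VaropsBudget:
    def __init__(self, tx_weight):
        # The whole transaction gets weight * factor budget up front.
        self.remaining = tx_weight * BUDGET_FACTOR

    def charge(self, cost):
        # Each executed opcode pays its cost down from the shared budget;
        # going below zero makes the transaction invalid.
        self.remaining -= cost
        if self.remaining < 0:
            raise BudgetExceeded("varops budget exceeded")

budget = VaropsBudget(tx_weight=400)     # a 400-weight-unit transaction
budget.charge(20 * HASH_COST_PER_BYTE)   # hashing a 20-byte element costs 200
assert budget.remaining == 400 * BUDGET_FACTOR - 200
```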
所以,基本上根据交易的大小,你会获得更多的操作预算,然后根据脚本中的操作,你会根据使用的操作码、输入操作码和处理的数据量来消耗这个预算。你现在正在尝试对此进行基准测试,以确保操作码的成本与实际计算成本大致相符,同时希望验证你所做的和想要做的合理操作不会超出预算,反之,不合理的操作可能会因预算不足而被阻止。
So basically, depending on the size of the transaction, you get a bigger budget of varops, and then depending on what you do in the script, you pay down from that budget, depending on the opcodes you use and how much data goes into the opcodes and is processed. And you are now trying to benchmark this in order to make sure that the costs for the opcodes roughly align with the actual computational cost. And you're hopefully also verifying that whatever you do and want to do that is reasonable doesn't exceed the budget, and vice versa, maybe unreasonable things are prevented by the budget.
这样理解大致正确吗?
Is that roughly a good understanding?
好的,我们可以稍微聊聊基准测试的方法论。
Yes, we can talk a little bit of, of, about the methodology of the benchmark.
真正的目标是:假设我们已经按Rusty Russell当前提出的形式完全恢复了脚本,并且同时启用了varops预算。
Really, the goal is: let's say we have script completely restored in the form Rusty Russell proposed it right now, and we also, at the same time, have the varops budget enabled.
那么有一个很好的性质值得追求。
Then there is a nice property to aim for.
在这种状态下,最糟糕的脚本,也就是验证时间最长的脚本,当然不应比你现在就能构造出来的更慢。
In this state, the worst possible script, and what I mean is the script with the highest validation time, is of course not slower than whatever you can do right now.
举个例子,你现在可以在一个区块中放入80,000 SIGOps。
So for example, you can right now put 80,000 sigops into a block.
而在恢复脚本后,如果无法构造出比那80,000个sigops更慢的脚本,那就太理想了。
And after restoring script, it would be great if you cannot produce any script that is slower than those 80,000.
这里假设80,000个sigops就是目前的最坏情况,但实际上通过哈希运算,你还能产生更慢的验证时间。
Just assuming that those 80,000 are the worst right now; in fact, with hashing, you can produce even slower validation times.
我认为80,000个sigops是区块限制,对吧?
I think 80,000 sigops is the block limit, right?
不是交易限制?
Not the transaction?
没错。
Exactly.
我们是从区块层面来看的,所以我觉得这样更容易理解规模。
We are looking at it on block scale, so I think this makes it a little bit easier to understand the dimensions.
比如80,000个sigops在现代机器上大概需要两秒左右。
So 80,000 sigops, for example, is around two seconds, maybe, on a modern machine.
如果你假设整个区块都填满了大数乘法运算,那么最理想的情况是,我们能测量到的最慢执行时间仍然低于那两秒,或者低于你的机器对80,000次SIG操作的基准测试结果。
And if you just assume you have the whole block filled with, for example, multiplications of very large numbers, then it would be great if the slowest execution time we can measure would still be below those two seconds, or whatever your machine benchmarks for 80,000 sigops.
是的。
Yeah.
这看起来是合理的。
That seems reasonable.
而且在那篇帖子中,如果你读过Dalving的帖子,关于方法论本身就有相当多的讨论。
And in the post, if you read the Delving post, there was quite a bit of discussion around the methodology itself.
或许需要澄清一下:我们的想法是,如果有人想构造一个最慢的交易或区块之类的东西,他们现在就已经可以使用传统脚本或现有的Tapscript叶版本,所以我们并不需要对脚本施加更多限制,因为当前已经存在一个最坏情况。
Maybe to clarify, the idea is: if someone wants to try to create a maximally slow transaction or a block or whatever, they can already use legacy script or the existing Tapscript leaf version, so we don't really need to constrain script much further, because there is already a worst case right now.
而我们的BIP提案并不会改变Tapscript本身。
And our BIPs, they would not change Tapscript itself.
他们不会进一步限制它。
They wouldn't constrain it more.
实际上它们会新增一个叶版本,在其中你可以使用varops预算、新启用的操作码,以及其他一些变更,比如大幅提高的堆栈大小限制。
They would actually add a new leaf version where you can use the varops budget and the newly enabled opcodes and some other changes, like the stack size limits, which are quite a bit higher.
是的,关于基准测试的问题在于,不同机器得到的结果差异很大,这可能有点令人惊讶,特别是哈希函数的表现,它们高度依赖于是否启用硬件加速以及在不同架构上的具体实现方式。
And, yeah, the issue with the benchmark is that depending on the machine, you get very different results, which is maybe a little bit surprising. Especially for the hashing functions, they highly depend on whether they are hardware-accelerated and really how they are implemented on each architecture.
因此,如果能收集大量数据将会非常理想——包括新旧机器、Intel和AMD处理器、ARM芯片、各种操作系统以及不同编译器下的海量数据。
So it would be really great to have a lot of data: just a lot of data from new machines, from old machines, from Intel, AMD, from ARM chips, from all the operating systems you can think of, from different compilers, just to collect a big amount of data.
基于这些数据,我们当然会验证或调整我们的三个参数。
And based on that, we would, of course, then verify or adjust our three parameters.
好的。
Cool.
也就是说,大家可以直接从你的代码库获取基准测试程序并运行?
So, basically, people can just get your benchmark from your repository and run that?
是打包好的吗?
Is it packaged like that?
是的。
Yes.
我目前...我想直接提供二进制文件,但具体操作在帖子中都有说明。
I'm currently... I want to also provide binaries directly, but it's all in the post.
你可以检出我的分支,启用基准测试编译后运行这个新生成的二进制文件。
You can check out my branch, then compile with benchmarks enabled, and then run this new binary.
朱利安,我有个后续问题要问。
Julian, I had a follow-up.
我想你可能已经提到过,我只是想确认我的理解是否正确,你们并不是通过执行来确定预算的。
I think you may have touched on it, I just wanna make sure from my own understanding, you don't actually execute to determine the budget.
你们是检查脚本,然后基于此来构建这个预算。
You inspect the scripts and then that's how you build this budget.
你们看到这个大小,然后加入各种操作码及其关联的预算,来判断是否超出了限制。
You see its size, and then you add in the various opcodes and their associated budget to see if it's exceeded.
是这样吗?
Is that right?
不是在执行前
Not Before
你们实际执行操作。
you actually execute.
你们不是先执行再观察。
You don't execute and then see.
你们是通过检查交易的操作码,来确定该交易是否超出了其varops预算。
You, like, you inspect the transaction's opcodes in order to determine if that transaction exceeds its varops budget.
对吗?
Right?
不。
No.
不。
No.
我们不那样做。
We don't do that.
我们会执行,是的。
We execute and yeah.
我们会执行脚本解释器。
We execute the script interpreter.
如果预算降到零以下,交易就无效。
And if the budget goes below zero, the transaction is invalid.
在某些情况下,你必须确保在执行下一个操作前先检查它的开销有多大,因为,是的,同样的问题。
And for some cases, you have to make sure that you first check how expensive the next operation is before executing it, because, yeah, same problem.
如果你要乘这两个大数,就得先检查一下。
If you multiply these two huge numbers, then you want to check beforehand.
所以我们会对这类高成本操作进行预检查。
So we do that for expensive operations like that.
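This pre-charging idea can be sketched like so. The cost model here is hypothetical, just for illustration (real multiplication costs are specified in the varops BIPs): cost proportional to the product of operand lengths, since schoolbook big-number multiplication scales that way.

```python
# Sketch of pre-charging an expensive opcode before executing it.
# mul_cost is a hypothetical cost model, NOT the actual varops schedule.

def mul_cost(a: bytes, b: bytes) -> int:
    return len(a) * len(b)

def execute_mul(stack, budget):
    """Replace the top two numbers by their product, charging the budget first."""
    a, b = stack[-2], stack[-1]
    cost = mul_cost(a, b)
    if cost > budget:
        # Checked *before* doing the work: the interpreter never performs
        # a multiplication it cannot afford.
        raise ValueError("varops budget exceeded")
    budget -= cost
    product = int.from_bytes(a, "little") * int.from_bytes(b, "little")
    size = (product.bit_length() + 7) // 8 or 1
    stack[-2:] = [product.to_bytes(size, "little")]
    return budget

stack = [(3).to_bytes(1, "little"), (4).to_bytes(1, "little")]
remaining = execute_mul(stack, budget=100)   # cost is 1 * 1 = 1
assert remaining == 99
assert stack == [(12).to_bytes(1, "little")]
```

Note that the cost depends on the operands currently on the stack, which is why only the very next operation can be priced ahead of time, as discussed below.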
好的。
Okay.
这很合理。
That makes sense.
对。
Yeah.
这取决于堆栈状态。
It's dependent on the stack state.
对吧?
Right?
所以你只能真正检查下一个操作的成本会有多高。
So you can only really check the next operation, how expensive it would be.
你无法无限预知未来,因为那样的话你基本上还是得执行整个脚本。
You can't just foresee indefinitely into the future, because then you'd basically have to execute the script anyhow.
对。
Right.
有道理。
Makes sense.
没错。
Right.
我想抱歉。
I believe sorry.
我记得Rusty开发过一个工具就是专门做这个的,不过你最好直接问他,因为我觉得直接执行其实很简单,而且总能得到准确结果。
I believe Rusty has built a tool that wants to do exactly that, but you should ask him about it, because I believe just executing is very simple, and you always get the correct result.
是的。
Yeah.
Julian,我想对听众们的行动呼吁是:不仅要参与深入讨论,还要在你们的硬件上运行基准测试工具,并将结果反馈回来,这样大家才能相应地进行校准。
Julian, I suppose the call to action for listeners here is to participate in the delving discussion, but also run your benchmarking tool on their hardware and contribute the results back so that you all can calibrate accordingly.
还有其他问题吗?
Anything else?
有。
Yes.
我也一直很期待关于varops预算BIP的反馈,它们服务于伟大的脚本恢复(Great Script Restoration)工作。
I also always appreciate feedback on the BIPs for the varops budget, for the Great Script Restoration.
同时我也在寻求对实现代码的审查。
And I am also looking for a code review for the implementation.
好的,感谢您的时间。
Well, thank you for your time.
谢谢您加入我们,并带我们了解这些不同的部分。
Thanks for joining us and walking us through the different pieces here.
我们非常感激。
We appreciate it.
谢谢,Julian。
Thanks, Julian.
好的。
Yeah.
谢谢。
Thank you.
Moon,抱歉。
Moon, I'm sorry.
我刚才没注意到你。
I didn't see you earlier.
如果我们跳过了你的议题而你一直在场,那是我的失误。
If we skipped your item and you were here, that's my fault.
你想快速做个自我介绍吗?之后我会介绍议题,然后由你来主导。
Do you want to introduce yourself really quick, and then I'll introduce the item and you can run with it?
大家好。
Hi, everyone.
我在网上用moonsettler这个昵称,目前主要致力于推动比特币的LNhance提案。
I go by the nick moonsettler, and I'm mainly trying to move the LNhance proposal forward nowadays on Bitcoin.
我对整个脚本恢复和varops的方向也非常感兴趣。
I'm also very much interested in the whole script restoration, varops thing.
我认为这对比特币来说是个好方向。
I think it's a good direction for Bitcoin.
我不确定我们是否准备好迈出那一步。
I'm not sure if we are ready to take that leap.
我对人们的反应有点悲观,但我非常非常喜欢这个方向。
I'm a bit pessimistic on on people's reactions, but I very, very much like it.
你正在投入时间和精力来尝试改进比特币,这正是我们要谈到上周共识变更环节中LNhance软分叉议题的原因。
Well, you're taking your time and energy and putting it towards trying to improve Bitcoin, which is what brings us to the LNhance soft fork item from the changing consensus segment this past week.
你在比特币开发邮件列表中提出了LNhance软分叉提案。
You proposed, on the Bitcoin-Dev mailing list, a soft fork for LNhance.
我想我们中有些人已经听说过LNhance,但你在列表中发布了将这四个已有BIP和参考实现的组成操作码聚合起来的方案。
I think some of us have been hearing about LNhance, but you posted to the list to aggregate these four constituent opcodes that have BIPs and reference implementations.
具体来说,这个捆绑方案包含四个操作码:OP_CHECKTEMPLATEVERIFY、OP_CHECKSIGFROMSTACK、OP_INTERNALKEY和OP_PAIRCOMMIT。
Going down the list, we have OP_CHECKTEMPLATEVERIFY, OP_CHECKSIGFROMSTACK, OP_INTERNALKEY, and OP_PAIRCOMMIT as the four different opcodes that roll into this bundle called LNhance.
Moon,我让你自由发挥,但也许你可以帮听众理解作为终端用户他们能获得什么。
Moon, I'll let you take it wherever you want, but maybe you can help listeners understand what as an end user they might get.
通过这些操作码我们能实现什么?
What could we achieve with these opcodes?
好的。
Okay.
就像我提到的,我是脚本恢复的超级粉丝,所以这并不是我心目中比特币的最终目标。
So like I mentioned, I'm a really big fan of script restoration, so this is not, like, the end goal that I have in mind for Bitcoin.
并不是说我们激活LNhance之后就固化。
Not saying that we activate LNhance and then ossify.
LNhance的定位是一个极其保守的软分叉,它实际上并不会从根本上大幅改变比特币。
LNhance is meant to be, like, a super ultra-conservative fork that does not really change Bitcoin fundamentally all that much.
它能让闪电网络及与之相关的二层工具变得更好、更具扩展性、更易使用、更灵活,基本上是为开发者提供更好的工具来构建类似Ark的契约池和闪电网络相关的通道工厂结构。
It sort of makes Lightning and Lightning-adjacent layer tools better, more scalable, easier to work with, more flexible: basically, better tools for developers to make Ark-like covenant pools and Lightning-related channel factory constructs.
这就是本提案的范围。
So that is the the scope of this proposal.
它试图在我们决定如何处理脚本的过渡期内做到这一点,因为脚本的发展方向有很多可能性。
It tries to do just that in the interim period while we figure out what to do with script because we can go in a lot of directions with script.
我们可以选择简洁化方向。
We can go the simplicity direction.
我们可以恢复脚本功能。
We can restore script.
这些方案都有各自的权衡取舍,最终界限划在哪里。
They all have various trade-offs; where exactly we will draw the line,
我也不确定。
I don't know.
按照比特币的发展节奏,最坏情况下这可能需要很多很多年才能确定。
As things move in Bitcoin, this can take many, many years to figure out, worst case.
或者绝对最坏的情况是毫无进展,维持现状。
Or absolutely worst case, nothing happens, and and we stay as we are.
我个人认为这四个操作码的影响范围非常有限,二阶和三阶效应也很小,我们现在就应该能够就它们达成共识。
I personally believe that these four opcodes are so limited in impact, in second- and third-order effects and stuff like that, that we should be able to have consensus on them right now.
我们应当有能力就此达成共识。
We should have the capacity to have consensus on them.
这就是我推动这项特定提案的原因,我认为按现状激活这些内容是切实可行的。
And that is why I'm pushing this particular proposal because I believe it's realistic to activate this as they are.
你提到了对Ark的改进。
You mentioned improvements to Ark.
我不确定你是否提到过,但是否存在LN对称性改进或相关工作的推进?
Maybe you mentioned it, but are there LN symmetry improvements, or working towards that, as well?
对吧?
Right?
我们是否能实现LN对称性/L2?
Do we get LN symmetry slash L2?
是的。
Yes.
是的。
Yes.
我们绝对能实现闪电对称性。
We absolutely get Lightning symmetry.
我认为'闪电对称性'比'LN对称性'更贴切的原因是,这与网络无关。
The reason why I think lightning symmetry is a better name for it than LN symmetry is that there is nothing network about it.
这项改进针对的是两个节点之间的通道构建。
Like, this improves a channel construction between two peers.
它不会影响网络的结构。
It does not affect how the network is structured.
另外,它使PTLCs更可行,或者说成为可能。
Also, it makes PTLCs more possible, or possible at all.
因此这可能对网络形态产生更大影响,但对称通道本身仅存在于两个节点之间。
So that can have a bigger effect on how the network is formed, but the symmetry channel itself is just between two peers.
这是他们之间的私事。
That is their private matter.
它实际上并不影响网络。
It does not really affect the network.
它能够转发相同的HTLCs。
It can relay the same HTLCs.
没有人需要知道这是一个对称通道。
Nobody needs to know that this is a symmetry channel.
这就是我的观点。
That's my point.
然而,对称通道具有极强的可组合性,因为它们不依赖于交易ID。
However, symmetry channels are extremely composable, because they don't rely on transaction IDs.
我不确定如何准确表达,但基本上你不需要预测任何最终会上链的UTXO或虚拟UTXO的交易ID,这个结构就能运作。
I'm not sure how to phrase this properly, but basically, you don't have to predict the transaction ID of any UTXO or virtual UTXO that will eventually hit the chain for this construct to work.
因此它们的工作方式让你拥有更大的灵活性。
So the way they work is, you have much more flexibility.
你甚至可以将它们叠加在任何契约结构之上,或者相互嵌套。
You can even stack them on top of any covenant construct, or even into each other.
所以基本上,你可以拥有类似状态链的结构,能够衍生出闪电通道之类的功能。
So basically, you can have statechain-like constructs that can, you know, spawn Lightning channels and stuff like that.
这种灵活性让你无需预测交易ID。
And you have this flexibility that you don't need to predict the transaction IDs for it.
就像这样,它是可行的。
Like, it it works.
这就是对称通道的全部意义所在。
That's the whole point of the symmetry channel.
所以我认为它将成为一种基础的构建模块,其重要性远不止于两人之间的闪电对称通道。
So I think it's going to be, like, a basic construction building block that is going to be way more significant than just a lightning symmetry channel between two people.
好的。
Okay.
我有个简短的问题。
One, one short question.
你刚才是说LNhance支持PTLCs,还是LN对称性支持?
You said that... did you mean LNhance enables PTLCs, or LN symmetry enables them?
据我所知,LNhance应该能支持PTLCs。
LNhance should enable PTLCs, as far as I know.
基本上,就是
Basically, it's
我不太清楚其中的关系,这个关系对我来说不太明确。
not clear what the relationship is; the relationship is not clear to me.
PTLCs难道不应该由Taproot通道来启用吗?
Shouldn't PTLCs be enabled by Taproot channel?
我也不确定。
I'm not sure either.
我出于某些原因听说实现这个非常复杂,但如果通过Stack实现CheckSig就会容易得多,这是我听说的,我个人没有深入研究PTLCs,所以可能
For some reason, I heard that it's very complicated to do that, but if you have CheckSigFromStack, it's much easier. That's what I heard; I did not dig into PTLCs personally, so maybe the
额外的信息是:CheckSigFromStack与它有积极的交互作用。我明白了,好的,是的。
information is that CheckSigFromStack interacts positively with it. I see. Okay, yeah.
这就是我听到的说法。
That that's what I heard.
是的。
Yeah.
但我主要关注的是这个基础构建模块,我想添加它,同时结合CTV和堆栈校验(CheckSig from Stack),可以实现许多有趣的功能。
But my main focus is really this basic construction building block that I want to add, and together with CTV and CheckSigFromStack, you can do a lot of interesting things.
比如,我们可以构建非交互式通道,你基本上可以让密钥保持冷存储,并在消费时重新平衡你的热端余额。
Like, we could do noninteractive channels where you can just keep your keys cold, basically, and rebalance your hot balance as you spend from it.
你可以随时增加额度,只需通过硬件钱包签名等操作即可完成。
You can just increase it, and you can do this operation, let's say, with a hardware wallet signing and stuff like that.
从技术上讲,没有这些构建也能实现这些功能,但总会遇到诸如备份机制、备份文件大小和复杂度等问题,没有契约的情况下这些方面总是更糟糕。
Technically, you can do these things without these constructions, but you always run into stuff like the backup mechanics, and the size and complexity of the backups you have are always worse without covenants.
所以我们想要契约而不仅仅是预签名交易的主要原因是:我们希望拥有尽可能静态的良好静态备份,并希望为闪电网络提供O(1)的备份状态。
So the main reason why we want to do covenants, and not just pre-signed transactions, is that we want to have good static backups, as static as possible, and we want to have, like, an O(1) backup state for Lightning.
因此我们基本上希望在这方面提供更好的用户体验,为人们提供更安全的东西,同时对比特币网络也更高效。
So we want to have, basically, better UX on that front, something safer for people, and something more efficient for the Bitcoin network.
很多时候你需要在两个人之间建立某种形式的契约关系,我认为将契约框架化为一种无法违背的承诺是个好方法。
There are a lot of cases where you want some form of covenant between two people, and I think it's a good way to frame a covenant as a promise that you cannot break.
所以如果双方之间存在一个2/2的多重签名,他们当然可以合作改变结果,但任何一方都不能单方面改变协议。
So if there is a two-of-two multisig between two parties, they can, of course, collaborate to change the outcome, but neither of them can unilaterally change the deal.
所以这可以说是一种契约。
So that is sort of a covenant.
CTV则要简单得多。
CTV is much simpler.
它不需要签名。
There is no signature.
你只需要计算出预期输出的哈希值,就能完成承诺。
You basically just calculate the hash of the outputs that you want to have, and you can commit to it.
这样就实现了非交互性。
And that makes it non interactive.
不需要双方合作。
You don't need two people to cooperate.
不需要那些共同签名的交互操作。
You don't need all that co signing interactivity.
你只需要明确承诺某些条件,另一方看到后就能确认:'如果满足这些条件,我就会收到款项'。
You basically just commit to something that the other party can look at and say: okay, I see that if this and this condition is fulfilled, then I get paid.
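作为示意(假设性的简化草图,并非BIP 119的精确序列化格式;真实的哈希还会覆盖输入数量、序列号和scriptSig等字段),CTV风格的模板哈希大致就是对承诺的交易形态做哈希:
As a rough sketch of the idea — a hypothetical simplification, not the exact BIP 119 serialization; the real hash also covers input count, sequences, and scriptSigs — a CTV-style template hash just hashes the promised transaction shape:

```python
import hashlib
import struct

def template_hash(version: int, locktime: int, outputs: list[tuple[int, bytes]]) -> bytes:
    """Simplified CTV-style hash committing to the transaction shape.
    NOT the real BIP 119 DefaultCheckTemplateVerifyHash -- illustrative only."""
    ser = struct.pack("<I", version) + struct.pack("<I", locktime)
    ser += struct.pack("<I", len(outputs))
    for value_sats, script_pubkey in outputs:
        ser += struct.pack("<q", value_sats)                    # output amount
        ser += struct.pack("<B", len(script_pubkey)) + script_pubkey
    return hashlib.sha256(ser).digest()

# The promised output: 50,000 sats to a hypothetical P2WPKH-style script.
outputs = [(50_000, bytes.fromhex("0014" + "ab" * 20))]
h = template_hash(version=2, locktime=0, outputs=outputs)
```

Anyone holding `h` can check a proposed spend against it with no signature and no second party, which is what makes the commitment non-interactive.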
因此我们认为这些简单而强大的基础功能足以支持一次软分叉,因为你总是需要评估收益是否值得推动这一流程。
So we believe these are simple and powerful enough primitives to warrant a soft fork, because you always have to check whether the gains are enough to warrant going through the process.
我们认为它能大幅改进许多方面。
We think it improves a lot of things.
这些功能已经足够,并且非常安全且受限。
They're enough, and they are very safe and limited.
这就是它的核心理念。
That's the idea of it.
Moon,你如何看待OP_TEMPLATEHASH,或者说OP_TEMPLATEHASH下的那一组方案?其中同样有CheckSig from Stack和内部密钥,还有一个Taproot原生的CTV变体叫OP_TEMPLATEHASH,但没有配对提交。
Moon, how do you think about OP_TEMPLATEHASH, or the collection under OP_TEMPLATEHASH, where you also have CheckSig from Stack and internal key, plus a Taproot-native variant of CTV called OP_TEMPLATEHASH — and then you don't have the pair commit?
你能否为听众具体对比一下:在你目前讲到的内容里,哪些在那个变体下无法实现?然后我们听听你的看法。
Could you maybe contrast for listeners, tangibly, which of the things you've said so far wouldn't happen with that variant, and then we can get your take on it?
好的。
Okay.
所以这个提案同样完全能够实现闪电对称性,以及我们讨论过的许多功能。
So this proposal is also perfectly capable of doing Lightning symmetry and a lot of the stuff that we talked about.
我见过一些需要配对提交(pair commit)的合约构造提案,但我们讨论的大部分内容并不特别需要它。
I have seen contract construction proposals that need a pair commit, but most of the stuff we talked about does not specifically need pair commit.
配对提交只是一种非常简单、低成本且安全的方式,用于承诺两个堆栈元素——这是目前比特币脚本所不具备的能力。
Pair commit is just a very simple, cost-effective, safe way to commit to two stack elements, which we currently cannot do in Bitcoin script.
当然,我们可以对单个堆栈元素进行哈希处理,但目前没有方法能不可变地承诺两个堆栈元素——这里的不可变指的是无法找到哈希碰撞。
We can, of course, hash a single stack element, but we do not have something that can commit immutably to two stack elements — immutably in the sense that you cannot find a hash collision.
对吧?
Right?
所以这在理论上并非完全不可能,但实际中我们认为它不太可行。
So that's not theoretically impossible, but practically we consider it not really feasible.
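这里说的歧义可以用代码直观展示:直接拼接两个堆栈元素再哈希会丢失元素边界,而加上长度前缀(仅作示意,并非OP_PAIRCOMMIT的实际序列化方式)就能消除歧义:
The ambiguity being described can be shown concretely: hashing a plain concatenation of two stack elements loses the boundary between them, while a length-prefixed encoding (illustrative only; the actual OP_PAIRCOMMIT serialization differs) removes it:

```python
import hashlib

def naive_commit(a: bytes, b: bytes) -> bytes:
    # Plain concatenation: the element boundary is lost.
    return hashlib.sha256(a + b).digest()

def prefixed_commit(a: bytes, b: bytes) -> bytes:
    # Length-prefixing each element makes the pair unambiguous.
    ser = len(a).to_bytes(8, "little") + a + len(b).to_bytes(8, "little") + b
    return hashlib.sha256(ser).digest()

# Two different pairs collide under naive concatenation...
collision = naive_commit(b"ab", b"c") == naive_commit(b"a", b"bc")
# ...but are distinguished once lengths are committed.
distinct = prefixed_commit(b"ab", b"c") != prefixed_commit(b"a", b"bc")
```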
而配对提交在闪电对称性方面有一个非常具体的用途。
And pair commit has a very specific purpose in regards to Lightning symmetry.
正如我所说,我们希望实现O(1)备份。
Like I said, we want to have the O(1) backup.
使得通道双方只需保留各自最新的状态作为备份。
So that the channel peers always only have to hold on to their latest state as a backup.
他们不需要保存所有签署过的数百或数千个状态。
They don't have to hold on to all the hundreds or thousands of states signed.
这是一种更安全、风险更低的操作方式。
This is a much safer, less risky operation.
另一个降低风险的因素是你不会受到惩罚。
The other part of being less risky is that you do not have a penalty.
所以如果你搞砸了任何事情,比如不小心推送了旧状态——可能是因为从备份恢复、虚拟机还原等原因——当你把旧状态推送到链上时出现分歧,通信中断,你推送了错误状态也不会受到惩罚。
So if you screw up anything — if you accidentally push an old state because you restored from a backup, you restored a virtual machine, whatever — you push an old state to chain, there is a disagreement, communication breaks down, and you do not get punished for pushing it.
对方会自动花费到最新状态。
The other party will just spend to the latest state automatically.
这才是你预期的结果。
That's like your expected outcome.
而且,如果在闪电网络状态机的任何时刻,出现合作破裂或HTLC接近超时等情况,你都可以直接把当前最新状态推送到链上。
And also, if at any point in the Lightning state machine you have a situation where cooperation breaks down or an HTLC approaches a timeout or whatever, you can just push whatever latest state you have to chain.
你不会因此损失所有资金。
You will not lose all your money.
这就是为什么我说它确实为闪电网络的运营降低了风险,而这是其中的另一部分。
That is why I say it really de-risks operating Lightning, and this is the other part of it.
当然,这实际上让开发者的事情变得更简单了。
Of course, it makes the whole thing actually simpler for developers.
我知道我们已经有一个让开发者们为这种复杂性和危险性而苦苦挣扎的闪电网络,说'我们本可以做得更好'有点奇怪,但我认为我们会从中获得足够的收益,让这成为一个合理的举措。
I know that we already have a Lightning network whose developers have struggled with this complexity and danger, and it's a bit weird to say, okay, but we could have done this a lot better — but I think we will get enough benefit out of this for it to be a reasonable move.
所以配对提交与此相关。
So pair commit is related to this.
因此,为了拥有O(1)备份:如果有人因为没有最新备份或试图作弊而把一个中间状态推送上链,我们无从得知是哪种情况。
So, to have the O(1) backup: if someone pushes an intermediate state to the chain — because they don't have the latest backup, or they are trying to cheat — we don't know which.
对吧?
Right?
他们把旧状态推送到链上。
They push an old state to chain.
我们只需要最新状态来消费它,并且需要能够重建那个脚本。
All we need is the latest state to spend to it, and we need to be able to reconstruct that script.
要重构脚本,我们需要知道该状态的结算交易形态是怎样的。
And to reconstruct the script, we would need to know the shape of the settlement transaction for that state.
但我们没有备份。
But we don't have the backup.
我们只有最新的备份。
We only have the latest backup.
所以我们要做的是,强制将中间状态推送到链上的那一方也提供这些信息。
So what we want to do is we want to force the party that pushes an intermediate state on chain to also provide this information.
因此他需要在见证数据中提供所有内容,这样我们才能重构交易的脚本,然后才能花费到最新状态。
So they need to provide everything in the witness that we can use to reconstruct the script of the transaction, and then we can spend to the latest state.
我不确定是否解释得足够清楚,但这就是配对提交的全部意义。
I'm not sure if I explained it clearly enough, but that's the whole point of pair commit.
当然,现在——请讲,是吗?
Now, of course — go ahead, yes?
哦,我以为你要做总结了。
Oh, I thought you were going to wrap up.
我刚想提一下:Moon,你和Reardon在第330期播客里讨论过LNhance
I was just going to plug that Moon, you and Reardon were on podcast three thirty when we talked about the update to the LNhance
提案的更新,那大约是一年前,专门谈到了把这个操作码加入LNhance软件包的事。
proposal just about a year ago, talking about this specific opcode addition to the LNhance bundle.
如果有人好奇,也可以回听那期播客快速了解情况。
If people are curious, they can also jump back to that to get up to speed as well.
很好。
Great.
好的。
Okay.
那么我主要想说明,我们并非绝对需要这个功能。
So then I will mostly, just mention that we don't absolutely need to have this.
所以如果没有配对提交——只有CTV、CheckSig from Stack和内部密钥——那你就得把这些数据放进OP_RETURN里。
So if you don't have pair commit — if you only have CTV, CheckSig from Stack, and internal key — then what you would do is put this data into an OP_RETURN.
对吧?
Right?
因此中间状态看起来会是一笔带有OP_RETURN的支出,数据是可得的。
So the intermediate state would look like a spend that has an OP_RETURN; the data is available.
OP_RETURN数据当然比见证数据贵四倍。
OP_RETURN data is of course four times more expensive than witness data.
所以这是一种低效做法。
So that's an inefficiency.
如果这样做,每个人在区块中的可用空间都会减少。
Like everyone has less space in the block if we do this.
我们正努力追求效率,这就是动机所在。
We are trying to be efficient, so that is a motivation.
因此我们希望将这些恢复数据放在见证中。
So we want to have this recovery data in the witness.
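上面'贵四倍'的说法来自隔离见证的权重记账:非见证字节每字节计4个权重单位,见证字节计1个。简单示意如下:
The "four times more expensive" figure follows from segwit weight accounting: non-witness bytes count 4 weight units each, witness bytes count 1. A quick sketch:

```python
def vbytes(base_size: int, witness_size: int) -> float:
    """Virtual size under segwit (BIP 141): non-witness bytes weigh
    4 weight units each, witness bytes weigh 1; vbytes = weight / 4."""
    return (4 * base_size + witness_size) / 4

# 80 bytes of recovery data in an OP_RETURN output (non-witness data)...
op_return_cost = vbytes(80, 0)
# ...versus the same 80 bytes carried in the witness.
witness_cost = vbytes(0, 80)
```

This ignores fixed per-output overhead for simplicity; the point is the 4x ratio on the payload itself.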
另一种方法是模板哈希方案:由于他们借助Taproot的语义来计算这个类似sighash的哈希,它会对附件(annex)进行承诺。
And the other approach to this is the template hash approach, which makes it possible — because of the way they compute this sighash-style hash using the Taproot semantics — for it to commit to the annex.
如果中继非空附件,你就可以把这些数据放进附件里,而附件享受见证折扣。
And if non-empty annexes are relayed, you can put this data into the annex, and that gets the witness discount.
当然,你也可以把OP_RETURN与模板哈希结合使用,因为它对所有输出都进行了承诺。
Of course, you can also use OP_RETURN with template hash because it commits to all the outputs.
所以它基本上就像CTV,但CTV不对附件进行承诺,而模板哈希会对附件进行承诺。
So it's basically like CTV, but CTV does not commit to the annex, while template hash commits to the annex.
如果要解释这些提案之间最大的区别,我认为就是这一点。
If I had to explain the the the greatest difference between the proposals is this.
我认为存在一种可能:如果我们开始中继非空附件,可能会引发新一轮外生资产炒作周期,因为那里差不多是存放任何外生资产负载的理想位置。
And I think there is a possibility that if we start relaying non-empty annexes, then we will see another exogenous asset hype cycle, because that's kind of the ideal place for any exogenous asset payloads.
就像我说的,它能享受见证折扣。
Like I said, it gets the witness discount.
它承载数据,诸如此类。
It carries data, and so on and so on.
我不想过多赘述这一点。
I don't want to harp too much on it.
我已经向大家表达了我的看法,我认为存在风险,但这并非技术性或首要风险。
I already told everyone what I think about this: I think there is a risk, but it's not a technical risk and not a first-order risk.
那么你的论点会是,配对承诺(Pair Commit)能让人们在丢失所有备份数据的通道关闭情况下更高效地进行恢复。
So your argument would be that Pair Commit makes it more efficient to help people recover from a channel closure where they lost all of their backup data.
那么模板哈希——我不记得他们具体怎么称呼——加上CheckSig from Stack和内部密钥呢?
What about the template hash — I don't remember what they called it — but template hash plus CheckSig from Stack and internal key?
你会说,如果再加上配对提交,它会得到显著增强吗?
Would you say that it gets significantly enhanced if pair commit got added to it?
不,不会有显著增强。
No, it does not get enhanced significantly.
实际上我要说的是:由于CheckTemplateVerify的工作方式,你必须把那个用来核对和验证的模板哈希放到堆栈上。
What I would actually say is: with CheckTemplateVerify, because of the way it works, you sort of have to have that template hash that you check and verify against on the stack.
你必须以某种方式包含这个数据,要么作为签名数据的一部分,要么作为见证参数。
You have to include this somehow either as part of the data signed or as a witness parameter.
对吧?
Right?
所以你必须在堆栈上拥有模板哈希数据。
So you have to have the the template hash data on the stack.
这意味着你必须把这部分算进你的成本里。
That means you have to add this to your cost.
现在有了模板哈希,你能做到这一点。
Now, with template hash, you can do that.
你计算这个模板哈希并通过单字节操作码将其置于栈上,然后可以验证签名。
You calculate this template hash and have it on the stack with a single opcode operation, single byte cost, and then you can check a signature against it.
这意味着,原本33或34字节的开销现在只需1字节。
That means that instead of using 33 or 34 bytes, you can use one byte.
因此从技术上讲,模板哈希使得更高效的闪电对称实现成为可能。
So technically, template hash makes an even more efficient Lightning symmetry implementation possible.
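上面的字节账可以简单算一下:CTV风格的脚本要把32字节的模板哈希压入堆栈,而OP_TEMPLATEHASH用一个操作码就能算出它(以下是示意性的脚本字节计数,并非精确的共识规则):
The byte arithmetic the speaker describes can be sketched: a CTV-style script pushes the 32-byte template hash, while OP_TEMPLATEHASH computes it with a single opcode (these are illustrative script-byte counts, not exact consensus accounting):

```python
# CTV-style tapscript leaf: <32-byte hash> OP_CHECKTEMPLATEVERIFY
push_opcode = 1                  # OP_PUSHBYTES_32
hash_bytes = 32                  # the committed template hash itself
verify_opcode = 1                # the opcode byte
ctv_leaf = push_opcode + hash_bytes + verify_opcode   # 34 bytes

# TEMPLATEHASH-style: the opcode computes the hash onto the stack itself.
templatehash_leaf = 1            # single opcode byte
savings = ctv_leaf - templatehash_leaf
```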
所以如果你只关注闪电网络对称性和契约池结构或板块结构,并且重视模板哈希实现的简洁性。
So if you are only interested in Lightning symmetry and the covenant pool or plank constructions, and you value the simplicity of the template hash implementation —
比如,你非常看重这种简洁性,以及它仅限tapscript这一事实——此前也有人对使用可升级操作码之类的做法并不满意。
Like, you put a large value on that simplicity and the fact that it's tapscript-only — there have been people who were not happy about using an upgradable opcode and stuff like that.
那么你会认为这是个绝对更优的方案。
Then you would think that this is a strictly better proposal.
从某种角度来看,它确实更胜一筹。
From a certain perspective, it is strictly better.
正如我所说,我们正试图权衡多种风险。
Like I said, we are trying to consider a lot of different risks.
当我们提出这些后续代码时,也在考虑社会层面的潜在风险。
When we propose these follow-up codes, we are trying to consider like social trauma risks as well.
从这个角度看,我认为LNhance更优。
And from that perspective, I think LNhance is better.
但从纯技术角度而言,我认为模板哈希、CheckSigFromStack和内部密钥是非常非常扎实的提案。
But from a purely technical perspective, I think template hash, CheckSigFromStack, and internal key are a very, very good, solid proposal.
所以在技术层面上我很欣赏这个方案。
So I like it on a technical level.
我认为它可能会在社会层面引发令人遗憾的——该怎么形容——连锁反应?
I think it is going to have regrettable — how do you say this — fallout in the social space.
如果我们选择这个方向,这是我预测的结果。
If we go in that direction, that's what I predict.
但是,如果——再次强调如果——我认为我们将进行脚本恢复,那么我们就是在走GSR路线。
But if — and again, if — I think that we are going to have script restoration, that we are going in the GSR direction.
这是整个问题的另一个方面。
This is another aspect to all this.
那样的话我们就会拥有CAT操作码。
Then we are going to have OP_CAT.
所以任何配对提交、向量提交提案都可以通过CAT来实现。
So any pair commit or vector commit proposal can be implemented with OP_CAT.
虽然配对提交仍是更简单、更安全的实现方式,但你可以用多个操作码来模拟配对提交的功能,OP_CAT就是其中之一。
Pair commit is still a simpler and safer way to do it, but you can use multiple opcodes to basically emulate what pair commit does, and OP_CAT is one of them.
当然,仅仅加入OP_CAT,就会在表达能力上突破一个非常非常显著的门槛。
Of course, you break through a very, very significant threshold in expressivity if you just add OP_CAT.
所以如果我百分百确定我们会恢复脚本功能,那么我会说可能没必要专门添加一个操作码——这个操作码当初被选中就是因为我们以为无法就CAT达成共识。
So if I were 100% sure that we are going to have script restored, then I would say it might make sense not to add an opcode that was specifically selected because we thought we couldn't get consensus on OP_CAT.
所以这将是另一个,你知道的,另一个考量
So that would be another, you know, another
还有其他考虑吗?
Another consideration?
好的。
Alright.
是的。
Yeah.
另一个方面,为什么你会偏好模板哈希。
Another aspect of this — why you would prefer template hash.
对。
Yes.
但如果我们认为未来有些不确定,而我们现在就想改进,同时社区已经有些受创,我们想尽量减少这类外生资产代币相关创伤的可能性,那么我认为此时此刻LNhance是更好的选择。
But if we say that the future is a bit uncertain and we want to improve things now, and the community is sort of traumatized already and we want to minimize the potential for these exogenous asset token related traumas, then I believe LNhance is the better choice right here, right now.
啊,好的。
Ah, okay.
明白了。
I see.
因为你认为:如果用模板哈希来实现你所说的"闪电对称"这种改进的备份版本,就会想用附件(annex)来存储数据,而人们会借附件进行见证数据填充,那当然会失控;所以LNhance,
Because you think that if template hash came, then for the improved backup version with the Lightning symmetry, as you call it, you would want to use the annex for data storage, and that would mean people using the annex for witness stuffing, and that would blow up, of course — so LNhance,
凭借配对提交,拥有一种不会额外允许更多数据填充的见证结构,因而在社会层面更安全。
by having a pair commit, would have a witness construction that doesn't specifically allow more data stuffing, and that way it's socially safer.
这是你的论点吗?
Is that your argument?
是的。
Yes.
是的。
Yes.
再比如,如果你有配对提交加上模板哈希,而模板哈希只承诺附件存在与否——不承诺附件数据,只承诺附件是否存在。
And then again, if you had pair commit with template hash, and template hash, let's say, only committed to the annex being there or not being there — not the annex data, just annex present or not present.
那种情况下,我甚至可能认为该提案优于当前采用CheckTemplateVerify的LNhance提案。
In that situation, I might even consider that proposal superior to the current LNhance proposal with CheckTemplateVerify.
不过我们也喜欢检查模板验证(CTV),因为它能作为裸CTV使用。
Although we also like CheckTemplate Verify for its ability to be used as a bare CTV.
你知道,你可以在传统脚本中使用它,而不需要整个见证机制和其他所有东西。
You know, you can use it in legacy script without the whole witness thing and and everything.
理论上我们喜欢这样,但实际上,似乎总是你真正需要的是Taproot。
We we like that in theory, but in practice, it always seems to be that you really want taproot.
只要密码学没有被攻破,你确实希望密钥路径(key path)可用于任何协作性的变更和优化。
You really want to have — so long as it's not broken — the key path available for any cooperative change and optimization.
比如,你可以完全跳过脚本化的tapscript处理,只做一个单一签名——我是说,它可能是基于MuSig的协作签名,但看起来仍像单一签名,而且只向链上披露最少的信息。
Like, you can just skip the whole scripty tapscript handling and just do a single signature — I mean, it's probably a collaborative MuSig-based signature, but it still looks like a single sig and reveals minimal information about what happened on chain.
因此,这始终是合作解决问题的优选方式。
So that is a preferable way to resolve things cooperatively always.
只要密码学不被攻破,这种高效且内置于所有协议中的能力就能有效简化脚本执行流程。
So long, again, as we do not have this cryptography broken, it is very efficient — and also sort of baked into all of these protocols — to have that ability to shortcut the script execution.
从这个角度看,它并非极具价值;但从另一个角度,我们确实欣赏CTV能存在于tapscript之外。
So from that regard it's not super valuable, but from another perspective, we actually like that CTV can exist outside of tapscript.
所以这就像抛硬币,结果可能因个人偏好而异。
So it's like a coin toss — it can go either way depending on personal preferences.
本周我们还有一项共识变更议题。
And we have one more changing consensus item this week.
SLH-DSA(SPHINCS+)后量子签名优化。
SLH-DSA (SPHINCS+) post-quantum signature optimizations.
我们没能请到Conduition,所以我将尽力来讲讲这些后量子签名优化。
We were unable to get Conduition on, so I'm going to do my best to talk about the post-quantum signature optimizations.
如果我犯了错误,Murch可以插话纠正。
Murch can chime in if I make a mistake.
另外,Conduition那边也有些很棒的博客素材。
Otherwise, Conduition has got some great blog material as well.
是的,Conduition在比特币开发者邮件列表上发布了关于优化Sphinx签名算法的内容。
So yeah, Conduition posted to the Bitcoin-Dev mailing list about optimizing the SPHINCS+ signing algorithm.
我了解到SPHINCS+现在被称为SLH-DSA。
I found out that SPHINCS+ is now called SLH-DSA.
Murch已经在向我挥手了。
Murch is already waving at me.
怎么了,Murch?
What's that, Murch?
不,抱歉,我只是在看你之前在聊天里写的内容。我认为SPHINCS+是昵称,而SLH-DSA——基于哈希的无状态数字签名算法(stateless hash-based digital signature algorithm)——是技术性更强的名称。不过我不确定,也许你是对的,我还没研究清楚这些名称的先后关系。
No, sorry, I was just looking at what you had written in chat earlier, but I think SPHINCS+ is the nickname and SLH-DSA — stateless hash-based digital signature algorithm — is the much more technical name. But I don't know, maybe you're right; I haven't researched the order of things.
这是一种抗量子签名方案(或算法),一直是BIP 360提案以及其他比特币抗量子思路的候选方案之一。
This has been a quantum-resistant signature scheme, or algorithm, that's been a candidate for the BIP 360 proposal and other ideas around quantum resistance for Bitcoin.
Conduition对该算法进行了一系列性能优化,他认为自己现在可能拥有,引用,'可能是目前公开可用的SLH DSA最快实现,至少在我的硬件上是这样,也可能是GPU实现中最快的之一'。
Conduition has made a bunch of performance improvements to the algorithm, such that he believes he may now possess, quote, "what may be the fastest publicly available implementation of SLH-DSA, at least on my hardware, and possibly also one of the fastest GPU implementations."
Moon,我看到你举手了。
Moon, I see your hand up.
Moon,你静音了。
Moon, you're muted.
我想就此补充一点:严格从技术上讲,比特币通常并不需要SPHINCS+这样的方案。
So I just wanted to chime in on that: strictly, technically, you generally don't want something like SPHINCS+ for Bitcoin.
如果你的目标只是每次花费一个UTXO用一个签名,那SPHINCS+实在是大材小用。
Like, if your goal is just to have one signature per spend of a UTXO, then SPHINCS+ is way, way overkill.
比如,我们可以使用哈希算法实现更高效的后量子签名方案。
Like, we can have much more efficient post-quantum signature schemes using hashes.
SPHINCS+更适合非比特币的场景,适合那些需要多次、甚至大量签名的环境。
SPHINCS+ is more suited to non-Bitcoin contexts, to contexts where you have to sign multiple times, even many times.
但这是有代价的。
But this comes at a cost.
对吧?
Right?
所以基本上,比特币开发者试图在典型使用场景与使用通用方案的成本之间取得平衡——这些方案并非专为比特币设计。
So basically, what the Bitcoin developers are trying to do is balance the typical use cases in Bitcoin against the cost of using something very general that was not really developed with Bitcoin in mind.
关键之处似乎是FORS——随机子集森林——这种一次性签名机制:你基本上是想让它无状态,因为这些基于哈希的签名每个只能用一次。
And the key piece seems to be this FORS thing — the forest of random subsets of one-time-use signatures — where you are basically trying to make it stateless, because you can only use each of these hash-based signatures one time.
如果不想在签名设备中维护状态(这很麻烦),
So if you don't want to have to maintain a state in a signing device, that's a pain in the ass.
就必须提取交易摘要作为随机源,并找到要签名的随机树分支。
Then you have to take a digest of the transaction, use that as a source of randomness, and find the random tree branch that you want to sign with.
所以这大致就是他们的研究方向,如果我没理解错的话。
So that is kind of what they researched, if I understand it correctly.
然后,嗯,是的。
And, and, yeah.
我必须简短地表示不同意。
I have to disagree briefly.
我很希望我们能让UTXO只能单次签名,那样就太好了。
I would love it if we could make UTXOs — like, if you could only sign a single time per UTXO, that would be great.
但我们不能那样做。
But we cannot do that.
那样显然会立即消除地址重用问题,这将会非常棒。
Like, that would obviously get rid of address reuse immediately, which would be brilliant.
但如果你创建了一笔交易,比如因为第一次手续费太低而需要追加手续费的情况呢?
But what if you create a transaction that, for example, doesn't get confirmed — like, if you have to bump a transaction because it was too low fee the first time, right?
所以总会遇到需要重新发起相同交易的情况。
So there's always situations in which you have to reissue the same transaction.
即便没有任何地址复用的情况,你也必须能够为比特币交易多次签名,否则很容易就能通过等待你签名后伪造签名窃取资金,从而阻止你的交易被确认。
Even if you don't have any address reuse, you have to be able to sign more than once for Bitcoin transactions, or it would be very easy to censor your transaction from being confirmed, wait until you make a signature, and then forge your signature and steal your money.
所以我们需要多次、多次的签名。
So we need many, many times signatures.
是的。
Yes.
是的。
Yes.
我同意。
I I agree.
不过,比特币通过Taproot已经实现了非平衡树的功能。
However, Bitcoin has this thing with Taproot already where you can have unbalanced trees.
比如,在SPHINCS+里就很典型。
Like, SPHINCS+ is the ideal example.
我是说,想象他们称为"超树"(hypertree)之类的那种平衡默克尔构造。
I mean, imagine the balanced Merkle construct that they call a hypertree or whatever.
我不知道。
I don't know.
基本上,比特币的等效情况是大多数时候你只需要一个签名,但你也希望有能力在必要时以更高的存储成本、带宽成本和手续费为代价,拥有多个签名作为备份方案,但你持乐观态度。
And basically, the Bitcoin equivalent would be: most of the time you want to have, like, one signature, but you want the ability to have many, many signatures — at more storage cost, more bandwidth cost, more fees — as a backup if you have to; but you are optimistic.
实际上你认为,比如说90%的情况下,你的第一个签名就能被打包进区块。
You actually think that — let's say 90-something percent of the time — your first signature will get into a block.
所以你希望这个流程能真正高效。
So you want to have that, like, really efficient.
这就是我想表达的意思:SPHINCS+并不真正适合比特币,但这项研究看起来很棒。
And that's what I meant: SPHINCS+ is not really suited for Bitcoin, but this research looks great.
我还没有深入研究,但我非常喜欢目前读到的内容。
I did not dig into it, but I really, really liked what I read so far.
这个观点很有意思。
That's an interesting point.
如果你有一棵可以从中花费的叶子树,每片叶子只能使用一次,但你仍然可以拥有许多叶子,成本只会逐渐增加。
If you had sort of a tree of possible leaves to spend from and you only get to use each leaf once, but you could still have many leaves and the cost would just gradually increase.
这实际上也能满足我们的需求。
That would actually also satisfy our requirement.
但当你比如提高交易手续费时,你的交易体积也会随之增大,这有点讽刺。
But then when you, for example, fee-bump your transaction, the transaction would also increase in size, which is kind of funny.
是啊。
Yeah.
手续费率越高,你还得把交易做得更大。
At a higher fee rate, you also have to make a bigger transaction.
而且它是有状态的。
And it's stateful.
对吧?
Right?
这正是我之前提到的。
That that's what I mentioned.
缺点是它看起来更适合比特币,但状态性会反噬你。
The the downside is that it looks more suited for Bitcoin, but the statefulness bites you back.
对。
Right.
那么回到SPHINCS+。
So back to SPHINCS+.
SPHINCS+基于改进版的Winternitz一次性签名和一种少次签名方案——FORS,随机子集森林——再加上默克尔树。
SPHINCS+ is based on a modified version of Winternitz one-time signatures and on a few-time signature scheme — FORS, the forest of random subsets — together with Merkle trees.
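为了说明这些方案底层的一次性签名思想,这里给出一个极简的Lamport签名——它是WOTS的更简单近亲,并非SLH-DSA内部实际采用的结构,但原理相同:通过揭示哈希原像来签名。
To illustrate the one-time-signature idea underlying these schemes, here is a minimal Lamport signature — a simpler cousin of WOTS, not what SLH-DSA actually uses internally, but the same principle of revealing hash preimages:

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 8):
    """Lamport one-time keypair: two random 32-byte preimages per message
    bit; the public key is the hash of every preimage."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def sign(sk, msg_bits):
    # Reveal one preimage per bit.  Signing two different messages reveals
    # enough preimages to forge -- hence strictly one-time use.
    return [sk[i][bit] for i, bit in enumerate(msg_bits)]

def verify(pk, msg_bits, sig):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits))

sk, pk = keygen()
msg_bits = [1, 0, 1, 1, 0, 0, 1, 0]
sig = sign(sk, msg_bits)
```

SPHINCS+ then authenticates many such one-time keys under a single root via Merkle trees, which is where the hypertree structure comes in.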
所以主要缺点是相比我们现在使用的方案,它们的体积非常庞大。
So the big downsides are they're enormous comparatively to what we're using these days.
我手头没有具体数据,但目前公钥需要几KB,签名至少有几百字节,我记得是800字节或更多。
I don't have it off the top of my head, but we're looking at several kilobytes for pubkeys, and signatures are currently several hundred bytes at least — I think 800 bytes or more.
我们怎么定义公钥?理论上所有东西都可以表示为哈希值,当你处理公钥脚本时,森林根也只是一个哈希值。
What do we call a pubkey? Because in theory, everything can just be represented as a hash: when you are dealing with a pubkey in a script, when you are representing a pubkey script, everything is just a hash — a forest root is just a hash.
所以我总是对此感到困惑。
So I'm always confused by this.
你可以将公钥承诺写入输出中,从而廉价地使用输出数据(或减少昂贵输出数据的使用),然后在见证中提供额外信息来证明支出权限等等——或者可能不是见证而是另一个扩展,因为现在数据量太大了,否则会大幅降低吞吐量。
You could make a commitment to a public key in the output, and thereby use the output data cheaply — or use less of the output data that is expensive — and then provide the additional information in the witness to prove that you were authorized to spend, and so forth. Or maybe not the witness but another extension, because the data is so much bigger here now; otherwise that would be a huge throughput decrease.
但具体如何实现这一点可能并不那么重要,不过总的来说,对于我们执行的每个操作、花费的每个输入,我们都需要在链上存储更多数据来授权交易。
But how you implement that specifically is maybe not that important, but generally, per operation that we're performing, per input that we're spending, we would have a lot more data on chain to authorize the transaction.
是的,我想现在我们使用的签名大约是65字节,65到80字节左右,对吧?
Yes, I think right now we have like 65 byte signatures, 65 to 80 bytes something in that range, right?
而基于哈希的最小签名是304字节。
And the smallest hash based signature was 304 bytes.
实际上我一直在研究的方案,大小在800字节到3.5千字节之间。
And, actually, what I have been playing with, ranges from 800 to 3.5 kilobytes.
这些都是非常基础的版本。
These are very primitive versions.
他们正在研究的提案其实更优秀。
The proposals that they have been looking at are actually better.
但如果你想使用无状态结构,那就必须面对几千字节的大小。
But, if you want to use the stateless structure, then it is going to be kilobytes.
这是肯定的。
That's for sure.
是的。
Yeah.
而无状态性非常重要。
And stateless is very important.
例如,在硬件设备上,我们无法访问内存池中的内容或之前已签名的内容,或者存储这些内容会非常困难。如果你使用某种Tails实现的钱包或其他不知道之前操作记录的签名者,我们绝对需要无状态签名。
For example, on hardware devices we would not have access to what's in the mempool or what has been signed before, or it would be very painful to store it. And if you, for example, use some sort of Tails-based wallet or whatever, where you don't know what you did before, or generally have signers that don't track previous interactions, we absolutely would like stateless signatures.
嗯,Conduition并不一定改进这些操作的输出结果,而更多是优化和提升执行这些特定操作的性能。
Well, Conduition is not necessarily improving the output of any of these, but rather the optimization and performance of doing these particular operations.
对于SLHDSA来说,他的性能优化方案需要数兆字节的内存和预计算,因此在硬件签名设备等资源受限环境中无法发挥作用。
So for SLH-DSA, his performance optimizations require several megabytes of RAM and some precomputation, so they wouldn't help in resource-constrained environments like hardware signing devices.
他在帖子中列出了多种提升性能的优化手段:SHA-256中间状态(midstate)缓存、XMSS树缓存——XMSS是SLH-DSA内部使用的数据结构之一。
In his post, he wrote up a bunch of different optimizations to improve performance: SHA-256 midstate caching, and XMSS tree caching — XMSS being one of the internal data structures used in SLH-DSA.
他运用CPU厂商提供的专用硬件指令做硬件加速,包括向量化哈希、并行化部分哈希运算、用多线程并行化其他操作,还利用GPU和Vulkan图形编程API做进一步并行化。
He applied hardware acceleration using dedicated hardware instructions from CPU manufacturers, vectorized hashing, parallelizing some of the hashing operations, multithreading to parallelize some of the other operations, and GPUs with the Vulkan graphics programming API to parallelize some operations further.
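中间状态缓存的思路可以用Python的hashlib直观演示:`copy()`方法会复用已经吸收了公共前缀后的压缩函数状态(这只是概念示意,并非Conduition的实现):
The midstate-caching idea can be illustrated with Python's hashlib, whose `copy()` method reuses the hash state after a shared prefix has been absorbed (a sketch of the concept, not Conduition's implementation):

```python
import hashlib

# Many hashes in SLH-DSA share a common prefix (e.g. a public seed).
shared_prefix = b"\x00" * 64  # real midstate caching wants block alignment;
                              # hashlib's copy() works regardless

# Absorb the prefix once and cache the hash state.
_midstate = hashlib.sha256(shared_prefix)

def hash_with_midstate(suffix: bytes) -> bytes:
    h = _midstate.copy()   # resume from the cached state
    h.update(suffix)
    return h.digest()

def hash_from_scratch(suffix: bytes) -> bytes:
    return hashlib.sha256(shared_prefix + suffix).digest()
```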
根据他笔记中的输出数据对比或时间对比,优化后的代码能在11毫秒内完成消息签名,而他提到的最快开源库则需要94毫秒。
And in the timing comparison in his notes, his code can sign a message in eleven milliseconds, while the fastest open-source library he notes takes ninety-four milliseconds.
那就是PQClean库。
That's the PQClean library.
他的代码能在两毫秒内生成密钥,而PQClean需要十二毫秒。我还看到根据他图表上的视觉效果,验证速度也有显著提升,虽然没有具体数字,但看起来验证速度至少快了一个数量级。
His code can generate keys in two milliseconds whereas PQClean takes twelve milliseconds, and I also saw that there was a big speedup in verification according to the visuals on his chart — I didn't see exact numbers, but it looked like at least an order of magnitude faster on verification as well.
我会
I would
我记得他在某处写道,签名验证的速度几乎与椭圆曲线签名验证一样快。
I think he wrote somewhere that the signature verification was almost as fast as the elliptic curve signature verification.
哦,实际上那是在我们的总结里
Oh, actually that's in our write up
这里。
here.
由于我们没有开启conduition,我建议大家不要只看我们的总结,他在原始帖子中引用了自己conduition.io网站上的博客文章,里面有详细说明。那篇博客还有几个月前的一篇早期文章,概述了各种基于哈希的签名方案。如果你对这类内容感兴趣,我觉得这两篇博客会很有帮助。
Because we didn't have Conduition on, I would encourage folks to not just read our summary: in his original post he references a blog post on his conduition.io website that gets into the details, and that blog also has an earlier post from a few months ago overviewing the different hash-based signature schemes. So if you're curious about these kinds of things, I think those two blog posts would be very informative.
嗯,Moon?
Yeah, Moon?
我也建议大家去看看。
I would also encourage everyone to take a look at it.
看起来组织得非常好,写得也很棒。
It looks extremely well organized, well written.
我还没时间深入研究,估计也不会去研究,但看起来确实需要投入不少时间,因为里面有很多非常复杂的内容。
I haven't had time to dig into it, I don't think I will, but it also looks like you need to really take some time because there's a lot of very complex stuff in there.
我其实很喜欢这个话题。相比那些关于过滤器和OP_RETURN的争论,这在智识上让我耳目一新得多,所以我更喜欢这个。
I actually love this; it's much more refreshing to me intellectually than the whole filter talk and all that stuff about OP_RETURN, so I like this better.
不过,好吧,
But — well —
这标准可真是够低的。
talking about a low bar here.
是啊。
Yeah.
但关于这一切我还有一点想说。
But one more thing I wanted to say about all this.
我们讨论所有这些话题,是因为未来几年可能会出现具有密码学意义的量子计算机。
So we are we are talking about all these things because of the potentially cryptographically relevant quantum computers to appear in the coming few years.
这并非确定无疑的事。
This is not a certainty.
许多人将其视为必然会发生的事情。
A lot of people are handling it as a certainty.
我们并不确定。
We don't know.
假设我们将拥有具有密码学意义的量子计算机,比特币为此升级需要付出巨大代价。
It has a huge cost for Bitcoin to upgrade assuming that we are going to have cryptographically relevant quantum computers.
而如果这种情况并未发生,那么这些代价基本上就是白费了。
And if that does not happen, then this cost was paid for nothing, basically.
这也是我认为我们应该首先考虑有状态签名方案的原因,只有在那样的未来真正到来时,才迁移到更全面的后量子解决方案。
And that is another reason why I think we should probably consider stateful signing first and only migrate to more comprehensive post quantum solutions if that future actually unfolds.
所以我支持这样的观点:我们应该先启用基本原语,让人们能够创建量子安全的输出方案,在花费时保留选择权。
So I'm on the I'm on the side that says we should just enable the basic primitives where people can use them to create quantum safe outputs where where you have optionality at spend time.
因此你可以在花费时决定:是只用一个椭圆曲线签名,还是再附上一个后量子签名。
So you can decide at spend time if you want to use an EC signature, or if you want to use an EC signature that is accompanied by a post-quantum signature.
这可以是一个不同的Tap叶节点,你知道的。
This can be a different tap leaf, you know.
你可以直接揭示一个tap叶节点,在那里你还需要提供冗长的Lamport签名,这非常低效。
You can just reveal a tap leaf where you also have to provide the long Lamport signature, which is very inefficient.
这些WOTS一次性签名——Winternitz一次性签名——在体积上高效得多,不过顺便说一句,计算量更大。
These WOTS one-time signatures, Winternitz one-time signatures, are much more efficient in size — computationally heavier, by the way.
所以Lamport签名计算量轻,而Winternitz计算量更大。
So Lamport is computationally lightweight, whereas Winternitz is computationally heavier.
因此对每个节点来说成本更高,但在数据存储和网络传播方面成本更低。
So it's a larger cost for every node, but a smaller cost in terms of data storage and network propagation.
所以你需要权衡这些因素。
So you have to balance these things out.
但我想说的是,我认为我们首先应该为用户提供在花费时选择量子抗性方式花费比特币的灵活性。
But I guess my point was that I believe we should first provide people with the optionality of a quantum-resistant way to spend their coins at spend time.
所以他们可以决定,啊,我在新闻上听说有人在某处破解了160位椭圆曲线签名,并且他们证明可以用量子计算机破解。
So they can decide: ah, I heard in the news that someone somewhere cracked a 160-bit elliptic curve signature, and they proved that they could crack it with a quantum computer.
好的。
Okay.
我现在就要花掉我的1000个或1万个比特币,从今天起附加某种抗量子算法;但如果闪电通道只有100万聪,关闭通道的人可能并不在意这个。
I am going to spend my thousand Bitcoins or ten thousand Bitcoins now, attaching some form of quantum-hard algorithm from this day on — but maybe someone closing a Lightning channel does not really care about it if the channel is, like, a million sats.
这看起来不会一下子全部发生。
That just doesn't seem — it's not going to happen all at once.
不会让所有UTXO一下子都变得不安全。
It's not going to make every UTXO vulnerable all at once.
人们有不同的观点。
People have different perspectives.
我个人认为我们应该给他们自由选择权,而且我认为在完全后量子迁移发生之前,现在就应该认真研究对比特币更友好的有状态构造方案。
I personally believe we should give them the freedom, and I think we should take a hard look at more Bitcoin-friendly stateful constructs for now, before the full post-quantum migration happens.
这只是我对此的看法。
That's just my take on this.
抱歉。
Sorry.
太棒了,感谢你的分享。
Awesome, thank you for that.
我正想说些非常相似的观点。
I was going to say something very similar.
我认为BIP 360目前正在进行相当大幅度的重写,正越来越接近这种无密钥路径结构的Taproot方案。这非常契合你所描述的:我们将拥有一个仅需Schnorr签名即可花费的脚本叶节点(虽然成本略高,因为需要证明该叶节点存在于树中),同时旁边还可以设置后量子叶节点作为备用选项。这样在花费时,你可以选择仅披露后量子叶节点并使用它。我之前确实没考虑过先采用有状态方案的路径,但你甚至可以同时设置包含有状态和无状态的叶节点,这会很有意思。
I think BIP 360 is currently again in a pretty substantial rewrite, and it's getting closer and closer to this taproot-without-the-key-path construction, and I think that would very much fit into what you're describing, where we would have a script leaf that is spendable just with Schnorr signatures, slightly more expensive because you have to go to the script leaf and show in the control block that it existed in the tree, and then you would have the option to have post-quantum leaves next to that that you can fall back to. So when you spend, you can simply reveal only the post-quantum leaf and use that. And I hadn't actually considered previously the route of going stateful first, but you could even have leaves for stateful and stateless, and that might be interesting.
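As a loose analogy for this leaf-fallback idea, the toy Merkle commitment below shows how a spender can reveal only the chosen leaf while the unused one stays hidden behind a hash. This deliberately ignores BIP 341's tagged hashes, leaf versions, and real control-block format, and the leaf scripts are placeholders, not real opcodes.

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

# Two hypothetical script leaves: an ordinary Schnorr leaf and a
# post-quantum fallback leaf (the script contents are placeholders).
schnorr_leaf = H(b"OP_CHECKSIG <schnorr_pubkey>")
pq_leaf = H(b"OP_CHECKSIG_PQ <slhdsa_pubkey>")

# Toy Merkle commitment over the two leaves; lexicographic ordering
# loosely mimics taproot's branch hashing.
root = H(b"".join(sorted((schnorr_leaf, pq_leaf))))

def spend(leaf_hash, sibling_hash):
    # At spend time only the chosen leaf is revealed; the sibling hash
    # plays the role of the control-block path proving membership.
    return H(b"".join(sorted((leaf_hash, sibling_hash)))) == root

# Quantum scare scenario: reveal only the post-quantum leaf.
assert spend(pq_leaf, schnorr_leaf)
```

The point of the sketch is that the Schnorr leaf never has to be revealed on-chain: the output commits to both spending paths, but the spender discloses only the one they use.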
是的。
Yes.
是的。
Yes.
而且,首要且最重要的事情是应对长程攻击。
And the first, most important thing is to deal with long-range attacks.
对吧?
Right?
而且,BIP 360已经处理了长程攻击问题。
And BIP 360 already deals with long-range attacks.
如果所有人都迁移到BIP 360,当然,我们对于未迁移的UTXO无能为力,这很遗憾,或者说这是个非常棘手的话题,我不想深入讨论。
If everyone migrates to BIP 360, of course, we can sadly do nothing about non-migrating UTXOs, but that is a very difficult conversation I don't want to steer into.
但首先必须解决长程攻击问题。
But first you have to deal with the long range attack.
当短程攻击看起来可能实际可行时,就必须认真应对短程攻击了。
And when it looks like short-range attacks are maybe becoming practically possible, then you really have to deal with the short-range attacks.
重申一下,我认为可以先作为可选方案,然后视情况而定。
Again, I think optionality first, and then we will see.
一旦充分展示了量子能力,我们就绝对必须迁移到完整的后量子签名方案(也许是无状态的),同时可能再配合一次区块大小提升或扩展区块之类的方案,到时候再看吧。
And once sufficient quantum capability is demonstrated, we absolutely have to migrate to a full post-quantum, maybe stateless, signature scheme, with another block size increase, extension blocks or something. We will see, I guess.
太棒了。
Awesome.
Moon,感谢你参与讨论这个话题。
Moon, thanks for chiming in on that one.
我想我们可以就此结束,本周的共识变更环节就到这里,接下来我们将进入版本发布和候选发布环节。
I think we can wrap that up and thus wrap up the changing consensus segment for this week, and we'll move on to releases and release candidates.
Moon,欢迎你继续留下,如果你有其他事情要处理,我们也完全理解。
Moon, you're welcome to hang on or if you have other things to do, we understand.
本周我们有两个版本发布。
We have two releases this week.
现在我把话筒交给Gustavo,他负责本周的这个环节以及'值得关注的代码'部分。
I'll turn it over to Gustavo, who authored this and the Notable Code segment for this week.
谢谢Gustavo。
Thanks Gustavo.
欢迎。
Welcome.
谢谢你,Mike。
Thank you, Mike.
谢谢你,Murch。
Thank you, Murch.
也感谢Moon和Julian在解释方面所做的出色工作。
And thank you, Moon and Julian for all the great work in explaining.
本周我们在Lightning方面有两个版本发布。
So this week we have two releases on the Lightning front.
第一个是Core Lightning 25.12版,也称为"大胆无缝升级体验",包含几个新功能:将BIP39助记词种子短语作为新的默认备份方法、XPAY的路径查找功能改进、ping方法大幅优化、新增实验性的即时通道功能,以及许多其他功能和错误修复。
The first one, Core Lightning version 25.12, also called Bold Seamless Upgrade Experience, has a few new features such as BIP39 mnemonic seed phrases as a new default backup method, improved pathfinding in XPAY, a heavily improved ping method, experimental just-in-time channels, and many other features and bug fixes.
这个版本中值得注意的一个重要方面是对大型节点进行了大量性能改进。
One important thing to remark from this release is the vast array of performance improvements for large nodes.
我认为这就是为什么他们称之为"Bolt无缝升级体验",因为Bolt作为一项使用Core Lightning的服务,受益于这些面向大型节点的改进。
I believe that's why they call it the Bolt Seamless Upgrade Experience, because Bolt, being a service that uses Core Lightning, benefits from these improvements for large nodes.
需要特别强调的是,这些性能改进还涉及破坏性的数据库变更。
It's also important to add that there are breaking database changes related to these performance improvements.
因此,本次发布包含了一个降级工具,以防升级数据库时出现问题。
So this release includes a downgrade tool in case something goes wrong when upgrading the database.
但我们将在重要代码和文档变更部分更详细地讨论这一点。
But we're going to be covering that in more detail in the notable code and documentation changes section.
以上就是Core Lightning 25.12版本的更新内容。
So that covers Core Lightning version 25.12.
该版本还包含更多功能改进和错误修复,具体内容可查阅变更日志或发布说明。
There are many more features and bug fixes explained in the changelog or release notes of this new version.
关于LDK版本0.2,这是LDK的重大更新,实验性地加入了拼接功能,新增异步支付功能,支持静态发票支付,这是经过数月甚至可能数年的开发成果,最后还包含了使用临时锚点的零费用承诺通道功能。
So for LDK version 0.2: this is a major release of LDK that adds splicing in an experimental manner, and serving and paying static invoices for async payments, which has been a work of many months, maybe even years, on LDK. And finally, zero-fee commitment channels using ephemeral anchors are also part of this release.
需要注意的是,拼接功能预计将与Eclair及未来版本的Cord兼容,但异步支付实现仅适用于基于LDK的节点。
It's important to note that splicing is expected to be compatible with Eclair and future versions of Core Lightning; however, the async payments implementation will only work with LDK-based nodes.
另一个重要变化是API的扩展,提供了原生Rust异步API,因此部分方法已更新为异步方式运行,还有更多改进正在路上。
Another important change is the expansion of the API to offer a native Rust asynchronous API, so a few methods are updated to operate in an asynchronous manner, but many more changes are on the way.
以上就是发布环节的全部内容。
So that covers the release section.
Murch、Mike,有什么要补充的吗?
Murch, Mike, any comments?
还有什么需要补充的吗?
Anything to add here?
完美。
Perfect.
我们继续讨论值得关注的代码和文档变更,涉及Core Lightning、LDK、LND、BTCPay Server以及一项BIP改进。
We move on with the notable code and documentation changes, where we have Core Lightning, LDK, LND, BTCPay Server, and also a BIP improvement.
那么我们从Core Lightning #8728开始。
So we start with Core Lightning #8728.
这里修复了一个与解锁HSMD时输入的密码相关的bug,或者简单来说,与你的Core Lightning节点密钥相关的问题。
Here, there's a bug fix related to the passphrase you enter when unlocking your HSMD, or simply the key of your Core Lightning node.
过去,一旦用户输入错误的密码,HSMD会因未知消息类型而崩溃。
So previously, once a user would enter the wrong passphrase, HSMD would crash with an unknown message type.
现在这个问题已得到妥善处理,即使输入错误密码也不会导致崩溃。
Now this is properly handled so that it doesn't crash if you enter the wrong passphrase.
系统会优雅地处理这种用户错误边缘情况并正常退出。
It will simply handle this user error edge case and exit cleanly.
虽然可能不算重大改进,但对用户体验很重要——用户原本无法理解为何系统会直接退出或崩溃,而不是提示密码错误。
So maybe not a huge improvement, but quite user-facing, because the user wouldn't understand why it would simply exit or crash instead of just responding that the wrong passphrase was entered.
具体来说,这里的问题是它使用了write_all,而write_all缺少wire协议的长度前缀。
Particularly, what happened here is that it was using write_all, and write_all was missing the wire protocol's length prefix.
因此当Lightning D从HSMD接收到错误或消息类型时,无法正确解析。
So when lightningd would receive the error message type from HSMD, it wouldn't interpret it correctly.
不过write_all已被wire_sync_write取代,这样就不会再出现未知消息类型错误,这个问题得到了妥善处理
However, write_all is replaced with wire_sync_write, which means there's no unknown message type error and this is properly handled
并干净利落地退出。
and exits cleanly.
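The failure mode is easy to picture with a toy length-prefixed wire format. This is a hypothetical layout for illustration, not Core Lightning's actual wire code: each message is framed with a length prefix so the reader knows where the type and payload begin and end; a write path that omits the prefix makes the reader consume the wrong bytes as the message type.

```python
import io
import struct

def wire_write(stream, msg_type, payload):
    # Frame each message as: 2-byte big-endian length, 2-byte type, payload.
    body = struct.pack(">H", msg_type) + payload
    stream.write(struct.pack(">H", len(body)) + body)

def wire_read(stream):
    # Read the length prefix first, then exactly that many body bytes.
    (length,) = struct.unpack(">H", stream.read(2))
    body = stream.read(length)
    return struct.unpack(">H", body[:2])[0], body[2:]

buf = io.BytesIO()
wire_write(buf, 9, b"bad passphrase")  # 9 is an arbitrary toy message type
buf.seek(0)
msg_type, payload = wire_read(buf)
assert (msg_type, payload) == (9, b"bad passphrase")
```

If the writer skipped the two length bytes, the reader would interpret the type field as a length and the payload start as a type, which is exactly the "unknown message type" symptom described above.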
那么我们继续看Core Lightning #8002。
So we move on with Core Lightning #8002.
这个版本在发布说明部分有提到。
This is the one that was mentioned in the release section.
新增了一个lightningd降级工具,当升级数据库出错时,可以安全地回退到25.09版本。
There's a lightningd downgrade tool that is added so that, in case there's an error when you upgrade your database, you can safely downgrade it to 25.09.
这里没有太多要说的,只是为
So not much else to say here, just a safety mechanism for
新版本。
the new release.
最后是Core Lightning #8035。
And finally, the last one, Core Lightning #8035.
这里修复了一个长期存在的错误。
Here a long standing bug was fixed.
基本上,当Core Lightning重启时,它会回滚最近的15个区块。
Basically, when Core Lightning restarts, it will roll back the latest 15 blocks.
默认情况下是这样,但这也是可配置的。
Well, that's by default; it's also configurable.
当回滚这15个最新区块时,对于已保存的交易(或者说UTXO)中在这些区块里被花费的那些,
When it rolls back those latest 15 blocks, for any transactions it had saved, or let's say UTXOs, that were spent
在
in
这些区块中,它会将其消费高度重置为空,因为区块已被回滚。
those blocks, it will reset their spend height to null, because it rolled back the blocks.
然而,这个bug的问题是当重启并重新同步链、下载区块时,它没有正确更新这些UTXO的消费高度。
However, the bug was that when restarting and resyncing the chain and downloading the blocks, it would not properly update the spend height of those UTXOs.
所以这个PR的核心作用是让Core Lightning现在会确保重新监视所有UTXO,包括那些被回滚且消费高度被移除的UTXO。
So basically what this PR does is that Core Lightning will now make sure to rewatch all UTXOs, including those that were rolled back and whose spend height was removed.
这不仅向前修复了bug以防未来再次出现,还添加了一次性后向扫描功能,以恢复之前因该bug遗漏的所有消费记录。
So not only does it fix this going forward so the bug doesn't recur, it also adds a one-time backward scan to recover any spends that were previously missed due to this bug.
举个例子,假设上周你的节点上,某个UTXO的消费高度被置为null(因为回滚了那15个区块),而系统未能重新监视该UTXO并更新其消费高度。
So for example, if on your node last week a UTXO's spend height was changed to null because you had rolled back those 15 blocks, and it had failed to rewatch that UTXO and update its spend height.
重启时,它现在会执行一次反向扫描,确保所有已花费的UTXO都被重新监控。
When restarting, it will now do a one-time backward scan to make sure all spent UTXOs are rewatched.
这里造成的主要问题是,Core Lightning可能会转发已关闭通道的公告,因为它未能重新监视那个花费后会关闭通道的UTXO。
And the main issue this caused was that Core Lightning could relay announcements for channels that had already been closed, because it would fail to rewatch a spent UTXO that closed a channel.
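The rollback-and-rescan logic can be sketched as a toy model. All names here are hypothetical; Core Lightning's real database logic differs, but the shape of the fix is the same: rolling back resets spend heights to null and flags those UTXOs for rewatching, and a one-time backward scan re-marks spends that the old code missed.

```python
# Toy UTXO table: outpoint -> spend_height (None means unspent/watched).
utxos = {"utxo_a": 830000, "utxo_b": 830010, "utxo_c": None}

def rollback(utxos, to_height):
    # Mimic the restart behavior: forget spends in rolled-back blocks.
    rewatch = set()
    for key, spend_height in utxos.items():
        if spend_height is not None and spend_height > to_height:
            utxos[key] = None    # spend height reset to null...
            rewatch.add(key)     # ...so this UTXO must be watched again
    return rewatch

def backward_scan(utxos, spends_by_height, from_height, to_height):
    # One-time scan to recover spends missed by the old bug.
    for height in range(from_height, to_height + 1):
        for key in spends_by_height.get(height, ()):
            if key in utxos and utxos[key] is None:
                utxos[key] = height

rewatch = rollback(utxos, 830005)
assert rewatch == {"utxo_b"} and utxos["utxo_b"] is None
# Resyncing the chain re-discovers the spend at its original height.
backward_scan(utxos, {830010: ["utxo_b"]}, 830006, 830020)
assert utxos["utxo_b"] == 830010
```

The bug was the middle step: after the reset to null, the spend at height 830010 was never re-recorded, so the channel funded by utxo_b still looked open.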
什么事,默奇?
Yes, Murch?
对,你最后提到了一点,但我看资料时没注意到具体内容。当你重启一个CLN节点时,它可能会错过某个区块或公告。所以再次启动节点时,它会回退15个区块,从那里重新处理区块链和公告。为了做到这一点,它会把状态重置到15个区块前的状态,这意味着它会将通道状态的认知回退到15个区块前的状态。
Yeah, you said it towards the end, but when I read this, I missed the actual thing, so when you restart a CLN node, it might miss a block or miss announcements, so on starting the node again, it goes back 15 blocks and reprocesses the blockchain and the announcements from there, and in order to do so, would sort of reset the state to 15 blocks in the past, which would mean that it changed how it perceived channel states back to 15 blocks ago.
因此,如果CLN在这些区块中有状态变更,它实际上不会重新监视新通道,因为某种程度上它会从数据库中删除信息,然后在这15个区块的重扫描中不再重新收集这些信息。
So if CLN had a state change in those blocks, it would actually not rewatch the new channels, because it would sort of remove information from its database and then not recollect it on this 15-block rescan.
所以,这看起来像是有人考虑过,为了正确处理这个问题,我们需要回滚这个状态,但随后某些处理方式与第一次相比发生了变化,导致无法以相同方式重新处理。因此,如果你在这15个区块期间有任何状态变更,就会引入与初始扫描不同的观察差异。
So this seems like someone thought, oh, in order to process this correctly we need to roll back this state, but then something changed in how things got processed versus the first time, and it wouldn't get reprocessed in the same way. So if you had any state change during these 15 blocks, it would introduce observation differences versus the original scan.
总之,看起来他们已经修复了这个问题,虽然必须承认这个bug确实有点吓人,但好在已经修复了。
Anyway, it seems like they fixed it. It's kind of scary that they had this bug, I must admit, but good that they fixed it.
是的,没错。
Yeah, exactly.
Murch,正如你所说,那是对这个PR很好的重新表述。
As you mentioned, Murch, that was a good way to rephrase this PR.
谢谢你这么说。
Thank you for that.
那么我们继续讨论LDK #4226。
So we move on with LDK #4226.
这里发生了两件事。
So here two things happen.
如果按照提交顺序,首先新增了三个明确与跳板支付转发相关的本地失败类型。
Following the commit order, three new local failure reasons are added that are explicitly related to trampoline payment forwarding.
LDK的跳板支付实现非常精简,此前尚未实现转发功能,因此新增这三个失败原因作为支持跳板支付转发的第一步。
So LDK has a very minimalistic trampoline payments implementation where forwards were not yet implemented, so three new failure reasons are added as a first step towards supporting trampoline payment forwarding.
同时还新增了一个安全机制,用于强制校验跳板洋葱路由的接收方约束条件。
As well, another safety mechanism is also added that basically enforces the trampoline onion recipient constraints.
这意味着新增了测试用例来验证跳板载荷与外部洋葱路由的匹配性,特别关注接收跳板支付时的金额和CLTV字段。
This means that tests are added to cover the validation of trampoline payloads against the outer onion and this is specifically related to the amount and the CLTV fields of received trampolines.
总之就是安全机制和整体上为LDK增加更多跳板支付支持做准备。
So just safety mechanism and overall preparation towards more support for trampoline payments in LDK.
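The recipient-constraint check reduces to simple comparisons, sketched here with hypothetical names (LDK's actual validation covers more fields and error handling): the inner trampoline payload must not claim more value, or a later expiry, than the outer onion actually delivered.

```python
def validate_trampoline(outer_amount_msat, outer_cltv,
                        inner_amount_msat, inner_cltv):
    # The inner (trampoline) payload must be consistent with what the
    # outer onion delivered: no claiming extra value or a later expiry.
    if inner_amount_msat > outer_amount_msat:
        return "amount_mismatch"
    if inner_cltv > outer_cltv:
        return "cltv_mismatch"
    return "ok"

assert validate_trampoline(100_000, 800_100, 100_000, 800_100) == "ok"
assert validate_trampoline(100_000, 800_100, 150_000, 800_100) == "amount_mismatch"
assert validate_trampoline(100_000, 800_100, 100_000, 800_200) == "cltv_mismatch"
```

Failing either check lets the recipient reject a payment whose trampoline payload was built inconsistently with the outer onion, which is the safety property the new tests cover.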
接下来是LND #10341。
And so the next one is LND #10341.
这里修复了一个bug:当TOR服务重启时,LND会重复添加相同的TOR洋葱地址到节点公告和getinfo输出中。
Here, a bug is fixed where, on a restart of the Tor service, LND would re-add the same Tor onion address to the node announcement and to the getinfo output.
简单说就是公告和命令输出中的洋葱地址会被重复显示,对吧?
So basically the onion address would just be duplicated in the announcement and in the output of the command, right?
因此这个PR确保创建新隐藏服务时永远不会重复生成地址。
So instead, the PR ensures that creating a new hidden service will never duplicate an address.
所以当Tor服务重启时,会确保不会重复添加节点公告消息或getinfo输出中已存在的地址。
So if the Tor service restarts, it will make sure not to duplicate an address that is already present in the node announcement message or the getinfo output.
这其实不是什么大问题。
So this wasn't a major issue.
只是让Tor服务重启的处理方式更规范。
It's just a cleaner way of handling a Tor service restart.
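The fix boils down to making the add idempotent, as in this sketch (hypothetical function name and address, not LND's actual API): re-adding an address that is already announced is a no-op.

```python
def add_onion_address(announced, new_addr):
    # Re-adding the same hidden-service address (e.g. after a Tor
    # restart) must not duplicate it in the node announcement.
    if new_addr not in announced:
        announced.append(new_addr)
    return announced

addrs = add_onion_address([], "abcdef.onion:9735")
addrs = add_onion_address(addrs, "abcdef.onion:9735")  # Tor restarted
assert addrs == ["abcdef.onion:9735"]
```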
对此还有什么要补充的吗?
Anything to add here?
Murch、Mike?
Murch, Mike?
很好。
Perfect.
我们继续讨论BTCPay Server #6986。
We move on with BTCPay Server #6986.
这里引入了一个名为"货币化"的新功能,面向"大使",即运营BTCPay Server并投入大量精力引导用户和商户入驻的人。
So a new feature is introduced here called monetization, which enables ambassadors, that is, BTCPay Server operators who do a lot of work onboarding users and merchants.
该功能允许他们将自己的工作变现。
This feature allows them to monetize their work.
那么它是如何运作的呢?
And how does it work?
简单来说,当我作为大使,比如说在我的城市或国家为BTCPay服务器招募用户和商户时,通常会运营一个服务器,并让他们注册到我的服务器上,这样他们就能在我的服务器上创建自己的店铺。
Basically, when I'm an ambassador and I'm, let's say, onboarding users and merchants to BTCPay server in my city or my country, I will often run a server and tell them to sign up to my server so that they can create their store on my server.
该功能允许服务器管理员要求用户登录时订阅服务。
This feature allows server admins to require a subscription for user login.
这意味着用户(比如商户)在我的服务器上创建账户后,首先会进入默认的7天免费试用期,之后我作为服务器管理员可以配置入门套餐或二级套餐。
So this means that a user, or let's say a merchant, that creates an account on my server would first enter a default seven-day free trial period, and I could then, as a server admin, configure a starter plan or a second-tier plan.
这实际上是基于我们在第375期通讯中描述的订阅功能构建的,该功能允许商户向用户定义周期性支付方案和套餐计划。
Really this builds on a feature we described in Newsletter three seventy five called subscriptions, where merchants can define recurring payment offerings and plans to users.
因此这只是订阅功能的一种延伸,让BTCPay Server的大使能够以这种方式将其工作变现。
So this is just an extension of the subscription feature, allowing ambassadors of BTCPay Server to monetize their work in this way.
不过,订阅功能有默认设置,如我之前提到的,包括7天免费试用期和免费入门套餐。
However, the subscription has defaults, like I mentioned: a seven-day free trial period and a free starter plan.
但管理员可以完全按照自己的需求进行自定义设置。
However, admins can customize this in the exact way they want.
最后,对于已经拥有订阅的现有用户——抱歉,我指的是那些已经在大使服务器上拥有店铺的用户——他们不会自动加入订阅计划,不过服务器管理员可以轻松将这些现有用户迁移至新的订阅模式。
Finally, existing users, excuse me, users that already had a store on an ambassador's server, will not be automatically enrolled into a subscription, though the server admin can easily migrate pre-existing users to the new subscription model.
对此还有什么补充想法吗?
Any extra thoughts here?
很好。
Perfect.
最后一项,BIPs仓库的#2015为BIP 54(共识清理提案)添加了测试向量。
So the final one, BIPs #2015, adds test vectors to BIP 54, which is the consensus cleanup proposal.
这里为共识清理软分叉的四个不同部分新增了测试向量。
Here, new test vectors are added for the four different parts of the consensus cleanup soft fork.
这些测试向量是通过Bitcoin Inquisition上公开拉取请求中的BIP54实现生成的,同时还使用了包含在定制版Bitcoin Core分支中的自定义矿工测试单元来构建。
These are generated using the BIP 54 implementation that is available in an open pull request to Bitcoin Inquisition, and a custom miner test unit included in a custom Bitcoin Core branch is also used to build these test vectors.
因此,这还包括了关于如何使用和审查这些测试向量以评估该提案的文档和说明。
So this also includes documentation and instructions on how to use and review these test vectors to evaluate this proposal.
此前的一期通讯也在变化共识部分报道过这一内容,你可以查阅该期通讯,获取关于这些测试向量及该BIP提案的更多背景信息。
This was also covered in the changing consensus section of an earlier newsletter, so you can check that newsletter for additional context on these test vectors and this BIP proposal.
以上就是本节以及整个通讯和节目的全部内容。
So that covers this section and the whole newsletter and show.
Merg、Mike,如果你们有任何想法请提出。
Merg, Mike, if you have any thoughts please.
是的,我们要求(或者说强烈建议)BIP在进入"提案"阶段前具备测试向量,而在进入"最终"阶段前则必须具备。拥有测试向量来检验你是否正确实现了某个BIP,意味着同一规范的不同实现更有可能相互兼容并完全符合规范。伟大的共识清理是一个多年的过程:它最初于2019年提出,最近两年由Antoine Poinsot重新推动。现在有了测试向量,我认为这个BIP不仅规范完备,而且更容易被完整地实现。
Yeah, so we require, or we emphatically suggest, that BIPs have test vectors before they move to proposed, and they're required before BIPs move to final. Having test vectors to check that you implemented a BIP correctly makes it much more likely that the different implementations of a specification are interoperable and all spec-conformant. The great consensus cleanup has been a multi-year process: it was originally proposed in 2019, and there's been this warming it back up by Antoine Poinsot over the last two years. With the test vectors now, I think the BIP is not just completely specified, but also easier to implement completely.
我认为这是向真正提出激活伟大共识清理迈进的又一个里程碑。
I think this is another milestone towards being able to actually propose activation of the GREAT Consensus Cleanup.
因此,如果你对共识清理感兴趣,现在是时候更积极地发声了,应该让更多人知道并支持这项提案——在我看来,在所有软分叉提案中,它目前在采用曲线或准备曲线上是最接近成熟的一个。
So I think that if the Consensus Cleanup is something that you're interested in, it's time to get a little more verbal about it and to mention that this is a proposal people should be looking into and supporting, because I think it's by far the closest among soft fork proposals that is on the adoption curve or readiness curve.
总之,伟大的共识清理正日趋完善。
So yeah, anyway, the great consensus cleanup is getting closer to being ready.
谢谢Murch。
Thank you Murch.
我想补充的是,如果大家对这项具体提案感兴趣,第340期通讯详细解析了该提案的各个部分:传统输入的sigops限制、将时间扭曲宽限期延长至两小时,以及重复交易修复。大家可以查阅该期通讯,获取更多背景信息,以全面理解这项提案的内容。
I want to add that if people are interested in this specific proposal, newsletter three forty had a breakdown of its different parts, which are the legacy input sigops limit, increasing the timewarp grace period to two hours, and the duplicate transaction fix, so people can check out that newsletter for additional context to fully understand what this proposal is about.
是的,也别忘了查看主题页面,那里引用了Gustavo提到的一些内容以及其他相关讨论。我记得Antoine最近也在通过播客和视频向大家介绍BIP 54,以及为什么我们应当将其视为比特币协议的一项潜在变更。
Yeah, and check out the topics page as well, where you see references to some of what Gustavo mentioned, among other mentions. And I think Antoine's been out as well doing podcasts and videos with people about BIP 54 and why we should be thinking about it as a potential Bitcoin protocol change.
感谢Gustavo带我们浏览了版本更新和重要代码环节。
So thanks, Gustavo, for taking us through the releases and Notable Code segment.
也感谢Murch共同主持。
Thank you, Murch, for co hosting as well.
我们要感谢Moon和Julian的参与,感谢各位听众的收听。
We want to thank Moon and Julian for joining us and for you all for listening.
干杯。
Cheers.
关于 Bayt 播客
Bayt 提供中文+原文双语音频和字幕,帮助你打破语言障碍,轻松听懂全球优质播客。