BIP-352: introduce per-group recipient limit K_max (=2324) #2106

Open
theStack wants to merge 5 commits into bitcoin:master from theStack:bip352_limit_max-k-PR

Conversation

@theStack
Contributor

@theStack theStack commented Feb 19, 2026

As previously announced in the libsecp256k1 silentpayments module discussion and on the mailing list ~2 weeks ago, this PR introduces a K_max limit to BIP-352 to mitigate worst-case scanning time for adversarial transactions. So far no objections have been raised against that protocol change, which is backwards-incompatible in theory.

The Python reference implementation is adapted and test vectors for the following two scenarios are added:

  • Sending fails if any recipient group (i.e. recipients that share the same scan public key) exceeds the limit
  • Scanning only detects outputs with k values up to K_max (exclusive, i.e. from k=0..999), even if more would be available

Can be tested by running $ ./bip-0352/reference.py ./bip-0352/send_and_receive_test_vectors.json. The test vectors pass the tests in the libsecp256k1 module as well, see commit theStack/secp256k1@65ce623. [1]
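The first scenario (grouping recipient addresses by scan key, then enforcing the limit) can be sketched roughly as follows. This is a hypothetical stand-in, not the reference code: `check_send_groups` and the plain-string keys are illustrative, and `K_MAX` uses the value from the updated proposal.

```python
# Hypothetical sketch of the sender-side group check (not the reference code).
K_MAX = 2324  # per-group recipient limit, value after the PR update

def check_send_groups(recipients):
    """recipients: list of (B_scan, B_m) public key pairs.

    Group addresses by their scan key and fail if any group
    exceeds the K_MAX per-group recipient limit."""
    groups = {}
    for b_scan, b_m in recipients:
        groups.setdefault(b_scan, []).append(b_m)
    if any(len(b_ms) > K_MAX for b_ms in groups.values()):
        raise ValueError("recipient group exceeds K_max, sending fails")
    return groups
```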

Note that the value K_max=1000 is somewhat arbitrary and can still be debated; it's simply one that came up relatively early in the discussion and is believed to be fine even for very unusual scenarios like "exchange sends to another exchange" (see bitcoin-core/secp256k1#1799 (comment)). An alternative number that was brought up later was K_max=2324, as this is the theoretical maximum for standard-sized (i.e. <= 100 kvB) transactions (bitcoin-core/secp256k1#1799 (comment)). Personally I think even a lower limit than K_max=1000 would be fine, like something in the "lower hundreds" range, but with 1000 we are still comfortably in the "seconds" range for the worst case on a reasonably modern machine. // EDIT: the proposed value was raised from 1000 to 2324

The chosen value of Kmax = 2324 (suggested earlier in bitcoin-core/secp256k1#1799 (comment) and in #2106 (comment) below) represents the maximum number of P2TR outputs that can fit into a 100kvB transaction, meaning a transaction that adheres to the current standardness rules is guaranteed to be within the limit. This ensures flexibility and also mitigates potential fingerprinting issues.
See benchmark results bitcoin-core/secp256k1#1765 (comment) for more details.

Another suggested BIP change is introducing a warning about wallet interoperability w.r.t. the number of labels created. As I see this as an orthogonal change, I will open a separate PR for this. One slightly related PR is #2087, which would standardize the EC parts of the reference code and make it look a bit nicer. It's not strictly necessary, but I'm happy to rebase this PR if #2087 gets in first.

[1] I haven't included this commit in the secp PR yet as it's unclear to me if we want to bloat the repository with >120k new LOC and several mega-bytes of additional test data, considering a test case for this already exists. Something to be discussed further in bitcoin-core/secp256k1#1765. // EDIT: after the test vector encoding improvements as suggested in #2106 (comment), this is not an issue anymore

@murchandamus
Member

murchandamus commented Feb 19, 2026

I guess I misunderstood so far what kind of limit you were looking to impose. I thought that Silent Payment transactions would generally be limited to 1000 outputs.

If a malicious party were to send 1000 outputs to each of two recipients, they'd incur a 2×1000² validation time per recipient, or a 2²×1000² total validation time. If they make an oversized transaction and send 1000 outputs each to 23 recipients, each of the 23 recipients would have 23×1000² total validation time. So, the total validation time is still quadratic and distributed across multiple recipients, but it gets worse for each recipient the more recipients there are. Wouldn't it be better to just limit silent payment transactions to 1000 outputs? It's not like that would make Silent Payment transactions stick out like a sore thumb. ;)
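As a back-of-the-envelope check of these numbers, one can count one unit of work per (output, k) inner-loop step. This is a simplified cost model with hypothetical names, not the actual scanning code:

```python
# Simplified cost model: each recipient iterates over every output in the
# transaction and tries up to k_limit tweak values per output.
def scan_work(n_outputs_total, k_limit):
    return n_outputs_total * k_limit

# 1000 outputs to each of 2 recipients: 2000 outputs total
per_recipient = scan_work(2 * 1000, 1000)      # 2 * 1000**2 per recipient
total = 2 * per_recipient                      # 2**2 * 1000**2 overall

# oversized tx, 1000 outputs to each of 23 recipients: 23_000 outputs total
per_recipient_23 = scan_work(23 * 1000, 1000)  # 23 * 1000**2 per recipient
```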

@theStack
Contributor Author

theStack commented Feb 20, 2026

I guess I misunderstood so far what kind of limit you were looking to impose. I thought that Silent Payment transactions would generally be limited to 1000 outputs.

Limiting the total number of transaction outputs for Silent Payments eligibility might be a possible alternative, but it seems to be more restrictive in terms of composability. There is currently no rule that in a Silent Payments transaction all (taproot) outputs have to be Silent Payments related, users are free to combine it with an arbitrary number of non-Silent-Payments outputs. Considering this, can we be sure that a total tx output limit would never cause unintended problems in any wallet flow? It could e.g. happen that a user first creates exactly N_max [1] outputs with a Silent Payments library, and then some collaborative protocol adds another unrelated taproot output on top, without necessarily being aware that the already existing outputs even have a Silent Payments context (they just look like random taproot outputs). In that scenario the Silent Payments recipients wouldn't get paid, as the resulting transaction isn't considered Silent Payments eligible, due to having N_max+1 outputs and thus exceeding the limit. (Slightly related: #2055 (comment)).

That said, I'm not too familiar with concrete collaboration protocols and flows, so the scenario I painted might also be a bit far-fetched, especially since it must be a single entity having control over all the input private keys (as otherwise it couldn't have created the Silent Payments outputs in the first place). When thinking about simpler solutions, one obvious idea would be to limit the size of Silent Payments eligible transactions to 100 kvB. No matter what protocols are in use, it seems at least very unlikely to me that any of them would in the end not respect the current policy limit and create a tx that would likely not be relayed and need out-of-band miner communication. On the other hand, no policy limit is set in stone, so in some sense even that would potentially restrict flexibility for the future. 🤷‍♂️
I remember having discussed this with @RubenSomsen a while ago, who likely has more convincing arguments for K_max.

[1] using the N_max notion for your "limit total outputs" proposal to differentiate it from the K_max one

@murchandamus murchandamus added Proposed BIP modification Pending acceptance This BIP modification requires sign-off by the champion of the BIP being modified labels Feb 20, 2026
@murchandamus
Member

cc: @josibake, @RubenSomsen

@Sosthene00

Hi everyone, sorry I just caught up on this.

First, I concept ACK a limit on k.

But as I said when we first talked about this on a call a couple of months ago, I consider this a rather theoretical problem, since afaiu the incentives for the supposed attacker are just not there at all, i.e. the attacker would spend a lot of money just to slow down transaction scanning for the victim. Even worse, he's actually paying the victim a significant amount. I wish I could be attacked like this more often :)

More seriously, I agree on a reasonable restriction to prevent worst cases. Limiting k to 1000 seems reasonable to me: that's a number of outputs for the same recipient we're very unlikely to ever hit in practice, and it gives us a clear upper bound on how long scanning can take. It doesn't interfere with policy like other proposals; it's purely a cap on a silent payment parameter, and it can be easily bypassed/reversed later if needed, even if I doubt we'll ever need to.

* Let ''input_hash = hash<sub>BIP0352/Inputs</sub>(outpoint<sub>L</sub> || A)'', where ''outpoint<sub>L</sub>'' is the smallest ''outpoint'' lexicographically used in the transaction<ref name="why_smallest_outpoint"></ref> and ''A = a·G''
** If ''input_hash'' is not a valid scalar, i.e., if ''input_hash = 0'' or ''input_hash'' is larger or equal to the secp256k1 group order, fail
* Group receiver silent payment addresses by ''B<sub>scan</sub>'' (e.g. each group consists of one ''B<sub>scan</sub>'' and one or more ''B<sub>m</sub>'')
* If any of the groups exceed the limit of ''K<sub>max</sub>'' (=1000) silent payment addresses, fail.<ref name="why_limit_k">'''Why is the size of groups (i.e. silent payment addresses sharing the same scan public key) limited by ''K<sub>max</sub>''?''' An adversary could construct a large full-block-sized transaction consisting of N=23255 outputs (that's the theoretical maximum under current consensus rules, w.r.t. the block weight limit) that all target the same entity, consisting of one large group. Without a limit on the group size, scanning such a transaction with the algorithm described in this document would have a complexity of ''O(N<sup>2</sup>)'' for that entity, taking several minutes on modern systems. By limiting the number of "k" iterations in the inner loop, the complexity is reduced to ''O(N·K<sub>max</sub>)'', cutting down the scanning cost to the order of tens of seconds instead.</ref>
Contributor


Consider:
...By limiting the group size to 'Kmax', the number of iterations of the inner loop is reduced to 'Kmax', and the complexity is reduced to...

Contributor Author


Thanks, restructured that part based on your suggestion.

@theStack theStack force-pushed the bip352_limit_max-k-PR branch from 21a297e to d1f061b Compare February 24, 2026 14:11
@theStack
Contributor Author

@Sosthene00: Thanks for weighing in! I agree that it's a very unusual attack with relatively weak incentives, probably even calling it "attack" in the first place seems a bit exaggerated. Note though that the recipient doesn't necessarily get paid, as there is currently nothing in the protocol mandating a minimum output amount. An attacker could include e.g. one large non-Silent-Payments output as change to themselves, and all the other Silent-Payments outputs that match for the recipient have 0 sats (being below the dust limit, that's non-standard, but still consensus valid). Silent Payments wallets would still want to scan such a transaction, considering that the single high-amount output could be a match.

@RubenSomsen
Contributor

@murchandamus raises a good point that ~23 users can be targeted simultaneously every 10 minutes, rather than one (or e.g., at k=500, ~46). This mainly makes it cheaper for an attacker because it only requires 1/23rd of a full block worth of fees per victim.

That said, I don't think this is a major concern. Limiting k is mainly about making it computationally feasible for an individual user to deal with being targeted, regardless of the whims of the block space market. Furthermore, it remains the case that for a targeted attack to occur (regardless of whether 1 or 23 people are targeted), it requires outbidding everyone else for the entire block.

If we're looking at the effect on the total set of users, during a targeted attack the combined scanning burden actually goes down. A block with a single targeted worst-case transaction is ~5x cheaper to scan for non-targeted people (expensive EC mults are only done per TX, so fewer TXs = faster). The most effective way to maximize the combined scanning burden is not with a targeted attack, but by filling the block with 1-input 1-output taproot transactions.

The reasons why we'd prefer limiting K have already been mentioned, but my generalized take is that we want to minimize the number of constraints we put on SP transactions to ensure no incompatibilities occur with other protocols (including future ones).

And for those unaware, @theStack was appointed co-author of BIP352, so his voice carries no less weight than my own.

@murchandamus
Member

Understood, thanks for the reminder. I opened a PR to add theStack to BIP352 in #2107.
Regarding this change, please let me know whenever it’s ready to merge, @theStack.

@Sosthene00

@Sosthene00: Thanks for weighing in! I agree that it's a very unusual attack with relatively weak incentives, probably even calling it "attack" in the first place seems a bit exaggerated. Note though that the recipient doesn't necessarily get paid, as there is currently nothing in the protocol mandating a minimum output amount. An attacker could include e.g. one large non-Silent-Payments output as change to themselves, and all the other Silent-Payments outputs that match for the recipient have 0 sats (being below the dust limit, that's non-standard, but still consensus valid). Silent Payments wallets would still want to scan such a transaction, considering that the single high-amount output could be a match.

Ok, the 0-sat output attack is interesting, but I don't think it changes the dynamic here.

First, as you mentioned, that makes the transaction non-standard; it raises the bar, but I agree it's not enough of a deterrent nowadays.

Second, wallets and scanners like blindbit would be very likely to ignore such outputs. For example, with Dana we're still experimenting: we used to ignore all outputs barely above the dust limit and even raised the bar to 1000 sats for a time, but 0-sat outputs will definitely be ignored.

Actually, it makes me think of a problem that we haven't talked about yet, as far as I know at least: if a transaction contains more than one output for a recipient, and the output derived with k == 0 is below the dust value that the wallet and/or blindbit chooses to ignore, but say the output at k == 1 is above it, I'm not sure that the latter will be caught. I'd rather check that.

Probably not an attack, but it could definitely happen by mistake if wallets are not aware of this, and it would result in a failure to correctly identify our outputs.

@Eunovo
Contributor

Eunovo commented Feb 25, 2026

Second wallets and scanners like blindbit would be very likely to ignore such outputs. For example with Dana we're still experimenting, we used to ignore all outputs barely above dust limit and even raised the bar to 1000 sats for a time, but 0 sats outputs will definitely be ignored.

@Sosthene00 I and @RubenSomsen had discussed this idea previously and it comes with some drawbacks. See bitcoin-core/secp256k1#1799 (comment)

@real-or-random
Contributor

An alternative number that was brought up later was K_max=2324, as this is the theoretical maximum for standard-sized (i.e. <= 100 kvB) transactions (bitcoin-core/secp256k1#1799 (comment)).

If applications should ever start hitting the K=1000 limit, then a tx with ~1000 outputs (e.g., 1001 with one change output) is quite likely an SP transaction. I'm not sure how real this concern is, but a limit of K_max = 2324 would mitigate this, and IMO an additional factor of ~2x doesn't matter that much given that we're in the order of seconds already. (Plus, some of @w0xlt's suggested optimizations could bring scanning time down again.)

@real-or-random
Contributor

[1] I haven't included this commit in the secp PR yet as it's unclear to me if we want to bloat the repository with >120k new LOC and several mega-bytes of additional test data, considering a test case for this already exists. Something to be discussed further in bitcoin-core/secp256k1#1765.

I tend to think that part of this discussion belongs here. If the BIP has test vectors, the simplest thing for implementations (libsecp256k1 or others) is to use them all. Treating a handful of vectors differently creates friction. But having multi-MB files in the repo may also create friction.

Perhaps a better encoding can be found? On the sending side, a "count" arg could be added. On the recipient side, we'll need the K+1 outputs but maybe it's enough to "expect" only the number of found outputs (instead of their data). I am not sure if the added implementation complexity is worth the hassle.

@theStack
Contributor Author

theStack commented Feb 25, 2026

Actually, it makes me think of a problem that we haven't talked about yet, as far as I know at least: if a transaction contains more than one output for a recipient, and the output derived with k == 0 is below the dust value that the wallet and/or blindbit chooses to ignore, but say the output at k == 1 is above it, I'm not sure that the latter will be caught. I'd rather check that.

Probably not an attack, but it could definitely happen by mistake if wallets are not aware of this, and it would result in a failure to correctly identify our outputs.

Good point, that sounds like a pitfall that's very easy to fall into. I think it's worthwhile to add an explicit warning about that scenario to the BIP (e.g. "after a match, always continue scanning with the next k, even if that match was not considered relevant for the wallet (e.g. due to being dust)" or sth similar) in a separate PR, to help implementers avoid the issue.
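A minimal sketch of that suggested rule, with hypothetical helper names; the 546-sat dust threshold is an assumed wallet policy for illustration, not part of the BIP:

```python
DUST_LIMIT = 546  # assumed wallet policy threshold in sats, not a BIP rule

def scan_outputs(outputs, derive_spk):
    """outputs: dict mapping scriptPubKey -> amount in sats.
    derive_spk(k): expected scriptPubKey for output index k."""
    found, k = [], 0
    while True:
        spk = derive_spk(k)
        if spk not in outputs:
            break  # no match for this k: scanning is done
        # crucial: advance k past *any* match, even one the wallet
        # ignores (e.g. a 0-sat output), so later outputs are found
        if outputs[spk] >= DUST_LIMIT:
            found.append((k, spk, outputs[spk]))
        k += 1
    return found
```

With this structure, a dusty match at k=0 is skipped by wallet policy but still advances the counter, so a spendable output at k=1 is not missed.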

An alternative number that was brought up later was K_max=2324, as this is the theoretical maximum for standard-sized (i.e. <= 100 kvB) transactions (bitcoin-core/secp256k1#1799 (comment)).

If it should happen that we see applications hitting the K=1000 limit, then if you see a tx with ~1000 outputs (e.g., 1001 with one change output), then chances are high that it's an SP transaction. I'm not sure how real this concern is, but a limit of K_max = 2324 would mitigate this, and IMO an additional factor of ~2x doesn't matter that much given that we're in the order of seconds already. (Plus, some of @w0xlt's suggested optimizations could bring scanning time down again).

I'm not sure about these concerns either, but I do agree that K_max = 2324 would be the safest choice in that regard, with a resulting worst-case scanning time that is still acceptable (even without further optimizations). Curious what others think, but I wouldn't mind bumping the limit.

I tend to think that part of this discussion belongs here. If the BIP has test vectors, the simplest thing for implementations (libsecp256k1 or others) is to use them all. Treating a handful of vectors differently creates friction. But having multi-MB files in the repo may also create friction.

In libsecp256k1, what would take the most space is the vectors.h file (~3.5 MB in theStack/secp256k1@7ae84f8 with K_max=1000), which is generated from the BIP .json (~1 MB) file with a Python script. One idea might be to not add that header to the repository and let the build system handle its generation instead, but then again we currently only support this in Autotools, but not in CMake (see e.g. bitcoin-core/secp256k1#1723), so not sure. (Also it's the opposite what we currently have in other modules like musig2, where only the test vector header is in the repository, but not the .json files.)

Perhaps a better encoding can be found? On the sending side, a "count" arg could be added. On the recipient side, we'll need the K+1 outputs but maybe it's enough to "expect" only the number of found outputs (instead of their data). I am not sure if the added implementation complexity is worth the hassle.

Makes sense to consider a better encoding. An advantage would be that test cases are more readable (at least for the sender side, it would only be a few lines with an extra "count" field) and thus easier to be extended. For the recipient side, I just discovered that we already have specified an "n_outputs" field in the BIP document, it's just not used yet:

bips/bip-0352.mediawiki

Lines 411 to 422 in 97781ea

"expected": {
    "addresses": [<array of bech32m strings, one for the silent payment address and each labeled address (if used)>],
    "outputs": [<array of outputs with tweak and signature; contains all possible output sets, tester must match a subset of size `n_outputs`>
        {
            "priv_key_tweak": <hex encoded private key tweak data>,
            "pub_key": <hex encoded X-only public key>,
            "signature": <hex encoded signature for the output (produced with spend_priv_key + priv_key_tweak)>
        },
        ...
    ],
    "n_outputs": <integer for the exact number of expected outputs>
}

We could make the "addresses" and "outputs" fields optional if "n_outputs" is provided instead, and make use of that for the new K_max test cases. On the other hand, I'm not sure about the implementation complexity either. Will take a look to see how much effort it would need (also on the secp256k1 side, where we likely would want/need to change the structures in vectors.h as well, to avoid thousands of empty fields in the other test vector data that isn't related to K_max).
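For illustration, a sender-side vector carrying an optional "count" field could be expanded into the repeated address list before running the existing test logic. This is a hypothetical encoding sketch; the exact field names are still up for discussion:

```python
def expand_recipients(entries):
    """Each entry is a dict with an "address" and an optional "count"
    field; repeat the address "count" times (default 1)."""
    expanded = []
    for entry in entries:
        expanded.extend([entry["address"]] * entry.get("count", 1))
    return expanded
```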

@murchandamus murchandamus added the PR Author action required Needs updates, has unaddressed review comments, or is otherwise waiting for PR author label Feb 25, 2026
@theStack theStack force-pushed the bip352_limit_max-k-PR branch from d1f061b to 4dac93c Compare February 27, 2026 02:47
@theStack
Contributor Author

theStack commented Feb 27, 2026

Updated the PR with the following changes:

@theStack theStack changed the title BIP-352: introduce per-group recipient limit K_max (=1000) BIP-352: introduce per-group recipient limit K_max (=2324) Feb 27, 2026
Contributor

@nymius nymius left a comment


cACK 311cc66: I agree with the limit. The increase in scanning time is linear, the performance is still in the same order of magnitude, and the limit decreases the chances of wallet fingerprinting. As I have commented, I would reference the rationale for the chosen number in the spec.

ACK 8f7adfd..4dac93c:
Checked full test execution: didn't have any issues. It took around 16.78 seconds to complete. The bulk of this time was obviously spent in the new vector.

I've sorted and compared the differences in sizes of test vectors in the repository, and bip352 vectors are far from the median, but also far from the heaviest files:

Files before changes:

[5.2M]  bip-0054/test_vectors/sigops.json
[4.1M]  bip-0054/test_vectors/timestamps.json
[696K]  bip-0119/vectors/ctvhash.json
[404K]  bip-0352/send_and_receive_test_vectors.json
[ 36K]  bip-0346/ref-impl/txhash_vectors.json

Files after changes:

[5.2M] bip-0054/test_vectors/sigops.json
[4.1M] bip-0054/test_vectors/timestamps.json
[696K] bip-0119/vectors/ctvhash.json
[188K] bip-0352/send_and_receive_test_vectors.json
[ 36K] bip-0346/ref-impl/txhash_vectors.json

In theory this is a backwards incompatible protocol change.
Practically, no existing Silent Payments wallet out there supports
sending to such a high quantity of recipients (not even in terms of
_total_ number of recipients), so the K_max limit should be safe to
introduce, without any negative effects on the wallet ecosystem.
Introduce an optional "count" field for recipient objects.
Also update the documentation of the fields.
Introduce an optional "n_outputs" field as alternative to the detailed
"outputs" objects (the field was already specified, but not used so
far). Also update the documentation of the fields.
Test case: as the (only) recipient group contains 2325 addresses and
thus exceeds the K_max limit by one, sending fails.

Can be tested by
`$ ./bip-0352/reference.py ./bip-0352/send_and_receive_test_vectors.json`
Test case: even though there are 2325 outputs targeted to the recipient,
only 2324 are found due to the introduced K_max limit. Any
implementation following the new BIP protocol rule wouldn't create such
a transaction in the first place, but an attacker might.

Can be tested by
`$ ./bip-0352/reference.py ./bip-0352/send_and_receive_test_vectors.json`
@theStack theStack force-pushed the bip352_limit_max-k-PR branch from 4dac93c to 4a8ad2c Compare February 27, 2026 17:03
@theStack
Contributor Author

@nymius: Thanks for the thorough review! Makes sense to explain the reasoning behind the magic number; extended the footnote accordingly with the following sentences:

The chosen value of Kmax = 2324 represents the maximum number of P2TR outputs that can fit into a 100kvB transaction, meaning a transaction that adheres to the current standardness rules is guaranteed to be within the limit. This ensures flexibility and also mitigates potential fingerprinting issues.

(small nit, I think in your file size comparison you swapped the before/after outputs).

@real-or-random
Contributor

LGTM (I only reviewed the text changes)

Did anyone double-check that 2324 is really the right number? I suggested it based on https://bitcoinops.org/en/tools/calc-size/, and it would be nice if someone more familiar with transaction encoding could check it. I had ignored inputs, I think. (I mean if the true value is a tiny bit lower, that doesn't matter, but it would be sad if it's a tiny bit higher.)

* Let ''input_hash = hash<sub>BIP0352/Inputs</sub>(outpoint<sub>L</sub> || A)'', where ''outpoint<sub>L</sub>'' is the smallest ''outpoint'' lexicographically used in the transaction<ref name="why_smallest_outpoint"></ref> and ''A = a·G''
** If ''input_hash'' is not a valid scalar, i.e., if ''input_hash = 0'' or ''input_hash'' is larger or equal to the secp256k1 group order, fail
* Group receiver silent payment addresses by ''B<sub>scan</sub>'' (e.g. each group consists of one ''B<sub>scan</sub>'' and one or more ''B<sub>m</sub>'')
* If any of the groups exceed the limit of ''K<sub>max</sub>'' (=2324) silent payment addresses, fail.<ref name="why_limit_k">'''Why is the size of groups (i.e. silent payment addresses sharing the same scan public key) limited by ''K<sub>max</sub>''?''' An adversary could construct a large full-block-sized transaction consisting of N=23255 outputs (that's the theoretical maximum under current consensus rules, w.r.t. the block weight limit) that all target the same entity, consisting of one large group. Without a limit on the group size, scanning such a transaction with the algorithm described in this document would have a complexity of ''O(N<sup>2</sup>)'' for that entity, taking several minutes on modern systems. By capping the group size at ''K<sub>max</sub>'', we reduce the inner loop iterations to ''K<sub>max</sub>'', thereby decreasing the complexity to ''O(N·K<sub>max</sub>)''. This cuts down the scanning cost to the order of tens of seconds. The chosen value of ''K<sub>max</sub>'' = 2324 represents the maximum number of P2TR outputs that can fit into a 100kvB transaction, meaning a transaction that adheres to the current standardness rules is guaranteed to be within the limit. This ensures flexibility and also mitigates potential fingerprinting issues.</ref>
Contributor

@real-or-random real-or-random Feb 27, 2026


It may be worth rephrasing this in terms of worst-case block scanning time. Because otherwise the reader may wonder if this is really an improvement in the end.

Suggested change
* If any of the groups exceed the limit of ''K<sub>max</sub>'' (=2324) silent payment addresses, fail.<ref name="why_limit_k">'''Why is the size of groups (i.e. silent payment addresses sharing the same scan public key) limited by ''K<sub>max</sub>''?''' An adversary could construct a large full-block-sized transaction consisting of N=23255 outputs (that's the theoretical maximum under current consensus rules, w.r.t. the block weight limit) that all target the same entity, consisting of one large group. Without a limit on the group size, scanning such a transaction with the algorithm described in this document would have a complexity of ''O(N<sup>2</sup>)'' for that entity, taking several minutes on modern systems. By capping the group size at ''K<sub>max</sub>'', we reduce the inner loop iterations to ''K<sub>max</sub>'', thereby decreasing the complexity to ''O(N·K<sub>max</sub>)''. This cuts down the scanning cost to the order of tens of seconds. The chosen value of ''K<sub>max</sub>'' = 2324 represents the maximum number of P2TR outputs that can fit into a 100kvB transaction, meaning a transaction that adheres to the current standardness rules is guaranteed to be within the limit. This ensures flexibility and also mitigates potential fingerprinting issues.</ref>
* If any of the groups exceed the limit of ''K<sub>max</sub>'' (=2324) silent payment addresses, fail.<ref name="why_limit_k">'''Why is the size of groups (i.e. silent payment addresses sharing the same scan public key) limited by ''K<sub>max</sub>''?''' An adversary could construct a block filled with a single transaction consisting of N=23255 outputs (that's the theoretical maximum under current consensus rules, w.r.t. the block weight limit) that all target the same entity, consisting of one large group. Without a limit on the group size, scanning such a block with the algorithm described in this document would have a complexity of ''O(N<sup>2</sup>)'' for that entity, taking several minutes on modern systems. By capping the group size at ''K<sub>max</sub>'', we reduce the inner loop iterations to ''K<sub>max</sub>'', thereby decreasing the worst-case block scanning complexity to ''O(N·K<sub>max</sub>)''. This cuts down the scanning cost to the order of tens of seconds. The chosen value of ''K<sub>max</sub>'' = 2324 represents the maximum number of P2TR outputs that can fit into a 100kvB transaction, meaning a transaction that adheres to the current standardness rules is guaranteed to be within the limit. This ensures flexibility and also mitigates potential fingerprinting issues.</ref>

Member

@murchandamus murchandamus left a comment


Did anyone double-check that 2324 is really the right number?

Looks right.

The smallest possible input is 41 bytes (empty input script, no witness). Minimum transaction header is 10 bytes, but the output counter would take 3 bytes here, so it would be 12 bytes. P2TR outputs weigh 43 bytes:

100'000 - 41 - 12 = 99'947
99'947 / 43 = 2324.348837209

So a transaction with at most 100'000 vbytes size cannot create more than 2324 P2TR outputs.
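The arithmetic can be double-checked mechanically, using the same assumptions as above (single minimal input, no witness data counted):

```python
MAX_STANDARD_TX_VSIZE = 100_000  # standardness size limit in vbytes
MIN_INPUT = 41       # outpoint (36) + empty input script (1) + sequence (4)
TX_OVERHEAD = 12     # version (4) + locktime (4) + 1-byte input count
                     # + 3-byte output count (needed for >252 outputs)
P2TR_OUTPUT = 43     # amount (8) + script length (1) + 34-byte P2TR script

remaining = MAX_STANDARD_TX_VSIZE - MIN_INPUT - TX_OVERHEAD  # 99_947
k_max = remaining // P2TR_OUTPUT                             # 2324
```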

@RubenSomsen
Contributor

@murchandamus I don't think this changes the conclusion, but an empty witness is not the right example. At least one input must be a supported key spend (e.g. regular taproot key spend) for the transaction to be eligible to be evaluated by the protocol (otherwise there is no valid source from which the shared secret can be derived).
