
KAFKA-17792: header parsing times out and uses large quantities of memory if the string looks like a number #17510

Merged
gharris1727 merged 7 commits into apache:trunk from msillence:KAFKA-17792
Jan 27, 2025

Conversation

@msillence
Contributor

Jira KAFKA-17792

We have trace headers such as:

"X-B3-SpanId": "74320e6e26adc8f8"

If, however, the value happens to be "407127e212797209", it is treated as numeric and the parser tries to convert it to an exact numeric representation using BigDecimal.
Specifically, the calls to setScale with FLOOR and CEILING rounding seem to cause the problem.

Testing the behaviour of BigInteger and the setScale calls shows there are already limits on the supported number range. Sadly my particular number falls in a range that costs about 25 minutes and 4 GB of memory rather than failing quickly:

1e+1 time 0.0 totalMemory 532676608
1e+10 time 0.0 totalMemory 532676608
1e+100 time 0.0 totalMemory 532676608
1e+1000 time 0.0 totalMemory 532676608
1e+10000 time 0.005 totalMemory 532676608
1e+100000 time 0.035 totalMemory 532676608
1e+1000000 time 0.228 totalMemory 532676608
1e+10000000 time 4.308 totalMemory 926941184
1e+100000000 time 117.119 totalMemory 3221225472
1e+1000000000 time 0.0 totalMemory 3221225472 BigInteger would overflow supported range
1e+10000000000 time 0.001 totalMemory 3221225472 Too many nonzero exponent digits.
1e+100000000000 time 0.0 totalMemory 3221225472 Too many nonzero exponent digits.
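The shape of those timings can be reproduced with a small harness of my own (not the one used for the figures above): setScale(0) on a value like 1eN must materialize all N+1 digits, so cost grows with the exponent.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SetScaleCost {
    public static void main(String[] args) {
        // setScale(0) on 1eN expands the compact form to N+1 real digits,
        // so both time and memory grow with the exponent.
        for (String s : new String[]{"1e+10", "1e+1000", "1e+100000"}) {
            BigDecimal bd = new BigDecimal(s);
            long start = System.nanoTime();
            BigDecimal floored = bd.setScale(0, RoundingMode.FLOOR);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println(s + " -> " + floored.precision() + " digits in " + micros + " us");
        }
    }
}
```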

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@github-actions github-actions bot added streams core Kafka Broker tools connect build Gradle build or GitHub Actions clients labels Oct 15, 2024
@mimaison
Member

Thanks for the PR. Please make sure it only contains changes related to the issue this is addressing. Currently I see a lot of other changes including some non-Apache Kafka files.

@msillence
Contributor Author

> Thanks for the PR. Please make sure it only contains changes related to the issue this is addressing. Currently I see a lot of other changes including some non-Apache Kafka files.

I'm not quite sure what I've done wrong. I cloned the repo, applied my change again, and force-pushed it to my fork, and it still looks wrong.

The change is here msillence@3720951

@msillence
Contributor Author

Oh, now I see the problem: I've mixed up repos. This change is against github.com:confluentinc/kafka.

@github-actions github-actions bot added the small Small PRs label Oct 16, 2024
@msillence
Contributor Author

Actually, there is a simpler solution: https://stackoverflow.com/a/12748321

@msillence msillence force-pushed the KAFKA-17792 branch 2 times, most recently from f8af0e1 to 4b4a8f1 Compare October 17, 2024 19:27
@msillence
Contributor Author

I feel like the correct fix is the StackOverflow test:
bd.signum() == 0 || bd.scale() <= 0 || bd.stripTrailingZeros().scale() <= 0
Sadly there are quite a few tests that expect the perhaps incorrect behaviour: Float.MAX_VALUE, which is 3.4028235E38, looks like an integer to me, but the ceil/floor calls failed due to the number of zeros, so it fell through to another part of the parsing and returned a float.
To further confound the issue, very small numbers such as 1e-100000000 will, with the new integer test, fall through to the float conversion, which only checks the upper range and will simply convert to zero, as float doesn't have the precision. Fixing that breaks yet more tests.
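For reference, the StackOverflow check mentioned above can be sketched as a standalone helper (the method name is mine):

```java
import java.math.BigDecimal;

public class IntegralCheck {
    // True when the BigDecimal represents a mathematical integer, per the
    // linked StackOverflow answer: zero, a non-positive scale (no fractional
    // digits), or only trailing fractional zeros (e.g. "2.50").
    static boolean isIntegerValue(BigDecimal bd) {
        return bd.signum() == 0
            || bd.scale() <= 0
            || bd.stripTrailingZeros().scale() <= 0;
    }

    public static void main(String[] args) {
        // The problematic header value: huge, but integral (negative scale).
        System.out.println(isIntegerValue(new BigDecimal("407127e212797209")));
        // Float.MAX_VALUE is also integral by this definition.
        System.out.println(isIntegerValue(new BigDecimal("3.4028235E38")));
        System.out.println(isIntegerValue(new BigDecimal("1.5")));
    }
}
```

Note that the scale checks short-circuit before any digit expansion, so this is cheap even for 407127e212797209.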

@mimaison mimaison requested a review from gharris1727 October 22, 2024 13:01
Contributor

@gharris1727 gharris1727 left a comment


@msillence Thanks for the PR!

I did not know that setScale with very large numbers was so costly. I see that the old logic (before #15469) used RoundingMode.ROUND_UNNECESSARY, which probably had some guards in place to avoid consuming resources for an impossible operation.

> sadly there are quite a few tests that expect the perhaps incorrect behavior where Float.MAX_VALUE which is 3.4028235E38 looks like a integer to me but the ceil/floor calls failed due to the number of zeros so it fell though to another part of the parsing and returned a float.

That number is indeed an integer, but it's too large to fit in a LONG64, which has ~19 decimals of precision. The tests are correct that this should be parsed as a FLOAT32.

While I understand that having mostly STRINGs with an occasional number-look-alike makes it natural to suggest changing this into a STRING, I don't think that's backwards compatible or safe. Another user could currently have a flow which parses all Decimals, and the proposed change would have some of their decimals fall back to STRING.

To avoid accidental Decimal parsing, I would recommend either an alternative HeaderConverter implementation, or avoiding using IDs that can be confused for numbers, such as with an alphabetic prefix. Even with this patch, you will still get some accidental Decimal, Float, or Double parsing when the e lands in some part of the value.
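As a concrete example of the first suggestion, a Connect worker or connector can opt out of value inference for headers entirely by configuring the plain StringConverter in place of the default SimpleHeaderConverter (property shown as a worker config; adjust to your deployment):

```properties
# Treat all header values as raw UTF-8 strings; no numeric inference.
header.converter=org.apache.kafka.connect.storage.StringConverter
```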

private static SchemaAndValue parseAsExactDecimal(BigDecimal decimal) {
    BigDecimal abs = decimal.abs();
    if (abs.compareTo(TOO_BIG) > 0 || (abs.compareTo(TOO_SMALL) < 0 && BigDecimal.ZERO.compareTo(abs) != 0)) {
        throw new NumberFormatException("outside efficient parsing range");
Contributor


At this stage in parsing, we've successfully parsed a BigDecimal. This method tries to losslessly cast the BigDecimal to a primitive integer type. If the cast would be lossy, it returns null and lets the caller figure out whether it's a float32, float64, or has to be an arbitrary precision decimal.

Rather than throwing NumberFormatException and making this a STRING, we should return null, and let the other fallback logic happen, so this shows up as an arbitrary precision Decimal.

I also suspect that the comparison could be a little tighter in that case: TOO_BIG could be LONG_MAX and TOO_SMALL could be ONE, eliminating the setScale call for values like 1e20, which cannot possibly fit in a long.
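A minimal sketch of that suggestion, with hypothetical names (the real method returns a SchemaAndValue; simplified here to a boxed Long): bound the value before ever calling setScale, and return null instead of throwing so the caller can fall back to Float/Double/Decimal.

```java
import java.math.BigDecimal;

public class ExactDecimalSketch {
    // Bounds per the review suggestion: anything larger in magnitude than
    // Long.MAX_VALUE cannot be an exact long, and a non-zero value smaller
    // than one cannot be either, so skip the costly exact conversion.
    private static final BigDecimal TOO_BIG = new BigDecimal(Long.MAX_VALUE);
    private static final BigDecimal TOO_SMALL = BigDecimal.ONE;

    static Object parseAsExactLong(BigDecimal decimal) {
        BigDecimal abs = decimal.abs();
        // compareTo is cheap even for 1e+100000000: no digit expansion occurs.
        if (abs.compareTo(TOO_BIG) > 0
                || (abs.signum() != 0 && abs.compareTo(TOO_SMALL) < 0)) {
            return null; // fall through to float32/float64/Decimal handling
        }
        try {
            return decimal.longValueExact();
        } catch (ArithmeticException e) {
            return null; // lossy cast, e.g. 1.5
        }
    }
}
```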

@mjsax mjsax removed streams core Kafka Broker tools build Gradle build or GitHub Actions clients labels Nov 2, 2024
@showuon
Member

showuon commented Jan 21, 2025

@gharris1727 , do you have any further comments for this?

Contributor

@gharris1727 gharris1727 left a comment


@msillence Thanks for changing this to fallback to the Float/Double/Decimal logic! We're interested in getting this into the upcoming release, so please let us know if you're able to update the PR soon.

@showuon This PR is currently not mergeable and would cause build/test failures. But the changes needed to get it to mergeable seem minor.

@sillencem

Sorry, I'd been busy with other things. I took another look and I think I've worked out the issues with the long durations now, and eliminated all the ceil/floor calls.

@sillencem

I rebased and the build is now clean

Contributor

@gharris1727 gharris1727 left a comment


LGTM

@gharris1727 gharris1727 merged commit d001b47 into apache:trunk Jan 27, 2025
gharris1727 pushed a commit that referenced this pull request Jan 27, 2025
…ct Values (#17510)

Reviewers: Greg Harris <greg.harris@aiven.io>, Mickael Maison <mickael.maison@gmail.com>
@gharris1727
Contributor

Thanks so much @msillence for reporting and fixing this!

pdruley pushed a commit to pdruley/kafka that referenced this pull request Feb 12, 2025
…ct Values (apache#17510)

Reviewers: Greg Harris <greg.harris@aiven.io>, Mickael Maison <mickael.maison@gmail.com>
manoj-mathivanan pushed a commit to manoj-mathivanan/kafka that referenced this pull request Feb 19, 2025
…ct Values (apache#17510)

Reviewers: Greg Harris <greg.harris@aiven.io>, Mickael Maison <mickael.maison@gmail.com>