Changes from 2 commits
@@ -189,7 +189,7 @@ public static class RealtimeToOfflineSegmentsTask extends MergeTask {
DISTINCTCOUNTRAWTHETASKETCH, DISTINCTCOUNTTUPLESKETCH, DISTINCTCOUNTRAWINTEGERSUMTUPLESKETCH,
SUMVALUESINTEGERSUMTUPLESKETCH, AVGVALUEINTEGERSUMTUPLESKETCH, DISTINCTCOUNTHLLPLUS,
DISTINCTCOUNTRAWHLLPLUS, DISTINCTCOUNTCPCSKETCH, DISTINCTCOUNTRAWCPCSKETCH, DISTINCTCOUNTULL,
DISTINCTCOUNTRAWULL, PERCENTILEKLL, PERCENTILERAWKLL);
DISTINCTCOUNTRAWULL, PERCENTILEKLL, PERCENTILERAWKLL, PERCENTILETDIGEST, PERCENTILERAWTDIGEST);
}

// Generate segment and push to controller based on batch ingestion configs
@@ -0,0 +1,63 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.pinot.core.segment.processing.aggregator;

import com.tdunning.math.stats.TDigest;
import java.util.Map;
import org.apache.pinot.core.common.ObjectSerDeUtils;
import org.apache.pinot.core.query.aggregation.function.PercentileTDigestAggregationFunction;
import org.apache.pinot.segment.spi.Constants;


public class PercentileTDigestAggregator implements ValueAggregator {

@Override
public Object aggregate(Object value1, Object value2, Map<String, String> functionParameters) {
byte[] bytes1 = (byte[]) value1;
byte[] bytes2 = (byte[]) value2;

// Empty byte arrays represent the default null value for BYTES columns.
// Deserializing byte[0] would throw BufferUnderflowException, so handle it explicitly.
if (bytes1.length == 0 && bytes2.length == 0) {
int compression = getCompression(functionParameters);
return ObjectSerDeUtils.TDIGEST_SER_DE.serialize(TDigest.createMergingDigest(compression));
Contributor:

Does it work if you simply return empty bytes?

Contributor Author (@justahuman1, Apr 13, 2026):

Thanks for the quick review!

I don't think returning empty bytes is safe. The query-time aggregation function and RollupReducer both deserialize without checking for empty bytes. At LinkedIn, we recently hit this exact issue with HLL in production, where empty bytes caused a BufferUnderflowException during the query execution path (I wanted to send a separate PR to add a guard for all aggregators as well, if that makes sense). The reason for serializing an empty digest here is to ensure everything downstream can parse it properly.

What do you think/recommend?


I tried caching this in a static variable (for perf reasons), but the serialized bytes of an empty digest differ across compression values, so a single cached constant wouldn't be correct. See the script below for a test:

import com.tdunning.math.stats.TDigest;
import java.nio.ByteBuffer;
import java.util.Arrays;

public class TDigestTest {
    public static void main(String[] args) {
        TDigest d100 = TDigest.createMergingDigest(100);
        TDigest d200 = TDigest.createMergingDigest(200);

        ByteBuffer buf100 = ByteBuffer.allocate(d100.smallByteSize());
        d100.asSmallBytes(buf100);
        byte[] bytes100 = buf100.array();

        ByteBuffer buf200 = ByteBuffer.allocate(d200.smallByteSize());
        d200.asSmallBytes(buf200);
        byte[] bytes200 = buf200.array();

        System.out.println("compression=100 size: " + bytes100.length + " bytes: " + Arrays.toString(bytes100));
        System.out.println("compression=200 size: " + bytes200.length + " bytes: " + Arrays.toString(bytes200));
        System.out.println("equal: " + Arrays.equals(bytes100, bytes200));
    }
}

Output:

compression=100 size: 30 bytes: [0, 0, 0, 2, 127, -16, 0, 0, 0, 0, 0, 0, -1, -16, 0, 0, 0, 0, 0, 0, 66, -56, 0, 0, 0, -46, 1, -12, 0, 0]
compression=200 size: 30 bytes: [0, 0, 0, 2, 127, -16, 0, 0, 0, 0, 0, 0, -1, -16, 0, 0, 0, 0, 0, 0, 67, 72, 0, 0, 1, -102, 3, -24, 0, 0]

Contributor:

If we allow empty bytes in the value aggregator (i.e. the raw value), we should also allow it in the aggregation function and roll-up reducer. They should follow the same contract for input values.

Contributor Author:

Good point. Given that I am still new to the Pinot codebase, I would like to rely on your judgement. What do you believe is the right call? Should I make this change in the aggregation function and roll-up reducer, or shall I return empty bytes as you initially suggested?

Contributor:

Basically we want to treat empty bytes as null (i.e. missing value). We should handle empty bytes in all 3 places.
It is okay to merge the current PR and fix this in a separate PR, where we modify all the different functions to follow this convention. Let me know how you want to proceed.
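The convention proposed here ("empty bytes means null") can be sketched as a generic guard around any binary merge. This is an illustrative stand-alone sketch, not Pinot code; the class name and the `mergeFn` parameter are hypothetical:

```java
import java.util.function.BinaryOperator;

public class EmptyBytesContract {
    // Sketch of the convention discussed above: an empty byte[] is treated as
    // a missing value. mergeFn is only invoked when both sides carry payloads.
    static byte[] merge(byte[] a, byte[] b, BinaryOperator<byte[]> mergeFn) {
        if (a.length == 0) {
            return b; // left side missing: pass the right side through unchanged
        }
        if (b.length == 0) {
            return a; // right side missing: pass the left side through unchanged
        }
        return mergeFn.apply(a, b);
    }

    public static void main(String[] args) {
        byte[] empty = new byte[0];
        byte[] payload = {1, 2, 3};
        System.out.println(merge(empty, payload, (x, y) -> x).length); // 3
        System.out.println(merge(payload, empty, (x, y) -> y).length); // 3
        System.out.println(merge(empty, empty, (x, y) -> x).length);   // 0
    }
}
```

Applying the same guard in the value aggregator, the aggregation function, and the roll-up reducer would give all three the identical contract for missing input values.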


Contributor Author (@justahuman1, Apr 18, 2026):

Thanks for your patience @Jackie-Jiang, I was waiting for tests/builds to finish and also wanted to think this through. I updated this PR to treat empty bytes as missing in PercentileTDigestAggregator (it returns empty bytes now).

I see that a similar fix for CPC currently handles empty bytes (see #17925). I agree it should be a separate PR; I want to think through the tradeoff more (and probably add metrics so silent nulls aren't invisible) before picking it up.

Let me know if you're in favor of adding this null handling throughout T-Digest (and potentially other sketches) and I will send it out in a separate PR.

}
if (bytes1.length == 0) {
return bytes2;
}
if (bytes2.length == 0) {
return bytes1;
}

int compression = getCompression(functionParameters);
TDigest first = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(bytes1);
TDigest second = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(bytes2);
TDigest merged = TDigest.createMergingDigest(compression);
merged.add(first);
merged.add(second);
return ObjectSerDeUtils.TDIGEST_SER_DE.serialize(merged);
}

private int getCompression(Map<String, String> functionParameters) {
String compressionParam = functionParameters.get(Constants.PERCENTILETDIGEST_COMPRESSION_FACTOR_KEY);
return compressionParam != null
? Integer.parseInt(compressionParam)
: PercentileTDigestAggregationFunction.DEFAULT_TDIGEST_COMPRESSION;
}
}
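The compression-factor lookup in `getCompression` can be exercised in isolation. The sketch below mirrors that helper in stand-alone form; the literal key `"compressionFactor"` and the default of `100` are assumptions here, standing in for `Constants.PERCENTILETDIGEST_COMPRESSION_FACTOR_KEY` and `PercentileTDigestAggregationFunction.DEFAULT_TDIGEST_COMPRESSION`:

```java
import java.util.HashMap;
import java.util.Map;

public class CompressionParamDemo {
    // Stand-alone mirror of the getCompression helper above. The key name and
    // the default value 100 are illustrative assumptions, not the real constants.
    static int getCompression(Map<String, String> functionParameters) {
        String compressionParam = functionParameters.get("compressionFactor");
        return compressionParam != null ? Integer.parseInt(compressionParam) : 100;
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        System.out.println(getCompression(params)); // no entry: falls back to the default
        params.put("compressionFactor", "200");
        System.out.println(getCompression(params)); // explicit value wins
    }
}
```

Note that `Integer.parseInt` throws `NumberFormatException` on malformed values, which is why `validateTaskConfigs` (further down in this diff) checks the parameter value up front.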
@@ -64,6 +64,9 @@ public static ValueAggregator getValueAggregator(AggregationFunctionType aggrega
case PERCENTILEKLL:
case PERCENTILERAWKLL:
return new PercentileKLLSketchAggregator();
case PERCENTILETDIGEST:
case PERCENTILERAWTDIGEST:
return new PercentileTDigestAggregator();
default:
throw new IllegalStateException("Unsupported aggregation type: " + aggregationType);
}
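The factory change above maps both the plain and the raw percentile variants to one aggregator instance. A stand-alone sketch of that dispatch shape, with hypothetical string keys standing in for Pinot's `AggregationFunctionType` enum values:

```java
import java.util.Map;
import java.util.function.Supplier;

public class AggregatorFactorySketch {
    // Illustrative stand-ins for the Pinot interfaces; not the real types.
    interface ValueAggregator { }
    static class PercentileTDigestAggregator implements ValueAggregator { }

    // Two function types deliberately share one aggregator, mirroring the
    // PERCENTILETDIGEST / PERCENTILERAWTDIGEST cases in the switch above.
    static final Map<String, Supplier<ValueAggregator>> FACTORY = Map.of(
        "PERCENTILETDIGEST", PercentileTDigestAggregator::new,
        "PERCENTILERAWTDIGEST", PercentileTDigestAggregator::new);

    static ValueAggregator get(String type) {
        Supplier<ValueAggregator> supplier = FACTORY.get(type);
        if (supplier == null) {
            throw new IllegalStateException("Unsupported aggregation type: " + type);
        }
        return supplier.get();
    }
}
```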
@@ -0,0 +1,139 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.pinot.core.segment.processing.aggregator;

import com.tdunning.math.stats.TDigest;
import java.util.HashMap;
import java.util.Map;
import org.apache.pinot.core.common.ObjectSerDeUtils;
import org.apache.pinot.segment.spi.Constants;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertNotNull;


public class PercentileTDigestAggregatorTest {

private PercentileTDigestAggregator _aggregator;

@BeforeMethod
public void setUp() {
_aggregator = new PercentileTDigestAggregator();
}

@Test
public void testAggregateWithDefaultCompression() {
TDigest first = TDigest.createMergingDigest(100);
for (int i = 0; i < 100; i++) {
first.add(i);
}
TDigest second = TDigest.createMergingDigest(100);
for (int i = 100; i < 200; i++) {
second.add(i);
}

byte[] value1 = ObjectSerDeUtils.TDIGEST_SER_DE.serialize(first);
byte[] value2 = ObjectSerDeUtils.TDIGEST_SER_DE.serialize(second);

Map<String, String> functionParameters = new HashMap<>();
byte[] result = (byte[]) _aggregator.aggregate(value1, value2, functionParameters);

TDigest resultDigest = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(result);
assertNotNull(resultDigest);
assertEquals(resultDigest.size(), 200);
assertEquals(resultDigest.quantile(0.5), 99.5, 1);
}

@Test
public void testAggregateWithCustomCompression() {
TDigest first = TDigest.createMergingDigest(100);
for (int i = 0; i < 50; i++) {
first.add(i);
}
TDigest second = TDigest.createMergingDigest(100);
for (int i = 50; i < 100; i++) {
second.add(i);
}

byte[] value1 = ObjectSerDeUtils.TDIGEST_SER_DE.serialize(first);
byte[] value2 = ObjectSerDeUtils.TDIGEST_SER_DE.serialize(second);

Map<String, String> functionParameters = new HashMap<>();
functionParameters.put(Constants.PERCENTILETDIGEST_COMPRESSION_FACTOR_KEY, "200");

byte[] result = (byte[]) _aggregator.aggregate(value1, value2, functionParameters);

TDigest resultDigest = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(result);
assertNotNull(resultDigest);
assertEquals(resultDigest.size(), 100);
assertEquals(resultDigest.quantile(0.5), 49.5, 1);
}

@Test
public void testAggregateWithBothEmptyBytes() {
byte[] empty1 = new byte[0];
byte[] empty2 = new byte[0];

Map<String, String> functionParameters = new HashMap<>();
byte[] result = (byte[]) _aggregator.aggregate(empty1, empty2, functionParameters);

// Should return a valid serialized empty TDigest, not crash
TDigest resultDigest = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(result);
assertNotNull(resultDigest);
assertEquals(resultDigest.size(), 0);
}

@Test
public void testAggregateWithFirstEmptyBytes() {
TDigest second = TDigest.createMergingDigest(100);
for (int i = 0; i < 50; i++) {
second.add(i);
}
byte[] empty = new byte[0];
byte[] value2 = ObjectSerDeUtils.TDIGEST_SER_DE.serialize(second);

Map<String, String> functionParameters = new HashMap<>();
byte[] result = (byte[]) _aggregator.aggregate(empty, value2, functionParameters);

// Should return the non-empty side as-is
assertEquals(result, value2);
TDigest resultDigest = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(result);
assertEquals(resultDigest.size(), 50);
}

@Test
public void testAggregateWithSecondEmptyBytes() {
TDigest first = TDigest.createMergingDigest(100);
for (int i = 0; i < 50; i++) {
first.add(i);
}
byte[] value1 = ObjectSerDeUtils.TDIGEST_SER_DE.serialize(first);
byte[] empty = new byte[0];

Map<String, String> functionParameters = new HashMap<>();
byte[] result = (byte[]) _aggregator.aggregate(value1, empty, functionParameters);

// Should return the non-empty side as-is
assertEquals(result, value1);
TDigest resultDigest = ObjectSerDeUtils.TDIGEST_SER_DE.deserialize(result);
assertEquals(resultDigest.size(), 50);
}
}
@@ -504,7 +504,8 @@ public void validateTaskConfigs(TableConfig tableConfig, Schema schema, Map<Stri
// check no mis-configured aggregation function parameters
Set<String> allowedFunctionParameterNames = ImmutableSet.of(Constants.CPCSKETCH_LGK_KEY.toLowerCase(),
Constants.THETA_TUPLE_SKETCH_SAMPLING_PROBABILITY.toLowerCase(),
Constants.THETA_TUPLE_SKETCH_NOMINAL_ENTRIES.toLowerCase());
Constants.THETA_TUPLE_SKETCH_NOMINAL_ENTRIES.toLowerCase(),
Constants.PERCENTILETDIGEST_COMPRESSION_FACTOR_KEY.toLowerCase());
Map<String, Map<String, String>> aggregationFunctionParameters =
MergeRollupTaskUtils.getAggregationFunctionParameters(taskConfigs);
for (String fieldName : aggregationFunctionParameters.keySet()) {
@@ -515,10 +516,12 @@ public void validateTaskConfigs(TableConfig tableConfig, Schema schema, Map<Stri
for (String functionParameterName : functionParameters.keySet()) {
// check that function parameter name is valid
Preconditions.checkState(allowedFunctionParameterNames.contains(functionParameterName.toLowerCase()),
"Aggregation function parameter name must be one of [lgK, samplingProbability, nominalEntries]!");
"Aggregation function parameter name must be one of [lgK, samplingProbability, nominalEntries,"
Contributor:

Consider simply printing out allowedFunctionParameterNames.

Contributor Author:

Done. In case it matters, just want to call out that this results in a cosmetic difference (the set is lowercase).

[lgk, samplingprobability, nominalentries, compressionfactor]

+ " compressionFactor]!");
// check that function parameter value is valid for nominal entries
if (functionParameterName.equalsIgnoreCase(Constants.CPCSKETCH_LGK_KEY)
|| functionParameterName.equalsIgnoreCase(Constants.THETA_TUPLE_SKETCH_NOMINAL_ENTRIES)) {
|| functionParameterName.equalsIgnoreCase(Constants.THETA_TUPLE_SKETCH_NOMINAL_ENTRIES)
|| functionParameterName.equalsIgnoreCase(Constants.PERCENTILETDIGEST_COMPRESSION_FACTOR_KEY)) {
String value = functionParameters.get(functionParameterName);
String err = "Aggregation function parameter \"" + functionParameterName + "\" on column \"" + fieldName
+ "\" has invalid value: " + value;
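The case-insensitive parameter-name check can be sketched on its own. The allowed names are lower-cased once, and lookups lower-case the incoming name, which is exactly why the error message prints the lowercase set noted in the thread above. The class name and plain string literals below are illustrative stand-ins for the `Constants` keys:

```java
import java.util.Set;

public class ParamNameCheck {
    // Stand-alone sketch of the case-insensitive check in validateTaskConfigs.
    // The set holds lower-cased names, so the generated message prints
    // [lgk, samplingprobability, nominalentries, compressionfactor].
    static final Set<String> ALLOWED =
        Set.of("lgk", "samplingprobability", "nominalentries", "compressionfactor");

    static void check(String functionParameterName) {
        if (!ALLOWED.contains(functionParameterName.toLowerCase())) {
            throw new IllegalStateException(
                "Aggregation function parameter name must be one of " + ALLOWED + "!");
        }
    }

    public static void main(String[] args) {
        check("compressionFactor"); // passes regardless of the caller's casing
        check("NOMINALENTRIES");    // also passes
        try {
            check("unknownParam");
        } catch (IllegalStateException e) {
            System.out.println("rejected: unknownParam");
        }
    }
}
```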