Merged
16 changes: 10 additions & 6 deletions .test-infra/jenkins/job_LoadTests_ParDo_Flink_Python.groovy
@@ -53,12 +53,15 @@ String now = new Date().format("MMddHHmmss", TimeZone.getTimeZone('UTC'))
* GROUP BY test_id, timestamp
* ORDER BY timestamp
* );
*
* Subsumed by the new Grafana dashboard:
* http://metrics.beam.apache.org/d/MOi-kf3Zk/pardo-load-tests?orgId=1&var-processingType=streaming&var-sdk=python
*/

def batchScenarios = { datasetName ->
[
[
title : 'ParDo Python Load test: 20M 100 byte records 10 times',
title : 'ParDo Python Load test: 20M 100 byte records 10 iterations',
test : 'apache_beam.testing.load_tests.pardo_test',
runner : CommonTestProperties.Runner.PORTABLE,
pipelineOptions: [
@@ -161,19 +164,20 @@ def streamingScenarios = { datasetName ->
test : 'apache_beam.testing.load_tests.pardo_test',
runner : CommonTestProperties.Runner.PORTABLE,
pipelineOptions: [
job_name : 'load-tests-python-flink-streaming-pardo-5-' + now,
job_name : 'load-tests-python-flink-streaming-pardo-1-' + now,
Contributor:

I have mixed feelings about it. This test is not load-tests-python-flink-batch-pardo-1 but its streaming counterpart, and there are more differences between them: batch-pardo-1 uses 10 iterations, this test uses 5; batch-pardo-1 has 0 counters vs. 3 counters here. Because of that, I think we should stay with the previous job_name: load-tests-python-flink-streaming-pardo-5.

The general idea behind load tests is that we run the same configuration on different runners, in different SDKs, and in different modes (batch or streaming). The Grafana dashboards for load tests were designed with that convention in mind. If you choose Java and streaming from the list, Grafana will pull data from these measurements: java_streaming_pardo_1, java_streaming_pardo_2, and so on. Your streaming tests are a bit problematic because they are not run on Dataflow or in batch mode. Also, they have no Java counterpart.

That being said, I can think of two solutions:

  1. Add more charts. We would end up with a total of six charts. The fifth and sixth charts would be empty in most cases (for Java and for batch).
  2. Create a separate, more specific version of the dashboard just for these two tests (streaming-pardo-5 and streaming-pardo-6), and leave the "ParDo Load Tests" dashboard intact.

@mxm What do you think?
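The measurement naming convention described above (names composed from the SDK, the mode, and the test number) can be sketched as follows. This is an illustrative helper, not the actual dashboard or test-infra code; the function name and separator are assumptions based on the examples given in the comment:

```python
def influx_measurement(sdk: str, mode: str, test_id: int) -> str:
    # Illustrative sketch: the load-test Grafana dashboards pull data
    # from measurements named "<sdk>_<mode>_pardo_<n>", e.g.
    # "java_streaming_pardo_1" or "python_streaming_pardo_1".
    return f"{sdk}_{mode}_pardo_{test_id}"

print(influx_measurement("java", "streaming", 1))    # java_streaming_pardo_1
print(influx_measurement("python", "streaming", 1))  # python_streaming_pardo_1
```

This composition is why the dropdown-driven dashboard only finds a test's data when the measurement name follows the shared convention.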

Contributor Author (@mxm, Aug 10, 2020):

Note, this is just the job name. More important is the table we are writing to, further down. Unfortunately, the Grafana setup forces me to do this. I would rather not change it, but the Grafana setup is very inflexible and, in this regard, a regression from the old framework we used: https://apache-beam-testing.appspot.com/explore?dashboard=5751884853805056

> Your streaming tests are a bit problematic, because they are not being run on Dataflow and batch.

To be honest, I don't fully understand your point. In order for the dropdown menus to work properly, i.e. choosing the SDK and the mode (batch/streaming), this change is required because the table name is composed of $sdk_$mode_. The test parameters looked identical to me for Dataflow/Flink. If the iterations don't match, we can adjust that. The input is already the same.

Adding more charts would be another option. We would have to remove the streaming dropdown and just add one chart per streaming and batch run. I think that is the best option. It gives us a bit more flexibility.

Contributor Author:

@kamilwu If you agree, I'd remove the streaming/batch dropdown and just add a new chart for the streaming mode. I suppose that is a better migration path because there are no other streaming load tests at the moment.

Contributor:

> there are no other streaming load tests at the moment.

Not quite. At the moment, we have streaming load tests for Java (Dataflow only). Apart from that, I'm investigating running other Python load tests (ParDo 1, 2, 3 and 4) in streaming mode too.

Contributor:

@kkucharc is actually working on streaming load tests for Python on Dataflow and she's already prepared a PR: #12435. We would like to show metrics from these new tests too.

Contributor:

If not, then I'm fine with adding new charts (I suppose you meant "chart"; a "dashboard" is a different kind of thing) and removing the selector for batch/streaming.

Contributor Author:

> @mxm Do you think it is possible to adjust those parameters so that pardo-5 can become pardo-1 and pardo-6 can become pardo-2, pardo-3 or pardo-4? The main advantage of this solution is that we wouldn't have to modify dashboards at all. The old version would just work.

That was the original idea in this PR, which I understood you didn't like: pardo_5 became pardo_1. As for pardo_6, that's not possible because it measures the checkpoint duration and should be a separate panel.

> If not, then I'm fine with adding new charts (I suppose you meant "chart"; a "dashboard" is a different kind of thing) and removing the selector for batch/streaming.

Yes, I meant panel, corrected above.

Contributor:

> As for pardo_6, that's not possible because it measures the checkpoint duration and should be a separate panel.

I see. Then let's go the other way (adding new charts and removing the selector). Thank you.

Contributor Author:

I went the other route you suggested and adjusted the parameters for the load tests. Adding more panels seemed like a good idea, but it would also add significant noise to the dashboard.

Contributor Author:

As for the latency / checkpoint duration: I think they are good panels to have, and they are applicable to many runners. I'd like to keep them where they are so we can follow the performance regression guidelines in the release guide.

project : 'apache-beam-testing',
publish_to_big_query : true,
metrics_dataset : datasetName,
// Keep the old name to not break the legacy dashboard
metrics_table : 'python_flink_streaming_pardo_5',
influx_measurement : 'python_streaming_pardo_5',
influx_measurement : 'python_streaming_pardo_1',
input_options : '\'{' +
'"num_records": 2000000,' +
'"key_size": 10,' +
'"value_size": 90}\'',
iterations : 5,
number_of_counter_operations: 10,
number_of_counters : 3,
iterations : 10,
number_of_counter_operations: 0,
number_of_counters : 0,
parallelism : 5,
// Turn on streaming mode (flags are indicated with null values)
streaming : null,
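The inline comment above notes that flag-style pipeline options are indicated with null values. A minimal sketch of how such an option map could be turned into command-line arguments; the helper function and its exact output format are illustrative assumptions, not the actual Beam test-infra code:

```python
def to_pipeline_args(options: dict) -> list:
    # Illustrative sketch: options with a None (Groovy null) value become
    # bare flags like --streaming; all other options become --key=value.
    args = []
    for key, value in options.items():
        if value is None:
            args.append(f"--{key}")
        else:
            args.append(f"--{key}={value}")
    return args

print(to_pipeline_args({"parallelism": 5, "streaming": None}))
# ['--parallelism=5', '--streaming']
```

Under this convention, `streaming : null` in the Groovy scenario map is what ultimately turns on streaming mode for the pipeline.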