
Update the list of algos to benchmark#2337

Merged
rapids-bot[bot] merged 24 commits into rapidsai:branch-22.08 from
jnke2016:branch-22.08_fea_add_algo_for_benchmark
Jun 21, 2022

Conversation

Contributor

@jnke2016 jnke2016 commented Jun 3, 2022

This PR

  1. Updates the way uniform neighbor sample is imported (it has been removed from experimental)
  2. Pins libraft-headers and pyraft to 22.08
  3. Adds Triangle Count to the list of algos to benchmark
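A minimal sketch of the import update described in item 1. The helper below is illustrative, not the PR's actual diff: it assumes `uniform_neighbor_sample` was promoted out of the `cugraph.experimental` namespace in 22.08, and resolves it in a version-tolerant way so older environments still work.

```python
import importlib


def resolve_uniform_neighbor_sample():
    """Look up uniform_neighbor_sample from its new top-level location,
    falling back to the old experimental namespace for pre-22.08 cugraph.
    Returns None when cugraph is not installed at all."""
    for mod_name in ("cugraph", "cugraph.experimental"):
        try:
            mod = importlib.import_module(mod_name)
        except ImportError:
            continue  # module not present in this environment
        fn = getattr(mod, "uniform_neighbor_sample", None)
        if fn is not None:
            return fn
    return None
```

In the benchmark code itself a plain `from cugraph import uniform_neighbor_sample` is the simpler fix once the minimum supported version is 22.08; the fallback loop only matters while both import paths are in the wild.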


codecov-commenter commented Jun 3, 2022

Codecov Report

❗ No coverage uploaded for pull request base (branch-22.08@7100fd5). Click here to learn what that means.
The diff coverage is n/a.

@@               Coverage Diff               @@
##             branch-22.08    #2337   +/-   ##
===============================================
  Coverage                ?   60.10%           
===============================================
  Files                   ?      102           
  Lines                   ?     5158           
  Branches                ?        0           
===============================================
  Hits                    ?     3100           
  Misses                  ?     2058           
  Partials                ?        0           

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7100fd5...b6e69d7. Read the comment docs.

@rlratzel rlratzel added improvement Improvement / enhancement to an existing function non-breaking Non-breaking change python labels Jun 7, 2022
Contributor

rlratzel commented Jun 7, 2022

rerun tests

@rlratzel rlratzel added this to the 22.08 milestone Jun 8, 2022
@jnke2016 jnke2016 marked this pull request as ready for review June 13, 2022 06:13
@jnke2016 jnke2016 requested a review from a team as a code owner June 13, 2022 06:13
@rlratzel
Contributor

rerun tests

Contributor

@rlratzel rlratzel left a comment


Looks good, I just had some minor feedback on one of the test files.

# FIXME: Do more testing for this datasets
# [utils.RAPIDS_DATASET_ROOT_DIR_PATH/"email-Eu-core.csv"]
datasets = utils.DATASETS_UNDIRECTED
# datasets = utils.DATASETS_UNDIRECTED_WEIGHTS
Contributor


Can this be removed?

Comment on lines +128 to +129
# print('input_df = \n', input_df.sort_values([*input_df.columns]))
print('result_df = \n', result_nbr.sort_values(
Contributor


I can see the usefulness of print statements in tests, but there should be a better way to get this information if everyone needs it (e.g., a dedicated test that fails if a particular result is unexpected). If this was just a debug print, I think it should be removed.
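A sketch of the reviewer's suggestion: replace the debug print of the sampled DataFrame with an explicit check that fails the test when the result is unexpected. The function name, the (source, destination) row shape, and the specific invariant checked here are all illustrative assumptions, not the test file's actual schema.

```python
def check_sampled_edges(result_rows, known_vertices):
    """Fail loudly if the sampled result references a vertex that is not
    in the input graph, instead of printing the result for manual
    inspection. result_rows is an iterable of (src, dst) pairs."""
    valid = set(known_vertices)
    for src, dst in result_rows:
        assert src in valid, f"unexpected source vertex in result: {src}"
        assert dst in valid, f"unexpected destination vertex in result: {dst}"
    return True


# Toy data standing in for the real sampled (sources, destinations) columns:
check_sampled_edges([(0, 1), (1, 2)], known_vertices=[0, 1, 2])
```

Under pytest, the bare `assert` gives a readable failure message automatically, so the information the print was surfacing shows up exactly when it matters: on failure.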

@rlratzel
Contributor

@gpucibot merge

@rapids-bot rapids-bot bot merged commit 0bcb6e0 into rapidsai:branch-22.08 Jun 21, 2022
@jnke2016 jnke2016 deleted the branch-22.08_fea_add_algo_for_benchmark branch September 24, 2022 23:06