Update the list of algos to benchmark #2337
rapids-bot[bot] merged 24 commits into rapidsai:branch-22.08 from
Conversation
Codecov Report

@@            Coverage Diff             @@
##          branch-22.08    #2337   +/- ##
===============================================
  Coverage         ?        60.10%
===============================================
  Files            ?           102
  Lines            ?          5158
  Branches         ?             0
===============================================
  Hits             ?          3100
  Misses           ?          2058
  Partials         ?             0

Continue to review full report at Codecov.
rerun tests

rerun tests
rlratzel left a comment
Looks good, I just had some minor feedback on one of the test files.
# FIXME: Do more testing for this datasets
# [utils.RAPIDS_DATASET_ROOT_DIR_PATH/"email-Eu-core.csv"]
datasets = utils.DATASETS_UNDIRECTED
# datasets = utils.DATASETS_UNDIRECTED_WEIGHTS

# print('input_df = \n', input_df.sort_values([*input_df.columns]))
print('result_df = \n', result_nbr.sort_values(
I can see the usefulness of print statements in tests, but if we think everyone needs this information, there should be a better way to get it (e.g., a dedicated test that fails when a particular result is unexpected). If this was just a debug print, though, I think it should be removed.
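The suggestion above could be sketched as a small assertion helper that only surfaces the result on failure, rather than printing unconditionally on every run. This is a minimal illustration, not the project's actual test code; the helper name and the sample values are hypothetical:

```python
def check_neighbor_result(result, expected):
    """Compare two result collections order-insensitively.

    Silent when they match; on mismatch, raise AssertionError with
    the same information a debug print would have shown.
    """
    result_sorted = sorted(result)
    expected_sorted = sorted(expected)
    assert result_sorted == expected_sorted, (
        f"result mismatch:\n"
        f"  got      = {result_sorted}\n"
        f"  expected = {expected_sorted}"
    )


# Passes silently when the contents match, regardless of order.
check_neighbor_result([2, 0, 1], [0, 1, 2])
```

With pytest's assertion introspection, the failure message (and the mismatching values) appears automatically in the test report, so the unconditional `print` becomes unnecessary.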
@gpucibot merge
This PR:
- imports `uniform_neighbor_sample` from the non-experimental module (it has been removed from experimental)
- updates `libraft-headers` and `pyraft` to 22.08
- adds `Triangle count` to the list of algos to benchmark