Commit 3fe4010: Finish transition in tests

mariosasko committed Nov 24, 2021
1 parent 0eeeb71 commit 3fe4010
Showing 2 changed files with 0 additions and 7 deletions.
5 changes: 0 additions & 5 deletions tests/test_arrow_writer.py
@@ -58,11 +58,6 @@ def test_try_incompatible_extension_type(self):
         arr = pa.array(TypedSequence(["foo", "bar"], try_type=Array2DExtensionType((1, 3), "int64")))
         self.assertEqual(arr.type, pa.string())
 
-    def test_catch_overflow(self):
-        if config.PYARROW_VERSION.major < 2:
-            with self.assertRaises(OverflowError):
-                _ = pa.array(TypedSequence([["x" * 1024]] * ((2 << 20) + 1)))  # ListArray with a bit more than 2GB
-
 
 def _check_output(output, expected_num_chunks: int):
     stream = pa.BufferReader(output) if isinstance(output, pa.Buffer) else pa.memory_map(output)
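Both deletions in this commit drop the same guard pattern: test code that only ran when the installed pyarrow major version was below some threshold, read from `config.PYARROW_VERSION.major`. A minimal standalone sketch of that pattern, using plain `pyarrow` instead of the `datasets` config helper (the threshold and the guarded body here are only illustrative, not library code):

import pyarrow as pa

# Major component of the installed pyarrow version, roughly what
# datasets.config.PYARROW_VERSION.major exposed in the deleted checks.
pyarrow_major = int(pa.__version__.split(".")[0])

if pyarrow_major < 2:
    # The deleted test asserted that, on pyarrow 1.x only, building an
    # oversized array raised OverflowError; on newer pyarrow the whole
    # branch is dead code, which is why the commit removes it.
    pass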
2 changes: 0 additions & 2 deletions tests/test_dataset_common.py
@@ -285,8 +285,6 @@ def test_load_real_dataset_all_configs(self, dataset_name):

 def get_packaged_dataset_names():
     packaged_datasets = [{"testcase_name": x, "dataset_name": x} for x in _PACKAGED_DATASETS_MODULES.keys()]
-    if datasets.config.PYARROW_VERSION.major < 3:  # parquet is not supported for pyarrow<3.0.0
-        packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
     return packaged_datasets


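With the version guard gone, the helper above reduces to the unconditional form below. This is only a sketch reassembled from the diff context shown in this commit; it assumes `_PACKAGED_DATASETS_MODULES` is already imported in tests/test_dataset_common.py, as the surviving line implies:

def get_packaged_dataset_names():
    # Every packaged dataset module, parquet included, is now listed
    # regardless of the installed pyarrow version.
    packaged_datasets = [{"testcase_name": x, "dataset_name": x} for x in _PACKAGED_DATASETS_MODULES.keys()]
    return packaged_datasets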

1 comment on commit 3fe4010

@github-actions


PyArrow==3.0.0


Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.070855 / 0.011353 (0.059502) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.004050 / 0.011008 (-0.006959) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.031406 / 0.038508 (-0.007102) |
| read_batch_unformated after write_array2d | 0.035464 / 0.023109 (0.012355) |
| read_batch_unformated after write_flattened_sequence | 0.295987 / 0.275898 (0.020089) |
| read_batch_unformated after write_nested_sequence | 0.331140 / 0.323480 (0.007661) |
| read_col_formatted_as_numpy after write_array2d | 0.081642 / 0.007986 (0.073657) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.004973 / 0.004328 (0.000645) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.009178 / 0.004250 (0.004927) |
| read_col_unformated after write_array2d | 0.040807 / 0.037052 (0.003754) |
| read_col_unformated after write_flattened_sequence | 0.296483 / 0.258489 (0.037994) |
| read_col_unformated after write_nested_sequence | 0.334721 / 0.293841 (0.040880) |
| read_formatted_as_numpy after write_array2d | 0.085618 / 0.128546 (-0.042928) |
| read_formatted_as_numpy after write_flattened_sequence | 0.008809 / 0.075646 (-0.066837) |
| read_formatted_as_numpy after write_nested_sequence | 0.253305 / 0.419271 (-0.165966) |
| read_unformated after write_array2d | 0.046397 / 0.043533 (0.002864) |
| read_unformated after write_flattened_sequence | 0.299517 / 0.255139 (0.044378) |
| read_unformated after write_nested_sequence | 0.323993 / 0.283200 (0.040793) |
| write_array2d | 0.084572 / 0.141683 (-0.057111) |
| write_flattened_sequence | 1.741209 / 1.452155 (0.289055) |
| write_nested_sequence | 1.813434 / 1.492716 (0.320717) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.219483 / 0.018006 (0.201476) |
| get_batch_of_1024_rows | 0.438240 / 0.000490 (0.437750) |
| get_first_row | 0.002609 / 0.000200 (0.002410) |
| get_last_row | 0.000076 / 0.000054 (0.000022) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.036728 / 0.037411 (-0.000684) |
| shard | 0.023448 / 0.014526 (0.008922) |
| shuffle | 0.030524 / 0.176557 (-0.146032) |
| sort | 0.200840 / 0.737135 (-0.536295) |
| train_test_split | 0.031514 / 0.296338 (-0.264825) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.418815 / 0.215209 (0.203606) |
| read 50000 | 4.200895 / 2.077655 (2.123240) |
| read_batch 50000 10 | 1.788439 / 1.504120 (0.284319) |
| read_batch 50000 100 | 1.577130 / 1.541195 (0.035935) |
| read_batch 50000 1000 | 1.653054 / 1.468490 (0.184564) |
| read_formatted numpy 5000 | 0.417340 / 4.584777 (-4.167437) |
| read_formatted pandas 5000 | 4.609503 / 3.745712 (0.863791) |
| read_formatted tensorflow 5000 | 3.697648 / 5.269862 (-1.572213) |
| read_formatted torch 5000 | 0.886275 / 4.565676 (-3.679402) |
| read_formatted_batch numpy 5000 10 | 0.050299 / 0.424275 (-0.373976) |
| read_formatted_batch numpy 5000 1000 | 0.010908 / 0.007607 (0.003301) |
| shuffled read 5000 | 0.525225 / 0.226044 (0.299180) |
| shuffled read 50000 | 5.238551 / 2.268929 (2.969622) |
| shuffled read_batch 50000 10 | 2.249542 / 55.444624 (-53.195082) |
| shuffled read_batch 50000 100 | 1.885967 / 6.876477 (-4.990510) |
| shuffled read_batch 50000 1000 | 2.012581 / 2.142072 (-0.129492) |
| shuffled read_formatted numpy 5000 | 0.531366 / 4.805227 (-4.273861) |
| shuffled read_formatted_batch numpy 5000 10 | 0.114925 / 6.500664 (-6.385740) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.057698 / 0.075469 (-0.017771) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.566393 / 1.841788 (-0.275395) |
| map fast-tokenizer batched | 12.219270 / 8.074308 (4.144962) |
| map identity | 26.631639 / 10.191392 (16.440247) |
| map identity batched | 0.796329 / 0.680424 (0.115905) |
| map no-op batched | 0.532862 / 0.534201 (-0.001339) |
| map no-op batched numpy | 0.368610 / 0.579283 (-0.210673) |
| map no-op batched pandas | 0.502200 / 0.434364 (0.067837) |
| map no-op batched pytorch | 0.251601 / 0.540337 (-0.288737) |
| map no-op batched tensorflow | 0.263179 / 1.386936 (-1.123757) |
PyArrow==latest

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.071056 / 0.011353 (0.059703) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.003822 / 0.011008 (-0.007186) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.029765 / 0.038508 (-0.008743) |
| read_batch_unformated after write_array2d | 0.033748 / 0.023109 (0.010639) |
| read_batch_unformated after write_flattened_sequence | 0.321074 / 0.275898 (0.045176) |
| read_batch_unformated after write_nested_sequence | 0.357897 / 0.323480 (0.034418) |
| read_col_formatted_as_numpy after write_array2d | 0.085537 / 0.007986 (0.077552) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.004632 / 0.004328 (0.000304) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.007323 / 0.004250 (0.003072) |
| read_col_unformated after write_array2d | 0.044471 / 0.037052 (0.007419) |
| read_col_unformated after write_flattened_sequence | 0.317572 / 0.258489 (0.059083) |
| read_col_unformated after write_nested_sequence | 0.357422 / 0.293841 (0.063581) |
| read_formatted_as_numpy after write_array2d | 0.085120 / 0.128546 (-0.043426) |
| read_formatted_as_numpy after write_flattened_sequence | 0.008801 / 0.075646 (-0.066845) |
| read_formatted_as_numpy after write_nested_sequence | 0.252891 / 0.419271 (-0.166380) |
| read_unformated after write_array2d | 0.045567 / 0.043533 (0.002034) |
| read_unformated after write_flattened_sequence | 0.321272 / 0.255139 (0.066133) |
| read_unformated after write_nested_sequence | 0.340099 / 0.283200 (0.056900) |
| write_array2d | 0.080453 / 0.141683 (-0.061230) |
| write_flattened_sequence | 1.668890 / 1.452155 (0.216736) |
| write_nested_sequence | 1.705737 / 1.492716 (0.213020) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.302628 / 0.018006 (0.284622) |
| get_batch_of_1024_rows | 0.440631 / 0.000490 (0.440142) |
| get_first_row | 0.015342 / 0.000200 (0.015143) |
| get_last_row | 0.000251 / 0.000054 (0.000197) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.034855 / 0.037411 (-0.002556) |
| shard | 0.021352 / 0.014526 (0.006826) |
| shuffle | 0.032468 / 0.176557 (-0.144088) |
| sort | 0.196991 / 0.737135 (-0.540145) |
| train_test_split | 0.030557 / 0.296338 (-0.265781) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.427358 / 0.215209 (0.212149) |
| read 50000 | 4.293670 / 2.077655 (2.216015) |
| read_batch 50000 10 | 1.881336 / 1.504120 (0.377216) |
| read_batch 50000 100 | 1.709406 / 1.541195 (0.168211) |
| read_batch 50000 1000 | 1.766199 / 1.468490 (0.297709) |
| read_formatted numpy 5000 | 0.423745 / 4.584777 (-4.161032) |
| read_formatted pandas 5000 | 4.642978 / 3.745712 (0.897265) |
| read_formatted tensorflow 5000 | 2.061027 / 5.269862 (-3.208835) |
| read_formatted torch 5000 | 0.887271 / 4.565676 (-3.678406) |
| read_formatted_batch numpy 5000 10 | 0.050925 / 0.424275 (-0.373350) |
| read_formatted_batch numpy 5000 1000 | 0.011026 / 0.007607 (0.003419) |
| shuffled read 5000 | 0.539497 / 0.226044 (0.313452) |
| shuffled read 50000 | 5.348246 / 2.268929 (3.079317) |
| shuffled read_batch 50000 10 | 2.367863 / 55.444624 (-53.076761) |
| shuffled read_batch 50000 100 | 2.042077 / 6.876477 (-4.834400) |
| shuffled read_batch 50000 1000 | 2.196495 / 2.142072 (0.054422) |
| shuffled read_formatted numpy 5000 | 0.539554 / 4.805227 (-4.265674) |
| shuffled read_formatted_batch numpy 5000 10 | 0.117366 / 6.500664 (-6.383298) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.058879 / 0.075469 (-0.016590) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.546793 / 1.841788 (-0.294994) |
| map fast-tokenizer batched | 12.354151 / 8.074308 (4.279843) |
| map identity | 27.217696 / 10.191392 (17.026304) |
| map identity batched | 0.704045 / 0.680424 (0.023621) |
| map no-op batched | 0.532919 / 0.534201 (-0.001282) |
| map no-op batched numpy | 0.375332 / 0.579283 (-0.203951) |
| map no-op batched pandas | 0.503304 / 0.434364 (0.068940) |
| map no-op batched pytorch | 0.258829 / 0.540337 (-0.281508) |
| map no-op batched tensorflow | 0.270338 / 1.386936 (-1.116598) |
