Big data op_test benchmark, for checking output consistent in different runs. #10646
Conversation
dzhwinter
commented
May 14, 2018
- refine the op_test framework; move and refine some standalone functions into the test suite.
- add a BenchmarkSuite for checking numerical stability and GPU/CPU speed.
Maybe the change to the mul_op test can be separated into another PR?
This PR is mainly focused on the big-input-data test case, which is how I noticed that the mul_op time cost is poor. But it's fine if you think we should move the mul_op change into a new PR.
start = time.time()
for i in range(iters):
    callback(*args, **kwargs)
elapse = time.time() - start
+=? Should elapse be initialized before the loop?
done.
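The accumulation pattern the reviewer asks about (initializing elapse before the loop and adding to it with +=) could look like the following sketch; the helper name and signature here are illustrative, not taken from the PR:

```python
import time

def benchmark(callback, iters, *args, **kwargs):
    """Accumulate per-iteration wall time across `iters` runs."""
    elapse = 0.0  # initialized before the loop, as the reviewer suggests
    for _ in range(iters):
        start = time.time()
        callback(*args, **kwargs)
        elapse += time.time() - start  # accumulate with +=
    return elapse

# Example: time a trivial callback over 10 iterations.
total = benchmark(lambda: sum(range(1000)), iters=10)
```

Accumulating per-iteration deltas (rather than one start/end pair around the whole loop) also makes it easy to exclude per-iteration setup from the measurement later.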
@@ -72,6 +72,8 @@ def convert_np_dtype_to_dtype_(np_dtype):
        return core.VarDesc.VarType.INT64
    elif dtype == np.bool:
        return core.VarDesc.VarType.BOOL
+   elif dtype == np.uint16:
+       return core.VarDesc.VarType.INT16
uint16 and int16 are different?
You are obviously right, but we have to use INT16 here, because:
- we do not have any unsigned data types in OpProto, neither uint16, uint32, nor uint64:
/~https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/framework.proto#L97
- the uint16 in this PR is only used for float16 in op_test; it seems a little confusing, but that is the case:
/~https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/unittests/op_test.py#L473
The person who did the fp16 work explained the reason: pybind does not have built-in float16 support, so INT16 was chosen to allocate the same amount of memory.
Given the messy datatypes on the Python side, this definitely needs a cleanup.
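The float16-as-uint16 workaround described above can be illustrated with a small NumPy sketch (illustrative only, not code from the PR): since both dtypes occupy 2 bytes per element, a float16 buffer can be reinterpreted as uint16 without copying, which is enough to move it through bindings that lack native float16 support.

```python
import numpy as np

# float16 and uint16 share the same element size (2 bytes),
# so a float16 array can be viewed as uint16 without a copy.
fp16 = np.array([1.0, 2.5, -0.5], dtype=np.float16)
as_u16 = fp16.view(np.uint16)   # reinterpret the same memory

# The byte size is unchanged; only the interpretation differs.
assert as_u16.nbytes == fp16.nbytes

# Viewing back to float16 recovers the original values exactly.
back = as_u16.view(np.float16)
```

This is why the dtype table above maps np.uint16 to INT16: the value is never interpreted as an integer, it is just 16 bits of float16 payload in transit.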
@@ -368,6 +370,13 @@ class Operator(object):
    Block. Users can use the build in instructions to describe their neural
    network.
    """
+   OP_WITHOUT_KERNEL_SET = {
This set is sad... Can you file an issue and assign it to me to clean this up? So that I don't forget.
done.
def customize_fetch_list(self):
    """
    customize fetch list, configure the wanted variables.
    >>> self.fetch_list = ["Out"]
Is this a tab?
The outputs of the operator are automatically inserted into the fetch list; it is the same here.
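As a hypothetical illustration of this hook (the class and attribute names are assumed from the snippet above, not guaranteed to match the final PR), a test case could override customize_fetch_list to restrict which variables are fetched:

```python
class MulOpBenchmark:  # hypothetical test class following the op_test pattern
    def __init__(self):
        # By default, operator outputs are inserted into the fetch
        # list automatically; start empty and let the hook fill it.
        self.fetch_list = []

    def customize_fetch_list(self):
        """Fetch only the variables this benchmark cares about."""
        self.fetch_list = ["Out"]

case = MulOpBenchmark()
case.customize_fetch_list()
```

After the call, case.fetch_list names only the "Out" variable, so the executor would not fetch intermediate results.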
Excellent!