Conversation
Thanks for your contributions @huangzhiyuan.
@mxnet-label-bot Add [MKLDNN, pr-awaiting-review]
mkldnn::algorithm GetMKLDNNActAlgo(const ActivationParam& param);
mkldnn::eltwise_forward::primitive_desc GetActFwdDescImpl(
    const ActivationParam& param, bool is_train,
unify the format of "&"
Not necessary for bool or int types; please refer to /~https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/mkldnn/mkldnn_pooling-inl.h#L139
cc @anirudh2290
 */
/*!
 * Copyright (c) 2019 by Contributors
 * \file mkldnn_act-inl.h
Fix file name.
Done
const std::vector<NDArray>& out_data) {
  CHECK(in_data[0].dtype() == mshadow::kUint8 ||
        in_data[0].dtype() == mshadow::kInt8)
      << "mkldnn_quantized_activation op only supports uint8 and int8 as input "
mkldnn_quantized_activation is not a valid operator name.
Done
if (param.act_type == activation::kReLU) {
  TYPE_ASSIGN_CHECK(*out_type, 0, mshadow::kInt8);
} else {
  LOG(FATAL) << "QuantizedActivationOp only supports act_type=relu for now";
QuantizedActivationOp is not a valid operator name.
Done
@@ -414,6 +414,57 @@ def check_quantized_flatten(shape, qdtype):
    check_quantized_flatten((10, 15, 18), qdtype)
    check_quantized_flatten((3, 4, 23, 23), qdtype)

@with_seed()
def test_quantized_act():
Do we have cases for non-relu activation types?
Negative test cases are not necessary there; it will prompt that _contrib_quantized_act only supports act_type=relu for now.
Fine. I'm personally okay with there being no non-relu cases in the UT. But do you mind running one on your machine and pasting the error log here? I just want to make sure that the error happens at an appropriate place and that the message is accurate.
Sure, the error log looks like the one below when changing the activation type from relu to a non-relu type (e.g. sigmoid):
[16:03:13] src/operator/quantization/quantized_activation.cc:54: _contrib_quantized_act only supports act_type=relu for now
Aborted
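For reference, a minimal sketch of the kind of call that triggers it; the exact Python binding and argument names here are assumptions based on MXNet's other _contrib_quantized_* operators:

import mxnet as mx

# Quantized int8 input plus its float min/max range, as the quantized ops expect.
data = mx.nd.random.uniform(-127, 127, shape=(1, 16)).astype('int8')
min_data = mx.nd.array([-1.0])
max_data = mx.nd.array([1.0])

# act_type='sigmoid' is rejected; the operator logs the fatal error above.
out = mx.nd.contrib.quantized_act(data, min_data, max_data, act_type='sigmoid')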
Please retrigger the CI
@TaoLv @ZhennanQin @ciyongch please help review the PR again.
Thank you for addressing my comments. Approved now.
Thanks for your contribution. LGTM
@huangzhiyuan please rebase the code since I have merged the quantize v2 changes. I will merge the PR if everything is fine.
Merging now, thanks for your contribution :)
* add quantized relu
* fix testcase
* add author and skip quantized-relu for gpu
* fix comments
* retrigger ci
* retrigger ci
* comment fix
* retrigger ci
* retrigger ci
Description
This PR adds a quantized relu op and its MKLDNN implementation.
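As a rough usage sketch (the exact signature and outputs are assumed from MXNet's other _contrib_quantized_* operators, which return the result together with its min/max range):

import mxnet as mx

# int8 input with its calibrated float range.
data = mx.nd.array([[-5, 0, 7]], dtype='int8')
min_data = mx.nd.array([-1.0])
max_data = mx.nd.array([1.0])

# Quantized relu works directly in the int8 domain and passes the
# range through, avoiding a dequantize/requantize round trip.
out, min_out, max_out = mx.nd.contrib.quantized_act(
    data, min_data, max_data, act_type='relu')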
@pengzhao-intel @ZhennanQin @TaoLv