[QualcommQnn] add ops #9538
Conversation
Thanks for your contribution!
LGTM
Force-pushed from 4352069 to a68da13
lite/backends/nnadapter/nnadapter/src/optimizer/convert_datalayout_nchw_to_nhwc.cc (review comments outdated, resolved)
Force-pushed from a68da13 to 9b1e262
LGTM
LGTM
Support the following ops:
* fusion_elementwise_mul_activation
* fusion_elementwise_sub_activation
* fusion_elementwise_div_activation
* fusion_elementwise_min_activation
* fusion_elementwise_max_activation
* fusion_elementwise_pow_activation
* instance_norm
* prelu
* arg_max
* arg_min
* flatten
* flatten2
* norm
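As a quick illustration (not part of the PR), the sketch below builds a tiny Paddle model that exercises several of these ops; the layer choices and shapes are assumptions for demonstration only. An elementwise multiply followed by an activation is the pattern that Lite's fusion passes can fold into fusion_elementwise_mul_activation.

```python
# Illustrative sketch only: a small Paddle model touching several of the
# ops this PR enables on the QualcommQnn backend. Shapes and layers are
# assumptions, not taken from the PR.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class TinyNet(nn.Layer):
    def __init__(self):
        super().__init__()
        self.inorm = nn.InstanceNorm2D(8)  # lowers to instance_norm
        self.prelu = nn.PReLU()            # lowers to prelu

    def forward(self, x, y):
        # Elementwise mul followed by an activation: the pattern that
        # can be fused into fusion_elementwise_mul_activation.
        z = F.relu(x * y)
        z = self.prelu(self.inorm(z))
        z = paddle.flatten(z, start_axis=1)  # lowers to flatten/flatten2
        return paddle.argmax(z, axis=1)      # lowers to arg_max


net = TinyNet()
x = paddle.rand([1, 8, 16, 16])
y = paddle.rand([1, 8, 16, 16])
print(net(x, y).shape)  # [1]
```

Such a model could in principle be exported with paddle.jit.save and then converted with Paddle Lite's opt tool targeting the NNAdapter backend, at which point these op converters come into play.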
* windows ci fix (#9559)
* [NNAdapter] support device data (#9493)
* [QualcommQnn] support exp, log, reduce_mean, reduce_max, reduce_sum, floor (#9505)
* [QualcommQnn] add ops (#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
* [NNAdapter] support vit model (#9583)
* [NNAdapter] set output lod according to input lod
* [NNAdapter] slice support EndsTensorList
* [NNAdapter] fuse pass (5d->4d)
* fix cmake cxx flags (#9467)
* [QualcommQnn] add ops (#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
* add float64 type to lite
* add float64 kernel for set_value
* change the third-party-libs url due to flatbuffers update
* fix include files conflict
* fix bug
* fix heterogeneous execution errors
* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug
* fix comment

Co-authored-by: zhupengyang <zhu_py@qq.com>
* [X86] Add set_value op and double data type to framework (#9580)
  * [QualcommQnn] add ops (#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
  * add float64 type to lite
  * add float64 kernel for set_value
  * change the third-party-libs url due to flatbuffers update
  * fix include files conflict
  * fix bug
  * fix heterogeneous execution errors
  * fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug
  * fix comment
  Co-authored-by: zhupengyang <zhu_py@qq.com>
* [PaddleSpeech] Add OPs and others needed by fastspeech_2 model (#9706)
  * [Host] add 3 OPs: set_value, round, share_data
  * [Host] add expand_v2 OP registration with type kBool
  * [Arm] add reduce_sum OP int64 registration and neon implementation; add reduce_max OP kInt32 registration
  * [X86] fix bug in set_value OP
  * [Extra] move round and share_data OPs to extra
  * [proto] fix a bug
  Co-authored-by: csy0225 <78470701+csy0225@users.noreply.github.com>
  Co-authored-by: zhupengyang <zhu_py@qq.com>
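For the fastspeech_2-related OPs above, here is a minimal sketch (assumptions for illustration, not taken from the commits) of Paddle code that lowers to those kernels:

```python
# Illustrative sketch only: each snippet is a plausible way to hit the
# OPs named in the fastspeech_2 commit; values are arbitrary.
import paddle

# Slice assignment lowers to the set_value op.
x = paddle.zeros([4, 4])
x[1:3, :] = 1.0

# paddle.round lowers to the round op.
r = paddle.round(paddle.to_tensor([0.4, 1.6]))

# Expanding a bool tensor exercises the expand_v2 kBool registration.
mask = paddle.to_tensor([[True], [False]])
m = paddle.expand(mask, shape=[2, 3])

# Summing an int64 tensor exercises the reduce_sum int64 kernel.
ids = paddle.to_tensor([1, 2, 3], dtype="int64")
s = paddle.sum(ids)

print(x.numpy(), r.numpy(), m.numpy(), s.numpy())
```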