
[WIP] partial disabling of caching for conv2d, conv_transpose, quantize and pool2d #36595

Closed
wants to merge 31 commits

Conversation

@jczaja (Contributor) commented Oct 20, 2021

PR types

Bug fixes

PR changes

OPs

Describe

Disables some more caching of oneDNN objects in order to fix #34554
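
For context, a minimal standalone sketch of the cached versus non-cached pattern this description refers to, written against the oneDNN 2.x C++ API (dnnl namespace). The function names (BuildConvPd, GetConvPdCached, RunConvNoCache), the cache key, and the tensor shapes are illustrative assumptions, not code from this PR or from Paddle's handler classes.

```cpp
// Illustrative sketch only: contrasts a process-wide primitive-descriptor
// cache with per-call construction, which is the direction this PR takes
// for conv2d, conv_transpose, quantize and pool2d.
#include <dnnl.hpp>

#include <mutex>
#include <string>
#include <unordered_map>

using pd_t = dnnl::convolution_forward::primitive_desc;

// Build a fresh forward-convolution primitive descriptor (shapes are made up).
pd_t BuildConvPd(const dnnl::engine& eng) {
  using tag = dnnl::memory::format_tag;
  using dt = dnnl::memory::data_type;
  dnnl::memory::desc src_md({1, 8, 32, 32}, dt::f32, tag::any);
  dnnl::memory::desc wei_md({16, 8, 3, 3}, dt::f32, tag::any);
  dnnl::memory::desc dst_md({1, 16, 30, 30}, dt::f32, tag::any);
  dnnl::convolution_forward::desc desc(
      dnnl::prop_kind::forward_inference, dnnl::algorithm::convolution_direct,
      src_md, wei_md, dst_md, /*strides=*/{1, 1}, /*padding_l=*/{0, 0},
      /*padding_r=*/{0, 0});
  return pd_t(desc, eng);
}

// Cached pattern: primitive descriptors live in a shared map keyed by a
// shape-derived string, so concurrent model instances may pick up objects
// created by another instance; issue #34554 reports an MKLDNN error in such
// multi-threaded deployments.
pd_t GetConvPdCached(const dnnl::engine& eng, const std::string& key) {
  static std::unordered_map<std::string, pd_t> cache;
  static std::mutex mutex;
  std::lock_guard<std::mutex> lock(mutex);
  auto it = cache.find(key);
  if (it == cache.end()) it = cache.emplace(key, BuildConvPd(eng)).first;
  return it->second;
}

// Non-cached pattern: every kernel invocation rebuilds its own descriptor and
// memory objects, so nothing oneDNN-related is shared between threads.
void RunConvNoCache(const dnnl::engine& eng, dnnl::stream& strm) {
  pd_t pd = BuildConvPd(eng);
  dnnl::memory src(pd.src_desc(), eng), wei(pd.weights_desc(), eng),
      dst(pd.dst_desc(), eng);
  dnnl::convolution_forward(pd).execute(
      strm,
      {{DNNL_ARG_SRC, src}, {DNNL_ARG_WEIGHTS, wei}, {DNNL_ARG_DST, dst}});
  strm.wait();
}

int main() {
  dnnl::engine eng(dnnl::engine::kind::cpu, 0);
  dnnl::stream strm(eng);
  (void)GetConvPdCached(eng, "conv2d/f32/1x8x32x32/16x8x3x3");  // cached path
  RunConvNoCache(eng, strm);                                    // per-call path
  return 0;
}
```

Rebuilding descriptors on every call costs some setup time, but it removes the cross-thread sharing that the cached path introduces.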

- compilation fix

- fix

- fixes

- fix

- fix

- fix again

- fix

- another fix

- another compilation fix

- fix

- fix

- fix

- lint
- pool2d partially stripped of caching
@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@jczaja added the Intel label Oct 20, 2021
@wozna (Contributor) left a comment:

The rest of the changes look good to me.

mkldnn::convolution_backward_data,
mkldnn::convolution_backward_weights>(
mkldnn_engine, ctx.GetPlace()),
is_test_(false) {

Shouldn't there be a check of the is_test attribute here, as is done for forward with is_test_(ctx.Attr<bool>("is_test"))? Later we also check that is_test is false, so hardcoding false here makes that check meaningless.
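
To make the question concrete, here is a minimal, self-contained sketch of the difference being pointed out; Ctx, FwdHandler and GradHandler are stand-ins, not Paddle's ExecutionContext or the handlers touched by this PR.

```cpp
#include <cassert>
#include <iostream>
#include <map>
#include <string>

// Stand-in for an execution context carrying boolean op attributes.
struct Ctx {
  std::map<std::string, bool> attrs;
  bool Attr(const std::string& name) const { return attrs.at(name); }
};

// Forward-style handler: mirrors the "is_test" attribute, as quoted above.
struct FwdHandler {
  explicit FwdHandler(const Ctx& ctx) : is_test_(ctx.Attr("is_test")) {}
  bool is_test_;
};

// Backward-style handler as written in the diff: the flag is hardcoded to
// false, so any later check that is_test_ is false can never fire.
struct GradHandler {
  explicit GradHandler(const Ctx& /*ctx*/) : is_test_(false) {}
  bool is_test_;
};

int main() {
  Ctx ctx{{{"is_test", true}}};  // graph built for inference
  FwdHandler fwd(ctx);
  GradHandler grad(ctx);
  std::cout << "forward sees is_test = " << fwd.is_test_ << "\n";    // prints 1
  std::cout << "backward sees is_test = " << grad.is_test_ << "\n";  // prints 0
  // The vacuous check described in the comment: with a hardcoded false this
  // always holds, regardless of the actual attribute value.
  assert(grad.is_test_ == false);
  return 0;
}
```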

@jczaja (Contributor, Author) commented Oct 29, 2021:

Deprecated.

@jczaja closed this Oct 29, 2021

Successfully merging this pull request may close these issues.

Deploying models with multi-thread will raise MKLDNN error
2 participants