Pytorch Conv Transpose Padding Fix #7958

Merged
133 commits merged into apache:main on May 14, 2021

Conversation

Jeffrey-Sima
Contributor

Background

  • We noticed a discrepancy between the output shapes produced by PyTorch and TVM for a PyTorch network containing a single torch.nn.ConvTranspose2d operator.
  • When comparing the attributes of the torch.nn.ConvTranspose2d operator and the tvm.relay.nn.conv2d_transpose
    operator, the output_padding parameter in tvm.relay.nn.conv2d_transpose would always default to
    0, regardless of what output padding was set in torch.nn.ConvTranspose2d.
  • Upon further inspection, we found that in tvm/python/tvm/relay/frontend/pytorch.py, the import logic for convolution layers was missing the output_padding parameter. A sketch reproducing the shape discrepancy follows this list.
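
To make the discrepancy concrete, here is a minimal sketch (ours, not from the PR) of how output_padding changes the PyTorch output shape; an importer that silently drops the attribute produces a smaller tensor than PyTorch does:

```python
import torch

# ConvTranspose2d output size (dilation = 1):
# H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding
deconv = torch.nn.ConvTranspose2d(
    in_channels=8, out_channels=4, kernel_size=3,
    stride=2, padding=1, output_padding=1)

out = deconv(torch.randn(1, 8, 3, 3))
print(out.shape)  # torch.Size([1, 4, 6, 6])

# With output_padding left at its default of 0, the same formula gives
# (3 - 1) * 2 - 2 + 3 = 5, i.e. a 5x5 output -- the shape mismatch
# observed between PyTorch and the TVM import.
```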

The Fix

  • All fixes were implemented in tvm/relay/frontend/pytorch.py.
  • To resolve the missing output_padding parameter, the convolution method of the
    PyTorchOpConverter class was updated so that, when constructing a transposed convolution op, it supplies the output_padding attribute to the relay op (see the first sketch after this list).
  • Over the course of the fix we also discovered that the convolution method automatically converted
    torch.nn.ConvTranspose1d operations into tvm.relay.nn.conv2d_transpose. This was fixed so that they are now
    converted into tvm.relay.nn.conv1d_transpose operations.
  • We also discovered that torch.nn.Conv1d operations were being converted into
    tvm.relay.nn.conv2d operations. This was fixed so that they are now converted into tvm.relay.nn.conv1d
    operations. There is one caveat: because TVM does not support grouped 1D convolution, as stated in the
    description of tvm.relay.nn.conv1d, in that case we convert the operation to a 2D convolution, which does
    support grouped convolution. After the 2D convolution, we squeeze the output to recover the correct shape and
    values for a grouped 1D convolution (see the second sketch after this list).
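
On the relay side, the first part of the fix amounts to forwarding the attribute when the transpose op is built. A minimal sketch of the idea (the variable names and weight layout handling are our assumptions, not the PR's actual code), using the documented output_padding parameter of relay.nn.conv2d_transpose:

```python
from tvm import relay

# Shapes mirror the PyTorch example above; the weight shape follows
# torch.nn.ConvTranspose2d's (in_channels, out_channels, kH, kW) convention.
data = relay.var("data", shape=(1, 8, 3, 3), dtype="float32")
weight = relay.var("weight", shape=(8, 4, 3, 3), dtype="float32")

out = relay.nn.conv2d_transpose(
    data, weight,
    channels=4, kernel_size=(3, 3),
    strides=(2, 2), padding=(1, 1),
    output_padding=(1, 1))  # previously never supplied, so it stayed at 0
```

The grouped-conv1d caveat can be sketched the same way: lift the 1-D tensors to 2-D with a dummy spatial axis, run a grouped conv2d, then squeeze the axis back out. The helper name and exact attribute handling below are assumptions rather than the PR's code:

```python
from tvm import relay

def grouped_conv1d_via_conv2d(data, weight, groups, stride=1, padding=0, dilation=1):
    """Hypothetical fallback for groups > 1, which relay.nn.conv1d rejects."""
    data2d = relay.expand_dims(data, axis=2)      # NCW -> NC1W
    weight2d = relay.expand_dims(weight, axis=2)  # OIW -> OI1W
    out = relay.nn.conv2d(
        data2d, weight2d,
        strides=(1, stride),
        padding=(0, padding),
        dilation=(1, dilation),
        groups=groups)
    return relay.squeeze(out, axis=[2])           # NC1W -> NCW
```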

Test Coverage

  • Extended the test_forward_conv_transpose test in tvm/tests/python/frontend/pytorch/test_forward.py.
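
A sketch of the kind of case the extended test now covers; verify_model follows the helper pattern used throughout test_forward.py, though treat the exact call as an assumption:

```python
import torch

# Transposed convolution with a non-zero output_padding -- the case that
# previously produced mismatched output shapes between PyTorch and TVM.
inp = torch.rand(1, 3, 10, 10)
model = torch.nn.ConvTranspose2d(3, 6, kernel_size=3, stride=2,
                                 padding=1, output_padding=1)
verify_model(model.float().eval(), input_data=inp)
```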

Mikael Sevenier added 30 commits March 18, 2020 19:30
# Conflicts:
#	python/tvm/relay/frontend/tensorflow.py
Mikael Sevenier and others added 15 commits March 22, 2021 16:46
fixing pytorch converter to take into account the output_padding parameter for conv transpose operations

* updating pytorch converter to correctly convert conv1d to conv1d in tvm instead of a flattened conv2d unless under circumstances of grouped convolution
* updating pytorch converter to correctly convert conv1d transpose to conv1d transpose in tvm instead of a flattened conv2d transpose
* added tests to cover these latest additions
…he#34)

SYSOL-584 Pytorch Conv Transpose Padding Fix

Approved-by: Alicja Kwasniewska
Approved-by: Mikael Sevenier

masahi self-assigned this on May 1, 2021
@masahi
Member

masahi commented May 1, 2021

Thanks, can we close #7912?

@Jeffrey-Sima
Contributor Author

Jeffrey-Sima commented May 3, 2021 via email

…anspose-padding-fix

# Conflicts:
#	tests/python/frontend/pytorch/test_forward.py
@Jeffrey-Sima
Contributor Author

@masahi How can we pass the tvm-ci/pr-merge step? Looking at the details, the error comes up as:

tvm._ffi.base.TVMError: MicroSessionTimeoutError: failed to read reply message after timeout 5s

I do not believe the changes in this PR cause this issue.

@u99127
Contributor

u99127 commented May 13, 2021

I think just retriggering ci will sometimes fix the problem.

@masahi
Member

masahi commented May 13, 2021

@Jeffrey-Sima sorry, I missed your last comment. Yes, please try rebasing and retriggering CI.

masahi merged commit aa7bfe7 into apache:main on May 14, 2021
@masahi
Member

masahi commented May 14, 2021

thanks @Jeffrey-Sima

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Jun 17, 2021
* fix conv transpose import from TF

* fix String::fromwe() to String::from()

* fixing pytorch converter to take into account the output_padding parameter for conv transpose operations
* updating pytorch converter to correctly convert conv1d to conv1d in tvm instead of a flattened conv2d unless under circumstances of grouped convolution
* updating pytorch converter to correctly convert conv1d transpose to conv1d transpose in tvm instead of a flattened conv2d transpose
* added tests to cover these latest additions

* removing print statements used for debugging

* fixing typos and formatting

* fixing formatting

* fixing grammar

* formatting fixes

* updated formatting after running pylint and python_format checks

Co-authored-by: Mikael Sevenier <mikael.sevenier@sima.ai>
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Jun 17, 2021