[Large Tensor] Fixed SoftmaxActivation op #17634
Merged
Description
The SoftmaxActivation op previously broke on large tensor data (any dimension >= 2^32): running the forward pass on such an input threw an error instead of returning the expected output.
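As a rough illustration (the exact reproduction command and error log are not shown above), an input in the failing regime looks like the following; the shape and values here are my assumptions, not the original command:

```python
import mxnet as mx

# Hypothetical repro: the first dimension is 2**32, past the signed
# 32-bit range, so pre-fix shape arithmetic inside the op overflowed.
# Note: this allocates ~17 GB per array, so it needs a large-memory host.
data = mx.nd.ones((2**32, 1))
out = mx.nd.SoftmaxActivation(data)
print(out.shape)
```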
To root-cause this issue, I ran the previous command in a Python script under GDB and found that the underlying problem was in the shape construction logic of `softmax_activation-inl.h`. In the functions that compute the forward pass result and the gradient, several of the variables used the `int` dtype when they should have used `index_t` to properly handle long int dimensions. I switched these variables to `index_t`.
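For intuition, here is a small illustration (mine, not from the PR) of the wraparound a 32-bit `int` hits at this size; `index_t` is a 64-bit type in large tensor builds, so it represents the dimension correctly:

```python
import ctypes

dim = 2**32  # a dimension in the failing regime
# What 32-bit `int` shape arithmetic sees: the value silently wraps to 0.
print(ctypes.c_int32(dim).value)  # 0
# What 64-bit `index_t`-style arithmetic sees: the true dimension.
print(ctypes.c_int64(dim).value)  # 4294967296
```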
After rebuilding, the previous input command displayed the correct output. To ensure completeness and to prevent future regressions, I also added a nightly test for the SoftmaxActivation op with large tensor data in `tests/nightly/test_large_array.py`.
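The new test follows the pattern of the other large tensor checks in that file; a rough sketch (the exact shapes and assertions in the committed test may differ):

```python
from mxnet import nd

def test_softmax_activation():
    # Total element count is 2**29 * 2 * 2 * 2 = 2**32, which is only
    # representable when shape variables are index_t rather than int.
    data = nd.random_normal(shape=(2**29, 2, 2, 2))
    out = nd.SoftmaxActivation(data=data)
    assert out.shape == data.shape
```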
Checklist
Essentials
Changes
Comments
Tested on r5dn.24xl (Ubuntu 16.04) and p2.16xl (Ubuntu 16.04) with:
Results
The key difference between the CPU and GPU tests was the instance type (r5dn.24xl for CPU, p2.16xl for GPU). All relevant build flags were the same, and both runs used CPU context.
Single operator test - SoftmaxActivation op (GPU)
Single operator test - SoftmaxActivation op (CPU)
Full OpPerf test (GPU)
Full OpPerf test (CPU)
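The single-operator numbers above were presumably gathered with MXNet's OpPerf utility; a sketch of such a run (the input shape and run counts are assumptions):

```python
import mxnet as mx
from benchmark.opperf.utils.benchmark_utils import run_performance_test

# Benchmark SoftmaxActivation forward and backward on CPU context.
result = run_performance_test(
    mx.nd.SoftmaxActivation,
    run_backward=True,
    dtype='float32',
    ctx=mx.cpu(),
    inputs=[{"data": (2**29, 2, 2, 2)}],
    warmup=10,
    runs=25,
)
print(result)
```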
@apeforest @access2rohit @ChaiBapchya