[MXNET-1405] tests for large tensor support for Softmax operator #15042
Conversation
@mxnet-label-bot add [pr-awaiting-review]
@apeforest: Please review
tests/nightly/test_large_array.py
Outdated
@@ -279,6 +279,19 @@ def test_diag():
    assert_almost_equal(r.asnumpy(), np.diag(a_np, k=k))


def test_softmax():
    def softmax_forward(input_data, true_output):
        data = mx.sym.Variable('data')
Why not just use ndarray API?
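For context, a minimal sketch of what an ndarray-API version of this check could look like (assuming the assert_almost_equal helper already used in this file; the shape and axis here are illustrative, not the real LARGE_X width):

import numpy as np
import mxnet as mx
from mxnet import nd
from mxnet.test_utils import assert_almost_equal

def check_softmax_ndarray(input_data, true_output):
    # Imperative ndarray call instead of building a symbol and binding an executor.
    out = nd.softmax(input_data, axis=0)
    assert_almost_equal(out.asnumpy(), true_output, rtol=1e-5, atol=1e-5)

# Example: constant input, so every softmax value along axis 0 is 1/128.
check_softmax_ndarray(mx.nd.ones((128, 16)), np.full((128, 16), 1.0 / 128))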
tests/nightly/test_large_array.py
Outdated
        nparr = ndarr.asnumpy()
        assert_almost_equal(nparr, true_output, rtol=1e-5, atol=1e-5)

    softmax_forward(mx.nd.ones((128, LARGE_X)), np.full((128, LARGE_X), 0.0078125))
Is this 0.0078125 hand-calculated? How do we verify it? Can you use a numpy calculation as the expected result?
Yes ... will do
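For reference, 0.0078125 is just 1/128: softmax of a constant input along an axis of length 128. A small sketch of deriving the expected value with numpy instead of hard-coding it (the shape is a stand-in, not the full (128, LARGE_X) input; assuming the softmax axis in the draft is the one of length 128):

import numpy as np

n = 128                                        # length of the softmax axis in this draft
x = np.ones((n, 8))                            # small stand-in for the (128, LARGE_X) input
e = np.exp(x - x.max(axis=0, keepdims=True))   # numerically stable softmax
expected = e / e.sum(axis=0, keepdims=True)
# Every entry equals 1/n == 0.0078125 for a constant input.
assert np.allclose(expected, 1.0 / n)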
tests/nightly/test_large_array.py
Outdated
        nparr = ndarr.asnumpy()
        assert_almost_equal(nparr, true_output, rtol=1e-5, atol=1e-5)

    softmax_forward(mx.nd.ones((128, LARGE_X)), np.full((128, LARGE_X), 0.0078125))
Why 128? Can we use predefined constants?
tests/nightly/test_large_array.py
Outdated
@@ -279,6 +279,19 @@ def test_diag():
    assert_almost_equal(r.asnumpy(), np.diag(a_np, k=k))


def test_softmax():
    def softmax_forward(input_data, true_output):
If this is only called once, do we need this subroutine?
ok
355049f to 03aa04d
def test_softmax():
    input_data = mx.nd.ones((SMALL_Y, LARGE_X))
    true_output = np.full((SMALL_Y, LARGE_X), (1 / SMALL_Y))
    output = nd.softmax(input_data, axis=0)
When you test this way, the softmax only operates along a dimension that does not exceed 2^32, right?
yes
But shouldn't we test softmax over a vector larger than 2^32?
The test fails
Fine. Let's leave that as a TODO for the next phase, to support large int in one dimension.
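A rough sketch of what that TODO could look like once a single dimension may exceed 2^32 (LARGE_VECTOR_SIZE is a hypothetical constant, not one defined in the current test file; per the thread above, this currently fails):

import numpy as np
from mxnet import nd

# Hypothetical follow-up test: softmax along one axis longer than 2^32 elements.
LARGE_VECTOR_SIZE = 2**32 + 1

def test_softmax_large_vector():
    a = nd.ones((LARGE_VECTOR_SIZE,))
    out = nd.softmax(a, axis=0)
    # Constant input => every output equals 1/n; check a single element to avoid
    # copying the full result back to the host.
    np.testing.assert_allclose(out[0].asscalar(), 1.0 / LARGE_VECTOR_SIZE, rtol=1e-3)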
LGTM
@mxnet-label-bot add [pr-awaiting-merge]
15301a2 to f523f73
Description
Added a test case to verify large tensor support for the Softmax operator.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.