[MXNet-1211] Factor and "Like" modes in BilinearResize2D operator #13226
Conversation
@mxnet-label-bot add [pr-awaiting-review]
@apeforest @samskalicky @yuxihu @ChaiBapchya could you please take a look at this?
Can you please add tests for this operator?
.describe("output height (required)"); | ||
DMLC_DECLARE_FIELD(width).set_range(1, 10000) | ||
.describe("output width (required)"); | ||
DMLC_DECLARE_FIELD(height).set_range(0, 10000) |
Why change the range from 1 to 0? How does a size of 0 make sense for resizing?
It is required for using "scale" mode. For example, if you have a tensor with shape (1, 3, 100, 100), you can do BilinearResize2D(tensor, height=0.5, width=0.5, mode='scale') and get a tensor of shape (1, 3, 50, 50).
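For concreteness, a minimal sketch of the usage described in this comment (parameter names are taken from the comment above; the final merged API may differ):

import mxnet as mx

# 'scale' mode as described above: height/width act as multipliers on the
# input's spatial dimensions rather than as absolute output sizes.
x = mx.nd.random.uniform(shape=(1, 3, 100, 100))
y = mx.nd.contrib.BilinearResize2D(x, height=0.5, width=0.5, mode='scale')
print(y.shape)  # expected: (1, 3, 50, 50)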
DMLC_DECLARE_FIELD(width).set_range(1, 10000)
.describe("output width (required)");
DMLC_DECLARE_FIELD(height).set_range(0, 10000)
.set_default(-1)
height and width are currently required parameters. Why set a default value for them? Also, how does setting default=-1 make sense when the range for the field is (0, 10000)?
It is not a required parameter when using mode 'like'. In this mode we can pass two tensors, and the first tensor will be resized to the second tensor's size. For the other modes it is still required, so the default value is inappropriate there and should raise an error.
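A minimal sketch of the 'like' usage described here (argument order and mode name are assumed from this discussion):

import mxnet as mx

# 'like' mode: the first input is resized to the spatial size of the
# second, so height/width need no explicit values.
data = mx.nd.random.uniform(shape=(1, 3, 100, 100))
ref = mx.nd.random.uniform(shape=(1, 3, 64, 48))
out = mx.nd.contrib.BilinearResize2D(data, ref, mode='like')
print(out.shape)  # expected: (1, 3, 64, 48)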
.describe(R"(output height if mode is "size" or scale on which input height should be multiplied if mode is | ||
"scale" or "odd_scale")"); | ||
DMLC_DECLARE_FIELD(width).set_range(0, 10000) | ||
.set_default(-1) |
Same as above.
.add_enum("like", bilinear_resize::like) | ||
.add_enum("to_even", bilinear_resize::to_even) | ||
.set_default(bilinear_resize::size) | ||
.describe(R"(resizing mode. "size" - resize to distinct size; "scale" - original height and width are multiplied |
In scale mode, what is the "appropriate scale"? I don't see that defined in the params.
I think any. If we have an original tensor with shape (1, 3, 10000, 10000) and resize it with height=0.001 and width=0.001 in "scale" mode, we will get a tensor with shape (1, 3, 10, 10). And we can resize the result tensor back in scale mode with height=1000 and width=1000 to get a tensor of the original size, as in the sketch below. I'll try to clarify this in the description.
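A round-trip sketch of that behaviour (parameter semantics assumed from this comment):

import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 10000, 10000))
small = mx.nd.contrib.BilinearResize2D(x, height=0.001, width=0.001, mode='scale')
# small.shape == (1, 3, 10, 10)
back = mx.nd.contrib.BilinearResize2D(small, height=1000, width=1000, mode='scale')
# back.shape == (1, 3, 10000, 10000), the original size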
Probably I used the wrong term. Maybe it would be better if I change "scale" to "factor".
.set_default(bilinear_resize::size)
.describe(R"(resizing mode. "size" - resize to distinct size; "scale" - original height and width are multiplied
by appropriate scale; "odd_scale" - the same, but result height and width are odd value
(if h%2==0 then h=h+1 and the same for width); "like" - resize first input to the height
It would be good to clarify what h is here: the original image height, or the height parameter in the parameter list?
Yes, you are right, I'll do this, thank you.
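For illustration, a hedged sketch of the "odd_scale" rounding being discussed, where h is the already-scaled output height (the helper name is hypothetical):

def odd_scale_dim(size, factor):
    # Scale the dimension, then bump even results up by one so the
    # output is always odd, as in: if h % 2 == 0 then h = h + 1.
    h = int(size * factor)
    return h + 1 if h % 2 == 0 else h

print(odd_scale_dim(100, 0.5))  # 50 is even, so the result is 51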
CHECK_EQ(inputs.size(), 1);
bool modeLike = param.mode == bilinear_resize::like;
size_t expected = modeLike ? 2 : 1;
CHECK_EQ(outputs.size(), expected);
So there will be 2 outputs in "like" mode? What is the second output? If it does have two outputs, that should be specified in the param description.
There are two outputs of the backward function; the operator itself still has one output. In "like" mode we pass two input tensors, to resize one to the size of the second, so the backward function should return gradients for both tensors. Actually, the second tensor, from which we get the result size, should get zero gradients from the output of this operator, because it is needed only for its shape. This is implemented in the .cc and .cu files.
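A minimal autograd sketch of that gradient behaviour (the API usage is assumed from the discussion above):

import mxnet as mx

data = mx.nd.random.uniform(shape=(1, 3, 100, 100))
ref = mx.nd.random.uniform(shape=(1, 3, 50, 50))
data.attach_grad()
ref.attach_grad()
with mx.autograd.record():
    out = mx.nd.contrib.BilinearResize2D(data, ref, mode='like')
out.backward()
# 'ref' only contributes its shape, so its gradient should be all zeros.
assert ref.grad.sum().asscalar() == 0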
I'm really sorry for the force push. I got confused with the rebase and committed all of master to this branch; after that I tried to roll back and finally erased my first commits. :(
@@ -6527,12 +6527,64 @@ def check_bilinear_resize_op(shape, height, width):
x = mx.nd.random.uniform(shape=shape)
y = mx.nd.contrib.BilinearResize2D(x, height=height, width=width)
assert_almost_equal(y.asnumpy(), py_bilinear_resize(x.asnumpy(), height, width))
def check_bilinear_resize_modes_op(shape, height_factor=None, width_factor=None, shape_1=None, mode=None):
x = mx.nd.random.uniform(shape=shape)
Unit tests for operators should cover the forward pass and the backward pass. See here: https://mxnet.incubator.apache.org/faq/add_op_in_backend.html#unit-test
Done :)
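For reference, a forward-plus-backward check along the lines the FAQ suggests might look like this sketch (the helper name and shapes are hypothetical):

import mxnet as mx

def check_forward_backward(shape, height, width):
    x = mx.nd.random.uniform(shape=shape)
    x.attach_grad()
    with mx.autograd.record():
        y = mx.nd.contrib.BilinearResize2D(x, height=height, width=width)
    # Backward pass: gradients must flow back to the input with its shape.
    y.backward(mx.nd.ones_like(y))
    assert x.grad.shape == shape

check_forward_backward((1, 3, 100, 100), 50, 50)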
Looks like it is failing on 2 different validation CI checks:
one looks like it uncovered a flaky test;
the other has a build failure which needs your attention. Please review and make the requisite changes. Thanks.
Thank you, @ChaiBapchya.
In the build's log file there is no error; it starts at 19:30 and fails at 21:30.
The same goes for [GPU: MKLDNN_CUDNNOFF]: no error in the log, the build script runs for 1h 59m 55s and then receives an interrupt signal (see the Jenkins build description). What should I do about this problem?
@lobanov-m this was around the time we were changing our CI. Can you retrigger the build (push an empty commit)?
The last mentioned problem with the pull request has been resolved. Should anything else be changed?
@anirudhacharya @azai91 - ping for review :-)
LGTM
@apeforest - Can you please help review this PR?
@mxnet-label-bot update [pr-awaiting-response]
Rebase done.
@mxnet-label-bot update [pr-awaiting-review]
@mxnet-label-bot add [pr-awaiting-merge] @lobanov-m please rebase your PR to resolve conflicts.
@mxnet-label-bot add [pr-awaiting-review]
@mxnet-label-bot remove [pr-awaiting-response]
@pinaraws rebase done
The build failed because of:
and the second error:
The error in the build is not connected with this pull request:
@anirudhacharya Are all your concerns addressed on this PR?
@piyushghai yes, my concerns are addressed, please see here - #13226 (review)
@ChaiBapchya Are your concerns also addressed on this PR?
@lobanov-m Can you please resolve conflicts?
@Roshrini rebase done.
All concerns have been addressed.
LGTM.
Thanks @lobanov-m for the excellent work!
I will merge it after the PR passes CI.
Glad to merge it. Thank you!
That's good news. Thank you!
…ache#13226)
* Added "factor" and "like" modes into BilinearResize2D operator. Also added tests and some fixes to visualization needed due to added modes.
* Lint fix
* Test fix
* retrigger CI
* retrigger CI
* Retrigger CI
* retrigger CI
* retrigger ci
* retrigger ci again
Description
This PR adds a few options to the BilinearResize2D operator. These options are needed for fully-convolutional implementations of networks that use bilinear resize. The added modes are "odd_scale", "like", and a few options to change the height and width of a tensor to even or odd values: "to_even_down", "to_even_up", "to_odd_down", "to_odd_up" (see the sketch below).
Such transformations couldn't be done in other ways, because when we use symbolic models we can't get tensor shapes from Python or any other frontend.
With these modes it is possible to implement the segmentation network ICNet so that it works with input of any size.
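A hedged sketch of what these even/odd rounding modes do to a single spatial dimension (semantics inferred from the mode names, not taken from the merged code):

def round_dim(size, mode):
    # Round a spatial dimension to the nearest even/odd value,
    # downward or upward, per the named mode.
    if mode == 'to_even_down':
        return size if size % 2 == 0 else size - 1
    if mode == 'to_even_up':
        return size if size % 2 == 0 else size + 1
    if mode == 'to_odd_down':
        return size if size % 2 == 1 else size - 1
    if mode == 'to_odd_up':
        return size if size % 2 == 1 else size + 1
    raise ValueError(mode)

print(round_dim(100, 'to_odd_up'))  # 101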
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments