This repository has been archived by the owner on Aug 15, 2019. It is now read-only.

Align pow gradient with TF #1446

Merged: 12 commits merged into tensorflow:master on Jan 4, 2019

Conversation

@jgartman (Contributor) commented Dec 12, 2018

Description

Multiple problems with the gradient of pow were brought up in tensorflow/tfjs#346. One was fixed by #1376; the other is that the gradient with respect to the exponent is NaN when the base is negative. This differs from TF, which returns 0 in this case: /~https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_grad.py#L1046
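
For context, a minimal sketch (not the PR's exact diff) of the behavior being aligned, following the linked TF implementation: the derivative of pow with respect to the exponent is dy * base^exp * log(base), with log(base) replaced by 0 wherever the base is not positive.

    import * as tf from '@tensorflow/tfjs-core';

    // Sketch of the gradient of pow with respect to the exponent, mirroring
    // TF's math_grad.py: use log(base) where the base is positive and 0
    // elsewhere, so a negative base yields a 0 gradient instead of NaN.
    function powGradWrtExponent(
        base: tf.Tensor, exp: tf.Tensor, dy: tf.Tensor): tf.Tensor {
      const y = tf.pow(base, exp);                   // forward value base^exp
      const condition = base.greater(tf.scalar(0));  // where log(base) is defined
      const logBase = tf.where(condition, base.log(), tf.zerosLike(base));
      return dy.mul(y.mul(logBase));                 // dy * base^exp * log(base)
    }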


For repository owners only:

Please remember to apply all applicable tags to your pull request.
Tags: FEATURE, BREAKING, BUG, PERF, DEV, DOC, SECURITY

For more info see: /~https://github.com/tensorflow/tfjs/blob/master/DEVELOPMENT.md



@jgartman (Contributor, Author) commented Dec 19, 2018

There was a test I added that was failing because of numerical precision issues, so I decided to just remove it. The failure was because tf.greater is used to compare the base of the exponent to 0, and on Safari, when the base is 0, this comparison was returning true. I tried comparing the base to a small non-zero value like 1e-4, but the test still failed. Let me know if there are other suggestions; I'm not sure how useful this PR is if it's not going to work as expected on all platforms.
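
A hypothetical illustration of the failure mode described above (the variable names and values are illustrative, not the removed test's):

    // Assumes: import * as tf from '@tensorflow/tfjs-core';
    // The gradient masks log(base) with a greater-than-zero comparison. On Safari,
    // a base of exactly 0 was reported as greater than 0, so the mask did not
    // zero out the log term.
    const base = tf.tensor1d([0, -1, 2]);
    const condition = base.greater(tf.scalar(0));
    console.log(condition.dataSync());  // expected [0, 0, 1]; the failing case saw [1, 0, 1]
    // Comparing against a small epsilon, e.g. base.greater(tf.scalar(1e-4)),
    // was tried as a workaround but the test still failed.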

@nsthorat (Contributor) left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @jgartman)


src/ops/arithmetic_test.ts, line 820 at r1 (raw file):

  });

  it('gradient wrt exponent with non-positive base', () => {

negative :)


src/ops/binary_ops.ts, line 268 at r1 (raw file):

    };
    const derExp = () => {
      const condition = $base.greater(scalar(0));

no need to box the 0 in a scalar anymore
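
A minimal sketch of the simplification being suggested here, assuming binary ops in this version accept plain numbers as operands (as the comment implies):

    // Binary ops upcast plain numbers, so the 0 no longer needs to be boxed.
    const condition = $base.greater(0);  // equivalent to $base.greater(scalar(0))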


src/ops/binary_ops.ts, line 269 at r1 (raw file):

    const derExp = () => {
      const condition = $base.greater(scalar(0));
      const logBase = $base.log().toFloat().where(condition, zerosLike($base));

I think log will return a float already, so the toFloat() shouldn't be needed.

Also, where broadcasts, so you can just pass 0 directly (without boxing); this will reduce the memory footprint.

@jgartman (Contributor, Author) left a comment

Reviewable status: 0 of 1 approvals obtained


src/ops/binary_ops.ts, line 269 at r1 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

I think log will return a float already, so the toFloat() shouldn't be needed.

Also, where broadcasts, so you can just pass 0 directly (without boxing); this will reduce the memory footprint.

Doesn't look like where broadcasts: /~https://github.com/tensorflow/tfjs-core/blob/v0.14.2/src/ops/logical_ops.ts#L135-L165
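
A short illustration of the constraint being pointed to: in the linked v0.14.2 source, where() requires its two value arguments to have matching shapes rather than broadcasting them, so the zero branch still has to be materialized (a hedged sketch, not the PR's exact line):

    // where() does not broadcast its branches in this version, so the "else"
    // value must match the shape of the "then" value; a bare 0 would be
    // converted to a scalar and fail the shape check.
    const logBase = tf.where(condition, $base.log(), tf.zerosLike($base));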

@nsthorat (Contributor) left a comment

Reviewed 2 of 2 files at r2.
Reviewable status: 0 of 1 approvals obtained

@nsthorat (Contributor) commented Jan 3, 2019

Hmm -- unfortunately the merge broke this :/ could you look into this if you have a sec?

@jgartman (Contributor, Author) commented Jan 4, 2019

Just needed to update a recently added test to match the new behavior.
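
For reference, a hypothetical shape of such a test (not the PR's exact code), assuming the repo's expectArraysClose helper from test_util:

    // Assumes: import * as tf from '@tensorflow/tfjs-core';
    //          import {expectArraysClose} from '../test_util';
    it('gradient wrt exponent with negative base', () => {
      const base = tf.tensor1d([-1, 2]);
      const exp = tf.tensor1d([3, 2]);
      const dy = tf.tensor1d([1, 1]);
      // Gradient wrt the exponent is dy * base^exp * log(base), masked to 0
      // where the base is not positive.
      const dexp = tf.grads((b: tf.Tensor, e: tf.Tensor) => tf.pow(b, e))([base, exp], dy)[1];
      // 0 for the negative base (instead of NaN), 2^2 * ln(2) for the positive one.
      expectArraysClose(dexp, [0, 4 * Math.log(2)]);
    });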

@nsthorat (Contributor) left a comment

Reviewed 1 of 1 files at r3.
Reviewable status: :shipit: complete! 1 of 1 approvals obtained

@nsthorat merged commit 1b475ac into tensorflow:master on Jan 4, 2019