
Move sgd to phi #40045

Merged
merged 6 commits into develop from move_sgd_to_phi
Mar 2, 2022
Conversation

@phlrain (Collaborator) commented Mar 1, 2022

PR types

Breaking changes

PR changes

OPs

Describe

move sgd to phi

@paddle-bot-old (bot) commented Mar 1, 2022

Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@chenwhql (Contributor) left a comment

A few minor details here; they can also be addressed in a follow-up PR.

multi_precision ? master_param->data<MPDType>() : nullptr;
MPDType* master_out_data =
multi_precision
? master_param_out->mutable_data<MPDType>(dev_ctx.GetPlace())

Change mutable_data to Alloc.
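The suggestion maps onto phi's DeviceContext allocation interface. A minimal sketch of the change, assuming the kernel body shown in the diff context above (dev_ctx, multi_precision, and master_param_out come from the surrounding code; the exact call site may differ):

```cpp
// Before: allocate via the tensor's mutable_data call.
MPDType* master_out_data =
    multi_precision
        ? master_param_out->mutable_data<MPDType>(dev_ctx.GetPlace())
        : nullptr;

// After (sketch): allocate through the phi device context instead.
MPDType* master_out_data =
    multi_precision ? dev_ctx.template Alloc<MPDType>(master_param_out)
                    : nullptr;
```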

grad.data<T>(),
learning_rate.data<T>(),
param.numel(),
param_out->mutable_data<T>(dev_ctx.GetPlace()),

Same as above.

multi_precision ? master_param->data<MPDType>() : nullptr;
MPDType* master_out_data =
multi_precision
? master_param_out->mutable_data<MPDType>(dev_ctx.GetPlace())

Same as above.

}

template <typename T>
void sgd_dense_param_sparse_grad_impl(const DenseTensor& param,

Per the coding standard, these internal functions should all use CamelCase naming. The original author may not have followed the convention; it would be good to fix that in passing.
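Under that convention, the rename would look roughly like this (signature abbreviated; the name SGDDenseParamSparseGradImpl is illustrative, not taken from the PR):

```cpp
// Before: snake_case internal helper.
template <typename T>
void sgd_dense_param_sparse_grad_impl(const DenseTensor& param, ...);

// After (sketch): CamelCase, per the coding standard.
template <typename T>
void SGDDenseParamSparseGradImpl(const DenseTensor& param, ...);
```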

@zyfncg (Contributor) left a comment

Should the optimizer-related SelectedRows kernels be placed under the selected_rows directory, following the current convention?
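For reference, under that convention the SelectedRows variant would sit in its own subdirectory rather than next to the dense kernels. A rough layout (paths illustrative, based on the general phi kernel tree; the actual structure may differ):

```
paddle/phi/kernels/sgd_kernel.h                 // kernel declarations
paddle/phi/kernels/cpu/sgd_kernel.cc            // dense CPU kernel
paddle/phi/kernels/gpu/sgd_kernel.cu            // dense GPU kernel
paddle/phi/kernels/selected_rows/sgd_kernel.cc  // SelectedRows variant
```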

Comment on lines +77 to +82
// do check here
// if (multi_precision) {
// bool has_master =
// ctx.HasInput("MasterParam") && ctx.HasOutput("MasterParamOut");

// }
@zyfncg (Contributor) commented Mar 2, 2022

Can these comments be deleted now?

Comment on lines +114 to +119
// do some check here
// if (multi_precision) {
// bool has_master =
// ctx.HasInput("MasterParam") && ctx.HasOutput("MasterParamOut");

// }

Same as above.

@phlrain phlrain merged commit f3d54e2 into develop Mar 2, 2022
@phlrain phlrain deleted the move_sgd_to_phi branch March 21, 2022 14:02

3 participants