Add data transform fn #6953
Conversation
paddle/framework/data_transform.cc
Outdated
    data_transform_map = new DataTransformFnMap();
  }
  return *data_transform_map;
}
Following is a thread-safe singleton:
DataTransformFnMap& DataTransformFnMap::Instance() {
  static DataTransformFnMap inst;
  return inst;
}
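(In C++11 and later, initialization of a function-local static is guaranteed to run exactly once even under concurrent calls, so this Meyers-singleton pattern is thread-safe without explicit locking.)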
done
namespace paddle {
namespace framework {

using DataTransformFN =
DataTransform should not take only Tensor as its argument type. What about SelectedRows? LoDTensor? vector<Scope>? It should take Variable as its argument.
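For illustration, a rough sketch of what a Variable-based signature could look like (the std::function wrapper and the device-context parameter here are assumptions, not necessarily the final code):

using DataTransformFN =
    std::function<void(const platform::DeviceContext& ctx,
                       const Variable& in, Variable* out)>;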
ok
done
paddle/framework/data_transform.h
Outdated
  size_t left_hasher = kernel_type_hasher(kernel_pair.first) << 1;
  size_t right_hasher = kernel_type_hasher(kernel_pair.second);
  std::hash<size_t> hasher;
  return hasher(left_hasher + right_hasher);
A slight concern about the correctness of the hash calculation.
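For example (with made-up hash values, just to illustrate the concern): if kernel_type_hasher returns 1 and 4 for one pair and 2 and 2 for another, then (1 << 1) + 4 == 6 == (2 << 1) + 2, so two distinct kernel pairs collide before std::hash<size_t> is even applied.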
OK, I will find a better way to do the hash.
HashCombine fits well here.
HashCombine mixes the different hash values while avoiding easy collisions, as @QiJune showed in a previous PR: https://stackoverflow.com/questions/2590677/how-do-i-combine-hash-values-in-c0x
thanks, done
LGTM
                       const Variable& in, Variable* out)>;
using KernelTypePair = std::pair<OpKernelType, OpKernelType>;

static void hash_combine(std::size_t& seed, const OpKernelType& t) {
Code style is:
template <class T>
inline void HashCombine(const T& v, std::size_t* seed) {
  std::hash<T> hasher;
  *seed ^= hasher(v) + 0x9e3779b9 + (*seed << 6) + (*seed >> 2);
}
We can fix it later.
fix: #6823
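For reference, a minimal sketch (mine, not code from this PR) of how HashCombine could replace the shift-and-add scheme when hashing a KernelTypePair, reusing the kernel_type_hasher from the diff above:

struct KernelTypePairHash {
  size_t operator()(const KernelTypePair& pair) const {
    std::size_t seed = 0;
    // Mix each kernel type's hash into the running seed; the asymmetric
    // mixing in HashCombine keeps (A, B) and (B, A) from colliding trivially.
    HashCombine(kernel_type_hasher(pair.first), &seed);
    HashCombine(kernel_type_hasher(pair.second), &seed);
    return seed;
  }
};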