auto data sync between different devices #6549
Comments
After discussion, there are several ways to implement this, so I will do a survey on CUDA Unified Memory first. (#6549)
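As a point of reference for that survey, here is a minimal CUDA Unified Memory sketch (plain CUDA, not Paddle code): a single buffer allocated with `cudaMallocManaged` is addressable from both host and device, so no explicit host/device copy operators are needed for it.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that touches the managed buffer from the device side.
__global__ void add_one(float* data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] += 1.0f;
}

int main() {
  const int n = 1024;
  float* data = nullptr;

  // One allocation, visible to both CPU and GPU; the driver migrates pages on demand.
  cudaMallocManaged(&data, n * sizeof(float));

  for (int i = 0; i < n; ++i) data[i] = 0.0f;   // host write
  add_one<<<(n + 255) / 256, 256>>>(data, n);   // device write through the same pointer
  cudaDeviceSynchronize();                      // wait before the host reads again

  printf("data[0] = %f\n", data[0]);            // expect 1.0
  cudaFree(data);
  return 0;
}
```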
I think this is important, and it should not be limited to this one approach; I hope it can reserve an interface so that developers can implement this themselves.
@tensor-tang Cool, that is a good suggestion!
The memory of MKLDNN is different from Paddle's; you can refer to /~https://github.com/PaddlePaddle/Paddle/tree/develop/doc/design/mkldnn#layers
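To make the point about reserving an interface concrete, below is a hypothetical sketch (not Paddle's actual API; the names `Converter` and `ConverterRegistry` are made up for illustration) of a registry through which a backend such as MKLDNN could plug in its own memory-conversion routine, instead of the framework hard-coding one copy strategy.

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical placeholder for a tensor; Paddle's real type differs.
struct Tensor { /* data pointer, dims, place, layout, ... */ };

// A conversion routine copies or transforms a tensor between two memory "kinds"
// (e.g. CPU, GPU, MKLDNN layout). Backends register their own implementations.
using Converter = std::function<void(const Tensor& src, Tensor* dst)>;

class ConverterRegistry {
 public:
  void Register(const std::string& from, const std::string& to, Converter fn) {
    table_[{from, to}] = std::move(fn);
  }
  const Converter& Get(const std::string& from, const std::string& to) const {
    auto it = table_.find({from, to});
    if (it == table_.end()) throw std::runtime_error("no converter registered");
    return it->second;
  }

 private:
  std::map<std::pair<std::string, std::string>, Converter> table_;
};
```

For example, an MKLDNN plugin could register a ("CPU", "MKLDNN") layout-reorder routine, while the core framework registers ("CPU", "GPU") with a plain device memcpy.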
project: #6403
In a sequence of operators, when some of them run on the GPU and some can only run on the CPU, we need to automatically insert operators that copy data between host memory and device memory, so that the whole graph can run in a multi-device environment.
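A minimal sketch of that idea, using hypothetical names (`OpDesc`, `Place`, `InsertMemcpyOps`) rather than Paddle's real IR: walk the operator list and, whenever an operator reads a variable that currently lives on a different place, insert a `memcpy` operator and rewire the reader to the copied variable.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical IR: each op declares where it runs and which variables it reads/writes.
enum class Place { kCPU, kGPU };

struct OpDesc {
  std::string type;
  Place place;
  std::vector<std::string> inputs;
  std::vector<std::string> outputs;
};

// Insert "memcpy" ops so every input is available on the place of the op that reads it.
std::vector<OpDesc> InsertMemcpyOps(const std::vector<OpDesc>& ops) {
  std::unordered_map<std::string, Place> var_place;  // where each variable currently lives
  std::vector<OpDesc> result;

  for (OpDesc op : ops) {  // take a copy so input names can be rewritten
    for (std::string& in : op.inputs) {
      auto it = var_place.find(in);
      if (it != var_place.end() && it->second != op.place) {
        // Producer and consumer disagree: add a copy op writing a renamed variable.
        std::string copied = in + (op.place == Place::kGPU ? "@GPU" : "@CPU");
        if (var_place.find(copied) == var_place.end()) {
          result.push_back(OpDesc{"memcpy", op.place, {in}, {copied}});
          var_place[copied] = op.place;
        }
        in = copied;  // the op now reads the copy that lives on its own place
      }
    }
    for (const std::string& out : op.outputs) var_place[out] = op.place;
    result.push_back(op);
  }
  return result;
}
```

For instance, if a `mul` op runs on the GPU but its input `W` was produced by a CPU-only op, the pass emits a `memcpy` producing `W@GPU` and rewires `mul` to read `W@GPU`. A real pass would also record source and destination places on the copy op and handle the GPU-to-CPU direction symmetrically.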
Some problems remain to be discussed.