Scalar indexing when running UDE on GPU #832
I'm trying to run the following MWE of a UDE with CUDA:

I get the following error, despite not seeing where I'm doing any scalar indexing when solving the UDE.

Comments
It's in the adjoint handling of the mutation. Do:

    function ude(u, p, t, q)
        knownPred = Lux.gpu(knownDynamics(u, nothing, q))
        nnPred = Lux.gpu(first(neuralNetwork(u, p, st)))
        knownPred .+ nnPred
    end

You weren't even mutating.
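For context, a minimal sketch of how a non-mutating right-hand side like this could be hooked up to a GPU solve. The solver choice, the closure over q, and the initial condition are illustrative assumptions, not taken from this thread:

    using OrdinaryDiffEq, CUDA

    # `ude`, `p`, and `st` as in the comment above; `q` is a known parameter and
    # `p` is assumed to already be a GPU-compatible parameter object.
    q     = 0.5f0
    u0    = CUDA.cu(Float32[1.0, 0.5])   # keep the state on the GPU
    tspan = (0.0f0, 1.0f0)

    prob = ODEProblem((u, ps, t) -> ude(u, ps, t, q), u0, tspan, p)
    sol  = solve(prob, Tsit5(), saveat = 0.1f0)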
Thanks for your prompt response! There seems to be a hidden Float32 when doing that. I know from running NODEs that the neural network gives a Float64, so what else could it be?
It's the output of Lux.gpu(...); use convert(CuArray, ...) instead.
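In other words, a sketch of the same ude with the Lux.gpu calls swapped for convert (not verified against the original code; convert(CuArray, x) moves x to the GPU without changing its element type, which avoids the hidden Float32 mentioned above):

    using CUDA

    function ude(u, p, t, q)
        # convert keeps the element type (e.g. Float64 stays Float64), while
        # still returning a CuArray so the broadcast below stays on the GPU.
        knownPred = convert(CuArray, knownDynamics(u, nothing, q))
        nnPred    = convert(CuArray, first(neuralNetwork(u, p, st)))
        knownPred .+ nnPred
    end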
Thank you, converting everything to CuArray ended up solving some other issues I've had when running on GPUs as well!
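For completeness, "converting everything" would cover the parameters and the initial condition as well as the right-hand-side outputs. A rough sketch with an assumed two-state network, since the original MWE is not shown in this thread:

    using Lux, CUDA, ComponentArrays, Random

    rng = Random.default_rng()

    # Hypothetical architecture; the real one isn't shown above.
    neuralNetwork = Lux.Chain(Lux.Dense(2 => 16, tanh), Lux.Dense(16 => 2))
    ps, st = Lux.setup(rng, neuralNetwork)

    # Keep the parameters and the initial state on the GPU so no CPU array ends
    # up inside the solve, which is a common way to hit scalar indexing.
    # (`Lux.gpu` as used in this thread; newer Lux versions use `gpu_device()`.)
    p  = ComponentArray(ps) |> Lux.gpu
    u0 = convert(CuArray, Float32[1.0, 0.5])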