Apologies if this is a very basic question: before all the changes in the DiffEqFlux.jl library, one could specify different adjoint methods (in the solve call for the neural ODE), which were then used by sciml_train (my understanding of how that worked is quite rudimentary). But now optimization is done via Optimization.jl, and changing the adjoint methods in solve seems to do nothing.
I haven't benchmarked this properly (so it might just be my imagination), but the current adtype options in Optimization.jl feel slower than when I was using sciml_train with the adjoint options. Is it still possible to specify the adjoint methods used in Optimization.jl's gradient calculations in some way, or has that been removed for some reason (I don't mean only in DiffEqFlux.jl)?
Also, sorry if this isn't the right repo for this; I wasn't sure whether to ask in Optimization.jl or here, since this is the original library I had used. Many thanks for the great libraries!
> I haven't benchmarked this properly (so it might just be my imagination), but the current adtype options in Optimization.jl feel slower than when I was using sciml_train with the adjoint options.
It's the same code. Basically, sciml_train was just moved out to become a fully documented package.
> Is it still possible to specify the adjoint methods used in Optimization.jl's gradient calculations in some way, or has that been removed for some reason (I don't mean only in DiffEqFlux.jl)?
Yes, same syntax (sensealg). It's documented in the docstrings:
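For reference, a minimal sketch (not taken from this thread, and using a toy Lotka-Volterra problem rather than a neural ODE) of where the two choices live now: sensealg is passed to the solve call inside the loss and governs the ODE adjoint, while the adtype handed to Optimization.jl controls how the loss itself is differentiated.

```julia
using OrdinaryDiffEq, SciMLSensitivity, Optimization, OptimizationOptimisers, Zygote

function lotka!(du, u, p, t)
    du[1] =  p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

u0 = [1.0, 1.0]
p0 = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka!, u0, (0.0, 10.0), p0)
data = Array(solve(prob, Tsit5(), saveat = 0.1))  # synthetic "data" just for this sketch

function loss(p)
    # The sensealg chosen here is what SciMLSensitivity uses for the adjoint,
    # provided the outer AD (the adtype below) is reverse mode, e.g. AutoZygote.
    sol = solve(prob, Tsit5(); p = p, saveat = 0.1,
                sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()))
    sum(abs2, Array(sol) .- data)
end

# The adtype here determines how Optimization.jl differentiates the loss.
optf = OptimizationFunction((p, _) -> loss(p), Optimization.AutoZygote())
optprob = OptimizationProblem(optf, p0)
res = solve(optprob, OptimizationOptimisers.Adam(0.05); maxiters = 100)
```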
Ah, many thanks! I hadn't read the SciMLSensitivity docs, and it hadn't clicked for me that AutoForwardDiff would override/ignore sensealg; switching to AutoZygote was the key.
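To make the fix concrete, a hypothetical before/after of the adtype choice (reusing the loss from the sketch above): with forward-mode AD the solver is differentiated directly, so the continuous adjoint requested via sensealg never comes into play, whereas AutoZygote routes through SciMLSensitivity's reverse-mode rules and respects it.

```julia
# Assumed names from the sketch above; only the adtype changes.
optf_forward = OptimizationFunction((p, _) -> loss(p), Optimization.AutoForwardDiff())  # sensealg effectively ignored
optf_reverse = OptimizationFunction((p, _) -> loss(p), Optimization.AutoZygote())       # sensealg used for the ODE adjoint
```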