ERROR: MethodError: no method matching copy #770

Closed
wocaishiniliu opened this issue Oct 6, 2022 · 2 comments
wocaishiniliu commented Oct 6, 2022

My question

I am sure that the error comes from push!(θs, copy(θ)).
Maybe I should use the new version, but I don't know what the corresponding code should be.
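
The error also reproduces in isolation, so it seems copy simply has no NamedTuple method in this environment (a minimal check; the NamedTuple below just mimics one Lux layer's parameters):

julia> θ = (; weight = rand(Float32, 2, 2), bias = rand(Float32, 2, 1))
julia> copy(θ)
ERROR: MethodError: no method matching copy(::NamedTuple{(:weight, :bias), ...})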

Here is the error:

┌ Warning: sciml_train is being deprecated in favor of direct usage of Optimization.jl. Please consult the Optimization.jl documentation for more details. Optimization.jl's PolyOpt solver is the polyalgorithm of sciml_train
└ @ DiffEqFlux C:\Users\a  li\.julia\packages\DiffEqFlux\Em1Aj\src\train.jl:6
┌ Warning: AD methods failed, using numerical differentiation. To debug, try ForwardDiff.gradient(loss, θ) or Zygote.gradient(loss, θ)
└ @ DiffEqFlux C:\Users\a  li\.julia\packages\DiffEqFlux\Em1Aj\src\train.jl:24
ERROR: MethodError: no method matching copy(::NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}}})
Closest candidates are:
  copy(::Union{SubArray{T, N, <:PooledArrays.PooledArray{T, R}}, PooledArrays.PooledArray{T, R, N}} where {T, N, R}) at C:\Users\a  li\.julia\packages\PooledArrays\DXlaI\src\PooledArrays.jl:227
  copy(::Union{TransparentColor{C, T}, C} where {T, C<:Union{AbstractRGB{T}, AbstractGray{T}}}) at C:\Users\a  li\.julia\packages\ColorVectorSpace\bhkoO\src\ColorVectorSpace.jl:250
  copy(::DataStructures.SortedMultiDict) at C:\Users\a  li\.julia\packages\DataStructures\59MD0\src\sorted_multi_dict.jl:388
  ...
Stacktrace:
 [1] __solve(prob::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoFiniteDiff{Val{:forward}, Val{:forward}, Val{:hcentral}}, DiffEqFlux.var"#93#100"{var"#loss#115"{Matrix{Float64}, var"#predict#114"{NeuralODE{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, Nothing, Nothing, Tuple{Float64, Float64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{Vector{Float64}, Float64, Float64}}}}, Vector{Float64}}}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::Flux.Optimise.Optimiser, data::Base.Iterators.Cycle{Tuple{Optimization.NullData}}; maxiters::Int64, callback::Function, progress::Bool, save_best::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ OptimizationFlux C:\Users\a  li\.julia\packages\OptimizationFlux\cpWyO\src\OptimizationFlux.jl:19
 [2] solve(::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoFiniteDiff{Val{:forward}, Val{:forward}, Val{:hcentral}}, DiffEqFlux.var"#93#100"{var"#loss#115"{Matrix{Float64}, var"#predict#114"{NeuralODE{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, Nothing, Nothing, Tuple{Float64, Float64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{Vector{Float64}, Float64, Float64}}}}, Vector{Float64}}}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Flux.Optimise.Optimiser; kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:maxiters, :callback), Tuple{Int64, var"#120#121"{Vector{Any}, Vector{Any}}}}})
   @ SciMLBase C:\Users\a  li\.julia\packages\SciMLBase\mOGJz\src\solve.jl:71
 [3] sciml_train(::var"#loss#115"{Matrix{Float64}, var"#predict#114"{NeuralODE{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, Nothing, Nothing, Tuple{Float64, Float64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{Vector{Float64}, Float64, Float64}}}}, Vector{Float64}}}, ::NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}}}, ::Flux.Optimise.Optimiser, ::Nothing; lower_bounds::Nothing, upper_bounds::Nothing, cb::Function, callback::Function, maxiters::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ DiffEqFlux C:\Users\a  li\.julia\packages\DiffEqFlux\Em1Aj\src\train.jl:43
 [4] train_one_round(node::NeuralODE{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(swish), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, Nothing, Nothing, Tuple{Float64, Float64}, Tuple{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}}, Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{Vector{Float64}, Float64, Float64}}}}, θ::NamedTuple{(:layer_1, :layer_2, :layer_3), Tuple{NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}, NamedTuple{(:weight, :bias), Tuple{Matrix{Float32}, Matrix{Float32}}}}}, y::Matrix{Float64}, opt::Flux.Optimise.Optimiser, maxiters::Int64, y0::Vector{Float64}; kwargs::Base.Pairs{Symbol, var"#120#121"{Vector{Any}, Vector{Any}}, Tuple{Symbol}, NamedTuple{(:cb,), Tuple{var"#120#121"{Vector{Any}, Vector{Any}}}}})
   @ Main .\REPL[197]:10
 [5] top-level scope
   @ REPL[211]:1

Here is the code after my changes:

using DataFrames, CSV
delhi_train = CSV.read("E:\\julia_scipts\\DailyDelhiClimateTrain.csv", DataFrame)
delhi_test = CSV.read("E:\\julia_scipts\\DailyDelhiClimateTest.csv", DataFrame)
delhi = vcat(delhi_train, delhi_test)

using Statistics, Dates
using Base.Iterators: take, cycle

delhi[:, :year] = Float64.(Dates.year.(delhi[:, :date]))
delhi[:, :month] = Float64.(Dates.month.(delhi[:, :date]))
df_mean = combine(groupby(delhi, [:year, :month]), [:meantemp, :humidity, :wind_speed, :meanpressure] .=> mean)
rename!(df_mean, [:year, :month, :meantemp,
    :humidity, :wind_speed, :meanpressure])

df_mean[!, :date] .= df_mean[:, :year] .+ df_mean[:, :month] ./ 12;

features = [:meantemp, :humidity, :wind_speed, :meanpressure]

t = df_mean[:, :date] |>
    t -> t .- minimum(t) |>
         t -> reshape(t, 1, :)#1×52 Matrix

y = df_mean[:, features] |>
    y -> Matrix(y)' |>
         y -> (y .- mean(y, dims = 2)) ./ std(y, dims = 2)#4×52 Matrix

T = 20
train_dates = df_mean[1:T, :date]
test_dates = df_mean[T+1:end, :date]
train_t, test_t = t[1:T], t[T:end]
train_y, test_y = y[:, 1:T], y[:, T:end];



using Lux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL, Random, Plots

function neural_ode(t, data_dim; saveat = t)
    f = Lux.Chain(Lux.Dense(data_dim, 64, swish),
        Lux.Dense(64, 32, swish),
        Lux.Dense(32, data_dim))
    rng = Random.default_rng()    
    global p, st = Lux.setup(rng, f)
    node = NeuralODE(f, (minimum(t), maximum(t)), Tsit5(),
        saveat = saveat, abstol = 1e-9,
        reltol = 1e-9)
    return p, st, node
end

function train_one_round(node, θ, y, opt, maxiters,
    y0 = y[:, 1]; kwargs...)
    predict(θ) = Array(node(y0, p, st)[1])
    loss(θ) = begin
        ŷ = predict(θ)
        Flux.mse(ŷ, y)
    end

    θ = θ == nothing ? p : θ
    res = DiffEqFlux.sciml_train(
        loss, θ, opt,
        maxiters = maxiters;
        kwargs...
    )
    return res.minimizer
end
function train(θ = nothing, maxiters = 150, lr = 1e-2)

    log_results(θs, losses) =
        (θ, loss) -> begin
            push!(θs, copy(θ))
            push!(losses, loss)
            false
        end

    θs, losses = [], []
    num_obs = 4:4:length(train_t)
    for k in num_obs
        p, st, node = neural_ode(train_t[1:k], size(y, 1))
        θ = train_one_round(
            node, θ, train_y[:, 1:k],
            AdamW(lr), maxiters;
            cb = log_results(θs, losses)
        )
    end
    θs, losses
end

Random.seed!(1)
θs, losses = train();
avik-pal (Member) commented Oct 6, 2022

You are trying to copy a NamedTuple, for which copy is not defined. You need to use Functors for that:

using Functors
nt = (; a = randn(10), b = randn(2))
fmap(copy, nt)

Also, Lux guarantees that parameters are not updated in-place, so unless sciml_train does an in-place update of the θs (I don't remember what it does), you shouldn't need the copy at all.
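
Applied to the callback in the posted code, the fix would look something like this (a minimal sketch; log_results is the helper from the code above):

using Functors

log_results(θs, losses) =
    (θ, loss) -> begin
        push!(θs, fmap(copy, θ))  # fmap applies copy to every array leaf of the NamedTuple
        push!(losses, loss)
        false
    end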

ChrisRackauckas (Member) commented
It doesn't do an in-place update of the θs, so the copy isn't necessary. But indeed this is unrelated to the package and has the two fixes mentioned above, so I'll close.
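
For reference, the direct Optimization.jl usage that the deprecation warning points to looks roughly like this (a sketch, not a drop-in replacement: it assumes the 2022-era Optimization.jl API, uses ComponentArrays to give the Lux parameter NamedTuple a flat array structure, and refers to the loss, p, and losses from the code above; names like optf and optprob are illustrative):

using Optimization, OptimizationOptimisers, ComponentArrays

θ0 = ComponentArray(p)  # flatten the Lux parameter NamedTuple into an array
optf = OptimizationFunction((θ, _) -> loss(θ), Optimization.AutoZygote())
optprob = OptimizationProblem(optf, θ0)
res = solve(optprob, Optimisers.Adam(1e-2);
            maxiters = 150,
            callback = (θ, l) -> (push!(losses, l); false))
res.u  # the trained parameters (equivalent of res.minimizer)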
