KLU.jl takes twice the time of kvxopt for factorization #8
Comments
I should mention that I'm using KLU.jl 0.4.0. Using 0.3.0 gives the same result.
Thanks! I'll look into this. The allocation might be creating the 0-based vectors, but I'm not sure.
Thanks! That was my suspicion, but the code below does not support it.

function copy_vectors(A)
    # Convert the 1-based CSC index vectors to the 0-based form the C library expects.
    c0 = A.colptr .- 1
    r0 = A.rowval .- 1
    return c0, r0
end

@btime copy_vectors($A)
# 104.424 μs (4 allocations: 1.33 MiB)

Even if the same allocation were repeated ten times, the overhead would only be about 1 ms.
Can you recheck using …
You are right. I should have used …
I will go ahead and close the issue for now. I might benchmark KLU on the same matrix in C over the weekend and will report back.
Keep in mind that …
Great news. I compared …
Fantastic! Thank you for checking that for me. I've never run the same tests on both C and Julia.
Hi,
I'm comparing KLU.jl with kvxopt, a Python package that wraps KLU. For the same sparse matrix of size 19928×19928, KLU.jl takes twice the time for factorization. I'm not sure where the issue is, apart from the megabytes of memory being allocated. I believe resolving this would be helpful, since KLU.jl is used in many applications.
Below is the code in Julia and Python with the timing data from my PC. The data file is also attached.
Julia code:
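A minimal sketch of this kind of timing, assuming the matrix from the attached file has already been loaded into a SparseMatrixCSC named A (the variable name and the loading step are assumptions here, not taken from the report):

using SparseArrays, KLU, BenchmarkTools

# A is assumed to be the 19928×19928 sparse matrix read from the attached data file;
# the loading step is omitted.
# klu(A) performs the full symbolic + numeric KLU factorization.
@btime klu($A)   # $A avoids global-variable overhead in the benchmark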
9241jac.zip - needs to be extracted.