
Panic when releasing write lock #24

Closed
kayleg opened this issue Oct 30, 2019 · 5 comments
Labels
bug Something isn't working

Comments

kayleg commented Oct 30, 2019

From reading the oneshot channel code, this appears to happen when a waiting lock future is dropped before it can be notified. (This is just a guess, and the best I could glean, even though I don't believe I have a case where this would happen in my project.) The current error message and stack trace make it impossible to find out which lock was the offending one.

thread 'tokio-runtime-worker-5' panicked at 'Sender::send: ()', src/libcore/result.rs:1084:5
futures_locks::rwlock::RwLock<T>::unlock_writer 
at ./home/rust/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-locks-0.4.0/src/rwlock.rs:470

I also question whether this is good behavior, as there is no way (without catching the panic) for the crate user to handle that error.
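
For what it's worth, here is a minimal sketch of the failure mode I'm guessing at, using the futures crate's oneshot channel directly rather than futures-locks' internals (so the names here are illustrative, not the crate's own code):

```rust
use futures::channel::oneshot;

fn main() {
    let (tx, rx) = oneshot::channel::<()>();

    // The future that was waiting on the lock gets dropped before it can be
    // notified, which closes the receiving half of its channel...
    drop(rx);

    // ...so the later attempt to notify it fails, and unwrapping that result
    // panics, mirroring the "Sender::send: ()" panic in the backtrace above.
    tx.send(()).expect("Sender::send");
}
```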

kayleg commented Oct 31, 2019

Upon digging into the source, it looks like when any future returned by either .write() or .read() is dropped before gaining the lock, it closes the oneshot receiver but does not remove the corresponding tx from the list of waiters. The tx then makes its way to the front of the queue and send is invoked, but the channel has already been closed.

asomers commented Oct 31, 2019

Ooh, interesting. I suppose it would be too much to ask for the error to be reproducible?

asomers added the bug label Oct 31, 2019
kayleg commented Oct 31, 2019

It occurs in a massive codebase, and randomly too. What I think is going on in my case is that a gRPC client drops its connection to my tower-grpc server, which then cancels the future that was waiting on the lock. If I get some downtime, I'll try to come up with a small reproducible case.

asomers commented Oct 31, 2019

I've got a test case! Thanks for reporting this. I'll have a fix soon, probably by Monday at the latest.

kayleg commented Oct 31, 2019

Thanks!! In my fork I tweaked the unlock code to loop until it finds a viable receiver. It works, but it's definitely not optimal.
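
Roughly, the idea was something like this (a hypothetical sketch of the approach, not the actual futures-locks internals or my fork's exact code): when releasing the lock, keep popping queued waiters and skip any whose receiver has already been dropped, rather than panicking on the first failed send.

```rust
use std::collections::VecDeque;
use futures::channel::oneshot;

/// Hand the lock to the next waiter that is still alive.
fn wake_next_waiter(waiters: &mut VecDeque<oneshot::Sender<()>>) {
    while let Some(tx) = waiters.pop_front() {
        if tx.send(()).is_ok() {
            // Found a live waiter; ownership passes to it.
            return;
        }
        // This waiter's future was dropped (its receiver is closed);
        // skip it and try the next one instead of panicking.
    }
    // No live waiters remain, so the lock simply becomes free.
}
```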

asomers added a commit that referenced this issue Nov 2, 2019
If a Future gets dropped after being polled() but before gaining
ownership of the Mutex or RwLock, a panic would result when the owner
tried to transfer ownership to the dropped Future.  Fix the drop methods
to handle waiting Futures that have already disappeared.

Kudos to Kayle Gishen for reporting and diagnosing the bug

Fixes #24
asomers closed this as completed in 6a89fc3 Nov 4, 2019
asomers added a commit that referenced this issue Nov 4, 2019