How is a timelock attack on forwarding nodes mitigated by lightning? #662
This is indeed a known issue with the current multi-hop payment design.
Some discussions around this issue (back from 2015):
And this issue: #182
Ah, those are some very interesting discussions. Should I close this in favor of #182, then? I have some interesting thoughts that came out of a recent discussion I've been having. I think I might have an interesting solution #4 to add to the list, somewhat along the lines of the "reputation system" that a couple of people mused about. I'll add those thoughts to #182.
Yes please, let's centralize this in #182.
Great, thanks for sharing this!
@fresheneesz another thing to consider is that, in your example, the funds in the path are only timelocked until the expiry of the final hop. After that hop is failed, the remaining hops are settled immediately off-chain all the way back to the sender. More generally, if hops …
@cfromknecht Yes, but those timelocks can be long (hours), right? The idea is to have a way to disincentivize channels that cause those long locks. Also, I'm not sure what you mean by "i and j are delaying" - if node j delays sending the HTLC, then nodes S through j-1 are all locked up. There's no way to lock up the nodes between i and j without also locking up the nodes back to S.
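To make the back-propagation point concrete, here is a minimal sketch (not lnd code; the 40-block per-hop delta and the route shape are illustrative assumptions) of how CLTV deltas accumulate toward the sender, so that a single delaying hop keeps every upstream hop locked up to its own, larger, expiry:

```python
# Hedged sketch: worst-case lockup per hop when one hop withholds settlement.
# CLTV deltas accumulate toward the sender, so earlier hops carry larger
# absolute timelocks. All numbers are illustrative, not protocol constants.

def lockup_exposure(cltv_deltas, delay_at):
    """Return, per hop index, the worst-case number of blocks each hop's
    funds stay locked if the hop at index `delay_at` delays settlement.
    Hops from the sender through the delaying hop cannot settle until the
    delaying hop resolves; hops downstream of it are already settled
    (or never held the HTLC)."""
    n = len(cltv_deltas)
    # Absolute expiry offset at each hop: the sum of all deltas from this
    # hop to the end of the route.
    expiries = [sum(cltv_deltas[i:]) for i in range(n)]
    return [expiries[i] if i <= delay_at else 0 for i in range(n)]

# Example: a 4-hop route where each hop adds a 40-block delta, and the
# third hop (index 2) delays.
print(lockup_exposure([40, 40, 40, 40], delay_at=2))
# → [160, 120, 80, 0]
```

Note how the sender-side hop (index 0) bears the longest lockup even though it did nothing wrong, which is exactly the asymmetry the comment above describes.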
If an attacker pays themselves through honest nodes and the payee node refuses to relay the secret, they can lock up the honest nodes' funds for the timeout period without spending any money. It seems the attacker would only need to be willing to lock up as much of its own capacity as that of the largest-capacity victim node it wants to target. See here for details.
Is this currently mitigated? If not, how could it be?
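The capital-leverage claim above can be sketched with a back-of-the-envelope model (an assumption-laden illustration, not a protocol-level analysis): a self-payment of `amount` routed through n honest hops locks `amount` at every honest hop, while the attacker commits `amount` only on the channels it controls itself:

```python
# Hedged sketch of the griefing-leverage argument. Assumes the attacker
# controls both endpoints of the route and every intermediate hop is an
# honest victim; fees are ignored for simplicity.

def griefing_leverage(amount_sat, n_honest_hops):
    """Ratio of victim capital locked to attacker capital committed."""
    attacker_locked = amount_sat                 # attacker's own outgoing HTLC
    victims_locked = amount_sat * n_honest_hops  # same amount locked per hop
    return victims_locked / attacker_locked

print(griefing_leverage(1_000_000, 10))
# → 10.0  (ten honest hops locked for the price of one HTLC)
```

Under these assumptions the leverage grows linearly with route length, which is why long routes make the attack cheap relative to the damage done, and the attack costs nothing beyond the lockup since no payment actually settles.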