Improve docs of remove_all and remove_prefix #11182
Conversation
These docs didn't mention how the removal works internally. This is important for the user to know, because calling such a method multiple times in the same block always leads to the same result.
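To make the point concrete, here is a small self-contained sketch in plain Rust. It is not the real Substrate API; `Backend`, `Overlay` and `clear_prefix_limited` are made-up stand-ins that only model the behaviour described above: deletions are queued in a per-block overlay, while the limit is only counted against the unchanged backend, so a second call in the same block sees exactly the same keys.

```rust
use std::collections::BTreeMap;

/// Committed state at the start of the block (illustrative stand-in).
type Backend = BTreeMap<Vec<u8>, Vec<u8>>;
/// Per-block changes; `None` marks a key as deleted (illustrative stand-in).
type Overlay = BTreeMap<Vec<u8>, Option<Vec<u8>>>;

/// Hypothetical model of a limited prefix removal: the limit is only applied
/// while iterating the *backend*, the overlay is not consulted, so repeated
/// calls see the exact same keys.
fn clear_prefix_limited(
    backend: &Backend,
    overlay: &mut Overlay,
    prefix: &[u8],
    limit: u32,
) -> u32 {
    let mut removed = 0;
    for key in backend.keys().filter(|k| k.starts_with(prefix)) {
        if removed == limit {
            break;
        }
        overlay.insert(key.clone(), None); // queue the deletion in the overlay
        removed += 1;
    }
    removed
}

fn main() {
    let backend: Backend = (0u8..10).map(|i| (vec![0xAA, i], vec![i])).collect();
    let mut overlay = Overlay::new();

    let first = clear_prefix_limited(&backend, &mut overlay, &[0xAA], 3);
    let second = clear_prefix_limited(&backend, &mut overlay, &[0xAA], 3);

    // Both calls report 3 and mark the *same* three keys, because the second
    // call still iterates the untouched backend.
    assert_eq!(first, 3);
    assert_eq!(second, 3);
    assert_eq!(overlay.len(), 3);
}
```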
Looks like a good idea to stress this fact. Maybe for the frame doc the comment could be shorter (like just `apply the largest limit`).
///
/// Calling this multiple times per block with a `limit` set leads always to the same keys being
/// removed and the same result being returned. This happens because the keys to delete in the
/// overlay are not taken into account when deleting keys in the backend.
Then the note is more about the underlying reason (no persistence of the applied limit in the overlay). I wonder if renaming `limit` to `limit_backend` or `backend_fix_limit` could be a first hint.
Actually I like your comment on sp-io better.
> Actually I like your comment on sp-io better.

Which part exactly? Could you add some suggestions to the PR?
The note that describes the way the overlay is handled:
/// The deletion would always start from `prefix` resulting in the same keys being deleted
/// every time this function is called with the exact same arguments per block. This happens
/// because the keys in the overlay are not taken into account when deleting keys in the
/// backend.
is clearer to me, but that may be because I already know the problem.
I was thinking a bit about it during lunch, and another way to put it would be: `limit` is only applied over the state as it is at the start of the block, ignoring previous calls, or something like this.
Talked to @shawntabrizi and he said we can continue with the current docs.
/// All values in the client overlay will be deleted, if there is some `limit` then up to
/// `limit` values are deleted from the client backend, if `limit` is none then all values in
/// the client backend are deleted.
///
Could add: `For multiple calls the largest limit is applied.`
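To illustrate the documented split, and why across multiple calls effectively only the largest limit matters, here is a rough model in plain Rust. It is not the actual host implementation; `Backend`, `Overlay` and `clear_prefix` are illustrative stand-ins only.

```rust
use std::collections::BTreeMap;

type Backend = BTreeMap<Vec<u8>, Vec<u8>>; // state at block start (illustrative)
type Overlay = BTreeMap<Vec<u8>, Option<Vec<u8>>>; // block changes, `None` = deleted (illustrative)

/// Illustrative model of the documented behaviour: matching overlay values are
/// always deleted, while `limit` only bounds the iteration over the backend.
fn clear_prefix(backend: &Backend, overlay: &mut Overlay, prefix: &[u8], limit: Option<u32>) -> u32 {
    // 1. All matching values in the client overlay are deleted, regardless of `limit`.
    for (key, value) in overlay.iter_mut() {
        if key.starts_with(prefix) {
            *value = None;
        }
    }
    // 2. Up to `limit` values are deleted from the client backend
    //    (all of them if `limit` is `None`).
    let mut deleted = 0;
    for key in backend.keys().filter(|k| k.starts_with(prefix)) {
        if limit.map_or(false, |l| deleted >= l) {
            break;
        }
        overlay.insert(key.clone(), None);
        deleted += 1;
    }
    deleted
}

fn main() {
    let backend: Backend = (0u8..100).map(|i| (vec![0xAA, i], vec![i])).collect();
    let mut overlay = Overlay::new();

    clear_prefix(&backend, &mut overlay, &[0xAA], Some(10));
    clear_prefix(&backend, &mut overlay, &[0xAA], Some(30));

    // Both calls start from the same backend, so the overall effect per block
    // is that of the largest limit: 30 keys end up marked as deleted.
    assert_eq!(overlay.values().filter(|v| v.is_none()).count(), 30);
}
```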
Is there any way to change this behavior? It is rather counter-intuitive now.
You would need to manage prefix removal progress from the state machine change overlay. But things like remove at key "a" then remove at key "ab" with limits would look like a nightmare. The general solution, as I see it, is to handle prefix removal the same way we add or remove a value: it needs to be in the overlay, and then the calculation of the storage root from the runtime (and in the trie crate) needs to have this prefix removal in the delta without producing the change set immediately (querying the change set would need to happen either at pruning or in an async way in client/db). I remember having implemented pieces of it, and the bad things are: it needs a new trie API to detach/remove a prefix, and it breaks the substrate design (trie-related queries of prefix nodes leak into client code).
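Purely to visualise the idea above, a very rough sketch of what such an overlay delta could look like; this is entirely hypothetical, none of these types exist in Substrate as written:

```rust
use std::collections::BTreeMap;

/// Hypothetical overlay entry: plain writes and single-key deletes.
enum OverlayChange {
    Set(Vec<u8>), // write this value
    Delete,       // delete this single key
}

/// Hypothetical per-block change set that carries prefix removals as a delta
/// instead of materialising the deleted keys up front.
struct BlockChanges {
    per_key: BTreeMap<Vec<u8>, OverlayChange>,
    /// Prefixes (optionally limited) to detach/remove when the storage root is
    /// calculated; the concrete change set would only be produced at pruning
    /// or asynchronously in client/db.
    removed_prefixes: Vec<(Vec<u8>, Option<u32>)>,
}

impl BlockChanges {
    /// Reads would have to consult the recorded prefix removals too. With a
    /// limit set we cannot know statically whether a given key fell inside it,
    /// which is where "remove at `a`, then remove at `ab` with limits" gets hairy.
    fn is_deleted(&self, key: &[u8]) -> bool {
        matches!(self.per_key.get(key), Some(OverlayChange::Delete))
            || self
                .removed_prefixes
                .iter()
                .any(|(p, limit)| limit.is_none() && key.starts_with(p))
    }
}
```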
We could change the implementation on the host side to take into account the keys in the overlay and not let them count towards the limit when we iterate over them in the backend. However, that would mean the more often you call the method, the more costly it gets. For parachains that could also be bad, for example because it could happen that you are required to include data in the proof which was set.

I thought about doing the changes on the client side before changing the documentation. I'm not really sold on anything, but if we change the host side it may result in assuming too much about the runtime side and we break something else. This https://substrate.stackexchange.com/questions/1646/multiple-remove-all-calls-in-same-block post brought this up here.
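For illustration only, the host-side change discussed here could look roughly like the following plain-Rust sketch (not the real implementation; `Backend` and `Overlay` are made-up stand-ins). Keys already deleted in the overlay are skipped without counting against the limit, which is also why each additional call has to walk, and for parachains potentially prove, all previously deleted keys again.

```rust
use std::collections::BTreeMap;

type Backend = BTreeMap<Vec<u8>, Vec<u8>>; // state at block start (illustrative)
type Overlay = BTreeMap<Vec<u8>, Option<Vec<u8>>>; // block changes, `None` = deleted (illustrative)

/// Hypothetical overlay-aware variant: keys already deleted in the overlay do
/// not count towards `limit`, so a second call makes progress -- but it has to
/// walk all previously deleted keys again before reaching new ones.
fn clear_prefix_overlay_aware(
    backend: &Backend,
    overlay: &mut Overlay,
    prefix: &[u8],
    limit: u32,
) -> u32 {
    let mut deleted = 0;
    for key in backend.keys().filter(|k| k.starts_with(prefix)) {
        if matches!(overlay.get(key.as_slice()), Some(None)) {
            // Already deleted in this block's overlay: skip without counting.
            continue;
        }
        if deleted == limit {
            break;
        }
        overlay.insert(key.clone(), None);
        deleted += 1;
    }
    deleted
}
```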