feat!: xcm transfer rate limit pallet #561
Conversation
…r duration calculated
I saw a lot of tests about the deferring process, but none about processing/rejecting deferred messages.
@@ -133,6 +133,10 @@ impl cumulus_pallet_xcmp_queue::Config for Runtime {
	type ControllerOriginConverter = XcmOriginToCallOrigin;
	type PriceForSiblingDelivery = ();
	type WeightInfo = weights::xcmp_queue::HydraWeight<Runtime>;
	type ExecuteDeferredOrigin = EnsureRoot<AccountId>;
So we'll have to create a referendum every time a single message is deferred?
We will have to create a referendum every time we want to execute or discard deferred messages before their execution time.
Correction: we will have to create a referendum if we want to discard deferred messages, or if we want to execute deferred messages that were not executed automatically. The latter can happen if a message is too heavy to be processed on_idle and there are no incoming XCMs for that parachain.
I would expect that to be quite rare, but I'm of course open to changing the configuration.
Thinking about it further: if we expect to need to discard messages with short deferral periods, this might be troublesome and we might have to devise another mechanism, e.g. a less privileged origin that can extend the deferral of deferred messages to a time by which governance can properly react.
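As a hedged sketch only: if the forked xcmp-queue keeps the `ExecuteDeferredOrigin` associated type shown in the diff above, such a less privileged origin could be wired in alongside Root. `TechnicalCollective` is a placeholder for whatever collective instance the runtime defines, and the exact generics of `EnsureProportionAtLeast` depend on the Substrate version in use.

```rust
use frame_support::traits::EitherOfDiverse;
use frame_system::EnsureRoot;

// Illustrative only: allow either Root or a two-thirds technical-committee
// majority to act on deferred messages, instead of Root alone.
type ExecuteDeferredOrigin = EitherOfDiverse<
	EnsureRoot<AccountId>,
	pallet_collective::EnsureProportionAtLeast<AccountId, TechnicalCollective, 2, 3>,
>;
```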
Do check out the PR in cumulus that has tests for this, but I can add an integration test about execution as well.
};
// ... or that don't have a rate limit configured.
let Some(limit_per_duration) = T::RateLimitFor::get(&asset_id) else {
	total_weight.saturating_accrue(T::DbWeight::get().reads(2));
I assume that one of the reads here refers to RateLimitFor::get().
Have you considered that e.g. T::RateLimitFor::get() can do more than one read?
It is generic, so the provided implementation might do even more reads.
Yes, we considered it and decided against doing the benchmarking to ship faster.
Happy to document it more obviously if you want.
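For reference, a minimal sketch of the kind of single-read implementation the hard-coded `reads(2)` above assumes, given that `T::RateLimitFor::get(&asset_id)` looks like a `GetByKey`-style lookup. `XcmRateLimits`, `AssetId`, `Balance` and `Runtime` are illustrative placeholders, not the actual asset-registry API; any heavier implementation would need the constant revisited or properly benchmarked.

```rust
use orml_traits::GetByKey;

pub struct RateLimitFromRegistry;

impl GetByKey<AssetId, Option<Balance>> for RateLimitFromRegistry {
	fn get(asset_id: &AssetId) -> Option<Balance> {
		// exactly one DB read; anything heavier invalidates the assumption above
		XcmRateLimits::<Runtime>::get(asset_id)
	}
}
```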
.saturating_add(T::DbWeight::get().reads(104 as u64))
.saturating_add(T::DbWeight::get().writes(103 as u64))
100 reads/writes? That just looks suspicious.
Yeah, the worst case is quite terrible: it happens when all messages are overweight and involves a write for every message.
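Purely as an illustration of how such a bound could be assembled (this is a guess, not the PR's actual benchmark derivation): a few fixed bookkeeping operations plus one read and one write per message, with a worst case of roughly 100 overweight messages, would land near the 104/103 constants above.

```rust
use frame_support::weights::{RuntimeDbWeight, Weight};

// Assumed worst-case message count; not taken from the PR.
const WORST_CASE_MESSAGES: u64 = 100;

fn worst_case_overhead(db: RuntimeDbWeight) -> Weight {
	// a handful of fixed operations plus one read and one write per message
	db.reads(4 + WORST_CASE_MESSAGES)
		.saturating_add(db.writes(3 + WORST_CASE_MESSAGES))
}
```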
What impact will this have on the cost of a transfer?
I guess this function is called by the normal XCM inherent as well; do we account for this extra weight consumed?
The XCMP queue actually does not account for the weight of storing overweight messages.
So we're doing no worse than the previous implementation, but it's not great, yeah.
What impact will this have on the cost of a transfer?
I don't think it will have any? These XCM processing costs are eaten by the chain.
Integration of galacticcouncil/cumulus#3 into HydraDX.
Provides a pallet that implements a rate limit for tokens coming in via XCM; the limit can be configured in the asset registry.
Should not change runtime logic, as the deferrer for the xcmp pallet is set to ().
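As a rough illustration of the intended behaviour (not the pallet's exact code): when the inflow of an asset accumulated within one tracking window exceeds its configured limit, the incoming message gets deferred proportionally to the overshoot. All names and the formula itself are assumptions for this sketch.

```rust
// Sketch only: defer an incoming transfer by a number of blocks proportional
// to how far the accumulated inflow overshoots the configured limit within
// one window of `defer_duration` blocks.
fn deferral_blocks(accumulated: u128, limit: u128, defer_duration: u32) -> u32 {
	if limit == 0 || accumulated <= limit {
		return 0;
	}
	let excess = accumulated - limit;
	// blocks = excess / limit * defer_duration, capped at u32::MAX
	excess
		.saturating_mul(defer_duration as u128)
		.checked_div(limit)
		.unwrap_or(0)
		.min(u32::MAX as u128) as u32
}
```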
Limitations

- Uses the MultiLocations of incoming messages without reanchoring or anything. Might be worth checking whether that's an issue.
- Only tracks ReserveAssetDeposited and ReceiveTeleportedAsset, meaning that HDX tokens "returning" from other chains are not tracked or limited.

Todos

- [ ] add integration test? (requires galacticcouncil/warehouse#200)