
feat!: xcm transfer rate limit pallet #561

Merged · 88 commits · Jul 14, 2023
Conversation

@jak-pan (Contributor) commented Apr 24, 2023:

Integration of galacticcouncil/cumulus#3 into HydraDX.
Provides a pallet that implements a rate limit, configurable in the asset registry, for tokens coming in via XCM.
Should not change runtime logic, as the deferrer for the XCMP pallet is set to `()`.

Limitations

  • Counts accumulated amounts via the MultiLocations of incoming messages, without reanchoring. It might be worth checking whether that's an issue.
  • Only processes XCMv3, though we have a test showing that converted XCMv2 works as well. Non-convertible XCMv1 or XCMv2 should not be executed by the executor anyway.
  • Only tracks and limits incoming tokens, not outgoing.
  • Only tracks and limits ReserveAssetDeposited and ReceiveTeleportedAsset, meaning that HDX tokens "returning" from other chains are not tracked or limited.
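For intuition, the rate-limiting described above can be sketched roughly as follows. This is a simplified illustration, not the pallet's actual code: the linear decay model and the names `decayed_amount` and `deferred_by` are assumptions made for this sketch.

```rust
/// The accumulated incoming amount decays linearly: over `rate_limit_period`
/// blocks, an amount equal to `limit` drains away.
fn decayed_amount(accumulated: u128, blocks_since_update: u128, rate_limit_period: u128, limit: u128) -> u128 {
    let decay = limit.saturating_mul(blocks_since_update) / rate_limit_period;
    accumulated.saturating_sub(decay)
}

/// Defer a message proportionally to how far the new total exceeds the limit;
/// totals at or below the limit are not deferred at all.
fn deferred_by(total: u128, limit: u128, rate_limit_period: u128) -> u128 {
    total.saturating_sub(limit).saturating_mul(rate_limit_period) / limit.max(1)
}
```

With a limit of 1000 tokens per 10 blocks, an accumulated total of 2000 would be deferred by a full rate-limit period (10 blocks), while a total of 500 would pass through undeferred.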

Todos

  • write more doc comments
  • write readme
  • add integration test?
  • run benchmarks

requires galacticcouncil/warehouse#200

@mrq1911 (Member) left a comment:

I saw a lot of tests about the deferring process, none about processing/rejecting deferred messages.

```diff
@@ -133,6 +133,10 @@ impl cumulus_pallet_xcmp_queue::Config for Runtime {
 	type ControllerOriginConverter = XcmOriginToCallOrigin;
 	type PriceForSiblingDelivery = ();
 	type WeightInfo = weights::xcmp_queue::HydraWeight<Runtime>;
+	type ExecuteDeferredOrigin = EnsureRoot<AccountId>;
```
Member:
So we'll have to create a referendum every time a single message is deferred?

@apopiak (Contributor) commented Jun 26, 2023:

We will have to create a referendum every time we want to execute or discard deferred messages before their execution time.
Correction: we will have to create a referendum if we want to discard deferred messages, or if we want to execute deferred messages that are not executed automatically. The latter can happen if a message is too heavy to be processed `on_idle` and there are no incoming XCMs for that parachain.
I would expect that to be quite rare, but I'm of course open to changing the configuration.

Contributor:

Thinking about it further: If we expect to need to discard messages with short deferral periods, this might be troublesome and we might have to devise another mechanism, e.g. having a less privileged origin that can extend the deferral of deferred messages to a time that governance can properly react to it.
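One possible shape for such a less privileged origin, sketched against the runtime configuration shown in this PR. `EitherOfDiverse` and `EnsureProportionAtLeast` are real FRAME types, but `TechnicalCollective` is an assumed collective instance name, and this is not what the PR actually configures:

```rust
// Hypothetical alternative to `type ExecuteDeferredOrigin = EnsureRoot<AccountId>;`:
// accept either Root or a majority of an assumed technical collective, so that
// deferred messages can be managed without a full referendum.
use frame_support::traits::EitherOfDiverse;
use frame_system::EnsureRoot;

type ManageDeferredOrigin = EitherOfDiverse<
    EnsureRoot<AccountId>,
    pallet_collective::EnsureProportionAtLeast<AccountId, TechnicalCollective, 1, 2>,
>;
```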

@apopiak (Contributor) commented Jun 26, 2023:

> I saw a lot of tests about the deferring process, none about processing/rejecting deferred messages.

Do check out the PR in cumulus that has tests for this, but I can add an integration test about execution as well:
/~https://github.com/galacticcouncil/cumulus/pull/3/files#diff-f516e3fdc13a7eddee189799381162efddc665fd931691277586a18bbdc93aeb

@apopiak apopiak requested a review from mrq1911 June 28, 2023 14:14
@apopiak (Contributor) commented Jun 30, 2023:

  • As discussed offline: split into a base PR and an integration PR; deploy the Basilisk integration first.

```rust
};
// ... or that don't have a rate limit configured.
let Some(limit_per_duration) = T::RateLimitFor::get(&asset_id) else {
    total_weight.saturating_accrue(T::DbWeight::get().reads(2));
```
Contributor:

I assume one of the reads here refers to `RateLimitFor::get()`.

Have you considered that e.g. `T::RateLimitFor::get()` can do more than one read?

It is generic, so a provided implementation might do even more reads.

Contributor:

Yes, we considered it and decided against doing the benchmarking in order to ship faster.
Happy to document it more prominently if you want.
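If the hard-coded read count ever becomes a problem, one option is to make the generic provider report its own cost alongside the value, so the caller does not have to guess. This is a sketch under assumed names, not the pallet's actual API, with weight modeled as a plain read counter:

```rust
// Toy illustration: the limit provider returns how many storage reads it
// performed, so a caller can account for implementations that read more
// than once. `ReadCount` stands in for a real `Weight`.
type ReadCount = u64;

trait RateLimitProvider<AssetId> {
    /// Returns the configured limit (if any) plus the reads spent fetching it.
    fn limit_for(asset: &AssetId) -> (Option<u128>, ReadCount);
}

struct FixedLimits;

impl RateLimitProvider<u32> for FixedLimits {
    fn limit_for(asset: &u32) -> (Option<u128>, ReadCount) {
        // This toy implementation does the equivalent of one storage read.
        let limit = if *asset == 0 { Some(1_000) } else { None };
        (limit, 1)
    }
}
```

The caller would then accrue the reported reads instead of a fixed `reads(2)`.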

Comment on lines 86 to 87:

```rust
.saturating_add(T::DbWeight::get().reads(104 as u64))
.saturating_add(T::DbWeight::get().writes(103 as u64))
```
Contributor:

100 reads/writes? That just looks suspicious.

Contributor:

Yeah, the worst case is quite terrible: it happens when all messages are overweight and involves a write for every message.

Member:

What impact will this have on the cost of a transfer?

Member:

I guess this function is called by the normal XCM inherent as well; do we account for this extra weight consumed?

Contributor:

The XCMP queue actually does not account for the weight of storing overweight messages.
So we're doing no worse than the previous implementation, but it's not great, yeah.

Contributor:

> What impact will this have on the cost of a transfer?

I don't think it will have any; these XCM processing costs are eaten by the chain.

@apopiak apopiak changed the title feat!: xcm transfer rate limit feat!: xcm transfer rate limit pallet Jul 12, 2023
@apopiak apopiak mentioned this pull request Jul 12, 2023
@mrq1911 mrq1911 merged commit 3d9f4d4 into master Jul 14, 2023
@jak-pan jak-pan deleted the feat/xcm-defer branch February 22, 2024 10:47