RedisSlidingWindowRateLimiter does not provide IdleDuration #61
Comments
@dlxeon can you please open a PR?
My solution is simple and not ideal; it should be improved for generic use in the library. There are a couple of things to improve.

I think that, to cope with multiple simultaneous requests handled by the same rate limiter instance, I can modify the code to add a thread-safe request counter. Below is an untested idea; I'll test it later this week.
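A minimal sketch of that idea (all names here are hypothetical, not taken from the actual library): an `Interlocked`-maintained in-flight request counter plus a last-activity timestamp, so the limiter reports `TimeSpan.Zero` while requests are active and the elapsed time since the last request otherwise.

```csharp
using System;
using System.Threading;

// Hypothetical sketch: track in-flight requests and last activity so that
// an IdleDuration implementation can report a non-zero value once quiet.
public sealed class IdleTracker
{
    private int _activeRequests;                         // thread-safe in-flight counter
    private long _lastActivityTicks = DateTime.UtcNow.Ticks;

    public void OnAcquireStarted() =>
        Interlocked.Increment(ref _activeRequests);

    public void OnAcquireCompleted()
    {
        // Record the completion time before decrementing, so a reader never
        // observes "no active requests" together with a stale timestamp.
        Interlocked.Exchange(ref _lastActivityTicks, DateTime.UtcNow.Ticks);
        Interlocked.Decrement(ref _activeRequests);
    }

    // TimeSpan.Zero while requests are in flight; otherwise time since the last one.
    public TimeSpan IdleDuration =>
        Volatile.Read(ref _activeRequests) > 0
            ? TimeSpan.Zero
            : TimeSpan.FromTicks(DateTime.UtcNow.Ticks - Interlocked.Read(ref _lastActivityTicks));
}
```

The counter guards against a race where one concurrent request finishes while another is still running: the limiter only starts counting as idle once the last in-flight request has completed.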
The current implementation always returns `TimeSpan.Zero` for `RateLimiter.IdleDuration`. When using a partitioned rate limiter, this means the rate limiters created for the partitions never get cleaned up and will accumulate.

I think this is true for all rate limiters in this repo; however, as it is the only one I am using right now, I have only looked at the sliding window rate limiter in detail so far. For that one, the most reasonable approach to me seems to be to accept some imprecision and just track the last `expireat` value set in the `RedisSlidingWindowManager`, and provide an estimated idle duration from that. This is under the assumption that if a manager ever deletes a partition that was not actually fully idle, it is no real problem to have it simply re-created by the factory; the actual limit is still in Redis, after all.

I could create a pull request for the Redis sliding window rate limiter if this approach is acceptable.
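The proposed estimate could look roughly like this (a sketch under assumptions: the type and member names are hypothetical, and it assumes the manager sets the key's `expireat` to "last request time + window" on every request, so the last request time can be recovered as `expireat - window`):

```csharp
using System;
using System.Threading;

// Hypothetical sketch of the proposal: remember the last "expireat" the
// manager wrote to Redis and derive an estimated idle duration from it,
// without ever reaching out to Redis on the IdleDuration read path.
public sealed class SlidingWindowIdleEstimator
{
    private readonly TimeSpan _window;
    private long _lastExpireAtTicks; // UTC ticks of the last expireat written; 0 = none yet

    public SlidingWindowIdleEstimator(TimeSpan window) => _window = window;

    // Assumed hook: called whenever the manager refreshes the key's expiry.
    public void OnExpireAtSet(DateTime expireAtUtc) =>
        Interlocked.Exchange(ref _lastExpireAtTicks, expireAtUtc.Ticks);

    // expireat was set to lastRequest + window, so lastRequest = expireat - window,
    // and the estimated idle duration is now - lastRequest.
    public TimeSpan EstimatedIdleDuration
    {
        get
        {
            long ticks = Interlocked.Read(ref _lastExpireAtTicks);
            if (ticks == 0) return TimeSpan.Zero; // no request observed yet

            var lastRequestUtc = new DateTime(ticks, DateTimeKind.Utc) - _window;
            var idle = DateTime.UtcNow - lastRequestUtc;
            return idle > TimeSpan.Zero ? idle : TimeSpan.Zero;
        }
    }
}
```

Because only a cached timestamp is read, the estimate is cheap and non-blocking; it can overestimate idleness by at most the clock skew between the process and Redis, which is acceptable if an over-eagerly removed partition is simply re-created by the factory.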
This would not fully match what the interface wants, though. The documentation says:
A better estimate, or a precise answer, could of course be obtained by reaching out to Redis, but performing potentially blocking operations there seems problematic to me.