
Default to a 1 second backoff when hitting 429s #142

Merged · 1 commit · Oct 17, 2024

Conversation

rliddler (Contributor)
When importing a large number of new entries, or deleting them, we hit the general API rate limit.

Using a pool improves our performance, but we still see the odd failure where the 10 goroutines contend with one another enough that we fail after the maximum number of retries (tried up to 10 and still saw failures).

The fix that worked here was to ensure we back off for a minimum of 1 second (any computed backoff that is smaller, or even negative, is clamped to 1 second).

This means the rate at which we submit results peaks at several hundred requests per second until we hit our per-minute API rate limit, after which we are at the mercy of the token bucket refill, essentially doing the maximum we can as our rate limit refills.

This slows things down a little, but in theory it's the best we can do under the rate limit, and we're a better citizen talking to the API.

client/client.go (review thread, resolved)
@rliddler force-pushed the rob/handle-backoff-retries-better branch from b5f9801 to e91b80a on October 17, 2024 at 16:32
@rliddler rliddler merged commit 65f1a71 into master Oct 17, 2024
1 check passed
@rliddler rliddler deleted the rob/handle-backoff-retries-better branch October 17, 2024 16:35
@@ -14,7 +14,7 @@ import (
"github.com/pkg/errors"
)

const maxRetries = 3
Contributor:
Could it be an idea to make this customizable via command-line arguments? 💡

Contributor Author:
Ah yeah good shout! I just whacked it up for now as it didn't seem harmful to just try a bit more 😄
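The reviewer's suggestion might look something like this (a hypothetical sketch; the merged PR keeps `maxRetries` as a constant of 3 and does not add a flag):

```go
package main

import (
	"flag"
	"fmt"
)

// maxRetries is a hypothetical command-line flag illustrating the
// suggestion above; the actual code keeps it as a constant (3).
var maxRetries = flag.Int("max-retries", 3, "maximum retry attempts when the API returns 429")

func main() {
	flag.Parse()
	fmt.Printf("will retry up to %d times\n", *maxRetries)
}
```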
