Make asynchronous emitters have the same (configurable?) policy when event queue is full #7057
Comments
Do you have any strong opinions on which policy would be the best to use across all the emitters?
In my opinion, dropping old events is better than dropping new ones (as
Also, what do you mean by "backpressure policy"? Does that just involve dropping the oldest events in the queue?
Yes, "backpressure" policy was not a precise term, because we probably never want actual backpressure in a metrics emitter: metrics are not important enough to justify slowing down queries. Rephrased to "event throttling" policy. Simple policies might be "drop oldest" and "drop newest". More sophisticated policies may change the event "level" (like a logging level) when the queue is full, i.e. stop emitting some relatively less important kinds of events, and probably try to return to the default level after some backoff time or when the queue becomes sufficiently empty.
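A minimal sketch of the two simple policies ("drop oldest" and "drop newest"), assuming a bounded in-memory queue; the class and method names here are hypothetical for illustration, not existing Druid code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the two simple throttling policies discussed above.
public class ThrottlingQueue<E>
{
  public enum Policy { DROP_OLDEST, DROP_NEWEST }

  private final BlockingQueue<E> queue;
  private final Policy policy;

  public ThrottlingQueue(int capacity, Policy policy)
  {
    this.queue = new ArrayBlockingQueue<>(capacity);
    this.policy = policy;
  }

  /** Returns true if the event was enqueued, false if the event was dropped. */
  public boolean offer(E event)
  {
    if (queue.offer(event)) {
      return true;
    }
    switch (policy) {
      case DROP_NEWEST:
        // Discard the incoming event, keeping everything already queued.
        return false;
      case DROP_OLDEST:
        // Evict the oldest queued event to make room for the new one. This poll+offer pair
        // is not atomic: with concurrent producers the second offer may still fail, in which
        // case the new event counts as dropped.
        queue.poll();
        return queue.offer(event);
      default:
        throw new IllegalStateException("Unknown policy: " + policy);
    }
  }
}
```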
Looking at the code, I noticed that NonBlockingStatsDEmitter is a library class. Does it make sense to ignore it when refactoring, since we can't change it, or do you think a custom implementation should be written?
Opened a proposal (#7075) since the change adds a config.
I've created an issue in the repo of that library: DataDog/java-dogstatsd-client#71
Does that correspond to what we're using? NonBlockingStatsDClient isn't a Datadog library.
Sorry, I misread something. You are correct.
This issue has been marked as stale due to 280 days of inactivity. |
This issue narrows the idea of #7037.
There are many emitters (at least AmbariMetricsEmitter, GraphiteEmitter, StatsDEmitter, KafkaEmitter, and HttpPostEmitter; I haven't checked others) that use the same producer-consumer pattern for asynchronous emit: emit() pushes the event to some queue (one of the queues), and an asynchronous executor retrieves events from the queue and sends them over the network.
AmbariMetricsEmitter and GraphiteEmitter use the same policy when the queue is full (they log a warning). But StatsDEmitter apparently silently discards new events when the queue is full (see the NonBlockingStatsDClient code). KafkaEmitter discards new events, but increments "lost events" counts. HttpPostEmitter packs events into batches and drops the oldest batch when overwhelmed, logging that at the same time (see HttpPostEmitter.limitBuffersToEmitSize() and limitFailedBuffersSize()).
I think all emitters should behave similarly in this regard. The event throttling policy should probably be configurable.
Related to #2868.
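A rough outline of the shared producer-consumer shape described above, with the full-queue behavior concentrated in one branch; the class, field, and method names are illustrative assumptions, not actual Druid emitter code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical outline of the producer-consumer pattern shared by the emitters listed above.
public class AsyncEmitterSkeleton
{
  private final BlockingQueue<Object> eventQueue = new ArrayBlockingQueue<>(10_000);
  private final AtomicLong lostEvents = new AtomicLong();
  private final ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();

  /** Producer side: called from query threads, must never block. */
  public void emit(Object event)
  {
    if (!eventQueue.offer(event)) {
      // This is where the emitters currently diverge: log a warning (Ambari/Graphite),
      // silently drop (StatsD), count lost events (Kafka), or drop the oldest batch (HttpPost).
      lostEvents.incrementAndGet();
    }
  }

  /** Consumer side: periodically drains the queue and sends events over the network. */
  public void start()
  {
    exec.scheduleWithFixedDelay(this::flush, 1, 1, TimeUnit.SECONDS);
  }

  private void flush()
  {
    List<Object> batch = new ArrayList<>();
    eventQueue.drainTo(batch);
    // send(batch) over the network; retry and failure handling are emitter-specific
  }
}
```

The point of the sketch is that all the divergent behaviors live in the single branch taken when offer() fails, which is the natural place to plug in a configurable throttling policy shared by every emitter.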