Refactor BatchLogProcessor #2494
Conversation
Codecov Report
Attention: Patch coverage is

@@ Coverage Diff @@
##            main   #2494     +/- ##
=======================================
- Coverage   77.9%   77.9%   -0.1%
=======================================
  Files        123     123
  Lines      22881   22943     +62
=======================================
+ Hits       17838   17875     +37
- Misses      5043    5068     +25

☔ View full report in Codecov by Sentry.
Excellent approach! Please mark this ready for review once you feel it's ready :)
// Check if a control message for exporting logs has already been sent to the worker thread.
// If not, send a control message to export logs.
// `export_log_message_sent` is reset to false ONLY after the worker thread has processed the control message.
if !self.export_log_message_sent.swap(true, Ordering::Relaxed) {
Should we use `compare_exchange` instead of `swap`? That would avoid redundant writes when `export_log_message_sent` is already true.
I used `swap` because it's less verbose. Note that both `swap` and `compare_exchange` need exclusive access to the cache line; more importantly, a `compare_exchange` operation would need exclusive access even to perform the comparison.

I pushed some new changes: I added an `if` check using an atomic load to avoid calling `swap` aggressively. We now call `swap` only if the atomic load returned `false`. Let me know what you think.
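The load-then-swap pattern discussed above can be sketched as follows. This is a minimal illustration, not the PR's actual code; `try_claim_export` is a hypothetical helper name:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Returns true if this caller "won" and is responsible for sending the
// export control message to the worker thread.
fn try_claim_export(flag: &AtomicBool) -> bool {
    // Cheap shared read first: if a control message is already in flight,
    // skip the read-modify-write entirely, so we never take the cache line
    // in exclusive mode on the hot path.
    if !flag.load(Ordering::Relaxed) {
        // `swap` returns the previous value; `false` means we won the race.
        !flag.swap(true, Ordering::Relaxed)
    } else {
        false
    }
}

fn main() {
    let flag = AtomicBool::new(false);
    assert!(try_claim_export(&flag)); // first caller claims the slot
    assert!(!try_claim_export(&flag)); // later callers bail on the cheap load
    flag.store(false, Ordering::Relaxed); // worker resets after processing
    assert!(try_claim_export(&flag)); // next message can be claimed again
    println!("ok");
}
```

The load is racy by design: two threads may both see `false` and fall through, but the `swap` then arbitrates, so at most one caller wins per reset.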
// This method gets up to `max_export_batch_size` logs from the channel and exports them.
// It returns the result of the export operation.
// It expects the `logs` vec to be empty when it's called.
Can we get rid of the `logs` vec and instead create a fresh array and pass its slice to the export method?
To use an array, we would need to know its size beforehand, which we don't, because we don't know how many items are present in the channel. It might be fewer than `max_export_batch_size`.
We can allocate an array of `max_export_batch_size` and slice it at the required length. Anyway, this is not a blocker.
Always stack-allocating a big array even to export very few logs seems wasteful. What's the benefit of using an array instead of a `vec` here? We are reusing the same `vec` for each export anyway, so allocation isn't a concern.
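The reused-`vec` approach being defended here can be sketched as follows. This is an illustrative mock, not the PR's implementation: `get_batch_and_export` is a hypothetical stand-in that "exports" by counting, and the real code passes the batch to an exporter instead:

```rust
use std::sync::mpsc;

// Drain up to `max_batch` items from the channel into a caller-owned Vec,
// export them (here: just count them), then clear the Vec so its heap
// capacity is reused on the next call -- no per-batch allocation.
fn get_batch_and_export(
    rx: &mpsc::Receiver<String>,
    logs: &mut Vec<String>,
    max_batch: usize,
) -> usize {
    debug_assert!(logs.is_empty(), "caller must hand over an empty vec");
    while logs.len() < max_batch {
        match rx.try_recv() {
            Ok(log) => logs.push(log),
            Err(_) => break, // channel empty (or disconnected)
        }
    }
    let exported = logs.len();
    logs.clear(); // keeps capacity; next batch reuses the same buffer
    exported
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..5 {
        tx.send(format!("log {i}")).unwrap();
    }
    let mut logs = Vec::new();
    assert_eq!(get_batch_and_export(&rx, &mut logs, 3), 3); // full batch
    assert_eq!(get_batch_and_export(&rx, &mut logs, 3), 2); // partial batch
    println!("ok");
}
```

After the first full-sized batch, `logs` holds capacity for `max_batch` items, so steady-state exports allocate nothing, which is the point being made against a fresh stack array.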
let result = export_with_timeout_sync(
    config.max_export_timeout,
    &mut exporter,
let result = get_logs_and_export(
Same as flush: we need to ensure logs continue to be exported until all logs received up to the point shutdown was invoked have been processed.
We need to fix flush/shutdown, but this is good to merge and follow up on.

Also, I am unsure about the perf testing: any perf test would instantly fill up the channel, so the majority of items would take the "drop_due_to_full" path. Developing a testing strategy for the BatchProcessor is also a good candidate for another PR. Irrespective of that, the direction in this PR looks correct to me.
Since we are using a no-op exporter, the channel should be getting emptied really quickly as well. But yes, it would be nice to verify that objectively instead of by manual reasoning.
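The drain-to-completion behavior requested for shutdown can be sketched as follows. This is a simplified mock under assumed names (`drain_on_shutdown` is hypothetical), showing only the loop shape: keep exporting batches until the channel has nothing left:

```rust
use std::sync::mpsc;

// On shutdown, repeatedly pull batches of up to `max_batch` items and
// "export" them (here: count them) until the channel yields nothing,
// so no log accepted before shutdown is silently dropped.
fn drain_on_shutdown(rx: &mpsc::Receiver<u32>, max_batch: usize) -> usize {
    let mut total = 0;
    loop {
        let mut batch = 0;
        while batch < max_batch {
            match rx.try_recv() {
                Ok(_) => batch += 1,
                Err(_) => break, // empty or disconnected
            }
        }
        if batch == 0 {
            break; // channel fully drained; safe to stop the worker
        }
        total += batch; // export this batch before pulling the next
    }
    total
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..7 {
        tx.send(i).unwrap();
    }
    drop(tx); // no more producers; simulates the shutdown cutoff
    assert_eq!(drain_on_shutdown(&rx, 3), 7); // batches of 3, 3, then 1
    println!("ok");
}
```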
Towards #2478

Changes

- Refactor `BatchLogProcessor` to wake up the background thread ONLY when needed (BatchProcessor shutdown issue #2478 (comment))

Performance

There is a significant performance improvement, mainly because the `mpsc::Receiver` is no longer excessively active (unlike on the main branch). There is a significant amount of synchronization overhead when senders and receivers are actively working on the channel all the time. With this PR, the channel receiver is only woken up when needed, so the logging threads incur a smaller synchronization cost.

I updated the `Logging_Comparable_To_Appender` benchmark and the logs stress test to use a `BatchProcessor` with a no-op exporter and ran the tests. Here are the results:

Merge requirement checklist

- `CHANGELOG.md` files updated for non-trivial, user-facing changes