StreamSourceRecordReader should support AckPolicy.MANUAL #17
Comments
Thank you. I added an option to set the ack policy (auto or explicit).
Hi @jruaux, thanks for putting together a change for this. I've been working on another PR for this, but there are some differences between our approaches: I was hoping to define the behavior in terms of the semantics of the delivery policy (at-least-once vs. at-most-once) rather than the Redis behavior of XACK. To me as a connector user, this is a more meaningful guarantee. I also believe the current implementation after these changes may be missing part of that guarantee. Read on for details: if the connector fails between reading messages from the stream and writing the messages to Kafka, the default behavior of the […]

One last comment: the new changes refer to a method […]

I'll push my changes and post my PR so you can review the spirit of my approach. The changes are complete, but I was writing more unit tests to try to verify the behavior. I hadn't noticed the availability of the integration tests, but I'd be happy to pivot to writing more of those instead to demonstrate some of the potential issues with recovery.

Edit: One other thing I forgot to mention is that by relying solely on the kafka-connect source offset for the stream recovery, it provides the framework necessary to support "exactly-once" processing for connectors, just released in Kafka 3.3.
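To make the delivery-semantics distinction concrete, here is a minimal sketch of a source task that defers XACK until Kafka Connect confirms delivery, which is what at-least-once behavior requires. It uses the Lettuce client directly, and the class name and property keys are hypothetical; the actual connector builds on Spring Batch Redis's `RedisItemReader` rather than this code.

```java
// A minimal sketch, assuming a Lettuce client and hypothetical class/property names;
// the real connector is built on Spring Batch Redis's RedisItemReader, not this code.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import io.lettuce.core.Consumer;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XReadArgs.StreamOffset;
import io.lettuce.core.api.sync.RedisCommands;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class AtLeastOnceStreamSourceTask extends SourceTask {

    private RedisCommands<String, String> redis;
    private String stream;
    private String group;
    private String consumerName;
    private String topic;

    @Override
    public void start(Map<String, String> props) {
        // Property keys below are illustrative, not the connector's real configuration.
        stream = props.get("redis.stream.name");
        group = props.get("redis.stream.consumer.group");
        consumerName = props.get("redis.stream.consumer.name");
        topic = props.get("topic");
        redis = RedisClient.create(props.get("redis.uri")).connect().sync();
    }

    @Override
    public List<SourceRecord> poll() {
        List<SourceRecord> records = new ArrayList<>();
        // XREADGROUP without NOACK: messages stay in the pending entries list until XACK'ed.
        for (StreamMessage<String, String> msg : redis.xreadgroup(
                Consumer.from(group, consumerName), StreamOffset.lastConsumed(stream))) {
            Map<String, String> sourcePartition = Map.of("stream", stream);
            // The stream message ID travels with the record as its source offset,
            // so recovery can also be driven from Kafka Connect's committed offsets.
            Map<String, String> sourceOffset = Map.of("id", msg.getId());
            records.add(new SourceRecord(sourcePartition, sourceOffset, topic, null,
                    msg.getBody().toString()));
        }
        return records;
    }

    @Override
    public void commitRecord(SourceRecord record) {
        // Invoked after the record has been written to Kafka: only now is it safe to XACK.
        // Acking at read time (AckPolicy.AUTO) would lose the message if the task died
        // between poll() and the Kafka write, i.e. at-most-once delivery.
        redis.xack(stream, group, (String) record.sourceOffset().get("id"));
    }

    @Override
    public void stop() {
    }

    @Override
    public String version() {
        return "sketch";
    }
}
```

Carrying the stream message ID in the source offset is also what makes it possible to recover from Kafka Connect's committed offsets rather than relying solely on the consumer group's pending entries list, which is the recovery direction (and the path toward Kafka 3.3 exactly-once support) described in the comment above.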
Here's a link to the commit with all the changes. The master branch here has diverged too much for now, but it's probably not too much work to incorporate the idea of the […]
Thanks for your input and contribution. I agree with exposing the semantics to the user (vs the Redis implementation). I will also incorporate that message recovery mechanism, although I'm thinking it might belong in Spring Batch Redis as it might benefit other projects. Thanks again!
@jruaux - I pushed another commit on that branch that fixes issues with […]
@ahawtho I included your message recovery logic in Spring Batch Redis v3.0.7, and this project's early access release leverages it. Can you let me know what you think? If all looks good, I'll cut a new version.
@jruaux I will try to take a look at this tomorrow or Friday. Thanks for putting it all together!
🎉 This issue has been resolved in […]
`StreamSourceRecordReader` uses the default `AckPolicy` on `RedisItemReader`, which is `AUTO`. If the connector fails to write to the topic after the reader has received a message, and the task is stopped/restarted, that message will be lost (at-most-once processing). The connector should also support at-least-once processing by XACK'ing each message after it has been committed, indicated by `AckPolicy.MANUAL`. The connector should support a property such as […], and I'd argue the default should be `at-least-once`. I believe I can put together a PR for this.
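As a rough illustration of what such a delivery-semantics property could look like (the key and values here are hypothetical, not the connector's actual configuration surface):

```properties
# Hypothetical key and values -- illustrative only.
# at-least-once: XACK a stream message only after it has been committed to Kafka
#                (AckPolicy.MANUAL); a failure may re-deliver a message but never drops it.
# at-most-once:  XACK at read time (AckPolicy.AUTO); a failure between the stream read
#                and the Kafka write drops the message.
redis.stream.delivery=at-least-once
```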