Problem
If there is a large number of subscribers (especially to the same topic), polling each topic separately is very inefficient and causes database contention. Also, with so many listeners, the existing polling topic listener can't poll as frequently, increasing message latency.
Solution
Use a single, shared polling topic listener.
Implement a SharedPollingTopicListener that reuses the existing logic in PollingTopicListener, but moves most of the Flux definition to the constructor and calls Flux.share() so all subscribers share a single subscription (see the sketch below).
Instead of calling repository.findByFilter(), create a new method repository.findByConsensusTimestampGreaterThan(long consensusTimestamp) that returns results for all topics.
Ensure the query uses an index on the consensus timestamp column, creating one if necessary.
Per-topic filtering will be done in memory.
The polling interval can be shortened, e.g. to 500ms.
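Below is a minimal, hypothetical sketch of how the shared listener could look with Project Reactor. The TopicMessage, TopicMessageFilter, and TopicMessageRepository types here are illustrative stand-ins rather than the project's actual classes, and the sketch only shows the general shape of the approach, not the real implementation.

```java
import java.time.Duration;
import java.util.concurrent.atomic.AtomicLong;

import reactor.core.publisher.Flux;

// Minimal stand-in types for illustration; the real project defines its own equivalents.
record TopicMessage(long consensusTimestamp, long topicId) {}

record TopicMessageFilter(long topicId) {}

interface TopicMessageRepository {
    Iterable<TopicMessage> findByConsensusTimestampGreaterThan(long consensusTimestamp);
}

public class SharedPollingTopicListener {

    private final Flux<TopicMessage> sharedPoller;

    public SharedPollingTopicListener(TopicMessageRepository repository) {
        AtomicLong lastConsensusTimestamp = new AtomicLong(0L);

        // Poll once for all topics on a fixed interval (e.g. 500ms) and share a
        // single subscription across every subscriber via Flux.share().
        this.sharedPoller = Flux.interval(Duration.ofMillis(500))
                .concatMap(tick -> Flux.fromIterable(
                        repository.findByConsensusTimestampGreaterThan(lastConsensusTimestamp.get())))
                .doOnNext(message -> lastConsensusTimestamp.set(message.consensusTimestamp()))
                .share();
    }

    public Flux<TopicMessage> listen(TopicMessageFilter filter) {
        // Per-topic filtering happens in memory, so the database is queried once for all topics.
        return sharedPoller.filter(message -> message.topicId() == filter.topicId());
    }
}
```

Because share() multicasts a single upstream subscription, adding more subscribers does not add more database queries; each subscriber only applies its own in-memory filter. A real implementation would also offload the blocking repository call to a scheduler suited for blocking work.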
Alternatives
Replace the existing PollingTopicListener with the new approach.
Additional Context