My use case looks something like this: we have a combination of historic data and live data that share the same schema. From an end-user perspective, both should be queryable, but requests for anything that falls under historic data are expected to have higher latency.
Currently I have a Garnet instance that receives the live data stream, and I'm trying to figure out how to set up the historic data. I don't want to write all of the historic data directly to this Garnet instance, since it is far too large for the memory I have available; instead, I'd want it dumped into the spillover disk log. I'm wondering whether it's possible to set up a second Garnet instance that writes directly to that disk log. The primary would still be responsible for reading both from its own in-memory store and from the shared disk log.
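For reference, the spillover setup I mean on the primary looks roughly like this. The flag names here are my assumptions from reading the Garnet docs, not verified against a specific release, so please check `GarnetServer --help` for your version:

```shell
# Sketch only: flag names below are assumptions; verify with `GarnetServer --help`.
# Cap the in-memory log and enable tiering so overflow spills to a disk log:
GarnetServer --port 6379 \
    --memory 4g \
    --index 512m \
    --storage-tier true \
    --logdir /data/garnet/log
```

The open question is whether a second instance pointed at the same `--logdir` would cooperate or corrupt it.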
My concerns with such an approach are:

- I believe there is a minimum memory size, so I wouldn't be able to have the secondary write directly to disk without first filling up an internal cache.
- I'm not sure whether it's possible for two Garnet instances to share a disk log, or whether that would cause unexpected or incompatible behavior.
- I'm not entirely clear on how Garnet reads from the disk log to serve queries, or more specifically, how much latency I should expect for requests that have to read from the disk log.
The other alternative I see is to use a secondary Garnet instance for the historic data, without a shared disk log, and have my read APIs query both Garnet instances to find where the underlying data lives. Either way, I'm not entirely sure what the best approach is, or whether I'm trying to solve this use case in a way Garnet isn't really designed for, so I would love some guidance on this.
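To make the second alternative concrete, here is a minimal sketch of the read-side fallback, assuming the live instance is tried first and the historic one only on a miss. The two `get` callables are hypothetical stand-ins for clients connected to the two Garnet instances (e.g. via redis-py or StackExchange.Redis, since Garnet speaks RESP):

```python
from typing import Callable, Optional

def read_with_fallback(
    live_get: Callable[[str], Optional[bytes]],
    historic_get: Callable[[str], Optional[bytes]],
    key: str,
) -> Optional[bytes]:
    """Try the low-latency live instance first, then the historic one."""
    value = live_get(key)
    if value is not None:
        return value
    # Miss on live data: accept the higher latency of the historic instance.
    return historic_get(key)

# In-memory stand-ins for the two instances, just to show the call shape:
live = {"sensor:today": b"42"}
historic = {"sensor:2019": b"7"}

print(read_with_fallback(live.get, historic.get, "sensor:today"))  # b'42'
print(read_with_fallback(live.get, historic.get, "sensor:2019"))   # b'7'
```

The cost of this design is a second round trip on every historic read, which may be acceptable given that historic requests are already expected to be slower.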