
refactor(page_service / timeline::handle): the GateGuard need not be a special case #11030

Merged
merged 8 commits into main from problame/timeline-handle-orthogonality
Feb 28, 2025

Conversation

@problame problame commented Feb 27, 2025

Changes

While working on

I found myself needing to cache another expensive Arc::clone inside
the timeline::handle::Cache by wrapping it in another Arc.

Before this PR, it seemed like the only expensive thing we were caching was the
connection handler tasks' clone of Arc<Timeline>.

In fact, the GateGuard was another such cached item, but it was
special-cased in the implementation.

So, this refactoring PR de-special-cases the GateGuard.

Performance

With this PR we do strictly fewer operations per Cache::get.
The reason is that we wrap the entire Types::Timeline into one Arc.
Before this PR, there was a separate Arc around the Arc<Timeline> and
one around the GateGuard.

With this PR, we avoid an allocation per cached item, namely,
the separate Arc around the GateGuard.

This PR does not change the amount of shared mutable state.

So, all in all, it should be a net positive, albeit probably not noticeable
with our small non-NUMA instances and the generally high CPU usage per request.
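The before/after shape can be sketched with hypothetical stand-in types (Timeline, GateGuard, and the HandleInner structs below are illustrative placeholders, not the real pageserver definitions):

```rust
use std::sync::Arc;

// Hypothetical stand-ins for the real pageserver types.
struct Timeline;
struct GateGuard;

// Before: the GateGuard was special-cased behind its own Arc,
// costing an extra heap allocation per cached item.
struct HandleInnerBefore {
    timeline: Arc<Timeline>,
    gate_guard: Arc<GateGuard>,
}

// After: the GateGuard lives inline; everything the handle needs
// sits behind the single Arc that the cache already maintains.
struct HandleInnerAfter {
    timeline: Arc<Timeline>,
    gate_guard: GateGuard,
}

// A cache hit is a single refcount bump covering both fields.
fn cache_hit(cached: &Arc<HandleInnerAfter>) -> Arc<HandleInnerAfter> {
    Arc::clone(cached)
}
```

The point of the sketch: de-special-casing the GateGuard does not add shared mutable state, it only folds an existing allocation into the Arc the cache was maintaining anyway.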

Reviewing

To understand the refactoring logistics, look at the changes to the unit test types first.
Then read the improved module doc comment.
Then the remaining changes.

In the future, we could rename things to be even more generic.
For example, Types::TenantMgr could really be a Types::Resolver.
And Types::Timeline should, to avoid constant confusion in the doc comment,
be called Types::Cached or Types::Resolved.
After this PR, the handle module really doesn't care that we're
using it to store Arcs and GateGuards.

Then again, specificity is sometimes more useful than being generic.
And writing the module doc comment in a totally generic way would
probably also be more confusing than helpful.

…reference cycles

Storing Arc<Timeline> inside RequestContext risks reference cycles if
the RequestContext gets stored in an object that is owned directly or
indirectly by Timeline.

So, we wrap the TimelineMetrics into an Arc and propagate that instead.
This makes it easy for future metrics to be accessed through
RequestContext, like we do for storage_io_size here.

To make the page_service case happy, where we may be dispatching to
a different shard on every successive request and don't want to clone
from the shared Timeline::metrics each time, we pre-clone the metrics
as part of the handle cache miss.
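A minimal sketch of this propagation, again with hypothetical stand-ins (the field names and helper functions below are illustrative, not the actual pageserver API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Hypothetical stand-in for the real metrics struct.
#[derive(Default)]
struct TimelineMetrics {
    storage_io_size: AtomicU64,
}

struct Timeline {
    // Shared source of truth. Propagating Arc<TimelineMetrics>
    // instead of Arc<Timeline> avoids the reference-cycle risk:
    // a stored RequestContext never keeps the Timeline alive.
    metrics: Arc<TimelineMetrics>,
}

// Pre-cloned on the handle cache miss, so per-request code never
// touches the shared Timeline::metrics field again.
struct Handle {
    metrics: Arc<TimelineMetrics>,
}

fn on_cache_miss(timeline: &Timeline) -> Handle {
    Handle { metrics: Arc::clone(&timeline.metrics) }
}

struct RequestContext {
    metrics: Arc<TimelineMetrics>,
}

// Each request clones from the handle's pre-cloned Arc, which is
// cheap even when successive requests hit different shards.
fn make_request_context(handle: &Handle) -> RequestContext {
    RequestContext { metrics: Arc::clone(&handle.metrics) }
}

fn record_io(ctx: &RequestContext, bytes: u64) {
    ctx.metrics.storage_io_size.fetch_add(bytes, Ordering::Relaxed);
}
```

All clones observe the same counters, since every Arc points at the one TimelineMetrics allocation.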
@problame problame marked this pull request as ready for review February 27, 2025 21:07
@problame problame requested a review from a team as a code owner February 27, 2025 21:07
@problame problame requested review from erikgrinaker, VladLazar and arpad-m and removed request for erikgrinaker February 27, 2025 21:07

github-actions bot commented Feb 27, 2025

7744 tests run: 7365 passed, 0 failed, 379 skipped (full report)


Code coverage* (full report)

  • functions: 32.8% (8643 of 26363 functions)
  • lines: 48.6% (73200 of 150560 lines)

* collected from Rust tests only


The comment gets automatically updated with the latest test results
823bf12 at 2025-02-28T12:39:51.294Z :recycle:

@problame problame enabled auto-merge February 28, 2025 11:30
@problame problame added this pull request to the merge queue Feb 28, 2025
Merged via the queue into main with commit 7c53fd0 Feb 28, 2025
88 checks passed
@problame problame deleted the problame/timeline-handle-orthogonality branch February 28, 2025 12:39