Reduce the number of bytes hashed by IchHasher. #37427
Conversation
r? @eddyb (rust_highfive has picked a reviewer for you, use r? to override)
The first optimization looks good; I'm happy to approve that.
Force-pushed from 9f7a960 to 5810cdd.
I've changed the second commit to leb128-encode integers.
Do you have numbers on the performance impact of that?
leb128 made very little difference to the performance. The number of bytes hashed increased slightly (a few percent) compared to the "just truncate" version, mostly because 255 is a fairly common value and it leb128-encodes to two bytes; I think it's used by Hasher to indicate the end of some types, or something like that. The speed difference was negligible -- leb128 encoding is much cheaper than blake2b hashing :)
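For reference, here is a minimal sketch of unsigned LEB128 encoding; it is an illustration of the scheme under discussion, not the PR's actual code, and the standalone `write_uleb128` helper is an assumption of this sketch. It shows concretely why 255 costs two bytes:

```rust
/// Minimal unsigned LEB128 encoder (illustrative sketch, not the PR's code).
/// Emits 7 bits per byte, low-order group first; the high bit of each byte
/// signals that more bytes follow.
fn write_uleb128(buf: &mut Vec<u8>, mut value: u64) {
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // continuation bit: more groups follow
        }
        buf.push(byte);
        if value == 0 {
            break;
        }
    }
}

fn main() {
    let mut buf = Vec::new();
    write_uleb128(&mut buf, 127); // fits in 7 bits: one byte, [0x7f]
    write_uleb128(&mut buf, 255); // needs 8 bits: two bytes, [0xff, 0x01]
    assert_eq!(buf, vec![0x7f, 0xff, 0x01]);
}
```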
@@ -50,23 +73,28 @@ impl Hasher for IchHasher {
    }

    #[inline]
    fn write_u8(&mut self, i: u8) {
        self.write_uleb128(i as u64);
`u8` doesn't need to use leb128. There's nothing to be gained here.
I think it does need to use leb128. If you have a `u16` value like 256, it gets encoded as `[128, 2]`. That would be indistinguishable from two `u8` values 128 and 2.
But encoding is typeful: a `u8` can never be in the same spot as a `u16`. A sufficient criterion is that the encoding for each type is prefix-free.
For a hash like this to be correct, it has to be unambiguous. My personal benchmark for this, the one I find most intuitive, is: if we were serializing this data instead of hashing it, would we be able to correctly deserialize it again? In this case yes, because we always know whether we are reading a `u8` or a `u16` or something else next. A problem only arises if we have a bunch of, say, `u16`s in a row but they are encoded with an ambiguous variable-length encoding.
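To make that concrete, here is a small demonstration reusing the `write_uleb128` sketch from above (again illustrative, not the PR's code): the byte streams collide, yet the fixed type sequence disambiguates them.

```rust
fn main() {
    // A u16 value of 256, leb128-encoded, produces [128, 2].
    let mut a = Vec::new();
    write_uleb128(&mut a, 256);
    assert_eq!(a, vec![128, 2]);

    // Two u8 values written raw (one byte each) produce the same stream.
    let b: Vec<u8> = vec![128, 2];
    assert_eq!(a, b);

    // The streams are byte-identical, but decoding is still unambiguous:
    // the reader always knows from the type being hashed whether a raw u8
    // or a leb128-encoded u16 comes next, so each type's encoding only
    // needs to be prefix-free with respect to itself.
}
```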
It's a bit unfortunate that we have to duplicate the leb128 implementation. Can you factor things in a way so that we can have a unit test for it (e.g. by making …)
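For what it's worth, the kind of unit test being asked for might look like this, written against the sketch encoder above (the PR ended up reusing libserialize's leb128 instead, as noted below):

```rust
#[cfg(test)]
mod tests {
    use super::write_uleb128; // the sketch encoder shown earlier

    #[test]
    fn uleb128_encodes_boundary_values() {
        let cases: &[(u64, &[u8])] = &[
            (0, &[0x00]),
            (127, &[0x7f]),       // largest one-byte value
            (128, &[0x80, 0x01]), // smallest two-byte value
            (255, &[0xff, 0x01]),
            (256, &[0x80, 0x02]),
            // 64 bits / 7 bits per byte => ten bytes for u64::MAX
            (u64::MAX, &[0xff, 0xff, 0xff, 0xff, 0xff,
                         0xff, 0xff, 0xff, 0xff, 0x01]),
        ];
        for &(value, expected) in cases {
            let mut buf = Vec::new();
            write_uleb128(&mut buf, value);
            assert_eq!(buf, expected, "encoding of {}", value);
        }
    }
}
```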
☔ The latest upstream changes (presumably #37439) made this pull request unmergeable. Please resolve the merge conflicts.
This significantly reduces the number of bytes hashed by IchHasher.
Force-pushed from 5810cdd to 3e65298.
I redid the second commit so it uses the leb128 encoding from libserialize.
Force-pushed from 3e65298 to 60d4232.
Looks good to me! Happy to merge once you've renamed the `bytes` field and removed the comment about `usize`.
    #[inline]
    fn write_usize(&mut self, i: usize) {
        // always hash as u64, so we don't depend on the size of `usize`
Since we are encoding as leb128 before hashing, it should not make a difference whether the number was a `usize` or a `u64` before. The leb128 representation of both will be identical.
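In code, the portable form is tiny (again reusing the sketch encoder from above; rustc's actual implementation differs):

```rust
// Widening usize to u64 before leb128-encoding yields identical bytes on
// 32-bit and 64-bit targets: leb128 emits only the bit groups that are
// set, so the leading zeros introduced by the cast cost nothing.
fn write_usize_portable(buf: &mut Vec<u8>, i: usize) {
    write_uleb128(buf, i as u64);
}
```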
#[derive(Debug)]
pub struct IchHasher {
    state: ArchIndependentHasher<Blake2bHasher>,
    bytes: Vec<u8>,
Can you rename this to something like `leb128_helper`? Otherwise one might think that it contains the bytes that were hashed so far.
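A hedged sketch of what the rename clarifies: the field is per-call scratch space for leb128 encoding, not a log of everything hashed. The generic `H` stands in for rustc's `ArchIndependentHasher<Blake2bHasher>`, and the method body is illustrative rather than the PR's exact code:

```rust
use std::hash::Hasher;

/// Illustrative stand-in for IchHasher (the real struct wraps
/// ArchIndependentHasher<Blake2bHasher>; any Hasher works for the sketch).
pub struct IchHasherSketch<H: Hasher> {
    state: H,
    /// Scratch buffer reused by each write_uleb128 call -- renamed from
    /// `bytes` so nobody mistakes it for an accumulation of hashed bytes.
    leb128_helper: Vec<u8>,
}

impl<H: Hasher> IchHasherSketch<H> {
    fn write_uleb128(&mut self, mut value: u64) {
        self.leb128_helper.clear(); // reused per call, never accumulated
        loop {
            let mut byte = (value & 0x7f) as u8;
            value >>= 7;
            if value != 0 {
                byte |= 0x80;
            }
            self.leb128_helper.push(byte);
            if value == 0 {
                break;
            }
        }
        self.state.write(&self.leb128_helper);
    }
}
```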
This significantly reduces the number of bytes hashed by IchHasher.
Force-pushed from 60d4232 to d73c68c.
Updated to address comments.
@bors r+ Thanks, @nnethercote!
📌 Commit d73c68c has been approved by …
⌛ Testing commit d73c68c with merge 69c4cfb...
💔 Test failed - auto-mac-64-opt-rustbuild
@bors retry
⌛ Testing commit d73c68c with merge e67a9ba...
💔 Test failed - auto-mac-64-opt-rustbuild
…lwoerister Reduce the number of bytes hashed by IchHasher.

IchHasher uses blake2b hashing, which is expensive, so the fewer bytes hashed the better. There are two big ways to reduce the number of bytes hashed.

- Filenames in spans account for ~66% of all bytes (for builds with debuginfo). The vast majority of spans have the same filename for the start of the span and the end of the span, so hashing the filename just once in those cases is a big win.
- u32, u64, and usize values account for ~25--33% of all bytes (for builds with debuginfo). The vast majority of these are small, i.e. fit in a u8, so shrinking them down before hashing is also a big win.

This PR implements these two optimizations. I'm certain the first one is safe. I'm about 90% sure that the second one is safe.

Here are measurements of the number of bytes hashed when doing debuginfo-enabled builds of stdlib and rustc-benchmarks/syntex-0.42.2-incr-clean.

```
                stdlib       syntex-incr
                ------       -----------
original       156,781,386  255,095,596
half-SawSpan   106,744,403  176,345,419
short-ints      45,890,534  118,014,227
no-SawSpan[*]    6,831,874   45,875,714

[*] don't hash the SawSpan at all. Not part of this PR, just
    implemented for comparison's sake.
```

For debug builds of syntex-0.42.2-incr-clean, the two changes give a 1--2% speed-up.
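As a rough illustration of the first (span filename) optimization described above -- `SpanData`, its fields, and `hash_span` are hypothetical stand-ins for this sketch, not rustc's actual types -- the idea is to hash a one-byte tag plus a single filename in the common case:

```rust
use std::hash::{Hash, Hasher};

// Hypothetical resolved-span type for illustration; rustc's real span
// machinery is considerably more involved.
struct SpanData {
    start_file: String,
    start_line: u32,
    end_file: String,
    end_line: u32,
}

// When both ends of the span fall in the same file (the vast majority of
// spans), hash the filename once behind a tag byte instead of twice.
// The tag keeps the encoding unambiguous, per the discussion above.
fn hash_span<H: Hasher>(span: &SpanData, state: &mut H) {
    if span.start_file == span.end_file {
        0u8.hash(state); // tag: one shared filename follows
        span.start_file.hash(state);
    } else {
        1u8.hash(state); // tag: two distinct filenames follow
        span.start_file.hash(state);
        span.end_file.hash(state);
    }
    span.start_line.hash(state);
    span.end_line.hash(state);
}
```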
⌛ Testing commit d73c68c with merge aa90c30...
💔 Test failed - auto-win-msvc-64-opt-rustbuild
@bors: retry
⌛ Testing commit d73c68c with merge 149d86a...
💔 Test failed - auto-mac-64-opt-rustbuild
@bors: retry