Add hashset (new hash table) and use in command lookup #1186
base: unstable
Conversation
Signed-off-by: Viktor Söderqvist <[email protected]>
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##           unstable     #1186      +/-  ##
============================================
- Coverage     70.65%    70.40%    -0.26%
============================================
  Files           114       115        +1
  Lines         61799     63782     +1983
============================================
+ Hits          43664     44903     +1239
- Misses        18135     18879      +744
For command lookup, since the command table is rarely updated, why not build an efficient trie? Or would it be even slower?
Just a partial review (everything but hashset.c, which I guess is the interesting stuff) and tests. Blocked time off tomorrow to read through :) Exciting stuff!
void getRandomBytes(unsigned char *p, size_t len);

/* Init hash function salt and seed random generator. */
static void randomSeed(void) {
Can you print out the seed, so that if something fails we can actually reproduce it?
Sure...
/* --- Global variables --- */

static uint8_t hash_function_seed[16];
static hashsetResizePolicy resize_policy = HASHSET_RESIZE_ALLOW;
Can we use type callbacks or values instead of the globals? We discussed changing this in Redis to make it more modular, but never got around to it. This allows much finer control of the policies as well.
Yes, we could. It needs some design work. I wanted to avoid side-tracking too much, just focusing on replacing dict as the first step.
Follow-up.
    .keyCompare = hashsetStringKeyCaseCompare,
    .instant_rehashing = 1};

/* Command set, hashed by char* string, stores serverCommand structs. */
Suggested change:
- /* Command set, hashed by char* string, stores serverCommand structs. */
+ /* Sub-command set, hashed by char* string, stores serverCommand structs. */
Is there a reason these need to be different? It seems like we could use the declared name for both.
@SoftlyRaining knows this better.
I suppose it's about renamed commands or something, where the declared name and the real name are not the same.
This changes the type of the command tables from dict to hashset. Command table lookup takes ~3% of overall CPU time in benchmarks, so it is a good candidate for optimization. My initial SET benchmark comparison suggests that hashset is about 4.5 times faster than dict, and this replacement reduced overall CPU time by 2.79% 🥳

---------

Signed-off-by: Rain Valentine <[email protected]>
Signed-off-by: Rain Valentine <[email protected]>
Co-authored-by: Rain Valentine <[email protected]>
@xingbowang For the command table, yes, a perfect hash function would be even better. The main point of this implementation is to use it for the database keys though, which will come in another PR. (A trie is possibly slower because there might be more memory accesses.)
Excellent work!

To be honest, I didn't like open-addressing hash tables before, because the tombstones generated by deletions have a very negative impact on both performance and space. I felt they were not suitable for online services with a large number of deletion operations.

However, this design is quite clever: deleting an element does not create a tombstone. Only a bucket that has been full at some point is considered to have a tombstone, which greatly reduces the impact of deletion operations. A bucket holds 7 elements, so in terms of chained hashing, a tombstone effect would only occur after 7 collisions, which intuitively seems to be a relatively rare situation.

In theory, this should save memory. Compared to the previous chained hashing, it saves two pointers per element (the previous and next dictEntry pointers) and adds one hash value, saving about 15 bytes per element. However, the frequency of resizing might be higher, as resizing is triggered not only by the fill factor but also by the proportion of tombstones. Do we have reliable tests for space and performance?
… rehashed buckets can be skipped (#1147)

Fixes 2 bugs in hashsetNext():
- null dereference when iterating over a new unused hashset
- change order of iteration so skipping past already-rehashed buckets works correctly and won't miss elements

Minor optimization in hashsetScan and some renamed variables.

Signed-off-by: Rain Valentine <[email protected]>
@soloestoy I'm glad you like the design. In my WIP branch for using this for keys, I get 10-15% better performance for GET compared to unstable. I set the fill factor to 77% currently, and with 7/8 of the space used for pointers (1/8 in each bucket is metadata bits), it means that 67% of the allocation is pointers.

Rehashing triggered by the proportion of tombstones is a corner case. I didn't implement it at first, but I realized in the defrag tests, where the test code tried to create a lot of fragmentation, that it added many keys and eviction deleted many keys. In this case, the table became full of tombstones. It does not happen very often and I think it will not affect the performance.

There is a potential problem with CoW during fork though. The dict can avoid rehashing until it reaches a 500% fill factor. This is not possible with open addressing; we have to rehash at some point. I set the hard limit to 90% currently. I hope it will not be a major problem. In the "resize avoid" mode, resizing is allowed but incremental rehashing is paused, so only new keys are added to the new table. This also avoids destroying the CoW.
Co-authored-by: Madelyn Olson <[email protected]> Signed-off-by: Viktor Söderqvist <[email protected]>
Use variable name `s` for hashset instead of `t`. Use variable name `element` instead of `elem`. Remove the hashsetType.userdata field. Some more fixes. Signed-off-by: Viktor Söderqvist <[email protected]>
Signed-off-by: Viktor Söderqvist <[email protected]>
Signed-off-by: Viktor Söderqvist <[email protected]>
Signed-off-by: Viktor Söderqvist <[email protected]>
This is the first in a series of PRs about using a new hash table.
The "hashset" is a cache-line optimized open addressing hash table implemented as outlined in #169. It supports incremental rehashing, scan, random key, etc., just like the dict, but faster and using less memory. For details, see the comments in src/hashset.{c,h}.

The plan (WIP) is to use it for keys, expires and many other things, but this first PR just contains these two:
Fixes #991