feat: Topk SAE training #370
Conversation
Force-pushed from f4d6c41 to 0f3c243
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@           Coverage Diff            @@
##             main     #370    +/-   ##
========================================
+ Coverage   72.74%   72.88%   +0.13%
========================================
  Files          22       22
  Lines        3266     3297      +31
  Branches      431      438       +7
========================================
+ Hits         2376     2403      +27
- Misses        762      764       +2
- Partials      128      130       +2

View full report in Codecov by Sentry.
Looks good other than the two comments I made.
            total_steps=cfg.total_training_steps,
            final_l1_coefficient=cfg.l1_coefficient,
        )

        # Setup autocast if using
-       self.scaler = torch.cuda.amp.GradScaler(enabled=self.cfg.autocast)
+       self.scaler = torch.amp.GradScaler(device="cuda", enabled=self.cfg.autocast)
This should probably use whatever device is set through the config, rather than being hardcoded to CUDA.
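A minimal sketch of the suggested change, assuming the trainer config exposes a device string (e.g. "cuda" or "cpu") as self.cfg.device:

    # Sketch only: self.cfg.device is an assumption about the config attribute name.
    self.scaler = torch.amp.GradScaler(
        device=self.cfg.device,  # use the configured device rather than hardcoding "cuda"
        enabled=self.cfg.autocast,
    )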
from sae_lens.training.training_sae import (
    TrainingSAE,
    TrainingSAEConfig,
    _calculate_topk_aux_acts,
If this function is intended to be imported by other files/classes, probably should not be an underscore/internal function.
It's only imported by its own unit test; it should not be imported externally, which is why it has an underscore.
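To illustrate the convention being discussed (the file paths, signature, and test module name below are assumptions, not taken from this PR):

    # sae_lens/training/training_sae.py
    # The leading underscore marks this helper as module-internal rather than public API.
    def _calculate_topk_aux_acts(*args, **kwargs):
        ...

    # tests/unit/training/test_training_sae.py
    # The unit test is the only external importer, which is acceptable for an internal helper.
    from sae_lens.training.training_sae import _calculate_topk_aux_acts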
Force-pushed from c7076c3 to b49fece
Description
This PR implements topk SAE training by adding the topk auxiliary loss (sketched below). It makes several design choices:

- topk is specified as an architecture for training, so it's similar to gated and jumprelu SAEs. This seems to fit the idea of an SAE architecture, since topk has its own custom training routine and losses, and it seems strange to call "jumprelu" and "gated" architectures but not topk.
- Our implementation of topk training is likely less efficient than Eleuther's, as they use a custom sparse kernel for the SAE decoder (see /~https://github.com/EleutherAI/sae/blob/main/sae/kernels.py). We can try to support something like this in the future, but it will likely require a bit of refactoring before we can support a special decoder kernel just for topk.
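The snippet below is an illustrative sketch of the topk activation and an AuxK-style auxiliary loss, not the code added in this PR; the function names, the dead_mask argument, and the default k_aux value are assumptions:

    import torch

    def topk_activation(pre_acts: torch.Tensor, k: int) -> torch.Tensor:
        # Keep the k largest pre-activations per example and zero out the rest.
        values, indices = torch.topk(pre_acts, k=k, dim=-1)
        acts = torch.zeros_like(pre_acts)
        acts.scatter_(-1, indices, torch.relu(values))
        return acts

    def topk_aux_loss(residual, pre_acts, dead_mask, W_dec, k_aux=512):
        # AuxK-style auxiliary loss: the k_aux largest currently-dead latents try to
        # reconstruct the residual reconstruction error, giving dead latents a gradient
        # signal so they do not stay dead. k_aux must not exceed the latent dimension.
        dead_pre_acts = pre_acts * dead_mask      # zero out live latents
        aux_acts = topk_activation(dead_pre_acts, k_aux)
        aux_recons = aux_acts @ W_dec             # decode with the shared decoder weights
        return torch.mean((aux_recons - residual) ** 2)

In setups like Eleuther's, the auxiliary term is typically added to the main reconstruction loss with a small coefficient, and k_aux is capped by the number of currently dead latents.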
I'm currently running some test training runs to make sure things look decent, and will upgrade this PR from draft when those are complete.
Fixes #202
training test run dashboard: https://api.wandb.ai/links/chanind/zju8dl70
Type of change
Please delete options that are not relevant.
Checklist:
- You have tested formatting, typing and unit tests (acceptance tests not currently in use)
  - Run make check-ci to check format and linting. (You can run make format to format code if needed.)