---
license: apache-2.0
language:
- en
tags:
- mechanistic interpretability
- sparse autoencoder
- llama
- llama-3
---

## Model Information

A SAE (Sparse Autoencoder) for [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B). It was trained on the activations of layer 19 of DeepSeek-R1-Distill-Llama-8B and reached a final L0 of 93 during training. The SAE decomposes Llama's activations into interpretable features.

The SAE weights are released under the Apache 2.0 license; note, however, that DeepSeek-R1-Distill-Llama-8B itself is governed by Meta's Llama 3 license.

## How to use

A Jupyter Notebook (Open In Colab) is provided to test the model; a minimal loading sketch is also included at the end of this card.

## Training

Our SAE was trained on the [LMSYS-Chat-1M dataset](https://arxiv.org/pdf/2309.11998).

## Acknowledgements

This release wouldn't have been possible without the work of [Goodfire](https://www.goodfire.ai/) and [Anthropic](https://transformer-circuits.pub/).

A huge thank-you goes to [runpod](https://www.runpod.io/), who generously sponsored the compute for this run!
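
## Example usage

The notebook linked above is the canonical way to try the model. For readers skimming the card, here is a minimal, hypothetical sketch of how such an SAE could be loaded and applied to layer-19 activations with `transformers` and PyTorch. The `SparseAutoencoder` class, the checkpoint filename `sae_layer_19.pt`, and the feature width `N_FEATURES` are illustrative assumptions, not necessarily the released checkpoint's actual layout; only the model id, the layer index (19), and the L0 figure come from this card.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
LAYER = 19          # the layer this SAE was trained on
D_MODEL = 4096      # hidden size of Llama-3-8B-class models
N_FEATURES = 65536  # placeholder: check the released checkpoint for the real width


class SparseAutoencoder(nn.Module):
    """Hypothetical ReLU SAE; the released checkpoint's layout may differ."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Sparse, non-negative feature activations (this SAE's final L0 was ~93 per token).
        return torch.relu(self.encoder(x))

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return self.decoder(f)


tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

sae = SparseAutoencoder(D_MODEL, N_FEATURES)
# sae.load_state_dict(torch.load("sae_layer_19.pt"))  # placeholder filename

# Capture layer 19's output with a forward hook; for a LlamaDecoderLayer the
# hook's `output` is a tuple whose first element is the hidden states.
captured = {}

def hook(module, inputs, output):
    captured["acts"] = output[0].detach()

handle = model.model.layers[LAYER].register_forward_hook(hook)
inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

acts = captured["acts"].float()  # (batch, seq_len, d_model)
features = sae.encode(acts)      # (batch, seq_len, n_features), mostly zeros
print((features > 0).sum(-1).float().mean())  # average L0 per token
```

Casting the captured activations to `float32` before encoding keeps the (float32) SAE numerically well-behaved when the base model runs in bfloat16.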