Neuronpedia
Google DeepMind · Exploring Gemma 2 with Gemma Scope
Gemma-2-2B · Attention Out - 16k (0-GEMMASCOPE-ATT-16K) · Feature 324
Explanations
attends to the "hide" token from various tokens denoting locations or contexts for hiding
oai_attention-head · gpt-4o-mini · triggered by @bot
Top Features by Cosine Similarity (comparing with GEMMA-2-2B @ 0-gemmascope-att-16k)
Configuration
SAE: google/gemma-scope-2b-pt-att/layer_0/width_16k/average_l0_104
Prompts (Dashboard): 36,864 prompts, 128 tokens each
Dataset (Dashboard): monology/pile-uncopyrighted
Features: 16,384
Data Type: float32
Hook Name: blocks.0.attn.hook_z
Hook Layer: 0
Architecture: jumprelu
Context Size: 1,024
Dataset: monology/pile-uncopyrighted
Activation Function: relu

How To Load
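A minimal loading sketch in Python, assuming the standard Gemma Scope repo layout of one params.npz per SAE and the usual JumpReLU parameter names (W_enc, W_dec, b_enc, b_dec, threshold); check the repo's file listing if the path or keys differ.

```python
# Sketch: load the layer-0 attention SAE (width 16k, average L0 = 104) listed above.
# The filename and .npz key names are assumptions based on the usual Gemma Scope layout.
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-att",
    filename="layer_0/width_16k/average_l0_104/params.npz",
)
params = np.load(path)

W_enc = params["W_enc"]          # (d_in, 16384); d_in is the flattened hook_z width
W_dec = params["W_dec"]          # (16384, d_in)
b_enc = params["b_enc"]          # (16384,)
b_dec = params["b_dec"]          # (d_in,)
threshold = params["threshold"]  # (16384,) per-feature JumpReLU thresholds

def encode(z):
    """JumpReLU encoder: ReLU pre-activations, gated by the learned per-feature threshold."""
    pre = z @ W_enc + b_enc
    return np.maximum(pre, 0.0) * (pre > threshold)
```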
Embeds
IFrame
<iframe src="https://www.neuronpedia.org/gemma-2-2b/0-gemmascope-att-16k/324?embed=true&embedexplanation=true&embedplots=true&embedtest=true" title="Neuronpedia" style="height: 300px; width: 540px;"></iframe>
Link
https://www.neuronpedia.org/gemma-2-2b/0-gemmascope-att-16k/324?embed=true&embedexplanation=true&embedplots=true&embedtest=true
Head Attr Weights
Head 0: 0.62
Head 1: 0.01
Head 2: 0.02
Head 3: 0.02
Head 4: 0.17
Head 5: 0.07
Head 6: 0.01
Head 7: 0.04
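These weights describe how strongly the feature is tied to each of the layer's 8 attention heads. A hedged way to approximate them, assuming they reflect each head's share of the feature's decoder norm in the concatenated hook_z space (an assumption about the method, not necessarily how the dashboard computes them):

```python
# Sketch: per-head share of feature 324's decoder norm. Assumes W_dec rows are laid out
# as n_heads contiguous blocks of d_head, matching the flattened hook_z ordering.
import numpy as np

FEATURE = 324
n_heads, d_head = 8, 256                         # gemma-2-2b attention dimensions
dec = W_dec[FEATURE].reshape(n_heads, d_head)    # W_dec as loaded in the snippet above
share = np.linalg.norm(dec, axis=-1) ** 2
share /= share.sum()
print({head: round(float(w), 2) for head, w in enumerate(share)})
```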
Negative Logits
L    -0.37
R    -0.37
E    -0.37
X    -0.35
P    -0.35
up   -0.34
B    -0.34
Z    -0.33
r    -0.33
C    -0.33
Positive Logits
)");    0.67
^(@)    0.61
")));   0.60
>");    0.60
"):     0.59
>\<^    0.58
`,      0.55
'));    0.54
'):     0.54
."));   0.52
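The logit lists above are typically a logit-lens-style readout: the feature's decoder direction projected onto the unembedding. For an attention-output SAE the decoder lives in hook_z space, so it first has to pass through the layer's output projection W_O. A sketch under those assumptions using TransformerLens; it is not necessarily the exact computation behind the numbers shown.

```python
# Sketch: logit-lens readout of feature 324's decoder direction via the layer-0 W_O.
# Assumes the route hook_z -> W_O -> residual stream -> unembedding.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")

dec = torch.tensor(W_dec[324], dtype=torch.float32).reshape(8, 256)  # [n_heads, d_head]
resid_dir = torch.einsum("hd,hdm->m", dec, model.W_O[0])             # map into d_model
logit_effects = resid_dir @ model.W_U                                # [d_vocab]

top = torch.topk(logit_effects, 10)
bottom = torch.topk(-logit_effects, 10)
print([model.to_single_str_token(int(i)) for i in top.indices])      # cf. Positive Logits
print([model.to_single_str_token(int(i)) for i in bottom.indices])   # cf. Negative Logits
```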
Activations (Density: 0.004%)
No Known Activations
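Since no maximum-activation examples are recorded here (density 0.004%), one way to probe the feature is to run your own text, capture blocks.0.attn.hook_z, and encode it with the SAE parameters loaded earlier. A sketch, again assuming the SAE reads the flattened per-head z vectors:

```python
# Sketch: feature 324's activation on custom text, captured at blocks.0.attn.hook_z.
# Reuses W_enc / b_enc / threshold from the loading snippet above.
import numpy as np
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2-2b")

text = "She tried to hide the spare key under the doormat."
_, cache = model.run_with_cache(text, names_filter="blocks.0.attn.hook_z")
z = cache["blocks.0.attn.hook_z"][0]                      # [seq, n_heads, d_head]
z = z.reshape(z.shape[0], -1).detach().cpu().float().numpy()

pre = z @ W_enc + b_enc
acts = np.maximum(pre, 0.0) * (pre > threshold)           # JumpReLU encode
print(list(zip(model.to_str_tokens(text), acts[:, 324].round(3))))
```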