
Research Notes

Weekly thoughts, paper notes, and experiments around LLM safety and interpretability.
The softmax bottleneck in language modeling
February 15, 2026 · 13 min read

A deep review of how the softmax bottleneck limits the expressivity of language models, and how a mixture of softmaxes raises the effective rank of the log-probability matrix.
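
For orientation, a compact sketch of the construction the post examines (the notation here is illustrative and not necessarily the post's): a single softmax ties the log-probability matrix to the rank of the hidden states, while a mixture of K gated softmaxes combines components in probability space and so escapes that rank bound.

```latex
% Single softmax: with hidden states H \in R^{N \times d} and output
% embeddings W \in R^{d \times V}, the logit matrix HW has rank <= d,
% which bounds log P -- the "bottleneck".
P(x \mid c) = \mathrm{softmax}(h_c^\top W)

% Mixture of softmaxes (illustrative notation: K components, gate
% weights \pi_{c,k}, per-component contexts h_{c,k}): mixing happens
% in probability space, so \log P is no longer low-rank in general.
P(x \mid c) = \sum_{k=1}^{K} \pi_{c,k}\,\mathrm{softmax}\!\left(h_{c,k}^\top W\right),
\qquad \sum_{k=1}^{K} \pi_{c,k} = 1
```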

Attention as modern Hopfield memory
February 11, 2026 · 13 min read

A focused review of the modern Hopfield-network view of attention, with emphasis on storage capacity and retrieval behavior.
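
For context, the retrieval rule at the center of that view (a sketch following the common modern-Hopfield formulation rather than the post's exact notation): one update step retrieves the stored pattern closest to the query, and with the stored patterns read as keys and values this is exactly a softmax-attention lookup.

```latex
% Stored patterns as columns of X = (x_1, \dots, x_N); query/state \xi.
% One update of the modern (continuous) Hopfield network:
\xi^{\mathrm{new}} = X\,\mathrm{softmax}\!\left(\beta\, X^\top \xi\right)

% With separate queries, keys, and values and \beta = 1/\sqrt{d_k},
% the same rule reads as standard scaled dot-product attention:
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V
```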

Topic log

An at-a-glance list of the research questions covered so far.