
Sam Buchanan
Postdoctoral Scholar
University of California, Berkeley
[email protected]
I am a postdoctoral scholar at the University of California, Berkeley. Previously, I was at TTIC, after finishing my PhD in Electrical Engineering at Columbia University.
I study the mathematics of representation learning from the perspective of signals and data. I'm interested in questions that span theory and practice: What structural properties of modern data underlie the successes of deep learning? How can we scale our models more effectively by exploiting this structure? I'm especially interested in transformers and in applications to text and visual data.
Recent Highlights
- Book Release: We have released version 1.0 of our new, fully open-source book on representation learning with deep networks (link). It covers theoretical foundations from information theory to optimization, as well as concrete applications such as transformers and contrastive learning. The book is full of new perspectives from our recent research: check out Chapters 3 and 6 for our take on diffusion, endorsed by the great Kevin Murphy!
Upcoming Events
- Tutorials: I will be lecturing on topics from our new book at IAISS 2025 in sunny Tuscany, Italy (link), and at ICCV 2025 in sunny Honolulu, Hawaii (link).
Recent Updates
- New Job: I'm starting as a postdoc at UC Berkeley! (Sep 2025)
- Preprints: Two new preprints posted! One on a theoretical analysis of memorization and generalization in diffusion models (link), and one on building diffusion models with proximal operators, leading to fewer function evaluations (NFEs) at sampling time (link). (Aug 2025)
- 2nd Conference on Parsimony and Learning: I co-organized the second Conference on Parsimony and Learning (CPAL). Thanks to all attendees for making it a success! (Mar 2025)