Photo of Sam Buchanan

Sam Buchanan

Postdoctoral Scholar
University of California, Berkeley
[email protected]

Here is a more formal photo.

I am a postdoctoral scholar at the University of California, Berkeley. Previously, I was at TTIC (the Toyota Technological Institute at Chicago), after finishing my PhD in Electrical Engineering at Columbia University.

I study the mathematics of representation learning from the perspective of signals and data. I’m interested in questions that span theory and practice: What structural properties of modern data underlie the successes of deep learning? How can we scale our models more effectively by exploiting this structure? I’m especially interested in transformers and in applications to text and visual data.

Recent Highlights

  • Book Release: We have released version 1.0 of our new fully open-source book on representation learning with deep networks (link). It covers theoretical foundations, from information theory to optimization, as well as concrete applications such as transformers and contrastive learning. The book is full of new perspectives from our recent research: check out Chapters 3 and 6 for our take on diffusion, endorsed by the great Kevin Murphy!

  • Accepted Papers: Two papers accepted to NeurIPS 2025! One on a theoretical analysis of memorization and generalization in diffusion models (link), and one on building diffusion models with proximal operators, leading to fewer NFEs (number of function evaluations) at sampling time (link). (Sep 2025)

  • New Job: I’m starting as a postdoc at UC Berkeley! (Sep 2025)

Upcoming Events

  • Tutorials: I will be lecturing on topics from our new book at ICCV 2025 in sunny Honolulu, Hawaii, on October 19th (link).

  • 3rd Conference on Parsimony and Learning: I am co-organizing the third Conference on Parsimony and Learning (CPAL). This year, the conference will be held in Tübingen, Germany. Submit your work (early December deadline) and attend!

Past Updates