Sam Buchanan
Research Assistant Professor
Toyota Technological Institute at Chicago
[email protected]
6045 South Kenwood Ave, 411
Chicago, IL 60637
I am a Research Assistant Professor at TTIC. I completed my Ph.D. in Electrical Engineering at Columbia University in 2022, working with John Wright, and received my B.S. in Electrical Engineering from the University of Kansas.
I study the mathematics of representation learning from the perspective of signals and data. I’m interested in questions that span theory and practice: What structural properties of modern data play a role in the success or failure of deep learning? How can we design better deep architectures by exploiting these structures? I’m especially interested in applications to visual data.
Upcoming Events
- Talks: This Fall, I will give talks at Asilomar 2024 and at the IDEAL Privacy and Interpretability in Generative AI workshop on white-box transformers. Thanks to the organizers for the invitations to speak!
- MDS ’24 Special Session: I am co-organizing a SIAM MDS ’24 special session on Mathematical Principles in Foundation Models. Stop by on Friday to hear the outstanding speakers, and check out the associated poster presentations (titles listed on the minisymposium webpage) at the end of the day!
- 2nd Conference on Parsimony and Learning: I am co-organizing the second Conference on Parsimony and Learning (CPAL), to take place at Stanford University in March 2025! Stay tuned for further updates about paper submission. (Summer 2024)
Recent Highlights
- White-Box Deep Networks Tutorial: We delivered a tutorial on building white-box deep neural networks at ICASSP 2024 in Seoul (Apr 2024), and at CVPR 2024 in Seattle (Jun 2024). The most recent tutorial slides can be found here (Lectures 2-1 – 2-3).
Recent Updates
- Publication: The full version of the CRATE story has been accepted for publication in JMLR. CRATE is a “white-box” (yet scalable) transformer architecture where each layer is derived from the principles of compression and sparsification of the input data distribution. This white-box derivation leads CRATE’s representations to have surprising emergent segmentation properties in vision applications, without any complex self-supervised pretraining. (Aug 2024)
- Talk: Spoke at the BIRS Oaxaca “Mathematics of Deep Learning” workshop about white-box networks (Jun 2024).
- Publication: Learned proximal networks, a methodology for parameterizing, learning, and evaluating expressive priors for data-driven inverse problem solvers with convergence guarantees, appeared in ICLR 2024. The camera-ready version of the manuscript can be found here. (May 2024)
- Publication: CRATE-MAE appeared in ICLR 2024. At the heart of this work is a connection between denoising and compression, which we use to derive a corresponding decoder architecture for the “white-box” transformer CRATE encoder. The camera-ready version of the manuscript can be found here. (May 2024)
- Talk: Gave my annual Research at TTIC talk about TILTED! Here is the video recording. (Mar 2024)
- Talk: Gave a talk at the Redwood Seminar. (Feb 2024)
- Publication: We presented CRATE at NeurIPS 2023, and as an oral at the XAI in Action workshop. (Dec 2023)
- Publication: We presented TILTED at ICCV 2023. TILTED improves visual quality, compactness, and interpretability for hybrid neural field 3D representations by incorporating geometry into the latent features. Find the full version on arXiv. (Oct 2023)