Past Updates
- Talk: Spoke at the IDEAL Privacy and Interpretability in Generative AI workshop. (Nov 2024)
- Talk: Spoke at the 2024 Asilomar Conference on Signals, Systems, and Computers. (Oct 2024)
- MDS ’24 Special Session: Co-organized a special session on Mathematical Principles in Foundation Models at SIAM MDS ’24. (Oct 2024)
- Publication: The full version of the CRATE story has been accepted for publication in JMLR. CRATE is a “white-box” (yet scalable) transformer architecture in which each layer is derived from the principles of compression and sparsification of the input data distribution. This white-box derivation leads CRATE’s representations to have surprising emergent segmentation properties in vision applications, without any complex self-supervised pretraining. A toy sketch of one such layer appears after this list. (Aug 2024)
- Talk: Spoke at the BIRS Oaxaca “Mathematics of Deep Learning” workshop about white-box networks. (Jun 2024)
- White-Box Deep Networks Tutorials: We delivered a tutorial on building white-box deep neural networks at ICASSP 2024 in Seoul (Apr 2024) and at CVPR 2024 in Seattle (Jun 2024). The most recent tutorial slides can be found here (Lectures 2-1 through 2-3).
- Publication: Learned proximal networks, a methodology for parameterizing, learning, and evaluating expressive priors for data-driven inverse problem solvers with convergence guarantees, appeared in ICLR 2024. The camera-ready version of the manuscript can be found here. A minimal sketch of the core construction appears after this list. (May 2024)
- Publication: CRATE-MAE appeared in ICLR 2024. At the heart of this work is a connection between denoising and compression, which we use to derive a corresponding decoder architecture for the “white-box” transformer CRATE encoder. The camera-ready version of the manuscript can be found here. A structural sketch of the encoder-decoder pairing appears after this list. (May 2024)
- ICLR 2024: Presented two posters in Vienna: Learned Proximal Networks (Friday, May 10, a.m. session; project page, ICLR page) and CRATE-MAE (Thursday, May 9, a.m. session; project page, ICLR page). (May 2024)
- Talk: Gave my annual Research at TTIC talk about TILTED! Here is the video recording. (Mar 2024)
- Talk: Spoke at the Redwood Seminar. (Feb 2024)
- Publication: We presented CRATE at NeurIPS 2023, and as an oral at the XAI in Action workshop. (Dec 2023)
- Publication: We presented TILTED at ICCV 2023. TILTED improves visual quality, compactness, and interpretability for hybrid neural field 3D representations by incorporating geometry into the latent features. Find the full version on arXiv. A toy sketch of the idea appears after this list. (Oct 2023)
- 1st Conference on Parsimony and Learning: I co-organized the inaugural Conference on Parsimony and Learning (CPAL), which took place at the University of Hong Kong from January 3–6, 2024. Thanks to all authors, speakers, organizers, and especially to the local team at HKU, whose hard work made the conference a success! Stay tuned for CPAL 2025. (Jan 2024)
- (December 2023) New preprint posted on a methodology for data-driven inverse problem solvers with convergence guarantees. We presented this work at the NeurIPS 2023 Learning-Based Solutions for Inverse Problems workshop.
- (June 2023) We taught a short course at ICASSP 2023 in Rhodes, Greece, titled “Learning Nonlinear and Deep Low-Dimensional Representations from High-Dimensional Data: From Theory to Practice”.
- (January 2023) I co-organized the third Workshop on Seeking Low-Dimensionality in Deep Neural Networks (SLowDNN). Here is a link to my tutorial.
- (September 2022) I defended my Ph.D. thesis (back in June!), and started as a Research Assistant Professor at TTIC.
- (May 2022) I received the Eli Jury Award from the Columbia EE Department for “outstanding achievement in the area of signal processing”.
- (May 2022) We taught a short course at ICASSP 2022, titled “Low-Dimensional Models for High-Dimensional Data: From Linear to Nonlinear, Convex to Nonconvex, and Shallow to Deep”. Slides are available!
- (April 2022) I will attend the Princeton ML Theory Summer School this summer, June 13–17.
- (March 2022) New preprint released on invariance-by-design neural architectures for computing with visual data, with theoretical guarantees. Feedback is very much appreciated!
- (December 2021) We presented our paper “Deep Networks Provably Classify Data on Curves” at NeurIPS.
- (August 2021) I gave a talk about our work on the multiple manifold problem at the IMA Workshop on Mathematical Foundation and Applications of Deep Learning at Purdue. Thanks to the organizers for the opportunity to speak!
- (July 2021) I will be attending the Princeton Deep Learning Theory Summer School this year.
- (May 2021) We will present our paper “Deep Networks and the Multiple Manifold Problem” at ICLR 2021 on Thursday, May 6th! Conference link here, paper link here.
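Referenced from the CRATE update above: a minimal, hypothetical sketch of one CRATE-style layer, assuming PyTorch. The function names, shapes, step sizes, and attention scaling here are illustrative choices, not the reference implementation; the point is only the two-step structure, a compression step (multi-head subspace self-attention) followed by a sparsification step (one ISTA iteration against a learned dictionary).

```python
import torch
import torch.nn.functional as F

def mssa(Z, U):
    """Compression step: multi-head subspace self-attention, sketched.
    Z: (n, d) tokens as rows; U: (K, d, p), one subspace basis per head."""
    out = torch.zeros_like(Z)
    for Uk in U:
        P = Z @ Uk                                         # (n, p): project onto subspace k
        A = F.softmax(P @ P.T / P.shape[-1] ** 0.5, dim=-1)  # (n, n) token similarities
        out = out + A @ P @ Uk.T                           # aggregate within the subspace, lift back
    return out

def ista_step(X, D, eta=0.5, lam=0.1):
    """Sparsification step: one ISTA iteration, initialized at X, toward a
    sparse code Z with X ~= Z @ D (atoms of D as rows); the ReLU acts as a
    one-sided soft threshold."""
    step = (X - X @ D) @ D.T            # descent direction for 0.5*||X - Z @ D||^2 at Z = X
    return F.relu(X + eta * step - eta * lam)

def crate_layer(Z, U, D):
    Z = Z + mssa(Z, U)                  # compress toward the learned subspaces
    return ista_step(Z, D)              # sparsify against the learned dictionary

# Smoke test with random parameters (illustration only).
n, d, K, p = 16, 64, 4, 16
Z, U, D = torch.randn(n, d), torch.randn(K, d, p) / d**0.5, torch.randn(d, d) / d**0.5
print(crate_layer(Z, U, D).shape)       # torch.Size([16, 64])
```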
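Referenced from the learned proximal networks update above: a minimal sketch of the core construction, assuming PyTorch. A convex potential is parameterized by a small input-convex neural network (ICNN), and the learned prox is taken to be its gradient, using the fact that gradients of convex functions are exactly proximal operators of (possibly nonconvex) functions. The ICNN sizes and names are hypothetical, and the paper's actual architecture and proximal-matching training loss are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Tiny input-convex network: convex in x because the hidden-to-hidden
    weights are clamped nonnegative and the activations are convex and
    nondecreasing. (Illustrative sizes, not the paper's architecture.)"""
    def __init__(self, d, h=64):
        super().__init__()
        self.Wx0 = nn.Linear(d, h)
        self.Wx1 = nn.Linear(d, h)
        self.Wz1 = nn.Linear(h, h, bias=False)   # clamped >= 0 in forward
        self.Wx2 = nn.Linear(d, 1)
        self.Wz2 = nn.Linear(h, 1, bias=False)   # clamped >= 0 in forward

    def forward(self, x):
        z = F.softplus(self.Wx0(x))
        z = F.softplus(self.Wx1(x) + F.linear(z, self.Wz1.weight.clamp(min=0)))
        return self.Wx2(x) + F.linear(z, self.Wz2.weight.clamp(min=0))

def learned_prox(psi, y):
    """The learned prox f(y) = grad psi(y): gradients of convex functions
    are precisely proximal operators of some (possibly nonconvex) prior."""
    y = y.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(psi(y).sum(), y, create_graph=True)
    return g

psi = ICNN(d=8)
y = torch.randn(4, 8)
print(learned_prox(psi, y).shape)   # torch.Size([4, 8])
```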
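Referenced from the CRATE-MAE update above: a loose structural sketch, assuming PyTorch, of the encoder-decoder pairing in a masked autoencoder whose decoder mirrors (approximately inverts) the encoder. The layer internals are deliberately stubbed with plain linear maps; the actual white-box operators come from the paper's compression/denoising derivation, and everything named here (MirroredAE, mask_token, the keep ratio) is illustrative.

```python
import torch
import torch.nn as nn

class MirroredAE(nn.Module):
    def __init__(self, d=64, depth=4):
        super().__init__()
        # Encoder layers f_1..f_L and decoder layers g_1..g_L, paired so
        # that g_i is meant to undo f_i (stubbed as linear maps here).
        self.enc = nn.ModuleList(nn.Linear(d, d) for _ in range(depth))
        self.dec = nn.ModuleList(nn.Linear(d, d) for _ in range(depth))
        self.mask_token = nn.Parameter(torch.zeros(d))

    def forward(self, tokens, keep):
        """tokens: (n, d); keep: boolean (n,) mask of visible tokens."""
        z = tokens[keep]
        for f in self.enc:                    # encode visible tokens only
            z = f(z)
        full = self.mask_token.expand(tokens.shape[0], -1).clone()
        full[keep] = z                        # splice encodings among mask tokens
        for g in reversed(self.dec):          # decode in mirrored layer order
            full = g(full)
        return full                           # reconstruction of all tokens

ae = MirroredAE()
tokens = torch.randn(16, 64)
keep = torch.rand(16) > 0.75                  # keep ~25% of tokens visible
print(ae(tokens, keep).shape)                 # torch.Size([16, 64])
```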
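Referenced from the TILTED update above: a toy sketch of the idea, assuming PyTorch and restricted to 2D for brevity. Rather than querying axis-aligned feature grids, each factored feature plane is queried in its own learned rotated ("tilted") frame, so the latent features can align with the scene's geometry. The rotation parameterization, plane shapes, and function names are all illustrative.

```python
import torch
import torch.nn.functional as F

def rot2d(theta):
    """2x2 rotation matrix for a scalar angle theta."""
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def query_tilted_planes(xy, planes, thetas):
    """xy: (n, 2) query points in [-1, 1]^2; planes: (K, C, H, W) factored
    feature planes; thetas: (K,) learned per-plane rotation angles."""
    feats = []
    for plane, theta in zip(planes, thetas):
        xy_k = xy @ rot2d(theta).T            # query in this plane's tilted frame
        grid = xy_k.view(1, -1, 1, 2)         # grid_sample expects (N, Hout, Wout, 2)
        f = F.grid_sample(plane[None], grid, align_corners=True)  # (1, C, n, 1)
        feats.append(f[0, :, :, 0].T)         # (n, C) features from this factor
    return torch.cat(feats, dim=-1)           # concatenate all factor features

n, K, C, H, W = 5, 3, 8, 32, 32
xy = torch.rand(n, 2) * 2 - 1
planes, thetas = torch.randn(K, C, H, W), torch.randn(K)
print(query_tilted_planes(xy, planes, thetas).shape)  # torch.Size([5, 24])
```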