Welcome, it’s a pleasure to connect with you! I’m Bingcong Li, a postdoctoral researcher at ETH Zurich, collaborating with Prof. Niao He and the ODI group. Prior to this, I received my doctoral degree from the University of Minnesota under the supervision of Prof. Georgios B. Giannakis, and then gained industry experience, spending a year working on LLMs.

Find me via bingtsongli[@]gmail.com or bingcong.li[@]inf.ethz.ch.

General interests

My research leverages optimization to advance deep learning. My primary focus is understanding the optimization dynamics of neural networks and leveraging these insights to design more efficient algorithms for pretraining and finetuning.

I enjoy cycling 🚴🏻 and skiing 🎿 outside the office.

Recent updates

  • 02/2025. [Preprint] Transfer learning provably benefits RLHF. Check out our paper!
  • 01/2025. [ICLR 2025] We prove that initialization exponentially impacts the convergence behavior of ScaledGD on LoRA-type problems (i.e., linear –> quadratic rates).
  • 12/2024. Talked about Architecture-Aware Optimization at ELLIS UnConference.
  • 12/2024. [ICASSP 2025] A new variant of SAM is released.
  • 09/2024. [NeurIPS 2024] We study the implicit regularization of sharpness-aware minimization (SAM) and make it explicit to alleviate the computational burden of SAM. The resulting approach is useful for finetuning LLMs with LoRA.
  • 05/2024. [ICML 2024] Memory-efficient private finetuning for LLMs. We also have a paper at Theoretical Foundations of Foundation Models (TF2M) workshop.
  • 01/2024. Started as a postdoc at ETH Zurich, working with Prof. Niao He.
  • 12/2023. [ICASSP 2024] Universal ‘preconditioner’ for meta learning.
  • 09/2023. [NeurIPS 2023] Improving generalization by refining the optimization of sharpness-aware minimization; see here.