Welcome, it’s a pleasure to connect with you! I’m Bingcong Li, a postdoctoral researcher at ETH Zurich, collaborating with Prof. Niao He and the ODI group. Prior to this, I received my doctoral degree from the University of Minnesota under the supervision of Prof. Georgios B. Giannakis, and then spent a year in industry working on LLMs.

Find me via bingtsongli[@]gmail.com or bingcong.li[@]inf.ethz.ch.

General interests

My research leverages optimization to advance deep learning. My primary focus is on improving the generalization of models and making them more robust and trustworthy.

Outside the office, I enjoy cycling 🚴🏻 and skiing 🎿.

Recent updates

  • 09/2024. [NeurIPS 2024] We study the implicit regularization of sharpness-aware minimization (SAM) and make it explicit to alleviate SAM’s computational burden. The resulting approach is useful for finetuning LLMs with LoRA.
  • 05/2024. [ICML 2024] Memory-efficient private finetuning for LLMs. We also have a paper at the Theoretical Foundations of Foundation Models (TF2M) workshop.
  • 01/2024. Started as a postdoc at ETH Zurich, working with Prof. Niao He.
  • 12/2023. [ICASSP 2024] A universal ‘preconditioner’ for meta-learning.
  • 09/2023. [NeurIPS 2023] Improving generalization by refining the optimization procedure of sharpness-aware minimization.