Ashok Vardhan Makkuva


Postdoctoral researcher
School of Computer and Communication Sciences, EPFL
Email | Google Scholar | Linkedin | Twitter


I am on the faculty job market this year!

My Research

I am a postdoctoral researcher at EPFL, working with Michael Gastpar, Jason Lee, and Martin Jaggi. My research establishes AI Foundations to build reliable and interpretable AI systems. To this end, my work has delivered impactful practical advances and key theoretical insights across two main research thrusts: (1) Thrust 1 (Reliability) — Algorithmic Frameworks for AI via Information-Theoretic Principles, and (2) Thrust 2 (Interpretability) — Theoretical Foundations of AI via Structured Data. My fundamental contributions across these areas have appeared in top-tier machine learning venues such as NeurIPS, ICLR, and ICML, and have been recognized with a DAAD AInet Fellowship, an ICLR Spotlight Award, a Best Paper Award from ACM MobiHoc, the Joan and Lalit Bahl Fellowship (twice), the Sundaram Seshu International Student Fellowship, and a Qualcomm Innovation Fellowship for two mentored students.

I have delivered invited talks at leading institutions, including Stanford, Berkeley, and Microsoft Research, as well as a tutorial at NeurIPS and an upcoming invited article in IEEE BITS Magazine. A few recent publications reflective of my profile include: Fundamental limits of prompt compression, Attention with Markov, Local to Global, Constant depth suffices, and Markov to Laplace via Mamba.

Before joining EPFL, I received my Ph.D. in Electrical and Computer Engineering from UIUC, where I worked with Pramod Viswanath and Sewoong Oh. During my Ph.D., I had the pleasure of collaborating with Sreeram Kannan, Founder and CEO of EigenLayer.

Prior to that, I graduated from IIT Bombay with a B. Tech. (Honors) in Electrical Engineering and a minor in Mathematics, where I worked with Vivek Borkar.

News

  • Apr 2025: Honored to receive the prestigious DAAD AInet Fellowship, awarded to outstanding international AI researchers for an exclusive postdoc research visit to top German universities.

  • Apr 2025: Excited to share that my body of work on Markovian analysis of transformers will be the centerpiece for an upcoming invited article in the IEEE BITS Magazine!

  • Feb 2025: Attention with Markov is accepted as an ICLR Spotlight (top 5% of 11,670 submissions).

  • Jan 2025: We discover surprising in-context learning abilities of Mamba when trained on Markov chains! More details in our arXiv preprint here.

  • Dec 2024: Wonderful experience presenting our NeurIPS tutorial, Sandbox for the Blackbox: How LLMs Learn Structured Data? Full materials, including slides and video, are available here!

  • Nov-Dec 2024: Delivered talks on our comprehensive body of work on Markovian analysis of transformers at (i) the Stanford IT Forum, hosted by Ayfer Özgür and Tsachy Weissman, and (ii) ETH Zürich, hosted by Andreas Krause.

  • Sep 2024: Three papers to appear at NeurIPS 2024! One explores the fundamental limits of prompt compression, and the other two provide a complete characterization of transformers trained on Markovian inputs: Local to Global and Constant depth suffices.

  • Aug 2024: Excited to share that I will be giving a tutorial at NeurIPS 2024 with my amazing colleagues Bingbin Liu and Jason Lee, titled Sandbox for the Blackbox: How LLMs Learn Structured Data?

  • +older news…
