Ashok Vardhan Makkuva


Associate Professor
Information Processing and Communications Laboratory (LTCI), Télécom Paris
Institut Polytechnique de Paris
Email | Google Scholar | LinkedIn | Twitter


Looking for highly motivated students for exciting projects on AI Reasoning and Interpretable AI!

My Research

I am an Associate Professor at Télécom Paris, Institut Polytechnique de Paris, part of the MIC team. Earlier, I was a postdoctoral researcher at EPFL, hosted by Michael Gastpar and in close collaboration with Martin Jaggi and Caglar Gulcehre. My primary research focus is building strong AI foundations for designing reliable and interpretable AI, rooted in information-theoretic principles. To this end, my work has delivered impactful practical advances and key theoretical insights across two main research thrusts: (1) algorithmic foundations of reliable AI via information-theoretic principles, and (2) theoretical foundations of interpretable AI via structured data. A few recent publications reflective of my profile include: Fundamental limits of prompt compression, Attention with Markov, Two layers is all you need, and Markov to Laplace via Mamba.

My fundamental contributions across these areas have appeared in top-tier machine learning venues such as NeurIPS, ICLR, and ICML, and have been recognized with a DAAD AInet Fellowship, NeurIPS and ICLR Spotlight Awards, a Best Paper Award from ACM MobiHoc, the Joan and Lalit Bahl Fellowship (twice), the Sundaram Seshu International Student Fellowship, and a Qualcomm Innovation Fellowship for two mentored students. I have delivered invited talks at leading institutions, including Stanford, Berkeley, and Microsoft Research, as well as tutorials at NeurIPS 2024 and ICTS 2025, and I have an upcoming invited article in IEEE BITS Magazine.

Before joining EPFL, I received my Ph.D. in Electrical and Computer Engineering from UIUC, where I worked with Pramod Viswanath and Sewoong Oh. During my Ph.D., I had the pleasure of collaborating with Sreeram Kannan, Founder and CEO of EigenLayer.

Prior to that, I graduated from IIT Bombay with a B.Tech. (Honors) in Electrical Engineering and a Minor in Mathematics, where I worked with Vivek Borkar.

News

  • Sep 2025: Our recent paper showing that two layers suffice to represent any $k$-th order induction head will appear at NeurIPS 2025 as a Spotlight!

  • Aug 2025: Honored to be an invited speaker at ICTS, Bangalore, presenting a tutorial on recent advances in LLMs (representation, learning, generalization).

  • Apr 2025: Honored to receive the prestigious DAAD AInet Fellowship, awarded to outstanding international AI researchers for an exclusive postdoc research visit to top German universities.

  • Apr 2025: Excited to share that my body of work on Markovian analysis of transformers will be the centerpiece for an upcoming invited article in the IEEE BITS Magazine!

  • Feb 2025: Attention with Markov is accepted for an ICLR Spotlight (top 5% of 11,670 submissions).

  • Jan 2025: We discover surprising in-context learning abilities of Mamba when trained on Markov chains! More details on our arXiv preprint here.

  • Dec 2024: Wonderful experience presenting our NeurIPS tutorial, Sandbox for the Blackbox: How LLMs Learn Structured Data? Full materials, including slides and video, are available here!

  • Nov-Dec 2024: Talks on our comprehensive body of work on Markovian analysis of transformers at (i) the Stanford IT Forum, hosted by Ayfer Özgür and Tsachy Weissman, and (ii) ETH Zürich, hosted by Andreas Krause.

  • Sep 2024: Three papers to appear at NeurIPS 2024! One explores the fundamental limits of prompt compression, and the other two provide a complete characterization of transformers trained on Markovian inputs: Local to Global and Constant depth suffices.

  • Aug 2024: Excited to share that I will be giving a tutorial at NeurIPS 2024 with my amazing colleagues Bingbin Liu and Jason Lee, titled Sandbox for the Blackbox: How LLMs Learn Structured Data?

  • +older news…
