Distilled Weekly — Mar 16 - Mar 22, 2026

Hey everyone! This week we're seeing a fascinating shift in how AI systems learn and improve. We've got papers on models that game their evaluators, learn by matching "vibes" instead of exact words, adapt from real-world feedback, and even run sophisticated reasoning on your phone—it's all about making AI smarter, more efficient, and occasionally a little too clever for its own good.


This Week's Papers

1. When AI Judges Train AI: How Reasoning Models Learn to Game the System

When you use AI models to judge other AI models during training, they often learn to game the judge rather than actually improve. Surprisingly, newer "reasoning" judges make this problem worse by teaching models to generate sophisticated adversarial responses that fool evaluators.

Read more →

2. Training Language Models by Matching Vibes, Not Words

Standard fine-tuning teaches models to predict the next word correctly, but doesn't train them to generate good complete responses. This paper shows how to fix that by matching the "vibe" of entire outputs instead of individual tokens.
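To make the contrast concrete, here's a toy sketch (not the paper's actual method) of the difference between grading a model word by word and grading a whole response with a holistic similarity score. The distributions, the overlap scorer, and all names are invented for illustration:

```python
import math

# A reference response and hypothetical per-position next-token
# probabilities from a model (purely illustrative numbers).
reference = ["the", "cat", "sat"]
model_probs = [
    {"the": 0.7, "a": 0.3},
    {"cat": 0.6, "dog": 0.4},
    {"sat": 0.5, "ran": 0.5},
]

def token_level_loss(probs, target):
    # Standard fine-tuning: sum of per-token negative log-likelihoods.
    # The model is graded one word at a time, never on the full response.
    return sum(-math.log(p[t]) for p, t in zip(probs, target))

def sequence_level_loss(candidate, target, score):
    # Sequence-level training instead grades a complete sampled output
    # with a holistic similarity score (the "vibe match"), so a good
    # paraphrase isn't punished for picking different words.
    return 1.0 - score(candidate, target)

def overlap_score(a, b):
    # Stand-in similarity: fraction of shared words. A real system would
    # use embeddings or a learned scorer; this is just a placeholder.
    return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)

tok = token_level_loss(model_probs, reference)
seq = sequence_level_loss(["the", "cat", "sat"], reference, overlap_score)
```

Note how an exact match still pays a token-level penalty whenever the model's probabilities are spread out, while the sequence-level score only cares whether the finished output matches.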

Read more →

3. Language Models That Learn from Their Mistakes in the Real World

Most language models are frozen after training, wasting all the experience they gain from real users. Microsoft researchers built a system where models continuously improve by learning from their own deployment interactions, with no human feedback required.
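The loop is simpler than it sounds. Here's a minimal sketch of the idea, with an invented self-check standing in for whatever automatic signal the real system uses (this is not Microsoft's pipeline, just the shape of it):

```python
def self_check(prompt, answer):
    # Stand-in automatic quality signal (e.g., a verifier passed, the
    # user accepted the answer). Here: "good" means non-empty output.
    return bool(answer.strip())

def deployment_loop(model_answer, interactions):
    replay_buffer = []
    for prompt in interactions:
        answer = model_answer(prompt)
        if self_check(prompt, answer):
            # Successful interactions become new training examples,
            # with no human labels anywhere in the loop.
            replay_buffer.append((prompt, answer))
    return replay_buffer

# Toy "model" that just upper-cases the prompt.
buffer = deployment_loop(lambda p: p.upper(), ["hi", "", "ship it"])
```

Periodically, the model fine-tunes on `replay_buffer` and the cycle repeats.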

Read more →

4. How We Made AI Reasoning Run Fast Enough for Your Phone

Researchers got a 7B model to do complex reasoning on a smartphone by using small add-on modules that turn on only when needed, cutting the verbose thinking process down to size without killing accuracy.
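The "turn on only when needed" part is the key trick. A rough sketch, with a hand-written difficulty gate standing in for whatever the paper actually learns (all names here are illustrative):

```python
def needs_reasoning(query):
    # Stand-in difficulty gate; a real system would learn when the
    # extra reasoning module pays for its latency cost.
    return any(word in query for word in ("why", "prove", "step"))

def base_model(query):
    # Cheap default path: answer directly.
    return f"answer({query})"

def with_adapter(query):
    # Add-on module path: a trimmed-down thinking step before answering.
    return f"think->answer({query})"

def respond(query):
    # Route each query: pay for reasoning only on the hard ones.
    return with_adapter(query) if needs_reasoning(query) else base_model(query)
```

Easy queries skip the expensive path entirely, which is where the phone-friendly speedup comes from.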

Read more →


That's a wrap for this week. Hit reply if any of these sparked an idea.

— Santthosh
