Distilled Weekly — Feb 16 - Feb 22, 2026

Welcome to this week's Distilled! We're diving into two papers that tackle very different but equally critical problems in AI development. First up: how a surprisingly tiny fraction of training tokens can derail an entire LLM training run. Second: how GLM-5 is pushing beyond autocomplete-style coding to building complete software systems.


This Week's Papers

1. Why 0.01% of Tokens Are Breaking Your LLM Training (And How to Fix It)

When training language models with reinforcement learning, roughly 1 in 10,000 tokens receives a wildly oversized update that destabilizes the whole run. Masking just these troublemakers fixes the problem.

Read more →
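The core idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: I'm assuming the oversized updates show up as extreme importance ratios between the current and behavior policies, and that `masked_pg_loss`, the clip threshold, and all tensor names are hypothetical.

```python
import torch

def masked_pg_loss(logp_new, logp_old, advantages, clip=8.0):
    """Policy-gradient loss that masks tokens with extreme importance ratios.

    Hypothetical sketch: tokens whose ratio falls outside [1/clip, clip]
    are zeroed out instead of contributing an oversized update.
    """
    # Importance ratio per token under the current vs. behavior policy.
    ratio = torch.exp(logp_new - logp_old)
    # Mask the rare outlier tokens (the ~0.01% troublemakers).
    mask = (ratio < clip) & (ratio > 1.0 / clip)
    loss = -(ratio * advantages) * mask
    # Average over the tokens that survive the mask.
    return loss.sum() / mask.sum().clamp(min=1)
```

The appeal of this kind of fix is its surgical scope: the other 99.99% of tokens train exactly as before.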

2. GLM-5: Teaching AI to Actually Build Software, Not Just Suggest Code

GLM-5 shifts from helping you write code to acting as a software engineer in its own right, handling complex, multi-hour development tasks autonomously.

Read more →


That's a wrap for this week. Hit reply if any of these sparked an idea.

— Santthosh
