May 5, 2026 · Research · Skills · Agents

Ctx2Skill teaches the Skills movement how to mint its own skills

Tsinghua and UIUC just dropped Ctx2Skill on arXiv, and its setup tackles the part everyone in the Skills wave has been hand-waving past: you can hand an agent a markdown skill, sure, but who writes the skill in the first place? Ctx2Skill answers with three-agent self-play. A Challenger generates tasks. A Reasoner solves them, extracting a growing skill library as it goes. A Judge scores the solutions. Skills are distilled into natural-language procedures and replayed across training rounds so the system doesn't overfit to the latest tasks.
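To make the loop concrete, here's a minimal sketch of what Challenger/Reasoner/Judge self-play with a replay buffer could look like. Everything here is my own guess at the shape of the system; the class names, the keyword-match retrieval, and the replay schedule are illustrative stand-ins, not the paper's actual interfaces.

```python
class SkillLibrary:
    """Grows natural-language procedures as self-play progresses."""

    def __init__(self):
        self.skills = {}  # skill name -> natural-language procedure

    def add(self, name, procedure):
        self.skills[name] = procedure

    def retrieve(self, task):
        # Naive keyword match as a stand-in for real skill retrieval.
        return [proc for name, proc in self.skills.items() if name in task]


def challenger(round_idx):
    # Stand-in: a real Challenger would be an LLM generating novel tasks.
    return f"task-{round_idx}: apply parsing"


def reasoner(task, library):
    # Stand-in: a real Reasoner would solve the task conditioned on
    # retrieved skills, then distill a new skill from its trace.
    hints = library.retrieve(task)
    answer = f"solved({task}) using {len(hints)} skills"
    new_skill = ("parsing", "split the input and map each field")
    return answer, new_skill


def judge(answer):
    # Stand-in: a real Judge would score solution quality with an LLM.
    return 1.0 if answer.startswith("solved") else 0.0


def self_play(rounds=4, replay_every=2):
    lib = SkillLibrary()
    replay_buffer = []
    scores = []
    for i in range(rounds):
        task = challenger(i)
        replay_buffer.append(task)
        # Periodically revisit an old task so the library doesn't
        # overfit to whatever the Challenger generated most recently.
        if i % replay_every == 1 and len(replay_buffer) > 1:
            task = replay_buffer[0]
        answer, skill = reasoner(task, lib)
        if judge(answer) > 0.5:
            lib.add(*skill)  # only keep skills from judged successes
        scores.append(judge(answer))
    return lib, scores
```

The structural point is the feedback loop: skills only enter the library when the Judge approves, and replay keeps old tasks in rotation so later skills stay compatible with earlier ones.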

Across four context-learning benchmarks from CL-bench, the approach consistently lifts solve rates regardless of the backbone model. Which is the part that matters: the skill library generalizes. It's not just frozen prompts.

This is the academic answer to the entire skills movement that exploded over the past month. Anthropic Skills, mattpocock/skills, browserbase/skills, andrej-karpathy-skills, SkillClaw, Skills-Coach, EvoAgent. All hand-curated by humans. Ctx2Skill is the first paper that lets the agent grow its own skill library autonomously through structured self-play. If this trains stably at scale, the next moat in agent products is not who has the best Anthropic-distributed skill pack, it's whose runtime mints the right skill on the spot.

Paper at https://arxiv.org/abs/2604.27660.