March 16, 2026 · Research · Skills · Agents · Open Source

XSkill — Continual Learning from Experience and Skills in Multimodal Agents

XSkill is a new research framework that enables multimodal AI agents to continuously learn and improve from their own execution history — without any parameter training. Published on arXiv (2603.12056) with 54 upvotes on HuggingFace Daily Papers, the paper introduces a training-free approach to agent self-improvement.

The system works in two phases. In the Accumulation phase, after each batch of agent runs, XSkill automatically distills two types of knowledge: task-level Skills (structured workflows and tool templates) and action-level Experiences (context-specific tactical insights). In the Inference phase, the agent decomposes new tasks, retrieves relevant knowledge from its memory bank, and injects it into the system prompt.
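The two-phase loop can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `MemoryBank`, `MemoryItem`, and keyword-based retrieval below are hypothetical stand-ins for XSkill's actual distillation and retrieval mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    # Hypothetical record: a task-level Skill or an action-level Experience
    kind: str     # "skill" or "experience"
    topic: str    # rough index key, e.g. "web_search"
    content: str  # distilled workflow or tactical insight

@dataclass
class MemoryBank:
    items: list = field(default_factory=list)

    def accumulate(self, item: MemoryItem) -> None:
        # Accumulation phase: store knowledge distilled from past runs
        self.items.append(item)

    def retrieve(self, subtasks: list) -> list:
        # Inference phase: fetch items whose topic matches a decomposed subtask
        # (toy keyword match; the paper's retrieval may be more sophisticated)
        return [it for it in self.items if any(it.topic in s for s in subtasks)]

def build_system_prompt(base: str, retrieved: list) -> str:
    # Inject retrieved Skills/Experiences into the system prompt
    if not retrieved:
        return base
    knowledge = "\n".join(f"- [{it.kind}] {it.content}" for it in retrieved)
    return f"{base}\n\nRelevant knowledge from past runs:\n{knowledge}"

bank = MemoryBank()
bank.accumulate(MemoryItem("skill", "web_search",
                           "Refine queries with site filters before browsing."))
bank.accumulate(MemoryItem("experience", "image_crop",
                           "Crop to the region of interest before running OCR."))

prompt = build_system_prompt(
    "You are a multimodal agent.",
    bank.retrieve(["perform web_search for the product page", "summarize results"]),
)
print(prompt)
```

Note that only the `web_search` skill is injected here, since no subtask mentions image cropping; keeping the injected context task-relevant is what makes prompt-level knowledge reuse viable without retraining.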

Evaluated across five diverse benchmarks (VisualToolBench, TIR-Bench, MMSearch-Plus, AgentVista, MMBrowseComp), XSkill achieves significant gains over baselines with strong zero-shot cross-task transferability — skills learned in one domain improve performance in others.

This matters for the agentic ecosystem because it addresses a key limitation: agents that don't learn from their mistakes. XSkill shows a practical path to agents that get better over time through experience accumulation rather than expensive retraining.

arXiv: https://arxiv.org/abs/2603.12056
GitHub: https://github.com/XSkill-Agent/XSkill