Memento-Skills: A Framework Where Agents Autonomously Design Other Agents
Memento-Skills introduces a self-improving agent system that autonomously constructs, adapts, and improves task-specific agents through experience. The paper, trending on Hugging Face Daily Papers with 23 upvotes, reports 26.2% and 116.2% relative improvements in overall accuracy on two evaluation benchmarks.
The core innovation is a "Read, Execute, Reflect, Write" cycle where agents learn by evolving external skills stored as markdown files rather than updating the underlying language model. When tasks fail, the system identifies problematic skills and rewrites them — essentially letting agents redesign themselves through experience.
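The cycle described above can be sketched in a few lines. This is a minimal illustration, assuming skills live as markdown files in a local directory; the function names and the toy success criterion are hypothetical, not the paper's actual API. In a real system, `execute()` and `reflect()` would be LLM calls.

```python
from pathlib import Path

# Illustrative directory where skills are stored as markdown files.
SKILLS_DIR = Path("skills")

def read_skill(name: str) -> str:
    # Read: load the current skill definition from its markdown file.
    return (SKILLS_DIR / f"{name}.md").read_text()

def execute(skill_text: str, task: str) -> bool:
    # Execute: toy stand-in for running an agent with the skill in its
    # prompt; here a task "succeeds" only if the skill mentions its keyword.
    return task in skill_text

def reflect(skill_text: str, task: str) -> str:
    # Reflect: toy stand-in for an LLM critique that rewrites the skill
    # to cover the observed failure case.
    return skill_text + f"\n- Handle tasks involving: {task}\n"

def write_skill(name: str, new_text: str) -> None:
    # Write: persist the revised skill so future runs benefit from it.
    (SKILLS_DIR / f"{name}.md").write_text(new_text)

def run_with_improvement(name: str, task: str) -> bool:
    skill = read_skill(name)
    if execute(skill, task):
        return True
    write_skill(name, reflect(skill, task))  # rewrite the failing skill
    return execute(read_skill(name), task)   # retry with the evolved skill

if __name__ == "__main__":
    SKILLS_DIR.mkdir(exist_ok=True)
    write_skill("web_search", "# Web Search Skill\n- Query a search engine\n")
    print(run_with_improvement("web_search", "pagination"))  # True after one rewrite
```

The key design point the paper emphasizes survives even in this toy: improvement lands in the external skill file, not in the model's weights, so a rewritten skill persists across sessions and can be shared between agents.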
The framework ships with nine built-in skills (e.g., filesystem operations, web search, image analysis, and document processing) and supports multiple deployment interfaces, including a CLI, a desktop GUI, and a Feishu messaging bridge. It is specifically optimized for Chinese LLM ecosystems, including Kimi, MiniMax, and GLM models.
Memento-Skills distinguishes itself from similar projects by prioritizing skill self-evolution: memory-based reinforcement learning over reusable skill units. Rather than simply prompting an LLM differently, the system treats agent capabilities as evolvable programs that improve with use.
The project was developed by a team of 17 researchers led by Huichi Zhou and Jun Wang. The code is available under the MIT license.
GitHub: https://github.com/Memento-Teams/Memento-Skills
Paper: https://arxiv.org/abs/2603.18743