Google DeepMind Releases AGI Cognitive Framework with $200K Kaggle Hackathon
Google DeepMind has published a cognitive framework for measuring progress toward AGI, accompanied by a $200,000 Kaggle hackathon to crowdsource the evaluations. The paper was released March 16; the hackathon opened March 17 and runs through April 16.
The framework identifies 10 cognitive abilities hypothesized to be essential for general intelligence: perception, generation, attention, learning, memory, reasoning, metacognition, executive functions, problem-solving, and social cognition. Drawing on decades of research from psychology and neuroscience, it proposes a rigorous evaluation protocol where a system's performance across targeted cognitive tasks generates a "cognitive profile" of strengths and weaknesses.
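To make the idea concrete, here is a minimal sketch (not from the paper) of what a "cognitive profile" might look like in code: per-ability scores split into strengths and weaknesses. The ability names come from the framework; the 0–1 scoring scale and the threshold are assumptions for illustration.

```python
# Hypothetical sketch of a "cognitive profile": per-ability scores
# partitioned into strengths and weaknesses. The ability list follows
# the framework; the 0-1 scale and 0.5 threshold are assumptions.

ABILITIES = [
    "perception", "generation", "attention", "learning", "memory",
    "reasoning", "metacognition", "executive functions",
    "problem-solving", "social cognition",
]

def cognitive_profile(scores: dict[str, float], threshold: float = 0.5):
    """Split per-ability scores (0-1) into strengths and weaknesses."""
    missing = [a for a in ABILITIES if a not in scores]
    if missing:
        raise ValueError(f"missing abilities: {missing}")
    strengths = {a: s for a, s in scores.items() if s >= threshold}
    weaknesses = {a: s for a, s in scores.items() if s < threshold}
    return strengths, weaknesses

# Example: a system strong across the board but weak on metacognition.
scores = {a: 0.8 for a in ABILITIES}
scores["metacognition"] = 0.3
strengths, weaknesses = cognitive_profile(scores)
print(sorted(weaknesses))  # prints ['metacognition']
```

The point of such a profile is that it surfaces uneven capabilities that a single aggregate benchmark score would hide.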
The Kaggle hackathon focuses on the five abilities with the largest evaluation gap: learning, metacognition, attention, executive functions, and social cognition. These are precisely the capabilities that distinguish autonomous agents from simple chatbots — an agent needs to learn from experience, monitor its own reasoning, manage attention across tasks, plan and execute multi-step workflows, and understand social context.
This matters for the agentic ecosystem because it provides the first standardized way to measure whether AI systems are developing the cognitive capabilities that agents need. Instead of relying on task-specific benchmarks, the community now has a framework for evaluating the underlying cognitive architecture.
Paper: https://blog.google/innovation-and-ai/models-and-research/google-deepmind/measuring-agi-cognitive-framework/
Kaggle hackathon: submissions open March 17 through April 16, results June 1.