At Google DeepMind, a team led by Chief AGI Scientist Shane Legg has introduced a fresh framework for understanding AGI systems and what comes before them.
The essence? For AI research, it’s vital to clearly define what AGI means. We need to measure not just how AI systems perform, but also their adaptability and independence.
The Challenge of Pinning Down AGI
DeepMind’s research considers nine AGI definitions. Each, however, falls short in fully capturing AGI’s true nature. They range from the Turing Test, assessing conversational abilities, to practical tests like the Coffee Test, and even include abstract concepts like consciousness.
Take current language models: they might pass the Turing Test, yet that alone doesn’t make them AGI. The complexity of consciousness in machines remains a puzzle. And while a system’s failure at an everyday task like making coffee suggests it is not AGI, success at that task doesn’t automatically imply it is.
A New Lens to View AGI
In their approach to evaluating AI, the DeepMind team advocates six criteria. They emphasize focusing on the functional capabilities of AI rather than anthropomorphic attributes such as consciousness. A balanced evaluation should consider both the breadth and the proficiency of the tasks an AI system can handle.
The team suggests prioritizing cognitive abilities, without insisting on the necessity of physical task execution for a system to qualify as AGI. They also highlight the significance of recognizing a system’s potential to perform AGI-level tasks, regardless of its current readiness for deployment in real-world scenarios.
Assigning value to tasks that bear real-world significance is another key aspect of their criteria. Finally, they view AGI not as a fixed endpoint but as a continuum with varying degrees of intelligence, underscoring the evolutionary nature of AI development.
Understanding AI’s Depth and Breadth
DeepMind presents a framework that maps AI systems along two axes: ‘performance’ and ‘generality’. The scale ranges from no AI at all to superhuman AGI that surpasses human abilities at every task. ‘Performance’ measures how an AI compares to human capability, while ‘generality’ measures the breadth of tasks at which it reaches that level of performance.
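The performance axis can be pictured as a percentile ladder: how large a share of skilled humans does the system outperform at a task? The level names below follow DeepMind's published taxonomy as commonly summarized; the exact cutoffs used here are this sketch's assumption, not a definitive implementation.

```python
def performance_level(percentile: float) -> str:
    """Map the share of skilled humans an AI outperforms on a task
    to a performance level name (a rough sketch, not DeepMind's code)."""
    if percentile < 0 or percentile > 100:
        raise ValueError("percentile must be between 0 and 100")
    if percentile == 0:
        return "No AI"          # no better than any human
    if percentile < 50:
        return "Emerging"       # roughly comparable to an unskilled human
    if percentile < 90:
        return "Competent"      # at least the median skilled adult
    if percentile < 99:
        return "Expert"         # top decile of skilled adults
    if percentile < 100:
        return "Virtuoso"       # top percentile of skilled adults
    return "Superhuman"         # outperforms all humans

# Generality is the second, orthogonal axis: how many distinct task
# types reach a given level, from narrow (one task) to general (many).
```

A full rating under this framework would pair such a performance level with a generality judgment, which is why a system can be, say, superhuman at one narrow task without qualifying as AGI.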
In summary, AGI is an evolving field with many layers. It calls for a deep understanding and dynamic methods to measure its progress and possibilities.