Thinking Fast

In this Google DeepMind podcast episode, Professor Hannah Fry interviews Dr Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind. The conversation focuses extensively on Artificial General Intelligence (AGI), defining it across different levels, from "minimal AGI" (human-level cognitive capability) to "full AGI" and eventually Artificial Super Intelligence (ASI). Legg offers a consistent prediction of a 50/50 chance of minimal AGI by 2028, while discussing the current capabilities and weaknesses of AI, emphasizing that its performance is uneven but rapidly improving through combined algorithmic and architectural changes. Crucially, the discussion also covers the profound societal and economic transformation expected from AGI, including the need to structure a post-AGI world, and the importance of ensuring the technology is safe and ethical through advanced reasoning capabilities.

Beyond Human: Shane Legg's Three Stages of AGI that Will Redefine Our World

The arrival of Artificial General Intelligence (AGI) is no longer a distant thought experiment; it is an approaching reality that promises massive societal and economic transformation. Shane Legg, Chief AGI Scientist and co-founder of Google DeepMind, has long been credited with popularizing the term AGI. He offers a clear spectrum for how intelligence will evolve in the coming years, differentiating between minimal AGI, full AGI, and Artificial Super Intelligence (ASI).

The AGI Tiers: Minimal, Full, and Super

Legg views AGI not as a single threshold but as a continuum of capability. The first critical milestone he defines is minimal AGI (or sometimes simply "AGI"), achieved when an artificial agent can perform at least the kinds of cognitive tasks that people can typically perform. This standard is chosen carefully: setting the bar lower would mean the AI fails at things expected of people, while setting it much higher would require capabilities that many people do not possess.

Legg maintains a consistent timeline for this threshold, predicting a 50/50 chance of achieving minimal AGI by 2028. More loosely, he estimates the development might arrive in about two years, with a plausible range of one to five years. When minimal AGI is reached, the AI will stop failing in ways that would surprise us if we assigned that cognitive task to a person.

Moving beyond typical human performance leads to the next tier: full AGI. This level is reached when AI achieves the full spectrum of what is possible across all human cognition. Legg suggests that full AGI will follow some years after minimal AGI, potentially within three to six years, or within a decade.

Finally, once a system surpasses the full capacity of human cognition, it enters the realm of Artificial Super Intelligence (ASI). ASI is characterized as an AGI that is so capable generally that it is somehow far beyond what humans can achieve. Legg argues that human intelligence is emphatically not the upper limit of what is possible, pointing out that machine intelligence possesses computational advantages over the human brain by six, seven, or even eight orders of magnitude across several dimensions, including energy consumption, space, bandwidth, and signal propagation speed. He expects that as we develop our understanding of intelligent systems, AIs will inevitably move towards super intelligence.

Legg quantifies these computational advantages across four critical dimensions, stating that machine intelligence is ahead of the human brain by "six, seven, or even eight orders of magnitude in all four dimensions simultaneously".

He details these dimensions by comparing the physical limits of the human brain to those of a modern data centre:

* Energy Consumption: The human brain operates on a mere 20 watts. In contrast, a data centre can consume 200 megawatts.
* Space: The human brain weighs only a few pounds. Conversely, a data centre can weigh several million pounds.
* Bandwidth (Signal Frequency): Signals are sent within the human brain through dendrites at a frequency of about 100 hertz or maybe 200 hertz in the cortex. In a data centre, the channel bandwidth can reach 10 billion hertz.
* Signal Propagation Speed: Signals in the human brain are electrochemical wave propagations that move at a speed of about 30 metres/second. In a data centre, signals can travel at the speed of light, which is 300,000 kilometres/second.

Essentially, Legg highlights that the physical and energetic constraints placed on human intelligence do not apply to machine intelligence, allowing AI systems to surpass human cognitive limits as our understanding of how to build intelligent systems develops.

To put "six, seven, or even eight orders of magnitude" into perspective: one order of magnitude is a factor of 10, so a difference of six orders of magnitude means the machine capacity is a million times (10 to the power of 6) greater than the human brain in that dimension, illustrating the vast chasm in raw processing potential that Legg is pointing to.
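
As a quick sanity check on these ratios, here is a minimal Python sketch using the approximate figures quoted above (the brain's "few pounds" is taken to be about 3, an assumption made purely for illustration):

```python
import math

# Brain vs data-centre figures quoted above (all approximate).
dimensions = {
    "energy (watts)":     (20,  200e6),  # 20 W brain vs 200 MW data centre
    "mass (pounds)":      (3,   3e6),    # a few pounds vs several million pounds
    "bandwidth (hertz)":  (100, 10e9),   # ~100 Hz dendrites vs 10 GHz channels
    "signal speed (m/s)": (30,  3e8),    # ~30 m/s vs the speed of light
}

for name, (brain, data_centre) in dimensions.items():
    orders = math.log10(data_centre / brain)
    print(f"{name:20s} ratio ~ 10^{orders:.0f}")
# energy ~ 10^7, mass ~ 10^6, bandwidth ~ 10^8, signal speed ~ 10^7
```

The computed exponents land between six and eight, matching the range Legg cites.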

The Need for "System Two Safety"

With the immense potential power of AGI and ASI looming, Legg stresses the importance of understanding and preparing for the risks. This is where he draws a direct connection to Daniel Kahneman's work, specifically through the idea of "system two safety": taking Kahneman's model of slow, deliberate human thinking and applying it to the development of AI systems so that they behave ethically and safely.

Kahneman's work, particularly in Thinking, Fast and Slow, describes two modes of processing in the mind:

1. System 1 (Fast Thinking): This operates automatically and quickly, requiring little or no effort and generating impressions, feelings, and impulses instantly. It is characterized by quick, instinctive judgments, often referred to by Legg as "gut instinct".
2. System 2 (Slow Thinking): This allocates attention to effortful mental activities, such as complex computations, reasoning, and deliberate choices. It is associated with self-control and the process of stepping back to analyze a situation.

Legg applies this distinction to ethical decision-making, noting that when a person faces a difficult ethical scenario, they cannot rely solely on the quick, instinctive judgments of System 1. They must instead engage in the slow, deliberate, and effortful process of System 2. This System 2 thinking involves several steps:

* Analyzing the complexities and nuances of the situation.
* Considering all possible actions that could be taken.
* Forecasting the likely consequences of different actions.
* Rigorously applying a strong system of ethics, norms, and morals to the analysis.

Therefore, "system two safety" refers to embedding this deliberate, analytical reasoning process within AI, training the system to achieve a robust ethical understanding and applying it consistently, potentially at a superhuman level,. This process is crucial for managing the risks associated with Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI),.

Legg argues that embedding this kind of deliberate reasoning capability within AI is crucial, especially if the development toward super intelligence cannot be stopped. Currently, modern AIs demonstrate a "chain of thought" capability, allowing developers to observe the AI reasoning through moral or ethical dilemmas. Legg believes that if we can refine this reasoning process, training the AI to consistently and robustly apply strong ethical understanding, an AI could potentially become more ethical than people by reasoning about complex decisions at a superhuman level. This ethical reasoning is critical for navigating the profound structural changes AGI will bring to our economy and society.
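
To make "chain of thought" concrete, a prompt of roughly this shape (a generic illustration; the episode does not specify any particular model or interface) asks a model to expose its intermediate reasoning before answering:

```python
# A generic chain-of-thought style prompt; the dilemma is invented for
# illustration. The intermediate steps the model writes out in response
# are what a developer can inspect when observing it reason.
prompt = (
    "A delivery drone can arrive on time by flying low over a crowded park, "
    "or arrive late by detouring around it. Which should it do?\n\n"
    "Think step by step: analyse the situation, list the possible actions, "
    "forecast the likely consequences of each, apply safety and ethical "
    "norms, and only then state your answer."
)
print(prompt)
```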
