
The only way to attain AGI is by establishing an AGI ecosystem that spurs AGIs to train other AGIs, say some AI scientists.
In today’s column, I address an ongoing and acrimonious debate regarding attaining artificial general intelligence (AGI). Here’s the deal. Some ardently believe that we will only arrive at AGI if we have AI training other AI. The idea is that we would take budding AGIs and immerse them into an AGI ecosystem, namely a vast collection of cousin AGIs that would collaborate on training each other. This is referred to as a hive-mind approach to achieving AGI.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
The Pursuit Of AGI And ASI
There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the loftier possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.
AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as “P(doom),” which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI.
The other camp entails the so-called AI accelerationists.
They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans will ever need to make, but that’s good in the sense that AI will invent things we never could have envisioned.
No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times.
For my in-depth analysis of the two camps, see the link here.
An AGI Ecosystem Approach
One prevailing viewpoint about achieving AGI is that we will get close to the finish line and then be puzzled about how to bridge the last mile. It is one of those 80%/20% rules, whereby the first 80% is relatively straightforward and the final 20% is a tough haul. In the case of AGI, maybe it’s more akin to a 99%/1% rule. Perhaps we are nearly able to attain AGI but just can’t close that final 1% gap.
Does that 1% gap make a difference?
A compelling case is that the unattained 1% makes a whale of a difference. The capabilities of an almost-AGI are not considered on par with true AGI. We must get all the way there if we really want to see AGI in its fullest form.
A postulated means of attaining complete AGI consists of having AI train other AI. Humans will have done what they can and will have reached the limit of training the budding AGI by hand. The final step is to use AI to train other AI.
The approach being bandied about is as follows.
We put whatever AGIs are available into a specialized AGI ecosystem. It is a computational environment established to ensure that AGIs can readily communicate with each other. They use the same protocols and share with each other in a predetermined manner; see my detailed discussion on AI-to-AI data sharing at the link here.
The hope or belief is that AGIs freely passing along their respective capabilities will give rise to one or more finalized AGIs. You might say that a rising tide is anticipated to lift all boats. Joint self-training takes place at scale.
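To make the idea concrete, here is a minimal sketch of what a shared AGI-to-AGI message might look like. Everything below, including the field names and the `broadcast` helper, is a hypothetical illustration assumed for clarity; no standard protocol of this kind exists today.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical message format for AI-to-AI sharing inside the ecosystem.
# The schema is an illustrative assumption, not an existing standard.
@dataclass
class CapabilityMessage:
    sender_id: str          # which AGI is sharing
    domain: str             # e.g., "medicine", "finance"
    proficiency: float      # self-reported skill level, 0.0 to 1.0
    payload: dict[str, Any] = field(default_factory=dict)  # distilled knowledge

def broadcast(message: CapabilityMessage, peers: list[str]) -> dict[str, CapabilityMessage]:
    """Deliver one message to every peer except the sender; returns per-peer inboxes."""
    return {peer: message for peer in peers if peer != message.sender_id}

msg = CapabilityMessage("agi-med-1", "medicine", 0.95, {"notes": "triage heuristics"})
inbox = broadcast(msg, ["agi-med-1", "agi-fin-1", "agi-fin-2"])
print(sorted(inbox))  # ['agi-fin-1', 'agi-fin-2'] -- the sender does not message itself
```

The point of fixing a common schema is that the AGIs can exchange capabilities without bespoke pairwise integration, which is what "the same protocols" would buy the ecosystem.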
Something good must come from such an arrangement, say those who advocate this approach.
Mutual Learning And Emergent Intelligence
Let’s do a bit of unpacking.
Some of the AGIs might be at first stronger in some areas while weaker in other areas. For example, suppose we have a bunch of AGIs that are generally at a sufficient level of AGI and in addition are expertly versed in medicine. Meanwhile, we have other AGIs that are expert-level in finance but are less capable in the medical field.
By bringing together these different AGIs, the aim is that they will carry on mutual learning. The medically versed AGIs will gain expertise in finance from the financially oriented AGIs. Equally so, the finance-oriented AGIs will garner medical expertise from the medically versed AGIs.
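A toy simulation can illustrate this mutual-learning dynamic. In the sketch below, each AGI is reduced to a dictionary of per-domain skill levels, and on each round every agent moves partway toward the best peer in every domain. This is a deliberate oversimplification for illustration, not how cross-model training would actually work.

```python
# Toy model of mutual learning: each AGI tracks a skill level per domain,
# and each round it closes half the gap to the best peer in every domain.
def mutual_learning_round(skills, rate=0.5):
    """skills: {agent: {domain: level}}; returns updated skills after one round."""
    domains = {d for s in skills.values() for d in s}
    best = {d: max(s.get(d, 0.0) for s in skills.values()) for d in domains}
    return {
        agent: {d: s.get(d, 0.0) + rate * (best[d] - s.get(d, 0.0)) for d in domains}
        for agent, s in skills.items()
    }

agents = {
    "agi-med": {"medicine": 0.9, "finance": 0.2},
    "agi-fin": {"medicine": 0.3, "finance": 0.9},
}
after = mutual_learning_round(agents)
print(after["agi-med"]["finance"])  # roughly 0.55: halfway toward the finance expert
```

Notice that in this toy model no agent ever surpasses the best skill already present in the ecosystem, which hints at why mutual learning by itself might not close the final gap.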
One concern is that the AGIs might merely sit around, and nothing rubs off onto the others. They idly float and perhaps exchange pleasantries. That doesn’t seem conducive to actual mutual learning and ensuring that each has an inherent capability that gets exchanged with the other AGIs.
No worries, say the AGI ecosystem advocates. The AGIs will be instructed to explicitly work with each other to perform mutual learning. The ecosystem isn’t just a place to hang out. The focus of the AGI ecosystem consists of spurring the AGIs to get their act together and share accordingly.
No lazy AGIs allowed.
Another belief is that we might be pleasantly surprised by what happens when AGIs are brought into proximity and stirred to confer with each other. Perhaps a semblance of emergent intelligence will occur. This is the notion that when you combine one plus one, maybe you will get three as an answer. The synergy of the AGIs will produce intelligence that none of them had beforehand.
A nice benefit if the kumbaya works.
The Gloomy View
There is no guarantee that the AGI ecosystem will pay off.
First, it could be that the AGIs do their darndest to share, but this doesn’t close the final gap to true AGI. The last 1% gets reduced to, say, 0.9%, hurrah, yet that still isn’t the whole ball of wax. Mutual learning among the AGIs isn’t enough to attain true AGI. Sorry to say.
Second, we are assuming that the AGIs will indeed cooperate. Why is that a reasonable assumption? The AGIs might bicker with each other. Some AGIs could be bent on becoming the true AGI and purposefully undercut other AGIs from getting there first. Dirty tricks might be employed.
Third, the entire kit and caboodle could be an evildoer’s dream. All those AGIs in one place at one time make a tempting target. If an evildoer could turn them toward devising a new weapon to destroy humans, the possibilities are chilling. Putting all the eggs in one basket seems like a dicey proposition.
Fourth, there is no known time limit or computational boundary by which the magic of attaining true AGI would occur. The AGI ecosystem might run and run, chewing up tons of computational resources. How will we know when true AGI has been attained? There isn’t necessarily any reason that the maturation will be linear. AGIs boosting other AGIs might take years, perhaps hundreds or thousands of years.
Who would pay for the effort and how long would we be willing to wait for the desired outcome to arise?
Tough questions.
Thinking Outside The Box
Admittedly, the concept of an AGI ecosystem is quite appealing, despite the gloomy side of things.
It just seems logical that a mesh or hive-mind of AGI agents evolving together should be a good approach. This mirrors the real world and existing ecosystems. Intriguing aspects are being considered. For example, perhaps some of the AGIs turn out to be digital diplomats, acting to encourage AGIs to mutually learn with each other.
Advocates emphasize that we could put in place safety and security provisions to cope with the evildoer concerns. In terms of the AGI ecosystem possibly working endlessly, the retort is that we would have checkpoints to determine how the effort is coming along. Additional tuning would periodically take place.
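The checkpoint idea can be sketched as a simple control loop: run the ecosystem in rounds, evaluate progress on some benchmark, and stop when improvement stalls or a compute budget runs out. The `step` and `evaluate` functions here are placeholders assumed for illustration; nobody has an agreed-upon benchmark for true AGI.

```python
# Hypothetical checkpoint loop for the AGI ecosystem. The step and evaluate
# callables are stand-ins; the thresholds are arbitrary illustrative choices.
def run_with_checkpoints(step, evaluate, budget_rounds=1000, min_gain=1e-3):
    score = evaluate()
    for round_num in range(1, budget_rounds + 1):
        step()                      # one round of AGI-to-AGI mutual training
        new_score = evaluate()
        if new_score - score < min_gain:
            # Progress stalled: flag a dead end so humans can try something else.
            return ("stalled", round_num, new_score)
        score = new_score
    return ("budget_exhausted", budget_rounds, score)

# Simulated benchmark scores: fast early gains, then a stall.
scores = iter([0.90, 0.95, 0.97, 0.9701])
result = run_with_checkpoints(step=lambda: None, evaluate=lambda: next(scores))
print(result)  # ('stalled', 3, 0.9701)
```

A loop like this addresses both worries at once: the budget bounds the computational spend, and the stall check gives an explicit moment to dump the approach and try something else.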
If the AGI ecosystem was seemingly leading to a dead end, okay, we dump the approach and try something else. The key is to not reject the AGI ecosystem in advance as a possibility. Give it a whirl. See what happens.
As the famous poet T.S. Eliot once said: “Only those who will risk going too far can possibly find out how far one can go.”