If You've Considered AI, You've Got to Read THIS!!!

"Whenever I talk about the future of AGI, someone starts talking about the possibility that AGI will “take over the world.”

One question is whether this would be a good or bad thing, and the answer to that is, of course, “it depends” … I’ll come back to that at the end of this post.

Another relevant question is: if this were going to happen, how would it most likely come about? How would an “AGI takeover” be likely to unfold, in practice?

One option is what Eliezer Yudkowsky has called AI “FOOM”, i.e. a “Hard Takeoff” (a possibility which I analyzed a bit, some time ago…).

The basic idea of AI Foom or Hard Takeoff is that, sometime in the future, an advanced AGI may go from relatively innocuous subhuman-level intelligence all the way up to superhuman intelligence (superintelligence) in 5 minutes or some other remarkably short period of time, by rewriting its code over and over (each time learning better how to rewrite its code), by assimilating additional hardware into its infrastructure, or whatever…
A Hard Takeoff is a special case of the general notion of an Intelligence Explosion — a process via which AGI gets smarter and smarter via improving itself, and thus getting better and better and faster and faster at making itself smarter and smarter.   A Hard Takeoff is, basically,  a really really fast Intelligence Explosion!
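To make the distinction between a Hard Takeoff and a slower Intelligence Explosion concrete, here is a purely illustrative toy model of my own (not something from the original argument): “intelligence” improves itself at a rate proportional to the square of its current level, and the only difference between a 5-minute foom and a years-long climb is the assumed feedback strength k, an arbitrary made-up parameter. A minimal Python sketch, under those assumptions:

# Toy model of recursive self-improvement (purely illustrative; the feedback
# parameter k and the start/target levels are arbitrary assumptions, not
# quantities anyone has measured or claimed). At each step the system improves
# itself at a rate proportional to the square of its current "intelligence".

def time_to_superintelligence(k, start=1.0, target=1000.0, dt=1.0):
    """Return how many time units it takes to climb from `start` to `target`
    when intelligence grows roughly as dI/dt = k * I**2 (discretized with step dt)."""
    intelligence, t = start, 0.0
    while intelligence < target:
        intelligence += k * intelligence ** 2 * dt  # smarter systems improve themselves faster
        t += dt
    return t

if __name__ == "__main__":
    # Same feedback mechanism, very different timescales:
    print("strong feedback (hard takeoff)   :", time_to_superintelligence(k=0.5), "time units")
    print("weak feedback (semihard takeoff) :", time_to_superintelligence(k=0.001), "time units")

The only point of the sketch is that the very same self-improvement loop can play out over a handful of steps or over a thousand, depending on how efficiently each round of improvement feeds the next; it says nothing about which feedback strength is realistic.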

Richard Loosemore and I have argued that an Intelligence Explosion is probable.   But this doesn’t mean a Hard Takeoff is probable.

[Image: Nick Bostrom’s nice illustration of the Hard Takeoff idea]

What often seems to happen in discussions of the future of AI (among hardcore futurist geeks, anyway) is something like:

  • Someone presents the Foom / Hard Takeoff idea as a scary, and reasonably likely, option
  • Someone else points out that this is pretty unlikely, since someone watching the subhuman-level AGI system in question would probably notice if the AGI system were ordering a lot of new hardware for itself, or undertaking unusual network activity, or displaying highly novel RAM usage patterns, or whatever…

In spite of being a huge optimist about the power and future of AGI, I actually tend to agree with the anti-Foom arguments.   A hard AGI takeoff in 5 minutes seems pretty unlikely to me.

What I think is far more likely is an Intelligence Explosion manifested as a “semihard takeoff”, in which an AGI takes a few years to get from slightly subhuman-level general intelligence to massively superhuman intelligence, involving various human beings, systems and institutions in the process.

[Image: A tasty semihard cheese, an appropriate snack food for those living through the semihard takeoff to come. Semihard cheeses are generally good for melting, and are sometimes said to have the greatest complexity and balance.]

After all, a cunning and power-hungry human-level AGI wouldn’t need to suddenly take over the world on its own, all at once, in order to gain power. Unless it was massively superhuman, it would probably consider this too risky a course of action. Rather, to take power, a human-level AGI would simply need to accumulate a lot of money (e.g. on the financial markets, using the superior pattern recognition capability it could achieve by tightly integrating its mind with statistical and machine learning software and with financial, economic and news databases) and then deploy this wealth to set up a stronghold in some easily-bought nation, where it could pay and educate a host of humans to do its bidding, while doing research to improve its intelligence further…

Human society is complex and disorganized enough, and human motivations are complex and confused enough, and human judgment is erratic enough, that there would be plenty of opportunities for an early-stage AGI agent to embed itself in human society in such a way as to foster the simultaneous growth of its power and intelligence over a period of a few years.   In fact an early-stage AGI probably won’t even need to TRY for this to happen — once early-stage AGI systems can do really useful stuff, various governments, companies and other organizations will push pretty hard to use these systems as thoroughly as they can, because of the economic efficiency and scientific and media status this will bring.

Once an AGI is at human level and embedded in human society in judicious ways, it’s going to be infeasible for anyone to get rid of it — and it’s going to keep on growing in intelligence and power, aided by the human institutions it’s linked with.   Consider, e.g., a future in which:

  • Azerbaijan’s leaders get bought off by a wildly successful AGI futures trader, and the nation becomes an AGI stronghold, complete with a nuclear arsenal and what-not (maybe the AGI has helped the country design and build nukes, or maybe the country didn’t need the AGI’s help for that…).
  • The nation the AGI has bought is not aggressive, not attacking anyone — it’s just sitting there using tech to raise itself out of poverty … doing profitable deals on the financial markets, making and selling software products/services, patenting inventions, … and creating a military apparatus for self-defense, like basically every other country.

What happens then? The AGI keeps profiting and self-improving at its own pace, is what happens. Is the US really gonna nuke a peaceful country just for being smart and getting rich, and risk massive retaliation and World War III? I doubt it…. In its comfy Azerbaijani stronghold, the AGI can then develop from human-level to massively transhuman intelligence, and then a lot of things become possible…

I have spun out one scenario here, but of course there are lots of others. Let’s not allow the unrealism of the “hard takeoff in 5 minutes and the AGI takes over the world” aka “foom” scenario to blind us to the great variety of other possibilities…. Bear in mind that an AGI going from toddler-level to human-level in 5 years, and from human-level to superhuman level in 5 more years, is a FOOM on the time-scale of human history, even if not as sudden as a 5-minute hard takeoff on the time-scale of an individual human life…

So how could we stop a semihard takeoff from happening? We can’t really, not without some sort of 1984++-style fascist anti-AI world dictatorship, or a war destroying modern society and throwing us back to before the information age. And anyway, I am not personally in favor of throttling AGI development; I doubt the hypothetical Azerbaijani AGI would particularly want to annihilate humanity, and I suspect transhuman AGIs will do more good than harm, on average over all possible worlds…. I’m not at all sure that “an AGI taking over the world” (with the fully or partly witting support of some group(s) of humans) would be a bad thing, compared to other viable alternatives for humanity’s future…

In terms of risks to humanity, this more realistic “semihard takeoff” development scenario highlights where the really onerous risks probably are.   SIAI/MIRI and the Future of Humanity Institute seem to spend a lot of energy thinking about the risk of a superhuman AGI annihilating humanity for its own reasons; but it seems to me a much more palpable and probable risk will occur at the stage where an AGI is around human-level but not yet dramatically more powerful and intelligent than humans, so that it still needs cooperation from human beings to get things done.  This stage of development will create a situation in which AGI systems will want to strike bargains with humans, wherein they do some things that certain humans want, in order to get some things that they want…

But obviously, some of the things that some humans want, are highly destructive to OTHER humans…

The point is, there is a clear and known risk of early-stage AGIs being manipulated by humans with nasty or selfish motives, because many humans are known to have nasty or selfish motives. Whereas the propensity of advanced AGIs to annihilate lesser sentiences remains a wild speculation (and one that I don’t really find all that credible)…

I would personally trust a well-designed, self-improving AGI more than a national government that’s in possession of the world’s smartest near-human-level AGI; AGIs are somewhat of a wild card but can at least be designed with initially beneficent motivational systems, whereas national governments are known to generally be self-serving and prone to various sorts of faulty judgments….  This leads on to the notion of the AI Nanny, which I’ve written about before.   But my point here isn’t to argue the desirability or otherwise of the AI  Nanny — just to point out the kind of “semihard takeoff” that I think is actually plausible.

IMO, what we’re likely to see is not a FOOM exactly, but still something a lot faster than AI skeptics would want to accept…. A Semihard Takeoff. Which is still risky in various ways, but in many ways more exciting than a true Hard Takeoff, because it will happen slowly enough for us to watch and feel it happen…
