How to Build an AGI -- and How Not To

So, you want to build an artificial general intelligence?

One approach is to turn your AI investment money into a large pile of cash and then burn it. That worked for a lot of people over the last sixty years, so don't feel bad.

But on the off chance that (a) you actually want to get the job done, and (b) you can handle it when your core beliefs get challenged, what follows might interest you.

Before we start, let's get a couple things out of the way:

  1. If you think the current boom in Big Data/Deep Learning means that we are on the glide path to AGI ... well, please, just don't.
  2. There seems to be a superabundance of armchair programmers who know how to do AGI. There's not much I can do to persuade you that this is different, other than suggest that you look at my bio page and then come back here if you still think it is worth your while.

The Problem

Artificial Intelligence research is at a very bizarre stage in its history.

Almost without exception, the people who do AI research were trained as mathematicians or computer scientists. These people have a belief -- a belief so strong it borders on a religion -- that AI is founded on mathematics, and that the best way to build an AI is to design it as a formal system. They will also tell you that human intelligence is just a ridiculously botched attempt by nature to build something that should have been a logico-mathematical intelligence.

Mathematicians are the high priests of Science, so we tend to venerate their opinion. But I'm here to tell you that on this occasion they could turn out to be spectacularly wrong.

  • There are reasons to believe that a complete, human-level artificial general intelligence cannot be made to work if it is based on a formal, mathematical approach to the problem.

Don't confuse that declaration with the kinds of slogans produced by Luddites: No machine will ever be creative ... No machine will ever be conscious ... Machines can only do what they are programmed to do. Those slogans are driven by lack of understanding or lack of imagination.

The declaration I just made is of a different caliber: it is based on an argument with science and math behind it. Unfortunately the argument is not easy to convey in a short, nontechnical form, but here is my attempt to pack it into a single paragraph:

  • There is a type of system called a "complex system," whose component parts interact with one another in such a horribly tangled way that the overall behavior of the system does not look even remotely connected to what the component parts are doing. (You might think that the "overall behavior" would therefore just be random, but that is not the case: the overall behavior can be quite regular and lawful.) We know that such systems really do exist, because we can build them and study them (see the sketch just after this paragraph), and this matters for AI because there are powerful reasons to believe that all intelligent systems must be complex systems. If this were true, it would have enormous implications for AI researchers: it would mean that if you try to produce a mathematically pure, formalized, sanitized version of an intelligence, you will virtually guarantee that the AI never gets above a certain level of intelligence.
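
To make that abstraction concrete, here is a minimal sketch in Python (my own illustration, not an example drawn from the papers cited further down): Conway's Game of Life, the textbook toy complex system. Each cell obeys a trivial local rule, yet a "glider" -- a coherent five-cell pattern that crawls diagonally across the grid -- emerges as lawful global behavior that is written nowhere in the rule itself.

    # Illustrative toy example (not from this essay's argument): Conway's Game of Life.
    # Local rule: a cell is alive next generation iff it has exactly 3 live
    # neighbours, or it is alive now and has exactly 2 live neighbours.
    from collections import Counter

    def step(live):
        """Advance one generation; 'live' is a set of (x, y) live-cell coordinates."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Born with exactly 3 live neighbours; survives with 2 (3 is covered above).
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    def show(live, width=12, height=12):
        for y in range(height):
            print("".join("#" if (x, y) in live else "." for x in range(width)))
        print()

    # A "glider": five live cells. The local rule says nothing about motion,
    # yet this pattern translates one cell diagonally every four generations.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(9):
        show(cells)
        cells = step(cells)

Run it and the same shape reappears, shifted one cell diagonally, every four generations. The update rule mentions nothing about shapes or motion; the regularity exists only at the global level -- which is exactly the disconnect between component behavior and overall behavior that the argument turns on.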

If this "complex systems problem" were true, we would expect to see certain things go wrong when people try to build logico-mathematical AI systems. I am not going to give the detailed list here, but take my word for it:  all of those expected bad things have been happening. In spades.

As you can imagine, this little argument tends to provoke a hot reaction from AI researchers.

That is putting it mildly. One implication of the complex systems problem is that the skills of most AI researchers are going to be almost worthless in a future version of the AI field -- and that tends to make people very angry indeed. There are some blood-curdling examples of that anger scattered around the internet.

The Solution

So what do we do, if this is a real problem?

There is a potential solution, but it tends to make classically trained AI researchers squirm with discomfort: we must borrow the core of our AI design from the architecture of the human cognitive system. We have to borrow from it massively. We have to eat some humble pie, dump the mathematics, and try to learn something from the kludge that Nature produced after fiddling with the problem for a few billion years.

Once again, a caution about what that does not mean -- it doesn't mean we have to copy the low-level structure of the brain. The terms "cognitive system" and "brain" refer to two completely different things. The proposed solution has nothing to do with brain emulation; it is about ransacking cognitive psychology for design ideas.

I have presented this potential solution without supplying a long justification.  The purpose of this essay is to give you the conclusions, not the detailed arguments (you can find those elsewhere, in this paper and this paper).

The good news is that there are some indications that building an AGI might not be as difficult as it has always seemed to be, if done in the right way.

Your AGI Team

What you will need, then, when you put together your AGI construction team, is a combination of freethinking cognitive psychologists who are willing to stretch themselves beyond their usual paradigm, and some adept systems people (mostly software engineers) who are not already converts to the religion of logico-mathematical AI.

Ah ... but there lies the unfolding tragedy.

I started out by saying that we have reached a bizarre stage in the history of AI. It is bizarre because I doubt that there is a single investor out there who would have the courage to face down the mathematicians and back an approach such as the one I just described. Most investors consider the math crowd to be invincible, so the idea that the AI experts could be so fundamentally wrong about how to do AI is like believing in voodoo or the tooth fairy. Just downright embarrassing.

What I think we are in for, then, is another 60 years of AI in which we just carry on with business as usual: a series of boom-and-bust cycles. Bursts of hype and cries of "Real AI is just around the corner," followed by angry investors and another AI winter. There will be progress on the lesser things, if that is what you want, but the real AGI systems that some of us dream of -- the ones that will do new science on a scale that beggars belief; the ones that will build the starships, renovate the planet and dispense the kind of indefinite life-extension worth having -- those kinds of AI will not happen.

History, if the human race survives long enough to write it, will say that this was bizarre because, in fact, we probably could have built a complete, human-level AGI at least ten years ago (perhaps even as far back as the late 1980s). And yet, because nobody would risk their career by calling out the queen for her nakedness, nothing happened...