Bio

I originally graduated from the Physics Department at University College London.  My first real job was in the British Museum Research Laboratory (using the thermoluminescence dating technique to date burnt flints and pottery), but after nine months I signed up as a self-funded postgraduate student with John G. Taylor in the Mathematics Department at King’s College London.  The original plan was that I would work on quantum gravity, but one of John’s other interests was the new field of “neural nets,” so I took his course on the subject … and became instantly fascinated by all things cognitive and AI.

It wasn’t possible to continue with that interest in the King’s context, so in 1981 I left physics and started an intensive effort to become qualified in cognitive psychology, artificial intelligence, neuroscience and software engineering.  My main focus of interest was hierarchical concept learning in humans and how this could be implemented in an artificial intelligence.

A couple of years later, John Morton was starting the new MRC Cognitive Development Unit at UCL, and he wanted to take me on as a part-time Ph.D. student (again, self-funded).  But we ran into a serious roadblock: both the Psychology and Computer Science departments refused to allow me to register.  Each said that my research topic belonged in the other department.  Computer Science eventually gave in and allowed me to sign up, but only if I agreed to three conditions.  I had to state that I would:

  • Never set foot in the CS Department,
  • Never ask for any help from CS faculty, and
  • Never describe my work as “artificial intelligence” or publish it in an AI journal.

(!) The reason for these bizarre conditions was that at the time all artificial intelligence research was banned at UCL, by order of the Provost James Lighthill (who had written a famous report condemning AI as pseudoscience). So, after a couple of years of struggle, confined to the CDU building with no like-minded colleagues, no courses to take, and no access to computing except an Apricot, I had to write that one off.

Finally, in 1987 I completed an M.Sc. (with distinction) at Warwick University in Cognition, Computing and Psychology.  I was already so steeped in the connectionist revolution happening at the time that I taught that component of the course.

A few highlights of later work:

  • 1987-89:  Worked on a project to build an AI system on a large network of transputers at Bristol Polytechnic. Transputer technology was bleeding-edge at the time, but seen from the inside it was impossible to use in large systems, and the AI project collapsed in chaos when the director changed all the locks one weekend and disappeared, on suspicion of embezzling research funds (!).   Soon after, the company that invented the transputer – Inmos – collapsed as well. The mistake they made was to deliver hardware without giving enough thought to how difficult that hardware was to program.
  • 1990-91: Neural net software development and cognitive psychology research, applied to the psychology of reading, with Gordon Brown, at the University College of North Wales, in Bangor.
  • 1992:  Ph.D. research, supervised by Trevor Harley (until recently head of Psychology at Dundee University).  This was very much a rinse-and-repeat experience: dark hints were dropped in Psychology that unless I did experiments on babies or people, there would be no degree.
  • From 1994:  The kind of unusual “complex neural nets” approach I was pursuing was impossible to do without the right kind of tools, so my attention switched to ways to develop those tools.
  • 1995:  I married an American citizen and emigrated from the UK to the United States.
  • Software development:  worked on a large project to port the CorelDraw! software package from PC to Mac.
  • Software development:  joined Star Bridge Systems to build an AI system on a parallel supercomputer built using FPGAs.  The supposed supercomputer did not actually work and could not be programmed — none were ever sold — but it took nine months to find that out.
  • Physics/Mathematics/Computer Science professor at Wells College for over three years.

There is an interesting backstory to all of this.  What I did not know, back in 1987, was that the two fields of Cognitive Psychology and Artificial Intelligence were going through an acrimonious civil war (started by a faction within AI who called themselves the Neats, and who decided to label all the psychology-ish people “Scruffs” and throw them out of the AI field).  Meanwhile, I was innocently trying to do work that involved a close unification of AI and Cognitive Psychology, oblivious to the fact that this was now unfundable.

Research

I have focussed on two main research questions:

  • How can we build an artificial intelligence system that can acquire its own concepts, at any level of abstraction, without needing a human in the loop?
  • How can we build the motivation system for a safe and friendly artificial intelligence, so that it behaves coherently under all circumstances, and never deviates from its initial safe and friendly path?

It may not be obvious, but these two questions are closely linked because the solution to the first one leads directly to a possible solution to the second.

The main result of the work that I have done is something unusual and tricky to explain.

It turns out that the behavior of complex systems has enormous significance for AI research. In a 2007 paper I called this the “complex systems problem”…

  • The Complex Systems Problem (CSP).  Because “complex systems” cannot be reverse engineered, and because thinking systems contain an element of “complex-system-ness” that cannot be avoided, it will never be possible for a complete, human-level AGI (Artificial General Intelligence) to be constructed using the mathematical, logic-centered approach to AI that has dominated the field since its inception.  The alternative — the only alternative, as far as I can see — is to borrow as much as possible from the design of the human cognitive system.

If you just read that last paragraph and breezed right through it, I would strongly recommend that you pause and think about it, because it contains a revolutionary implication.

If the Complex Systems Problem is valid, then research in AI has been going slowly because the methods used in the field cannot ever work.  It also means that if someone does try a different approach to AI, we might discover that the task of building a full, human-level system is not as difficult as it has seemed all this time.

The CSP is not based on guesswork or preference — I am not pushing it because I like cognitive psychology and think everyone should do things my way — it arises from a detailed, technically powerful and compelling argument.  For the argument itself, see this 2013 paper.
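
To give a feel for what “cannot be reverse engineered” means here, the following toy sketch (purely illustrative, and not drawn from either paper) runs the Rule 30 elementary cellular automaton in Python.  The local update rule is a single line of logic, yet the only known practical way to discover the global pattern it produces is to run it; that is precisely the situation in which working backwards from desired global behavior to local mechanism breaks down.  The CSP claim is that thinking systems contain an irreducible amount of this kind of behavior.

    # Toy illustration only: the "Rule 30" elementary cellular automaton.
    # The local rule is trivial, but (as far as anyone knows) there is no
    # shortcut for predicting the global pattern: you have to run it.

    def rule30_step(cells):
        """Apply the Rule 30 update to a row of 0/1 cells (zero boundaries)."""
        n = len(cells)
        new_row = [0] * n
        for i in range(n):
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < n - 1 else 0
            # Rule 30: new cell = left XOR (centre OR right)
            new_row[i] = left ^ (cells[i] | right)
        return new_row

    def run(width=79, steps=30):
        row = [0] * width
        row[width // 2] = 1          # a single "on" cell in the middle
        for _ in range(steps):
            print("".join("#" if c else "." for c in row))
            row = rule30_step(row)

    if __name__ == "__main__":
        run()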

There is a nasty corollary to this argument.  Its implications are so unthinkable to AI researchers (most of whom have been raised as mathematicians) that they cannot accept them.  No amount of reasoned discussion will convince someone that their hard-won skills are useless for the very work that defines and validates them.  As a result, there is virtually no one pursuing this alternative approach to AI, and hostility to the idea is intense.

Conclusion

My approach to AI is so unusual that it is either completely nuts (in which case you should laugh and move on), or it will eventually turn out to be one of those things that a future history will tell you should have been obvious all along.

Also, I have been working on the problem of AGI safety since the late 1980s, and one implication of that long line of work is that it may well be possible to construct AGI systems that are so safe, with such a strong guarantee of friendliness, that they could be the safest form of technology ever invented.  That seemingly outlandish claim is based on the properties of thermodynamic systems, in which unpredictability at the local level is nevertheless consistent with macroscopic properties that are extremely predictable.
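
To illustrate the statistical point behind that analogy (an illustration of the statistics only, not a model of any AGI design), the sketch below gives each of N “particles” a random energy.  Any individual value is unpredictable, yet the macroscopic average is extremely stable, and its run-to-run spread shrinks roughly as one over the square root of N.

    # Minimal numerical sketch of the thermodynamic point: local values are
    # random and unpredictable, but the macroscopic average is highly stable,
    # and its fluctuation shrinks roughly as 1/sqrt(N).

    import random
    import statistics

    def mean_energy(n_particles):
        """Average energy of n_particles, each drawn from an exponential distribution."""
        return statistics.fmean(random.expovariate(1.0) for _ in range(n_particles))

    if __name__ == "__main__":
        for n in (10, 1_000, 100_000):
            averages = [mean_energy(n) for _ in range(20)]
            print(f"N = {n:>7}:  mean of 20 runs = {statistics.fmean(averages):.4f},  "
                  f"run-to-run spread = {statistics.pstdev(averages):.4f}")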

Library

Finally, here is some circumstantial evidence that I love what I (try to) do: some pictures of my office in the Physics Department at Wells College, taken a few years ago after I had crammed half of my library into it…

 

Richard Loosemore.