The truly intelligent computer (or robot) is a staple of science fiction in all its forms. While there is a lot of research activity, the prospect of developing human intelligence in software faces some basic problems.

Futurists and science writers regularly predict virtual reality, intelligent computers, digitised memory and any number of exciting advances that are 'just around the corner': usually 5-10 years away, which gives them time to sell lots of books before having to revise the estimate to 5-10 years from now (I sometimes fear I am too cynical).

Clarification

When I say human intelligence I mean an intelligence that performs in the same way as a human does. I don't mean it has to be biological, only that it can solve the same sorts of problems as humans can. I also don't care about speed, which is a function of hardware; only capabilities.

Basic problem

Human intelligence is non-algorithmic. Say that slowly to yourself a few times. Non-al-go-rith-mic. Not capable of being expressed as an algorithm or step-by-step process.

Really.

This means that you can't write a computer program (an algorithm) that fully expresses human intelligence.

Insight

Roger Penrose, in his 1989 book "The Emperor's New Mind" and subsequent works, presents a detailed argument against the 'Strong AI' view (that human intelligence can be fully reproduced by an algorithm). His argument rests, in part, on the problem of insight: humans can see the solution to problems which can't be determined by any set of rules (an algorithm).
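To make "a problem which can't be determined by a set of rules" concrete, here is a minimal sketch of the standard example: the halting problem. All names here (make_contrarian, always_no and so on) are illustrative, not from Penrose; the point is the classic diagonal argument that any claimed halting-decider can be defeated by a program built to do the opposite of whatever it predicts.

```python
# Sketch: why no algorithm can decide, in general, whether a program halts.
# Given any claimed decider halts(f) -> bool, we build a program that does
# the opposite of what the decider predicts about it.

def make_contrarian(halts):
    """Given a claimed halting-decider, return a function that halts
    exactly when the decider says it won't."""
    def contrarian():
        if halts(contrarian):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to loop, so halt immediately
    return contrarian

# Two (deliberately naive) deciders; any fixed answer fails the same way.
always_yes = lambda f: True    # claims every program halts
always_no = lambda f: False    # claims no program halts

c_yes = make_contrarian(always_yes)   # would loop forever if called
c_no = make_contrarian(always_no)

# We can only safely run the one that halts:
c_no()   # returns immediately, contradicting always_no's prediction
```

The same construction defeats any decider, however sophisticated, which is the sense in which the question lies beyond every algorithm; whether human insight genuinely escapes this limit is exactly what Penrose argues and his critics dispute.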

There's a lot of additional, interesting stuff in his works (some strong AI proponents dismiss it as 'full of complicated mathematics', as though that were a bad thing...) but I see this as a fundamental problem which (sadly) means I won't be uploading my consciousness to the 'net' any time soon.

So the next time someone tells you it will only be 10 years until we move beyond the need for physical existence and enter a fully virtual world, ask them about insight. If (probably when) they give the blank look characteristic of the over-enthused but under-informed, give them a reading list and suggest they don't get to have an opinion until they have some counter-arguments.

Implications

So, should all those researchers get proper jobs?

No; they already have proper jobs. AI research already produces good and useful results and we can expect that it will continue to do so. It just isn't going to be human intelligence.

A machine intelligence produced by AI research might be smarter than a human in some senses, it might pass the Turing test (so you can talk to it without knowing it's a computer), it might even be self-aware (assuming we can agree on what that even means); it just won't be human.

But computers keep getting faster/more memory/better connectivity

Doesn't help. The problem is not one of performance but of capability.

What about neural nets/parallel processing/Richter paradigm/thing I read in New Scientist...

No again. These are still algorithmic computers (or myths), really good and interesting ones, but algorithmic computers all the same.

But we keep finding out more about how humans learn

This is another 'march of history' argument: surely once we know enough about the human brain, how memory works, quantum mechanics and so on, then somehow we must get artificial human intelligence.

Seductive but fallacious. The problem of insight isn't addressed by more of anything. Come up with reasons why it doesn't apply (and get those reasons widely accepted) and then we can talk.

Do other people know this stuff?

Yes, although surprisingly some people active in the field seem not to be aware of these issues. Of those who are, some agree, some disagree, some disagree really quite a lot.

There is vigorous debate on this point in mathematical circles and I don't claim that my view is absolutely correct. Read some of the debate yourself and make up your own mind.

References

Yes, they're Wikipedia articles. I can't put a link to my bookshelf in a web page.

The Emperor's New Mind, Roger Penrose. Oxford University Press, 1989
http://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind

Strong AI (with links)
http://en.wikipedia.org/wiki/Strong_AI

Turing test
http://en.wikipedia.org/wiki/Turing_test

Gödel Incompleteness Theorems
http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem
These demonstrate the inadequacy of any fixed set of rules and help explain why insight is a problem for algorithmic intelligences.