The Moravec paradox in artificial intelligence and why it matters

In recent years a lot of progress has been made in artificial intelligence. There have been many exciting breakthroughs, especially in machine learning and artificial neural networks, with major accomplishments in vision, audio and natural language processing. And still, the term artificial intelligence seems heavy compared to real-world applications or even state-of-the-art research. AI experts know about the limitations of current approaches, even if they are somewhat biased. But apart from consulting companies that want to raise funds or sell a product, the media either glamorizes or demonizes AI. In some disciplines AI may be superior to humans, but we cannot simply extrapolate this success to other tasks. This simple truth is called the Moravec paradox.

The Moravec paradox

The Moravec paradox is quite old; it was formulated in the 1980s by Hans Moravec, Rodney Brooks, Marvin Minsky and others. They discovered that it is fairly easy to make computers reach adult-level scores on intelligence tests, but extremely hard to make them capable of basic perception.

This paradox tells us something about intelligence in general, and current AI research confirms it more and more. One of the biggest insights of all this AI research might be that what we think intelligence is, is wrong.

What AI currently does is statistics, very advanced statistics, but it has nothing to do with understanding or thinking. No neural network "understands" anything in the sense that we humans mean by "understanding". ANNs are correlation machines; they match input to output in a very complex manner. Saying that they work like brains is wrong. They might be inspired by brains, but a single artificial neuron (perceptron) is nothing more than input * weights + bias, and a network as a whole is a nonlinear function approximator.
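To make that concrete, here is a minimal sketch in plain NumPy (the input values, weights and the choice of tanh are illustrative, not taken from any particular model):

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum plus bias, squashed by a nonlinearity."""
    return np.tanh(np.dot(x, w) + b)

x = np.array([0.5, -1.2, 3.0])   # input vector
w = np.array([0.1, 0.4, -0.2])   # learned weights
b = 0.3                          # learned bias
print(neuron(x, w, b))           # just a number; no "understanding" involved
```

Stacking many of these units into layers is what gives the network its power as a function approximator, but nothing more mysterious is going on inside.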

Problem definition, cost and reward functions

Every machine learning algorithm is based on curve fitting: we formulate a problem and let the network brute-force through data until it learns how to fit the curve. There are a lot of obstacles in this process, and researchers tackle them quite successfully. What sometimes seems to be forgotten is that the hardest problem is to formulate the problem. This is where the intelligence comes from, and it is supplied by humans. One popular example of what happens when the reward function is not well defined can be seen here:

[Video: deep reinforcement learning on reward optimization]
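To make "curve fitting" concrete, here is a minimal sketch in plain NumPy (the data, the linear model and the learning rate are made up for illustration): we write the cost function, a mean squared error, by hand and let gradient descent minimize it. The problem definition and the cost come from the human; the algorithm only optimizes.

```python
import numpy as np

# Toy curve fitting: fit y = a*x + b to noisy data by gradient descent
# on a hand-written mean squared error cost.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=x.shape)  # "true" curve: a=2.0, b=0.5

a, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate

for _ in range(500):
    err = (a * x + b) - y
    # Gradients of mean((a*x + b - y)^2) with respect to a and b
    a -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(a, b)  # close to 2.0 and 0.5: the model fit the curve, it understood nothing
```

Swap in a badly chosen cost or reward function and the optimizer will happily minimize it anyway, which is exactly the failure mode shown in the video above.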

World knowledge

Without common knowledge and "understanding", machine learning systems will stay what they currently are: expert systems for specific tasks. There are experiments combining statistics-based learning strategies with knowledge graphs, and it is possible that one day such systems become intelligent. But for now we are building statistical models. They are powerful and will change the way we live, but let's keep our feet on the ground: these systems are far away from human intelligence. AI can play Go, even StarCraft 2, like an expert, but it fails at understanding context in simple tasks (e.g. an Echo Dot).

Interesting reads or talks:

https://katbailey.github.io/post/ai-and-the-future-of-work/

http://fortune.com/2019/01/22/artificial-intelligence-ai-reality/
