Why AI is Still Dumb

October 29, 2019 | 05:33
Learning agents in a computer game. The training is based on machine learning (Image: RUB, Institute of Neuroinformatics)
There are hardly any computer-based applications left now that have not been subjected to an Artificial Intelligence (AI) makeover. We all expect our latest devices and machines to be ‘smart’ and imbued with AI capabilities. Researchers at the Institute of Neuroinformatics at RUB (Ruhr University Bochum) have already been working in this field for 25 years. According to their studies, we need totally new strategies to make the process of machine learning more efficient and flexible before AI systems can be considered truly ‘intelligent’.

Machine learning

According to Laurenz Wiskott, who holds the chair of "Theory of Neural Systems" at RUB, there are two successful types of machine learning today: ‘deep learning’ and ‘reinforcement learning’. In both approaches the system is trained to perform a specific task, such as making a decision. During training, the desired result is provided along with the task. Over many iterations, the AI system arrives at the desired outcome ever more reliably and can often end up outperforming a human at the task.
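The core idea of this supervised style of training can be sketched in a few lines: the desired result is supplied alongside each input, and repeated iterations nudge the system's parameters towards it. This is purely an illustrative toy (a linear model fitted by gradient descent), not the researchers' actual code.

```python
import numpy as np

# Toy supervised-learning loop. The "task" is to map inputs x to
# outputs y; during training the desired results y are provided.
# Illustrative sketch only -- not the RUB researchers' code.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * x + 0.5                  # desired results, known during training

w, b = 0.0, 0.0                    # the system's adjustable parameters
for _ in range(500):               # repeated training iterations
    pred = w * x + b
    err = pred - y                 # compare prediction with desired result
    w -= 0.1 * np.mean(err * x)    # gradient step on the squared error
    b -= 0.1 * np.mean(err)

print(w, b)                        # approaches the true values 3.0 and 0.5
```

After enough iterations the parameters settle near the values that generated the data, which is the sense in which the system "gets better" at the one task it was trained on.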
 
The work by Laurenz Wiskott (left) and Tobias Glasmachers (right) has been inspired by natural processes in the human brain (Image: Roberto Schirdewahn, RUB)

Clever it ain’t

Wiskott suggests that these machine learning processes (which were originally devised back in the 1980s) are fundamentally dumb. Nowadays systems can be built with far more powerful processors handling much more data, so these outdated and inefficient learning processes can be executed innumerable times during training to feed neural networks with a plethora of images and image descriptions. The resulting system may end up being very good at this one task, but it is quite inflexible: it cannot generalise or apply its acquired ‘knowledge’ to other, similar tasks.

New approaches

To try to make such systems more flexible, the researchers at the Neural Computation Institute are devising new strategies that help systems autonomously discover patterns or structures in data sets. Examples of this approach include forming clusters in the data, or detecting and evaluating slowly changing features in video images. The researchers hope this unsupervised, autonomous learning process will enable systems to ‘explore’ the world independently and tackle tasks they have not been explicitly trained to perform.
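The clustering idea mentioned above can be illustrated with a minimal unsupervised example: unlike the supervised case, no desired outputs are given, and the system must discover group structure in the raw data on its own. The sketch below uses plain k-means on synthetic data; it is an assumption-laden toy, not the institute's actual algorithms.

```python
import numpy as np

# Unsupervised learning sketch: only raw data points are given, with
# no labels or desired results. The algorithm discovers two hidden
# groups by itself (k-means). Illustrative only.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),    # hidden group A
                  rng.normal(5.0, 0.5, (50, 2))])   # hidden group B

centers = np.array([data[0], data[-1]])             # two initial guesses
for _ in range(10):                                 # refine the clusters
    dist = np.linalg.norm(data[:, None] - centers[None], axis=2)
    labels = dist.argmin(axis=1)                    # assign each point
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centers[:, 0]))                       # near 0.0 and 5.0
```

The algorithm recovers the two underlying groups without ever being told they exist, which is the kind of autonomous structure discovery the researchers are aiming at, scaled up to far richer data such as video.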