A team of researchers at the School of Electrical Engineering and Computer Science at Washington State University has been awarded $1 million by the Defense Advanced Research Projects Agency (DARPA) to create a test that measures the ‘intelligence’ of an AI system.
 
In the past, ratings of an AI system’s ‘intelligence’ were largely theoretical: researchers neither measured real performance in novel, unfamiliar environments nor took the complexity of the tasks into account. The WSU team is now working on a concrete test that evaluates AI systems while taking the difficulty of the tasks into consideration, much like IQ tests devised for people. The aim is a single score that also accounts for accuracy, correctness, speed and the amount of data involved.
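To make the idea concrete, a difficulty-weighted composite score could be sketched as follows. This is purely an illustrative assumption: the weighting scheme, the factor names (`accuracy`, `speed`, `data_eff`) and the 0–100 scale are hypothetical, not the researchers’ actual metric.

```python
# Hypothetical sketch of a difficulty-weighted AI 'IQ' score.
# All weights and field names are illustrative assumptions,
# not the actual DARPA-funded metric.

def aiq_score(results):
    """Aggregate per-task results into a single score in [0, 100].

    Each result is a dict with:
      difficulty - task difficulty weight (> 0)
      accuracy   - fraction of the task solved correctly (0..1)
      speed      - normalised speed factor (0..1, 1 = fastest)
      data_eff   - normalised data efficiency (0..1, 1 = least data used)
    """
    total_weight = sum(r["difficulty"] for r in results)
    weighted = sum(
        r["difficulty"]
        * (0.5 * r["accuracy"] + 0.25 * r["speed"] + 0.25 * r["data_eff"])
        for r in results
    )
    return 100.0 * weighted / total_weight

# Example: a hard task solved poorly drags the score down more
# than an easy task solved well pulls it up.
tasks = [
    {"difficulty": 1.0, "accuracy": 0.9, "speed": 0.8, "data_eff": 0.7},
    {"difficulty": 3.0, "accuracy": 0.4, "speed": 0.5, "data_eff": 0.6},
]
print(aiq_score(tasks))
```

The key design point such a test would have to settle is exactly this weighting: how much a hard, partially solved task should count against an easy, perfectly solved one.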
 
People are good at adapting and learning to achieve a goal in unfamiliar situations. AI systems, by contrast, are typically developed to perform one particular task efficiently: instruct a driverless car to find its way out of a wheat field, and its sensors would search for lane markings, vehicles and obstacles without much success; in fact, it would most likely stay where you left it. Machines that learn and act intelligently in new environments remain the holy grail of AI research. Measuring performance in such situations is therefore the basis for developing more flexible systems, for applications such as assistance robots that help less-capable people with everyday tasks. It is interesting to explore to what extent these AI systems can transfer acquired skills to new problems.
 
The CASAS (Center for Advanced Studies in Adaptive Systems) department at WSU also works on robot assistants for the elderly, which could contribute to the safety, health, mobility and social interaction of older people. The AI system ranking list on the AIQ website makes for interesting reading.