Breakthrough in common-sense knowledge

Published: 2018-01-24 in CogSci || Author: Dr. Dr. Ir. Gerard Jagers op Akkerhuis

Everyone knows that to survive and thrive, people need knowledge about their environment. To share this knowledge, people use language. Language is a way to communicate many things, for example: ‘This is a poisonous snake.’ ‘Don’t drop the glass, it will break!’ ‘That hedge is too high to jump over.’


What is expressed by such sentences is knowledge that everyone understands from their own experience: it is ‘common-sense knowledge’. Since the Stone Age, more and more common-sense knowledge has accumulated in human language. Why is such knowledge so important for AI?

The reason is simple. In whatever way we create an AI, it will have to act and survive in the same world we live in. The AI will have to understand our world; it must understand the same common-sense knowledge as humans do. But precisely this task of providing an AI with common-sense knowledge has proven to be an insurmountable obstacle. Why has this been so difficult?

A computer does not automatically understand the world it ‘lives’ in. The basic problem is that the common-sense knowledge a computer needs consists of millions of only loosely related facts. For this reason, any random next event may require that an AI act in a way it never has before.

To tackle this problem, several major projects have been initiated that aim to provide AI agents with common-sense knowledge through organized schemes of how things in the world are classified and related. So far, such information has been organized by hand. Handwork takes enormous effort and time, and millions upon millions have been spent on it. The outcome is a range of rigid schemes about things and relationships. The real world, however, changes constantly, and humans will not always be there with new schemes to tell an AI what to do in a new situation.
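
To make this concrete, here is a purely illustrative sketch in Python of what such a hand-built scheme of facts and relationships boils down to. The concepts, relation names and entries below are invented for this example and are not taken from any actual project; the point is only to show why hand-entered schemes stay rigid while the world keeps changing.

```python
from typing import Optional

# Purely illustrative: a tiny hand-built scheme of facts of the kind such
# projects maintain by hand (the real ones contain millions of entries).
# The concepts and relation names below are invented for this sketch.
HAND_BUILT_FACTS = {
    ("glass", "property"): "fragile",
    ("snake", "can_be"): "poisonous",
    ("hedge", "property"): "hard to jump over",
}

def lookup(concept: str, relation: str) -> Optional[str]:
    """Return the hand-entered fact, or None if nobody has written it down yet."""
    return HAND_BUILT_FACTS.get((concept, relation))

print(lookup("glass", "property"))       # 'fragile'  -- someone entered this by hand
print(lookup("smartphone", "property"))  # None       -- the world changed, the scheme did not
```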

This brings us to the innovation realized by Hans Peter Willems of Mind|Construct. He calls his new computer program ‘ASTRID’. As far as can be deduced from the scientific literature and other sources, this is the first time that a computer, ASTRID, autonomously extracts common-sense knowledge from human language. A result like this does not fall from the sky: it has taken nine years of studying how the human brain deals with language, and how such insights can be translated into a functional computer program.

Now that ASTRID autonomously learns common-sense knowledge, a logical next question is: ‘What can her capacity be used for?’


An important aspect of her novel capacity is that it is general. What does that mean? Currently, ASTRID reads only English texts, but new languages can be added without great difficulty. By adding new languages, ASTRID can in principle extract common-sense knowledge from any text in the world. And she can do this by simply reading the text, just as you do when you read a book.

With every next sentence she reads, ASTRID adds more common-sense knowledge to her database. Common-sense knowledge allows a computer or robot to understand conversation and to act in the messy human world. This is why common-sense knowledge is a kind of holy grail for companies like Google, Facebook and Apple: they need it to go beyond the skills of chatbots like Siri. The major robotics companies, too, can’t wait to augment the performance of their machines with common-sense knowledge. Important applications lie in elderly care, in disaster relief, and in space exploration. The market for this knowledge is enormous, and the company that first solves the problem of common-sense knowledge may be expected to become a large player, and employer, in the high-tech sector.
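
As a rough illustration of the kind of task this involves, the sketch below turns simple sentences into concept relations stored in a growing knowledge base. ASTRID’s actual extraction mechanism has not been published, so the pattern matching, relation names and example sentences here are assumptions for illustration only, not Mind|Construct’s method.

```python
import re
from collections import defaultdict

# Hypothetical sketch only: ASTRID's real mechanism is not public. This crude
# pattern turns simple sentences into (relation, object) pairs stored per
# concept, to illustrate the general idea of reading text and accumulating
# common-sense relations in a database.
KNOWLEDGE = defaultdict(set)  # concept -> set of (relation, related concept)

def read_sentence(sentence: str) -> None:
    """Extract a simple subject-verb-object relation and add it to the store."""
    match = re.match(r"(?:The |A )?(\w+) (is|breaks|bites) (?:a |the )?(\w+)", sentence)
    if match:
        subject, relation, obj = match.groups()
        KNOWLEDGE[subject.lower()].add((relation, obj.lower()))

for line in ["A glass breaks easily.", "The snake is poisonous."]:
    read_sentence(line)

print(dict(KNOWLEDGE))
# {'glass': {('breaks', 'easily')}, 'snake': {('is', 'poisonous')}}
```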

Those who attended the presentation yesterday experienced that, while they were chatting about AI, ASTRID was reading chapter one of Wuthering Heights by Emily Brontë. They saw how the patterns of relationships formed on the screen. These patterns represent the foundations of ASTRID’s brain. Those at the meeting witnessed what can be viewed as a remarkable breakthrough on the path from the first computers to conscious AI.

Now that the obstacle of common-sense knowledge has been cleared, it is really only a relatively small step to providing ASTRID with a proper consciousness. Hans Peter Willems has already worked out and documented the technical engineering of the next steps, and in part these steps are already implemented on the computers of Mind|Construct. In a few months, he and his team expect to invite you to a next breakthrough: the start of the era in which humans can chat with ASTRID-like artificial intelligences.


About the author:

Gerard was one of the first external scientists to become involved with the ASTRID project. He has been an inspirational advisor and sparring partner for many years now. Because of his background in bio-informatics, he is especially capable of asking the right questions for us to answer. His involvement also relates to his own research on the 'Operator Hierarchy'.
