ASTRID reaches 50,000 concepts learned

Published: 2020-03-23 in CogSci

The ASTRID system has, through unsupervised learning, reached conceptual knowledge equivalent to that of a human brain.

The brain of a well-educated human adult holds a vocabulary of roughly 10,000 to 20,000 words, as determined by early research in the 1960s and 1970s. From our research, we estimate that the lower boundary of 10,000 words consists largely of Commonsense Knowledge (describing everyday reality), while the remainder up to the upper boundary most likely consists of domain-specific knowledge, linked to special interests and profession-related information.

Building on that, and considering that many words describe a concept on their own while combinations of words describe further specific concepts, we assume that the human brain holds about 50,000 concepts, extrapolated from the 10,000-unique-word mark. Reaching almost 50,000 concepts and close to 400,000 direct conceptual links, all learned through unsupervised training, is therefore an important milestone for the ASTRID project.

Calculated at an inference depth of 5 levels, the ASTRID system can find more than an estimated 840,000,000 inference paths.
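
For illustration only, the following minimal Python sketch (not the actual ASTRID implementation; the concepts, links, and numbers are hypothetical) shows how inference paths of bounded depth can be counted in a concept graph with a simple depth-limited traversal:

    # Illustrative sketch only: count directed inference paths of length
    # 1..max_depth in a toy concept graph. The concepts and links below are
    # hypothetical and do not reflect the actual ASTRID data structures.
    def count_paths(graph, start, max_depth):
        total = 0
        stack = [(start, 0, {start})]
        while stack:
            node, depth, visited = stack.pop()
            if depth == max_depth:
                continue
            for neighbour in graph.get(node, ()):
                if neighbour in visited:
                    continue          # keep paths simple: no revisits
                total += 1            # every extension is one more path
                stack.append((neighbour, depth + 1, visited | {neighbour}))
        return total

    # Tiny hypothetical world-model fragment: concept -> directly linked concepts.
    graph = {
        "dog":    ["animal", "pet", "bark"],
        "animal": ["living thing"],
        "pet":    ["animal", "home"],
        "bark":   ["sound"],
    }
    print(sum(count_paths(graph, c, max_depth=5) for c in graph))

With about 50,000 concepts and close to 400,000 direct links, the number of such paths grows rapidly with each additional level of depth, which is why the estimate above runs into the hundreds of millions.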


There are clear indicators that the 50,000-concept level is indeed the actual level of Commonsense Knowledge in well-educated adults. The most obvious indicator is that we trained the ASTRID system mainly on social fiction, steering clear of profession-related information, and around the 50K mark we clearly hit a ceiling: no matter how many additional texts we submitted to the system, it found hardly any new concepts to add to its world-model. It is clear that 50,000 concepts are enough to describe normal daily life in detail.
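
To give a feel for what such a ceiling looks like, the toy sketch below is a hypothetical illustration (the word-level "concepts" and example texts are not how ASTRID extracts concepts): it tracks the cumulative number of unique items contributed by each additional document, and the count flattens once the material stops introducing anything new.

    # Hypothetical illustration of a saturation curve: cumulative unique
    # "concepts" (here, just lowercased words) after each document.
    def cumulative_unique(documents):
        seen = set()
        counts = []
        for doc in documents:
            seen.update(doc.lower().split())
            counts.append(len(seen))
        return counts

    docs = [
        "the dog chased the ball",
        "the cat watched the dog",
        "the cat chased the dog",   # nothing new: the curve flattens
    ]
    print(cumulative_unique(docs))  # [4, 6, 6]

The cumulative count stops rising once the incoming texts no longer introduce anything new, which is the ceiling described above.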

Because the ASTRID system stores each unique concept only once and builds its internal world-model by finding semantic relations between these concepts, this metric gives great insight into the perceived complexity of our reality. We anticipate gaining additional insights over time into how human brains handle perception and construct world-models.
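
As a minimal sketch of this storage principle (an assumed structure, not the actual ASTRID code; the relation types are made up), a store can keep each unique concept exactly once and record typed semantic relations between concept identifiers:

    # Minimal sketch: each unique concept is stored once; the world-model is a
    # set of typed relations between concept identifiers. Not the ASTRID code.
    class ConceptStore:
        def __init__(self):
            self.concepts = {}       # label -> concept id, stored only once
            self.relations = set()   # (source id, relation type, target id)

        def add_concept(self, label):
            return self.concepts.setdefault(label, len(self.concepts))

        def relate(self, source, relation, target):
            self.relations.add(
                (self.add_concept(source), relation, self.add_concept(target))
            )

    store = ConceptStore()
    store.relate("dog", "is-a", "animal")
    store.relate("dog", "is-a", "animal")   # duplicate input adds nothing
    store.relate("dog", "can", "bark")
    print(len(store.concepts), len(store.relations))  # 3 concepts, 2 relations

Storing each concept once and letting all links refer back to it keeps the concept count itself a meaningful measure of what the system knows.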


NOTE: This news article is presented here for historical perspective only.

